
Toward Quantitative Burn Diagnosis Based on Image Reconstructions Derived from Photoacoustic Signals
23 Pages · Posted: 1 May 2024
Abstract
Background: The accurate diagnosis of superficial versus deep second-degree burns, which is predominantly made by clinical visual inspection and palpation, is complicated by subtle differences in skin injury. A quantitative diagnosis forms the foundation of evidence-based burn medicine. By combining optical and ultrasonic signals, photoacoustic technology achieves a greater detection depth and a higher lateral resolution. Reconstructed photoacoustic images reveal the microcirculation of subcutaneous tissue and can be employed for the quantitative diagnosis of burn injuries. However, conventional photoacoustic image reconstruction methods have certain shortcomings, including limited visibility and ill-posedness. Furthermore, to date, most studies have treated photoacoustic image reconstruction as an independent task.
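For context, photoacoustic image reconstruction is commonly posed as a linear inverse problem: the detector measurements y are modelled as the action of a forward operator A on the initial pressure distribution p_0, and limited-view detection leaves A rank-deficient, which is the ill-posedness referred to above. A standard formulation (a generic sketch, not taken from this preprint) is

\[
  y = A\,p_0 + n, \qquad
  \hat{p}_0 = \arg\min_{p_0} \lVert A\,p_0 - y \rVert_2^2 + \lambda\, R(p_0),
\]

where n is measurement noise and R is a regulariser (or, in learned approaches, a data-driven prior) that stabilises the inversion.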
Method: This paper proposes a multi-task model for photoacoustic image reconstruction and segmentation, leveraging advances in artificial intelligence. An inception block with shared parameters is applied to both image-processing tasks, which helps extract the common high-level image features. A diffusion model is used for image segmentation to remove the background from the raw images. Through the shared parameters of the inception block, the accuracy of image reconstruction is significantly improved, attributable to the shared latent features of the initial patches.
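As an illustration only, the following PyTorch-style sketch shows how an inception-style block with shared parameters can feed both a reconstruction head and a segmentation head; the layer sizes, class names (InceptionBlock, MultiTaskPANet), and head designs are assumptions for exposition, not the authors' published architecture.

# Minimal sketch (not the authors' code) of a multi-task network sharing an
# inception-style feature extractor between reconstruction and segmentation.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel convolutions of different kernel sizes, concatenated (assumed design)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 4
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, branch_ch, kernel_size=1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1))

class MultiTaskPANet(nn.Module):
    """Shared inception features feed a reconstruction head and a segmentation head."""
    def __init__(self, in_ch=1, feat_ch=64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            InceptionBlock(feat_ch, feat_ch),          # parameters shared by both tasks
        )
        self.recon_head = nn.Conv2d(feat_ch, 1, 3, padding=1)   # reconstructed image
        self.seg_head = nn.Conv2d(feat_ch, 1, 3, padding=1)     # foreground-mask logits

    def forward(self, x):
        h = self.shared(x)
        return self.recon_head(h), self.seg_head(h)

if __name__ == "__main__":
    net = MultiTaskPANet()
    x = torch.randn(2, 1, 128, 128)        # placeholder input tensor
    recon, seg_logits = net(x)
    print(recon.shape, seg_logits.shape)   # torch.Size([2, 1, 128, 128]) for both

Because the inception block sits before both heads, its gradients receive signal from both losses, which is the mechanism by which shared latent features can improve the reconstruction task.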
Result: Three metrics, the structural similarity index (SSIM), the peak signal-to-noise ratio (PSNR), and the signal-to-noise ratio (SNR), are used to evaluate the performance of the proposed model. Extensive experiments reveal that our algorithm outperforms existing photoacoustic image reconstruction models, achieving SSIM ≈ 0.963, PSNR ≈ 29.642, and SNR ≈ 10.752.
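For reference, the three metrics can be computed as in the sketch below, assuming the reconstructed and ground-truth images are floating-point arrays in [0, 1]; it uses scikit-image's structural_similarity and peak_signal_noise_ratio together with a conventional power-ratio definition of SNR, and is not the authors' evaluation script.

# Illustrative metric computation (assumed data range [0, 1]).
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate(recon: np.ndarray, truth: np.ndarray, data_range: float = 1.0):
    ssim = structural_similarity(truth, recon, data_range=data_range)
    psnr = peak_signal_noise_ratio(truth, recon, data_range=data_range)
    err = truth - recon
    snr = 10.0 * np.log10(np.sum(truth ** 2) / np.sum(err ** 2))  # signal power / error power, in dB
    return ssim, psnr, snr

truth = np.clip(np.random.rand(128, 128), 0.0, 1.0)
recon = np.clip(truth + 0.05 * np.random.randn(128, 128), 0.0, 1.0)
print(evaluate(recon, truth))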
Conclusion: This paper proposes a multi-task framework for photoacoustic image reconstruction and segmentation that incorporates a parameter-sharing neural network for latent feature extraction. Previous methods, namely Ki-GAN and Y-net, are selected for comparative experiments. Our model achieves state-of-the-art results compared to these previous algorithms.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 82302842, No. 82172205), the Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515140123, No. 2021A1515011453, No. 2022A1515012160), the Medical Clinical Key Specialty Construction Project for the 14th Five-Year Plan of Guangdong Province, and the Medical High-level Key Specialty Construction Project for the 14th Five-Year Plan of Foshan City.
Declaration of Interest: The authors declare that there is no conflict of interest.
Ethical Approval: The animal experiments in this study were approved by the Ethics Committee of the Guangdong Medical Laboratory Animal Center (ethics number "C202303-35R"). The study also includes some images of human wound surfaces; all patient data were acquired noninvasively, without contact, and anonymised, and after consulting the Ethics Committee of the First People's Hospital of Foshan, ethics approval for the human data was not required.
Keywords: burn diagnosis, photoacoustic, image reconstruction, image segmentation, deep learning