Coronary angiography (CAG) is a standard imaging modality for the diagnosis of coronary artery disease and is widely employed in clinical practice. For CAG-based computer-aided diagnosis systems, accurate vessel segmentation plays a fundamental role. However, patients with bradycardia are often fitted with a pacemaker, which frequently interferes with vessel segmentation and makes it considerably harder. To mitigate pacemaker interference and extract the main vessels more effectively in CAG images, we propose the following approach. First, a pseudo-CAG (pCAG) image is generated from the part of a CAG sequence in which the pacemaker is visible. Then, a local feature descriptor is employed to register the relative location of the pacemaker between the pCAG image and the target CAG image. Finally, by combining the registration result with the segmentation results of the main vessels and the pacemaker, pacemaker interference is removed and the segmentation of the main vessels is improved. The proposed method is evaluated on 11 CAG images with pacemakers acquired in clinical practice. The relative improvement in the Dice coefficient is 12.04%, which demonstrates that our method can remove overlapping pacemakers and improve main vessel segmentation in CAG images. Our method can thus serve as a helpful component of a CAG-based computer-aided diagnosis system, improving its diagnostic accuracy and efficiency.
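The abstract does not name the local feature descriptor or the transformation model used for registration; the sketch below illustrates the registration-then-removal idea with ORB features, RANSAC, and a partial affine transform from OpenCV, all of which are assumptions rather than the authors' exact pipeline.

```python
import cv2
import numpy as np

def register_pacemaker(pcag_gray, target_gray):
    """Estimate where the pacemaker seen in the pCAG image lies in the
    target CAG frame via local feature matching (ORB assumed here)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(pcag_gray, None)
    kp2, des2 = orb.detectAndCompute(target_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    src = np.float32([kp1[m.queryIdx].pt for m in matches[:50]])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches[:50]])
    # Near-rigid inter-frame motion; RANSAC rejects spurious vessel matches.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M

def remove_pacemaker(vessel_mask, pacemaker_mask, M):
    """Warp the pacemaker mask into the target frame and subtract it
    from the main-vessel segmentation."""
    h, w = vessel_mask.shape
    warped = cv2.warpAffine(pacemaker_mask, M, (w, h))
    return np.where(warped > 0, 0, vessel_mask)
```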
Hepatocellular carcinoma (HCC) is the most common liver malignancy; HCC segmentation and prediction of the degree of pathological differentiation are two important tasks in surgical treatment and prognosis evaluation. Existing methods usually solve these two problems independently, without considering the correlation between the tasks. In this paper, we propose a multi-task learning model that accomplishes the segmentation and classification tasks simultaneously. The model consists of a segmentation subnet and a classification subnet. A multi-scale feature fusion method is introduced in the classification subnet to improve classification accuracy, and a boundary-aware attention module is designed in the segmentation subnet to address tumor over-segmentation. A dynamic weighted average multi-task loss enables the model to achieve optimal performance on both tasks at once. On 295 HCC patients, the method outperforms other multi-task learning methods, with a Dice similarity coefficient (Dice) of (83.9 ± 0.88)% on the segmentation task, an average recall of (86.08 ± 0.83)%, and an F1 score of (80.05 ± 1.7)% on the classification task. These results show that the proposed multi-task learning method performs the classification and segmentation tasks well at the same time and can provide a theoretical reference for the clinical diagnosis and treatment of HCC patients.
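The abstract does not spell out the "dynamic weighted average" loss; a common reading is the Dynamic Weight Average (DWA) scheme of Liu et al. (2019), sketched below under that assumption. The temperature and the equal-weight warm-up are conventional choices, not values taken from the paper.

```python
import numpy as np

def dwa_weights(loss_history, temperature=2.0):
    """Dynamic Weight Average: each task's weight follows its recent
    rate of loss descent, softmax-normalised with a temperature.

    loss_history: one list of epoch-average losses per task."""
    K = len(loss_history)                      # number of tasks (here 2)
    if any(len(h) < 2 for h in loss_history):
        return np.ones(K)                      # equal weights to start
    r = np.array([h[-1] / h[-2] for h in loss_history])  # descent rates
    e = np.exp(r / temperature)
    return K * e / e.sum()                     # weights sum to K

# Per epoch: total_loss = w[0] * seg_loss + w[1] * cls_loss
w = dwa_weights([[0.62, 0.48], [1.10, 0.95]])
```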
Objective To automatically segment exudates in color fundus images of diabetic patients using deep learning. Methods An applied study. The method is based on a U-shaped network (U-Net) model and is developed and evaluated on the Indian Diabetic Retinopathy Image Dataset (IDRiD). Deep residual convolutions are introduced into the encoding and decoding stages, which effectively extract exudate features, alleviate overfitting and feature interference, and improve the model's feature expression ability while keeping it lightweight. In addition, an improved context extraction module allows the model to capture a wider range of feature information and enhances its perception of retinal lesions, performing particularly well on small details and blurred edges. Finally, a convolutional triplet attention mechanism lets the model learn feature weights automatically, focus on important features, and extract useful information at multiple scales. Precision, recall, Dice coefficient, accuracy, and sensitivity were used to evaluate the model's ability to automatically detect and segment exudates in color fundus images of diabetic patients. Results On the IDRiD dataset, the precision, recall, Dice coefficient, accuracy, and sensitivity of the improved model reached 81.56%, 99.54%, 69.32%, 65.36%, and 78.33%, respectively. Compared with the original model, the accuracy and Dice index of the improved model increased by 2.35% and 3.35%, respectively. Conclusion The U-shaped-network-based segmentation method can automatically detect and segment exudates in fundus images of diabetic patients, which is of great significance in assisting doctors to diagnose the disease more accurately.
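The "convolutional triplet attention mechanism" is not defined in the abstract; the sketch below follows the triplet attention module of Misra et al. (2021), which matches that description, as an assumption. PyTorch is used for illustration.

```python
import torch
import torch.nn as nn

class ZPool(nn.Module):
    """Concatenate channel-wise max and mean maps."""
    def forward(self, x):
        return torch.cat([x.max(1, keepdim=True)[0],
                          x.mean(1, keepdim=True)], dim=1)

class AttentionGate(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.pool = ZPool()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
    def forward(self, x):
        return x * torch.sigmoid(self.conv(self.pool(x)))

class TripletAttention(nn.Module):
    """Three branches capture (C,H), (C,W), and (H,W) interactions."""
    def __init__(self):
        super().__init__()
        self.cw, self.ch, self.hw = AttentionGate(), AttentionGate(), AttentionGate()
    def forward(self, x):
        # branch 1: rotate so H plays the channel role, attend, rotate back
        x1 = self.cw(x.permute(0, 2, 1, 3)).permute(0, 2, 1, 3)
        # branch 2: rotate so W plays the channel role
        x2 = self.ch(x.permute(0, 3, 2, 1)).permute(0, 3, 2, 1)
        # branch 3: plain spatial attention over (H, W)
        x3 = self.hw(x)
        return (x1 + x2 + x3) / 3.0
```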
This article combines deep learning with image analysis technology and proposes an effective method for classifying distal radius fracture types. First, an extended three-stage cascaded U-Net segmentation network is used to accurately segment the articular and non-articular surface regions that matter most for identifying fractures. Then, the images of the articular and non-articular regions are classified and trained separately to distinguish fractures. Finally, the normal or A/B/C fracture classification is determined comprehensively from the classification results of the two images. The accuracies for normal, type A, type B, and type C fractures on the test set were 0.99, 0.92, 0.91, and 0.82, respectively, versus average recognition accuracies of 0.98, 0.90, 0.87, and 0.81 for orthopedic medical experts. The proposed automatic recognition method is thus generally better than the experts and can be used for preliminary auxiliary diagnosis of distal radius fractures in scenarios where no expert is available.
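The abstract does not state the exact rule for combining the two regional classifiers; the sketch below shows one plausible fusion, blending the two subnets' softmax outputs and taking the argmax, purely as an illustration. The function name, class layout, and weighting are assumptions, not the authors' method.

```python
import numpy as np

CLASSES = ["normal", "type_A", "type_B", "type_C"]

def fuse_predictions(p_articular, p_non_articular, w=0.5):
    """Hypothetical fusion: blend the class probabilities predicted from
    the articular-surface crop and the non-articular crop, then take the
    argmax as the final normal/A/B/C decision."""
    p = w * np.asarray(p_articular) + (1.0 - w) * np.asarray(p_non_articular)
    return CLASSES[int(np.argmax(p))]

# e.g. the articular crop suggests type B while the shaft crop is less sure:
print(fuse_predictions([0.05, 0.15, 0.70, 0.10],
                       [0.20, 0.35, 0.30, 0.15]))  # -> "type_B"
```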