Focusing on the variability in the shape, location and size of brain gliomas, a dual-channel three-dimensional (3D) densely connected network is proposed to automatically segment brain glioma tumors on magnetic resonance images. The method is built on a 3D convolutional neural network framework, and two convolution kernel sizes are adopted in each channel to extract multi-scale features under receptive fields of different sizes. Two densely connected blocks are then constructed in each pathway for feature learning and transmission. Finally, the concatenated features of the two pathways are fed to a classification layer that labels the voxels of the central region, segmenting the brain tumor automatically. The model was trained and tested on an open brain tumor segmentation challenge dataset and compared with other models. Experimental results show that the algorithm segments different tumor lesions more accurately, and it has important application value in the clinical diagnosis and treatment of brain tumor diseases.
Objective To propose an innovative self-supervised learning method for vascular segmentation in computed tomography angiography (CTA) images by integrating feature reconstruction with masked autoencoding. Methods A 3D masked autoencoder-based framework was developed, wherein a 3D histogram of oriented gradients (HOG) was utilized for multi-scale vascular feature extraction. During pre-training, random masking was applied to local patches of CTA images, and the model was trained to jointly reconstruct the original voxels and the HOG features of the masked regions. The pre-trained model was then fine-tuned on two annotated datasets for clinical-level vessel segmentation. Results Evaluated on two independent datasets (30 labeled CTA images each), the method achieved segmentation accuracy superior to the supervised nnU-Net baseline, with Dice similarity coefficients of 91.2% vs. 89.7% (aorta) and 84.8% vs. 83.2% (coronary arteries). Conclusion The proposed self-supervised model significantly reduces manual annotation costs without compromising segmentation precision, showing substantial potential for enhancing clinical workflows in vascular disease management.
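As an illustration of the two pre-training ingredients described above, here is a minimal NumPy sketch of a 3D HOG-style orientation histogram and random patch masking. The patch size, masking ratio and single-histogram binning are simplified assumptions, not the paper's exact multi-scale configuration:

```python
import numpy as np

def hog3d_features(volume, n_bins=8):
    """Sketch of a 3-D HOG-style descriptor: gradient magnitudes are
    accumulated into azimuth-orientation bins and L2-normalized (the
    paper's multi-scale, cell-based HOG target is richer than this)."""
    gz, gy, gx = np.gradient(volume.astype(np.float64))
    mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    azimuth = np.arctan2(gy, gx)                      # in [-pi, pi]
    bins = ((azimuth + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.array([mag[bins == b].sum() for b in range(n_bins)])
    return hist / (np.linalg.norm(hist) + 1e-8)

def random_patch_mask(shape, patch=4, ratio=0.75, seed=0):
    """Mask a fixed fraction of non-overlapping cubic patches, as in
    masked-autoencoder pre-training; returns a voxel-level boolean mask."""
    rng = np.random.default_rng(seed)
    grid = tuple(s // patch for s in shape)
    flat = np.zeros(int(np.prod(grid)), dtype=bool)
    n_masked = int(ratio * flat.size)
    flat[rng.choice(flat.size, size=n_masked, replace=False)] = True
    # upsample the patch-level mask to voxel level
    return np.kron(flat.reshape(grid), np.ones((patch,) * len(shape), bool))

vol = np.random.default_rng(0).random((16, 16, 16))
feat = hog3d_features(vol)           # (8,) unit-norm orientation histogram
mask = random_patch_mask(vol.shape)  # 75% of voxels masked
```

During pre-training the network would see `vol` with masked voxels removed and be asked to predict both the voxels and a `feat`-like descriptor of each masked patch.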
Cardiac enlargement is an important sign of vascular and heart disease, and the cardiothoracic ratio (CTR) is an important index for measuring heart size. The aim of this study was to assess the relationship between aging and the cardiothoracic ratio. This paper also presents an improved C-V (Chan-Vese) level set method for segmenting lung tissue in chest X-ray images, which was used to compute the CTR automatically. In an investigation carried out at our institution, we collected more than 3 120 chest radiographs from medical examinations of the working population in Beijing and systematically studied the effects of age and gender on the CTR to obtain reference values for each group. The reference values established in this study can be useful for recording and quantifying cardiac enlargement, and may thus help draw timely attention to cardiovascular and heart diseases.
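The CTR definition used above (maximal transverse cardiac width over maximal internal thoracic width) can be sketched as follows; the binary masks here are hypothetical stand-ins for the level-set segmentation output:

```python
import numpy as np

def cardiothoracic_ratio(heart_mask, thorax_mask):
    """CTR = maximal transverse cardiac width / maximal internal
    thoracic width, each measured as the widest row of a binary mask
    (rows = vertical axis, columns = horizontal axis)."""
    def max_width(mask):
        widths = [c[-1] - c[0] + 1
                  for row in mask if (c := np.flatnonzero(row)).size]
        return max(widths) if widths else 0
    return max_width(heart_mask) / max_width(thorax_mask)

# Toy masks: a 5-pixel-wide "heart" inside a 10-pixel-wide "thorax".
heart = np.zeros((8, 10), dtype=bool);  heart[3:6, 3:8] = True
thorax = np.zeros((8, 10), dtype=bool); thorax[1:7, 0:10] = True
ctr = cardiothoracic_ratio(heart, thorax)   # 5 / 10 = 0.5
```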
Glioma is a primary brain tumor with a high incidence rate. High-grade gliomas (HGG) have the highest degree of malignancy and the lowest survival rate. Surgical resection and postoperative adjuvant chemoradiotherapy are often used in clinical treatment, so accurate segmentation of tumor-related areas is of great significance for the treatment of patients. To improve the segmentation accuracy of HGG, this paper proposes a multi-modal glioma semantic segmentation network with multi-scale feature extraction and a multi-attention fusion mechanism. The main contributions are: (1) multi-scale residual structures were used to extract features from multi-modal glioma magnetic resonance imaging (MRI); (2) two types of attention modules were used to aggregate features along the channel and spatial dimensions; (3) to improve the segmentation performance of the whole network, a branch classifier was constructed using an ensemble learning strategy to adjust and correct the classification results of the backbone classifier. The experimental results showed that the Dice coefficients of the proposed segmentation method were 0.909 7, 0.877 3 and 0.839 6 for the whole tumor, tumor core and enhancing tumor respectively, and the segmentation results had good boundary continuity in the three-dimensional direction. Therefore, the proposed semantic segmentation network has good segmentation performance for high-grade glioma lesions.
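The channel-attention branch in contribution (2) is commonly implemented in squeeze-and-excitation style; a minimal NumPy sketch (the bottleneck weights `w1` and `w2` are hypothetical placeholders, and the real network operates on learned feature maps):

```python
import numpy as np

def channel_attention(feats, w1, w2):
    """SE-style channel attention: global-average-pool each channel,
    squeeze through a two-layer bottleneck, and rescale the channels.
    feats: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    squeeze = feats.mean(axis=(1, 2))                      # (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,) gates
    return feats * excite[:, None, None]                   # rescale

feats = np.random.default_rng(0).random((4, 5, 5))
# With all-zero weights every gate is sigmoid(0) = 0.5.
out = channel_attention(feats, np.zeros((2, 4)), np.zeros((4, 2)))
```

A spatial-attention module is the transpose of this idea: pool across channels to get an (H, W) map, then gate each spatial position.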
The count and recognition of white blood cells in blood smear images play an important role in the diagnosis of blood diseases, including leukemia. Traditional manual testing is easily disturbed by many factors, so it is necessary to develop an automatic leukocyte analysis system to provide doctors with auxiliary diagnosis, and blood leukocyte segmentation is the basis of such automatic analysis. In this paper, we improved the U-Net model and proposed a leukocyte image segmentation algorithm based on a dual-path network and atrous spatial pyramid pooling. Firstly, the dual-path network was introduced into the feature encoder to extract multi-scale leukocyte features, and atrous spatial pyramid pooling was used to enhance the feature extraction ability of the network. Then a feature decoder composed of convolution and deconvolution layers restored the segmented target to the original image size to achieve pixel-level segmentation of blood leukocytes. Finally, qualitative and quantitative experiments were carried out on three leukocyte datasets to verify the effectiveness of the algorithm. The results showed that, compared with other representative algorithms, the proposed blood leukocyte segmentation algorithm achieved better results, with mIoU values above 0.97. It is hoped that the method will facilitate the automatic auxiliary diagnosis of blood diseases in the future.
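Atrous spatial pyramid pooling rests on dilated ("atrous") convolution, which enlarges the receptive field without adding weights. A single-channel NumPy sketch under valid padding (the kernel values and dilation rates are illustrative, not the paper's settings):

```python
import numpy as np

def dilated_conv2d(img, kernel, rate):
    """Dilated 2-D convolution: the kernel is sampled with holes of the
    given rate, so a k x k kernel covers an effective extent of
    rate*(k-1)+1 pixels. Valid padding, single channel."""
    k = kernel.shape[0]
    eff = rate * (k - 1) + 1                      # effective extent
    H, W = img.shape
    out = np.zeros((H - eff + 1, W - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + eff:rate, j:j + eff:rate]  # sparse sampling
            out[i, j] = (patch * kernel).sum()
    return out

def aspp(img, kernel, rates=(1, 2, 4)):
    """Apply the same kernel at several dilation rates and stack the
    responses (cropped to a common size), as in spatial pyramid pooling."""
    outs = [dilated_conv2d(img, kernel, r) for r in rates]
    h = min(o.shape[0] for o in outs)
    w = min(o.shape[1] for o in outs)
    return np.stack([o[:h, :w] for o in outs])

img = np.ones((10, 10))
kern = np.ones((3, 3))
resp = dilated_conv2d(img, kern, rate=2)    # shape (6, 6), all 9.0
pyramid = aspp(img, kern, rates=(1, 2, 4))  # shape (3, 2, 2)
```

In the real network each rate has its own learned kernel and the stacked responses feed the decoder; this sketch only shows the sampling geometry.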
Existing retinal vessel segmentation algorithms suffer from problems such as easy breakage at the ends of main vessels and mistaken segmentation of the central macula and the optic disc boundary. To solve these problems, a novel retinal vessel segmentation algorithm is proposed in this paper that merges vessel contour information with conditional generative adversarial networks. Firstly, non-uniform illumination removal and principal component analysis were used to preprocess the fundus images, which enhanced the contrast between the blood vessels and the background and yielded single-scale gray images with rich feature information. Secondly, dense blocks integrating depthwise separable convolution with offsets and squeeze-and-excitation (SE) blocks were applied to the encoder and decoder to alleviate gradient vanishing or explosion while focusing the network on the feature information of the learning target. Thirdly, a contour loss function was added to improve the network's ability to identify vessel and contour information. Finally, experiments were carried out on the DRIVE and STARE datasets. The area under the receiver operating characteristic curve reached 0.982 5 and 0.987 4, respectively, and the accuracy reached 0.967 7 and 0.975 6, respectively. Experimental results show that the algorithm can accurately distinguish contours from blood vessels and reduce vessel breakage. The algorithm has certain application value in the diagnosis of clinical ophthalmic diseases.
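A contour loss of the kind described above penalizes disagreement specifically on boundary pixels. A simplified binary sketch (the paper's exact formulation may differ, and real network outputs are probabilistic rather than binary):

```python
import numpy as np

def contour(mask):
    """One-pixel contour of a binary mask: foreground pixels with at
    least one 4-neighbour in the background."""
    p = np.pad(mask, 1)
    interior = (p[:-2, 1:-1] & p[2:, 1:-1]      # north & south neighbours
                & p[1:-1, :-2] & p[1:-1, 2:])   # west & east neighbours
    return mask & ~interior

def contour_loss(pred, target, eps=1e-8):
    """Dice-style loss computed on the contours only, so boundary
    mistakes are weighted independently of region overlap."""
    cp, ct = contour(pred), contour(target)
    inter = (cp & ct).sum()
    return 1.0 - 2.0 * inter / (cp.sum() + ct.sum() + eps)

square = np.zeros((6, 6), dtype=bool)
square[1:5, 1:5] = True
ring = contour(square)                     # 12-pixel boundary ring
loss_same = contour_loss(square, square)   # near zero for a perfect match
```

In training this term would be added to the adversarial and segmentation losses, pushing the generator to respect vessel boundaries.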
Coronavirus disease 2019 (COVID-19) is an acute respiratory infectious disease with strong contagiousness, strong variability, and a long incubation period. Automatic segmentation of COVID-19 lesions in computed tomography images can significantly reduce the probability of misdiagnosis and missed diagnosis, helping doctors achieve rapid diagnosis and precise treatment. To address difficulties such as the complex manifestations of COVID-19 and blurred lesion boundaries that are challenging to segment, this paper introduced the level set generalized Dice loss function (LGDL), which couples the level set segmentation method with a lesion segmentation network, and proposed a dual-path COVID-19 lesion segmentation network (Dual-SAUNet++). LGDL is an adaptive joint loss obtained by weighting the generalized Dice loss of the mask path and the mean square error of the level set path. On the test set, the model achieved a Dice similarity coefficient of (87.81 ± 10.86)%, intersection over union of (79.20 ± 14.58)%, sensitivity of (94.18 ± 13.56)%, specificity of (99.83 ± 0.43)%, and a Hausdorff distance of (18.29 ± 31.48) mm. The studies indicated that Dual-SAUNet++ has strong noise resistance and can segment multi-scale lesions while simultaneously attending to their area and boundary information. By accurately segmenting the lesions, the proposed method assists doctors in judging the severity of COVID-19 infection and provides a reliable basis for subsequent clinical treatment.
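The joint loss described above can be sketched as follows. The adaptive weighting is simplified here to a fixed coefficient `alpha`, so this shows the structure of an LGDL-style loss rather than the paper's exact formulation:

```python
import numpy as np

def generalized_dice_loss(probs, target, eps=1e-8):
    """Generalized Dice loss with inverse squared-volume class weights.
    probs and target are (classes, voxels) arrays; target is one-hot."""
    w = 1.0 / (target.sum(axis=1) ** 2 + eps)          # per-class weight
    intersect = (w * (probs * target).sum(axis=1)).sum()
    union = (w * (probs + target).sum(axis=1)).sum()
    return 1.0 - 2.0 * intersect / (union + eps)

def joint_loss(probs, target, phi_pred, phi_ref, alpha=0.5):
    """Mask-path generalized Dice plus level-set-path MSE, mixed by a
    fixed alpha (the paper's adaptive weighting is simplified away)."""
    return (alpha * generalized_dice_loss(probs, target)
            + (1.0 - alpha) * np.mean((phi_pred - phi_ref) ** 2))

target = np.array([[1., 0., 0., 1.],
                   [0., 1., 1., 0.]])   # one-hot labels, 2 classes x 4 voxels
phi = np.zeros(4)                       # dummy level-set maps
loss_perfect = joint_loss(target, target, phi, phi)   # near zero
```

A perfect prediction drives both terms to zero; swapping the two class rows makes the Dice term maximal, so `joint_loss` rises to `alpha`.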
The skin is the largest organ of the human body, and many visceral diseases are directly reflected on the skin, so accurate segmentation of skin lesion images is of great clinical significance. To address the complex colors, blurred boundaries, and uneven scale information of such lesions, a skin lesion image segmentation method based on dense atrous spatial pyramid pooling (DenseASPP) and an attention mechanism is proposed. The method is based on the U-shaped network (U-Net). Firstly, the encoder is redesigned to replace ordinary convolutional stacking with a large number of residual connections, which effectively retains key features even as the network deepens. Secondly, channel attention is fused with spatial attention, and residual connections are added so that the network can adaptively learn the channel and spatial features of images. Finally, the DenseASPP module is introduced and redesigned to enlarge the receptive field and obtain multi-scale feature information. The proposed algorithm obtained satisfactory results on the official public dataset of the International Skin Imaging Collaboration (ISIC 2016): the mean intersection over union (mIoU), sensitivity (SE), precision (PC), accuracy (ACC), and Dice coefficient (Dice) are 0.901 8, 0.945 9, 0.948 7, 0.968 1, and 0.947 3, respectively. The experimental results demonstrate that the method can improve the segmentation of skin lesion images and is expected to provide auxiliary diagnosis for professional dermatologists.
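The five metrics reported above follow directly from the binary confusion counts; a NumPy sketch for the two-class (lesion/background) case, with toy masks chosen so every count is non-zero:

```python
import numpy as np

def binary_seg_metrics(pred, target):
    """mIoU, SE, PC, ACC and Dice for a two-class segmentation,
    computed from the confusion counts of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = int((pred & target).sum())
    tn = int((~pred & ~target).sum())
    fp = int((pred & ~target).sum())
    fn = int((~pred & target).sum())
    return {
        "mIoU": 0.5 * (tp / (tp + fp + fn) + tn / (tn + fp + fn)),
        "SE": tp / (tp + fn),            # sensitivity (recall)
        "PC": tp / (tp + fp),            # precision
        "ACC": (tp + tn) / pred.size,
        "Dice": 2 * tp / (2 * tp + fp + fn),
    }

# Two 2x2 squares overlapping in one column: tp=2, fp=2, fn=2, tn=10.
pred = np.zeros((4, 4), dtype=bool);   pred[1:3, 1:3] = True
target = np.zeros((4, 4), dtype=bool); target[1:3, 2:4] = True
m = binary_seg_metrics(pred, target)   # e.g. Dice = 4/8 = 0.5
```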
Objective To develop a deep learning-based neural network architecture for automatic segmentation of knee CT images, and to validate its accuracy. Methods A database of knee CT scans was established, and the bony structures were manually annotated. A deep learning neural network architecture was developed independently, and the labeled database was used to train and test the network. The Dice coefficient, average surface distance (ASD), and Hausdorff distance (HD) were calculated to evaluate the accuracy of the neural network, and the times required for automatic and manual segmentation were compared. Five orthopedic experts were invited to score the automatic and manual segmentation results on a Likert scale, and the scores of the two methods were compared. Results The automatic segmentation achieved high accuracy. The Dice coefficient, ASD, and HD of the femur were 0.953±0.037, (0.076±0.048) mm, and (3.101±0.726) mm, respectively; those of the tibia were 0.950±0.092, (0.083±0.101) mm, and (2.984±0.740) mm, respectively. The time for automatic segmentation was significantly shorter than that for manual segmentation [(2.46±0.45) minutes vs. (64.73±17.07) minutes; t=36.474, P<0.001]. The clinical scores of the femur were 4.3±0.3 in the automatic segmentation group and 4.4±0.2 in the manual segmentation group, and the scores of the tibia were 4.5±0.2 and 4.5±0.3, respectively; there was no significant difference between the two groups (t=1.753, P=0.085; t=0.318, P=0.752). Conclusion Deep learning-based automatic segmentation of knee CT images achieves high accuracy and enables rapid segmentation and three-dimensional reconstruction. This method should promote the development of new technology-assisted techniques in total knee arthroplasty.
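The surface metrics reported above (ASD and HD) can be sketched as follows for 2-D binary masks, in pixel units; voxel spacing and the exact boundary definition used in the study are simplified here:

```python
import numpy as np

def surface_points(mask):
    """Boundary pixels of a 2-D binary mask (foreground pixels with at
    least one 4-neighbour in the background), as an (M, 2) array."""
    p = np.pad(mask, 1)
    interior = (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:])
    return np.argwhere(mask & ~interior)

def hausdorff_and_asd(a, b):
    """Symmetric Hausdorff distance and average surface distance between
    two binary masks, via brute-force pairwise boundary distances."""
    pa, pb = surface_points(a), surface_points(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    d_ab, d_ba = d.min(axis=1), d.min(axis=0)   # nearest-surface distances
    hd = max(d_ab.max(), d_ba.max())
    asd = (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))
    return float(hd), float(asd)

square = np.zeros((7, 6), dtype=bool);  square[1:5, 1:5] = True
shifted = np.zeros((7, 6), dtype=bool); shifted[2:6, 1:5] = True
hd0, asd0 = hausdorff_and_asd(square, square)    # identical masks: 0, 0
hd1, asd1 = hausdorff_and_asd(square, shifted)   # one-pixel shift
```

Production implementations use distance transforms instead of the quadratic pairwise matrix, but the definitions are the same.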
In this paper, we propose a new active contour algorithm, hierarchical contextual active contour (HCAC), and apply it to automatic liver segmentation from three-dimensional CT (3D-CT) images. HCAC is a learning-based method comprising two stages. In the first, training stage, given a set of abdominal 3D-CT training images and the corresponding manual liver labels, we established a mapping between the automatic segmentations of each round and the manual reference segmentations via context features, obtaining a series of self-correcting classifiers. In the second, segmentation stage, we first segmented the image with a basic active contour and then iteratively applied the contextual active contour (CAC), which combines image information with the current shape model, to improve the segmentation result. The current shape model is produced by the corresponding self-correcting classifier, whose input is the previous automatic segmentation result. The proposed method was evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results showed that the segmentation became increasingly accurate over the iterations, with satisfactory results obtained after about six rounds.
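The segmentation stage of HCAC can be written schematically; every callable below is a hypothetical stand-in for the paper's trained components (basic active contour, per-round self-correcting classifier, and CAC update), so this shows only the control flow:

```python
def hcac_segment(image, basic_segment, correctors, cac_step):
    """Schematic of the HCAC segmentation stage: an initial active-contour
    result is refined round by round, each round's shape prior coming
    from that round's self-correcting classifier."""
    seg = basic_segment(image)              # basic active contour result
    for corrector in correctors:            # one classifier per round
        shape_prior = corrector(seg)        # predict a corrected shape
        seg = cac_step(image, shape_prior)  # CAC: image + shape information
    return seg

# Dummy stand-ins just to exercise the control flow (six rounds,
# matching the number of iterations reported above).
result = hcac_segment(None,
                      basic_segment=lambda img: 0,
                      correctors=[lambda s: s + 1] * 6,
                      cac_step=lambda img, prior: prior)
```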