Existing retinal vessel segmentation algorithms suffer from several problems: the ends of the main vessels break easily, and the central macula and the optic disc boundary are prone to being segmented by mistake. To solve these problems, a novel retinal vessel segmentation algorithm is proposed in this paper, which combines vessel contour information with conditional generative adversarial networks. Firstly, non-uniform illumination removal and principal component analysis were used to preprocess the fundus images, which enhanced the contrast between the blood vessels and the background and yielded single-scale gray images with rich feature information. Secondly, dense blocks integrating depthwise separable convolution with offset and the squeeze-and-excitation (SE) block were applied to the encoder and decoder to alleviate gradient vanishing or explosion, while keeping the network focused on the feature information of the learning target. Thirdly, a contour loss function was added to improve the network's ability to identify blood vessel and contour information. Finally, experiments were carried out on the DRIVE and STARE datasets. The area under the receiver operating characteristic curve reached 0.9825 and 0.9874, respectively, and the accuracy reached 0.9677 and 0.9756, respectively. Experimental results show that the algorithm can accurately distinguish contours from blood vessels and reduce vessel rupture, giving it practical value in the diagnosis of clinical ophthalmic diseases.
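The principal-component step described above can be sketched as follows: project the three RGB channels of a fundus image onto the first principal component to obtain a single-channel gray image. This is a minimal illustration of the general PCA-to-gray idea, assuming per-pixel channel statistics; the function name and normalization are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def pca_gray(rgb):
    """Collapse an (H, W, 3) image to one channel via PCA (illustrative)."""
    h, w, c = rgb.shape
    x = rgb.reshape(-1, c).astype(np.float64)
    x -= x.mean(axis=0)                      # center each channel
    cov = np.cov(x, rowvar=False)            # 3x3 channel covariance
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    pc1 = vecs[:, -1]                        # first principal component
    gray = x @ pc1                           # project pixels onto PC1
    rng = gray.max() - gray.min()
    gray = (gray - gray.min()) / (rng + 1e-8)  # rescale to [0, 1]
    return gray.reshape(h, w)
```

Projecting onto the direction of maximum channel variance tends to preserve vessel/background contrast better than a fixed luminance formula, which is one plausible reading of why a single-scale gray image "with rich feature information" is produced.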
Atrial fibrillation (AF) is a common arrhythmia that can lead to thrombosis and increase the risk of stroke or even death. To meet the clinical need for a low false-negative rate (FNR) in screening tests, a convolutional neural network with a low false-negative rate (LFNR-CNN) was proposed. Regularization coefficients were added to the cross-entropy loss function so that positive and negative samples carried different costs, increasing the penalty for false negatives during network training. An inter-patient clinical database of 21 077 patients (CD-21077), collected from a large general hospital, was used to verify the effectiveness of the proposed method. For a convolutional neural network (CNN) with the same structure, the improved loss function reduced the FNR from 2.22% to 0.97% compared with the traditional cross-entropy loss function. The selected regularization coefficient increased the sensitivity (SE) from 97.78% to 98.35%, while the accuracy (ACC) rose from 96.49% to 96.62%. The proposed algorithm can reduce the FNR without sacrificing ACC, lowering the chance of a missed diagnosis so that patients do not miss the best treatment window. It also provides a general-purpose loss function for the clinical auxiliary diagnosis of other diseases.
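The core idea, a cross-entropy in which positive (AF) samples carry a larger cost so that false negatives are penalized more heavily, can be sketched in a few lines. The weight value and function names below are illustrative assumptions, not the paper's exact formulation of its regularization coefficients.

```python
import numpy as np

def weighted_cross_entropy(y_true, p_pred, pos_weight=3.0, eps=1e-12):
    """Binary cross-entropy with an extra penalty on missed positives."""
    p = np.clip(p_pred, eps, 1 - eps)        # avoid log(0)
    # pos_weight > 1 raises the cost of assigning a low probability to a
    # true AF sample, which pushes the false-negative rate down.
    loss = -(pos_weight * y_true * np.log(p)
             + (1 - y_true) * np.log(1 - p))
    return loss.mean()
```

With `pos_weight=3.0`, confidently missing a positive sample (label 1, predicted 0.1) costs three times as much as the symmetric mistake on a negative sample, which is the asymmetry the abstract attributes to its improved loss.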
Corona virus disease 2019 (COVID-19) is an acute respiratory infectious disease characterized by strong contagiousness, high variability, and a long incubation period. Automatic segmentation of COVID-19 lesions in computed tomography images can significantly reduce the probability of misdiagnosis and missed diagnosis, helping doctors reach rapid diagnoses and deliver precise treatment. To address difficulties such as the complex manifestations of COVID-19 and blurred lesion boundaries that are challenging to segment, this paper introduced the level set generalized Dice loss function (LGDL), which couples a level set segmentation method with a COVID-19 lesion segmentation network, and proposed a dual-path COVID-19 lesion segmentation network (Dual-SAUNet++). LGDL is an adaptive weighted joint loss obtained by combining the generalized Dice loss of the mask path with the mean square error of the level set path. On the test set, the model achieved a Dice similarity coefficient of (87.81 ± 10.86)%, intersection over union of (79.20 ± 14.58)%, sensitivity of (94.18 ± 13.56)%, specificity of (99.83 ± 0.43)%, and a Hausdorff distance of (18.29 ± 31.48) mm. Experiments indicated that Dual-SAUNet++ has strong noise resistance and can segment multi-scale lesions while attending to both their area and boundary information. By accurately segmenting lesions, the proposed method assists doctors in judging the severity of COVID-19 infection and provides a reliable basis for subsequent clinical treatment.
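The mask-path term of LGDL can be sketched with the common generalized Dice definition, where each class is weighted by the inverse square of its ground-truth volume; in LGDL this term would be combined with the level-set path's mean square error under an adaptive weight. The inverse-square weighting and names below follow the standard generalized-Dice formulation and are an assumption about the paper's exact implementation.

```python
import numpy as np

def generalized_dice_loss(y_true, y_pred, eps=1e-8):
    """Generalized Dice loss.

    y_true: (N, C) one-hot ground truth, y_pred: (N, C) softmax output,
    both flattened over all pixels N for C classes.
    """
    w = 1.0 / (y_true.sum(axis=0) ** 2 + eps)       # inverse-square class weights
    inter = (w * (y_true * y_pred).sum(axis=0)).sum()
    union = (w * (y_true + y_pred).sum(axis=0)).sum()
    return 1.0 - 2.0 * inter / (union + eps)
```

The class weighting keeps small lesions from being dominated by the large background class, which matters for the multi-scale lesions the abstract mentions.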
Atrial fibrillation (AF) is a life-threatening heart condition, and its early detection and treatment have attracted significant attention from physicians in recent years. Traditional AF detection relies heavily on doctors' interpretation of electrocardiograms (ECGs), but prolonged analysis of ECG signals is very time-consuming. This paper designs an AF detection model based on the Inception module, constructing multi-branch detection channels that process the raw ECG signal, its gradient signal, and its frequency signal. The model efficiently extracted QRS complex and RR interval features from the gradient signal, extracted P-wave and f-wave features from the frequency signal, and used the raw signal to supplement missing information. The multi-scale convolutional kernels in the Inception module provided diverse receptive fields and performed a comprehensive analysis of the multi-branch results, enabling early AF detection. Compared with current machine learning algorithms that use only RR interval and heart rate variability features, the proposed algorithm additionally employed frequency features, making fuller use of the information within the signals. Relative to deep learning methods that use raw and frequency signals, this paper introduced a QRS complex enhancement method that allows the network to extract features more effectively. With the multi-branch input mode, the model comprehensively considered the irregular RR intervals and the P-wave and f-wave features of AF. Testing on the MIT-BIH AF database showed an inter-patient detection accuracy of 96.89%, sensitivity of 97.72%, and specificity of 95.88%. The proposed model demonstrates excellent performance and can achieve automatic AF detection.
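The three input branches described above could be derived from a raw ECG segment roughly as follows: the gradient signal emphasizes the steep QRS complex, while a frequency-domain representation exposes P-wave and f-wave content. The use of a plain FFT magnitude spectrum and the function name are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def make_branches(ecg):
    """Return (raw, gradient, frequency) views of a 1-D ECG segment."""
    raw = ecg.astype(np.float64)
    grad = np.gradient(raw)            # derivative: sharpens steep QRS slopes
    freq = np.abs(np.fft.rfft(raw))    # magnitude spectrum: P-/f-wave content
    return raw, grad, freq
```

Each array would feed one branch of the multi-branch network, with the Inception module's multi-scale kernels then analyzing all branches jointly.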