Attention deficit/hyperactivity disorder (ADHD) is a behavioral disorder that occurs mainly in the school-age population. At present, the diagnosis of ADHD depends mainly on subjective methods, leading to high rates of misdiagnosis and missed diagnosis. To address these problems, we proposed an algorithm for classifying ADHD objectively based on a convolutional neural network. First, preprocessing steps, including skull stripping and Gaussian kernel smoothing, were applied to brain magnetic resonance imaging (MRI) data. Then, coarse segmentation was used to select the right caudate nucleus, left precuneus, and left superior frontal gyrus regions. Finally, a three-level convolutional neural network was used for classification. Experimental results showed that the proposed algorithm classified the ADHD and normal groups effectively: the classification accuracies obtained from the right caudate nucleus and left precuneus regions exceeded the highest accuracy (62.52%) reported in the ADHD-200 competition, and among the three brain regions the right caudate nucleus yielded the highest accuracy. We conclude that the proposed method, combining coarse segmentation and deep learning, is useful for classifying ADHD and normal groups. It achieves high classification accuracy with simple computation, extracts subtle image features well, and overcomes the time-consuming and highly complicated brain-region segmentation of traditional MRI methods. The method thus provides an objective approach to the diagnosis of ADHD.
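The Gaussian kernel smoothing step in the preprocessing pipeline can be sketched as a separable 3-D convolution. This is a minimal NumPy illustration only; the kernel width (`sigma`) and radius are illustrative assumptions, not the settings used in the paper:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Discrete 1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth_volume(vol, sigma=1.0, radius=2):
    """Separable Gaussian smoothing of a 3-D volume: convolve each
    axis in turn with the same 1-D kernel."""
    k = gaussian_kernel_1d(sigma, radius)
    out = vol.astype(float)
    for axis in range(3):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out

# Toy 3-D "volume": a single bright voxel is spread by smoothing.
vol = np.zeros((5, 5, 5))
vol[2, 2, 2] = 1.0
sm = smooth_volume(vol, sigma=1.0, radius=2)
```

Because the kernel is normalized and the spike lies in the interior, the total intensity is preserved and the maximum stays at the center voxel. In practice a library routine such as `scipy.ndimage.gaussian_filter` would be used instead.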
Ultrasound is the preferred modality for diagnosing thyroid nodules, and calcification is an important characteristic for discriminating benign from malignant nodules. However, calcification cannot be extracted accurately from ultrasonic images because of interference from the capsule wall and other internal tissue. In this paper, deep learning was applied for the first time to calcification extraction, and two improvements to the AlexNet convolutional neural network were proposed. First, adding corresponding anti-pooling (unpooling) and deconvolution (deconv2D) layers enabled the network to be trained on the required features and ultimately extract the calcification feature. Second, modifying the number of convolution templates and fully connected layer nodes made feature extraction more refined. The final network combined both improvements. To verify the method presented in this article, we collected 8 416 images with calcification and 10 844 without. The results showed that the improved AlexNet network achieved a calcification-extraction accuracy of 86%, a great improvement over traditional methods, which provides an effective means for identifying benign and malignant thyroid nodules.
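The anti-pooling (unpooling) idea can be sketched in NumPy: max pooling records the argmax positions ("switches"), and unpooling places each pooled value back at its recorded position with zeros elsewhere. This is a generic illustration of the mechanism, not the paper's network code:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling that also records argmax positions (the
    'switches' needed later by the unpooling layer)."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    switches = np.zeros_like(x, dtype=bool)
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            block = x[i:i + 2, j:j + 2]
            r, c = np.unravel_index(np.argmax(block), block.shape)
            pooled[i // 2, j // 2] = block[r, c]
            switches[i + r, j + c] = True
    return pooled, switches

def unpool_2x2(pooled, switches):
    """Anti-pooling: each pooled value returns to its recorded argmax
    position; all other positions stay zero."""
    tiled = np.repeat(np.repeat(pooled, 2, axis=0), 2, axis=1)
    out = np.zeros(switches.shape)
    out[switches] = tiled[switches]
    return out

x = np.array([[1., 2., 0., 0.],
              [3., 4., 0., 5.],
              [0., 6., 7., 0.],
              [8., 0., 0., 9.]])
p, sw = max_pool_2x2(x)
u = unpool_2x2(p, sw)
```

The unpooled map is sparse: it keeps only the locations that survived pooling, which is what lets an encoder–decoder network localize features such as calcification.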
With the development of image-guided surgery and radiotherapy, the demand for medical image registration is growing and the challenges are greater. In recent years, deep learning, especially deep convolutional neural networks, has achieved excellent results in medical image processing, and its application to registration has developed rapidly. This paper reviews domestic and international research progress in deep-learning-based medical image registration by category of technical method, including similarity measurement with an iterative optimization strategy, direct estimation of transformation parameters, and others. The challenges of deep learning in medical image registration are then analyzed, and possible solutions and open research directions are proposed.
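The first category mentioned, similarity measurement with an iterative optimization strategy, can be reduced to its simplest instance: search over candidate transforms and keep the one that minimizes a similarity cost. The sketch below uses integer 1-D translations and a sum-of-squared-differences measure purely for illustration; real registration methods optimize far richer transforms and metrics:

```python
import numpy as np

def ssd(a, b):
    """Sum-of-squared-differences similarity measure (lower = better)."""
    return float(np.sum((a - b) ** 2))

def register_translation(fixed, moving, max_shift=5):
    """Exhaustive search over integer shifts: a toy instance of
    'similarity measurement with an iterative optimization strategy'."""
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        cost = ssd(fixed, np.roll(moving, s))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift, best_cost

fixed = np.zeros(20)
fixed[8:12] = 1.0
moving = np.roll(fixed, 3)   # moving image is the fixed image shifted by +3
shift, cost = register_translation(fixed, moving)
```

The recovered shift undoes the displacement exactly, driving the cost to zero; learned registration methods replace this inner search with a network that predicts the transform directly.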
Alzheimer's disease (AD) is a typical neurodegenerative disease, clinically manifested as amnesia, loss of language and self-care abilities, and so on. So far, its cause remains unclear, its course is irreversible, and no cure exists. Hence, early prognosis of AD is important for developing new drugs and measures to slow disease progression. Mild cognitive impairment (MCI) is a state between AD and healthy controls (HC); studies have shown that patients with MCI are more likely to develop AD than those without. Therefore, accurate screening of MCI patients has become one of the research hotspots in the early prognosis of AD. With the rapid development of neuroimaging techniques and deep learning, more and more researchers employ deep learning methods to analyze brain neuroimaging data, such as magnetic resonance imaging (MRI), for early prognosis of AD. Hence, in this paper, a three-dimensional multi-slice classifier ensemble based on convolutional neural networks (CNN) and ensemble learning is proposed for early prognosis of AD. Compared with a CNN classification model based on a single slice, the proposed ensemble of classifiers trained on multiple two-dimensional slices from three dimensions exploits more of the effective information contained in MRI, improving classification accuracy and stability in a parallel computing mode.
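The ensemble step, combining many per-slice CNN outputs into one subject-level decision, can be sketched independently of the CNNs themselves. Below is a minimal majority-vote combiner over hypothetical per-slice softmax outputs; the paper's actual fusion rule may differ, and the numbers are invented for illustration:

```python
import numpy as np

def ensemble_predict(slice_probs):
    """Combine per-slice classifier outputs by majority vote.
    slice_probs: (n_slices, n_classes) softmax outputs, one row per
    2-D slice drawn from the three anatomical directions."""
    votes = np.argmax(slice_probs, axis=1)               # each slice votes
    counts = np.bincount(votes, minlength=slice_probs.shape[1])
    return int(np.argmax(counts))

# Toy example: 5 slice classifiers, 2 classes (0 = HC, 1 = MCI/AD).
probs = np.array([[0.9, 0.1],
                  [0.4, 0.6],
                  [0.2, 0.8],
                  [0.3, 0.7],
                  [0.8, 0.2]])
pred = ensemble_predict(probs)   # 3 of 5 slices vote for class 1
```

Because each slice classifier is independent, the per-slice inferences can run in parallel, which is the "parallel computing mode" the abstract refers to.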
Segmentation of organs at risk is an important part of radiotherapy planning. Current manual segmentation depends on the knowledge and experience of physicians, is very time-consuming, and makes accuracy, consistency, and repeatability difficult to ensure. Therefore, a deep convolutional neural network (DCNN) is proposed for automatic and accurate segmentation of head and neck organs at risk. Data from 496 patients with nasopharyngeal carcinoma were reviewed; 376 cases were randomly selected as the training set, 60 as the validation set, and 60 as the test set. Using a three-dimensional (3D) U-Net DCNN combined with two loss functions, Dice Loss and Generalized Dice Loss, an automatic segmentation model for the head and neck organs at risk was trained. The evaluation metrics were the Dice similarity coefficient and the Jaccard distance. The average Dice similarity coefficient over the 19 organs at risk was 0.91, and the average Jaccard distance was 0.15. The results demonstrate that a 3D U-Net DCNN combined with Dice-based loss functions can be applied effectively to automatic segmentation of head and neck organs at risk.
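The two loss functions named above have standard definitions, which a minimal NumPy sketch can make concrete. Dice Loss is one minus the (soft) Dice coefficient; Generalized Dice Loss weights each class by its inverse squared volume so small organs are not swamped by large ones. The epsilon smoothing term is a common implementation detail, not a value taken from the paper:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Dice Loss = 1 - Dice similarity coefficient (soft version)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def generalized_dice_loss(pred, target, eps=1e-6):
    """Generalized Dice Loss: per-class overlap terms weighted by the
    inverse squared class volume. pred, target: (n_classes, n_voxels)."""
    w = 1.0 / (np.sum(target, axis=1) ** 2 + eps)
    inter = np.sum(w * np.sum(pred * target, axis=1))
    union = np.sum(w * (np.sum(pred, axis=1) + np.sum(target, axis=1)))
    return 1.0 - 2.0 * inter / (union + eps)

# A perfect prediction gives (near-)zero loss for both functions.
t = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.]])
perfect = dice_loss(t[0], t[0])
gdl = generalized_dice_loss(t, t)
```

In a training framework the same formulas would operate on differentiable tensors (e.g., softmax outputs) rather than NumPy arrays.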
When applying deep learning to automatic segmentation of organs at risk in medical images, we combined the DenseNet and V-Net network models into a Dense V-Net for automatic segmentation of three-dimensional computed tomography (CT) images, in order to solve the degradation and gradient-vanishing problems that arise when optimizing three-dimensional convolutional neural networks with insufficient training samples. The algorithm was applied to delineation of pelvic organs at risk, and three representative metrics were used to evaluate the segmentation quantitatively. Clinical results showed that the Dice similarity coefficients of the bladder, small intestine, rectum, femoral head, and spinal cord were all above 0.87 (average 0.9), and their Jaccard distances were all within 0.23 (average 0.18). Except for the small intestine, the Hausdorff distances of the organs were less than 0.9 cm (average 0.62 cm). The Dense V-Net was thus shown to achieve accurate segmentation of pelvic organs at risk.
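The three evaluation metrics reported here are all standard, and can be sketched for binary masks and point sets. The Hausdorff implementation below is a brute-force illustration over surface points; production code would use an optimized routine such as `scipy.spatial.distance.directed_hausdorff`:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard_distance(a, b):
    """Jaccard distance = 1 - |A intersect B| / |A union B|."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 - inter / union

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets (e.g.,
    voxel coordinates of mask surfaces), brute force."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([1, 1, 1, 0], dtype=bool)
b = np.array([1, 1, 0, 0], dtype=bool)
dsc = dice_coefficient(a, b)      # 2*2 / (3+2) = 0.8
jd = jaccard_distance(a, b)       # 1 - 2/3
hd = hausdorff_distance(np.array([[0., 0.], [1., 0.]]),
                        np.array([[0., 0.], [3., 0.]]))
```

Note that Dice and Jaccard are overlap measures in [0, 1], while the Hausdorff distance is a boundary measure in physical units (here cm), which is why the abstract reports it separately.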
Alzheimer’s disease (AD) is a progressive and irreversible neurodegenerative disease. Neuroimaging based on magnetic resonance imaging (MRI) is one of the most intuitive and reliable methods for AD screening and diagnosis. Clinical head MRI generates multimodal image data, and to solve the problem of multimodal MRI processing and information fusion, this paper proposes a structural and functional MRI feature extraction and fusion method based on generalized convolutional neural networks (gCNN). The method includes a three-dimensional residual U-shaped network based on a hybrid attention mechanism (3D HA-ResUNet) for feature representation and classification of structural MRI, and a U-shaped graph convolutional neural network (U-GCN) for node feature representation and classification of brain functional networks derived from functional MRI. After fusing the two types of image features, an optimal feature subset is selected by discrete binary particle swarm optimization, and the prediction results are output by a machine learning classifier. Validation on a multimodal dataset from the AD Neuroimaging Initiative (ADNI) open-source database shows that the proposed models have superior performance in their respective data domains. The gCNN framework combines the advantages of the two models and further improves on the best single-modal MRI performance, raising classification accuracy and sensitivity by 5.56% and 11.11%, respectively. In conclusion, the gCNN-based multimodal MRI classification method proposed in this paper can provide a technical basis for the auxiliary diagnosis of Alzheimer’s disease.
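The feature-selection step, discrete binary particle swarm optimization (BPSO), can be sketched generically: particle velocities are real-valued, and positions are bit masks sampled through a sigmoid of the velocity. The toy fitness below (nearest-centroid training accuracy on synthetic data) and all hyperparameters are illustrative stand-ins for the paper's classifier and settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(bits, X, y):
    """Toy fitness: nearest-centroid training accuracy on the selected
    feature subset (a stand-in for the paper's classifier)."""
    mask = np.asarray(bits, dtype=bool)
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def bpso_select(X, y, n_particles=8, n_iter=20):
    """Discrete binary PSO: real velocities, 0/1 positions sampled
    through a sigmoid of the velocity."""
    n_feat = X.shape[1]
    pos = (rng.random((n_particles, n_feat)) < 0.5).astype(int)
    vel = rng.normal(0.0, 0.1, (n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
        fit = np.array([fitness(p, X, y) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest.astype(bool), float(pbest_fit.max())

# Synthetic data: only feature 0 separates the two classes.
X = rng.normal(0.0, 1.0, (40, 5))
y = (np.arange(40) >= 20).astype(int)
X[y == 1, 0] += 5.0
mask, best = bpso_select(X, y)
```

The swarm converges toward masks that keep the discriminative feature, since those masks score the highest fitness; in the paper the fitness would instead be driven by the fused gCNN features and the downstream machine learning classifier.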