With the rapid development of network architectures, convolutional neural networks (CNNs) have consolidated their position as a leading machine learning tool in image analysis. Semantic segmentation based on CNNs has accordingly become a key high-level task in medical image understanding. This paper reviews the research progress of CNN-based semantic segmentation in the field of medical imaging. A variety of classical semantic segmentation methods are reviewed, and their contributions and significance are highlighted. On this basis, their applications to the segmentation of several major physiological and pathological anatomical structures are further summarized and discussed. Finally, the open challenges and potential development directions of CNN-based semantic segmentation in medical imaging are discussed.
UK Biobank (UKB) is a prospective epidemiological project enrolling over 500,000 people aged 40 to 69; its imaging extension plans to re-invite 100,000 UKB participants for multimodal brain magnetic resonance imaging. Large-scale multimodal neuroimaging combined with extensive phenotypic and genetic data provides a rich resource for brain health research. This article provides an in-depth overview of UKB in the field of neuroimaging. First, neuroimage collection and imaging-derived phenotypes are summarized. Second, representative UKB neuroimaging studies are introduced, covering cardiovascular risk factors, regulatory factors, brain age prediction, normal, successful and morbid brain aging, environmental and genetic factors, cognitive ability, and gender. Finally, the open challenges and future directions of UKB are discussed. This article has the potential to open up a new research field for the prevention and treatment of neurological diseases.
Glioma is a primary brain tumor with a high incidence rate. High-grade gliomas (HGG) have the highest degree of malignancy and the lowest survival rate. Surgical resection followed by postoperative adjuvant chemoradiotherapy is often used in clinical treatment, so accurate segmentation of tumor-related regions is of great significance for patient care. To improve the segmentation accuracy for HGG, this paper proposes a multi-modal glioma semantic segmentation network with multi-scale feature extraction and a multi-attention fusion mechanism. The main contributions are: (1) multi-scale residual structures are used to extract features from multi-modal glioma magnetic resonance imaging (MRI); (2) two types of attention modules are used to aggregate features along the channel and spatial dimensions; (3) to improve the segmentation performance of the whole network, a branch classifier is constructed with an ensemble learning strategy to adjust and correct the classification results of the backbone classifier. Experimental results show that the proposed method achieves Dice coefficients of 0.9097, 0.8773 and 0.8396 for the whole tumor, tumor core and enhancing tumor, respectively, and the segmentation results exhibit good boundary continuity in the three-dimensional direction. Therefore, the proposed semantic segmentation network performs well on high-grade glioma lesions.
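The abstract does not detail the two attention modules, so the following is a minimal PyTorch sketch of one common way to realize channel and spatial attention (in the CBAM style), offered only to illustrate the idea of feature aggregation along the channel and spatial dimensions. The class names, reduction ratio and kernel size are assumptions, not the authors' implementation.

```python
# Minimal sketch of channel and spatial attention blocks (CBAM-style).
# Layer sizes and module names are illustrative assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights feature channels using globally pooled statistics."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W); pool over the spatial dimensions
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        weights = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * weights


class SpatialAttention(nn.Module):
    """Re-weights spatial locations using channel-pooled maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        weights = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * weights
```

Applying the two blocks in sequence, e.g. `SpatialAttention()(ChannelAttention(64)(features))` for a 64-channel feature map, corresponds to the kind of channel-then-spatial aggregation the abstract describes.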
The human brain deteriorates as we age, and the rate and trajectory of these changes vary significantly among brain regions and among individuals. Because neuroimaging data are potentially important indicators of an individual's brain health, they are commonly used for brain age prediction. In this review, we summarize neuroimaging-based brain age prediction models from studies of the last ten years, categorized by image modality and feature type. The results indicate that neuroimaging-based prediction frameworks hold promise for individualized brain age prediction. Finally, we address the challenges of brain age prediction and suggest some future research directions.
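As a concrete illustration of the workflow shared by most of the reviewed frameworks, the sketch below fits a regression model to neuroimaging-derived features and computes the brain age gap (predicted minus chronological age). The synthetic feature matrix, the choice of ridge regression and the variable names are assumptions for illustration only, not any specific study's pipeline.

```python
# Minimal scikit-learn sketch of a brain age prediction workflow.
# The random feature matrix and ridge regression are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_features = 500, 100            # e.g. regional gray-matter volumes
features = rng.normal(size=(n_subjects, n_features))
age = rng.uniform(40, 69, size=n_subjects)   # chronological age in years

X_train, X_test, y_train, y_test = train_test_split(
    features, age, test_size=0.2, random_state=0
)

model = Ridge(alpha=1.0).fit(X_train, y_train)
predicted_age = model.predict(X_test)

# Brain age gap: positive values suggest an "older-looking" brain
brain_age_gap = predicted_age - y_test
print("Mean absolute error:", np.abs(brain_age_gap).mean())
```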
Computer-aided diagnosis based on computed tomography (CT) images can realize the detection and classification of pulmonary nodules and improve the survival rate of early lung cancer, which is of important clinical significance. In recent years, with the rapid development of medical big data and artificial intelligence technology, deep learning-based auxiliary diagnosis of lung cancer has gradually become one of the most active research directions in this field. To promote the application of deep learning to the detection and classification of pulmonary nodules, we review the research progress in this field based on relevant literature published domestically and internationally in recent years. The paper begins with a brief introduction to two widely used lung CT image databases: the lung image database consortium and image database resource initiative (LIDC-IDRI) and Data Science Bowl 2017. Then, the detection and classification of pulmonary nodules based on different network structures are introduced in detail. Finally, remaining problems of deep learning in lung CT nodule detection and classification are discussed, conclusions are given, and the development prospects are forecast, providing a reference for future applied research in this field.