Electroencephalogram (EEG) classification for brain-computer interfaces (BCI) is a new way of realizing human-computer interaction. In this paper, the application of a semi-supervised sparse representation classifier algorithm based on help training to EEG classification for BCI is reported. Firstly, the correlation information of the unlabeled data is obtained by the sparse representation classifier, and some data with high correlation are selected. Secondly, the boundary information of the selected data is produced by a discriminative classifier, namely the Fisher linear classifier. The final unlabeled data with high confidence are selected by a criterion containing both distance and direction information. We applied this novel method to three benchmark datasets: BCI I, BCI II_IV and USPS. The classification rates were 97%, 82% and 84.7%, respectively. Moreover, the fastest running time was only about 0.2 s. Both the classification rate and the efficiency of the novel method are better than those of the semi-supervised support vector machine (S3VM) and the support vector machine (SVM), proving that the proposed method is effective.
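The selection loop described above can be sketched compactly (a minimal illustration with assumed interfaces, not the authors' implementation): the sparse representation classifier (SRC) ranks unlabeled samples by how decisively one class reconstructs them, and the Fisher linear discriminant keeps only candidates far from its decision boundary. The paper's distance-and-direction criterion is reduced here to a simple decision-function margin.

```python
# Sketch of one help-training round (assumed interfaces, not the authors' code).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import Lasso

def src_residuals(X_lab, y_lab, x):
    """Class-wise reconstruction residuals of x over the labeled dictionary."""
    coder = Lasso(alpha=0.01, max_iter=5000)
    coder.fit(X_lab.T, x)                      # columns of X_lab.T are atoms
    residuals = {}
    for c in np.unique(y_lab):
        coef_c = np.where(y_lab == c, coder.coef_, 0.0)
        residuals[c] = np.linalg.norm(x - X_lab.T @ coef_c)
    return residuals

def help_training_step(X_lab, y_lab, X_unlab, top_k=5, margin=0.5):
    # 1) SRC confidence: large gap between best and second-best class residual
    scored = []
    for i, x in enumerate(X_unlab):
        res = src_residuals(X_lab, y_lab, x)
        order = sorted(res, key=res.get)
        scored.append((res[order[1]] - res[order[0]], i, order[0]))
    scored.sort(reverse=True)
    # 2) Fisher discriminant check: keep candidates far from the boundary
    lda = LinearDiscriminantAnalysis().fit(X_lab, y_lab)
    kept = [(i, c) for _, i, c in scored[:top_k]
            if abs(lda.decision_function(X_unlab[i:i + 1])[0]) > margin]
    return kept                                # (index, pseudo-label) pairs
```

The returned pairs would be moved into the labeled set and the round repeated until the unlabeled pool is exhausted.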
In the early classification of Alzheimer's disease (AD), conventional linear feature extraction algorithms have difficulty extracting the most discriminative information from high-dimensional features to effectively classify unlabeled samples. Therefore, in order to reduce redundant features and improve recognition accuracy, this paper used the supervised locally linear embedding (SLLE) algorithm to transform multivariate data of regional brain volume and cortical thickness into a locally linear space with fewer dimensions. A total of 412 individuals were collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI), including stable mild cognitive impairment (sMCI, n = 93), amnestic mild cognitive impairment (aMCI, n = 96), AD (n = 86) and cognitively normal controls (CN, n = 137). The SLLE algorithm used in this paper calculates the nearest neighbors of each sample point after adding a distance correction term; the locally linear reconstruction weight matrix is obtained from these nearest neighbors, and the low-dimensional mapping of the high-dimensional data is then computed. To verify the validity of SLLE for classification, feature extraction algorithms including principal component analysis (PCA), neighborhood min-max projection (NMMP), locally linear embedding (LLE) and SLLE were respectively combined with a support vector machine (SVM) classifier to obtain the classification accuracy for CN vs. sMCI, CN vs. aMCI, CN vs. AD, sMCI vs. aMCI, sMCI vs. AD, and aMCI vs. AD. Experimental results showed that our method achieved improvements (accuracy/sensitivity/specificity: 65.16%/63.33%/67.62%) on the classification of sMCI and aMCI compared with the combination of LLE and SVM (accuracy/sensitivity/specificity: 64.08%/66.14%/62.77%) and SVM alone (accuracy/sensitivity/specificity: 57.25%/56.28%/58.08%).
Specifically, the accuracy of the combination of SLLE and SVM is 1.08% higher than that of LLE and SVM, and 7.91% higher than that of SVM alone. Thus, the combination of SLLE and SVM is more effective in the early diagnosis of Alzheimer's disease.
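The distance-corrected neighbor search at the heart of SLLE can be illustrated with a compact sketch. The form of the correction term is an assumption for illustration (inter-class distances are inflated by a fraction alpha of the largest pairwise distance before the neighbor search); the rest follows standard LLE.

```python
# Illustrative SLLE sketch; parameter names and the exact correction term
# are assumptions, not the paper's formulation.
import numpy as np

def slle(X, y, n_neighbors=5, n_components=2, alpha=0.3, reg=1e-3):
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    same = y[:, None] == y[None, :]
    Ds = D + alpha * D.max() * (~same)         # distance correction term
    np.fill_diagonal(Ds, np.inf)               # exclude self-matches
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(Ds[i])[:n_neighbors]
        Z = X[nbrs] - X[i]                     # centered neighborhood
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(n_neighbors)   # regularized Gram
        w = np.linalg.solve(G, np.ones(n_neighbors))
        W[i, nbrs] = w / w.sum()               # reconstruction weights
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    _, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]         # drop the constant eigenvector
```

Because the inflated inter-class distances force every neighborhood to be same-class, the embedding tends to collapse each class into a tight region, which is what makes the low-dimensional features easier to separate with an SVM.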
The extraction of pulse rate variability (PRV) in daily life is often affected by exercise and blood perfusion. Therefore, this paper proposes a method of detecting the pulse signal and extracting PRV at the post-auricular (behind-the-ear) site, which could improve the accuracy and stability of PRV extraction in daily life. First, a post-auricular pulse signal detection system suitable for daily use was developed, which can transmit data to an Android phone via Bluetooth for daily PRV extraction. Then, according to the states of daily life, nine experiments were designed covering static, motion, chewing and talking conditions. In these experiments, the single-lead electrocardiogram (ECG) signal and the finger pulse signal collected by a commercial pulse sensor were acquired synchronously and compared with the post-auricular pulse signal. According to the waveform, amplitude and frequency-amplitude characteristics, the post-auricular pulse signal was significantly more stable and carried more information than the finger pulse signal obtained in the traditional way. The PRV extracted from the post-auricular pulse signal has high accuracy, with accuracies above 98.000% in all nine experiments. The proposed post-auricular PRV extraction method has the characteristics of high accuracy, good stability and ease of use in daily life, and can provide new ideas and ways for the accurate extraction of PRV under unsupervised conditions.
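The core PRV-extraction step can be illustrated as peak detection followed by conversion to an inter-beat interval series (a minimal sketch, not the authors' signal-processing chain; the 0.4 s refractory constraint is an assumed parameter):

```python
# Illustrative PRV extraction: detect pulse peaks, return inter-beat intervals.
import numpy as np
from scipy.signal import find_peaks

def extract_prv(pulse, fs):
    """Return the inter-beat interval series (seconds) of a pulse signal."""
    # require peaks at least 0.4 s apart (i.e., below 150 beats/min) and
    # above the signal mean, to reject small ripples
    peaks, _ = find_peaks(pulse, distance=int(0.4 * fs),
                          height=float(np.mean(pulse)))
    return np.diff(peaks) / fs
```

The standard deviation or spectral content of the returned interval series then gives the usual PRV indices.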
The application of deep learning-based minimally invasive surgical tool detection and tracking technology in minimally invasive surgery is currently a research hotspot. This paper first expounds the relevant technical content of minimally invasive surgical tool detection and tracking, mainly introducing the advantages of deep learning-based algorithms. It then summarizes algorithms for detecting and tracking surgical tools based on fully supervised deep neural networks and the emerging algorithms based on weakly supervised deep neural networks. Several typical algorithm frameworks and their flow charts based on deep convolutional and recurrent neural networks are summarized with emphasis, so as to enable researchers in relevant fields to understand the current research progress more systematically and to provide a reference for minimally invasive surgeons selecting navigation technology. Finally, this paper discusses general directions for further research on deep learning-based minimally invasive surgical tool detection and tracking technology.
Image registration is of great clinical importance in the computer-aided diagnosis and surgical planning of liver diseases. Deep learning-based registration methods endow liver computed tomography (CT) image registration with real-time performance and high accuracy. However, when registering images with large displacement and deformation, existing methods face the challenge of texture variation in the registered image, which leads to erroneous subsequent image processing and clinical diagnosis. To this end, a novel unsupervised registration method based on texture filtering is proposed in this paper for liver CT image registration. Firstly, a texture filtering algorithm based on L0 gradient minimization removes the texture information of the liver surface in the CT images, so that the registration process refers only to the spatial structure information of the two images, thus solving the problem of texture variation. Then, a cascaded network is adopted to register images with large displacement and large deformation, progressively aligning the moving image with the fixed one in spatial structure. In addition, a new registration metric, the histogram correlation coefficient, is proposed to measure the degree of texture variation after registration. Experimental results show that the proposed method achieves high registration accuracy, effectively solves the problem of texture variation in the cascaded network, and improves registration performance in terms of spatial structure correspondence and anti-folding capability. Therefore, our method helps to improve the performance of medical image registration and makes registration safe and reliable for application in the computer-aided diagnosis and surgical planning of liver diseases.
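The proposed histogram correlation coefficient can be sketched as the Pearson correlation between the intensity histograms of two images computed over a shared bin range (an assumed formulation consistent with the description above; the bin count is illustrative):

```python
# Assumed form of the histogram correlation coefficient metric.
import numpy as np

def histogram_correlation(img_a, img_b, bins=64):
    lo = float(min(img_a.min(), img_b.min()))
    hi = float(max(img_a.max(), img_b.max()))
    ha, _ = np.histogram(img_a, bins=bins, range=(lo, hi), density=True)
    hb, _ = np.histogram(img_b, bins=bins, range=(lo, hi), density=True)
    return float(np.corrcoef(ha, hb)[0, 1])
```

A value near 1 indicates that registration left the intensity (texture) distribution essentially unchanged, while texture distortion pushes the coefficient down.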
O6-carboxymethyl guanine (O6-CMG) is a highly mutagenic alkylation product of DNA that causes gastrointestinal cancer in organisms. Existing studies have localized it using a mutant Mycobacterium smegmatis porin A (MspA) nanopore assisted by Phi29 DNA polymerase. Recently, machine learning has been widely used in the analysis of nanopore sequencing data, but it generally requires a large number of data labels, which imposes an extra burden on researchers and greatly limits its practicability. Accordingly, this paper proposes a nano-Unsupervised-Deep-Learning method (nano-UDL) based on an unsupervised clustering algorithm to identify methylation events in nanopore data automatically. Specifically, nano-UDL first uses a deep AutoEncoder to extract features from the nanopore dataset and then applies the MeanShift clustering algorithm to classify the data. Moreover, nano-UDL can extract features that are optimal for clustering by jointly optimizing the clustering loss and the reconstruction loss. Experimental results demonstrate that nano-UDL achieves relatively high recognition accuracy on the O6-CMG dataset and can accurately identify all sequence segments containing O6-CMG. To further verify the robustness of nano-UDL, hyperparameter sensitivity and ablation experiments were carried out. Using machine learning to analyze nanopore data can effectively reduce the additional cost of manual data analysis, which is significant for many biological studies, including genome sequencing.
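The feature-extraction-then-clustering pipeline can be illustrated with a deliberately simplified sketch: a tiny linear autoencoder stands in for the deep AutoEncoder, and the two stages run separately rather than being jointly optimized as in nano-UDL. All parameters are illustrative.

```python
# Simplified stand-in for the nano-UDL pipeline (not the paper's network).
import numpy as np
from sklearn.cluster import MeanShift

def train_autoencoder(X, code_dim=2, lr=0.002, epochs=300, seed=0):
    """Gradient descent on the reconstruction loss ||X We Wd - X||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    We = rng.normal(scale=0.1, size=(d, code_dim))   # encoder weights
    Wd = rng.normal(scale=0.1, size=(code_dim, d))   # decoder weights
    for _ in range(epochs):
        Z = X @ We                                   # codes
        err = Z @ Wd - X                             # reconstruction error
        gWd = Z.T @ err / n
        gWe = X.T @ (err @ Wd.T) / n
        We -= lr * gWe
        Wd -= lr * gWd
    return X @ We                                    # learned codes

def cluster_events(X, code_dim=2):
    codes = train_autoencoder(X, code_dim=code_dim)
    return MeanShift().fit_predict(codes)            # one label per event
```

MeanShift needs no preset number of clusters, which matches the unsupervised setting: methylated and unmethylated events fall out as separate modes of the code distribution.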
Blood velocity inversion based on the magnetoelectric effect is helpful for the development of daily monitoring of vascular stenosis, but the accuracy of blood velocity inversion and the imaging resolution still need to be improved. Therefore, a convolutional neural network (CNN)-based inversion imaging method for intravascular blood flow velocity was proposed in this paper. Firstly, an unsupervised-learning CNN is constructed to extract weight-matrix representation information and preprocess the voltage data. The preprocessed results are then fed into a supervised-learning CNN, which outputs the blood flow velocity through a nonlinear mapping. Finally, angiographic images are obtained. The validity of the proposed method is verified on a constructed dataset. The results show that the correlation coefficients of blood velocity inversion in the vessel location and stenosis tests are 0.8844 and 0.9721, respectively. This shows that the proposed method can effectively reduce information loss during inversion and improve inversion accuracy and imaging resolution, and it is expected to assist clinical diagnosis.
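The two-stage structure, unsupervised preprocessing followed by supervised nonlinear mapping, can be mimicked with ordinary components. In the sketch below, PCA and an MLP stand in for the paper's two CNNs; this illustrates only the pipeline shape, not the network described in the paper, and all parameters are assumptions.

```python
# Pipeline-shape illustration only (PCA/MLP stand in for the two CNNs).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

def fit_inversion(V_train, v_train, n_components=8):
    # unsupervised stage: compact (whitened) representation of voltage data
    pre = PCA(n_components=n_components, whiten=True).fit(V_train)
    # supervised stage: nonlinear mapping to blood flow velocity
    reg = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000,
                       random_state=0)
    reg.fit(pre.transform(V_train), v_train)
    return pre, reg

def predict_velocity(pre, reg, V):
    return reg.predict(pre.transform(V))
```

Separating the representation-learning stage from the regression stage is what lets the first stage be trained without velocity labels, mirroring the unsupervised/supervised split in the proposed method.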
Recently, deep learning has achieved impressive results in medical image tasks. However, it usually requires large-scale annotated data, and medical images are expensive to annotate, so learning efficiently from limited annotated data is a challenge. The two commonly used approaches, transfer learning and self-supervised learning, have been little studied on multimodal medical images, so this study proposes a contrastive learning method for multimodal medical images. The method takes images of different modalities from the same patient as positive samples, which effectively increases the number of positive samples during training and helps the model fully learn the similarities and differences of lesions across modalities, thus improving the model's understanding of medical images and its diagnostic accuracy. Since commonly used data augmentation methods are not suitable for multimodal images, this paper also proposes a domain adaptive denormalization method that transforms source-domain images with the help of the statistical information of the target domain. The method was validated on two multimodal medical image classification tasks: in the microvascular infiltration recognition task, it achieved an accuracy of (74.79 ± 0.74)% and an F1 score of (78.37 ± 1.94)%, improving on conventional learning methods; in the brain tumor pathology grading task, it also achieved significant improvements. The results show that the method performs well on multimodal medical images and can provide a reference solution for the pre-training of multimodal medical images.
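Two ingredients of the method lend themselves to compact sketches: an InfoNCE-style contrastive loss in which images of different modalities from the same patient form the positive pair, and an AdaIN-style re-normalization of a source-domain image with target-domain statistics. The paper's exact formulations may differ; names and constants below are illustrative.

```python
# Illustrative sketches (assumed formulations, not the paper's exact ones).
import numpy as np

def cross_modal_info_nce(za, zb, tau=0.1):
    """Rows i of za (modality A) and zb (modality B) are the same patient."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / tau
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(logp)))                # diagonal = positives

def domain_adaptive_denorm(src, tgt_mean, tgt_std, eps=1e-6):
    """Give a source-domain image the target domain's mean and spread."""
    out = (src - src.mean()) / (src.std() + eps)
    return out * tgt_std + tgt_mean
```

Using cross-modal positives means every patient contributes one positive pair per modality combination, which is how the method enlarges the positive set without conventional augmentation.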
Computed tomography (CT) imaging is a vital tool for the diagnosis and assessment of lung adenocarcinoma, and using CT images to predict the recurrence-free survival (RFS) of lung adenocarcinoma patients after surgery is of paramount importance in tailoring postoperative treatment plans. Addressing the challenging task of accurate RFS prediction from CT images, this paper introduces an innovative approach based on self-supervised pre-training and multi-task learning. We employed a self-supervised learning strategy known as “image transformation to image restoration” to pre-train a 3D-UNet on publicly available lung CT datasets and extract generic visual features of lung images. We then enhanced the network’s feature extraction capability through multi-task learning with segmentation and classification tasks, guiding the network to extract image features relevant to RFS. In addition, we designed a multi-scale feature aggregation module to comprehensively combine multi-scale image features, and finally predicted the RFS risk score of lung adenocarcinoma with a feed-forward neural network. The predictive performance of the proposed method was assessed by ten-fold cross-validation. The results showed that the concordance index (C-index) for RFS prediction and the area under the curve (AUC) for predicting recurrence within three years reached 0.691 ± 0.076 and 0.707 ± 0.082, respectively, outperforming existing methods. This study confirms that the proposed method has potential for RFS prediction in lung adenocarcinoma patients and is expected to provide a reliable basis for the development of individualized treatment plans.
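The C-index used for evaluation has a standard pairwise definition: the fraction of comparable patient pairs whose predicted risk ordering matches the observed ordering of event times (censored subjects anchor no pair but may serve as the later member of one). A direct sketch:

```python
# Standard pairwise computation of the concordance index (C-index).
def concordance_index(times, events, risk):
    num = den = 0.0
    n = len(times)
    for i in range(n):
        if not events[i]:                    # censored subjects anchor no pair
            continue
        for j in range(n):
            if times[i] < times[j]:          # pair is comparable
                den += 1.0
                if risk[i] > risk[j]:
                    num += 1.0               # concordant
                elif risk[i] == risk[j]:
                    num += 0.5               # tied risks count half
    return num / den
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect risk ranking, which puts the reported 0.691 in context.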
Accurate reconstruction of the tissue elasticity modulus distribution has always been an important challenge in ultrasound elastography. Existing deep learning-based supervised reconstruction methods train only on simulated displacement data with random noise, which cannot fully capture the complexity and diversity of in-vivo ultrasound data. This study therefore introduces displacement data obtained by tracking in-vivo ultrasound radio-frequency signals (i.e., real displacement data) into training, employing a semi-supervised approach to enhance the prediction accuracy of the model. Experimental results indicate that in phantom experiments, the semi-supervised model augmented with real displacement data gives more accurate predictions, with mean absolute and mean relative errors both around 3%, versus around 5% for the fully supervised model. When processing real displacement data, the prediction-error area of the semi-supervised model was also smaller than that of the fully supervised model. These findings confirm the effectiveness and practicality of the proposed approach and provide new insights for applying deep learning to the reconstruction of elasticity distributions from in-vivo ultrasound data.
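The semi-supervised objective can be sketched as a supervised term on simulated displacement-modulus pairs plus a term on unlabeled real displacement data. The abstract does not specify the unsupervised loss, so the consistency form below (agreement between two predictions for the same real data) is purely an assumption for illustration.

```python
# Assumed shape of the semi-supervised objective (illustrative only).
import numpy as np

def semi_supervised_loss(pred_sim, target_sim, pred_real_a, pred_real_b,
                         lam=0.5):
    sup = np.mean((pred_sim - target_sim) ** 2)       # simulated, labeled pairs
    cons = np.mean((pred_real_a - pred_real_b) ** 2)  # real, unlabeled data
    return float(sup + lam * cons)
```

The weight lam would balance how strongly the unlabeled real data regularizes the supervised fit.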