Emotion plays an important role in human cognition and communication. By analyzing electroencephalogram (EEG) signals to identify internal emotions and feeding emotional information back in an active or passive way, affective brain-computer interactions can effectively promote human-computer interaction. This paper focuses on emotion recognition using EEG. We systematically evaluated the performance of state-of-the-art feature extraction and classification methods on the publicly available dataset for emotion analysis using physiological signals (DEAP). Because the common random-split method leads to high correlation between training and testing samples, we used block-wise K-fold cross-validation. Moreover, we compared emotion recognition accuracy across different time window lengths; the experimental results indicate that a 4 s time window is appropriate for sampling. We propose a filter-bank long short-term memory network (FBLSTM) that takes differential entropy features as input. The average accuracy for low versus high valence, low versus high arousal, and the four-class combination in the valence-arousal plane was 78.8%, 78.4% and 70.3%, respectively. These results demonstrate the advantage of our emotion recognition model over current studies in terms of classification accuracy. Our model may provide a novel method for emotion recognition in affective brain-computer interactions.
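Two ingredients of the pipeline above can be illustrated compactly. This is a minimal sketch rather than the authors' implementation: under the common Gaussian assumption, the differential entropy (DE) of a band-passed EEG window reduces to 0.5·ln(2πeσ²), and a block-wise K-fold split assigns contiguous windows of a recording to each fold so that training and testing windows never interleave in time (function names here are ours).

```python
import numpy as np

def differential_entropy(x):
    # Under a Gaussian assumption, the DE of a band-passed window
    # reduces to 0.5 * ln(2 * pi * e * variance).
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def blockwise_kfold(n_windows, k):
    # Assign contiguous blocks of windows to each fold so that training
    # and testing samples never come from interleaved time ranges.
    bounds = np.linspace(0, n_windows, k + 1, dtype=int)
    for i in range(k):
        test = np.arange(bounds[i], bounds[i + 1])
        train = np.concatenate([np.arange(0, bounds[i]),
                                np.arange(bounds[i + 1], n_windows)])
        yield train, test
```

In a full FBLSTM pipeline, DE would be computed per filter-bank band per window before being fed to the recurrent network; the block-wise split then prevents temporally adjacent (hence highly correlated) windows from straddling the train/test boundary.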
This study addressed automated characterization of vessel wall tissues, including atherosclerotic plaques, branchings and stents, from intravascular ultrasound (IVUS) gray-scale images. Texture features of each frame were first extracted with local binary patterns (LBP), Haar-like features and Gabor filters. A Gentle AdaBoost classifier was then designed to classify the tissue features. The methods were validated on clinically acquired image data, with manual characterization by experienced physicians adopted as the gold standard for evaluating accuracy. Results indicated that recognition accuracy reached 94.54% for lipidic plaques, while classification precision reached 93.08% for fibrous and calcified plaques. Recognition accuracy reached 93.20% for branchings and 93.50% for stents.
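Of the three texture descriptors named above, LBP is the simplest to sketch. The following is an illustrative basic 8-neighbour LBP (not the authors' code, and real systems often use rotation-invariant or multi-radius variants): each pixel is encoded by thresholding its eight neighbours against the centre value, and the histogram of codes serves as the texture feature vector.

```python
import numpy as np

def lbp_image(img):
    # Basic 8-neighbour local binary pattern: threshold each of the 8
    # neighbours against the centre pixel and pack the bits into a code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

def lbp_histogram(img, bins=256):
    # Normalised histogram of LBP codes: a per-frame texture descriptor.
    hist, _ = np.histogram(lbp_image(img), bins=bins, range=(0, bins))
    return hist / hist.sum()
```

Such per-frame histograms (possibly concatenated with Haar-like and Gabor responses) are the kind of feature vector a boosted classifier like Gentle AdaBoost would consume.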
In early classification of Alzheimer's disease (AD), conventional linear feature extraction algorithms struggle to extract the most discriminative information from high-dimensional features and thus to classify unlabeled samples effectively. Therefore, to reduce redundant features and improve recognition accuracy, this paper used the supervised locally linear embedding (SLLE) algorithm to transform multivariate data of regional brain volume and cortical thickness into a locally linear space with fewer dimensions. A total of 412 individuals were collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI), including stable mild cognitive impairment (sMCI, n = 93), amnestic mild cognitive impairment (aMCI, n = 96), AD (n = 86) and cognitively normal controls (CN, n = 137). The SLLE algorithm computes the nearest neighbors of each sample point after adding a distance correction term, obtains the locally linear reconstruction weight matrix from these neighbors, and then derives the low-dimensional mapping of the high-dimensional data. To verify the validity of SLLE for classification, feature extraction algorithms including principal component analysis (PCA), Neighborhood MinMax Projection (NMMP), locally linear embedding (LLE) and SLLE were each combined with a support vector machine (SVM) classifier to classify CN vs. sMCI, CN vs. aMCI, CN vs. AD, sMCI vs. aMCI, sMCI vs. AD, and aMCI vs. AD. Experimental results showed that on the classification of sMCI and aMCI our method (accuracy/sensitivity/specificity: 65.16%/63.33%/67.62%) outperformed both the combination of LLE and SVM (64.08%/66.14%/62.77%) and SVM alone (57.25%/56.28%/58.08%).
Specifically, the accuracy of the SLLE and SVM combination was 1.08% higher than that of the LLE and SVM combination, and 7.91% higher than that of SVM alone. Thus, the combination of SLLE and SVM is more effective for early diagnosis of Alzheimer's disease.
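The distance correction that distinguishes SLLE from plain LLE can be sketched as follows. This is an illustrative interpretation, not the authors' code: pairwise distances are inflated by a label-dependent penalty (here a fraction α of the maximum pairwise distance, a common formulation) so that nearest neighbours are preferentially drawn from the same class before the usual LLE reconstruction weights are computed.

```python
import numpy as np

def slle_neighbors(X, y, k, alpha=0.3):
    # Supervised LLE neighbour search: Euclidean distances between
    # samples of different classes are inflated by alpha * max distance,
    # biasing the k nearest neighbours toward the sample's own class.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    penalty = alpha * D.max() * (y[:, None] != y[None, :])
    Dc = D + penalty
    np.fill_diagonal(Dc, np.inf)   # a point is not its own neighbour
    return np.argsort(Dc, axis=1)[:, :k]
```

With α = 0, this reduces to the unsupervised LLE neighbour search; larger α makes the embedding increasingly class-aware, which is what improves downstream SVM separability.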
Human motion recognition (HAR) is the technological basis of intelligent medical treatment, sports training, video monitoring and many other fields, and it has attracted wide attention. This paper summarized the progress and significance of HAR research, which comprises two processes: motion capture and deep-learning-based action classification. First, the paper introduced in detail three mainstream motion capture approaches: video-based, depth-camera-based and inertial-sensor-based, and listed the commonly used action datasets. Second, the realization of HAR based on deep learning was described in two respects: automatic feature extraction and multi-modal feature fusion. The use of HAR for training monitoring and simulated training in orthopedic rehabilitation was also introduced. Finally, the paper discussed precise motion capture and multi-modal feature fusion for HAR, as well as the key points and difficulties of applying HAR in orthopedic rehabilitation training. This summary is intended to quickly guide researchers to the current status of HAR research and its application in orthopedic rehabilitation training.
Differential diagnosis of primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM) is of great clinical significance because the two tumors require very different therapeutic regimens. In this paper, we propose a sparse representation-based system for automatic classification of PCNSL and GBM. The proposed system distinguishes the two tumors by exploiting their different texture detail on T1 contrast-enhanced magnetic resonance imaging (MRI) images. First, inspired by the radiomics workflow, we designed a dictionary learning and sparse representation-based method to extract texture information; with this approach, tumors of differing volume and shape were transformed into 968 quantitative texture features. Next, to address redundancy in the extracted features, we devised feature selection based on iterative sparse representation to select key texture features with high stability and discrimination. Finally, the selected key features were used for differentiation with the sparse representation classification (SRC) method. Under ten-fold cross-validation, the proposed approach achieved an accuracy of 96.36%, sensitivity of 96.30% and specificity of 96.43%. Experimental results show that our approach not only effectively distinguishes the two tumors but also shows strong robustness in practical application, since it avoids parameter extraction on advanced MRI images.
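The SRC decision rule named above can be sketched in a few lines. This is a generic illustration, not the paper's implementation (which learns the dictionary from texture features): a test sample is sparsely coded over a dictionary whose atoms carry class labels (here via a simple greedy orthogonal matching pursuit), and the class whose atoms reconstruct the sample with the smallest residual wins.

```python
import numpy as np

def omp(A, b, n_nonzero):
    # Orthogonal matching pursuit: greedily pick the atom most correlated
    # with the residual, then refit the coefficients by least squares.
    residual = b.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

def src_classify(A, labels, b, n_nonzero=5):
    # SRC: code b over the whole labelled dictionary A, then assign the
    # class whose atoms alone give the smallest reconstruction residual.
    x = omp(A, b, n_nonzero)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(b - A @ np.where(labels == c, x, 0.0))
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

The class-wise residual test is what gives SRC its robustness: a sample is claimed by the class that can express it most economically, without tuning a separate discriminative model.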
The purpose of a brain-computer interface (BCI) is to build a bridge between the brain and a computer for disabled persons, helping them communicate with the outside world. Electroencephalography (EEG) has a low signal-to-noise ratio (SNR), and traditional methods for EEG feature extraction suffer from low classification accuracy, lack of spatial information and very large feature sets. To solve these problems, we proposed a new method combining the time, frequency and spatial domains. In this study, independent component analysis (ICA) and the wavelet transform were used to extract temporal, spectral and spatial features from the original EEG signals, and the extracted features were then classified with a method combining a support vector machine (SVM) with a genetic algorithm (GA). The proposed method displayed better classification performance, achieving a mean accuracy of 96% on the Graz datasets from the 2003 BCI Competition. The classification results showed that the proposed three-domain method could effectively overcome the drawbacks of traditional methods based solely on the time-frequency domain when EEG signals are used to characterize brain electrical activity.
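The wavelet part of such a pipeline can be sketched with the simplest wavelet of all. This is an illustrative stand-in, not the authors' code (they do not state which wavelet family was used): a multi-level Haar decomposition of an EEG window, with the relative energy of each sub-band taken as a spectral-like feature.

```python
import numpy as np

def haar_dwt(x):
    # One level of the Haar wavelet transform: pairwise sums give the
    # approximation band, pairwise differences give the detail band.
    x = x[: len(x) // 2 * 2]
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_band_energies(x, levels=4):
    # Relative energy of each detail sub-band plus the final
    # approximation band: a compact feature vector for one EEG window.
    energies = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(x ** 2))
    energies = np.array(energies)
    return energies / energies.sum()
```

Feature vectors of this kind, concatenated with ICA-derived spatial components, would then be handed to the GA-tuned SVM stage.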
Biometrics plays an important role in the information society. As a new type of biometric, electroencephalogram (EEG) signals have special advantages in terms of versatility, durability and safety. At present, research on individual identification approaches based on EEG signals is drawing much attention. Identity feature extraction is an important step toward good identification performance, and how to exploit the characteristics of EEG data to better extract the discriminative information in EEG signals has been a research hotspot in EEG-based identity recognition in recent years. This article reviewed the commonly used identity feature extraction methods based on EEG signals, including single-channel features, inter-channel features, deep learning methods and spatial filter-based methods, and explained the basic principles, application methods and related achievements of each. Finally, we summarized the current problems and forecast future development trends.
Skin aging is the most intuitive and obvious sign of the human aging process. Qualitative and quantitative determination of skin aging is of particular importance for evaluating human aging and the effects of anti-aging treatments. To address the subjectivity of conventional skin aging grading methods, a self-organizing map (SOM) network was used to develop an automatic method for skin aging grading. First, ventral forearm skin images were obtained with a portable digital microscope, and two texture parameters, i.e., the mean width of skin furrows and the number of intersections, were extracted by an image processing algorithm. Then, the values of the texture parameters were taken as inputs to train the SOM network. The experimental results showed that the network achieved an overall accuracy of 80.8% compared with the aging grades assigned by human graders. The designed method is rapid and objective, and can be used for quantitative analysis of skin images and automatic assessment of skin aging grade.
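The SOM training loop itself is small enough to sketch. This is a generic minimal SOM (not the study's network, whose grid size and schedule are not given here): competitive learning in which the best-matching unit and its grid neighbours, weighted by a Gaussian neighbourhood that shrinks over training, are pulled toward each input vector.

```python
import numpy as np

def train_som(X, grid=(2, 2), epochs=30, lr0=0.5, sigma0=1.5, seed=0):
    # Minimal self-organising map: each input pulls the best-matching
    # unit (BMU) and its grid neighbours toward itself; learning rate
    # and neighbourhood radius both decay over training.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(grid[0] * grid[1], X.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], dtype=float)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = sigma0 * (1 - e / epochs) + 1e-3
        for x in X[rng.permutation(len(X))]:
            bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))
            h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1)
                       / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)
    return W

def som_predict(W, X):
    # Map each sample to the index of its best-matching unit.
    return np.argmin(np.linalg.norm(W[None, :, :] - X[:, None, :],
                                    axis=-1), axis=1)
```

In the grading task, each trained unit would be assigned an aging grade (e.g., by majority vote of the graded training samples it attracts), and new skin images are graded by their best-matching unit.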
As an important component of the event-related potential (ERP), the late positive potential (LPP) is ideal for studying emotion regulation. This study focused on processing and analysing the LPP component of electroencephalogram (EEG) signals recorded during emotional cognitive reappraisal. First, we used the independent component analysis (ICA) algorithm to remove electrooculogram, electromyogram and other artifacts from 16 subjects' EEG data acquired with an EGI 64-channel EEG system. Second, we extracted features of the EEG signal at the Pz electrode using the one-versus-the-rest common spatial patterns (OVR-CSP) algorithm. Finally, the extracted LPP component was analysed in both the time and spatial domains. The results indicated that: ① in terms of amplitude, the LPP induced by cognitive reappraisal was much higher than that under the condition of watching neutral stimuli, but lower than that under the condition of watching negative stimuli; ② in terms of time course, the difference between cognitive reappraisal and watching spanned 0.3 s to 1.5 s after processing with the OVR-CSP algorithm, but 0.3 s to 1.25 s after processing with the averaging method. The results suggested that the OVR-CSP algorithm can accurately extract the LPP component from fewer trials than the averaging method, providing a better tool for follow-up study of the cognitive reappraisal strategy as well as a neurophysiological basis for cognitive reappraisal in emotion regulation.
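The CSP computation underlying OVR-CSP can be sketched for the binary case. This is an illustrative textbook formulation, not the study's code: average trial covariances for the two conditions are jointly whitened, and the eigenvectors of the whitened class-1 covariance give spatial filters that maximize variance for one condition while minimizing it for the other. The OVR variant simply applies this per condition against the pooled remaining conditions.

```python
import numpy as np

def csp_filters(X1, X2):
    # Common spatial patterns for two conditions.
    # X1, X2: arrays of shape (trials, channels, samples).
    def avg_cov(X):
        # Trace-normalised spatial covariance, averaged over trials.
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    lam, U = np.linalg.eigh(C1 + C2)
    P = np.diag(lam ** -0.5) @ U.T          # whitening transform
    d, B = np.linalg.eigh(P @ C1 @ P.T)     # diagonalise whitened C1
    order = np.argsort(d)[::-1]             # sort by class-1 variance
    return B[:, order].T @ P                # rows = spatial filters
```

The first rows of the returned matrix emphasize condition 1 and the last rows condition 2; projecting single trials through a few filters from each end yields low-dimensional, high-contrast components, which is why CSP-style extraction needs fewer trials than plain averaging.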
Brain-computer interaction (BCI) is a transformative form of human-computer interaction that bypasses the peripheral nerve and muscle system and directly converts the perception, imagery or thinking activities of cranial nerves into actions, further improving quality of life. Magnetoencephalography (MEG) measures the magnetic field generated by the electrical activity of neurons; it offers the unique advantages of non-contact measurement, high temporal and spatial resolution, and convenient preparation, and is a new BCI driving signal. MEG-BCI research has important significance for brain science and potential application value. So far, few publications have elaborated on the key technical issues involved in MEG-BCI. Therefore, this paper focuses on the key technologies of MEG-BCI, detailing the signal acquisition technology involved in a practical MEG-BCI system, the design of MEG-BCI experimental paradigms, key technologies for MEG signal analysis and decoding, and MEG-BCI neurofeedback technology and its intelligent methods. Finally, the paper discusses the existing problems and future development trends of MEG-BCI. It is hoped that this paper will provide useful ideas for innovative MEG-BCI research.