OBJECTIVE: To investigate the role of direct and indirect recognition in pig-to-human xenotransplantation. METHODS: One-way mixed lymphocyte reactions (MLR) were performed using peripheral blood lymphocytes (PBLC) from three Neijiang pigs and two humans as stimulators and responders, with allogeneic and autologous PBLC as controls. RESULTS: Among the three MLR patterns, the syngeneic MLR showed the lowest proliferation, the allogeneic MLR the highest, and the xenogeneic MLR was intermediate. The PBLCs from humans and pigs were typed for HLA-A, B, DR and DQ by a modified Terasaki assay. Typing of the pig cells failed because of pre-existing natural xenogeneic antibodies in the test serum. CONCLUSION: The results suggest that the degree of MHC matching still affects rejection in xenotransplantation, but the present serum-based MHC typing assay is not suitable for pigs.
Existing emotion recognition research is typically limited to static laboratory settings and does not fully capture changes in emotional state in dynamic scenarios. To address this problem, this paper proposes a dynamic continuous emotion recognition method based on electroencephalography (EEG) and eye movement signals. Firstly, an experimental paradigm was designed to cover six dynamic emotion transition scenarios: happy to calm, calm to happy, sad to calm, calm to sad, nervous to calm, and calm to nervous. EEG and eye movement data were collected simultaneously from 20 subjects to fill the gap in current multimodal dynamic continuous emotion datasets. In the valence-arousal two-dimensional space, emotion ratings for the stimulus videos were given every five seconds on a scale of 1 to 9, and the dynamic continuous emotion labels were normalized. Subsequently, frequency band features were extracted from the preprocessed EEG and eye movement data, and a cascade feature fusion approach was used to effectively combine them into an information-rich multimodal feature vector. This feature vector was input into four regression models, namely support vector regression with a radial basis function kernel, decision tree, random forest, and K-nearest neighbors, to develop the dynamic continuous emotion recognition model. The results showed that the proposed method achieved the lowest mean square error for both valence and arousal across the six dynamic continuous emotions. The approach can accurately recognize emotion transitions in dynamic situations, offering higher accuracy and robustness than either EEG or eye movement signals alone, which makes it well suited for practical applications.
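The cascade feature fusion step described above can be sketched as simple per-trial concatenation of the two modalities' feature vectors. This is a minimal illustration, not the authors' implementation; the feature counts and values below are invented for the example.

```python
# Hedged sketch: cascade (concatenation-based) fusion of EEG and eye-movement
# features. Dimensions and values are illustrative assumptions only.

def cascade_fuse(eeg_features, eye_features):
    """Concatenate per-trial EEG and eye-movement feature vectors."""
    if len(eeg_features) != len(eye_features):
        raise ValueError("modalities must have the same number of trials")
    return [e + g for e, g in zip(eeg_features, eye_features)]

# Toy example: 2 trials, 3 EEG band-power features + 2 eye features each.
eeg = [[0.5, 0.2, 0.1], [0.4, 0.3, 0.2]]
eye = [[12.0, 0.8], [9.5, 0.6]]
fused = cascade_fuse(eeg, eye)
print(fused[0])  # [0.5, 0.2, 0.1, 12.0, 0.8]
```

The fused vectors would then be fed to a regressor (e.g., RBF-kernel support vector regression) to predict valence and arousal.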
Studying the brain's ability to recognize different odors is of great significance for the assessment and diagnosis of olfactory dysfunction. In this work, the wavelet energy moment (WEM) was proposed as a feature of olfactory electroencephalogram (EEG) signals and used for odor classification. Firstly, olfactory evoked EEG data for 13 odors were collected in an experiment. Secondly, the WEM was extracted from the olfactory evoked EEG data as the signal feature, with power spectral density (PSD), approximate entropy, sample entropy and wavelet entropy used as contrast features. Finally, k-nearest neighbor (k-NN), support vector machine (SVM), random forest (RF) and decision tree classifiers were used to identify the different odors. The results showed that, for all four classifiers, the classification accuracy of the WEM feature was higher than that of the other features, and the k-NN classifier combined with the WEM feature achieved the highest classification accuracy (91.07%). This paper further explored the characteristics of different EEG frequency bands and found that classification based on γ-band features was mostly better than that based on the full band and other bands; the WEM feature of the γ band combined with the k-NN classifier had the highest classification accuracy (93.89%). These results could provide a new objective basis for the evaluation of olfactory function, and could also offer new ideas for the study of olfactory-induced emotions.
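The abstract does not give the exact WEM definition, but a common form weights each wavelet sub-band's coefficient energies by their time positions. The sketch below is an assumption-laden illustration using a one-level Haar transform rather than the authors' wavelet and decomposition depth.

```python
# Hedged sketch of a wavelet energy moment (WEM) feature. The authors'
# exact definition is not given in the abstract; here each coefficient's
# energy is weighted by its time index, and a one-level Haar DWT stands
# in for the real multi-level decomposition.

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    return a, d

def energy_moment(coeffs, dt=1.0):
    """Time-weighted energy of one sub-band: sum_k (k*dt) * |c_k|^2."""
    return sum(k * dt * c * c for k, c in enumerate(coeffs))

signal = [0.0, 1.0, 0.0, -1.0, 2.0, 0.0, -2.0, 0.0]
a, d = haar_dwt(signal)
wem = [energy_moment(a), energy_moment(d)]  # one WEM value per band
```

In practice one WEM value per sub-band would form the feature vector fed to the k-NN, SVM, RF or decision tree classifier.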
Brain-computer interfaces (BCI) provide a direct communication and control pathway between the brain and the surrounding environment, attracting wide interest in the fields of brain science and artificial intelligence. Decoding electroencephalogram (EEG) features is at the core of a BCI system, and the decoding efficiency depends highly on the feature extraction and feature classification algorithms. In this paper, we first introduce the EEG features commonly used in BCI systems. Then we introduce the basic classical algorithms and their advanced versions used in BCI systems. Finally, we present some new BCI algorithms proposed in recent years. We hope this paper can spark fresh thinking for the research and development of high-performance BCI systems.
Emotion plays an important role in people's cognition and communication. By analyzing electroencephalogram (EEG) signals to identify internal emotions and feeding back emotional information in an active or passive way, affective brain-computer interactions can effectively promote human-computer interaction. This paper focuses on emotion recognition using EEG. We systematically evaluate the performance of state-of-the-art feature extraction and classification methods on a publicly available dataset for emotion analysis using physiological signals (DEAP). Since the common random split method leads to high correlation between training and testing samples, we use block-wise K-fold cross-validation. Moreover, we compare emotion recognition accuracy across different time window lengths; the experimental results indicate that a 4 s time window is appropriate for sampling. We propose a filter-bank long short-term memory network (FBLSTM) using differential entropy features as input. The average accuracies for the low/high classification in the valence dimension, in the arousal dimension, and for the four-class combination in the valence-arousal plane are 78.8%, 78.4% and 70.3%, respectively. These results demonstrate the advantage of our emotion recognition model over current studies in terms of classification accuracy. Our model may provide a novel method for emotion recognition in affective brain-computer interactions.
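The motivation for block-wise K-fold cross-validation is that randomly shuffled EEG samples from the same trial end up on both sides of the split, inflating accuracy. A minimal sketch of such a splitter, under the assumption that samples are in temporal order, might look like this (the paper's exact blocking scheme is not specified in the abstract):

```python
# Hedged sketch of block-wise K-fold cross-validation: each test fold is a
# contiguous block, so temporally adjacent (hence correlated) samples do
# not straddle the train/test boundary. Block layout is an assumption.

def blockwise_kfold(n_samples, k):
    """Yield (train_idx, test_idx) pairs with contiguous test blocks."""
    bounds = [round(i * n_samples / k) for i in range(k + 1)]
    for i in range(k):
        test = list(range(bounds[i], bounds[i + 1]))
        train = [j for j in range(n_samples)
                 if j < bounds[i] or j >= bounds[i + 1]]
        yield train, test

for train, test in blockwise_kfold(10, 5):
    pass  # test folds are contiguous blocks: [0, 1], [2, 3], ..., [8, 9]
```

Contrast this with a random split, where a 4 s window and its immediate neighbor could land in different folds despite being nearly identical.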
Objective To explore the effect of preoperative hypothyroidism on postoperative cognitive dysfunction (POCD) in elderly patients after on-pump cardiac surgery. Methods Patients aged 50 years or older who were scheduled for on-pump cardiac surgery at West China Hospital from March 2016 to December 2017 were selected. Based on hormone levels, patients were divided into two groups: a hypo group (hypothyroidism group; thyroid stimulating hormone (TSH) >4.2 mU/L, free triiodothyronine (FT3) <3.60 pmol/L or free thyroxine (FT4) <12.0 pmol/L) and an eu group (euthyroidism group; normal TSH, FT3 and FT4). The mini-mental state examination (MMSE) and a battery of neuropsychological tests were administered by a single researcher to assess cognitive function 1 day before and 7 days after the operation. The primary outcome was the incidence of POCD. Secondary outcomes were the incidence of cognitive deterioration and the scores or time cost in each aspect of cognitive function. Results Whether cognitive function was assessed by the MMSE or the neuropsychological test battery, the incidence of POCD in the hypo group was higher than that in the eu group; the difference was statistically significant with the MMSE (55.56% vs. 26.67%, P=0.014) but not with the neuropsychological test battery (55.56% vs. 44.44%, P=0.361). The incidence of cognitive deterioration in the hypo group was higher than that in the eu group on the verbal fluency test (48.15% vs. 20.00%, P=0.012), and did not differ significantly between the two groups in the other aspects of cognitive function. Before surgery, there was no statistical difference in scores or time cost between the two groups in any aspect of cognitive function. After surgery, the scores differed significantly between the hypo group and the eu group on the verbal fluency test (26.26±6.55 vs. 30.23±8.00, P=0.023) but not in the other aspects of cognitive function. Conclusion The incidence of POCD is high in elderly patients with hypothyroidism after on-pump cardiac surgery, and word reserve, fluency and classification are more significantly affected by hypothyroidism than other cognitive domains, which indicates that hypothyroidism may be closely related to POCD in these patients.
Steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain-computer interface (BCI) systems. SSVEP-based BCIs have the advantages of a high information transfer rate and short training time, and have become an important branch of the BCI research field. In this review, the main progress on frequency recognition algorithms for SSVEP over the past five years is summarized from three aspects: unsupervised learning algorithms, supervised learning algorithms and deep learning algorithms. Finally, some frontier topics and potential directions are explored.
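To make the frequency recognition task concrete: the classical unsupervised approach (canonical correlation analysis) correlates multichannel EEG with sine/cosine reference templates at each candidate stimulation frequency. The sketch below is a deliberately simplified single-channel variant using plain Pearson correlation with a sine template; it is an illustration of the idea, not any algorithm from the reviewed literature.

```python
# Hedged sketch: simplified single-channel SSVEP frequency recognition by
# correlating the EEG with sinusoidal reference templates. Real systems
# typically use CCA over multiple channels and harmonics.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def recognize(eeg, fs, candidate_freqs):
    """Pick the candidate frequency whose sine template correlates best."""
    scores = {}
    for f in candidate_freqs:
        template = [math.sin(2 * math.pi * f * t / fs) for t in range(len(eeg))]
        scores[f] = abs(pearson(eeg, template))
    return max(scores, key=scores.get)

# 1 s of synthetic "EEG": a 10 Hz response plus a small high-frequency term.
fs = 250
eeg = [math.sin(2 * math.pi * 10 * t / fs) + 0.1 * math.cos(t) for t in range(fs)]
print(recognize(eeg, fs, [8, 10, 12]))  # → 10
```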
Acoustic detection based on machine learning and signal processing is an important method for pathological voice detection, and the extraction of voice features is one of its most important steps. Currently, the widely used features suffer from dependence on fundamental frequency extraction, susceptibility to noise, and high computational complexity. In view of these shortcomings, a new pathological voice detection method based on multi-band analysis and chaotic analysis is proposed. A gammatone filter bank was used to simulate the auditory characteristics of the human ear and decompose the signal into different frequency bands. Based on the observation that turbulence noise caused by chaos in the voice worsens spectral convergence, a short-time Fourier transform was applied to each frequency band of the voice signal, the gammatone short-time spectral self-similarity (GSTS) feature was extracted, and the degree of chaos of each band signal was analyzed to distinguish normal from pathological voices. The experimental results showed that, combined with traditional machine learning methods, GSTS reached an accuracy of 99.50% on the pathological voice database of the Massachusetts Eye and Ear Infirmary (MEEI), an improvement of 3.46% over the best existing features. Moreover, the extraction time of GSTS was far less than that of traditional nonlinear features. These results show that GSTS offers higher extraction efficiency and better recognition performance than existing features.
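The core intuition, that a stable voice yields near-identical adjacent short-time spectra while chaotic turbulence does not, can be sketched without the gammatone front end. The following toy code measures self-similarity as the mean cosine similarity between adjacent DFT magnitude spectra; the frame length, similarity measure and omission of the filter bank are all assumptions for illustration, not the GSTS definition.

```python
# Hedged sketch of short-time spectral self-similarity. GSTS operates on
# gammatone sub-bands; here a per-frame DFT magnitude spectrum and the
# mean cosine similarity of adjacent frames stand in for the full pipeline.
import cmath, math

def dft_mag(frame):
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def spectral_self_similarity(signal, frame_len=64):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    specs = [dft_mag(f) for f in frames]
    sims = [cosine(a, b) for a, b in zip(specs, specs[1:])]
    return sum(sims) / len(sims)

# A steady sine gives near-identical adjacent spectra (similarity ≈ 1);
# chaotic or turbulent voice frames would lower the score.
tone = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
print(round(spectral_self_similarity(tone), 3))  # → 1.0
```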
Leukemia is a common and dangerous blood disease whose early diagnosis and treatment are very important. At present, the diagnosis of leukemia relies heavily on morphological examination of blood cell images by pathologists, which is tedious and time-consuming. Meanwhile, the diagnostic results are highly subjective, which may lead to misdiagnosis and missed diagnosis. To address these problems, we proposed an improved Vision Transformer model for blood cell recognition. First, a Faster R-CNN network was used to locate and extract individual blood cell slices from the original images. Then, each single-cell image was split into multiple image patches that were fed into the encoder layer for feature extraction. Based on the Transformer's self-attention mechanism, we proposed a sparse attention module that focuses on the discriminative parts of blood cell images and improves the model's fine-grained feature representation ability. Finally, a contrastive loss function was adopted to further increase the inter-class difference and intra-class consistency of the extracted features. Experimental results showed that the proposed model outperformed the other approaches and significantly improved the accuracy to 91.96% on the Munich single-cell morphological dataset of leukocytes, which is expected to provide a reference for physicians' clinical diagnosis.
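The abstract does not specify how the sparse attention module sparsifies the attention map; one common rule is top-k masking, where each query keeps only its k strongest keys and renormalizes. The sketch below shows that rule on a single attention row and is an assumption for illustration, not the authors' module.

```python
# Hedged sketch of one sparse attention step: keep only the top-k scores
# per query and softmax over the survivors, so attention concentrates on
# the most discriminative patches. Top-k masking is an assumed mechanism.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sparse_attention_row(scores, k):
    """Zero all but the k largest scores, renormalizing the rest."""
    keep = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    weights = softmax([scores[i] for i in keep])
    row = [0.0] * len(scores)
    for i, w in zip(keep, weights):
        row[i] = w
    return row

# Four patch scores for one query; only the two strongest patches survive.
row = sparse_attention_row([2.0, 0.1, 1.5, -1.0], k=2)
```

The surviving weights still sum to 1, so the step drops into a standard attention layer without changing the rest of the computation.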
The causes of mental disorders are complex, and early recognition and early intervention are recognized as an effective way to avoid irreversible brain damage over time. Existing computer-aided recognition methods mostly focus on multimodal data fusion while ignoring the problem of asynchronous multimodal data acquisition. For this reason, this paper proposes a mental disorder recognition framework based on the visibility graph (VG) to solve the asynchronous acquisition problem. First, time-series electroencephalogram (EEG) data are mapped to a spatial visibility graph. Then, an improved autoregressive model is used to accurately calculate the temporal EEG features, and spatial metric features are reasonably selected by analyzing the spatiotemporal mapping relationship. Finally, on the basis of spatiotemporal information complementarity, different contribution coefficients are assigned to each spatiotemporal feature to exploit its full potential in decision making. Controlled experiments show that the proposed method effectively improves the recognition accuracy of mental disorders: taking Alzheimer's disease and depression as examples, the highest recognition rates reach 93.73% and 90.35%, respectively. In summary, this work provides an effective computer-aided tool for rapid clinical diagnosis of mental disorders.
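The time-series-to-graph mapping above can be sketched with the standard natural visibility graph rule: two samples are connected if the straight line between them passes strictly above every intermediate sample. This is the generic VG construction, shown here on a toy series; the paper's spatial metric features and contribution weighting are not reproduced.

```python
# Hedged sketch of the natural visibility graph (VG) mapping that turns a
# time series into a graph: samples i and j are linked iff every sample
# between them lies strictly below the line joining (i, x_i) and (j, x_j).

def visibility_edges(series):
    n = len(series)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            line = lambda k: series[i] + (series[j] - series[i]) * (k - i) / (j - i)
            if all(series[k] < line(k) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

# Sample 1 is "visible" from both neighbors, and 0 sees 2 over the dip.
print(visibility_edges([3.0, 1.0, 2.0]))  # → [(0, 1), (0, 2), (1, 2)]
```

Graph-theoretic metrics (degree distribution, clustering, etc.) computed on such edge lists then serve as the spatial features that complement the temporal autoregressive features.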