Spike and local field potential (LFP) signals are two of the most important candidate signals for neural decoding. Numerous studies have examined their decoding performance in mammals, but their decoding performance in birds remains unclear. We analyzed the decoding performance of both signals recorded from the nidopallium caudolaterale of six pigeons during a goal-directed decision-making task, using a decoding algorithm that combines leave-one-out cross-validation with k-nearest neighbor classification (LOO-kNN). We also studied the influence of several parameters on decoding performance: the number of channels, the position and size of the decoding window, and the nearest-neighbor value k. The results show that both signals can effectively decode the movement intention of pigeons during this task, but the decoding performance of the LFP signal is higher than that of the spike signal and is less affected by the number of channels. The best decoding window lies in the second half of the goal-directed decision-making process, and the optimal decoding window for the LFP signal (0.3 s) is shorter than that for the spike signal (1 s). For the LOO-kNN algorithm, decoding accuracy increases as the value of k decreases. These results help to elucidate the brain's neural information processing mechanisms and also provide a reference for brain-computer interface research.
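The LOO-kNN scheme named above can be sketched in a few lines: each trial is held out in turn and classified by a majority vote of its k nearest neighbors among the remaining trials. This is a minimal illustration, assuming generic feature vectors (e.g., windowed firing rates or LFP band power) with integer class labels; the toy data and variable names are hypothetical, not the paper's recordings.

```python
# Minimal sketch of leave-one-out k-nearest-neighbor (LOO-kNN) decoding.
# Feature vectors and labels here are illustrative stand-ins for real
# spike/LFP features extracted from a decoding window.
from collections import Counter
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def loo_knn_accuracy(features, labels, k=1):
    """Hold out each trial in turn, classify it by majority vote of its
    k nearest neighbors among the remaining trials, return accuracy."""
    correct = 0
    n = len(features)
    for i in range(n):
        # distances from the held-out trial to every other trial
        dists = sorted(
            (euclidean(features[i], features[j]), labels[j])
            for j in range(n) if j != i
        )
        votes = Counter(lab for _, lab in dists[:k])
        if votes.most_common(1)[0][0] == labels[i]:
            correct += 1
    return correct / n

# Toy example: two well-separated classes (e.g., left vs. right intention)
X = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [1.2, 0.9], [0.9, 1.0]]
y = [0, 0, 0, 1, 1, 1]
print(loo_knn_accuracy(X, y, k=1))  # → 1.0
```

Because every trial serves once as the test set, LOO makes full use of the limited number of behavioral trials typically available per session, which is why it pairs naturally with kNN here.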
Because they provide a more natural and flexible manner of control, brain-computer interface systems based on motor imagery electroencephalogram (EEG) have been widely used in human-machine interaction. However, due to the low signal-to-noise ratio and poor spatial resolution of EEG signals, decoding accuracy is relatively low. To solve this problem, a novel convolutional neural network based on temporal-spatial feature learning (TSCNN) was proposed for motor imagery EEG decoding. First, for EEG signals preprocessed by band-pass filtering, a temporal-wise convolution layer and a spatial-wise convolution layer were designed to construct the temporal-spatial features of motor imagery EEG. Then, a two-layer two-dimensional convolutional structure was adopted to learn abstract features from the raw temporal-spatial features. Finally, a softmax layer combined with a fully connected layer performed the decoding task on the extracted abstract features. On an open dataset, the proposed method achieved an average decoding accuracy of 80.09%, approximately 13.75% and 10.99% higher than the state-of-the-art common spatial pattern (CSP) + support vector machine (SVM) and filter bank CSP (FBCSP) + SVM recognition methods, respectively. This demonstrates that the proposed method can significantly improve the reliability of motor imagery EEG decoding.
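The temporal-then-spatial convolution idea described above can be made concrete with simple shape bookkeeping: a 1×k kernel slides along time only, and a C×1 kernel then mixes all channels at each time step. This is a hedged sketch with hypothetical input dimensions and kernel sizes chosen for illustration; they are not the paper's actual TSCNN hyperparameters.

```python
# Back-of-the-envelope shape tracking for a TSCNN-style pipeline.
# Input: C EEG channels x T time samples; all kernel sizes, strides, and
# filter counts below are illustrative assumptions.

def conv2d_out(h, w, kh, kw, stride=1):
    """Output height/width of a valid (no-padding) 2-D convolution."""
    return (h - kh) // stride + 1, (w - kw) // stride + 1

C, T = 22, 1000              # e.g., 22 channels, 4 s at 250 Hz
h, w = C, T

# temporal-wise convolution: 1 x 25 kernel slides along time only
h, w = conv2d_out(h, w, 1, 25)
# spatial-wise convolution: C x 1 kernel mixes all channels per time step
h, w = conv2d_out(h, w, C, 1)
print((h, w))  # → (1, 976): one "virtual channel", time axis preserved

# two further 2-D convolution layers learn abstract features
h, w = conv2d_out(h, w, 1, 10, stride=3)
h, w = conv2d_out(h, w, 1, 10, stride=3)

# flattened features feed a fully connected layer + softmax
n_filters, n_classes = 40, 4
print(h * w * n_filters, "features ->", n_classes, "class scores")
```

The design choice mirrors CSP's logic in network form: the temporal layer acts as a learned band-pass filter bank, and the spatial layer as a learned spatial filter across electrodes.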
The electroencephalogram (EEG) signal is the key signal carrier of brain-computer interface (BCI) systems. EEG data collected with a whole-brain electrode arrangement is conducive to richer information representation, but a personalized electrode layout can shorten the calibration time of a BCI while maintaining decoding accuracy, and has therefore become an important research direction. This paper reviews EEG channel selection methods from recent years, comparatively analyzes the combined effects of different channel selection methods and different classification algorithms, identifies the channel combinations commonly used in motor imagery, P300, and other BCI paradigms, and discusses the application scenarios of channel selection methods in different paradigms, in order to provide stronger support for more accurate and portable BCI systems.
In the field of brain-computer interfaces (BCIs) based on functional near-infrared spectroscopy (fNIRS), traditional subject-specific decoding methods suffer from long calibration times and low cross-subject generalizability, which restricts the adoption of BCI systems in daily life and the clinic. To address this dilemma, this study proposes a novel deep transfer learning approach, referred to as TL-rIRN, that combines a revised inception-residual network (rIRN) model with a model-based transfer learning (TL) strategy. Cross-subject recognition experiments on mental arithmetic (MA) and mental singing (MS) tasks were performed to validate the effectiveness and superiority of the TL-rIRN approach. The results show that TL-rIRN significantly shortens the calibration time, reduces the training time of the target model and the consumption of computational resources, and dramatically enhances cross-subject decoding performance compared with subject-specific decoding methods and other deep transfer learning methods. In summary, this study provides a basis for selecting cross-subject, cross-task, and real-time decoding algorithms for fNIRS-BCI systems, with potential applications in constructing a convenient and universal BCI system.
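The model-based TL strategy named above follows a standard pattern: pretrain on source subjects, then freeze most parameters and fine-tune only a small "head" on a few trials from the target subject. The sketch below illustrates that pattern with a deliberately tiny linear model; it is a conceptual stand-in for the rIRN network, and all data and names are hypothetical.

```python
# Conceptual sketch of model-based transfer learning: a model pretrained
# on source subjects is mostly frozen, and only the final parameters
# (the "head") adapt to the target subject's few calibration trials.
# The 1-D perceptron here is an illustrative stand-in for rIRN.

def train_linear(xs, ys, w, b, lr=0.1, epochs=200, freeze_w=False):
    """Perceptron-style updates; with freeze_w=True only the bias b
    (the 'head') is updated, mimicking target-subject fine-tuning."""
    for x, y in zip(xs * epochs, ys * epochs):
        pred = 1 if w * x + b > 0 else 0
        err = y - pred
        if not freeze_w:
            w += lr * err * x     # feature weights: trained on source only
        b += lr * err             # head: also adapted to the target
    return w, b

# source subjects: class boundary near x = 0
src_x = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
src_y = [0, 0, 0, 1, 1, 1]
w, b = train_linear(src_x, src_y, w=0.0, b=0.0)

# target subject: same task structure, boundary shifted to x = 1
tgt_x = [-1.0, 0.0, 0.5, 1.5, 2.0, 2.5]
tgt_y = [0, 0, 0, 1, 1, 1]
w, b = train_linear(tgt_x, tgt_y, w, b, freeze_w=True)  # fine-tune head only

acc = sum((1 if w * x + b > 0 else 0) == y
          for x, y in zip(tgt_x, tgt_y)) / len(tgt_y)
print(acc)  # → 1.0
```

Because only the head is retrained, both the calibration data requirement and the fine-tuning compute shrink, which is the mechanism behind the shortened calibration time reported in the abstract.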