Artificial prostheses are important tools that help amputees fully or partially regain the functions of able-bodied human limbs. Compared with traditional prostheses, which are purely cosmetic or provide only a feedforward control channel, perception and feedback functions are essential guarantees of a prosthesis's normal use and the user's safety; the perceived information includes position, force, texture, roughness, temperature, and so on. This paper summarizes recent developments and the current status of perception and feedback technology for artificial prostheses from two aspects: how perception signals are acquired and how they are fed back to the user. For signal acquisition, the sensors commonly used to collect perceptual information and their current applications in prostheses are reviewed. For signal feedback, the methods are summarized and analyzed in terms of force-feedback stimulation, invasive/non-invasive electrical stimulation, and vibration stimulation. Finally, open problems in the perception and feedback technology of artificial prostheses are discussed, and development trends are prospected.
With the advantage of providing a more natural and flexible control manner, brain-computer interface systems based on motor imagery electroencephalogram (EEG) have been widely used in the field of human-machine interaction. However, due to the low signal-to-noise ratio and poor spatial resolution of EEG signals, decoding accuracy remains relatively low. To address this problem, a novel convolutional neural network based on temporal-spatial feature learning (TSCNN) was proposed for motor imagery EEG decoding. First, for EEG signals preprocessed by band-pass filtering, a temporal-wise convolution layer and a spatial-wise convolution layer were designed to construct the temporal-spatial features of motor imagery EEG. Then, a two-layer two-dimensional convolutional structure was adopted to learn abstract features from these raw temporal-spatial features. Finally, a fully connected layer followed by a softmax layer performed the decoding task on the extracted abstract features. On an open dataset, the proposed method achieved an average decoding accuracy of 80.09%, approximately 13.75% and 10.99% higher than the state-of-the-art common spatial pattern (CSP) + support vector machine (SVM) and filter bank CSP (FBCSP) + SVM recognition methods, respectively. This demonstrates that the proposed method can significantly improve the reliability of motor imagery EEG decoding.
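The layer sequence described in the abstract (temporal-wise convolution, spatial-wise convolution, a two-layer 2-D convolutional structure, then a fully connected layer with softmax) can be sketched as a shape walk-through. The abstract does not specify kernel sizes, filter counts, the electrode/sample dimensions, or the number of classes, so every numeric value below is an illustrative assumption, not the paper's configuration:

```python
# Illustrative shape walk-through of a TSCNN-style pipeline.
# All sizes (kernels, channels, samples, classes) are assumed values
# chosen for illustration; the abstract does not state them.

def conv2d_out(h, w, kh, kw, sh=1, sw=1):
    """Output height/width of a 'valid'-mode 2-D convolution."""
    return (h - kh) // sh + 1, (w - kw) // sw + 1

C, T = 22, 1000                        # assumed EEG channels x time samples

# Temporal-wise convolution: a 1 x k_t kernel slides along time only.
h, w = conv2d_out(C, T, 1, 25)         # -> (22, 976)

# Spatial-wise convolution: a C x 1 kernel collapses the electrode axis.
h, w = conv2d_out(h, w, h, 1)          # -> (1, 976)

# Two-layer 2-D convolutional structure for abstract feature learning
# (strided along time to shrink the feature map).
h, w = conv2d_out(h, w, 1, 11, sw=3)   # -> (1, 322)
h, w = conv2d_out(h, w, 1, 11, sw=3)   # -> (1, 104)

n_filters, n_classes = 32, 4           # assumed filter and class counts
fc_inputs = n_filters * h * w          # features flattened into the
                                       # fully connected + softmax head
print(fc_inputs)                       # 3328
```

The temporal-then-spatial factorization mirrors the abstract's motivation: the 1 x k_t kernels capture band-limited temporal dynamics per electrode, while the C x 1 kernel learns a spatial weighting across electrodes, playing a role analogous to the CSP spatial filters used by the baseline methods.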