Brain control is an emerging control method. Traditional brain-controlled robots are mainly used to control a single robot to accomplish a specific task; the brain-controlled multi-robot cooperation (MRC) task, however, is a new topic that remains to be studied. This paper presents an experimental study that received the "Innovation Creative Award" in the brain-computer interface (BCI) brain-controlled robot contest at the World Robot Contest. Two effective brain switches were set up: a total control brain switch and a transfer switch, and a BCI based on steady-state visual evoked potentials (SSVEP) was adopted to navigate a humanoid robot and a mechanical arm to complete a cooperation task. Control tests with 10 subjects showed that a well-performing SSVEP-BCI can be used to achieve the MRC task when the brain switches are set up appropriately. This study is expected to provide inspiration for future practical brain-controlled MRC systems.
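The two-switch scheme described above can be sketched as a small state machine: the total control switch gates all commands, and the transfer switch hands control between the two robots. The class and command names below are illustrative assumptions; the abstract does not specify an API.

```python
# Hypothetical sketch of the two brain switches described above: a total
# control switch toggles brain control on/off, and a transfer switch
# hands control between the humanoid robot and the mechanical arm.

class MRCController:
    """Routes decoded SSVEP commands to one of two robots."""

    def __init__(self):
        self.active = False            # state of the total control switch
        self.target = "humanoid"       # robot currently under control

    def on_command(self, cmd: str) -> str:
        if cmd == "TOTAL_SWITCH":      # toggle overall brain control
            self.active = not self.active
            return "control " + ("enabled" if self.active else "disabled")
        if not self.active:
            return "ignored (control disabled)"
        if cmd == "TRANSFER_SWITCH":   # hand over to the other robot
            self.target = "arm" if self.target == "humanoid" else "humanoid"
            return f"control transferred to {self.target}"
        return f"{self.target} executes {cmd}"
```

With this structure, spurious SSVEP detections are ignored whenever the total control switch is off, which is one plausible reason for introducing such a switch in a safety-critical cooperation task.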
Attention concentrates our mental resources on processing certain objects of interest; it is an important mental behavior and cognitive process. Recognizing attentional states has great significance for improving human performance and reducing errors. However, there is still no direct and standardized way to monitor a person's attentional states. Based on the fact that visual attention can modulate the steady-state visual evoked potential (SSVEP), we designed a go/no-go experimental paradigm with 10 Hz steady-state visual stimulation in the background to investigate the separability of SSVEP features modulated by different visual attentional states. The experiment recorded the EEG signals of 15 postgraduate volunteers under high and low visual attentional states, which were determined by behavioral responses. We analyzed the differences in SSVEP signals between the high and low attentional levels and applied classification algorithms to recognize these differences. Results showed that the discriminant canonical pattern matching (DCPM) algorithm performed better than the linear discriminant analysis (LDA) and canonical correlation analysis (CCA) algorithms, achieving up to 76% accuracy. Our results show that the SSVEP features modulated by different visual attentional states are separable, which provides a new way to monitor visual attentional states.
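CCA, one of the baseline algorithms compared above, is commonly used in SSVEP analysis by correlating multichannel EEG with sine/cosine reference signals at the stimulus frequency. The sketch below shows this standard technique on synthetic data; the sampling rate, channel count, and harmonic count are illustrative assumptions, not parameters from the study.

```python
# Minimal sketch of CCA-based SSVEP frequency detection (synthetic data).
import numpy as np

def cca_max_corr(X: np.ndarray, Y: np.ndarray) -> float:
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)            # orthonormal basis of EEG signals
    Qy, _ = np.linalg.qr(Y)            # orthonormal basis of references
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(s[0])                 # top singular value = max correlation

def ssvep_reference(freq: float, fs: float, n_samples: int, harmonics: int = 2):
    """Sine/cosine references at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

# Synthetic example: a 10 Hz SSVEP buried in noise on 4 channels at 250 Hz.
fs, n = 250.0, 500
rng = np.random.default_rng(0)
t = np.arange(n) / fs
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(n)
                       for _ in range(4)])
r10 = cca_max_corr(eeg, ssvep_reference(10.0, fs, n))
r13 = cca_max_corr(eeg, ssvep_reference(13.0, fs, n))
# The correlation at the true 10 Hz stimulus frequency should dominate.
```

In attentional-state classification, such correlation features (rather than the detected frequency itself) would serve as inputs to a classifier, since the stimulation frequency is fixed at 10 Hz and only its attention-modulated amplitude varies.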
The brain-controlled wheelchair (BCW) is one of the important applications of brain-computer interface (BCI) technology, and present research shows that simulation control training is of great significance for its application. To improve users' BCW control ability and promote the application of BCWs under safe conditions, this paper builds an indoor simulation training system for BCWs based on steady-state visual evoked potentials. The system includes visual stimulus paradigm design and implementation, electroencephalogram acquisition and processing, indoor simulation environment modeling, path planning, and simulated wheelchair control. To test the performance of the system, a training experiment involving three kinds of indoor path-control tasks was designed, and 10 subjects were recruited for the 5-day training experiment. Comparing the results before and after the training experiment, the average number of commands in Task 1, Task 2, and Task 3 decreased by 29.5%, 21.4%, and 25.4%, respectively (P < 0.001), and the average number of commands used by the subjects to complete all tasks decreased by 25.4% (P < 0.001). The experimental results show that training subjects with the indoor simulation training system built in this paper can improve their proficiency and efficiency in BCW control to a certain extent, which verifies the practicability of the system and provides an effective auxiliary method to promote the indoor application of BCWs.
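The training effect above is quantified as a relative reduction in the number of commands needed per task. A minimal sketch of that computation, with hypothetical command counts (the abstract reports only the resulting percentages, not the raw numbers):

```python
# Hypothetical before/after command counts; the percentage-decrease
# formula matches the comparison reported in the abstract.

def percent_decrease(before: float, after: float) -> float:
    """Relative reduction in the number of commands, in percent."""
    return (before - after) / before * 100.0

# e.g. a subject who needed 40 commands before training and 30 after
# shows a 25% reduction:
reduction = percent_decrease(40, 30)
```

Fewer commands per task implies that subjects issued fewer erroneous or redundant SSVEP selections after training, which is why command count serves as the proficiency metric here.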
Brain-computer interfaces (BCIs) have great potential to replace lost upper-limb function, so there has been great interest in developing BCI-controlled robotic arms. However, few studies have attempted to use a noninvasive electroencephalography (EEG)-based BCI to achieve high-level control of a robotic arm. In this paper, a high-level control architecture combining an augmented reality (AR) BCI and computer vision was designed to control a robotic arm performing a pick-and-place task. A steady-state visual evoked potential (SSVEP)-based BCI paradigm was adopted to realize the BCI system. Microsoft's HoloLens was used to build the AR environment and served as the visual stimulator for eliciting SSVEPs. The proposed AR-BCI was used to select the objects to be operated by the robotic arm, while computer vision provided the location, color, and shape information of the objects. According to the outputs of the AR-BCI and computer vision, the robotic arm could autonomously pick up an object and place it at a specific location. Online results from 11 healthy subjects showed that the average classification accuracy of the proposed system was 91.41%. These results verify the feasibility of combining AR, BCI, and computer vision to control a robotic arm and are expected to provide new ideas for innovative robotic-arm control approaches.
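The high-level control flow described above can be sketched as follows: the AR-BCI supplies only the identity of the selected object, computer vision supplies its pose and attributes, and the arm expands this into an autonomous pick-and-place sequence. All function and field names here are hypothetical stand-ins for the paper's actual components.

```python
# Hypothetical sketch of the AR-BCI + computer-vision control architecture.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    name: str
    color: str
    shape: str
    position: tuple  # (x, y, z) in the arm workspace, from computer vision

def pick_and_place(selected: str, scene: list, drop_off: tuple) -> list:
    """Expand an AR-BCI object selection into an autonomous action plan."""
    target = next(o for o in scene if o.name == selected)
    return [
        ("move_to", target.position),   # approach the object's location
        ("grasp", target.shape),        # grasp strategy may depend on shape
        ("move_to", drop_off),          # carry it to the drop-off location
        ("release", None),
    ]

# Example scene as computer vision might report it:
scene = [
    DetectedObject("cube", "red", "box", (0.30, 0.10, 0.0)),
    DetectedObject("ball", "blue", "sphere", (0.20, -0.10, 0.0)),
]
plan = pick_and_place("ball", scene, drop_off=(0.0, 0.40, 0.0))
```

The design point this illustrates is the division of labor: the low-bandwidth BCI channel carries only a discrete selection, while the vision system and motion planner handle the continuous details, which is what makes "high-level" noninvasive control feasible.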