Kinematic model parameter deviation is the main factor affecting the positioning accuracy of neurosurgical robots. To obtain more realistic kinematic model parameters, this paper proposes an automatic parameter identification and accuracy evaluation method. First, an identification equation containing all robot kinematic parameters was established. Second, a multiple-pivot strategy was proposed to determine the relationship between the end-effector and the tracking marker. Then, the relative distance error and the inverse kinematics consistency error were designed to evaluate the identification accuracy. Finally, an automatic robot parameter identification and accuracy evaluation system was developed. We tested our method on both laboratory prototypes and real neurosurgical robots. The results show that this method can stably and quickly identify and evaluate the kinematic model parameters of neurosurgical robots. Using the identified parameters to control the robot reduced the relative distance error by 33.96% and the inverse kinematics consistency error by 67.30%.
Brain-computer interfaces (BCIs) have great potential to replace lost upper limb function. Thus, there has been great interest in the development of BCI-controlled robotic arms. However, few studies have attempted to use noninvasive electroencephalography (EEG)-based BCIs to achieve high-level control of a robotic arm. In this paper, a high-level control architecture combining an augmented reality (AR) BCI and computer vision was designed to control a robotic arm performing a pick-and-place task. A steady-state visual evoked potential (SSVEP)-based BCI paradigm was adopted to realize the BCI system. Microsoft's HoloLens was used to build the AR environment and served as the visual stimulator for eliciting SSVEPs. The proposed AR-BCI was used to select the objects to be operated by the robotic arm, while computer vision provided the location, color, and shape information of the objects. Based on the outputs of the AR-BCI and computer vision, the robotic arm could autonomously pick up an object and place it at a specified location. Online results from 11 healthy subjects showed that the average classification accuracy of the proposed system was 91.41%. These results verify the feasibility of combining AR, BCI, and computer vision to control a robotic arm, and are expected to provide new ideas for innovative robotic arm control approaches.