• 1. School of Mechanical and Electrical Engineering and Automation, Shanghai University, Shanghai 200444, P. R. China;
  • 2. Shanghai Sensing Technology Company, Shanghai 201900, P. R. China;
  • 3. Wenzhou People’s Hospital of Zhejiang Province, Wenzhou, Zhejiang 325041, P. R. China;
YANG Banghua, Email: yangbanghua@shu.edu.cn

This study investigates a brain-computer interface (BCI) system based on an augmented reality (AR) environment and steady-state visual evoked potentials (SSVEP). The system was designed to facilitate the selection of real-world objects through visual gaze in everyday scenarios. By integrating object detection and AR technology, the system augmented real objects with visual enhancements, presenting users with visual stimuli that induced corresponding brain signals. SSVEP decoding was then used to interpret these brain signals and identify the objects on which users focused. In addition, an adaptive dynamic time-window-based filter bank canonical correlation analysis was employed to rapidly parse the subjects' brain signals. Experimental results indicated that the system could effectively recognize SSVEP signals, achieving an average accuracy of 90.6% in visual target identification. This system extends the application of SSVEP signals to real-life scenarios, demonstrating feasibility and efficacy in assisting individuals with mobility impairments and physical disabilities in object selection tasks.
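For readers unfamiliar with the decoding step, the sketch below illustrates the general filter bank canonical correlation analysis (FBCCA) approach to SSVEP frequency recognition mentioned in the abstract. It is a minimal illustration only, not the authors' implementation: the adaptive dynamic time-window logic is omitted, and the sampling rate, stimulation frequencies, sub-band edges, and weighting scheme are assumed values chosen for the example.

```python
# Minimal FBCCA sketch for SSVEP frequency recognition (illustrative assumptions,
# not the authors' code). Input: one EEG segment of shape (n_channels, n_samples).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

FS = 250                              # sampling rate in Hz (assumed)
STIM_FREQS = [8.0, 9.0, 10.0, 11.0]   # candidate stimulation frequencies (assumed)
N_BANDS = 5                           # number of filter-bank sub-bands (assumed)
N_HARMONICS = 3                       # harmonics used in the reference templates

def make_reference(freq, n_samples, fs=FS, n_harmonics=N_HARMONICS):
    """Sine/cosine reference signals for one stimulation frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.array(refs)             # shape: (2 * n_harmonics, n_samples)

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter along the sample axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def fbcca_classify(eeg):
    """Return the index of the stimulation frequency with the largest
    weighted sum of squared canonical correlations across sub-bands."""
    n_samples = eeg.shape[-1]
    # Common FBCCA sub-band weighting: w_m = m^(-1.25) + 0.25 (assumed choice).
    weights = np.array([(m + 1) ** -1.25 + 0.25 for m in range(N_BANDS)])
    scores = []
    for freq in STIM_FREQS:
        ref = make_reference(freq, n_samples)
        rho2 = []
        for m in range(N_BANDS):
            # Each sub-band discards lower-frequency content progressively.
            sub = bandpass(eeg, 8 * (m + 1) - 2, 90)
            cca = CCA(n_components=1)
            xs, ys = cca.fit_transform(sub.T, ref.T)
            r = np.corrcoef(xs[:, 0], ys[:, 0])[0, 1]
            rho2.append(r ** 2)
        scores.append(np.dot(weights, rho2))
    return int(np.argmax(scores))

# Usage: predicted = fbcca_classify(eeg_segment)  # eeg_segment: (n_channels, n_samples)
```

In an adaptive dynamic time-window scheme, a decision rule of this kind would typically be re-evaluated on progressively longer data windows and a result emitted as soon as a confidence criterion is met, which shortens the average selection time; the criterion itself is not shown here.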

Citation: GUO Meng'ao, YANG Banghua, GENG Yiting, JIE Rongxin, ZHANG Yonghuai, ZHENG Yanyan. Visual object detection system based on augmented reality and steady-state visual evoked potential. Journal of Biomedical Engineering, 2024, 41(4): 684-691. doi: 10.7507/1001-5515.202403041
