• School of Information Science and Engineering, Shenyang University of Technology, Shenyang 110870, P. R. China;
SHI Wentao, Email: 1737638110@qq.com

Recent studies have introduced attention models for medical visual question answering (MVQA). In medical applications, modeling not only "visual attention" but also "question attention" is crucial. To facilitate bidirectional reasoning over the attention to medical images and questions, a new MVQA architecture, named MCAN, was proposed. The architecture incorporates a cross-modal co-attention network, FCAF, which identifies key words in questions and principal regions in images. Through a meta-learning channel attention module (MLCA), weights are adaptively assigned to each word and region, reflecting the model's focus on specific words and regions during reasoning. In addition, this study designed a medical domain-specific word embedding model, Med-GloVe, to further improve the model's accuracy and practical value. Experimental results showed that the proposed MCAN improved accuracy by 7.7% on free-form questions of the Path-VQA dataset and by 4.4% on closed-form questions of the VQA-RAD dataset, effectively improving the accuracy of medical visual question answering.
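The abstract does not give the internal details of the FCAF module, but the general idea of cross-modal co-attention — scoring every word against every image region, then attending in both directions — can be sketched as follows. All names and dimensions here are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(q_feats, v_feats):
    """Generic cross-modal co-attention sketch.

    q_feats: (n_words, d)  question word features
    v_feats: (n_regions, d) image region features
    Returns attended region features per word and attended
    word features per region.
    """
    d = q_feats.shape[1]
    # Affinity between every (word, region) pair, scaled by sqrt(d).
    affinity = q_feats @ v_feats.T / np.sqrt(d)      # (n_words, n_regions)
    # Each word attends over regions; each region attends over words.
    word_to_region = softmax(affinity, axis=1)
    region_to_word = softmax(affinity.T, axis=1)
    attended_regions = word_to_region @ v_feats      # (n_words, d)
    attended_words = region_to_word @ q_feats        # (n_regions, d)
    return attended_regions, attended_words

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 8))   # 5 question words, feature dim 8
V = rng.standard_normal((3, 8))   # 3 image regions, feature dim 8
attended_regions, attended_words = co_attention(Q, V)
```

In a full model such as MCAN, the resulting attention weights would be the ones described in the abstract as the model's "focus" on specific words and regions during reasoning.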

Citation: CUI Wencheng, SHI Wentao, SHAO Hong. A medical visual question answering approach based on co-attention networks. Journal of Biomedical Engineering, 2024, 41(3): 560-568, 576. doi: 10.7507/1001-5515.202307057
