Organoids are in vitro models that can simulate the complex structure and function of tissues in vivo. Organoid image analysis has enabled tasks such as classification, screening, and trajectory recognition, but problems remain, including low accuracy in classification, recognition, and cell tracking. Fusing deep learning algorithms with organoid image analysis is currently the most advanced approach to organoid image analysis. This paper surveys and organizes deep perception technology for organoid images: it introduces the mechanism of organoid culture and the concepts underlying its use in deep perception, reviews key progress in four classes of deep perception algorithms for organoid images (classification and recognition, pattern detection, image segmentation, and dynamic tracking), and compares the performance advantages of different deep models. In addition, this paper summarizes deep perception technology for images of various organoids in terms of deep perceptual feature learning, model generalization, and multiple evaluation metrics, and discusses future trends of deep learning-based organoid methods, so as to promote the application of deep perception technology to organoid images. It provides an important reference for academic research and practical application in this field.
To address the loss of edge details and the mis-segmentation of lesion areas in colon polyp image segmentation, caused by spatial inductive bias and the lack of an effective representation of global contextual information, a colon polyp segmentation method combining a Transformer with cross-level phase awareness is proposed. First, starting from the perspective of global feature transformation, the method used a hierarchical Transformer encoder to extract semantic information and spatial details of lesion areas layer by layer. Second, a phase-aware fusion module (PAFM) was designed to capture cross-level interaction information and effectively aggregate multi-scale contextual information. Third, a position-oriented functional module (POF) was designed to effectively integrate global and local feature information, fill semantic gaps, and suppress background noise. Fourth, a residual axial reverse attention module (RA-IA) was used to improve the network's ability to recognize edge pixels. The proposed method was experimentally tested on the public datasets CVC-ClinicDB, Kvasir, CVC-ColonDB, and ETIS, achieving Dice similarity coefficients of 94.04%, 92.04%, 80.78%, and 76.80%, and mean intersection over union of 89.31%, 86.81%, 73.55%, and 69.10%, respectively. The experimental results show that the proposed method can effectively segment colon polyp images, providing a new approach for the diagnosis of colon polyps.
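The Dice similarity coefficient and intersection over union reported above are standard overlap metrics for binary segmentation masks. A minimal NumPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IoU = |A∩B| / |A∪B| for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# A perfect prediction gives Dice = IoU = 1; disjoint masks give ~0.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers usually report both for the same ranking of methods.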
Objective To investigate the differences in the self-perceived level of asthma control and the factors affecting self-perception ability in patients with bronchial asthma. Methods A total of 322 patients diagnosed with bronchial asthma at the First Affiliated Hospital of Harbin Medical University from March 2013 to February 2015 were recruited. Clinical data were collected, including the demographic characteristics of the patients, the Asthma Control Test (ACT), and the results of routine blood tests and pulmonary function tests performed on the same day the patients completed the ACT. The patients were then followed up at 1, 3, 6, and 12 months, completed the ACT again, and underwent routine blood tests and lung function tests. In addition, health education about asthma was offered regularly during these visits. Results A total of 226 patients met the inclusion criteria. There was a significant difference between the self-perceived control level and the actual symptom control level in patients with asthma (P<0.05). Patients aged 65 years or older perceived their asthma symptoms rather poorly (P<0.001). Patients with senior high school or higher education had a higher ability of self-perceived asthma control (P=0.005). Patients with comorbid allergic rhinitis were less likely to correctly perceive their illness than those without allergic rhinitis, and the difference was statistically significant (P=0.001). In addition, among those diagnosed with allergic rhinitis combined with bronchial asthma, regular treatment also made a difference: longer treatment of rhinitis was associated with a higher ability of self-perceived asthma control (P<0.001). Health education played a constructive role in helping patients correctly perceive their illness (P<0.001).
There was no correlation between the absolute value of peripheral blood eosinophils and the accuracy of self-perceived asthma control. Nevertheless, there was a noticeable correlation between the absolute value of peripheral blood eosinophils and acute attacks of bronchial asthma (P=0.003), a meaningful finding for assessing the risk of future acute attacks of bronchial asthma (P=0.469). Conclusions There is a significant difference between the self-perceived control level and the symptom control level in patients with asthma. Older age, lower educational level, comorbid allergic rhinitis, and lack of health education are associated with lower accuracy of the self-perceived control level. The absolute value of peripheral blood eosinophils in patients with asthma can be used to assess the risk of future acute asthma attacks, but has no significant correlation with the accuracy of the self-perceived control level.
Objective To explore the impact of hospital staff's risk perception on their emergency responses, and to provide a reference for future responses to public health emergencies. Methods Based on participatory observation and in-depth interviews, staff of the First Affiliated Hospital of Guangzhou Medical University who participated in the prevention and control of coronavirus disease 2019 from April to September 2020 were selected, and information on their risk perception and emergency responses was collected. Results A total of 61 hospital staff were included, holding positions in the hospital leading group, hospital office, medical department, logistics support department, and outpatient isolation area. The interview results showed that both individual and organizational factors affected the risk perception of hospital staff, and thereby their emergency responses, mainly in psychological and behavioral respects. Psychological reactions were manifested as greater confidence, sensitivity, and sense of responsibility and mission; behavioral aspects were mainly reflected in the initiation time, execution ability, and standardization level of emergency response actions. Conclusion Relevant departments should pay attention to the risk perception of hospital staff and improve their risk perception and emergency responses by addressing the individual and organizational factors involved, so as to respond more effectively to future public health emergencies and reduce their adverse impact on the work of hospital staff.
In deep learning-based image registration, deformable regions with complex anatomical structures are an important factor affecting registration accuracy, yet existing methods find it difficult to attend to such regions. At the same time, the receptive field of a convolutional neural network is limited by the size of its convolution kernel, making it hard to learn relationships between spatially distant voxels and thus to handle large-region deformations. Aiming at these two problems, this paper proposes a cascaded multi-level registration network model based on the Transformer, equipped with a difficult-deformation-region perceptron based on mean squared error. The perceptron uses sliding-window and floating-window techniques to scan the registered images, obtains a difficult-deformation coefficient for each voxel, and identifies the regions with the worst registration results. The cascaded multi-level registration network uses the perceptron for hierarchical connection, and a self-attention mechanism extracts global features in the basic registration network to optimize registration results at different scales. Experimental results show that the proposed method can progressively register regions with complex deformation, thereby improving the registration of brain medical images, which provides good support for clinical diagnosis.
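The sliding-window idea behind the difficult-deformation perceptron can be sketched as follows. This is a minimal illustration, assuming the per-voxel coefficient is simply the mean squared error between the warped moving image and the fixed image inside a local window; the window size and the 2-D example are illustrative, not the paper's actual parameters:

```python
import numpy as np

def local_mse_map(warped, fixed, win=3):
    """Per-voxel difficult-deformation coefficient: mean squared error
    between the warped moving image and the fixed image, averaged over
    a sliding window centered on each voxel (edge-padded)."""
    assert warped.shape == fixed.shape
    sq_err = (warped - fixed) ** 2
    pad = win // 2
    padded = np.pad(sq_err, pad, mode="edge")
    out = np.zeros(sq_err.shape, dtype=float)
    for idx in np.ndindex(sq_err.shape):
        window = tuple(slice(i, i + win) for i in idx)
        out[idx] = padded[window].mean()
    return out

def worst_region(warped, fixed, win=3):
    """Center index of the window with the largest local error,
    i.e. the region where registration performed worst."""
    m = local_mse_map(warped, fixed, win)
    return np.unravel_index(np.argmax(m), m.shape)
```

In the cascaded model described above, such a map would tell the next registration level which regions still need refinement; here it only ranks windows by residual error.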
Objective To understand the perception of care and the care needs of elderly community patients with chronic diseases in Chengdu, and to provide a basis for better caring for these patients. Methods From August to October 2011, random sampling and a questionnaire survey were used to investigate the care perception and care needs of 180 elderly patients with chronic diseases in the Yulin, Erxianqiao, Caotang Street, and Simaqiao communities of Chengdu, and countermeasures were proposed based on the results. Results Of the 180 patients, 98.89% could perceive care, while 1.11% felt a lack of care. Perceived care came mainly from family members (91.01%), followed by relatives, friends, and neighbors (7.87%), and least of all from work colleagues (1.12%). Care needs mainly included family reunion, concern and consideration, respect and understanding, daily care and psychological and emotional support, help in solving difficulties, and financial assistance. Among nursing care needs, respect and understanding ranked first, followed by daily nursing for chronic diseases, prevention and treatment of chronic diseases, elderly health care, and basic knowledge of chronic diseases. Conclusion Strengthening the competence training of community health service personnel and reinforcing a family atmosphere and social climate of respecting and caring for the elderly can improve the care perception of elderly patients with chronic diseases, provide them with care effectively, and better promote their health.
Medical visual question answering (MVQA) plays a crucial role in computer-aided diagnosis and telemedicine. Due to the limited size and uneven annotation quality of MVQA datasets, most existing methods rely on additional datasets for pre-training and use discriminative formulations that predict answers from a predefined set of labels. This makes the models prone to overfitting in low-resource domains. To cope with these problems, we propose an image-aware generative MVQA method based on image caption prompts. First, we combine a dual visual feature extractor with a progressive bilinear attention interaction module to extract multi-level image features. Second, we propose an image caption prompt method to guide the model to better understand the image information. Finally, an image-aware generative model is used to generate answers. Experimental results show that the proposed method outperforms existing models on the MVQA task, realizing efficient visual feature extraction as well as flexible and accurate answer generation at a small computational cost in low-resource domains. This is of great significance for achieving personalized precision medicine, reducing the medical burden, and improving the efficiency of medical diagnosis.
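Bilinear attention of the kind used in such interaction modules scores every image region against every question token through a learned bilinear form. A simplified single-layer NumPy sketch (the weight matrix `W`, the shapes, and the function names are illustrative assumptions; the paper's progressive module stacks and refines several such interactions):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bilinear_attention(V, Q, W):
    """V: (n_regions, d) image region features, Q: (n_tokens, d) question
    token features, W: (d, d) learned bilinear weight. Returns per-token
    attended image features and the attention map."""
    scores = V @ W @ Q.T          # (n_regions, n_tokens) pairwise affinities
    att = softmax(scores, axis=0) # normalize over image regions per token
    return att.T @ V, att         # (n_tokens, d), (n_regions, n_tokens)

rng = np.random.default_rng(0)
V = rng.standard_normal((36, 16))       # e.g. a 6x6 grid of region features
Q = rng.standard_normal((10, 16))       # 10 question token features
W = 0.1 * rng.standard_normal((16, 16))
ctx, att = bilinear_attention(V, Q, W)
```

Each question token thus receives its own weighted summary of the image regions, which a generative decoder can condition on alongside the caption prompt.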
Emotion reflects the psychological and physiological health of human beings, and emotions are mainly expressed through voice and facial expression. How to extract and effectively fuse these two modes of emotional information is one of the main challenges facing emotion recognition. This paper proposes a multi-branch bidirectional multi-scale temporal perception model, which processes the Mel-frequency cepstral coefficients of speech in both forward and reverse directions along the time dimension. The model uses causal convolution to obtain temporal correlation information between features at different scales, and assigns attention maps to them according to this information, yielding a multi-scale fusion of speech emotion features. This paper further proposes a dynamic fusion algorithm for bimodal features, which draws on the advantages of AlexNet and uses overlapping max-pooling layers to obtain richer fused features from the concatenated matrices of the two modal features. Experimental results show that the proposed multi-branch bidirectional multi-scale temporal perception bimodal emotion recognition model achieves accuracies of 97.67% and 90.14% on two public audio-visual emotion datasets, outperforming other common methods, indicating that the proposed model can effectively capture emotional feature information and improve the accuracy of emotion recognition.
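The bidirectional multi-scale idea above, running a feature sequence forward and time-reversed through causal convolutions at increasing temporal scales, can be sketched as follows. This is a minimal sketch assuming dilation is used to vary the scale; the kernel values, scale choices, and function names are illustrative, not the paper's parameters:

```python
import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    """Dilated causal 1-D convolution: the output at time t depends only
    on inputs at times <= t (left zero-padding, no future leakage)."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

def bidirectional_multiscale(x, kernel, dilations=(1, 2, 4)):
    """Apply causal convolutions at several temporal scales to a sequence
    (e.g. one MFCC coefficient track) and to its time reversal, then stack
    all branch outputs as a (2 * n_scales, T) feature matrix."""
    feats = []
    for d in dilations:
        feats.append(causal_conv1d(x, kernel, d))               # forward branch
        feats.append(causal_conv1d(x[::-1], kernel, d)[::-1])   # reverse branch
    return np.stack(feats)
```

In the full model, attention weights computed from these branches would reweight the stacked features before fusion with the facial-expression stream; the sketch only shows how the forward and reverse multi-scale branches are produced.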