ST segment morphology is closely related to cardiovascular disease: it is used both to characterize different diseases and to predict disease severity. However, the ST segment's short duration, low energy, variable morphology, and susceptibility to various kinds of noise make its morphological classification a difficult task. In this paper, we address the problems of reliance on a single type of feature and low classification accuracy in ST segment morphology classification, and use the gradient of an ST surface to improve the accuracy of multi-class ST segment morphology classification. We distinguish five ST segment morphologies: normal, upward-sloping elevation, arch-back elevation, horizontal depression, and arch-back depression. First, ST segment candidates are selected according to the location of the QRS complex and established medical statistics. Second, we extract the ST segment's area, mean value, deviation from the reference baseline, slope, and mean squared error as features; in addition, the ST segment is converted into a surface, gradient features are extracted from that surface, and all morphological features are assembled into a feature vector. Finally, a support vector machine classifies the feature vectors, yielding the multi-class ST segment morphology. The MIT-BIH Arrhythmia Database (MITDB) and the European ST-T Database (EDB) were used as data sources to validate the algorithm, which achieved average recognition rates of 97.79% and 95.60%, respectively. Based on these results, we expect that this method can be introduced into clinical practice in the future to provide morphological guidance for the diagnosis of cardiovascular diseases and improve diagnostic efficiency.
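To make the feature set concrete, the sketch below computes the scalar morphological features listed above (area, mean, baseline deviation, slope, mean squared error of a linear fit) plus a surface-gradient summary, and feeds them to a support vector machine. The surface construction (an outer product of the baseline-corrected segment), the sampling rate, and the RBF-kernel sklearn pipeline are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: morphological features of an ST-segment candidate + SVM classifier.
# The "ST surface" built here is a placeholder (outer product); the paper's
# exact surface construction is not reproduced.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def st_features(st, baseline, fs=360.0):
    """st: 1-D ST-segment samples (mV); baseline: reference level (mV)."""
    t = np.arange(len(st)) / fs
    area = np.sum(st - baseline) / fs                    # Riemann-sum area (mV*s)
    mean = st.mean()
    dev = mean - baseline                                # mean deviation from baseline
    slope, intercept = np.polyfit(t, st, 1)              # least-squares slope
    mse = np.mean((st - (slope * t + intercept)) ** 2)   # residual of the linear fit
    # Hypothetical "ST surface": outer product of the baseline-corrected
    # segment with itself; the mean gradient magnitude summarizes morphology.
    surf = np.outer(st - baseline, st - baseline)
    gy, gx = np.gradient(surf)
    grad_mean = np.hypot(gx, gy).mean()
    return np.array([area, mean, dev, slope, mse, grad_mean])

# X: stacked feature vectors from many beats; y: one of the five classes
# (normal, up-sloping elevation, arch-back elevation, horizontal depression,
# arch-back depression). Fit and predict as usual:
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(X_train, y_train); y_pred = clf.predict(X_test)
```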
Magnetic resonance imaging (MRI) can produce multi-modal images with different contrasts, providing rich information for clinical diagnosis. However, some contrast images are not acquired, or the acquired images fail to meet diagnostic requirements, because of limited patient cooperation or constraints on scanning conditions. Image synthesis has become a means of compensating for such missing images, and in recent years deep learning has been widely applied to MRI synthesis. This paper proposes a synthesis network based on multi-modal fusion: a feature encoder first encodes each unimodal image separately, a feature fusion module then fuses the features of the different modalities, and the target-modality image is finally generated. A dynamically weighted combined loss function, defined over both the spatial domain and k-space, is introduced to improve the similarity measure between the target image and the predicted image. Experimental validation and quantitative comparison show that the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can shorten patients' MRI scanning time and address the clinical problem of FLAIR images that are missing or of insufficient diagnostic quality.
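As a rough illustration of the described pipeline, the sketch below pairs a minimal fusion generator (per-modality encoders, concatenate-and-convolve fusion, a decoding head) with a combined spatial/k-space L1 loss whose balance is a learnable sigmoid weight. The layer sizes and the specific dynamic-weighting mechanism are assumptions (one simple stand-in for the paper's dynamic weighting); the actual network is considerably deeper.

```python
# Sketch (PyTorch): multi-modal fusion synthesis + spatial/k-space loss.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class FusionSynthesisNet(nn.Module):
    """Encode each input modality separately, fuse, then decode to FLAIR."""
    def __init__(self, n_modalities=2, feat=32):
        super().__init__()
        self.encoders = nn.ModuleList(
            conv_block(1, feat) for _ in range(n_modalities))
        self.fuse = conv_block(feat * n_modalities, feat)  # concat-and-conv fusion
        self.decode = nn.Conv2d(feat, 1, 3, padding=1)     # predicted FLAIR

    def forward(self, xs):  # xs: list of (B, 1, H, W) tensors, one per modality
        feats = [enc(x) for enc, x in zip(self.encoders, xs)]
        return self.decode(self.fuse(torch.cat(feats, dim=1)))

class SpatialKSpaceLoss(nn.Module):
    """L1 in the image domain + L1 on k-space magnitudes, dynamically weighted."""
    def __init__(self):
        super().__init__()
        self.logit = nn.Parameter(torch.zeros(1))  # sigmoid(0) = 0.5: equal start
        self.l1 = nn.L1Loss()

    def forward(self, pred, target):
        loss_img = self.l1(pred, target)
        k_pred = torch.fft.fft2(pred).abs()    # 2-D FFT magnitude ~ k-space
        k_true = torch.fft.fft2(target).abs()
        loss_k = self.l1(k_pred, k_true)
        w = torch.sigmoid(self.logit)          # learnable weight in (0, 1)
        return w * loss_img + (1.0 - w) * loss_k

# Usage: net = FusionSynthesisNet(); criterion = SpatialKSpaceLoss()
# pred = net([t1, t2]); loss = criterion(pred, flair)
```

Note that because this loss carries a learnable parameter, it must be handed to the optimizer alongside the network's parameters.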
Pathological images of gastric cancer serve as the gold standard for diagnosing this malignancy. However, predicting recurrence from them is hampered by challenges such as subtle lesion morphology, insufficient fusion of multi-resolution features, and ineffective use of contextual information. To address these issues, a three-stage recurrence prediction method based on pathological images of gastric cancer is proposed. In the first stage, the self-supervised learning framework SimCLR is used to train on low-resolution patch images, reducing the interdependence among different tissue images and yielding decoupled, enhanced features. In the second stage, the resulting low-resolution enhanced features are fused with the corresponding high-resolution unenhanced features, achieving feature complementation across resolutions. In the third stage, to cope with the position encoding difficulty caused by the large variation in the number of patch images, position encoding is performed over multi-scale local neighborhoods, and a self-attention mechanism extracts features carrying contextual information; these contextual features are further combined with the local features extracted by a convolutional neural network. Evaluation on clinically collected data shows that, compared with the best-performing traditional method, the proposed network achieves the highest accuracy and area under the curve (AUC), improving them by 7.63% and 4.51%, respectively. These results validate the usefulness of the method for predicting gastric cancer recurrence.
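Of the three stages, the first is the most self-contained: SimCLR optimizes the NT-Xent contrastive loss over pairs of augmented views of each patch. A minimal sketch of that loss follows (the standard formulation; the batch construction, temperature value, and augmentations are assumptions, and the paper's encoder and later stages are not shown).

```python
# Sketch (PyTorch): NT-Xent loss, the contrastive objective SimCLR trains with.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, d) embeddings of two augmented views of the same N patches."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, d), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    n = z1.shape[0]
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))     # exclude self-pairs
    # The positive for row i is its other augmented view: i + n (or i - n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage: z1, z2 = encoder(aug1(patches)), encoder(aug2(patches))
# loss = nt_xent(z1, z2)
```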