To address the loss of edge details and the mis-segmentation of lesion areas in colon polyp image segmentation, caused by spatial inductive bias and the lack of an effective representation of global contextual information, a colon polyp segmentation method combining a Transformer with cross-level phase awareness is proposed. Starting from the perspective of global feature transformation, the method used a hierarchical Transformer encoder to extract the semantic information and spatial details of lesion areas layer by layer. Secondly, a phase-aware fusion module (PAFM) was designed to capture cross-level interaction information and effectively aggregate multi-scale contextual information. Thirdly, a position-oriented function module (POF) was designed to effectively integrate global and local feature information, bridge semantic gaps, and suppress background noise. Fourthly, a residual axis reverse attention module (RA-IA) was used to improve the network's ability to recognize edge pixels. The proposed method was evaluated on the public datasets CVC-ClinicDB, Kvasir, CVC-ColonDB, and ETIS, achieving Dice similarity coefficients of 94.04%, 92.04%, 80.78%, and 76.80%, respectively, and mean intersection over union of 89.31%, 86.81%, 73.55%, and 69.10%, respectively. The experimental results show that the proposed method can effectively segment colon polyp images, offering a new approach to the diagnosis of colon polyps.
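The reverse-attention idea behind an edge-refinement module such as RA-IA can be sketched as follows. This is a minimal, per-pixel scalar illustration, assuming a simple residual formulation; the function name `reverse_attention` and the exact combination rule are illustrative assumptions, not the paper's implementation:

```python
import math

def sigmoid(x):
    """Logistic function mapping a logit to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def reverse_attention(features, coarse_logits):
    """Reverse attention sketch (assumed formulation, per pixel):
    weight each feature by (1 - sigmoid(coarse prediction)), so the
    network attends to regions the coarse map is unsure about, such
    as polyp edges, then add the input back as a residual."""
    refined = []
    for f, logit in zip(features, coarse_logits):
        weight = 1.0 - sigmoid(logit)   # high where coarse map says "background"
        refined.append(f * weight + f)  # residual connection keeps original signal
    return refined
```

Pixels the coarse prediction already marks confidently as foreground (large logit) receive weight near 0 and pass through almost unchanged, while ambiguous edge pixels (logit near 0) are amplified for refinement.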
Existing retinal vessel segmentation algorithms suffer from several problems: the ends of the main vessels are prone to breakage, and the central macula and the optic disc boundary are likely to be mistakenly segmented. To solve these problems, a novel retinal vessel segmentation algorithm is proposed in this paper, which combines vessel contour information with conditional generative adversarial networks. Firstly, non-uniform illumination removal and principal component analysis were used to preprocess the fundus images, enhancing the contrast between the blood vessels and the background and producing single-scale gray images with rich feature information. Secondly, dense blocks integrating depthwise separable convolution with offset and squeeze-and-excitation (SE) blocks were applied to the encoder and decoder to alleviate gradient vanishing or explosion, while keeping the network focused on the feature information of the learning target. Thirdly, a contour loss function was added to improve the network's ability to identify blood vessel information and contour information. Finally, experiments were carried out on the DRIVE and STARE datasets. The area under the receiver operating characteristic curve reached 0.9825 and 0.9874, respectively, and the accuracy reached 0.9677 and 0.9756, respectively. Experimental results show that the algorithm can accurately distinguish contours from blood vessels and reduce vessel rupture. The algorithm has practical value in the diagnosis of clinical ophthalmic diseases.
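The squeeze-and-excitation (SE) mechanism mentioned above can be illustrated with a minimal pure-Python sketch. The function `se_block` and its toy weight matrices `w1` and `w2` are assumptions for clarity, not the paper's code; a real SE block operates on convolutional feature maps with learned weights:

```python
import math

def sigmoid(x):
    """Logistic function mapping a value to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def se_block(channel_maps, w1, w2):
    """Squeeze-and-excitation sketch: each channel is a flat list of values.
    Squeeze: global average pooling per channel.
    Excitation: FC -> ReLU -> FC -> sigmoid produces one gate per channel.
    Scale: each channel map is reweighted by its gate."""
    # squeeze: one scalar descriptor per channel
    pooled = [sum(ch) / len(ch) for ch in channel_maps]
    # excitation: two small fully connected layers (toy weights w1, w2)
    hidden = [max(0.0, sum(p * w for p, w in zip(pooled, row))) for row in w1]
    gates = [sigmoid(sum(h * w for h, w in zip(hidden, row))) for row in w2]
    # scale: channel-wise reweighting focuses the network on informative channels
    return [[v * g for v in ch] for ch, g in zip(channel_maps, gates)]
```

With identity weights, a channel whose global average is larger receives a gate closer to 1, so the block learns to emphasize channels carrying vessel features and suppress background-dominated ones.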