Table 1 Literature survey
From: Retinal vessel segmentation using multi scale feature attention with MobileNetV2 encoder
| Reference number | Datasets used | Technique used | Summary |
|---|---|---|---|
| | DRIVE, STARE | Grey relational analysis | Grey relational analysis is applied as a novel approach to retinal vessel segmentation. The method is evaluated on the DRIVE and STARE retinal vessel datasets to assess its performance on segmenting single samples, and results for different vessel configurations are reported. |
| | DRIVE, STARE, CHASE-DB1 | Region-guided attention network | A region-guided attention network is used to segment the retina's blood vessels more accurately. To improve segmentation accuracy, the model applies an attention mechanism that focuses on the relevant regions. The approach was tested on the DRIVE, STARE, and CHASE-DB1 datasets. |
| | DRIVE, STARE | Multi-scale position-aware cyclic convolutional network (MPCCN) | Retinal vessel segmentation is performed with the multi-scale position-aware cyclic convolutional network (MPCCN). The proposed approach is shown to improve segmentation accuracy on the DRIVE and STARE datasets and performs well across multiple datasets. |
| | DRIVE, STARE | AutoMorph (automated morphology-based segmentation) | This study investigates the effect of segmentation metrics on the test-retest reproducibility of retinal vessel segmentation. The AutoMorph tool is applied to a wide variety of retinal vessel datasets, including DRIVE and STARE, with the aim of accurately capturing morphological differences across retinal vascular disease while ensuring clinical relevance, with an emphasis on reliability and consistency. |
| | OCTA | U-Net deep learning model | A U-Net deep learning model for retinal vessel segmentation from optical coherence tomography angiography (OCTA) images is presented. The model is designed to increase the segmentation accuracy of retinal vascular structures in OCTA images, and the results show that it is well suited to modern imaging techniques. |
| | DRIVE, STARE | ANSAN-infused retinal vessel segmentation | A methodology is presented that uses ANSAN-infused retinal vessel segmentation for early diagnosis of glaucoma. The method improves segmentation accuracy in retinal images of early-stage glaucoma, and its performance is evaluated on the DRIVE and STARE datasets. |
| | Fluorescein angiography | Hybrid segmentation method | A hybrid segmentation method is proposed to accurately identify retinal blood vessels in fluorescein angiography images. The model is trained on images from patients with diabetic retinopathy, which makes the segmentation more precise, and the method performs well on real-world clinical datasets. |
| | DRIVE, STARE | Morphology cascaded features and supervised learning | Supervised learning is combined with morphology cascaded features to segment retinal blood vessels. The features used in the approach are engineered to increase segmentation accuracy, and results on the DRIVE and STARE retinal vessel datasets validate the approach. |
| | DRIVE, STARE | Spatial attention U-Net (SA-UNet) | The spatial attention U-Net (SA-UNet) was developed for retinal vessel segmentation, where spatial attention mechanisms improve the segmentation outcome. The model directs attention to the relevant areas of retinal images to sharpen segmentation, and SA-UNet is tested on the DRIVE and STARE datasets. |
| | DRIVE, STARE | Genetic U-Net | The Genetic U-Net model uses genetic algorithms to automatically discover deep network architectures for retinal vessel segmentation. Rather than hand-designing the network, the architecture is optimized to increase segmentation accuracy while reducing the influence of noise. The model is evaluated on the DRIVE and STARE retinal vessel datasets with promising results. |
| | 3D retinal vessel datasets | Spider U-Net (LSTM for 3D segmentation) | Spider U-Net uses an LSTM for inter-slice communication to enable 3D retinal blood vessel segmentation. By modeling inter-slice interactions, the model improves 3D segmentation, and the approach is evaluated on a 3D retinal vessel dataset with improved outcomes. |
| | DRIVE, STARE | MRU-Net (U-Net variant) | MRU-Net, a variant of U-Net, is preferred for retinal vessel segmentation because it handles the complicated vascular structure better. The model uses a modified U-shaped architecture to improve accuracy, and improved segmentation performance is reported on the DRIVE and STARE datasets. |
| | DRIVE, STARE | M2U-Net | M2U-Net was developed for efficient and effective retinal vessel segmentation. The model uses a modified U-Net architecture to achieve the best segmentation performance, and the method was validated on two widely used retinal vessel datasets: DRIVE and STARE. |
| | DRIVE, STARE | Context encoder network (CE-Net) | CE-Net enhances retinal vessel segmentation by applying a context encoder network to 2D medical image segmentation. The model combines an encoder-decoder architecture with contextual information to improve segmentation accuracy, and it is evaluated on the DRIVE and STARE retinal vessel databases. |
| | Fundus images | LUVS-Net (lightweight U-Net) | LUVS-Net, a lightweight U-Net model, was designed specifically for retinal vasculature identification in fundus images. The goal of the model is to achieve excellent segmentation performance with low computational complexity, and it has produced promising results on fundus image datasets. |
| | DRIVE, STARE, CHASE_DB1 | Multi-scale feature fusion and attention (MFA-UNet) | MFA-UNet integrates a multi-scale fusion self-attention module (MSAM) in skip connections to capture global dependencies and retain vessel details, and a multi-branch decoding module (MBDM) with deep supervision to separately guide macrovessel and microvessel learning. A parallel attention module (PAM) is also used in the decoder to suppress redundant information. The model outperformed existing methods, particularly in preserving thin vessels, making it effective for clinical application. |
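Several of the surveyed models (SA-UNet's spatial attention, MFA-UNet's parallel attention module) reweight feature maps by where vessels are likely to appear rather than by which channel encodes them. The sketch below is a minimal NumPy illustration of that general spatial-attention idea, not any author's implementation: channel-wise average and max pooling produce two spatial descriptors, a fixed box filter stands in for the learned convolution that a real network would train, and a sigmoid yields per-pixel weights that rescale every channel.

```python
import numpy as np

def spatial_attention(feature_map, kernel_size=7):
    """Illustrative spatial attention gate (hypothetical sketch).

    feature_map: (C, H, W) array of encoder features.
    Returns the same-shaped array, reweighted per spatial location.
    """
    # Channel-wise average and max pooling -> two (H, W) descriptors.
    avg_pool = feature_map.mean(axis=0)
    max_pool = feature_map.max(axis=0)
    desc = np.stack([avg_pool, max_pool], axis=0)  # (2, H, W)

    # Stand-in for a learned k x k convolution: a fixed box filter
    # that collapses both descriptors into one attention logit map.
    pad = kernel_size // 2
    padded = np.pad(desc, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    h, w = avg_pool.shape
    logits = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            logits[i, j] = padded[:, i:i + kernel_size,
                                  j:j + kernel_size].mean()

    # Sigmoid -> weights in (0, 1), broadcast over all channels.
    attention = 1.0 / (1.0 + np.exp(-logits))
    return feature_map * attention

features = np.random.rand(8, 16, 16)  # toy (C, H, W) feature block
gated = spatial_attention(features)
assert gated.shape == features.shape
```

In the actual networks the box filter is a trained convolution and the gate sits inside skip connections or the decoder, but the mechanism is the same: a single (H, W) weight map, multiplied across channels, that suppresses background pixels and emphasizes likely vessel locations, which is why these modules help preserve thin vessels.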