Figure 6
From: Unfolded deep kernel estimation-attention UNet-based retinal image segmentation

The feature attention module (FAM) produces an attention feature map through a series of operations: ReLU and sigmoid activations, elementwise multiplication, and concatenation of the input and output feature maps, followed by 3 × 3 and 1 × 1 convolutions that yield the attention features.
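The wiring described in the caption can be sketched as a small module. This is a hypothetical reconstruction for illustration only: the exact operation order, channel counts, and where each activation sits are assumptions inferred from the caption, not the authors' implementation.

```python
# Hypothetical FAM sketch inferred from the figure caption; the operation
# order and channel dimensions are assumptions, not the authors' code.
import torch
import torch.nn as nn


class FeatureAttentionModule(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Concatenating input and attended maps doubles the channel count.
        self.conv3 = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid gate on the ReLU-activated input, applied elementwise.
        gate = torch.sigmoid(self.relu(x))
        attended = x * gate                      # elementwise multiplication
        fused = torch.cat([x, attended], dim=1)  # concat input and output maps
        # 3x3 then 1x1 convolutions give the final attention features.
        return self.conv1(self.relu(self.conv3(fused)))


fam = FeatureAttentionModule(channels=16)
out = fam(torch.randn(1, 16, 32, 32))
print(tuple(out.shape))  # (1, 16, 32, 32)
```

The gate-multiply-concatenate pattern here is one common way such attention blocks are built; the paper's figure should be consulted for the definitive data flow.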