Fig. 4

From: ConIQA: A deep learning method for perceptual image quality assessment with limited data

ConIQA network structure and the consistency training paradigm. (a) ConIQA is a Siamese-style network in which the same network processes both the target image (top) and the difference image (bottom), defined as \(|I_t-I_r|\), where \(I_t\) is the target image and \(I_r\) the rendered image. The final estimated perceptual quality, \(M_p\), is obtained by averaging the cosine similarity of the feature maps in the two branches. (b) ConIQA is trained with a supervised loss computed over the labeled HQA1k dataset and an unsupervised loss computed over a large unlabeled dataset of target-rendering pairs. The unsupervised loss compares the network's predictions before and after image transformations are applied to each unlabeled sample.
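The following is a minimal sketch of the computation described above, not the authors' implementation: it assumes a PyTorch-style shared backbone that returns a list of feature maps, a mean-squared-error objective for both loss terms, a generic `transform` for the consistency augmentation, and a weighting factor `lam`, none of which are specified in the caption.

```python
# Hypothetical sketch of the ConIQA quality score and consistency loss from Fig. 4.
# Backbone choice, feature layers, loss form, stop-gradient, and weighting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def perceptual_quality(backbone: nn.Module, target: torch.Tensor,
                       rendering: torch.Tensor) -> torch.Tensor:
    """Estimate M_p as the mean cosine similarity between feature maps of the
    target branch and the |I_t - I_r| difference branch (shared weights)."""
    diff = (target - rendering).abs()           # difference image |I_t - I_r|
    feats_t = backbone(target)                  # assumed: list of feature maps
    feats_d = backbone(diff)
    sims = [F.cosine_similarity(ft.flatten(1), fd.flatten(1), dim=1)
            for ft, fd in zip(feats_t, feats_d)]
    return torch.stack(sims, dim=0).mean(dim=0)  # average over layers -> M_p per image


def coniqa_loss(backbone, labeled_batch, unlabeled_batch, transform, lam=1.0):
    """Supervised loss on labeled pairs plus a consistency loss on unlabeled pairs."""
    # Supervised term: regress M_p toward the human perceptual score (HQA1k-style label).
    t, r, score = labeled_batch
    sup = F.mse_loss(perceptual_quality(backbone, t, r), score)

    # Unsupervised term: predictions should agree before and after the transformation
    # applied to each unlabeled target-rendering pair (stop-gradient on the clean branch
    # is an assumption, common in consistency training).
    ut, ur = unlabeled_batch
    pred_plain = perceptual_quality(backbone, ut, ur)
    pred_aug = perceptual_quality(backbone, transform(ut), transform(ur))
    cons = F.mse_loss(pred_aug, pred_plain.detach())

    return sup + lam * cons
```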
