  • Data Descriptor
  • Open access
  • Published: 15 January 2026

RAID-Dataset: human responses to affine image distortions and Gaussian noise

  • Paula Daudén-Oliver [1] (ORCID: orcid.org/0009-0009-1026-6455),
  • David Agost-Beltran [2,3],
  • Emilio Sansano-Sansano [2,3],
  • Raul Montoliu [2,3] (ORCID: orcid.org/0000-0002-8467-391X),
  • Valero Laparra [1],
  • Jesús Malo [1] &
  • Marina Martínez-Garcia [2,4,5]

Scientific Data (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Data acquisition
  • Network models

Abstract

Image quality datasets are used to train and evaluate predictive models of subjective human perception. However, most existing datasets focus on distortions common in digital media rather than in natural viewing conditions. Affine transformations are particularly relevant to study because they are among the distortions most frequently encountered by human observers in everyday life. This Data Descriptor presents a set of human responses to suprathreshold affine image transformations (rotation, translation, scaling) and to Gaussian noise, the latter serving as a convenient reference for comparison with existing image quality datasets. The responses were measured with a well-established psychophysical method: Maximum Likelihood Difference Scaling (MLDS). The set contains responses to 864 distorted images, gathered from 210 observers over more than 40,000 image-quadruple comparisons. The dataset is validated in two ways: (a) the responses reproduce classical absolute detection thresholds for the affine and Gaussian distortions, and (b) the responses to Gaussian distortion correlate with the Mean Opinion Scores (MOS) of conventional image quality databases for that distortion. Moreover, the classical Piéron’s law holds for the reaction times in the dataset, and Group-MAD adversarial stimuli reveal that MLDS perceptual scales are more accurate than conventional MOS.
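The reaction-time claim above has a simple functional form: Piéron’s law states that reaction time decays as a power law of stimulus intensity, RT = t0 + k * I^(-beta). The sketch below illustrates how such a fit can be done with SciPy; it is not part of the released code, and the distortion levels and reaction times are hypothetical placeholders.

```python
# Illustrative Piéron's-law fit: RT = t0 + k * d**(-beta), with d the
# distortion magnitude. All numbers here are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def pieron(d, t0, k, beta):
    """Piéron's law: reaction time decays as a power law of intensity."""
    return t0 + k * d ** (-beta)

# Hypothetical data: distortion magnitudes and median reaction times (s).
levels = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
median_rt = np.array([1.90, 1.40, 1.10, 0.95, 0.88])

params, _ = curve_fit(pieron, levels, median_rt, p0=(0.8, 1.0, 1.0))
t0, k, beta = params
print(f"t0 = {t0:.3f} s, k = {k:.3f}, beta = {beta:.3f}")
```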


Data availability

The dataset and accompanying materials are publicly available on Zenodo at https://doi.org/10.5281/zenodo.17348027 (ref. 38), ensuring long-term preservation, accessibility, and reproducibility. All data are released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
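The DOI above resolves to Zenodo record 17348027. Assuming the standard Zenodo REST API (this snippet is a convenience sketch, not part of the released materials), the record’s file list can be retrieved programmatically:

```python
# Sketch: list the files of the Zenodo record behind the DOI above via
# Zenodo's public REST API (record id taken from the DOI suffix). The
# exact response schema may evolve; 'key' and 'links.self' are the
# conventional fields for file name and download URL.
import requests

record_id = "17348027"
resp = requests.get(f"https://zenodo.org/api/records/{record_id}", timeout=30)
resp.raise_for_status()

for f in resp.json().get("files", []):
    print(f["key"], f["links"]["self"])  # file name and download URL
```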

Code availability

The source code supporting this work is available on GitHub at https://github.com/paudauo/BBDD_Affine_Transformations, with detailed documentation in the README file. Together with the dataset, we provide coding examples in Python. In particular, we show how to read the different versions of the dataset, how to read the raw data and compute the MLDS perceptual scales for a single image and a single distortion, and how to read the MLDS perceptual scales already computed for all images and all distortions. Other libraries can be used to compute the MLDS perceptual scales, such as the R package by the original authors (ref. 46) or its Python wrapper (ref. 47). We also show how to convert the MLDS data to MOS values.

The structure of the code is as follows (a short illustrative MLDS fitting sketch appears after the list):

• Load_DDBB_example.ipynb: Load images and responses.

• Load_RAW_data_and_compute_MLDS.ipynb: Compute MLDS curves from raw data.

• Load_MLDS_data_and_plot_curves.ipynb: Plot normalized perceptual scale curves.

• Convert_MLDS_to_MOS.ipynb: Convert MLDS perceptual scales to MOS (aligned with TID2013).

• Load_RAW_data_and_plot_left_right_RT.ipynb: Analyze reaction times and decision patterns.
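To make the MLDS step concrete before opening the notebooks, below is a self-contained maximum-likelihood sketch of the difference-scaling model of Maloney & Yang (ref. 29): on each trial the observer sees a quadruple of images at distortion levels (a, b; c, d) and reports whether pair (c, d) looks more different than pair (a, b), and the scale values psi are obtained by maximizing the likelihood of those binary responses under Gaussian decision noise. This is an illustrative simplification (endpoints and noise level are fixed, whereas the MLDS R package estimates the noise), and the toy data layout is an assumption, not the RAID raw-file schema; the notebooks above are the authoritative reference.

```python
# Minimal maximum-likelihood difference scaling (MLDS) sketch, after
# Maloney & Yang (2003). Each trial is a quadruple of distortion-level
# indices (a, b, c, d) plus a binary response: 1 if the pair (c, d)
# was judged more different than the pair (a, b).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

SIGMA = 0.2  # fixed decision-noise level (a simplification; the R package fits it)

def neg_log_likelihood(psi_free, quads, resp):
    # Anchor the scale at psi[0] = 0 and psi[-1] = 1; fit interior values.
    psi = np.concatenate(([0.0], psi_free, [1.0]))
    a, b, c, d = quads.T
    # Decision variable: perceived difference of pair (c, d) minus pair (a, b).
    delta = np.abs(psi[d] - psi[c]) - np.abs(psi[b] - psi[a])
    p = np.clip(norm.cdf(delta / SIGMA), 1e-6, 1 - 1e-6)  # avoid log(0)
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

def fit_mlds(quads, resp, n_levels):
    x0 = np.linspace(0.0, 1.0, n_levels)[1:-1]  # initial interior scale values
    res = minimize(neg_log_likelihood, x0, args=(quads, resp), method="Nelder-Mead")
    return np.concatenate(([0.0], res.x, [1.0]))

# Toy example: 4 distortion levels and a handful of quadruple trials.
quads = np.array([[0, 1, 2, 3], [0, 2, 1, 3], [1, 3, 0, 2], [0, 1, 1, 3]])
resp = np.array([1, 1, 0, 1])  # 1 = pair (c, d) judged more different
print(fit_mlds(quads, resp, n_levels=4))
```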

References

  1. Wang, Z. & Bovik, A. C. Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Processing Magazine 26, 98–117, https://doi.org/10.1109/MSP.2008.930649 (2009).

  2. Zhang, R., Isola, P., Efros, A. A., Shechtman, E. & Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 586–595, https://doi.org/10.1109/CVPR.2018.00068 (2018).

  3. Watson, A. & Malo, J. Video quality measures based on the standard spatial observer. In Proceedings of the International Conference on Image Processing, vol. 3, https://doi.org/10.1109/ICIP.2002.1038898 (2002).

  4. Laparra, V., Muñoz-Marí, J. & Malo, J. Divisive normalization image quality metric revisited. J. Opt. Soc. Am. A 27, 852–864, https://doi.org/10.1364/JOSAA.27.000852 (2010).

  5. Hepburn, A., Laparra, V., Malo, J., McConville, R. & Santos-Rodriguez, R. PerceptNet: A human visual system inspired neural network for estimating perceptual distance. In 2020 IEEE International Conference on Image Processing (ICIP), 121–125, https://doi.org/10.1109/ICIP40778.2020.9190691 (2020).

  6. Laparra, V., Berardino, A., Ballé, J. & Simoncelli, E. P. Perceptually optimized image rendering. J. Opt. Soc. Am. A 34, 1511–1525, https://doi.org/10.1364/JOSAA.34.001511 (2017).

  7. Martinez-Garcia, M., Cyriac, P., Batard, T., Bertalmío, M. & Malo, J. Derivatives and inverse of cascaded linear+nonlinear neural models. PLoS ONE 13, e0201326, https://doi.org/10.1371/journal.pone.0201326 (2018).

  8. Kumar, M., Houlsby, N., Kalchbrenner, N. & Cubuk, E. D. Do better ImageNet classifiers assess perceptual similarity better? Transactions on Machine Learning Research (2022).

  9. Hernández-Cámara, P., Vila-Tomás, J., Laparra, V. & Malo, J. Dissecting the effectiveness of deep features as metric of perceptual image quality. Neural Networks 185, 107189, https://doi.org/10.1016/j.neunet.2025.107189 (2025).

  10. Martinez-Garcia, M., Bertalmío, M. & Malo, J. In praise of artifice reloaded: Caution with natural image databases in modeling vision. Frontiers in Neuroscience 13, https://doi.org/10.3389/fnins.2019.00008 (2019).

  11. Lin, H., Hosu, V. & Saupe, D. KADID-10k: A large-scale artificially distorted IQA database. In 2019 Eleventh International Conference on Quality of Multimedia Experience (QoMEX), 1–3, IEEE (2019).

  12. Gu, J. et al. PIPAL: A large-scale image quality assessment dataset for perceptual image restoration. In European Conference on Computer Vision (ECCV) 2020, 633–651, Springer, https://doi.org/10.1007/978-3-030-58621-8_37 (2020).

  13. Ponomarenko, N. et al. Image database TID2013: Peculiarities, results and perspectives. Signal Processing: Image Communication 30, 57–77, https://doi.org/10.1016/j.image.2014.10.009 (2015).

  14. Torralba, A., Isola, P. & Freeman, W. Foundations of Computer Vision. MIT Press (2024).

  15. Liu, X., Pedersen, M. & Hardeberg, J. Y. CID:IQ – A new image quality database. In Image and Signal Processing, 193–202, Springer, https://doi.org/10.1007/978-3-319-07998-1_22 (2014).

  16. Abrams, A. B., Hillis, J. M. & Brainard, D. H. The relation between color discrimination and color constancy: When is optimal adaptation task dependent? Neural Computation 19, 2610–2637, https://doi.org/10.1162/neco.2007.19.10.2610 (2007).

  17. Chromatic Adaptation Models. Chap. 9, 181–198, John Wiley & Sons, Ltd (2013).

  18. Laparra, V., Jiménez, S., Camps-Valls, G. & Malo, J. Nonlinearities and adaptation of color vision from sequential principal curves analysis. Neural Computation 24, 2751–2788, https://doi.org/10.1162/NECO_a_00342 (2012).

  19. Atherton, T. Energy and phase orientation mechanisms: A computational model. Spatial Vision 15, 415–441, https://doi.org/10.1163/156856802320401892 (2002).

  20. Todorović, D. Extension of a computational model of a class of orientation illusions. Vision Research 223, 108459, https://doi.org/10.1016/j.visres.2024.108459 (2024).

  21. Frisby, J. P. & Stone, J. V. Seeing: The Computational Approach to Biological Vision. MIT Press (2010).

  22. Hansard, M. & Horaud, R. A differential model of the complex cell. Neural Computation 23, 2324–2357, https://doi.org/10.1162/NECO_a_00163 (2011).

  23. Langley, K., Lefebvre, V. & Anderson, S. J. Cascaded Bayesian processes: An account of bias in orientation perception. Vision Research 49, 2453–2474, https://doi.org/10.1016/j.visres.2009.07.015 (2009).

  24. Bruna, J. & Mallat, S. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence 35, 1872–1886, https://doi.org/10.1109/TPAMI.2012.230 (2013).

  25. Bouvrie, J., Rosasco, L. & Poggio, T. On invariance in hierarchical models. In Advances in Neural Information Processing Systems, vol. 22, Curran Associates (2009).

  26. Alabau-Bosque, N., Daudén-Oliver, P., Vila-Tomás, J., Laparra, V. & Malo, J. Invariance of deep image quality metrics to affine transformations. Preprint at https://doi.org/10.48550/arXiv.2407.17927 (2024).

  27. International Telecommunication Union (ITU). Recommendation ITU-R BT.500-13: Methodology for the subjective assessment of the quality of television pictures. Tech. Rep. (2012).

  28. Kingdom, F. A. & Prins, N. Psychophysics, 2nd edn. Academic Press, UK (2016).

  29. Maloney, L. T. & Yang, J. N. Maximum likelihood difference scaling. Journal of Vision 3, 5–5, https://doi.org/10.1167/3.8.5 (2003).

  30. Ma, K. et al. Group maximum differentiation competition: Model comparison with few samples. IEEE Transactions on Pattern Analysis and Machine Intelligence 42, 851–864, https://doi.org/10.1109/TPAMI.2018.2889948 (2020).

  31. Eastman Kodak Company. Kodak lossless true color image suite. https://r0k.us/graphics/kodak/ (1999).

  32. Piéron, H. II. Recherches sur les lois de variation des temps de latence sensorielle en fonction des intensités excitatrices. L’Année Psychologique 20, 17–96 (1913).

  33. Daly, S. Application of a noise adaptive contrast sensitivity function to image data compression. In Human Vision, Visual Processing, and Digital Display, vol. 1077, 217–227, SPIE, https://doi.org/10.1117/12.952720 (1989).

  34. Wagenmakers, E.-J. & Brown, S. On the linear relation between the mean and the standard deviation of a response time distribution. Psychological Review 114, 830–841, https://doi.org/10.1037/0033-295x.114.3.830 (2007).

  35. Kepecs, A., Uchida, N., Zariwala, H. A. & Mainen, Z. F. Neural correlates, computation and behavioural impact of decision confidence. Nature 455, 227–231, https://doi.org/10.1038/nature07200 (2008).

  36. Wang, Z. & Simoncelli, E. P. Maximum differentiation (MAD) competition: A methodology for comparing computational models of perceptual quantities. Journal of Vision 8, 8–8, https://doi.org/10.1167/8.12.8 (2008).

  37. Malo, J. & Simoncelli, E. P. Geometrical and statistical properties of vision models obtained via maximum differentiation. In Proc. SPIE Conf. on Human Vision and Electronic Imaging (HVEI XX), vol. 9394, Optical Society of America, https://doi.org/10.1117/12.2085653 (2015).

  38. Daudén-Oliver, P. et al. RAID-Dataset: human responses to affine image distortions and Gaussian noise. Zenodo, https://doi.org/10.5281/zenodo.17348027 (2025).

  39. Regan, D., Gray, R. & Hamstra, S. Evidence for a neural mechanism that encodes angles. Vision Research 36, 323–IN3, https://doi.org/10.1016/0042-6989(95)00113-E (1996).

  40. Legge, G. E. & Campbell, F. Displacement detection in human vision. Vision Research 21, 205–213, https://doi.org/10.1016/0042-6989(81)90114-0 (1981).

  41. Baldwin, A., Fu, M., Farivar, R. & Hess, R. The equivalent internal orientation and position noise for contour integration. Scientific Reports 7, 13244, https://doi.org/10.1038/s41598-017-13244-z (2017).

  42. Teghtsoonian, R. On the exponents in Stevens’ law and the constant in Ekman’s law. Psychological Review, https://doi.org/10.1037/h0030300 (1971).

  43. Aguilar, G., Wichmann, F. A. & Maertens, M. Comparing sensitivity estimates from MLDS and forced-choice methods in a slant-from-texture experiment. Journal of Vision 17, 37–37, https://doi.org/10.1167/17.1.37 (2017).

  44. Devinck, F. & Knoblauch, K. A common signal detection model accounts for both perception and discrimination of the watercolor effect. Journal of Vision 12, 19–19, https://doi.org/10.1167/12.3.19 (2012).

  45. Campbell, F. W. & Robson, J. G. Application of Fourier analysis to the visibility of gratings. Journal of Physiology 197, 551–566, https://doi.org/10.1113/jphysiol.1968.sp008574 (1968).

  46. Knoblauch, K. & Maloney, L. T. MLDS: Maximum likelihood difference scaling in R. Journal of Statistical Software 25, 1–26, https://doi.org/10.18637/jss.v025.i02 (2008).

  47. Aguilar, G. Python wrapper for the MLDS R package. https://github.com/computational-psychology/mlds (2022).


Acknowledgements

This work was partially funded by the Valencian regional government (GVA) under grant CIGE/2022/066 (grupos emergentes); by the Universitat Jaume I (UJI) under grant UJI-A2022-12; by the Ministerio de Ciencia e Innovación under grants PID2020-118071GB-I00, PDC2021-121522-C21, and PID2023-152133NB-I00; and by a BBVA Foundation grant under its Foundations of Science program: Mathematics, Statistics, Computational Sciences and Artificial Intelligence (VIS4NN).

Author information

Authors and Affiliations

  1. Universitat de València, Image Processing Laboratory, València, 46980, Spain

    Paula Daudén-Oliver, Valero Laparra & Jesús Malo

  2. Universitat Jaume I, Castelló, 12071, Spain

    David Agost-Beltran, Emilio Sansano-Sansano, Raul Montoliu & Marina Martínez-Garcia

  3. Institute of New Imaging Technologies, Universitat Jaume I, Castellón, 12071, Spain

    David Agost-Beltran, Emilio Sansano-Sansano & Raul Montoliu

  4. Institut de Matemàtiques de Castelló, Universitat Jaume I, Castelló, 12071, Spain

    Marina Martínez-Garcia

  5. Institut d’estudis feministes Purificación Escribano, Universitat Jaume I, Castelló, 12071, Spain

    Marina Martínez-Garcia


Contributions

P.D. - data acquisition, data processing, validation, writing. D.A. - data acquisition, data processing, validation. E.S. - software development, data acquisition, writing. R.M. - writing. V.L. - data processing, validation, writing. J.M. - writing, validation. M.M. - data acquisition, validation, writing. All authors read, edited, and approved the final manuscript.

Corresponding author

Correspondence to Paula Daudén-Oliver.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Daudén-Oliver, P., Agost-Beltran, D., Sansano-Sansano, E. et al. RAID-Dataset: human responses to affine image distortions and Gaussian noise. Sci Data (2026). https://doi.org/10.1038/s41597-026-06581-0


  • Received: 12 May 2025

  • Accepted: 08 January 2026

  • Published: 15 January 2026

  • DOI: https://doi.org/10.1038/s41597-026-06581-0

