Abstract
Organisations must share facial imagery that remains useful for analysis while protecting identity. Many current methods fail to strike this balance: reconstruction-centred encoder–decoder designs tend to blur salient detail, whereas latent edits in pretrained generators often retain or drift identity cues, undermining privacy and utility. We present ReFaceX, a reversible anonymisation framework that separates what to protect from what to preserve. A donor identity code steers a U-Net anonymiser with Identity Feature Fusion to change identity while retaining non-identity content such as pose, background and expression. A learned steganographic channel carries a compact recovery payload, and reconstruction gradients are blocked at the stego image so the anonymiser is never rewarded for keeping identity. The threat model is stated explicitly and outcomes are audited with strong recognisers. On the LFW and CelebA-HQ datasets at \(256\times 256\), ReFaceX reduces identity similarity across FaceNet, ArcFace and AdaFace, and improves recovered-image quality (SSIM \(0.9378\), LPIPS \(0.1002\), PSNR \(23.97\) dB), while operating in real time on a single RTX 3090. Robustness to common JPEG re-encoding is also demonstrated. By turning the privacy–utility balance into an explicit and auditable operating choice, ReFaceX provides a practical template for responsible release of facial imagery and a foundation for extensions to video, higher resolutions and stronger recovery guarantees.
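Blocking reconstruction gradients at the stego image is the central design choice described above; the following PyTorch-style sketch illustrates how such a detached-recovery training step could look. All names here (anonymiser, stego_encoder, recovery_decoder, id_extractor, make_payload) are illustrative placeholders under assumed interfaces, not the authors' released ReFaceX implementation.

import torch
import torch.nn.functional as F

def training_step(anonymiser, stego_encoder, recovery_decoder,
                  id_extractor, x_orig, donor_id, opt):
    """One illustrative optimisation step; id_extractor is assumed frozen."""
    # 1) Anonymise: the donor identity code steers the U-Net anonymiser.
    x_anon = anonymiser(x_orig, donor_id)

    # 2) Embed a compact recovery payload into the anonymised image
    #    (make_payload is a hypothetical helper producing that payload).
    payload = stego_encoder.make_payload(x_orig)
    x_stego = stego_encoder(x_anon, payload)

    # 3) Privacy objective: minimise cosine similarity between the stego
    #    image's identity embedding and the original's, pushing the output
    #    identity away from the source (donor steering happens inside the
    #    anonymiser itself).
    id_loss = F.cosine_similarity(id_extractor(x_stego),
                                  id_extractor(x_orig)).mean()

    # 4) Recovery objective computed on a DETACHED stego image, so
    #    reconstruction gradients never reach the anonymiser and it is
    #    not rewarded for keeping identity cues.
    x_rec = recovery_decoder(x_stego.detach())
    rec_loss = F.l1_loss(x_rec, x_orig)

    loss = id_loss + rec_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()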
Data availability
All datasets used in this study are publicly available and have been cited accordingly. The Flickr-Faces-HQ (FFHQ) dataset can be accessed at https://www.kaggle.com/datasets/arnaud58/flickrfaceshq-dataset-ffhq, the CelebA-HQ dataset is available at https://www.kaggle.com/datasets/badasstechie/celebahq-resized-256x256, and the Labeled Faces in the Wild (LFW) dataset can be obtained from https://www.kaggle.com/datasets/jessicali9530/lfw-dataset. The source code supporting the findings of this work is available from the corresponding author upon reasonable request.
References
Huang, G. B., Mattar, M., Berg, T. & Learned-Miller, E. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Workshop on Faces in Real-Life Images (2008).
Hukkelås, H., Mester, R. & Lindseth, F. DeepPrivacy: A generative adversarial network for face anonymization. In ISVC (2019).
Deng, J., Guo, J., Xue, N. & Zafeiriou, S. ArcFace: Additive angular margin loss for deep face recognition. In CVPR (2019).
Wang, H. et al. CosFace: Large margin cosine loss for deep face recognition. In CVPR (2018).
Isola, P., Zhu, J.-Y., Zhou, T. & Efros, A. A. Image-to-image translation with conditional adversarial networks. In CVPR (2017).
Ledig, C. et al. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR (2017).
Baluja, S. Hiding images in plain sight: Deep steganography. In NeurIPS (2017).
Zhou, Z., Han, S. & Liu, X. A security analysis of generative steganography. IEEE Access (2018).
Hayes, J. & Danezis, G. Generating steganographic images via adversarial training. In NeurIPS Workshop on Machine Deception (2017).
Zhang, R., Isola, P., Efros, A. A., Shechtman, E. & Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR (2018).
Tov, O., Alaluf, Y., Nitzan, Y., Patashnik, O. & Cohen-Or, D. Designing an encoder for StyleGAN image manipulation. In SIGGRAPH (2021).
Alaluf, Y., Patashnik, O. & Cohen-Or, D. ReStyle: A residual-based StyleGAN encoder via iterative refinement. In ICCV (2021).
Ronneberger, O., Fischer, P. & Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI (2015).
Maximov, M., Elezi, I. & Leal-Taixé, L. CIAGAN: Conditional identity anonymization with generative adversarial networks. In CVPR Workshops (2020).
Meden, B. et al. Privacy-enhancing face biometrics: A comprehensive survey. IEEE Trans. Inf. Forensics Secur. 16, 4147–4183 (2021).
Barattin, S. et al. Attribute-preserving face dataset anonymization via latent code optimization. In Proc. CVPR (2023).
Li, D. et al. RiDDLE: Reversible and diversified de-identification with latent encryptor. arXiv:2303.05171 (2023).
Zhu, J., Kaplan, R., Johnson, J. & Fei-Fei, L. HiDDeN: Hiding data with deep networks. In ECCV (2018).
Tancik, M., Mildenhall, B. & Ng, R. StegaStamp: Invisible hyperlinks in physical photographs. In Proc. CVPR, 2117–2126 (2020).
Kingma, D. P. & Ba, J. Adam: A method for stochastic optimisation. In International Conference on Learning Representations (ICLR) arXiv:1412.6980 (2015).
Karras, T., Laine, S. & Aila, T. A style-based generator architecture for generative adversarial networks. In Proc. CVPR, 4401–4410 (2019).
Zhang, K., Zhang, Z., Li, Z. & Qiao, Y. Joint face detection and alignment using multitask cascaded convolutional networks. IEEE Signal Process. Lett. 23, 1499–1503 (2016).
Karras, T., Aila, T., Laine, S. & Lehtinen, J. Progressive growing of GANs for improved quality, stability, and variation. In Proc. ICLR (2018).
Liu, Z., Luo, P., Wang, X. & Tang, X. Deep learning face attributes in the wild. In Proc. ICCV, 3730–3738 (2015).
Huang, G. B., Ramesh, M., Berg, T. & Learned-Miller, E. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Proc. Workshop on Faces in 'Real-Life' Images (2007).
Kärkkäinen, K. & Joo, J. FairFace: Face attribute dataset for balanced race, gender, and age. arXiv:1908.04913 (2021).
Gu, X. et al. Password-conditioned anonymization and deanonymization with face identity transformers. In Proc. ECCV (Springer, 2020).
Rombach, R., Blattmann, A., Lorenz, D., Esser, P. & Ommer, B. High-resolution image synthesis with latent diffusion models. In Proc. CVPR, 10684–10695 (2022).
Zhang, L., Rao, A. & Agrawala, M. Adding conditional control to text-to-image diffusion models. arXiv:2302.05543 (2023).
Muhammad, D. & Bendechache, M. Can AI be faster, accurate, and explainable? SpikeNet makes it happen. In Annual Conference on Medical Image Understanding and Analysis, 43–57 (Springer, 2025).
Zhang, Z., Zhang, Z., Zhou, H. & Hu, N. SteganoGAN: High capacity image steganography with GANs. In Proc. ICASSP, 2087–2091 (2019).
Muhammad, D., Keles, A. & Bendechache, M. Towards explainable deep learning in oncology: Integrating EfficientNet-B7 with XAI techniques for acute lymphoblastic leukaemia. In Proc. 27th European Conference on Artificial Intelligence (ECAI) (2024).
Ali, M., Muhammad, D., Khalaf, O. I. & Habib, R. Optimizing mobile cloud computing: A comparative analysis and innovative cost-efficient partitioning model. SN Comput. Sci. 6, 1–25 (2025).
Muhammad, D., Salman, M. & Bendechache, M. High cost, low trust? MSA-PNet fixes both for medical imaging. In Proc. Second Workshop on Explainable Artificial Intelligence for the Medical Domain, 25–30 October (2025).
Muhammad, D., Salman, M. & Bendechache, M. Cracking the clinical code: A scoping review on mechanistic interpretability in medical report generation. Comput. Struct. Biotechnol. Rep. 100066 (2025).
Wang, Z., Bovik, A. C., Sheikh, H. R. & Simoncelli, E. P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612. https://doi.org/10.1109/TIP.2003.819861 (2004).
Hore, A. & Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proc. 20th International Conference on Pattern Recognition (ICPR), 2366–2369, https://doi.org/10.1109/ICPR.2010.579 (IEEE, 2010).
Ke, W. et al. Improving the transferability of adversarial examples through neighborhood attribution. Knowl.-Based Syst. 296, 111909 (2024).
Zheng, D. et al. Enhancing the transferability of adversarial attacks via multi-feature attention. IEEE Trans. Inf. Forensics Secur. (2025).
Acknowledgements
This research was supported by Taighde Éireann – Research Ireland under grant numbers GOIPG/2025/8471, 18/CRT/6223 (RI Centre for Research Training in Artificial Intelligence), 13/RC/2106/P_2 (ADAPT Centre) and 13/RC/2094/P_2 (Lero Centre). For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
Funding
This research was supported by Taighde Éireann – Research Ireland under grant numbers GOIPG/2025/8471, 18/CRT/6223 (RI Centre for Research Training in Artificial Intelligence), 13/RC/2106/P_2 (ADAPT Centre) and 13/RC/2094/P_2 (Lero Centre).
Author information
Authors and Affiliations
Contributions
Dost Muhammad: Conceptualisation, Methodology, Writing - Original draft preparation, Investigation, Funding. Muhammad Salman: Data curation, Writing - Original draft preparation. S.M. Haider: Visualisation, Dataset analysis, Writing - Original draft preparation. Malika Bendechache: Supervision, Writing - Reviewing and Editing, Validation, Funding.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Muhammad, D., Salman, M., Shah, S.M.H. et al. ReFaceX: donor-driven reversible face anonymisation with detached recovery. Sci Rep (2026). https://doi.org/10.1038/s41598-026-39337-2


