Feature-indistinguishable machine unlearning via negative-hot label encoding and class weight masking
  • Article
  • Open access
  • Published: 03 March 2026

Jiali Wang1, Hongxia Bie1, Zhao Jing1 & Yichen Zhi1

Scientific Reports (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note that errors affecting the content may be present, and all legal disclaimers apply.

Subjects

  • Computational biology and bioinformatics
  • Engineering
  • Mathematics and computing

Abstract

With the growing importance of data privacy and regulatory compliance, machine unlearning has become a critical requirement in deep learning. However, existing approaches often require access to the original training data, incur substantial computational costs, or compromise performance on retained data. To address these limitations, we propose a novel unlearning framework that integrates label encoding fine-tuning with class weight masking, enabling efficient and selective forgetting of specific classes. In particular, we introduce Negative-Hot Label Encoding (NHLE), which suppresses the discriminability of target classes in the feature space, thereby weakening their representations. Our method requires only a small number of samples from the forgotten classes for iterative fine-tuning. Extensive experiments on multiple visual datasets show that the proposed framework achieves near-zero classification accuracy on forgotten data, while reducing accuracy on retained data by no more than 0.035.
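This page does not spell out the formulas behind the two mechanisms named in the abstract, so the PyTorch sketch below is only an illustration under stated assumptions: it assumes that NHLE replaces the one-hot target of a forgotten class with a negative value at that class index (so that minimizing a soft-label cross-entropy actively drives the class's predicted probability down), and that class weight masking zeroes the final-layer weight rows and biases of the forgotten classes. The function names (nhle_targets, nhle_loss, mask_class_weights) are hypothetical and are not the authors' code.

```python
# Illustrative sketch of the two ingredients named in the abstract,
# under the assumptions stated above; the paper's actual NHLE and
# masking rules may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


def nhle_targets(labels: torch.Tensor, num_classes: int,
                 neg_value: float = -1.0) -> torch.Tensor:
    """Assumed negative-hot encoding: like one-hot, but the true class
    gets a negative weight instead of +1."""
    targets = torch.zeros(labels.size(0), num_classes)
    targets[torch.arange(labels.size(0)), labels] = neg_value
    return targets


def nhle_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Soft-label cross-entropy against the negative-hot targets.
    With a -1 target, minimizing this loss pushes the forgotten class's
    log-probability down; in practice it would be clamped or combined
    with a retain-set loss to preserve the remaining classes."""
    targets = nhle_targets(labels, logits.size(1)).to(logits.device)
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()


@torch.no_grad()
def mask_class_weights(classifier: nn.Linear, forget_classes: list[int]) -> None:
    """Assumed class weight masking: zero the final-layer weight rows
    and biases of forgotten classes so the network can no longer emit
    evidence for them."""
    for c in forget_classes:
        classifier.weight[c].zero_()
        if classifier.bias is not None:
            classifier.bias[c].zero_()


# Toy usage: suppress class 3 of a 10-way classifier.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.randn(8, 1, 28, 28), torch.full((8,), 3)
loss = nhle_loss(model(x), y)      # a fine-tune step would call loss.backward()
mask_class_weights(model[1], forget_classes=[3])
```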

Data availability

The datasets used in this study are all publicly available: CIFAR-10 and CIFAR-100 [https://www.cs.toronto.edu/~kriz/cifar.html], SVHN [http://ufldl.stanford.edu/housenumbers/], Fashion-MNIST [https://github.com/zalandoresearch/fashion-mnist], and MNIST [http://yann.lecun.com/exdb/mnist/].
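All five datasets also ship with torchvision, so one convenient way to obtain them (not necessarily the pipeline the authors used) is via torchvision.datasets; note that SVHN takes a split argument rather than a train flag:

```python
# Download the benchmark datasets with torchvision (a convenience
# assumption; the paper may use the raw distributions linked above).
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
root = "./data"  # hypothetical local cache directory

cifar10 = datasets.CIFAR10(root, train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100(root, train=True, download=True, transform=to_tensor)
svhn = datasets.SVHN(root, split="train", download=True, transform=to_tensor)
fmnist = datasets.FashionMNIST(root, train=True, download=True, transform=to_tensor)
mnist = datasets.MNIST(root, train=True, download=True, transform=to_tensor)
```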

Funding

This work is supported in part by the Science and Technology Innovation 2030 Major Project (Grant No. 2022ZD0211603) and the Beijing Natural Science Foundation – Joint Funds of the Haidian Original Innovation Project (Grant No. L232056).

Author information

Authors and Affiliations

  1. Intelligent Media Computing Center, School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, 100876, China

    Jiali Wang, Hongxia Bie, Zhao Jing & Yichen Zhi

Contributions

J.W. conceived the experiment(s), Z.J. and Y.Z. conducted the experiment(s), J.W. and H.B. analysed the results, and J.W. wrote the original draft. All authors reviewed the manuscript.

Corresponding author

Correspondence to Hongxia Bie.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Wang, J., Bie, H., Jing, Z. et al. Feature-indistinguishable machine unlearning via negative-hot label encoding and class weight masking. Sci Rep (2026). https://doi.org/10.1038/s41598-026-40379-9

  • Received: 29 September 2025

  • Accepted: 12 February 2026

  • Published: 03 March 2026

  • DOI: https://doi.org/10.1038/s41598-026-40379-9

Keywords

  • Machine unlearning
  • Label encoding
  • Feature indistinguishable representation