Scientific Reports
Stabilizing updates in differentially private stochastic gradient descent with buffered rejection
  • Article
  • Open access
  • Published: 18 March 2026


  • Sifan Deng1,
  • Kai Zhang1,
  • Weilin Zhang2,
  • Huiqin Jiang1 &
  • Pei-Wei Tsai3 

Scientific Reports (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note that errors affecting the content may be present, and all legal disclaimers apply.

Subjects

  • Computational biology and bioinformatics
  • Mathematics and computing

Abstract

Differentially private stochastic gradient descent (DP-SGD) is a standard algorithm for training deep models on sensitive data, but under tight privacy budgets it must add large noise to every step, which slows convergence and reduces accuracy. Selective-update methods for DP-SGD save privacy cost by rejecting updates that fail a noisy validation test, but each decision still relies on a single noisy signal and remains unstable. We propose a differentially private training algorithm that combines a buffered rejection mechanism with a phased parameter-decay strategy for stochastic gradient descent. In each iteration, the proposed algorithm maintains two candidate updates, evaluates their privately perturbed loss improvements, and applies a local preferential choice. This buffered comparison spends privacy budget on directions that are more likely to be beneficial. The phased decay strategy tracks validation accuracy and gradually adjusts the noise multipliers, learning rate, and rejection threshold to match the current training stage. Experiments on MNIST, Fashion-MNIST, CIFAR-10, and IMDb under identical privacy budgets show that the proposed algorithm consistently improves test accuracy over standard DP-SGD and selective-update DP-SGD, typically by 0.5–2 percentage points, and converges faster at the same privacy level. Membership-inference evaluations report area under the ROC curve values close to 0.5, indicating that these gains do not weaken empirical privacy.
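The buffered comparison described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration on a toy quadratic objective, not the authors' implementation: the function and parameter names (`buffered_rejection_step`, `sigma_grad`, `sigma_val`, `threshold`) are our own, and the privacy accounting, minibatch sampling, and per-example clipping of a real DP-SGD pipeline are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(g, C=1.0):
    """Standard DP-SGD clipping: rescale g to L2 norm at most C."""
    n = np.linalg.norm(g)
    return g * min(1.0, C / n) if n > 0 else g

def buffered_rejection_step(w, loss, grad, lr=0.1, sigma_grad=1.0,
                            sigma_val=0.5, clip_C=1.0, threshold=0.0):
    """One iteration of the buffered-rejection idea: build two noisy
    candidate updates, perturb each candidate's loss improvement, and
    accept the better candidate only if it clears the threshold."""
    base = loss(w)
    candidates = []
    for _ in range(2):  # the buffer holds two candidate updates
        g = clip(grad(w), clip_C)
        g = g + rng.normal(0.0, sigma_grad * clip_C, size=w.shape)
        w_cand = w - lr * g
        # privately perturbed loss improvement (positive = loss went down)
        noisy_gain = (base - loss(w_cand)) + rng.normal(0.0, sigma_val)
        candidates.append((noisy_gain, w_cand))
    best_gain, best_w = max(candidates, key=lambda c: c[0])
    # rejection: keep the current weights if even the better candidate fails
    return best_w if best_gain > threshold else w

# Toy objective: minimize the quadratic loss ||w||^2.
loss = lambda w: float(w @ w)
grad = lambda w: 2.0 * w
w0 = np.array([3.0, -2.0])
w = w0
for _ in range(50):
    w = buffered_rejection_step(w, loss, grad)
print(loss(w), "<", loss(w0))
```

In a real pipeline, the gradient noise would be calibrated through a Rényi-DP or moments accountant, and per-example clipping would be handled by a DP training library; the sketch only shows the control flow of buffering two candidates and rejecting both when neither clears the noisy acceptance threshold.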

Data availability

This study utilized four publicly available datasets that are openly accessible for research purposes. The MNIST dataset can be accessed at https://github.com/cvdfoundation/mnist/tree/master, Fashion-MNIST at https://github.com/zalandoresearch/fashion-mnist, CIFAR-10 at https://www.cs.toronto.edu/~kriz/cifar.html, and the IMDb movie reviews dataset at http://ai.stanford.edu/~amaas/data/sentiment/. All datasets are freely available and can be obtained from these official repositories without restrictions.


Funding

This work was supported by the Natural Science Foundation of Xiamen, China (Grant No. 3502Z202472027); the Natural Science Foundation of Fujian Province, China (Grant No. 2025J011277); the Educational Research Projects for Young and Middle-aged Teachers of Fujian Province, China (Grant No. JAT241118); the Xiamen Municipal Research Program for Returned Overseas Scholars (Grant No. XMHRSS-[2024]-241-03), under the administration of the Xiamen Municipal Human Resources and Social Security Bureau, China; and the Research Start-up Program for High-level Talents at Xiamen University of Technology (Grant No. YKJ24006R).

Author information

Authors and Affiliations

  1. School of Computer and Information Engineering, Xiamen University of Technology, Xiamen, 361024, China

    Sifan Deng, Kai Zhang & Huiqin Jiang

  2. New Energy Engineering Design and Research Institute, China United Engineering Corporation Limited, Hangzhou, 310052, China

    Weilin Zhang

  3. Department of Computing Technologies, Swinburne University of Technology, Melbourne, 3122, Australia

    Pei-Wei Tsai


Contributions

S.D. conceived the study, developed the methodology, implemented the software, performed validation experiments, and wrote the initial draft of the manuscript. K.Z. was responsible for data curation, contributed to methodology development, created visualizations, and provided supervision throughout the project. W.Z. contributed to methodology development and managed project administration. H.J. contributed to the development of methodology. P.-W.T. supervised the research and critically revised the manuscript. All authors reviewed and approved the final version of the manuscript for submission.

Corresponding author

Correspondence to Kai Zhang.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Deng, S., Zhang, K., Zhang, W. et al. Stabilizing updates in differentially private stochastic gradient descent with buffered rejection. Sci Rep (2026). https://doi.org/10.1038/s41598-026-44009-2


  • Received: 14 November 2025

  • Accepted: 09 March 2026

  • Published: 18 March 2026

  • DOI: https://doi.org/10.1038/s41598-026-44009-2


Keywords

  • Differential privacy
  • Deep learning
  • Buffered rejection mechanism
