A flexible framework for hyperparameter optimization using homotopy and surrogate models
  • Article
  • Open access
  • Published: 17 February 2026


  • Sophia J. Abraham1,
  • Kehelwala D. G. Maduranga2,
  • Jeffery Kinnison1,
  • Zachariah Carmichael1,
  • Jonathan D. Hauenstein3 &
  • Walter J. Scheirer1 

Scientific Reports, Article number: (2026)

  • 353 Accesses

  • 1 Altmetric


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Applied mathematics
  • Computer science

Abstract

Over the past few decades, machine learning has made remarkable strides, owing largely to algorithmic advancements and the abundance of high-quality, large-scale datasets. However, an equally crucial aspect of achieving optimal model performance is the fine-tuning of hyperparameters. Despite its significance, hyperparameter optimization (HPO) remains challenging for several reasons. Many existing HPO techniques rely on simplistic search methods or assume smooth, continuous loss functions, assumptions that do not always hold. Traditional methods such as grid search and Bayesian optimization often struggle to adapt swiftly and to navigate the loss landscape efficiently. Moreover, the search space for HPO is frequently high-dimensional and non-convex, making it difficult to find a global minimum efficiently. Additionally, optimal hyperparameters can vary significantly with the dataset or task at hand, further complicating the optimization process. To address these challenges, this paper presents HomOpt, an advanced HPO methodology that integrates a surrogate model framework with homotopy optimization techniques. Unlike rigid methodologies, HomOpt offers flexibility by incorporating diverse surrogate models tailored to specific optimization tasks. Our initial investigation focuses on leveraging Generalized Additive Model (GAM) surrogates within the HomOpt framework to enhance the effectiveness of existing optimization methodologies. We highlight HomOpt's ability to expedite convergence toward optimal solutions across continuous, discrete, and categorical domains. We conduct a comparative analysis of HomOpt applied to multiple optimization techniques (e.g., Random Search, TPE, Bayes, and SMAC), demonstrating improved objective performance on numerous standardized machine learning benchmarks and challenging open-set recognition tasks.
We also integrate CatBoost within the HomOpt framework as a surrogate, showcasing its adaptability and effectiveness on more complex datasets. This integration enables an evaluation against state-of-the-art methods such as BOHB, particularly on challenging computer vision datasets like CIFAR-10 and ImageNet. Comparative analyses reveal HomOpt's competitive performance with fewer iterations and point to potential reductions in execution time. All experimentation and method code can be found at: https://github.com/sabraha2/HOMOPT.
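The surrogate-plus-homotopy idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it fits a simple quadratic surrogate (a stand-in for the paper's GAM or CatBoost surrogates) to samples of a toy one-dimensional objective, then tracks a minimizer of the homotopy H(x, t) = (1 − t)·g(x) + t·f(x) as t moves from 0 (surrogate only) to 1 (true objective). All function names and settings here are illustrative.

```python
import numpy as np

# Toy non-convex 1-D objective standing in for a validation-loss landscape.
def objective(x):
    return np.sin(3.0 * x) + 0.5 * x ** 2

# Fit a convex quadratic surrogate to sampled points. HomOpt uses GAM or
# CatBoost surrogates; the quadratic here is a hypothetical stand-in chosen
# only to keep the sketch self-contained.
xs = np.linspace(-3.0, 3.0, 25)
surrogate = np.poly1d(np.polyfit(xs, objective(xs), 2))

def homotopy_minimize(f, g, x0, steps=20, lr=0.05, inner_iters=200):
    """Track a minimizer of H(x, t) = (1 - t)*g(x) + t*f(x) as t goes 0 -> 1."""
    x = float(x0)
    for t in np.linspace(0.0, 1.0, steps):
        h = lambda z: (1.0 - t) * g(z) + t * f(z)
        for _ in range(inner_iters):
            # Crude gradient descent with a central-difference gradient.
            grad = (h(x + 1e-5) - h(x - 1e-5)) / 2e-5
            x -= lr * grad
    return x

# Start far from the global minimum; the smooth surrogate stage lets the
# iterate bypass local structure before the true objective is blended in.
x_star = homotopy_minimize(objective, surrogate, x0=2.5)
print(x_star, float(objective(x_star)))
```

The same continuation pattern generalizes to hyperparameter spaces: the surrogate is refit as new evaluations arrive, and the homotopy parameter gradually shifts trust from the cheap surrogate to the expensive objective.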

Data availability

The datasets generated and/or analyzed during the current study are available as follows:

  • The machine learning benchmarks for classification tasks on tabular data using various models (MLP, SVM, Random Forest, XGBoost, and Logistic Regression) are provided through the HPOBench repository, accessible at https://github.com/automl/HPOBench.
  • Open-set classification experiments utilize the MNIST and LFW datasets. MNIST is available at http://yann.lecun.com/exdb/mnist/, and LFW is available at http://vis-www.cs.umass.edu/lfw/.
  • The neural architecture search (NAS) benchmarks used for CIFAR-10 and ImageNet16-120 are part of the NAS-Bench-201 dataset, which can be accessed at https://github.com/D-X-Y/NAS-Bench-201.

For further details regarding the benchmarks and datasets, including their configuration spaces, please refer to the original HPOBench paper (https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/93db85ed909c13838ff95ccfa94cebd9-Abstract-round2.html).

Code availability

The code used to implement and evaluate HomOpt is available at https://github.com/sabraha2/HOMOPT.


Acknowledgements

This work was funded by the DEVCOM Army Research Laboratory under cooperative agreement W911NF-20-2-0218.

Author information

Authors and Affiliations

  1. Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA

    Sophia J. Abraham, Jeffery Kinnison, Zachariah Carmichael & Walter J. Scheirer

  2. Department of Mathematics, Tennessee Tech University, Cookeville, TN 38505, USA

    Kehelwala D. G. Maduranga

  3. Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN 46556, USA

    Jonathan D. Hauenstein


Contributions

Abraham implemented the HomOpt framework, designed the benchmark experiments, performed data analysis, and led the manuscript writing. Maduranga focused on setting up and conducting the initial synthetic experiments and contributed to data interpretation. Kinnison and Carmichael assisted with experimentation and contributed to manuscript preparation. Hauenstein developed the homotopy-based optimization strategy, co-led the project, and contributed to the manuscript. Scheirer conceptualized and directed the overall research project, contributed to the algorithmic design, and was involved in all phases of writing and revision.

Corresponding author

Correspondence to Sophia J. Abraham.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Abraham, S.J., Maduranga, K.D.G., Kinnison, J. et al. A flexible framework for hyperparameter optimization using homotopy and surrogate models. Sci Rep (2026). https://doi.org/10.1038/s41598-026-39713-y


  • Received: 28 February 2025

  • Accepted: 29 July 2025

  • Published: 17 February 2026

  • DOI: https://doi.org/10.1038/s41598-026-39713-y


Keywords

  • Hyperparameter Optimization (HPO)
  • Homotopy Methods
  • Surrogate Models
  • Convergence Efficiency
  • Optimization Benchmarks
  • Machine Learning Models