Nature Communications
NEOSTI - a neuromorphic electronic-opto spatial-temporal hybrid image sensor
  • Article
  • Open access
  • Published: 26 March 2026


  • Tianyi Liu (ORCID: 0009-0006-7219-1012)1,
  • Zheng Huang1,
  • Xuecheng Wang (ORCID: 0000-0003-1778-2259)1,
  • Wanxin Shi1,2,
  • Hongwei Chen (ORCID: 0000-0002-2952-2203)1 &
  • Milin Zhang (ORCID: 0000-0001-7544-1837)1

Nature Communications (2026)


We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Electrical and electronic engineering
  • Imaging and sensing

Abstract

Image sensors in machine vision systems face significant energy-efficiency and processing challenges when storing, transferring, and processing massive amounts of data. In humans, over 80% of brain-processed information is obtained through the eyes, which detect and synchronously process information at extremely low overall power consumption. Inspired by this biological system, we propose the Neuromorphic Electronic-Opto Spatial-Temporal Imager (NEOSTI), one of the smallest fully integrated electronic-opto, eye-sized vision systems, capable of acquisition and operation in typical indoor and outdoor non-coherent environments, under both natural and artificial lighting, with no additional light-source requirement. NEOSTI combines pre-sensor processing in the optical domain, in-sensor processing with nonlinear acquisition during optical-to-electronic conversion, and near-sensor processing in the electronic domain, enabling parallel computation while sensing. NEOSTI also integrates a low-complexity Binary Neural Network to process image semantic information, attaining competitive performance on several visual processing tasks.
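The manuscript does not detail its Binary Neural Network in this section, but the arithmetic that makes BNNs attractive for near-sensor hardware can be sketched. The following illustrative Python snippet is not the authors' implementation; the helper names `binarize` and `bnn_dense` are hypothetical. It shows how, once activations and weights are constrained to {-1, +1}, a dense layer's dot product reduces to XNOR (bit-match) and popcount operations:

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} via the sign function (zero maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dense(x, w):
    """One binary dense layer: XNOR of inputs and weights, then popcount.

    For values in {-1, +1}, the dot product equals
    (matches - mismatches) = 2 * popcount(XNOR) - n.
    """
    xb, wb = binarize(x), binarize(w)
    xnor = (xb[None, :] == wb)          # True where input and weight bits match
    matches = xnor.sum(axis=1)          # popcount per output neuron
    return 2 * matches - w.shape[1]     # equivalent to xb @ wb.T

# Toy usage: 4 inputs, 2 output neurons
x = np.array([0.3, -1.2, 0.7, -0.1])
w = np.array([[0.5, -0.4, 0.9, -0.2],   # agrees with x in every position
              [-0.5, 0.4, -0.9, 0.2]])  # disagrees in every position
print(bnn_dense(x, w))                  # -> [ 4 -4]
```

Because every multiply collapses to a one-bit match test, such a layer needs only XNOR gates and counters in hardware, which is one reason binary networks suit in- and near-sensor processing.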


Data availability

The data supporting the findings of this study are available in the main text and Supplementary Materials. Source data are provided with this paper.

Code availability

The code used in this study is publicly available on GitHub and has been archived with a DOI on Zenodo30.

References

  1. Kang, I. The art of scaling: distributed and connected to sustain the golden age of computation. In Proc. IEEE International Solid-State Circuits Conference (ISSCC), Vol. 65, 25–31 (IEEE, 2022).

  2. Bong, K., Choi, S., Kim, C., Han, D. & Yoo, H.-J. A low-power convolutional neural network face recognition processor and a CIS integrated with always-on face detector. IEEE J. Solid-State Circuits 53, 115–123 (2017).

  3. Choi, J., Lee, S., Son, Y. & Kim, S. Y. Design of an always-on image sensor using an analog lightweight convolutional neural network. Sensors 20, 3101 (2020).

  4. Hsu, T.-H. et al. A 0.8 V intelligent vision sensor with tiny convolutional neural network and programmable weights using mixed-mode processing-in-sensor technique for image classification. IEEE J. Solid-State Circuits 58, 3266–3274 (2023).

  5. Hsu, T.-H. et al. A 0.5-V real-time computational CMOS image sensor with programmable kernel for feature extraction. IEEE J. Solid-State Circuits 56, 1588–1596 (2020).

  6. Lefebvre, M., Moreau, L., Dekimpe, R. & Bol, D. A 0.2-to-3.6 TOPS/W programmable convolutional imager SoC with in-sensor current-domain ternary-weighted MAC operations for feature extraction and region-of-interest detection. In Proc. IEEE International Solid-State Circuits Conference (ISSCC), 118–120 (IEEE, 2021).

  7. Song, H., Oh, S., Salinas, J., Park, S.-Y. & Yoon, E. A 5.1 ms low-latency face detection imager with in-memory charge-domain computing of machine-learning classifiers. In Proc. Symposium on VLSI Circuits, 1–2 (IEEE, 2021).

  8. Xu, H. et al. Senputing: an ultra-low-power always-on vision perception chip featuring the deep fusion of sensing and computing. IEEE Trans. Circuits Syst. I: Regul. Pap. 69, 232–243 (2021).

  9. Young, C., Omid-Zohoor, A., Lajevardi, P. & Murmann, B. A data-compressive 1.5/2.75-bit log-gradient QVGA image sensor with multi-scale readout for always-on object detection. IEEE J. Solid-State Circuits 54, 2932–2946 (2019).

  10. Kim, W.-T., Lee, H., Kim, J.-G. & Lee, B.-G. An on-chip binary-weight convolution CMOS image sensor for neural networks. IEEE Trans. Ind. Electron. 68, 7567–7576 (2020).

  11. Park, S., Cho, J., Lee, K. & Yoon, E. A 243.3 pJ/pixel bio-inspired time-stamp-based 2D optic flow sensor for artificial compound eyes. In Proc. IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 126–127 (IEEE, 2014).

  12. Yamazaki, T. et al. A 1 ms high-speed vision chip with 3D-stacked 140 GOPS column-parallel PEs for spatio-temporal image processing. In Proc. IEEE International Solid-State Circuits Conference (ISSCC), 82–83 (IEEE, 2017).

  13. Lin, X. et al. All-optical machine learning using diffractive deep neural networks. Science 361, 1004–1008 (2018).

  14. Chang, J., Sitzmann, V., Dun, X., Heidrich, W. & Wetzstein, G. Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification. Sci. Rep. 8, 12324 (2018).

  15. Zhou, T. et al. Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit. Nat. Photonics 15, 367–373 (2021).

  16. Encyclopaedia Britannica. Sensory reception: human vision: structure and function of the human eye. Encyclopaedia Britannica 27, 179 (1987).

  17. Skorka, O. & Joseph, D. Toward a digital camera to rival the human eye. J. Electron. Imaging 20, 033009 (2011).

  18. Omnivision. OVB0B. https://www.ovt.com/products/ovb0b/ (2024).

  19. Omnivision. OX03J10. https://www.ovt.com/products/ox03j10/ (2024).

  20. Muller, R. I²L timing circuit for the 1 ms–10 s range. IEEE J. Solid-State Circuits 12, 139–143 (1977).

  21. Ohta, J. Smart CMOS Image Sensors and Applications, 1st edn (CRC Press, 2008).

  22. Chen, D. G., Matolin, D., Bermak, A. & Posch, C. Pulse-modulation imaging: review and performance analysis. IEEE Trans. Biomed. Circuits Syst. 5, 64–82 (2011).

  23. Deng, L. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Process. Mag. 29, 141–142 (2012).

  24. Xiao, H., Rasul, K. & Vollgraf, R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. Preprint at https://arxiv.org/abs/1708.07747 (2017).

  25. Google Creative Lab. Quick, Draw! dataset. https://quickdraw.withgoogle.com/data (2017).

  26. Wood, E., Baltrušaitis, T., Morency, L.-P., Robinson, P. & Bulling, A. Learning an appearance-based gaze estimator from one million synthesised images. In Proc. Ninth Biennial ACM Symposium on Eye Tracking Research & Applications, 131–138 (ACM, 2016).

  27. Gorelick, L., Blank, M., Shechtman, E., Irani, M. & Basri, R. Actions as space-time shapes. IEEE Trans. Pattern Anal. Mach. Intell. 29, 2247–2253 (2007).

  28. Gu, C. et al. Transparent and energy-efficient electrochromic AR display with minimum crosstalk using the pixel confinement effect. Device 1, 100126 (2023).

  29. Huang, Z. et al. Pre-sensor computing with compact multilayer optical neural network. Sci. Adv. 10, eado8516 (2024).

  30. Liu, T. et al. NEOSTI - a neuromorphic electronic-opto spatial-temporal hybrid image sensor. Zenodo https://doi.org/10.5281/zenodo.18483098 (2026).

  31. Nvidia. H100 GPU. https://www.nvidia.com/en-sg/data-center/h100/ (2024).

  32. Omnivision. OV9281. https://www.ovt.com/products/ov9281/ (2024).

  33. Nvidia. A100 Tensor Core GPU. https://www.nvidia.com/en-sg/data-center/a100/ (2024).

  34. Yao, M. et al. Spike-based dynamic computing with asynchronous sensing-computing neuromorphic chip. Nat. Commun. 15, 4464 (2024).

  35. Eki, R. et al. A 1/2.3-inch 12.3 Mpixel with on-chip 4.97 TOPS/W CNN processor back-illuminated stacked CMOS image sensor. In Proc. IEEE International Solid-State Circuits Conference (ISSCC), 154–156 (IEEE, 2021).

  36. Wang, T. et al. Image sensing with multilayer nonlinear optical neural networks. Nat. Photonics 17, 408–415 (2023).

  37. Huo, Y. et al. Optical neural network via loose neuron array and functional learning. Nat. Commun. 14, 2535 (2023).

  38. Chen, Y. et al. All-analog photoelectronic chip for high-speed vision tasks. Nature 623, 48–57 (2023).

  39. Ashtiani, F., Geers, A. J. & Aflatouni, F. An on-chip photonic deep neural network for image classification. Nature 606, 501–506 (2022).

  40. Xu, X. et al. 11 TOPS photonic convolutional accelerator for optical neural networks. Nature 589, 44–51 (2021).

  41. Feldmann, J. et al. Parallel convolutional processing using an integrated photonic tensor core. Nature 589, 52–58 (2021).

  42. Valeton, J. & van Norren, D. Light adaptation of primate cones: an analysis based on extracellular data. Vis. Res. 23, 1539–1547 (1983).

  43. Boynton, R. M. & Whitten, D. N. Visual adaptation in monkey cones: recordings of late receptor potentials. Science 170, 1423–1426 (1970).


Acknowledgements

This work was supported by the National Natural Science Foundation of China (NSFC) (grants 62227801 (M.Z.) and 62135009 (H.C.)) and the National Key Research and Development Program of China (2024YFE0203600 (H.C.)).

Author information

Author notes
  1. These authors contributed equally: Tianyi Liu, Zheng Huang, Xuecheng Wang.

Authors and Affiliations

  1. Department of Electronic Engineering, Tsinghua University, Beijing, China

    Tianyi Liu, Zheng Huang, Xuecheng Wang, Wanxin Shi, Hongwei Chen & Milin Zhang

  2. China Mobile Research Institute, Beijing, China

    Wanxin Shi


Contributions

T.L., Z.H., and X.W. conceived the project. Z.H. and X.W. designed the hardware. T.L. conducted the model training. T.L., Z.H., X.W., and W.S. performed the measurements. T.L., Z.H., and X.W. wrote the manuscript. M.Z. and H.C. supervised the project. All authors contributed to the discussion of the experimental results and reviewed the manuscript.

Corresponding authors

Correspondence to Hongwei Chen or Milin Zhang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Communications thanks Juan A. Leñero-Bardallo, Sijie Ma, and Christoph Posch for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information (PDF)

Transparent Peer Review file (PDF)

Source data

Source Data 1 (XLSX)

Source Data 2 (CSV)

Source Data 3 (XLSX)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Reprints and permissions

About this article


Cite this article

Liu, T., Huang, Z., Wang, X. et al. NEOSTI - a neuromorphic electronic-opto spatial-temporal hybrid image sensor. Nat Commun (2026). https://doi.org/10.1038/s41467-026-71091-x

Download citation

  • Received: 09 June 2025

  • Accepted: 13 March 2026

  • Published: 26 March 2026

  • DOI: https://doi.org/10.1038/s41467-026-71091-x

