Abstract
Rapid categorization of vocalizations enables adaptive behavior across species. While categorical perception is thought to arise in the neocortex, humans and animals could benefit from a functional organization tailored to ethologically relevant sound processing earlier in the auditory pathway. Here we developed two-photon calcium imaging in the awake echolocating bat (Eptesicus fuscus) to study the representation of vocalizations in the inferior colliculus, which is as few as two synapses from the inner ear. Echolocating bats rely on frequency-sweep-based vocalizations for social communication and navigation. Auditory playback experiments demonstrated that individual neurons responded selectively to social or navigation calls, enabling robust population-level decoding across categories. When social calls were morphed into navigation calls in equidistant step-wise increments, individual neurons showed switch-like properties and population-level response patterns sharply transitioned at the category boundary. Strikingly, category-selective neurons formed spatial clusters, independent of tonotopy within the dorsal cortex of the inferior colliculus. These findings support a revised view of categorical processing in which specified channels for ethologically relevant sounds are spatially segregated early in the auditory hierarchy, enabling rapid subcortical organization into categorical primitives.
Data availability
The preprocessed imaging data are publicly available via Zenodo at https://doi.org/10.5281/zenodo.14743696 (ref. 63). The raw data will be made available upon request to the corresponding author.
Code availability
The custom-built MATLAB code used for stimulus generation and analysis will be made available upon request to the corresponding author.
References
May, B., Moody, D. B. & Stebbins, W. C. Categorical perception of conspecific communication sounds by Japanese macaques, Macaca fuscata. J. Acoust. Soc. Am. 85, 837–847 (1989).
Barker, A. J. et al. Cultural transmission of vocal dialect in the naked mole-rat. Science 371, 503–507 (2021).
Ehret, G. Infant rodent ultrasounds—a gate to the understanding of sound communication. Behav. Genet. 35, 19–29 (2005).
Wyttenbach, R. A., May, M. L. & Hoy, R. R. Categorical perception of sound frequency by crickets. Science 273, 1542–1544 (1996).
Baugh, A. T., Akre, K. L. & Ryan, M. J. Categorical perception of a natural, multivariate signal: mating call recognition in túngara frogs. Proc. Natl Acad. Sci. USA 105, 8985–8988 (2008).
Reiss, D., McCowan, B. & Marino, L. Communicative and other cognitive characteristics of bottlenose dolphins. Trends Cogn. Sci. 1, 140–145 (1997).
Reichmuth, C. & Casey, C. Vocal learning in seals, sea lions, and walruses. Curr. Opin. Neurobiol. 28, 66–71 (2014).
Prather, J., Nowicki, S., Anderson, R., Peters, S. & Mooney, R. Neural correlates of categorical perception in learned vocal communication. Nat. Neurosci. 12, 221–228 (2009).
Wright, G. S., Chiu, C., Xian, W., Wilkinson, G. S. & Moss, C. F. Social calls predict foraging success in big brown bats. Curr. Biol. 24, 885–889 (2014).
Liu, S. T., Montes-Lourido, P., Wang, X. & Sadagopan, S. Optimal features for auditory categorization. Nat. Commun. 10, 1302 (2019).
Theunissen, F. E. & Elie, J. E. Neural processing of natural sounds. Nat. Rev. Neurosci. 15, 355–366 (2014).
Tsunada, J. & Cohen, Y. E. Neural mechanisms of auditory categorization: from across brain areas to within local microcircuits. Front. Neurosci. 8, 161 (2014).
Yin, P., Strait, D. L., Radtke-Schuller, S., Fritz, J. B. & Shamma, S. A. Dynamics and hierarchical encoding of non-compact acoustic categories in auditory and frontal cortex. Curr. Biol. 30, 1649–1663.e5 (2020).
Grill-Spector, K. & Weiner, K. S. The functional architecture of the ventral temporal cortex and its role in categorization. Nat. Rev. Neurosci. 15, 536–548 (2014).
Surlykke, A. & Moss, C. F. Echolocation behavior of big brown bats, Eptesicus fuscus, in the field and the laboratory. J. Acoust. Soc. Am. 108, 2419–2429 (2000).
Gadziola, M. A., Grimsley, J. M. S., Faure, P. A. & Wenstrup, J. J. Social vocalizations of big brown bats vary with behavioral context. PLoS ONE 7, e44550 (2012).
Montoya, J., Lee, Y. & Salles, A. Social communication in big brown bats. Front. Ecol. Evol. 10, 903107 (2022).
King, A. J. & Nelken, I. Unraveling the principles of auditory cortical processing: can we learn from the visual system? Nat. Neurosci. 12, 698–701 (2009).
Fremouw, T., Faure, P. A., Casseday, J. H. & Covey, E. Duration selectivity of neurons in the inferior colliculus of the big brown bat: tolerance to changes in sound level. J. Neurophysiol. 94, 1869–1878 (2005).
Parras, G. G. et al. Neurons along the auditory pathway exhibit a hierarchical organization of prediction error. Nat. Commun. 8, 2148 (2017).
Carbajal, G. V. & Malmierca, M. S. The neuronal basis of predictive coding along the auditory pathway: from the subcortical roots to cortical deviance detection. Trends Hear. 22, 2331216518784822 (2018).
Thomas, J. M. et al. Stimulus-specific adaptation in specialized neurons in the inferior colliculus of the big brown bat, Eptesicus fuscus. Hear. Res. 291, 34–40 (2012).
Winer, J. A. & Schreiner, C. E. The Inferior Colliculus (Springer, 2005).
Barnstedt, O., Keating, P., Weissenberger, Y., King, A. J. & Dahmen, J. C. Functional microarchitecture of the mouse dorsal inferior colliculus revealed through in vivo two-photon calcium imaging. J. Neurosci. 35, 10927–10939 (2015).
Winer, J. A., Larue, D. T., Diehl, J. J. & Hefti, B. J. Auditory cortical projections to the cat inferior colliculus. J. Comp. Neurol. 400, 147–174 (1998).
Guo, W. et al. Robustness of cortical topography across fields, laminae, anesthetic states, and neurophysiological signal types. J. Neurosci. 32, 9159–9172 (2012).
Schreiner, C. E. & Langner, G. Laminar fine structure of frequency organization in auditory midbrain. Nature 388, 383–386 (1997).
Klug, A. et al. Response selectivity for species-specific calls in the inferior colliculus of Mexican free-tailed bats is generated by inhibition. J. Neurophysiol. 88, 1941–1954 (2002).
Poon, P. W. F., Sun, X., Kamada, T. & Jen, P. H.-S. Frequency and space representation in the inferior colliculus of the FM bat, Eptesicus fuscus. Exp. Brain Res. 79, 83–91 (1990).
Koay, G., Heffner, H. E. & Heffner, R. S. Audiogram of the big brown bat (Eptesicus fuscus). Hear. Res. 105, 202–210 (1997).
Wong, A. B. & Borst, J. G. G. Tonotopic and non-auditory organization of the mouse dorsal inferior colliculus revealed by two-photon imaging. eLife 8, e49091 (2019).
Oliver, D. L. Projections to the inferior colliculus from the anteroventral cochlear nucleus in the cat: possible substrates for binaural interaction. J. Comp. Neurol. 264, 24–46 (1987).
Wright, G., Chiu, C., Xian, W., Moss, C. & Wilkinson, G. Social calls of flying big brown bats (Eptesicus fuscus). Front. Physiol. 4, 214 (2013).
Chen, C., Xian, W. & Moss, C. F. Flying in silence: echolocating bats cease vocalizing to avoid sonar jamming. Proc. Natl Acad. Sci. USA 105, 13116–13121 (2008).
Salles, A. et al. Neural response selectivity to natural sounds in the bat midbrain. Neuroscience 434, 200–211 (2020).
Woolley, S. M. N. & Portfors, C. V. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain. Hear. Res. 305, 45–56 (2013).
Kawahara, H. & Morise, M. Interactive tools for making vocoder-based signal processing accessible: flexible manipulation of speech attributes for explorational research and education. Acoust. Sci. Technol. 45, 48–51 (2024).
Tsunada, J., Lee, J. H. & Cohen, Y. E. Representation of speech categories in the primate auditory cortex. J. Neurophysiol. 105, 2634–2646 (2011).
Freedman, D. J., Riesenhuber, M., Poggio, T. & Miller, E. K. Categorical representation of visual stimuli in the primate prefrontal cortex. Science 291, 312–316 (2001).
Mysore, S. P. & Knudsen, E. I. Flexible categorization of relative stimulus strength by the optic tectum. J. Neurosci. 31, 7745–7752 (2011).
Kar, M. et al. Vocalization categorization behavior explained by a feature-based auditory categorization model. eLife 11, e78278 (2022).
Achutha, A. C., Peremans, H., Firzlaff, U. & Vanderelst, D. Efficient encoding of spectrotemporal information for bat echolocation. PLoS Comput. Biol. 17, e1009052 (2021).
Diebold, C. A. et al. Rapid sensorimotor adaptation to auditory midbrain silencing in free-flying bats. Curr. Biol. 34, 5507–5517.e3 (2024).
Bar-Yosef, O., Rotman, Y. & Nelken, I. Responses of neurons in cat primary auditory cortex to bird chirps: effects of temporal and spectral context. J. Neurosci. 22, 8619–8632 (2002).
Holmstrom, L. A., Eeuwes, L. B. M., Roberts, P. D. & Portfors, C. V. Efficient encoding of vocalizations in the auditory midbrain. J. Neurosci. 30, 802–819 (2010).
Machens, C. K., Wehr, M. S. & Zador, A. M. Linearity of cortical receptive fields measured with natural sounds. J. Neurosci. 24, 1089–1100 (2004).
Kuo, R. I. & Wu, G. K. The generation of direction selectivity in the auditory system. Neuron 73, 1016–1027 (2012).
Palmer, A. R., Shackleton, T. M., Sumner, C. J., Zobay, O. & Rees, A. Classification of frequency response areas in the inferior colliculus reveals continua not discrete classes. J. Physiol. 591, 4003–4025 (2013).
Mittmann, D. H. & Wenstrup, J. J. Combination-sensitive neurons in the inferior colliculus. Hear. Res. 90, 185–191 (1995).
Leroy, S. A. & Wenstrup, J. J. Spectral integration in the inferior colliculus of the mustached bat. J. Neurosci. 20, 8533–8541 (2000).
Portfors, C. V. & Wenstrup, J. J. Excitatory and facilitatory frequency response areas in the inferior colliculus of the mustached bat. Hear. Res. 168, 131–138 (2002).
Portfors, C. V. & Felix, R. A. Spectral integration in the inferior colliculus of the CBA/CaJ mouse. Neuroscience 136, 1159–1170 (2005).
Lee, N., Schrode, K. M. & Bee, M. A. Nonlinear processing of a multicomponent communication signal by combination-sensitive neurons in the anuran inferior colliculus. J. Comp. Physiol. A 203, 749–772 (2017).
Brunet-Ross, K. & Wilkinson, G. S. Methods for age estimation and the study of senescence in bats. In Ecological and Behavioral Methods for the Study of Bats (eds Kunz, T. H. & Parsons, S.) 315–325 (Johns Hopkins University Press, 2009).
Yang, G., Pan, F., Parkhurst, C. N., Grutzendler, J. & Gan, W.-B. Thinned-skull cranial window technique for long-term imaging of the cortex in live mice. Nat. Protoc. 5, 201–208 (2010).
Shih, A. Y., Mateo, C., Drew, P. J., Tsai, P. S. & Kleinfeld, D. A polished and reinforced thinned-skull window for long-term imaging of the mouse brain. J. Vis. Exp. 61, 3742 (2012).
Pachitariu, M. et al. Suite2p: beyond 10,000 neurons with standard two-photon microscopy. Preprint at bioRxiv https://doi.org/10.1101/061507 (2017).
Heckman, J. J. et al. High-precision spatial localization of mouse vocalizations during social interaction. Sci. Rep. 7, 3017 (2017).
Wohlgemuth, M. & Moss, C. F. Midbrain auditory selectivity to natural sounds. Proc. Natl Acad. Sci. USA 113, 2508–2513 (2016).
Kawahara, H., Masuda-Katsuse, I. & de Cheveigné, A. Restructuring speech representations using a pitch-adaptive time–frequency smoothing and an instantaneous-frequency-based F0 extraction: possible role of a repetitive structure in sounds. Speech Commun. 27, 187–207 (1999).
Ito, T. & Oliver, D. L. The basic circuit of the IC: tectothalamic neurons with different patterns of synaptic organization send different messages to the thalamus. Front. Neural Circuits 6, 48 (2012).
Bagur, S. et al. Go/No-Go task engagement enhances population representation of target stimuli in primary auditory cortex. Nat. Commun. 9, 2529 (2018).
Lawlor, J. et al. Spatially clustered neurons in the bat midbrain encode vocalization categories Dataset. Zenodo https://doi.org/10.5281/zenodo.14743696 (2025).
Acknowledgements
We thank A. Salles for providing the conspecific bat calls, B. Englitz for providing the mouse USVs, C. Diebold for animal care and surgical support and Y. Boubenec for help with the spectral distance analysis. We thank C. Drieu, S. Moore and N. Kothari for their thoughtful feedback on our manuscript. This work was supported by an NIH BRAIN Initiative R34 grant R34NS118462 (K.V.K., C.F.M. and M.J.W.), NSF grant NCS-FO 1734744 and ONR grant N00014-17-1-2736 (C.F.M.).
Author information
Authors and Affiliations
Contributions
Conceptualization: J.L., M.J.W., C.F.M. and K.V.K.; experiments: J.L. and M.J.W.; data analysis: J.L. and K.V.K.; writing—original draft: J.L. and K.V.K.; writing—review and editing: J.L., M.J.W., C.F.M. and K.V.K.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Neuroscience thanks Manuel Malmierca and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data
Extended Data Fig. 1 Histology and immunostaining for GCaMP6f and GAD67.
a, GFP immunostaining shows expression of GCaMP6f throughout the IC. The black box indicates the location of the site highlighted in b. b, Immunostaining of the IC locus highlighted in a for cell nuclei (DAPI, top left), GCaMP6f (GFP, top right) and inhibitory neurons (GAD67, bottom left), with the merged image (bottom right). The merged image suggests minimal overlap between GCaMP6f and GAD67 expression. Example neurons numbered on the merged image are displayed on the right: 1) GAD67+ and GCaMP6f+ (yellow), 2) GCaMP6f+ and GAD67− (green), 3) GCaMP6f− and GAD67+ (red). The staining procedure was repeated on the brains of two bats.
Extended Data Fig. 2 Site depth estimation corrected by skull thickness.
a, Mean images (150 frames) from the 1X volumetric stack for an example 2X imaging site (highlighted by the black-to-gray boxes; colors correspond to depth in b), taken every 10 µm down to the imaging depth of the example site (bottom image). b, Raw fluorescence distribution for the images displayed in a. Note that shallower images have a longer high-fluorescence tail, indicating greater skull coverage. The criterion used for skull identification (95th percentile of the overall volumetric stack distribution) is shown as a dashed line. c, Skull thickness and corresponding depth estimation for the example volumetric stack in a and b.
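The depth-correction procedure lends itself to a simple reimplementation. The sketch below (in Python rather than the authors' MATLAB) assumes a volumetric stack of mean images ordered from the brain surface to the imaging plane and treats a plane as skull-dominated when most of its pixels exceed the 95th-percentile criterion quoted in the legend; the 10-µm plane spacing and the 50% bright-pixel rule are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch (not the authors' code) of skull-thickness-corrected depth
# estimation: a plane is labeled skull-dominated when a large fraction of its
# pixels exceeds the 95th percentile of the whole volumetric stack, and the
# site depth is reduced by the thickness of those planes.
import numpy as np

def estimate_corrected_depth(stack, plane_spacing_um=10.0, skull_pixel_fraction=0.5):
    """stack: (n_planes, h, w) mean images, ordered surface -> imaging site."""
    threshold = np.percentile(stack, 95)                 # criterion quoted in the legend
    bright_frac = (stack > threshold).mean(axis=(1, 2))  # bright-pixel fraction per plane
    n_skull_planes = int((bright_frac > skull_pixel_fraction).sum())
    skull_thickness_um = n_skull_planes * plane_spacing_um
    nominal_depth_um = (stack.shape[0] - 1) * plane_spacing_um
    return nominal_depth_um - skull_thickness_um, skull_thickness_um

# Toy example: 20 planes of 256 x 256 pixels with three bright, skull-like planes on top.
rng = np.random.default_rng(0)
stack = rng.random((20, 256, 256))
stack[:3] += 2.0
depth_um, skull_um = estimate_corrected_depth(stack)
print(f"corrected depth: {depth_um:.0f} um (estimated skull thickness: {skull_um:.0f} um)")
```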
Extended Data Fig. 3 Tonotopic gradient in example sites for 3 separate bats.
a, Example site from the left DCIC (LIC) of bat 1. Cell ROIs are color-coded according to their best frequency (scale in b). Cells without observable tuning are shown in gray ('NT', not tuned). b, Example site from the left DCIC of bat 2. c, Example site from the right DCIC (RIC) of bat 3.
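For readers who want to reproduce a map of this kind, the following sketch assigns each ROI a best frequency as the tone evoking its largest mean ΔF/F and flags cells that fail a response criterion as 'NT'. The tone set, synthetic response matrix and the 0.2 ΔF/F threshold are assumptions for illustration only, not the study's tuning criterion.

```python
# Illustrative best-frequency assignment per ROI; synthetic responses, assumed
# tone set and tuning criterion.
import numpy as np

freqs_khz = np.array([4, 5.7, 8, 11.3, 16, 22.7, 32, 45.3])      # example tone frequencies
rng = np.random.default_rng(1)
responses = rng.normal(0.05, 0.05, size=(200, freqs_khz.size))   # cells x freqs, mean dF/F
peaks = rng.integers(0, freqs_khz.size, size=150)
responses[np.arange(150), peaks] += 0.5                          # give 150 of 200 cells a clear peak

best_freq = freqs_khz[responses.argmax(axis=1)]                  # best frequency per cell
tuned = responses.max(axis=1) > 0.2                              # assumed tuning criterion
best_freq = np.where(tuned, best_freq, np.nan)                   # NaN marks 'NT' cells
print(f"{int(np.isnan(best_freq).sum())} of {best_freq.size} cells labeled not tuned")
```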
Extended Data Fig. 4 Acoustic distribution of vocalizations.
a, Average power spectrum for each vocalization sequence, normalized by its maximum. b, Average f0 end frequency as a function of start frequency for each vocalization sequence. Each point represents one sequence (social: red; navigation: blue; mouse USVs: orange). Black circles highlight the stimuli presented in Fig. 3; all stimuli were presented for Extended Data Fig. 7a,b. End and start frequencies of conspecific sequences differ significantly between categories (mean social: 22.9 kHz; mean navigation: 29.7 kHz; KW test, p = 0.012 and p = 9.2e-4, respectively). c, Average vocalization duration for each sequence. Social and navigation sequences can be separated by duration (mean social: 8 ms; mean navigation: 4.5 ms; KW test, p = 4.9e-4). d, Vocalization f0 slope as a function of f0 peak frequency. Social vocalizations have shallower slopes (mean social: 0.08 oct./ms; mean navigation: 0.17 oct./ms; KW test, p = 3.5e-4).
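The group comparisons in this figure are Kruskal–Wallis tests on per-sequence acoustic features. A minimal sketch with scipy, using placeholder values rather than the measured durations, would look like this:

```python
# Kruskal-Wallis comparison of an acoustic feature (here, duration) between
# social and navigation sequences; the values are placeholders, not the data.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)
social_duration_ms = rng.normal(8.0, 2.0, 30)        # legend reports a ~8 ms mean
navigation_duration_ms = rng.normal(4.5, 1.5, 30)    # legend reports a ~4.5 ms mean

stat, p = kruskal(social_duration_ms, navigation_duration_ms)
print(f"KW test on duration: H = {stat:.2f}, p = {p:.2g}")
```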
Extended Data Fig. 5 Selectivity index method.
a, Percentage of cells showing significant stimulus-evoked responses (ANOVA, α < 0.05) for tones (black) and vocalizations (gray) for each recorded site. Sites in which fewer than 50% of cells showed significant tone-evoked responses were excluded from the vocalization analysis. b, Mean percentage of significantly selective cells, based on the SI, for each sound category. Note that the percentage of cells selective for conspecific calls is consistently higher than for other categories. c, Average sound-evoked Δf/f per category for significantly selective cells, sorted by their selectivity index (ncells = 1,885). Left: average sound-evoked activity for social calls. Middle: average sound-evoked activity for navigation calls. Right: difference between social and navigation activity. d, Average selectivity index distribution across bats (shading: mean ± SEM). Average significance boundaries are shown in blue (2.5th percentile, SIbound = −0.2959) and red (97.5th percentile, SIbound = 0.3109).
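The legend does not spell out the SI formula, so the sketch below assumes a standard contrast index, SI = (S − N)/(S + N), computed from trial-averaged responses, with significance bounds taken from the 2.5th and 97.5th percentiles of a label-shuffle null as in panel d. Treat it as an illustration rather than the authors' implementation.

```python
# Assumed selectivity-index analysis: SI = (S - N) / (S + N) on trial-averaged
# responses, with significance bounds from a label-shuffle null distribution.
import numpy as np

def selectivity_index(social_resp, nav_resp):
    s, n = social_resp.mean(), nav_resp.mean()
    return (s - n) / (s + n + 1e-12)

def shuffle_bounds(social_resp, nav_resp, n_shuffles=1000, seed=0):
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([social_resp, nav_resp])
    n_social = social_resp.size
    null = [selectivity_index(*np.split(rng.permutation(pooled), [n_social]))
            for _ in range(n_shuffles)]
    return np.percentile(null, [2.5, 97.5])            # bounds as in panel d

rng = np.random.default_rng(3)
social = rng.gamma(2.0, 0.3, 20)                       # toy trial-wise dF/F, social calls
navigation = rng.gamma(2.0, 0.15, 20)                  # toy trial-wise dF/F, navigation calls
si = selectivity_index(social, navigation)
lo, hi = shuffle_bounds(social, navigation)
print(f"SI = {si:.2f}; significant: {si < lo or si > hi}")
```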
Extended Data Fig. 6 Linear decoding method.
a, Decoding weights for the time-course decoder (Fig. 3f) as a function of selectivity index per cell (black dots). Left: decoding weights do not increase with SI at 0 s after stimulus onset (linear regression in red, adjusted R2 = −0.000129, p = 0.623). Right: decoding weights increase linearly with SI at 60 ms after stimulus onset (adjusted R2 = 0.243, p < 10−4). b, Mean decoding weights from pairwise decoding of social vs navigation (Fig. 3g,h) increase with SI (linear regression in red, adjusted R2 = 0.0374, p = 9.12e-51), but not for same-stimulus decoding (social only, in gray; linear regression in purple, adjusted R2 = 2.03e-05, p = 0.29).
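A hedged sketch of this kind of analysis: train a linear classifier (logistic regression here, an assumption; the paper's decoder may differ) on population activity to separate social from navigation trials, then regress the per-cell decoding weights against the selectivity index. Data are synthetic.

```python
# Illustrative pairwise linear decoding plus regression of per-cell decoder
# weights against the selectivity index; classifier choice and data assumed.
import numpy as np
from scipy.stats import linregress
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_cells = 120, 300
si = rng.uniform(-1, 1, n_cells)                       # per-cell selectivity index
labels = rng.integers(0, 2, n_trials)                  # 0 = navigation, 1 = social
# In this toy population, cells with positive SI respond more on social trials.
X = rng.normal(0, 1, (n_trials, n_cells)) + np.outer(labels - 0.5, si)

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, labels, cv=5).mean()
weights = clf.fit(X, labels).coef_.ravel()
fit = linregress(si, weights)
print(f"decoding accuracy: {accuracy:.2f}; weight-vs-SI slope p = {fit.pvalue:.2g}")
```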
Extended Data Fig. 7 Population categorical boundaries are maintained with expanded set of stimuli.
a, Left: pairwise decoding accuracy for the set of stimuli in Fig. 3g,h, averaged across additional bats (nbats = 2; 1 male, 1 female; ncells = 1,575). As in Fig. 3g,h, decoding accuracy within categories is lower than across categories (white = 50% chance, red = 100% accuracy). Right: mean decoding accuracy within and across categories (error bars: mean ± SEM). Higher-than-chance decoding accuracy is indicated by a single star above each bar. Decoding accuracy is significantly lower within than across categories (one-way ANOVA, F(6,337) = 33.3, p = 1.7728e-31; post hoc Tukey's HSD, pS/N/S < 10−4, pS/S/U < 10−4, pS/N/U < 10−4); in addition, decoding accuracy across bat call categories is significantly lower than between bat calls and mouse USVs (pN/S/S/U = 0.0057). b, Left: pairwise decoding accuracy with an extended set of stimuli (nbats = 2, same as in a). Social exemplars were extended from 6 to 9; temporally matched navigation exemplars are spectrally varied; USVs remain the same. Decoding accuracy remains significantly lower within than across categories (one-way ANOVA, F(6,655) = 67.2, p = 4.5820e-65; post hoc Tukey's HSD, pN/N/S = 9.4866e-09, pN/S/U < 10−4, pN/N/U < 10−4, pS/S/U < 10−4, pS/N/U = 3.5542e-15). In addition, decoding accuracy across bat vocalization categories is significantly lower than between bat vocalizations and mouse USVs (pN/S/S/U = 2.5996e-10, pN/S/N/U = 2.1133e-18). c, Population response to stimulus reversal. Left: example spectrograms of the forward (original vocalization) and corresponding reversed stimulus for a social sequence exemplar. Center: distribution of the difference between mean evoked responses to forward and reversed stimuli for each category (black, normalized) and the corresponding shuffle (100 shuffles, normalized, gray). Numerical values: percentage of cells sensitive to reversal (α < 0.05, two-sided from shuffle) per category. The percentage of cells that changed their response when the stimulus was reversed is highest for social stimuli (31.2%), followed by navigation stimuli (22.8%), and lowest for mouse USVs (15.9%).
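The statistics reported here are a one-way ANOVA over within- and across-category decoding accuracies followed by Tukey's HSD. A minimal sketch with placeholder accuracy values (the group names and numbers are not the study's) is shown below.

```python
# One-way ANOVA over decoding-accuracy groups followed by Tukey's HSD;
# group names and accuracy values are placeholders only.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)
groups = {
    "social/social": rng.normal(0.62, 0.05, 15),       # within-category pairs
    "nav/nav": rng.normal(0.60, 0.05, 15),
    "social/nav": rng.normal(0.80, 0.05, 15),          # across bat-call categories
    "bat/USV": rng.normal(0.88, 0.05, 15),             # bat call vs mouse USV
}
F, p = f_oneway(*groups.values())
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(f"one-way ANOVA: F = {F:.1f}, p = {p:.2g}")
print(pairwise_tukeyhsd(values, labels))
```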
Extended Data Fig. 8 Additional auditory features for selective and non-selective populations.
a, Probability density estimate of tuning and bandwidth for social-selective cells (left) and navigation-selective cells (right). b, Probability density estimates for both cell populations, represented as contours, showing a large density overlap (two-sample Kolmogorov–Smirnov test, KS statistic = 0.1889, p = 0.0707). c, Average peak evoked responses as Δf/f to different types of complex sounds: white noise (WN), downsweep (DS) and upsweep (US), for social-selective, navigation-selective and non-selective cells (shading: mean ± SEM; two-way ANOVA, F(2,14) = 0.03, p = 0.9721). d, Mean upsweep rate tuning as normalized peak Δf/f for social-selective, navigation-selective and non-selective cells (shading: mean ± SEM; two-way ANOVA, F(2,44) = 0.06, p = 0.9459). e, Mean white-noise duration tuning as normalized peak Δf/f for social-selective, navigation-selective and non-selective cells (shading: mean ± SEM; two-way ANOVA, F(2,44) = 0.04, p = 0.9648).
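The population comparison in b can be illustrated with a two-sample Kolmogorov–Smirnov test; applying it to a single tuning dimension (best frequency) and the toy distributions below are simplifying assumptions.

```python
# Two-sample KS test between best-frequency distributions of social- and
# navigation-selective cells; distributions are synthetic placeholders.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)
bf_social_khz = rng.lognormal(mean=np.log(20), sigma=0.4, size=200)
bf_navigation_khz = rng.lognormal(mean=np.log(22), sigma=0.4, size=200)

stat, p = ks_2samp(bf_social_khz, bf_navigation_khz)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
```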
Extended Data Fig. 9 The 50% continuum boundary is the most salient for the recorded dataset.
a, Example continuum and schematic representation of CI computation for different boundaries (between-category difference, BCD, in gray; within-category difference, WCD, in black, for each tested boundary). b, Average population activity as Δf/f for continua-responsive cells (n = 2 bats, n = 1,122 cells with significant sound-evoked responses). The population average exhibits a shift from high to low activity centered around the 50% boundary, indicating a preference for the social segment of the continua. This pattern is reminiscent of the higher proportion of social-selective cells. c, Distribution of the category index for the 50% boundary (selecting the preferred continuum). The distribution is positively skewed, suggesting a trend toward categorical responses. d, 95th percentile for shuffle (gray) and data (black) for each boundary. The 95th percentile for the data lies outside the shuffle distribution only for the 50% boundary, indicating that for the other boundaries the CIs are not significantly different from noise. e, Population average (Δf/f, normalized per cell; error bars: mean ± SEM) along the morphing continuum for each boundary (n = 2 bats, n = 1,122 cells with significant sound-evoked responses). A four-parameter logistic fit of the population average centers around the 50% boundary, regardless of the tested boundary. f, Inflection point of the logistic fit for each boundary (black dots) and their average (dotted black line). The population transition is centered around the 50% boundary, regardless of the CI boundary used (54.2 ± 9.4%).
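The population transition in e,f is summarized by a four-parameter logistic fit whose inflection point estimates the category boundary. A sketch with scipy.optimize.curve_fit on synthetic responses (the parameterization and values are assumed, not the study's fit routine):

```python
# Four-parameter logistic fit along the morph continuum; the inflection point
# (x50) estimates the category boundary. Parameterization and data are assumed.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, bottom, top, x50, slope):
    return bottom + (top - bottom) / (1.0 + np.exp(-slope * (x - x50)))

morph_pct = np.linspace(0, 100, 11)                    # % navigation along the continuum
rng = np.random.default_rng(7)
resp = logistic4(morph_pct, 0.4, 1.0, 50.0, -0.3) + rng.normal(0, 0.02, morph_pct.size)

p0 = [resp.min(), resp.max(), 50.0, -0.1]              # rough starting values
params, _ = curve_fit(logistic4, morph_pct, resp, p0=p0, maxfev=10000)
print(f"estimated inflection point: {params[2]:.1f}% along the continuum")
```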
Extended Data Fig. 10 Relationship between tuning and selectivity index examples and composite view.
a, Tuning map for the example site in Fig. 4, color-coded according to each neuron's best frequency. b, Corresponding social/navigation selectivity map for the same example site, color-coded by each neuron's selectivity index. Note that the selectivity 'hotspots' do not follow the tonotopic gradient. c, Cumulative distributions of cluster centers and the corresponding shuffle distribution (randomly drawn from the coordinates of all cells, gray) along the composite RL gradient of all imaged sites. Left: social cluster centers fall just outside the shuffle distribution (two-sided permutation test, p < 0.01), perhaps owing to the increase in cluster size with cell density. Center left: navigation cluster centers fall within the 95% confidence interval, therefore following cell density independently of the tonotopic gradient. Center right: cells tuned to low frequencies (4 and 5.7 kHz, dark blue) are located more rostromedially than the corresponding shuffled distribution (two-sided permutation test, p < 0.01). Right: cells tuned to high frequencies (16 and 22.7 kHz) are located more caudolaterally than the corresponding shuffled distribution (two-sided permutation test, p < 0.01).
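The shuffle comparison in c can be approximated with a permutation test that draws surrogate 'cluster centers' from the coordinates of all imaged cells and asks whether the observed mean position along the RL gradient is extreme relative to that null. The test statistic (mean RL position) and the toy coordinates below are assumptions.

```python
# Two-sided permutation test: are cluster centers shifted along the composite
# RL gradient relative to centers drawn at random from all cell coordinates?
# Coordinates and the mean-position test statistic are illustrative assumptions.
import numpy as np

def permutation_p(cluster_rl, all_cell_rl, n_perm=10000, seed=0):
    rng = np.random.default_rng(seed)
    observed = cluster_rl.mean()
    null = np.array([rng.choice(all_cell_rl, size=cluster_rl.size, replace=False).mean()
                     for _ in range(n_perm)])
    # two-sided: fraction of null means at least as far from the null center
    return float((np.abs(null - null.mean()) >= np.abs(observed - null.mean())).mean())

rng = np.random.default_rng(8)
all_cell_rl = rng.uniform(0, 1, 5000)        # normalized RL position of every imaged cell
cluster_rl = rng.uniform(0.55, 0.9, 40)      # toy cluster centers, shifted caudolaterally
print(f"permutation p-value: {permutation_p(cluster_rl, all_cell_rl):.4f}")
```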
Supplementary information
Supplementary Information
Supplementary Tables 1–3.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Lawlor, J., Wohlgemuth, M.J., Moss, C.F. et al. Spatially clustered neurons in the bat midbrain encode vocalization categories. Nat Neurosci 28, 1038–1047 (2025). https://doi.org/10.1038/s41593-025-01932-3


