
mViSE: A visual search engine for analyzing multiplex IHC brain tissue images (spatial proteomics)

  • Liqiang Huang1,
  • Rachel Mills1,
  • Saikiran Mandula1,
  • Lin Bai1,
  • Mahtab Jeyhani1,
  • John Redell2,
  • Hien Nguyen1,
  • Saurabh Prasad1,
  • Dragan Maric3 &
  • Badrinath Roysam1 

Scientific Reports (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note that errors affecting the content may be present, and all legal disclaimers apply.

Subjects

  • Computational biology and bioinformatics
  • Neuroscience

Abstract

Whole-slide multiplex brain tissue images for spatial proteomics are massive, information-dense, and challenging to analyze. We present mViSE, an interactive multiplex visual search engine that offers an alternative programming-free, query-driven analysis method based on retrieving and profiling communities of similar cells, proximal cell pairs, and multicellular niches. The retrievals can be used for exploratory cell and tissue analysis, delineating brain regions and cortical layers, and profiling and comparing brain regions, sub-regions, and sub-layers. mViSE is enabled by multiplex encoders that seamlessly integrate visual cues across imaging channels, overcoming the limitations of current foundation models. We train separate encoders to learn each facet of tissue, including cell morphologies, spatial protein expression (chemoarchitecture), cell arrangements (cytoarchitecture), and wiring patterns (myeloarchitecture), from a set of user-defined molecular marker panels, without the need for human annotations or intervention, and with visual confirmation of successful learning. Multiple encoders can be combined logically to drive specialized searches. We validated mViSE’s ability to retrieve single cells, proximal cell pairs, and tissue patches, and to delineate cortical layers, brain regions, and sub-regions. mViSE is disseminated as an open-source QuPath plug-in.
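As a concrete illustration of the retrieval idea described above, the sketch below combines per-cell embeddings from two hypothetical facet encoders and answers a query by cosine-similarity nearest-neighbor search. This is a minimal sketch, not the authors’ implementation: the embeddings are random stand-ins for trained encoder outputs, and concatenating normalized embeddings is just one plausible way to combine encoders logically.

```python
# Hypothetical sketch of query-driven cell retrieval; not mViSE's actual pipeline.
# Assumes per-cell embeddings from two facet encoders (e.g., morphology and
# chemoarchitecture) have already been computed.
import numpy as np

rng = np.random.default_rng(0)
n_cells, d = 10_000, 128

# Random stand-ins for encoder outputs; in practice these would come from the
# trained multiplex encoders described in the paper.
morph_emb = rng.standard_normal((n_cells, d)).astype(np.float32)
chemo_emb = rng.standard_normal((n_cells, d)).astype(np.float32)

def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Scale each row to unit length so dot products equal cosine similarity."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# One way to combine encoders "logically": concatenate their normalized
# embeddings so both facets contribute equally to the similarity score.
combined = l2_normalize(np.hstack([l2_normalize(morph_emb),
                                   l2_normalize(chemo_emb)]))

def retrieve(query_idx: int, k: int = 10) -> np.ndarray:
    """Return indices of the k cells most similar to the query cell."""
    scores = combined @ combined[query_idx]
    order = np.argsort(-scores)          # descending similarity
    return order[1 : k + 1]              # skip the query cell itself

print(retrieve(query_idx=42, k=5))
```

For large whole-slide datasets, the brute-force dot product above would typically be replaced by an approximate nearest-neighbor index, but the retrieval semantics stay the same.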

Data availability

The datasets, including the original images acquired from the microscope and the results of each step, are publicly hosted at https://doi.org/10.7910/DVN/9H9LV9. We provide a comprehensive tutorial covering the machine learning model and the QuPath extension on our GitHub repository: https://github.com/moxx799/mViSE. A complete dataset that can be loaded into QuPath is posted at https://zenodo.org/records/18601179. This dataset includes: (i) a 36-channel TIF file that serves as the primary raw data source; (ii) a JavaScript extension for mViSE; (iii) output files containing the results of the experiments required by the mViSE QuPath extension; (iv) annotation files manually curated to provide reference labels and validation for the experimental results; and (v) a step-by-step guide detailing workflows for running the JavaScript extension and accessing the experimental results.
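For readers who want to sanity-check the Zenodo download before loading it into QuPath, the snippet below opens the 36-channel TIF in Python. It is a minimal sketch under stated assumptions: the filename is a placeholder, and the channel-first (36, H, W) axis layout is an assumption that should be verified against the actual export.

```python
# Minimal sketch for inspecting the 36-channel TIF from the Zenodo dataset.
# The filename is a placeholder; the channel axis may differ per export.
import tifffile  # pip install tifffile

img = tifffile.imread("mvise_36channel.tif")  # hypothetical filename
print(img.shape, img.dtype)  # expected: a (36, H, W) channel-first stack

# Per-channel intensity summary, e.g., to sanity-check marker staining.
channel_means = img.reshape(img.shape[0], -1).mean(axis=1)
for ch, mean in enumerate(channel_means):
    print(f"channel {ch:2d}: mean intensity {mean:.1f}")
```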


Acknowledgements

This work was supported by NIH grant R01NS109118 (PI: Roysam).

Funding

This work was funded by the National Institute of Neurological Disorders and Stroke (NINDS) under grant NIH R01NS109118 (PI: B. Roysam).

Author information

Authors and Affiliations

  1. University of Houston, Houston, TX, 77204, USA

    Liqiang Huang, Rachel Mills, Saikiran Mandula, Lin Bai, Mahtab Jeyhani, Hien Nguyen, Saurabh Prasad & Badrinath Roysam

  2. The University of Texas McGovern Medical School, Houston, TX, 77030, USA

    John Redell

  3. National Institute of Neurological Disorders and Stroke, Bethesda, MD, 20892, USA

    Dragan Maric


Contributions

LH, RM, and LB developed and validated the core algorithms. SM and MJ developed the QuPath interface. HN, SP, and BR guided the core algorithm development and AI framework. JR and DM collected the imaging data and guided the mViSE validation. All authors participated in preparing the manuscript.

Corresponding author

Correspondence to Badrinath Roysam.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Information.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Huang, L., Mills, R., Mandula, S. et al. mViSE: A visual search engine for analyzing multiplex IHC brain tissue images (spatial proteomics). Sci Rep (2026). https://doi.org/10.1038/s41598-026-40620-5


  • Received: 19 November 2025

  • Accepted: 13 February 2026

  • Published: 23 February 2026

  • DOI: https://doi.org/10.1038/s41598-026-40620-5

