Abstract
Whole-slide multiplex brain tissue images for spatial proteomics are massive, information-dense, and challenging to analyze. We present mViSE, an interactive multiplex visual search engine that offers an alternative, programming-free, query-driven analysis method based on retrieving and profiling communities of similar cells, proximal cell pairs, and multicellular niches. The retrievals can be used for exploratory cell and tissue analysis, delineating brain regions and cortical layers, and profiling and comparing brain regions, sub-regions, and sub-layers. mViSE is enabled by multiplex encoders that seamlessly integrate visual cues across imaging channels, overcoming the limitations of current foundation models. We train separate encoders to learn each facet of tissue, including cell morphologies, spatial protein expression (chemoarchitecture), cell arrangements (cytoarchitecture), and wiring patterns (myeloarchitecture), from a set of user-defined molecular marker panels, without the need for human annotations or intervention, and with visual confirmation of successful learning. Multiple encoders can be combined logically to drive specialized searches. We validated mViSE’s ability to retrieve single cells, proximal cell pairs, and tissue patches, and to delineate cortical layers, brain regions, and sub-regions. mViSE is disseminated as an open-source QuPath plug-in.
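To make the retrieval idea concrete, the sketch below shows one way logically combined, per-facet similarity search can work. It is a minimal illustration in plain NumPy, not the mViSE implementation: the facet names, embedding shapes, and the weighted-sum combination rule are all assumptions made for this example.

```python
# Illustrative sketch only: combining per-facet embeddings for a similarity
# search, in the spirit of mViSE's logically combined encoders. Names,
# shapes, and the weighting scheme are hypothetical.
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Row-normalize embeddings so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def combined_search(query_idx, facets, weights, k=10):
    """Return indices of the k cells most similar to the query cell,
    scoring each facet separately and summing weighted cosine similarities."""
    score = None
    for name, emb in facets.items():
        emb = l2_normalize(emb)
        sim = emb @ emb[query_idx]  # cosine similarity of every cell to the query
        score = weights[name] * sim if score is None else score + weights[name] * sim
    order = np.argsort(-score)
    return order[order != query_idx][:k]  # drop the query itself

# Example: 1,000 cells, two hypothetical facet encoders with 128-D outputs.
rng = np.random.default_rng(0)
facets = {"morphology": rng.normal(size=(1000, 128)),
          "chemoarchitecture": rng.normal(size=(1000, 128))}
hits = combined_search(42, facets, {"morphology": 1.0, "chemoarchitecture": 0.5})
```

Setting a facet weight to zero excludes that encoder from the search, which is one simple way a "logical combination" of encoders can be expressed.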
Data availability
The datasets, including the original images acquired from the microscope and the results of each step, are publicly hosted at https://doi.org/10.7910/DVN/9H9LV9. We provide a comprehensive tutorial covering the machine learning model and the QuPath extension on our GitHub repository: https://github.com/moxx799/mViSE. A complete dataset that can be loaded into QuPath is posted at https://zenodo.org/records/18601179. This dataset includes: (i) a 36-channel TIF file that serves as the primary raw data source; (ii) a JavaScript extension for mViSE; (iii) output files containing the experimental results required by the mViSE QuPath extension; (iv) annotation files manually curated to provide reference labels and validation for the experimental results; and (v) a step-by-step guide detailing workflows for running the JavaScript extension and accessing the experimental results.
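As a starting point for working with the Zenodo dataset outside QuPath, the following minimal Python sketch reads the 36-channel TIF with the tifffile package and pulls out a single marker channel. The filename and the channel-first axis ordering are placeholders, not guaranteed properties of the hosted file.

```python
# Minimal sketch: inspect the 36-channel TIF from the Zenodo record.
# Assumes the tifffile package; the filename and channel-first axis
# ordering are placeholders, not properties guaranteed by the dataset.
import tifffile

stack = tifffile.imread("mViSE_36channel.tif")  # hypothetical filename
print(stack.shape, stack.dtype)                 # expect a 36-entry channel axis

channel = stack[0]  # one marker channel, assuming channels on axis 0
```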
Acknowledgements
This work was supported by NIH grant R01NS109118 (PI: Roysam).
Funding
This work was funded by the National Institute of Neurological Disorders and Stroke (NINDS) under grant NIH R01NS109118 (PI: B. Roysam).
Author information
Contributions
LH, RM, and LB developed and validated the core algorithms. SM and MJ developed the QuPath interface. HN, SP, and BR guided the core algorithm development and AI framework. JR and DM collected the imaging data and guided the mViSE validation. All authors participated in the manuscript production.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Huang, L., Mills, R., Mandula, S. et al. mViSE: A visual search engine for analyzing multiplex IHC brain tissue images (spatial proteomics). Sci Rep (2026). https://doi.org/10.1038/s41598-026-40620-5