Abstract
Large-scale efforts by the BRAIN Initiative Cell Census Network (BICCN) are generating a comprehensive reference atlas of cell types in the mouse brain. A key challenge in this effort is mapping diverse datasets, acquired with varied imaging, tissue processing, and profiling methods, into shared coordinate frameworks. Here, we present mouse brain mapping pipelines developed using the Advanced Normalization Tools Ecosystem (ANTsX) to align MERFISH spatial transcriptomics and high-resolution fMOST morphology data to the Allen Common Coordinate Framework (CCFv3), and developmental MRI and LSFM data to the Developmental CCF (DevCCF). Alongside these pipelines, we introduce two novel methods: 1) a velocity field–based approach for continuous interpolation across developmental timepoints, and 2) a deep learning framework for automated brain parcellation using minimally annotated and publicly available data. All workflows are open-source and reproducible. We also provide general guidance for selecting appropriate strategies across modalities, enabling researchers to adapt these tools to new data.
Introduction
Over the past decade, there have been significant advancements in mesoscopic single-cell analysis of the mouse brain. It is now possible to track single neurons1, observe whole-brain developmental changes at cellular resolution2, associate brain regions with genetic composition3, and locally characterize neural connectivity4. These scientific achievements have been propelled by high-resolution profiling and imaging techniques that enable submicron, multimodal, 3D characterizations of whole mouse brains. Among these are micro-optical sectioning tomography5,6, tissue clearing methods1,7, spatial transcriptomics8,9, and single-cell genomic profiling10, each offering expanded specificity and resolution for cell-level brain analysis.
Recent efforts by the NIH BRAIN Initiative have mobilized large-scale international collaborations to create a comprehensive reference database of mouse brain structure and function. The BRAIN Initiative Cell Census Network has aggregated over 40 multimodal datasets from more than 30 research groups11, many of which are registered to standardized anatomical coordinate systems to support integrated analysis. Among the most widely used of these frameworks is the Allen Mouse Brain Common Coordinate Framework (CCFv3)12. Other CCFs include modality-specific frameworks13,14,15 and developmental atlases16,17 that track structural change across time.
Robust mapping of cell type data into CCFs is essential for integrative analysis of morphology, connectivity, and molecular identity. However, each modality poses unique challenges. For example, differences in tissue processing, imaging protocols, and anatomical completeness often introduce artifacts such as distortion, tearing, holes, and signal dropout18,19,20,21,22,23. Intensity differences and partial representations of anatomy can further complicate alignment. Also, while alternative strategies for mapping single-cell spatial transcriptomic data exist (e.g., gene expression–based models such as Tangram24), this work focuses on image-based anatomical alignment to common coordinate frameworks using spatially resolved reference images. Given this diversity, specialized strategies are often needed to address the unique, modality-specific challenges.
Existing mapping solutions fall into three broad categories. The first includes integrated processing platforms that provide users with mapped datasets (e.g., Allen Brain Cell Atlas25, Brain Architecture Portal26, OpenBrainMap27, and Image and Multi-Morphology Pipeline28). These offer convenience and high-quality curated data, but limited generalizability and customization. The second category involves highly specialized pipelines tailored to specific modalities such as histology29,30,31, magnetic resonance imaging (MRI)32,33,34, microCT35,36, light sheet fluorescence microscopy (LSFM)37,38, fluorescence micro-optical sectioning tomography (fMOST)15,39, and spatial transcriptomics, including multiplexed error-robust fluorescence in situ hybridization (MERFISH)40,41,42. While effective, these solutions often require extensive engineering effort to adapt to new datasets or modalities. Finally, general-purpose toolkits such as elastix43, Slicer3D44, and the Advanced Normalization Tools Ecosystem (ANTsX)45 have all been applied to mouse brain mapping scenarios. These toolkits support modular workflows that can be flexibly composed from reusable components, offering a powerful alternative to rigid, modality-specific solutions. However, their use often requires familiarity with pipeline modules, parameter tuning, and tool-specific conventions, which can limit adoption.
Building on this third category, we describe a set of modular, ANTsX-based pipelines specifically tailored for mapping diverse mouse brain data into standardized anatomical frameworks. These include two new pipelines: a velocity field–based interpolation model that enables continuous transformations across developmental timepoints of the DevCCF, and a template-based deep learning pipeline for whole brain segmentation (i.e., brain extraction) and anatomical regional labeling (i.e., brain parcellation) requiring minimal annotated data. In addition, we include two modular pipelines for aligning MERFISH and fMOST datasets to the Allen CCFv3. While the MERFISH dataset was previously published as part of earlier BICCN efforts46, the full image processing and registration workflow had not been described in detail until now. The fMOST workflow, by contrast, was developed internally to support high-resolution morphology mapping and has not been previously published in any form. Both pipelines were built using ANTsX tools, adapted for collaborative use with the Allen Institute, and are now released as fully reproducible, open-source workflows to support reuse and extension by the community. To facilitate broader adoption, we also provide general guidance for customizing these strategies across imaging modalities and data types. We first introduce key components of the ANTsX toolkit, which provide a basis for all of the mapping workflows described here, and then detail the specific contributions made in each pipeline.
The Advanced Normalization Tools Ecosystem (ANTsX) has been used in a number of applications for mapping mouse brain data as part of core processing steps in various workflows31,46,47,48,49, particularly its pairwise, intensity-based image registration capabilities50 and bias field correction51. Historically, ANTsX development is based on foundational approaches to image mapping52,53,54, especially in the human brain, with key contributions such as the Symmetric Normalization (SyN) algorithm50. It has been independently evaluated in diverse imaging domains including multi-site brain MRI55, pulmonary CT56, and multi-modal brain tumor registration57. More recent contributions for mouse-specific applications showcase multimodal template generation16 and anatomy-aware registration functionality within ANTsX.
Beyond registration, ANTsX provides functionality for template generation58, segmentation59, preprocessing51,60, and deep learning45. It has demonstrated strong performance in consensus labeling61, brain tumor segmentation62, and cardiac motion estimation63. Built on the Insight Toolkit (ITK)64, ANTsX benefits from open-source contributions while supporting continued algorithm evaluation and innovation. In the context of mouse brain data, ANTsX provides a robust platform for developing modular pipelines to map diverse imaging modalities into CCFs. These tools span multiple classes of mapping problems: cross-modality image registration, landmark-driven alignment, temporal interpolation across developmental stages, and deep learning–based segmentation. As such, they also serve as illustrative case studies for adapting ANTsX tools to other use cases. We describe both shared infrastructure and targeted strategies adapted to the specific challenges of each modality. This paper highlights usage across distinct BICCN projects such as spatial transcriptomic data from MERFISH, structural data from fMOST, and multimodal developmental data from LSFM and MRI.
We introduce two novel contributions to ANTsX developed as part of collaborative efforts in creating the Developmental Common Coordinate Framework (DevCCF)16. First, we present an open-source velocity field–based interpolation framework for continuous mapping across the sampled embryonic and postnatal stages of the DevCCF atlas16. This functionality enables biologically plausible interpolation between timepoints via a time-parameterized diffeomorphic velocity model65, inspired by previous work66. Second, we present a deep learning pipeline for structural parcellation of the mouse brain from multimodal MRI data. This includes two novel components: 1) a template-derived brain extraction model using augmented data from two ANTsX-derived template datasets67,68, and 2) a template-derived parcellation model trained on DevCCF P56 labelings mapped from the AllenCCFv3. This pipeline demonstrates how ANTsX tools and public resources can be leveraged to build robust anatomical segmentation pipelines with minimal annotated data. We independently evaluate this framework using a longitudinal external dataset69, demonstrating generalizability across specimens and imaging protocols. All components are openly available through the R and Python ANTsX packages, with general-purpose functionality documented in a reproducible, cross-platform tutorial (https://tinyurl.com/antsxtutorial). Code specific to this manuscript, including scripts to reproduce the novel contributions and all associated evaluations, is provided in a dedicated repository (https://github.com/ntustison/ANTsXMouseBrainMapping). Additional tools for mapping spatial transcriptomic (MERFISH) and structural (fMOST) data to the AllenCCFv3 are separately available at (https://github.com/dontminchenit/CCFAlignmentToolkit).
Results
Mapping multiplexed error-robust fluorescence in situ hybridization (MERFISH)
We developed an ANTsX-based pipeline to map spatial transcriptomic MERFISH data into the AllenCCFv3 (Fig. 1a). This approach was used in recent efforts to create a high-resolution transcriptomic atlas of the mouse brain46. The pipeline maps spatial gene expression patterns from MERFISH onto anatomical labels in the AllenCCFv3. It includes MERFISH-specific preprocessing steps such as section reconstruction, label generation from spatial transcriptomic maps, and anatomical correspondence mapping. Alignment proceeds in two stages: 1) 3D affine registration and section matching of the AllenCCFv3 to the MERFISH data, and 2) linear + deformable 2D section-wise alignment between matched MERFISH and atlas slices. These transformations are concatenated to produce a complete mapping from each MERFISH dataset to the AllenCCFv3.
MERFISH imaging was performed on cryosectioned brains from C57BL/6 mice using previously described protocols46. Brains were placed into an optimal cutting temperature (OCT) compound (Sakura FineTek 4583) and stored at −80 °C. The fresh frozen brain was sectioned at 10 μm on Leica 3050 S cryostats at intervals of 200 μm to evenly cover the brain. A set of 500 genes was selected to distinguish ~ 5200 transcriptomic clusters. Raw MERSCOPE data were decoded using Vizgen software (v231). Cell segmentation was performed using Cellpose70,71 based on DAPI and PolyT stains and propagated to adjacent slices across z-planes. Each MERFISH cell was assigned a transcriptomic identity by mapping to a scRNA-seq reference taxonomy.
Alignment quality was evaluated iteratively by an expert anatomist, guided by expected gene-marker correspondences to AllenCCFv3 regions. As previously reported46, further assessment of the alignment showed that, of the 554 terminal regions in the AllenCCFv3 (gray matter only), only seven small subregions did not contain cells from the MERFISH dataset post registration: frontal pole, layer 1 (FRP1), FRP2/3, FRP5; accessory olfactory bulb, glomerular layer (AOBgl); accessory olfactory bulb, granular layer (AOBgr); accessory olfactory bulb, mitral layer (AOBmi); and accessory supraoptic group (ASO). A broader discussion of evaluation design choices and evaluation rationale is included in the Discussion.
Mapping fluorescence micro-optical sectioning tomography (fMOST) data
We also constructed a pipeline for mapping fMOST images to the AllenCCFv3 using ANTsX (Fig. 1b). The approach leverages a modality-specific average fMOST atlas as an intermediate target, adapted from previous work in human and mouse brain mapping12,15,16,58,72,73,74,75. The atlas was constructed from 30 fMOST images selected to capture representative variability in anatomical shape and image intensity across the population. Preprocessing includes cubic B-spline downsampling to match the 25 μm isotropic AllenCCFv3 resolution, stripe artifact suppression using a 3D notch filter implemented with SciPy’s frequency-domain filtering tools, and N4 bias field correction51. A one-time, annotation-driven alignment registers the fMOST atlas to AllenCCFv3 using landmark-based registration of key structures. This canonical mapping is then reused. New fMOST specimens are first aligned to the fMOST atlas using standard intensity-based registration, and the concatenated transforms yield full spatial normalization to the AllenCCFv3. This same mapping can be applied to neuron reconstructions to facilitate population-level analysis of morphology and spatial distribution.
fMOST imaging was performed on 55 mouse brains with sparse transgenic labeling of neuron populations76,77 using the high-throughput fMOST platform78,79. Voxel resolution was 0.35 × 0.35 × 1.0 μm³. Two imaging channels were acquired: GFP-labeled neuron morphology (green), and propidium iodide counterstaining for cytoarchitecture (red). Alignment was performed using the red channel for its greater contrast, though multi-channel mapping is also supported.
The canonical mapping from the fMOST atlas to AllenCCFv3 was evaluated using both quantitative and qualitative approaches. Dice similarity coefficients were computed between corresponding anatomical labels in the fMOST atlas and AllenCCFv3 following registration. These labels were manually annotated or adapted from existing atlas segmentations. Representative Dice scores included: whole brain (0.99), caudate putamen (0.97), fimbria (0.91), posterior choroid plexus (0.93), anterior choroid plexus (0.96), optic chiasm (0.77), and habenular commissure (0.63). In addition to these quantitative assessments, each registered fMOST specimen was evaluated qualitatively. An expert anatomist reviewed alignment accuracy and confirmed structural correspondence. Neuron reconstructions from individual brains were also transformed into AllenCCFv3 space, and their trajectories were visually inspected to confirm anatomical plausibility and preservation of known projection patterns. A broader discussion of evaluation design choices and evaluation rationale is included in the Discussion.
Continuously mapping the DevCCF developmental trajectory
The DevCCF is an openly accessible resource for the mouse brain research community16, comprising symmetric, multi-modal MRI and LSFM templates generated using the ANTsX framework58. It spans key stages of mouse brain development (E11.5, E13.5, E15.5, E18.5, P4, P14, and P56) and includes structural labels defined by a developmental ontology. The DevCCF was constructed in coordination with the AllenCCFv3 to facilitate integration across atlases and data types.
Although this collection provides broad developmental coverage, its discrete sampling limits the ability to model continuous transformations across time. To address this, we developed a velocity flow–based modeling approach that enables anatomically plausible, diffeomorphic transformations between any two continuous time points within the DevCCF range (Fig. 2). Unlike traditional pairwise interpolation, which requires sequential warping through each intermediate stage, this model, defined by a time-varying velocity field (i.e., a smooth vector field defined over space and time that governs the continuous deformation of an image domain), allows direct computation of deformations between any two time points in the continuum, which improves smoothness and enables flexible spatiotemporal alignment. This functionality is implemented in both ANTsR and ANTsPy (see ants.fit_time_varying_transform_to_point_sets(...)) and integrates seamlessly with existing ANTsX workflows. The velocity field is represented as a 4D ITK image where each voxel stores the x,y,z components of motion at a given time point. Integration of the time-varying velocity field uses 4th order Runge-Kutta (ants.integrate_velocity_field(...))80.
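To illustrate the mechanics of the integration API, the sketch below constructs a trivial all-zero 4D velocity field and integrates it into a displacement-field transform; in practice, the optimized DevCCF field would be loaded from disk, and the integration bounds are normalized time points (argument names reflect current ANTsPy releases; consult the function docstring if your version differs).

```python
import ants
import numpy as np

# Toy velocity field: three spatial dimensions plus time, with a
# 3-component vector per voxel. An all-zero field integrates to the
# identity map; it stands in for the optimized DevCCF field.
arr = np.zeros((32, 32, 32, 11, 3), dtype="float32")
velocity_field = ants.from_numpy(arr, has_components=True)

# Integrate from normalized time 0.0 (E11.5) to 1.0 (P56) using 4th order
# Runge-Kutta; any sub-interval of [0, 1] yields an intermediate mapping.
displacement_field = ants.integrate_velocity_field(
    velocity_field,
    lower_integration_bound=0.0,
    upper_integration_bound=1.0,
    number_of_integration_steps=10)

# Convert the displacement field to a transform applicable to images/points.
xfrm = ants.transform_from_displacement_field(displacement_field)
```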
Each DevCCF template includes over 2500 labeled anatomical regions, with spatial resolutions ranging from 31.5 to 50 μm. For the velocity flow modeling task, we identified a common set of 26 bilateral regions (13 per hemisphere) that were consistently labeled across all timepoints. These regions span major developmental domains including the pallium, subpallium, midbrain, prosomeres, hypothalamus, hindbrain subregions, and key white matter tracts (Fig. 3).
Prior to velocity field optimization, all templates were rigidly aligned to the DevCCF P56 template using the centroids of these common label sets. Pairwise correspondence between adjacent timepoints was then computed using ANTsX’s multi-metric registration via ants.registration(...). Instead of performing intensity-based multi-label registration directly, we constructed 26 binary label masks per atlas pair (one per structure) and optimized alignment using the mean squares similarity metric with the SyN transform50.
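A hedged sketch of this step follows (file names and structure IDs are hypothetical placeholders): one binary mask pair drives ants.registration with the mean squares metric, while the remaining structures enter as additional metric channels through the multivariate_extras mechanism, which accepts (metric, fixed, moving, weight, sampling) tuples.

```python
import ants

# Hypothetical label volumes for two adjacent DevCCF stages.
fixed_labels = ants.image_read("P56_labels.nii.gz")
moving_labels = ants.image_read("P14_labels.nii.gz")
structure_ids = [3, 7, 12]  # illustrative subset of the common label set

fixed_masks = [ants.threshold_image(fixed_labels, i, i) for i in structure_ids]
moving_masks = [ants.threshold_image(moving_labels, i, i) for i in structure_ids]

# First mask pair drives the registration; the rest are extra metric
# channels: (metric, fixed, moving, weight, sampling) tuples.
extras = [("MeanSquares", f, m, 1.0, 0)
          for f, m in zip(fixed_masks[1:], moving_masks[1:])]

# Templates are already rigidly pre-aligned, so only the deformable
# (SyN-only) stage is run here.
reg = ants.registration(
    fixed=fixed_masks[0], moving=moving_masks[0],
    type_of_transform="SyNOnly", syn_metric="meansquares",
    multivariate_extras=extras)
```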
To generate the point sets for velocity field optimization, we sampled both boundary (contour) and interior (region) points from the P56 labels and propagated them to each developmental stage using the learned pairwise transforms. Contours were sampled at 10% of available points and regions at 1%, yielding 173,303 total points per atlas (Ncontour = 98,151; Nregion = 75,152). Boundary points were assigned double weight during optimization to emphasize anatomical boundary correspondence.
The velocity field was optimized using the seven corresponding point sets and their associated weights. The field geometry was defined at [256, 182, 360] with 11 integration points at 50 μm resolution, yielding a compressed velocity model of ~ 2 GB. This resolution balanced accuracy and computational tractability while remaining portable. All data and code are publicly available in the accompanying GitHub repository.
To normalize temporal spacing, we assigned scalar values in [0, 1] to each template. Given the nonlinear spacing in postnatal development, we applied a logarithmic transform to the raw time values prior to normalization. Within this logarithmic temporal transform, P56 was assigned a span of 28 postnatal days to reflect known developmental dynamics (i.e., in terms of modeling the continuous deformation, the morphological changes between Day 28 and Day 56 are insignificant). This improved the temporal distribution of integration points (Fig. 4, right panel).
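As a concrete illustration of this normalization (assuming, for the sake of the example only, birth at approximately E19.5 so that postnatal day P corresponds to 19.5 + P embryonic-equivalent days, with P56 capped at an effective P28 as described above):

```python
import numpy as np

# DevCCF stages in embryonic-equivalent days. Birth at ~E19.5 is an
# illustrative assumption; P56 is capped at an effective P28 (see text).
stages = {"E11.5": 11.5, "E13.5": 13.5, "E15.5": 15.5, "E18.5": 18.5,
          "P4": 19.5 + 4.0, "P14": 19.5 + 14.0, "P56": 19.5 + 28.0}

log_t = np.log(np.array(list(stages.values())))
normalized = (log_t - log_t.min()) / (log_t.max() - log_t.min())
# E11.5 -> 0.0 and P56 -> 1.0; the logarithm allocates the rapidly changing
# embryonic stages a larger share of the unit interval than a linear map.
```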
Fig. 4 | (Top left) Total displacement error over iterations. (Top right) Median displacement error per integration point across the optimization timeline, spanning embryonic (E11.5) to postnatal (P56) stages. (Bottom) Dice similarity scores comparing region-level label overlap between: (1) conventional pairwise SyN registration and (2) velocity flow-based deformation, across intermediate timepoints. Using region-based pairwise registration with SyN as a performance upper bound, the velocity flow model achieves comparable accuracy while also enabling smooth, continuous deformation across the full developmental continuum.
Optimization was run for a maximum of 200 iterations using a 2020 iMac (3.6 GHz 10-Core Intel Core i9, 64 GB RAM), with each iteration taking ~ 6 min. During each iteration, the velocity field was updated across all 11 integration points by computing regularized displacement fields between warped point sets at adjacent time slices. Updates were applied using a step size of δ = 0.2. Convergence was assessed via average displacement error across all points, with final convergence achieved after ~ 125 iterations (Fig. 4, left panel). Median errors across integration points also trended toward zero, albeit at varying rates. To benchmark performance, we compared the velocity model’s region-based alignment to traditional pairwise registration using SyN, a widely used diffeomorphic algorithm. The velocity model achieved comparable Dice scores at sampled timepoints while additionally offering smooth interpolation across the entire developmental trajectory.
Once optimized, the velocity field enables the computation of diffeomorphic transformations between any pair of continuous time points within the DevCCF developmental range. Figure 5 illustrates cross-warping between all DevCCF stages using the velocity flow model. In addition to facilitating flexible alignment between existing templates, the model also supports the synthesis of virtual templates at intermediate, unsampled developmental stages. As shown in Fig. 6, we demonstrate the creation of virtual age templates (e.g., P10.3 and P20) by warping adjacent developmental atlases to a target timepoint and constructing an averaged representation using ANTsX’s template-building functionality.
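A hedged sketch of this virtual template synthesis follows (the velocity field file, template files, and normalized stage times are illustrative placeholders): the two templates flanking the target age are warped to the target time and then averaged with ANTsX template building.

```python
import ants

# Assumed inputs: the optimized DevCCF velocity field plus the two
# templates flanking the target age (placeholder names and time values).
velocity_field = ants.image_read("devccf_velocity_field.nii.gz")
p4 = ants.image_read("devccf_p4_template.nii.gz")
p14 = ants.image_read("devccf_p14_template.nii.gz")
t_p4, t_p14, t_target = 0.60, 0.77, 0.70  # illustrative normalized times

def warp_to(image, t_from, t_to):
    # Integrate the velocity field over [t_from, t_to], then apply it.
    disp = ants.integrate_velocity_field(velocity_field, t_from, t_to, 10)
    return ants.transform_from_displacement_field(disp).apply_to_image(image)

# Warp both flanking templates to the target time, then average shape and
# intensity with ANTsX template building to form the virtual template.
warped = [warp_to(p4, t_p4, t_target), warp_to(p14, t_p14, t_target)]
virtual_template = ants.build_template(image_list=warped, iterations=2)
```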
All usage examples, scripts, and supporting data for full reproducibility are publicly available in the associated codebase.
Automated structural labeling of the mouse brain
Structural labeling strategies for the mouse brain are essential for understanding the organization and function of the murine nervous system81. By dividing the brain into anatomically or functionally defined regions, researchers can localize biological processes, relate regional features to behavior, or quantify spatial variation in gene expression patterns82,83. While deep learning techniques have yielded robust segmentation and labeling tools for the human brain (e.g., SynthSeg84, ANTsXNet45), analogous development for mouse data (e.g., MEMOS85) has been limited. Mouse neuroimaging often presents unique challenges, such as highly anisotropic sampling, that complicate transfer of existing tools. At the same time, high resolution resources like the AllenCCFv3 and DevCCF provide reference label sets that can serve as training data. We demonstrate how ANTsX can be used to construct a full structural labeling pipeline for the mouse brain (Fig. 7), including both whole brain segmentation (i.e., brain extraction) and the subsequent template-based region segmentation.
The mouse brain cortical labeling pipeline integrates two deep learning components for brain extraction and anatomical region segmentation. Both networks rely heavily on data augmentation applied to templates constructed from open datasets. The framework also supports further refinement or alternative label sets tailored to specific research needs. Possible applications include voxelwise cortical thickness estimation.
To develop a general-purpose mouse brain extraction model, we constructed whole-head templates from two publicly available T2-weighted datasets. The first dataset, from the Center for Animal MRI (CAMRI) at the University of North Carolina at Chapel Hill67, includes 16 isotropic MRI volumes acquired at 0.16 × 0.16 × 0.16 mm³ resolution. The second dataset68 comprises 88 specimens acquired in three orthogonal 2D views (coronal, axial, sagittal) at 0.08 × 0.08 mm² in-plane resolution with 0.5 mm slice thickness. These orthogonal 2D acquisitions were reconstructed into high-resolution 3D volumes using a B-spline fitting algorithm86. Using this synthesized dataset and the CAMRI images, we created two ANTsX-based population templates58, each paired with a manually delineated brain mask. These served as the basis for training an initial template-based brain extraction model. Network training employed aggressive data augmentation strategies, including bias field simulation, histogram warping, random spatial deformation, noise injection, and anisotropic resampling, which enabled the model to generalize beyond the two templates. The initial model was released through ANTsXNet and made publicly available.
Subsequent community use led to further improvements. A research group applying the tool to their own ex vivo T2-weighted mouse brain data contributed a third template and associated mask (acquired at 0.08 mm isotropic resolution). Incorporating this into the training data improved robustness and accuracy on an independent dataset and extended the model’s generalizability. The refined model is distributed through ANTsPyNet via antspynet.mouse_brain_extraction(...).
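A minimal usage sketch (the input file name is a placeholder; argument names follow current ANTsPyNet releases, so consult help(antspynet.mouse_brain_extraction) if your installed version differs):

```python
import ants
import antspynet

t2 = ants.image_read("mouse_t2w.nii.gz")  # placeholder file name

# Returns a brain probability map; threshold to obtain a binary mask.
probability_mask = antspynet.mouse_brain_extraction(t2, modality="t2",
                                                    verbose=True)
brain_mask = ants.threshold_image(probability_mask, 0.5, 1.0)
brain = t2 * brain_mask
```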
The AllenCCFv3 atlas and its hierarchical ontology, along with the DevCCF, provide a strong foundation for developing region-wise anatomical labeling models for multi-modal mouse brain imaging. Using the allensdk Python library, we generated a coarse segmentation scheme by grouping anatomical labels into six major regions: cerebral cortex, cerebral nuclei, brainstem, cerebellum, main olfactory bulb, and hippocampal formation. These labels were mapped onto the P56 T2-weighted DevCCF template to serve as training targets. We trained a 3D U-net–based segmentation network using this template and the same augmentation strategies described for brain extraction. The model is publicly available via ANTsXNet (antspynet.mouse_brain_parcellation(...)) and supports robust anatomical labeling across diverse imaging geometries and contrasts. The inclusion of aggressive augmentation, including simulated anisotropy, enables the model to perform well even on thick-slice input data. Internally, the model reconstructs isotropic probability and label maps, facilitating downstream morphometric analyses. For example, this network integrates with the ANTsX cortical thickness estimation pipeline (antspynet.mouse_cortical_thickness(...)) to produce voxelwise cortical thickness maps, even when applied to anisotropic or limited-resolution mouse brain data.
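Parcellation and the downstream cortical thickness pipeline are invoked analogously (the file name is a placeholder; the exact return structure and optional arguments are documented in the respective function docstrings):

```python
import ants
import antspynet

t2 = ants.image_read("mouse_t2w.nii.gz")  # placeholder file name

# Six-region parcellation; isotropic probability and label maps are
# reconstructed internally, even for thick-slice inputs.
parcellation = antspynet.mouse_brain_parcellation(t2, verbose=True)

# Voxelwise cortical thickness built on the extraction/parcellation models.
thickness = antspynet.mouse_cortical_thickness(t2, verbose=True)
```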
For evaluation, we used an additional publicly available dataset69 that is completely independent from the data used in training the brain extraction and parcellation networks. The data include 12 specimens, each imaged at seven time points (Day 0, Day 3, Week 1, Week 4, Week 8, Week 20), with in-house-generated brain masks (i.e., produced by the data providers) for a total of 84 images. Spacing is anisotropic with an in-plane resolution of 0.1 × 0.1 mm² and a slice thickness of 0.5 mm.
Figure 8 summarizes the whole-brain overlap between manually segmented reference masks and the predicted segmentations for all 84 images in the evaluation cohort. The proposed network demonstrates excellent performance in brain extraction across a wide age range. To further assess the utility of the parcellation network, we used the predicted labels to guide anatomically informed registration to the AllenCCFv3 atlas using ANTsX multi-component registration, and compared this to intensity-only registration (Fig. 9). While intensity-based alignment performs reasonably well, incorporating the predicted parcellation significantly improves regional correspondence. Dice scores shown in Fig. 9c were computed using manually segmented labels transformed to AllenCCFv3 space.
Fig. 8 | Evaluation of the ANTsX mouse brain extraction on an independent, publicly available dataset consisting of 12 specimens × 7 time points = 84 total images. Dice overlap comparisons between the user-generated brain masks and the automated results from the brain extraction network show good agreement.
Fig. 9 | Evaluation of the ANTsX deep learning–based mouse brain parcellation on a diverse MRI cohort. a T2-weighted DevCCF P56 template with the six-region parcellation: cerebral cortex, cerebral nuclei, brain stem, cerebellum, main olfactory bulb, and hippocampal formation. b Example segmentation result from a representative subject (NR5, Day 0) using the proposed deep learning pipeline. c Box plots show Dice overlap across subjects for each registration approach and region. The centre line is the median; box bounds are the interquartile range (25th–75th percentiles); whiskers extend to the minimum and maximum values within 1.5 × IQR of the lower/upper quartiles; points beyond the whiskers are outliers.
Discussion
The diverse mouse brain cell type profiles gathered through BICCN and associated efforts provide a rich multi-modal resource to the research community. However, despite significant progress, optimal leveraging of these valuable resources remains an ongoing challenge. A central component to data integration is accurately mapping novel cell type data into common coordinate frameworks (CCFs) for subsequent processing and analysis. To meet these needs, tools for mapping mouse brain data must be both broadly accessible and capable of addressing challenges unique to each modality. In this work, we described modular ANTsX-based pipelines developed to support three distinct BICCN efforts encompassing spatial transcriptomic, morphological, and developmental data. We demonstrated how a flexible image analysis toolkit like ANTsX can be tailored to address specific modality-driven constraints by leveraging reusable, validated components.
As part of collaborative efforts with the Allen Institute for Brain Science and the broader BICCN initiative, we developed two modular pipelines for mapping MERFISH and fMOST datasets to the AllenCCFv3. These workflows were designed to accommodate the specific requirements of high-resolution transcriptomic and morphological data while leveraging reusable components from the ANTsX ecosystem. The MERFISH pipeline incorporates preprocessing and registration steps tailored to known anatomical and imaging artifacts in multiplexed spatial transcriptomic data. While the general mapping strategy is applicable to other sectioned histological datasets, these refinements demonstrate how general-purpose tools can be customized to meet the demands of specialized modalities. The fMOST workflow, in contrast, emphasizes reusability and consistency across large datasets. It introduces an intermediate, canonical fMOST atlas to stabilize transformations to the AllenCCFv3, reducing the need for repeated manual alignment and enabling standardized mapping of single-neuron reconstructions to a common coordinate framework.
Evaluation of both workflows followed established QA/QC protocols used at the Allen Institute, emphasizing biologically meaningful criteria such as expected gene-marker alignment (MERFISH) and accurate reconstruction of neuronal morphology (fMOST). These domain-informed assessments, also used in prior large-scale mapping projects46, prioritize task-relevant accuracy over other possible benchmarks such as Dice coefficients or landmark distances. While formal quantitative scores were not reported for these specific pipelines, they both demonstrate reliable, expert-validated performance in collaborative contexts. Additional documentation and evaluation commentary are available in the updated CCFAlignmentToolkit GitHub repository.
For developmental data, we introduced a velocity field–based model for continuous interpolation between discrete DevCCF timepoints. Although the DevCCF substantially expands coverage of developmental stages relative to prior atlases, temporal gaps remain. The velocity model enables spatio-temporal transformations within the full developmental interval and supports the generation of virtual templates at unsampled ages. This functionality is built using ANTsX components for velocity field optimization and integration, and offers a novel mechanism for interpolating across the non-linear developmental trajectory of the mouse brain. Such interpolation has potential utility for both anatomical harmonization and longitudinal analyses. Interestingly, long-range transformations (e.g., P56 to E11.5) revealed anatomy evolving in plausible ways yet sometimes diverging from known developmental patterns (e.g., hippocampal shape changes), reflecting the input data and offering insight into temporal gaps. These behaviors could assist future efforts to determine which additional time points would most improve spatiotemporal coverage.
We also introduced a template-based deep learning pipeline for mouse brain extraction and parcellation using aggressive data augmentation. This approach is designed to reduce the reliance on large annotated training datasets, which remain limited in the mouse imaging domain. Evaluation on independent data demonstrates promising generalization, though further refinement will be necessary. As with our human-based ANTsX pipelines, failure cases can be manually corrected and recycled into future training cycles. Community contributions are welcomed and encouraged, providing a pathway for continuous improvement and adaptation to new datasets.
The ANTsX ecosystem offers a powerful foundation for constructing scalable, reproducible pipelines for mouse brain data analysis. Its modular design and multi-platform support enable researchers to develop customized workflows without extensive new software development. The widespread use of ANTsX components across the neuroimaging community attests to its utility and reliability. As a continuation of the BICCN program, ANTsX is well positioned to support the goals of the BRAIN Initiative Cell Atlas Network (BICAN) and future efforts to extend these mapping strategies to the human brain.
Methods
The following methods are all available as part of the ANTsX ecosystem with analogous elements existing in both ANTsR (ANTs in R) and ANTsPy (ANTs in Python), underpinned by a shared ANTs/ITK C++ core. Most development for the work described was performed using ANTsPy. For equivalent functionality in ANTsR, we refer the reader to the comprehensive ANTsX tutorial: https://tinyurl.com/antsxtutorial.
General ANTsX utilities
Although focused on distinct data types, the three pipelines presented in this work share common components that address general challenges in mapping mouse brain data. These include correcting image intensity artifacts, denoising, spatial registration, template generation, and visualization. Table 1 provides a concise summary of the relevant ANTsX functionality.
Standard preprocessing steps in mouse brain imaging include correcting for spatial intensity inhomogeneities and reducing image noise, both of which can impact registration accuracy and downstream analysis. ANTsX provides implementations of widely used methods for these tasks. The N4 bias field correction algorithm51, originally developed in ANTs and contributed to ITK, mitigates artifactual, low-frequency intensity variation and is accessible via ants.n4_bias_field_correction(...). Patch-based denoising60 has been implemented as ants.denoise_image(...).
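For example (the input file name is a placeholder):

```python
import ants

image = ants.image_read("mouse_brain.nii.gz")  # placeholder file name

# Correct low-frequency intensity inhomogeneity, then denoise.
corrected = ants.n4_bias_field_correction(image)
denoised = ants.denoise_image(corrected)
```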
ANTsX includes a robust and flexible framework for pairwise and groupwise image registration80. At its core is the SyN algorithm50, a symmetric diffeomorphic model with optional B-spline regularization66. In ANTsPy, registration is performed via ants.registration(...) using preconfigured parameter sets (e.g., antsRegistrationSyNQuick[s], antsRegistrationSyN[s]) suitable for different imaging modalities and levels of computational demand. Resulting transformations can be applied to new images with ants.apply_transforms(...).
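A typical pairwise alignment using these functions looks as follows (file names are placeholders):

```python
import ants

fixed = ants.image_read("allen_ccfv3_template.nii.gz")  # placeholder
moving = ants.image_read("subject.nii.gz")              # placeholder

# Symmetric diffeomorphic registration using the quick SyN preset.
reg = ants.registration(fixed=fixed, moving=moving,
                        type_of_transform="antsRegistrationSyNQuick[s]")

# Reuse the resulting forward transforms on any coregistered image.
warped = ants.apply_transforms(fixed=fixed, moving=moving,
                               transformlist=reg["fwdtransforms"])
```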
ANTsX supports population-based template generation through iterative pairwise registration to an evolving estimate of the mean shape and intensity reference space across subjects58. This functionality was used in generating the DevCCF templates16. The procedure, implemented as ants.build_template(...), produces average images in both shape and intensity by aligning all inputs to a common evolving template.
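In sketch form (the input cohort is a placeholder):

```python
import ants

# Placeholder cohort; in practice these are preprocessed subject images.
images = [ants.image_read(f"subject_{i}.nii.gz") for i in range(10)]

# Iteratively register all inputs to an evolving mean shape and intensity
# estimate, yielding the population template.
template = ants.build_template(image_list=images, iterations=4)
```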
To support visual inspection and quality control, ANTsPy provides flexible image visualization with ants.plot(...). This function enables multi-slice and multi-orientation rendering with optional overlays and label maps.
Mapping fMOST data to AllenCCFv3
Mapping fMOST data into the AllenCCFv3 presents unique challenges due to its native ultra-high resolution and imaging artifacts common to the fMOST modality. Each fMOST image can exceed a terabyte in size, with spatial resolutions far exceeding those of the AllenCCFv3 (25 μm isotropic). To reduce computational burden and prevent resolution mismatch, each fMOST image is downsampled using cubic B-spline interpolation via ants.resample_image(...) to match the template resolution.
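The downsampling step is a single call (the file name is a placeholder; interp_type=4 selects B-spline interpolation, and the target spacing assumes image-header units of μm):

```python
import ants

fmost = ants.image_read("fmost_red_channel.nii.gz")  # placeholder file name

# Downsample to the 25 μm isotropic AllenCCFv3 grid with cubic B-spline
# interpolation (interp_type=4); spacing units follow the image header.
fmost_25um = ants.resample_image(fmost, (25.0, 25.0, 25.0),
                                 use_voxels=False, interp_type=4)
```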
Stripe artifacts (i.e., periodic intensity distortions caused by nonuniform sectioning or illumination) are common in fMOST and can mislead deformable registration algorithms. These were removed using a custom 3D notch filter (remove_stripe_artifact(...)) implemented in the CCFAlignmentToolkit using SciPy frequency domain filtering. The filter targets dominant stripe frequencies along a user-specified axis in the Fourier domain. In addition, intensity inhomogeneity across sections, often arising from variable staining or illumination, was corrected using N4 bias field correction.
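The exact implementation is provided as remove_stripe_artifact(...) in the CCFAlignmentToolkit; the simplified 2D sketch below conveys the underlying idea, suppressing a band of stripe frequencies in the Fourier domain while protecting the low-frequency anatomy (parameter values are illustrative, not those of the toolkit):

```python
import numpy as np
from scipy import fft

def notch_filter_slice(slice_2d, axis=0, width=2, keep_center=4):
    # Illustrative notch filter: zero a band of frequencies associated with
    # periodic stripes, preserving a low-frequency core near DC.
    f = fft.fftshift(fft.fft2(slice_2d))
    cy, cx = f.shape[0] // 2, f.shape[1] // 2
    mask = np.ones(f.shape)
    if axis == 0:
        mask[cy - width:cy + width + 1, :] = 0.0
    else:
        mask[:, cx - width:cx + width + 1] = 0.0
    mask[cy - keep_center:cy + keep_center + 1,
         cx - keep_center:cx + keep_center + 1] = 1.0
    return np.real(fft.ifft2(fft.ifftshift(f * mask)))
```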
To facilitate reproducible mapping, we first constructed a contralaterally symmetric average template from 30 fMOST brains and their mirrored counterparts using ANTsX template-building tools. Because the AllenCCFv3 and fMOST data differ substantially in both intensity contrast and morphology, direct deformable registration between individual fMOST brains and the AllenCCFv3 was insufficiently robust. Instead, we performed a one-time expert-guided label-driven registration between the average fMOST template and AllenCCFv3. This involved sequential alignment of seven manually selected anatomical regions: 1) brain mask/ventricles, 2) caudate/putamen, 3) fimbria, 4) posterior choroid plexus, 5) optic chiasm, 6) anterior choroid plexus, and 7) habenular commissure. These regions were prioritized to enable coarse-to-fine correction of shape differences. Once established, this fMOST-template-to-AllenCCFv3 transform was reused for all subsequent specimens. Each new fMOST brain was then registered to the average fMOST template using intensity-based registration, followed by concatenation of transforms to produce the final mapping into AllenCCFv3 space.
A key advantage of fMOST imaging is its ability to support single neuron projection reconstruction across the entire brain77. Because these reconstructions are stored as 3D point sets aligned to the original fMOST volume, we applied the same composite transform used for image alignment to the point data using ANTsX functionality. This enables seamless integration of cellular morphology data into AllenCCFv3 space, facilitating comparative analyses across specimens.
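In sketch form (file and column names are placeholders; by ANTs convention, mapping points requires the transform list and inversion flags mirrored relative to those used for resampling images, so the flags below are illustrative and depend on the registration direction):

```python
import ants
import pandas as pd

# Placeholder neuron reconstruction nodes in physical coordinates.
points = pd.DataFrame({"x": [1200.0], "y": [3400.0], "z": [2100.0]})

# Placeholder composite transform chain from the image registration step;
# affine transforms can be inverted on the fly, warp fields cannot.
transforms = ["fmost_to_ccf_warp.nii.gz", "fmost_to_ccf_affine.mat"]
warped_points = ants.apply_transforms_to_points(
    dim=3, points=points, transformlist=transforms,
    whichtoinvert=[False, True])
```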
Mapping MERFISH data to AllenCCFv3
MERFISH data are acquired as a series of 2D tissue sections, each comprising spatially localized gene expression measurements at subcellular resolution. To enable 3D mapping to the AllenCCFv3, we first constructed anatomical reference images by aggregating the number of detected transcripts per voxel across all probes within each section. These 2D projections were resampled to a resolution of 10 μm × 10 μm to match the in-plane resolution of the AllenCCFv3.
Sections were coarsely aligned using manually annotated dorsal and ventral midline points, allowing initial volumetric reconstruction. However, anatomical fidelity remained limited by variation in section orientation, spacing, and tissue loss. To further constrain alignment and enable deformable registration, we derived region-level anatomical labels directly from the gene expression data.
To assign region labels to the MERFISH data, we used a cell type clustering approach previously detailed46. In short, manually dissected scRNA-seq data were used to establish the distribution of cell types present in each of the following major regions: cerebellum, CTXsp, HPF, hypothalamus, isocortex, LSX, midbrain, OLF, PAL, sAMY, STRd, STRv, thalamus, and hindbrain. Clusters in the scRNA-seq dataset were then used to assign similar clusters of cell types in the MERFISH data to the regions in which they are predominantly found in the scRNA-seq data. To account for clusters found at low frequency in regions outside their main region, we calculated for each cell its 50 nearest neighbors in physical space and reassigned each cell to the region annotation dominating its neighborhood.
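A hedged sketch of this neighborhood-based reassignment (using scikit-learn for the neighbor search; all inputs are synthetic stand-ins):

```python
import numpy as np
from scipy import stats
from sklearn.neighbors import NearestNeighbors

# Synthetic stand-ins: cell positions (N x 3) and initial cluster-derived
# region assignments (N,) for N cells.
rng = np.random.default_rng(0)
cell_xyz = rng.random((1000, 3))
region_ids = rng.integers(0, 14, size=1000)

# For each cell, find its 50 nearest neighbors in physical space and
# reassign the cell to the majority region label of that neighborhood.
nn = NearestNeighbors(n_neighbors=50).fit(cell_xyz)
_, idx = nn.kneighbors(cell_xyz)
region_ids = stats.mode(region_ids[idx], axis=1, keepdims=False).mode
```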
A major challenge was compensating for oblique cutting angles and non-uniform section thickness, which distort the anatomical shape and spacing of the reconstructed volume. Rather than directly warping the MERFISH data into atlas space, we globally aligned the AllenCCFv3 to the MERFISH coordinate system. This was done via an affine transformation followed by resampling of AllenCCFv3 sections to match the number and orientation of MERFISH sections. This approach minimizes interpolation artifacts in the MERFISH data and facilitates one-to-one section matching.
We used a 2.5D approach for fine alignment of individual sections. In each MERFISH slice, deformable registration was driven by sequential alignment of anatomical landmarks between the label maps derived from MERFISH and AllenCCFv3. A total of nine regions, including isocortical layers 2/3, 5, and 6, the striatum, hippocampus, thalamus, and medial/lateral habenula, were registered in an empirically determined order. After each round, anatomical alignment was visually assessed by an expert, and the next structure was selected to maximize improvement in the remaining misaligned regions.
The final transform for each section combined the global affine alignment and the per-structure deformable registrations. These were concatenated to generate a 3D mapping from the original MERFISH space to the AllenCCFv3 coordinate system. Once established, the composite mapping enables direct transfer of gene-level and cell-type data from MERFISH into atlas space, allowing integration with other imaging and annotation datasets.
DevCCF velocity flow transformation model
The Developmental Common Coordinate Framework (DevCCF)16 provides a discrete set of age-specific templates that temporally sample the developmental trajectory. To model this biological progression more continuously, we introduce a velocity flow–based paradigm for inferring diffeomorphic transformations between developmental stages. This enables anatomically plausible estimation of intermediate templates or mappings at arbitrary timepoints between the E11.5 and P56 endpoints of the DevCCF. Our approach builds on established insights from time-varying diffeomorphic registration65, where a velocity field governs the smooth deformation of anatomical structures over time. Importantly, the framework is extensible and can naturally accommodate additional timepoints for the potential expansion of the DevCCF.
We first coalesced the anatomical labels across the seven DevCCF templates (E11.5, E13.5, E15.5, E18.5, P4, P14, P56) into 26 common structures that could be consistently identified across development. These include major brain regions such as the cortex, cerebellum, hippocampus, midbrain, and ventricles. For each successive pair of templates, we performed multi-label deformable registration using ANTsX to generate forward and inverse transforms between anatomical label volumes. From the P56 space, we randomly sampled approximately one million points within and along the boundaries of each labeled region and propagated them through each pairwise mapping step (e.g., P56 → P14, P14 → P4, …, E13.5 → E11.5). This procedure created time-indexed point sets tracing the spatial evolution of each region.
Using these point sets, we fit a continuous velocity field over developmental time using a generalized B-spline scattered data approximation method86. The field was parameterized over a log-scaled time axis to ensure finer temporal resolution during early embryonic stages, where morphological changes are most rapid. Optimization proceeded for approximately 125 iterations, minimizing the average Euclidean norm between transformed points at each step. Eleven integration points were used to ensure numerical stability. The result is a smooth, differentiable vector field that defines a diffeomorphic transform between any two timepoints within the template range.
This velocity model can be used to estimate spatial transformations between any pair of developmental stages—even those for which no empirical template exists—allowing researchers to create interpolated atlases, align new datasets, or measure continuous structural changes. It also enables developmental alignment of multi-modal data (e.g., MRI to LSFM) by acting as a unifying spatiotemporal scaffold. The underlying components for velocity field fitting and integration are implemented in ITK, and the complete workflow is accessible in both ANTsPy (ants.fit_time_varying_transform_to_point_sets(...)) and ANTsR. In addition to the DevCCF use case, self-contained examples and usage tutorials are provided in our public codebase.
Automated brain extraction and parcellation with ANTsXNet
To support template-based deep learning approaches for structural brain extraction and parcellation, we implemented dedicated pipelines using the ANTsXNet framework. ANTsXNet comprises open-source deep learning libraries in both Python (ANTsPyNet) and R (ANTsRNet) that interface with the broader ANTsX ecosystem and are built on TensorFlow/Keras. Our mouse brain pipelines mirror existing ANTsXNet tools for human imaging but are adapted for species-specific anatomical variation, lower SNR, and heterogeneous acquisition protocols.
Deep learning training setup
All network-based approaches were implemented using a standard U-net87 architecture and hyperparameters previously evaluated in ANTsXNet pipelines for human brain imaging45. This design follows the ‘no-new-net’ principle88, which demonstrates that a well-configured, conventional U-net can achieve robust and competitive performance across a wide range of biomedical segmentation tasks with little to no architectural modifications from the original. Both networks use a 3D U-net architecture implemented in TensorFlow/Keras, with five encoding/decoding levels and skip connections. The loss function combined Dice and categorical cross-entropy terms. Training used a batch size of 4, Adam optimizer with an initial learning rate of 2e-4, and early stopping based on validation loss. Training was performed on an NVIDIA DGX system (4 × Tesla V100 GPUs, 256 GB RAM). Model weights and preprocessing routines are shared across ANTsPyNet and ANTsRNet to ensure reproducibility and language portability. For both published and unpublished trained networks available through ANTsXNet, all training scripts and data augmentation generators are publicly available at https://github.com/ntustison/ANTsXNetTraining.
Robust data augmentation was critical to generalization across scanners, contrast types, and resolutions. We applied both intensity- and shape-based augmentation strategies (a minimal sketch follows the list):

- Intensity augmentations:
  - Noise injection: ants.add_noise_to_image(...)
  - Simulated bias fields: antspynet.simulate_bias_field(...)
  - Histogram warping: antspynet.histogram_warp_image_intensities(...)
- Shape augmentations:
  - Random nonlinear deformations and affine transforms: antspynet.randomly_transform_image_data(...)
  - Anisotropic resampling across axial, sagittal, and coronal planes: ants.resample_image(...)
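The sketch below illustrates two of these augmentations using calls we can state confidently (anisotropic resampling and noise injection); the bias-field and histogram-warping steps follow analogous antspynet calls, with the exact parameters available in the public training scripts.

```python
import ants

# Any 2D/3D image works for illustration; r16 ships with ANTsPy.
image = ants.image_read(ants.get_ants_data("r16")).clone("float")

# Shape augmentation: simulate a thick-slice acquisition via anisotropic
# resampling (target spacing in physical units).
anisotropic = ants.resample_image(image, (2.0, 8.0),
                                  use_voxels=False, interp_type=0)

# Intensity augmentation: additive Gaussian noise (mean, standard deviation).
noisy = ants.add_noise_to_image(anisotropic,
                                noise_model="additivegaussian",
                                noise_parameters=(0.0, 5.0))
```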
Brain extraction
We originally trained a mouse-specific brain extraction model on two manually masked T2-weighted templates, generated from public datasets67,68. One of the templates was constructed from orthogonal 2D acquisitions using B-spline–based volumetric synthesis via ants.fit_bspline_object_to_scattered_data(...). Normalized gradient magnitude was used as a weighting function to emphasize boundaries during reconstruction86.
This training strategy provides strong spatial priors despite limited data by leveraging high-quality template images and aggressive augmentation to mimic population variability. During the development of this work, the network was further refined through community engagement. A user from a U.S.-based research institute applied this publicly available (but then unpublished) brain extraction tool to their own mouse MRI dataset. Based on feedback and iterative collaboration with the ANTsX team, the model was retrained and improved to better generalize to additional imaging contexts. This reflects our broader commitment to community-driven development and responsiveness to user needs across diverse mouse brain imaging scenarios.
The final trained network is available via ANTsXNet through the function antspynet.mouse_brain_extraction(...). Additionally, both template/mask pairs are accessible via ANTsXNet. For example, one such image pair is available via:
- Template: antspynet.get_antsxnet_data("bsplineT2MouseTemplate")
- Brain mask: antspynet.get_antsxnet_data("bsplineT2MouseTemplateBrainMask")
Brain parcellation
For brain parcellation, we trained a 3D U-net model using the DevCCF P56 T2-weighted template and anatomical segmentations derived from AllenCCFv3. This template-based training strategy enables the model to produce accurate, multi-region parcellations without requiring large-scale annotated subject data.
To normalize intensity across specimens, input images were preprocessed using rank-based intensity normalization (ants.rank_intensity(...)). Spatial harmonization was achieved through affine and deformable alignment of each extracted brain to the P56 template prior to inference. In addition to the normalized image input, the network also receives prior probability maps derived from the atlas segmentations, providing additional spatial context.
This general parcellation deep learning framework has also been applied in collaboration with other groups pursuing related but distinct projects. In one case, a model variant was adapted for T2-weighted MRI using an alternative anatomical labeling scheme; in another, a separate model was developed for serial two-photon tomography (STPT) with a different parcellation set. All three models are accessible through a shared interface in ANTsXNet: antspynet.mouse_brain_parcellation(...). Ongoing work is further extending this approach to embryonic mouse brain data. These independent efforts reflect broader community interest in adaptable parcellation tools and reinforce the utility of ANTsXNet as a platform for reproducible, extensible deep learning workflows.
Evaluation and reuse
To assess model generalizability, both the brain extraction and parcellation networks were evaluated on an independent longitudinal dataset comprising multiple imaging sessions with varied acquisition parameters69. Although each label set or imaging modality required retraining, the process was streamlined by the reusable ANTsX infrastructure, enabling rapid adaptation with minimal overhead. These results illustrate the practical benefits of a template-based, low-shot strategy and modular deep learning framework. All trained models, associated training scripts, and supporting resources are openly available and designed for straightforward integration into ANTsX workflows.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
The following datasets were used in this study and are publicly available:

- Allen Common Coordinate Framework (AllenCCFv3): available from the Allen Institute for Brain Science at https://atlas.brain-map.org/atlas.
- Developmental Common Coordinate Framework (DevCCF) MRI and LSFM datasets: publicly available via the Kim Lab at https://kimlab.io/home/projects/DevCCF/index.html.
- MERFISH spatial transcriptomics data: previously published46 and available at https://portal.brain-map.org.
- Developmental datasets for brain extraction and segmentation:
  - High-resolution MRI data of the brain of C57BL/6 and BTBR mice in three different anatomical views: https://data.mendeley.com/datasets/dz9x23fttt/1.
  - CAMRI mouse brain data: https://openneuro.org/datasets/ds002868/versions/1.0.1.
- Evaluation dataset for brain extraction and segmentation: a longitudinal microstructural MRI dataset in healthy C57Bl/6 mice at 9.4 Tesla, available at https://www.frdr-dfdr.ca/repo/dataset/9ea832ad-7f36-4e37-b7ac-47167c0001c1.
- ANTsXNet-pretrained templates and models: available through ANTsPyNet at https://github.com/ANTsX/ANTsPyNet.

Source data are provided with this paper.
Code availability
All processing pipelines and supporting code are openly available at:

- https://github.com/ntustison/ANTsXMouseBrainMapping (DevCCF velocity model and deep learning parcellation); this repository also contains the text, scripts, and data to reproduce the manuscript, including figures.
- https://github.com/dontminchenit/CCFAlignmentToolkit (MERFISH and fMOST workflows).
References
Keller, P. J. & Ahrens, M. B. Visualizing whole-brain activity and development at the single-cell level using light-sheet microscopy. Neuron 85, 462–83 (2015).
La Manno, G. et al. Molecular architecture of the developing mouse brain. Nature 596, 92–96 (2021).
Wen, L. et al. Single-cell technologies: From research to application. Innovation 3, 100342 (2022).
Oh, S. W. et al. A mesoscale connectome of the mouse brain. Nature 508, 207–14 (2014).
Gong, H. et al. Continuously tracing brain-wide long-distance axonal projections in mice at a one-micron voxel resolution. Neuroimage 74, 87–98 (2013).
Li, A. et al. Micro-optical sectioning tomography to obtain a high-resolution atlas of the mouse brain. Science 330, 1404–8 (2010).
Ueda, H. R. et al. Tissue clearing and its applications in neuroscience. Nat. Rev. Neurosci. 21, 61–79 (2020).
Stahl, P. L. et al. Visualization and analysis of gene expression in tissue sections by spatial transcriptomics. Science 353, 78–82 (2016).
Burgess, D. J. Spatial transcriptomics coming of age. Nat. Rev. Genet 20, 317 (2019).
Hardwick, S. A. et al. Single-nuclei isoform RNA sequencing unlocks barcoded exon connectivity in frozen brain tissue. Nat. Biotechnol. 40, 1082–1092 (2022).
Hawrylycz, M. et al. A guide to the BRAIN initiative cell census network data ecosystem. PLoS Biol. 21, e3002133 (2023).
Wang, Q. et al. The allen mouse brain common coordinate framework: A 3D reference atlas. Cell 181, 936–953.e20 (2020).
Perens, J. et al. An optimized mouse brain atlas for automated mapping and quantification of neuronal activity using iDISCO+ and light sheet fluorescence microscopy. Neuroinformatics 19, 433–446 (2021).
Ma, Y. et al. A three-dimensional digital atlas database of the adult C57BL/6J mouse brain by magnetic resonance microscopy. Neuroscience 135, 1203–1215 (2005).
Qu, L. et al. Cross-modal coherent registration of whole mouse brains. Nat. Methods 19, 111–118 (2022).
Kronman, F. N. et al. Developmental mouse brain common coordinate framework. Nat. Commun. 15, 9072 (2024).
Chuang, N. et al. An MRI-based atlas and database of the developing mouse brain. Neuroimage 54, 80–89 (2011).
Dries, R. et al. Advances in spatial transcriptomic data analysis. Genome Res. 31, 1706–1718 (2021).
Ricci, P. et al. Removing striping artifacts in light-sheet fluorescence microscopy: A review. Prog. biophysics Mol. Biol. 168, 52–65 (2022).
Agarwal, N., Xu, X. & Gopi, M. Robust registration of mouse brain slices with severe histological artifacts. in Proceedings of the tenth indian conference on computer vision, graphics and image processing 1–8 (2016).
Agarwal, N., Xu, X. & Gopi, M. Automatic detection of histological artifacts in mouse brain slice images. in Medical computer vision and bayesian and graphical models for biomedical imaging: MICCAI 2016 international workshops, MCV and BAMBI, athens, greece, october 21, 2016, revised selected papers 8 105–115 (Springer, 2017).
Tward, D. et al. 3d mapping of serial histology sections with anomalies using a novel robust deformable registration algorithm. in International workshop on multimodal brain image analysis 162–173 (Springer, 2019).
Cahill, L. S. et al. Preparation of fixed mouse brains for MRI. Neuroimage 60, 933–939 (2012).
Biancalani, T. et al. Deep learning and alignment of spatially resolved single-cell transcriptomes with tangram. Nat. Methods 18, 1352–1362 (2021).
Sunkin, S. M. et al. Allen brain atlas: An integrated spatio-temporal portal for exploring the central nervous system. Nucleic acids Res. 41, D996–D1008 (2012).
Kim, Y. et al. Brain-wide maps reveal stereotyped cell-type-based cortical architecture and subcortical sexual dimorphism. Cell 171, 456–469 (2017).
Fürth, D. et al. An interactive framework for whole-brain maps at cellular resolution. Nat. Neurosci. 21, 139–149 (2018).
Li, Y. et al. mBrainAligner-web: A web server for cross-modal coherent registration of whole mouse brains. Bioinformatics 38, 4654–4655 (2022).
Puchades, M. A., Csucs, G., Ledergerber, D., Leergaard, T. B. & Bjaalie, J. G. Spatial registration of serial microscopic brain images to three-dimensional reference atlases with the QuickNII tool. PLoS ONE 14, e0216796 (2019).
Eastwood, B. S. et al. Whole mouse brain reconstruction and registration to a reference atlas with standard histochemical processing of coronal sections. J. Comp. Neurol. 527, 2170–2178 (2019).
Ni, H. et al. A robust image registration interface for large volume brain atlas. Sci. Rep. 10, 2139 (2020).
Pallast, N. et al. Processing pipeline for atlas-based imaging data analysis of structural and functional mouse brain MRI (AIDAmri). Front Neuroinform 13, 42 (2019).
Celestine, M., Nadkarni, N. A., Garin, C. M., Bougacha, S. & Dhenain, M. Sammba-MRI: A library for processing SmAll-MaMmal BrAin MRI data in python. Front Neuroinform 14, 24 (2020).
Ioanas, H.-I., Marks, M., Zerbi, V., Yanik, M. F. & Rudin, M. An optimized registration workflow and standard geometric space for small animal brain imaging. Neuroimage 241, 118386 (2021).
Aggarwal, M., Zhang, J., Miller, M. I., Sidman, R. L. & Mori, S. Magnetic resonance imaging and micro-computed tomography combined atlas of developing and adult mouse brains for stereotaxic surgery. Neuroscience 162, 1339–1350 (2009).
Chandrashekhar, V. et al. CloudReg: Automatic terabyte-scale cross-modal brain volume registration. Nat. Methods 18, 845–846 (2021).
Jin, M. et al. SMART: An open-source extension of WholeBrain for intact mouse brain registration and segmentation. eNeuro 9, https://doi.org/10.1523/ENEURO.0482-21.2022 (2022).
Negwer, M. et al. FriendlyClearMap: An optimized toolkit for mouse brain mapping and analysis. Gigascience 12, giad035 (2022).
Lin, W. et al. Whole-brain mapping of histaminergic projections in mouse brain. Proc. Natl. Acad. Sci. 120, e2216231120 (2023).
Zhang, M. et al. Spatially resolved cell atlas of the mouse primary motor cortex by MERFISH. Nature 598, 137–143 (2021).
Shi, H. et al. Spatial atlas of the mouse central nervous system at molecular resolution. Nature 622, 552–561 (2023).
Zhang, Y. et al. Reference-based cell type matching of in situ image-based spatial transcriptomics data on primary visual cortex of mouse brain. Sci. Rep. 13, 9567 (2023).
Klein, S., Staring, M., Murphy, K., Viergever, M. A. & Pluim, J. P. W. Elastix: A toolbox for intensity-based medical image registration. IEEE Trans. Med Imaging 29, 196–205 (2010).
Fedorov, A. et al. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn. Reson. Imaging 30, 1323–1341 (2012).
Tustison, N. J. et al. The ANTsX ecosystem for quantitative biological and medical imaging. Sci. Rep. 11, 9068 (2021).
Yao, Z. et al. A high-resolution transcriptomic and spatial atlas of cell types in the whole mouse brain. Nature 624, 317–332 (2023).
Pagani, M., Damiano, M., Galbusera, A., Tsaftaris, S. A. & Gozzi, A. Semi-automated registration-based anatomical labelling, voxel based morphometry and cortical thickness mapping of the mouse brain. J. Neurosci. Methods 267, 62–73 (2016).
Anderson, R. J. et al. Small animal multivariate brain analysis (SAMBA) - a high throughput pipeline with a validation framework. Neuroinformatics 17, 451–472 (2019).
Allan Johnson, G. et al. Whole mouse brain connectomics. J. Comp. Neurol. 527, 2146–2157 (2019).
Avants, B. B., Epstein, C. L., Grossman, M. & Gee, J. C. Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain. Med Image Anal. 12, 26–41 (2008).
Tustison, N. J. et al. N4ITK: Improved N3 bias correction. IEEE Trans. Med Imaging 29, 1310–20 (2010).
Bajcsy, R. & Broit, C. Matching of deformed images. in Sixth International Conference on Pattern Recognition (ICPR’82) 351–353 (1982).
Bajcsy, R. & Kovacic, S. Multiresolution elastic matching. Comput. Vis., Graph., Image Process. 46, 1–21 (1989).
Gee, J. C., Reivich, M. & Bajcsy, R. Elastically deforming 3D atlas to match anatomical brain images. J. Comput Assist Tomogr. 17, 225–36 (1993).
Klein, A. et al. Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration. Neuroimage 46, 786–802 (2009).
Murphy, K. et al. Evaluation of registration methods on thoracic CT: The EMPIRE10 challenge. IEEE Trans. Med Imaging 30, 1901–20 (2011).
Baheti, B. et al. The brain tumor sequence registration challenge: Establishing correspondence between pre-operative and follow-up MRI scans of diffuse glioma patients. https://doi.org/10.48550/arXiv.2112.06979 (2021).
Avants, B. B. et al. The optimal template effect in hippocampus studies of diseased populations. Neuroimage 49, 2457–66 (2010).
Avants, B. B., Tustison, N. J., Wu, J., Cook, P. A. & Gee, J. C. An open source multivariate framework for n-tissue segmentation with evaluation on public data. Neuroinformatics 9, 381–400 (2011).
Manjón, J. V., Coupé, P., Martí-Bonmatí, L., Collins, D. L. & Robles, M. Adaptive non-local means denoising of MR images with spatially varying noise levels. J. Magn. Reson Imaging 31, 192–203 (2010).
Wang, H. et al. Multi-atlas segmentation with joint label fusion. IEEE Trans. Pattern Anal. Mach. Intell. 35, 611–23 (2013).
Tustison, N. J. et al. Optimal symmetric multimodal templates and concatenated random forests for supervised brain tumor segmentation (simplified) with ANTsR. Neuroinformatics https://doi.org/10.1007/s12021-014-9245-2 (2014).
Tustison, N. J., Yang, Y. & Salerno, M. Advanced normalization tools for cardiac motion correction. in Statistical atlases and computational models of the heart - imaging and modelling challenges (eds. Camara, O. et al.) vol. 8896 3–12 (Springer International Publishing, 2015).
McCormick, M., Liu, X., Jomier, J., Marion, C. & Ibanez, L. ITK: Enabling reproducible research and open science. Front Neuroinform 8, 13 (2014).
Beg, M. F., Miller, M. I., Trouvé, A. & Younes, L. Computing large deformation metric mappings via geodesic flows of diffeomorphisms. Int. J. Comput. Vis. 61, 139–157 (2005).
Tustison, N. J. & Avants, B. B. Explicit B-spline regularization in diffeomorphic image registration. Front Neuroinform 7, 39 (2013).
Hsu, L.-M. et al. CAMRI mouse brain MRI data. OpenNeuro. [Dataset] https://doi.org/10.18112/openneuro.ds002868.v1.0.0 (2020).
Reshetnikov, V. et al. High-resolution MRI data of brain C57BL/6 and BTBR mice in three different anatomical views. Mendeley Data, V1, https://doi.org/10.17632/dz9x23fttt.1 (2021).
Rahman, N., Xu, K., Budde, M. D., Brown, A. & Baron, C. A. A longitudinal microstructural MRI dataset in healthy C57Bl/6 mice at 9.4 tesla. Sci. Data 10, 94 (2023).
Liu, J. et al. Concordance of MERFISH spatial transcriptomics with bulk and single-cell RNA sequencing. Life Sci Alliance 6, e202201701 (2023).
Stringer, C., Wang, T., Michaelos, M. & Pachitariu, M. Cellpose: A generalist algorithm for cellular segmentation. Nat. Methods 18, 100–106 (2021).
Jia, H., Yap, P.-T., Wu, G., Wang, Q. & Shen, D. Intermediate templates guided groupwise registration of diffusion tensor images. NeuroImage 54, 928–939 (2011).
Tang, S., Fan, Y., Wu, G., Kim, M. & Shen, D. RABBIT: Rapid alignment of brains by building intermediate templates. NeuroImage 47, 1277–1287 (2009).
Dewey, B. E., Carass, A., Blitz, A. M. & Prince, J. L. Efficient multi-atlas registration using an intermediate template image. in Proceedings of SPIE – The International Society for Optical Engineering vol. 10137 (NIH Public Access, 2017).
Perens, J. et al. Multimodal 3D mouse brain atlas framework with the skull-derived coordinate system. Neuroinformatics 21, 269–286 (2023).
Rotolo, T., Smallwood, P. M., Williams, J. & Nathans, J. Genetically-directed, cell type-specific sparse labeling for the analysis of neuronal morphology. PLoS One 3, e4099 (2008).
Peng, H. et al. Morphological diversity of single neurons in molecularly defined cell types. Nature 598, 174–181 (2021).
Gong, H. et al. High-throughput dual-colour precision imaging for brain-wide connectome with cytoarchitectonic landmarks at the cellular level. Nat. Commun. 7, 12142 (2016).
Wang, J. et al. Divergent projection patterns revealed by reconstruction of individual neurons in orbitofrontal cortex. Neurosci. Bull. 37, 461–477 (2021).
Avants, B. B. et al. The Insight ToolKit image registration framework. Front Neuroinform 8, 44 (2014).
Chon, U., Vanselow, D. J., Cheng, K. C. & Kim, Y. Enhanced and unified anatomical labeling for a common mouse brain atlas. Nat. Commun. 10, 5067 (2019).
Tasic, B. et al. Adult mouse cortical cell taxonomy revealed by single cell transcriptomics. Nat. Neurosci. 19, 335–46 (2016).
Bergmann, E., Gofman, X., Kavushansky, A. & Kahn, I. Individual variability in functional connectivity architecture of the mouse brain. Commun. Biol. 3, 738 (2020).
Billot, B. et al. SynthSeg: Segmentation of brain MRI scans of any contrast and resolution without retraining. Med Image Anal. 86, 102789 (2023).
Rolfe, S. M., Whikehart, S. M. & Maga, A. M. Deep learning enabled multi-organ segmentation of mouse embryos. Biol. Open 12, bio059698 (2023).
Tustison, N. J. & Gee, J. C. Generalized n-D C^k B-spline scattered data approximation with confidence values. in Medical Imaging and Augmented Reality (eds. Yang, G.-Z., Jiang, T., Shen, D., Gu, L. & Yang, J.) 76–83 (Springer Berlin Heidelberg, 2006).
Falk, T. et al. U-net: Deep learning for cell counting, detection, and morphometry. Nat. Methods 16, 67–70 (2019).
Isensee, F., Jaeger, P. F., Kohl, S. A. A., Petersen, J. & Maier-Hein, K. H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18, 203–211 (2021).
Tustison, N. J. et al. Image- versus histogram-based considerations in semantic segmentation of pulmonary hyperpolarized gas images. Magn. Reson Med. 86, 2822–2836 (2021).
Acknowledgements
Support for the research reported in this work includes funding from the National Institute of Biomedical Imaging and Bioengineering (R01-EB031722) and the National Institute of Mental Health (RF1-MH124605 and U24-MH114827, with RF1-MH124605 supporting Y.K.). We also acknowledge Dr. Adam Raikes (GitHub @araikes) of the Center for Innovation in Brain Science at the University of Arizona for contributing data used to refine the weights of the mouse brain extraction network.
Author information
Contributions
N.T., M.C., and J.G. wrote the main manuscript text and prepared the figures. M.C., M.K., R.D., S.S., Q.W., L.N., J.D., C.G., and J.G. developed the Allen registration pipelines. N.T., F.K., J.G., and Y.K. developed the time-varying velocity transformation model for the DevCCF. N.T. and M.T. developed the brain parcellation and cortical thickness methodology. All authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review information
Nature Communications thanks Hannah Spitzer, Shaun Warrington and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Tustison, N.J., Chen, M., Kronman, F.N. et al. The ANTsX ecosystem for mapping the mouse brain. Nat Commun 16, 11548 (2025). https://doi.org/10.1038/s41467-025-66741-5