Introduction

Printing medical images as three-dimensional (3D) objects aids teaching anatomy, training surgeons1, and helping individuals appreciate complex anatomical shapes2. Because brain imaging data is typically available for patients as well as individuals who volunteer for neuroimaging research, 3D printing can aid public engagement. 3D printers have recently become commonplace, and online 3D printing services support many methods and materials3.

Unfortunately, the current methods for converting medical data into printable meshes are arduous2. Specifically, Madan2 suggests sequentially using the independent desktop software applications FreeSurfer4 (to estimate the cortical surface), meshgeometry (to convert the mesh format), VCGLib tridecimator (to simplify the mesh), MeshLab5 (to convert the mesh format) and netfabb (to repair geometry). Creating a mesh from a single individual scan requires “many hours”2. Furthermore, installing the required software has complications: FreeSurfer requires a license and does not natively support the Windows operating system (requiring the Windows Subsystem for Linux), VCGLib must be compiled from source, and since the article was published netfabb has become commercial software. NiftyWeb6 provides a novel cloud-based solution to this problem, where users upload a T1-weighted magnetic resonance imaging (MRI) scan and are emailed a segmented brain mesh. This is convenient, but private data is shared externally and, in our experience, the process takes several hours and produces meshes with errors that prevent printing without additional repair tools (such as netfabb).

In contrast, our brain2print (https://brain2print.org) web page creates meshes for 3D printing in minutes (Fig. 1). Brain2print provides an edge-based application, with all computations performed on the user’s own computer. The zero-footprint design does not require the user to install any software. By using web standards, the software should run on any computer, regardless of hardware or software. Since the image data never leaves the user’s computer, privacy concerns are eliminated.

Fig. 1
figure 1

brain2print web page for converting brain imaging data for 3D printing. The user can drag-and-drop an anatomical scan (shown as the grayscale background). Next, they choose a segmentation model that parcellates the brain. Segmented brain regions are then shown in different colors. Users can interactively click on brain locations (here the crosshair is on the Pallidum). Finally, the user can use the ‘Create Mesh’ button to generate a triangulated mesh. A dialog (shown) allows custom settings for mesh creation.

Users can drag and drop their anatomical brain scan onto the web page, where it is displayed as an interactive image. The user next selects a BrainChop AI inference model7 for segmenting their brain scan. These models harness the user’s own local graphics card for inference, providing a cross-platform and cross-hardware solution. Models of different computational complexity are available for users to try if the fastest segmentation results are unacceptable. Once the user is satisfied with the segmentation, they proceed with converting the segmented voxels to a mesh by pressing the ‘Create Mesh’ button. A dialog appears that allows the user to choose options that influence the complexity and structure of the mesh (Fig. 2). The interactive visualization allows the user to ensure the mesh is acceptable. The final step is to press the ‘Save Mesh’ button to save the finished mesh in a format suitable for printing.

Fig. 2
figure 2

Processing stages of brain2print. (a) Grayscale voxel-based input image. (b) Segmented voxels using a BrainChop model. (c) The segmented voxels are converted to a triangulated surface. (d) Optionally, the mesh can be smoothed to appear less jagged. (e) Optionally, the mesh can be simplified to reduce triangle count.

Methods

Mirroring the user’s experience, the implementation of brain2print has two stages. First, the brainchop AI models are used to segment a raw MRI scan (Fig. 2a) into brain regions (2b). The second stage converts the segmented voxels into a triangulated mesh (2c). The AI models of the first stage leverage WebGL via TensorFlowJS, harnessing the user’s massively parallel graphics card. In contrast, the second voxel-to-mesh stage leverages the user’s central processing unit (CPU). While we have previously described the brainchop models8, the high-performance voxel-to-mesh operations are new, and we therefore describe them in more detail here. These new voxel-to-mesh features have been added to our niimath9 project, while we also provide access to slower but more precise functions via a plugin to the itk-wasm project10. Both projects compile proven tools written in the C language to WebAssembly (for web pages, as here).
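As a rough illustration of how such a C routine compiled to WebAssembly is driven from the browser, the sketch below copies voxel data into the WebAssembly heap and invokes an exported function via Emscripten’s standard glue code. The module factory createNiimathModule and the exported function voxels_to_mesh are hypothetical placeholders, not the actual niimath exports.

```typescript
// Sketch of calling a C routine compiled to WebAssembly with Emscripten.
// `createNiimathModule` and `voxels_to_mesh` are hypothetical names used for
// illustration; the real niimath exports may differ.
import createNiimathModule from './niimath.js'; // Emscripten-generated glue code

async function segmentedVoxelsToMesh(voxels: Uint8Array): Promise<void> {
  const Module = await createNiimathModule();

  // Copy the voxel data into the WebAssembly heap.
  const ptr = Module._malloc(voxels.length);
  Module.HEAPU8.set(voxels, ptr);

  // Invoke the (hypothetical) exported C function:
  //   int voxels_to_mesh(uint8_t* voxels, int nvox);
  const status = Module.ccall(
    'voxels_to_mesh',        // C function name (assumption)
    'number',                // return type
    ['number', 'number'],    // argument types
    [ptr, voxels.length]
  );

  Module._free(ptr);
  if (status !== 0) {
    throw new Error(`voxels_to_mesh failed with code ${status}`);
  }
}
```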

NiiVue (https://github.com/niivue/niivue) is used to load and display images. This ensures drag and drop support for common interchange formats including DICOM, NRRD and NIfTI. The DICOM format and its file organization are particularly complicated, and therefore NiiVue is not able to read all variations of this format. In cases where users are unable to import DICOM images directly, they can use dcm2niix to import images11. NiiVue can also read the native proprietary voxel-based image formats of many popular tools, including MRtrix12 (MIF extension), AFNI13 (BRIK), FreeSurfer4, and DSI-Studio14 (SRC). As seen in Fig. 1, NiiVue allows the user to interactively inspect the voxel-based image, the AI-aided segmentation as well as the resulting triangulated mesh. This allows the user to fine-tune the mesh prior to the time-consuming mesh printing process. Finally, NiiVue supports various export formats for the resulting triangulated mesh, specifically STL, WaveFront OBJ and MZ3.
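For reference, a minimal NiiVue usage sketch is shown below; the file names are placeholders, and the options used by brain2print itself may differ.

```typescript
import { Niivue } from '@niivue/niivue';

// Attach NiiVue to a <canvas id="gl"> element and load data for inspection.
// File names are placeholders; brain2print accepts drag-and-drop input.
const nv = new Niivue();
await nv.attachTo('gl');

// Load a voxel-based anatomical scan (NIfTI, NRRD, DICOM, ...).
await nv.loadVolumes([{ url: './t1.nii.gz' }]);

// Load a previously created triangulated mesh alongside the volume.
await nv.loadMeshes([{ url: './brain.mz3' }]);
```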

Trade offs in mesh creation

Converting voxel-based data into a mesh involves key decisions that influence the shape of the final surface. As shown in Fig. 2, directly generating a mesh from a binary segmentation (2b) often results in a jagged surface (2c), which may require smoothing for better appearance and usability (2d). Large, complex volumes can also create oversized mesh files, necessitating simplification to reduce file size (2e).
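The smoothing step can be illustrated with simple Laplacian smoothing, in which each vertex is moved toward the mean of its neighbors. The sketch below is illustrative only; the smoothing implemented in niimath may use a different variant (for example, one that better preserves volume).

```typescript
// Illustrative Laplacian smoothing: each vertex moves toward the mean of its
// neighbors. The actual smoothing used by brain2print/niimath may differ.
function laplacianSmooth(
  vertices: Float32Array,   // xyz triplets
  triangles: Uint32Array,   // vertex indices, 3 per triangle
  iterations = 10,
  lambda = 0.5              // 0 = no smoothing, 1 = full averaging
): Float32Array {
  const nVert = vertices.length / 3;
  // Build vertex adjacency from the triangle list.
  const neighbors: Set<number>[] = Array.from({ length: nVert }, () => new Set());
  for (let t = 0; t < triangles.length; t += 3) {
    const [a, b, c] = [triangles[t], triangles[t + 1], triangles[t + 2]];
    neighbors[a].add(b).add(c);
    neighbors[b].add(a).add(c);
    neighbors[c].add(a).add(b);
  }
  let pos = vertices.slice();
  for (let it = 0; it < iterations; it++) {
    const next = pos.slice();
    for (let v = 0; v < nVert; v++) {
      const nbrs = neighbors[v];
      if (nbrs.size === 0) continue;
      let mx = 0, my = 0, mz = 0;
      for (const n of nbrs) {
        mx += pos[3 * n]; my += pos[3 * n + 1]; mz += pos[3 * n + 2];
      }
      mx /= nbrs.size; my /= nbrs.size; mz /= nbrs.size;
      // Move the vertex a fraction lambda toward its neighborhood mean.
      next[3 * v]     += lambda * (mx - pos[3 * v]);
      next[3 * v + 1] += lambda * (my - pos[3 * v + 1]);
      next[3 * v + 2] += lambda * (mz - pos[3 * v + 2]);
    }
    pos = next;
  }
  return pos;
}
```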

The optimal mesh topology depends on the intended use case15. Elongated triangles are considered ideal for triangle simplification algorithms focused on minimizing geometric error but are unsuitable for applications like finite element simulations16. Similarly, faster methods may introduce subtle defects, such as holes or self-intersecting triangles. While modern 3D printing tools generally tolerate these issues, they can pose challenges in other contexts. To accommodate diverse needs and equipment, our approach provides two pipeline options: one for faster processing that minimizes geometric error and another that prioritizes defect prevention and produces more uniformly shaped and sized triangles.

For 3D printing, additional considerations arise. Unlike meshes for computer visualization, printed meshes are typically hollow to minimize material usage, which increases complexity. The wall thickness must suit the printing material, and hollow meshes require escape holes so that unused filler material can be removed.

Fast mesh creation

Our fast pipeline is optimized for rapid mesh generation, achieving processing times of mere seconds even on outdated, battery-powered devices (a claim substantiated in the results section). Development of this pipeline prioritized performance, with extensive testing of several variants of Sven Forstmann’s implementation of mesh simplification using quadric error metrics17, a method that focuses on minimizing geometric error.
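For reference, the standard quadric error metric assigns to a vertex $v$ (in homogeneous coordinates) the sum of its squared distances to the planes of its incident triangles,

$$\Delta(v) = v^{\top} \Big( \sum_{p \in \mathrm{planes}(v)} p\, p^{\top} \Big) v, \qquad p = [a,\, b,\, c,\, d]^{\top},\quad a^2 + b^2 + c^2 = 1,$$

where $ax + by + cz + d = 0$ is the equation of a plane; edge collapses are then ordered so that each collapse introduces the smallest possible error.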

We found that WebAssembly methods are faster than pure JavaScript methods if binary data transfer is used, but that WebAssembly faces a huge penalty if triangulated meshes are passed as ASCII text (the WaveFront OBJ format). This can be seen with our live demo web page (https://neurolabusc.github.io/simplifyjs/), where one can selectively choose the simplification method. Specifically, we found that simplifying the ‘brain’ mesh from 327,678 to 81,920 triangles (removing 75%) required 841 ms with the fastest pure JavaScript method (‘niivue’), 1118 ms for WebAssembly with text transfer (WASM) and 256 ms when combining WebAssembly and binary transfer (WASMmz3). We also determined that Sven Forstmann’s original C++ implementation could be optimized by using pure C code that pre-allocates arrays (presumably, there is a penalty for growing arrays in WebAssembly). This finding is demonstrated by our live demo (https://neurolabusc.github.io/niivue-simplify/), where reducing the number of triangles to 25% for the lh.mz3 mesh required 277 ms for the C++ code and 209 ms for the pure C implementation. Based on this survey, the fastest C code compiled to WebAssembly is included in brain2print. The subsequent evaluation builds on this optimized code.
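The sketch below contrasts the two transfer strategies measured above: copying vertex and triangle typed arrays directly into the WebAssembly heap versus serializing the mesh to WaveFront OBJ text before crossing the JavaScript/WebAssembly boundary. The exported function names are hypothetical placeholders.

```typescript
// Contrast of the two transfer strategies discussed above. The exported
// function names (simplify_binary, simplify_obj) are hypothetical.

// Binary transfer: copy vertex/index typed arrays straight into the WASM heap.
function passBinary(Module: any, verts: Float32Array, tris: Uint32Array): void {
  const vPtr = Module._malloc(verts.byteLength);
  const tPtr = Module._malloc(tris.byteLength);
  Module.HEAPF32.set(verts, vPtr / 4);   // heap offsets are element indices
  Module.HEAPU32.set(tris, tPtr / 4);
  Module.ccall('simplify_binary', 'number',
    ['number', 'number', 'number', 'number'],
    [vPtr, verts.length / 3, tPtr, tris.length / 3]);
  Module._free(vPtr);
  Module._free(tPtr);
}

// Text transfer: serialize the mesh to WaveFront OBJ before crossing the
// JavaScript/WebAssembly boundary, which is markedly slower for large meshes.
function passAsObj(Module: any, verts: Float32Array, tris: Uint32Array): void {
  const lines: string[] = [];
  for (let i = 0; i < verts.length; i += 3) {
    lines.push(`v ${verts[i]} ${verts[i + 1]} ${verts[i + 2]}`);
  }
  for (let i = 0; i < tris.length; i += 3) {
    // OBJ indices are 1-based
    lines.push(`f ${tris[i] + 1} ${tris[i + 1] + 1} ${tris[i + 2] + 1}`);
  }
  Module.ccall('simplify_obj', 'number', ['string'], [lines.join('\n')]);
}
```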

Precise mesh creation

Our fast method produces meshes suitable for modern 3D printers but makes trade-offs that may not be ideal for all use cases. Specifically, it often generates anisotropic triangles to preserve mesh shape while minimizing complexity. Additionally, this method can introduce minor defects, such as small holes (compromising watertightness) and self-intersecting triangles. While these defects can often be repaired with offline tools, minimizing their occurrence during mesh creation is preferable for some use cases.

To address this, we provide a slower but more accurate mesh creation pipeline based on the “cuberille” method for polygonization of implicit surfaces18 as implemented in the ITK-wasm repository10. This approach prioritizes the creation of defect-free meshes with uniformly sized and nearly equilateral triangles. Since the method has been described in detail elsewhere and is not focused on performance optimization, we do not evaluate its runtime in the results section.
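Conceptually, the cuberille approach turns each exposed face of a labeled voxel into a quadrilateral (two triangles), which by construction yields a closed surface that is subsequently smoothed. The sketch below identifies those boundary faces; it illustrates the idea only and is not the ITK-wasm implementation.

```typescript
// Conceptual cuberille polygonization: every face of a labeled voxel that
// borders an unlabeled voxel becomes a quad (two triangles). This is an
// illustration of the idea, not the ITK-wasm implementation.
function cuberilleFaces(
  labels: Uint8Array,                // 1 = inside, 0 = outside
  dims: [number, number, number]     // nx, ny, nz
): number {
  const [nx, ny, nz] = dims;
  const at = (x: number, y: number, z: number): number =>
    x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz
      ? 0
      : labels[x + nx * (y + ny * z)];
  const offsets: [number, number, number][] = [
    [1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1],
  ];
  let boundaryFaces = 0;
  for (let z = 0; z < nz; z++) {
    for (let y = 0; y < ny; y++) {
      for (let x = 0; x < nx; x++) {
        if (at(x, y, z) === 0) continue;
        for (const [dx, dy, dz] of offsets) {
          if (at(x + dx, y + dy, z + dz) === 0) {
            boundaryFaces++; // this face would be emitted as a quad
          }
        }
      }
    }
  }
  return boundaryFaces; // each boundary face contributes two triangles
}
```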

Results

The mission of brain2print is to generate printable meshes from MRI scans in seconds rather than hours, regardless of the user’s operating system and hardware. To demonstrate this, we evaluated the computation time on three systems: a high-end Linux workstation with an AMD Threadripper 7995WX CPU and an NVidia RTX 4090 graphics card, a mid-range Apple MacBook Pro laptop with an M2 Pro processor (integrated graphics) and an obsolete Apple laptop with an Intel Core i5-8259U processor (integrated graphics; released in 2018). Our tests used the default MRI scan that is automatically loaded when the brain2print.org web page opens.

Our first test evaluated the time for the brainchop tools to segment the brain. We evaluated three variants of the ‘Subcortical + GWM’ models, which attempt to parcellate subcortical regions as well as cortical gray matter and white matter. These models use different approaches while operating on the entire image as a single texture. One approach (fast-high) executes the complete convolutional kernels of the model using all input and output channels simultaneously, offering faster processing but requiring substantial GPU memory. A second, more memory-efficient variation (slow-low) splits each convolution operation, processing one input channel at a time for each layer, which significantly reduces GPU memory use at the cost of increased processing time. The final variant (fast-low) uses a model with fewer channels combined with the convolution strategy of the slow-low model, further saving memory while (in theory) trading segmentation quality for speed. Results are shown in Table 1. While the Intel computer did not have the resources for the fast-high model, all computers were able to segment the brain image in a matter of seconds. Note that the workstation was slower to compute the ‘fast-low’ model than the ‘fast-high’ model. This is consistent with our findings across many computers: some TensorFlowJS optimizations that improve performance on some hardware degrade performance on other hardware.
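The memory-saving strategy of the slow-low variant can be sketched as follows: a single 3D convolution is decomposed into per-input-channel convolutions whose partial results are accumulated, so only one channel’s intermediate tensors reside in GPU memory at a time. This is an illustrative sketch rather than the brainchop code itself (padding, bias and activation are omitted).

```typescript
import * as tf from '@tensorflow/tfjs';

// Illustration of the slow-low strategy: decompose a single conv3d into
// per-input-channel convolutions and accumulate the partial sums, so that only
// one channel's intermediate tensors occupy GPU memory at a time.
// (Sketch only; the brainchop models also handle padding, bias and activation.)
function conv3dByChannel(
  input: tf.Tensor5D,   // [batch, depth, height, width, inChannels]
  filter: tf.Tensor5D   // [kd, kh, kw, inChannels, outChannels]
): tf.Tensor5D {
  const inChannels = input.shape[4];
  let acc: tf.Tensor5D | null = null;
  for (let c = 0; c < inChannels; c++) {
    // Convolve one input channel with its matching filter slice.
    const partial = tf.tidy(() => {
      const xSlice = tf.slice(input, [0, 0, 0, 0, c], [-1, -1, -1, -1, 1]);
      const wSlice = tf.slice(filter, [0, 0, 0, c, 0], [-1, -1, -1, 1, -1]);
      return tf.conv3d(xSlice, wSlice, 1, 'same');
    });
    if (acc === null) {
      acc = partial;
    } else {
      const sum = tf.add(acc, partial) as tf.Tensor5D;
      acc.dispose();
      partial.dispose();
      acc = sum;
    }
  }
  return acc!; // equals tf.conv3d(input, filter, 1, 'same')
}
```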

We measured the time for converting the default MRI image to a mesh, using the default (Botsch et al.15) settings. Again, all computers were able to perform this task in a matter of seconds (Table 2).

Table 1 Time in milliseconds to segment the brain for a high-end workstation (AMD CPU, NVidia GPU), a professional laptop (Apple CPU with integrated GPU) and an obsolete laptop (Intel CPU with integrated GPU).
Table 2 Time in milliseconds for each stage of converting voxels to mesh.

We created brain meshes from several T1-weighted MRI scans (all independent from the training set used to create the brainchop models). All of these meshes printed without complications on both Bambu Lab X1 Carbon and P1S 3D printers (Austin, TX) using the Bambu Studio slicer software (results from the latter are shown in Fig. 3).

Fig. 3
figure 3

Example of a printed brain created with brain2print.

Discussion

The brain2print web application provides a rapid and simple method for users to convert voxel-based MRI scans of the brain into triangulated meshes suitable for printing. Our results demonstrate that it generates robust meshes in seconds without requiring any additional tools.

One current limitation of brain2print is that the provided AI models are all specific to T1-weighted MRI scans of the head. Therefore, this tool is not suitable for other modalities (e.g. T2-weighted MRI, CT scans) or different anatomy (e.g. the torso). Most MRI protocols do include a T1-weighted image, as it has good soft-tissue contrast (discriminating muscle, fat, gray matter and white matter) and provides high spatial resolution in a short scan time. Computed tomography (CT) has the advantage that voxel intensity is calibrated (in Hounsfield units) and provides very strong intensity boundaries between air and other tissues and between bone and other tissues: CT scans clearly distinguish bone (bright) from soft tissue (midtone) and air (dark). Therefore, voxel intensity alone is sufficient to segment these boundaries, and one does not need to rely on AI models. To fill this niche, we provide ct2print (https://ct2print.org) for situations where voxel intensity is sufficient to discriminate tissue boundaries (Fig. 4). This tool is based on our previously described voxel-to-mesh software, without the first stage of AI segmentation. Our implementation allows the user to select the voxel intensity to use as an isosurface threshold, defaulting to Otsu’s method19. This web page allows 3D printing of clinical images and of specimens from morphology libraries (e.g. https://www.morphosource.org/ and https://digimorph.org/index.phtml).
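Otsu’s method picks the threshold that maximizes the between-class variance of the intensity histogram. The sketch below shows the standard algorithm; the implementation used by ct2print (in niimath) may differ in detail.

```typescript
// Otsu's method: choose the histogram bin that maximizes the between-class
// variance, separating "background" from "object" intensities. Standard
// algorithm; ct2print's implementation (in niimath) may differ in detail.
function otsuThreshold(histogram: number[]): number {
  const total = histogram.reduce((a, b) => a + b, 0);
  let sumAll = 0;
  for (let i = 0; i < histogram.length; i++) sumAll += i * histogram[i];

  let sumBackground = 0;
  let weightBackground = 0;
  let bestVariance = -1;
  let bestBin = 0;

  for (let t = 0; t < histogram.length; t++) {
    weightBackground += histogram[t];
    if (weightBackground === 0) continue;
    const weightForeground = total - weightBackground;
    if (weightForeground === 0) break;

    sumBackground += t * histogram[t];
    const meanBackground = sumBackground / weightBackground;
    const meanForeground = (sumAll - sumBackground) / weightForeground;

    // Between-class variance for threshold t.
    const betweenVariance =
      weightBackground * weightForeground *
      (meanBackground - meanForeground) ** 2;
    if (betweenVariance > bestVariance) {
      bestVariance = betweenVariance;
      bestBin = t;
    }
  }
  return bestBin; // map this bin back to a voxel intensity (e.g. Hounsfield units)
}
```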

Fig. 4
figure 4

(a) Cut-away of a hollow mesh showing material reduction (blue arrow indicates the escape hole in the right cerebellum). (b) ct2print generates meshes for both pre-segmented images and intensity-defined boundaries, demonstrated with a post-contrast CT scan of a human head (Philips Medical) at a threshold of -120 Hounsfield units. (c) The same scan with a threshold of 160 Hounsfield units, highlighting the bone and arteries. (d) CT scan of an iguana head (Yale Peabody Museum, DigiMorph YPM 14376).

brain2print is designed to work in any browser by adhering to web standards. However, not all devices have the resources to run AI models. Many browsers also obscure graphics capabilities to protect user privacy, making it difficult to automatically determine which devices can handle specific models. To address this, brain2print includes a drop-down menu that lets users choose between simple and more demanding models, helping them find the best fit for their hardware. We anticipate that future software and hardware will provide better support.

In the future, we hope that our modular and open-source design can be extended by other developers to provide AI models tuned for other applications. Specifically, we recognize that there are AI models that segment other imaging modalities and structures, including CT scans of the torso20,21. While brain2print leverages advanced features of TensorFlowJS and the mature WebGL2 standard, we provide developers with a fully functional ONNX implementation (https://github.com/niivue/niivue-onnx) of our simplest model using the nascent WebGPU standard. This allows developers to use the framework (TensorFlowJS or ONNX) that is best suited to their needs.
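The sketch below shows one way such a model could be run with onnxruntime-web and its WebGPU execution provider; the model path, input shape and tensor handling are placeholder assumptions, and the niivue-onnx repository contains the actual wrapper code.

```typescript
import * as ort from 'onnxruntime-web';

// Minimal sketch of running a segmentation model with onnxruntime-web using
// the WebGPU execution provider. The model path, input shape and tensor
// handling are placeholders; see the niivue-onnx repository for the wrapper.
async function segment(voxels: Float32Array): Promise<Float32Array> {
  const session = await ort.InferenceSession.create('./model.onnx', {
    executionProviders: ['webgpu', 'wasm'], // fall back to WASM if WebGPU is unavailable
  });
  const input = new ort.Tensor('float32', voxels, [1, 1, 256, 256, 256]);
  const results = await session.run({ [session.inputNames[0]]: input });
  const output = results[session.outputNames[0]];
  return output.data as Float32Array;
}
```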

While the primary goal of brain2print is to facilitate 3D printing of MRI scans, its modular open-source framework is designed to support broader applications. For instance, the segmentation and mesh generation capabilities could be utilized to model brain stimulation22 or enhance source localization for scalp-recorded signals23.

In conclusion, brain2print provides a simple, fast and secure solution to 3D print medical scans. Our platform not only enhances the accessibility of 3D printing technology for educational and research purposes but also sets a precedent for future developments in the integration of AI and web-based solutions.