Introduction

Background

Prenatal evaluation of fetal anatomy is typically performed with ultrasound according to guidelines from the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG)1. A screening fetal ultrasound examination is usually performed at 18 to 24 weeks of gestational age1. An additional Magnetic Resonance Imaging (MRI) examination may be performed if the initial ultrasound findings are inconclusive or if there is suspicion of potential abnormalities not sufficiently characterized by ultrasound2. MRI is a safe imaging modality that uses no ionizing radiation and offers very high soft-tissue contrast, resulting in significantly better anatomical visualization compared to ultrasonography3. MRI can provide additional diagnostic information in up to 23% of cases when documenting fetal anomalies, significantly enhancing diagnostic accuracy4. MRI has therefore become an increasingly widely used modality for fetal diagnostic decision-making, facilitating treatment planning and prognostication.

Radiological reporting of fetal MRI requires highly subspecialized knowledge, and segmentation, classification, and quality assurance can be time-consuming. Specifically, segmentation involves the manual delineation of anatomical structures for biometric assessment and can support diagnostic decision-making in certain conditions, such as fetal growth restriction (FGR). Classification tasks in fetal imaging encompass a wide range of objectives, including the detection of fetal abnormalities, identification of biometric deviations indicative of pathology, and determination of fetal position, all of which are critical for accurate diagnosis and clinical decision-making. MRI is susceptible to various imaging artifacts, with maternal/fetal motion being the most common; such artifacts can distort fetal anatomy and thus hinder diagnostic decision-making. Meshaka et al.5 provide a comprehensive review of these tasks and how artificial intelligence (AI) can advantageously be used in such scenarios.

Related works

To address the challenges of manual fetal MRI analysis, numerous algorithms have been developed, targeting specific processes such as segmentation, classification, and quality assurance. However, these solutions often operate in isolation, addressing individual steps rather than providing an integrated analysis pipeline.

For segmentation tasks, most efforts have focused on the fetal brain, aiming to delineate brain structures for neurodevelopmental assessment6,7,8,9,10. In comparison, fewer studies have targeted segmentation of the entire fetal body, despite its importance for comprehensive biometric analysis11,12,13. Some works have pursued even narrower targets, segmenting individual fetal organs such as the lungs14,15. Beyond fetal anatomy, research has also addressed maternal tissues, particularly segmentation of the placenta and amniotic fluid, given their relevance in assessing fetal growth and pregnancy health16,17,18,19,20,21,22.

Classification tasks in fetal MRI cover a wider range of objectives. Many studies focus on abnormality prediction, such as detecting neurodevelopmental disorders in the fetal brain or diagnosing conditions affecting other fetal organs23,24,25. Less commonly, classification approaches have been applied to tasks such as fetal position and pose detection, which can support movement tracking and inform mode of delivery decisions26,27.

In terms of quality assurance, most existing methods have targeted motion correction, with many involving slice-to-volume registration and deep learning based methods28,29,30,31. However, there is comparatively limited research on detecting and correcting other artifacts that can arise in fetal MRI and impact diagnostic confidence32.

Despite these advancements, a clear gap remains in the literature: current research typically addresses only individual steps of the workflow rather than providing a fully integrated solution to streamline the entire fetal MRI analysis process. Moreover, existing approaches often overlook clinical accessibility and ease of use, which greatly limits their applicability in real-world healthcare settings.

Objectives

In response to these challenges, we propose the Fetal Assessment Suite (FetAS). This specialized software platform incorporates a range of developed AI methods to automate the processing and analysis of fetal MRI. FetAS currently performs several key tasks: (1) artifact detection, (2) motion correction, (3) segmentation of the fetal body, amniotic fluid, and placenta, and (4) classification of fetal and placental position and orientation. The software is deployed as a secure, user-friendly web-based platform, allowing clinicians and researchers to easily upload datasets and perform all necessary analyses within a single integrated environment. By streamlining fetal MRI analysis and improving workflow efficiency, FetAS aims to support clinical practice, particularly in settings with limited access to specialized fetal radiologists. Furthermore, FetAS can facilitate research by enabling collaboration and the development of large, curated dataset cohorts.

Methods

System architecture

The FetAS platform employs a modern web architecture to ensure responsive, scalable, and secure MRI data analysis (Fig. 1). The frontend leverages Vite33 and React34 for rapid development cycles and optimized production builds, with a responsive user interface crafted using HeroUI35 components and Tailwind CSS36 that adapts seamlessly across device form factors. This client-side application communicates with a robust Node.js37 and Express.js38 backend that manages RESTful API endpoints and handles user authentication through Passport.js39 middleware integrated with JSON Web Tokens40 (JWTs) for secure, stateless session management. A queuing system manages incoming analysis requests, ensuring data is processed sequentially and reliably to support smooth clinical workflows without bottlenecks.
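The queue implementation itself is not published; the following is a minimal sketch of the sequential, first-in-first-out behaviour described above, written in Python for consistency with the platform's model runtime. The job fields and the run_analysis dispatcher are hypothetical.

```python
import queue
import threading

# Hypothetical FIFO job queue: API handlers enqueue analysis requests and a
# single worker consumes them one at a time, so jobs run sequentially.
job_queue: "queue.Queue[dict]" = queue.Queue()

def run_analysis(job: dict) -> None:
    # Placeholder dispatcher; the real platform routes to its AI modules.
    print(f"processing dataset {job['dataset_id']} with task {job['task']}")

def worker() -> None:
    while True:
        job = job_queue.get()      # blocks until a request arrives
        try:
            run_analysis(job)
        finally:
            job_queue.task_done()  # mark the request as processed

threading.Thread(target=worker, daemon=True).start()
job_queue.put({"dataset_id": 42, "task": "segmentation"})
job_queue.join()                   # wait for all queued requests to finish
```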

Data persistence is achieved through multiple specialized storage solutions. User credentials and data metadata are securely stored in an encrypted MySQL41 database hosted on Amazon Relational Database Service42 (RDS), ensuring data integrity, redundancy, and compliance with security standards. All uploaded and processed imaging data, including DICOM43 files, are stored in Amazon S344, providing scalable, durable, and secure object storage for large medical datasets.

The entire system is deployed on Amazon Web Services (AWS) infrastructure, where the frontend is compiled and hosted on Amazon S3 with global content delivery through Amazon CloudFront45 for low-latency worldwide access. The backend services run on dedicated Amazon EC246 instances equipped with GPU acceleration, enabling efficient execution of integrated AI models for computationally intensive tasks such as image segmentation and classification. This distributed architecture ensures optimal performance while maintaining the security and reliability required for clinical applications.

The platform includes secure clinician login and user management, enabling authorized users to upload, process, share, and collaboratively review imaging data. Users can view only the datasets they have uploaded; otherwise, access is denied. Uploaded MRI data undergoes automated preprocessing, starting with automatic de-identification to ensure no personal health information is ever stored on the platform. This is followed by image normalization and histogram adjustments to standardize inputs for subsequent analyses and improve algorithm robustness. These steps serve algorithm performance only; the original DICOMs are displayed to the user (unless replaced by motion correction) to ensure clinically relevant details are not lost. This system architecture design ensures high availability, security, and scalability, enabling deployment in both research and clinical environments.
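The exact de-identification routine is not published; the sketch below illustrates the general approach with pydicom, using a hypothetical, non-exhaustive list of identifying tags.

```python
import pydicom

# Hypothetical, non-exhaustive list of identifying DICOM attributes.
PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
            "InstitutionName", "ReferringPhysicianName"]

def deidentify(path_in: str, path_out: str) -> None:
    """Blank identifying attributes before any file reaches cloud storage."""
    ds = pydicom.dcmread(path_in)
    for tag in PHI_TAGS:
        if tag in ds:
            setattr(ds, tag, "")   # clear the identifying value
    ds.remove_private_tags()       # drop vendor-specific private tags
    ds.save_as(path_out)
```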

The FetAS platform uses a layered security and access-control approach within AWS. All data is encrypted at rest and in transit using the AWS Key Management Service47 and Transport Layer Security. Storage is separated into Amazon S3 buckets for raw uploads, processed data, logs, and model files, with bucket policies that limit access by dataset and role. All components operate inside a Virtual Private Cloud (VPC) with public and private subnets, and traffic is controlled through security groups and network access control lists. Identity and Access Management (IAM) roles follow a least-privilege model for Amazon EC2, Amazon RDS, S3, and other services, and user permissions are assigned by role. Secrets and passwords are stored in AWS Secrets Manager48, and administrator accounts use Multi-Factor Authentication. Amazon RDS, MySQL, S3, and Redis are encrypted and deployed in private subnets, and system activity is recorded through Amazon CloudWatch, AWS CloudTrail49, and VPC Flow Logs.

Each process runs in a new batch environment with a clean Python runtime. Model files are stored in encrypted S3 but are preloaded onto encrypted Amazon Elastic Block Store50 volumes on the EC2 instance for inference. This keeps model access local to the instance and avoids cross-service data transfer during processing. Automatic RDS snapshots, S3 versioning, and defined recovery steps support data persistence.

Fig. 1. FetAS application architecture diagram.

Dataset and preprocessing

The core functions of FetAS are designed to process Half-Fourier Acquisition Single-shot Turbo spin Echo (HASTE) and Steady-State Free Precession (SSFP) sequences with SENSE, acquired on either 1.5T or 3.0T Siemens MRI scanners. These MRI datasets were obtained from The Hospital for Sick Children in Toronto, Canada, with data collection spanning the years 2000 to 2024. This retrospective study was approved by The Hospital for Sick Children Research Ethics Board (REB #1000062640), the Toronto Metropolitan University Research Ethics Board (REB #2018-398), and Clinical Trials Ontario (REB #4861). Gestational ages ranged from 20 to 37 weeks. Extremely rare pathologies (1:100,000) were excluded from this study. All data extraction, utilization, and related methods were conducted in accordance with the relevant guidelines and regulations of these REBs. Given the retrospective design and complete de-identification of all data, the requirement for informed consent was waived by the Research Ethics Board at The Hospital for Sick Children and the Research Ethics Board at Toronto Metropolitan University.

Each processing module was trained on a diverse subset of this dataset, with all annotations clinically validated. General preprocessing was applied to the raw DICOM files prior to model training. This included extracting pixel or voxel data, depending on whether the method used a 2D or 3D approach. Min-max normalization was applied to standardize intensities between 0 and 1. Additionally, data augmentation was performed on-the-fly during training to enhance model robustness and generalizability. A summary of the sample size for each task is shown in Table 1.
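For illustration, a minimal sketch of this preprocessing follows: the min-max normalization matches the text directly, while the specific augmentations shown (a random flip and intensity jitter) are assumptions, as the exact transforms are not detailed here.

```python
import numpy as np
import pydicom

def load_and_normalize(dicom_path: str) -> np.ndarray:
    """Extract pixel data and min-max normalize intensities to [0, 1]."""
    img = pydicom.dcmread(dicom_path).pixel_array.astype(np.float32)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)   # epsilon guards constant images

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Hypothetical on-the-fly augmentation: random flip and intensity jitter."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=-1)                       # horizontal flip
    return np.clip(img * rng.uniform(0.9, 1.1), 0.0, 1.0)  # brightness jitter
```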

Main processes

FetAS allows users to easily complete one of many post-acquisition processing tasks on their uploaded datasets. The datasets and results from each process can be securely shared for collaborative work, with all data stored in S3 for future access. The following subsections briefly outline the methodology, functionality, and significance of each process. A summary of each task’s performance and methods is shown in Table 1.

Table 1 Summary of dataset, tasks and respective performance metrics.

Artifact detection

Degradation of fetal MRIs is most commonly caused by motion, chemical shift, and radiofrequency artifacts51. These artifacts can obscure parts of the fetus, making interpretation challenging for the radiologist. Currently, MRI operators must manually review each MRI series to determine the presence of artifacts. The goal of this process was therefore to automate this procedure and provide a rapid alert on whether the MRI was corrupted by artifacts (Fig. 2). This would significantly improve workflow efficiency, allowing clinicians to see more patients and helping to reduce the existing backlog. Although reacquisition is unlikely at this stage, the artifact detection module remains useful beyond its primary purpose. It provides insight into the type and severity of artifacts, which can assist researchers or clinicians who may be less familiar with the appearance of such distortions, thereby serving as a valuable teaching and training tool. In addition, it can contribute to dataset curation and standardization by allowing users to rapidly identify and flag artifact-corrupted images, facilitating the inclusion or exclusion of scans according to the specific objectives of a study or analysis pipeline.

Fig. 2. Common artifacts observed in fetal MRI. (a) Motion artifact affecting the entire fetus. (b) Chemical shift artifact, characterized by a thick dark outline along water–fat interfaces. (c) Radiofrequency artifact, appearing as a high-intensity bright spot in the head region.

The artifact detection algorithm uses a 2D convolutional neural network (CNN) framework that incorporates skip connections, Inception blocks, and Squeeze-and-Excitation (SE) blocks, which stem from the ResNet-50, Inception, and SE networks, respectively. The output gives the type of artifact detected along with a corresponding 4-class severity grade (none, mild, medium, severe). A preliminary version of this algorithm has been developed and validated, as described in32.
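As an illustration of one named component, the following is a standard Squeeze-and-Excitation block in PyTorch; the reduction ratio and other hyperparameters of the deployed network are described in prior work32, so the values here are assumptions.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: squeeze channels by global average pooling,
    then two fully connected layers produce per-channel gating weights."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight feature maps channel-wise
```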

Motion artifact correction

Among the various artifacts encountered in fetal MRI, motion is the most prominent and stems from either fetal or maternal movement. Although motion artifact reduction techniques exist, relying on specific sequence types and/or patient-dependent strategies such as maternal breath-hold, there is no guarantee that the resulting MRI will be motion-free. We therefore developed and implemented this process to automatically correct motion artifacts, ensuring that radiologists can make accurate diagnoses with clear and interpretable anatomy (Fig. 3). Furthermore, by mitigating motion artifacts, this process reduces the need for repeat scans, thereby enhancing overall workflow efficiency.

Fig. 3. Example of a motion-free and motion-corrupted image pair. (a) Ground truth motion-free fetal MRI. (b) Corresponding MRI with simulated motion artifact.


To accomplish this, a 2D generative adversarial network (GAN) was developed and trained on image pairs, one with motion artifacts and the other without. The motion artifacts were synthetically generated by modifying the frequency content of each motion-free image, introducing random translational and rotational variations to mimic the diverse and unpredictable nature of motion in fetal MRI and producing varying levels of artifact severity across the dataset. By training with these realistic simulated artifacts, the network learned to effectively correct genuine motion artifacts in clinical MRIs during inference. An initial implementation of this approach was proposed and evaluated in previous work31.
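The exact simulation procedure is given in prior work31; the sketch below conveys the general k-space corruption idea, replacing bands of phase-encode lines with lines from a rigidly moved copy of the image. The motion ranges and band size are hypothetical.

```python
import numpy as np
from scipy import ndimage

def simulate_motion(img: np.ndarray, n_moves: int = 3,
                    rng: np.random.Generator | None = None) -> np.ndarray:
    """Corrupt a motion-free 2D image with simulated rigid-motion artifacts."""
    rng = rng or np.random.default_rng()
    k_out = np.fft.fftshift(np.fft.fft2(img))
    h = img.shape[0]
    band = h // 8                                   # lines corrupted per move
    for _ in range(n_moves):
        # Random rigid motion: small rotation (degrees) and in-plane shift (px).
        moved = ndimage.shift(
            ndimage.rotate(img, rng.uniform(-5, 5), reshape=False, order=1),
            rng.uniform(-3, 3, size=2), order=1)
        k_moved = np.fft.fftshift(np.fft.fft2(moved))
        start = int(rng.integers(0, h - band))      # random phase-encode band
        k_out[start:start + band, :] = k_moved[start:start + band, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_out)))
```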

Anatomy segmentation

Biometrics like estimated fetal weight, amniotic fluid volume, and placental volume are key indicators of fetal growth, maternal health, and overall pregnancy well-being. These metrics help detect conditions such as FGR and amniotic fluid abnormalities, including oligohydramnios and polyhydramnios, while placental volume can reveal complications like preeclampsia and placental insufficiency. However, assessing these conditions currently requires manual segmentation, a process that is labour-intensive and requires specialized expertise. This process addresses these challenges by automatically, rapidly, and accurately segmenting fetal MRIs into three classes: fetal body, amniotic fluid, and placenta. These segmentations (Fig. 4) enable volume-based biometrics that can assist in the timely diagnosis of common abnormalities, ultimately improving pregnancy monitoring and outcomes.
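Once a labeled mask is produced, these volume-based biometrics reduce to voxel counting; a minimal sketch follows (the label-to-structure mapping here is an assumption for illustration).

```python
import numpy as np

# Hypothetical label convention for the three segmented classes.
LABELS = {1: "fetal_body", 2: "amniotic_fluid", 3: "placenta"}

def label_volumes_ml(mask: np.ndarray,
                     spacing_mm: tuple[float, float, float]) -> dict[str, float]:
    """Convert a labeled 3D mask into per-structure volumes in millilitres."""
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0  # mm^3 per voxel -> mL
    return {name: float((mask == lbl).sum()) * voxel_ml
            for lbl, name in LABELS.items()}
```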

Fig. 4. Example FetAS segmentation overlaid on a representative 2D slice, illustrating fetal body (red), amniotic fluid (blue), and placenta (green). Corresponding 3D renderings are shown for (a) the fetal body, (b) amniotic fluid, and (c) placenta.

A 3D UNet-based architecture was developed, with enhancements to improve segmentation quality, including spatial and channel attention mechanisms and multi-level feature extraction using dilated convolutions. This encoder-decoder network takes a fetal MRI volume as input, down-samples and then up-samples it to generate a segmentation mask for the fetal body, amniotic fluid, and placenta. This methodology builds upon an early version that was introduced and validated in previous work52. Although not required, running the motion correction process prior to segmentation is recommended, as this has been shown to improve segmentation quality, and therefore biometric estimation accuracy, in prior research53.
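Architectural details appear in prior work52; as an illustration of multi-level feature extraction with dilated convolutions, a minimal PyTorch sketch is shown below (the dilation rates are assumptions).

```python
import torch
import torch.nn as nn

class MultiLevelFeatureBlock(nn.Module):
    """Parallel dilated 3D convolutions capture context at several receptive
    fields; a 1x1x1 convolution then fuses the branches."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates)
        self.fuse = nn.Conv3d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # padding == dilation keeps the spatial size constant for 3x3x3 kernels.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```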

Fetal body and placenta classification

Anatomical position assessment is essential for pregnancy management. For example, fetal presentation can be classified as vertex (head-down), breech (buttocks- or feet-down), or transverse (lying sideways), each carrying different delivery implications and potential risks. Similarly, accurate determination of placental location is required, particularly for identifying placenta previa, where the placenta partially or completely covers the cervical os, increasing the risk of bleeding and complications during delivery. While these position assessments are commonly performed with ultrasound, they must still be documented as part of a complete fetal MRI radiological report, and in the absence of fetal MRI expertise they too need to be evaluated. We therefore developed these processes to automate the detection of fetal presentation (Fig. 5) and placental position (Fig. 6) in MRI, which can assist in timely clinical decision-making, support treatment planning, and improve maternal-fetal outcomes.

Fig. 5. Examples of (a) fetal vertex position, (b) breech position, and (c) transverse position, which are detected and classified in FetAS.

Fig. 6. Examples of (a) placental posterior position and (b) placenta previa, which are detected and classified in FetAS. The blue-highlighted regions show the placenta, as segmented by FetAS.

To achieve this, 2D CNNs were utilized. For fetal orientation, we average the predictions of two slice-based classification models and then take a majority vote across the entire 3D volume. For placenta previa classification, two sequential CNNs are used: the first classifies whether an image contains the placenta in view, and its output feeds the placenta previa classification model. An extensive analysis, featuring an ablation study and a detailed comparison to other models, was performed and published in our prior work26,54.
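For illustration, the slice-wise majority vote reduces to a short routine; the “unknown” fallback mirrors the behaviour described in the Results section.

```python
from collections import Counter

def classify_orientation(slice_preds: list[str]) -> str:
    """Majority vote over per-slice orientation predictions."""
    informative = [p for p in slice_preds if p != "unknown"]
    if not informative:
        return "unknown"  # no slice could be classified
    return Counter(informative).most_common(1)[0][0]

# Example: most slices vote vertex, so the volume is labeled vertex.
print(classify_orientation(["vertex", "vertex", "breech", "unknown", "vertex"]))
```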

Performance evaluation

We conducted a comprehensive evaluation of the FetAS platform, focusing on processing time and resource utilization. Processing time was measured from log streams timestamping the initial and final requests the server produced for a given task. CPU and memory usage were measured with the htop Linux command55, capturing the average CPU and memory increase a process produces once queued. GPU usage was measured with the nvidia-smi56 command. Test datasets consisted of fetal MRI scans with the sequences described in the Dataset and preprocessing section. To assess the effect of dataset size, we used a small dataset comprising a 512 × 512 T2 fetal MRI with 60 slices (30.1 MB in total); the modules evaluated on it were artifact detection, motion correction, multi-organ segmentation, and dataset upload. For artifact detection, motion correction, and multi-organ segmentation, the large dataset was a 512 × 512 T2 fetal MRI with 104 slices (52.3 MB in total). To benchmark the dataset upload process, the large dataset was changed to reflect an entire study with multiple sequences, comprising 1,353 MRI files totalling 966 MB; upload throughput was 119 MB/s with a latency of 23 ms.
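For reference, GPU utilization can be sampled programmatically with an nvidia-smi query of this kind; our exact sampling cadence is an implementation detail not specified here.

```python
import subprocess

def gpu_utilization_percent() -> int:
    """Read instantaneous GPU utilization (%) of the first GPU via nvidia-smi."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True)
    return int(out.stdout.strip().splitlines()[0])
```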

All evaluations were conducted on the deployed backend Amazon EC2 G4dn.2xlarge instance, which is equipped with an NVIDIA T4 GPU with 16 GiB of GPU memory, 64 GiB of memory, eight virtual CPU cores on 2.5 GHz Cascade Lake 24C processors, and one 225 GB NVMe SSD. Network throughput has a baseline of 10 Gbps with bursts of up to 25 Gbps.

Results

Platform interface and workflow demonstration

The following subsections show an end-to-end working demonstration of FetAS from a user’s standpoint. This includes logging in, an overview of the homepage, dataset details, dataset loading, dataset processing, and result visualization. For this work, we define a “process” as a collection of functions encompassing an anatomical region (e.g. Fetal Body), or combined use of anatomical regions (e.g. Relative Measures). A “subprocess” (or “Operation”) includes a specific function that uses an MRI dataset (e.g. Segmentation, Biometric Evaluation).

Login

Upon entering the landing page and clicking on “Authorized Login”, the user is prompted to log in with their email and password (Fig. 7). Accounts are created by the website administrator or the “Clinician” user role.

Fig. 7. FetAS landing (left) and login page (right).

Homepage

The homepage is the first page displayed after logging in (Fig. 8). It presents the dashboard: an overview of the processes and datasets currently stored for the user. A top navigation bar provides access to all relevant components of the website, alongside user-specific information.

Fig. 8. Homepage of FetAS. The navigation bar presents the user with redirecting tabs to the (a) dashboard, (b) dataset, (c) processes, (d) clinicians, and (e) sites. The username and role (f) are displayed on the navbar, offering click-through options to view the user’s account, access website information, toggle dark mode, submit support tickets, and log out.

Dataset details

On the datasets page, users are provided with a list of all currently uploaded datasets (Fig. 9). This page is intended for creating, viewing, editing, and deleting any datasets stored within the website. To facilitate this, we include a set of columns whose headers can be sorted alphabetically or chronologically: the ID assigned by our database, the user-specified name of the dataset, the time at which the dataset was created, the gestational age of the fetus, the sex of the fetus, whether the dataset was motion corrected, the acquisition plane of the dataset, and action buttons to view, edit, download, or delete the dataset. Users can modify which columns appear on the datasets page for ease of access.

When a user navigates to view the dataset by pressing on the blue folder button in the actions column, they are shown the “View Dataset” page (Fig. 10). This page provides a detailed breakdown of the column fields previously shown in the “Datasets” page, along with a list of processes the user has performed with this dataset. The user can also download the specified dataset as a compressed folder. The folder contains a root dataset folder with a subfolder for the dataset ID, which in turn contains two subfolders: dcm holding the DICOM files, and result holding the process’s results.

Fig. 9. The datasets page, which includes a sortable table of datasets stored in the user’s FetAS account. At the top left of the table, users can create new datasets (a). Within the table, columns can be hidden or displayed under the “Columns Options” tab (b). Action buttons (c) include the blue folder to view the dataset (Fig. 10) and the triple-dotted dropdown to edit, download, or delete the respective dataset.

Fig. 10. View dataset page. Dataset information is shown on the left, and a list of associated processes is shown to its right (a). If a user presses on one of the processes, they are navigated to the “View Process” page. A cropped version of the “View Process” page (b) is shown, demonstrating the fetal body biometric results.

Loading dataset

After selecting to create a dataset on the “Datasets” page, the user is presented with a stepper-style progress bar indicating their position in the dataset creation process. Figure 11 shows the first step of this task: filling out the required fields describing the dataset. Next, users are prompted to upload their dataset by navigating their local file explorer or by dragging and dropping selected files (Fig. 12). The deep learning models are trained on specific types of fetal MRI sequences; therefore, model selection must align with the sequence type being analyzed. For instance, our segmentation models trained on SSFP with SENSE T2-weighted sequences should only be applied to SSFP with SENSE T2 data. Users are expected to choose the appropriate model corresponding to the sequence type present in their uploaded dataset. This is shown in Fig. 13, where the series description metadata is displayed to the user for ease of sequence selection. The fourth and fifth steps for uploading a dataset involve running the “Artifact Detection” and “Motion Correction” algorithms. These can be skipped and applied later on the “Processes” page; as such, we cover them in the next section.

Fig. 11. New dataset: dataset details page. Fields are shown with their respective label, example input, and a description of the field (blue arrows).

Fig. 12. New dataset: upload MRI files page. The field to upload DICOM files is shown in the grey box with uploaded filenames. The underlined browse button opens the user’s local file manager to select multiple sequences.

Fig. 13. New dataset: select sequence page. A field displays the list of sequences uploaded by the user, with the requirement to select the specific sequence for processing. Sequence IDs are redacted under each sequence name.

Processing data

After uploading a dataset, a user can begin a new process by navigating to the “Processes” page (Fig. 14). This page, similar to the “Datasets” page, contains a table with filterable columns, in the following order: dataset name, process type, subprocess type, process status, last modification time, and actions for viewing or deleting the process. Notably, a queuing system handles large numbers of user requests, so displaying the status of a process is essential for ease of use.

When a new process is selected, the platform’s currently available processes are displayed (Fig. 15). In order, they are: Amniotic Fluid, Placenta, Fetal Body, Multi-Organ (or Multi-Region), and Relative Measures. Each process lists the number of subprocesses (or tools) available. With a process selected, a user can choose a subprocess and the corresponding dataset(s) to be queued at once. Certain subprocesses depend on other processes having been performed; for example, to perform fetal biometrics, the fetal body or multi-region segmentation subprocess must first be run. If a subprocess has an unmet dependency, the subprocess is greyed out and the prerequisite is shown to the user.

Fig. 14. The processes page outlines a table of the user’s processes. At the top left of the table, users can create new processes (a). The status column (b) describes the process as queued, executing, or currently in use (as a dependency). The status and last updated columns can be hidden from view using the “Columns Options” button (c). The actions column provides two buttons to view or delete the process (d).

Fig. 15. New processes page outlining the processes currently available in FetAS with their respective subprocesses.

Visualization of processed outputs

(a) Artifact Detection Output & Motion Correction.

The artifact detection process is available after sequence selection to facilitate the detection of low-quality images. If a user proceeds with artifact detection, a fetal body segmentation is performed first and displayed to the user before the artifact detection runs (Fig. 16a). This provides feedback in case errors occur during either process. Once artifact detection is complete, three pie charts are displayed, one per artifact type, each sub-categorized by severity (None, Mild, Medium, or Severe) as explained in Lim et al.32. In Fig. 16, the fetal MRI shows a large proportion of slices graded Severe for all three artifact classifications. Hovering over a sub-classification shows the exact number of slices it contains. If a user uploads an image with severe motion artifacts, the motion correction option is offered as the following step. This step runs the motion correction algorithm31, allowing the user to compare the motion-corrected dataset with the original (Fig. 17). Due to the generative nature of this process, a user can regenerate, confirm, or discard any changes to the motion-corrected dataset.

Both artifact detection and motion correction are optional, keeping use of the application streamlined. They are offered before any further processing so that an already heavily motion-corrupted image can be corrected first. In this way, users are prompted to perform preprocessing before they are shown the complete list of processes, allowing subsequent analyses to take advantage of a higher-quality dataset.

Fig. 16. The artifact detection step. Cropped views demonstrate the transition between running (a) and finalized (b) artifact detection. The classification results are displayed as pie charts once the process is complete. From left to right, the classifications are motion artifacts (c), radiofrequency (d), and chemical shift (e).

Fig. 17. The motion correction step. A comparison of the original SSFP with SENSE fetal MRI and its motion-corrected output is shown. The side-by-side images are slightly zoomed in for clarity. Users have three buttons, from left to right: regenerate results (grey), discard results (red), and accept the uploaded images (green).

(b) Segmentation Output.

The segmentation subprocess is an integral part of FetAS. Currently, we provide both 2D and 3D segmentation models. For the amniotic fluid, placenta, and fetal body processes, the available models are 2D slice-based models11,16, which are effective for datasets with few slices because they avoid the slice-wise downsampling losses incurred by 3D models, which must enforce image shape consistency across all three dimensions. For Multi-Organ segmentation, however, we implemented a 3D model that outperforms the 2D approach when given a sufficiently large 3D MRI (e.g., 256 × 256 × 64). Once segmentation is performed, the user can view the results through a scroll-based panel in which the dataset, segmentation, and resulting masked dataset are synchronized (Fig. 18).

Fig. 18. Cropped visualization of the “View Process” page with the Multi-Organ Segmentation process and subprocess selected. A fetal SSFP sequence with SENSE is shown (a) with its placenta (white), amniotic fluid (grey), and fetal body (dark grey) segmentation labels (b), and the corresponding mask applied to the original DICOM (c).

(c) Classification Output.

Current classification subprocesses cover fetal orientation and placenta previa classification26, available through the Fetal Body and Placenta processes, respectively. An example output of fetal orientation classification is shown in Fig. 19. For fetal body orientation, classification is performed by taking the majority vote across slice-level predictions of vertex, breech, or transverse. If all slices are classified as unknown, the volume is labeled unknown. For placenta previa detection, the algorithm identifies placenta-relevant slices and feeds them into another model to classify the placenta as previa or not (Fig. 20). To ensure transparency in our models, we plan to provide model confidence for all classifications, along with the slice index for visual confirmation if needed.

Fig. 19. The classification result for fetal orientation, shown on the “View Process” page. For the fetal MRI shown in Fig. 18, the fetal orientation subprocess displays its classification as the vertex orientation.

Fig. 20. The classification results for the placenta previa model, shown on the “View Process” page. A slice from the fetal MRI used for this process is shown. The current implementation generates a prototype JSON output reporting the overall average probability, the per-slice probabilities, and the slice indices. A more user-friendly interface for these results is under development.

Processing performance

Table 2 presents the processing performance metrics of FetAS, based on evaluations using representative test datasets. We tested a range of tasks, including a 2D classification ensemble model for artifact detection, a generative model for motion correction, and a 3D segmentation model for multi-organ segmentation, to assess overall system performance. Average processing time across tasks was approximately 20 s, excluding the network-dependent data upload step. CPU usage was generally similar for small and large datasets, except during data upload and 3D segmentation. In the upload step, the EC2 instance parsed a significantly greater number of files before transferring them to the S3 bucket, increasing CPU load. In 3D segmentation, CPU usage was elevated due to preprocessing operations that scale with dataset size. GPU usage remained consistent across tasks because the deep learning architectures used in FetAS constrained the input size and model parameters to a fixed configuration, with the CPU performing any necessary preprocessing to meet these constraints. Finally, memory usage increased with dataset size across all tasks, reflecting the greater computational demands of processing larger inputs.

Table 2 Performance metrics for small and large datasets across a variety of tasks.

Discussion

Principal findings

In this study, we presented FetAS, a novel web-based platform integrating multiple automated analysis processes for fetal MRI, including artifact detection, motion correction, multi-anatomical segmentation, and classification of fetal and placental position. The results demonstrate that FetAS provides a seamless, end-to-end workflow, enabling clinicians and researchers to securely upload datasets, process them through an integrated pipeline, visualize outputs, and export results within a single user-friendly environment. Processing performance evaluations showed that all modules operated within feasible runtime ranges, and maximum capability testing confirmed FetAS’s robustness in handling clinically relevant dataset sizes and multiple concurrent requests.

The process design is intended to support future iterations of subprocesses spanning a wide range of maternal and fetal anatomy and the functions related to these regions, mirroring how a radiologist might categorize diseases by region. For example, our amniotic fluid process includes segmentation as a subprocess. When selecting this subprocess, a clinician must choose a dataset to use, with the option to queue multiple datasets simultaneously. Once completed, the biometric evaluation subprocess can be run; it currently provides only the total amniotic fluid volume, but classification of amniotic fluid disorders can be incorporated. This exemplifies the modularity of our application: if a clinician identifies a specific use case for the original fetal MRI, segmentation, classification, or extracted metrics, the platform can facilitate its research and implementation.

Our performance results demonstrate what a user can currently expect when testing FetAS with their datasets. The EC2 instance handles one process at a time, and the first-in, first-out queuing system ensures there are no bottlenecks when multiple users submit requests concurrently. Additionally, this enables us to scale resources based on demand; thus, the performance metrics listed in Table 2 are not fixed measures. Overall, these results provide a clear baseline for iterative development to improve upon.

Comparison with prior work

Previous efforts in fetal MRI analysis have focused predominantly on isolated algorithmic tasks, such as segmentation of the fetal body or placenta, motion correction methods, or artifact detection algorithms implemented as standalone scripts or research pipelines. While these developments have advanced the field, their integration into routine clinical workflows remains limited due to barriers in usability, accessibility, and the need for technical expertise to deploy and operate them.

In contrast, FetAS provides a comprehensive platform that bridges this translational gap by incorporating multiple validated AI methods within an accessible web-based interface. To our knowledge, FetAS is among the first platforms to integrate artifact detection, motion correction, anatomical segmentation, and classification tasks specific to fetal imaging into a unified tool accessible via standard browsers without the need for local installation or specialized computational infrastructure. Similar software has been developed in other domains, such as the FMRIB Software Library (FSL) for brain MRI analysis57. Another example is 3D Slicer paired with plug-ins, such as DeepInfer, which enables AI model deployment and visualization across multiple imaging modalities58. These platforms are widely used and demonstrate the potential of integrated, user-friendly software to advance clinical research in medical imaging. However, most of these platforms rely on local computing resources, which are typically unavailable in clinical environments. Comprehensive, accessible solutions remain particularly rare in the context of fetal MRI, demonstrating the unique value of FetAS. Unlike these traditional tools, FetAS is fully web-based, eliminating the hardware and software constraints associated with local installation and enabling broader, more seamless adoption in both clinical and research settings.

Clinical and research implications

The development of FetAS has several important clinical and research implications. First, in terms of clinical workflow efficiency, FetAS automates time-intensive tasks such as segmentation and motion correction. This automation has the potential to reduce reporting times, enable more timely diagnoses, and support radiological decision-making, particularly in clinical settings with limited subspecialty expertise. FetAS can help extend high-quality diagnostic support, enabling more equitable access to care in rural or remote areas where dedicated radiological expertise may be unavailable. Second, the integration of artifact detection and motion correction modules improves the quality of input data for subsequent analyses. This improvement has the potential to increase diagnostic confidence and accuracy, particularly in the assessment of fetal growth and rare anomalies. Third, FetAS offers a standardized pipeline for the analysis of fetal MRI datasets, which can significantly benefit research efforts. By enabling the creation of large, curated cohorts, the platform supports future AI development, population-based studies, and the discovery of novel imaging biomarkers.

Limitations & future work

The evaluation of FetAS showcases the workflow capabilities that a radiologist can currently expect to be automated. Performance metrics for the underlying algorithms have been published or are under review in prior work11,16,25,26,31,32,53,54. Platform performance is currently limited by cloud resources, particularly GPU instances for ultra-high-resolution datasets. Integrating AI into clinical workflows also poses challenges, requiring clear communication with radiologists and staff about software capabilities and limitations. Although preliminary internal user testing has been conducted, formal usability and workflow impact studies are underway via a multi-center clinical study across Ontario (CTO #4861). This study will also expand the platform’s generalizability across sequences and scanners.

Ongoing work at the Maternal Fetal Imaging Lab includes integrating a 3D visualization tool and supporting multimodal data fusion (e.g., ultrasound, electronic health records), leveraging the software’s modular design. The current data-uploading scheme, by which clinical MRIs are uploaded from a local filesystem to our secure cloud storage, can be limiting; in busy clinical scenarios, local filesystem uploading can be impractical. Integration with hospital database software would streamline this process, making it an essential consideration for future updates. These enhancements will establish FetAS as a scalable, comprehensive platform for maternal–fetal diagnostics and clinical research.

FetAS implements comprehensive data security measures, including encryption in transit and at rest, network and storage segregation, least-privilege IAM roles, and automatic de-identification of all personal health information before upload. The application is compliant with local regulatory approval for research environments. As such, access to the platform is currently granted for research purposes upon obtaining the necessary inter-institutional and ethics approvals. The validation of this work in a clinical setting is currently underway, with clinical implementation as the end goal.

Conclusion

FetAS promises to address current challenges in translating fetal MRI analysis methods into routine clinical and research workflows. By integrating multiple automated processes within an accessible, secure, and user-friendly platform, FetAS can help mitigate the limited availability of specialized fetal radiologists and the resulting backlog in imaging interpretation. Through the automation of time-consuming tasks such as artifact detection, motion correction, segmentation, and classification, FetAS can reduce processing and reporting times, thereby enhancing diagnostic efficiency, improving clinical workflows, and enabling timely decision-making in fetal care. Ultimately, FetAS is designed to equalize access to high-quality fetal MRI diagnostics by enabling use in regions with limited radiological expertise, while also serving as a triaging tool in centers with established radiology services. By providing a consistent analytical pipeline, FetAS facilitates large-scale research, supports data sharing, and promotes the development of future AI tools. Its modular design also enables training and education, empowering clinicians, trainees, and researchers to contribute to advancing maternal–fetal imaging across both clinical and academic settings.