Introduction

Additive manufacturing (AM) has been revolutionizing the production of complex parts across various industries. AM involves creating an object in a layerwise1 or volumetric fashion2, using polymeric, ceramic, metallic, or multi-material printing. A critical challenge common to all AM methods is ensuring that the final part meets the design criteria, which remains a manual and laborious task, typically involving human expertise and experience. Defects in AM, such as excessive or insufficient material deposition, internal voids, or porosity, are often concealed inside the part and not accessible to traditional inspection methods3,4. Detecting such defects requires expensive and time-consuming computed tomography (CT) scans, which are subsequently challenging to process due to their size, complexity, and inherent spatial nature. However, inspection is an unavoidable step in the overall manufacturing process.

Figure 1

Overview of the proposed framework aimed at enhancing AM part inspection of DTs via collaborative VR. It supports multimodal data alignment and visualization, streaming of large-scale and multi-resolution volumetric data, visual occlusion management, and team collaboration features. Furthermore, it allows multiple users, either co-located or remote, to collaborate in a shared virtual environment.

Given the ubiquitous challenges associated with monitoring, part inspection, and part performance verification, industrial and commercial sectors are building digital twins (DTs) of AM to mitigate these issues5,6,7. DTs help in optimizing the AM process and part inspection by enabling detailed visualization and analysis. While there are various definitions of what constitutes a DT in AM, the most common notions include a variety of rich multimodal data streams covering the entire production cycle from design to printing, to inspection and lifecycle management8,9. Compared to traditional single-sensor-based monitoring for DTs, multisensor fusion-based DTs, including toolpath and process parameters, provide a more in-depth understanding and analysis, especially for inspecting and predicting the location-specific quality of AM parts10. Integrating and visualizing multimodal data from different formats and various sources can pose technical challenges that need to be resolved. In this context, AM data is collected from various sources, such as design files, sensing suites, images, prescribed toolpaths, machine kinematics and health monitoring, CT scans, and other performance measurements. However, as DTs become increasingly data-intensive, few approaches exist today that can automatically and jointly process all different data streams. Hence, the crucial challenge is for experts to effectively gain insights from the DT in a coherent fashion11,12,13. Traditional approaches and desktop tools provide only limited visualization and interaction opportunities for effectively handling the 3D spatial nature of AM components and their DT representations9,14. This can make it challenging to inspect complex AM structures thoroughly.

Compared to desktop-based systems, virtual reality (VR) provides unprecedented benefits for addressing several challenges in AM, including process optimization, maintenance, part quality, and inspection15,16,17,18. The spatial and multimodal nature of DTs and the distributed AM process create new challenges for human experts inspecting complex data and require a new set of capabilities. By leveraging the advantages of VR, a more intuitive and effective inspection process can be facilitated to meet the specific demands of AM, such as multimodal data alignment, occlusion management for revealing internal defects, streaming large-scale volumetric data, and real-time synchronization for distributed expert collaboration13,19,20. Managing the scalability and complexity of AM DTs may require multiple experts and multidisciplinary teams to inspect multimodal data and their properties based on virtual representations21. Therefore, team collaboration is extremely useful for inspecting AM data from all lifecycle stages22. For detailed information on related work, refer to the supplementary material.

Here, we contend that carefully designed immersive visualization techniques and intuitive interactions coupled with a shared virtual environment can make DTs significantly more effective. Nevertheless, designing such visualizations for the data-intensive and multimodal streams in an intuitive and scalable manner remains a challenge23,24. For example, CT scans, while indispensable for ensuring the structural integrity and accuracy of these complex parts, produce large-scale and complex volumes. The three-dimensional and dense nature of these datasets makes them not only difficult to inspect but also challenging to interpret, especially when considering the boutique or low-volume production typical of AM applications. In addition, experts from design, production, and related teams need to be involved to ensure accuracy, reliability, and usability21,25,26. In response to these challenges, we developed a collaborative VR framework for AM experts to collaboratively inspect AM parts (see Fig. 1). Our framework not only fosters collaborative interactions among geographically distributed users but also incorporates several innovative approaches aimed at improving the understanding and inspection of complex AM parts and processes. These features include intuitive alignment capabilities within the VR environment, visualization strategies for effective visual occlusion management, and VR-based volume rendering of the volumetric data. Moreover, it provides the ability to stream large-scale volumetric data, addressing the critical issue of handling the massive datasets generated by CT scans. The framework is designed to align seamlessly with the spatially complex, multimodal data integral to AM DTs, fostering novel team collaboration. It represents a paradigm shift in how users interact with, analyze, and derive actionable insights from complex manufacturing data, setting a new benchmark in this emergent field and pushing the boundaries of current DT inspection methods. Our main contributions are as follows:

  • Design and implementation of a novel collaborative VR framework for real-time collaboration and inspection of AM DTs. It allows multidisciplinary teams, ranging from design and manufacturing to quality assurance, to collaborate and jointly discuss data in the immersive VR environment.

  • A comprehensive inspection tool aimed at improving the understanding and leveraging of complex AM data, including intuitive alignment, multimodal data visualization, occlusion management, streaming of large-scale volumetric data, and features for real-time team collaboration.

  • Results of two exploratory studies conducted with domain experts, offering valuable insights into usability, potential benefits, applicability, and current limitations, and providing a basis for new research directions for DTs.

Figure 2

Overview of the workflow, comprising several components of additive manufacturing and virtual inspection of digital twins (left to right). It starts with data acquisition, aimed at collecting multimodal data from the manufacturing process, ranging from 3D design models to machine data and X-ray CT scans after the curing process. Data processing involves the collection and integration of multimodal data, while data storage and management aim to store and manage large-scale volumetric data, including filtering, querying, and subspacing. Model and simulation building refers to the virtual environment as well as the communication interface connecting the simulation with the data. Finally, virtual simulation and the interactive interface are components developed to enable users to quickly inspect data collected from the printing process in an immersive and collaborative environment.

Collaborative VR for additively manufactured part inspection of digital twins

AM DTs are described by multimodal data streams collected from the processes, including prescribed toolpaths used for instructing the printer, machine toolpaths describing the actual printing locations, X-ray CT for investigating internal material densities, and in-process images for layerwise verification. Providing a comprehensive inspection tool for these modalities, however, requires multiple steps. We first need to align the modalities and provide multimodal data visualization. Since X-ray CT scans are crucial for inspection, occlusion management plays a vital role in conveying depth information of the volumetric rendering for inner-structure inspection. Moreover, X-ray CT scans are often large and challenging to manage and share between users, so a solution is needed for storage and flexible data management. Finally, team collaboration is more than sharing a 2D screen: users need to step into a shared immersive environment to explore and inspect these DT representations together.

The workflow of AM part inspection of DTs comprises several key components, ranging from data acquisition to collaborative and interactive visualization in an immersive environment (see Fig. 2). Data acquisition refers to obtaining multimodal data before, during, and after the printing process. The collected data is then handled in the data processing step, which involves stitching, converting, and pre-aligning the multimodal data. Data storage and management involves storing data on the server and providing the ability to manage, query, and filter it. Model and simulation building aims at creating a virtual environment as well as a communication interface to provide interactions between users and virtual objects. Virtual simulation focuses on synchronizing and streaming data from the server to the simulation and providing visualization, including occlusion handling and volumetric rendering. Finally, the interactive interface provides immersive and intuitive interactions as well as real-time synchronization between multiple users for AM part inspection. In the following sections, we describe the collaborative VR-based approaches designed and developed to enhance the inspection and analysis of AM parts. The proposed framework allows individual or distributed users to perform data exploration and inspection in a shared virtual environment via real-time synchronization. Moreover, the framework incorporates several features: intuitive alignment, multimodal data visualization, comprehensive occlusion management, streaming of large-scale volumetric data, and synchronous team collaboration.

Intuitive alignment

Comparing different modalities of AM DTs is crucial for inspection, e.g., comparing prescribed and actual toolpaths, or comparing post-print X-ray CT scans to the intended design. Valuable insights can be gained from these comparisons to highlight and inspect deviations between modalities and potential defects. A fundamental challenge prevalent in many DT applications is that each modality has its own format, units, and coordinate system. Without proper alignment, these modalities cannot be compared and integrated effectively. For instance, additional adjustments are required to align the prescribed toolpaths, machine toolpaths, in-process images, and the X-ray CT volume to ensure accurate visualization and interpretation. Inspecting how a location in one modality, e.g., the in-process images, overlays onto another, e.g., the X-ray CT volume, would not be possible without properly aligned data. We believe that VR-based alignment can be a crucial tool to optimize the visualization of these modalities and support multimodal analysis.

While mid-air interaction is a common approach for interacting with and manipulating virtual objects in VR, such interactions can suffer from hand instability, e.g., unnoticeable tremor, which introduces inaccuracies when aligning at high precision. To mitigate this, we implemented a VR-based alignment tool that adapts and enhances the precise alignment technique proposed by Rodrigues et al.27. We utilized a non-linear mapping approach that uses the acceleration of the VR controller to determine the object movement instead of a direct linear mapping between the controller and the virtual object. The VR controller speed is computed for each frame by calculating the controller's displacement within a time interval of 500 ms. We further enhanced the adaptive scaled mapping technique to allow more precise control during object manipulation by automatically separating the degrees of freedom (DoF). This technique reduces cognitive load and achieves high-precision alignment of multimodal data. Since the printing process often presents both local and global defects that go beyond rigid-body alignment, we further integrated a free-form deformation technique28 to provide flexible and intuitive alignment for non-linear deformation. It first defines control points based on the bounding box or grid structure of the model. Each control point has a position and acts as a handle for deforming the object. The user adjusts the deformation by interacting with the control points; once a control point is moved, an interpolation is computed for the affected nearby vertices to smoothly deform the object. These transformations can be saved for revisiting the alignment or exported to other tools for further registration.
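To illustrate the speed-adaptive mapping, the following Unity C# sketch estimates the controller speed over a 500 ms window and scales the translation applied to the target object. The class name, field names, and gain curve are illustrative assumptions rather than the exact implementation.

using UnityEngine;

// Illustrative sketch (not the exact implementation) of speed-adaptive
// object translation: controller displacement over a 500 ms window yields a
// speed estimate, which drives a non-linear gain between precise (sub-unity)
// and direct (1:1) mapping.
public class AdaptiveAlignment : MonoBehaviour
{
    public Transform controller;   // tracked VR controller
    public Transform target;       // object being aligned
    const float Window = 0.5f;     // 500 ms speed-estimation window

    Vector3 windowStartPos, lastPos;
    float windowStartTime;
    float speed;                   // m/s, re-estimated once per window

    void Start()
    {
        windowStartPos = lastPos = controller.position;
        windowStartTime = Time.time;
    }

    void Update()
    {
        // Re-estimate the controller speed once per 500 ms window.
        if (Time.time - windowStartTime >= Window)
        {
            speed = Vector3.Distance(controller.position, windowStartPos) / Window;
            windowStartPos = controller.position;
            windowStartTime = Time.time;
        }

        // Assumed gain curve: ~0.1x for slow, deliberate motion (precision),
        // saturating toward 1x (direct mapping) as the hand speeds up.
        float gain = Mathf.Clamp(0.1f + 4f * speed * speed, 0.1f, 1f);

        Vector3 delta = controller.position - lastPos;
        lastPos = controller.position;

        // Degree-of-freedom separation (e.g., snapping to the dominant axis)
        // would filter this delta before it is applied.
        target.position += delta * gain;
    }
}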

Figure 3

Multimodal data visualization to support the process of data inspection and analysis: (a) users can load and explore data from multiple data streams in the virtual environment, (b) per-layer data representations, including the prescribed toolpath (green), machine toolpath (yellow), and error (red), (c) users can enable in-process imaging to further inspect the data, and (d) volumetric data visualization from X-ray CT scans with machine toolpaths.

Multimodal data visualization

To fully understand and analyze the complex behavior of AM DTs, relying solely on a single type of data visualization is often insufficient. While new forms of data can be gathered from physical assets, processing and visualizing them into valuable insights and situational awareness, such as corresponding locations across modalities, is challenging because one modality can visually occlude another when they are superimposed. The main benefit of multimodal data visualization for AM inspection and analysis is that it provides a comprehensive comparison across different data streams, e.g., before vs. after, pre-cure vs. post-cure, or the nominal vs. the actually printed part. One possible solution is to superimpose the visualization of one modality over another, e.g., the computer-aided design (CAD) model and X-ray CT scans29. However, this introduces issues of occlusion and robustness between modalities.

In this work, we incorporate several modalities, ranging from toolpaths and in-process images to X-ray CT scans. For instance, overlaying additional modalities, e.g., machine toolpaths, on top of the CT scans and in-process images can help identify areas of defects and provide a more comprehensive understanding of the printing process and behavior (see Fig. 3). Currently supported data streams include design models, prescribed toolpaths (.pgm), machine data (.hdf5) containing machine toolpaths and errors, in-process images, X-ray CT scans, and 3D reconstructed models. The framework was developed with the Unity game engine (Unity Software Inc., CA, USA), version 2019.4.20f1. We implemented interfaces to load these data types and visualize them in the immersive VR environment. Users can intuitively explore and analyze the data to gain a comprehensive understanding of the process and make informed decisions to optimize the process and quality. Various options support multimodal data exploration and analysis, e.g., inspecting layer by layer, choosing data streams, and comparing different modalities. We used Unity line renderer components to visualize the toolpaths: prescribed toolpaths, which instruct the 3D printer (G-Code commands), are visualized as green lines with corresponding positions (green dots); the machine data containing the actual paths generated by the 3D printer are visualized as yellow lines; and errors between prescribed and machine toolpaths are displayed in red. In-process images captured during printing are visualized at the corresponding positions in printer coordinates. Users also have options to enable and overlay in-process images as well as volumetric rendering of the X-ray CT scans for further inspection and analysis.
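As a sketch of this toolpath visualization, the snippet below builds one Unity LineRenderer per modality using the color scheme above. The helper name, strand width, and material choice are assumptions for illustration, not the framework's actual code.

using UnityEngine;

// Hypothetical helper: one LineRenderer per toolpath modality, colored
// green (prescribed), yellow (machine), or red (error).
public static class ToolpathRenderer
{
    public static LineRenderer DrawPath(Vector3[] points, Color color, string name)
    {
        var go = new GameObject(name);
        var lr = go.AddComponent<LineRenderer>();
        lr.positionCount = points.Length;
        lr.SetPositions(points);
        lr.startWidth = lr.endWidth = 0.0005f;   // 0.5 mm strand width (assumed)
        lr.material = new Material(Shader.Find("Sprites/Default"));
        lr.startColor = lr.endColor = color;
        return lr;
    }
}

// Usage: one renderer per modality per printed layer, e.g.:
// ToolpathRenderer.DrawPath(prescribed, Color.green,  "PrescribedToolpath");
// ToolpathRenderer.DrawPath(machine,    Color.yellow, "MachineToolpath");
// ToolpathRenderer.DrawPath(errors,     Color.red,    "ToolpathError");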

Figure 4

Occlusion management with volumetric rendering in immersive VR. We implemented colorized volume rendering with transfer functions and color maps. Cutting objects allow users to smoothly explore the inner structures of the volumetric data, e.g., using a cross-section plane (a), sphere (b,c), or box (d,e) cutout with inclusive and exclusive modes, respectively.

Occlusion management and volumetric rendering

Volume rendering is used to visualize the final-part volumetric data obtained from X-ray CT scans. Since our AM lattice structures are dense, it is challenging to distinguish and identify inner-structure details of the volumetric data without occlusion management. Occlusion management is also crucial for enhancing depth perception and conveying the shape of the rendered object within its surrounding environment. To reduce occlusion in volumes, a common approach is to apply a transfer function that allows the user to specify the structures to be visualized and to manipulate the opacity and color of the voxels belonging to those structures30. For instance, the transfer function can be used to suppress structures that occlude the regions of interest. However, adjusting only the transfer function does not allow for global adjustments of visibility.
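Conceptually, a 1D transfer function maps each scalar voxel intensity to a color and an opacity, as in the minimal Unity C# sketch below. The class name and curve types are illustrative assumptions; in the renderer, this lookup would typically be baked into a texture sampled by the raymarching shader.

using UnityEngine;

// Illustrative 1D transfer function: intensity -> color and opacity, so
// low-density material can be made transparent while denser structures
// remain visible.
public class TransferFunction1D
{
    public Gradient colorRamp;        // normalized intensity -> RGB
    public AnimationCurve opacity;    // normalized intensity -> alpha

    public Color Evaluate(float intensity01)
    {
        Color c = colorRamp.Evaluate(intensity01);
        c.a = opacity.Evaluate(intensity01);   // suppress occluding material
        return c;
    }
}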

We adapted and enhanced the UnityVolumeRendering toolkit to handle and compute volumetric data in Unity. Transfer functions and color mapping were integrated to improve occlusion and lighting management in VR. We then integrated cutting geometries to provide a global way to adjust visibility and inspect the inner structure of volumetric data while maintaining local adjustments. These include a cross-section plane as well as box and sphere cutouts, similar to the technique proposed by Titov et al.31. The box and sphere cutouts support inclusive and exclusive modes that make the inside either visible or invisible (see Fig. 4). This allows flexible adjustment of visibility, so the inner structures of the volumetric data can be explored and inspected. Users in VR can interactively move a cutting object around and choose the cutout mode to crop the region of interest, e.g., to inspect and discuss the data with their collaborators.
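The cutout test itself is simple geometry; the sketch below shows the sphere case with inclusive and exclusive modes. In practice such a test runs per sample in the raymarching shader rather than in C#, and the names here are illustrative assumptions.

using UnityEngine;

// Illustrative sphere cutout: "inclusive" keeps samples inside the cutout,
// "exclusive" removes them.
public enum CutoutMode { Inclusive, Exclusive }

public class SphereCutout
{
    public Vector3 center;
    public float radius;
    public CutoutMode mode;

    // Returns true if a volume sample at position p should be discarded.
    public bool Culls(Vector3 p)
    {
        bool inside = (p - center).sqrMagnitude <= radius * radius;
        return mode == CutoutMode.Inclusive ? !inside : inside;
    }
}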

Streaming large-scale volumetric data

X-ray CT scans are crucial for the AM inspection process to analyze defects and ensure the overall quality and reliability of the printed parts. They provide a non-invasive and non-destructive way to characterize and inspect the internal material densities of AM parts, particularly complex internal structures that are not accessible by other inspection methods. However, X-ray CT scans are often large, which makes them difficult to share with collaborators and imposes computation and rendering challenges for VR.

One of the key features of our proposed framework is the ability to manage large-scale data and stream multi-resolution volumetric data remotely at runtime. We utilized a data management approach based on OpenViSUS for efficiently querying and managing large-scale volumetric data32,33. It was deployed on a server to handle datasets that often cannot fit into the memory of a typical computer. The datasets are stored on the server and can be streamed through our RESTful Web API, which provides flexible data streaming, e.g., filtering, cropping, and multi-resolution queries. This way, we can instantly retrieve a subsampled version or a high-resolution subspace of the data, so that interactive inspection is guaranteed and not inhibited by the large-scale nature of the data. Furthermore, we incorporated multithreading using the Thread Ninja toolkit to run heavy computations in the background in Unity, leaving the main thread free for the computational logic of VR rendering.
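A client-side request against such an API might look like the following coroutine sketch. The endpoint URL, query parameters, and payload layout are hypothetical, as the actual API layout is not specified here; OpenViSUS serves the data behind this interface.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical client for the multi-resolution streaming API: fetch a
// subsampled level of a cropped region of a dataset without blocking the
// VR rendering thread.
public class VolumeStreamer : MonoBehaviour
{
    public IEnumerator FetchSubvolume(string dataset, int level,
                                      Vector3Int min, Vector3Int max,
                                      System.Action<byte[]> onLoaded)
    {
        string url = $"https://example.org/api/volume?dataset={dataset}"
                   + $"&level={level}"
                   + $"&box={min.x},{min.y},{min.z},{max.x},{max.y},{max.z}";

        using (UnityWebRequest req = UnityWebRequest.Get(url))
        {
            yield return req.SendWebRequest();   // non-blocking in a coroutine
            if (req.isNetworkError || req.isHttpError)
                Debug.LogError(req.error);
            else
                onLoaded(req.downloadHandler.data);  // raw voxel bytes
        }
    }
}

A low level value would return a coarse preview almost immediately, while a high level over a cropped box streams a detailed subspace on demand.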

Figure 5

Various features for data exploration and inspection as well as related features to improve team-based collaboration: adjusting the CT window width/level, using a cross-section plane and axis slice views (axial, coronal, and sagittal) to explore and inspect data, drawing annotations, aligning and visualizing multimodal data, streaming a multi-resolution CT volume, loading and interacting with 3D scanned models, and drawing on the virtual whiteboard.

Multi-user and collaborative inspection

Conventionally, designers, production teams, domain experts, and sometimes customers directly interact with the physical objects after the AM part has been produced, often requiring the shipping of physical components or experts traveling to different locations. One notable characteristic of parts produced through AM is their intricate internal geometric structure. Inspecting these structures requires volumetric imaging, i.e., CT scans, which pose well-known challenges when visualized through 2D interfaces and require domain knowledge from multiple experts20,29. Collaborative VR tackles these crucial challenges by enabling intuitive interactions, jointly shared inspection and analysis, and real-time team discussions.

The proposed framework provides exploration and inspection features via real-time synchronization. Figure 5 shows the features developed to support teams in exploring and inspecting AM parts. A virtual tablet was designed to provide user interface interactions. Users can switch between datasets, adjust the volume intensity, i.e., the CT window width and level, and use data exploration features such as the cross-section plane and slicing views along different axes, including axial, coronal, and sagittal views. Drawing annotations in collaborative VR is essential for team discussion and makes collaboration between team members more effective: users can draw annotations to mark a region of interest on the dataset and initiate the discussion accordingly. Drawing was implemented using a VR ray-casting technique mapped to the index fingertip of the virtual hand. Moreover, we developed a virtual whiteboard to support team discussion; users can sketch illustrations on it with a virtual marker, much as they would in the physical world. To provide an engaging environment for team communication, we designed personalized user avatar representations, including an avatar head generated from a headshot, animated VR hands, facial blendshapes, and a voice icon that appears when a user engages in voice communication. Figure 6 shows collaborative users exploring and inspecting AM data in the virtual environment. While the environment is designed for VR users with head-mounted displays (HMDs), non-VR users can also join in spectator mode using conventional inputs such as mouse and keyboard. This feature is essential for allowing other users, e.g., trainees, to observe and quickly join the discussion in the virtual environment.
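The fingertip annotation drawing can be sketched as below: a ray from the index fingertip is cast against the scene, and hit points are appended to a LineRenderer stroke. The class, the trigger query, and the 2 m ray length are illustrative assumptions; in the framework, stroke points would additionally be sent to other clients for synchronization.

using System.Collections.Generic;
using UnityEngine;

// Hypothetical fingertip annotation: raycast from the virtual hand's index
// fingertip and grow a LineRenderer stroke at the hit points.
public class AnnotationDrawer : MonoBehaviour
{
    public Transform fingertip;    // index fingertip of the virtual hand
    public LineRenderer stroke;    // current annotation stroke
    readonly List<Vector3> points = new List<Vector3>();

    void Update()
    {
        if (!IsDrawPressed()) return;
        if (Physics.Raycast(fingertip.position, fingertip.forward, out RaycastHit hit, 2f))
        {
            points.Add(hit.point);
            stroke.positionCount = points.Count;
            stroke.SetPositions(points.ToArray());
        }
    }

    bool IsDrawPressed()
    {
        // Placeholder for the actual controller/hand trigger query.
        return Input.GetButton("Fire1");
    }
}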

We developed the collaborative VR environment using a client-server architecture. Compared to a peer-to-peer approach, this provides a stable and secure solution for network communication and avoids common connection issues, e.g., with scalability and long-distance connections. A client sends an update to the server, and the server multicasts the update to the other clients for real-time synchronization. The proposed framework also allows users who join late, or who reconnect after connection issues, to load previously synchronized updates from the server. We utilized Photon Unity Networking (Exit Games GmbH, Germany) to provide load-balancing services and shared network sessions between users, and Photon Voice 2 for voice communication. Since a client's computer specification and network conditions can affect the connection, we implemented the data synchronization mechanism suggested by Singhal and Zyda34 to optimize network latency. A remote procedure call (RPC) approach is used to send requests and distribute data, and sending large amounts of data in one chunk is avoided. To prevent a server bottleneck, each client is responsible for its own graphical rendering. Moreover, clients store object states locally and send only periodic updates, e.g., an object's position, rotation, and scale while it is being interacted with and active.
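The state-synchronization pattern can be illustrated with a Photon PUN 2 observable component: each client renders locally and streams only periodic transform updates for objects it controls. The class name and details are assumptions; only the standard IPunObservable interface is used.

using Photon.Pun;
using UnityEngine;

// Illustrative sketch of the synchronization pattern described above.
public class SyncedObject : MonoBehaviourPun, IPunObservable
{
    public void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.IsWriting)
        {
            // Owner sends its current transform; with the view's
            // "Unreliable On Change" mode, idle objects generate no traffic.
            stream.SendNext(transform.position);
            stream.SendNext(transform.rotation);
            stream.SendNext(transform.localScale);
        }
        else
        {
            // Remote clients apply the received state.
            transform.position   = (Vector3)    stream.ReceiveNext();
            transform.rotation   = (Quaternion) stream.ReceiveNext();
            transform.localScale = (Vector3)    stream.ReceiveNext();
        }
    }
}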

Figure 6

Multi-user collaboration for AM data exploration and inspection: (a) collaborative users virtually join and explore AM data, i.e., an X-ray CT volume, in a shared virtual environment, and (b) users can inspect further details of the data using various inspection features, e.g., a cross-section plane. The interactions between users are synchronized in real time.

Apparatus. Two VR-ready computers were used during testing and evaluation. They were RAZER BLADE 16 laptops equipped with a 13th Gen Intel(R) Core(TM) i9-13950HX processor (32 CPUs), an NVIDIA GeForce RTX 4090 graphics card (16 GB VRAM), and 32 GB RAM. One user connected with a VIVE XR Elite VR headset, which has 1920 \(\times\) 1920 pixels per eye (3840 \(\times\) 1920 pixels combined), a 110-degree field of view (FOV), and a 90 Hz refresh rate. The other user used a Meta Quest 3 headset with 2064 \(\times\) 2208 pixels per eye, a 110-degree FOV, and up to a 120 Hz refresh rate. Both VR headsets were connected as PC VR. The internet connection speed was approximately 247.58 Mbps (download) and 248.41 Mbps (upload).

Evaluation and expert feedback

This work was reviewed and approved for release by Lawrence Livermore National Laboratory's review committee under LLNL-JRNL-861091. All methods were carried out in accordance with relevant guidelines and regulations. We followed Human-Computer Interaction (HCI) best practices35 in conducting user evaluations and gathering structured feedback from domain experts. These practices, including exploratory and usability evaluations as well as think-aloud protocols, offer significant advantages for evaluating the usability, usefulness, user experience, and applicability of our framework and for ensuring the system meets user needs. Two exploratory studies were conducted: the first assessed the applicability and potential benefits of the proposed framework18, and the second focused on case studies of inspection and analysis with real data collected from AM processes. The first study was conducted with six domain experts (three computer scientists and three engineering and materials scientists), all of whom had experience working with AM. One reported nine years of working experience, three reported three to five years, and the other two reported one to two years. The second study was conducted with two AM experts (two and six years of working experience). Exploratory sessions and semi-structured interviews were conducted to assess the proposed framework and collect qualitative feedback. All participants were informed about the objective and study procedure before starting, and verbal informed consent was obtained from all participants. For the datasets used in these studies, see the supplementary material.

Applicability and potential benefits

All experts expressed positive feedback and confirmed the potential benefits and applicability of the framework, which they saw as a powerful tool to enhance the inspection process of AM DTs. The experts stated that collaborative VR is beneficial for visualizing and guiding their collaborators, e.g., between the design and production teams, through the machine, installation setups, and experimental data. Allowing collaboration among experts and interdisciplinary teams across geographical locations could enhance efficiency, particularly for inspection and analysis, while also establishing a new benchmark for incorporating DTs. It is especially useful when physical travel is restricted, e.g., during the COVID-19 pandemic. The developed features for data exploration and inspection, combined with real-time synchronization, were assessed as useful and beneficial for team communication and discussion cycles. Scaling and slicing through the data were rated as the most useful features. VR provides an immersive way to inspect and analyze complex AM structures, a benefit that is particularly pronounced because such tasks are challenging to perform on 2D desktop screens. This could lead to better-informed decisions and help optimize design and production accordingly. The spectator mode using conventional input devices was also rated as helpful for some users, e.g., trainees or senior supervisors who want to promptly join the discussion. No issues regarding VR discomfort or cybersickness were observed. The experts mentioned that the framework could be easily adopted by their teams, although there might be a small learning curve in using the technology.

One expert noted that drawing annotations in VR is intuitive and extremely useful for highlighting points of interest in the data to collaborators. In this way, they could speed up the process, define multiple regions of defects, and initiate the discussion. The experts suggested adding different colors for annotation drawings, or assigning each user a unique color. Besides drawing in 3D space, they also suggested supporting annotation drawings on the slicing plane, which would be beneficial for exploring the data slice by slice. Additional features, such as avatar representations and interactions with the virtual whiteboard, were developed to support team communication and discussion. VR hand representations animated when pressing the controller buttons were assessed as supportive. The experts suggested adding a help button or tooltips to show the button-mapped functions, while acknowledging the learning curve of the technology. They highlighted a few challenges of integrating the framework into their workflow, e.g., logistics and accessibility, and mentioned safety concerns when using VR headsets in the limited space of their work setting. Besides inspection and analysis, the experts confirmed their interest in employing the framework for team training and skill development in virtual environments.

Figure 7

Inspection and analysis of additively manufactured parts using collaborative VR and DTs: (a) AM and the multimodal data collected during the printing process, (b) the printed part (25 mm square lattice) in the physical world, (c) users inspecting different multimodal data representations with our proposed VR framework, e.g., marking annotations, and (d) comparing the post-curing CT volume (top) with layer-by-layer images captured during the printing process (bottom).

Case studies

In addition to the collaborative aspects, utilizing and visualizing multiple data streams of the AM processes was evaluated as essential for diagnostics and for enhancing the AM inspection process. Figure 7 shows an example of a case study using the framework to inspect and analyze AM defects through multimodal data visualization. Two AM experts were asked to use the framework to explore and inspect the data; their feedback was collected via a think-aloud protocol and semi-structured interviews. They mentioned that data can be collected from various sources during the printing process, but at the current stage of their workflow, there is no established approach for integrating the data, and they are limited to visualizing and interacting with it on desktop screens. Hence, it is advantageous to combine and visualize the data in an immersive environment. This is not only beneficial for exploration but also crucial for inspecting and defining defects, e.g., evaluating each step and layer of the printing process, including prescribed and machine toolpaths, the CT volume, and in-process images.

They stated that it is challenging to identify defects via visual inspection or desktop-based systems because the parts can be very small. With our VR framework, they could enlarge the parts and inspect them with additional information. For example, on this dataset, they found that something had poked the first layer during the pre-cure process, resulting in defects. The framework was helpful in providing multiple pre-aligned data modalities, giving a comprehensive way to explore and inspect the part, e.g., during the pre-cure and post-cure process. Hence, they could compare in-process images with the post-cure CT volume while other modalities, such as the prescribed and machine toolpaths, were overlaid on top. In this case, they could see that the toolpaths actually went through the hole, meaning the defect existed before the post-cure process, and they could evaluate which layers were affected. Annotations were helpful for highlighting the defect across multiple modalities, and the cross-section plane and slicing features allowed them to further assess the area. All interactions were synchronized in real time, which was beneficial for collaborating and communicating with their collaborators in the same shared environment. For further improvements, the experts suggested adding metadata to the visualization, such as strut diameters, dispense rates, and scaling units. Providing a heatmap for velocity along with the multimodal data visualization could be useful as well. Furthermore, they suggested developing a measurement tool in VR to measure the distance between two points.

Discussion and future work

DTs are increasingly used in manufacturing settings. We tackle the critical needs and explore the use case of emerging DTs, which requires handling multimodal data in an intuitive and collaborative way. We demonstrate our proposed collaborative VR framework for the inspection of AM DTs. Our work shows that the traditional desktop-based approach may not be appropriate for team-based collaboration or for intuitive and immersive inspection and analysis; our collaborative VR framework addresses these challenges with advanced visualization and interaction techniques. The framework was evaluated with domain experts in AM, and the results demonstrate its usability, applicability, and potential benefits. All experts found the framework intuitive and effective, and it opens new ways to qualify AM parts.

To provide a comprehensive tool for the inspection of AM DTs, we address challenges ranging from the alignment of multiple data streams collected from AM processes, multimodal data visualization, occlusion management, and streaming large-scale data to collaborative inspection. The alignment was rated as helpful for mapping all data modalities into one place for comparison and is a crucial component of multimodal data visualization. Aligning complex structures requires fine control; however, explicitly switching between translation, rotation, and scaling with our alignment technique can increase task completion time. Additionally, free-form deformation for alignment can be challenging when the object's scale increases. The alignment could be improved by providing additional information, such as scaling units, and a measurement tool to estimate the distance between two points.

In terms of multimodal data visualization, visualizing the prescribed and machine toolpaths along with the X-ray CT and in-process images was useful for providing insights into the behavior of defects before and after the curing process. Additional modalities, including sensor data, pressure, velocity, and others, will be incorporated in future work, and investigating visualization techniques for new data streams, such as heatmaps and multiscale visualization, would be interesting as well. During inspection and analysis, it is essential to consider occlusion and color maps for volumetric rendering in VR. On one hand, it provides an intuitive way to explore volumetric data and grasp the spatial relationships of complex internal structures. On the other hand, working as a team to jointly explore data and facilitate collaborative analysis with real-time synchronization is advantageous. However, there are several technical challenges in visualizing and rendering very high-resolution volumetric data in VR without causing user discomfort, including hardware requirements: a powerful computer with a capable graphics card is needed to render and maintain a high level of VR immersion. For occlusion management, future research should explore other occlusion techniques, such as contextual ambient occlusion31, which might improve performance and quality.

Volumetric data from X-ray CT scans can be extremely large; thus, achieving high-resolution and detailed visualization can be challenging. Moreover, data formats and data transfer for collaborative VR can pose a significant challenge. To deal with large-scale and multi-resolution volumetric data, we developed an approach to stream and handle these massive datasets. Utilizing OpenViSUS for data management enhances the process of managing and filtering data and avoids the issues of distributing and storing data locally for each collaborating user. Moreover, parallel computing is crucial for VR because VR rendering must be maintained while other computations are processed in the background. However, it can be challenging to predict exactly when a volume will be fully rendered, and achieving smooth and responsive performance requires significant computational resources. While the proposed framework allows users to adjust the quality of the volumetric data, further investigation of progressive rendering would be advantageous: instead of waiting for the entire volume to be rendered, the system could present an initial low-resolution approximation to the user until computational resources become available.

DTs in the Industrial Metaverse are an emergent concept that promises various benefits for the DT ecosystem. They open new research directions where digital representations and virtual collaboration among users in immersive environments become powerful tools for design36, instruction and maintenance37,38, inspection, optimization, and training39,40, enhancing collaboration, interactivity, and team communication via real-time synchronization. Nonetheless, ensuring high frame rates for VR while maintaining real-time synchronization between collaborating users is challenging and requires high-performance computing hardware and good internet connections. Future work can focus on minimizing latency to ensure smooth and synchronous communication between users.

Since AM DTs involve various properties, such as materials, thermal conductivity and resistivity, sensor data, and other physical properties, investigating and leveraging artificial intelligence (AI) and data-driven techniques will be essential for predicting and optimizing data41. For instance, deep convolutional neural networks could extract layer-wise image measurements of filament thickness and spacing to validate the part-scale reconstruction. Furthermore, AI-assisted inspection in the virtual environment is an interesting research direction to assist users in analyzing, highlighting, and further investigating potential defects.

In this work, AM parts were printed with nozzle sizes ranging from 125 \({\upmu }\text {m}\) to 500 \({\upmu }\text {m}\); defect detection is therefore based on that range. Defect detection can vary with AM characteristics, including excessive or insufficient material deposition, porosity defects, and internal voids. For defect size, we aimed to detect whether a strand is 20% off nominal; for instance, with a 500 \({\upmu }\text {m}\) nozzle this corresponds to a deviation of 100 \({\upmu }\text {m}\) from the nominal strand size. Moreover, the uncertainty of inspection results may depend on various factors: material properties, which depend on the manufacturing process and material characteristics; measurement accuracy, based on the calibration and specifications of the sensors used; process variability, involving flow fluctuations across layer-wise parts; and part complexity, which requires further detailed analysis.

Our evaluation and case studies involved real datasets obtained from the AM process. Two exploratory studies were conducted in which domain experts were asked to use and assess our framework for inspection, followed by semi-structured interviews. The proposed framework was evaluated as advantageous compared to previous inspection tools. While the primary focus of this work is the innovative aspect of visualizing multimodal data within a collaborative VR environment, the evaluations were based on subjective measures, namely experts' interviews and feedback. Such qualitative feedback is invaluable, providing significant insights into potential benefits, applicability, and current limitations for further improvement. It not only demonstrates the potential and practical use of our framework but also offers valuable domain-based perspectives, which can inform and inspire future investigations and provide a foundation for future research directions. Nonetheless, future work should incorporate more case studies and assess objective measures, such as quality control and error analysis. This includes adding more data streams, such as process-level sensor data, and investigating visualization approaches for new data modalities.

Apart from the immersive VR environment, team-based collaboration can be applied to situated analytics in mixed-reality environments within manufacturing settings and across various fields11,42. It could also provide a new approach to integrating sensors of the physical systems to monitor, diagnose, and enable more accurate and dynamic simulations. While users can communicate in VR, we acknowledge that they may have asymmetric knowledge of the data. Future work also aims to provide options to save the annotations and results from the inspection as well as the discussion, e.g., drawings on the virtual whiteboard.

Conclusion

DTs have become pervasive and are increasingly used in industrial manufacturing. While most research has focused on automation, 3D modeling, and interoperability of DTs, critical needs remain for handling and inspecting DT data with intuitive and immersive approaches. Moreover, DTs are becoming increasingly complex, with various data streams that often require multiple experts for effective inspection. In this work, we explore the use case of DTs in AM part inspection. We present and demonstrate our proposed collaborative VR framework to enable and enhance the inspection process of AM DTs. The framework not only focuses on collaborative interactions but also provides several innovative components to improve the inspection and understanding of complex DT data in AM: VR-based interactive alignment, multimodal data visualization, comprehensive occlusion management and rendering, streaming of large-scale volumetric data, and features for team-based inspection and collaboration. Exploratory studies and semi-structured interviews were conducted with domain experts to evaluate usability, applicability, and potential benefits, and case studies of AM inspection using real data collected from the AM process were presented. The proposed framework is a promising tool to significantly enhance the inspection process by reducing inspection time and improving efficiency and accuracy. It provides a natural way to interact with DT representations by improving spatial awareness, enabling real-time communication, supporting shared exploration of complex structures, and allowing remote inspection with multiple users. The framework offers a new benchmark for emerging DTs and pushes the boundaries of current AM DT inspection methods. Our capability is broader than quality control alone: it can enable process and facility planning, serve as a training platform, accelerate root cause analysis, and make quick use of data without custom software analysis codes and tools. Moreover, it opens new research directions and offers new opportunities for integrating with other DT domains, including the medical, aerospace, and consumer products industries.