Fig. 1: Integrating genomic, radiology and therapeutic cancer data in extended reality—a level playing field for enhanced multidisciplinary communication.
From: Blending space and time to talk about cancer in extended reality

a Overview of the information displayed in the model: an interactive phylogenetic tree (left); a central body model showing tumours (red) overlaid on organs (blue), with genomic sampling sites labelled and coloured according to the genomic information they share in the phylogenetic tree; and an interactive annotated timeline (right).

b Tumours changing through time, viewed alongside other clinical information. Users may slide through a timeline, annotated with therapeutic information, to access radiology-based tumour information displayed on the main body model (in red, shown behind the participants).

c Genomic tumour heterogeneity. The lung has been isolated from the main body model for detailed inspection. Sampled tumour sites for which genomic data were generated are annotated; labels give sample-site codes, and colouring represents the genomic clades of tumour relatedness.

d Workspace for multidisciplinary collaboration. The tool was designed for up to ten simultaneous participants, allowing multidisciplinary teams to discuss different facets of the data and integrate their individual understanding of the data layers. For example, major tumours can be ‘pulled’ out of the main skeletal model, as seen here: the participants on the left are examining the pancreatic tumour while those on the right are examining the primary lung tumour. A further participant is viewing additional tumour genomic data on a large monitor at the back of the room. A phylogenetic tree based on DNA variants, shown to the left of the main patient model, lets users view the spatial distribution of tumours that share common genomic changes, indicating their relatedness. Users may highlight a clade on the phylogenetic tree and see the corresponding samples highlighted on the main model. Many tasks can be completed simultaneously by small groups, or one user may guide all other participants through the dataset, with every user seeing the same information through their headset. Panels b–d show the view through one participant’s headset while they interact with other participants.
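The timeline interaction in panel b amounts to mapping a slider position to the radiology snapshot acquired at or before that date. The following is a minimal sketch of that lookup, not the tool's actual implementation; the scan dates, snapshot names and the snapshot_for function are illustrative assumptions.

```python
# Sketch (hypothetical data): map a timeline-slider date to the radiology
# snapshot shown on the body model, as described for panel b.
from bisect import bisect_right
from datetime import date

# Illustrative scan dates and snapshot identifiers, not from the study.
scan_dates = [date(2020, 1, 10), date(2020, 6, 2), date(2021, 1, 15)]
snapshots = ["baseline_scan", "post_therapy_scan", "follow_up_scan"]

def snapshot_for(slider_date: date) -> str:
    """Return the most recent scan at or before the slider position."""
    i = bisect_right(scan_dates, slider_date)  # first scan after the slider
    return snapshots[max(i - 1, 0)]            # step back to the one before

print(snapshot_for(date(2020, 9, 1)))  # -> "post_therapy_scan"
```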
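The clade-highlighting interaction in panel d can be understood as collecting all leaf samples beneath a selected internal node of the phylogenetic tree, then recolouring the matching sample-site labels on the body model. The sketch below illustrates this under stated assumptions: the CladeNode class, the sample-site codes and the render-state dictionary are all hypothetical stand-ins, not the tool's actual data model.

```python
# Sketch (hypothetical names/data): select a clade on the phylogenetic tree
# and highlight its sampled tumour sites on the body model (panel d).
from dataclasses import dataclass, field

@dataclass
class CladeNode:
    """A node in the phylogenetic tree; leaves carry sample-site codes."""
    name: str
    children: list["CladeNode"] = field(default_factory=list)

    def leaf_samples(self) -> list[str]:
        """Collect every sample-site code beneath this node (the clade)."""
        if not self.children:
            return [self.name]
        samples: list[str] = []
        for child in self.children:
            samples.extend(child.leaf_samples())
        return samples

# Illustrative tree; sample-site codes are invented, not from the study.
root = CladeNode("root", [
    CladeNode("cladeA", [CladeNode("LUL1"), CladeNode("LUL2")]),
    CladeNode("cladeB", [CladeNode("PAN1"), CladeNode("LIV1")]),
])

def highlight_clade(clade: CladeNode, site_state: dict[str, str]) -> None:
    """Flag the clade's sample sites for highlighting on the body model."""
    for code in clade.leaf_samples():
        site_state[code] = "highlighted"  # placeholder render state

site_state = {code: "default" for code in root.leaf_samples()}
highlight_clade(root.children[0], site_state)  # user selects clade A
print(site_state)  # LUL1 and LUL2 are now flagged for highlighting
```

Because every headset reads the same shared state, broadcasting a dictionary like site_state to all participants would keep the group's view consistent, matching the caption's description of all users seeing the same information.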