Figure 2

From: Synthetic data generation for a longitudinal cohort study – evaluation, method extension and reproduction of published data analysis results

Overview of the developed architecture. In all settings, the real-world data are pre-processed, and embeddings are then learned with a heterogeneous-incomplete variational autoencoder (HI-VAE). Three model architectures are compared: the original HI-VAE used in the baseline approach, Variational Autoencoder Modular Bayesian Network (VAMBN); VAMBN – Flattened Time Points (FT), in which the structure of the feedforward network is changed so that all visits of a module are encoded together, i.e., learned in a single model; and VAMBN – Memorised Time Points (MT), in which the default feedforward network is replaced by an LSTM layer to better capture longitudinal dependencies. The changes introduced in FT and MT reduce the complexity of the Bayesian network.
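
To make the contrast between the encoder variants concrete, the following is a minimal sketch in PyTorch. It is not the authors' implementation; all class names, layer sizes, and the Gaussian-posterior head are illustrative assumptions. It only shows the structural difference the caption describes: FT flattens all visits of a module into one feedforward pass, while MT keeps the visits as a sequence and summarises them with an LSTM.

```python
# Illustrative sketch only (not the paper's code): feedforward vs. LSTM
# encoders producing the parameters of an approximate posterior q(z|x).
import torch
import torch.nn as nn


class FeedforwardEncoder(nn.Module):
    """Default HI-VAE-style encoder: one input vector (in FT, all visits
    of a module concatenated) is mapped to q(z|x) in a single pass."""

    def __init__(self, input_dim: int, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.log_var = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):  # x: (batch, input_dim)
        h = self.net(x)
        return self.mu(h), self.log_var(h)


class LSTMEncoder(nn.Module):
    """MT-style encoder: visits stay a sequence, and an LSTM summarises
    them so that longitudinal dependencies enter the embedding."""

    def __init__(self, input_dim: int, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.log_var = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):  # x: (batch, visits, input_dim)
        _, (h_n, _) = self.lstm(x)  # final hidden state per sequence
        h = h_n[-1]                 # (batch, hidden_dim)
        return self.mu(h), self.log_var(h)


# Toy usage: 8 patients, 4 visits, 10 features per module visit
# (all dimensions are made up for illustration).
x_seq = torch.randn(8, 4, 10)
ft = FeedforwardEncoder(input_dim=4 * 10, hidden_dim=32, latent_dim=5)
mt = LSTMEncoder(input_dim=10, hidden_dim=32, latent_dim=5)
mu_ft, _ = ft(x_seq.flatten(1))  # FT: all visits flattened into one vector
mu_mt, _ = mt(x_seq)             # MT: visits processed as a sequence
print(mu_ft.shape, mu_mt.shape)  # torch.Size([8, 5]) in both cases
```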
