Fig. 3: TransformEHR architecture and pretrain-finetune paradigm. | Nature Communications

From: TransformEHR: transformer-based encoder-decoder generative model to enhance prediction of disease outcomes using electronic health records


During Step 1, TransformEHR was pretrained with a generative encoder-decoder transformer on a large set of longitudinal EHR data. TransformEHR learned the probability distribution of ICD codes (versus a random distribution) through cross attention. During Step 2, we then finetuned TransformEHR to predict a single disease or outcome. Through its attention weights, TransformEHR was able to identify the top indicators for each prediction. The encoder is colored green, the decoder red, and the cross attention connecting them blue.
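To illustrate the cross-attention mechanism the caption describes, the following is a minimal NumPy sketch (not the authors' implementation): a decoder query attends over encoder states, and the resulting attention weights can be ranked to surface the encoder positions (e.g., visits or ICD codes) that most influence a prediction. All names, dimensions, and the random inputs here are hypothetical.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross attention: decoder queries attend to encoder states."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)            # (n_queries, n_keys)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over encoder positions
    return weights @ values, weights

rng = np.random.default_rng(0)
n_visits, d_model = 5, 8                              # hypothetical: 5 encoded visits, 8-dim embeddings
encoder_states = rng.standard_normal((n_visits, d_model))
decoder_query = rng.standard_normal((1, d_model))     # query for the outcome being predicted

context, weights = cross_attention(decoder_query, encoder_states, encoder_states)

# Rank encoder positions by attention weight: highest-weighted positions
# play the role of "top indicators" for the prediction.
top_indicators = np.argsort(weights[0])[::-1]
```

Ranking attention weights this way is only a rough interpretability heuristic; in the finetuned model each decoder prediction would attend over embedded ICD codes across many visits rather than over random vectors.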