Fig. 2
From: Improving genetic prediction by leveraging genetic correlations among human diseases and traits

Improving prediction accuracy using information from multiple traits.

a Expected gain from a multi-trait vs a cross-trait predictor as a function of rG. Two traits are considered. The first trait has a sample size of 20,000 and a SNP heritability of 0.5; the sample size and SNP heritability of the second trait vary between panels. The blue line shows the expected prediction accuracy of a single-trait predictor, the black line that of a multi-trait predictor, and the purple line that of a cross-trait predictor (using only trait 2 to predict trait 1). The advantage of the multi-trait predictor over the cross-trait predictor decreases with increasing rG, h2, and sample size of the second trait.

b Simulation results. Prediction accuracy is shown as the correlation between the simulated genetic value and the predicted phenotype of individuals. Genotypes from European individuals in the GERA cohort were used for simulation. Boxplots show results across six replicates. In the left panels, the LD structure was removed by permuting dosage values for each SNP across all individuals; in the right panels, the original genotypes were used. Expected prediction accuracies were derived for the case of unlinked genotypes and are shown as red horizontal bars. In each panel, the prediction accuracy of three predictors is shown: (1) single-trait BLUP, (2) multi-trait BLUP (MT-BLUP), and (3) weighted approximate BLUP (wMT-SBLUP, the summary statistic-based multi-trait predictor). Simulation in genotypes without LD yields prediction accuracies that conform to expectations. In the presence of LD, the expected prediction accuracy depends strongly on the choice of Meff, the effective number of independent chromosome segments.
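As a rough illustration of the expectations plotted in panel a, the sketch below computes expected accuracies of the three predictors from N, h2, Meff, and rG. It is not the authors' exact derivation: it uses the standard Daetwyler-style approximation r = sqrt(N*h2 / (N*h2 + Meff)) for a single-trait predictor, attenuates that accuracy by rG for the cross-trait predictor, and combines the two single-trait predictors with a selection-index formula as a stand-in for the multi-trait predictor, assuming independent discovery samples. All numeric parameter values for trait 2 and Meff are illustrative assumptions, not values taken from the figure.

```python
# Minimal sketch of the expected-accuracy curves in panel a (assumptions noted above).
import numpy as np

def single_trait_accuracy(n, h2, m_eff):
    """Expected correlation between a single-trait SNP predictor and the
    true genetic value (Daetwyler-style approximation)."""
    return np.sqrt(n * h2 / (n * h2 + m_eff))

def cross_trait_accuracy(n2, h2_2, m_eff, r_g):
    """Expected correlation between trait 2's predictor and trait 1's
    genetic value: the single-trait accuracy attenuated by rG."""
    return r_g * single_trait_accuracy(n2, h2_2, m_eff)

def multi_trait_accuracy(n1, h2_1, n2, h2_2, m_eff, r_g):
    """Accuracy of the best linear combination of the two single-trait
    predictors for trait 1's genetic value (selection-index result),
    assuming the two discovery samples are independent."""
    r1 = single_trait_accuracy(n1, h2_1, m_eff)      # predictor 1 vs genetic value 1
    r2_own = single_trait_accuracy(n2, h2_2, m_eff)  # predictor 2 vs genetic value 2
    r2 = r_g * r2_own                                # predictor 2 vs genetic value 1
    rho12 = r_g * r1 * r2_own                        # correlation between the two predictors
    c = np.array([r1, r2])                           # covariances with the target genetic value
    v = np.array([[1.0, rho12],
                  [rho12, 1.0]])                     # covariance of the standardised predictors
    return np.sqrt(c @ np.linalg.solve(v, c))

# Illustrative parameters, loosely following the caption's setup for trait 1;
# trait 2 and Meff are hypothetical choices.
n1, h2_1 = 20_000, 0.5
n2, h2_2 = 40_000, 0.3
m_eff = 50_000

for r_g in (0.2, 0.5, 0.8):
    print(f"rG={r_g:.1f}  "
          f"single={single_trait_accuracy(n1, h2_1, m_eff):.3f}  "
          f"cross={cross_trait_accuracy(n2, h2_2, m_eff, r_g):.3f}  "
          f"multi={multi_trait_accuracy(n1, h2_1, n2, h2_2, m_eff, r_g):.3f}")
```

Under this approximation the multi-trait accuracy is never below the better of the single-trait and cross-trait accuracies, and the gap narrows as rG, h2, or the sample size of the second trait grows, mirroring the trend described for panel a; in real (linked) genotypes the curves shift with the value chosen for Meff, as noted for panel b.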