Fig. 2
From: Medium-sized protein language models perform well at transfer learning on realistic datasets

Mean reduction in \(R^2\) when embeddings are compressed with methods other than mean pooling. (A) Results for DMS data. (B) Results for diverse protein sequences (PISCES data). In both panels, the y-axis lists the compression methods and the x-axis shows the resulting difference in \(R^2\). Dots represent the fixed-effects estimates from mixed-effects modeling, and error bars represent 95% confidence intervals.