Fig. 2: Performance analysis of benchmarking methods in biological matching tasks. | Nature Communications

From: PhenoProfiler: advancing phenotypic learning for image-based drug discovery

a Comparison of end-to-end feature representation performance in biological matching tasks on three benchmark datasets (BBBC022, CDRP-BIO-BBBC036, and TAORF-BBBC037) under the leave-perturbations-out setting, evaluated with two metrics (MAP, FoE) against four baseline methods (DeepProfiler, ResNet50, ViT, OpenPhenom). b Performance comparison of the methods across recall rates (recall@1, recall@3, recall@5, and recall@10). c Ablation experiments for PhenoProfiler, showing performance changes after sequential removal of each module. Specifically, “-MSE”, “-Con”, and “-CLS” denote removal of the regression, contrastive, and classification objectives in the multi-objective module, while “-Gradient” denotes exclusion of the difference operations. d Performance curve of PhenoProfiler under classification learning alone, showing how MAP and FoE vary as the classification loss decreases. e Sensitivity analysis of PhenoProfiler's multi-objective learning, exploring the impact of the regression and contrastive weights (λ₂ and λ₃) while the classification weight is held fixed. f Hyperparameter analysis of θ₁ and θ₂ in the gradient encoder's parallel branches. MAP: Mean Average Precision; FoE: Folds of Enrichment; MSE: Mean Squared Error; Con: Contrastive Learning; CLS: Classification Learning. Source data are provided as a Source Data file.
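For readers unfamiliar with the retrieval metrics used in panels a and b, the following sketch shows the standard formulation of recall@k and mean average precision (MAP) for a matching task, where samples sharing a perturbation label should rank near one another by embedding similarity. This is a minimal illustration of the conventional definitions, not the paper's evaluation code; the cosine-similarity setup and function names are assumptions.

```python
import numpy as np

def recall_at_k(sims, labels, k):
    """Fraction of queries whose top-k most similar items
    (excluding the query itself) include at least one item
    with the same label (e.g. the same perturbation)."""
    hits = 0
    for i in range(len(labels)):
        order = np.argsort(-sims[i])          # rank by descending similarity
        order = order[order != i][:k]         # drop self-match, keep top-k
        hits += any(labels[j] == labels[i] for j in order)
    return hits / len(labels)

def mean_average_precision(sims, labels):
    """MAP over all queries: average precision of same-label
    items in each query's similarity-ranked list (query excluded)."""
    aps = []
    for i in range(len(labels)):
        order = np.argsort(-sims[i])
        order = order[order != i]
        rel = np.array([labels[j] == labels[i] for j in order], dtype=float)
        if rel.sum() == 0:                    # no other item shares this label
            continue
        prec = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((prec * rel).sum() / rel.sum())
    return float(np.mean(aps))

# Hypothetical usage: 4 embeddings, 2 perturbations, cosine similarity.
emb = np.array([[1.0, 0.0], [1.0, 0.01], [0.0, 1.0], [0.01, 1.0]])
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
sims = emb @ emb.T
labels = np.array([0, 0, 1, 1])
```

With these well-separated clusters, each query's nearest neighbor is its same-perturbation partner, so recall@1 and MAP both reach 1.0; as embeddings of different perturbations overlap, both metrics fall.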
