Fig. 1: Framework overview. | Nature Communications

From: Multi-channel learning for integrating structural hierarchies into context-dependent molecular representation

a The prompt-guided pretrain-finetune framework. For each downstream task, the model is additionally optimized over the prompt weight selection, locating the pre-trained channel best suited to the current application. b Molecule contrastive distancing (MCD), where positive samples \({G}_{i}^{{\prime} }\) generated by subgraph masking are contrasted against negative samples Gj with an adaptive margin. c Scaffold contrastive distancing (SCD), where positive samples \({G}_{i}^{{\prime} }\) generated by scaffold-invariant perturbation are contrasted against negative samples Gj with an adaptive margin. d The context prediction (CP) channel, which consists of masked subgraph prediction and motif prediction tasks. e The prompt-guided aggregation module, which conditionally aggregates atom representations into a molecule representation according to the prompt token. It is realized via multi-head attention with the prompt embedding hp as the query.
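Panel e describes aggregating atom representations into a single molecule representation with attention, using the prompt embedding hp as the query. A minimal single-head sketch of this idea (all function names, projection matrices, and dimensions below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def prompt_guided_aggregate(atom_h, prompt_h, Wq, Wk, Wv):
    """Aggregate atom representations into one molecule vector.

    atom_h:   (n_atoms, d) atom representations from the encoder
    prompt_h: (d,) prompt token embedding hp, used as the attention query
    Wq, Wk, Wv: (d, d) projection matrices (illustrative)
    """
    q = prompt_h @ Wq                     # (d,) query derived from the prompt
    k = atom_h @ Wk                       # (n_atoms, d) keys from atoms
    v = atom_h @ Wv                       # (n_atoms, d) values from atoms
    scores = k @ q / np.sqrt(q.shape[0])  # scaled dot-product scores
    attn = softmax(scores)                # attention weights over atoms
    return attn @ v                       # (d,) molecule representation

rng = np.random.default_rng(0)
d, n_atoms = 8, 5
h_mol = prompt_guided_aggregate(
    rng.normal(size=(n_atoms, d)),        # atom representations
    rng.normal(size=d),                   # prompt embedding hp
    rng.normal(size=(d, d)),
    rng.normal(size=(d, d)),
    rng.normal(size=(d, d)),
)
```

Because the query comes from the prompt token rather than the molecule itself, swapping in a different pre-trained channel's prompt changes how the same atom representations are weighted and pooled.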
