Fig. 3: Overall performance of RareDDIE in DDIE prediction.

a The AUC (Area Under the Curve), ACC (Accuracy) and F1 scores of DDIE (Drug-Drug Interaction Event) prediction on the common, fewer and rare event test sets using seven comparison methods under the few-shot setting. In these experiments, our proposed RareDDIE extracts mechanism meta-knowledge from the DDIs of known events and subsequently transfers this knowledge to new events; consequently, the model can generalize to the DDIs of novel events with only a few support samples. Because existing DDI prediction methods are not designed to predict interactions for unknown events, we directly provided these models with a few DDI samples from the new events for training; these samples correspond to the support samples of the test set in our framework. We compared our method against META-DDIE30, GMatching31, MetaR-In32, MetaR-Pre32, DSN-DDI33, MRCGNN34, and KnowDDI35. Each experiment is conducted five times, with a distinct set of randomly selected support samples used for training and prediction in each iteration (a minimal sketch of this repeated-evaluation protocol is given after the caption).

b Analysis of the prediction capability on an independent rare event test set. The model was first trained on the common event samples from the collected Dataset1 and Dataset2, and then used to predict directly on the independent rare event test set without any fine-tuning. Owing to the limitations of the independent dataset, only three meta-learning models are used for comparison. The five independent results are obtained from five independently trained models.

c The AUC, ACC and F1 scores of DDIE prediction on the common, fewer and rare event test sets with four variants under the zero-shot setting. To extend the meta-knowledge of RareDDIE to zero-shot tasks, we introduced a BST (Biological Semantic Transferring) module to create ZetaDDIE. BST aligns dual-granular structure information with biological semantic information, leveraging a large-scale sentence embedding model to acquire the semantic information (the general idea is sketched in the second code example after the caption). Four variants are constructed: (1) ZetaDDIE without BST, which removes the Biological Semantic Transferring module; (2) ZetaDDIE with BioBERT, which extracts semantic information with the BioBERT language model; (3) ZetaDDIE with Premodel, which initializes ZetaDDIE with the trained parameters of the RareDDIE model before training; (4) ZetaDDIE with BioSentVec, which extracts semantic information with the BioSentVec language model. Each experiment is conducted five times, with a distinct set of randomly selected support samples used for training and prediction in each iteration. The significance test results of all experiments, based on the two-tailed t-test without adjustment, are reported in Supplementary Tables 1–3. Error bars represent the standard deviation across the five independent experiments and apply to all panels. Source data are provided as a Source Data file.
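
The repeated-evaluation protocol described for panels a and c can be summarized with a minimal sketch. The model interface (fit on support samples, predict probabilities) and the data structures are hypothetical placeholders, not the actual RareDDIE API; only the protocol itself (five runs, distinct random support sets, AUC/ACC/F1 with mean and standard deviation) follows the caption.

```python
# Minimal sketch of the five-run few-shot evaluation protocol described above.
# `model.fit` / `model.predict_proba` and `event_pool` are hypothetical
# placeholders, not the actual RareDDIE interface.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score

def evaluate_few_shot(model, event_pool, test_set, k_support, n_runs=5, seed=0):
    """Run n_runs evaluations, each with a distinct random support set."""
    rng = np.random.default_rng(seed)
    metrics = {"AUC": [], "ACC": [], "F1": []}
    for _ in range(n_runs):
        # Draw a distinct set of support samples for the new event.
        idx = rng.choice(len(event_pool), size=k_support, replace=False)
        support = [event_pool[i] for i in idx]
        model.fit(support)  # adapt on the few support samples
        y_true = np.array([label for _, label in test_set])
        y_prob = np.array(model.predict_proba([pair for pair, _ in test_set]))
        y_pred = (y_prob >= 0.5).astype(int)
        metrics["AUC"].append(roc_auc_score(y_true, y_prob))
        metrics["ACC"].append(accuracy_score(y_true, y_pred))
        metrics["F1"].append(f1_score(y_true, y_pred))
    # Report mean and standard deviation across runs (the plotted error bars).
    return {m: (float(np.mean(v)), float(np.std(v))) for m, v in metrics.items()}
```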
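The BST module of panel c pairs sentence-level biological semantics with structural event embeddings. The sketch below only illustrates the general idea of such an alignment: it uses the sentence-transformers library as a stand-in for BioBERT/BioSentVec, and the projection layer, embedding dimensions and cosine-similarity objective are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of semantic-to-structure alignment in the spirit of BST.
# Model name, dimensions and loss are assumptions, not ZetaDDIE's actual code.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for BioBERT/BioSentVec
descriptions = ["Drug A increases the anticoagulant activity of Drug B."]  # toy text
semantic = torch.tensor(encoder.encode(descriptions))  # (n_events, 384)

project = nn.Linear(semantic.shape[1], 128)        # map semantics to structure space
structural = torch.randn(len(descriptions), 128)   # placeholder structural embeddings

# Alignment objective: pull the projected semantic embedding of an event toward
# its structural embedding (cosine-similarity loss on matched pairs).
loss = 1 - nn.functional.cosine_similarity(project(semantic), structural).mean()
loss.backward()
```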
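The unadjusted two-tailed t-test behind the significance results in Supplementary Tables 1–3 operates on pairs of five-run score lists; the score values below are hypothetical placeholders, not numbers from the figure.

```python
# Sketch of the two-tailed t-test without adjustment applied to five-run scores.
# The score lists are illustrative placeholders, not real experimental results.
from scipy import stats

method_a_f1 = [0.81, 0.79, 0.83, 0.80, 0.82]  # hypothetical five-run F1 scores
method_b_f1 = [0.74, 0.76, 0.72, 0.75, 0.73]

# Independent two-sample t-test; p-value is two-tailed, with no multiple-testing
# correction, matching the "without adjustment" wording in the caption.
t_stat, p_value = stats.ttest_ind(method_a_f1, method_b_f1)
print(f"t = {t_stat:.3f}, two-tailed p = {p_value:.4f}")
```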