Extended Data Fig. 4: The performance of RoBERTa-large when sequentially applying different delta-tuning methods.
From: Parameter-efficient fine-tuning of large-scale pre-trained language models

The performance of RoBERTa-large when different delta-tuning methods (adapter (AP), BitFit (BF) and prompt-tuning (PT)) are applied sequentially. The experiments are conducted on SST-2.
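
To make the sequential combination concrete, the following is a minimal PyTorch sketch, not the paper's implementation, of how the three delta-tuning methods could be stacked on one frozen backbone: an adapter bottleneck (AP), bias-only tuning (BF) and a soft prompt prepended to the input (PT). The module names (`DeltaTunedEncoder`, `Adapter`), the toy linear backbone, and hyperparameters such as the bottleneck width and prompt length are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch: stacking adapter (AP), BitFit (BF) and prompt-tuning (PT)
# deltas on a frozen backbone. The backbone is a toy linear layer standing in
# for the frozen pre-trained weights of RoBERTa-large.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))


class DeltaTunedEncoder(nn.Module):
    """A frozen toy encoder layer with optional AP / BF / PT deltas."""
    def __init__(self, dim: int = 768, prompt_len: int = 8,
                 use_ap: bool = True, use_bf: bool = True, use_pt: bool = True):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)       # stands in for frozen PLM weights
        for p in self.backbone.parameters():      # freeze everything by default
            p.requires_grad = False

        self.adapter = Adapter(dim) if use_ap else None
        if use_bf:                                # BF: only bias terms are trainable
            self.backbone.bias.requires_grad = True
        self.prompt = (nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
                       if use_pt else None)       # PT: trainable soft prompt tokens

    def forward(self, x):                         # x: (batch, seq, dim)
        if self.prompt is not None:               # prepend the soft prompt
            prompt = self.prompt.unsqueeze(0).expand(x.size(0), -1, -1)
            x = torch.cat([prompt, x], dim=1)
        h = self.backbone(x)                      # frozen weights (+ trainable bias if BF)
        if self.adapter is not None:              # residual bottleneck adapter
            h = self.adapter(h)
        return h


# Sequential application (e.g. AP, then AP+BF, then AP+BF+PT) corresponds to
# enabling these flags one at a time and tuning only the newly added parameters.
model = DeltaTunedEncoder(use_ap=True, use_bf=True, use_pt=True)
print([n for n, p in model.named_parameters() if p.requires_grad])
```

Printing the trainable parameter names confirms that only the adapter weights, the backbone bias and the soft prompt receive gradients, which is the parameter-efficient regime the figure compares across method orderings.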