Designing methods that impose explicit, deep structural constraints on the latent space at the sample level is an open problem for transfer-learning approaches derived from natural language processing. McDermott and colleagues propose and analyse a pre-training framework that imposes such structural constraints, and empirically demonstrate its advantages by showing that it outperforms existing state-of-the-art pre-training methods.
- Matthew B. A. McDermott
- Brendan Yap
- Marinka Zitnik