Fig. 3 | Scientific Reports

From: Hierarchical contrastive learning for multi-label text classification

Illustration of the hierarchical structure and sampling strategy. Specifically, two parent nodes (e.g., \(p_1\) and \(p_2\)) and one child node (e.g., \(c_{11}\)) are randomly selected to compute the hierarchical contrastive loss. The objective is to maximize the distinctive information between \(p_1\) and \(p_2\) while minimizing the correlative information between \(p_1\) and \(c_{11}\). Here, \(c_{ij}\) denotes the j-th child node of the i-th parent node.
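The caption specifies only the sampling scheme (two parents plus one child of the first parent) and the push/pull objective, not the exact loss. The sketch below is a minimal illustration of that idea under assumed names: the `hierarchy` dictionary, the label embeddings, and the margin-based cosine loss are hypothetical stand-ins, not the paper's actual formulation.

```python
import random
import torch
import torch.nn.functional as F

# Hypothetical label hierarchy: parent label -> list of child labels.
# Names are illustrative only.
hierarchy = {
    "p1": ["c11", "c12"],
    "p2": ["c21", "c22", "c23"],
}

# Assume each label has a learnable embedding (e.g., from a label encoder).
dim = 64
all_labels = list(hierarchy) + [c for cs in hierarchy.values() for c in cs]
label_emb = {name: torch.randn(dim, requires_grad=True) for name in all_labels}


def sample_triplet(hierarchy):
    """Randomly pick two distinct parent nodes and one child of the first."""
    p1, p2 = random.sample(list(hierarchy), 2)
    c = random.choice(hierarchy[p1])
    return p1, p2, c


def hierarchical_contrastive_loss(e_p1, e_p2, e_c, margin=0.5):
    """Pull a parent toward its own child, push the two parents apart.

    Uses a margin on cosine similarity; the paper's objective may instead be
    an InfoNCE-style contrastive loss over many sampled nodes.
    """
    pull = 1.0 - F.cosine_similarity(e_p1, e_c, dim=0)              # parent-child agreement
    push = F.relu(F.cosine_similarity(e_p1, e_p2, dim=0) - margin)  # parent-parent separation
    return pull + push


p1, p2, c = sample_triplet(hierarchy)
loss = hierarchical_contrastive_loss(label_emb[p1], label_emb[p2], label_emb[c])
loss.backward()  # gradients flow into the sampled label embeddings
```

A single random triplet per step keeps the cost independent of the hierarchy's size; in practice one would average this loss over a batch of sampled triplets.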
