
Fig. 2: Contrastive learning of text and crystal graphs.

From: Exploration of crystal chemical space using text-guided generative artificial intelligence


a Heatmaps of cosine similarity between text embeddings from text encoders and graph embeddings from graph neural networks (GNNs). The Baseline BERT (Bidirectional Encoder Representations from Transformers) model refers to MatSciTPUBERT23. Crystal CLIP (Contrastive Language-Image Pretraining) denotes a text encoder trained using contrastive learning to align with graph embeddings. Values are plotted for 128 randomly sampled unit cells, forming a 128 × 128 matrix. Diagonal elements represent positive pairs, while off-diagonal elements represent negative pairs. b A t-SNE (t-distributed stochastic neighbor embedding) visualization of element embeddings generated by the text encoders, using element symbols as the textual input.
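The alignment shown in panel a can be sketched as a CLIP-style contrastive setup: compute the pairwise cosine-similarity matrix between the text and graph embeddings of matched unit cells, then train so that diagonal (positive) pairs score high and off-diagonal (negative) pairs score low. The snippet below is a minimal illustration under assumptions not taken from the figure (an embedding dimension of 256, a temperature of 0.07, and random tensors standing in for the actual encoder outputs); it is not the authors' implementation.

```python
# Minimal sketch of a CLIP-style text/graph alignment (assumptions: embedding
# dimension, temperature, and random stand-in embeddings are illustrative only).
import torch
import torch.nn.functional as F

def cosine_similarity_matrix(text_emb: torch.Tensor, graph_emb: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine similarities between N text and N graph embeddings (N x N)."""
    text_emb = F.normalize(text_emb, dim=-1)
    graph_emb = F.normalize(graph_emb, dim=-1)
    return text_emb @ graph_emb.T

def clip_contrastive_loss(sim: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric cross-entropy: pulls diagonal (positive) pairs together and
    pushes off-diagonal (negative) pairs apart."""
    logits = sim / temperature
    targets = torch.arange(sim.size(0), device=sim.device)  # index of each positive pair
    loss_text = F.cross_entropy(logits, targets)    # text -> graph direction
    loss_graph = F.cross_entropy(logits.T, targets)  # graph -> text direction
    return 0.5 * (loss_text + loss_graph)

if __name__ == "__main__":
    n, d = 128, 256  # 128 sampled unit cells, as in the plotted 128 x 128 matrix
    text_emb = torch.randn(n, d)   # stand-in for text-encoder outputs
    graph_emb = torch.randn(n, d)  # stand-in for GNN graph embeddings
    sim = cosine_similarity_matrix(text_emb, graph_emb)
    print(sim.shape, clip_contrastive_loss(sim).item())
```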
