Fig. 1: Word2Vec skip-gram representation of a word embedding model. | npj Computational Materials


From: An unsupervised machine learning based approach to identify efficient spin-orbit torque materials


The model consists of a one-hot encoded input layer, a hidden layer, and an output layer that performs negative sampling with n = 15. The trained word embedding model is connected to a postprocessing module, which collects material word embeddings related to a target word and calculates their similarity (Γ) to a phenomenon within a multilayered stack. The final output of the postprocessing module ranks those materials according to Γ.
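The postprocessing step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the embedding vectors and material names below are placeholder values standing in for the output of a trained skip-gram model, and Γ is assumed to be cosine similarity between embedding vectors.

```python
import numpy as np

# Placeholder embeddings; in the actual pipeline these would come from
# a Word2Vec skip-gram model trained with negative sampling (n = 15).
embeddings = {
    "spin-orbit torque": np.array([0.9, 0.1, 0.2]),
    "Pt": np.array([0.8, 0.2, 0.3]),
    "W":  np.array([0.7, 0.1, 0.4]),
    "Cu": np.array([0.1, 0.9, 0.5]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_materials(target, materials, emb):
    """Compute Gamma for each material against the target phenomenon
    and return materials sorted by decreasing similarity."""
    gamma = {m: cosine_similarity(emb[m], emb[target]) for m in materials}
    return sorted(gamma.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_materials("spin-orbit torque", ["Pt", "W", "Cu"], embeddings)
for material, gamma in ranking:
    print(f"{material}: {gamma:.3f}")
```

With these placeholder vectors, materials whose embeddings point in nearly the same direction as the phenomenon embedding receive the highest Γ and appear first in the ranking.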
