Fig. 1: RNN wave functions architecture details.

From: Recurrent neural network wave functions for Rydberg atom arrays on kagome lattice

a An illustration of a positive RNN wave function. Each RNN cell receives an input \(\boldsymbol{\sigma}_{n-1}\) and a hidden state \(\boldsymbol{h}_{n-1}\) and outputs a new hidden state \(\boldsymbol{h}_{n}\). This vector is fed into the Softmax layer (denoted S), which computes the conditional probability \(P_{n}\). b RNN autoregressive sampling scheme: after obtaining the probability vector \(\boldsymbol{y}_{i}\) from the Softmax layer (S) in step \(i\), we sample it to produce \(\sigma_{i}\). The latter is fed back to the RNN as an input, along with the hidden state \(\boldsymbol{h}_{i}\), to sample the next degree of freedom \(\sigma_{i+1}\). c Mapping of the kagome lattice to a square lattice by embedding three atoms in a larger local Hilbert space. d A two-dimensional (2D) RNN with periodic boundary conditions, illustrated on a 3 × 3 lattice. A bulk RNN cell receives two hidden states \(\boldsymbol{h}_{i,j-1}\) and \(\boldsymbol{h}_{i-(-1)^{j},j}\), as well as two input vectors \(\boldsymbol{\sigma}_{i,j-1}\) and \(\boldsymbol{\sigma}_{i-(-1)^{j},j}\) (not shown), as illustrated by the black solid arrows. RNN cells at the boundary receive additional hidden states \(\boldsymbol{h}_{i,j+1}\) and \(\boldsymbol{h}_{i+(-1)^{j},j}\), as well as two input vectors \(\boldsymbol{\sigma}_{i,j+1}\) and \(\boldsymbol{\sigma}_{i+(-1)^{j},j}\) (not shown), as indicated by the blue curved and solid arrows. The sampling path follows a zigzag, as shown by the dashed red arrows. The initial memory states of the 2D RNN and the initial inputs are null vectors, as indicated by the dashed black arrows.
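Panels a and b describe an autoregressive model: each conditional probability is produced by a Softmax over the current hidden state, and the sampled output is fed back as the next input. The following is a minimal NumPy sketch of that sampling loop, not the authors' implementation: the vanilla tanh RNN cell, the parameter names (`W_h`, `W_x`, `W_s`), and the dimensions are illustrative assumptions; the local dimension of 8 reflects panel c's merging of three two-level atoms per square-lattice site.

```python
# Minimal sketch of the autoregressive sampling scheme of panels a-b.
# All parameter names and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_sites = 9       # e.g. a 3 x 3 grid of merged sites, visited in some order
local_dim = 8     # 2^3 local states after merging three atoms (panel c)
hidden_dim = 32   # illustrative hidden-state size

# Parameters of a vanilla tanh RNN cell and the Softmax output layer S.
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_x = rng.normal(scale=0.1, size=(hidden_dim, local_dim))
b_h = np.zeros(hidden_dim)
W_s = rng.normal(scale=0.1, size=(local_dim, hidden_dim))
b_s = np.zeros(local_dim)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample():
    """Draw one configuration and its probability, one site at a time."""
    h = np.zeros(hidden_dim)   # null initial memory state (dashed arrows)
    x = np.zeros(local_dim)    # null initial input
    config, prob = [], 1.0
    for _ in range(n_sites):
        # RNN cell: new hidden state h_n from h_{n-1} and sigma_{n-1}.
        h = np.tanh(W_h @ h + W_x @ x + b_h)
        # Softmax layer S: probability vector y_n over the local states.
        y = softmax(W_s @ h + b_s)
        s = rng.choice(local_dim, p=y)     # sample sigma_n from y_n
        prob *= y[s]
        config.append(s)
        x = np.eye(local_dim)[s]           # feed sigma_n back as next input
    return config, prob

config, prob = sample()
print(config, prob)
```

For the positive RNN wave function of panel a, the amplitude of the sampled configuration is then the square root of the accumulated product of conditionals, i.e. \(\sqrt{\texttt{prob}}\) in this sketch.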
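Panel d's connectivity can likewise be sketched in code. The wrapping rule below is an assumption inferred from the caption: the extra hidden states \(\boldsymbol{h}_{i,j+1}\) and \(\boldsymbol{h}_{i+(-1)^{j},j}\) are taken modulo the lattice size and are only available where they point at cells already visited along the zigzag path, namely in the last row and at the end of each row. The concatenate-then-tanh merge of incoming states and all names are illustrative, not the authors' scheme.

```python
# Sketch of how a 2D RNN cell with periodic boundaries gathers the hidden
# states and inputs named in panel d; the causal wrapping rule below is an
# inferred assumption, and all parameter names are illustrative.
import numpy as np

rng = np.random.default_rng(1)
L = 3                          # 3 x 3 lattice, as in the illustration
local_dim, hidden_dim = 8, 16  # illustrative sizes

# Up to four (hidden state, input) pairs feed one cell: two bulk connections
# plus two periodic-boundary connections; missing ones become null vectors.
W = rng.normal(scale=0.1, size=(hidden_dim, 4 * (hidden_dim + local_dim)))
b = np.zeros(hidden_dim)

def zigzag_path(L):
    """Dashed red arrows: row j runs left-to-right if j is even, else reversed."""
    for j in range(L):
        for i in (range(L) if j % 2 == 0 else range(L - 1, -1, -1)):
            yield i, j

def incoming(i, j):
    """Previously visited sites feeding cell (i, j); None means a null vector."""
    d = (-1) ** j
    return [
        (i, j - 1) if j > 0 else None,           # h_{i,j-1} (black arrow)
        (i - d, j) if 0 <= i - d < L else None,  # h_{i-(-1)^j,j} (black arrow)
        (i, 0) if j == L - 1 else None,          # h_{i,j+1} wrapped (blue arrow)
        ((i + d) % L, j) if not 0 <= i + d < L else None,  # h_{i+(-1)^j,j} wrapped
    ]

h, sig = {}, {}        # hidden states and one-hot inputs of visited cells
for i, j in zigzag_path(L):
    parts = []
    for c in incoming(i, j):
        parts.append(h.get(c, np.zeros(hidden_dim)))
        parts.append(sig.get(c, np.zeros(local_dim)))
    # Merge all incoming states with one linear map followed by tanh.
    h[i, j] = np.tanh(W @ np.concatenate(parts) + b)
    s = rng.integers(local_dim)        # stand-in for the Softmax sampling step
    sig[i, j] = np.eye(local_dim)[s]
```

Because every connection points backward along the zigzag path, the scheme remains autoregressive even with the periodic-boundary links.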
