Fig. 6: Effective operators and parallel tensor networks.
From: Tensor networks for lattice gauge theories beyond one dimension

a Procedure for optimizing a TTN to find the ground state of a QMB system: the energy is computed by contracting the Hamiltonian H (yellow tensor) with the TTN, representing the state \(\left\vert \psi \right\rangle\), and its Hermitian conjugate, representing \(\left\langle \psi \right\vert\). The variational optimization starts from a target tensor T (red tensor): its effective Hamiltonian Heff is computed, and the local eigenvalue problem for the latter is solved. The tensor T is then updated with the newly found local ground state, and the procedure iterates over all tensors in the network (a sweep). b The workload consists of optimizing each tensor held by an MPI thread t, which requires effective operators calculated by other MPI threads. We dub delays \(\Delta_i\) the number of optimization cycles needed for the information of the tensors in the i-th MPI thread to reach another MPI thread via MPI communication. An MPS naturally splits into sub-chains, each communicating with one or two neighboring MPI threads to obtain updated effective operators; the delay for an update scales linearly with the distance between the two MPI threads along the chain. Each MPI thread can additionally use threading via OpenMP, e.g., in a hybrid OpenMP-MPI approach. c Similarly, a TTN can be split into sub-trees, one per MPI thread, so that each sub-tree is optimized without communication with other MPI threads. Delays due to updates scale logarithmically, as does any distance in a TTN.
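The single-tensor sweep in panel a can be illustrated on the smallest possible example: a two-site MPS (the simplest tensor network) for a 2-qubit Heisenberg Hamiltonian. This is a minimal sketch, not the paper's TTN code; the tiny model, tensor names, and gauge choices are illustrative assumptions. The key steps, gauging the environment, building the effective Hamiltonian for the target tensor, solving the local eigenvalue problem, and updating the tensor, are the same as in the figure.

```python
import numpy as np

# Toy sweep: psi[i, j] = sum_k A[i, k] B[k, j], for the 2-qubit Heisenberg
# Hamiltonian H = XX + YY + ZZ (exact ground-state energy: -3).
X = np.array([[0., 1.], [1., 0.]], dtype=complex)
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.array([[1., 0.], [0., -1.]], dtype=complex)
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)
H4 = H.reshape(2, 2, 2, 2)  # H4[i', j', i, j]

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))  # A[i, k]
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))  # B[k, j]

def energy(A, B):
    psi = (A @ B).reshape(4)
    psi = psi / np.linalg.norm(psi)
    return float((psi.conj() @ H @ psi).real)

for sweep in range(3):
    # Gauge B right-canonical (orthonormal rows), so the local problem
    # for A becomes an ordinary Hermitian eigenvalue problem.
    q, _ = np.linalg.qr(B.conj().T)
    B = q.conj().T
    # Effective Hamiltonian for the target tensor A (red tensor in the
    # figure): contract H with B and its conjugate.
    HeffA = np.einsum('eb,abcd,fd->aecf', B.conj(), H4, B).reshape(4, 4)
    w, v = np.linalg.eigh(HeffA)
    A = v[:, 0].reshape(2, 2)  # update A with the local ground state
    # Move the orthogonality center: A -> isometry, absorb the rest into B.
    q, r = np.linalg.qr(A)
    A, B = q, r @ B
    # Same local step for B with its own effective Hamiltonian.
    HeffB = np.einsum('ae,abcd,cf->ebfd', A.conj(), H4, A).reshape(4, 4)
    w, v = np.linalg.eigh(HeffB)
    B = v[:, 0].reshape(2, 2)

print(round(energy(A, B), 8))  # converges to the exact value -3.0
```

Because the toy system is so small, a single local solve already reaches the exact ground state; in a real TTN the sweep must revisit every tensor repeatedly, which is what makes the effective-operator communication pattern of panels b and c relevant.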
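The delay scaling contrasted in panels b and c can be sketched numerically. The model below is a simplifying assumption (one sub-chain or sub-tree per MPI thread, one optimization cycle of delay per hop between neighboring threads, threads placed at the leaves of a perfect binary tree); it is not the paper's cost model, but it reproduces the linear-vs-logarithmic contrast stated in the caption.

```python
def chain_delay(i, j):
    # MPS (panel b): MPI threads sit along a line; effective-operator
    # updates hop through every intermediate thread, so the delay grows
    # linearly with the distance along the chain.
    return abs(i - j)

def tree_delay(i, j):
    # TTN (panel c): MPI threads sit at the leaves of a binary tree; an
    # update travels up to the lowest common ancestor and back down, so
    # the delay is O(log n) hops, like any distance in a TTN.
    d = 0
    while i != j:
        i //= 2  # one hop up from leaf/node i
        j //= 2  # one hop up from leaf/node j
        d += 2   # count one hop on each branch of the path
    return d

# Hypothetical example with 64 MPI threads, comparing the two endpoints:
print(chain_delay(0, 63))  # 63 cycles along the chain
print(tree_delay(0, 63))   # 12 cycles through the tree
```

Under these assumptions the worst-case delay for 64 threads drops from 63 cycles in the chain layout to 12 in the tree layout, which is why splitting a TTN into sub-trees keeps stale effective operators bounded even at large thread counts.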