Fig. 4: Illustration of mesoscale structures that we learn from images and networks. | Nature Communications


From: Learning low-rank latent mesoscale structures in networks


In each experiment in this figure, we form a matrix X of size d × n by sampling n mesoscale patches of size d = 21 × 21 from the corresponding object. For the image in (a), the columns of X are square patches with 21 × 21 pixels. In (b, c), we show portions of the adjacency matrices of the two networks. We take the columns of X to be the k × k adjacency matrices of the connected subgraphs that are induced by a path of k = 21 nodes, where a k-node path consists of k distinct nodes x1, …, xk such that xi and xi+1 are adjacent for all i ∈ {1, …, k − 1}. Using nonnegative matrix factorization (NMF), we compute an approximate factorization X ≈ WH into nonnegative matrices W and H, where W is called a network dictionary and has r = 25 columns. Using this factorization, we can approximate each sampled mesoscale patch (i.e., each column of X) of an object by a nonnegative linear combination of the columns of W, which we interpret as latent shapes for the image and as latent motifs (i.e., subgraphs) for the networks. The columns of H give the coefficients in these linear combinations. The network dictionaries of latent motifs that we learn from the (b) UCLA and (c) CALTECH Facebook networks have distinctive social structures. In the adjacency matrix of the UCLA network, we show only the first 3000 nodes (according to the node labeling in the data set). The image in (a) is from the collection Die Graphik Ernst Ludwig Kirchners bis 1924, von Gustav Schiefler Band I bis 1916 (Accession Number 2007.141.9, Ernst Ludwig Kirchner, 1926). We use the image with permission from the National Gallery of Art in Washington, DC, USA.
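The pipeline described above (sample k-node paths, vectorize their induced k × k adjacency matrices into the columns of X, then factor X ≈ WH with NMF) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a small random graph in place of the Facebook networks, k = 5 and r = 9 instead of k = 21 and r = 25 for speed, and plain Lee–Seung multiplicative updates in place of whatever NMF solver the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a network: a symmetric random adjacency matrix
# (the paper uses the UCLA and CALTECH Facebook networks instead).
n_nodes, p = 200, 0.05
A = (rng.random((n_nodes, n_nodes)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T  # symmetric, no self-loops

def sample_k_path(A, k, rng, max_tries=1000):
    """Sample k distinct nodes x1, ..., xk with x_i adjacent to x_{i+1},
    via a self-avoiding random walk; return None if every try dead-ends."""
    for _ in range(max_tries):
        path = [int(rng.integers(A.shape[0]))]
        while len(path) < k:
            nbrs = [v for v in np.flatnonzero(A[path[-1]]) if v not in path]
            if not nbrs:
                break
            path.append(int(rng.choice(nbrs)))
        if len(path) == k:
            return path
    return None

# Form X: each column is the vectorized k x k adjacency matrix of the
# connected subgraph induced by a sampled k-node path.
k, n_samples = 5, 300
cols = []
while len(cols) < n_samples:
    path = sample_k_path(A, k, rng)
    if path is not None:
        cols.append(A[np.ix_(path, path)].ravel())
X = np.column_stack(cols)  # shape (k*k, n_samples) = (d, n)

# NMF via multiplicative updates: X ~ W H with W >= 0 and H >= 0.
# The columns of W play the role of the latent motifs.
d, n = X.shape
r, eps = 9, 1e-10
W = rng.random((d, r))
H = rng.random((r, n))
err_initial = np.linalg.norm(X - W @ H)
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)
err_final = np.linalg.norm(X - W @ H)
```

After the loop, each column of X is approximated by a nonnegative combination of the r columns of W (with coefficients given by the corresponding column of H), and the reconstruction error is lower than at initialization, since multiplicative updates monotonically decrease the Frobenius objective.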
