Fig. 8: Applications of our NDL and NDR algorithms to network denoising with additive and subtractive noise on a variety of real-world networks. | Nature Communications

From: Learning low-rank latent mesoscale structures in networks

In our experiments with subtractive noise, we corrupt a network by removing 50% of its edges uniformly at random. We then seek to classify the nonedges of the corrupted network as true edges (i.e., removed edges) or false edges (i.e., nonedges of the original network). In our experiments with additive noise, we corrupt a network by adding edges uniformly at random; the number of added edges is 50% of the number of original edges (i.e., 1000 random edges for the network that we generate using the WS model) for all networks except H. SAPIENS, for which we add 30000 random edges. We then seek to classify the edges of the corrupted network as true edges (i.e., original edges) or false edges (i.e., added edges). To perform this classification for a network, we first use NDL to learn latent motifs from the corrupted network and then use NDR to reconstruct the network, which assigns a confidence value to each potential edge. We use these confidence values to infer the correct labels of the potential edges of the uncorrupted network. Importantly, we never use information from the original networks to denoise the corrupted networks. For each network, we report the area under the curve (AUC) of the receiver-operating-characteristic (ROC) curve, which plots the false-positive rate on the horizontal axis and the true-positive rate on the vertical axis. See Supplementary Figs. 5–7 for the values of other binary-classification measures.
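The subtractive-noise evaluation can be sketched as follows. This is a minimal, self-contained illustration, not the paper's pipeline: the toy ring-with-chords graph is an assumption, and the number of common neighbors stands in for the NDR confidence values (the actual scores come from the learned latent motifs and network reconstruction). The ROC AUC is computed via the Mann-Whitney interpretation: the probability that a randomly chosen true (removed) edge receives a higher score than a randomly chosen false edge.

```python
import random
from itertools import combinations

def roc_auc(labels, scores):
    # AUC as the Mann-Whitney U statistic: the probability that a random
    # positive outranks a random negative, with ties counting one half.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)

# Toy graph (an assumption, not one of the paper's networks): a ring of
# 30 nodes plus distance-2 chords, giving 60 undirected edges.
n = 30
edges = {frozenset((i, (i + 1) % n)) for i in range(n)}
edges |= {frozenset((i, (i + 2) % n)) for i in range(n)}

# Subtractive noise: remove 50% of the edges uniformly at random.
removed = set(random.sample(sorted(edges, key=sorted), len(edges) // 2))
observed = edges - removed

# Stand-in confidence score (NOT the NDR reconstruction): the number of
# common neighbors of the endpoint pair in the corrupted network.
adj = {i: set() for i in range(n)}
for e in observed:
    u, v = tuple(e)
    adj[u].add(v)
    adj[v].add(u)

labels, scores = [], []
for u, v in combinations(range(n), 2):
    e = frozenset((u, v))
    if e in observed:
        continue  # classify only the nonedges of the corrupted network
    labels.append(1 if e in removed else 0)  # 1 = true (removed) edge
    scores.append(len(adj[u] & adj[v]))

auc = roc_auc(labels, scores)
print(round(auc, 3))
```

In the paper's setting, `scores` would instead hold the NDR confidence values for each candidate nonedge; everything downstream of that (labeling and the AUC computation) is the same binary-classification bookkeeping shown here.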
