Fig. 7: Ablation study and hyperparameters adaptability experiments for the computational workflow.

From: EasyHypergraph: an open-source software for fast and memory-saving analysis and learning of higher-order networks


a Radar chart comparing time consumption when the cache and the preloading are each excluded. b Comparison of maximum memory usage with different storage formats for computing the incidence matrix; “OOM” indicates an out-of-memory situation. c–h Comparison of average running time with two sparse storage options on different HNNs: HGNN, HGNN+, HNHN, HyperGCN, UniGCN, and UniGAT. i–n Hidden size and learning rate are selected as two representative hyperparameters; HGNN is trained under various hyperparameter settings. i–k The hidden size is tuned from 64 to 512 on the corresponding datasets; for a fair comparison, the same learning rate of 0.001 is used for all models. l–n The learning rate is tuned from 0.001 to 0.1 on the corresponding datasets, using the same hidden sizes as in the hypergraph learning efficiency comparisons for all models.
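Panel b contrasts storage formats for the hypergraph incidence matrix, where dense storage can exhaust memory ("OOM") while sparse storage does not. The sketch below is a minimal, self-contained illustration of why this happens (it is not the EasyHypergraph implementation; the hypergraph size and density are hypothetical): it builds the same incidence matrix in sparse CSR form and estimates what the equivalent dense array would require.

```python
import numpy as np
from scipy import sparse

# Hypothetical hypergraph: 20,000 nodes, 5,000 hyperedges,
# each hyperedge incident to ~5 nodes (a very sparse incidence structure).
rng = np.random.default_rng(0)
n_nodes, n_edges, avg_size = 20_000, 5_000, 5

rows = rng.integers(0, n_nodes, size=n_edges * avg_size)   # node indices
cols = np.repeat(np.arange(n_edges), avg_size)              # hyperedge indices
data = np.ones_like(rows, dtype=np.float32)

# Sparse CSR incidence matrix: memory scales with the number of incidences.
H_sparse = sparse.csr_matrix((data, (rows, cols)), shape=(n_nodes, n_edges))
sparse_bytes = (H_sparse.data.nbytes
                + H_sparse.indices.nbytes
                + H_sparse.indptr.nbytes)

# Dense incidence matrix: memory scales with n_nodes * n_edges,
# which is what drives the out-of-memory cases in panel b.
dense_bytes = n_nodes * n_edges * np.dtype(np.float32).itemsize

print(f"sparse CSR: {sparse_bytes / 1e6:.1f} MB, dense: {dense_bytes / 1e6:.1f} MB")
```

Under these assumed sizes, the dense array needs on the order of hundreds of megabytes while the CSR representation needs well under a megabyte, which mirrors the qualitative gap shown in panel b.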
