Table 1 Comparative analysis based on area under the ROC curve (AUC).

From: Supervised learning of gene-regulatory networks based on graph distance profiles of transcriptomics data

| Data | ARACNE | CLR | TIGRESS | mrnet | GENIE3 | iRafNet | Wisdom of crowds | SIRENE (average) | Expression-based SVM | GRADIS |
|---|---|---|---|---|---|---|---|---|---|---|
| **DREAM4** | | | | | | | | | | |
| Net1 | 0.56 | 0.71 | 0.50 | 0.69 | 0.77 | 0.50 | 0.82 | 0.54 | 0.81 (0.77–0.84) | 0.86 (0.80–0.92) |
| Net2 | 0.54 | 0.64 | 0.50 | 0.65 | 0.69 | 0.50 | 0.78 | 0.48 | 0.83 (0.79–0.87) | 0.85 (0.82–0.88) |
| Net3 | 0.56 | 0.71 | 0.52 | 0.72 | 0.73 | 0.50 | 0.79 | 0.50 | 0.72 (0.66–0.77) | 0.77 (0.72–0.82) |
| Net4 | 0.55 | 0.67 | 0.51 | 0.67 | 0.69 | 0.50 | 0.78 | 0.50 | 0.70 (0.67–0.73) | 0.76 (0.72–0.80) |
| Net5 | 0.58 | 0.68 | 0.51 | 0.52 | 0.76 | 0.50 | 0.80 | 0.48 | 0.71 (0.64–0.79) | 0.77 (0.71–0.82) |
| **DREAM5** | | | | | | | | | | |
| InSilico | 0.50 | 0.50 | 0.74 | 0.74 | 0.82 | – | 0.81 | 0.42 | 0.84 (0.83–0.85) | 0.85 (0.84–0.86) |
| *E. coli* | 0.51 | 0.59 | 0.59 | 0.59 | 0.69 | – | 0.69 | 0.41 | 0.87 (0.85–0.88) | 0.94 (0.93–0.94) |
| *S. cerevisiae* | 0.50 | 0.52 | 0.52 | 0.52 | 0.54 | – | 0.54 | 0.49 | 0.80 (0.79–0.81) | 0.96 (0.96–0.97) |

  1. The performance of GRADIS is compared with that of six unsupervised approaches (ARACNE, CLR, GENIE3, iRafNet, mrnet, TIGRESS), their wisdom-of-crowds combination, and two supervised approaches (SIRENE and an expression-based SVM classifier). Since the comparison is based on the global (i.e., network-centric) approach, for SIRENE we report the average AUC over all TFs (for the local comparison, refer to 'Methods'). The numbers in parentheses denote confidence intervals (see 'Methods'). The comparison includes the five synthetic data sets from the DREAM4 challenge as well as the one synthetic and the two real-world data sets from the DREAM5 challenge. Results from iRafNet are not provided for the DREAM5 data sets because the required data on knockout experiments and protein–protein interactions are not available.
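The evaluation metric above, AUC with a confidence interval, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact CI procedure is described in 'Methods', whereas this sketch uses a plain percentile bootstrap over hypothetical edge labels and prediction scores, and computes AUC via the rank-based (Mann–Whitney) formulation.

```python
import numpy as np

def auc(y_true, scores):
    """ROC AUC via the Mann-Whitney U statistic.

    y_true: 0/1 array (1 = true regulatory edge), scores: predicted
    confidences. Ties in scores are ignored for simplicity.
    """
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_ci(y_true, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUC (an assumed, generic procedure)."""
    rng = np.random.default_rng(seed)
    n, aucs = len(y_true), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample edges with replacement
        if 0 < y_true[idx].sum() < n:        # need both classes present
            aucs.append(auc(y_true[idx], scores[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Hypothetical example: 4 candidate edges, 2 of them true.
y = np.array([0, 1, 0, 1])
s = np.array([0.1, 0.2, 0.3, 0.4])
print(auc(y, s))  # 0.75
```

A perfectly ranked prediction (all true edges scored above all non-edges) yields an AUC of 1.0, while a random ranking hovers around 0.5, the baseline several methods in the table sit at.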