Table 1 Performance comparison of our work with other existing tools for H. sapiens. Performance was evaluated using six measures (MCC, ACC, SEN, SPE, PRE and AUC) on three tests: the benchmark test (our method evaluated by 5-fold cross-validation), the independent test, and the independent test with negatives selected on the same proteins

From: Accurate in silico identification of species-specific acetylation sites by integrating protein sequence-derived and functional features

| Datasets | Tools | MCC | ACC | SEN | SPE | PRE | AUC |
|---|---|---|---|---|---|---|---|
| Benchmark test | PLMLA | 0.274 | 0.667 | 0.560 | 0.721 | 0.503 | 0.691 |
| | Phosida | 0.191 | 0.618 | 0.542 | 0.657 | 0.444 | 0.631 |
| | LysAcet | 0.131 | 0.579 | 0.540 | 0.598 | 0.405 | 0.591 |
| | ensemblePail | 0.107 | 0.565 | 0.529 | 0.583 | 0.391 | 0.564 |
| | PSKAcePred | 0.187 | 0.602 | 0.589 | 0.608 | 0.432 | 0.622 |
| | BRABSB | 0.345 | 0.694 | 0.630 | 0.726 | 0.538 | 0.675 |
| | Our Work | 0.409 | 0.709 | 0.736 | 0.695 | 0.549 | 0.794 |
| Independent test | PLMLA | 0.312 | 0.672 | 0.633 | 0.692 | 0.515 | 0.701 |
| | Phosida | 0.141 | 0.599 | 0.491 | 0.655 | 0.424 | 0.599 |
| | LysAcet | 0.089 | 0.558 | 0.512 | 0.582 | 0.388 | 0.552 |
| | ensemblePail | 0.065 | 0.558 | 0.457 | 0.610 | 0.378 | 0.537 |
| | PSKAcePred | 0.169 | 0.591 | 0.583 | 0.595 | 0.427 | 0.602 |
| | BRABSB | 0.278 | 0.655 | 0.612 | 0.678 | 0.496 | 0.653 |
| | Our Work | 0.325 | 0.664 | 0.694 | 0.648 | 0.505 | 0.756 |
| Independent test with negatives selected on the same proteins | PLMLA | 0.296 | 0.648 | 0.633 | 0.663 | 0.667 | 0.689 |
| | Phosida | 0.136 | 0.568 | 0.553 | 0.583 | 0.585 | 0.597 |
| | LysAcet | 0.120 | 0.558 | 0.503 | 0.616 | 0.583 | 0.552 |
| | ensemblePail | 0.076 | 0.535 | 0.457 | 0.618 | 0.560 | 0.534 |
| | PSKAcePred | 0.111 | 0.556 | 0.553 | 0.558 | 0.571 | 0.556 |
| | BRABSB | 0.275 | 0.637 | 0.612 | 0.663 | 0.659 | 0.645 |
| | Our Work | 0.214 | 0.600 | 0.482 | 0.725 | 0.652 | 0.606 |
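For reference, the five threshold-dependent measures reported above (MCC, ACC, SEN, SPE, PRE) follow their standard confusion-matrix definitions; AUC is computed from the full score distribution rather than a single threshold. The sketch below shows these standard formulas only; the function name and the example counts are illustrative and do not come from the paper's datasets.

```python
from math import sqrt

def threshold_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard confusion-matrix measures used in Table 1 (AUC excluded,
    since it requires prediction scores rather than hard class labels)."""
    sen = tp / (tp + fn)                   # sensitivity (recall)
    spe = tn / (tn + fp)                   # specificity
    pre = tp / (tp + fp)                   # precision
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    mcc = (tp * tn - fp * fn) / sqrt(      # Matthews correlation coefficient
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {"MCC": mcc, "ACC": acc, "SEN": sen, "SPE": spe, "PRE": pre}

# Hypothetical counts for illustration only, not taken from the benchmark data
print(threshold_metrics(tp=368, tn=695, fp=305, fn=132))
```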