Table 1 Performance comparison of our work with existing tools for H. sapiens. Performance was evaluated using six measures (MCC, ACC, SEN, SPE, PRE and AUC) on three tests: the benchmark test (our method evaluated by 5-fold cross-validation), the independent test, and the independent test with negatives selected from the same proteins
| Datasets | Tools | MCC | ACC | SEN | SPE | PRE | AUC |
|---|---|---|---|---|---|---|---|
| Benchmark test | PLMLA | 0.274 | 0.667 | 0.560 | 0.721 | 0.503 | 0.691 |
| | Phosida | 0.191 | 0.618 | 0.542 | 0.657 | 0.444 | 0.631 |
| | LysAcet | 0.131 | 0.579 | 0.540 | 0.598 | 0.405 | 0.591 |
| | ensemblePail | 0.107 | 0.565 | 0.529 | 0.583 | 0.391 | 0.564 |
| | PSKAcePred | 0.187 | 0.602 | 0.589 | 0.608 | 0.432 | 0.622 |
| | BRABSB | 0.345 | 0.694 | 0.630 | 0.726 | 0.538 | 0.675 |
| | Our Work | 0.409 | 0.709 | 0.736 | 0.695 | 0.549 | 0.794 |
| Independent test | PLMLA | 0.312 | 0.672 | 0.633 | 0.692 | 0.515 | 0.701 |
| | Phosida | 0.141 | 0.599 | 0.491 | 0.655 | 0.424 | 0.599 |
| | LysAcet | 0.089 | 0.558 | 0.512 | 0.582 | 0.388 | 0.552 |
| | ensemblePail | 0.065 | 0.558 | 0.457 | 0.610 | 0.378 | 0.537 |
| | PSKAcePred | 0.169 | 0.591 | 0.583 | 0.595 | 0.427 | 0.602 |
| | BRABSB | 0.278 | 0.655 | 0.612 | 0.678 | 0.496 | 0.653 |
| | Our Work | 0.325 | 0.664 | 0.694 | 0.648 | 0.505 | 0.756 |
| Independent test with negatives selected from the same proteins | PLMLA | 0.296 | 0.648 | 0.633 | 0.663 | 0.667 | 0.689 |
| | Phosida | 0.136 | 0.568 | 0.553 | 0.583 | 0.585 | 0.597 |
| | LysAcet | 0.120 | 0.558 | 0.503 | 0.616 | 0.583 | 0.552 |
| | ensemblePail | 0.076 | 0.535 | 0.457 | 0.618 | 0.560 | 0.534 |
| | PSKAcePred | 0.111 | 0.556 | 0.553 | 0.558 | 0.571 | 0.556 |
| | BRABSB | 0.275 | 0.637 | 0.612 | 0.663 | 0.659 | 0.645 |
| | Our Work | 0.214 | 0.600 | 0.482 | 0.725 | 0.652 | 0.606 |