Table 3 Comparison results based on transfer learning on ECG datasets

From: Transforming label-efficient decoding of healthcare wearables with self-supervised learning and “embedded” medical domain expertise

| Methods | CPSC F1 | CPSC Recall | CPSC Prec. | CPSC AUC | CinC17 F1 | CinC17 Recall | CinC17 Prec. | CinC17 AUC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Random Init. | 0.585 | 0.599 | 0.605 | 0.905 | 0.536 | 0.599 | 0.547 | 0.841 |
| SimCLR [9] | 0.658 | **0.689** | 0.652 | 0.930 | 0.627 | 0.705 | 0.626 | 0.883 |
| BYOL [37] | 0.658 | <u>0.672</u> | 0.654 | 0.925 | 0.630 | 0.712 | 0.616 | 0.880 |
| MoCo [38] | 0.640 | 0.663 | 0.639 | 0.929 | 0.651 | 0.719 | 0.625 | 0.881 |
| NNCLR [39] | 0.640 | 0.667 | 0.640 | 0.926 | 0.644 | 0.717 | 0.620 | 0.885 |
| TS [40] | 0.627 | 0.638 | 0.630 | 0.921 | 0.624 | 0.715 | 0.602 | 0.887 |
| SwAV [41] | 0.654 | 0.670 | 0.650 | 0.929 | 0.642 | 0.716 | 0.628 | 0.889 |
| AMCL [42] | <u>0.660</u> | 0.668 | <u>0.658</u> | 0.933 | 0.649 | 0.725 | 0.630 | 0.892 |
| CLOCS [10] | 0.647 | 0.671 | 0.637 | 0.927 | 0.652 | <u>0.740</u> | 0.619 | 0.887 |
| TFC [15] | 0.640 | 0.661 | 0.650 | 0.922 | 0.648 | 0.717 | 0.614 | <u>0.893</u> |
| SoftIns [43] | 0.657 | 0.670 | 0.652 | <u>0.935</u> | 0.645 | 0.730 | 0.629 | 0.890 |
| RNC [49] | 0.642 | 0.652 | 0.653 | 0.926 | <u>0.654</u> | 0.734 | <u>0.634</u> | <u>0.893</u> |
| Ours | **0.670** | 0.667 | **0.675** | **0.945** | **0.680** | **0.751** | **0.644** | **0.901** |

  1. Results on CPSC and CinC17 are reported for models pretrained with self-supervised learning on the large-scale MIMIC-III-WDB dataset. We compare the class-average F1, Recall, Precision, and AUC on the test subset, using the model with the best F1 score on the validation set. Bold indicates the best result, and underline indicates the second-best result.
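For reference, the class-average (macro) F1 reported above is the unweighted mean of per-class F1 scores, so every rhythm class contributes equally regardless of its prevalence. A minimal sketch of the computation (the rhythm labels `"N"`, `"AF"`, `"O"` below are illustrative, not the datasets' exact label sets):

```python
from collections import Counter

def macro_f1(y_true, y_pred):
    """Class-average (macro) F1: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # p predicted but not true
            fn[t] += 1          # t true but missed
    f1s = []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy example with three illustrative rhythm classes
y_true = ["N", "AF", "N", "O", "AF", "N"]
y_pred = ["N", "AF", "O", "O", "N", "N"]
print(round(macro_f1(y_true, y_pred), 3))  # → 0.667
```

The same averaging applies to the Recall and Precision columns; the class-average AUC is likewise the unweighted mean of per-class one-vs-rest AUCs.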