Table 2 Comparison results based on self-supervised learning

From: Transforming label-efficient decoding of healthcare wearables with self-supervised learning and “embedded” medical domain expertise

| Methods | KNN (ECG) | KNN (EEG) | KNN (IMU) | KNN (PPG) | Linear Probing (ECG) | Linear Probing (EEG) | Linear Probing (IMU) | Linear Probing (PPG) |
|---|---|---|---|---|---|---|---|---|
| Fully Sup. | 0.641 ± 0.02 | 0.658 ± 0.01 | 0.484 ± 0.05 | 0.520 ± 0.04 | 0.641 ± 0.02 | 0.658 ± 0.01 | 0.484 ± 0.05 | 0.520 ± 0.04 |
| Domain Feat. | 0.489 ± 0.05 | 0.392 ± 0.02 | 0.470 ± 0.05 | 0.373 ± 0.04 | 0.509 ± 0.04 | 0.548 ± 0.04 | 0.422 ± 0.06 | 0.377 ± 0.05 |
| SimCLR [9] | 0.427 ± 0.03 | 0.574 ± 0.02 | 0.393 ± 0.04 | 0.395 ± 0.10 | 0.532 ± 0.07 | 0.617 ± 0.03 | 0.433 ± 0.03 | 0.414 ± 0.05 |
| BYOL [37] | 0.380 ± 0.05 | 0.445 ± 0.09 | 0.418 ± 0.10 | 0.243 ± 0.12 | 0.428 ± 0.04 | 0.431 ± 0.10 | 0.449 ± 0.09 | 0.326 ± 0.08 |
| MoCo [38] | 0.450 ± 0.02 | 0.568 ± 0.04 | 0.408 ± 0.04 | 0.368 ± 0.05 | 0.524 ± 0.05 | 0.594 ± 0.08 | 0.425 ± 0.01 | 0.365 ± 0.04 |
| NNCLR [39] | 0.439 ± 0.04 | 0.511 ± 0.04 | 0.383 ± 0.06 | 0.394 ± 0.07 | 0.517 ± 0.05 | 0.555 ± 0.03 | 0.387 ± 0.04 | 0.355 ± 0.06 |
| TS [40] | 0.474 ± 0.05 | 0.580 ± 0.10 | 0.449 ± 0.03 | 0.395 ± 0.04 | 0.549 ± 0.10 | 0.627 ± 0.03 | 0.501 ± 0.06 | 0.452 ± 0.03 |
| SwAV [41] | 0.452 ± 0.10 | 0.561 ± 0.05 | 0.438 ± 0.04 | 0.343 ± 0.02 | 0.510 ± 0.08 | 0.638 ± 0.10 | 0.463 ± 0.07 | 0.406 ± 0.03 |
| AMCL [42] | 0.402 ± 0.06 | 0.570 ± 0.07 | 0.413 ± 0.02 | 0.301 ± 0.05 | 0.503 ± 0.04 | 0.626 ± 0.05 | 0.447 ± 0.05 | 0.419 ± 0.07 |
| CLOCS [10] | 0.488 ± 0.01 | 0.581 ± 0.06 | 0.451 ± 0.05 | 0.376 ± 0.05 | 0.537 ± 0.03 | 0.642 ± 0.05 | 0.483 ± 0.05 | 0.451 ± 0.06 |
| TFC [15] | 0.405 ± 0.03 | 0.543 ± 0.03 | 0.267 ± 0.04 | 0.383 ± 0.04 | 0.411 ± 0.02 | 0.571 ± 0.07 | 0.307 ± 0.01 | 0.401 ± 0.04 |
| SoftIns [43] | 0.481 ± 0.04 | 0.539 ± 0.03 | 0.420 ± 0.05 | 0.337 ± 0.05 | 0.557 ± 0.10 | 0.602 ± 0.03 | 0.498 ± 0.08 | 0.382 ± 0.06 |
| RNC [49] | 0.478 ± 0.04 | 0.504 ± 0.08 | 0.465 ± 0.01 | 0.342 ± 0.02 | 0.550 ± 0.03 | 0.560 ± 0.02 | 0.517 ± 0.05 | 0.424 ± 0.05 |
| MOMENT [47] | 0.443 ± 0.03 | 0.464 ± 0.02 | 0.517 ± 0.02 | 0.366 ± 0.01 | 0.514 ± 0.03 | 0.456 ± 0.01 | 0.522 ± 0.02 | 0.411 ± 0.03 |
| Chronos [46] | 0.325 ± 0.03 | 0.452 ± 0.04 | 0.445 ± 0.04 | 0.387 ± 0.05 | 0.301 ± 0.08 | 0.406 ± 0.06 | 0.481 ± 0.06 | 0.410 ± 0.10 |
| Ours | 0.509 ± 0.04 | 0.591 ± 0.07 | 0.465 ± 0.02 | 0.420 ± 0.04 | 0.574 ± 0.03 | 0.643 ± 0.03 | 0.526 ± 0.04 | 0.499 ± 0.02 |

*Fully Sup. models are trained end-to-end, so a single result per modality applies to both evaluation protocols (the source table reports one value per modality for this row).*

  1. To assess the transferability of the representations learned during self-supervised pre-training, we froze the backbone as a feature extractor and trained a K-nearest-neighbors (KNN) classifier and a linear classifier on top of it in a supervised manner. We report the class-averaged F1 score on the test subset, using the checkpoint with the best F1 on the validation subset. The comparison covers a series of SSCL methods, models based on hand-crafted domain features (Domain Feat.), and fully supervised models (Fully Sup.) with randomly initialized backbones. Each method was run with five different random seeds, and we report the mean ± standard deviation of the five results. Bold indicates the best result, and underline indicates the second-best result.
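The evaluation protocol in the note above (frozen backbone, KNN and linear probes, class-averaged F1 averaged over five runs) can be sketched as follows. This is a minimal illustration using scikit-learn, not the paper's implementation: the probe hyperparameters (e.g. `n_neighbors=5`), the `probe_frozen_features` helper, and the use of logistic regression as the linear classifier are assumptions, and the best-on-validation checkpoint selection is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.neighbors import KNeighborsClassifier

def probe_frozen_features(train_x, train_y, test_x, test_y, seeds=range(5)):
    """Evaluate frozen embeddings with a KNN probe and a linear probe.

    In the paper, each seed corresponds to an independent run; in this
    sketch the seed only re-initializes the linear classifier, since the
    KNN probe is deterministic given fixed features. Returns
    ((knn_mean, knn_std), (lin_mean, lin_std)) of the class-averaged
    (macro) F1 on the test split.
    """
    knn_f1, lin_f1 = [], []
    for seed in seeds:
        # KNN probe on the frozen features (n_neighbors=5 is an assumption).
        knn = KNeighborsClassifier(n_neighbors=5).fit(train_x, train_y)
        knn_f1.append(f1_score(test_y, knn.predict(test_x), average="macro"))
        # Linear probe: a logistic-regression classifier on the same features.
        lin = LogisticRegression(max_iter=1000, random_state=seed)
        lin.fit(train_x, train_y)
        lin_f1.append(f1_score(test_y, lin.predict(test_x), average="macro"))
    return ((np.mean(knn_f1), np.std(knn_f1)),
            (np.mean(lin_f1), np.std(lin_f1)))
```

In the full protocol, this probing step would be applied to the checkpoint achieving the best validation F1, once per pre-training seed.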