Table 1 Quantitative comparison of the proposed method with representative approaches on three datasets
| Model | Metric | SMIC | SAMM | CASME II |
|---|---|---|---|---|
| LBP-TOP | UF1 | 0.2000 | 0.3954 | 0.7026 |
| | UAR | 0.5280 | 0.4102 | 0.7429 |
| Bi-WOOF | UF1 | 0.5727 | 0.5211 | 0.7805 |
| | UAR | 0.5829 | 0.5139 | 0.8026 |
| OFF-ApexNet | UF1 | 0.6817 | 0.5409 | 0.8764 |
| | UAR | 0.6695 | 0.5392 | 0.8681 |
| STSTNet | UF1 | 0.6801 | 0.6588 | 0.8382 |
| | UAR | 0.7013 | 0.6810 | 0.8686 |
| MobileViT | UF1 | 0.7141 | 0.7428 | 0.7251 |
| | UAR | 0.7356 | 0.6781 | 0.6997 |
| MMNet | UF1 | – | 0.8391 | 0.9494 |
| | UAR | – | – | – |
| Micro-BERT | UF1 | 0.8550 | 0.8386 | 0.9034 |
| | UAR | 0.8384 | 0.8475 | 0.8914 |
| HSTA | UF1 | 0.8470 | 0.8470 | 0.9250 |
| | UAR | 0.7800 | 0.8390 | 0.9220 |
| Ours | UF1 | 0.8203 | 0.8392 | 0.9676 |
| | UAR | 0.8137 | 0.8306 | 0.9613 |
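For reference, UF1 and UAR are not defined in this excerpt; the formulas below state the standard macro-averaged definitions used in micro-expression recognition benchmarks, which we assume the table follows. For $C$ emotion classes, with $TP_c$, $FP_c$, and $FN_c$ the true positives, false positives, and false negatives for class $c$, and $N_c$ the number of samples in class $c$:

$$
\mathrm{UF1} = \frac{1}{C}\sum_{c=1}^{C}\frac{2\,TP_c}{2\,TP_c + FP_c + FN_c},
\qquad
\mathrm{UAR} = \frac{1}{C}\sum_{c=1}^{C}\frac{TP_c}{N_c}.
$$

Because both metrics average per-class scores without weighting by class frequency, they penalize methods that perform well only on the majority classes; higher values indicate better class-balanced performance.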