Table 1. Emotion recognition performance evaluated for each modality (EEG, visual, and audio data).

From: EAV: EEG-Audio-Video Dataset for Emotion Recognition in Conversational Contexts

| Database | Primary Modalities | Language | Subjects | Elicitation Method | Types |
|---|---|---|---|---|---|
| MAHNOB-HCI [6] | EEG, Face, Audio | — | 27 subjects | Videos/Pictures | S, I |
| SEED-IV [11] | EEG, EM | — | 15 subjects | Videos | S, I |
| DREAMER [10] | EEG, ECG | — | 23 subjects | Movies | S, I |
| MPED [9] | EEG, GSR, RR, ECG | — | 23 subjects | Videos | S, I |
| ASCERTAIN [8] | EEG, ECG, GSR | — | 58 subjects | Videos | S, I |
| AMIGOS [7] | EEG, GSR, ECG | — | 40 subjects | Movies | S, I |
| DEAP [12] | EEG, PS, Face | — | 32 subjects | Music videos | S, I |
| IEMOCAP [15] | Face, Speech, Head | English | 10 professional actors | Conversations | S, I |
| SEMAINE [16] | Face, Speech | English | 150 subjects | Conversations | S, I |
| NNIME [19] | Audio, Video, ECG | Chinese | 44 subjects | Conversations | P, N |
| RAVDESS [20] | Audio, Video | English | 24 professional actors | Speech, Song | P, I |
| BAUM-1 [21] | Face, Speech | Turkish | 31 subjects | Images/Videos | S, I |
| SAVEE [26] | Face, Speech | English | 4 subjects | Videos/Texts/Pictures | S, I |
| K-EmoCon [14] | Face, Speech, 1ch EEG | Korean | 32 subjects | Conversations (Debate) | S, N |
| PEGCONV [13] | EEG, GSR, PPG | English | 23 subjects | Conversations | S, N |
| EAV (ours) | EEG (30ch), Audio, Video | English | 42 subjects | Conversations | S, I |

  1. Specifically, accuracy and AUC scores for the 5 balanced classes were calculated for each subject individually.
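The per-subject metrics named in the footnote can be sketched in plain Python. This is a minimal illustration, not the paper's evaluation code; it assumes the AUC is computed one-vs-rest per class and macro-averaged, using the rank-based (Mann-Whitney) formulation:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def auc_binary(labels, scores):
    """Binary AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count as half)."""
    pos = [s for l, s in zip(labels, scores) if l]
    neg = [s for l, s in zip(labels, scores) if not l]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_ovr_auc(y_true, prob, n_classes=5):
    """One-vs-rest AUC per class, macro-averaged over the 5 classes.
    `prob` holds one row of class scores per sample."""
    aucs = []
    for c in range(n_classes):
        labels = [t == c for t in y_true]       # class c vs. the rest
        scores = [row[c] for row in prob]       # score assigned to class c
        aucs.append(auc_binary(labels, scores))
    return sum(aucs) / n_classes
```

In the paper's setup these two functions would be applied to each subject's test predictions separately, and the resulting per-subject scores averaged per modality.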