Figure 1

Comparison of the standard and the novel paradigm. Panel A. Acoustic features extracted from the recorded speech are used to recognize the expressed emotion, e.g., the valence profile (Emotional model), and the pathology (DD speech model). Panel B. Acoustic features are used to construct the emotional modulation model. The Emotional Modulation Function (EMF) of each subject is then used to train a DD–EMF model.
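
The two paradigms in the figure can be illustrated with a minimal sketch. The code below is only a conceptual approximation: the dataset is simulated, and the EMF is assumed here to be the per-subject coefficients of a linear mapping from acoustic features to valence, which may differ from the model actually used.

```python
# Minimal sketch of the two paradigms (illustrative assumptions throughout).
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_utt, n_feat = 20, 30, 8          # hypothetical dataset sizes

# Simulated data: acoustic features, per-utterance valence, per-subject DD label
X = rng.normal(size=(n_subjects, n_utt, n_feat))
valence = rng.normal(size=(n_subjects, n_utt))
dd_label = rng.integers(0, 2, size=n_subjects)

# Panel A (standard): acoustic features feed the emotion and pathology models directly
X_flat = X.reshape(-1, n_feat)
emotion_model = Ridge().fit(X_flat, valence.ravel())      # Emotional model
dd_speech_model = LogisticRegression().fit(
    X.mean(axis=1), dd_label)                              # DD speech model

# Panel B (novel): estimate a per-subject EMF (here, assumed to be the
# coefficients of a features-to-valence regression), then train a DD-EMF model
emf_params = np.stack([
    Ridge().fit(X[s], valence[s]).coef_ for s in range(n_subjects)
])
dd_emf_model = LogisticRegression().fit(emf_params, dd_label)
```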