Table 4 FLICC model fallacy classification report. For each class, we report precision (P), recall (R), and \(F_1\) score for the validation and test partitions.
From: A technocognitive approach to detecting fallacies in climate misinformation
| Class | P (validation) | R (validation) | \(F_1\) (validation) | P (test) | R (test) | \(F_1\) (test) |
|---|---|---|---|---|---|---|
| Ad hominem | 0.76 | 0.75 | 0.75 | 0.81 | 0.78 | 0.79 |
| Anecdote | 0.95 | 0.86 | 0.90 | 0.88 | 0.92 | 0.90 |
| Cherry picking | 0.69 | 0.66 | 0.67 | 0.77 | 0.77 | 0.77 |
| Conspiracy theory | 0.78 | 0.82 | 0.80 | 0.78 | 0.82 | 0.80 |
| Fake experts | 1.00 | 0.92 | 0.96 | 1.00 | 1.00 | 1.00 |
| False choice | 0.83 | 0.77 | 0.80 | 0.62 | 0.71 | 0.67 |
| False equivalence | 0.50 | 0.43 | 0.46 | 0.50 | 0.38 | 0.43 |
| Impossible expectations | 0.69 | 0.73 | 0.71 | 0.69 | 0.86 | 0.77 |
| Misrepresentation | 0.63 | 0.63 | 0.63 | 0.68 | 0.68 | 0.68 |
| Oversimplification | 0.88 | 0.58 | 0.70 | 0.78 | 0.70 | 0.74 |
| Single cause | 0.81 | 0.74 | 0.77 | 0.81 | 0.66 | 0.72 |
| Slothful induction | 0.54 | 0.82 | 0.65 | 0.50 | 0.56 | 0.53 |
| Accuracy | | | 0.73 | | | 0.74 |
| Macro avg | 0.75 | 0.73 | 0.73 | 0.74 | 0.74 | 0.73 |
| Weighted avg | 0.75 | 0.73 | 0.73 | 0.75 | 0.74 | 0.74 |
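As a sanity check on the summary rows, the macro-averaged \(F_1\) is simply the unweighted mean of the twelve per-class \(F_1\) scores (the weighted average would instead weight each class by its support, which the table does not report). A minimal sketch using the test-partition \(F_1\) column from the table above:

```python
# Per-class test-set F1 scores from Table 4, in row order
# (Ad hominem through Slothful induction).
test_f1 = [0.79, 0.90, 0.77, 0.80, 1.00, 0.67,
           0.43, 0.77, 0.68, 0.74, 0.72, 0.53]

# Macro average: unweighted mean over classes.
macro_f1 = sum(test_f1) / len(test_f1)
print(round(macro_f1, 2))  # 0.73, matching the "Macro avg" row
```

The same relation holds for the validation column; small discrepancies at the second decimal place can arise from rounding the per-class scores before averaging.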