Researchers have presented an interpretability framework that compares how humans and deep neural networks process images. Their findings reveal that deep neural networks, unlike humans, focus more on visual properties than on semantic ones, highlighting divergent representational strategies.
- Florian P. Mahner
- Lukas Muttenthaler
- Martin N. Hebart