In the quest to understand how deep neural networks work, identifying slow and fast variables is a desirable step. Inspired by tools from theoretical physics, the authors propose a simplified description of finite deep neural networks based on two matrix variables per layer, and provide analytic predictions for feature-learning effects.
- Inbar Seroussi
- Gadi Naveh
- Zohar Ringel