Although artificial reinforcement-learning agents perform well in settings with rigid rules, such as games, they fare poorly in real-world scenarios, where small changes in the environment or in the required actions can impair performance. The authors review the cognitive foundations of hierarchical problem-solving and propose steps for integrating biologically inspired hierarchical mechanisms into artificial agents to enable such problem-solving skills.
- Manfred Eppe
- Christian Gumbsch
- Stefan Wermter