This thesis addresses the problem of efficient exploration in reinforcement learning (RL) with neural network approximations. In spite of their recent successes, deep RL methods require significant amounts of data. Efficient exploration strategies, in which the agent actively seeks to visit promising or less-visited portions of the state-action space, have been extensively investigated in classical RL domains, significantly improving the learning efficiency of such methods. This thesis contributes novel active exploration strategies that combine and extend existing approaches for exploration with deep RL architectures. The impact of the proposed approaches is evaluated in several benchmark domains from the RL literature, showcasing the positive effect of active exploration on the learning performance of RL algorithms with neural network approximations.
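One classical way to steer an agent toward less-visited parts of the state-action space is a count-based exploration bonus. The sketch below is a minimal illustration of that general idea, not the specific strategies contributed by this thesis; the class name, the bonus scale `beta`, and the bonus form `beta / sqrt(N(s, a))` are illustrative assumptions.

```python
import math
from collections import defaultdict


class CountBonus:
    """Illustrative count-based exploration bonus: rewards the agent
    for visiting rarely seen (state, action) pairs."""

    def __init__(self, beta=1.0):
        self.beta = beta                # bonus scale (assumed hyperparameter)
        self.counts = defaultdict(int)  # visit counts per (state, action)

    def bonus(self, state, action):
        # Record the visit and return beta / sqrt(N(s, a)), a common
        # form of count-based bonus: it decays as a pair is revisited.
        self.counts[(state, action)] += 1
        return self.beta / math.sqrt(self.counts[(state, action)])


cb = CountBonus(beta=1.0)
first = cb.bonus("s0", "a0")   # first visit: bonus = 1.0
second = cb.bonus("s0", "a0")  # repeat visit: bonus shrinks
```

In tabular settings the counts can be stored exactly, as above; with neural network approximations the state space is typically too large, which is one motivation for the density-model and novelty-based generalizations of this idea.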