Title: Count-Based Exploration with the Successor Representation

Authors: Marlos C. Machado, Marc G. Bellemare, Michael Bowling

Abstract: The problem of exploration in reinforcement learning is well-understood in the tabular case, and many sample-efficient algorithms are known. Nevertheless, it is often unclear how algorithms in the tabular setting can be extended to tasks with large state spaces where generalization is required. Recent promising developments generally depend on problem-specific density models or handcrafted features. In this paper we introduce a simple approach for exploration that allows us to develop theoretically justified algorithms in the tabular case and also gives us intuitions for new algorithms applicable to settings where function approximation is required. Our approach and its underlying theory are based on the substochastic successor representation, a concept we develop here. While the traditional successor representation defines state generalization by the similarity of successor states, the substochastic successor representation is also able to implicitly count the number of times each state (or feature) has been observed. This extension connects two previously disjoint areas of research. We show in traditional tabular domains (RiverSwim and SixArms) that our algorithm empirically performs as well as other sample-efficient algorithms. We then describe a deep reinforcement learning algorithm inspired by these ideas and show that it matches the performance of recent pseudo-count-based methods in hard-exploration Atari 2600 games.
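The abstract's central claim — that the substochastic successor representation implicitly counts state visits — can be sketched in a small tabular setting. Everything below is a hypothetical illustration (the three-state chain, the simulated transition log, and the n(s)+1 normalization of the empirical transition matrix), not the paper's actual algorithm: the point is only that rows of the substochastic SR for rarely visited states have smaller norms, so visit counts can be read off the representation.

```python
import numpy as np

# Hypothetical 3-state chain; states 0 and 1 are visited often,
# state 2 only once (a simulated transition log, for illustration).
n_states = 3
gamma = 0.9
counts = np.zeros((n_states, n_states))
transitions = [(0, 1), (1, 0)] * 20 + [(1, 2), (2, 1)]
for s, s_next in transitions:
    counts[s, s_next] += 1

visits = counts.sum(axis=1)  # n(s): [20. 21. 1.]

# Substochastic transition matrix: dividing by n(s) + 1 instead of n(s)
# makes each row sum to n(s) / (n(s) + 1) < 1, i.e. substochastic.
T_hat = counts / (visits[:, None] + 1.0)

# Substochastic successor representation, closed form of the
# discounted Neumann series: psi = (I - gamma * T_hat)^{-1}.
psi = np.linalg.inv(np.eye(n_states) - gamma * T_hat)

# Row 1-norms shrink for rarely seen states, because those rows of
# T_hat "leak" more probability mass — an implicit visit count.
norms = psi.sum(axis=1)
print(visits)
print(norms)
```

Running this, the norm for the rarely visited state 2 comes out well below the norms for states 0 and 1, which is the property an exploration bonus can exploit.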

Source: https://www.reddit.com/r/MachineLearning/comments/93mbnj/r__exploration_with_the_successor/
