Abstract: Experience replay is an important technique for addressing sample-inefficiency in deep reinforcement learning (RL), but it faces difficulty in learning from binary and sparse rewards due to disproportionately few successful experiences in the replay buffer. Hindsight experience replay (HER) was recently proposed to tackle this difficulty by manipulating unsuccessful transitions, but in doing so, HER introduces a significant bias in the replay buffer experiences and therefore achieves a suboptimal improvement in sample-efficiency. In this paper, we present an analysis of the source of bias in HER and propose a simple and effective method to counter the bias, in order to most effectively harness the sample-efficiency provided by HER. Our method, motivated by counter-factual reasoning and called ARCHER, extends HER with a trade-off that makes rewards calculated for hindsight experiences numerically greater than real rewards. We validate our algorithm on two continuous control environments from the DeepMind Control Suite, Reacher and Finger, which simulate manipulation tasks with a robotic arm, in combination with various reward functions, task complexities and goal sampling strategies. Our experiments consistently demonstrate that countering bias using more aggressive rewards increases sample efficiency, thus establishing the greater benefit of ARCHER in RL applications with a limited computing budget.
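The abstract describes the core mechanism only at a high level: HER relabels unsuccessful transitions with achieved goals, and ARCHER additionally makes the rewards of those relabelled (hindsight) transitions numerically greater than the rewards of real transitions. Below is a minimal, illustrative Python sketch of that idea. The function name, the `episode` transition format, the `reward_fn` signature, and the `real_scale`/`hindsight_scale` weights are assumptions made for this example; the abstract only states that hindsight rewards are made numerically greater than real ones, not how the trade-off is parameterized.

```python
def relabel_with_aggressive_rewards(episode, reward_fn,
                                    real_scale=1.0, hindsight_scale=2.0):
    """Sketch of HER-style goal relabelling with ARCHER-style reward scaling.

    episode: list of dicts with keys 'obs', 'action', 'next_obs',
             'achieved_goal', 'desired_goal' (hypothetical format).
    reward_fn(achieved, desired): the environment's sparse/binary reward.
    real_scale, hindsight_scale: illustrative weights; hindsight_scale is
             chosen larger so hindsight rewards dominate numerically.
    """
    real, hindsight = [], []
    # 'final' goal-sampling strategy: reuse the goal achieved at episode end.
    final_achieved = episode[-1]['achieved_goal']
    for t in episode:
        # Real transition: original desired goal, reward weighted by real_scale.
        r_real = real_scale * reward_fn(t['achieved_goal'], t['desired_goal'])
        real.append((t['obs'], t['action'], r_real,
                     t['next_obs'], t['desired_goal']))
        # Hindsight transition: pretend the achieved final state was the goal,
        # and weight its reward more aggressively than the real reward.
        r_hind = hindsight_scale * reward_fn(t['achieved_goal'], final_achieved)
        hindsight.append((t['obs'], t['action'], r_hind,
                          t['next_obs'], final_achieved))
    # Both sets of transitions are stored in the replay buffer.
    return real + hindsight
```

In an off-policy training loop, the returned transitions would simply be appended to the replay buffer in place of the HER-relabelled transitions; the only change relative to vanilla HER in this sketch is the asymmetric scaling of real versus hindsight rewards.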



Source: https://www.reddit.com/r//comments/9dz8re/archer_aggressive_rewards_to_counter_bias_in/
