SUPERVISED POLICY UPDATE FOR DEEP REINFORCEMENT LEARNING

Quan Vuong, Yiming Zhang, and Keith Ross

ABSTRACT

We propose a new sample-efficient methodology, called Supervised Policy Update (SPU), for deep reinforcement learning.

Starting with data generated by the current policy, SPU formulates and solves a constrained optimization problem in the non-parameterized proximal policy space.

Using supervised regression, it then converts the optimal non-parameterized policy to a parameterized policy, from which it draws new samples.
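To make the two-step structure concrete, below is a minimal sketch for a discrete action space. It assumes a KL-style proximity constraint whose non-parameterized solution takes an exponentiated-advantage form; the tabular logits, the `temperature` parameter, and the plain gradient-descent regression are illustrative stand-ins for the paper's neural-network policy and its specific constraints, not the authors' implementation.

```python
import numpy as np

# Illustrative SPU-style two-step update for a discrete action space.
# Step 1: from the current policy's action probabilities and advantage estimates,
# form the optimal non-parameterized target policy of an (assumed) KL-constrained
# problem, which has the exponentiated-advantage form
#     pi_target(a|s) proportional to pi_old(a|s) * exp(A(s, a) / temperature).
# Step 2: convert the non-parameterized target into a parameterized policy by
# supervised regression (here: gradient descent on logits minimizing forward KL).

rng = np.random.default_rng(0)
n_states, n_actions, temperature, lr, steps = 5, 3, 1.0, 0.5, 200

# Current (old) policy stored as a table of logits; a neural network would
# normally play this role.
logits_old = rng.normal(size=(n_states, n_actions))
pi_old = np.exp(logits_old) / np.exp(logits_old).sum(axis=1, keepdims=True)

# Advantage estimates for each (state, action) pair, e.g. from rollouts.
advantages = rng.normal(size=(n_states, n_actions))

# Step 1: non-parameterized target policy (closed form under the assumed constraint).
unnorm = pi_old * np.exp(advantages / temperature)
pi_target = unnorm / unnorm.sum(axis=1, keepdims=True)

# Step 2: supervised regression -- fit logits so that softmax(logits) matches pi_target.
logits = logits_old.copy()
for _ in range(steps):
    pi = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    # Gradient of KL(pi_target || pi) with respect to the logits is (pi - pi_target).
    logits -= lr * (pi - pi_target)

pi_new = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print("max abs gap to target:", np.abs(pi_new - pi_target).max())
```

The fitted parameterized policy is then used to generate the next batch of samples, and the two steps repeat.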

The methodology is general in that it applies to both discrete and continuous action spaces, and can handle a wide variety of proximity constraints for the non-parameterized optimization problem.

We show how the Natural Policy Gradient and Trust Region Policy Optimization (NPG/TRPO) problems, and the Proximal Policy Optimization (PPO) problem can be addressed by this methodology.

The SPU implementation is much simpler than TRPO.

In terms of sample efficiency, our extensive experiments show that SPU outperforms TRPO on MuJoCo simulated robotic tasks and outperforms PPO on Atari video game tasks.