Direct Policy Optimization

We present an approach for approximately solving discrete-time stochastic optimal-control problems by combining direct trajectory optimization, deterministic sampling, and policy optimization. Our feedback motion-planning algorithm uses a quasi-Newton method to simultaneously optimize a reference trajectory, a set of deterministically chosen sample trajectories, and a parameterized policy. We demonstrate that this approach exactly recovers LQR policies in the case of linear dynamics, quadratic objective, and Gaussian disturbances. We also demonstrate the algorithm on several nonlinear, underactuated robotic systems to highlight its performance and ability to handle control limits, safely avoid obstacles, and generate robust plans in the presence of unmodeled dynamics.

An open-source implementation of Direct Policy Optimization, along with the examples from our paper, can be found here.
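As a rough illustration of the linear-quadratic special case described above, the sketch below optimizes time-varying feedback gains over a set of deterministically sampled rollouts and compares them against the finite-horizon LQR solution. This is not the released implementation: the dynamics (a double integrator), the sample set (symmetric perturbations of the initial state), the policy class (u_t = -K_t x_t), and the solver (SciPy's BFGS standing in for the paper's quasi-Newton method) are all simplifying assumptions, and the full method additionally optimizes a reference trajectory and handles constraints via collocation.

import numpy as np
from scipy.optimize import minimize

# Assumed problem data for illustration: double-integrator dynamics,
# quadratic stage and terminal costs, short horizon.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = 0.1 * np.eye(1)
T = 10          # number of control steps
n, m = 2, 1

def lqr_gains(A, B, Q, R, T):
    """Finite-horizon LQR gains via a backward Riccati recursion."""
    P = Q.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return list(reversed(gains))  # gains[t] applies at time t

def sample_initial_states(scale=1.0):
    """Deterministic, symmetric samples of the initial state (sigma-point style)."""
    return [scale * e for e in np.eye(n)] + [-scale * e for e in np.eye(n)]

def dpo_cost(theta):
    """Total quadratic cost of all sampled rollouts under the policy u_t = -K_t x_t."""
    K = theta.reshape(T, m, n)
    total = 0.0
    for x0 in sample_initial_states():
        x = x0.copy()
        for t in range(T):
            u = -K[t] @ x
            total += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u
        total += x @ Q @ x  # terminal cost
    return total

# Jointly optimize all feedback gains with a quasi-Newton method (BFGS).
res = minimize(dpo_cost, np.zeros(T * m * n), method="BFGS")
K_dpo = res.x.reshape(T, m, n)
K_lqr = np.array(lqr_gains(A, B, Q, R, T))
# The gap should be small: in this linear-quadratic setting the sampled-rollout
# objective is minimized by the LQR gains.
print("max |K_dpo - K_lqr| =", np.abs(K_dpo - K_lqr).max())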

Related Papers

2021
May
Direct Policy Optimization using Deterministic Sampling and Collocation
Taylor Howell, Chunjiang Fu, and Zac Manchester
International Conference on Robotics and Automation (ICRA). Xi'an, China.

People

Taylor Howell
Zac Manchester
Assistant Professor
Last updated: 2021-04-04