Enforcing the consensus between Trajectory Optimization and Policy Learning for precise robot control

Abstract: Reinforcement learning (RL) and trajectory optimization (TO) offer strongly complementary advantages. On the one hand, RL approaches can learn global control policies directly from data, but they generally require large sample sizes to converge towards feasible policies. On the other hand, TO methods exploit gradient-based information extracted from simulators to converge quickly towards a locally optimal control trajectory, which is only valid within the vicinity of the solution. Over the past decade, several approaches have aimed to combine the two classes of methods in order to obtain the best of both worlds. Following this line of research, we propose several improvements on top of these approaches to learn global control policies more quickly, notably by leveraging sensitivity information stemming from TO methods via Sobolev learning, and by using augmented Lagrangian techniques to enforce the consensus between TO and policy learning. We evaluate the benefits of these improvements on various classical tasks in robotics through comparison with existing approaches in the literature.
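To make the approach described in the abstract concrete, the sketch below illustrates one way such a training loss could look: the policy is fitted to the TO solver's locally optimal controls and to their sensitivities with respect to the state (Sobolev learning), while an augmented-Lagrangian term enforces consensus between the TO solutions and the policy. This is a hypothetical PyTorch illustration, not the authors' implementation; the function names, tensor shapes, and the exact composition of the loss are assumptions.

import torch

def sobolev_al_loss(policy, x, u_star, du_dx_star, lam, rho):
    """Hypothetical Sobolev + augmented-Lagrangian loss (illustrative only).

    x          : (N, x_dim) states sampled along TO trajectories
    u_star     : (N, u_dim) locally optimal controls returned by the TO solver
    du_dx_star : (N, u_dim, x_dim) sensitivities of the optimal controls w.r.t. the state
    lam        : (N, u_dim) augmented-Lagrangian multipliers
    rho        : scalar penalty weight
    """
    u_pred = policy(x)
    residual = u_pred - u_star                                # consensus residual
    # zeroth-order term: behavior cloning of the TO controls
    l0 = (residual ** 2).sum(-1).mean()
    # first-order (Sobolev) term: match the policy Jacobian to the TO sensitivities
    # (the sum-over-batch trick assumes a standard per-sample feedforward policy)
    jac = torch.autograd.functional.jacobian(
        lambda z: policy(z).sum(0), x, create_graph=True)     # (u_dim, N, x_dim)
    jac = jac.permute(1, 0, 2)                                 # (N, u_dim, x_dim)
    l1 = ((jac - du_dx_star) ** 2).sum((-2, -1)).mean()
    # augmented-Lagrangian term enforcing consensus between TO and the policy
    l_al = (lam * residual).sum(-1).mean() + 0.5 * rho * (residual ** 2).sum(-1).mean()
    return l0 + l1 + l_al

In such a scheme, the multipliers would typically be updated between epochs as lam <- lam + rho * residual.detach(), which is the standard augmented-Lagrangian multiplier update.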
Document type: Preprints, Working Papers, ...

https://hal.archives-ouvertes.fr/hal-03780392
Contributor: Quentin Le Lidec
Submitted on: Monday, September 19, 2022 - 11:52:26 AM
Last modification on: Wednesday, September 28, 2022 - 1:43:41 PM

File

lelidec2022policy.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-03780392, version 1

Citation

Quentin Le Lidec, Wilson Jallet, Ivan Laptev, Cordelia Schmid, Justin Carpentier. Enforcing the consensus between Trajectory Optimization and Policy Learning for precise robot control. 2022. ⟨hal-03780392⟩
