Background

Dynamic programming (DP) based algorithms, which apply various forms of the Bellman operator, dominate the literature on model-free reinforcement learning (RL). While DP is powerful, the value function estimate can oscillate or even diverge when function approximation is combined with off-policy data, except in special cases. This problem, referred to as the deadly triad in the literature, has been well known for decades and remains a fundamental open problem in RL.
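The divergence phenomenon can be illustrated with a minimal sketch, loosely in the spirit of the classic two-state counterexample of Tsitsiklis and Van Roy: off-policy TD(0) with a one-parameter linear approximator whose weight grows without bound even though the true value function is zero. The states, features, and step size below are illustrative choices, not the exact setup of any particular paper.

```python
# Toy two-state chain: linear value estimate v(s) = w * phi(s),
# with features phi(s1) = 1 and phi(s2) = 2.
# Off-policy sampling repeatedly updates only the s1 -> s2 transition
# (reward 0), so the true values are 0, yet the weight blows up.

gamma = 0.9   # discount factor
alpha = 0.1   # step size
w = 1.0       # initial weight

for _ in range(100):
    # TD error for the s1 -> s2 transition: r + gamma * v(s2) - v(s1)
    td_error = 0.0 + gamma * (2 * w) - (1 * w)  # = (2*gamma - 1) * w
    w += alpha * td_error * 1.0                 # semi-gradient step; phi(s1) = 1

print(w)  # |w| grows by a factor 1 + alpha*(2*gamma - 1) = 1.08 per update
```

Because 2*gamma - 1 = 0.8 > 0, each semi-gradient update multiplies the weight by 1.08, so the estimate diverges geometrically despite the true value being zero.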

More recently, the community witnessed a fast-growing trend that frames RL problems as well-posed optimization problems, in which a proper objective function is proposed whose minimization results in the optimal value function. Such an optimization-based approach provides a promising perspective that brings mature mathematical tools to bear on integrating linear/nonlinear function approximation with off-policy data, while avoiding DP’s inherent instability. Moreover, the optimization perspective is naturally extensible to incorporating constraints, sparsity regularization, distributed multi-agent scenarios, and other new settings.
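As a toy illustration of this perspective, the same kind of update can be recast as explicit minimization of a squared Bellman residual: on a two-state example where off-policy semi-gradient TD diverges, plain gradient descent on this objective drives the weight to the correct value. This is only a hedged sketch; the residual-gradient objective here is a simple stand-in for the more refined objectives studied in this line of work, and the toy MDP is an illustrative choice.

```python
# Hypothetical two-state chain: v(s) = w * phi(s), phi(s1) = 1, phi(s2) = 2,
# one transition s1 -> s2 with reward 0, so the true value function is 0.
# Instead of the semi-gradient TD(0) update (which diverges on this example),
# perform full gradient descent on the squared Bellman residual
#   J(w) = 0.5 * (r + gamma * v(s2) - v(s1)) ** 2.

gamma = 0.9
alpha = 0.1
w = 1.0

for _ in range(100):
    residual = 0.0 + gamma * (2 * w) - (1 * w)  # Bellman residual = 0.8 * w
    grad = residual * (gamma * 2 - 1)           # dJ/dw, differentiating both terms
    w -= alpha * grad                           # moves w toward the true value 0

print(w)  # w contracts by a factor 1 - alpha*(2*gamma - 1)**2 = 0.936 per step
```

Treating value estimation as minimization of a well-posed objective restores convergence here: each step multiplies the weight by 0.936, so it decays toward the true value.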

Beyond enabling the application of powerful optimization techniques to a variety of RL problems, the special recursive structure and restricted exploratory sampling in RL naturally raise the question of whether tailored algorithms can be developed to improve sample efficiency, convergence rates, and asymptotic performance, guided by established optimization techniques.

The goal of this workshop is to catalyze the collaboration between reinforcement learning and optimization communities, pushing the boundaries from both sides. It will provide a forum for establishing a mutually accessible introduction to current research on this integration, and allow exploration of recent advances in optimization for potential application in reinforcement learning. It will also be a window to identify and discuss existing challenges and forward-looking problems of interest in reinforcement learning to the optimization community.

Invited Speakers

Dates

Awards

We will provide student travel awards to a few of the authors of accepted papers, and will reimburse the workshop registration fee.

To apply for a travel award, please send the following information to optrl2019@gmail.com by the application deadline, Oct. 28th:

Sponsors

We thank our sponsor for making this workshop possible:

Committees

Organizers

Program committee

  • Alekh Agarwal
  • Zafarali Ahmed
  • Kavosh Asadi
  • Marlos C. Machado
  • Jianshu Chen
  • Yinlam Chow
  • Adithya Devraj
  • Thinh Doan
  • Simon Du
  • Yihao Feng
  • Roy Fox
  • Matthieu Geist
  • Saeed Ghadimi
  • Shixiang Gu
  • Botao Hao
  • Nan Jiang
  • Ajin Joseph
  • Donghwan Lee
  • Alex Lewandowski
  • Vincent Liu
  • Rupam Mahmood
  • Jincheng Mei
  • Ofir Nachum
  • Gergely Neu
  • Mohammad Norouzi
  • Andrew Patterson
  • Yash Satsangi
  • Matthew Schlegel
  • Karan Singh
  • Ziyang Tang
  • Valentin Thomas
  • Sergio Valcarcel Macua
  • Junfeng Wen
  • Zheng Wen
  • Adam White
  • Tengyang Xie
  • Zhuoran Yang
  • Shangtong Zhang
  • Tuo Zhao

For questions, please contact us: optrl2019@gmail.com