Dynamic programming (DP) based algorithms, which apply various forms of the Bellman operator, dominate the literature on model-free reinforcement learning (RL). While DP is powerful, the value function estimate can oscillate or even diverge when function approximation is combined with off-policy data, except in special cases. This problem, known as the deadly triad, has been recognized for decades and remains a fundamental open problem in RL.
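The divergence under the deadly triad can be illustrated with the classic two-state "w, 2w" counterexample (after Tsitsiklis and Van Roy); the sketch below is illustrative and not part of the call itself:

```python
# Off-policy semi-gradient TD(0) divergence sketch: two states with
# features 1 and 2 share a single weight w, so V(s1) = w and V(s2) = 2w.
# All rewards are 0, so the true values are 0, yet updating only from the
# s1 -> s2 transition (an off-policy state distribution) makes w grow
# without bound whenever gamma > 0.5.
gamma = 0.9
alpha = 0.1
w = 1.0
for _ in range(100):
    # TD update on transition s1 -> s2 with reward 0:
    # w += alpha * (r + gamma * V(s2) - V(s1)) * grad_w V(s1)
    td_error = 0.0 + gamma * (2 * w) - (1 * w)
    w += alpha * td_error * 1.0  # feature of s1 is 1
print(w)  # w has grown far beyond its true value of 0
```

Each update multiplies w by 1 + alpha * (2 * gamma - 1) > 1, so the estimate diverges geometrically even though the target is exactly representable.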
More recently, the community has witnessed a fast-growing trend that frames RL problems as well-posed optimization problems, in which a proper objective function is proposed whose minimization yields the optimal value function. Such an optimization-based approach provides a promising perspective that brings mature mathematical tools to bear on integrating linear/nonlinear function approximation with off-policy data, while avoiding DP's inherent instability. Moreover, the optimization perspective extends naturally to constraints, sparsity regularization, distributed multi-agent scenarios, and other new settings.
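One classical instance of this framing is the linear-programming form of the Bellman optimality equations: minimize the sum of state values subject to V(s) >= r(s,a) + gamma * sum_s' P(s'|s,a) V(s') for every state-action pair. The sketch below solves this LP for a small made-up 2-state, 2-action MDP using scipy (an illustration, not a method proposed by the workshop):

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
# Toy MDP (invented for illustration):
# P[s][a] = next-state distribution, R[s][a] = immediate reward.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],   # state 0: a stays, b -> state 1
              [[0.0, 1.0], [1.0, 0.0]]])  # state 1: a stays, b -> state 0
R = np.array([[1.0, 0.0],
              [2.0, 0.0]])

n_states, n_actions = R.shape
A_ub, b_ub = [], []
for s in range(n_states):
    for a in range(n_actions):
        # Encode V(s) - gamma * P[s,a] . V >= R[s,a] as an upper-bound row.
        row = gamma * P[s, a].copy()
        row[s] -= 1.0
        A_ub.append(row)
        b_ub.append(-R[s, a])

res = linprog(c=np.ones(n_states), A_ub=np.array(A_ub), b_ub=b_ub)
print(res.x)  # optimal state values
```

The LP's optimal solution coincides with the optimal value function of the MDP, so a generic convex solver recovers what value iteration would compute, with no Bellman-operator instability.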
Beyond enabling the application of powerful optimization techniques to a variety of RL problems, the special recursive structure and restricted exploration sampling of RL also naturally raise the question of whether tailored algorithms can be developed to improve sample efficiency, convergence rates, and asymptotic performance, guided by established optimization techniques.
The goal of this workshop is to catalyze collaboration between the reinforcement learning and optimization communities, pushing the boundaries from both sides. It will provide a forum for a mutually accessible introduction to current research on this integration, and allow exploration of recent advances in optimization for potential application in reinforcement learning. It will also serve as a venue to identify and discuss existing challenges and forward-looking problems in reinforcement learning that are of interest to the optimization community.
Invited speakers
- Shipra Agrawal (Columbia University)
- Sham Kakade (University of Washington)
- Benjamin Van Roy (Stanford University)
- Mengdi Wang (Princeton University)
- Huizhen Yu (University of Alberta)
Call for papers
We invite submissions on topics including, but not limited to:
- Optimization formulations and algorithms for RL
- Optimization counterparts of components in RL (e.g., on/off-policy learning, exploration, replay buffers, entropy regularization, target networks, reward shaping, variance reduction, stability, and generalization)
- Theoretical analysis of existing RL algorithms (e.g., temporal difference algorithms)
- Optimization for other RL settings (e.g., distributed/multi-agent RL, robust RL, partially observable RL, and hierarchical RL)
- Empirical comparison of optimization algorithms in RL applications and benchmarks
- Other topics at the intersection of reinforcement learning and optimization
- Open problems in optimization raised in RL tasks
Submissions may be up to 6 pages long in the NeurIPS style, excluding references and appendices. The submission process will be handled via CMT. Author names need not be anonymized. Parallel submissions (e.g., to AISTATS or ICLR) are permitted.
The submission deadline is September 17th, 2019 (extended from September 10th), 11:59 pm AOE, and acceptance notifications will be sent no later than October 1st, 2019. Submissions will be accepted as contributed talks, spotlight, or poster presentations based on novelty, technical merit, and alignment with the workshop's goals. Final versions will be posted on the workshop website.
Paper submission portal: https://cmt3.research.microsoft.com/OPTRL2019
- Submission deadline:
September 17th, 2019, 11:59 pm AOE (extended from September 10th)
- Notifications: October 1st, 2019
- Camera ready: November 15th, 2019 (11:59 pm AOE)
We are seeking funding to support travel for students, especially those from underrepresented groups.
Organizers
- Bo Dai (Google Brain)
- Niao He (University of Illinois at Urbana-Champaign)
- Nicolas Le Roux (Google Brain)
- Lihong Li (Google Brain)
- Dale Schuurmans (Google Brain & University of Alberta)
- Martha White (University of Alberta)
For questions, please contact us: firstname.lastname@example.org