Reinforcement Learning for Control

Open Invited Track at IFAC World Congress 2017

Most systems in practical control applications are partly unknown, often to such an extent that fully model-based design cannot achieve satisfactory results. Reinforcement learning (RL) offers a principled way to solve such problems whenever a cumulative performance index must be optimized. RL methods are enjoying great popularity due to their ability to deal with general and complex systems, which, in addition to having partly or fully unknown dynamics, may also be highly nonlinear and stochastic. Recently, deep learning methods have found a fertile intersection with RL, with well-known success stories such as AlphaGo. In the control community, RL methods often go under the name of adaptive dynamic programming (ADP), where the focus is placed on exploiting the known structure of the model and on ensuring stability guarantees. RL and ADP techniques have made great inroads into application areas such as robotics, automotive systems, smart grids, game playing, resource management, and traffic control.
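To make the objective concrete, the sketch below is a minimal, purely illustrative tabular Q-learning controller in Python for a hypothetical noisy chain system; the problem (chain length, noise level, reward) and all hyperparameters are assumptions chosen for illustration, not part of the track description. It shows the basic idea of maximizing a discounted cumulative reward from interaction alone, without a model of the dynamics.

# Minimal tabular Q-learning sketch (illustrative only): a toy "chain" control
# problem with dynamics unknown to the learner, where the controller must
# maximize a discounted cumulative reward purely from interaction data.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 6          # states 0..5; state 5 is the goal (assumed toy problem)
ACTIONS = (-1, +1)    # move left or right
GAMMA = 0.95          # discount factor of the cumulative performance index
ALPHA = 0.1           # learning rate
EPSILON = 0.1         # exploration probability

def step(state, action):
    """Dynamics hidden from the learner: the intended move succeeds with probability 0.8."""
    move = action if rng.random() < 0.8 else -action
    next_state = int(np.clip(state + move, 0, N_STATES - 1))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

Q = np.zeros((N_STATES, len(ACTIONS)))

for episode in range(2000):
    state = 0
    for _ in range(50):                       # cap on episode length
        if rng.random() < EPSILON:            # epsilon-greedy exploration
            a_idx = rng.integers(len(ACTIONS))
        else:
            a_idx = int(np.argmax(Q[state]))
        next_state, reward = step(state, ACTIONS[a_idx])
        # Q-learning temporal-difference update toward the Bellman target
        target = reward + GAMMA * np.max(Q[next_state])
        Q[state, a_idx] += ALPHA * (target - Q[state, a_idx])
        state = next_state
        if state == N_STATES - 1:
            break

print("Greedy policy (0 = left, 1 = right):", np.argmax(Q, axis=1))

Model-based and deep variants replace the table Q with structured or learned function approximators; the open questions listed below concern exactly how to retain guarantees when doing so.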

However, many issues remain open, including strong safety and performance guarantees, computational complexity, scalability to high-dimensional state and action spaces, and extensions to multiagent, distributed, or partially observable problems. This open invited track on Reinforcement Learning for Control provides an outlet for novel contributions in these and other open areas, as well as a forum for interaction between researchers and practitioners in the field. We are particularly interested in the so far largely unexplored interactions between artificial-intelligence and control-theoretic approaches to RL; deep learning and ADP are two representative areas that have developed mostly separately on the two respective sides. Synergy between artificial intelligence and control theory in RL can lead to major breakthroughs, such as computationally efficient algorithms with strong, simultaneous performance and safety (stability) guarantees.

We equally welcome contributions from control theory, computational intelligence, computer science, operations research, neuroscience, and other perspectives on RL. The track hosts original papers on methods, analysis, and applications of RL for control, as well as surveys from established researchers in the field. We are interested in applications from engineering, artificial intelligence, operations research, economics, medicine, and other fields. Topics of interest include, but are not limited to:

Paper submission

To submit a paper to this track, go to the IFAC World Congress submission site and follow the instructions there, choosing the appropriate category for your paper, e.g., "Open invited track paper" or "Open invited track survey paper". You will need to enter the track code 53f38.

Organizers