Open Invited Track at IFAC ICONS 2022 (Conference on Intelligent Control and Automation Sciences)
A key aspect of solving complex tasks is learning from mistakes and improving strategies, control policies, and models through experience. Recent advances in machine learning make this possible for the control of dynamical systems, with particular success in robotics. Specifically, reinforcement learning (RL) provides a way to make sequential decisions in initially unknown problems, modeled as Markov decision processes. From a control perspective, this amounts to optimal control of unknown nonlinear stochastic systems. The primary objective is to optimize a cumulative reward or cost over time. Over the last decade, the integration of deep neural networks and deep learning techniques into RL has produced the highly successful field of deep RL, with impressive applications in game playing, robotics, and other areas of artificial intelligence. Alongside deep RL, control-oriented approaches to solving Markov decision processes continue to advance, notably adaptive dynamic programming (ADP) and optimal control. Broader machine learning approaches such as data-driven control are also promising, as they bring together model-based and learning-based control. They offer a principled way to generate control policies directly from data, and allow uncertainty in the models and measurements to be taken into account in the control design. This can lead to safe control and learning approaches, which are essential for safety-critical systems and, more generally, for the transfer from simulation to the real world.
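For concreteness, the cumulative objective mentioned above can be written in the standard discounted-return form (an illustrative sketch; the notation $s_t$, $a_t$, $r$, $P$, $\pi$, and $\gamma$ is the usual generic MDP notation and is not fixed by this call):

\max_{\pi} \; J(\pi) = \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \right], \qquad a_t \sim \pi(\cdot \mid s_t), \quad s_{t+1} \sim P(\cdot \mid s_t, a_t), \quad \gamma \in [0, 1),

where the expectation is taken over the stochastic transitions of the Markov decision process and the (possibly stochastic) policy $\pi$, and cost minimization is obtained by negating the reward.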
This open track provides a forum for interaction and an outlet for all areas of RL and machine learning for control, from deep RL to data-driven control and more classical optimal control techniques. We welcome algorithmic, analytical, and application-oriented contributions. Our application focus is robotics, but we are also interested in showcases from engineering, artificial intelligence, operations research, economics, medicine, and other relevant fields. We also invite surveys by established researchers in the field.
We are especially interested in the promising interactions between artificial-intelligence and control-theoretic approaches to RL, and in the open issues they entail, such as stable RL control of unknown systems. Synergy between artificial intelligence and control theory in RL can lead to major breakthroughs, such as computationally efficient algorithms with strong, simultaneous guarantees on performance and stability.
Topics of interest include, but are not limited to: