**Lecturer: Lucian Busoniu**
**TA: Matthias Rosynski**


This course provides methods for controlling systems that are too complex or insufficiently known for classical control design techniques. The focus is on learning algorithms for control, in particular reinforcement learning (RL). Attention is also paid to model-based techniques related to RL, as they can be very useful for controlling complex systems even when a model is known. After introducing the RL problem, the dynamic programming algorithms at the foundation of RL are described in the discrete-variable setting. Classical RL algorithms are then introduced in the same setting. In the second part of the course, the dynamic programming and RL algorithms are extended with approximation techniques, making them applicable to continuous-variable control as well as to large-scale discrete-variable problems. Significant space is dedicated to deep reinforcement learning techniques.

This course is part of the Master program ICAF of the Automation Department, UTCluj (1st year 2nd semester). As prerequisites, basic knowledge of analysis and linear algebra is needed, together with notions of discrete-time dynamical systems. The teacher responsible is Lucian Busoniu.

The course and lab sessions take place on Mondays and Wednesdays, alternating weeks, from 18:00, online via the Microsoft Teams platform. A detailed schedule is given next (things may still change if e.g. we move back to on-site teaching; any changes will be announced well in advance via Teams):

Grading rules:

- 50% exam.
- 10% lecture quizzes.
- 40% labs 1 and 2. Each lab is graded up to 10 points, reduced to 5 if handed in late. Completing the labs is required to participate in the exam; any copying results in failing the course.
- Bonus: 30% for labs 3 and 4, on deep learning and deep reinforcement learning (these labs, as well as the associated lecture material, are taught in English).

The slides are made available here in time for each lecture. The slides are required material for the exam. They, as well as the lectures, are in Romanian.

- Part 1: The reinforcement learning problem (covered in lecture 1).
- Part 2: The optimal solution. Dynamic programming (covered in lectures 2 and 3). You may also download the code for the demos in parts 2 and 3.
- Part 3: Reinforcement learning (covered in lectures 3 and 4).
- Part 4: Function approximation. Approximate dynamic programming. Batch RL (covered in lecture 5).
- Part 5: Online approximate RL (covered in lectures 7 and 8).
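To give a flavor of the dynamic programming material in Part 2, here is a minimal value iteration sketch. Note that the course demos themselves are in Matlab; this is a Python illustration on a hypothetical two-state MDP whose transition probabilities and rewards are invented for the example, not taken from the course code.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (illustration only, not the course demo).
# P[s, a, s'] = transition probability, R[s, a] = expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

# Q-value iteration: Q_{k+1}(s,a) = R(s,a) + gamma * sum_s' P(s,a,s') max_a' Q_k(s',a')
Q = np.zeros((2, 2))
for _ in range(1000):
    Q_new = R + gamma * (P @ Q.max(axis=1))
    if np.abs(Q_new - Q).max() < 1e-8:
        Q = Q_new
        break
    Q = Q_new

policy = Q.argmax(axis=1)  # greedy policy: best action in each state
```

Because the Bellman update is a contraction with factor gamma, the loop converges geometrically; the greedy policy extracted at the end is optimal for this toy MDP.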

In the lab classes, a set of assignments must be solved. A solution consists of a brief PDF report and the associated code, and must be submitted by a specified deadline. For each lab, the full code, or a specified part of it, should be completed during the lab session itself. In addition, a discussion session with mandatory participation will be organized before the exam (the exact date will be announced later), where the teachers will discuss the solutions separately with each student group. In this session, detailed questions will be asked to assess whether the assignment solution is original and what each student contributed to it.

- Assignment 1: Markov decision processes. Dynamic programming (PDF), and the Matlab code used as the basis for the assignment.
- Assignment 2: Q-learning (PDF). The Matlab code for Assignment 1 is needed, and in addition two m-files are supplied: a template for implementing the Q-learning algorithm in Matlab: qlearning.m, and a script to compute a (near-)optimal solution for the grid navigation problem: gridnav_nearoptsol.m.
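For orientation, the tabular Q-learning algorithm implemented in Assignment 2 can be sketched as follows. This is a minimal Python illustration on the same kind of hypothetical two-state MDP as above, not the Matlab template qlearning.m or the grid navigation problem used in the course.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state, 2-action MDP (illustration only; the assignment
# uses the Matlab template qlearning.m on the grid navigation problem).
n_states, n_actions = 2, 2
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma, alpha, epsilon = 0.9, 0.1, 0.1  # discount, learning rate, exploration

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(20000):
    # epsilon-greedy action selection
    if rng.random() < epsilon:
        a = int(rng.integers(n_actions))
    else:
        a = int(Q[s].argmax())
    # sample the next state and observe the reward
    s_next = int(rng.choice(n_states, p=P[s, a]))
    r = R[s, a]
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```

Unlike the value iteration demo, Q-learning needs no model: it learns from sampled transitions only, which is the setting the assignment works in.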

Comments, suggestions, questions etc. related to this course or website are welcome; please contact the lecturer.