**Lecturer: Lucian Busoniu**


This course provides methods for controlling systems that are too complex, or insufficiently known, for classical control design techniques to apply. The focus is on learning algorithms for control, in particular reinforcement learning (RL). Special attention is also paid to model-based techniques related to RL, since they can be very useful for controlling complex systems even when a model is known. After introducing the RL problem, the dynamic programming algorithms at the foundation of RL are described in the discrete-variable setting. Classical RL algorithms are then introduced in the same setting. In the second part of the course, the dynamic programming and RL algorithms are extended with approximation techniques, to make them applicable to continuous-variable control as well as to large-scale discrete-variable problems. Finally, several online planning techniques are discussed.
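To give a concrete taste of the dynamic programming foundation mentioned above, here is a minimal value-iteration sketch. It is written in Python for illustration (the course labs use Matlab), and the two-state MDP below, with its transition probabilities and rewards, is entirely invented for the example:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP, invented for illustration.
# P[s, a, s'] = probability of reaching s' when taking action a in state s.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],
    [[0.0, 1.0], [0.5, 0.5]],
])
# R[s, a] = expected immediate reward for taking action a in state s.
R = np.array([
    [1.0, 0.0],
    [0.0, 2.0],
])
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality backup
#   V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s,a,s') V(s') ]
V = np.zeros(2)
for _ in range(1000):
    V_new = np.max(R + gamma * (P @ V), axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

# Greedy policy with respect to the converged values.
policy = np.argmax(R + gamma * (P @ V), axis=1)
print(V, policy)
```

The same fixed-point iteration underlies the discrete-variable algorithms covered in lectures 2 and 3; the approximation techniques in the second part of the course replace the exact table `V` with a parametric approximator.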

This course is part of the Master program ICAF of the Automation Department, UTCluj (1st year 2nd semester). As prerequisites, basic knowledge of analysis and linear algebra is needed, together with notions of discrete-time dynamical systems. The lecturer is Lucian Busoniu.

The course and lab sessions take place on Thursdays from 18:00, in room C01, Dorobantilor. A detailed schedule is given next. Due to scheduling constraints, two weeks are free.

| Week; day | Session |
|---|---|
| #1; 28 Feb | Lecture 1 |
| #2; 7 Mar | Lecture 2 |
| #3; 14 Mar | Lab 1 |
| #4; 21 Mar | Lecture 3 |
| #5; 28 Mar | Free |
| #6; 4 Apr | Lecture 4 |
| #7; 11 Apr | Lab 2 |
| #8; 18 Apr | Lecture 5 |
| #9; 25 Apr | Lecture 6 |
| #10; 9 May | Lab 3 |
| #11; 16 May | Lecture 7 |
| #12; 23 May | Lecture 8 |
| #13; 30 May | Lab 4 |
| #14; 6 Jun | Free |

Grading rules:

- 50% lab grades. There are 4 labs, and each student receives one grade per lab, from 0 to 10, composed of:
  - up to 2 points for an optional minitest at the start of the lab, on the lecture material relevant to that day's assignment;
  - up to 8 points for the mandatory solution of the lab assignment, reduced to 4 points if the solution is delivered late.
- 50% exam.
- Up to 10% bonus points for lecture quizzes.

The slides are made available here in time for each lecture. The slides are required material for the exam. Both the slides and the lectures are in Romanian.

- Part 1: The reinforcement learning problem (covered in lecture 1).
- Part 2: The optimal solution. Dynamic programming (covered in lectures 2 and 3).
- Part 3: Reinforcement learning (covered in lectures 3 and 4).
- Part 4: Function approximation. Approximate dynamic programming. Batch RL (covered in lectures 5 and 6).
- Part 5: Online approximate RL. Perspectives (covered in lectures 7 and 8).

In the lab classes, a set of assignments must be solved. A solution consists of a brief PDF report and the associated Matlab code, and must be submitted by a specified deadline. For each lab, the full code, or a specified part of it, should be completed during the lab session itself. In addition, an oral session with mandatory participation will be organized before the exam (the exact date will be announced later), in which the lecturer discusses the solutions separately with each student group. In this session, detailed questions are asked to assess whether the assignment solution is original and what each student contributed to it.

Submitting solutions to all the assignments, and validating them by discussing them in the oral session, is required for admission to the exam. There is zero tolerance for copying: each copied solution is graded 0, and two or more copied solutions automatically invalidate the entire solution set, so the student must retake the course the following year. More details on the requirements (including individual deadlines) are available in the assignment descriptions, which will appear here shortly before the corresponding lab session.

In addition, 2 points of each lab grade are awarded based on a short (5-minute) test at the beginning of the class, covering the lecture material relevant to that lab.

- Assignment 1: Markov decision processes. Dynamic programming (PDF) and the Matlab code used as basis for the assignment.
- Assignment 2: Q-learning (PDF). The Matlab code from Assignment 1 is needed; in addition, two m-files are supplied: a template for implementing the Q-learning algorithm in Matlab, qlearning.m, and a script that computes a (near-)optimal solution for the grid navigation problem, gridnav_nearoptsol.m.
- Assignment 3: Offline approximate methods (PDF), and the Matlab code.
- Assignment 4: Online approximate methods (PDF), and the Matlab code.
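For students who want the flavor of Assignment 2 before the lab, the core of Q-learning is a single temporal-difference update applied while interacting with the environment. The sketch below is in Python for illustration (the labs use Matlab), and its 3x4 grid, rewards, and parameters are invented for the example; it is not the course's qlearning.m or grid navigation setup:

```python
import numpy as np

# Illustrative Q-learning on a small deterministic grid world; the grid layout,
# reward values, and learning parameters below are assumptions for this sketch.
rng = np.random.default_rng(0)
H, W = 3, 4                                    # grid height and width
goal = (0, 3)                                  # terminal goal cell
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, a):
    """Apply action a; moving into a wall leaves the state unchanged."""
    r, c = state
    dr, dc = actions[a]
    nr = min(max(r + dr, 0), H - 1)
    nc = min(max(c + dc, 0), W - 1)
    reward = 10.0 if (nr, nc) == goal else -1.0
    return (nr, nc), reward, (nr, nc) == goal

Q = np.zeros((H, W, len(actions)))   # tabular Q-function, Q[row, col, action]
alpha, gamma, eps = 0.5, 0.95, 0.1   # learning rate, discount, exploration rate

for episode in range(500):
    s = (H - 1, 0)                   # each episode starts in the bottom-left corner
    done = False
    while not done:
        # epsilon-greedy action selection
        a = int(rng.integers(len(actions))) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning temporal-difference update toward r + gamma * max_a' Q(s', a')
        target = r + (0.0 if done else gamma * np.max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

greedy = np.argmax(Q, axis=2)        # greedy policy, one action index per cell
```

Because the update only needs sampled transitions `(s, a, r, s')`, no model of the grid is required, which is precisely the point of the model-free algorithms in Part 3 of the course.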

The lab discussion session is scheduled for June 13th, from 5:30 PM, in room C01 (the usual course location).
To ensure the solutions can be graded, the final deadline for all assignments is **Thursday, June 6th**; any student who has not handed in all 4 assignments by this date is not eligible for the exam and will not be admitted to the discussions. An assignment of students to timeslots will be published soon.

Comments, suggestions, questions etc. related to this course or website are welcome; please contact the lecturer.