ALG_GUI MARL algorithms demonstration GUI.

Use ALG_GUI to start the demonstration interface. The demonstration shows the performance of various MARL algorithms on two types of tasks: fully cooperative and mixed. Select a task type from the left listbox and an algorithm from the right listbox, then click "Run" to start learning. Once learning has finished, the learned policies can be replayed by clicking "Replay", and the convergence of the algorithm can be checked by clicking "Conv. Plot". The "[ Optimal ]" items in the right listbox do not run learning; instead, they load a predefined set of optimal policies for the given task.

Learning and replaying can be customized from the configuration panel:

- The replay speed can be set with the "Replay speed" slider, from 0 (step by step) to 10 (full CPU speed).
- The progress of learning can be shown "live" throughout the entire process, or only from a given trial onward. For the latter, tick the "Show only after #trials" checkbox and move the slider to set the number of trials.
- The progress of learning can be slowed down for visualization after a given trial. To do this, tick the "Slow down after #trials" checkbox and move the slider to set the number of trials.
- Tick "Pause before run" to get a chance to examine a new world before the learning process is initiated.
- Tick "Use Star Wars robots" to represent the robots with images of Star Wars robots instead of abstract colored discs. The goals then appear as color-coded power sockets.

Quit the demo by pressing "Quit" or the window's Close button.

See also: run_algorithm, replay_algorithm
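As the text above notes, the interface is started by calling ALG_GUI. A minimal launch sketch, assuming the demo's files (e.g., alg_gui.m and its supporting functions) are on the MATLAB path:

```matlab
% Launch the MARL algorithms demonstration GUI.
% Assumes alg_gui.m and its supporting files are on the MATLAB path;
% addpath is only needed if they live in a separate folder (path shown is hypothetical).
% addpath('path/to/marl_demo');
alg_gui
```

Learning and replay can presumably also be driven outside the GUI via the run_algorithm and replay_algorithm functions listed under "See also"; consult their own help text for the exact calling signatures.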