RUN_ALGORITHM Runs a given MARL algorithm on a given type of task.
  RUN_ALGORITHM(TASK, COOPLEV, METHOD, CFG)
  Runs a MARL algorithm on a given type of task. Uses 5x5 robot
  gridworlds as tasks. Specifically designed for use with alg_gui, but
  can also be used independently.

  Saves the algorithm data in a field of the 'algdata' structure in the
  base workspace, creating the variable if not present. The field name
  is a combination of the task type and the algorithm type. Any previous
  data for that task/algorithm pair is cleared and the resources
  associated with it released.

  Parameters:
  TASK    - specifies the type of task. One of 'nav', 'rescue', for the
            navigation task or the search & rescue task, respectively.
  COOPLEV - specifies the cooperation level of the task. One of 'coop',
            'comp', 'mixed', for fully cooperative, fully competitive,
            and mixed tasks, respectively.
  METHOD  - specifies the algorithm to run. One of:
            'q'     - single-agent Q-learning
            'fsq'   - full-state Q-learning
            'asf'   - adaptive state focus Q-learning
            'teamq' - team Q-learning
            'wolf'  - Win-or-Learn-Fast policy hill climbing
            'opt'   - predefined optimal sequences on the given task
  CFG     - configuration of the learning process. Structure with the
            following fields (all optional):
            'stopearly'      - whether learning should be stopped upon
                               convergence. Default 1 (yes).
            'showafter'      - make the world visible only after this
                               number of trials. If Inf, the world stays
                               invisible throughout the learning process.
                               Default Inf.
            'slowdownafter'  - similar, but slows down the learning
                               process for visualization. Takes effect
                               only after 'showafter' trials. Default Inf.
            'pausebeforerun' - whether the world should be held on screen
                               until a keypress before learning starts.
                               Default 1 (hold on screen).
            'useseeds'       - whether the random generator should be
                               deterministically initialized for
                               reproducible algorithm behavior. Default 1
                               (use reproducible behavior).

  See also replay_algorithm
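
  Example:
  A minimal usage sketch. The field values below are illustrative
  choices, not recommendations; all CFG fields are optional and fall
  back to the defaults listed above.

```matlab
% Build a configuration structure (all fields optional)
cfg = struct();
cfg.stopearly = 1;        % stop learning once it has converged
cfg.showafter = 100;      % make the world visible from trial 100 onward
cfg.slowdownafter = 100;  % slow down for visualization after trial 100
cfg.pausebeforerun = 0;   % do not wait for a keypress before learning
cfg.useseeds = 1;         % seed the generator for reproducible runs

% Run WoLF policy hill climbing on a fully cooperative navigation task
run_algorithm('nav', 'coop', 'wolf', cfg);

% The results land in the base-workspace structure 'algdata', in a
% field named from the task type and the algorithm type.
```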