RUNEXP Runs a sequence of experiments using the MARL framework.

  RUNEXP(EXPCONFIGS[, DATAFILE, DATADIR, NRUNS, SILENT])

  Runs a sequence of experiments using the reinforcement learning framework.

  Parameters:
    EXPCONFIGS  - the configuration of the experiments (see below).
    DATAFILE    - (optional) the name of the data file in which to store the
                  results. Default: an automatically generated unique filename.
    DATADIR     - (optional) the directory where the data file should be saved,
                  if different from the current directory. Setting it to the
                  empty string has the same effect as not specifying it.
                  Default: the current directory.
    NRUNS       - (optional) the number of independent learning runs for each
                  individual experiment. This number can also be set on a
                  per-experiment basis via the experiment options. If set to
                  anything less than 1, the default is enforced. Default: 50.
    SILENT      - (optional) 'on'/'off' - when 'on', silences all text and
                  graphical output. Silence can also be set on a per-experiment
                  basis via the experiment options; however, this global value,
                  if 'on', overrides the per-experiment setting. Default: 'off'.

  EXPCONFIGS is a cell array, each element specifying the configuration of an
  individual experiment. It is not necessary to specify all the configuration
  fields for each experiment; any field that is not specified for a given
  experiment is inherited from the previous one (except the options). There are
  initial defaults, but it is recommended to specify a full configuration for
  the first experiment, to maintain consistency among the experiments.

  An experiment configuration specifier is a structure with the following
  optional fields:
    worldtype   - the type of the world in which the experiment will run.
    worldargs   - the arguments for the construction of the world. The first
                  argument is overwritten by the agents collection; a simple
                  empty matrix may be supplied in its place.
    lp          - the learning parameters. See learn().
    agn         - the number of agents.
    aglp        - the agent learning parameters. See agent().
    agfun       - the agent functions, a cell array of strings, in the order:
                  learning function, action function, exploration function.
                  See agent().
    options     - options for running the experiment (see below).

  The 'aglp' and 'agfun' fields can either be structures, in which case all
  agents are constructed with the same learning parameters and functions, or
  cell arrays with the same length as the number of agents, in which case each
  agent gets its own learning parameters and functions.

  The following options are available:
    nruns        - overrides the default number of independent runs for the
                   experiment. Default: 50, or the value specified via the
                   NRUNS function parameter.
    silent       - 'on'/'off' - silences the experiment: no text or graphical
                   output is displayed for the duration of the experiment.
                   Default: 'off'.
    show         - 'on'/'off' - whether to display the world view while
                   learning. Has no effect if the experiment is silent, and
                   requires the world to have a graphical representation.
                   Default: 'off'.
    storeobjects - 'on'/'off' - whether to store the objects after each
                   learning run. Worlds and cleaned-up agents are stored when
                   this option is on. Default: 'on'.
    convplot     - whether to display convergence summaries after each
                   experiment run. Has no effect if the experiment is silent.
                   Default: 'off'.
    plotpause    - how long to keep the plots on the screen. Default: 3
                   seconds.

  See also processexp
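
  Example (a sketch only; the world type, the learning/action/exploration
  function names, and the parameter values below are hypothetical placeholders
  and are not taken from the framework - only the field names and the RUNEXP
  call signature come from the documentation above):

    % Hypothetical two-experiment sequence; values are placeholders.
    cfg1.worldtype = 'gridworld';            % assumed world type
    cfg1.worldargs = {[], 5, 5};             % first argument is overwritten by the agents collection
    cfg1.lp        = struct('trials', 200);  % learning parameters (placeholder fields), see learn()
    cfg1.agn       = 2;                      % number of agents
    cfg1.aglp      = struct('alpha', .1, 'gamma', .95);          % one structure: same parameters for all agents
    cfg1.agfun     = {'qlearn', 'greedyact', 'egreedyexplore'};  % assumed function names
    cfg1.options   = struct('nruns', 10, 'convplot', 'on');

    cfg2.agn     = 3;                        % unspecified fields are inherited from cfg1
    cfg2.options = struct('silent', 'on');   % options are not inherited and must be given again

    % 10 runs per experiment, results saved to 'demo_results' in the current directory
    runexp({cfg1, cfg2}, 'demo_results', '', 10, 'off');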