Run Experiments in Parallel
By default, Experiment Manager runs one trial of your experiment at a time on a single CPU. If you have Parallel Computing Toolbox™, you can configure your experiment to run multiple trials at the same time or to run a single trial at a time on multiple GPUs, on a cluster, or in the cloud.
Tip
In built-in training experiments, the results table displays whether each trial runs on a single CPU, a single GPU, multiple CPUs, or multiple GPUs. To show this information, click the Show or hide columns button located above the results table and select Execution Environment.
Run Multiple Simultaneous Trials
To run multiple trials of your experiment at the same time using one parallel worker for each trial:
Set up your parallel environment as described in Set Up Parallel Environment.
On the Experiment Manager toolstrip, set Mode to Simultaneous. Alternatively, to offload the experiment as a batch job, set Mode to Batch Simultaneous and specify your cluster and pool size. For more information, see Offload Experiments as Batch Jobs to a Cluster.
Click Run.
Experiment Manager runs as many simultaneous trials as there are workers in your parallel pool. All other trials in your experiment are queued for later evaluation.
Note
When running multiple simultaneous trials, follow these guidelines:
Experiment Manager does not support Simultaneous or Batch Simultaneous execution when you set the training option ExecutionEnvironment to "multi-gpu" or "parallel", or when you enable the training option DispatchInBackground. Use these options to speed up your training only if you intend to run one trial of your experiment at a time.
Load data for your experiment from a location that is accessible to all your parallel workers. For example, store your data outside the project and access the data by using an absolute path. Alternatively, create a datastore object that can access the data on another machine by setting up the AlternateFileSystemRoots property of the datastore. For more information, see Set Up Datastore for Processing on Different Machines or Clusters.
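For example, a datastore can map between a local root folder and the corresponding root folder on the workers by setting AlternateFileSystemRoots. This is a minimal sketch; the folder paths are placeholders for your own data locations.

```matlab
% Sketch, assuming the data lives under C:\data on your machine and
% under /data on the parallel workers. Replace both paths with your own.
imds = imageDatastore("C:\data\training", ...
    IncludeSubfolders=true, LabelSource="foldernames");
imds.AlternateFileSystemRoots = ["C:\data", "/data"];
```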
Run Single Trial on Multiple Workers
Built-In Training Experiments
To run a single trial of your built-in training experiment at a time on multiple parallel workers:
In your setup function, set the training option ExecutionEnvironment to "multi-gpu" or "parallel". For more information, see Scale Up Deep Learning in Parallel, on GPUs, and in the Cloud.
If you are using a partitionable datastore, enable background dispatching by setting the training option DispatchInBackground to true. For more information, see Preprocess Data in the Background or in Parallel.
Set up your parallel environment, as described in Set Up Parallel Environment.
On the Experiment Manager toolstrip, set Mode to Sequential. Alternatively, to offload the experiment as a batch job, set Mode to Batch Sequential and specify your cluster and pool size. Experiment Manager does not support this execution mode when you set the training option ExecutionEnvironment to "multi-gpu". For more information, see Offload Experiments as Batch Jobs to a Cluster.
Click Run.
Custom Training Experiments
To run a single trial of your custom training experiment at a time on multiple parallel workers:
In the experiment training function, set up your parallel environment as described in Set Up Parallel Environment. Then, use an spmd block to define a custom parallel training loop. For more information, see Custom Training with Multiple GPUs in Experiment Manager.
On the Experiment Manager toolstrip, set Mode to Sequential. Alternatively, to offload the experiment as a batch job, set Mode to Batch Sequential and specify your cluster and pool size. For more information, see Offload Experiments as Batch Jobs to a Cluster.
Click Run.
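The shape of such an spmd block might look like the following sketch. The trainPartition helper is hypothetical, standing in for your own per-worker training code; spmdIndex and spmdSize identify each worker inside the block.

```matlab
% Minimal sketch of an spmd section inside a custom training function.
% trainPartition is a hypothetical helper for your per-worker training loop.
if isempty(gcp("nocreate"))
    parpool;  % ideally one worker per available GPU
end
spmd
    if canUseGPU
        gpuDevice(spmdIndex);  % each worker selects its own GPU
    end
    workerLoss = trainPartition(spmdIndex, spmdSize);
end
```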
Set Up Parallel Environment
Run on Multiple GPUs
If you have multiple GPUs, parallel execution typically increases the speed of your experiment. Using a GPU requires Parallel Computing Toolbox and a supported GPU device. For more information, see GPU Computing Requirements (Parallel Computing Toolbox). To determine whether a usable GPU is available, call the canUseGPU function.
GPU support depends on the type of experiment you run:
For built-in training experiments, GPU support is automatic. By default, these experiments use a GPU if one is available.
For custom training experiments, computations occur on a CPU by default. To train on a GPU, convert your data to gpuArray objects.
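For example, a custom training loop might move each mini-batch to the GPU only when one is available; this sketch uses random data as a stand-in for your own mini-batch.

```matlab
% Sketch: move a mini-batch to the GPU when one is available.
% The random array is a placeholder for your own training data.
X = rand(28, 28, 1, 128, "single");
if canUseGPU
    X = gpuArray(X);
end
```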
For best results, before you run your experiment, create a parallel pool with as many workers as GPUs. Otherwise, multiple workers share the same GPU, so you do not get the desired computational speed-up and you increase the chance that the GPUs run out of memory. You can check the number of available GPUs by using the gpuDeviceCount (Parallel Computing Toolbox) function.
numGPUs = gpuDeviceCount("available");
parpool(numGPUs)
Run on Cluster or in Cloud
If your experiments take a long time to run on your local machine, you can improve performance by using a computer cluster on your onsite network or by renting high-performance GPUs in the cloud. After you complete the initial setup, you can run your experiments with minimal changes to your code. Working on a cluster or in the cloud requires MATLAB® Parallel Server™. For more information, see Deep Learning in the Cloud.
See Also
Apps
Functions
trainingOptions | spmd (Parallel Computing Toolbox) | canUseGPU | gpuDeviceCount (Parallel Computing Toolbox) | parpool (Parallel Computing Toolbox)
Objects
gpuArray (Parallel Computing Toolbox)