How to Set Up Your Own Deep Learning Experiments
From the series: Deep Neural Networks
The Experiment Manager app allows you to set up experiments for training, fine-tuning, and explaining your deep learning networks under a variety of initial conditions. See how you can set up your deep learning experiments with a detailed walk-through of the following steps:
- Reviewing an existing training script that could be turned into an experiment
- Turning a script into a function that the Experiment Manager will accept
- Highlighting the parameters you’d like to perform trials over
- Adding the experiment setup function into the Experiment Manager
Published: 30 Sep 2020
Hi. My name's Joe Hicklin. I'm a senior developer at The MathWorks. In my last video, I showed you how Experiment Manager can automate a lot of the experimentation you were doing for your deep learning systems. In this video, I'll show you what I had to do to get the Experiment Manager to run my experiments.
To configure Experiment Manager to run your experiments, you follow a four-step process. First, you need a script that runs some kind of deep learning experiment. You've probably already got something like this.
The next step is to turn it into a function. Here I've added a function statement at the beginning and an end statement at the end. This function has to return three things: the datastore with your data, the layers of the network, and the training options. And it has to take one argument called params, which I'll talk about more in a minute. Also, remove your call to trainNetwork, because Experiment Manager will do that for you.
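For reference, here is a minimal sketch of what such a setup function might look like. The function name, the digit data set, and the network shown here are illustrative placeholders, not the actual code from the video.

```matlab
function [imds,layers,options] = Experiment_setup(params)
% Minimal sketch of an Experiment Manager setup function (hypothetical name).
% params is unused here; it comes into play in step three.

% Load the digit image data set that ships with Deep Learning Toolbox.
digitPath = fullfile(toolboxdir("nnet"),"nndemos","nndatasets","DigitDataset");
imds = imageDatastore(digitPath, ...
    "IncludeSubfolders",true,"LabelSource","foldernames");

% Define a small image classification network.
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,8,"Padding","same")
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];

% Specify training options. Note there is no call to trainNetwork here --
% Experiment Manager runs the training itself.
options = trainingOptions("sgdm","MaxEpochs",5);
end
```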
The third step is the most work. You have to make your function perform different trials based on the value of the params argument. In this case, I'm going to use a larger or smaller data set, and I'm going to augment the data or not, depending on the value of this parameter. Here's how I've done that: I have a switch statement that looks at the dataset field of params and switches on it. Depending on which of these strings it is, I use the larger or smaller data set, and I do augmentation or not. We're counting on Experiment Manager to call this function with different values for params.dataset, and for each of those values the function does something different.
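A minimal sketch of that switch statement is below, assuming the hyperparameter field is named dataset. The case strings, the data set split, and the augmentation settings are illustrative rather than the exact ones used in the video.

```matlab
% Build the training data based on the dataset hyperparameter.
digitPath = fullfile(toolboxdir("nnet"),"nndemos","nndatasets","DigitDataset");
imdsAll   = imageDatastore(digitPath,"IncludeSubfolders",true,"LabelSource","foldernames");
augmenter = imageDataAugmenter("RandRotation",[-10 10],"RandXTranslation",[-3 3]);

switch params.dataset
    case "smaller"
        imds = splitEachLabel(imdsAll,0.1);        % 10% of each class, no augmentation
    case "smaller-augmented"
        imds = augmentedImageDatastore([28 28 1], ...
            splitEachLabel(imdsAll,0.1),"DataAugmentation",augmenter);
    case "larger"
        imds = imdsAll;                            % all of the data, no augmentation
    case "larger-augmented"
        imds = augmentedImageDatastore([28 28 1], ...
            imdsAll,"DataAugmentation",augmenter);
end
```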
The final step is to tell Experiment Manager about your function. If I go to Experiment Manager and say New Experiment, it asks for the name of my function-- that's the thing I just wrote-- the name of my parameter-- let's take a look; that was dataset-- and the possible values for that parameter. And I happened to store those right here.
And that's it. Now when I run this experiment, Experiment Manager calls my function and passes in each of these different strings, one at a time, as the value of the dataset parameter. The result is the experiment I ran earlier, producing these results. The other two experiments were set up in exactly the same way. In the second experiment, I varied the network architecture, so in my function I added a switch statement that switched on another parameter, net, and depending on its value, created one of four different kinds of networks.
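That network switch might look something like the sketch below. The field name net comes from the video; the case strings and the layer choices are illustrative, since the four actual architectures aren't shown.

```matlab
% Choose the middle layers based on the net hyperparameter.
switch params.net
    case "one-conv"
        middleLayers = [
            convolution2dLayer(3,8,"Padding","same")
            reluLayer];
    case "two-conv"
        middleLayers = [
            convolution2dLayer(3,8,"Padding","same")
            reluLayer
            maxPooling2dLayer(2,"Stride",2)
            convolution2dLayer(3,16,"Padding","same")
            reluLayer];
    case "two-conv-batchnorm"
        middleLayers = [
            convolution2dLayer(3,8,"Padding","same")
            batchNormalizationLayer
            reluLayer
            maxPooling2dLayer(2,"Stride",2)
            convolution2dLayer(3,16,"Padding","same")
            batchNormalizationLayer
            reluLayer];
end

% Assemble the full network around the chosen middle layers.
layers = [
    imageInputLayer([28 28 1])
    middleLayers
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];
```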
To tell Experiment Manager about that, we went to the Network Definition, told it the name of my function, the name of my parameter, and the possible values. And that's all it took.
In the last experiment, I varied the training options a little bit, and that one was a little different. I simply passed the parameter values straight through to the trainingOptions command. I had a solver, epochs, miniBatchSize, and learnRate, and, like I said, I simply passed those straight on through. To tell Experiment Manager about those, I did the same thing: there's the name of my function, there are the names of the parameters that I used, and there are the values.
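Passing the hyperparameters straight through might look like this. The field names match the ones mentioned above, and the mapping onto trainingOptions name-value arguments is an assumption.

```matlab
% Forward the hyperparameters directly to trainingOptions.
options = trainingOptions(params.solver, ...           % e.g. "sgdm" or "adam"
    "MaxEpochs",        params.epochs, ...
    "MiniBatchSize",    params.miniBatchSize, ...
    "InitialLearnRate", params.learnRate);
```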
And that's all it took to set it up to run those 54 trials for me. I hope I've shown you that Experiment Manager can be an excellent way to automate, document, and store your deep learning experiments. If you want to learn more about it, follow the links at the bottom of the page.