Design Optimization with MATLAB
Overview
Engineers use design optimization tools to automate finding the best design parameters while satisfying project requirements and to evaluate trade-offs among competing designs. Using these tools results in faster design iterations and allows evaluating a larger number of parameters and alternative designs compared with manual approaches.
We will show how to use apps and functions in Optimization Toolbox and Global Optimization Toolbox to define and solve design optimization problems. Optimization can be applied to design models that are either analytic or black-box including those built with machine learning and simulations. We will use examples from different engineering domains to demonstrate these capabilities.
Highlights
- Defining objectives, constraints and design variables
- Interactively creating and solving optimization problems with an app
- Choosing the best solver for your problem
- Setting options to improve results
- Using parallel computing to accelerate design studies
About the Presenters
Mary Fenelon is the product manager for the MATLAB optimization products. Before joining MathWorks, she managed the CPLEX Optimization Studio development team at IBM and developed early versions of the CPLEX mixed-integer programming solver. Mary earned a PhD in Operations Research at Stanford University.
Jason Rodgers is a senior application engineer at MathWorks. Prior to joining MathWorks, he spent five and a half years at Toyota R&D in the Model-Based Design group. He specialized in powertrain modeling and in using model-based control, along with various optimization techniques, to develop new powertrain systems. Jason earned a B.S.M.E. and an M.S.C. from the University of Michigan.
Peter Brady is an application engineer with MathWorks striving to accelerate our customers' engineering and scientific computing workflows across maths, statistics, finance, and machine learning. Prior to joining MathWorks, Peter worked in computational fluid dynamics and thermodynamics as well as high performance computing for a number of defence and civil contractors and a few universities. He has worked in fields as diverse as cavitation, wave/turbulence interactions, rainfall and runoff, nano-fluidics, HVAC, and natural convection, including scale-out cloud simulation techniques. Peter holds a doctorate in free surface computational fluid dynamics and a Bachelor of Civil Engineering, both from the University of Technology Sydney.
Recorded: 23 Sep 2020
Hello. And thanks for joining us today for the webinar, Design Optimization with MATLAB. My name is Mary Fenelon. I'm the Product Manager for the Optimization Products here at MathWorks.
Let me walk you through a few logistics before we begin. If you have any problems hearing the audio or seeing the presentation, please contact the webinar host by typing in the chat panel. If you have any questions for the presenter related to the topic, you can type them in the Q&A panel. Those questions will be answered at the end of the webinar. Thank you.
I'm joined today by two of my MathWorks colleagues, Peter Brady and Jason Rodgers. Peter and Jason, please introduce yourselves.
Hi. My name's Peter Brady. And I'm a Senior Application Engineer with MathWorks, way out in our Australia office in Sydney. I generally work with customers in the maths and stats and optimization space. Prior to joining MathWorks, my background, and actually my PhD and research industry experience, is in computational fluid dynamics and thermodynamics and high performance computing. I've applied that across a number of defense and civil contractors, and I've brought those skills and that background here to assisting MathWorks customers.
Hello. My name is Jason Rodgers. I'm an Application Engineer based out of Novi. And my main role is to support our automotive toolboxes. So things like Powertrain Blockset, Vehicle Dynamics Blockset, and Model-Based Calibration Toolbox. A few of those, I'll be talking about today. Before joining MathWorks, I worked at Toyota for a number of years, working for a group whose main role was to investigate different optimization techniques and look at how we can apply optimization to various problems within powertrain design and powertrain controls.
Here's our agenda for today. We'll start off with an introduction to design optimization and then show you three examples of using MATLAB for design optimization. I'll do the first one on multi-stage rocket design, where the system is modeled with analytical expressions. Peter will do the second, on designing an electrical cable modeled with partial differential equations. Jason will do the third one, on determining optimal gear ratios under multiple objectives using a lookup table as the model.
I'll finish by giving some tips on choosing the best optimization tools for your project and some key takeaways, followed by a question and answer session. Let's get started.
Here's a diagram of the design optimization workflow. I have a system that I want to optimize according to some performance measure. The system is affected by some design variables. I need to choose values for these variables. I run the system using these values and get a result. I check the result against my goals.
Did I improve the performance measure that is the objective of the optimization process? Did the values I chose meet the requirements or constraints on the design variables? If my goals are not met, I need to modify the design variables and try again. It's an iterative process.
One way to iterate is by trial and error. But how do I know what combinations to try? How do I know that I have an optimal solution when I'm done? That's where optimization solvers make a difference. They figure out how to adjust the variable values to converge to an optimal solution automatically. To make this concrete, let's look at an example.
In this example, I have an engine model. The engine model takes two input parameters, the engine speed and the pressure ratio. I need to choose a value for the engine speed and a value for the pressure ratio. And then I can run my engine model. Now when I run my engine model, I get out a volumetric efficiency, which is the performance measure that I'm looking to maximize.
So how do I adjust the engine speed and pressure ratio to maximize the volumetric efficiency? Let's see how we can do that in MATLAB.
I've set up the problem in this live script. I have experimental data on the volumetric efficiency of the engine at various combinations of pressure and speed. It's in this file here that I'll load up. And then from that, I'm going to build a map, a surface that we can use to interpolate to get a value for any point that we want to evaluate. So let's run that.
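In code, that map-building step might look something like this minimal sketch; the file and variable names are stand-ins, not the demo's actual ones:

```matlab
% Hypothetical sketch: build an interpolated efficiency map from test data.
% The file name and field names are assumptions, not the demo's actual ones.
data = load('engineData.mat');   % vectors: speed (RPM), pressure ratio, efficiency
volEffMap = scatteredInterpolant(data.speed, data.pressureRatio, ...
                                 data.volEff, 'natural');

% Evaluate the map at any operating point:
eff = volEffMap(3500, 0.5);      % volumetric efficiency at 3500 RPM, PR 0.5
```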
So I'll use that map for both evaluating the points to get the volumetric efficiency and then also to plot the points. Because we want to be able to keep track of our approach. So we're going to start with a trial and error approach. So we start with an initial value here of 3,500 RPM and pressure ratio of 0.5. We're going to save this because we'll use that when we try the optimization a bit later. So let's see what we get with that.
We get a volumetric efficiency of 0.8963. And there's our plot. I'll make that a little bigger. So let's try to increase that. That's a little better. Let's try some more. Better. Keep going. Oh, that one's worse. So let's back up a little bit.
Well, we found a pretty good point by adjusting the pressure ratio. Let's see what we can do by adjusting the engine speed, to see if there's a better point out there. So we decrease it. And we increase it. Now because we have the plot, we could see where this was going. But with just a trial and error process, this is maybe the best you can do. We made some reasonable guesses from a reasonable starting point. And this is the best we can do. So can optimization do any better?
So how can I use optimization? Well, starting with the R2020b release, we have an Optimize live task that will guide us through this process. Because, for an optimization, you have to specify an objective function. And you have to specify constraints, bounds, and the variables, and a starting point if the problem is nonlinear. So this is going to help us do that.
So the first thing we do is we start with an objective. We have a nonlinear objective. We don't have any constraints, so we can skip that part. And then once we do that, it's going to tell us what solvers we can use. fmincon is the one that's recommended. But since the problem is unconstrained, let's just use the unconstrained solver, fminunc.
Now the objective has to be in a function file. And it has to be for a minimization. The function we have is for a maximization. So we have to flip that around. So I wrote this little function here that does that. And that's what we'll do for the optimization.
So it's a local function. And we'll just choose it here. And then there are two arguments to that function. The input that the optimizer is going to be changing. And then the data that it needs to actually calculate that volumetric efficiency, the map. Then we need to specify an initial point. So we'll use that one that we used before. We want to have each iteration displayed.
And I want to see the plot as we go. We can do that by setting this output function; it's that plot function we were using before. I put the code we needed, in the right format, here so I'd get it right. And we should be ready to go. So let's give this a try.
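Roughly, the code the task generates looks like this sketch; the plot helper is a hypothetical stand-in, and the sign flip is what converts the maximization into the minimization fminunc expects:

```matlab
% Hypothetical sketch of what the Optimize live task generates.
objFcn = @(x) -volEffMap(x(1), x(2));   % negate: fminunc minimizes
x0 = [3500, 0.5];                       % the trial-and-error starting point

opts = optimoptions('fminunc', ...
    'Display', 'iter', ...                      % show each iteration
    'OutputFcn', @(x, optimValues, state) ...   % plot progress; plotIterate is a
        plotIterate(x, volEffMap));             % hypothetical helper returning stop = false

[xOpt, fNeg] = fminunc(objFcn, x0, opts);
maxEff = -fNeg;                         % flip the sign back to get the efficiency
```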
And indeed, we found a better point here. We've got a volumetric efficiency of 0.9734. And you can see it was pretty quick to get to that maximum point in contrast to what we saw with the trial and error approach, where we just couldn't get there with the initial point we had chosen and the way we decided to explore the space.
So that's one of the benefits of optimization: the ability to find a better optimal design. Other benefits of optimization include the ability to perform faster design evaluations. Because the optimization is automatically modifying the parameters in a smart way and running the model again, you don't have to do that yourself.
Optimization is also useful for trade off analysis, because it can automatically determine which parameters have more of an effect on the solution. This becomes especially important when you have problems with tens, hundreds, even thousands of variables. Figuring out which of the variables in your problems have the largest effect on the overall solution is done automatically through the optimization process. Doing a parameter sweep with a large number of variables just may not even be feasible. And even with parallel processing, it just may be too time consuming.
The last benefit of optimization to note is that optimization can lead to non-intuitive designs being found. This is because optimization routines can search areas of the solution space that may have previously been considered as not being good places to look, not places where there might be a good, viable solution.
An example of this is this antenna design. This antenna needed to fit into a very tight space. And the optimal antenna design that was found with a genetic algorithm isn't like any antenna you've probably seen before. But it performs as needed and fits in the space.
Now we'll turn to the example of designing a multi-stage rocket, but using a different approach than we used in the engine example. Let's return to MATLAB. The engine example used what we call the solver-based approach. In that approach, you specify the objective function and the constraints and choose an appropriate solver. The live editor task helps us do that. And the objective function and constraints are specified in the terms the solver needs to do the computation: functions for anything that's nonlinear and matrices of coefficients for anything that's linear.
There is another way to do this, called the problem-based approach. And that's what we're going to use for this example. In the problem-based approach, you specify variables that are contained in an optimization problem. Then you build expressions and, from the expressions, constraints and equations. And then you solve the problem with an automatically selected solver. I'll open up the script that I set up for that.
This design problem is to find the fuel needed for each stage of this three stage rocket. We want to maximize the payload while meeting the delta V requirements to achieve Earth escape. First, I set some parameters. Then I define the optimization variables for the fuel mass and the optimization problem. I set a lower bound of zero on the variables.
Next, I define some optimization expressions using these variables and some of the constants that we defined. And use those to set the objective. Now, to define the constraints. The first group is to ensure that the fuel mass is non-decreasing. The second group ensures that the minimum delta V is met.
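As a rough sketch of that formulation, with illustrative constants standing in for the webinar's actual rocket parameters, the setup might look like this:

```matlab
% Hypothetical sketch of the problem-based formulation. All numbers are
% illustrative assumptions, not the webinar's actual rocket parameters.
g0      = 9.80665;          % standard gravity, m/s^2
Isp     = [300; 350; 450];  % specific impulse per stage, s (assumed)
fStruct = 0.1;              % structural mass per unit fuel mass (assumed)
mGross  = 500e3;            % fixed gross liftoff mass, kg (assumed)
dVreq   = 12e3;             % delta-V needed for Earth escape, m/s (assumed)

prob  = optimproblem('ObjectiveSense', 'maximize');
mFuel = optimvar('mFuel', 3, 'LowerBound', 0);        % fuel per stage, kg

mStage  = (1 + fStruct) * mFuel;                      % fuel + structure per stage
payload = mGross - sum(mStage);
prob.Objective = payload;

% Mass before each burn (stage 1 fires first) and after its fuel is spent
m0 = payload + [sum(mStage); sum(mStage(2:3)); mStage(3)];
mf = m0 - mFuel;
dV = g0 * Isp .* log(m0 ./ mf);                       % ideal rocket equation

prob.Constraints.ordering = mFuel(1:2) <= mFuel(2:3); % non-decreasing fuel mass
prob.Constraints.escape   = sum(dV) >= dVreq;         % meet the delta-V requirement
```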
One advantage of the problem based workflow is that it gives you a way to check your formulation. It's a text based display of the problem. And this looks like what we want. We need to set a starting point. And we can evaluate the starting point to see if it's feasible.
Unfortunately, it's not. So the optimizer is going to have to work on finding a feasible solution, as well as finding an optimal one. We can set options. And then we're ready to solve. There's one other set of options for automatic differentiation, which is enabled by default. Automatic differentiation means that you don't need to do extra function evaluations to get the derivatives, like you would with finite differences, which had been the only option up until the R2020b release.
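Continuing the sketch above, checking the formulation and the starting point and then solving might look like this; the starting point is an assumed value:

```matlab
% Hypothetical continuation of the sketch above.
show(prob)                                   % text display to check the formulation

x0.mFuel = [50e3; 100e3; 200e3];             % assumed starting point, kg
infeasibility(prob.Constraints.escape, x0)   % > 0 means the point is infeasible

opts = optimoptions('fmincon', 'Display', 'iter');
% Automatic differentiation supplies the gradients by default here.
[sol, fval, exitflag, output, lambda] = solve(prob, x0, 'Options', opts);
sol.mFuel    % optimal fuel mass per stage
lambda       % Lagrange multipliers: nonzero entries mark binding constraints
```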
So we can run that. We quickly find a solution. It's a local minimum. We can check the values. There's some other information that we might want to look at about the status of the solve. And then perhaps more importantly, we want to understand a little bit more about why we're getting the solution we are. And one thing to look at is which constraints are active, which ones are binding.
And when we look at the Lagrange multipliers, what we find out is that all the constraints are binding. And so, we aren't going to be able to improve the payload unless we can change the delta V requirement. So what if you would like to use this workflow, but not everything in your problem can be expressed as an optimization expression? Say part of the model is a machine learning model, a simulation, or an ordinary differential equation.
You can still use this approach by converting whatever function you have, for whatever part of the problem, constraint or objective, into an optimization expression with the fcn2optimexpr function. And you can run that. I have a version of the rocket objective written as a function. And it runs just fine. Though you can see that there are far more function evaluations, because we are unable to use automatic differentiation with arbitrary functions. We can only use that on things that are expressed as optimization expressions.
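The conversion itself is a one-liner. In a sketch, with rocketObjectiveFcn standing in for whatever black-box function you have:

```matlab
% Sketch: wrap a black-box MATLAB function (simulation, ODE, ML model, ...)
% so it can be used in the problem-based workflow. rocketObjectiveFcn is a
% hypothetical stand-in for your own function of the optimization variables.
payloadExpr = fcn2optimexpr(@rocketObjectiveFcn, mFuel);
prob.Objective = payloadExpr;   % gradients fall back to finite differences
```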
You might also notice that these objective values are different. And that's just because the constant terms are being handled differently. So this problem-based approach gives you a really nice, compact way to represent a problem. Especially when you've got different kinds of variables, you can keep them separate. And you can check your work as you go. Now we're ready for Peter and his demonstration of designing an electrical cable.
Hi. And thanks for joining me for this demonstration today. I've got my MATLAB project open. So I'm going to just jump right on into the Live Editor. So what we're doing today is we're doing a combination of finite element analysis, parallel computing, and optimization.
So this was inspired by a real-world project I was working on with a customer. Imagine you're a wind farm operator or a generator. What you've got to do is bury the current-carrying conductors from your turbine back to your central location. So what we're interested in is, for a given cable size, how much current can I shove down there, subject to the conditions in the earth, before I exceed the maximum temperature that my insulator is allowed to reach?
So this is what we've got. We've got the conductor buried in a trench. We've got some sand around it, we've backfilled, and we've got the natural earth around that. So what we're going to do is just walk through the process of how we do that.
So the first step is, I actually have to use Partial Differential Equation Toolbox to set up the basic heat transfer process for what we're doing here. The process with that is actually remarkably similar to any other sort of finite element workflow. What we do is we bring in our geometry, set up the thermal model and mesh it, go through and set our material properties, boundary conditions, and initial conditions, and run. Conceptually easy. Right? Well, let's just jump on in.
This is the geometry we end up with. I'm highlighting this figure because I've plotted the edge labels and the face labels. When I'm doing this, I don't like isolated constants throughout my code. So I set up some face indexes and some edge groups. So I have faceSoil as one. Right? And we can see edge one is down at the bottom here. That's our soil. I'm going to reuse those throughout the rest of the process.
So the first step: set up an empty thermal model. It's a transient thermal problem, so we set it up for that. The next thing we do is we create our mesh. So we create our geometry from the [INAUDIBLE] model, generate the mesh, and we end up with something that looks like this.
By the way, I should say, I cheated. Just before this demo, I pre-computed this; it takes a little bit longer to run than we have time for here. I've done a little bit of cell refinement in the mesh around the cable. But I've been pretty relaxed with my mesh out in the bulk domain. It's linear heat transfer there, so we can be a little bit relaxed about our mesh constraints.
So we've got our geometry. We've got our mesh. The next thing we do is we just step through and set our thermophysical properties. So these are constant. And we use the thermal properties function. We put in our thermal model. We say which face we're working on. And then we just provide the things that you would normally expect for a heat transfer problem. Conductivity, density, specific heat. I'm using SI units in this case. But as long as you're working in consistent units, it should be fine.
So we repeat this process for the sand and the backfill. We now do the boundary conditions. We have a thermalBC function: you push in the edges that you wish to set and, in this case, the temperature condition I'm using for the boundary. Similarly, for the initial conditions we have a thermalIC function: you push in your model and what your initial temperature is going to be.
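Put together, a stripped-down sketch of that setup might read as follows; the geometry, face and edge numbers, and property values are placeholders rather than the demo's real data:

```matlab
% Hypothetical sketch of the transient thermal setup. Face/edge numbers
% and property values are placeholders, not the demo's actual data.
model = createpde('thermal', 'transient');
geometryFromEdges(model, g);                   % g: decomposed 2-D geometry
generateMesh(model, 'Hmax', 0.1);              % coarse mesh in the bulk domain

faceSoil = 1; edgeOuter = [1 2 3];             % named indexes, no magic numbers
thermalProperties(model, 'Face', faceSoil, ...
    'ThermalConductivity', 1.0, ...            % W/(m*K), SI units throughout
    'MassDensity', 1600, ...                   % kg/m^3
    'SpecificHeat', 800);                      % J/(kg*K)

thermalBC(model, 'Edge', edgeOuter, 'Temperature', 288.15);  % far-field temperature
thermalIC(model, 288.15);                      % uniform initial temperature

results = solve(model, 0:3600:48*3600);        % simulate two days, hourly steps
```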
Now this last section here, this is where it gets interesting for me. Because there are some nonlinearities within this problem. We all know that, as you push more current down a conductor, it gets hotter. So we have some nonlinearity in this case.
So the first thing we do is we set up our thermal properties, as we would. This is an aluminum alloy for a conductor. So we've got our conductivity, density, and specific heat. And then what we do is we're providing an internal heat source at this point. So this provides some power or some energy into the model. And I've wrapped this up in a function. Have a look at the code you can download that goes with this demo.
But what it does is, it's an adaptation of Ohm's law, essentially, where we're pushing in a current and doing an estimation of resistivity for that particular aluminum alloy to convert the current into an amount of energy that's being pushed into our model. The key take-home here is the way we're doing it: we're using a function handle to attach my external function to the thermal model.
So the next nonlinear entity within this problem is the insulator. And the insulator is XLPE in this case, which has a nonlinear conductivity as a function of temperature. And it doesn't necessarily get more conductive as it gets hotter. So what we're doing is we're using the same thermalProperties function as we've used above, but we're using that same function-handle technique from the internal heat source to plug a function handle into the thermal conductivity.
And this function is simply a lookup table that says, for a given temperature, what is the thermal conductivity? The take-home I want you to think about here is that you can apply this function-handle technique to any of these properties. We can have a nonlinear mass density or a nonlinear specific heat. Use it as you need.
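As a sketch, both nonlinear pieces hang off function handles like this; the jouleHeat helper, face numbers, and conductivity table are illustrative assumptions:

```matlab
% Hypothetical sketch of the nonlinear pieces via function handles.
faceConductor = 4; faceInsulator = 5;          % placeholder face indexes

% Joule heating: an Ohm's-law style estimate converting current to W/m^3.
% jouleHeat is a hypothetical helper; state.u is the local temperature.
I = 500;                                       % current, A
internalHeatSource(model, ...
    @(location, state) jouleHeat(I, state.u), 'Face', faceConductor);

% XLPE insulation: thermal conductivity looked up as a function of temperature.
Tdata = [273 323 373];  kdata = [0.29 0.26 0.22];   % assumed lookup table
thermalProperties(model, 'Face', faceInsulator, ...
    'ThermalConductivity', @(location, state) interp1(Tdata, kdata, state.u), ...
    'MassDensity', 930, 'SpecificHeat', 1900);      % assumed XLPE values
```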
So what we do then is we say what sort of time domain we want, and we just run it. If you download this code, there's an animation. But the final result, as you can see, is, as we would expect, a hot central current-carrying conductor, with the heat being conducted out into the bulk domain surrounding it. OK. We've now explored our PDE.
The trick with using this for optimization is we have to wrap it up into a function that takes a current as an input and returns a temperature. Now that's actually really easy. Literally, all we do is cut and paste the above code and wrap it into a function. There are two catches, though, two changes that I've made.
The first change here is down in the geometry section. The geometry really isn't going to change from run to run. What's changing is the current. So just create the geometry once. For 2D, it's pretty quick. But for 3D meshing, it can be quite computationally expensive. So just do it once. Store it. Reuse it.
The other change I've made is that we're not really interested in the entire domain. All we want is the maximum. So I'm just pulling the maximum temperature out and returning that. And that's it. Two changes, and I've wrapped up my PDE into a function to use.
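The wrapper then amounts to something like this sketch, with maxTempFromCurrent, buildCableModel, and jouleHeat as hypothetical names:

```matlab
% Hypothetical sketch: wrap the PDE solve as current -> peak temperature.
function Tmax = maxTempFromCurrent(I)
    % Geometry and mesh are built once and reused on every call, here held
    % in persistent variables; buildCableModel is a hypothetical setup helper.
    persistent model faceConductor tlist
    if isempty(model)
        [model, faceConductor, tlist] = buildCableModel();
    end
    internalHeatSource(model, ...
        @(location, state) jouleHeat(I, state.u), 'Face', faceConductor);
    results = solve(model, tlist);
    Tmax = max(results.Temperature(:));   % hottest point over space and time
end
```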
Now we can get into our minimization. Before we get to that, I like to have a quick look at my objective function if I can. In this case, it's pretty easy: I've only got one independent variable. So what I've done is I've used Parallel Computing Toolbox, running a parfor here to sweep my parameter space between 10 and 1,000 amps and have a look at what's happening.
And look, as we would expect intuitively, if you push more current down a conductor, it gets hotter. We now also know that's what's happening: the temperature keeps rising as we push more current down it. But MATLAB works in minimization. So let's think about our constraints.
So the constraints I've come up with in this case are: let's limit it to 10 to 1,000 amps, and we must keep that temperature above 353 Kelvin. Now constraint one, that's pretty easy. Right? Theoretically, we could pump as many amps down this as we want. But let's just reduce our search space to make it a bit quicker.
Constraint two seems a bit weird at first glance. But what I want you to remember here is that this is a minimization problem. Essentially, the MATLAB functions will just walk down that blue line until they get to zero and stop. So what the second constraint says is, keep walking down that line, but you can't get beyond this point. And that 353 Kelvin is my 80 degrees Celsius, which is the limit that my conductor can get to.
So that's it. That's our understanding of the constraints. Now we jump into our optimization. And I'm actually going to use two methods here. The first one I'm going to start with is surrogate optimization. Surrogate methods are what we recommend when you've got an expensive objective function. And our partial differential equation is expensive to compute.
The problem with surrogates is they're not guaranteed to get you to the optimum. But they'll get you close quickly. So I then use this near-optimal solution as the starting point for a local optimizer to speed up the search.
The process for using these two is pretty much the same. The first thing we do is code in our bounds. We've decided on the upper and lower bounds of our current. We know what our maximum temperature is, so we're going to hardcode our Tmax there as 80 degrees Celsius, converted into Kelvin because I'm using SI units. It's important to provide a sane starting point for these options. So I'm going to start the objective calculations at 300 Kelvin and let the surrogate method go from there.
Now, the catch with surrogate methods is we have to create an objective constructor function. Have a look at my code. But basically, it returns two things: the constraint limit and the objective function value. So it's calculating the PDE for us. We then tell the surrogate optimization we only have one variable at this point. And then we get to the optimization.
And this is where it's interesting. R2020b has released a new live task which allows us to rapidly and graphically explore our different options. So we can plug in: we've got a nonlinear function, and we know we've got some lower and some upper bounds. And it's going to suggest a solver for us. It suggests fmincon. But I know surrogateopt is going to be best to get me started here.
We then plug in our problem data. So we put in our minimization function. We put in the number of variables. We plug in our bounds. And then I've said to use Parallel Computing Toolbox to run this in parallel. And I only run 50 iterations at this point, because I just want to use this to get a fast approximation.
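In plain code, that stage might look like the following sketch; evalCable is a hypothetical name for the objective constructor described above:

```matlab
% Hypothetical sketch of the surrogateopt stage.
lb = 10; ub = 1000;                 % current bounds, A
TmaxAllowed = 273.15 + 80;          % 353.15 K insulation limit

opts = optimoptions('surrogateopt', ...
    'MaxFunctionEvaluations', 50, ...   % a fast approximation only
    'UseParallel', true);
[Isurr, Tsurr] = surrogateopt(@(I) evalCable(I, TmaxAllowed), lb, ub, opts);

% surrogateopt's objective returns a struct: Fval to minimize, Ineq <= 0.
function out = evalCable(I, Tlimit)
    T = maxTempFromCurrent(I);      % one PDE solve per evaluation
    out.Fval = T;                   % minimize the temperature...
    out.Ineq = Tlimit - T;          % ...while requiring T >= Tlimit
end
```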
And what it looks like is we end up with something like this. I just love watching how the surrogate works. But as you can see, we've ended up at around 353 Kelvin. So it's actually pretty close to where we want to be. But I think we can do better. And this is where fmincon comes in handy.
fmincon is a local solver. It's designed to take that output from surrogateopt and go from there. The setup is pretty much the same. We set our upper bounds and our lower bounds. We set our temperature bounds. And here's the trick, though: we pull our initial starting point out of the solution from the surrogate optimization.
We set up an anonymous function that we're going to use as our minimization. In this case, it's just that max-temperature-from-current function we set up to wrap the PDE. I've got my nonlinear constraint here programmed so that the temperature stays above that limit. And then we come back to the optimizer here. So we've got our lower bounds and upper bounds, we have our nonlinear constraint, and we're using fmincon in this case.
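That refinement stage might look roughly like this sketch, reusing the names from the surrogate stage:

```matlab
% Hypothetical sketch of the fmincon refinement stage.
I0 = Isurr;                          % warm start from the surrogate solution
obj = @(I) maxTempFromCurrent(I);    % minimize the peak temperature

% Constraint in fmincon's c(I) <= 0 form, keeping T at or above the limit
% so the minimization stops right at the 353.15 K boundary.
nonlcon = @(I) deal(TmaxAllowed - maxTempFromCurrent(I), []);

opts = optimoptions('fmincon', 'Display', 'iter', 'UseParallel', true);
[Iopt, Topt] = fmincon(obj, I0, [], [], [], [], lb, ub, nonlcon, opts);
% In the demo, this lands at about 353.3 K at roughly 650 A.
```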
It's the same process here. We set up our problem data: what's my function handle, what's my initial point, my bounds, and my constraint function handle. Again, I'm running in parallel. And what does it look like? Well, we end up with something like this. And we have converged: the best function value we have is 353.3 Kelvin. That's pretty much what we were hoping for. Let's just double-check it. Objective value 353.3, so about 80 degrees Celsius. And to get to that point, we've pushed about 650 amps down the cable.
So that's the process of optimization. We've been able to walk through setting up a PDE, using it as an objective function, and using two methods to get us to the optimum quickly. So thank you very much for your time. I hope you've enjoyed it.
Thank you, Peter. And now we'll turn to Jason for his example of finding optimal gear ratios in an electrified powertrain.
So in this section, I'm going to take you through some of the design work that I've been exploring using our tools. Specifically, we're looking at how to design gear ratios for a three-motor electric vehicle to help optimize across competing objectives, in this case between fuel economy and acceleration. If you know anything about powertrains, you know that a more fuel-efficient vehicle probably isn't going to accelerate as quickly. And if you want a fast car, like a sports car, it's not going to have very good fuel efficiency.
So the architecture we're using is on the right hand side. All you really need to know is that in these boxes I have highlighted are different gear ratios. So on the front wheels, we have a front differential with some gear ratio. And on the back wheels, we have a different gear set with its own gear ratio. And these are the gear ratios that we're going to kind of explore and try to optimize to help improve the fuel economy or acceleration and look at the trade off between the two.
So to help do this, we use various MathWorks tools. First is Powertrain Blockset, which we use to do the powertrain modeling. Second, to build a faster surrogate response model, we use Model-Based Calibration Toolbox. And third, and this is where we get into the meat of the optimization work, we use Global Optimization Toolbox.
I'll quickly go through the first two, Powertrain Blockset and Model-Based Calibration Toolbox. If you're interested in more detail on those, please follow up with me if you would like. If you're unfamiliar with Powertrain Blockset, just know that it's a Simulink-based add-on to support powertrain modeling and control development in the Model-Based Design space. It's a well-documented and open library for you to do powertrain modeling. And it comes with pre-built vehicle models that you can parameterize and customize how you want.
So typically, this model is very fast. And this is the model that we would normally use for doing simulations to look at what the fuel efficiency is of a vehicle or the acceleration time. For this study, we're going to run two different drive cycles, one city cycle and one highway cycle. And then wide open throttle. That's going to give us our zero to 60 mile per hour acceleration time.
So we have to run three simulations for each combination of gear ratios. Now, this model, the way we have it set up, runs three times faster than real time, which is great. It's perfect for control development. But when we think about how many times it's going to be called by the optimization routines, it implies we should consider building some kind of surrogate model for these responses. So how can we make the simulation faster?
One way to do that is to use Model-Based Calibration Toolbox. This is the workflow; you can see on the right-hand side, there's a lot to it. But what I want you to know is, with this workflow, we can develop a design of experiments. So we did that for the various gear ratios. We then took those gear ratios, ran them through the Powertrain Blockset model, and generated about 50 results for the energy usage and the wide-open-throttle acceleration time.
We then took those 50 results and fit statistical models to them automatically using MBC. And those became the basis for our response models. So now, within the bounds of our search space, we can give a unique set of gear ratios and know what the acceleration time and the energy usage would be for it.
We then took it a step further and extracted lookup tables to make the simulation run a little faster. And those lookup tables are going to be our surrogate model. These are what we're going to use for our optimization, which leads us to the next part: the optimization that we did.
So now I'm going to switch over and show a demo of the live script that I used, and how we started to investigate the Pareto front of the multi-objective optimization problem that we had.
To help solve our optimization problems, I used our new Optimize live task that is available with our latest release. And this was a really nice workflow that's already been developed. I just had to go through it and fill in the blanks. And it really helped me set up these optimization problems fairly quickly.
So in this first section here, it's clearly listed that you can go through. This is where you define the input parameters. So for me, I'm defining some weights, which I just have set to one. So they really do nothing. We define the number of variables. Here's where we're loading in our response models. So our surrogate models that we just generated. And then I'm lumping all this information into a cell. The cell is going to get passed into our objective function later on.
Scrolling down, this is where we can set up the actual optimization solver calls. This is a really nice GUI if you're unfamiliar with a lot of this stuff. For us, we have a nonlinear optimization problem that we're trying to solve. We're going to enable some lower and upper bounds which, when you check those, you go down here and can say, OK, it's got to be greater than one and less than 12.
Here is where we can change the solvers. Right now, we're going to use paretosearch. I do have a separate result that I'll show at the end where we use patternsearch as well. So to continue on, we're selecting things. Now we have to choose the objective function.
So I'm going to use a local function, this objective function here. If we had wanted to add a different one, you click New, and it adds a new function at the bottom of the script for you to fill in. I'll show you what that looks like in a second. We're telling it that the optimization input is going to be some variable x. And then we're going to pass in this fixed input A. And again, this is that cell that we just defined.
Here are the constraints that we already discussed. We can select the solver configurations in the settings. So here, I'm choosing a Pareto set size of 100. And then we can choose what kind of metrics we want to plot. So I'll look at function count and objective function values.
This next part is automatically filled in by that GUI that you interacted with. This is the actual problem being set up. Scrolling down further to the bottom quickly, these are the helper functions. So this is that objective function that was created when you clicked New; it would create something down here. This is the one I'm using. So I'm taking this value x, and we are extracting out the gear ratio for the front or the rear. And then from that cell A, we're pulling out our lookup tables, et cetera.
So now we're just interpolating using the interp2 function. Given a gear ratio for the front and rear and the lookup tables, what is the city energy, the highway energy, and the acceleration? And then I'm returning this value F, which is a vector of objective values. This is a multi-objective solver, so paretosearch is going to expect two different outputs, since we're concerned with two different objectives. If you had three, this would be a vector of three.
So for me, the first entry is this combined energy efficiency-- excuse me-- energy usage value. And the second entry is just acceleration time. And again, these weights are just one, so they're not really doing anything. Other than that, it's pretty easy to set up. So we can just go in and click Run here. And it's going to run. You see how fast it runs now that we have the surrogate model; the solver itself, I believe, is already done. And we did 2,300 function evaluations here.
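Stripped of the live-task scaffolding, the objective and solver call look roughly like this sketch; the layout of the cell array A is an assumption:

```matlab
% Hypothetical sketch of the two-objective setup solved with paretosearch.
nvars = 2;                       % front and rear gear ratios
lb = [1 1]; ub = [12 12];        % gear ratio bounds from the live task

opts = optimoptions('paretosearch', ...
    'ParetoSetSize', 100, ...
    'PlotFcn', {'psplotparetof', 'psplotfuncount'});
[xPareto, fPareto] = paretosearch(@(x) gearObjective(x, A), nvars, ...
    [], [], [], [], lb, ub, [], opts);

function f = gearObjective(x, A)
    % A is the cell array of lookup tables and weights defined earlier;
    % this particular layout is an assumption.
    [cityTbl, hwyTbl, accelTbl, gFront, gRear, w] = A{:};
    eCity  = interp2(gFront, gRear, cityTbl,  x(1), x(2));  % city energy
    eHwy   = interp2(gFront, gRear, hwyTbl,   x(1), x(2));  % highway energy
    tAccel = interp2(gFront, gRear, accelTbl, x(1), x(2));  % 0-60 mph time
    f = [w(1)*(eCity + eHwy), w(2)*tAccel];   % one entry per objective
end
```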
And this is our Pareto front. So we can look between the two different objectives. Going back to the script, scrolling down, we see that same figure here with our Pareto front. But as we scroll down here, we also see we have some results. So we have solution. These are all of our different gear ratios. And then we have the objective values.
We can actually manipulate those because now it's in the workspace. So I wrote this script on the bottom here that's going to generate this plot. So we can look at what our Pareto front gear ratios looks like with respect to the contour maps of the two different objectives.
So what this is going to tell us is that, as we go back to the designer-- excuse me-- the manufacturers, we can say, all right, now give me gear ratios that are going to fall within this range of the Pareto curve. We don't want anything that's past 10, for example, because it didn't exist in our Pareto front. So that's really helping, on the design front, narrow down the scope of the search that we're going to be doing for the different design requirements.
Here we have a summary of what we just talked about. So again, we quickly explored the Pareto front using paretosearch, which helped us understand and identify a range of potential gear ratios, which we would then take to our gear set suppliers to look at available gears. So the next question is, how can we take this further? Maybe link this to our design requirements?
So one way we can do that is combine our different objectives into one cost function, which I have written in the middle here using those weights that we've talked about already. And now these weights are a way that we can reflect our vehicle requirements. So if weight one is much greater than weight two, maybe we want a fuel efficient car. And if weight two is much greater than weight one, we want a sports car.
So we could take information about the requirements that we have. Maybe we want our vehicle to be twice as efficient as it is fast. We can directly link that, use patternsearch to solve the optimization problem, and say, OK, from those given sets of potential gear ratios we have, which one is the optimal set?
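A sketch of that scalarized version, reusing the gearObjective function and bounds from before, might look like this:

```matlab
% Hypothetical sketch: collapse the two objectives with requirement weights
% and solve with patternsearch. w = [2 1] says efficiency matters twice as
% much as acceleration; sweeping w traces out the design trade-off.
w = [2 1];
scalarObj = @(x) w * gearObjective(x, A)';   % weighted sum of the two objectives
x0 = [4 4];                                  % assumed starting gear ratios

opts = optimoptions('patternsearch', 'UseParallel', true);
[xBest, fBest] = patternsearch(scalarObj, x0, [], [], [], [], lb, ub, [], opts);
```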
So what I did is I created the figure on the right-hand side, which would be a great design figure to have because it shows the connection between the different design requirements for the vehicle and what gear ratios are linked to those. The way I did that was by taking this patternsearch script that I generated with the live task and wrapping a loop around it. And now we can sweep the different weights and use patternsearch to tell us what the optimal solution is.
And then I binned those into regions of our, quote unquote, available gear ratios. The figure on the left-hand side shows our, quote unquote, truly optimal values and how they ride along the solutions from our paretosearch. So we have good matching between our two approaches there.
So just quickly, in summary, we showed how Powertrain Blockset can be used and was the starting place for our plant model development with this process. We then briefly touched on how Model-Based Calibration Toolbox can be used to create surrogate response models. Ultimately, that improved our objective function call from 18 minutes down to less than a second, because we're using lookup tables.
Another workflow we could have investigated was using surrogateopt, which is in Global Optimization Toolbox. I used the Optimize live task; it's really powerful, and it helped me rapidly set up and explore the design space and look at different solvers. And now that we have it set up, if we wanted to take our Powertrain Blockset model and change some of the motor maps, say we had different specs on the vehicle and the motors, we could quickly reuse this whole process and generate similar results within around five hours. So it's a very repeatable and efficient process.
So thank you again for your time. And if anyone has any questions on Powertrain Blockset or Model-Based Calibration Toolbox, please feel free to email me. Thank you.
Thanks, Jason. The demos used a variety of solvers and workflows. I have some tips on deciding which ones to use for your projects. And I will point out some that we did not cover in the demos. The solvers are packaged into two MATLAB toolboxes, Optimization Toolbox and Global Optimization Toolbox, which requires Optimization Toolbox for its use.
Optimization Toolbox has gradient-based solvers that work on smooth problems. That is, where first and second derivatives exist and are continuous, like this function. The solvers converge to local optima, which are global if the problem is convex. Global Optimization Toolbox solvers don't use gradients and work on both smooth and non-smooth problems, like this one. They search for global optima. The Optimization Toolbox solvers will generally be faster and able to solve larger problems than the Global Optimization Toolbox solvers.
Optimization Toolbox has solvers for many common types of optimization problems. New in R2020b is a solver for second-order cone problems, which can represent convex quadratic constraints and solve them efficiently. I'll also note the least squares solvers. We didn't use them in any of the demos. But they are often used in design optimization projects for parameter estimation.
Global Optimization Toolbox has solvers using popular metaheuristics, such as genetic algorithms, simulated annealing, and particle swarm. Peter and Jason used the patternsearch, paretosearch, and surrogateopt solvers in their examples. These solvers are often more efficient, measured by function evaluations, than the metaheuristic solvers. And we recommend that you try them first. Problems with integer variables can be solved with the genetic algorithm and surrogateopt solvers.
We've shown the two workflows for optimization, problem based and solver based. When should you choose one over the other? I almost always start with the problem based workflow. I find it easier to specify the problem, especially when there are sets of variables and constraints. In contrast with the solver based approach, there is no need to keep track of the indices in order to locate the different variables and constraints in the problem. For non-linear problems, automatic differentiation relieves me of the need to figure out gradients.
On the other hand, the solver based workflow uses the familiar MATLAB functions, matrices, and vectors. The optimization problem may be just one part of an algorithm that uses these data types. And you don't want to have to convert to another form. You may also need some MATLAB operators or functions that are not supported for optimization expressions.
The solver-based workflow also allows you to more easily use some advanced features where you supply more information to the solver, such as Hessians. Also, as of R2020b, you need to use this workflow in order to use the global optimization solvers. But note that you can do the formulation with the problem-based workflow and then convert it to a structure to use with the global solvers. Look for the data science cheat sheets offer on the MathWorks website for a cheat sheet on each of these workflows to get you started.
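That conversion is a short step. As a sketch, reusing the rocket problem from earlier:

```matlab
% Hypothetical sketch: formulate with the problem-based workflow, then hand
% the converted structure to a Global Optimization Toolbox solver.
problem = prob2struct(prob, x0);             % objective/constraint functions + x0
problem.solver  = 'patternsearch';
problem.options = optimoptions('patternsearch');
[x, fval] = patternsearch(problem);
```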
There is another option for doing design optimization when your system model is in Simulink. Simulink Design Optimization provides functions, interactive tools, and blocks for analyzing and tuning model parameters. Using Optimization Toolbox and Global Optimization Toolbox solvers, it supports not just optimization but also design exploration and sensitivity analysis.
Now for the key takeaways. I hope the demos and information we've shared today will help you with your design optimization projects. We showed how having a variety of solvers is helpful in getting the job done. We showed the two workflows for setting up and running optimizations and the diagnostics that are available. Finally, we showed that using the MATLAB platform makes it possible to do all the tasks surrounding optimization, like developing the system models and visualizing the results. Symbolic Math Toolbox can help with computing analytical derivatives. And parallel computing can speed up optimization, especially on large problems.
One thing we didn't show was how to share your design optimization work with others. Besides sharing your MATLAB code as projects or toolboxes, you can compile it into apps with MATLAB Compiler. Or for some of the solvers, generate C and C++ code for deploying to embedded or enterprise systems using MATLAB Coder.
That concludes our presentation. The product web pages have links to documentation, examples, and videos for you to learn more. You will also find links to trial offers there, so you can try what you've learned. Thank you.