Video length is 18:00

How to Build the Flight Code | Drone Simulation and Control, Part 3

From the series: Drone Simulation and Control

Brian Douglas

This video describes how to create quadcopter flight software and program a drone, starting from the control architecture developed in the last video. It covers how to process the raw sensor readings and use them with the controllers to calculate motor speed commands.

In addition to those two functions, the flight control software is also responsible for reference command generation, data logging, and fault protection. You’ll see how each of these contributes to getting a quadcopter to hover safely.

A quadcopter example in Simulink® is used as a starting point for the flight software, and you’ll learn how to load and run the code on the Parrot® Minidrone directly from Simulink.

Published: 12 Oct 2018

We are well on our way to designing a control system for a quadcopter. At this point in the series, we’ve learned how quadcopters generate motion with their four propellers and we’ve stepped through a control system architecture that we think is capable of getting our drone to hover. However, there are still a few more steps we need to take before we can actually get the drone up and flying. First, we need to code the control logic in a way that we can put it on the minidrone. We’ll call this the flight code. And second, we’ll need to tune and tweak the flight code until the hover performance is what we’re looking for. To do that, we’ll use Model-Based Design, where a realistic model of the quadcopter and the environment lets us design our flight code and simulate the results. So we’ll have two different bits of software that we’ll write: the actual flight code that runs on the quadcopter, and the model code that we use to simulate the real world. In this video, we’re going to explore the flight code in more detail. I’m Brian, and welcome to a MATLAB Tech Talk.

Flight control software is just a small part of the overall flight code that will exist on the Parrot minidrone. There’s also code to operate and interface with the sensors and process their measurements. There’s code to manage the batteries, the Bluetooth® interface, and the LEDs; there’s code to manage the motor speeds; and so on. There’s a bunch.

One option to implement the flight controller would be to write the C code by hand, compile the entire flight code with your changes to the flight controller, and finally load the compiled code onto the minidrone. This is a perfectly reasonable approach to creating flight code, but it has a few drawbacks when developing feedback control software. With this method, we don’t have an easy way to tune the controllers except by tweaking them on the hardware. We could develop our own tuning tools and a model of the system and simulate how it would behave that way, but my experience is that designing and modeling control systems in C code makes it hard to explain the architecture to other people, and it’s harder to understand how changes impact the whole system than it is with graphical methods.

So we’re going to use a second option, describing the flight controller graphically using block diagrams. With this option, we’ll develop the flight controller in Simulink, then auto code it into C code, where if we wanted to we could make changes manually, then compile that C code and load it onto the minidrone.  The benefit of adding this extra step is that I think the Simulink code is easier to understand, plus over the next two videos, we’ll talk about how we can build a model of the drone and the environment in Simulink so that we can simulate the performance of our flight controller and use existing tools to tune the controller gains.

We won’t need to worry about writing most of the flight code because we are going to use the Simulink Support Package for Parrot Minidrones to program our custom flight control software. This package loads new flight firmware onto the vehicle in a way that keeps all of the normal operating functions of the drone in place but lets us replace just the control portion of the firmware. As long as we keep the same input and output signals intact, then when we program the minidrone through Simulink, any code we write will be placed in the correct spot and can interface with the rest of the minidrone firmware.

So in this video, here’s what we’re trying to do: design a Simulink model that takes external commands and the measured sensor readings as inputs, and outputs the commanded motor speeds along with a safety flag that will shut the minidrone down if set. This flag is our protection in case our code causes the drone to run away or go crazy in some way. Then we can build the C code from that model and fly the actual drone with that software to see how it does.

We have a pretty good start on the flight control system from the architecture that we developed in the last video, but it’s not all of the software that we need to write. This is only the controller part of the control system. For example, the drone has a sensor that measures air pressure and this air pressure reading is what is passed into our flight control system. However, we’re not trying to control the drone to a particular pressure, we’re trying to control an altitude. Therefore, there is additional logic needed to convert all of the measured states coming from the sensors into the control states needed by the controller. We’ll call this block the state estimator.
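
To make that pressure-to-altitude idea concrete, here is a minimal MATLAB sketch of one common way to do the conversion, using the standard barometric formula. The variable names and values are made up for illustration; the actual state estimator in the model is more involved than this.

```matlab
% Minimal sketch (illustrative values): converting a raw pressure reading
% into an altitude estimate with the standard barometric formula.
P0  = 101325;                            % reference (sea-level) pressure, Pa
P   = 100950;                            % example raw pressure reading, Pa
alt = 44330 * (1 - (P/P0)^(1/5.255));    % approximate altitude above reference, m
```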

In addition to the state estimator and the controllers, there’s also logic we have to write that determines whether or not to set the shutdown flag. We could leave this code out, but then we’d run the risk of the drone flying away or causing harm to nearby observers. We could check for things like whether the rotation rate of the drone is above some threshold, or whether the position or altitude is outside of some boundary. Creating this code is relatively easy and can really save us from damaging the hardware or hurting other people.
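
As a minimal sketch of the kind of checks that logic could perform, here is one way it might look in MATLAB. The thresholds, signal names, and example values are all made up for illustration; the model’s own crash predictor subsystem implements its own version of these checks.

```matlab
% Illustrative fault-protection thresholds
maxRate = 5;                          % rad/s, example angular-rate limit
maxAlt  = 2;                          % m, example altitude limit
maxXY   = 1.5;                        % m, example horizontal boundary

% Hypothetical measured values for this example
bodyRates  = [0.10 0.05 -0.02];       % example body rates, rad/s
altitude   = 0.70;                    % example altitude magnitude, m
positionXY = [0.05 -0.10];            % example horizontal position, m

% Set the shutdown flag if any check trips
shutdownFlag = any(abs(bodyRates) > maxRate) || ...
               abs(altitude) > maxAlt || ...
               norm(positionXY) > maxXY;
```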

Lastly, we need to think about data logging. The firmware that already exists on the minidrone records data that we have access to during and after the flight. But since that firmware doesn’t know about the software we’ve written, we need to make sure data logging is set up for the variables we’re interested in. Since we’ll be using Simulink to write our flight code, we can easily create logic that will store data as a .mat file locally on the drone, which we can download to MATLAB® after the flight.
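
As a quick sketch of what that post-flight step might look like, once the logged .mat file has been copied off the drone, you could load and plot it in MATLAB. The file name and variable names here are hypothetical, purely for illustration.

```matlab
% Minimal post-flight sketch, assuming a hypothetical log file and variables
flightLog = load('droneFlight.mat');          % hypothetical log file copied from the drone
plot(flightLog.time, flightLog.altitude);     % hypothetical logged signals
xlabel('Time (s)'); ylabel('Altitude (m)');
```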

These are the four main subsystems that we need to develop in Simulink in order to have safe and functioning flight code. But we’re not going to build the entire Simulink model of this code from scratch in this video; it would just take too much time. Luckily, we don’t have to. Aerospace Blockset™ in MATLAB has a quadcopter project based on the Parrot minidrone that we’re going to use as a starting point. There are some good webinars that I’ll link to in the description that describe how to open this model and use it, so I’m not going to cover much of that here. Instead, I’m going to show you how this Simulink code matches the control architecture that we developed in the last video and point out how it also accomplishes state estimation, data logging, and fault protection. We’ll end by auto coding the Simulink model and seeing it in action by flying the minidrone.

One thing to note when looking at this Simulink model is that it was developed at MIT for the lab portion of a control theory course and, therefore, is set up in a way that teaches the underlying theory. In some cases, it has more logic than we need to perform our relatively simple hover maneuver. I’m going to start from this stock model so that it looks the same to you when you open it, but I’m going to modify it slightly as we go along so that it’s clearer for our purpose.

Alright, let’s get to it. As you can see, this top level of the model has several subsystems: FCS, airframe, environment, and so on. It might not immediately look like it, but this is our classic feedback control system. In the top left, we have the system that is generating the reference signals or the set points that we want the drone to follow. There is the flight control system block, where the error term is generated and the PID controllers live. This is the flight code block that gets auto coded and loaded onto the minidrone and where we’re going to spend the majority of the rest of this video.

The outputs from the FCS block are the motor commands that are fed into the plant, the airframe dynamics. The visualization block up in the corner just plots signals and runs the 3D visualizer while the simulation runs; it’s outside of our feedback loop. There is an environment block that models things like the gravity vector and air pressure for the plant and the sensors. And finally, there’s the sensor model block, which simulates the noise and dynamics of the four sensors that are on the minidrone. So you can see the characteristic feedback loop at this top level. The important thing to realize for this video is that the FCS block is the flight code, and everything else is part of the model used for simulation.

So now, let’s go into the FCS block and see what’s there. First off, you’ll notice that there are three inputs instead of the two that I mentioned. This is because the software as written makes use of the camera and image processing to help with precision landing. While precision landing is useful, this complicates our hover controller so I’m going to remove it for now and get back to just the two inputs.

OK, let’s open the flight control system block where things will start to look familiar. And we’ll begin with the heart of our control system, the controller itself. As a quick reminder, the controller subsystem takes the reference signal and compares it to the estimated states to get the error signals. The error is then fed into several PID controllers to ultimately generate the required motor commands. So let’s open up the subsystem block and see what it looks like.

Graphically, it looks different than the controller architecture we covered in the last video, but I assure you that the logic is mostly the same; it’s just organized in a slightly different way, and this logic allows us to command special take-off behaviors as well as control the roll and pitch angles directly for landing. We won’t need either of these capabilities for our simple hover maneuver. To show you that the remaining logic matches the architecture we developed before, I’m going to zip through some reorganization so that what we’re left with matches our expectation. There we go.

You can now see that we have the XY position outer loop controller feeding into the roll pitch inner loop controller. And independent of those, we have the yaw and altitude controllers. Overall, there are six PID controllers that work together to control the position and orientation of the minidrone.
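
To make that cascade a little more concrete, here is a rough MATLAB sketch of the hand-off from the outer XY position loop to the inner roll/pitch loop. The gains, signal names, and especially the signs are illustrative only; the actual signs depend on the frame conventions used in the model.

```matlab
% Hypothetical gains and example signal values, for illustration only
Kp_pos = 0.2;    Kp_att = 0.01;   Kd_att = 0.003;
posRef = [0; 0]; posEst = [0.10; -0.05];      % XY position, m
pitchEst = 0.02;  rollEst = -0.01;            % attitude estimates, rad
pitchRate = 0.10; rollRate = -0.05;           % body rates, rad/s

% Outer loop: XY position error becomes the desired pitch and roll angles
posErr   = posRef - posEst;
pitchRef = Kp_pos * posErr(1);    % sign depends on the frame convention
rollRef  = Kp_pos * posErr(2);    % sign depends on the frame convention

% Inner loop: attitude error becomes torque commands for the mixer
pitchCmd = Kp_att*(pitchRef - pitchEst) - Kd_att*pitchRate;
rollCmd  = Kp_att*(rollRef  - rollEst)  - Kd_att*rollRate;
```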

If we take a look at just the altitude controller, which is set up as proportional and derivative, you’ll see that it might be implemented slightly differently than what you’re used to.

Rather than a single altitude error term feeding into the P and D branches, the P gain is applied to the altitude error derived from the ultrasound, whereas the D gain is applied to the vertical rate measurement directly from the gyro.

This way, we don’t have to take a derivative of a noisy altitude signal; we already have a rate measurement from a different sensor, one that is less noisy than differentiating the ultrasound reading.
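
As a minimal sketch of that structure, with hypothetical gains and signal names, the derivative term acts on a separately measured rate rather than on a differentiated error:

```matlab
% Illustrative gains and example values
Kp = 0.8;   Kd = 0.3;
altRef  = 0.7;     % commanded altitude, m (up positive here for clarity)
altMeas = 0.55;    % altitude derived from the ultrasound, m
vzMeas  = 0.12;    % vertical rate from a separate measurement, m/s

% PD control with the D term on the measured rate instead of d(error)/dt
thrustCmd = Kp*(altRef - altMeas) - Kd*vzMeas;
```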

I’m going to talk more about the benefits and drawbacks of setting up your PID controller this way, and what it means for tuning, in a future video in this series. For now, we’ll just accept that this is the way it is and move on.

The outputs of these PID controllers are force and torque commands, which all feed into the mixing algorithm. This produces the required thrust per motor, and then that thrust command is converted into a motor speed command through this block.
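
Roughly speaking, a mixing algorithm maps the total thrust and the three torque commands to four per-motor thrusts, something like the MATLAB sketch below. The matrix entries, signs, and motor numbering are illustrative only; they depend on the actual motor layout and spin directions, so don’t read this as the model’s exact mixer.

```matlab
% Hypothetical command values from the controllers
thrustCmd = 1.0;  rollCmd = 0.02;  pitchCmd = -0.01;  yawCmd = 0.005;

% Each row distributes [thrust roll pitch yaw] to one motor; signs depend
% on the motor layout and propeller spin directions (illustrative only).
mix = [ 1  -1   1   1;
        1   1   1  -1;
        1   1  -1   1;
        1  -1  -1  -1 ];

motorThrusts = mix * [thrustCmd; rollCmd; pitchCmd; yawCmd];
```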

All in all this subsystem is executing the logic that we built in the last video.

Let’s leave the controller now and move on to the state estimator because there is some really cool stuff going on in this block that we should talk about.

There are two steps involved in taking the raw sensor measurements and generating the estimated states. First, we process the measurements, and then we blend them together with filters to estimate the control states. Let’s look at the details of the sensor processing block. This looks daunting at first, but the underlying concept is pretty simple. Along the top, the acceleration and gyro data are calibrated by subtracting off the bias that has been previously determined. By removing the bias, zero acceleration and zero angular rate should result in a zero measurement. The next step is to rotate the measurements from the sensor reference frame to the body frame. And lastly, the data is filtered through a low-pass filter to remove the high-frequency noise.
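
Here is a minimal MATLAB sketch of those three steps for a single gyro sample. The bias, rotation matrix, and filter weight are made-up illustrative values, not the ones used on the drone.

```matlab
% Hypothetical raw sample, bias, and sensor-to-body rotation
gyroRaw  = [0.11; -0.03; 0.02];     % raw measurement, rad/s
gyroBias = [0.01;  0.02; -0.01];    % previously determined bias, rad/s
R_sensor2body = eye(3);             % sensor-to-body rotation (depends on mounting)
gyroFiltPrev  = [0; 0; 0];          % previous filtered value (filter state)
alpha = 0.2;                        % first-order low-pass filter weight

gyroCal  = gyroRaw - gyroBias;                            % 1) remove the bias
gyroBody = R_sensor2body * gyroCal;                       % 2) rotate into the body frame
gyroFilt = alpha*gyroBody + (1 - alpha)*gyroFiltPrev;     % 3) low-pass filter
```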

Similarly, the ultrasound sensor has its own bias removed. And the optical flow data just has a pass/fail criterion: if the optical flow sensor has a valid position estimate, and we want to use that estimate, then this block sets the validity flag to TRUE. There’s always more sensor processing that could be done, but we’ll see shortly that our drone hovers quite nicely with just this simple amount of bias removal, coordinate transformation, and filtering.

Now that we have filtered and calibrated data, we can begin the task of combining measurements to estimate the states we need for the controllers. The two orange blocks are used to estimate altitude and XY position. If you look inside these blocks, you’ll see that each of them uses a Kalman filter to blend the measurements together with a prediction of how we think the system is supposed to behave in order to come up with an optimal estimate. There is already a MATLAB Tech Talk series that covers Kalman filtering, so I’m not going to spend any more time on them here, but I recommend watching that series if you’re not familiar with it; I’ve left a link in the description of this video.

The other, non-orange block estimates roll, pitch, and yaw, and it does so using a complementary filter instead of a Kalman filter. A complementary filter is a simple way to blend measurements from two sensors together, and it works really well for this system. In the description of this video, I also linked a complementary video on complementary filters that I posted to my channel, if you are interested.
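
A complementary filter can be sketched in just a couple of lines: trust the integrated gyro at high frequency and the accelerometer-derived angle at low frequency. The MATLAB example below shows one update step for the pitch angle, with illustrative values and signal names rather than the model’s actual ones.

```matlab
% One complementary-filter step for pitch (illustrative values)
dt            = 0.005;   % sample time, s
alpha         = 0.98;    % weighting between gyro integration and accel angle
pitchPrev     = 0.05;    % previous pitch estimate, rad
gyroPitchRate = 0.2;     % pitch rate from the gyro, rad/s
accelPitch    = 0.06;    % pitch angle inferred from the accelerometer, rad

pitchEst = alpha*(pitchPrev + gyroPitchRate*dt) + (1 - alpha)*accelPitch;
```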

With the state estimation and controller subsystems described, we can now move on to the other important, but less flashy subsystems.

There is the logging subsystem that is saving a bunch of signals, like the motor commands and position references, to .mat files. These are the values that we can download and plot after the flight. We also have the crash predictor flag subsystem. The logic in here is just checking sensed variables like position and angular velocity for outliers. When an outlier is detected, it sets the flag that shuts down the minidrone. This is where you could add additional fault protection logic if you wanted to.

There is also the sensor data group, which is just pulling individual sensor values off of the sensor bus so that we have access to them elsewhere in the code. 

And finally, there is the landing logic block. This block will overwrite the external reference commands with landing commands if the landing flag is set. Once again, I’ll remove the switch and landing portion to simplify the logic since we don’t want to execute a precision landing.

I have to change one other thing here because the reference block at the top level of the model isn’t part of the code that gets auto coded to run on the drone. So it won’t get loaded onto the drone and execute. But that’s OK, because I can move this logic into the flight code right now. Since I know I just want the drone to hover, I’m going to hardcode the reference values in this block. There we go. This will keep the drone at an XY position of (0, 0) and an altitude of -0.7 meters. Remember, the z-axis points down in the drone reference frame, so this is an altitude of 0.7 meters up.
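
In other words, the hardcoded hover setpoint looks something like this (the variable names are illustrative; the negative Z value reflects the down-positive convention just mentioned):

```matlab
% Hover setpoint in the drone's down-positive frame: x, y, z in meters
posRef = [0; 0; -0.7];   % -0.7 m in z means 0.7 m above the ground
yawRef = 0;              % hold the initial heading, rad
```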

OK, so this is no longer the landing logic, but instead is the block that is generating our reference commands. And we don’t need these inputs anymore since the reference commands are now hardcoded values.

That completes the very quick walkthrough of the entire flight control software that is in this quadcopter model. You should now have a sense of how each of these subsystems contributes to getting our minidrone to hover safely, whether it’s the sensor processing and filtering, the various feedback controllers, or even the logic that sets the stop flag. We need it all to have a successful control system. I haven’t yet spoken about how to tune all of these controllers; that will be a future video in this series. For now, we can rely on the fact that the default gains delivered with the model are already tuned pretty well. OK, enough looking at Simulink models. I think it’s about time we see this default flight code in action by flying it on a Parrot minidrone.

We need the Simulink Support Package for Parrot Minidrones installed in order to allow Simulink to communicate with the drone. I already have this package so all I need to do is pair my drone to my computer via Bluetooth and hit the build model button at the top of the model. Again, the webinar linked in the description describes how to set all of this up if you’re interested in doing this at home.

While the software is building, let me revisit what is going on behind the scenes. We have all of this flight code in Simulink, which at the top level has the necessary interfaces for the rest of the minidrone firmware. We’re now in the process of auto coding C code from the Simulink block diagrams. If you have Simulink Coder™ installed like I do, then you will have access to the C code and can make changes if you like. If you don’t have Simulink Coder, then you just can’t see the code, but it is still generated. The C code is then compiled on your computer, and the compiled binary is sent to the minidrone via Bluetooth and placed in the correct spot in the firmware. Once it’s ready to fly, a GUI pops up that allows us to start the code on the drone and, more importantly, stop it. I’ve set up my computer and drone in an area that’s safe to fly, but don’t forget to grab your safety goggles. Now we just sit back, hit start, and watch our feedback control system in action.

I told you the default gains are pretty good. In the next video, I’ll dive deeper into the models so that you have a pretty good idea of how we’re simulating the real world, and then how we’re going to use those models to tune the controllers ourselves. If you don’t want to miss the next Tech Talk video, don’t forget to subscribe to this channel. Also, if you want to check out my channel, Control System Lectures, I cover more control theory topics there as well. Thanks for watching, and I’ll see you next time.