Video length is 16:00

Fusing a Mag, Accel, and Gyro to Estimate Orientation | Understanding Sensor Fusion and Tracking, Part 2

From the series: Understanding Sensor Fusion and Tracking

Brian Douglas

This video describes how we can use a magnetometer, accelerometer, and a gyro to estimate an object’s orientation. The goal is to show how these sensors contribute to the solution and to explain a few things to watch out for along the way.

We’ll cover what orientation is and how we can determine orientation using an accelerometer and a magnetometer. We’ll also talk about calibrating a magnetometer for hard and soft iron sources and ways to deal with corrupting accelerations.

We’ll also show a simple dead reckoning solution that uses the gyro on its own. Finally, we’ll cover the concept of blending the solutions from the three sensors.

Published: 25 Sep 2019

In this video, we’re going to talk about how we can use sensor fusion to estimate an object’s orientation. Now you may call orientation by other names, like attitude, or maybe heading if you’re just talking about direction within a 2D plane. This is why the fusion algorithm can also be referred to as an attitude and heading reference system. But it’s all the same thing; we want to figure out which way an object is facing relative to some reference.

We can use a number of different sensors to do this. For example, satellites could use star trackers to estimate attitude relative to the inertial star field, whereas an airplane could use an angle of attack sensor to measure orientation of the wing relative to the incoming free air stream.

Now, in this video, we’re going to focus on using a very popular set of sensors that you will find in every modern phone and a wide variety of autonomous systems: a magnetometer, accelerometer, and a gyro.

The goal of this video is not to develop a fully fleshed-out inertial measurement system. There’s just too much to cover to really do a thorough job. Instead, I want to conceptually build up the system and explain what each sensor is bringing to the table, and a few things to watch out for along the way. I’ll also call out some other really good sources that I’ve linked to below where you can dive into more of the details. So let’s get to it. I’m Brian, and welcome to a MATLAB Tech Talk.

When we are talking about orientation, we’re really describing how far an object is rotated away from some known reference frame. For example, the pitch of an aircraft is how far the longitudinal axis is rotated off of the local horizon. So in order to define an orientation, we need to choose the reference frame that we want to describe the orientation against, and then specify the rotation from that frame using some representation method. We have several different ways to represent a rotation. Perhaps the easiest to visualize and understand at first is the idea of roll, pitch, and yaw. This representation works great in some situations; however, it has some widely known drawbacks in others. So, we have other ways to define rotations for different situations, things like the direction cosine matrix and the quaternion.

The important thing for this discussion is not what a quaternion is or how a DCM is formulated, but rather just to understand that these groups of numbers all represent a fixed three-dimensional rotation between two different coordinate frames: the object’s own coordinate frame that is fixed to the body and rotates with it, and some external coordinate frame. And it’s this rotation, or these sets of numbers, that we’re trying to estimate by measuring some quantity with sensors.

So let’s get to our specific problem. Let’s say we want to know the orientation of a phone that’s sitting on a table; that is, the phone’s body coordinate frame relative to the local north, east, and down coordinate frame. We can find the absolute orientation using just a magnetometer and an accelerometer. A little later on we’ll add a gyro to improve accuracy and correct for problems that occur when the system is moving, but for now we’ll stick with these two sensors. Simply speaking, we could measure the phone’s acceleration, which would just be due to gravity since it’s sitting stationary on the table, and we’d know which way is up: the direction opposite gravity. And then we can measure the magnetic field in the body frame to determine north.

But here’s something to think about: The mag field points north but it also points up or down depending on the hemisphere you’re in. And it’s not just a little bit. In North America, the field lines are angled around 60 to 80 degrees down, which means the field is mostly in the gravity direction. The reason a compass points north and not down is that the needle is constrained to rotate within a 2D plane. However, our mag sensor has no such constraint, so it’s going to return a vector that also has a large component in the direction of gravity. So to get north, we need to do some cross products. We can start with our measured mag and accel vectors in the body frame. Down is in the opposite direction of the acceleration vector. East is the cross product of down and the magnetic field, and then north is the cross product of east and down. So the orientation of the body is simply the rotation between the body frame and the NED frame, and I can build the direction cosine matrix directly from the N, E, and D vectors that I just calculated.
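Here’s a minimal MATLAB sketch of that math. The inputs are assumed to be a single accelerometer reading and a single magnetometer reading expressed in the body frame, and the function name is just illustrative:

```matlab
function dcm = ecompass_from_accel_mag(accel, mag)
% Estimate the rotation from the body frame to the local NED frame
% using one accelerometer reading and one magnetometer reading.
accel = accel(:);                 % force column vectors
mag   = mag(:);

down  = -accel / norm(accel);     % accel measures the reaction to gravity (up), so down is its opposite
east  = cross(down, mag);         % east is perpendicular to both down and the magnetic field
east  = east / norm(east);
north = cross(east, down);        % north completes the right-handed triad
north = north / norm(north);

% Rows are the N, E, and D directions expressed in body coordinates,
% so this matrix rotates body-frame vectors into the NED frame.
dcm = [north.'; east.'; down.'];
end
```

If you have the Sensor Fusion and Tracking Toolbox, the built-in ecompass function performs this same accelerometer-plus-magnetometer orientation estimate for you.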

So let's go check out an implementation of this fusion algorithm. I have a physical IMU; it’s the MPU-9250 and it has an accelerometer, magnetometer, and gyro. Although for now we’re not going to use the gyro. I’ve connected it to an Arduino through I2C, which is then connected to MATLAB through USB. I’ve pretty much just followed along with this example from the MathWorks website, which provides some of the functions that I’m using, and I’ve linked it below if you want to do the same.

But let me show you my simple script. I connect to the Arduino and the IMU, and I’m using a MATLAB viewer to visualize the orientation, which I update each time I read the sensors. The viewer is a built-in function that comes with the Sensor Fusion and Tracking Toolbox. The small amount of math here is basically reading the sensors, performing the cross products, and building the DCM.
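For context, a stripped-down version of that loop might look something like the sketch below. It assumes the MATLAB Support Package for Arduino Hardware (the arduino and mpu9250 objects and their read functions) and reuses the ecompass_from_accel_mag helper sketched above; the 3-D orientation viewer from the toolbox example is left out to keep things short.

```matlab
% Minimal read-and-estimate loop (assumes the Arduino support package).
a   = arduino();              % connect to the board over USB
imu = mpu9250(a);             % MPU-9250 attached over I2C

for k = 1:500
    accel = readAcceleration(imu);       % 1x3 specific force, m/s^2
    mag   = readMagneticField(imu);      % 1x3 magnetic field, microtesla

    dcm = ecompass_from_accel_mag(accel, mag);   % build the DCM from the two readings
    disp(dcm);   % the real script passes the orientation to the 3-D viewer instead
end
```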

And that’s pretty much the whole of it. So if I run this, we can watch the algorithm in action. Notice that when it’s sitting on the table it does a pretty good job of finding down; it’s in the positive X axis, and if I rotate it to another orientation you can see that it follows pretty well with my physical movements. So overall, pretty easy and straightforward, right?

Well, there are some problems with this simple implementation and I want to highlight two of them. The first is that accelerometers aren’t just measuring gravity; they measure all linear accelerations. So if the system is moving around a lot, it’s going to throw off the estimate of where down is. You can see here that I’m not really rotating the sensor much, but the viewer is jumping all over the place. This might not be much of a problem if your system is largely not accelerating, like a plane while it’s cruising at altitude or a phone that’s sitting on a table. But linear accelerations aren’t the only problem. Even rotations can throw off the estimate because an accelerometer that’s not located at the center of rotation will sense an acceleration when the system rotates. So we have to figure out a way to deal with these corruptions.

And a second problem is that magnetometers are affected by disturbances in the magnetic field.  Obviously, you can see that if I get a magnet near the IMU, the estimate is corrupted. So what can we do about these two problems? Well, let’s start with the magnetometer. If the magnetic disturbance is part of the system and rotates with the magnetometer, then it can be calibrated out.

These are the so-called hard iron and soft iron sources. A hard iron source is something that generates its own magnetic field. This would be an actual magnet like the ones in an electric motor or it could be a coil that has a current running through it from the electronics themselves.  If you tried to measure an external magnetic field, a hard iron source near the magnetometer would contribute to the measurement. If we rotate the system around a single axis and measure the magnetic field, the result would be a circle that is offset from the origin. So your magnetometer would read a larger intensity in some directions and a smaller intensity in the opposite direction.

A soft iron source is something that doesn’t generate its own magnetic field but is what you might call magnetic; you know, like a nail that is attracted to a magnet or the metallic structure of your system. This type of metal will bend the magnetic field as it passes through and around it and the amount of bending changes as that metal rotates. So a soft iron source that rotates with the magnetometer would distort the measurement, creating an oval rather than a circle.

So even if you had a perfect noiseless magnetometer, it would still return an incorrect measurement simply because of the hard and soft iron sources that are near it. And your phone and pretty much all systems have both of them.

So let’s talk about calibration. If the system had no hard or soft iron sources and you rotated the magnetometer through the full 4 pi steradians of directions, then the magnetic field vector would trace out a perfect sphere with the radius being the magnitude of the field. Now, a hard iron source would offset the sphere and a soft iron source would distort it into some ellipsoid. If we could measure this ahead of time, we could calibrate the magnetometer by finding the offset and transformation matrix that would convert it back into a perfect sphere centered at the origin. This transformation matrix and bias would then be applied to each measurement, essentially removing the effects of the hard and soft iron sources.

This is exactly what your phone does when it asks you to spin it around in all directions before using the compass. Here, I’m demonstrating this by calibrating my IMU using the MATLAB function magcal. I’m collecting a bunch of measurements in different orientations and then finding the calibration coefficients that will fit them to an ideal sphere.

Now that I have an A matrix that will correct for soft iron sources and a b vector that will remove the hard iron bias, I can add a calibration step to the fusion algorithm that I showed you previously, and this will produce a more accurate result than what I had before.
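In code, the calibration and correction step is only a couple of lines. This sketch assumes rawMag is an N-by-3 matrix of magnetometer samples collected while tumbling the sensor through many different orientations:

```matlab
% Fit the raw samples to an ideal sphere (Sensor Fusion and Tracking Toolbox).
[A, b, expectedField] = magcal(rawMag);   % A: 3x3 soft iron correction, b: 1x3 hard iron offset

% Apply the correction to every subsequent measurement.
calibratedMag = (rawMag - b) * A;
```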

All right, now let’s go back to solving the other problem of the corrupting linear accelerations. One way to address this is by predicting the linear acceleration and removing it from the measurement prior to using it. This might sound difficult to do, but it is possible if the acceleration is the result of the system actuators—you know, rather than an unpredictable external disturbance. We can take the commands that are sent to the actuators and play them through a model of the system to estimate the expected linear acceleration, and then subtract that value from the measurement. This is something that is possible, say, if your system is a drone and you’re flying around by commanding the four propellers.
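Conceptually, that correction is just a subtraction. The sketch below assumes you already have a model function that maps actuator commands to an expected body-frame acceleration, which is the genuinely hard part in practice; both names here are hypothetical placeholders.

```matlab
% systemModel is a hypothetical function that predicts the body-frame linear
% acceleration produced by the current actuator commands (e.g., propeller speeds).
predictedAccel = systemModel(actuatorCommands);

% Remove the predicted motion so what's left is (ideally) just gravity plus noise.
accelForOrientation = accel - predictedAccel;
```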

If we can’t predict the linear acceleration or the external disturbances are too high, another option is to ignore accelerometer readings that are outside of some threshold from a 1 g measurement. If the magnitude of the reading is not close to the magnitude of gravity, then clearly the sensor is picking up on other movement and it can’t be trusted.
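A gate like that can be as simple as comparing the measured magnitude against the nominal 1 g value. The threshold below is purely illustrative and would need tuning for a real system:

```matlab
g         = 9.81;    % nominal gravity magnitude, m/s^2
tolerance = 0.5;     % accept readings within 0.5 m/s^2 of 1 g (illustrative value)

if abs(norm(accel) - g) < tolerance
    dcm = ecompass_from_accel_mag(accel, mag);   % reading looks like pure gravity, use it
else
    % Reading is dominated by linear acceleration -- skip this update
    % and keep the previous orientation estimate.
end
```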

This keeps corrupted measurements from getting into our fusion algorithm, but it’s not a great solution because we stop estimating orientation during these times and we lose track of the state of the system. Again, not really a problem if we’re trying to estimate orientation for a static object; this algorithm would work perfectly fine. However, often we want to know the orientation of something that is rotating and accelerating. So we need something else here to help us out.

What we can do is add a gyro into the mix to measure the angular rate of the system. In fact, the combination of magnetometer, accelerometer, and gyro is so popular that they are often packaged together as an inertial measurement unit, like I have with my MPU-9250. So how does the gyro help?

Well, to start, I think it’s useful to think about how we can estimate orientation for a rotating object with just the gyro on its own, no accel and no magnetometer. For this, we can multiply the angular rate measurement by the sample time to get the change in angle during that time. Then, if we knew the orientation of the phone at the previous sample time, we can add this delta angle to it and have an updated estimate of the current orientation. If the object isn’t rotating, then the delta angle would be zero and the orientation wouldn’t change, so it all works out. And by repeating this process for the next sample and the one after that, we will know the orientation of the phone over time. This process is called dead reckoning and essentially it’s just integrating the gyro measurement.
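A quaternion version of that integration is only a few lines. This sketch uses the quaternion class from the Sensor Fusion and Tracking Toolbox, reuses the mpu9250 object from earlier, and assumes the angular rate is roughly constant over one sample period; the sample time and loop length are placeholders.

```matlab
q  = quaternion(1, 0, 0, 0);    % initial orientation -- must be known to start dead reckoning
dt = 0.01;                      % sample time in seconds (placeholder)

for k = 1:500
    gyro = readAngularVelocity(imu);          % 1x3 angular rate in the body frame, rad/s
    dq   = quaternion(gyro * dt, 'rotvec');   % small rotation accumulated over one sample time
    q    = q * dq;                            % integrate: previous orientation plus the delta angle
end
```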

There are downsides to dead reckoning. One, you still have to know the initial orientation before you can begin, so we have to figure that out, and two, sensors aren’t perfect. They have bias and high-frequency noise that will corrupt our estimate. Now, integration acts like a low-pass filter, so that high-frequency noise is smoothed out a little bit, which is good, but the result drifts away from the true orientation due to random walk as well as from integrating any bias in the measurements. So, over time, the orientation will smoothly drift away from the truth.

So at this point we have two different ways to estimate orientation, one using the accelerometer and the magnetometer and the other using just the gyro. And each has its own respective benefits and problems. And this is where sensor fusion comes in once again. We can use it to combine these two estimates in a way that emphasizes each of their strengths and minimizes their weaknesses. Now, there are a number of sensor fusion algorithms that we can use, like a complementary filter or a Kalman filter, or the more specialized but very common Madgwick or Mahony filters, but at their core, every one of them does essentially the same thing.

They initialize the attitude, either by setting it manually or using the initial results of the mag and accelerometer, then, over time, they use the direction of the mag field and gravity to slowly correct for the drift in the gyro. Now, I go into a lot more detail in my video on the complementary filter, and MathWorks has a series on the mechanics of the Kalman filter, both linked below, but in case you don’t go and watch them right away, let me go over a very high-level concept of how this blending works.

Let’s put our two solutions at opposite ends of a scale that represents our trust in each solution.  And we can place a slider that specifies which solution we trust more. If the slider is all the way left, then we trust our mag/accel solution 100% and we just use that value for our orientation.  All the way to the right, and we use the dead reckoning solution 100%. When the slider is in between, this is saying that we trust both solutions some amount and therefore want to take a portion of one solution and add it to the complementary portion of the other solution. By putting the slider almost entirely to the dead reckoning solution, we are mostly trusting the smoothness and quick updates of the integrated gyro measurements, which gives us good estimates during rotations and linear accelerations, but we are ever so gently correcting that solution back toward the absolute measurement of the mag and accel to remove the bias before it has a chance to grow too large. So these two approaches complement each other.
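For a single angle, that slider is literally just a number between 0 and 1. Here’s a minimal scalar complementary-filter sketch; the gain and variable names are illustrative rather than tuned or standard values.

```matlab
alpha = 0.98;   % slider position: how much we trust the gyro/dead-reckoning path (illustrative)

% One filter update per sample:
gyroAngle     = previousAngle + gyroRate * dt;   % dead-reckoning step from the gyro
accelMagAngle = measuredAngle;                   % absolute angle from the accel/mag solution (assumed available)

previousAngle = alpha * gyroAngle + (1 - alpha) * accelMagAngle;   % blended estimate
```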

Now, for the complementary filter, you as the designer figure out manually where to place this slider, how much you trust one measurement over the other. With a Kalman filter, the optimal gain or position of the slider is calculated for you after you specify things like how much noise there is in the measurements and how good you think your system model is. So the bottom line is that we’re doing some kind of fancy averaging between the two solutions based on how much trust we have in them. Now, if you want to practice this yourself, the MATLAB tutorial I used earlier goes through a Kalman filter approach using the MATLAB function ahrsfilter.
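As a rough sketch of that workflow, the ahrsfilter object is created once with the sensor sample rate and then called with logged accelerometer, gyroscope, and magnetometer data; the sample rate and data arrays below are placeholders.

```matlab
Fs   = 100;                              % sensor sample rate in Hz (placeholder)
fuse = ahrsfilter('SampleRate', Fs);     % Kalman-filter-based attitude and heading estimator

% accelData (m/s^2), gyroData (rad/s), and magData (uT) are N-by-3 arrays of logged measurements.
orientation = fuse(accelData, gyroData, magData);   % N-by-1 array of orientation quaternions
```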

And that’s where I’m going to leave this video. In the next video, we’re going to take this one step further and add GPS and show how our IMU and orientation estimate can help us improve the position we get from the GPS sensor.

So, if you don’t want to miss that or other future Tech Talk videos, don’t forget to subscribe to this channel. Also, if you want to check out my channel, Control System Lectures, I cover more control topics there as well. I’ll see you next time.