Video length is 10:41

Logic verification and testing | Drilling Systems Modeling & Automation, Part 6

From the series: Drilling Systems Modeling & Automation

Due to the importance and complexity of modern drilling systems, it is commonly required that design verification and validation be performed. This helps identify unintended design errors in models, such as integer overflow, dead logic, and array access violations. Design verification should complement control system development activities and should be developed and implemented in a way that minimizes additional work and cost while maximizing effectiveness. As a result, formal methods and tools are increasingly important for precise and reliable verification.

In this video, we discuss formal methods to validate and test the drawworks supervisory control logic before deployment to the PLC. We use Simulink Design Verifier to generate test inputs and cases that achieve model coverage. We also walk through how you can use it to detect run-time and modeling errors, such as division by zero, and to discover design logic that remains inactive during model execution. Finally, we generate test cases from models of system requirements with the click of a button. With these, we can augment existing manually created test cases based on standard operating modes of the auto-driller.

Published: 2 Jul 2020

Hi, and welcome back. My name is Jonathan LeSage, and I'm an application engineer with The MathWorks. We've been covering a series of topics on digital twins for a drawworks drilling system. And today we're going to take a deep dive into one aspect of this, which is the software side and this auto-driller.

So in the past few videos, we've talked about modeling the actual system itself. We talked about calibration of the model using real data. Then we talked about using the model to optimize system behavior. And then we showed how we could actually build supervisory logic.

Today we're actually going to be focused more on how we validate and test that logic we've come up with before we take it out into the field. And this goes one step beyond just running some simulations to test different modes. We're actually going to be using formal methods.

All right, now let's hop over to Simulink again. So here we have that auto-driller logic. Just to quickly review what's going on under the hood here, we have a Stateflow diagram with a couple of different operational modes: a manual mode, one where we can disable the limits, an auto-driller mode where we're basically raising the hoist at the top. We have a mode for adding pipe and making that connection, and then we can lower and drill, and then we can break string at the bottom.

In these states you'll notice that we call this energy speed limiter: for raising, we've got 1,000 RPM as the rate we want to raise, and minus 1,000 for lowering. And then we have some inputs like the speed, the position, and the mass. What this energy speed limiter does is take those inputs, determine whether we're raising or lowering and how far we are from the top or bottom, respectively, and cut the amount of energy, slowing the system down automatically so we don't either crown out and slam into the top or bottom out and slam into the bottom.

And just to take a quick look into this, this is an analytically derived one. A few videos back, we showed how to come up with a lookup table-based approach using multiple simulations. So this is just calculating energy and then limiting the speed based on the energy calculations.
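To make the idea concrete, here is a minimal MATLAB sketch of an energy-based speed limit. The function name, the braking-force parameter, and the constant-braking-force assumption are all illustrative; this is not the actual block from the video.

```matlab
function v_limit = energySpeedLimit(v_ref, position, mass, travelTop, F_brake)
% Hypothetical sketch: the kinetic energy 0.5*m*v^2 must be absorbable by
% the braking force acting over the travel that remains before the crown
% (raising) or the bottom (lowering), i.e. 0.5*m*v^2 <= F_brake * d.
if v_ref >= 0
    d = max(travelTop - position, 0);   % raising: distance left to the crown
else
    d = max(position, 0);               % lowering: distance left to the bottom
end
vMax = sqrt(2 * F_brake * d / mass);    % note the division by mass
v_limit = sign(v_ref) * min(abs(v_ref), vMax);
end
```

The division by mass in this sketch is exactly the kind of expression that matters later in the video, when we look for divide-by-zero conditions.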

Now looking at this algorithm here, we've got this flow chart and we've got some mathematical computations. We could go ahead and either code this onto our PLC by hand or use PLC code generation to generate Structured Text, and assume that there are no issues with the implementation of this logic. Now that's usually not a good assumption, because there could be coding errors, there could be issues that we didn't account for, divide by zeros, that type of thing, or out-of-bound array accesses that we may not be exploring in our simulations but could actually happen in reality.
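For reference, that code generation step itself is a one-liner with Simulink PLC Coder; the model and subsystem names here are hypothetical.

```matlab
% Generate IEC 61131-3 Structured Text for the supervisory logic subsystem.
generatedFiles = plcgeneratecode('AutoDriller/SupervisoryLogic');
```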

Now one way you could address that is to go through this piece by piece and manually look for things. But if there's a way to do things that's a little more intelligent, we typically like to do that. Under our Apps tab up here, we have a lot of apps for Simulink. And down here, we have a whole section on model verification, validation, and test. So there are specific tools to help you check your model more formally. We're going to be using a tool called Simulink Design Verifier to do a couple of different things today.

OK, so I'm going to go ahead and open up Design Verifier. And what Design Verifier does is a few different things. We can generate test cases; we'll see that in a little bit. But we can also do what we call design error detection. And this is a nice quick and easy thing that you can run on any logic that you've created. It will check for very common design errors, such as divide-by-zero situations that you may not be accounting for, or overflows, where maybe you have an array or a fixed-point signal and your accumulator is going to integrate outside of an expected value.

I'm going to go ahead and say, detect the design errors for this model. And it's going to go ahead and start running the analysis. So this is a quick and easy check. You can do it on your logic and quickly see if there's any issues with it and things that you might need to resolve.
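If you'd rather script this step than click through the app, the analysis can also be launched from the MATLAB command line. This is a sketch: the model name is hypothetical, and the sldvoptions property names reflect the interface as I know it and may vary by release.

```matlab
opts = sldvoptions;
opts.Mode = 'DesignErrorDetection';      % design error detection, not test generation
opts.DetectDivisionByZero  = 'on';       % check for divide-by-zero
opts.DetectIntegerOverflow = 'on';       % check for integer/fixed-point overflow
opts.DetectOutOfBounds     = 'on';       % check for out-of-bound array access
[status, fileNames] = sldvrun('AutoDriller', opts, true);  % true: show progress UI
```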

So it's going to go ahead and start to run the analysis. And as I said, it's just a quick test. We could go get a cup of coffee and come back. You'll see that it's already validated a couple of things.

Now the nice thing about this is that the analysis is going to take the expected input ranges and use those in the analysis. So if speed ref is an integer that goes from minus 1,500 to 1,500, it'll propagate all of that through formally and let us see the resulting ranges. And so you'll see here the output of this is minus 2,000 to 2,000.
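Those design ranges are something you declare on the model itself, for example on the Inport blocks; here is a hedged one-line sketch with a hypothetical block path.

```matlab
% Declare the design range of the speed reference input so the analysis
% can propagate it formally through the model.
set_param('AutoDriller/speed_ref', 'OutMin', '-1500', 'OutMax', '1500');
```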

Now you'll see that it says two out of three objectives valid, and one out of three objectives falsified and needs simulation. We could go ahead and start to look at the report. But what I like to do is jump into this block.

So this block is red; that means the issue is inside here. We don't see anything in the actual state chart itself, so we can go into the Simulink function. And we see a couple of things going on here.

So we see some green blocks. For example, this lookup table here is green. And you'll see up here in the note, it says that it's been checked for overflow conditions, and we're never going to hit one. And that's just because of the way that these inputs are defined: there is no condition where this lookup table is going to overflow. So that's good.

The other thing that we'll see here is this red one. And this is a divide by zero. You'll notice here that it says we need to use simulation to validate this, because we have this mass value. We haven't constrained it; we haven't put a saturation block here to prevent this from ever dividing by zero. And so maybe that's what we should do.

So what I'll do real quick is open my Simulink Library Browser and grab a Saturation block. And this is just a quick solution; there could be better ones.

And we can say the upper limit is inf and the lower limit is 0.1; we expect most of the masses to be quite large. And then I can rerun the analysis: Design Verifier, detect design errors. And we can hopefully see that the issue is resolved.
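The same quick fix can be scripted. The destination path below is hypothetical, but the library path for the Saturation block is the standard one.

```matlab
% Insert a Saturation block so the mass signal can never reach zero before
% the division (0.1 as the lower limit, since real masses are much larger).
add_block('simulink/Discontinuities/Saturation', ...
    'AutoDriller/EnergySpeedLimiter/MassSat', ...
    'UpperLimit', 'inf', 'LowerLimit', '0.1');
```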

But again, it's a quick check. This is a simple case with a simple bit of logic, but as your logic grows in complexity, checking for these types of design errors by hand becomes more and more tedious. And these are not the things that you want to find after you've deployed your code to the field. So you'll see I have a green block now; everything's valid and good to go.

All right, to wrap up design error detection: there are other errors that the tool can detect; we just covered a couple today. But these are, again, quick checks that you can run: point the tool at your algorithm, let it comb through, and look for any of these common types of violations.

Now the last thing we're going to look at for validation is how we can automatically come up with test inputs for our algorithms to ensure that we're exercising all the different states. I'm going to go ahead and hit the generate tests button and talk while it's building up the analysis here. So what this is actually going to do is look at the ranges of all of our inputs, and then look at the state chart and the algorithm that we have in here for the energy speed limiter.

Now what it's going to do is come up with input vectors for all of the inputs to this chart that will excite each of the individual states in the chart, and also all the conditions between the states, and, in the algorithm itself, all of the saturations and maximum and minimum bounds.

So it's going to automatically come up with these test vectors for us; we don't have to come up with them on our own. The other benefit here is that if we already have a list of test vectors, maybe common operating modes or different speed profiles that our operators put this system through, we could use those as inputs, and then this Design Verifier approach to test generation would just append the additional test cases required to achieve what we call full coverage, where the chart is fully covered. OK, so I'm going to let this analysis finish out, and we'll take a look at what it actually gives us.
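As with error detection, test generation can be scripted. Again a sketch with a hypothetical model name, and the coverage objective setting is one reasonable choice among several.

```matlab
opts = sldvoptions;
opts.Mode = 'TestGeneration';                        % generate test cases
opts.ModelCoverageObjectives = 'ConditionDecision';  % cover states, transitions, conditions
opts.SaveHarnessModel = 'on';                        % save a harness model with the cases
[status, fileNames] = sldvrun('AutoDriller', opts, true);
```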

All right, so you see that it finished with the analysis here. The chart is green; we satisfied 56 of the 56 possible objectives. Again, we could look at a report here of all the different test cases that it came up with.

But what I like to do is create a harness model, and these can now be used for testing purposes. So basically we have what we call a Signal Builder here, providing a bunch of inputs, and then we have our Stateflow chart here under test.

And these are actually the test vectors that it came up with. So these are stimulus inputs to that algorithm: different speed refs, different speed measurements, different position values, different values of mass, the disable-limits flag, and the auto-driller mode. And it came up with five different test cases. So to just kind of quickly look at them, you'll see that these different stimuli are going to go through every possible mode of this chart and do testing. So this is a great way to run through and check for any possible issues. And we didn't have to come up with these on our own; we were able to use formal methods to generate them.
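To close the loop, you can also build the harness and replay the generated cases from the command line while collecting coverage. This is a sketch: the data file name follows Design Verifier's usual naming convention but is hypothetical here, and the sldvruntestopts field names may vary by release.

```matlab
% Build a test harness around the model and replay the generated test cases
% with coverage collection enabled.
harnessFile = sldvmakeharness('AutoDriller');
runOpts = sldvruntestopts;
runOpts.coverageEnabled = true;
[outData, covData] = sldvruntest('AutoDriller', 'AutoDriller_sldvdata.mat', runOpts);
cvhtml('autodriller_coverage', covData);   % write an HTML coverage report
```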