How to Scope Model Coverage to Requirements-Based Tests
Starting in R2020a, you can scope coverage results to linked requirements-based tests using Simulink Coverage™. This setting scopes the aggregated coverage results such that each test only contributes coverage for the corresponding model elements that implement the requirements verified by that test. This improves confidence that model elements are covered by the intended test cases. This video will show you how to use this new setting.
Published: 3 Apr 2020
While performing requirements-based testing, you can measure test coverage of your models using Simulink Coverage to determine how much your design has been tested during simulation.
For simplicity’s sake, let’s call design elements and test cases that are linked to the same requirements “siblings.”
Model coverage results should be analyzed with respect to requirements. What if part of the design is never exercised by a sibling test? This could be a sign of incomplete testing, incorrect traceability, or missing requirements.
With Simulink Coverage in MATLAB Release 2020a, you can focus the coverage results to show only the coverage received by sibling tests. We call this “scoping” the coverage to requirements.
Let’s walk through an example to demonstrate.
This model is a design for part of a simple cruise control. The model calculates the throttle demand for the engine controller as well as the target speed. The model uses several Boolean inputs from the steering wheel cruise control switches, a Boolean brake input, and the vehicle’s speed.
We have several requirements for this design which are managed using Requirements Toolbox.
We also have six test cases linked to the requirements which are managed using Simulink Test.
Let’s run the test suite in Simulink Test.
Let’s take a look at the results.
All six of our tests passed, and we have achieved 100% coverage for Decision, Condition, and Execution. This is great! Are we done now?
Not quite.
Let’s make sure the coverage we achieved from the test cases is associated with the sibling requirements.
We can do this by clicking “Scope coverage results to linked requirements” in the Test Manager.
Simulink Coverage uses the traceability data between the requirements, design, and test cases to filter coverage accordingly.
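The filtering idea can be sketched in plain Python. This is an illustrative toy model, not the Simulink Coverage API: the dictionaries, block names, and the `scoped_coverage` helper are all hypothetical, chosen to mirror the example in this video.

```python
# Toy sketch of scoping coverage to linked requirements.
# All names and data are illustrative; this is NOT the Simulink Coverage API.

# Traceability: which requirements each test case verifies,
# and which requirements each design element implements.
test_links = {
    "T_increment": {"INCREMENT"},
    "T_throttle":  {"THROTTLE"},
}
block_links = {
    "SumBlock":     {"INCREMENT"},
    "PIController": {"THROTTLE"},
    # "ConstBlock" has no implementation link -- like the blocks in the video.
}

# Raw coverage: which blocks each test actually exercised during simulation.
raw_coverage = {
    "T_increment": {"SumBlock", "ConstBlock"},
    "T_throttle":  {"PIController"},
}

def scoped_coverage(test_links, block_links, raw_coverage):
    """A test contributes coverage for a block only if the two share
    a requirement, i.e. only if they are 'siblings'."""
    covered = set()
    for test, blocks in raw_coverage.items():
        for block in blocks:
            if test_links[test] & block_links.get(block, set()):
                covered.add(block)
    return covered

covered = scoped_coverage(test_links, block_links, raw_coverage)
print(covered)  # ConstBlock executed, but has no sibling link, so it drops out
```

Note that `ConstBlock` is exercised during simulation but receives no scoped coverage, because no requirement links it to the test that executed it. That is exactly the kind of gap the scoped report in the video surfaces.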
It looks like we have lost some coverage! Let’s open the model to learn more.
The model elements with missing coverage are highlighted in red.
It looks like the constant and sum blocks for the increment and decrement logic are missing coverage. Let’s click on one of the sum blocks to learn more.
The block is missing execution coverage. This is because there are no implementation links for this block. We can confirm this by opening the requirements perspective.
These two blocks should be linked to the INCREMENT requirement. We can do this by selecting the INCREMENT requirement in the Requirements browser, then right-clicking on the blocks and adding the link. Once we save the model, let's go back to the Test Manager and rerun the test suite to see how this improves our coverage.
That was easy!
Next, let's look at the PI Controller, which now has only 83% Decision coverage.
It looks like the PI Controller subsystem can be traced to one requirement and one test case labeled T6.
The Discrete-Time Integrator block has one Decision objective missing.
The integration result was never equal to or above the upper limit, which means we never tested the full range of the integrator output.
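To see why this is a Decision objective, here is a minimal Python sketch of a saturating discrete-time integrator, assuming a simple fixed-step accumulation. The function name and the gentle/aggressive input profiles are made up for illustration; the point is that each step decides whether the accumulated value has reached the upper limit, and a test that never drives the integrator into saturation leaves one decision outcome untested.

```python
# Illustrative sketch (not Simulink) of the decision inside a saturating
# discrete-time integrator: each step, the block decides whether the
# accumulated value has reached the upper limit.

def run_integrator(inputs, upper_limit, dt=1.0):
    """Integrate the input samples with an upper saturation limit and
    record which decision outcomes were exercised."""
    state = 0.0
    outcomes = set()
    for u in inputs:
        state += u * dt
        if state >= upper_limit:        # the Decision objective in question
            outcomes.add("limit_reached")
            state = upper_limit         # saturate
        else:
            outcomes.add("limit_not_reached")
    return state, outcomes

# A short, gentle test never approaches the limit -> one outcome is missing.
_, gentle = run_integrator([0.1] * 5, upper_limit=10.0)

# A longer, more aggressive test drives the integrator into saturation,
# exercising both outcomes of the decision.
_, aggressive = run_integrator([2.0] * 10, upper_limit=10.0)

print(gentle)      # only "limit_not_reached"
print(aggressive)  # both outcomes
```

This mirrors the situation in the video: the Throttle test behaved like the gentle profile, so the "limit reached" outcome was only ever hit by an unrelated test.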
But wait – it looks like test case T4 reached the upper limit. Let’s click on T4 to learn more.
Test case T4 is the Increment Test, which is linked to the Increment requirement.
We can resolve this discrepancy a few different ways:
1) We can link the Increment test case to the Throttle requirement. This does not make sense in this scenario, as the expected results for the Increment test case have nothing to do with the Throttle requirement.
2) We can create a requirement that defines the expected behavior when the integrator limit is reached. This might be too much of a design detail.
3) Finally, we can modify the Throttle test inputs to more aggressively exercise the controller. In this case, we can do this by simply extending the existing Throttle test case and incrementing the set speed at the end of the test. This is the best option, because the expected results should not change; we are still verifying the behavior defined by the Throttle requirement.
When we rerun the Throttle test, the test fails because the throttle rate of change exceeds the requirement. This is due to a flaw in the design.
This example showed how to use coverage results scoped to requirements to reveal incomplete requirements traceability and an incomplete test case.
Click on the link below to run this example yourself, or visit the Simulink Coverage product page on mathworks.com to request a trial.