Why You Should Collect Model Test Coverage Metrics

By Pat Canny, MathWorks


Systematic testing of your Simulink® design ensures that you have accounted for both intended and unintended behavior before generating code. During requirements-based simulation testing, parts of your design might not be exercised due to subtle design errors, incomplete tests, or missing requirements. Model test coverage helps you identify these gaps in testing by measuring how much of your Simulink design has been exercised during simulation.

A common cause of incomplete model test coverage during requirements-based testing is missing requirements. This type of missing model test coverage is often a result of design elements that cannot be traced to a higher-level requirement. Collecting model test coverage helps you account for this situation. This article describes an example in which testing of a triplex selection algorithm design is found to be incomplete due to a missing requirement.

Triplex Selection Algorithm: Overview and Requirements

Triplex signal selection algorithms are commonly used in aerospace control system software. In triplex signal selection, a single “voted” signal is selected from three independent sensors for use in the control of the aircraft. A sensor is “valid” if there are no faults detected for that sensor.

In this example, the algorithm selects the aircraft Airspeed signal. The requirements for the Airspeed signal selection algorithm are as follows (a sketch of the corresponding selection logic appears after the list):

  • HLR_1 Selecting Airspeed Signal for Triple Sensors: The flight control computer shall select the middle value of the three sensors when three Airspeed signals are valid.
  • HLR_2 Selecting Airspeed Signal for Dual Sensors: The flight control computer shall select the average of the two sensors when only two Airspeed signals are valid.
  • HLR_3 Selecting Airspeed Signal of Single Sensor: The flight control computer shall select the valid signal when only one Airspeed signal is valid.
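
Before looking at the Simulink implementation, it can help to see the selection logic in plain code. The following MATLAB sketch (a hypothetical helper function, not part of the shipped example) implements HLR_1 through HLR_3 as written:

function airspeedSel = selectAirspeed(airspeed, valid)
% Sketch of the triplex selection logic described by HLR_1 through HLR_3.
%   airspeed : 1x3 vector of sensor readings
%   valid    : 1x3 logical vector, true where no fault was detected

validSignals = airspeed(valid);              % readings from valid sensors only

switch nnz(valid)                            % number of valid sensors
    case 3
        airspeedSel = median(validSignals);  % HLR_1: middle value of the three
    case 2
        airspeedSel = mean(validSignals);    % HLR_2: average of the two
    case 1
        airspeedSel = validSignals;          % HLR_3: the single valid signal
end
% Behavior for zero valid sensors is not specified by HLR_1 through HLR_3.
end

For example, selectAirspeed([101 99 100], [true true true]) returns 100, the middle value.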

Figure 1 shows the Simulink implementation of these requirements.

Figure 1. Simulink implementation of the Airspeed signal selection algorithm.

Implementing the Algorithm and Collecting Model Test Coverage Metrics

We implement each requirement in a dedicated subsystem in the model and write a separate test case for each one. Before running these tests, we need to make sure we are capturing model test coverage metrics.

Model test coverage metrics can be individual (collected for single tests) or cumulative (aggregated across multiple tests). We’ll collect cumulative coverage across our requirements-based tests to measure how well the complete test suite exercises the design.
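
If you prefer a scripted workflow, the same individual and cumulative results can be collected with the Simulink Coverage command-line API. A minimal sketch, assuming a hypothetical model named AirspeedSelection whose test inputs are set up before each run:

model = 'AirspeedSelection';      % hypothetical model name
load_system(model);
testObj = cvtest(model);          % coverage test specification for the model

covRun1 = cvsim(testObj);         % first requirements-based test (individual coverage)
% ... switch the model inputs to the next test vector ...
covRun2 = cvsim(testObj);         % second requirements-based test (individual coverage)

cumulativeCov = covRun1 + covRun2;            % aggregate cvdata objects across tests
cvhtml('cumulative_coverage', cumulativeCov); % HTML report of the cumulative results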

There are many types of model test coverage metrics, such as execution coverage, decision coverage, signal range coverage, and relational boundary coverage. In this example, we are interested in decision coverage, a type of structural coverage that measures execution of all possible logical outcomes for a decision in the model. Simulink model objects such as Switch blocks receive full decision coverage if all inputs to the switch have been selected at least once during simulation.
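
When scripting, the recorded metrics are controlled by model configuration parameters. A minimal sketch that limits collection to decision coverage (classic coverage parameter names; newer releases expose equivalent settings, so check your release):

model = 'AirspeedSelection';                 % hypothetical model name
load_system(model);
set_param(model, 'CovEnable', 'on');         % enable coverage analysis for the model
set_param(model, 'CovMetricSettings', 'd');  % 'd' records decision coverage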

We enable model test coverage collection on a model using the Coverage Analyzer app in the Apps tab of the Simulink Toolstrip (Figure 2).

Figure 2. The Apps tab in the Simulink Toolstrip.

Once we open the Coverage Analyzer app, we can turn on coverage collection using the Coverage ON/OFF button in the Coverage tab. We also enable cumulative coverage using the Cumulative Collection button. We then simulate the model with coverage enabled using the Play button, now labeled Analyze Coverage (Figure 3). We simulate several times, using a different test vector in each simulation (an example set of vectors is sketched after Figure 3).

Figure 3. Coverage Analyzer app with coverage collection enabled.
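
The exact vectors depend on the test harness; as an illustration, the requirements-based tests might drive inputs like these (hypothetical values, continuing the selectAirspeed sketch above):

% Hypothetical test vectors, one row per requirements-based test:
testAirspeed = [101  99 100;          % HLR_1: all three sensors valid
                101  99 100;          % HLR_2: only two sensors valid
                101  99 100];         % HLR_3: only one sensor valid
testValid    = logical([1 1 1;
                        1 1 0;
                        1 0 0]);

% Expected selected Airspeed for each test, per the requirements:
expected = [100;    % middle value of the three
            100;    % average of the two valid sensors (101 and 99)
            101];   % the single valid signal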

Figure 4 shows the resulting coverage. Green highlighting indicates complete model test coverage, including all model objects within each subsystem, while red denotes incomplete coverage. 

Figure 4. Coverage results from tests of the Airspeed signal selection algorithm.

Decision coverage is incomplete for the Multiport Switch block. To understand why, we check the model test coverage details for individual blocks by opening the Coverage Details pane in Simulink (Figure 5).

Figure 5. Opening the Simulink coverage details pane.

We then click the Multiport Switch block to see its coverage details (Figure 6).

Figure 6. Coverage details pane for the Multiport Switch block.

Recall from the model in Figure 1 that the first input to the Multiport Switch is the number of valid signals. This input was never 0 during simulation, which means we never tested a case where none of the Airspeed signals was valid.
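
The same information is available programmatically. The decisioninfo function reports how many decision outcomes a block achieved out of the total possible; a sketch using the cumulative cvdata object from the earlier snippet and a hypothetical block path:

blockPath = 'AirspeedSelection/Multiport Switch';   % hypothetical block path
cov = decisioninfo(cumulativeCov, blockPath);       % [covered_outcomes total_outcomes]
fprintf('Decision outcomes covered: %d of %d\n', cov(1), cov(2));
% Here the outcome for a control input of 0 (no valid sensors) was never exercised.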

This missing model test coverage is due to a missing higher-level requirement: none of the requirements specified which signal should be selected when no Airspeed signal is valid. The design, however, already includes logic to account for this condition.

We add this missing requirement:

  • HLR_4 Selecting Airspeed Signal of No Valid Sensor: The flight control computer shall hold the selected Airspeed signal when none of the three Airspeed signals are valid.

We then add a new test case for this requirement (sketched after Figure 7) and rerun the tests. We now get complete decision coverage (Figure 7).

Figure 7. Coverage results after adding a new requirement.
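
Continuing the hypothetical test-vector sketch, the added HLR_4 case drives a frame in which no sensor is valid, then re-aggregates coverage across all of the tests:

% New test case for HLR_4: no valid sensors; the selected Airspeed should hold.
testAirspeed(4, :) = [101 99 100];
testValid(4, :)    = logical([0 0 0]);

covRun4 = cvsim(testObj);                     % simulation driven by the new test case
cumulativeCov = cumulativeCov + covRun4;      % re-aggregate across all tests
cvhtml('cumulative_coverage', cumulativeCov); % report now shows full decision coverage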

As this example showed, model test coverage is a reliable way to identify missing requirements during requirements-based testing. You can use the model test coverage results to ensure that the correct parts of your design are being exercised.

Another common cause of incomplete model test coverage is dead logic. Dead logic is any part of a Simulink model or Stateflow® chart that can never be executed during simulation, such as an input to a Switch block that can never be selected, or a transition in a state machine that can never be taken. A best practice is to use Simulink Design Verifier™ to analyze your models and resolve dead logic before writing and executing your requirements-based tests (a scripted sketch follows). You can also use the Model Slicer feature in Simulink Check™ to isolate and debug dead logic.
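
A minimal sketch of such a dead-logic analysis, assuming the same hypothetical model name (option names may differ between releases):

model = 'AirspeedSelection';            % hypothetical model name
load_system(model);

opts = sldvoptions;                     % Simulink Design Verifier analysis options
opts.Mode = 'DesignErrorDetection';     % run design error detection
opts.DetectDeadLogic = 'on';            % look specifically for dead logic

[status, files] = sldvrun(model, opts); % analyze the model and return result files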

Published 2019
