Troubleshoot Coverage Merge Issues

When you generate coverage results for a model, you often analyze the same model multiple times with different input signals or parameter values. You can aggregate the coverage into one cumulative report that summarizes all of the coverage simulations you run for that model. Sometimes, changes you make during this process cause the results of one simulation to become incompatible with the other results for the same model.

Issue

If you are working with Simulink® Coverage™ alone, you might analyze coverage multiple times and then aggregate the results by using the + operator, for example:

covTotal = covData1 + covData2 + ... + covDataN;
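
For reference, here is a minimal sketch of one way to produce and aggregate these coverage data objects programmatically. The model name myModel and the Gain block path are hypothetical placeholders:

% Collect coverage from two runs of the same model (hypothetical names)
mdl = 'myModel';
load_system(mdl)

covData1 = cvsim(cvtest(mdl));           % first run; cvsim records coverage

set_param([mdl '/Gain'], 'Gain', '2')    % change a parameter between runs
covData2 = cvsim(cvtest(mdl));           % second run

covTotal = covData1 + covData2;          % errors if the runs are incompatible
cvhtml('cumulativeReport', covTotal)     % report on the aggregated results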

If one of the coverage data objects is not compatible with the others, the operation results in this error:

Error using cv.internal.cvdata/checkDataCompatibility
Checksums must match for cvdata operator calculation

Error in +

If you use the Test Manager in Simulink Test™, the problem is more subtle because the Test Manager checks compatibility before attempting to aggregate the results. If coverage data is not compatible, the Test Manager displays the incompatible runs as separate rows in the coverage results table, so the same model appears in two or more rows.

For example, the model slcoverageVariants appears twice in the Aggregated Coverage Results section.

The Aggregated Coverage Results section of the Test Manager displays two rows for the same model.

Possible Solutions

Start by investigating the root cause of the incompatibility. To do so, click the arrows in the Report column of the coverage results table to generate a coverage report for each row that shows the same model, and compare the reports side-by-side. Look for differences that can make the coverage results incompatible, such as a block or subsystem with a different number of coverage objectives for a metric, or a different active variant. A change of active variant can make coverage results incompatible even when the total number of coverage objectives remains the same. After you determine the root cause, you can implement a solution.
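
If you collected the results at the command line instead, you can generate a separate report for each coverage data object and compare them the same way. This sketch assumes covData1 and covData2 are the cvdata objects for the two runs:

% Generate one HTML report per run for a side-by-side comparison
cvhtml('runA_report', covData1)
cvhtml('runB_report', covData2)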

For the slcoverageVariants model, the coverage reports differ in two places. First, the Details section shows that the number of execution coverage objectives changes between the test cases: the subsystem Variant_Subsystem has three execution coverage objectives in the first test case, but four in the second. This discrepancy makes the coverage results incompatible for aggregation.

In addition, the Tests and Summary sections of the coverage report show that the two simulations have different active choices of the Variant Subsystem block.

Side-by-side comparison of the Tests and Summary sections of the coverage reports for the two instances of the model displayed in the Test Manager.

Changing the active variant choice changes the coverage objectives between the two test cases, which creates two sets of coverage data for the same model that cannot be aggregated. If you run these same tests manually without using the Test Manager, the sum operation returns the Checksums must match error.
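
If you aggregate coverage in a script, you can guard the sum so that an incompatible pair does not stop a batch run. This sketch assumes covData1 and covData2 already exist; it catches the error rather than relying on any undocumented cvdata internals:

% Attempt aggregation and warn instead of erroring out
try
    covTotal = covData1 + covData2;
catch err
    warning('Coverage results were not merged: %s', err.message)
    covTotal = covData1;   % keep the first result; handle the other separately
end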

Solution 1: Test Cases Contain Different Active Variants

If the problem is that test cases have different active variants, there are a few options to fix the problem:

  • If you did not mean to test multiple active variants in this test suite, use one of these options:

    • Change the test cases so that they both use the same choice for the active variant.

    • Reorganize the test cases so that the tests that use each active variant choice are grouped into their own test suite.

  • Change the Variant activation time parameter of the variant block to startup or runtime. This way, the number of coverage objectives does not change between the test cases, and the coverage results can be aggregated.

    By default, coverage analysis for variants that use startup or runtime activation time includes all variant choices, even choices that are inactive during a given simulation. However, this option can result in a lower overall percentage of coverage achieved.

For more information about coverage for variant blocks, see Model Coverage for Variant Blocks.

In the previous example, changing the Variant activation time to startup resolves the problem.
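
A minimal sketch of making this change programmatically, assuming the Variant Subsystem block in the example model is named Variant_Subsystem and that VariantActivationTime is the programmatic name of the Variant activation time block parameter:

% Set the Variant activation time of the variant block to startup
mdl = 'slcoverageVariants';
load_system(mdl)
set_param([mdl '/Variant_Subsystem'], 'VariantActivationTime', 'startup')
save_system(mdl)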

The Aggregated Coverage Results section of the Test Manager displays one row for the model.

Solution 2: Test Cases Change Data Types

If your test cases collect relational boundary coverage, changing the data type of a parameter in the Parameter overrides section of the Test Manager can cause coverage incompatibility issues. The difference in the number of coverage objectives is visible in the relational boundary table for the affected blocks. For example, suppose you run one simulation in which the inputs to a Relational Operator block have a type of double, and a second simulation in which the inputs have a type of int8. The relational boundary table then displays two rows for the double case and three rows for the int8 case. These results are incompatible for coverage aggregation.

The image shows two relational boundary analyzed tables. One table has two rows and the second table has three rows.

To resolve the discrepancy in relational boundary coverage results, use one of these options:

  • Evaluate whether you intended for the data type to change. Because the default numeric class in MATLAB® is double, it is common to accidentally specify a double in a test case when the intended data type is an integer. If the change was unintended, edit the test cases so that the data type does not change.

  • Turn off the relational boundary coverage metric. In the Configuration Parameters dialog box, on the Coverage pane, expand Other metrics and clear Relational boundary. (since R2023a) You can also make this change programmatically, as shown in the sketch after this list.

    For more information, see the release note Quantity of coverage objectives for saturation on integer overflow and relational boundary metrics does not block coverage aggregation.
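
This sketch assumes a hypothetical model name and that CovMetricRelationalBoundary is the configuration parameter behind the Relational boundary check box:

% Turn off relational boundary coverage for the model
mdl = 'myModel';
load_system(mdl)
set_param(mdl, 'CovMetricRelationalBoundary', 'off')
save_system(mdl)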

Solution 3: Test Cases Change Data Sizes

Test cases that change the size of one or more signals can cause coverage incompatibility issues. For example, suppose that you run one simulation in which the inputs to a Relational Operator block are scalars, followed by a second simulation in which the inputs are 1-by-2 vectors. Changing the inputs to 1-by-2 vectors increases the number of condition objectives from one to two, and the number of relational boundary objectives from two to four.

The image shows two condition and relational boundary analyzed tables. The first condition table has one row, and the first relational boundary table has two rows. The second condition table has two rows, and the second relational boundary table has four rows.

Evaluate whether the signal size changes are intentional. If the changes are not intentional, adjust your test cases so that the signal sizes do not change. If the changes are intentional, use one of these options:

  • Separate the test cases into different test suites so that the test cases with like signal sizes are grouped together in a test suite.

  • Consider using variable-size signals. If you choose to use variable-size signals, specify an upper bound greater than or equal to the largest signal size you expect during your tests. For more information about variable-size signals, see Variable-Size Signal Basics. A programmatic sketch follows this list.
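
As a sketch of the variable-size option, this configures a root Inport block for variable-size input with an upper bound of 1-by-4. The model and block names are hypothetical, and VarSizeSig is assumed to be the programmatic name of the Inport block's Variable-size signal parameter:

% Allow variable-size input at a root Inport, bounded at 1-by-4
mdl = 'myModel';
load_system(mdl)
set_param([mdl '/In1'], 'VarSizeSig', 'Yes', 'PortDimensions', '[1 4]')
save_system(mdl)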

Related Topics