Back-to-Back Test Status for Normal and PIL Mode
Metric ID
slcomp.pil.B2BTestStatus
Description
The back-to-back testing metrics perform translation validation between a model and the generated code.
This metric returns the status of back-to-back testing for each test by comparing, at each time step, the outputs of the model simulation and the outputs of the code executed in processor-in-the-loop (PIL) mode. The metric compares the normal mode and PIL mode test runs from baseline, equivalence, and simulation tests.
Supported Artifacts
You can collect this metric for the Units in your project. To control what the dashboard classifies as a unit, see Categorize Models in Hierarchy as Components or Units.
Computation Details
Scope of Analysis
The metric analyzes only unit tests. A unit test directly tests either the entire unit or lower-level elements of the unit, such as subsystems.
Comparison
The way that the metric compares the normal mode and PIL mode results depends on the test type:
Equivalence Tests. For equivalence tests, the metric uses the method getComparisonResult to get the equivalence data comparison results from the test and determine the back-to-back testing status. If you specified signal tolerances, the metric uses those tolerances to determine the acceptable differences between your results.
Baseline Tests and Simulation Tests. For baseline tests and simulation tests, the metric uses the Simulation Data Inspector function Simulink.sdi.compareRuns to compare the simulation outputs from the logged signals in the test and determine the back-to-back testing status. For information on how to add logged signals to a test, see Capture Simulation Data in a Test Case (Simulink Test). The comparison checks for mismatches in the number of signals, signal data types, signal time steps, and other metadata. Each output that the test logs in normal mode must have a matching output logged in PIL mode; otherwise, the back-to-back comparison fails. For more information on the comparison, see Simulink.sdi.compareRuns and How the Simulation Data Inspector Compares Data.
The metric specifies the absolute tolerance ('AbsTol') and relative tolerance ('RelTol') values for the function Simulink.sdi.compareRuns depending on the data type of the signal. If you need the metric to consider individual signal tolerances in the comparison, use an equivalence test instead.
Data Type | Absolute Tolerance | Relative Tolerance |
---|---|---|
Logical | 0 | 0 |
Integer | 0 | 0 |
Fixed-Point | 0 | 0 |
Enumeration | 0 | 0 |
Half-Precision | 10*eps(half(0)) | 10*eps(half(1)) |
Single-Precision | 100*eps(single(0)) | 100*eps(single(1)) |
Double-Precision | 1000*eps(double(0)) | 1000*eps(double(1)) |
For data types not listed above, the absolute and relative tolerances are 0.
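For example, the following sketch performs a comparison similar to the one the metric performs for double-precision signals, using the global tolerances from the table above. It is a minimal sketch, not the metric implementation: the run IDs, and the assumption that the normal mode and PIL mode runs are the two most recent runs logged in the Simulation Data Inspector, are illustrative only.
% Minimal sketch: compare a normal mode run and a PIL mode run using the
% tolerances that the metric applies to double-precision signals.
runIDs = Simulink.sdi.getAllRunIDs;
normalRunID = runIDs(end-1);   % assumed: normal mode run
pilRunID = runIDs(end);        % assumed: PIL mode run
diffResult = Simulink.sdi.compareRuns(normalRunID,pilRunID, ...
    'AbsTol',1000*eps(double(0)), ...   % absolute tolerance for doubles
    'RelTol',1000*eps(double(1)));      % relative tolerance for doubles
% The comparison passes only if every signal matches within tolerance.
allWithinTolerance = true;
for k = 1:diffResult.Count
    signalResult = getResultByIndex(diffResult,k);
    allWithinTolerance = allWithinTolerance && signalResult.Match;
end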
Collection
To collect data for this metric, execute the metric engine and use getMetrics with the metric ID slcomp.pil.B2BTestStatus.
metric_engine = metric.Engine;
execute(metric_engine,"slcomp.pil.B2BTestStatus");
results = getMetrics(metric_engine,"slcomp.pil.B2BTestStatus")
Collecting data for this metric loads the model file and test result files and requires a Simulink® Test™ license.
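For example, a fuller programmatic workflow from a clean MATLAB session might look like the sketch below. The project path is a placeholder, and the generateReport call assumes that you want an HTML summary of the collected results; both are assumptions, not requirements of the metric.
% Open the project that contains the unit and its test results.
% The path below is a placeholder; replace it with your own project.
openProject("myDashboardProject");
% Collect the metric and, optionally, summarize the results in a report.
metric_engine = metric.Engine;
execute(metric_engine,"slcomp.pil.B2BTestStatus");
results = getMetrics(metric_engine,"slcomp.pil.B2BTestStatus");
generateReport(metric_engine, ...
    "Type","html-file", ...
    "Location",fullfile(pwd,"B2BTestStatusReport"));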
Results
For this metric, the function getMetrics returns a metric.Result instance for each test. Instances of metric.Result return Value as one of these outputs:
0 — The comparison between the normal mode and PIL mode test runs failed. For information on how the metric compares the results, see Computation Details.
1 — The comparison between the normal mode and PIL mode test runs passed.
2 — The test was not tested back-to-back. The metric considers a test untested if the test is missing normal mode results, PIL mode results, or both. Make sure that you run the test in both normal mode and PIL mode. For equivalence tests, make sure that you run both the normal mode and PIL mode simulations in the same test run.
To view the comparison results that the metric uses to determine the status of back-to-back testing, use the function metric.loadB2BResults.
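For a quick programmatic summary, you can loop over the metric.Result array that getMetrics returns and map each Value to a label, as in this sketch. The label text is illustrative and not part of the metrics API.
% Map each status value (0, 1, or 2) to an illustrative label.
labels = ["Failed","Passed","Untested"];
for k = 1:numel(results)
    fprintf("Test %d back-to-back status: %s\n", k, labels(results(k).Value + 1));
end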
Compliance Thresholds
This metric does not have predefined thresholds.
See Also
Code Testing Metrics | Back-to-Back Test Status Distribution for Normal and PIL Mode