Assess Simulation and Compare Output Data
Overview
Functional testing requires assessing simulation behavior and comparing simulation output to expected output. For example, you can:
Analyze signal behavior in a time interval after an event.
Compare two variables during simulation.
Compare timeseries data to a baseline.
Find peaks in timeseries data, and compare the peaks to a pattern.
This topic provides an overview to help you author assessments for your particular application, with links to more detailed examples of each type of assessment.
You can include assessments in a test case, a model, or a test harness.
In a test case, you can:
Compare simulation output to baseline data.
Compare the output of two simulations.
Post-process simulation output using a custom script.
Assess temporal properties using logical and temporal assessments. If a test case has one or more defined assessments and their associated symbols, you can use the API or the Test Manager to list the assessments, get information about them, copy them to another test case, or remove them from the test case. For information on using the API, see sltest.testmanager.Assessment, sltest.testmanager.AssessmentSymbol, and sltest.testmanager.TestCase. For the Test Manager, see Assess Temporal Logic by Using Temporal Assessments.
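For example, this minimal sketch lists the assessments in a test case through the API. The test file name is illustrative, and the getAssessments method is an assumption based on the classes above; verify it against your release.

% Load a test file and get a test case handle (file name is illustrative)
tf = sltest.testmanager.load('myTests.mldatx');
ts = getTestSuites(tf);
tc = getTestCases(ts(1));
% Assumed method: returns the logical and temporal assessment objects
assessments = getAssessments(tc(1));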
In a test harness or model, you can:
Verify logical conditions at run time using a verify statement, which returns a pass, fail, or untested result for each time step.
Use assert statements to stop simulation on a failure.
Use blocks from the Model Verification or Simulink® Design Verifier™ library.
Compare Simulation Data to Baseline Data or Another Simulation
Baseline criteria are tolerances for simulation data compared to baseline data. Equivalence criteria are tolerances for two sets of simulation data, each from a different simulation. You can set tolerances for numeric, enumerated, or logical data.
Value tolerances can be absolute or relative; time tolerances can be leading or lagging. For numeric data, you can specify any of the four. For enumerated or logical data, you can specify only leading or lagging tolerances. Results outside the tolerances fail. For more information, see Set Signal Tolerances.
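As a sketch, you can also set tolerances programmatically on the criteria objects. The getBaselineCriteria method and the tolerance property names follow the sltest.testmanager API, but treat them as assumptions to confirm against your release.

% Assuming tc is a sltest.testmanager.TestCase handle (see sltest.testmanager.load)
criteria = getBaselineCriteria(tc(1));
criteria(1).AbsTol = 0.02;       % absolute tolerance
criteria(1).RelTol = 0.01;       % 1% relative tolerance
criteria(1).LeadingTol = 0.005;  % leading time tolerance, in seconds
criteria(1).LaggingTol = 0.005;  % lagging time tolerance, in seconds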
Specify the baseline data and tolerances in the Test Manager Baseline Criteria or Equivalence Criteria section. Results appear in the Results and Artifacts pane. The comparison plot displays the data and differences.
This graphic shows an example of baseline criteria. The baseline criteria set a relative tolerance for the output torque and vehicle speed signals.
Post-Process Results With a Custom Script
You can analyze simulation data with specialized functions by using a custom criteria script. For example, you could find peaks in timeseries data using Curve Fitting Toolbox™ functions. A custom criteria script is MATLAB® code that runs after the simulation. Custom criteria scripts use the MATLAB Unit Test framework.
Write a custom criteria script in the Test Manager Custom Criteria section of the test case. Custom criteria results appear in the Results and Artifacts pane. Results are shown for individual MATLAB Unit Test qualifications. For more information, see Process Test Results with Custom Scripts.
This simple test case custom criteria verifies that the value of slope is greater than 0.

% A simple custom criteria
test.verifyGreaterThan(slope,0,'slope must be greater than 0')
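A slightly fuller sketch, assuming a signal named speed was logged during simulation; the signal name and the test.sltest_simout access pattern are illustrative assumptions:

% Runs after simulation; 'test' is the predefined test object
speedSignal = test.sltest_simout.get('speed');  % assumed logged signal name
test.verifyLessThanOrEqual(max(speedSignal.Values.Data), 100, ...
    'speed must not exceed 100')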
Run-Time Assessments
verify Statements
For general run-time assessments, use verify statements. A verify statement evaluates a logical expression and returns a pass, fail, or untested result for each simulation time step. verify statements can include temporal and conditional syntax. A failure does not stop simulation.
Enter verify statements in a Test Assessment or Test Sequence block by using the Test Sequence Editor. You can use verify statements with or without a test case in the Test Manager. Without a test case, results appear in the Simulation Data Inspector. With a test case, results appear in the Test Manager.
You can choose to log only pass and fail verify results in a model by selecting Suppress Untested Results in the Test Cases section of the Tests or Harness tab. For information on using verify statements in your model, see Verify Model Simulation by Using When Decomposition and verify.
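For example, a verify statement in a Test Sequence or Test Assessment block might look like this sketch, where the signal name and message identifier are illustrative:

% Returns pass, fail, or untested at each time step; does not stop simulation
verify(signalC >= 5, 'SimulinkTest:signalC', ...
    'signalC must be greater than or equal to 5')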
assert Statements
You can use assert statements in a Test Assessment or Test Sequence block to stop executing an invalid test. Like verify, assert evaluates a logical argument, but unlike verify, assert stops simulation. Failures appear as simulation errors. To make results easier to interpret, add an optional message.
For example, if a component under test outputs two signals h and k, and the test requires h and k to initialize to 0, use assert to stop the test if the signals do not initialize. This assert statement returns the message 'Signals must initialize to 0' if the logical condition h == 0 && k == 0 fails.
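In the Test Assessment or Test Sequence block, the statement looks like this:

% Stops simulation with an error if h and k do not both initialize to 0
assert(h == 0 && k == 0, 'Signals must initialize to 0')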
Note
assert statements in a Test Sequence block or Stateflow® chart are not supported for code generation and are ignored, so no error occurs if the assert condition fails during a Simulink Real-Time™ simulation. However, verify statements are supported for Simulink Real-Time code generation and automatically log results for a test case in the Test Manager. The same logging behavior is available when using a Simulink Assert block.
Assessments for Real-Time Testing
If you are using a real-time test case, or if you want to reuse a desktop simulation test case on a real-time target, use verify statements. verify statements are built into the real-time application and run on the real-time target. See Verify Model Simulation by Using When Decomposition.
Model Verification Blocks
Use blocks from the Simulink Model Verification library or the Simulink Design Verifier library to assess signals in your model or test harness. pass, fail, or untested results from each block appear in the Test Manager. For more information, see Examine Model Verification Results by Using Simulation Data Inspector.
Note
Model Verification library blocks, including the Assertion block, do not produce verification results when used in For Each subsystems. Use a Test Sequence block with verify statements instead.
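As a sketch, you can add a verification block to a test harness programmatically; the harness and block names here are illustrative:

% Add an Assertion block from the Model Verification library (names illustrative)
load_system('myHarness')
add_block('simulink/Model Verification/Assertion', 'myHarness/Check_throttle')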
Examples of Run-Time Assessments
This example test harness includes:
A verify statement in the Test Assessment block, verifying that signalC >= 5.
An Assertion block verifying that throttle >= 0.
Logical and Temporal Assessments
Logical and temporal assessments evaluate temporal properties such as model timing and event ordering over logged data. Use temporal assessments for additional system verification after the simulation is complete. Temporal assessments are associated with test cases in the Test Manager. Author temporal assessments by using the Logical and Temporal Assessments Editor. See Assess Temporal Logic by Using Temporal Assessments for more information.
Temporal assessment evaluation results appear in the Results and Artifacts pane. Use the Expression Tree to investigate results in detail. If you have a Requirements Toolbox™ license, you can establish traceability between requirements and temporal assessments by creating requirement links. See Link to Requirements for more information.