
Model and Code Testing Metrics

Collect metrics for model testing, including requirements-based testing, and for code testing

Use model and code testing metrics to assess the status of testing for software unit models. Open the Model Testing Dashboard to monitor the units and model testing artifacts in a project. As you define artifacts such as requirements, design unit models, and run unit tests, the dashboard measures the traceability and completeness of the testing artifacts for each unit. After you complete model testing, open the SIL Code Testing or PIL Code Testing dashboards to monitor the software-in-the-loop (SIL) or processor-in-the-loop (PIL) code testing results, respectively. Use the metric results to add missing traceability links, fill testing gaps, and track your testing progress. You can also use the metric API to collect metric results programmatically, such as in a continuous integration system, and save the results in a report.
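
For example, the following is a minimal sketch of collecting a model testing metric programmatically with metric.Engine and saving the results in a report. The metric identifier and report file name are illustrative assumptions; call getAvailableMetricIds on the engine to list the identifiers available in your release.

    % Create a metric engine for the project that is currently open.
    metricEngine = metric.Engine();

    % Collect results for one model testing metric.
    % "TestCasesPerRequirementDistribution" is an example identifier.
    metricId = "TestCasesPerRequirementDistribution";
    execute(metricEngine, metricId);

    % Retrieve the collected metric.Result objects.
    results = getMetrics(metricEngine, metricId);

    % Save the results in an HTML report (the file name is an example).
    generateReport(metricEngine, "Type", "html-file", ...
        "Location", fullfile(pwd, "ModelTestingMetrics.html"));

You can run the same steps in a continuous integration pipeline to collect results and archive the generated report with each build.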

Apps

Model Testing Dashboard - Assess verification status and quality of your model and generated code (Since R2020b)

Classes


metric.Engine - Collect metric results (Since R2020b)
metric.Result - Results from specified metric (Since R2020b)

Functions


modelTestingDashboard - Open Model Testing Dashboard (Since R2020b)
metric.loadB2BResults - View comparison results from back-to-back testing metric (Since R2024a)
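
For instance, a short sketch of opening the dashboard from the MATLAB command line is shown below; the project name is a placeholder for your own project.

    % Open the project that contains your units and testing artifacts,
    % then open the Model Testing Dashboard for it.
    openProject("MyTestingProject");   % placeholder project name
    modelTestingDashboard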

Topics

Model Testing

Code Testing

Artifact Traceability