Collect Metrics on Model Testing Artifacts Programmatically
This example shows how to programmatically assess the status and quality of requirements-based testing activities in a project. When you develop software units by using Model-Based Design, you use requirements-based testing to verify your models. You can assess the testing status of one unit by using the metric API to collect metric data on the traceability between requirements and tests and on the status of test results. The metrics measure characteristics of completeness and quality of requirements-based testing that reflect industry standards such as ISO 26262 and DO-178. After collecting metric results, you can access the results or export them to a file. By running a script that collects these metrics, you can automatically analyze the testing status of your project to, for example, design a continuous integration system. Use the results to monitor testing completeness or to detect downstream testing impacts when you make changes to artifacts in the project.
Open the Project
Open a project that contains models and testing artifacts. For this example, in the MATLAB® Command Window, enter:
openExample("slcheck/ExploreTestingMetricDataInModelTestingDashboardExample"); openProject("cc_CruiseControl");
The example project contains models, along with requirements and tests for those models. Some of the requirements have traceability links to the models and tests, which help verify that the functionality of the models meets the requirements.
The example project also has the project setting Track tool outputs to detect outdated results enabled. Before you programmatically collect metrics, make sure that the Track tool outputs to detect outdated results setting is enabled for your project. For information, see Monitor Artifact Traceability and Detect Outdated Results with Digital Thread.
Collect Metric Results
Create a metric.Engine object for the current project.
metric_engine = metric.Engine();
Update the trace information for metric_engine to reflect pending artifact changes and to track the test results.
updateArtifacts(metric_engine);
Create an array of metric identifiers for the metrics you want to collect. For this example, create a list of the metric identifiers used in the Model Testing Dashboard. For more information, see getAvailableMetricIds.
metric_Ids = getAvailableMetricIds(metric_engine, ...
    'App', 'DashboardApp', ...
    'Dashboard', 'ModelUnitTesting');
For a list of model testing metrics, see Model Testing Metrics.
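Optionally, display the collected identifiers in the Command Window to confirm which metrics the engine will compute. This step assumes only that metric_Ids is returned as an array of identifiers, as shown above.
% Optional: list the metric identifiers that the engine will compute.
disp(metric_Ids(:))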
When you collect metric results, you can collect results for one unit at a time or for each unit in the project.
Collect Results for One Unit
When you collect and view results for a unit, the metrics return data for the artifacts that trace to the model.
Collect the metric results for the unit cc_DriverSwRequest.
Create an array that identifies the path to the model file in the project and the name of the model.
unit = {fullfile(pwd,'models','cc_DriverSwRequest.slx'),'cc_DriverSwRequest'};
Execute the engine and use 'ArtifactScope' to specify the unit for which you want to collect results. The engine runs the metrics on only the artifacts that trace to the model that you specify. Collecting results for these metrics requires a Simulink® Test™ license, a Requirements Toolbox™ license, and a Simulink Coverage™ license.
execute(metric_engine, metric_Ids, 'ArtifactScope', unit)
Collect Results for Each Unit in the Project
To collect the results for each unit in the project, execute the engine without the 'ArtifactScope' argument.
execute(metric_engine, metric_Ids)
For more information on collecting metric results, see the function execute.
Access Results
Generate a report file that contains the results for all units in the project. For this example, specify the HTML file format, use pwd to provide the path to the current folder, and name the report 'MetricResultsReport.html'.
reportLocation = fullfile(pwd, 'MetricResultsReport.html');
generateReport(metric_engine,'Type','html-file','Location',reportLocation);
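Optionally, open the generated report in the MATLAB web browser by passing the report path to the web function.
% Open the generated HTML report in the browser.
web(reportLocation)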
To open the table of contents and navigate to results for each unit, click the menu icon in the top-left corner of the report. For each unit in the report, there is an artifact summary table that displays the size and structure of that unit.
Saving the metric results in a report file allows you to access the results without opening the project and the dashboard. Alternatively, you can open the Model Testing Dashboard to see the results and explore the artifacts.
modelTestingDashboard
To access the results programmatically, use the getMetrics function. The function returns the metric.Result objects that contain the result data for the specified unit and metrics. For this example, store the results for the metrics slcomp.mt.TestStatus and TestCasesPerRequirementDistribution in corresponding arrays.
results_TestCasesPerReqDist = getMetrics(metric_engine, 'TestCasesPerRequirementDistribution');
results_TestStatus = getMetrics(metric_engine, 'slcomp.mt.TestStatus');
View Distribution of Test Links Per Requirement
The metric TestCasesPerRequirementDistribution returns a distribution of the number of tests linked to each functional requirement for the unit. You can use the fprintf function to show the bin edges and bin counts of the distribution, which are fields in the Value field of the metric.Result object.
The left edge of each bin shows the number of test links, and the bin count shows the number of requirements that are linked to that number of tests. The sixth bin edge is 18446744073709551615, the upper limit of the count of tests per requirement, so the fifth bin contains requirements that have four or more tests.
fprintf('Unit: %s\n', results_TestCasesPerReqDist(4).Scope(1).Name)
fprintf('Number of Tests:\t')
fprintf('%d\t', results_TestCasesPerReqDist(4).Value.BinEdges)
fprintf('\n Requirements:\t')
fprintf('%d\t', results_TestCasesPerReqDist(4).Value.BinCounts)
Unit: cc_ControlMode
Number of Tests:    0    1    2    3    4    1.844674e+19
 Requirements:      0    15   10   4    1
The results might be different on your machine, since the order of the units in the results can vary.
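Because the index of a given unit can change, one option is to look up the result for a unit by name instead of by position. This is a minimal sketch that assumes the unit name appears as the first scope name of each result, as in the fprintf example above.
% Find the distribution result whose scope matches a specific unit name.
unitName = 'cc_ControlMode';
idx = arrayfun(@(r) strcmp(r.Scope(1).Name, unitName), results_TestCasesPerReqDist);
result_ControlMode = results_TestCasesPerReqDist(idx);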
In these example results, the unit cc_ControlMode has 0 requirements that are not linked to tests, 15 requirements that are linked to one test, 10 requirements that are linked to two tests, 4 requirements that are linked to three tests, and 1 requirement that is linked to four tests. Each requirement should be linked to at least one test that verifies that the model meets the requirement. The distribution also allows you to check whether a requirement has many more tests than the other requirements, which might indicate that the requirement is too general and that you should break it into more granular requirements.
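For example, as a rough check in an automated workflow, you can count how many requirements fall in the last bin (four or more linked tests) from the bin counts. This sketch reuses the result_ControlMode variable from the lookup above.
% Count requirements with four or more linked tests (the last bin).
binCounts = result_ControlMode.Value.BinCounts;
fprintf('Requirements with four or more tests: %d of %d\n', ...
    binCounts(end), sum(binCounts));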
View Test Status Results
The metric slcomp.mt.TestStatus assesses the testing status of each test for the unit and returns one of these numeric results:
0 — Failed
1 — Passed
2 — Disabled
3 — Untested
Display the name and status of each test.
for n = 1:length(results_TestStatus)
    disp(['Test: ', results_TestStatus(n).Artifacts(1).Name])
    disp([' Status: ', num2str(results_TestStatus(n).Value)])
end
For this example, the tests have not been run, so each test returns a status of 3.
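In an automated workflow, you might summarize the statuses instead of listing each test. This is a minimal sketch that assumes each Value is one of the numeric codes listed above.
% Tally the number of tests in each status category.
statusValues = arrayfun(@(r) r.Value, results_TestStatus);
fprintf('Passed: %d, Failed: %d, Disabled: %d, Untested: %d\n', ...
    nnz(statusValues == 1), nnz(statusValues == 0), ...
    nnz(statusValues == 2), nnz(statusValues == 3));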
See Also
Model Testing Metrics | metric.Engine | execute | generateReport | getAvailableMetricIds | updateArtifacts