padv.builtin.task.CollectMetrics Class

Namespace: padv.builtin.task
Superclasses: padv.Task

Task for collecting model design and testing metrics

Description

This class requires CI/CD Automation for Simulink Check.

The padv.builtin.task.CollectMetrics class provides a task that can collect model design and testing metrics using the metric.Engine API for the Model Design and Model Testing Dashboards. By default, the task collects model maintainability metrics that can help you monitor the size, architecture, and complexity of the software units and components in your project. But you can reconfigure the task to collect model testing, SIL code testing, or PIL code testing metrics by using the Dashboard property to specify which dashboard you want to collect metrics for. You can add these tasks to your process model by using the method addTask. After you add the tasks to your process model, you can run the tasks from the Process Advisor app or by using the function runprocess.
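
For example, a minimal sketch of adding the default task inside a process model function (assuming your project already has a processmodel.m file):

function processmodel(pm)
    % pm is the padv.ProcessModel object for the project
    pm.addTask(padv.builtin.task.CollectMetrics());
end

Then run the task from the MATLAB Command Window; by default, the task name is the class name:

runprocess(Tasks="padv.builtin.task.CollectMetrics")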

To view the source code for this built-in task, in the MATLAB® Command Window, enter:

open padv.builtin.task.CollectMetrics

The padv.builtin.task.CollectMetrics class is a handle class.

Creation

Description

task = padv.builtin.task.CollectMetrics() creates a task for collecting model maintainability metrics like size, architecture, and complexity. These are the same metric results that the Model Maintainability Dashboard uses.


task = padv.builtin.task.CollectMetrics(Name=Value) sets certain properties using one or more name-value arguments. For example, task = padv.builtin.task.CollectMetrics(Name="MyCollectMetricsTask") creates a task with the specified name.

You can use this syntax to set property values for InputQueries, Name, IterationQuery, InputDependencyQuery, Licenses, LaunchToolAction, and LaunchToolText.

The padv.builtin.task.CollectMetrics class also has other properties, but you cannot set those properties during task creation.


Properties


The CollectMetrics class inherits properties from padv.Task. The properties listed in Specialized Inherited Properties are padv.Task properties that the CollectMetrics task overrides.

The task also has properties for specifying Metric Collection Options. The task uses these properties to specify input arguments for the getAvailableMetricIds, execute, and generateReport functions of the metric.Engine API.
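
For reference, this is a simplified sketch of the metric.Engine workflow that the task automates. It is illustrative, not the task's exact implementation, and the report type and location shown are assumptions:

metricEngine = metric.Engine();
metricIds = getAvailableMetricIds(metricEngine);  % metrics available for collection
execute(metricEngine, metricIds);                 % collect metric results for the project
generateReport(metricEngine, Type="html-file", Location=fullfile(pwd,"MetricsReport"));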

Specialized Inherited Properties

Name

Unique identifier for task in process, specified as a string.

Example: "MyCollectMetricsTask"

Data Types: string

Title

Human-readable name that appears in the Process Advisor app, specified as a string.

Example: "My Metric Collection Task"

Data Types: string

Description

Task description, specified as a string.

When you point to a task in Process Advisor and click the information icon, the tooltip shows the task description.

Example: "This task collects and reports metric data used by the model design and testing dashboards."

Data Types: string

DocumentationPath

Path to task documentation, specified as a string.

When you point to a task in Process Advisor, click the ellipsis (...), and click Help, Process Advisor opens the task documentation.

Example: fullfile(pwd,"taskHelpFiles","myTaskDocumentation.pdf")

Data Types: string

Type of artifact, specified as one or more of the values listed below, grouped by category. To specify multiple values, use an array.

MATLAB

  "m_class" - MATLAB class
  "m_file" - MATLAB file
  "m_func" - MATLAB function
  "m_method" - MATLAB class method
  "m_property" - MATLAB class property

Model Advisor

  "ma_config_file" - Model Advisor configuration file
  "ma_justification_file" - Model Advisor justification file

Other

  "coder_code_files" - Code files
  "other_file" - Other file

Process Advisor

  "padv_dep_artifacts" - Related artifacts that current artifact depends on
  "padv_output_file" - Process Advisor output file

Project

  "project" - Current project file

Requirements

  "mwreq_item" - Requirement (since R2024b)
  "sl_req" - Requirement (for R2024a and earlier)
  "sl_req_file" - Requirement file
  "sl_req_table" - Requirements Table

Stateflow®

  "sf_chart" - Stateflow chart
  "sf_graphical_fcn" - Stateflow graphical function
  "sf_group" - Stateflow group
  "sf_state" - Stateflow state
  "sf_state_transition_chart" - Stateflow state transition chart
  "sf_truth_table" - Stateflow truth table

Simulink®

  "sl_block_diagram" - Block diagram
  "sl_data_dictionary_file" - Data dictionary file
  "sl_embedded_matlab_fcn" - MATLAB function
  "sl_library_file" - Library file
  "sl_model_file" - Simulink model file
  "sl_protected_model_file" - Protected Simulink model file
  "sl_subsystem" - Subsystem
  "sl_subsystem_file" - Subsystem file

System Composer™

  "zc_block_diagram" - System Composer architecture
  "zc_component" - System Composer architecture component
  "zc_file" - System Composer architecture file

Tests

  "harness_info_file" - Harness info file
  "sl_harness_block_diagram" - Harness block diagram
  "sl_harness_file" - Test harness file
  "sl_test_case" - Simulink Test™ case
  "sl_test_case_result" - Simulink Test case result
  "sl_test_file" - Simulink Test file
  "sl_test_iteration" - Simulink Test iteration
  "sl_test_iteration_result" - Simulink Test iteration result
  "sl_test_report_file" - Simulink Test result report
  "sl_test_result_file" - Simulink Test result file
  "sl_test_resultset" - Simulink Test result set
  "sl_test_seq" - Test Sequence
  "sl_test_suite" - Simulink Test suite
  "sl_test_suite_result" - Simulink Test suite result

Example: "sl_model_file"

Example: ["sl_model_file" "zc_file"]

IterationQuery

Query that finds the artifacts that the task iterates over, specified as a padv.Query object or the name of a padv.Query object. When you specify IterationQuery, the task runs one time for each artifact returned by the query. In the Process Advisor app, the artifacts returned by IterationQuery appear under the task title.

For more information about task iterations, see Overview of Process Model.

Example: padv.builtin.query.FindUnits

InputDependencyQuery

Query that finds artifact dependencies for task inputs, specified as a padv.Query object or the name of a padv.Query object.

The build system runs the query specified by InputDependencyQuery to find the dependencies for the task inputs, because those dependencies can affect whether task results are up to date.

For more information about task inputs, see Overview of Process Model.

Example: padv.builtin.query.GetDependentArtifacts

LaunchToolAction

Function that launches a tool, specified as a function handle.

When you point to a task in the Process Advisor app, you can click the ellipsis (...) to see more options. For built-in tasks, you have the option to launch a tool associated with the task.

By default, the task CollectMetrics can launch the Model Maintainability Dashboard. If you specify the Dashboard property as a value other than "ModelMaintainability", the task can launch the Model Testing Dashboard instead.

Data Types: function_handle

LaunchToolText

Description of the action that the LaunchToolAction property performs, specified as a string.

Data Types: string

InputQueries

Inputs to the task, specified as:

  • a padv.Query object

  • the name of a padv.Query object

  • an array of padv.Query objects

  • an array of names of padv.Query objects

By default, the task CollectMetrics gets the current model that the task is iterating over by using the built-in query padv.builtin.query.GetIterationArtifact.
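
For example, a sketch that adds an extra input query alongside the default; the artifact type passed to the built-in FindArtifacts query is illustrative:

task = padv.builtin.task.CollectMetrics();
task.InputQueries = [padv.builtin.query.GetIterationArtifact(), ...
    padv.builtin.query.FindArtifacts(ArtifactType="sl_data_dictionary_file")];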

OutputDirectory

Location for standard task outputs, specified as a string.

The built-in tasks use tokens, like $DEFAULTOUTPUTDIR$, as placeholders for dynamic path resolution at run time. For more information, see Dynamically Resolve Paths with Tokens.

Data Types: string
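
For example, a sketch that redirects the standard task outputs to a project-relative folder by using the $PROJECTROOT$ token; the folder names are arbitrary:

task = padv.builtin.task.CollectMetrics();
task.OutputDirectory = fullfile("$PROJECTROOT$", "PA_Results", "metrics");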

Metric Collection Options

Dashboard

Dashboard metrics to collect, specified as one of these values:

  • "ModelMaintainability" — Analyze the size, architecture, and complexity of the MATLAB, Simulink, and Stateflow artifacts in your project by using the Model Maintainability Metrics.

  • "ModelUnitPILTesting" — Assess the quality and completeness of processor-in-the-loop (PIL) code testing by using the Code Testing Metrics. Collecting these metrics requires a Simulink Test license.

  • "ModelUnitSILTesting" — Assess the quality and completeness of software-in-the-loop (SIL) code testing by using the Code Testing Metrics. Collecting these metrics requires a Simulink Test license.

  • "ModelUnitTesting" — Assess the quality, traceability, and completeness of your models, requirements, tests, and test results by using the Model Testing Metrics. By default, collecting these metrics requires a Requirements Toolbox™ license and Simulink Test license. If you do not want to collect requirements metrics, you can specify the property IncludeRequirements as false. When IncludeRequirements is false, the task does not require a Requirements Toolbox license.

The task uses this property to get the available metrics using the function getAvailableMetricIds.

Note

If you specify a value other than "ModelMaintainability", make sure to specify the task iteration query as padv.builtin.query.FindUnits since you can only collect model testing and code testing metrics on units and not components.

Example: "ModelUnitTesting"

Filter metrics based on whether the associated MathWorks product is installed, specified as either:

  • 1 (true) — Only collect metrics associated with MathWorks products installed on the current machine.

  • 0 (false) — Try to collect metrics for each of the available metrics, even if the associated MathWorks products are not installed on the current machine.

Example: false

Data Types: logical

IncludeRequirements

Include requirements metrics in model testing metric results, specified as either:

  • 1 (true) — If you specified the property Dashboard as "ModelUnitTesting", the task includes requirements metrics in the model testing metric results. Collecting requirements metrics requires a Requirements Toolbox license.

  • 0 (false) — The task does not collect requirements metrics. The task excludes metrics where the metric ID contains the word requirement (case insensitive).

Example: false

Data Types: logical
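
For example, a sketch that collects model testing metrics while skipping requirements metrics, so that the task does not require a Requirements Toolbox license:

task = padv.builtin.task.CollectMetrics( ...
    Name="ModelTestingMetricsNoReqs", ...
    IterationQuery=padv.builtin.query.FindUnits);
task.Dashboard = "ModelUnitTesting";
task.IncludeRequirements = false;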

ReportPath

Path to report output by task, specified as a string.

The task generates the report by using the function generateReport.

Data Types: string

ReportName

Name of output report, specified as a string.

Data Types: string

ReportFormat

Format of output report, specified as either:

  • "pdf" — PDF file.

  • "html-file" — HTML report.

Example: "html-file"

Data Types: string

FilteredMetrics

List of metrics to filter out, specified as a string.

For example, if you are collecting model maintainability metrics (Dashboard property specified as "ModelMaintainability"), you can skip metric collection for a metric by specifying the value of FilteredMetrics as the metric ID for the metric.

Example: "slcomp.ComponentInterfaceSignals"

Data Types: string
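
For example, a sketch that skips collection of a single maintainability metric by specifying its metric ID:

task = padv.builtin.task.CollectMetrics();
task.FilteredMetrics = "slcomp.ComponentInterfaceSignals";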

Methods

The CollectMetrics class inherits methods from padv.Task.

Examples


Add a task that can collect model maintainability metrics using the metric.Engine API for the Model Maintainability Dashboard.

Open the process model for your project. If you do not have a process model, open the Process Advisor app to automatically create a process model.

In the process model file, add the CollectMetrics task to your process model by using the addTask method. In these examples, pm is the padv.ProcessModel object that the build system passes to your process model function. By default, the CollectMetrics task collects model maintainability metrics.

mmMetricTask = pm.addTask(padv.builtin.task.CollectMetrics());

You can reconfigure the task behavior by using the task properties. For example, to have the task return the generated metric results report as an HTML file instead of a PDF:

mmMetricTask.ReportFormat = "html-file";

By default, the CollectMetrics task collects model maintainability metrics. To collect different types of metrics, you can add multiple instances of the CollectMetrics task to the process and reconfigure those instances to collect different metrics. For example, you can add tasks for model testing, SIL code testing, and PIL code testing metrics.

Each task instance needs a unique value for the Name property. To specify which metrics you want the task to collect, use the Dashboard property of the task. Since the dashboards collect model testing and code testing metrics for units, and not components, you need to specify the IterationQuery as padv.builtin.query.FindUnits. The other changes to the task property values give the task instances unique titles in Process Advisor and unique names for the reports that the task generates.

%% Collect Model Testing Metrics
mtMetricTask = pm.addTask(padv.builtin.task.CollectMetrics(...
    Name="ModelTestingMetrics",...
    IterationQuery=padv.builtin.query.FindUnits));
mtMetricTask.Title = "Collect Model Testing Metrics";
mtMetricTask.Dashboard = "ModelUnitTesting";
mtMetricTask.ReportName = "$ITERATIONARTIFACT$_ModelTesting";

%% Collect SIL Code Testing Metrics
stMetricTask = pm.addTask(padv.builtin.task.CollectMetrics(...
    Name="SILTestingMetrics",...
    IterationQuery=padv.builtin.query.FindUnits));
stMetricTask.Title = "Collect SIL Code Testing Metrics";
stMetricTask.Dashboard = "ModelUnitSILTesting";
stMetricTask.ReportName = "$ITERATIONARTIFACT$_SILTesting";

%% Collect PIL Code Testing Metrics
ptMetricTask = pm.addTask(padv.builtin.task.CollectMetrics(...
    Name="PILTestingMetrics",...
    IterationQuery=padv.builtin.query.FindUnits));
ptMetricTask.Title = "Collect PIL Code Testing Metrics";
ptMetricTask.Dashboard = "ModelUnitPILTesting";
ptMetricTask.ReportName = "$ITERATIONARTIFACT$_PILTesting";

You can configure additional task options by using the other task properties.

To specify a preferred execution order for your tasks, you can use runsAfter. For example, suppose your process also has a task, mergeTestTask, that merges test results (such as the built-in task padv.builtin.task.MergeTestResults). To merge test results before collecting model testing, SIL code testing, and PIL code testing metrics:

mtMetricTask.runsAfter(mergeTestTask);
stMetricTask.runsAfter(mtMetricTask);
ptMetricTask.runsAfter(stMetricTask);