Main Content

averagePrecision

Evaluate average precision metric of instance segmentation results

Since R2024b

Description

ap = averagePrecision(metrics) evaluates the average precision (AP) for all classes and overlap thresholds of metrics.

AP aggregates the precision across different recall levels, providing a single metric to assess the overall ability of an object detector to identify objects accurately while minimizing false detections.

ap = averagePrecision(metrics,Name=Value) specifies options for the average precision evaluation using one or more name-value arguments. For example, ClassNames=["cars" "people"] specifies to evaluate the average precision metric for the cars and people classes.
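A minimal usage sketch, assuming the metrics object comes from a function such as evaluateInstanceSegmentation; the datastores dsResults and dsTruth are illustrative placeholders, not part of this page:

```matlab
% Hypothetical setup: dsResults holds predicted instance masks and scores,
% dsTruth holds the corresponding ground truth masks and labels.
metrics = evaluateInstanceSegmentation(dsResults,dsTruth,0.5);

% AP for all classes and all overlap thresholds stored in metrics.
ap = averagePrecision(metrics);

% AP restricted to two classes using a name-value argument.
apSubset = averagePrecision(metrics,ClassNames=["cars" "people"]);
```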

Input Arguments

metrics — Instance segmentation performance metrics
instanceSegmentationMetrics object

Instance segmentation performance metrics, specified as an instanceSegmentationMetrics object.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: averagePrecision(metrics,ClassNames=["cars" "people"]) specifies to evaluate the average precision metric for the cars and people classes.

Class names of segmented objects, specified as an array of strings or a cell array of character vectors. By default, the averagePrecision function returns the average precision metrics for all classes specified by the ClassNames property of the instanceSegmentationMetrics object metrics.

Overlap threshold to use for evaluating the average precision, specified as a numeric scalar or a numeric vector of overlap threshold values. To evaluate multiple overlap thresholds, specify this argument as a numeric vector. By default, the averagePrecision object function returns the average precision metrics for all overlap thresholds specified by the OverlapThreshold property of the instanceSegmentationMetrics object metrics.
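As a sketch, AP can be evaluated at several overlap thresholds at once by passing a vector; the thresholds shown here (0.5 and 0.75) are illustrative and must be among those stored in the OverlapThreshold property of metrics:

```matlab
% AP at two overlap thresholds. The result has one column per threshold.
ap = averagePrecision(metrics,OverlapThreshold=[0.5 0.75]);
```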

Output Arguments


Average precision (AP) for the specified classes and overlap thresholds, returned as an M-by-N matrix. M is the number of classes in the ClassNames property, and N is the number of overlap thresholds specified by OverlapThreshold.
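To illustrate how the output matrix is organized (assuming a call with two classes and two overlap thresholds, so ap is 2-by-2 with rows ordered by class and columns ordered by threshold):

```matlab
ap = averagePrecision(metrics, ...
    ClassNames=["cars" "people"],OverlapThreshold=[0.5 0.75]);

% Rows correspond to classes, columns to overlap thresholds.
apCarsAt50 = ap(1,1);    % AP of "cars" at overlap threshold 0.5
apPeopleAt75 = ap(2,2);  % AP of "people" at overlap threshold 0.75
```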

The AP metric evaluates instance segmentation performance by quantifying the accuracy of the model in identifying object instances across different confidence thresholds, enabling you to assess both the precision (correctness of detections) and recall (completeness of detections).

Version History

Introduced in R2024b