
instanceSegmentationMetrics

Instance segmentation quality metrics

Since R2022b

    Description

    An instanceSegmentationMetrics object stores instance segmentation quality metrics, such as the confusion matrix and average precision, for a set of images.

    Creation

    Create an instanceSegmentationMetrics object using the evaluateInstanceSegmentation function.
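
    For example, a minimal sketch of the workflow. The datastores dsResults and dsTruth are assumptions here, standing in for predicted results (such as the output of the segmentObjects function) and the corresponding ground truth:

        % Evaluate predicted instance masks against ground truth at two
        % overlap thresholds. dsResults and dsTruth are assumed to be
        % prepared elsewhere.
        metrics = evaluateInstanceSegmentation(dsResults,dsTruth,[0.5 0.75]);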

    Properties


    ConfusionMatrix — Confusion matrix

    This property is read-only.

    Confusion matrix, specified as a numeric matrix or numeric array.

    • When OverlapThreshold is a scalar, ConfusionMatrix is a square matrix of size C-by-C, where C is the number of classes. Each element (i, j) is the count of objects known to belong to class i but predicted to belong to class j.

    • When OverlapThreshold is a vector, ConfusionMatrix is an array of size C-by-C-by-numThresh. There is one confusion matrix for each of the numThresh overlap thresholds.
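
    When multiple overlap thresholds are specified, each page of the array holds the confusion matrix for one threshold. As a small sketch (assuming metrics was created with a vector of at least two thresholds):

        % Confusion matrix computed at the second overlap threshold
        cm2 = metrics.ConfusionMatrix(:,:,2);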

    NormalizedConfusionMatrix — Normalized confusion matrix

    This property is read-only.

    Normalized confusion matrix, specified as a numeric matrix or numeric array with elements in the range [0, 1]. NormalizedConfusionMatrix is the confusion matrix normalized by the number of objects known to belong to each class. For each overlap threshold, each element (i, j) of the normalized confusion matrix is the count of objects known to belong to class i but predicted to belong to class j, divided by the total number of objects known to belong to class i.
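
    As a sketch of this normalization (an illustration, not code from the original page), dividing each row of the raw confusion matrix by its row sum reproduces the normalized matrix for a scalar overlap threshold:

        % Row-normalize the raw confusion matrix: each row is divided by
        % the number of ground truth objects in that class.
        rowCounts = sum(metrics.ConfusionMatrix,2);
        normCM = metrics.ConfusionMatrix ./ rowCounts;  % implicit expansion
        % normCM should match metrics.NormalizedConfusionMatrix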

    DataSetMetrics — Metrics aggregated over the data set

    This property is read-only.

    Metrics aggregated over the data set, specified as a table with one row. DataSetMetrics has two columns corresponding to these instance segmentation metrics:

    • mAP — Mean average precision, or the average precision values averaged over all overlap thresholds specified in the threshold argument.

    • AP — Average precision (AP) averaged over all classes at each overlap threshold specified in OverlapThreshold, returned as a numThresh-by-1 array, where numThresh is the number of overlap thresholds.
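
    A brief sketch of reading these values (assuming metrics exists):

        % One-row table of aggregate metrics
        dsMetrics = metrics.DataSetMetrics;
        overallMAP = dsMetrics.mAP   % scalar mean average precision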

    ClassMetrics — Metrics for each class

    This property is read-only.

    Metrics for each class, specified as a table with C rows, where C is the number of classes in the instance segmentation. ClassMetrics has four columns, corresponding to these instance segmentation metrics:

    • mAP — Mean average precision for a class, calculated by averaging the AP values over all overlap thresholds specified in the threshold argument.

    • AP — Average precision calculated for a class at each overlap threshold in OverlapThreshold, returned as a numThresh-by-1 array, where numThresh is the number of overlap thresholds.

    • Precision — Precision values, returned as a numThresh-by-(numPredictions+1) matrix, where numPredictions is the number of predicted object masks. Precision is the ratio of the number of true positives (TP) to the total number of predicted positives.

      Precision = TP / (TP + FP)

      FP is the number of false positives. Larger precision scores imply that most detected objects match ground truth objects.

    • Recall — Recall values, returned as a numThresh-by-(numPredictions+1) matrix, where numPredictions is the number of predicted object masks. Recall is the ratio of the number of true positives (TP) to the total number of ground truth positives.

      Recall = TP / (TP + FN)

      FN is the number of false negatives. Larger recall scores imply that most ground truth objects are detected. (See the plotting sketch after this list.)
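
    For example, a hedged sketch of plotting a precision-recall curve for the first class at the first overlap threshold, assuming the Precision and Recall table variables store one matrix per class in cell entries:

        % Extract precision and recall for the first class
        prec = metrics.ClassMetrics.Precision{1};  % numThresh-by-(numPredictions+1)
        rec  = metrics.ClassMetrics.Recall{1};
        plot(rec(1,:),prec(1,:))                   % curve at the first threshold
        xlabel("Recall")
        ylabel("Precision")
        title("Precision-Recall Curve")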

    ImageMetrics — Metrics for each image in the data set

    This property is read-only.

    Metrics for each image in the data set, specified as a table with numImages rows, where numImages is the number of images in the data set. ImageMetrics has two columns, corresponding to these instance segmentation metrics:

    • mAP — Mean average precision for one image, or the average precision values averaged over all overlap thresholds specified in the threshold argument.

    • AP — Average precision for one image, averaged over all classes at each overlap threshold in OverlapThreshold, returned as a numThresh-by-1 array, where numThresh is the number of overlap thresholds.
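
    Per-image metrics are useful for locating images on which the segmentation performs poorly. A small sketch (assuming metrics exists):

        % Rank images from lowest to highest per-image mAP
        [sortedMAP,idx] = sort(metrics.ImageMetrics.mAP);
        worstFive = idx(1:min(5,numel(idx)))   % indices of the weakest images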

    ClassNames — Class names of segmented objects

    Class names of segmented objects, specified as a string array.

    Example: ["sky" "grass" "building" "sidewalk"]

    OverlapThreshold — Overlap threshold

    Overlap threshold, specified as a numeric scalar or numeric vector. When the intersection over union (IoU) of the pixels in the predicted object mask and ground truth object mask is equal to or greater than the overlap threshold, the prediction is considered a true positive.

    IoU, or the Jaccard Index, is the number of pixels in the intersection of the binary masks divided by the number of pixels in the union of the masks. In other words, IoU is the ratio of correctly classified pixels to the total number of pixels that are assigned that class by the ground truth and the predictor. IoU can be expressed as:

    IoU = TP / (TP + FP + FN)
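
    As a worked example of this formula (an illustration, not code from the original page), the IoU of two binary masks follows directly from their intersection and union:

        % IoU (Jaccard index) of two overlapping square masks
        A = false(100); A(20:60,20:60) = true;
        B = false(100); B(40:80,40:80) = true;
        iou = nnz(A & B) / nnz(A | B)   % about 0.151 for these masks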

    Object Functions

    metricsByArea — Evaluate instance segmentation across object mask size ranges

    Version History

    Introduced in R2022b