detect

Detect objects using YOLOX object detector

Since R2023b

Description

bboxes = detect(detector,I) detects objects within a single image or a batch of images, I, using a YOLOX object detector, detector. The detect function returns the locations of objects detected in the input image as a set of bounding boxes.

Note

This functionality requires Deep Learning Toolbox™ and the Automated Visual Inspection Library for Computer Vision Toolbox™. You can install the Automated Visual Inspection Library for Computer Vision Toolbox from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.

[bboxes,scores] = detect(detector,I) also returns the class-specific confidence scores for each bounding box.

[bboxes,scores,labels] = detect(detector,I) returns a categorical array of labels assigned to the bounding boxes. You must define the labels for object classes during training.

[bboxes,scores,labels,info] = detect(detector,I) also returns information about the class probabilities and objectness scores for each detection.

detectionResults = detect(detector,ds) detects objects within all the images returned by the read function of the input datastore, ds, and returns the results as a table.

[___] = detect(___,roi) detects objects within the rectangular search region roi, in addition to any combination of arguments from previous syntaxes.

[___] = detect(___,Name=Value) specifies options using one or more name-value arguments, in addition to any combination of arguments from previous syntaxes. For example, Threshold=0.25 specifies a detection threshold of 0.25.

Examples

Detect Objects Using Pretrained YOLOX Network

Specify the name of a pretrained YOLOX deep learning network.

name = "tiny-coco";

Create a YOLOX object detector by using the pretrained YOLOX network.

detector = yoloxObjectDetector(name);

Detect objects in a test image by using the pretrained YOLOX object detector.

img = imread("tima.png");
[bboxes,scores,labels] = detect(detector,img,Threshold=0.6)
bboxes = 1×4 single row vector

  185.1392  255.8597  119.6875  217.3187

scores = single

    0.7775

labels = categorical

     cat 

Display the detection results.

detectedImg = insertObjectAnnotation(img,"Rectangle",bboxes,labels);
figure
imshow(detectedImg)

Detect Objects in Datastore of Images

Load a pretrained YOLOX object detector.

detector = yoloxObjectDetector("small-coco");

Create an image datastore for the test images.

location = fullfile(matlabroot,"toolbox","vision","visiondata","vehicles");
imds = imageDatastore(location);

Detect objects in the test datastore. Set the Threshold name-value argument to 0.4 and the MiniBatchSize name-value argument to 32.

detectionResults = detect(detector,imds,Threshold=0.4,MiniBatchSize=32);

Read an image from the test dataset and extract the corresponding detection results.

num = 20;
I = readimage(imds,num);
bboxes = detectionResults.Boxes{num};
labels = detectionResults.Labels{num};
scores = detectionResults.Scores{num};

Perform nonmaximal suppression to select the strongest bounding boxes from the overlapping clusters. Set the OverlapThreshold name-value argument to 0.5.

[bboxes,scores,labels] = selectStrongestBboxMulticlass(bboxes,...
                              scores,labels,OverlapThreshold=0.5);

Display the detection results.

results = table(bboxes,labels,scores)
results=5×3 table
                   bboxes                   labels    scores 
    ____________________________________    ______    _______

    2.0755    69.251    16.852    9.0757     car      0.61246
    19.219    70.205    21.257    10.847     car      0.77888
    75.165    65.773    25.769    23.227     car      0.75951
    96.479    54.215    16.175    24.654     bus      0.67867
         1    104.91    225.57    22.663     car      0.43216

detectedImg = insertObjectAnnotation(I,"Rectangle",bboxes,labels);
figure
imshow(detectedImg)

Detect Objects Within Region of Interest

Load a pretrained YOLOX object detector.

detector = yoloxObjectDetector("small-coco");

Read a test image.

img = imread("aruba.png");

Specify a region of interest (ROI) within the test image.

roiBox = [250 180 300 250];

Detect objects within the specified ROI.

[bboxes,scores,labels] = detect(detector,img,roiBox,Threshold=0.55);

Display the ROI and the detection results.

img = insertObjectAnnotation(img,"Rectangle",roiBox,"ROI",AnnotationColor="yellow");
detectedImg = insertObjectAnnotation(img,"Rectangle",bboxes,labels);
figure
imshow(detectedImg)

Input Arguments

detector — YOLOX object detector
yoloxObjectDetector object

YOLOX object detector, specified as a yoloxObjectDetector object.

I — Test images
numeric array

Test images, specified as a numeric array of size H-by-W-by-C or H-by-W-by-C-by-B. You must specify real and nonsparse grayscale or RGB images.

  • H — Height of the input images.

  • W — Width of the input images.

  • C — Number of channels. The channel size of each image must be equal to the input channel size of the network. For example, for grayscale images, C must be 1. For RGB color images, it must be 3.

  • B — Number of test images in the batch. The detect function computes the object detection results for each test image in the batch.

When the test image size does not match the network input size, the detector resizes the input image to the value of the InputSize property of detector, unless you specify AutoResize as false.

Data Types: uint8 | uint16 | int16 | double | single

ds — Datastore of test images
datastore object

Datastore of test images, specified as an imageDatastore object, CombinedDatastore object, or TransformedDatastore object containing full filenames of the test images. The images in the datastore must be grayscale or RGB images.

roi — Region of interest to search
four-element vector of the form [x y width height]

Region of interest (ROI) to search, specified as a vector of the form [x y width height]. The vector specifies the upper-left corner and size of a region, in pixels. If the input data is a datastore, the detect function applies the same ROI to every image.

Note

You can specify the ROI to search only when the detect function automatically resizes the input test images to the network input size. To use roi, leave AutoResize set to its default value of true.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: detect(detector,I,Threshold=0.25) specifies a detection threshold of 0.25.

Threshold — Detection threshold
scalar in the range [0, 1]

Detection threshold, specified as a scalar in the range [0, 1]. The function removes detections that have scores less than this threshold value. To reduce false positives, increase this value, at the possible expense of missing some objects.

SelectStrongest — Strongest bounding box selection
true (default) | false

Strongest bounding box selection for each detected object, specified as a numeric or logical 1 (true) or 0 (false).

  • true — Return the strongest bounding box for each object. The detect function calls the selectStrongestBboxMulticlass function, which uses nonmaximal suppression to eliminate overlapping bounding boxes based on their confidence scores.

By default, the detect function uses this call to the selectStrongestBboxMulticlass function:

     selectStrongestBboxMulticlass(bboxes,scores, ...
                                   RatioType="Union", ...
                                   OverlapThreshold=0.45);

  • false — Return all the detected bounding boxes. You can then write a custom function to eliminate overlapping bounding boxes, as in the sketch below.
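For example, this sketch returns the raw detections and then applies selectStrongestBboxMulticlass manually with a stricter overlap threshold. The detector name, test image, and threshold value are illustrative assumptions, not requirements.

detector = yoloxObjectDetector("tiny-coco");
img = imread("peppers.png");

% Return all detections without the built-in suppression step.
[bboxes,scores,labels] = detect(detector,img,SelectStrongest=false);

% Custom suppression: use a stricter overlap threshold than the
% default value of 0.45.
[bboxes,scores,labels] = selectStrongestBboxMulticlass(bboxes,scores,labels, ...
    RatioType="Union",OverlapThreshold=0.3);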

MinSize — Minimum region size
[height width] vector

Minimum region size containing an object, specified as a vector of the form [height width]. Units are in pixels. The minimum region size defines the size of the smallest object that can be detected by the trained network. When the minimum size is known, you can reduce computation time by setting MinSize to that value.

MaxSize — Maximum region size
[height width] vector

Maximum region size, specified as a vector of the form [height width]. Units are in pixels. The maximum region size defines the size of the largest object that can be detected by the trained network.

By default, MaxSize is set to the height and width of the input image I. To reduce computation time, set this value to the known maximum region size for the objects that can be detected in the input test image.
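For example, this sketch bounds detections to objects between 32-by-32 and 256-by-256 pixels. The size limits, detector name, and test image are illustrative assumptions for a specific application.

detector = yoloxObjectDetector("small-coco");
img = imread("peppers.png");

% Ignore regions smaller than 32-by-32 or larger than 256-by-256 pixels.
[bboxes,scores,labels] = detect(detector,img, ...
    MinSize=[32 32],MaxSize=[256 256]);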

MiniBatchSize — Minibatch size
positive integer

Minibatch size, specified as a positive integer. Adjust the MiniBatchSize value to help process a large collection of images. The detect function groups images into minibatches of the specified size and processes them as a batch, which can improve computational efficiency at the cost of increased memory demand. Increase the minibatch size to decrease processing time. Decrease the minibatch size to use less memory.

AutoResize — Automatic resizing of input images
true (default) | false

Automatic resizing of input images to preserve the aspect ratio, specified as a numeric or logical 1 (true) or 0 (false). When AutoResize is true, the detect function resizes images to the nearest network input size, specified by the InputSize property of detector, and preserves the aspect ratio. Set AutoResize to false when performing tiling-based training or inference at the full test image size.
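For example, a minimal sketch of running detection on a full-resolution image without automatic resizing; the detector name and test image are illustrative assumptions.

detector = yoloxObjectDetector("small-coco");
img = imread("peppers.png");

% Process the image at its original size instead of resizing it to the
% network input size.
[bboxes,scores,labels] = detect(detector,img,AutoResize=false);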

ExecutionEnvironment — Hardware resource
"auto" (default) | "gpu" | "cpu"

Hardware resource on which to run the detector, specified as one of these values:

  • "auto" — Use a GPU if Parallel Computing Toolbox™ is installed and a supported GPU device is available. Otherwise, use the CPU.

  • "gpu" — Use the GPU. To use a GPU, you must have Parallel Computing Toolbox and a CUDA®-enabled NVIDIA® GPU. If a suitable GPU is not available, the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

  • "cpu" — Use the CPU.

Acceleration — Performance optimization
"auto" (default) | "mex" | "none"

Performance optimization, specified as one of these options:

  • "auto" — Automatically apply a number of compatible optimizations suitable for the input network and hardware resource.

  • "mex" — Compile and execute a MEX function. This option is available only when using a GPU. Using a GPU requires Parallel Computing Toolbox and a CUDA enabled NVIDIA GPU. If Parallel Computing Toolbox or a suitable GPU is not available, then the detect function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

  • "none" — Disable all acceleration.

Using the Acceleration options "auto" and "mex" can offer performance benefits on subsequent calls with compatible parameters, at the expense of an increased initial run time. Use performance optimization when you plan to call the function multiple times using new input data.

The "mex" option generates and executes a MEX function based on the network and parameters used in the function call. You can have several MEX functions associated with a single network at one time. Clearing the network variable also clears any MEX functions associated with that network.

The "mex" option is available only for input data specified as a numeric array, cell array of numeric arrays, table, or image datastore. No other types of datastore support the "mex" option.

The "mex" option is available only when you are using a GPU. You must also have a C/C++ compiler installed. For setup instructions, see MEX Setup (GPU Coder).

"mex" acceleration does not support all layers. For a list of supported layers, see Supported Layers (GPU Coder).

Output Arguments

bboxes — Locations of detected objects
M-by-4 matrix | B-by-1 cell array

Locations of objects detected within the input image or images, returned as one of these options:

  • M-by-4 matrix — The input is a single test image. M is the number of bounding boxes detected in an image.

  • B-by-1 cell array — The input is a batch of images, where B is the number of test images in the batch. Each cell in the array contains an M-by-4 matrix specifying the detected bounding boxes.
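For example, a minimal sketch of batch input and the resulting cell array output, assuming two same-size RGB test images; the detector name and images are illustrative.

detector = yoloxObjectDetector("tiny-coco");
img1 = imread("peppers.png");
img2 = flip(img1,2);               % second image of the same size

% Stack same-size images along the fourth dimension to form a batch.
batch = cat(4,img1,img2);

bboxes = detect(detector,batch);   % 2-by-1 cell array
bboxesImg1 = bboxes{1};            % M-by-4 matrix for the first image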

scores — Detection confidence scores
M-by-1 numeric vector | B-by-1 cell array

Detection confidence scores for each bounding box, returned as one of these options:

  • M-by-1 numeric vector — The input is a single test image. M is the number of bounding boxes detected in the image.

  • B-by-1 cell array — The input is a batch of test images, where B is the number of test images in the batch. Each cell in the array contains an M-element row vector, where each element indicates the detection score for a bounding box in the corresponding image.

A higher score indicates higher confidence in the detection. The confidence score for each detection is the product of the corresponding objectness score and the maximum class probability. The objectness score is the probability that the bounding box contains an object belonging to one of the classes in the image. The maximum class probability is the largest probability that the detected object in the bounding box belongs to a particular class.

labels — Labels for bounding boxes
M-by-1 categorical vector | B-by-1 cell array

Labels for bounding boxes, returned as one of these options:

  • M-by-1 categorical vector — The input is a single test image. M is the number of bounding boxes detected in an image.

  • B-by-1 cell array — The input is an array of test images. B is the number of test images in the batch. Each cell in the array contains an M-by-1 categorical vector containing the names of the object classes.

detectionResults — Detection results
table

Detection results when the input is a datastore of test images, ds, returned as a table with these columns:

  • bboxes — Predicted bounding boxes, defined in spatial coordinates as an M-by-4 numeric matrix with rows of the form [x y w h], where:

      • M is the number of axis-aligned rectangles.

      • x and y specify the upper-left corner of the rectangle.

      • w specifies the width of the rectangle, which is its length along the x-axis.

      • h specifies the height of the rectangle, which is its length along the y-axis.

  • scores — Class-specific confidence scores in the range [0, 1] for each bounding box, returned as an M-by-1 numeric vector.

  • labels — Predicted object labels assigned to the bounding boxes, returned as an M-by-1 categorical vector. All categorical data returned by the datastore must contain the same categories.
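The table form is convenient for downstream evaluation. For example, a sketch of scoring the results with the evaluateObjectDetection function, where groundTruthData is a hypothetical datastore of ground truth boxes and labels for the same test images:

% groundTruthData is assumed to return, image for image, the ground truth
% boxes and labels that correspond to the test datastore.
metrics = evaluateObjectDetection(detectionResults,groundTruthData);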

info — Class probabilities and objectness scores
structure array

Class probabilities and objectness scores of the detections, returned as a structure array with these fields:

  • ClassProbabilities — Class probabilities for each of the detections, returned as a B-by-1 cell array. B is the number of images in the input batch of images, I. Each cell in the array contains the class probabilities as an M-by-N numeric matrix. M is the number of bounding boxes and N is the number of classes. Each class probability is a numeric scalar, indicating the probability that the detected object in the bounding box belongs to a class in the image.

  • ObjectnessScores — Objectness scores for each of the detections, returned as a B-by-1 cell array. B is the number of images in the input batch of images, I. Each cell in the array contains the objectness score for each bounding box as an M-by-1 numeric vector. M is the number of bounding boxes. Each objectness score is a numeric scalar, indicating the probability that the bounding box contains an object belonging to one of the classes in the image.
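As described for the scores output, each confidence score is the product of the objectness score and the maximum class probability. This sketch recovers the scores from the info output for a single test image (batch index 1); the detector name and image are illustrative assumptions.

detector = yoloxObjectDetector("tiny-coco");
img = imread("peppers.png");

[bboxes,scores,labels,info] = detect(detector,img);

classProbs = info.ClassProbabilities{1};   % M-by-N class probabilities
objectness = info.ObjectnessScores{1};     % M-by-1 objectness scores

% Equals the scores output, up to floating-point rounding.
recomputed = objectness .* max(classProbs,[],2);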

Version History

Introduced in R2023b
