detect
Syntax
bboxes = detect(detector,I)
[bboxes,scores] = detect(detector,I)
[___,labels] = detect(detector,I)
[___] = detect(___,roi)
detectionResults = detect(detector,ds)
[___] = detect(___,Name,Value)
Description
bboxes = detect(detector,I) detects objects within a single image or an array of images, I, using a single shot multibox detector (SSD). The locations of detected objects are returned as a set of bounding boxes.
When using this function, use of a CUDA® enabled NVIDIA® GPU is highly recommended. The GPU reduces computation time significantly. Usage of the GPU requires Parallel Computing Toolbox™. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).
[bboxes,scores] = detect(detector,I) also returns the detection scores for each bounding box.
[___,labels] = detect(detector,I) also returns a categorical array of labels assigned to the bounding boxes, using either of the preceding syntaxes. The labels used for object classes are defined during training using the trainSSDObjectDetector function.
[___] = detect(___,roi) detects objects within the rectangular search region specified by roi.
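For illustration, a minimal sketch of this syntax, assuming detector is an ssdObjectDetector already in the workspace; the region coordinates are hypothetical values that must lie within the image bounds.
I = imread("highway.png");
roi = [50 50 200 150];                    % [x y width height], in pixels (illustrative values)
[bboxes,scores,labels] = detect(detector,I,roi);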
detectionResults = detect(detector,ds) detects objects within the series of images returned by the read function of the input datastore.
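For illustration, a minimal sketch of the datastore syntax, assuming detector exists in the workspace and "testImages" is a hypothetical folder of test images:
ds = imageDatastore("testImages");        % one image per file in the folder
detectionResults = detect(detector,ds);   % table with Boxes, Scores, and Labels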
[___] = detect(___,Name,Value) specifies options using one or more Name,Value pair arguments. For example, detect(detector,I,"Threshold",0.75) sets the detection score threshold to 0.75. Any detections with a lower score are removed.
Examples
Detect Vehicles Using SSD Object Detector
Load a pretrained single shot detector (SSD) object to detect vehicles in an image. The detector is trained with images of cars on a highway scene.
vehicleDetector = load("ssdVehicleDetector.mat","detector");
detector = vehicleDetector.detector;
Read a test image into the workspace.
I = imread("highway.png");
Display the test image.
imshow(I)
Run the pretrained SSD object detector by using the detect
function. The output contains the bounding boxes, scores, and the labels for vehicles detected in the image. The labels are derived from the ClassNames
property of the detector.
[bboxes,scores,labels] = detect(detector,I)
bboxes = 2×4
139 78 96 81
99 67 165 146
scores = 2×1 single column vector
0.8349
0.6302
labels = 2×1 categorical
vehicle
vehicle
Annotate the image with the detection results.
if ~isempty(bboxes)
    detectedI = insertObjectAnnotation(I,"rectangle",bboxes,cellstr(labels));
else
    detectedI = insertText(I,[10 10],"No Detections");
end
imshow(detectedI)
Input Arguments
detector — SSD object detector
ssdObjectDetector object
SSD object detector, specified as an ssdObjectDetector object. To create this object, call the trainSSDObjectDetector function with training data as input.
I — Input image
H-by-W-by-C-by-B numeric array of images
Input image, specified as an H-by-W-by-C-by-B numeric array of images. Images must be real, nonsparse, grayscale or RGB.
H — Height in pixels.
W — Width in pixels.
C — The channel size in each image must be equal to the network's input channel size. For example, for grayscale images, C must be equal to 1. For RGB color images, it must be equal to 3.
B — Number of images in the array.
The detector is sensitive to the range of the input image. Therefore, ensure that the input
image range is similar to the range of the images used to train the detector. For
example, if the detector was trained on uint8
images, rescale
this input image to the range [0, 255] by using the im2uint8
or rescale
function. The size of this input image should be comparable
to the sizes of the images used in training. If these sizes are very different, the
detector has difficulty detecting objects because the scale of the objects in the
input image differs from the scale of the objects the detector was trained to
identify. Consider whether you used the SmallestImageDimension
property during training to modify the size of training images.
Data Types: uint8 | uint16 | int16 | double | single | logical
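As an illustrative sketch of matching the input range to the training range, assuming the detector was trained on uint8 images and "testImage.png" is a hypothetical file:
I = imread("testImage.png");              % hypothetical test image
if ~isa(I,"uint8")
    I = im2uint8(I);                      % rescale values to the range [0, 255]
end
[bboxes,scores] = detect(detector,I);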
ds — Datastore
datastore object
Datastore, specified as a datastore
object containing a
collection of images. Each image must be a grayscale, RGB, or multichannel image.
The function processes only the first column of the datastore, which must contain
images and must be cell arrays or tables with multiple columns.
roi — Search region of interest
[x y width height] vector
Search region of interest, specified as a four-element vector of the form [x y width height]. The vector specifies the upper left corner and size of a region in pixels.
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: "SelectStrongest",true
Threshold — Detection threshold
0.5 (default) | scalar
Detection threshold, specified as a scalar in the range [0, 1]. Detections that have scores less than this threshold value are removed. To reduce false positives, increase this value.
SelectStrongest — Select strongest bounding box
true (default) | false
Select the strongest bounding box for each detected object, specified as true or false.
true — Return the strongest bounding box per object. To select these boxes, detect calls the selectStrongestBboxMulticlass function, which uses nonmaximal suppression to eliminate overlapping bounding boxes based on their confidence scores. For example:
selectStrongestBboxMulticlass(bbox,scores, ...
    "RatioType","Union", ...
    "OverlapThreshold",0.5);
false — Return all detected bounding boxes. You can then create your own custom operation to eliminate overlapping bounding boxes, as sketched below.
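A minimal sketch of this workflow, assuming I and detector exist in the workspace; the "Min" ratio type and 0.4 overlap threshold are illustrative choices, not defaults:
[bboxes,scores,labels] = detect(detector,I,"SelectStrongest",false);
[bboxes,scores,labels] = selectStrongestBboxMulticlass(bboxes,scores,labels, ...
    "RatioType","Min","OverlapThreshold",0.4);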
MaxSize — Maximum region size
size(I) (default) | [height width] vector
Maximum region size that contains a detected object, specified as a [height width] vector. Units are in pixels.
To reduce computation time, set this value to the known maximum region size for the objects being detected in the image. By default, MaxSize is set to the height and width of the input image, I.
MinSize — Minimum region size
[1 1] (default) | [height width] vector
Minimum region size that contains a detected object, specified as a [height width] vector. Units are in pixels.
To reduce computation time, set this value to the known minimum region size for the objects being detected in the image. By default, MinSize is set to [1 1].
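For example, a minimal sketch that bounds the expected object size; the pixel values below are hypothetical and depend on your images:
[bboxes,scores] = detect(detector,I, ...
    "MinSize",[32 32],"MaxSize",[256 256]);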
MiniBatchSize — Minimum batch size
128 (default) | scalar
Minimum batch size, specified as a scalar value. Use the MiniBatchSize argument to process a large collection of images. Images are grouped into minibatches and processed as a batch to improve computational efficiency. Increase the minibatch size to decrease processing time. Decrease the size to use less memory.
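For example, a minimal sketch that lowers the batch size to reduce memory use when detecting over a datastore ds; the value 32 is illustrative:
detectionResults = detect(detector,ds,"MiniBatchSize",32);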
ExecutionEnvironment — Hardware resource
"auto" (default) | "gpu" | "cpu"
Hardware resource on which to run the detector, specified as "auto", "gpu", or "cpu".
"auto" — Use a GPU if it is available. Otherwise, use the CPU.
"gpu" — Use the GPU. To use a GPU, you must have Parallel Computing Toolbox and a CUDA enabled NVIDIA GPU. If a suitable GPU is not available, the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).
"cpu" — Use the CPU.
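For example, a minimal sketch that forces CPU execution, such as when no supported GPU is available:
[bboxes,scores] = detect(detector,I,"ExecutionEnvironment","cpu");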
Acceleration — Performance optimization
"auto" (default) | "mex" | "none"
Performance optimization, specified as one of the following:
"auto" — Automatically apply a number of optimizations suitable for the input network and hardware resource.
"mex" — Compile and execute a MEX function. This option is available only when using a GPU. Using a GPU requires Parallel Computing Toolbox and a CUDA enabled NVIDIA GPU. If Parallel Computing Toolbox or a suitable GPU is not available, then the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).
"none" — Disable all acceleration.
The default option is "auto". If "auto" is specified, MATLAB® applies a number of compatible optimizations. If you use the "auto" option, MATLAB never generates a MEX function.
Using the Acceleration options "auto" and "mex" can offer performance benefits, but at the expense of an increased initial run time. Subsequent calls with compatible parameters are faster. Use performance optimization when you plan to call the function multiple times using new input data.
The "mex" option generates and executes a MEX function based on the network and parameters used in the function call. You can have several MEX functions associated with a single network at one time. Clearing the network variable also clears any MEX functions associated with that network.
The "mex" option is only available for input data specified as a numeric array, cell array of numeric arrays, table, or image datastore. No other types of datastore support the "mex" option.
The "mex" option is only available when you are using a GPU. You must also have a C/C++ compiler installed. For setup instructions, see MEX Setup (GPU Coder).
"mex" acceleration does not support all layers. For a list of supported layers, see Supported Layers (GPU Coder).
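A minimal sketch of requesting MEX acceleration, assuming a supported GPU and C/C++ compiler are set up; the first call compiles the MEX function and is slower, while later calls with compatible inputs reuse it:
[bboxes,scores] = detect(detector,I, ...
    "ExecutionEnvironment","gpu","Acceleration","mex");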
Output Arguments
bboxes — Location of objects detected
M-by-4 matrix | B-by-1 cell array
Location of objects detected within the input image or images, returned as an M-by-4 matrix or a B-by-1 cell array. M is the number of bounding boxes in an image, and B is the number of M-by-4 matrices when the input contains an array of images.
Each row of bboxes contains a four-element vector of the form [x y width height]. This vector specifies the upper left corner and size of that corresponding bounding box in pixels.
scores — Detection scores
M-by-1 vector | B-by-1 cell array
Detection confidence scores, returned as an M-by-1 vector or a B-by-1 cell array. M is the number of bounding boxes in an image, and B is the number of M-by-1 vectors when the input contains an array of images. A higher score indicates higher confidence in the detection.
labels — Labels for bounding boxes
M-by-1 categorical array | B-by-1 cell array
Labels for bounding boxes, returned as an M-by-1 categorical array or a B-by-1 cell array. M is the number of labels in an image, and B is the number of M-by-1 categorical arrays when the input contains an array of images. You define the class names used to label the objects when you train the input detector.
detectionResults — Detection results
3-column table
Detection results, returned as a 3-column table with variable names Boxes, Scores, and Labels. The Boxes column contains M-by-4 matrices of M bounding boxes for the objects found in the image. Each row contains a bounding box as a four-element vector in the format [x,y,width,height]. The format specifies the upper-left corner location and size in pixels of the bounding box in the corresponding image.
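As an illustrative sketch, you can pass the results table to evaluateDetectionPrecision, assuming ds is the test image datastore and groundTruthData is a hypothetical table of ground truth boxes in the same image order:
detectionResults = detect(detector,ds);
[ap,recall,precision] = evaluateDetectionPrecision(detectionResults,groundTruthData);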
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
The roi argument to the detect method must be a codegen constant (coder.const()) and a 1-by-4 vector.
Only the Threshold, SelectStrongest, MinSize, MaxSize, and MiniBatchSize name-value pairs are supported. All name-value pairs must be compile-time constants.
The channel and batch size of the input image must be fixed size.
The labels output is returned as a categorical array.
In the generated code, the input is rescaled to the size of the input layer of the network. But the bounding box that the detect method returns is in reference to the original input size.
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Usage notes and limitations:
For code generation:
The roi argument to the detect method must be a codegen constant (coder.const()) and a 1-by-4 vector.
Only the Threshold, SelectStrongest, MinSize, MaxSize, and MiniBatchSize name-value arguments are supported.
The channel and batch size of the input image must be fixed size.
The labels output is returned as a categorical array.
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
Version History
Introduced in R2020a