trainYOLOv2ObjectDetector
Train YOLO v2 object detector
Syntax
Description
trainedDetector = trainYOLOv2ObjectDetector(trainingData,detector,options) returns an object detector trained using the you only look once version 2 (YOLO v2) network specified by detector. The options argument specifies training parameters for the detection network. You can use this syntax for training an untrained detector or for fine-tuning a pretrained detector.
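A minimal sketch of this syntax (not the documented example), assuming trainingData is a table or datastore of images with box labels and detector is a yolov2ObjectDetector created elsewhere; the training option values are illustrative only:
% Illustrative training options; tune these for your own data.
options = trainingOptions("sgdm", ...
    InitialLearnRate=1e-3, ...
    MiniBatchSize=16, ...
    MaxEpochs=20, ...
    Shuffle="every-epoch");
% Train (or fine-tune) the YOLO v2 detector.
trainedDetector = trainYOLOv2ObjectDetector(trainingData,detector,options);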
trainedDetector = trainYOLOv2ObjectDetector(trainingData,checkpoint,options) resumes training from the saved detector checkpoint.
You can use this syntax to:
Add more training data and continue the training.
Improve training accuracy by increasing the maximum number of iterations.
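A minimal resume-from-checkpoint sketch; the checkpoint file name and the variable name inside the MAT-file are assumptions for illustration (checkpoints are written to the folder given by the CheckpointPath training option):
% Load a saved checkpoint (file and variable names here are hypothetical).
data = load("yolov2_checkpoint__100.mat");
checkpoint = data.detector;
% Continue training from the checkpoint with the same or updated options.
trainedDetector = trainYOLOv2ObjectDetector(trainingData,checkpoint,options);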
[trainedDetector,info] = trainYOLOv2ObjectDetector(___) also returns information on the training progress, such as the training accuracy and learning rate for each iteration.
___ = trainYOLOv2ObjectDetector(___,Name=Value) uses additional options specified by one or more name-value arguments and any of the previous inputs. For example, ExperimentMonitor=[] specifies not to track metrics with the Experiment Manager (Deep Learning Toolbox) app.
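The remaining syntaxes can be combined as in this sketch; the info field name shown is an assumption based on the description above (see Output Arguments for the exact fields):
% Return training progress information and disable Experiment Manager tracking.
[trainedDetector,info] = trainYOLOv2ObjectDetector(trainingData,detector,options, ...
    ExperimentMonitor=[]);
% Plot the per-iteration training loss (field name assumed).
plot(info.TrainingLoss)
xlabel("Iteration")
ylabel("Training loss")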
Examples
Input Arguments
Output Arguments
More About
Tips
To generate the ground truth, use the Image Labeler or Video Labeler app. To create a table of training data from the generated ground truth, use the objectDetectorTrainingData function.
To improve prediction accuracy:
- Increase the number of images you can use to train the network. You can expand the training dataset through data augmentation. For information on how to apply data augmentation for preprocessing, see Preprocess Images for Deep Learning (Deep Learning Toolbox).
- Perform multiscale training by specifying an input detector whose TrainingImageSize property is a matrix with two or more rows. For each training epoch, the trainYOLOv2ObjectDetector function randomly resizes the input training images to one of the specified training image sizes.
- Choose anchor boxes appropriate to the dataset for training the network. You can use the estimateAnchorBoxes function to compute anchor boxes directly from the training data, as in the sketch after this list.
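A sketch of the anchor-box tip, assuming the box labels are available as a boxLabelDatastore named blds and that six anchors are a reasonable count for the dataset:
% Estimate anchor boxes directly from the training data (illustrative count).
numAnchors = 6;
[anchorBoxes,meanIoU] = estimateAnchorBoxes(blds,numAnchors);
A higher meanIoU indicates that the estimated anchor boxes overlap the ground-truth boxes more closely.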
References
[1] Redmon, Joseph, Santosh Divvala, Ross Girshick, and Ali Farhadi. "You Only Look Once: Unified, Real-Time Object Detection." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788. Las Vegas, NV: CVPR, 2016.
[2] Redmon, Joseph, and Ali Farhadi. "YOLO9000: Better, Faster, Stronger." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525. Honolulu, HI: CVPR, 2017.
Version History
Introduced in R2019a
See Also
Apps
Functions
trainingOptions (Deep Learning Toolbox) | objectDetectorTrainingData | trainYOLOv4ObjectDetector
Objects
Topics
- Create Custom YOLO v2 Object Detection Network
- Object Detection Using YOLO v2 Deep Learning
- Estimate Anchor Boxes From Training Data
- Code Generation for Object Detection by Using YOLO v2
- Train Object Detectors in Experiment Manager
- Getting Started with YOLO v2
- Get Started with Object Detection Using Deep Learning
- Choose an Object Detector
- Anchor Boxes for Object Detection
- Datastores for Deep Learning (Deep Learning Toolbox)