yolov2OutputLayer
(To be removed) Create output layer for YOLO v2 object detection network

yolov2OutputLayer will be removed in a future release. Create a YOLO v2 object detection network using the yolov2ObjectDetector object instead. For more information, see Version History.
Description
The yolov2OutputLayer function creates a YOLOv2OutputLayer object, which represents the output layer for a you-only-look-once version 2 (YOLO v2) object detection network. The output layer provides the refined bounding box locations of the target objects.
Creation
Description
layer = yolov2OutputLayer(anchorBoxes) creates a YOLOv2OutputLayer object, layer, which represents the output layer for a YOLO v2 object detection network. The layer outputs the refined bounding box locations that are predicted using a predefined set of anchor boxes specified at the input.

layer = yolov2OutputLayer(anchorBoxes,Name,Value) sets additional properties using name-value pairs and the input from the preceding syntax. Enclose each property name in single quotes. For example, yolov2OutputLayer(anchorBoxes,'Name','yolo_Out') creates an output layer with the name 'yolo_Out'.
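The syntax above can be sketched as follows; the anchor box values here are illustrative placeholders, chosen only to show the expected M-by-2 [height width] format:

```matlab
% Define anchor boxes as an M-by-2 matrix of [height width] values in pixels.
% These example values are assumptions; in practice, estimate them from your
% training data (see Tips).
anchorBoxes = [16 16; 32 16; 64 64];

% Create the YOLO v2 output layer and assign it a custom name.
layer = yolov2OutputLayer(anchorBoxes,'Name','yolo_Out');
```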
Tips
To improve prediction accuracy, you can:

Train the network with more images. You can expand the training data set through data augmentation. For information on how to apply data augmentation to a training data set, see Preprocess Images for Deep Learning (Deep Learning Toolbox).

Perform multiscale training by using the trainYOLOv2ObjectDetector function. To do so, specify the TrainingImageSize argument of the trainYOLOv2ObjectDetector function when training the network.

Choose anchor boxes appropriate to the data set used to train the network. You can use the estimateAnchorBoxes function to compute anchor boxes directly from the training data.
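The last tip can be sketched as below; a minimal example assuming `trainingData` is a datastore that returns bounding box labels (for example, a boxLabelDatastore built from a labeled ground truth table — the variable name and the choice of five anchors are assumptions for illustration):

```matlab
% trainingData is assumed to be a boxLabelDatastore (or similar datastore)
% returning bounding boxes for the labeled training images.
numAnchors = 5;

% Estimate anchor boxes directly from the training data; meanIoU indicates
% how well the estimated anchors overlap the ground truth boxes.
[anchorBoxes,meanIoU] = estimateAnchorBoxes(trainingData,numAnchors);
```

Increasing numAnchors generally raises meanIoU but also increases the computational cost of the detection network, so it is worth sweeping a small range of values.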