Augmentations via Rotation, Shearing, Scaling, Reflection

7 views (last 30 days)
Hi professionals,
I am trying to code data augmentation for a small dataset and would like to perform augmentations via rotation, shearing, scaling, and reflection to improve the accuracy and efficiency of my classification process.
I am struggling with the affine function, as it keeps raising an error about an undefined variable. I have implemented and followed the example from here: https://uk.mathworks.com/help/deeplearning/examples/bounding-box-augmentation-using-computer-vision-toolbox.html?searchHighlight=augmented%20rotation%2C%20shearing%2C%20scaling&s_tid=doc_srchtitle
The affine command keeps erroring, even though this is the exact example from the MATLAB link provided.
Can a professional guide me, please?
tform = randomAffine2d("Rotation",[-50 50]);
my error is:
Undefined function or variable 'randomAffine2d'.
my code:
clearvars
clear
close all
clc
net =alexnet;
%% Step 1 Creating Filenames /Loading Data
load('SgTruth.mat');
load('RCNNLayers.mat');
load('Alexnlayers.mat');
save Alexnlayers.mat net;
save RCNNLayers.mat net;
save rcnnGuns.mat SgTruth
load('rcnnGuns.mat', 'SgTruth', 'net')
inputSize =net.Layers(1).InputSize
net.Layers
total_images = size(SgTruth,1);
%% Step 3 Adding Image Directory For Path To Image Data
imDir = fullfile(matlabroot, 'toolbox', 'vision', 'visiondata','gunsGT')
addpath(imDir)
%% Step 4 Accessing Contents Of Folder TrainingSet Using Datastore
imds =imageDatastore(imDir,'IncludeSubFolders',true,'LabelSource','Foldernames')
%% Step 5 Splitting Inputs Into Training and Testing Sets
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized')
disp('The Size Of Images Are')
size(imdsTrain)
%% Step 12 Replacing Final Layer/Last 3 Configure For Network classes
layersTransfer = net.Layers(1:end-3);
%% Step 13 Specifying Image Categories/Classes:
numClasses = numel(categories(imdsTrain.Labels));
Tlayers = [
layersTransfer
fullyConnectedLayer(numClasses,'Name','fc8','WeightLearnRateFactor',10,'BiasLearnRateFactor',10);
softmaxLayer('name', 'Softmax')
classificationLayer('Name','ClassfLay')];
%% Warp Image & Pixel Labels
% Create randomized 2-D augmentations from a combination of rotation, reflection and translation.
% The rotation angle is selected randomly from the range passed to 'RandRotation' below.
pixelRange = [-30 30];
% [XTrain,YTrain] = digitTrain4DArrayData; % digit demo data; not needed here and would overwrite the datastores above
imageAugmenter = imageDataAugmenter('RandRotation',[-180 180],...
'RandXReflection',true,...
'RandYReflection',true,...
'RandXTranslation',pixelRange, ...
'RandYTranslation',pixelRange)
augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain,...
'DataAugmentation',imageAugmenter)
augimdsTrain.MiniBatchSize = 16
augimdsTrain.reset()
tform1 = randomAffine2d('Rotation',[35 55]);

Answers (1)

Bhargavi Maganuru on 12 Feb 2020
I guess you're using an earlier version of MATLAB. The function 'randomAffine2d' is a new feature introduced in the R2019b release, so you would get this error if you're using it in an earlier version.
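If upgrading is not an option, one possible workaround (a sketch of my own, not from the linked example) is to build the random transform manually with affine2d, which has been available for much longer, and apply it with imwarp. Here I is just a placeholder for your input image:
% Minimal sketch, assuming a pre-R2019b release:
% emulate randomAffine2d('Rotation',[-50 50]) with affine2d
angleRange = [-50 50];                         % rotation range in degrees
theta = angleRange(1) + diff(angleRange)*rand; % random angle drawn from the range
tform = affine2d([cosd(theta)  sind(theta) 0; ...
                 -sind(theta)  cosd(theta) 0; ...
                  0            0           1]);
rotated = imwarp(I, tform);                    % I is a placeholder input image
As far as I know, imageDataAugmenter (which your code already uses) also supports 'RandRotation', 'RandXShear'/'RandYShear', 'RandXScale'/'RandYScale' and 'RandXReflection'/'RandYReflection' back to R2017b, so rotation, shearing, scaling and reflection are all possible on earlier releases without randomAffine2d.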
1 Comment
Matpar on 14 Feb 2020 (edited 14 Feb 2020)
Thanks for the answer, Bhargavi Maganuru. That line was removed as directed, but that was not the basis of my issue: I want to view the features of the bounding boxes before the classification procedure.
What I am trying to achieve is to see whether or not the features are being created, so that the classification process can be successful.
As it is now, the bounding-box code displays no labels, no boxes and no accuracy, yet it reads 100% in training and the accuracy-detection code displays 1.000 etc.,
which suggests that it actively detected the ROI, but it is not showing in the final result.
*What I would like help with* is
how to code the display of the bounding box just after the
R-CNN detection code:
cnn = trainRCNNObjectDetector(Wgtruth, netTransfer, options, 'NegativeOverlapRange', [0 0.3]);
%% This Works - Predicting Validation Accuracy
predictedLabels = classify(netTransfer,augimdsValidation)
accuracy =mean(predictedLabels== imdsValidation.Labels)
%% ** BETWEEN HERE I WOULD LIKE TO DISPLAY THE BOUNDING
% BOXES On REGIONS OF INTEREST TO DETERMINE IF
% THIS IS THE ACTUAL ERROR ***
img = imread('11.jpg');
[bbox1, score, label] = detect(cnn, img, 'MiniBatchSize', 80)
[score, idx] = max(score)
bbox1 = bbox1(idx, :)
annotation = sprintf('%s: (Confidence = %f)', label(idx), score)
detectedImg = insertObjectAnnotation(img, 'rectangle', bbox1, annotation);
figure
imshow(detectedImg)
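One thing I am also checking (a minimal guard I sketched, reusing the variable names above) is whether detect returns empty outputs, because in that case the max/annotation lines would have nothing to draw:
% Minimal sketch: only annotate when the detector actually returned boxes
[bbox1, score, label] = detect(cnn, img, 'MiniBatchSize', 80);
if isempty(bbox1)
    disp('No detections were returned for this image.')
else
    [maxScore, idx] = max(score);
    annotation = sprintf('%s: (Confidence = %f)', label(idx), maxScore);
    detectedImg = insertObjectAnnotation(img, 'rectangle', bbox1(idx,:), annotation);
    figure, imshow(detectedImg)
end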
Here is my code:
%% Extract region proposals with selective search
%% Conducting Feature Extraction With RCNN
%% Classifing Features With SVM
%% Improving The Bounding Box
clc
clearvars
clear
close all
%% Step 1 Creating Filenames /Loading Data
anet = alexnet
load('Wgtruth.mat');
load('anet.mat');
save Wgtruth.mat Wgtruth;
save test16.mat;
save anet.mat anet;
load('test16.mat', 'Wgtruth', 'anet')
%% Step 2 Highlighting Image Input Size
inputSize = anet.Layers(1).InputSize;
anet.Layers;
total_images = size(Wgtruth,1)
imDir = fullfile(matlabroot, 'toolbox', 'vision', 'visiondata','Wgtruth')
addpath(imDir)
%
%% Step 4 Accessing Contents Of Folder TrainingSet Using Datastore
imds =imageDatastore(imDir,'IncludeSubFolders',true,'LabelSource','Foldernames')
%% Step 5 Splitting Inputs Into Training and Testing Sets
[imdsTrain,imdsValidation] = splitEachLabel(imds,0.7,'randomized')
%% Step 6 Replacing Final Layer/Last 3 Configure For New Layer classes
% Complex Architecture Layers Has Inputs/Outputs From Multiple Layers
% Extracting All Layers Except The Last 3
layersTransfer = anet.Layers(1:end-3)
%% Step 7 Specifying Image Categories/Classes From 1000 To Gun (One Class):
numClasses = numel(categories(imdsTrain.Labels));
Tlayers = [
layersTransfer
fullyConnectedLayer(numClasses,'Name','fc8','WeightLearnRateFactor',10,'BiasLearnRateFactor',10);
softmaxLayer('name', 'Softmax')
classificationLayer('Name','ClassfLay')]
%% Step 8 Displaying and Visualising Layer Features Of FC8
% layer(16) = maxPooling2dLayer(5,'stride',2)
% disp(Tlayers)
% layer = 22;
% channels = 1:30;
% I = deepDreamImage(net,layer,channels,'PyramidLevels',1);
% figure
% I = imtile(I,'ThumbnailSize',[64 64]);
% imshow(I)
% name = net.Layers(layer).Name;
% title(['Layer ',name,' Features'])
%% Step 9 Setting Output Function Train Network with Augmented Images
% (images may vary in size; resize them for consistency with the pretrained net)
%[XimdsTrain,YimdsTrain] = digitTrain4DArrayData;
%% digitTrain4DArrayData loading the digit training set as 4-D array data. XTrain is a 28-by-28-by-1-by-5000 array,
% 28 is the height and width of the images.
% 1 is the number of channels.
% 5000 is the number of synthetic images of handwritten digits.
% YTrain is a categorical vector containing the labels for each observation.
% Set aside 1000 of the images for network validation.
% idx = randperm(size(XimdsTrain,4),3000)
% XimdsValidation = XimdsTrain(:,:,[1 1 1])
% XimdsTrain(:,:,[1 1 1]) = []
% YimdsValidation = YimdsTrain(:,:,[1 1 1])
% YimdsTrain(idx) = []
%% Creating The imageDataAugmenter / Specifies Rotation, Reflection, Shear, & Translation
% Randomly translate images horizontally and vertically,
% and rotate images by an angle of up to 70 degrees.
pixelRange = [-70 70]
imageAugmenter = imageDataAugmenter(...
'RandRotation',[-70 70],...
'RandXReflection',true,...
'RandYReflection',true,...
'RandXShear',[-30 50],...
'RandYShear',[-30 50],...
'RandXTranslation',pixelRange, ...
'RandYTranslation',pixelRange)
% augimdsTrain = augmentedImageDatastore(inputSize(1:2),imdsTrain, ...
% 'DataAugmentation',imageAugmenter)
%% Step 10 Resizing Images, Assists With Preventing Overfitting
% Data augmentation helps prevent the network from overfitting and
% memorizing the exact details of the training images.
% The validation data is only resized (no random augmentation is applied),
% so that predictions are made on unmodified images.
%augmentedTrainingSet = augmentedImageDatastore(inputSize ,imdsTrain,'ColorPreprocessing', 'gray2rgb')
augimdsValidation = augmentedImageDatastore(inputSize,imdsValidation)
% augimds = augmentedImageDatastore(inputSize,XimdsTrain,YimdsTrain,'DataAugmentation',imageAugmenter)
augmentedTrainingSet = augmentedImageDatastore(inputSize ,imdsTrain,'DataAugmentation',imageAugmenter)
% Specify the image output size.
% imageSize = [227 227 3];
% augimds = augmentedImageDatastore(inputSize,XimdsTrain,YimdsTrain,'DataAugmentation',imageAugmenter)
%% Step 11 Specifying Training Options
% Keep features from earlier layers of the pretrained network for transfer learning.
% Specify the epoch count, the mini-batch size and (optionally) validation data,
% and validate the network at intervals during training.
% SGDM groups the full dataset into disjoint mini-batches; this reaches convergence
% faster, as it updates the network's weights more frequently and increases
% computational speed.
% Implementing **WITH** the RCNN object detector.
options = trainingOptions('sgdm',...
'Momentum',0.9,...
'InitialLearnRate', 1e-4,...
'LearnRateSchedule', 'piecewise', ...
'LearnRateDropFactor', 0.1, ...
'Shuffle','every-epoch', ...
'LearnRateDropPeriod', 8, ...
'L2Regularization', 1e-4, ...
'MaxEpochs', 10,...
'MiniBatchSize',80,...
'Verbose', true);
%'ValidationData',{XimdsValidation,YimdsValidation});
%
%% Step 12 Training Network Consisting Of Transferred & New Layers
netTransfer = trainNetwork(augmentedTrainingSet,Tlayers,options)
cnn = trainRCNNObjectDetector(Wgtruth, netTransfer, options, 'NegativeOverlapRange', [0 0.3]);
save('cnn.mat', 'cnn')
%% Predicting Validation Image Accuracy
predictedLabels = classify(netTransfer,augimdsValidation)
accuracy =mean(predictedLabels== imdsValidation.Labels)
%% Step 13 Testing R-CNN Detector On Test Image.
img = imread('11.jpg');
[bbox1, score, label] = detect(cnn, img, 'MiniBatchSize', 80)
% numObservations = 4;
% images = repelem({img},numObservations,1)
% bboxes = repelem({bbox},numObservations,1)
% labels = repelem({label},numObservations,1)
%% Step 14 Displaying Strongest Detection Results.
[score, idx] = max(score)
bbox1 = bbox1(idx, :)
annotation = sprintf('%s: (Confidence = %f)', label(idx), score)
detectedImg = insertObjectAnnotation(img, 'rectangle', bbox1, annotation);
figure
imshow(detectedImg)
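To see whether the augmentations are actually being generated before training, I also read one mini-batch from the augmented datastore and display it (a minimal sketch using the variable names from the code above; the 'input' column is how augmentedImageDatastore returns the images):
% Minimal sketch: inspect a mini-batch of augmented training images
sample = read(augmentedTrainingSet);      % table with an 'input' column of images
figure
for k = 1:min(16, height(sample))
    subplot(4,4,k)
    imshow(sample.input{k})
end
reset(augmentedTrainingSet)               % rewind the datastore before training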
