Semantic Segmentation input size

3 views (last 30 days)
According to this example https://www.mathworks.com/examples/computer-vision/mw/vision-ex90050995-create-a-semantic-segmentation-network
it states that "A semantic segmentation network starts with an imageInputLayer, which defines the smallest image size the network can process."
I tried to run this code with images of a different size, but I get the following error:
The training images are of size 192x144x1 but the input layer expects images of size 32x32x1.
The only change I made was switching from an RGB input to a grayscale input. My ImageDatastore contains grayscale images in 5 different sizes, ranging from 128x128 to 320x320, and some are not square. Why can't this network process images that are larger than the specified input size?

Answers (1)

Vishal Bhutani 2018-9-21
From my understanding, you want to train a semantic segmentation network on a set of images with different sizes. You should first make all the images the same size; non-square images are fine, but every image must have the same dimensions. After resizing, update the input layer to match:
>> inputSize = [size1 size2 3];
>> imgLayer = imageInputLayer(inputSize)
where size1 and size2 specify your image size. Specify 3 for RGB images and 1 for grayscale images. Hope it helps.
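For the grayscale images in the question, a minimal sketch might look like the following (assuming the images sit in a folder read through imageDatastore; the folder name is hypothetical, and the 192x144 target size is simply the one from the error message, any common size works):

targetSize = [192 144];                              % common [rows cols] for every image
imds = imageDatastore('myGrayImages');               % hypothetical folder of grayscale images
imds.ReadFcn = @(f) imresize(imread(f), targetSize); % resize each image as it is read

inputSize = [targetSize 1];                          % 1 channel for grayscale
imgLayer = imageInputLayer(inputSize);

Note that for semantic segmentation the corresponding label images (e.g. from a pixelLabelDatastore) would need the same resize, and resizing label images should use 'nearest' interpolation so class indices are not blended.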

Release: R2018a
