How do layers work in deep learning?

1 view (last 30 days)
voxey 2020-1-7
Answered: Sanyam 2022-7-4
How do the following layers work in deep learning?
  • ReLU
  • Pooling
  • Convolution
  • Inception
  • Dropout
  • Weight — what is the purpose of a weight?
  • How can training time be reduced?

Answer (1)

Sanyam 2022-7-4
Hey @voxey,
To understand these concepts in depth, I would suggest you have a look at the deep learning and image processing courses provided by MathWorks.
Still, here is a brief overview of the concepts you asked about:
1) ReLU: an activation function used to introduce non-linearity into the network, which lets it learn non-linear decision boundaries.
2) Pooling: an operation used in CNNs to reduce the size of the feature maps. It also makes the network more robust to small rotational/translational changes in the input.
3) Convolution: the core operation in CNNs; its main purpose is to extract features from the image.
4) Inception: a module used in the GoogLeNet architecture. Refer to this link.
5) Dropout: a regularization technique used to prevent the neural network from overfitting.
6) Weight: a learnable parameter; the network adjusts its weights during training to perform the task it is being trained for.
7) Reducing training time: you can explore options such as transfer learning, training on a GPU, reducing the number of epochs, etc.
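To make points 1, 2, 3, and 5 concrete, here is a minimal NumPy sketch of each operation (illustrative only, not MATLAB code; the array sizes and the edge-detector kernel are made up for demonstration):

```python
import numpy as np

def relu(x):
    # 1) ReLU: zero out negatives, introducing non-linearity
    return np.maximum(0, x)

def max_pool2x2(x):
    # 2) Max pooling: keep the largest value in each 2x2 window,
    #    halving the feature-map size
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def conv2d(image, kernel):
    # 3) "Valid" 2-D convolution (really cross-correlation, as in most
    #    deep-learning frameworks): slide the kernel over the image and
    #    take a dot product at every position
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def dropout(x, p=0.5, rng=None):
    # 5) Dropout: randomly zero activations during training and rescale
    #    the survivors; this discourages co-adaptation (a regularizer)
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(x.shape) >= p
    return x * mask / (1 - p)

image = np.arange(36, dtype=float).reshape(6, 6)
edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)  # made-up edge-detector kernel
features = relu(conv2d(image, edge_kernel))     # 4x4 feature map
pooled = max_pool2x2(features)                  # 2x2 after pooling
```

Chaining them as `pool(relu(conv(image)))` mirrors the typical CNN layer ordering: convolution extracts features, ReLU keeps only positive responses, and pooling shrinks the result.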
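For points 6 and 7, a toy gradient-descent loop shows what "learning a weight" means and why the epoch count drives training time (a made-up one-weight model, not how any real toolbox is implemented):

```python
import numpy as np

# Toy model: learn the weight w and bias b of y = 2x + 1 from data.
# Real networks do the same thing with millions of weights at once.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(200):                     # fewer epochs -> less training
    y_hat = w * x + b                        #   time, but possibly a worse fit
    grad_w = np.mean(2 * (y_hat - y) * x)    # dLoss/dw for mean-squared error
    grad_b = np.mean(2 * (y_hat - y))        # dLoss/db
    w -= lr * grad_w                         # the "learning": nudge each
    b -= lr * grad_b                         #   weight downhill on the loss

print(round(w, 2), round(b, 2))              # close to the true values 2 and 1
```

The weight starts at an arbitrary value and is nudged by the gradient every epoch, which is why a trained weight encodes what the network has learned, and why cutting epochs (or starting from pretrained weights, i.e. transfer learning) cuts training time.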
