How to convert a 2D layer to 1D and from 1D back to 2D?

Hello,
I want to create an autoencoder architecture with a 2D input and output (a matrix), but inside I need a 1D fullyConnectedLayer as the latent layer. How can I do this? In the Layer Library of Deep Network Designer I cannot see the right "bricks" for it.
Regards,
Greg

Answers (2)

David Willingham 2022-9-6
Can you describe a little more about your application? I.e., what is your input: is it a matrix of signals or an image?
2 Comments
Grzegorz Klosowski
The input and output are a 48x48 matrix. It is a single-channel image consisting of real numbers.
Grzegorz Klosowski 2022-9-6 (edited)
And it really is just this matrix: the same one at the input and the output. In the middle there should be a 1D layer, because I need to compress the image this way, using an autoencoder. In the latent layer I want a vector of, say, 256 neurons. How do I change the dimension from 2D to 1D and from 1D back to 2D inside a neural network (an autoencoder)?
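(For reference, a minimal sketch of the kind of architecture being asked about, assuming R2021b or later for functionLayer; the 512-unit hidden layers and the layer names are illustrative assumptions. A fullyConnectedLayer flattens a 2D image input to 1D on its own, and a functionLayer can reshape the decoder output back to 48x48x1.)

% Sketch only: fullyConnectedLayer flattens 2D -> 1D, functionLayer reshapes 1D -> 2D.
latentSize = 256;

layers = [
    imageInputLayer([48 48 1], 'Normalization', 'none', 'Name', 'in')

    % Encoder: the first fully connected layer flattens the 48x48x1 image to 1D.
    fullyConnectedLayer(512, 'Name', 'enc_fc1')        % 512 units: illustrative choice
    reluLayer('Name', 'enc_relu1')
    fullyConnectedLayer(latentSize, 'Name', 'latent')  % 1D latent vector, 256 neurons

    % Decoder: expand back to 48*48 = 2304 values ...
    reluLayer('Name', 'dec_relu1')
    fullyConnectedLayer(48*48, 'Name', 'dec_fc1')
    % ... then reshape the 2304-element vector back into a 48x48x1 image.
    functionLayer(@(X) dlarray(reshape(stripdims(X), 48, 48, 1, []), 'SSCB'), ...
        'Formattable', true, 'Name', 'to2D')];

net = dlnetwork(layers);                               % assemble the autoencoder

% Quick shape check with a dummy mini-batch of 8 images.
X = dlarray(rand(48, 48, 1, 8, 'single'), 'SSCB');
Y = forward(net, X);
size(Y)                                                % 48 48 1 8

Training such a network against the input images themselves (e.g. with a mean-squared-error loss in a custom training loop) then gives the autoencoder behaviour.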



Mandar 2022-12-13
I understand that you want to make an autoencoder network with a specific hidden layer (latent layer) size.
You may refer to the following documentation, which shows how to create autoencoder networks with a specific hidden layer size, learn latent features, and further stack the trained autoencoders into a network with multiple hidden layers to learn more effective latent features.
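As a rough illustration of that workflow (assuming the 48x48 images are already collected in a cell array, here called imgs, and a 256-neuron hidden layer):

% Sketch only: 'imgs' is a placeholder for a cell array of 48x48 single-channel images.
hiddenSize = 256;                         % size of the latent (hidden) layer

% Train one autoencoder whose hidden layer is the 256-element latent vector.
autoenc = trainAutoencoder(imgs, hiddenSize, ...
    'MaxEpochs', 200, ...
    'ScaleData', true);

Z       = encode(autoenc, imgs);          % 256-by-N matrix of latent codes
imgsRec = decode(autoenc, Z);             % reconstructed 48x48 images

Several such autoencoders can then be combined with stack to build a stacked autoencoder with multiple hidden layers, as the documentation describes.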
