Difficulty utilizing pretrained "pix2Pix" GAN implementation using Deep Network Designer
Hi there,
I am attempting to use a pretrained Pix2Pix GAN generator that was trained following the TensorFlow/Keras example. After training in TensorFlow, I saved the generator model to the required model.h5 file.
In MATLAB, I import the layers and weights:
lgraph = importKerasLayers(modelfile,'ImportWeights',true,"OutputLayerType","regression")
I have also attempted the above without specifying the regression output layer, but the same behavior occurs:
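One thing worth checking right after the import is whether any Keras layers came across as placeholders, since unsupported layers can break the forward pass without an obvious error. A minimal sketch using `findPlaceholderLayers` from Deep Learning Toolbox:

```matlab
% Check whether any Keras layers were imported as unsupported placeholders.
placeholders = findPlaceholderLayers(lgraph);
if isempty(placeholders)
    disp('All layers imported with supported MATLAB equivalents.')
else
    % Placeholder layers must be replaced before the network can be assembled.
    disp({placeholders.Name})
end
```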
Using the Deep Network Designer, I import lgraph from the workspace and then export it with pretrained parameters; unless I am mistaken, the weights and biases should be as trained in TensorFlow?
I then execute the live script that Deep Network Designer generated to create the layer graph with the weights, biases, etc.
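To confirm the TensorFlow weights actually survived the import/export round trip, it may help to inspect one convolution layer before assembling the network. A minimal sketch (the layer index found here is illustrative; check `lgraph.Layers` for your model's actual names):

```matlab
% Find the first 2-D convolution layer in the imported graph and inspect
% its weights; an empty or all-zero Weights property would indicate the
% Keras parameters were not transferred.
convIdx = find(arrayfun(@(l) isa(l,'nnet.cnn.layer.Convolution2DLayer'), ...
    lgraph.Layers), 1);
convLayer = lgraph.Layers(convIdx);
disp(size(convLayer.Weights))                 % e.g. [4 4 3 64] for pix2pix
fprintf('Mean |W| = %g\n', mean(abs(convLayer.Weights(:))))
```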
However, once I assemble the generator from the layer graph:
Generator = assembleNetwork(lgraph)
and then pass a correctly sized, normalized input image through it:
Y = predict(Generator,dlPic)
the output appears to be an unchanged copy of the input image.
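It may also be worth ruling out a scaling artifact: the Keras pix2pix generator ends in a tanh activation, so inputs and outputs live in [-1, 1], and displaying such data directly can make the result look like the input. A hedged sketch of the preprocessing, a pass-through check, and display rescaling (variable names like `inputImage` are illustrative, and the [256 256] size assumes the standard pix2pix configuration):

```matlab
% Prepare the input the same way the Keras example does: resize and
% scale pixel values from [0, 255] to [-1, 1].
X = single(imresize(inputImage, [256 256]));
X = (X ./ 127.5) - 1;

% Run the generator and measure how different the output actually is.
Y = predict(Generator, X);
fprintf('Max |Y - X| = %g\n', max(abs(Y(:) - X(:))))  % 0 would mean a true pass-through

% Rescale [-1, 1] back to [0, 1] before displaying.
imshow((Y + 1) ./ 2)
```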
Am I missing any important aspects?
Is it necessary to define the discriminator, loss functions etc if I only wish to utilize the pretrained Generator architecture?
Any assistance would be GREATLY appreciated!
Answers (0)