Why does the MathWorks-provided model for the See-in-the-Dark data set not give as good results as the original TensorFlow equivalent?

44 views (last 30 days)
Please refer to the MATLAB example (https://www.mathworks.com/help/images/learning-to-see-in-the-dark.html). The page itself acknowledges that "The colors of the network prediction are less saturated and vibrant than in the ground truth". I have compared it against the original TensorFlow model provided by the original authors (https://github.com/cchen156/Learning-to-See-in-the-Dark), which DOES produce a very good reconstruction of the ground truth with vibrant colors. I have also tried retraining the model in MATLAB and got similarly poor results to those of the MATLAB-provided pretrained model. Could someone please shed light on what could be causing this? I am afraid that without a solution, the MATLAB example is not useful at all compared to the TensorFlow solution.

Answers (1)

Divyam on 9 Aug 2024 at 8:07
Edited: Divyam on 9 Aug 2024 at 9:09
Hi @Muhammad Bilal, the MATLAB example is not a direct replication of the solution provided by the authors of the "Learning to See in the Dark" research paper. The pretrained model in the example was trained only on the images captured with the Sony camera, and it uses a loss function that combines L1 and multiscale SSIM losses, rather than the plain L1 loss used in the researchers' pipeline:
function loss = lossFcn(Y,T)
% Multiscale SSIM loss computed on grayscale versions of the prediction Y
% and the ground truth T (rgbToGray is a grayscale-conversion helper).
ssimLoss = mean(1-multissim(rgbToGray(Y),rgbToGray(T),NumScales=5),"all");
% Mean absolute error (L1 loss) between prediction and ground truth.
L1loss = mean(abs(Y-T),"all");
alpha = 7/8; % You can experiment with this value to adjust the loss function
% Weighted combination of the two loss terms.
loss = alpha*ssimLoss + (1-alpha)*L1loss;
end
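The rgbToGray helper is not shown above; a minimal sketch of one possible implementation, assuming a standard Rec. 601 luma weighting applied channel-wise so that it also works on dlarray inputs:
function gray = rgbToGray(rgb)
% Hypothetical helper: Rec. 601 luma-weighted sum of the R, G, and B
% channels. Plain indexing and arithmetic keep it dlarray-compatible.
gray = 0.2989*rgb(:,:,1,:) + 0.5870*rgb(:,:,2,:) + 0.1140*rgb(:,:,3,:);
end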
As suggested by the loss-function comparison table in the original research paper (https://arxiv.org/pdf/1805.01934), different loss functions can be tried out to determine which works best for training the MATLAB model.
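For instance, to reproduce the plain L1 objective used by the original authors, the combined loss above reduces to a single term (a minimal sketch; the function name l1LossFcn is illustrative):
function loss = l1LossFcn(Y,T)
% L1-only objective, matching the loss used in the original paper:
% mean absolute error between the prediction Y and ground truth T.
loss = mean(abs(Y-T),"all");
end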
As specified in the example, the MATLAB reconstruction is not as close to the ground truth as the results reported in the original paper. To improve your model, you can increase the number of epochs and alter the loss function, which might yield output closer to the ground truth. To increase the contrast of the output images, you can apply histogram stretching as a post-processing step, which boosts contrast at the cost of some image clarity. Using the 'imadjust' function during post-processing also helps with adjusting the contrast of a color image.
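A minimal post-processing sketch, assuming pred is the network's RGB prediction (the variable name is hypothetical):
pred = im2double(pred); % ensure floating-point values in [0,1]
% Saturate the darkest and brightest 1% of pixels per channel, then
% stretch the remaining range to [0,1] to boost contrast.
stretched = imadjust(pred,stretchlim(pred,[0.01 0.99]),[]);
imshowpair(pred,stretched,"montage") % compare before and after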
To learn more about the 'imadjust' function, you can refer to the following documentation link: https://www.mathworks.com/help/images/ref/imadjust.html
To learn more about histogram stretching and other contrast enhancement techniques, you can refer to the following documentation link: https://www.mathworks.com/help/images/contrast-enhancement-techniques.html
