How to solve an "out of memory" error?

I am doing my project on OCR. I am using an image size of 64x64, because when I tried 32x32 and other sizes some pixels were lost. I have tried features such as zonal density, Zernike moments, projection histogram, distance profile, and crossings, and I have also tried combinations of these features. The main problem is that the feature vector is too big: whenever I train the neural network, I get an "out of memory" error. I tried PCA for dimensionality reduction, but it did not work well and I did not get good performance during training. I ran the code on both my PC and my laptop and got the same error on both; each machine has 2 GB of RAM. So I am thinking about reducing the size of the image. Is there any other way to solve this problem?
I have one more problem: whenever I train the neural network using the same features, the results vary. How do I solve this as well?
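
For reference, a minimal sketch of the kind of PCA reduction mentioned above, using pca from the Statistics Toolbox (the matrix X, its size, and the 95% threshold are illustrative stand-ins, not the actual project code):

    X = rand(500, 4096);                 % placeholder: 500 samples, 64*64 = 4096 features each
    [coeff, score, latent] = pca(X);     % principal components of the feature matrix
    explained = cumsum(latent) / sum(latent);
    k = find(explained >= 0.95, 1);      % smallest k covering 95% of the variance
    Xreduced = score(:, 1:k);            % reduced feature vectors to feed the network

Keeping only the leading components shrinks the network's input layer, and with it the weight matrices that are the usual source of the out-of-memory error.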

Accepted Answer

Greg Heath 2013-5-7
Of course pixels are lost when you reduce the size. I am not an expert in imagery, therefore I cannot confidently suggest another method. However, there must be several acceptable ones available. Why don't you submit a post on image feature reduction?
The NNs in the Neural Network Toolbox (NNTBX) randomly divide the data and randomly initialize the net weights. If you want to reproduce a design, set the RNG to the same initial state as before (see the sketch below).
Hope this helps.
Thank you for formally accepting my answer
Greg
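
For concreteness, a minimal sketch of Greg's suggestion (the seed, the network size, and the iris_dataset demo data are arbitrary stand-ins for the real OCR features):

    [x, t] = iris_dataset;           % built-in demo data standing in for the OCR features
    rng(0, 'twister');               % fix the RNG: same seed => same split and same initial weights
    net = patternnet(10);            % 10 hidden units, chosen arbitrarily
    [net, tr] = train(net, x, t);    % repeated runs now produce identical results

Because both the data division and the weight initialization draw from the global random stream, resetting it before each run is enough to make training reproducible.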

More Answers (1)

Jan 2013-5-7
What about installing more RAM?
