Cell contents reference from a non-cell array object error
Error: Cell contents reference from a non-cell array object. I know that the input must be a cell array, but the thing is, I don't know where it went wrong when I put in a cell array...
Here's the code:
function subset = GetClassSubsetIndexes(classes)
subset=[];
oldClassLabel = 'nekaLabela';
for i=1 : 1 : length(classes)
if oldClassLabel ~= classes{i}
oldClassLabel = classes{i};
subset = cat(1, subset, i);
end
end
%now put end indices
for i=2 : 1 : size(subset,1)
endIndex = subset(i, 1);
subset(i-1, 2) = endIndex-1;
end
subset(size(subset,1), 2) = length(classes);
end
Need help, thanks! P.S.: please don't close the question; it's quite important to me...
4 Comments
Which line is the error occurring on?
what does
class(classes)
show?
Please remember to use strcmp() to compare strings, not "==" or "~="
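For context on that advice: `==` on char arrays compares element-wise and errors outright when the lengths differ, while `strcmp` always returns a single logical. A minimal illustration:

```matlab
'abc' == 'abd'        % element-wise comparison: returns [1 1 0]
% 'abc' == 'ab'       % error: matrix dimensions must agree
strcmp('abc', 'ab')   % returns logical 0, regardless of lengths
strcmp('abc', 'abc')  % returns logical 1
```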
It occurred at the line if oldClassLabel ~= classes{i}. I tried changing the brackets; it seems to work, but I'm not sure whether it is the correct method. I'm trying your method as well...
For ==, it gives the same error. class(classes) gives the following error: For colon operator with char operands, first and last operands must be char.
if ~strcmp(oldClassLabel, classes{i})
The bit about colon operators makes no sense unless the class() call itself has been shadowed.
Right after the "function" line, for the moment please put
which -all class
whos classes
and show us the output.
Accepted Answer
Your code is written to assume that LDA is called with the second parameter being a cell array of strings, but you are instead calling it with the second parameter being a column vector of double (such as a class number.)
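The original error can be reproduced in two lines: `{}` indexing is only valid on cell arrays, so applying it to a double vector fails with exactly this message. A minimal sketch (the example data is hypothetical):

```matlab
classes = [1; 1; 2; 2; 3];      % numeric column vector of class numbers
classes(1)                      % OK: () indexing works on any array
% classes{1}                    % Error: Cell contents reference from a
%                               % non-cell array object

classesCell = {'a'; 'a'; 'b'};  % cell array of strings
classesCell{1}                  % OK: {} extracts the contained string
```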
38 Comments
I am a novice and need more guidance from you; I apologise for that. What is the second parameter defined as? My inputs are:
data = importdata('LDA data.mat')
features=data(:,1:end-1); %split data without labels
lable=data(:,end); %get the labels from training data
trainSamples = features; %training samples
trainClasses = lable; %training labels
I2 = reshape(I,[],1); % reshape image into column
testSamples = I2; %test samples
lableimage = reshape(handles.lableimage,[],1);
testClasses = lableimage; %test labels drawn from ROI
Your line
lable=data(:,end); %get the labels from training data
is quite likely going to be extracting numeric labels rather than strings. If you are passing this "lable" to LDA then you should write the code to expect a numeric array rather than a cell array of strings. In particular, your line
oldClassLabel = 'nekaLabela';
would no longer be appropriate, and you would use () indexing rather than {} indexing.
I would suggest you initialize
oldClassLabel = nan;
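The reason NaN works as an initial sentinel is that NaN compares unequal to everything, including itself, so the first label is always treated as the start of a new class:

```matlab
oldClassLabel = nan;
oldClassLabel ~= 5   % true: NaN ~= anything, so the first loop
                     % iteration always records index 1
nan ~= nan           % also true; NaN never equals itself
```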
I'm working on it right now to see whether it works. Thanks for the tons of help so far.
That line was supposed to extract the training data's labels from the total training data, so it should not have been a problem. I've changed the {} to () and initialized to NaN as stated above.
This came up:
Error using * MTIMES is not fully supported for integer classes. At least one input must be scalar. To compute elementwise TIMES, use TIMES (.*) instead.
I think I'm starting to understand: the code doesn't accept double precision. Right?
Which line is the mtimes complaint on? You removed your code, so I cannot look to make guesses.
My bad, here it is. It was at the line:
projectedSamples = samples * vectors;
Code:
classdef LDA < handle
%inherit from handle otherwise we must store value (object) each time when we change some property value
%function would need to return obj(this)
properties
Samples, %array of feature vectors
Classes, %class labels for samples
EigenVectors,
EigenValues,
BetweenScatter, %SB
WithinScatter, %SW
NumberOfClasses
end
properties(Access = 'private')
ClassSubsetIndexes %start and end indices where one class begins and ends in variable "Classes"
TotalMean,
MeanPerClass
end
methods
function this = LDA(samples, classes)
this.Samples = samples;
this.Classes = classes;
this.ClassSubsetIndexes = LDA.GetClassSubsetIndexes(classes);
this.NumberOfClasses = size(this.ClassSubsetIndexes, 1);
[this.TotalMean this.MeanPerClass] = this.CalculateMean(samples);
end
function Compute(this)
SB = this.CalculateBetweenScatter();
SW = this.CalculateWithinScatter();
[eigVectors, eigValues] = eig(SB, SW);
eigValues = diag(eigValues);
%--------- sort eig values and vectors ---------%
[sortedValues, ind] = sort(eigValues,'descend'); %store sorted values and indices
sortedVectors = eigVectors(:,ind); % reorder columns
this.EigenVectors = sortedVectors;
this.EigenValues = sortedValues;
this.BetweenScatter = SB;
this.WithinScatter = SW;
end
function projectedSamples = Transform(this, samples, numOfDiscriminants)
vectors = this.EigenVectors(:, 1:numOfDiscriminants);
%transformed sample is scalar (projection on a hyperplane)
projectedSamples = samples * vectors;
end
function measure = CalculateFLDMeasure(this, numOfDiscriminants)
SB = this.BetweenScatter;
SW = this.WithinScatter;
vectors = this.EigenVectors(:, 1:numOfDiscriminants);
measure = det(vectors' * SB * vectors) / det(vectors' * SW * vectors);
end
end
methods(Access = 'private')
function [totalMean meanPerClass] = CalculateMean(this, samples)
for classIdx=1 : 1 : length(this.ClassSubsetIndexes)
startIdx = this.ClassSubsetIndexes(classIdx, 1);
endIdx = this.ClassSubsetIndexes(classIdx, 2);
meanPerClass(classIdx, :) = mean( samples(startIdx:endIdx, :), 1);
end
totalMean = mean(meanPerClass, 1); %global average value
end
function SW = CalculateWithinScatter(this)
featureLength = size(this.Samples, 2);
SW = zeros(featureLength, featureLength);
for classIdx=1 : 1 : length(this.ClassSubsetIndexes)
startIdx = this.ClassSubsetIndexes(classIdx, 1);
endIdx = this.ClassSubsetIndexes(classIdx, 2);
classSamples = this.Samples(startIdx:endIdx, :);
classMean = this.MeanPerClass(classIdx, :);
Sw_Class = LDA.CalculateScatterMatrix(classSamples, classMean);
SW = SW + Sw_Class;
end
end
function SB = CalculateBetweenScatter(this)
featureLength = size(this.Samples, 2);
SB = zeros(featureLength, featureLength);
for classIdx=1 : 1 : length(this.ClassSubsetIndexes)
startIdx = this.ClassSubsetIndexes(classIdx, 1);
endIdx = this.ClassSubsetIndexes(classIdx, 2);
numberOfSamplesInClass = endIdx - startIdx + 1;
classMean = this.MeanPerClass(classIdx, :);
%because my vector is row-vector
Sb_class = (classMean - this.TotalMean)' * (classMean - this.TotalMean);
Sb_class = numberOfSamplesInClass * Sb_class;
SB = SB + Sb_class;
end
end
end
methods(Static, Access = 'private')
function subset = GetClassSubsetIndexes(classes)
subset=[];
oldClassLabel = NaN;
for i=1 : 1 : length(classes)
if oldClassLabel ~= classes(i)
oldClassLabel = classes(i);
subset = cat(1, subset, i);
end
end
%now put end indices
for i=2 : 1 : size(subset,1)
endIndex = subset(i, 1);
subset(i-1, 2) = endIndex-1;
end
subset(size(subset,1), 2) = length(classes);
end
function Sw_class = CalculateScatterMatrix(classSamples, classMean)
featureLength = size(classSamples, 2);
Sw_class = zeros(featureLength, featureLength);
for sampleIdx=1 : 1 : size(classSamples, 1)
covariance = (classSamples(sampleIdx, :) - classMean);
covariance = covariance' * covariance; %because my vector is row-vector
Sw_class = Sw_class + covariance;
end
end
end
end
Could you go back to the place you loaded the data,
data = importdata('LDA data.mat')
and after that ask
class(data)
I suspect you are loading data with an integer data class such as uint8
The answer provided was double
Hmmm.
After your line
vectors = this.EigenVectors
please show
class(samples)
class(vectors)
size(samples)
size(vectors)
Please also show how you are invoking Transform() and the class() of the data items you are passing in to it.
Question: I do not see any code to enforce that Compute() has been called before you use the stored eigenvalues and eigenvectors ?
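One way to add such a guard (a hypothetical sketch, not part of the original class) would be to check the stored eigenvectors at the top of Transform:

```matlab
function projectedSamples = Transform(this, samples, numOfDiscriminants)
    % Hypothetical guard: fail early if Compute() was never called,
    % instead of multiplying by an empty EigenVectors matrix
    if isempty(this.EigenVectors)
        error('LDA:notComputed', 'Call Compute() before Transform().');
    end
    vectors = this.EigenVectors(:, 1:numOfDiscriminants);
    projectedSamples = samples * vectors;
end
```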
Maybe this might help:
function Apply_Callback(hObject, eventdata, handles)
global I
data = importdata('LDA data.mat')
features=data(:,1:end-1); %split data without labels
lable = data(:,end); %get the labels from training data
trainSamples = features;%training samples
trainClasses = lable;%training labels
I2 = reshape(I,[],1);
testSamples = I2;%test samples
lableimage = reshape(handles.lableimage,[],1);
testClasses = lableimage;%test labels
mLDA = LDA(trainSamples, trainClasses);
mLDA.Compute();
transformedTrainSamples = mLDA.Transform(trainSamples, 1);
transformedTestSamples = mLDA.Transform(testSamples, 1);
calculatedClases = knnclassify(transformedTestSamples, transformedTrainSamples, trainClasses);
similarity = zeros(1, length(testClasses));
for i = 1 : 1 : length(testClasses)
similarity(i) = ( testClasses(i) == calculatedClases(i) );
end
accuracy = sum(similarity) / length(testClasses);
fprintf('Testing: Accuracy is: %f %%\n', accuracy*100);
guidata(hObject, handles);
% hObject    handle to Apply (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
Sorry for the late reply; I was busy the whole morning... Here it is:
ans =
double
ans =
double
ans =
170884 21
ans =
21 1
ans =
uint8
ans =
double
ans =
512652 1
You have not mentioned handles.lableimage before. What is it?
Also which variable is uint8 the class of?
If possible would you be able to leave your email? That way the problem would be much clearer...
handles.lableimage is actually the labels of the image labelled by the person... uint8 is the size(samples)
uint8 cannot be an answer for a size, only for a class.
I had suggested asking for class(samples), class(vectors), size(samples), size(vectors), and the outputs that appear to correspond to that information are the 'double', 'double', '170884 21' and '21 1'. And then I don't know what the 'uint8', 'double', '512652 1' correspond to. Perhaps you could go back and, before each size() and class(), add a disp() indicating what the following value will represent.
I think I might be able to explain the '512652 1'. It's actually the picture I loaded that's for testing: < 359x476x3 double >, so it ends up like that. So does that mean that because I put this in, the thing cannot run?
I am still trying to figure out where the uint8 is coming from. I need those class() and size() information with an indication of what each one is.
OK. I think it's coming from the data. Need some time to load... Will be replying to you with the data...
class(samples)
ans =
double
ans =
uint8
class(vectors)
ans =
double
ans =
double
size(samples)
ans =
170884 21
ans =
21 1
size(vectors)
ans =
512652 1
ans =
21 1
I am confused about why there are two ans for each indicator of what follows ??
I'm actually not sure either; I followed what was told, but this came out...
Please post the current version of the code including the disp() and size() and so on calls. Also please post the current version of the code you are using to load the data and call these routines.
There are quite a few parts, are you sure you don't want me to email you?
---------------------LDA------------------------
classdef LDA < handle
%inherit from handle otherwise we must store value (object) each time when we change some property value
%function would need to return obj(this)
properties
Samples, %array of feature vectors
Classes, %class labels for samples
EigenVectors,
EigenValues,
BetweenScatter, %SB
WithinScatter, %SW
NumberOfClasses
end
properties(Access = 'private')
ClassSubsetIndexes %start and end indices where one class begins and ends in variable "Classes"
TotalMean,
MeanPerClass
end
methods
function this = LDA(samples, classes)
this.Samples = samples;
this.Classes = classes;
this.ClassSubsetIndexes = LDA.GetClassSubsetIndexes(classes);
this.NumberOfClasses = size(this.ClassSubsetIndexes, 1);
[this.TotalMean this.MeanPerClass] = this.CalculateMean(samples);
end
function Compute(this)
SB = this.CalculateBetweenScatter();
SW = this.CalculateWithinScatter();
[eigVectors, eigValues] = eig(SB, SW);
eigValues = diag(eigValues);
%--------- sort eig values and vectors ---------%
[sortedValues, ind] = sort(eigValues,'descend'); %store sorted values and indices
sortedVectors = eigVectors(:,ind); % reorder columns
this.EigenVectors = sortedVectors;
this.EigenValues = sortedValues;
this.BetweenScatter = SB;
this.WithinScatter = SW;
end
function projectedSamples = Transform(this, samples, numOfDiscriminants)
vectors = this.EigenVectors(:, 1:numOfDiscriminants);
class(samples)
class(vectors)
size(samples)
size(vectors)
disp(samples)
disp(vectors)
%transformed sample is scalar (projection on a hyperplane)
projectedSamples = samples * vectors;
end
function measure = CalculateFLDMeasure(this, numOfDiscriminants)
SB = this.BetweenScatter;
SW = this.WithinScatter;
vectors = this.EigenVectors(:, 1:numOfDiscriminants);
measure = det(vectors' * SB * vectors) / det(vectors' * SW * vectors);
end
end
methods(Access = 'private')
function [totalMean meanPerClass] = CalculateMean(this, samples)
for classIdx=1 : 1 : length(this.ClassSubsetIndexes)
startIdx = this.ClassSubsetIndexes(classIdx, 1);
endIdx = this.ClassSubsetIndexes(classIdx, 2);
meanPerClass(classIdx, :) = mean( samples(startIdx:endIdx, :), 1);
end
totalMean = mean(meanPerClass, 1); %global average value
end
function SW = CalculateWithinScatter(this)
featureLength = size(this.Samples, 2);
SW = zeros(featureLength, featureLength);
for classIdx=1 : 1 : length(this.ClassSubsetIndexes)
startIdx = this.ClassSubsetIndexes(classIdx, 1);
endIdx = this.ClassSubsetIndexes(classIdx, 2);
classSamples = this.Samples(startIdx:endIdx, :);
classMean = this.MeanPerClass(classIdx, :);
Sw_Class = LDA.CalculateScatterMatrix(classSamples, classMean);
SW = SW + Sw_Class;
end
end
function SB = CalculateBetweenScatter(this)
featureLength = size(this.Samples, 2);
SB = zeros(featureLength, featureLength);
for classIdx=1 : 1 : length(this.ClassSubsetIndexes)
startIdx = this.ClassSubsetIndexes(classIdx, 1);
endIdx = this.ClassSubsetIndexes(classIdx, 2);
numberOfSamplesInClass = endIdx - startIdx + 1;
classMean = this.MeanPerClass(classIdx, :);
%because my vector is row-vector
Sb_class = (classMean - this.TotalMean)' * (classMean - this.TotalMean);
Sb_class = numberOfSamplesInClass * Sb_class;
SB = SB + Sb_class;
end
end
end
methods(Static, Access = 'private')
function subset = GetClassSubsetIndexes(classes)
subset=[];
oldClassLabel = NaN;
for i=1 : 1 : length(classes)
if oldClassLabel ~= classes(i)
oldClassLabel = classes(i);
subset = cat(1, subset, i);
end
end
%now put end indices
for i=2 : 1 : size(subset,1)
endIndex = subset(i, 1);
subset(i-1, 2) = endIndex-1;
end
subset(size(subset,1), 2) = length(classes);
end
function Sw_class = CalculateScatterMatrix(classSamples, classMean)
featureLength = size(classSamples, 2);
Sw_class = zeros(featureLength, featureLength);
for sampleIdx=1 : 1 : size(classSamples, 1)
covariance = (classSamples(sampleIdx, :) - classMean);
covariance = covariance' * covariance; %because my vector is row-vector
Sw_class = Sw_class + covariance;
end
end
end
end
Code( As in GUI)
function Apply_Callback(hObject, eventdata, handles)
global I
data = importdata('LDA data.mat')
features=data(:,1:end-1); %split data without labels
lable=data(:,end); %get the labels
trainSamples = features;%training samples
trainClasses = lable;%training labels
I2 = reshape(I,[],1);
testSamples = I2;%test samples
lableimage = reshape(handles.lableimage,[],1);
testClasses = lableimage;%test labels
mLDA = LDA(trainSamples, trainClasses);
mLDA.Compute();
transformedTrainSamples = mLDA.Transform(trainSamples, 1);
transformedTestSamples = mLDA.Transform(testSamples, 1);
calculatedClases = knnclassify(transformedTestSamples, transformedTrainSamples, trainClasses);
similarity = zeros(1, length(testClasses));
for i = 1 : 1 : length(testClasses)
similarity(i) = ( testClasses(i) == calculatedClases(i) );
end
accuracy = sum(similarity) / length(testClasses);
fprintf('Testing: Accuracy is: %f %%\n', accuracy*100);
guidata(hObject, handles);
LDA data.mat is actually data consisting of many pixels, with each pixel's features formed into one row, so there are 21 columns. The last column is meant for the labels, 1-7. The data has numbers ranging from 0 to 1; some are like 0.0079. Maybe this might help. Forgot to add that I don't have the Statistics Toolbox, only basic MATLAB...
In Transform please change the debugging code after the assignment to "vectors" to,
disp('Transform: processing dataset')
inputname(1)
inputname(2)
disp('Transform: class(samples)')
class(samples)
disp('Transform: size(samples)')
size(samples)
disp('Transform: class(vectors)')
class(vectors)
disp('Transform: size(vectors)')
size(vectors)
And in your GUI, after the line
testClasses = lableimage;%test labels
please temporarily add
disp('GUI: trainSamples is')
class(trainSamples)
disp('GUI: testSamples is')
class(testSamples)
I will make a prediction: that your "global I" is data that is uint8, quite contrary to your expectations that the data will be in the range 0-1. Is it possible that the data that went into making the LDA data file is pixels that were processed through im2double(), but that the file that was stored in the variable "I" has not been processed through im2double() ?
It might be possible; I'm running the changes you told me to make right now...
trainSamples is double; testSamples is uint8
The immediate cause of the matrix multiplication error you are getting is the fact that testSamples is uint8. You could pass in double(testSamples), which would convert it to a floating-point datatype, but then the values would just be the floating-point representation of the non-negative integers, instead of being values in the range 0 to 1 like your trainSamples are. I therefore suggest trying,
testSamples = im2double(lableimage);
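For reference, im2double both converts uint8 to double and rescales from 0-255 to 0-1, whereas double() alone only changes the type:

```matlab
px = uint8([0 128 255]);
double(px)     % [0 128 255]        -- type change only
im2double(px)  % ~[0 0.5020 1.0000] -- also rescaled by 1/255
```

If im2double is unavailable (it historically shipped with the Image Processing Toolbox), `double(px)/255` gives the same result for uint8 input.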
Actually, how do the uint8 and double affect the testing and sampling? What should trainSamples, trainClasses, testSamples, and testClasses even be? What's your opinion on this?
The uint8 is what is causing your matrix multiplication to crash.
You are determining LDA coefficients based upon data that is in the range 0 to 1, so if you present data that is 0, 1, 2, 3, ... 255, then the best you could hope for would be for it to classify as if all of your non-zero data were equivalent to 1.0 in the original coefficients. But it likely won't do that: it will likely just produce wildly strange answers. For example, as if you had trained on images of different kinds of trout, and then asked the code to classify a cow and a bicycle.
Sounds correct after a lot of thought. But my training data has decimals in it, so it's OK? The samples and vectors are actually my trainSamples and testClasses, right?
In the first execution of Transform, "samples" is trainSamples and "vectors" is retrieved from the first eigenvector computed from the LDA; the second parameter that you are passing in to Transform is always 1, reflecting that you only want the first eigenvector.
In the second execution of Transform, "samples" is testSamples, and "vectors" is retrieved from the first eigenvector computed from the LDA; likewise you are always passing in 1 as the second parameter, reflecting that you only want the first eigenvector.
The class labels are not used inside Transform()
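Dimension-wise, the projection inside Transform is a plain matrix product, so the column count of "samples" must equal the row count of "vectors". Using the sizes reported earlier in this thread:

```matlab
% trainSamples: 170884 x 21, vectors: 21 x 1
% trainSamples * vectors   ->  170884 x 1   (inner dimensions agree)
%
% testSamples:  512652 x 1  (a 359x476x3 image reshaped to one column)
% testSamples * vectors    ->  error: inner dimensions 1 and 21 differ
% To be classified per pixel, the test image would need the same 21
% features per row that the training samples have.
```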
Now it's just the part where the samples' and vectors' dimensions don't match.
Oh, I'm here from time to time :)
Got another problem, hoping you could help: the dimensions don't agree, but the double conversion works. testSamples is supposed to be the picture I want to classify, right?
Also, the new problem is none other than out of memory... I'm crying...
More Answers (0)
2013-1-23