How to gather data of same class, select N rows, calculate mean in CSV file
Hello.
I have a dataset (90 x 2857); column 2857 is the label (class).
I want to gather the rows of each class, randomly select N of them, and calculate their mean.
I wrote the code below, but I would like to shorten it using a for loop.
Thank you.
clear all
close all
% read csv file
final = csvread('outfile.csv');
% split data and label
predictors = final(:, 1:end-1);
labels = final(:, end);
predictors = normalize(predictors, 2);
predictors_train = predictors(1:80, :); % rows 1 to 80
predictors_test = predictors(81:90, :); % rows 81 to 90
labels_train = labels(1:80, :); % rows 1 to 80
labels_test = labels(81:90, :); % rows 81 to 90
% gather data of the same label
wave2_support= [predictors_train(1,:); predictors_train(11,:); predictors_train(21,:);
predictors_train(31,:); predictors_train(41,:); predictors_train(51,:);
predictors_train(61,:); predictors_train(71,:)];
wave1_support= [predictors_train(2,:); predictors_train(12,:); predictors_train(22,:);
predictors_train(32,:); predictors_train(42,:); predictors_train(52,:);
predictors_train(62,:); predictors_train(72,:)];
walk_support= [predictors_train(3,:); predictors_train(13,:); predictors_train(23,:);
predictors_train(33,:); predictors_train(43,:); predictors_train(53,:);
predictors_train(63,:); predictors_train(73,:)];
skip_support= [predictors_train(4,:); predictors_train(14,:); predictors_train(24,:);
predictors_train(34,:); predictors_train(44,:); predictors_train(54,:);
predictors_train(64,:); predictors_train(74,:)];
side_support= [predictors_train(5,:); predictors_train(15,:); predictors_train(25,:);
predictors_train(35,:); predictors_train(45,:); predictors_train(55,:);
predictors_train(65,:); predictors_train(75,:)];
run_support= [predictors_train(6,:); predictors_train(16,:); predictors_train(26,:);
predictors_train(36,:); predictors_train(46,:); predictors_train(56,:);
predictors_train(66,:); predictors_train(76,:)];
pjump_support= [predictors_train(7,:); predictors_train(17,:); predictors_train(27,:);
predictors_train(37,:); predictors_train(47,:); predictors_train(57,:);
predictors_train(67,:); predictors_train(77,:)];
jump_support= [predictors_train(8,:); predictors_train(18,:); predictors_train(28,:);
predictors_train(38,:); predictors_train(48,:); predictors_train(58,:);
predictors_train(68,:); predictors_train(78,:)];
jack_support= [predictors_train(9,:); predictors_train(19,:); predictors_train(29,:);
predictors_train(39,:); predictors_train(49,:); predictors_train(59,:);
predictors_train(69,:); predictors_train(79,:)];
bend_support= [predictors_train(10,:); predictors_train(20,:); predictors_train(30,:);
predictors_train(40,:); predictors_train(50,:); predictors_train(60,:);
predictors_train(70,:); predictors_train(80,:)];
% randomly select N=5 of the 8 rows for each label
N=5;
c = randperm(size(wave2_support,1)); c = c(1:N);
wave2=wave2_support(c,:); % wave2 matrix
c = randperm(size(wave1_support,1)); c = c(1:N);
wave1=wave1_support(c,:); % wave1 matrix
c = randperm(size(walk_support,1)); c = c(1:N);
walk=walk_support(c,:); % walk matrix
c = randperm(size(skip_support,1)); c = c(1:N);
skip=skip_support(c,:); % skip matrix
c = randperm(size(side_support,1)); c = c(1:N);
side=side_support(c,:); % side matrix
c = randperm(size(run_support,1)); c = c(1:N);
run=run_support(c,:); % run matrix
c = randperm(size(pjump_support,1)); c = c(1:N);
pjump=pjump_support(c,:); % pjump matrix
c = randperm(size(jump_support,1)); c = c(1:N);
jump=jump_support(c,:); % jump matrix
c = randperm(size(jack_support,1)); c = c(1:N);
jack=jack_support(c,:); % jack matrix
c = randperm(size(bend_support,1)); c = c(1:N);
bend=bend_support(c,:); % bend matrix
% calculate the mean of each support set
wave2_mean = mean(wave2,1);
wave1_mean = mean(wave1,1);
walk_mean = mean(walk,1);
skip_mean = mean(skip,1);
side_mean = mean(side,1);
run_mean = mean(run,1);
pjump_mean = mean(pjump,1);
jump_mean = mean(jump,1);
jack_mean = mean(jack,1);
bend_mean = mean(bend,1);
% select query (1 row)
N=1; % 1 row randomly selected
c = randperm(size(predictors_test,1)); c = c(1:N);
query=predictors_test(c,:); % output matrix
% calculate the euclidean distance between the query and each support mean
euclidean_bend = pdist2(bend_mean,query, 'euclidean');
euclidean_jack = pdist2(jack_mean,query, 'euclidean');
euclidean_jump = pdist2(jump_mean,query, 'euclidean');
euclidean_pjump = pdist2(pjump_mean,query, 'euclidean');
euclidean_run = pdist2(run_mean,query, 'euclidean');
euclidean_side = pdist2(side_mean,query, 'euclidean');
euclidean_skip = pdist2(skip_mean,query, 'euclidean');
euclidean_walk = pdist2(walk_mean,query, 'euclidean');
euclidean_wave1 = pdist2(wave1_mean,query, 'euclidean');
euclidean_wave2 = pdist2(wave2_mean,query, 'euclidean');
% stack the distances (input for the softmax step)
n = [euclidean_bend; euclidean_jack; euclidean_jump; euclidean_pjump;
euclidean_run; euclidean_side; euclidean_skip;
euclidean_walk; euclidean_wave1; euclidean_wave2];
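For reference, here is a minimal loop-based sketch of the repetitive blocks above, assuming the ten labels cycle through the training rows in the order wave2, wave1, walk, skip, side, run, pjump, jump, jack, bend (as the row indices above imply); the names supportMean and d are placeholders, not from the original code.
numClasses = 10;                          % wave2, wave1, walk, skip, side, run, pjump, jump, jack, bend
N = 5;                                    % rows to sample per class
supportMean = zeros(numClasses, size(predictors_train,2));
for k = 1:numClasses
    % rows k, k+10, k+20, ... of predictors_train belong to class k
    support = predictors_train(k:numClasses:end, :);
    idx = randperm(size(support,1), N);   % randomly pick N of the 8 rows
    supportMean(k,:) = mean(support(idx,:), 1);
end
% one random query row from the test set
query = predictors_test(randi(size(predictors_test,1)), :);
% euclidean distance from the query to each class mean (10 x 1 vector, class k in row k)
d = pdist2(supportMean, query, 'euclidean');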
Accepted Answer
Hiro Yoshino on 17 Mar 2020
Edited: Hiro Yoshino on 17 Mar 2020
I would suggest using a table for this kind of data analysis.
The following code may help you reduce the number of lines; by its last line, the data are sorted by label.
data = readtable("outfile.csv");
data.Properties.VariableNames{end} = 'label';
data{:,1:end-1} = normalize(data{:,1:end-1},1); % I believe this should be "1"
data = sortrows(data,'label'); % the rows are sorted by label
wave2_support = data(1:9,:); % after sorting, the first 9 rows all carry the same label
See the documentation for table for more details.
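Building on that idea, here is a sketch of how the grouping, random selection, and per-label means could be finished in one loop (assuming the variable names from the snippet above and N = 5 rows sampled per label):
N = 5;                                     % rows to sample per label
labelList = unique(data.label);            % one entry per class
classMeans = zeros(numel(labelList), width(data)-1);
for k = 1:numel(labelList)
    rows = data{data.label == labelList(k), 1:end-1};  % all rows of label k
    idx  = randperm(size(rows,1), N);                  % random N of them
    classMeans(k,:) = mean(rows(idx,:), 1);
end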
If you want to split the data into TRAIN and TEST sets, you should try cvpartition:
this function maintains the ratio of the labels when splitting the data into two parts.
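For example, a stratified holdout split with cvpartition might look like the sketch below; the 1/9 holdout fraction is only an assumption chosen to mimic the 80/10 split in the question.
c = cvpartition(labels, 'HoldOut', 1/9);   % stratified: label ratios preserved in both parts
predictors_train = predictors(training(c), :);
labels_train     = labels(training(c));
predictors_test  = predictors(test(c), :);
labels_test      = labels(test(c));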