Hi Jaime,
When using Principal Component Analysis (PCA) to transform your data, it's important to understand what PCA does and how it affects your dataset. PCA projects your data onto a new coordinate system whose axes (the principal components) are ordered by the amount of variance they explain. The score matrix contains the projections of your original data onto these components; note that MATLAB's pca function mean-centers the data before projecting, which matters if you later want to reconstruct the original data from the scores.
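As a quick sanity check on what pca returns (a minimal sketch on hypothetical random data, not your S), the score matrix is exactly the mean-centered data multiplied by coeff:

```matlab
% Toy illustration: scores = (data - column means) * eigenvector matrix
X = randn(100, 3);                 % 100 observations, 3 variables
[coeff, score] = pca(X);           % coeff: 3x3, score: 100x3
Xc = X - mean(X, 1);               % pca centers the data internally
resid = score - Xc * coeff;        % should be zero up to round-off
disp(max(abs(resid(:))))
```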
Here is some code that might help you:
% Simulate 10000 realizations of S_i = V*i + 0.5*A*i^2 for i = 1..50,
% where V ~ Uniform(1, 2) and A ~ Beta(2, 2)
V = unifrnd(1, 2, 1, 10000);
A = betarnd(2, 2, 1, 10000);
t = 50;
S = zeros(t, 10000);
theoreticalmeanS = zeros(1, t);
meanS = zeros(1, t);
for i = 1:t
    S(i, :) = V * i + 0.5 * A * i^2;
    % E[V] = 3/2 and E[A] = 1/2, so E[S_i] = (3/2)*i + (1/4)*i^2
    theoreticalmeanS(i) = 3/2 * i + 1/4 * i^2;
    meanS(i) = mean(S(i, :));
end
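Since meanS is computed above but never used, it is worth verifying it against the theoretical mean before moving on to PCA (this snippet reuses the variables defined above):

```matlab
% Compare the empirical mean of the simulated S with the theoretical mean
figure;
plot(1:t, meanS, 'b', 1:t, theoreticalmeanS, 'r--');
legend('Empirical mean', 'Theoretical mean', 'Location', 'northwest');
xlabel('Time Index i');
ylabel('Mean of S');
title('Empirical vs. Theoretical Mean of S');
```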
% pca expects observations in rows and variables in columns, so transpose:
% each of the 10000 realizations is one observation with t = 50 variables
[coeff, score, latent] = pca(S');
% Plot the principal component eigenvectors
figure('Name', 'coeff, principal component eigenvectors');
hold on;
for i = 1:t
    plot(coeff(:, i));
end
title('Principal Component Eigenvectors');
xlabel('Feature Index');
ylabel('Coefficient Value');
% Plot a subset of the original realizations (all 10000 would be slow to draw)
figure;
hold on;
plot(S(:, 1:100));
title('Original Data S (first 100 realizations)');
xlabel('Time Index i');
ylabel('Value');
% Plot the transformed data (scores) for the same subset of realizations;
% each line is one realization's coordinates along the principal components
figure;
hold on;
plot(score(1:100, :)');
title('PCA Transformed Data (Scores)');
xlabel('Principal Component Index');
ylabel('Score Value');
% Plot explained variance
figure;
plot(cumsum(latent) / sum(latent) * 100);
xlabel('Number of Principal Components');
ylabel('Variance Explained (%)');
title('Cumulative Variance Explained by Principal Components');
% Reconstruct S from scores and coefficients. MATLAB's pca centers the data,
% so the column means of S' must be added back; the result is then transposed
% to recover the original t-by-10000 orientation
S_reconstructed = (score * coeff' + mean(S', 1))';
figure;
plot(S(1, :), 'b', 'DisplayName', 'Original S (i = 1)');
hold on;
plot(S_reconstructed(1, :), 'r--', 'DisplayName', 'Reconstructed S (i = 1)');
legend;
title('Comparison of Original and Reconstructed Data');