Gradient Descent - fix
Hi all,
I have the following code for one of the Gradient Descent assignments in the Machine Learning course on Coursera:
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
data = load('ex1data1.txt');      % read comma separated data
y = data(:, 2);
m = length(y);                    % number of training examples
X = [ones(m, 1), data(:,1)];      % Add a column of ones to x
theta = zeros(2, 1);
m = length(y);                    % number of training examples
J_history = zeros(num_iters, 1);
delta = zeros(2, 1);

for iter = 1:num_iters
    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta.
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCost) and gradient here.
    %
    for i = 1:m
        Xi = X(i,:);
        hi = Xi*theta;
        delta = delta + (hi - y(i))*(Xi');
    end
    delta = delta/m;
    theta = theta - alpha*delta;
    delta = 0;
    % ============================================================

    % Save the cost J in every iteration
    J_history(iter) = computeCost(X, y, theta);
end

end
It gives me the following error:
>> gradientDescent()
Not enough input arguments.
Error in gradientDescent (line 13)
J_history = zeros(num_iters, 1);
I cannot find an answer to this. Any idea how to fix it?
Accepted Answer
Matt J
2021-3-29
Call your function with all 5 input arguments.
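As an illustration only (not part of the original answer), a minimal driver sketch could look like the following. It assumes ex1data1.txt is in the current folder and that computeCost from the assignment is on the path; the values of alpha and num_iters are example settings, not necessarily the ones required by the course.
% Example driver: build X, y and theta first, then pass all five
% arguments instead of calling gradientDescent() with none.
data  = load('ex1data1.txt');     % comma separated data, as in the function above
y     = data(:, 2);
m     = length(y);
X     = [ones(m, 1), data(:, 1)]; % add intercept column
theta = zeros(2, 1);              % initial parameters

alpha     = 0.01;                 % example learning rate
num_iters = 1500;                 % example number of iterations

[theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters);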
6 Comments
Samuel Valentin Lopez Valenzuela
2021-8-6
I have the same problem. How did you use the accessory files to fix it?
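For illustration only (this is a sketch, not the poster's code or a course-provided solution): once the function is called with all five arguments from a driver script, the data loading and re-initialization of X, y and theta inside the function become redundant and can be removed. A version that performs the same update the inner loop above computes, but vectorized, could look like:
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   X, y and theta come in through the argument list, so nothing is
%   reloaded or re-initialized inside the function.
m         = length(y);
J_history = zeros(num_iters, 1);

for iter = 1:num_iters
    % Vectorized gradient of the least-squares cost: (1/m) * X' * (X*theta - y)
    delta = (X' * (X * theta - y)) / m;
    theta = theta - alpha * delta;

    % Assumes computeCost from the assignment is on the path
    J_history(iter) = computeCost(X, y, theta);
end

end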