Finding the optimal solution for data with two variables
I have a data set with two variables (two columns) in which each variable has thousands of candidate solutions (rows). I also have the actual solution.
I want to find the best solution (row) with the lowest error with respect to both variables (not just one of them).
As a simple example containing only 10 solutions (rows), I have the following:
% actual solution:
X1_actual = 0.4722;
X2_actual = 4.4;
% predicted data:[X1_predicted X2_predicted]
Predicted_data = [0.4742 4.4557
0.4739 4.4553
0.4732 4.4549
0.4730 4.4545
0.4725 4.4540
0.4723 4.4536
0.4715 4.4532
0.4714 4.4528
0.4713 4.4505
0.4701 4.4501]
Where the first and second columns represent the predicted values of X1 and X2, respectively.
The problem is that the minimum errors for both variables are not at the same row. As can be seen in this example, the minimum error for X1 is in the 6th row while for X2 it is in the last row.
Is there a scientific method to find the optimal row that considers the minimum errors of both variables?
3 Comments
It's up to you to define a measure that gives the "distance" between two points (x1,y1) and (x2,y2) in 2d-space and choose the row that has the minimum "distance" to the given point according to this measure.
One possibility is the usual Euclidean distance d = sqrt((x1-x2)^2 + (y1-y2)^2), but there are many other options.
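A minimal MATLAB sketch of this suggestion, using the variable names from the question (`X1_actual`, `X2_actual`, `Predicted_data`):

```matlab
% Collect the target into one row vector (names taken from the question):
actual = [X1_actual X2_actual];

% Euclidean distance of every predicted row to the actual solution.
% Implicit expansion needs R2016b+; use bsxfun(@minus, ...) on older releases,
% or vecnorm(Predicted_data - actual, 2, 2) on R2017b+.
d = sqrt(sum((Predicted_data - actual).^2, 2));

[dmin, idx] = min(d);                 % idx = row with the smallest distance
best_row = Predicted_data(idx, :);    % the chosen "optimal" row
```

Note that with the sample data the X2 errors (about 0.05) are much larger than the X1 errors (about 0.002), so the plain Euclidean distance is dominated by X2 and picks the last row.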
AAAAAA
2023-6-11
As I wrote: any norm on R^2 can be used.
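For illustration, a few alternative norms, again assuming the variables `actual` (the row vector `[X1_actual X2_actual]`) and `Predicted_data` from the question. Because X1 and X2 live on different scales, a relative (scale-free) error is often the more meaningful choice:

```matlab
err = abs(Predicted_data - actual);        % per-row absolute errors

d1   = sum(err, 2);                        % 1-norm: sum of the two errors
dinf = max(err, [], 2);                    % infinity-norm: the worse of the two

% Relative errors remove the scale difference between X1 (~0.47) and X2 (~4.4):
drel = sqrt(sum((err ./ abs(actual)).^2, 2));

[~, idx] = min(drel);                      % best row under the relative norm
```

Which norm is "right" depends on how you want to trade the two errors off; there is no single scientifically correct choice.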