Surface Fitting using Neural Networks

Version 1.2.5 (1.5 MB) by S0852306
Solve N-dimensional surface fitting problems with extremely high accuracy.
449 downloads
Updated 2024/7/4

View License

Neural Networks for Nonlinear Regression & Classification
Neural networks are universal function approximators: given enough parameters, a neural net can approximate any multivariable continuous function to any desired level of accuracy.
Arbitrary-dimension, high-precision function approximation.
This package targets scientific-computing problems where the tolerance for approximation error is small. A second-order optimization solver is used, allowing the network to achieve significantly higher accuracy than commonly used first-order solvers.
NN = NeuralFit(x, y, [N, M]); % N: input dimension, M: output dimension.
Note
  • Fit data using "NeuralFit", where N and M are the input and output dimensions, respectively.
  • "x" must be an N-by-D matrix and "y" an M-by-D matrix, where D is the number of data points.
  • "NeuralFit" uses default settings; to adjust the network architecture or the optimization solver, please refer to "GeneralGuide.mlx".
Example
x = linspace(-2, 2, 20); y = x;
[X, Y] = meshgrid(x, y); U = X.^2 + Y.^2; Z = exp(-0.5*U).*cos(2*U); % target surface
data = [X(:), Y(:)].'; label = Z(:).'; % reshape to 2-by-D inputs and 1-by-D labels
NN = NeuralFit(data, label, [2, 1]); % 2 input dimensions, 1 output dimension
Prediction = NN.Evaluate(data); % evaluate the trained network
PerformanceMetric = NN.Report; % summarize the fitting accuracy
figure();
surf(X, Y, Z); hold on; scatter3(data(1, :), data(2, :), label) % surface vs. samples
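To inspect the fit away from the training samples, evaluate the trained network on a finer grid; a minimal sketch, assuming "Evaluate" accepts the same 2-by-D input layout as "NeuralFit":
xq = linspace(-2, 2, 100);
[Xq, Yq] = meshgrid(xq, xq);
query = [Xq(:), Yq(:)].';                       % 2-by-D query points
Zq = NN.Evaluate(query);                        % 1-by-D predictions
figure();
surf(Xq, Yq, reshape(Zq, size(Xq)), 'EdgeColor', 'none');
title('Network prediction on a 100-by-100 grid');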
Application
  • Approximate highly nonlinear functions
  • Smooth out data and estimate derivatives
  • Handwritten digit recognition
  • Find patterns in noisy data
Basic Network & Solver Set Up
data = linspace(0, 2*pi, 1000); label = data.*sin(data) + cos(3*data); % 1-D target curve
LayerStruct = [1, 7, 7, 7, 1]; % input, three hidden layers of width 7, output
NN = Initialization(LayerStruct); % initialize the network weights
option.MaxIteration = 600;
NN = OptimizationSolver(data, label, NN, option); % train
Prediction = NN.Evaluate(data);
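A quick sanity check is to compare the predictions against the labels; a minimal sketch in plain MATLAB, reusing the "Evaluate" output above:
RMSE = sqrt(mean((Prediction - label).^2));     % root-mean-square error
figure();
plot(data, label, 'k.'); hold on;
plot(data, Prediction, 'r-', 'LineWidth', 1.5);
legend('Data', 'Network prediction');
title(sprintf('RMSE = %.3e', RMSE));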
Standard Template
  • Two standard templates are available for quickly calling the main functions: "SimplifiedWorkflow.m" and "CustomizableWorkflow.m". "SimplifiedWorkflow.m" helps beginners get started quickly, while "CustomizableWorkflow.m" provides more flexibility.
Instruction and Example
  • For detailed instructions on how to use the package, please refer to "GeneralGuide.mlx".
  • "DigitRecognition.mlx" use a simple MLP architecture and achieves an accuracy of 97.6% on the testing set of the "MNIST" handwritten digit recognition dataset.
  • "CurveFittingFromNoisyData.mlx" demonstrates how to use neural nets to fit noisy data and estimate derivatives.
  • "CustomizableWorkflow.m" provide standard workflow for following multivariable function approximation.
  • "MathModel.mlx" explains the mathematical model of neural nets and provides a step-by-step numerical example that may help users understand neural nets more easily.
Customizing the model and solver parameters
  • For detailed instructions, please refer to "GeneralGuide.mlx" on the Examples page.
NN.Cost = 'MSE'; % specify cost function.
NN.InputAutoScaling = 'on'; % normalize the data automatically.
NN.ActivationFunction = 'gaussian'; % specify nonlinear activation.
LayerStruct = [InputDimension, 10, 10, 10, OutputDimension]; % define the size of network.
NN = Initialization(LayerStruct,NN); % initialize weights of neural net.
%% First-stage optimization
option.MaxIteration = 100;
option.Solver = 'ADAM';
option.BatchSize = 500;
NN = OptimizationSolver(data, label, NN, option);
%% Second-stage optimization
option.MaxIteration = 400;
option.Solver = 'BFGS';
NN = OptimizationSolver(data, label, NN, option);
Report = FittingReport(data, label, NN); % Quantify fitting performance.
Tips and considerations for training neural networks
  • Please refer to "Tips for Training Neural Networks.mlx", which provides detailed yet straightforward instructions for addressing the issues below.
  • Ensure that the components of the input data x have similar magnitudes; otherwise, training can become difficult. It is therefore recommended to preprocess the data (i.e., normalize it) before starting the optimization. If you are unfamiliar with preprocessing methods, the package provides basic algorithms that should suffice in most situations; a minimal manual sketch follows this list.
  • Make sure the standard deviation of the labels is not too small, as this also makes the network difficult to train. The package includes built-in functions to handle this situation.
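If you prefer to preprocess manually rather than rely on the built-in scaling (e.g., "InputAutoScaling"), a minimal z-score sketch in plain MATLAB, assuming N-by-D inputs and M-by-D labels:
% Z-score normalization: center each row and scale it to unit variance.
mu = mean(data, 2); sigma = std(data, 0, 2);
dataScaled = (data - mu) ./ sigma;
labelMu = mean(label, 2); labelSigma = std(label, 0, 2);
labelScaled = (label - labelMu) ./ labelSigma;
% Train on (dataScaled, labelScaled), then invert the label scaling:
% prediction = NN.Evaluate(xScaled) .* labelSigma + labelMu;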
Optimization Solvers
  1. Stochastic Gradient Descent (SGD)
  2. Stochastic Gradient Descent with Momentum (SGDM)
  3. Root Mean Square Propagation (RMSprop)
  4. Adaptive Momentum Estimation (ADAM)
  5. Adaptive Momentum Estimation with Weight decay (AdamW)
  6. Broyden-Fletcher-Goldfarb-Shanno Method (BFGS)
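All solvers are selected by name through the option struct passed to "OptimizationSolver". The strings 'ADAM' and 'BFGS' appear in the examples above; the remaining identifiers are assumed to follow the same pattern, so check "GeneralGuide.mlx" for the exact names.
option.Solver = 'AdamW';    % assumed identifier, following the naming above
option.MaxIteration = 300;
option.BatchSize = 256;     % mini-batch size, used by the stochastic solvers
NN = OptimizationSolver(data, label, NN, option);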
Mathematical model of neural nets
  • W, b are the weight matrices and bias vectors of the network.
  • d is the depth of the network.
  • σ is a point-wise nonlinear activation function, such as tanh.
  • For more detail, please refer to "MathModel.mlx".
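Consistent with these definitions, a depth-d network maps an input x to an output F(x) by the standard feedforward recurrence (a sketch; see "MathModel.mlx" for the package's exact formulation):
h_0 = x,
h_k = \sigma(W_k h_{k-1} + b_k),  k = 1, ..., d-1,
F(x) = W_d h_{d-1} + b_d.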
Reference
  1. J. Nocedal and S. Wright, Numerical Optimization.
  2. D. Goldfarb et al., Practical Quasi-Newton Methods for Training Deep Neural Networks.
  3. Yi Ren et al., Kronecker-factored Quasi-Newton Methods for Deep Learning.

Cite As

S0852306 (2024). Surface Fitting using Neural Networks (https://www.mathworks.com/matlabcentral/fileexchange/129589-surface-fitting-using-neural-networks), MATLAB Central File Exchange. Retrieved .

MATLAB Release Compatibility
Created with R2020a
Compatible with any release
Platform Compatibility
Windows macOS Linux

Version History and Release Notes
1.2.5

Update instructions.

1.2.4

Fit data with a single line of code.

1.2.3

Minor update.

1.2.2

Add a weighted least-squares option; see "WeightedListSquare.m".

1.2.1

Explain the mathematical model of neural nets using a live script.

1.2.0

Solver update: AdamW, avoiding overfitting by weight decay.

1.1.9

Add MAE cost for robust surface fitting.

1.1.8

Minor update.

1.1.7

Minor solver update.

1.1.6

1. Handwritten digit recognition (MNIST).
2. Bug fixed.

1.1.5

1. Add cross-entropy cost for classification problems.
2. ReLU bug fixed.

1.1.4

1. Add Cross-Entropy Cost for Classification Task.
2. ReLU bug fixed.

1.1.3

New solver: 'RMSprop'.

1.1.2

Minor bug fixed.
(Previous version) There was an error in the gradient calculation for the last-layer bias; strangely, it did not have a significant impact on training results.

1.1.1

Solver Improvement.

1.1.0

Improve efficiency.
Bug fixed.

1.0.9

Bug fixed.

1.0.8

Autoscaling.
Automatic derivative.

1.0.7

Added autoscaling function.
Automatic derivative calculation (for the input x, not the network parameters).
Simplified commands.

1.0.6

Added autoscaling capability.
Added automatic derivative function for x.

1.0.5

User guide.

1.0.3

User guide.

1.0.2

User Guide

1.0.1

Added User Guide. ("Guide.mlx")

1.0.0