templateGP
Description
t = templateGP returns a Gaussian process (GP) template suitable for training regression models. After you create the template t, you can specify it as a learner during training.

t = templateGP(Name=Value) specifies additional options using one or more name-value arguments. For example, you can specify the basis function and the method for estimating the parameters of the Gaussian process regression (GPR) model.
If you display t in the Command Window, then all options appear empty ([]), except those that you specify using name-value arguments. During training, the training function uses default values for empty options.
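For example, the following minimal sketch creates a template that sets only two options. The argument names BasisFunction and FitMethod are assumptions based on the GPR options this page references, not values stated above.

% Minimal sketch (assumed option names): set only the basis function
% and the parameter estimation method.
t = templateGP(BasisFunction="linear",FitMethod="exact")

Displaying t then shows these two options; every other option appears empty ([]) and receives its default value when you pass t as a learner during training.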
Examples
Input Arguments
Output Arguments
More About
Algorithms
Fitting a GPR model involves estimating the following model parameters from the data:
Covariance function $k(x_i, x_j \mid \theta)$, parameterized in terms of the kernel parameters in vector $\theta$ (see Kernel (Covariance) Function Options)
Noise variance $\sigma^2$
Coefficient vector $\beta$ of the fixed basis functions
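For reference, these quantities enter the standard form of the GPR model (see Rasmussen and Williams [4]); the following is a sketch of that form, where $h(x)$ denotes the vector of fixed basis functions:

$$y = h(x)^{\mathsf{T}} \beta + f(x) + \varepsilon, \qquad f(x) \sim \mathrm{GP}\bigl(0,\, k(x, x' \mid \theta)\bigr), \qquad \varepsilon \sim N(0, \sigma^2).$$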
The value of the KernelParameters name-value argument is a vector that consists of initial values for the signal standard deviation $\sigma_f$ and the characteristic length scales $\sigma_l$. The software uses these values to determine the kernel parameters. Similarly, the Sigma name-value argument contains the initial value for the noise standard deviation $\sigma$.

During optimization, the software creates a vector of unconstrained initial parameter values $\eta_0$ by using the initial values for the noise standard deviation and the kernel parameters.
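The following sketch shows one way to supply these initial values. The kernel name and the parameter ordering (length scales first, then the signal standard deviation) are assumptions based on standard GPR kernel options, not statements from this page.

% Hedged sketch: initial kernel parameters for an assumed ARD squared
% exponential kernel with two predictors, plus the initial noise
% standard deviation.
t = templateGP(KernelFunction="ardsquaredexponential", ...
    KernelParameters=[1; 1; 2],Sigma=0.5);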
The software analytically determines the explicit basis coefficients $\beta$, specified by the Beta name-value argument, from the estimated values of $\theta$ and $\sigma^2$. Therefore, $\beta$ does not appear in the $\eta_0$ vector when the software initializes numerical optimization.

Note
If you specify no estimation of parameters for the GPR model, the software uses the value of the Beta name-value argument and the other initial parameter values as the known GPR parameter values (see Beta). In all other cases, the software optimizes the value of Beta analytically from the objective function. A sketch of the no-estimation case appears after the next paragraph.

The quasi-Newton optimizer uses a trust-region method with a dense, symmetric rank-1-based (SR1), quasi-Newton approximation to the Hessian. The LBFGS optimizer uses a standard line-search method with a limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) quasi-Newton approximation to the Hessian. See Nocedal and Wright [6].
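Returning to the preceding note, here is a sketch of the no-estimation case, assuming that case is selected with FitMethod="none" (an assumed option value):

% Hedged sketch: with no parameter estimation, the supplied Beta,
% Sigma, and KernelParameters act as the known GPR parameter values
% rather than as initial values for optimization.
t = templateGP(FitMethod="none",Beta=0,Sigma=0.2, ...
    KernelParameters=[1.2; 0.8]);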
If you set the InitialStepSize name-value argument to "auto", the software determines the initial step size $\|s_0\|_\infty$ by using $\|s_0\|_\infty = 0.5\|\eta_0\|_\infty + 0.1$, where $s_0$ is the initial step vector and $\eta_0$ is the vector of unconstrained initial parameter values.
During optimization, the software uses the initial step size $\|s_0\|_\infty$ as follows:

If you specify Optimizer="quasinewton" with the initial step size, then the initial Hessian approximation is $\frac{\|g_0\|_\infty}{\|s_0\|_\infty} I$.

If you specify Optimizer="lbfgs" with the initial step size, then the initial inverse-Hessian approximation is $\frac{\|s_0\|_\infty}{\|g_0\|_\infty} I$.

$g_0$ is the initial gradient vector, and $I$ is the identity matrix.
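The following is a worked illustration of these formulas, using made-up values for $\eta_0$ and $g_0$:

% Made-up unconstrained initial parameters and initial gradient.
eta0 = [0.4; -1.2; 0.3];
g0   = [2; -0.5; 1];

% Initial step size: ||s0||_inf = 0.5*||eta0||_inf + 0.1 = 0.7.
s0inf = 0.5*norm(eta0,Inf) + 0.1;

% quasinewton: initial Hessian approximation, (||g0||_inf/||s0||_inf)*I.
B0 = (norm(g0,Inf)/s0inf)*eye(numel(eta0));

% lbfgs: initial inverse-Hessian approximation, (||s0||_inf/||g0||_inf)*I.
H0 = (s0inf/norm(g0,Inf))*eye(numel(eta0));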
References
[1] Nash, W.J., T. L. Sellers, S. R. Talbot, A. J. Cawthorn, and W. B. Ford. "The Population Biology of Abalone (Haliotis species) in Tasmania. I. Blacklip Abalone (H. rubra) from the North Coast and Islands of Bass Strait." Sea Fisheries Division, Technical Report No. 48, 1994.
[2] Waugh, S. "Extending and Benchmarking Cascade-Correlation: Extensions to the Cascade-Correlation Architecture and Benchmarking of Feed-forward Supervised Artificial Neural Networks." University of Tasmania Department of Computer Science thesis, 1995.
[3] Lichman, M. UCI Machine Learning Repository, Irvine, CA: University of California, School of Information and Computer Science, 2013. http://archive.ics.uci.edu/ml.
[4] Rasmussen, C. E., and C. K. I. Williams. "Gaussian Processes for Machine Learning." MIT Press. Cambridge, Massachusetts, 2006.
[5] Lagarias, J. C., J. A. Reeds, M. H. Wright, and P. E. Wright. "Convergence Properties of the Nelder-Mead Simplex Method in Low Dimensions." SIAM Journal on Optimization. vol. 9, no. 1, January 1998, pp. 112–147.
[6] Nocedal, J. and S. J. Wright. Numerical Optimization, Second Edition. Springer Series in Operations Research, Springer Verlag, 2006.
[7] Foster, L., et al. "Stable and Efficient Gaussian Process Calculations." Journal of Machine Learning Research. vol. 10, no. 31, April 2009, pp. 857–882.
Version History
Introduced in R2023b