# mvksdensity

Kernel smoothing function estimate for multivariate data

## Syntax

``f = mvksdensity(x,pts,'Bandwidth',bw)``
``f = mvksdensity(x,pts,'Bandwidth',bw,Name,Value)``

## Description


`f = mvksdensity(x,pts,'Bandwidth',bw)` computes a probability density estimate of the sample data in the n-by-d matrix `x`, evaluated at the points in `pts`, using the required name-value argument `bw` for the bandwidth value. The estimation is based on a product Gaussian kernel function. For univariate or bivariate data, use `ksdensity` instead.


`f = mvksdensity(x,pts,'Bandwidth',bw,Name,Value)` returns the estimate using additional options specified by one or more `Name,Value` pair arguments. For example, you can define the function type that `mvksdensity` evaluates, such as the probability density, cumulative probability, or survivor function. You can also assign weights to the input values.

## Examples


`load hald`

The data measures the heat of hardening for 13 different cement compositions. The predictor matrix `ingredients` contains the percent composition for each of four cement ingredients. The response matrix `heat` contains the heat of hardening (in cal/g) after 180 days.

Estimate the kernel density for the first three observations in `ingredients`.

```
xi = ingredients(1:3,:);
f = mvksdensity(ingredients,xi,'Bandwidth',0.8);
```

`load hald`

The data measures the heat of hardening for 13 different cement compositions. The predictor matrix `ingredients` contains the percent composition for each of four cement ingredients. The response matrix `heat` contains the heat of hardening (in cal/g) after 180 days.

Create an array of points at which to estimate the density. First, define the range and spacing for each variable, using a similar number of points in each dimension.

```
gridx1 = 0:2:22;
gridx2 = 20:5:80;
gridx3 = 0:2:24;
gridx4 = 5:5:65;
```

Next, use `ndgrid` to generate a full grid of points using the defined range and spacing.

`[x1,x2,x3,x4] = ndgrid(gridx1,gridx2,gridx3,gridx4);`

Finally, transform and concatenate to create an array that contains the points at which to estimate the density. This array has one column for each variable.

```
x1 = x1(:,:)';
x2 = x2(:,:)';
x3 = x3(:,:)';
x4 = x4(:,:)';
xi = [x1(:) x2(:) x3(:) x4(:)];
```

Estimate the density.

```
f = mvksdensity(ingredients,xi, ...
    'Bandwidth',[4.0579 10.7345 4.4185 11.5466], ...
    'Kernel','normpdf');
```

View the size of `xi` and `f` to confirm that `mvksdensity` calculates the density at each point in `xi`.

`size_xi = size(xi)`
```
size_xi = 1×2

   26364       4
```
`size_f = size(f)`
```
size_f = 1×2

   26364       1
```

## Input Arguments


Sample data for which `mvksdensity` returns the probability density estimate, specified as an n-by-d matrix of numeric values. n is the number of data points (rows) in `x`, and d is the number of dimensions (columns).

Data Types: `single` | `double`

Points at which to evaluate the probability density estimate `f`, specified as a matrix with the same number of columns as `x`. The returned estimate `f` and `pts` have the same number of rows.

Data Types: `single` | `double`

Value for the bandwidth of the kernel-smoothing window, specified as a scalar value or d-element vector. d is the number of dimensions (columns) in the sample data `x`. If `bw` is a scalar value, it applies to all dimensions.

If you specify `'BoundaryCorrection'` as `'log'` (default) and `'Support'` as either `'positive'` or a two-row matrix, `mvksdensity` converts bounded data to be unbounded by using a log transformation. The value of `bw` is on the scale of the transformed values.

Silverman's rule of thumb for the bandwidth is

`${b}_{i}={\sigma }_{i}{\left\{\frac{4}{\left(d+2\right)n}\right\}}^{1/\left(d+4\right)},\text{ }i=1,2,...,d,$`

where d is the number of dimensions, n is the number of observations, and ${\sigma }_{i}$ is the standard deviation of the ith variate.

Example: `'Bandwidth',0.8`

Data Types: `single` | `double`
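As a rough sketch of Silverman's rule above (illustrative NumPy code, not part of `mvksdensity`; the function name `silverman_bandwidth` is hypothetical):

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb: b_i = sigma_i * (4 / ((d + 2) * n))^(1 / (d + 4))."""
    n, d = x.shape
    sigma = np.std(x, axis=0, ddof=1)  # per-dimension sample standard deviation
    return sigma * (4.0 / ((d + 2) * n)) ** (1.0 / (d + 4))
```

The result is a d-element vector of per-dimension bandwidths, which could be passed as the `'Bandwidth'` value.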

### Name-Value Arguments

Specify optional pairs of arguments as `Name1=Value1,...,NameN=ValueN`, where `Name` is the argument name and `Value` is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose `Name` in quotes.

Example: `'Kernel','triangle','Function','cdf'` specifies that `mvksdensity` estimates the cdf of the sample data using the triangle kernel function.

Boundary correction method, specified as the comma-separated pair consisting of `'BoundaryCorrection'` and either `'log'` or `'reflection'`.

• `'log'`: `mvksdensity` converts bounded data to be unbounded by using one of the following transformations, and then transforms the result back to the original bounded scale after density estimation.

  • If you specify `'Support','positive'`, then `mvksdensity` applies `log`(xj) for each dimension, where xj is the `j`th column of the input argument `x`.

  • If you specify `'Support'` as a two-row matrix consisting of the lower and upper limits for each dimension, then `mvksdensity` applies `log`((xj-Lj)/(Uj-xj)) for each dimension, where Lj and Uj are the lower and upper limits of the `j`th dimension, respectively.

  The value of `bw` is on the scale of the transformed values.

• `'reflection'`: `mvksdensity` augments bounded data by adding reflected data near the boundaries, and then returns estimates corresponding to the original support. For details, see Reflection Method.

`mvksdensity` applies boundary correction only when you specify `'Support'` as a value other than `'unbounded'`.

Example: `'BoundaryCorrection','reflection'`

Function to estimate, specified as the comma-separated pair consisting of `'Function'` and one of the following.

| Value | Description |
| --- | --- |
| `'pdf'` | Probability density function |
| `'cdf'` | Cumulative distribution function |
| `'survivor'` | Survivor function |

Example: `'Function','cdf'`

Type of kernel smoother, specified as the comma-separated pair consisting of `'Kernel'` and one of the following.

| Value | Description |
| --- | --- |
| `'normal'` | Normal (Gaussian) kernel |
| `'box'` | Box kernel |
| `'triangle'` | Triangular kernel |
| `'epanechnikov'` | Epanechnikov kernel |

You can also specify a kernel function that is a custom or built-in function. Specify the function as a function handle (for example, `@myfunction` or `@normpdf`) or as a character vector or string scalar (for example, `'myfunction'` or `'normpdf'`). The software calls the specified function with one argument that is an array of distances between data values and locations where the density is evaluated, normalized by the bandwidth in that dimension. The function must return an array of the same size containing the corresponding values of the kernel function.

`mvksdensity` applies the same kernel to each dimension.

Example: `'Kernel','box'`

Support for the density, specified as the comma-separated pair consisting of `'Support'` and one of the following.

| Value | Description |
| --- | --- |
| `'unbounded'` | Allow the density to extend over the whole real line |
| `'positive'` | Restrict the density to positive values |
| 2-by-d matrix | Specify the finite lower and upper bounds for the support of the density. The first row contains the lower limits, and the second row contains the upper limits. Each column contains the limits for one dimension of `x`. |

`'Support'` can also be a combination of positive, unbounded, and bounded variables specified as `[0 -Inf L; Inf Inf U]`.

Example: `'Support','positive'`

Data Types: `single` | `double` | `char` | `string`

Weights for sample data, specified as the comma-separated pair consisting of `'Weights'` and a vector of length `size(x,1)`, where `x` is the sample data.

Example: `'Weights',xw`

Data Types: `single` | `double`

## Output Arguments


Estimated function values, returned as a vector. `f` and `pts` have the same number of rows.

## More About

### Multivariate Kernel Distribution

A multivariate kernel distribution is a nonparametric representation of the probability density function (pdf) of a random vector. You can use a kernel distribution when a parametric distribution cannot properly describe the data, or when you want to avoid making assumptions about the distribution of the data. A multivariate kernel distribution is defined by a smoothing function and a bandwidth matrix, which control the smoothness of the resulting density curve.

The multivariate kernel density estimator is the estimated pdf of a random vector. Let x = (x1, x2, …, xd)' be a d-dimensional random vector with a density function f, and let yi = (yi1, yi2, …, yid)' be a random sample drawn from f for i = 1, 2, …, n, where n is the number of random samples. For any real vector x, the multivariate kernel density estimator is given by

`${\stackrel{^}{f}}_{H}\left(x\right)=\frac{1}{n}\sum _{i=1}^{n}{K}_{H}\left(x-{y}_{i}\right),$`

where ${K}_{H}\left(x\right)={|H|}^{-1/2}K\left({H}^{-1/2}x\right)$, $K\left(·\right)$ is the kernel smoothing function, and H is the d-by-d bandwidth matrix.

`mvksdensity` uses a diagonal bandwidth matrix and a product kernel. That is, ${H}^{1/2}$ is a square diagonal matrix with the elements of the vector (h1, h2, …, hd) on the main diagonal. K(x) takes the product form K(x) = k(x1)k(x2) ⋯k(xd), where $k\left(·\right)$ is a one-dimensional kernel smoothing function. Then, the multivariate kernel density estimator becomes

`${\stackrel{^}{f}}_{H}\left(x\right)=\frac{1}{n}\sum _{i=1}^{n}{K}_{H}\left(x-{y}_{i}\right)=\frac{1}{n{h}_{1}{h}_{2}\cdots {h}_{d}}\sum _{i=1}^{n}K\left(\frac{{x}_{1}-{y}_{i1}}{{h}_{1}},\frac{{x}_{2}-{y}_{i2}}{{h}_{2}},\cdots ,\frac{{x}_{d}-{y}_{id}}{{h}_{d}}\right)=\frac{1}{n{h}_{1}{h}_{2}\cdots {h}_{d}}\sum _{i=1}^{n}\prod _{j=1}^{d}k\left(\frac{{x}_{j}-{y}_{ij}}{{h}_{j}}\right).$`
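The product-kernel estimator above can be sketched in NumPy, assuming a Gaussian one-dimensional kernel k (illustrative code only; the function name is hypothetical and this is not the `mvksdensity` implementation). The optional `k` argument mirrors the custom-kernel convention: a function of normalized distances returning kernel values of the same shape.

```python
import numpy as np

def product_kernel_pdf(y, pts, h, k=None):
    """Product-kernel density estimate.
    y   : n-by-d sample, pts : m-by-d evaluation points, h : length-d bandwidths.
    k   : one-dimensional kernel; defaults to the standard normal pdf."""
    if k is None:
        k = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    y, pts, h = np.asarray(y), np.asarray(pts), np.asarray(h)
    n = y.shape[0]
    # u[m, i, j] = (pts[m, j] - y[i, j]) / h[j]
    u = (pts[:, None, :] - y[None, :, :]) / h
    # Product over dimensions j, sum over samples i, normalized by n*h1*...*hd
    return np.prod(k(u), axis=2).sum(axis=1) / (n * np.prod(h))
```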

The kernel estimator for the cumulative distribution function (cdf), for any real vector x, is given by

`${\stackrel{^}{F}}_{H}\left(x\right)={\int }_{-\infty }^{{x}_{1}}{\int }_{-\infty }^{{x}_{2}}\cdots {\int }_{-\infty }^{{x}_{d}}{\stackrel{^}{f}}_{H}\left(t\right)d{t}_{d}\cdots d{t}_{2}d{t}_{1}=\frac{1}{n}\sum _{i=1}^{n}\prod _{j=1}^{d}G\left(\frac{{x}_{j}-{y}_{ij}}{{h}_{j}}\right)\text{\hspace{0.17em}},$`

where $G\left({x}_{j}\right)={\int }_{-\infty }^{{x}_{j}}k\left({t}_{j}\right)d{t}_{j}$.
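Under the same assumptions, the cdf estimator can be sketched with a Gaussian k, so that G is the standard normal cdf (illustrative, hypothetical helper; not the `mvksdensity` implementation):

```python
import numpy as np
from math import erf

def product_kernel_cdf(y, pts, h):
    """Product-kernel cdf estimate with Gaussian k, so G is the standard normal cdf."""
    G = np.vectorize(lambda u: 0.5 * (1 + erf(u / np.sqrt(2))))
    y, pts, h = np.asarray(y), np.asarray(pts), np.asarray(h)
    u = (pts[:, None, :] - y[None, :, :]) / h
    # Product of G over dimensions, averaged over samples
    return np.prod(G(u), axis=2).sum(axis=1) / y.shape[0]
```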

### Reflection Method

The reflection method is a boundary correction method that accurately finds kernel density estimators when a random variable has bounded support. If you specify `'BoundaryCorrection','reflection'`, `mvksdensity` uses the reflection method.

If you additionally specify `'Support'` as a two-row matrix consisting of the lower and upper limits for each dimension, then `mvksdensity` finds the kernel estimator as follows.

• If `'Function'` is `'pdf'`, then the kernel density estimator is

${\stackrel{^}{f}}_{H}\left(x\right)=\frac{1}{n{h}_{1}{h}_{2}\cdots {h}_{d}}\sum _{i=1}^{n}\prod _{j=1}^{d}\left[k\left(\frac{{x}_{j}-{y}_{ij}^{-}}{{h}_{j}}\right)+k\left(\frac{{x}_{j}-{y}_{ij}}{{h}_{j}}\right)+k\left(\frac{{x}_{j}-{y}_{ij}^{+}}{{h}_{j}}\right)\right]$ for Lj ≤ xj ≤ Uj,

where ${y}_{ij}^{-}=2{L}_{j}-{y}_{ij}$, ${y}_{ij}^{+}=2{U}_{j}-{y}_{ij}$, and yij is the `j`th element of the `i`th sample data corresponding to `x(i,j)` of the input argument `x`. Lj and Uj are the lower and upper limits of the `j`th dimension, respectively.

• If `'Function'` is `'cdf'`, then the kernel estimator for cdf is

${\stackrel{^}{F}}_{H}\left(x\right)=\frac{1}{n}\sum _{i=1}^{n}\prod _{j=1}^{d}\left[G\left(\frac{{x}_{j}-{y}_{ij}^{-}}{{h}_{j}}\right)+G\left(\frac{{x}_{j}-{y}_{ij}}{{h}_{j}}\right)+G\left(\frac{{x}_{j}-{y}_{ij}^{+}}{{h}_{j}}\right)-G\left(\frac{{L}_{j}-{y}_{ij}^{-}}{{h}_{j}}\right)-G\left(\frac{{L}_{j}-{y}_{ij}}{{h}_{j}}\right)-G\left(\frac{{L}_{j}-{y}_{ij}^{+}}{{h}_{j}}\right)\right]$ for Lj ≤ xj ≤ Uj.

• To obtain a kernel estimator for a survivor function (when `'Function'` is `'survivor'`), `mvksdensity` uses both ${\stackrel{^}{f}}_{H}\left(x\right)$ and ${\stackrel{^}{F}}_{H}\left(x\right)$.

If you additionally specify `'Support'` as `'positive'` or a matrix including `[0 inf]`, then `mvksdensity` finds the kernel density estimator by replacing `[Lj Uj]` with `[0 inf]` in the above equations.
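The reflection-corrected pdf formula above can be sketched as follows, assuming a Gaussian k and finite bounds (illustrative NumPy code with hypothetical names; not the actual `mvksdensity` code):

```python
import numpy as np

def reflected_pdf(y, pts, h, L, U):
    """Reflection-corrected product-kernel pdf on [L_j, U_j] per dimension."""
    k = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    y, pts = np.asarray(y, float), np.asarray(pts, float)
    h, L, U = map(np.asarray, (h, L, U))
    n = y.shape[0]
    total = np.zeros(pts.shape[0])
    for yi in y:
        # Sum k over each data point and its two reflections, per dimension
        terms = (k((pts - (2 * L - yi)) / h)     # y_ij^- = 2 L_j - y_ij
                 + k((pts - yi) / h)
                 + k((pts - (2 * U - yi)) / h))  # y_ij^+ = 2 U_j - y_ij
        total += np.prod(terms, axis=1)
    return total / (n * np.prod(h))
```

Near a boundary, the reflected terms restore the probability mass that a plain kernel estimate would spill outside the support.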

## References

[1] Bowman, A. W., and A. Azzalini. *Applied Smoothing Techniques for Data Analysis.* New York: Oxford University Press Inc., 1997.

[2] Hill, P. D. “Kernel estimation of a distribution function.” *Communications in Statistics – Theory and Methods.* Vol. 14, Issue 3, 1985, pp. 605-620.

[3] Jones, M. C. “Simple boundary correction for kernel density estimation.” *Statistics and Computing.* Vol. 3, Issue 3, 1993, pp. 135-146.

[4] Silverman, B. W. *Density Estimation for Statistics and Data Analysis.* Chapman & Hall/CRC, 1986.

[5] Scott, D. W. *Multivariate Density Estimation: Theory, Practice, and Visualization.* John Wiley & Sons, 2015.