You can definitely compute higher-order derivatives with dlgradient. Here is an example that computes the second derivative of the cube function:
x = dlarray(5);
[y, d2YdX2] = dlfeval(@cube, x);
fprintf("The second derivative is 6*x = 6*5 = %.3f\n", extractdata(d2YdX2));

% In a script, local functions must appear at the end of the file
function [y, d2YdX2] = cube(x)
    % y = x^3, so dy/dx = 3*x^2 and d2y/dx2 = 6*x
    y = x.^3;
    % Request a traced gradient so it can be differentiated again
    dYdX = dlgradient(y, x, EnableHigherDerivatives=true);
    d2YdX2 = dlgradient(dYdX, x);
end
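Running this prints 30.000, as expected: the second derivative of x^3 is 6*x, which is 30 at x = 5.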
A few things to note:
- When you need to calculate higher-order derivatives, ensure that the 'EnableHigherDerivatives' option is true; otherwise you will get an error (see the sketch of this failure mode after this list).
- A dlgradient call must be inside a function. To obtain a numeric value of a gradient, you must evaluate the function using dlfeval, and the argument to the function must be a dlarray. See Use Automatic Differentiation In Deep Learning Toolbox.
- As of R2024a, the dlgradient function does not support calculating higher-order derivatives that depend on the following functions: gru, lstm, embed, prod, interp1. (Possibly one of these functions appeared in your dlgradient computation, which is why you thought higher-order derivatives were unsupported.)
- For a full list of limitations, see dlgradient limitations.
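As promised in the first note, here is a minimal sketch of the error you get when 'EnableHigherDerivatives' is left off. It assumes the default EnableHigherDerivatives=false behavior; the function name cubeNoTrace and the try/catch wrapper are just for illustration, and the exact error text can vary by release.

try
    [y, d2YdX2] = dlfeval(@cubeNoTrace, dlarray(5));
catch err
    disp(err.message)  % display the error raised by the second dlgradient call
end

function [y, d2YdX2] = cubeNoTrace(x)
    y = x.^3;
    % EnableHigherDerivatives is left at its default (false), so dYdX is
    % returned untraced
    dYdX = dlgradient(y, x);
    % Differentiating the untraced dYdX is what raises the error
    d2YdX2 = dlgradient(dYdX, x);
end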
If you are interested in learning how to use higher-order derivatives in the loss function of a neural network, see one of the following examples: