Hi,
I understand that you are trying to implement SVM classification with cross-validation from the Bioinformatics Toolbox example, and you are specifically confused about how the function minfn works, particularly why it uses exp(z) and how it fits into the cross-validation workflow.
I assume you already have your data (cdata, grp) and a cvpartition object (c) ready for cross-validation, and you want to tune the hyperparameters rbf_sigma and boxconstraint for an RBF SVM using a custom function for optimization.
To compute the cross-validation loss for varying values of the RBF sigma and box constraint, you can follow the steps below:
Step 1: Understand the parameter transformation
The parameters rbf_sigma and boxconstraint must be strictly positive. Using exp(z) guarantees this for any real z. It also makes the search more effective: equal steps in z correspond to multiplicative steps in the actual parameter (e.g., 0.1, 1, 10, ...), which suits parameters that can span several orders of magnitude.
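As a quick illustration (independent of your data), equally spaced values of z become multiplicatively spaced parameter values after the exp() transform:

z = -2:1:2;
exp(z)   % approximately 0.14, 0.37, 1.00, 2.72, 7.39

This is why the search is done in log-space: one unit in z scales the parameter by a factor of e.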
Step 2: Define your SVM training function
Write a custom function (e.g., crossfun) that takes training and test data, fits the SVM with the given sigma and box constraint, and returns the predicted labels.
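Here is a minimal sketch of such a function. It uses fitcsvm and predict from the Statistics and Machine Learning Toolbox; the older Bioinformatics Toolbox example used svmtrain/svmclassify instead, and I am assuming KernelScale plays the role of the RBF sigma here, so adapt this to whichever API your release provides:

function yfit = crossfun(xtrain, ytrain, xtest, rbf_sigma, boxconstraint)
% Train an RBF SVM on the training fold and return predicted labels
% for the test fold (sketch only, assuming the fitcsvm/predict API).
mdl = fitcsvm(xtrain, ytrain, ...
    'KernelFunction', 'rbf', ...
    'KernelScale', rbf_sigma, ...
    'BoxConstraint', boxconstraint);
yfit = predict(mdl, xtest);
end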
Step 3: Set up the anonymous function for optimization
Use crossval with a function handle like:
minfn = @(z) crossval('mcr', cdata, grp, 'Predfun', ...
@(xtrain, ytrain, xtest) crossfun(xtrain, ytrain, xtest, exp(z(1)), exp(z(2))), ...
'partition', c);
This anonymous function takes a 2-element vector z (the log-scale parameters) and returns the cross-validated misclassification rate; it is what you pass to an optimizer (like fminsearch) to minimize.
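Before optimizing, you can sanity-check the handle by evaluating it at a single point; with the exp() transform, z = [0 0] corresponds to rbf_sigma = 1 and boxconstraint = 1:

loss0 = minfn([0 0])   % cross-validated misclassification rate at sigma = 1, C = 1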
Step 4: Optimize the parameters
You can now search for the optimal parameters:
bestZ = fminsearch(minfn, [0, 0]); % Initial guess in log-scale
The resulting exp(bestZ(1)) and exp(bestZ(2)) are the tuned values of rbf_sigma and boxconstraint, respectively.
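Putting it together, a sketch of this step might look like the following (the loosened tolerances and the random starting point are illustrative choices, since the cross-validation loss is a noisy, non-smooth objective):

opts = optimset('TolX', 5e-4, 'TolFun', 5e-4);
[bestZ, minLoss] = fminsearch(minfn, randn(2,1), opts);
best_rbf_sigma     = exp(bestZ(1));
best_boxconstraint = exp(bestZ(2));

Because fminsearch can get stuck in local minima, it is often worth repeating the search from a few different starting points and keeping the best result.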
Refer to the documentation of the "crossval" and "fminsearch" functions for more details.
Hope this helps!