Configure Tuning Options in Fuzzy Logic Designer
To select an algorithm for tuning your fuzzy inference system (FIS) or FIS tree and configure the algorithm options in the Fuzzy Logic Designer app, open the Tuning Options dialog box. On the Tuning tab, click Tuning Options.
In the Tuning Options dialog box, you can:
- Select the type of optimization to perform.
- Select a tuning algorithm. You can choose from several Global Optimization Toolbox methods or use adaptive neuro-fuzzy inference system (ANFIS) tuning.
- Configure k-fold cross-validation to prevent overfitting to your training data.
For more information on FIS tuning, see Tuning Fuzzy Inference Systems.
Optimization Type and Method
Select one of the following types of tuning.
| Optimization Type | Description |
| --- | --- |
| Tuning | Optimize the existing input, output, and rule parameters without learning new rules. |
| Learning | Learn new rules up to a maximum number of rules. To specify the maximum number of rules, use the Max number of rules option. This type of optimization is not supported for ANFIS tuning. |
To select a tuning algorithm, in the Method drop-down list, select one of the following tuning methods.
| Method | Description |
| --- | --- |
| Genetic algorithm | Population-based global optimization method that searches randomly using mutation and crossover among population members |
| Particle swarm optimization | Population-based global optimization method in which population members step throughout a search region |
| Pattern search | Direct-search local optimization method that searches a set of points near the current point to find a new optimum |
| Simulated annealing | Local optimization method that simulates a heating and cooling process to find a new optimal point near the current point |
| Adaptive neuro-fuzzy inference | Backpropagation algorithm that tunes membership function parameters. Supported only for type-1 Sugeno systems with a single output. |
The first four tuning methods require Global Optimization Toolbox software.
To use default tuning options for any method, select the Use default method options parameter.
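If you prefer to configure tuning programmatically, the same two choices correspond to the Method and OptimizationType properties of the tunefisOptions function. A minimal sketch of an equivalent command-line configuration:

```matlab
% Select the genetic algorithm method and learn new rules,
% matching Optimization Type = Learning, Method = Genetic algorithm.
options = tunefisOptions("Method","ga","OptimizationType","learning");
```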
Global Optimization Toolbox Method Options
For the Global Optimization Toolbox tuning methods, you specify two sets of options: algorithm-specific options and FIS tuning options.
Algorithm-Specific Options
To specify algorithm-specific options, expand the Method Options section and add optimization options. Any options that you do not specify use their default values.
To configure an option, in the leftmost drop-down list, select the option category. In the next drop-down list, select the optimization option. Then, specify the option value.
For example, you can configure the following options for the genetic algorithm tuning method.

- Maximum number of generations, where:
  - The option category is Run time limits.
  - The option is Max generations.
  - The option value is 20.
- Population size, where:
  - The option category is Population settings.
  - The option is Population size.
  - The option value is 100.
To add or remove options, click the corresponding + or –, respectively.
For more information on the algorithm-specific tuning options, click the question mark icon to see the Global Optimization Toolbox documentation.
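For reference, the two genetic algorithm options from the preceding example map to the MethodOptions property of tunefisOptions, which holds the Global Optimization Toolbox options for the selected solver. A minimal sketch:

```matlab
% Algorithm-specific options for the ga solver.
options = tunefisOptions("Method","ga");
options.MethodOptions.MaxGenerations = 20;   % Run time limits: Max generations
options.MethodOptions.PopulationSize = 100;  % Population settings: Population size
```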
FIS Tuning Options
For all Global Optimization Toolbox optimization methods, you can specify the following FIS tuning options.
| Tuning Option | Description |
| --- | --- |
| Max number of rules | Maximum number of rules, NR, in a FIS after optimization when using the Learning optimization type. The number of rules in a FIS after optimization can be less than NR, since duplicate rules with the same antecedent values are removed from the rule base during tuning. To automatically set NR based on the number of input variables and the number of membership functions for each input variable, select the auto parameter. This option is ignored when the optimization type is Tuning. |
| Random number seed | Method for setting the random number generator seed before tuning. For more information, see rng. |
| Distance metric | Type of distance metric used for computing the cost for the optimized parameter values with respect to the training data, specified as rmse (root-mean-squared error, the default), norm1, or norm2. |
| Ignore invalid parameters | Select this parameter to ignore invalid parameter values generated during the tuning process. |
| Use parallel computing | Select this parameter to use parallel computation in the optimization process. Using parallel computing requires Parallel Computing Toolbox™ software. |
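At the command line, these FIS tuning options correspond to properties of tunefisOptions. A minimal sketch, with illustrative values:

```matlab
% FIS tuning options shared by all Global Optimization Toolbox methods.
options = tunefisOptions("Method","particleswarm", ...
    "OptimizationType","learning");
options.NumMaxRules = 10;                % Max number of rules
options.DistanceMetric = "norm1";        % Distance metric
options.IgnoreInvalidParameters = true;  % Ignore invalid parameters
options.UseParallel = true;              % Requires Parallel Computing Toolbox
```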
ANFIS Tuning Options
To configure the ANFIS tuning algorithm, specify the following tuning options. For more information, see Neuro-Adaptive Learning and ANFIS.
| ANFIS Option | Description |
| --- | --- |
| Optimization method | Optimization method used in membership function parameter training. In the drop-down list, select either the hybrid method (the default), which combines least-squares estimation for consequent parameters with backpropagation for membership function parameters, or the backpropagation method, which uses backpropagation alone. |
| Epoch number | Maximum number of training epochs, specified as a positive integer. |
| Error goal | Training error goal, specified as a positive scalar. The training process stops when the training error is less than or equal to the training error goal. |
| Initial step size | Initial training step size, specified as a positive scalar. During training, if the error undergoes four consecutive reductions, the software increases the step size by multiplying it by the step size increase rate. If the error undergoes two consecutive combinations of one increase and one reduction, the software decreases the step size by multiplying it by the step size decrease rate. |
| Step size decrease rate | Step-size decrease rate, specified as a positive scalar less than 1. |
| Step size increase rate | Step-size increase rate, specified as a scalar greater than 1. |
| Input validation data | To specify input validation data, in the drop-down list, select a variable from the MATLAB workspace. |
| Output validation data | To specify output validation data, in the drop-down list, select a variable from the MATLAB workspace. |
K-Fold Cross Validation
When tuning a system using a Global Optimization Toolbox method, you can use k-fold cross-validation to prevent overfitting to your data. To configure the validation, in the Tuning Options dialog box, on the Validation tab, specify the following options.
| Validation Option | Description |
| --- | --- |
| Number of cross validations | Number of cross validations to perform, NV, specified as a nonnegative integer less than or equal to the number of rows in the training data. When NV is 0 or 1, the app uses the entire data set for training and does not perform cross-validation. Otherwise, the tuning algorithm randomly partitions the input data into NV subsets of approximately equal size. The algorithm then performs NV training-validation iterations. For each iteration, one data subset is used as validation data with the remaining subsets used as training data. |
| Validation tolerance | Maximum allowable increase in validation cost when using k-fold cross-validation, specified as a scalar value in the range [0,1]. A higher validation tolerance value produces a longer training-validation iteration, with an increased possibility of data overfitting. The increase in validation cost, ΔC, is the difference between the average validation cost and the minimum validation cost, Cmin, for the current training-validation iteration. The average validation cost is a moving average with a window size specified using the Validation window size option. The app stops the current training-validation iteration when the ratio between ΔC and Cmin exceeds the validation tolerance. |
| Validation window size | Window size, NW, for computing the average validation cost, specified as a positive integer. The validation cost moving average is computed over the last NW validation cost values. A higher validation window size produces a longer training-validation iteration, with an increased possibility of data overfitting. |
K-fold cross-validation is not supported for ANFIS tuning.
For more information on k-fold cross validation, see Optimize FIS Parameters with K-Fold Cross-Validation.
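At the command line, these validation settings correspond to the KFoldValue, ValidationTolerance, and ValidationWindowSize properties of tunefisOptions. A minimal sketch, with illustrative values:

```matlab
% K-fold cross-validation settings, mirroring the Validation tab.
options = tunefisOptions("Method","ga");
options.KFoldValue = 4;             % Number of cross validations (NV)
options.ValidationTolerance = 0.1;  % Stop iteration when deltaC/Cmin > 0.1
options.ValidationWindowSize = 2;   % Moving-average window (NW)
```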