
newrb

Design radial basis network

Description

net = newrb(P,T,goal,spread,MN,DF) takes these arguments (only the first two, P and T, are required; a full call is sketched after this list):

  • P — R-by-Q matrix of Q input vectors

  • T — S-by-Q matrix of Q target class vectors

  • goal — Mean squared error goal

  • spread — Spread of radial basis functions

  • MN — Maximum number of neurons

  • DF — Number of neurons to add between displays
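For example, a call that supplies every argument might look like the following sketch; the data and the goal, spread, MN, and DF values here are illustrative choices, not defaults.

P = 0:0.25:2;                        % 1-by-9 matrix of 9 input vectors
T = sin(P);                          % 1-by-9 matrix of 9 target vectors
goal = 0.01;                         % stop once the mean squared error is below 0.01
spread = 1.0;                        % spread of the radial basis functions
MN = 9;                              % add at most 9 neurons
DF = 3;                              % report progress every 3 neurons
net = newrb(P,T,goal,spread,MN,DF);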

Radial basis networks can be used to approximate functions. newrb adds neurons to the hidden layer of a radial basis network until it meets the specified mean squared error goal.

The larger spread is, the smoother the function approximation. Too large a spread means a lot of neurons are required to fit a fast-changing function. Too small a spread means many neurons are required to fit a smooth function, and the network might not generalize well. Call newrb with different spreads to find the best value for a given problem.
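One practical way to compare spreads is a simple sweep over candidate values, as in the sketch below; the values 0.1, 1, and 10 and the 20-neuron cap are illustrative.

P = 0:0.1:10;                        % design inputs
T = sin(P);                          % design targets
for spread = [0.1 1 10]
    net = newrb(P,T,0,spread,20);    % cap the network at 20 neurons
    Y = sim(net,P);
    fprintf('spread = %4.1f, MSE = %.4f\n', spread, mean((T - Y).^2));
end

Note that this measures the fit on the design points themselves; evaluating the error on held-out inputs gives a better picture of generalization.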


Examples


This example shows how to design a radial basis network.

Design a radial basis network with inputs P and targets T.

P = [1 2 3];
T = [2.0 4.1 5.9];
net = newrb(P,T);

Simulate the network for a new input.

P = 1.5;
Y = sim(net,P)
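Because the targets lie close to the line T = 2P, the output Y should be near 3. To visualize the fit, you can also plot the network response on a denser grid; this plotting step is an illustrative addition, not part of the original example.

X = 0:0.05:4;                        % dense grid spanning the design inputs
plot(X,sim(net,X),'-',[1 2 3],[2.0 4.1 5.9],'o')
xlabel('Input'); ylabel('Output')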

Input Arguments


P — Input vectors, specified as an R-by-Q matrix.

T — Target class vectors, specified as an S-by-Q matrix.

goal — Mean squared error goal, specified as a scalar.

spread — Spread of radial basis functions, specified as a scalar.

MN — Maximum number of neurons, specified as a scalar.

DF — Number of neurons to add between displays, specified as a scalar.

Output Arguments


net — New radial basis network, returned as a network object.

Algorithms

newrb creates a two-layer network. The first layer has radbas neurons and calculates its weighted inputs with dist and its net input with netprod. The second layer has purelin neurons and calculates its weighted input with dotprod and its net input with netsum. Both layers have biases.
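The sketch below reproduces one forward pass by hand for a network net returned by newrb, to show how those functions combine. newrb sets the first-layer biases to 0.8326/spread, so each radbas neuron outputs 0.5 when an input lies a distance of spread from its center.

p = 1.5;                                        % a single input vector
a1 = radbas(dist(net.IW{1,1},p) .* net.b{1});   % netprod of distances and biases
y = net.LW{2,1}*a1 + net.b{2};                  % purelin(netsum(...)) reduces to this sum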

Initially, the radbas layer has no neurons. The following steps are repeated until the network’s mean squared error falls below goal or the maximum number of neurons, MN, is reached (a sketch of the loop follows the list).

  1. The network is simulated.

  2. The input vector with the greatest error is found.

  3. A radbas neuron is added with weights equal to that vector.

  4. The purelin layer weights are redesigned to minimize error.
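In outline, that loop might be sketched as follows; this is illustrative pseudocode for the four steps (using an ordinary least-squares solve for step 4), not the toolbox implementation.

% Illustrative outline of the design loop; P is R-by-Q, T is S-by-Q,
% and goal, spread, and MN are as described above.
Q = size(P,2);
b1 = 0.8326/spread;                   % first-layer bias (see note above)
C = zeros(0,size(P,1));               % neuron centers, one per row
while true
    if isempty(C)
        A = zeros(0,Q);               % no radbas neurons yet
    else
        A = radbas(dist(C,P)*b1);     % hidden-layer outputs, N-by-Q
    end
    Wb = T/[A; ones(1,Q)];            % step 4: least-squares linear layer
    Y = Wb*[A; ones(1,Q)];            % step 1: simulate the network
    e = sum((T - Y).^2,1);            % squared error per input vector
    if mean((T(:) - Y(:)).^2) <= goal || size(C,1) >= MN, break, end
    [~,k] = max(e);                   % step 2: input with greatest error
    C = [C; P(:,k)'];                 % step 3: add a neuron centered there
end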

Version History

Introduced before R2006a

See Also

sim | newrbe | newgrnn | newpnn