Radial Basis Function Neural Network in Machine Learning

Rupika Nimbalkar · Published in appengine.ai · Oct 7, 2021 · 2 min read

The Radial Basis Function Neural Network (RBFNN) is one of the useful models in Machine Learning.

A Radial Basis Function Neural Network is an Artificial Neural Network that uses radial basis functions as its activation functions. The output of the network is a weighted combination of the radial basis functions applied to the inputs and the neuron parameters. It was introduced in 1988 by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment. It is helpful in a number of applications such as function approximation, time series prediction, classification, and system control, which makes it especially useful for AI startups.
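As a concrete illustration (a minimal sketch, not taken from the article), the Gaussian is the most commonly used radial basis function, and the network output is then a weighted sum of those basis functions. The function names `rbf` and `rbf_network_output` below are illustrative, not a standard API.

```python
import numpy as np

def rbf(x, center, spread):
    """Gaussian radial basis function: the response depends only on the
    distance between the input x and the neuron's center."""
    return np.exp(-np.sum((x - center) ** 2) / (2 * spread ** 2))

def rbf_network_output(x, centers, spreads, weights):
    """Network output: a weighted sum of the radial basis functions
    evaluated at the input x."""
    activations = np.array([rbf(x, c, s) for c, s in zip(centers, spreads)])
    return activations @ weights
```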

The architecture of a Radial Basis Function Neural Network

This is a three-layer network consisting of three important components: an input layer, a hidden layer, and an output layer, each with its own set of neurons. The input values are passed from the input layer to the hidden layer via the input weights. The outputs of the hidden layer are then forwarded to the output layer via the output weights. All neurons in the hidden layer use the same form of activation function, a radial basis function.
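A minimal sketch of this three-layer structure is shown below; the class name `RBFN` and its parameters are assumptions for illustration, with the centers, spreads, and output weights taken as given.

```python
import numpy as np

class RBFN:
    """Minimal three-layer RBF network:
    input layer -> RBF hidden layer -> linear output layer."""

    def __init__(self, centers, spreads, output_weights):
        self.centers = centers                # (n_hidden, n_inputs) centers of the hidden neurons
        self.spreads = spreads                # (n_hidden,) spread of each RBF
        self.output_weights = output_weights  # (n_hidden, n_outputs) output-layer weights

    def hidden_activations(self, X):
        # Hidden layer: every neuron applies the same Gaussian RBF to the inputs.
        dists = np.linalg.norm(X[:, None, :] - self.centers[None, :, :], axis=2)
        return np.exp(-(dists ** 2) / (2 * self.spreads ** 2))

    def forward(self, X):
        # Output layer: linear combination of the hidden activations.
        return self.hidden_activations(X) @ self.output_weights
```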

Training of RBFN

While training the RBFN, the following parameters are taken into account:

  • The number of neurons in the hidden layer, along with the coordinates of the center of each hidden-layer RBF function.
  • The spread (width) of each RBF function in every direction.
  • The weights applied to the outputs of the RBF functions as they are passed on to the summation layer.

Various methods have been used to train an RBFN; one simple and common recipe is sketched below.
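This sketch assumes one particular recipe (random selection of centers, a shared spread heuristic, and linear least squares for the output weights); it is only one of the many training approaches mentioned above, and `train_rbfn` is a hypothetical name.

```python
import numpy as np

def train_rbfn(X, y, n_hidden, rng=None):
    """Sketch of one simple RBFN training recipe (assumed, not from the article):
    1) pick the centers as a random subset of the training inputs,
    2) set a shared spread from the maximum distance between centers,
    3) solve for the output-layer weights with linear least squares."""
    if rng is None:
        rng = np.random.default_rng(0)

    # 1) Centers: a random subset of the training points.
    centers = X[rng.choice(len(X), size=n_hidden, replace=False)]

    # 2) Spread heuristic: d_max / sqrt(2 * n_hidden).
    d_max = np.max(np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2))
    spread = d_max / np.sqrt(2 * n_hidden)

    # 3) Hidden activations for the training set, then least squares for the weights
    #    (the output layer is linear, so this step has a closed-form solution).
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    H = np.exp(-(dists ** 2) / (2 * spread ** 2))
    weights, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, spread, weights
```

Because only the output weights are fitted and they enter linearly, this step amounts to solving a linear system, which is one reason RBFN training can be fast.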

Hence we can conclude that the RBFN is a beneficial method to use, as it trains faster than an MLP and the meaning of its hidden layer is much easier to interpret.
