Neural Networks Explained


A neural network is a computer program that operates similarly to the human brain. The objective of neural networks is to perform cognitive functions our brain can perform, like solving problems and learning from experience.

The first neural network was developed in 1943 by Warren McCulloch, a neurophysiologist from the University of Illinois, and Walter Pitts, a mathematician from the University of Chicago. However, they never actually tested their network, as the technology of the time wasn’t advanced enough to run it. It wasn’t until 1954 that Belmont Farley and Wesley Clark, professors at the Massachusetts Institute of Technology, succeeded in running the first simple neural network.

The primary appeal of neural networks is their ability to emulate the brain’s pattern-recognition skills. Neural networks have been used for a wide array of personal and commercial applications: predicting the outcome of investment decisions, recognizing patterns in handwriting, and even scanning land areas for anomalies, which lets them point out things like land mines or bombs.

How Neural Networks Operate

Imagine a neural network as a black box that takes inputs, like the sensor readings of a self-driving car, and processes them into one or more outputs, like the controls for the car. The network itself is made of many small units called neurons, and these neurons are grouped into several layers. Layers are columns of neurons that are connected to the neurons of the neighboring layers.

Each neuron is connected to the neurons of the adjacent layers through weighted connections, each carrying a real-valued number called a weight. A neuron takes the value of each connected neuron in the previous layer and multiplies it by the weight of that connection. The neuron then adds up all of these weighted values, plus its own offset called a bias, and passes the sum through an activation function f(x), such as the sigmoid, which mathematically transforms the value before it is sent on to the neurons in the next layer. This is propagated through the whole network.
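As a rough sketch of the computation described above (all of the numbers here are made-up for illustration), a single layer of sigmoid neurons can be written as:

```python
import math

def sigmoid(x):
    # Squashes any real value into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs, plus the bias,
    # passed through the activation function.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

def layer_output(inputs, weight_rows, biases):
    # A layer is just several neurons sharing the same inputs.
    return [neuron_output(inputs, w, b) for w, b in zip(weight_rows, biases)]

inputs = [0.5, -1.0, 2.0]
weights = [[0.1, 0.4, -0.2], [0.7, -0.3, 0.05]]
biases = [0.0, 0.5]
print(layer_output(inputs, weights, biases))
```

Stacking several calls to `layer_output`, feeding each layer’s result into the next, gives the forward propagation described above.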

Essentially, the network acts like a filter over all the possibilities so the computer can come up with the correct answer. That’s really all the network does; the real challenge is finding the right weights in order to compute the correct results. Finding the right weights is done through machine learning, which is why neural networks are so intertwined with the progression of artificial intelligence.


There are times when the network is wrong and outputs the wrong answer. The network isn’t always correct, because it only looks for specific characteristics in order to output an answer.

If an object looks similar to another object, the network can get “confused” and output a false answer. To help prevent this from occurring, we can, first, equip the network with a feedback mechanism, known as the back-propagation algorithm. Back-propagation works backwards through the network, measuring how much each connection contributed to the error and adjusting the weights and biases accordingly. Using this algorithm, the network is effectively able to go back and “double-check” itself, nudging every weight and bias in the direction that reduces the error.
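A minimal sketch of this idea for a single sigmoid neuron, trained by gradient descent on toy labeled data (the data, learning rate, and epoch count are all illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy labeled data: the neuron should learn to output
# 1.0 for positive inputs and 0.0 for negative ones.
data = [([0.5], 1.0), ([-0.5], 0.0), ([1.5], 1.0), ([-1.2], 0.0)]

w, b = 0.0, 0.0
lr = 1.0  # learning rate: how big each correction step is

for epoch in range(2000):
    for (x,), target in data:
        out = sigmoid(w * x + b)
        err = out - target
        # Chain rule: gradient of the squared error with
        # respect to the weighted sum is err * out * (1 - out).
        grad = err * out * (1 - out)
        w -= lr * grad * x   # adjust the weight
        b -= lr * grad       # adjust the bias

for (x,), target in data:
    print(x, round(sigmoid(w * x + b), 3), target)
```

The same chain-rule bookkeeping, applied layer by layer from the output back to the input, is what full back-propagation does in a deep network.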

Second, we can make the neural network a recurrent neural network, in which signals proceed in both directions, as well as within and between layers. Recurrent Neural Networks (RNNs) are typically designed to recognize the sequential characteristics of data and use those patterns to predict the next likely scenario.

How Neural Networks are Trained

Neural networks are usually trained with supervised learning, a technique under the Machine Learning umbrella in which each training example contains both the input data and the desired output. Once the network performs sufficiently well on additional test cases, it can be applied to new, unseen data. Supervised learning requires a full set of labeled training data.

For example, researchers at the University of British Columbia have trained a feedforward neural network with temperature and pressure data from the Pacific Ocean to predict future underwater volcanic eruptions, since the western coast of North America borders the “Ring of Fire” and is exposed to unusually large waves like tsunamis. They’ve done something similar for North America, but with the goal of predicting future global weather patterns to combat climate change.

However, some neural networks are trained using unsupervised learning. The network is presented with input data and given the goal of discovering patterns without being told what specifically to look for. This type of neural network can be used for data mining.

Fun Fact: Neural networks trained with unsupervised learning have even been applied to blockchain data, for example to spot unusual cryptocurrency transactions!

Types of Neural Networks

  • Feedforward Neural Network: This is one of the simplest neural networks, where the data travels in one direction only. The data passes through the input nodes and exits at the output nodes. There is no back-propagation, so if the network outputs the “wrong” answer, there is no way for it to correct itself.
  • Radial Basis Function Neural Network: Radial basis functions consider the distance of a point with respect to a center. RBF networks have two layers: in the inner layer, the input features are combined with a radial basis function, and the outputs of these hidden neurons are then combined in a weighted sum to compute the network’s output.
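As a sketch of that two-layer idea (the centers, input, and output weights below are made-up), a Gaussian radial basis layer followed by a weighted-sum output might look like:

```python
import math

def rbf(x, center, width=1.0):
    # Gaussian radial basis: responds most strongly when x is
    # close to the neuron's center, fading with distance.
    dist_sq = sum((a - c) ** 2 for a, c in zip(x, center))
    return math.exp(-dist_sq / (2 * width ** 2))

centers = [[0.0, 0.0], [1.0, 1.0]]
x = [0.9, 1.1]
hidden = [rbf(x, c) for c in centers]

# Output layer: a plain weighted sum of the hidden activations.
out_weights = [0.3, 0.7]
output = sum(h * w for h, w in zip(hidden, out_weights))
print(hidden, output)
```

Because the input lies near the second center, that hidden neuron fires close to 1 while the other stays much weaker.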
  • Kohonen Self Organizing Neural Network: The objective of a Kohonen map is to map input vectors of arbitrary dimension onto a discrete map of neurons, usually one- or two-dimensional. The map needs to be trained to create its own organization of the training data. During training, the location of each neuron on the map remains constant, but its weights change depending on the input values. The distance between an input point and each neuron is calculated using the Euclidean distance, and the neuron with the smallest distance wins.
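A minimal sketch of the winner selection and weight update (full SOMs also update the winner’s neighbors on the map, which this toy version omits; all numbers are made-up):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def best_matching_unit(weights, sample):
    # The neuron whose weight vector is closest to the input "wins".
    dists = [euclidean(w, sample) for w in weights]
    return dists.index(min(dists))

def train_step(weights, sample, lr=0.5):
    bmu = best_matching_unit(weights, sample)
    # Pull the winning neuron's weights towards the sample.
    weights[bmu] = [w + lr * (s - w) for w, s in zip(weights[bmu], sample)]
    return bmu

# A tiny 1-D map with three neurons living in 2-D input space.
weights = [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
samples = [[0.9, 0.95], [0.1, 0.0], [1.0, 0.9]]
for s in samples:
    train_step(weights, s)
print(weights)
```

Because no desired outputs are involved, this is an example of the unsupervised learning described earlier.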
  • Recurrent Neural Network (RNN): RNNs work on the principle of saving the output of a layer and feeding it back to the input to help predict the outcome of the layer. This kind of network also uses back-propagation to adjust itself and push its outputs toward the correct answer.
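A single recurrent unit can be sketched like this: the hidden state `h` carries memory from one step to the next, mixing each new input with what the network has already seen (the weights and input sequence are made-up, and tanh is used as the activation):

```python
import math

def rnn_step(x, h_prev, w_x, w_h, b):
    # The new hidden state mixes the current input with the
    # previous hidden state (the network's "memory").
    return math.tanh(w_x * x + w_h * h_prev + b)

# Run a short sequence through a one-unit RNN.
w_x, w_h, b = 0.8, 0.5, 0.0
h = 0.0
for x in [1.0, 0.5, -1.0]:
    h = rnn_step(x, h, w_x, w_h, b)
    print(round(h, 3))
```

Because `h` feeds back into the next step, the same unit can react differently to the same input depending on what came before it, which is what lets RNNs model sequences.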
  • Convolutional Neural Network (CNN): CNNs are similar to feedforward neural networks, in that the neurons have weights and biases that are able to learn. They are used very often for signal and image processing, frequently in conjunction with computer-vision libraries like OpenCV.

Below is a representation of a ConvNet. In this network, the input features are taken in patches, as if passed through a filter. This helps the network remember an image in parts and compute its operations on them. These computations often involve converting the image from the RGB or HSI scale to grayscale. Once we have this, the changes in pixel values help detect the edges, and images can be classified into different categories.

ConvNet — Convolutional Neural Network
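The edge-detection step described above can be sketched as a plain convolution over a tiny grayscale image (the image and kernel values are purely illustrative):

```python
# A tiny 5x5 "image" already in grayscale (0 = dark, 9 = bright),
# with a bright vertical stripe down the middle.
image = [
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 9, 0, 0],
]

# A 3x3 kernel that responds to vertical edges
# (dark-to-bright changes from left to right).
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, ker):
    # Slide the 3x3 kernel over every 3x3 patch of the image
    # and sum the element-wise products.
    out = []
    for r in range(len(img) - 2):
        row = []
        for c in range(len(img[0]) - 2):
            total = sum(
                img[r + i][c + j] * ker[i][j]
                for i in range(3) for j in range(3)
            )
            row.append(total)
        out.append(row)
    return out

edges = convolve(image, kernel)
for row in edges:
    print(row)
```

The output is large and positive where brightness rises from left to right, large and negative where it falls, and zero in flat regions; in a real CNN the kernel values themselves are learned rather than hand-picked.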
  • Modular Neural Network (MNN): MNNs are a collection of different networks working independently while contributing towards a shared output. Each network has its own set of inputs, unique compared to the other networks, and performs its own sub-task. The advantage of MNNs is that they break a large computational process into smaller processes, which can make computing the overall function easier and quicker.

Conclusion

Neural networks are at the forefront of cognitive computing; one day, a computer built on them could be more powerful than our brains! Neural networks are mainly used in AI, either on their own or in conjunction with other emerging technologies like blockchain. Deep Learning systems take this further: they are based on multilayer neural networks and power, for example, the speech recognition capability of Siri (Apple’s mobile assistant). Combined with exponentially growing computing power and massive amounts of big data, Deep Learning neural networks are influencing the distribution of work between people and machines.

Key Takeaways

  • A neural network is a computer program that mimics the brain’s functions. In the future, neural networks could solve big problems that humans cannot.
  • Neural networks act like filters: layers of neurons linked by real-valued weighted connections work together to arrive at a definitive output.
  • Networks aren’t always correct, so the back-propagation algorithm lets them double-check themselves and adjust their weights to improve the output.
  • There are thousands of applications for neural networks, but the core lies in the programming behind them.
  • There are 6 main types of neural networks, each with its own specific functions.

Thanks for reading! If you liked it, could you please give it a clap! Have a great day!