Implementing Logic Gates in Neural Nets and a solution for XOR

Conor Moloney · Published in Analytics Vidhya · 3 min read · Aug 2, 2020


[Image: The brain is the basis for all Neural Nets]

Neural Networks have risen to prominence in recent years as one of the most powerful machine learning techniques (and most over-used buzzwords) in tech. In this post I’ll give a beginner-friendly overview of how Neural Nets work and how they can be used to solve a simple but fundamental problem: representing logic gates. This blog post is based on the book “Neural Networks and Learning Machines” by Simon Haykin, for anyone who would like to explore the topic in more detail.

Multi Layer Perceptrons (MLPs) are perhaps the most commonly used form of Neural Net. The standard MLP architecture is feedforward, meaning activation flows one way, from input to output. An MLP can be broken down into three sections: an input layer, one or more hidden layers, and an output layer. The input layer takes data into the network, the hidden layers perform processing, and the output layer sends out the result.
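To make the feedforward flow concrete, here is a minimal sketch of a single forward pass through an MLP with one hidden layer (assuming NumPy; the layer sizes, random example weights, and ReLU activation are illustrative assumptions, not anything prescribed above):

```python
import numpy as np

def forward(x, W_hidden, b_hidden, W_out, b_out):
    # Activation flows one way: input -> hidden -> output.
    hidden = np.maximum(0.0, W_hidden @ x + b_hidden)  # hidden layer (ReLU)
    return W_out @ hidden + b_out                      # output layer

# Made-up example shapes: 3 inputs, 4 hidden units, 1 output.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
y = forward(x,
            W_hidden=rng.normal(size=(4, 3)), b_hidden=np.zeros(4),
            W_out=rng.normal(size=(1, 4)), b_out=np.zeros(1))
print(y)
```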

Perceptrons (like all Neural Nets) are composed of units, which model the neurons in the human brain on which neural nets are based. A unit takes one or more inputs, multiplies each by a weight, sums the results, and passes its output on only if the weighted sum exceeds the threshold set by its activation function. The threshold in the perceptrons shown below is 0. A bias unit is another form of unit which always activates, typically sending a 1 to every unit it is connected to. The diagram below shows Perceptron implementations for both the AND and OR logic gates.
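Here is a minimal sketch in Python of such a threshold unit, with one workable choice of weights and biases for AND and OR (these values are assumed for illustration; many other choices work):

```python
def step_unit(inputs, weights, bias):
    # A perceptron unit: fire (output 1) only if the weighted sum
    # of the inputs plus the bias exceeds the threshold of 0.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def AND(x1, x2):
    # Fires only when both inputs are 1: 1 + 1 - 1.5 > 0.
    return step_unit([x1, x2], weights=[1, 1], bias=-1.5)

def OR(x1, x2):
    # Fires when at least one input is 1: 1 - 0.5 > 0.
    return step_unit([x1, x2], weights=[1, 1], bias=-0.5)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"AND({x1},{x2}) = {AND(x1, x2)}   OR({x1},{x2}) = {OR(x1, x2)}")
```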

[Figure: Perceptron implementations of the AND and OR logic gates]

A major issue in the early days of neural net development was representing the XOR function, the logic for which is shown in the truth table below. A simple Perceptron with only input and output layers (as used for AND and OR above) can only separate data with a straight line, and XOR is not linearly separable.

x1  x2  x1 XOR x2
0   0   0
0   1   1
1   0   1
1   1   0

A workaround was found when it was shown that x1 XOR x2 = (x1 OR x2) AND (NOT(x1 AND x2)). With this identity, a Perceptron with one hidden layer can be built to represent XOR: the hidden layer computes x1 OR x2 and NOT(x1 AND x2), and an output unit ANDs the two together, as sketched below.
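A minimal sketch of that construction in Python, using the same kind of threshold unit as above (the weights and biases are again one workable choice assumed for illustration):

```python
def step_unit(inputs, weights, bias):
    # Fire (output 1) only if the weighted input sum plus bias exceeds 0.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def OR(x1, x2):
    return step_unit([x1, x2], weights=[1, 1], bias=-0.5)

def NAND(x1, x2):
    # NOT(x1 AND x2): fires unless both inputs are 1.
    return step_unit([x1, x2], weights=[-1, -1], bias=1.5)

def AND(x1, x2):
    return step_unit([x1, x2], weights=[1, 1], bias=-1.5)

def XOR(x1, x2):
    # Hidden layer: an OR unit and a NAND unit.
    # Output layer: an AND unit combining the two hidden activations.
    return AND(OR(x1, x2), NAND(x1, x2))

for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"{x1} XOR {x2} = {XOR(x1, x2)}")
```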

The table below shows that the XOR network returns the correct output for each of the 4 possible input combinations, along with the intermediate values computed by the hidden OR and NAND units:

x1  x2  x1 OR x2  NOT(x1 AND x2)  x1 XOR x2
0   0   0         1               0
0   1   1         1               1
1   0   1         1               1
1   1   1         0               0

In conclusion, this blog has covered a few simple uses for Perceptrons. Perceptrons, and especially MLPs, are a powerful technique with a wide variety of applications.

