Artificial Intelligence 101 — Neural Networks

Hey, everyone.
A few days ago I watched Ex Machina.

It was an amazing movie that sparked my curiosity about Artificial Intelligence, so for the last few days I did a little research on it. Once I discovered how amazing this field actually is, I decided to start a new series to share my knowledge with the world.

To understand AI you first need to understand its building blocks, starting with neural networks:

What are neural networks?

A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:

  1. Knowledge is acquired by the network from its environment through a learning process.
  2. Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge. In the literature, neural networks are also referred to as neurocomputers or connectionist networks.
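To make “synaptic weights” concrete, here is a minimal Python sketch of a single artificial neuron (my own toy illustration with made-up numbers, not taken from any library): the weights are where the stored knowledge lives, and the output is a weighted sum of the inputs passed through an activation function.

    import math

    def neuron(inputs, weights, bias):
        # "Synaptic" integration: a weighted sum of the incoming signals
        v = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Sigmoid activation squashes the result into the range (0, 1)
        return 1.0 / (1.0 + math.exp(-v))

    # Example: three inputs, three synaptic weights, one bias
    print(neuron([0.5, -1.0, 2.0], [0.4, 0.7, -0.2], bias=0.1))

Learning, in this picture, simply means adjusting the weights (and bias) until the outputs behave the way we want.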

Benefits of Neural Networks

To characterise neural networks better, let’s take a look at some of the useful properties and capabilities they offer:

  • Nonlinearity. 
    An artificial neuron can be linear or nonlinear. A neural network, made up of an interconnection of nonlinear neurons, is itself nonlinear. Moreover, the nonlinearity is of a special kind: it is distributed throughout the network.
  • Input-Output Mapping. 
    The network learns from examples by constructing an input-output mapping for the problem at hand. Such an approach brings to mind the study of nonparametric statistical inference; the term “nonparametric” signifies that no prior assumptions are made on a statistical model for the input data.
  • Adaptivity. 
    Neural networks have a built-in capacity to adapt their synaptic weights to changes in the surrounding environment. In particular, a neural network trained to operate in a specific environment can easily be retrained to deal with minor changes in the operating conditions. Moreover, when it is operating in a nonstationary environment, a neural network can be designed to change its synaptic weights in real time. (The sketch after this list shows a toy version of this weight adaptation.)
  • Evidential Response. 
    In the context of pattern classification, a neural network can be designed to provide information not only about which particular pattern to select but also about the confidence in the decision made. This latter information may be used to reject ambiguous patterns, should they arise, and thereby improve the classification performance of the network.
  • Contextual Information. 
    Knowledge is represented by the very structure and activation state of a neural network. Every neuron in the network is potentially affected by the global activity of all other neurons. Consequently, contextual information is dealt with naturally by a neural network.
  • Fault Tolerance. 
    A neural network, implemented in hardware form, has the potential to be inherently fault tolerant, or capable of robust computation, in the sense that its performance degrades gracefully under adverse operating conditions.
  • VLSI Implementability.
    First of all: VLSI = Very-Large-Scale Integration.
    The massively parallel nature of a neural network makes it potentially fast for the computation of certain tasks. This same feature makes a neural network well suited for implementation using VLSI technology. One particularly beneficial virtue of VLSI is that it provides a means of capturing truly complex behaviour in a highly hierarchical fashion.
  • Uniformity of Analysis and Design. 
    Essentially, neural networks enjoy universality as information processors. We say this in the sense that the same notation is used in all domains involving the application of neural networks. Neurons, in one form or another, represent an ingredient common to all neural networks. This commonality makes it possible to share theories and learning algorithms across different applications of neural networks. Modular networks can be built through a seamless integration of modules.
  • Neurobiological Parallel. 
    The design of a neural network is motivated by analogy with the brain, which is living proof that fault-tolerant parallel processing is not only physically possible but also fast and powerful. Neurobiologists look to (artificial) neural networks as a research tool for the interpretation of neurobiological phenomena.
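A tiny sketch (again my own, with made-up numbers) can make two of these properties concrete at once: nonlinearity, via the sigmoid activation, and adaptivity, via synaptic weights that are gradually nudged toward a target response.

    import math

    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))

    inputs  = [1.0, 0.5]    # a fixed input pattern
    weights = [0.2, -0.4]   # synaptic weights, adapted below
    target  = 0.9           # the desired response
    lr      = 0.5           # learning rate

    for step in range(50):
        out = sigmoid(sum(x * w for x, w in zip(inputs, weights)))
        error = target - out
        # Delta-rule-style update: move each weight along the error gradient
        weights = [w + lr * error * out * (1 - out) * x
                   for w, x in zip(weights, inputs)]

    print(out)  # creeps toward the 0.9 target

If the environment changed (say, the target moved), the same loop would simply re-adapt the weights, which is the adaptivity property described above.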

Network architectures

The manner in which the neurons of a neural network are structured is intimately linked with the learning algorithm used to train the network. We may therefore speak of learning algorithms (rules) used in the design of neural networks as being structured.

In general, we may identify three fundamentally different classes of network architectures:

  1. Single-Layer Feedforward Networks:
    In a layered neural network, the neurons are organised in the form of layers. In the simplest form of a layered network, we have an input layer of source nodes that projects onto an output layer of neurons, but not vice versa. In other words, this network is strictly of a feedforward, acyclic type.
    Such a network is called a single-layer network, with the designation “single-layer” referring to the output layer of computation nodes. We do not count the input layer of source nodes because no computation is performed there.
  2. Multilayer Feedforward Networks:
    The second class of feedforward neural network distinguishes itself by the presence of one or more hidden layers, whose computation nodes are correspondingly called hidden neurons or hidden units. The function of hidden neurons is to intervene between the external input and the network output in some useful manner. By adding one or more hidden layers, the network is enabled to extract higher-order statistics. This ability of hidden neurons to extract higher-order statistics is particularly valuable when the size of the input layer is large.
    The source nodes in the input layer supply the elements of the activation pattern, which constitute the input signals applied to the neurons in the second layer. The output signals of the second layer are used as inputs to the third layer, and so on for the rest of the network. Typically, the neurons in each layer have as their inputs the output signals of the preceding layer only. The set of output signals of the neurons in the output layer constitutes the overall response of the network to the activation pattern supplied by the source nodes in the input layer. (A minimal sketch of both feedforward types follows this list.)
  3. Recurrent Networks:
    A recurrent neural network distinguishes itself from a feedforward neural network in that it has at least one feedback loop. 
    For example, a recurrent network may consist of a single layer of neurons with each neuron feeding its output signal back to the inputs of all the other neurons. In such a structure there are no self-feedback loops in the network; self-feedback refers to a situation where the output of a neuron is fed back into its own input.
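Here is a minimal sketch (my own illustration, with toy weights) of the two feedforward architectures above: the same fully connected layer function is applied once for a single-layer network, and twice when a hidden layer intervenes.

    import math

    def layer(inputs, weights):
        # One fully connected layer: each row of weights feeds one neuron
        return [math.tanh(sum(x * w for x, w in zip(inputs, row)))
                for row in weights]

    x = [0.5, -1.0, 0.25, 2.0]            # four source nodes (input layer)

    # 1. Single-layer feedforward: source nodes project straight onto outputs
    W_out = [[0.1, -0.2, 0.3, 0.05],
             [0.4, 0.0, -0.1, 0.2]]       # two output neurons
    y_single = layer(x, W_out)

    # 2. Multilayer feedforward: a hidden layer sits between input and output
    W_hidden = [[0.2, 0.1, -0.3, 0.4],
                [-0.5, 0.2, 0.1, 0.0]]    # two hidden neurons
    W_out2 = [[0.7, -0.6]]                # one output neuron
    y_multi = layer(layer(x, W_hidden), W_out2)

    print(y_single, y_multi)

Note that signals only ever flow forward here; nothing computed by a later layer is fed back to an earlier one.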

Moreover, the feedback loops involve the use of particular branches composed of unit-delay elements (denoted by z⁻¹), which result in nonlinear dynamical behaviour, assuming that the network contains nonlinear units.
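To show what those unit delays do in practice, here is a toy recurrent step (my own numbers): at each time step, every neuron sees the previous outputs of the other neurons, and the zero diagonal of the weight matrix means there is no self-feedback, matching the structure described above.

    import math

    weights = [[0.0, 0.5],    # neuron 0 listens to neuron 1's delayed output
               [-0.5, 0.0]]   # zero diagonal = no self-feedback
    state = [1.0, 0.0]        # outputs at time t-1 (the z^-1 delayed signals)
    external = [0.1, 0.2]     # external inputs, held constant here

    for t in range(5):
        # New outputs depend only on the *delayed* outputs plus external input
        state = [math.tanh(ext + sum(w * s for w, s in zip(row, state)))
                 for row, ext in zip(weights, external)]
        print(t, state)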

That’s all for today, folks, and I really hope you enjoyed this one. Artificial intelligence is amazing and should be used as much as possible, but don’t give a computer too much power or Terminator may happen.

I’ll see you soon!