Neural Network Architecture

The three primary types of neural networks are feed-forward neural networks, recurrent neural networks, and symmetrically connected networks.

Feed-forward neural nets:

Feed forward neural network — Stanford
  • Most common type of neural network
  • Input is the first layer of the network
  • If there are multiple hidden layers, it’s a “deep” neural net
  • They perform a series of transformations on the input, layer by layer, as sketched in the code after this list
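A minimal sketch of that layer-by-layer computation, assuming a small fully connected network with ReLU hidden units; the layer sizes, random weights, and NumPy implementation below are illustrative, not taken from the source:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def feed_forward(x, weights, biases):
    """Forward pass through a feed-forward net: each layer applies an
    affine transform followed by a non-linearity (identity at the output)."""
    h = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = h @ W + b
        if i < len(weights) - 1:      # hidden layers use ReLU
            h = relu(h)
    return h

# Hypothetical sizes: 4 inputs, two hidden layers of 8 units, 2 outputs
rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=(1, 4))           # a single input example
print(feed_forward(x, weights, biases))
```

With two hidden layers this toy network already counts as “deep” in the sense used above: the input is transformed twice before reaching the output layer.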

Recurrent neural nets:

RNN — Colah’s blog
  • Can be used to model sequences
  • They have directed cycles in the connection graph
  • They can remember information in their hidden state, as the sketch after this list illustrates
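A rough sketch of a vanilla RNN rolled over a sequence, assuming a tanh cell; the dimensions and random weights are hypothetical:

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b_h):
    """Run a vanilla RNN over a sequence. The hidden state h is fed back
    into the next step, forming the directed cycle h_t -> h_{t+1}."""
    h = np.zeros(W_hh.shape[0])
    hidden_states = []
    for x in xs:                      # one step per sequence element
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
        hidden_states.append(h)
    return np.stack(hidden_states)

# Hypothetical sizes: 3-dimensional inputs, 5 hidden units, sequence length 7
rng = np.random.default_rng(1)
W_xh = rng.normal(0, 0.1, (3, 5))
W_hh = rng.normal(0, 0.1, (5, 5))
b_h = np.zeros(5)

xs = rng.normal(size=(7, 3))
print(rnn_forward(xs, W_xh, W_hh, b_h).shape)   # (7, 5)
```

Because the same hidden state is carried from one time step to the next, information from earlier elements of the sequence can influence later outputs.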

Symmetrically connected neural nets:

Hopfield Network — Neural Network Zoo
  • Easiest to analyse because the weights are symmetric
  • They can’t model cycles
  • Hopfield nets are symmetrically connected nets without hidden layers, whereas Boltzmann machines do have hidden layers (a Hopfield sketch follows below)
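A small Hopfield-net sketch, assuming Hebbian (outer-product) storage of ±1 patterns and synchronous threshold updates; the stored pattern and sizes are made up for illustration:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: W is symmetric with a zero diagonal, so the network
    is symmetrically connected and has no hidden units."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Synchronously update all units until the state settles."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# Store one hypothetical +/-1 pattern and recover it from a corrupted copy
pattern = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
W = train_hopfield(pattern)
noisy = pattern[0].copy()
noisy[:2] *= -1                        # flip two bits
print(recall(W, noisy))                # settles back to the stored pattern
```

The symmetric weight matrix is what makes the analysis easy: repeated updates can only lower a well-defined energy, so the state settles into a stored pattern instead of cycling.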