A Connectionist approach to Deep Learning

My background and the path that led me to study Deep Learning.

Ioannis Kalfas
3 min read · Apr 12, 2017

There has been a lot of fuss about Convolutional Neural Networks and Deep Learning in general. Nowadays, you meet these concepts in almost any scientific field one could possibly work in. In my case, this field is Neuroscience. The success of Deep Neural Networks has given me the opportunity to satisfy my curiosity about Neural Networks and the relationship between Artificial and Biological ones.

Ever since I gained this interest in understanding Human Intelligence, my first logical thoughts pointed towards understanding how the Brain works. This seems to be the “root of all evil”. However, with a background in Informatics (BSc) and a specialization in Educational Software, one could presumably only take small steps in the direction of Neuroscience. It seemed reasonable to find a track that offered the flexibility to choose courses from both Biology and Computer Science. I found this flexibility in the MSc programme in Machine Learning at KTH Royal Institute of Technology in Stockholm, Sweden. There I managed to follow courses that taught me how it is possible for machines to learn on their own (Artificial Neural Networks, Machine Learning, etc.), as well as courses that showed me how biological organisms learn and behave, such as Quantitative Systems Biology and Biomedicine (KTH), Neuroscience (Karolinska Institute) and Human Behavior (Stockholm University).

The most striking approach, which seemed to be the common ground for all the fields I studied, is ‘Connectionism’.

Connectionism is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience, and philosophy of mind, that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. The term was introduced by Donald Hebb in the 1940s. There are many forms of connectionism, but the most common forms use neural network models. ~Wikipedia

Reading up on Connectionism, one can gain a wider perspective on the emergence of Artificial Neural Networks over the past decades. Almost every Deep Learning enthusiast nowadays is a connectionist without realizing it. With Deep Learning becoming very popular only after 2012, and not having quite reached many of my professors at KTH before I finished my MSc, the most attractive Connectionist approach for my thesis was Spiking Neural Networks, in the field of Computational Neuroscience. They would enable me to study the inner workings of neural networks based on neuroscientific knowledge, but through a computational lens. I therefore focused on studying Spiking Networks in the department of Computational Biology (KTH), building realistic graphs of neuronal connections from a structurally organized network (in horizontal cortical layers and vertical columns, i.e. minicolumns and hypercolumns) and trying to interpret their dynamics. Each artificial neuron, and its synapses to other neurons, is modeled based on biological observations, and the same holds for the interconnections of groups of neurons across structural modules. This gave me experience in using both Python and Spiking Network simulators, like (Py)NEST, in larger-scale projects. Here is an introduction to satisfy your curiosity.
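To give a flavour of what such simulations look like, here is a minimal PyNEST sketch written against the NEST 2.x Python API. The neuron model, population sizes and parameter values below are illustrative assumptions, not the actual configuration of my thesis network:

```python
import nest

nest.ResetKernel()

# Two populations of leaky integrate-and-fire neurons, standing in
# for a small excitatory and inhibitory group (sizes are arbitrary).
exc = nest.Create("iaf_psc_alpha", 80)
inh = nest.Create("iaf_psc_alpha", 20)

# Background drive: a Poisson spike source projecting to all neurons.
noise = nest.Create("poisson_generator", params={"rate": 15000.0})
nest.Connect(noise, exc + inh, syn_spec={"weight": 10.0})

# Random recurrent connectivity; inhibitory weights are negative.
nest.Connect(exc, exc + inh,
             conn_spec={"rule": "fixed_indegree", "indegree": 8},
             syn_spec={"weight": 20.0, "delay": 1.5})
nest.Connect(inh, exc + inh,
             conn_spec={"rule": "fixed_indegree", "indegree": 2},
             syn_spec={"weight": -80.0, "delay": 1.5})

# Record spikes from the excitatory population.
detector = nest.Create("spike_detector")
nest.Connect(exc, detector)

nest.Simulate(1000.0)  # one second of biological time

events = nest.GetStatus(detector, "events")[0]
print("spikes recorded:", len(events["times"]))
```

In a structured cortical model, the `Connect` calls above would be repeated per layer and per (mini/hyper)column, with connection probabilities and weights taken from anatomical data rather than chosen by hand.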

The emergence of Deep Learning was pretty obvious by the time of my graduation (Sep ’15), not only in the vast field of Artificial Intelligence, but also in (Computational) Neuroscience and its respective job boards/mailing lists (e.g. connectionists, comp-neuro). Deep Learning introduced neural networks that develop feature representations in a hierarchical way (Figure 1), just like our Visual System. This seemed (and later proved) to be our best guess for modeling the brain from low to mid/high level scales (single-cell recordings, multi-unit activity and fMRI). I quickly took advantage of an exciting opportunity to use Deep Neural Networks with biological data, in order to model neurons from a mammalian Visual Cortex (macaque body patches, IT Cortex), in the department of Neurophysiology at KU Leuven.

Figure 1. Deep Neural Networks learn hierarchical feature representations through their units’ weights. Source: Y LeCun presentation, adapted from Zeiler & Fergus 2013

There, I use the activations of Deep Neural Network layer units upon “seeing” the same images that were presented to the macaques as predictors of the recorded neurons’ spiking activity (firing rates). More technical details on my project will be provided in future posts.
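In outline, this is a regression problem: given one activation vector per image from a chosen DNN layer, predict each neuron’s firing rate and evaluate on held-out images. Below is a minimal sketch of that idea; the random arrays, layer size and regularized linear readout are illustrative assumptions, not the exact pipeline of my project:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Hypothetical data: one DNN-layer activation vector per image, and
# one recorded firing rate per image for a single neuron.
n_images, n_units = 500, 4096
rng = np.random.default_rng(0)
activations = rng.standard_normal((n_images, n_units))  # DNN features
firing_rates = rng.standard_normal(n_images)            # neuron responses

X_train, X_test, y_train, y_test = train_test_split(
    activations, firing_rates, test_size=0.2, random_state=0)

# Regularized linear readout from layer units to the neuron's rate;
# the ridge penalty is chosen by cross-validation.
model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_train, y_train)

# Held-out explained variance measures how well this layer's
# representation predicts the neuron.
print("test R^2:", model.score(X_test, y_test))
```

Repeating this per recorded neuron and per network layer shows which layers best match which parts of the Visual System, which is the basic logic behind comparing DNNs to cortical recordings.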

Yannis
