Some interesting finds: Acyclic hierarchical modelling and sequence unfolding

Project AGI · Feb 9, 2016

This week we have a couple of interesting links to share.

From our experiments with generative hierarchical models, we claimed that the model produced by feed-forward processing should not have loops. Now we have discovered a paper by Bengio et al. titled “Towards biologically plausible deep learning” [1] that supports this claim. The paper looks for biological mechanisms that mimic key features of deep learning. The credit assignment problem is probably the most difficult feature to substantiate: ensuring each weight is updated correctly in proportion to its contribution to the overall output of the network. Even so, the paper leaves me thinking it’s plausible.

Anyway, the reason I’m talking about it is this quote:

“There is strong biological evidence of a distinct pattern of connectivity between cortical areas that distinguishes between “feedforward” and “feedback” connections (Douglas et al., 1989) at the level of the microcircuit of cortex (i.e., feedforward and feedback connections do not land in the same type of cells). Furthermore, the feedforward connections form a directed acyclic graph with nodes (areas) updated in a particular order, e.g., in the visual cortex (Felleman and Essen, 1991).”

This says that feedforward processing (which we believe is constructing a hierarchical model) follows a directed acyclic graph (DAG), which means it has no loops, as we predicted. Secondly, it is another source claiming that the representation produced is hierarchical (in this case, a DAG). The cited work is a much older paper, “Distributed hierarchical processing in the primate cerebral cortex” [2]. We’re still reading, but there’s a lot of good background information here.
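To make the “no loops” property concrete, here is a minimal sketch (my own illustration, not taken from either paper) that checks whether a set of feedforward connections forms a DAG by attempting to compute an update order with Kahn’s topological sort. The area names and edges are hypothetical, loosely inspired by the visual hierarchy described in [2]:

```python
# Minimal sketch: verify that feedforward connections form a DAG by finding
# a topological (update) order with Kahn's algorithm. Areas/edges are
# hypothetical examples, not data from Felleman & Van Essen (1991).
from collections import defaultdict, deque

def topological_order(edges):
    """Return an update order for the nodes, or None if the graph has a cycle."""
    successors = defaultdict(list)
    in_degree = defaultdict(int)
    nodes = set()
    for src, dst in edges:
        successors[src].append(dst)
        in_degree[dst] += 1
        nodes.update((src, dst))
    # Start from areas with no incoming feedforward connections (e.g. V1).
    queue = deque(n for n in nodes if in_degree[n] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in successors[node]:
            in_degree[nxt] -= 1
            if in_degree[nxt] == 0:
                queue.append(nxt)
    # If some nodes were never reached, a loop exists somewhere.
    return order if len(order) == len(nodes) else None

feedforward = [("V1", "V2"), ("V2", "V4"), ("V1", "V4"), ("V4", "IT")]
print(topological_order(feedforward))  # ['V1', 'V2', 'V4', 'IT']
```

If the connections contained a loop, topological_order would return None; for a true DAG it returns an order in which the areas can be updated, matching the quote’s observation that areas are “updated in a particular order”.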

The second item to look at this week is a demo by Felix Andrews featuring temporal pooling [3] and sequence unfolding. “Unfolding” means transforming the pooled sequence representation back into its constituent parts, i.e. turning a single pooled representation back into a series of steps.

Felix demonstrates that high-level sequence selection can successfully be used to track and predict the corresponding lower-level sequence. The high level predicts all the steps of the selected sequence at once, and the lower level then tracks through that sequence using first-order predictions. Both levels are necessary: the high-level prediction guides the low level so that it predicts correctly through forks, while the low-level prediction keeps track of what comes next in the sequence.
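As a rough illustration of that two-level interaction, here is a minimal sketch in Python, assuming a toy alphabet and dictionary-based transitions. It is not Felix’s Clojure/HTM implementation [3]; the sequence labels, data and function name are all hypothetical:

```python
# Minimal sketch of two-level sequence unfolding (toy example, not HTM).
# The high level names a pooled sequence and predicts every element in it;
# the low level makes first-order (one-step) predictions, which fork when a
# symbol has multiple possible successors. Intersecting the two resolves forks.

# First-order transitions learned at the low level: 'B' is a fork (B->C or B->X).
first_order = {"A": {"B"}, "B": {"C", "X"}, "C": {"D"}, "X": {"Y"}}

# Pooled sequences known at the high level, keyed by a stable sequence label.
pooled = {"seq1": ["A", "B", "C", "D"], "seq2": ["A", "B", "X", "Y"]}

def unfold(label, start):
    """Replay a pooled sequence step by step from `start`."""
    high_level = set(pooled[label])        # high level predicts all steps at once
    current, steps = start, [start]
    while current in first_order:
        candidates = first_order[current]  # low level: what can come next?
        allowed = candidates & high_level  # high level resolves the fork
        if len(allowed) != 1:
            break                          # ambiguous or off-sequence: stop
        current = allowed.pop()
        steps.append(current)
    return steps

print(unfold("seq1", "A"))  # ['A', 'B', 'C', 'D'] - fork at 'B' resolved to 'C'
print(unfold("seq2", "A"))  # ['A', 'B', 'X', 'Y'] - same fork resolved to 'X'
```

The fork at “B” is exactly the situation described above: the first-order prediction alone cannot decide between “C” and “X”, but intersecting the low-level candidates with the high-level prediction of the whole sequence resolves it.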

[1] Bengio, Y., Lee, D.-H., Bornschein, J. and Lin, Z. (2015). “Towards Biologically Plausible Deep Learning.” http://arxiv.org/pdf/1502.04156v2.pdf

[2] Felleman, D. J. and Van Essen, D. C. (1991). “Distributed hierarchical processing in the primate cerebral cortex.” http://www.ncbi.nlm.nih.gov/pubmed/1822724

[3] Andrews, F. HTM temporal pooling and sequence unfolding demo. http://viewer.gorilla-repl.org/view.html?source=gist&id=95da4401dc7293e02df3&filename=seq-replay.clj

Originally published at Project AGI.
