A Hebbian cell assembly is formed at full strength on a single trial

The point of this article is simply to emphasize a simple property of a Hebbian cell assembly (CA) which, to my knowledge, is never explicitly stated in the relevant literature, despite its having important consequences. Hebb defined a cell assembly as a group of reciprocally interconnected cells that represents a concept. It’s surely true that a set of cortical cells that becomes a CA will have some, possibly large, degree of interconnectivity. However, it’s important to realize that a CA can be formed at full strength on a single trial even if there are no interconnections amongst those cells, or if the weights of all such interconnections are zero. That is, if a set of cells that receives a matrix of connections from some presynaptic field becomes active, then those cells will all experience correlated weight increases from the active cells in the presynaptic (upstream) field(s). Similarly, they will all experience correlated weight increases to any postsynaptic (downstream) CAs on which they collectively impinge. They become bound as a CA NOT because of weight increases amongst themselves (i.e., via a recurrent matrix), but simply by virtue of the correlated afferent and efferent weight increases on their connections to the rest of the network. [N.b.: Buzsaki (2010) defines CAs only in terms of the downstream (efferent) effects, i.e., the “reader-centric” CA definition; see further note at end.] Consequently, even after a single such association event, if that CA is reactivated by any means, it will exert a strong and specific influence on any downstream CAs (or individual neurons) with which it was associated on that initial learning instance. The annotated figure below explains this in more detail.

Figure 1: Explanation of how a CA forms at full strength on one trial.
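The one-trial binding mechanism of Figure 1 can also be illustrated with a toy simulation (a sketch under assumed toy sizes; the field sizes and activation patterns below are hypothetical, not taken from the figure). Note that a recurrent matrix within the CA is entirely absent:

```python
import numpy as np

N_PRE, N_CA, N_POST = 20, 8, 5     # hypothetical field sizes

# Afferent and efferent matrices start naive (all-zero weights).
# Crucially, there is NO recurrent matrix within the CA at all.
W_aff = np.zeros((N_PRE, N_CA))
W_eff = np.zeros((N_CA, N_POST))

# Single trial: an upstream pattern co-activates all cells of the CA
# while some downstream cells are also active.
pre_active = np.zeros(N_PRE, dtype=bool)
pre_active[[0, 3, 7, 12]] = True               # active upstream cells
ca_active = np.ones(N_CA, dtype=bool)          # the CA's cells fire together
post_active = np.zeros(N_POST, dtype=bool)
post_active[[1, 4]] = True                     # active downstream cells

# Binary Hebbian update: every active pre-post pairing jumps straight
# to the max weight (1), i.e., the correlated afferent and efferent
# weight increases described in the text.
W_aff[np.ix_(pre_active, ca_active)] = 1.0
W_eff[np.ix_(ca_active, post_active)] = 1.0

# Reactivation test: the same upstream pattern now drives every CA cell
# with the same summed input, and the reactivated CA in turn drives
# exactly the downstream cells that were active on the learning trial.
drive_ca = pre_active.astype(float) @ W_aff
drive_post = ca_active.astype(float) @ W_eff
print(drive_ca)     # identical drive (4.0) to each of the 8 CA cells
print(drive_post)   # [0. 8. 0. 0. 8.] -- specific to the learned associates
```

The strong, specific downstream influence exists after this single event, despite every within-CA weight remaining at zero.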

Thus, using cell assemblies to represent units of meaning allows the strengthening of an association, which is usually viewed (in both machine learning and neuroscience) as occurring gradually over multiple trials, i.e., extended in time, to instead occur instantaneously, on a single trial, in the form of many simultaneous, correlated afferent weight increases. That is, we move

  • from a learning concept requiring a temporally extended process at a single spatial point, i.e., a sequence of pre-post pairings at a single synapse between two neurons
  • to a learning concept requiring a spatially extended process at a single temporal point, i.e., a single set of simultaneous pre-post pairings from the neurons of one CA to the neurons of another CA.
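This contrast can be made concrete with back-of-the-envelope arithmetic (the threshold, weight delta, and CA sizes below are hypothetical illustrations, not values from this essay): the total drive that takes many sequential pairings at one synapse is delivered in one trial by many synapses, each at a small binary max weight.

```python
THRESHOLD = 1.0   # hypothetical summed input needed to activate a target cell

# Temporally extended view: one synapse between two neurons, with a small
# weight increment per pre-post pairing; reaching threshold takes many trials.
delta = 1.0 / 70
pairings_needed = round(THRESHOLD / delta)
print(pairings_needed)   # 70 sequential pairings at a single synapse

# Spatially extended view: one pairing of two CAs, where every afferent
# synapse jumps to the same small binary max weight simultaneously,
# delivering the whole threshold's worth of drive in a single trial.
for ca_size in (10, 70, 200):
    w_max_sufficient = THRESHOLD / ca_size
    print(ca_size, w_max_sufficient)   # larger CA -> smaller max weight suffices
```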

The former view is consistent with the functional Neuron Doctrine, whereas the latter is consistent with the burgeoning functional “Cell Assembly Doctrine”, cf. the “engram renaissance” (Josselyn, Frankland, Tonegawa, Buzsaki, Yuste, and others).

Why is this so important? Because it strongly argues that synaptic increase may be essentially binary. And if that’s true, it further argues that something other than a synapse’s strength (weight) is the primary variable modified during consolidation. I propose that this “something else” is the synapse’s plasticity. Specifically, it is the synapse’s resistance to passive decay, which I call “permanence” (i.e., inverse plasticity), that changes, in particular increases, over the course of consolidation. [Specific permanence dynamics are given in my 2014 Frontiers article.] This is fleshed out in a little more detail in the numbered argument below.

  1. In reality, CAs are likely much bigger than in Fig. 1, e.g., consisting of ~70 coactive principal cells. This suggests that the absolute max synaptic strength (i.e., the binary “1”) could be quite small: it’s the summed (and correlated) effect of all the synapses from an afferent CA that matters.
  2. The larger the afferent CA, the smaller the individual synapse max strength needs to be in order to achieve any given probability of activating a postsynaptic unit (i.e., in a target CA).
  3. So the functional cell assembly doctrine actually argues for the sufficiency of a smaller max synaptic weight and thus for a narrower range between the naïve weight (i.e., wt = 0) and the max weight.
  4. Suppose there is a “typical” pyramid-to-pyramid positive wt delta (i.e., from a single pre-post pairing) and that it actually equals the absolute max synaptic wt. Then one such pre-post pairing drives the synapse to max wt, i.e., learning at such excitatory synapses is effectively binary. [N.b. This increases the importance of the classical binary associative memory research (Marr, Willshaw, Gardner, Palm, and others) to how the brain actually computes. On the other hand, it calls into question the relevance of models in which excitatory wts are continuously graded and learning deltas are small, e.g., MLP/backprop models and, more generally, most optimization-centric models.]
  5. If learning at the single synapse is binary, yet learning/memory at the whole-item (i.e., behaviorally observable) level is shown to improve in graded fashion across trials, or more generally over a consolidation period, then what is changing during the learning process? There are at least two candidates: a) the precise sets of pre or post units may vary across trials, so that the overall associative complex changes over time (cf. additions/modifications to an evolving schema); and b) permanence, i.e., it could be that synapses start off with maximal plasticity (minimal permanence), but that a synapse becomes less plastic, i.e., more permanent (more resistant to passive decay), with successive trials, provided that the timings between those trials meet some criteria.
  6. Both a and b are consistent with single-trial learning, but b is the focus here. Thus, associations are formed at full strength on the basis of a single pre-post pairing between two cell assemblies, and it is the permanence of the association, not its strength, that increases in graded fashion across trials, in particular, across “hippocampal replays” of the trial. [Note, however, that this does not rule out that some strength change also occurs, e.g., short-term strength changes implementing a form of working memory, in the sense of making recently activated CAs more likely to be re-activated/retrieved over some short-ish duration.]
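One toy instantiation of candidate b (my own illustrative sketch; the decay law and parameters are assumptions, NOT the specific permanence dynamics of the 2014 Frontiers article): the weight is binary and saturates on the first trial, while each replay increments a permanence variable that slows passive decay.

```python
import math

W_MAX = 1.0   # binary max weight, reached on the single learning trial

def decay(w, permanence, dt=1.0, base_rate=0.2):
    """Passive decay: higher permanence -> slower decay (assumed form)."""
    return w * math.exp(-base_rate * dt / permanence)

def replay(permanence):
    """A hippocampal replay restores the binary max weight and
    increments permanence (resistance to passive decay)."""
    return W_MAX, permanence + 1.0

retained = []
for n_replays in (0, 1, 5):
    w, perm = W_MAX, 1.0            # one trial: full strength, low permanence
    for _ in range(n_replays):      # replays during consolidation
        w, perm = replay(perm)
    for _ in range(10):             # then 10 time steps of passive decay
        w = decay(w, perm)
    retained.append(round(w, 3))

print(retained)   # [0.135, 0.368, 0.717]
```

Retention after the same delay improves with replay count, yet the weight itself never exceeds its single-trial value: consolidation here changes only how long the memory lasts, not how strong it is.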

This essay’s main goal is simply to emphasize the key functional possibility (and, I think, strong likelihood) that CAs can form at full strength, with respect to their afferent (upstream) and efferent (downstream) associates, on the basis of a single trial. The second point, that a form of “metaplasticity”, specifically permanence, is the primary variable modified during consolidation, differs sharply from prior/mainstream conceptions [not only within machine learning (ML), including deep learning (DL), but within neuroscience itself] of how the hippocampus actually performs its role in memory, in particular consolidation.


Buzsaki’s 2010 “Synapsembles” paper defines a CA only in terms of its correlated efferent effects, i.e., upon a downstream “reader-integrator” neuron, but that paper did not focus on how CAs are learned. In contrast, in the theory described here, the CA is defined by the correlated afferent increases to the set of neurons that simultaneously become active as the CA. As this essay argues, this change of viewpoint in defining the CA, from the downstream “reader” to the upstream activation pattern (e.g., a CA in an afferent field) and the correlated afferent synaptic changes that pattern induces, has potentially large consequences for how CAs are learned.