Book Review: “Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems”
This book, by Peter Dayan and L.F. Abbott, is an exciting exploration of brains and their components. It was published in 2001 by MIT Press. Understanding it requires basic differential equations, probability and statistics, and linear algebra.
The book consists of three parts:
- Neural encoding and decoding: The study of how stimuli are converted into neural responses, in particular action potentials or “spikes”. Chapters include “Neural encoding I: Firing rates and spike statistics”, “Neural encoding II: Reverse correlation and receptive fields”, “Neural decoding”, and “Information theory”. The authors alternate between describing neurons and their physiology and circuitry, providing analogous mathematical models, and comparing those models with real neural data. The chapter on information theory is especially useful for understanding how much information it is possible, in principle, to encode with neurons.
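The spike-statistics material in this part centers on the homogeneous Poisson process as a baseline model of spiking. As a minimal sketch (not code from the book): in each small time bin of width dt, a spike occurs with probability r·dt, where r is the firing rate.

```python
import numpy as np

def poisson_spike_train(rate_hz, duration_s, dt=0.001, seed=0):
    """Sample spike times from a homogeneous Poisson process.

    In each bin of width dt, a spike occurs with probability
    rate_hz * dt (valid when rate_hz * dt << 1).
    """
    rng = np.random.default_rng(seed)
    n_bins = int(duration_s / dt)
    spike_mask = rng.random(n_bins) < rate_hz * dt
    return np.nonzero(spike_mask)[0] * dt  # spike times in seconds

# Simulate 10 s at a 20 Hz target rate; the empirical rate
# should come out close to 20 Hz.
times = poisson_spike_train(rate_hz=20, duration_s=10.0)
print(len(times) / 10.0)
```

The interspike intervals of such a train are approximately exponentially distributed, which is one of the statistics the book compares against recorded neural data.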
- Neurons and neural circuits: Single compartment and multi-compartment models of individual neurons are presented in terms of electrical circuit theory, and their properties compared. Firing rate models of networks of neurons are presented and analyzed with respect to real neural data. Chapters include “Model neurons I: Neuroelectronics”, “Model neurons II: Conductances and morphology”, and “Network models”.
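The simplest single-compartment model in this part is the leaky integrate-and-fire neuron: the membrane potential obeys τ_m dV/dt = E_L − V + R_m I_e, with a spike and reset when V crosses threshold. A minimal Euler-integration sketch (parameter values are illustrative, not taken from the book):

```python
def simulate_lif(i_e, duration_s=0.5, dt=1e-4, tau_m=0.02,
                 e_l=-0.065, v_th=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron driven by constant current i_e.

    Integrates tau_m * dV/dt = (E_L - V) + R_m * i_e with forward
    Euler; emits a spike and resets V whenever V >= v_th.
    Units: volts, seconds, amps, ohms.
    """
    v = e_l
    spike_times = []
    for step in range(int(duration_s / dt)):
        v += (e_l - v + r_m * i_e) * dt / tau_m
        if v >= v_th:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A suprathreshold constant current (R_m * i_e = 20 mV above rest,
# threshold only 15 mV above rest) produces regular firing.
spikes = simulate_lif(i_e=2e-9)
print(len(spikes))
```

Sweeping `i_e` and plotting firing rate against current reproduces the model's characteristic f-I curve, one of the properties the book compares across neuron models.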
- Adaptation and learning: Both simple and realistic notions of synaptic plasticity are used to describe learning in pairs of interconnected neurons. Statistical machine learning and reinforcement learning concepts are related to learning functions in neural circuits and to the classical and instrumental conditioning of animals. Chapters include “Plasticity and learning”, “Classical conditioning and reinforcement learning”, and “Representational learning”. Learned representations from machine learning algorithms are compared with primate visual system receptive fields.
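The conditioning chapter builds on the Rescorla-Wagner rule, a delta rule in which the association weight is nudged toward the delivered reward in proportion to the prediction error. A minimal sketch of acquisition under that rule (the function name and parameters are my own, for illustration):

```python
def rescorla_wagner(rewards, stimulus=1.0, lr=0.1):
    """Track the association weight w across conditioning trials.

    On each trial the prediction is v = w * u (u = stimulus strength);
    w is updated by the prediction error: w += lr * (r - v) * u.
    Returns the weight after each trial.
    """
    w = 0.0
    history = []
    for r in rewards:
        v = w * stimulus              # prediction of reward
        w += lr * (r - v) * stimulus  # prediction-error update
        history.append(w)
    return history

# Acquisition: repeated stimulus-reward pairings drive w toward 1.
ws = rescorla_wagner([1.0] * 50)
print(ws[-1])
```

Running the same rule with the rewards switched off afterward (`[0.0] * n`) shows extinction, the complementary phenomenon the book analyzes with the same update.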
Coming from the study of machine learning (with a bit of neuroscience picked up along the way), I found this book at the right level of difficulty to learn quite a bit while reading relatively fast. Indeed, the last few chapters have a good discussion of machine learning models that I was mostly familiar with, but they relate those models to neuroscience concepts in useful ways that I hadn’t seen before.
Overall, the book was a solid read, and I would recommend it. That said, there are some parts I could have done without. I won’t go into specifics here (that would require a careful traversal of the book again), but certain sections seemed long-winded and of unclear value. Some of the book’s plots were simple and illuminating; others were extremely difficult to understand.