Human Memory, Machine Learning, and Overfitting

Computer Science and Cognitive Science share a close historical connection: our understanding of how the brain works is heavily influenced by computer models, and vice versa. In fact, the very first model of a cognitive process was based on a computer.

Models of Learning

In 1958, cognitive psychologist Donald Broadbent, who studied selective attention and short-term memory, created the first flow diagram of a cognitive process.

Photo from Professor Barrera’s lecture in COGS 1

As shown above, the filter model of human cognition and detection resembles the input processing of a computer.

Interestingly, Broadbent’s model depicts multiple stimuli entering the human system but not the computer, suggesting that far more information comes into our experience than we actually pay attention to.

Information that is relevant is more likely to make it through this “filter” and into our field of awareness. For instance, think about how you’re much more likely to look up if you hear a friend say your name or if you hear a loud sound — our cognitive filters are continuously differentiating signals from environmental noise.

Source: https://www.rainbowsymphonystore.com/products/color-filters-set

Why doesn’t a computer have the same filter?

Presumably because computers have a limited set of relevant inputs, which are managed and pre-processed by the designer or programmer. There are also innate limits to the kinds of data a model can take in: you can’t feed a physical object or a scent into a search engine; those lie outside the realm of a computer’s senses, just as humans can’t perceive ultraviolet light or hear sounds below a certain volume threshold.

Additionally, the data fed into a computer is inherently more structured and consistent — often from pre-processing — than the vast array of data that flows nonstop into the human cognitive model.

However, despite limitations on both sides, we can work towards improving Machine Learning to expand the range of what a computer can sense and get it to figure out for itself what the relevant components of the stimuli are, and where the underlying patterns lie without us explicitly telling it.

But of course, this isn’t always successful.

Overfitting

By Ghiles — Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=47471056

Because we as programmers and designers typically have the control over what inputs to present to our algorithm in the training and testing datasets, we are overlaying our own filter. This can become an issue when overfitting occurs.

Overfitting is when an algorithm fails to generalize to new stimuli. It happens when the dataset the algorithm is trained on is not representative of the whole: the model treats features that are specific to the training set as if they appeared in all sets when they do not. And because an overfit algorithm fits its training data extremely well, noise included, it never sees any evidence that a mistake is being made, so it cannot correct itself.
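A minimal sketch of this idea, using made-up toy data: a "memorizer" that perfectly fits its training examples (noisy labels included) looks flawless during training, while a simple rule that captures the real pattern looks worse on the training set but generalizes to fresh data.

```python
# Toy illustration of overfitting (all data here is invented).
# True pattern: label is 1 when x >= 5. Two training points
# (x=4 and x=8) carry noisy labels that contradict the pattern.
train = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 1),
         (6, 1), (7, 1), (8, 0), (9, 1)]
test = [(0.5, 0), (2.5, 0), (5, 1), (6.5, 1), (10, 1)]

def memorizer(x, data=dict(train)):
    """Overfit 'model': recall the exact training label if we've seen x;
    otherwise fall back to the majority class from training."""
    if x in data:
        return data[x]
    labels = list(data.values())
    return max(set(labels), key=labels.count)

def simple_rule(x):
    """The underlying pattern the data was generated from."""
    return 1 if x >= 5 else 0

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print("memorizer train:", accuracy(memorizer, train))    # 1.0 -- fits the noise too
print("memorizer test: ", accuracy(memorizer, test))     # generalizes poorly
print("rule      train:", accuracy(simple_rule, train))  # misses the noisy points
print("rule      test: ", accuracy(simple_rule, test))   # 1.0 on fresh data
```

The memorizer has no way to notice its own mistake: by its training score, it is doing everything right.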

But computers aren’t the only ones that keep applying a once-successful method after it stops working; humans do it as well.

Our Own Overfitting

Learned helplessness is when a person (or other animal) continues to act as if they are helpless even when situations change.

Initially, they are genuinely helpless in the situation: whenever they try to act, the attempt is punished. For example, say a student has a teacher who criticizes them every time they ask a question. The student learns that it’s easier not to ask questions, because they get so upset that they can’t even remember what they asked. This strategy might be adaptive for that one class and that one teacher, but what happens when classes change?

They may take the same approach and refrain from asking questions, and because they never try, they never learn that it is okay to ask questions, hurting them in the long run.

External punishments don’t last forever, but the person remains stuck with the mindset that attempting anything is useless. This one association of action with punishment isn’t generalizable to all other actions, but they learn to think that it is, similar to overfitting.

However, we have a mechanism for letting go of things that don’t serve us!

As previously discussed, we have filters on our attention that select what is important for us to attend to. This also applies to human memory: what we pay attention to gets saved in memory and what isn’t important is let go (in reality, human memory is more complicated than this but for the time being let’s stick with this oversimplification). Building onto this, we also have the ability to forget memories that used to be useful but aren’t anymore.

https://unsplash.com/photos/TLBplYQvqn0

In a practical sense, it is useful to forget certain information to make room for the new: think of moving to a new place or changing a password. If you can’t forget the old information, or at least push it out of your attention, you’ll end up giving the wrong address or typing in the wrong password.

But more abstractly, we can reconstruct a different sense of self if we forget the past. Most adults have difficulty recalling their early adolescence and childhood due to their brains not being fully developed at the time. However, a positive side-effect of this is that once we are older and have a better grasp on life, we can forget mistakes that we’ve made in the past and actively shape who we want to be as individuals.

Source: https://unsplash.com/photos/46juD4zY1XA

Even when in a state of learned helplessness where a negative construction of self is made, there is always room for growth and new experiences that can overturn it, and the past can be put out of mind, though not as cleanly as Machine Learning models can.

Just like when training algorithms, we can remember to diversify our inputs! And within our own lives, we have the ability to try out new things and create new experiences and memories that better prepare us for novel situations in the future and ultimately bolster our versatility, flexibility, and open-mindedness.
