About Semantic Organization

This is a series of blog posts about the learnings and discoveries I made while studying Cognitive Psychology at the University of California, Berkeley. It will introduce you to the basic concepts of human cognition: attention, memory, categorization, semantic organization, language, problem solving, and decision making.

Marc-Oliver
Jul 25, 2017 · 3 min read

Which models of semantic processing do humans rely on most when accessing information?

This is an interesting question, and I can only offer a highly speculative answer with the limited amount of neuroscientific information I gathered from the learning materials provided in Stephen K. Reed's book ‘Cognition’ – see link below.

Researchers at Duke University (Rubin, 2006) showed some promising results: using neuroimaging and autobiographical memory codes, they confirmed that the brain indeed combines or triggers different areas when accessing information. These memory codes can be perceptual, non-perceptual, or both. It was fascinating to learn from their experiment how the brain activates three key areas during autobiographical memory retrieval, and how those ‘brain activities’ are arranged along a timeline of three main stages: first, incoming information triggered the participants’ emotions; second, a ‘verification’ process took place; and third, subjects started to reconstruct a matching scene from their (autobiographical) memory, or mentally visualize it.

I can imagine from this example very clearly how a cue word must have triggered closely related/linked nodes or concepts, and how this process spread to other memory components of the brain. Some of these nodes represented pure facts or semantic information, while others were emotionally and contextually ‘charged’ with sensory ‘data’ and therefore had a higher impact in verifying the importance or meaning of a specific moment or memory.

This reminds me: I once had a bad experience with a swan. I came very close, and the swan started chasing and attacking me. The memory I now have of swans is not only that they are white, beautiful, and majestic, but also that they can attack and harm you. This new experience not only altered the memory codes I had previously stored about swans, which were mostly amodal; it also altered my knowledge of the concept of birds in general. The script my brain was using for ‘bird watching’ gained a new central event that will forever influence my perception of birds.

These very personal reflections, together with other research findings from Reed’s book, suggest that there must be some perceptual semantic network in place that effectively links related amodal and modal knowledge whenever necessary or available. The hierarchical network model, the feature comparison model, and the spreading activation model all seem to play their part in piecing together how we actually retrieve and process information from memory. Each of these models is well suited to highlight specific mechanics that make up the overall model mentioned above. Schemas, in that sense, explain well how clusters of information are represented, and they help researchers understand and predict how humans might reconstruct memory and respond with routine activities that often match cultural and social constructs. Reed nicely summarizes in chapter 9 that amodal and modal theories have different focuses, but in my most humble opinion they always appear in parallel.
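To make the spreading activation model a bit more concrete, here is a toy sketch of the mechanism described above: a cue word (say, ‘swan’) activates its node, and activation propagates along links to related concepts, decaying with each hop. All of the concepts, links, and weights below are my own hypothetical illustrations, not data from Reed’s book or the Rubin study.

```python
from collections import defaultdict

# Hypothetical semantic network: concept -> [(linked concept, link strength)].
# The nodes and weights are invented purely for illustration.
network = {
    "swan":   [("bird", 0.9), ("white", 0.8), ("attack", 0.6)],
    "bird":   [("swan", 0.9), ("wings", 0.7), ("fly", 0.8)],
    "white":  [("snow", 0.5)],
    "attack": [("fear", 0.7)],
    "wings":  [], "fly": [], "snow": [], "fear": [],
}

def spread_activation(cue, decay=0.6, threshold=0.1):
    """Return activation levels for all concepts reachable from a cue word."""
    activation = defaultdict(float)
    activation[cue] = 1.0
    frontier = [(cue, 1.0)]
    while frontier:
        node, level = frontier.pop()
        for neighbour, strength in network.get(node, []):
            new_level = level * strength * decay
            # Propagate only meaningful activation, and only if it raises the
            # neighbour's current level (this also prevents infinite loops).
            if new_level > threshold and new_level > activation[neighbour]:
                activation[neighbour] = new_level
                frontier.append((neighbour, new_level))
    return dict(activation)

levels = spread_activation("swan")
```

Notice how the model naturally captures the swan anecdote above: directly linked concepts like ‘bird’ or ‘attack’ receive strong activation, while more distant ones like ‘fear’ or ‘snow’ receive weaker, second-hand activation through intermediate nodes.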

Can neuroscience ever advance enough to determine exactly which model of semantic processing we use most? This would require not only the ability to spot the neuron that represents, for example, the colour red in our memory, but also the neurons that are linked with our emotions and past (sensory) experiences with that same colour. In addition, this yet-to-be-invented ‘neuron-detection technology’ must be able to form a semantic network on its own in order to interpret the facts and ‘emotional’ patterns discovered in different individuals across the globe. It sounds scary, but Christine Ann Denny from Columbia University has already developed a technique “to label the cells that encode individual memories in the brains of mice and are able to indelibly tag these neurons using fluorescent molecules” (Link). Meanwhile, enterprise-funded biotech and computer science researchers are shipping the first artificially created brain-cell structures. More to come …


Related academic resources and videos:

  • Stephen K. Reed, 2012 [Link]
  • Christopher Gade, 2017 [Link]
  • Berkeley Course Material, Dr Christopher Gade

The Versatile Designer

The hidden stories every product designer should know about markets, products & consumer behavior.

Written by Marc-Oliver

Senior UX Manager @Appnovation, Canada. Writes about Cognitive Psychology, Behavioural Economics & Platform Design. Creator of https://axurewidgets.com.

