Behavioral Science & AI: The Good, The Bad & The Ugly

A brief dive into Behavioral Science & AI and the relationship between the two.

The Gooood 😄
The Bad 😔
The Ugly ☹️😱

Behavioral Science? What is that? Is there a whole field dedicated to the study of human behavior? You bet there is. This isn’t anything particularly new: behavioral science is an interdisciplinary approach to understanding human behavior that draws on anthropology, sociology and psychology. The field is incredibly useful when applied to application design in the tech world because, after all, apps are made for humans. If you as a programmer can predict with a high level of certainty how a user is going to behave, you can design your app to exploit that behavior.

A cornerstone of behavioral science, at least as it’s applied to product design, is BJ Fogg’s behavior model, usually written as B=MAT.

Behavior = Motivation × Ability × Trigger

B=MAT’s components are Behavior, Motivation, Ability and Trigger; the behavior only happens when the other three converge at the same moment (a tiny code sketch follows the list):

  1. Behavior — The way one acts or carries oneself in the presence of certain stimuli.
  2. Motivation — The general desire or willingness of someone to do something.
  3. Ability — The means or skill to actually do that thing. In an app’s case, this is directly tied to ease of use and ease of access.
  4. Trigger — An event or emotion that causes a certain behavior.
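
To make the model concrete, here’s a toy sketch in Python. Every number in it is made up (Fogg doesn’t assign units to motivation or ability); the only point it illustrates is that a behavior needs a trigger plus enough motivation and ability at the same moment.

```python
# A toy sketch of B=MAT, assuming made-up 0-to-1 scores; Fogg's model doesn't
# define numeric units, so treat this purely as an illustration of the idea
# that all three ingredients have to line up at the same moment.
from dataclasses import dataclass

@dataclass
class User:
    motivation: float  # 0.0 (couldn't care less) .. 1.0 (desperate to do it)
    ability: float     # 0.0 (very hard to do)    .. 1.0 (effortless)

def behavior_occurs(user: User, trigger_fired: bool, threshold: float = 0.25) -> bool:
    """A behavior happens only when a trigger fires while motivation and
    ability together clear some activation threshold."""
    return trigger_fired and (user.motivation * user.ability) >= threshold

# A push notification (trigger) reaches a mildly bored user with the app installed:
print(behavior_occurs(User(motivation=0.6, ability=0.9), trigger_fired=True))   # True
# The same notification reaches someone who deleted the app (ability near zero):
print(behavior_occurs(User(motivation=0.6, ability=0.05), trigger_fired=True))  # False
```

Kill any one factor, say by deleting the app (ability collapses) or muting notifications (no trigger fires), and the behavior stops. That lever is exactly what the good-use-cases section comes back to later.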

Currently, many consumer-tech companies use something called the Hook model in conjunction with B=MAT.

What is Hook? The Hook model, popularized by Nir Eyal in “Hooked”, is a loop that follows the flow trigger → action → reward → investment. Below is a breakdown of each part of the model, followed by a small code sketch of the loop.

  1. Trigger — Two types of triggers exist: external and internal. External triggers are things that basically tell you what to do, e.g. buy-now ads, “click here”, “play this”…in essence, clickbait. Internal triggers are stronger than external ones; they live inside a person’s head, formed through an association or a memory. The most frequent internal triggers are emotions, specifically negative emotions.
  2. Action — The simplest behavior one can perform in response to a trigger: a Google search, pressing play on a YouTube video, scrolling on Pinterest. The action is done in anticipation of a reward, and the likelihood of an action (behavior) is defined by B=MAT.
  3. Reward — In general, a reward is something positive that makes you feel good, and it is the direct result of an action (e.g. opening up Facebook or Instagram and seeing a certain type of post in your newsfeed; say you like cats, you might see more posts about cats). Three different types of rewards exist: rewards of the tribe, rewards of the hunt and rewards of the self. Companies try to control the reward aspect of the Hook model to ingrain the behavior that was rewarded and turn it into a habit. Social media companies with newsfeeds lean on rewards of the hunt.
  • Rewards of the tribe — Things that feel good socially, such as cooperation, competition and partnerships amongst members of the ‘tribe’.
  • Rewards of the hunt — The hunt for resources: the money you might win playing poker, or the one post that piques your interest in a newsfeed full of unfavorable posts.
  • Rewards of the self — Things that feel good in and of themselves. Finishing items on a to-do list, clearing all the notifications in your e-mail…getting all the greens to pass on learn.co’s labs 😅

  4. Investment — The investment phase of the Hook model is where an app builds up a store of value: the more friends you have, the more popular you are; the more followers you have on Twitter, the larger the audience you can reach. Developers try to provide cumulative value in their app so that people become ‘hooked’ and come back for more.
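
Here’s an equally toy sketch of a user cycling through the Hook loop. The names, numbers and the point at which the trigger turns internal are all invented for illustration; the thing to notice is that every pass through the loop stores a little more value.

```python
# A toy pass through the Hook loop (trigger -> action -> reward -> investment).
# All names and numbers are invented; the point is that each cycle adds to a
# store of value, and once enough value has built up the trigger goes internal.
import random

def hook_cycle(stored_value: float) -> float:
    trigger = "notification" if stored_value < 1 else "boredom"   # external vs. internal trigger
    action = "open the app and scroll"                            # the simplest available behavior
    reward = random.random()                                      # variable reward of the hunt
    investment = 0.1 + 0.2 * reward                               # follows, likes, posted content
    print(f"{trigger:>12} -> {action} -> reward {reward:.2f} -> invested {investment:.2f}")
    return stored_value + investment

value = 0.0
for _ in range(12):          # every cycle makes the product a bit "stickier"
    value = hook_cycle(value)
```

After enough passes the external trigger (a notification) gives way to an internal one (boredom), which mirrors how products try to graduate you from push notifications to opening the app reflexively.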

As sad as it may seem, the ‘best’ product is often not the one with the most objective value but the one that applies the Hook model and B=MAT most effectively.


Artificial Intelligence? What’s that? On a scale of Microsoft Word’s paperclip to I, Robot, what is the current stage of AI?

Before I can discuss the current state of AI, what it is, how it works and its future, I would like to discuss consciousness.


What is consciousness? The state of being awake and aware of one’s surroundings. What is the source of our consciousness? Historically, neuroscientists haven’t been able to pinpoint a source and more or less attributed it to some sort of voodoo magic. Recently, though, scientists have begun to see that consciousness, on a very basic level, is just a bunch of neurons firing and chemical reactions taking place in the brain.

The brain finds out about the world through electrical impulses, which are only indirectly related to objects and actions in the world. Perception is a process of informed guesswork: the brain combines those signals with expectations and beliefs about the world built from previous data (which lives in your subconscious) to figure out what caused them. We don’t passively perceive reality; we actively generate it. Another thing to consider: if hallucination is just uncontrolled perception, then, inversely, perception is controlled hallucination. It’s all in the mind.

OK. We’re going on a roller coaster here, but stick with me, it’s only going to get better.

What do we mean when we say something is sentient? Sentience means that an object or person is aware of itself and can think for itself; the term is pretty much synonymous with consciousness. But on what level? On the level of a basic life form, such as a dog knowing it’s hungry and needs food to survive? Or on the level of a human? Is a dog’s intelligence the same as a human’s? When we talk about intelligence in AI, we are usually using ourselves as the benchmark. Per Leonard Mlodinow’s book “Subliminal”, there are different ‘orders of intelligence’, which can be illustrated by taking the dog example up a few levels. On level 1, you can recognize that a family member is hungry; on level 2, you can hold the thought that the family member knows that you know they’re hungry; on the next level, you know that they know that you know that they’re hungry. It keeps going and gets more complex, and our intelligence as human beings can arguably be measured by how high up this ladder of abstraction we can think. These are all things to keep in mind as we move on to the definition of AI.

A good quote that illustrates the relationship (or lack thereof) between consciousness and intelligence is “You don’t need to be intelligent to suffer, but you probably have to be alive.” Consciousness begets intelligence.


What is Artificial Intelligence? The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

Most of today’s AI works through machine learning: the process of feeding mountains of data to an algorithm to ‘teach’ it a certain task. On a very basic level, an ML system can learn to tell whether an image of a dog is, in fact, a dog. This kind of functionality is called computer vision and is already used in today’s tech, for example in Tesla’s self-driving cars. It is a very cool but tedious process, because all of the images fed into a supervised ML algorithm have to be manually labeled; after numerous iterations the algorithm starts to pick up trends and create associations. Mislabeling can lead to an AI creating faulty associations in its data.
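
To ground that a little, below is a minimal supervised-learning sketch. It assumes you have scikit-learn installed and uses its built-in handwritten-digits dataset instead of dog photos, but the shape of the process is the one described above: humans label the images, the algorithm fits itself to those labels, and then it gets judged on images it has never seen.

```python
# A minimal "label images, feed them to an algorithm" loop using scikit-learn's
# bundled handwritten-digits dataset (8x8 grayscale images with human-provided labels).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                                  # images flattened to 64 features + labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)               # a simple classifier, nothing fancy
model.fit(X_train, y_train)                             # "teach" it from the labeled examples

print(f"accuracy on unseen images: {model.score(X_test, y_test):.2%}")
```

Shuffle or mislabel the targets and the exact same code will happily learn the wrong associations, which is the faulty-data problem the next paragraph illustrates.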

Your AI is only as good as the data you feed it. A good example of this is Google’s DeepDream project: because the underlying network was trained on image data full of dogs, it starts to find dogs in almost any image it looks at.

This is similar to a human brain on psychedelics: psychedelics break the associations that already exist in your brain (which, again, live in your subconscious!). While you’re ‘high’, you see hallucinations because your mind is trying to find patterns in what it is looking at and identify it at a subconscious level.

AI is very data-driven, meaning the scope of its intelligence is limited to its data. The human brain, by contrast, is the culmination of an enormously long run of evolution. There is no way to teach an AI system to be human through ML at this point. Even if someone tracked data over a person’s entire lifetime and fed it into an ML algorithm, it wouldn’t be enough, because the algorithm never witnesses all the subconscious work the human brain performs. And since no two people come across the same stimuli during a lifetime, feeding an AI system one person’s data, or even a thousand people’s data, would only have the system pick up those people’s collective biases rather than create true sentience. We can’t even trust our own senses at times, as the McGurk effect demonstrates: what you hear changes depending on which mouth movements you watch.

What will happen when AI reaches higher orders of intelligence? We can’t be totally sure, but let’s hope it doesn’t go full Ultron on us. In the next section, let’s discuss how behavioral science and AI go hand in hand, in good use cases and bad.


The Good and the Bad

Let’s start with the bad cases:

  1. Social media is a big user of the Hook model. Social media platforms started out as a good thing, a tool for connecting with friends and family members across the globe, but their utility nowadays is questionable. The feedback loop created by likes, hashtags and retweets, plus the rise of ad revenue as the main business model for social media apps, gives companies an incentive to keep you addicted in an endless run of the Hook loop.
  2. AI can be used in conjunction with behavioral science: AI can identify an individual user’s trigger points based on data about that user’s demographic. This makes developers even more effective at targeting, because now they control not just the motivation and ability parts of B=MAT, but the whole damn framework that leads to a behavior!
  3. An example of a harmful application of AI is predictive policing, where police precincts used AI software to decide which neighborhoods to patrol. You can already guess this isn’t going to end well. The algorithm, based on the data fed to it, was sending officers to predominantly African-American neighborhoods. Based on data in the U.S., we know that there exists an implicit (or explicit) bias against people of color and minorities when it comes to police interaction. Hence, you can imagine why the AI is racist…faulty, biased data! (A toy simulation of this feedback loop follows this list.)
  4. Another bad use case of the two is that hackers could break into user data from a social media site, use that information to effectively predict a person’s future movements, and potentially sell it to criminals, especially if the target is a wealthy person.
  5. Enough of the bad cases; I could go on forever. A good use case for B=MAT in isolation is using knowledge of the model to break bad habits: turn the model’s own dials against the habit by reducing ability (deleting Facebook from your phone) or removing triggers (turning off notifications).
  6. A basic but nonetheless cool use would be computer vision that determines whether a grocery store’s produce has gone rotten. This could automate away certain jobs and cut costs for companies.
  7. The last good use case that I’m going to mention is…learn.co! Flatiron School definitely took behavioral science into consideration when designing its platform. Remember the green lights? That is a way to keep you hooked. We’re learning and ultimately benefitting from this added level of focus!
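
As flagged in point 3, here is a toy simulation of that feedback loop. Every number is invented: the two neighborhoods have identical true crime rates, but one starts out patrolled four times as heavily, and the ‘model’ simply allocates next round’s patrols in proportion to the crime recorded so far.

```python
# A toy simulation (all numbers invented) of the bias feedback loop: crime only
# gets recorded where officers are sent, and officers are sent where crime was recorded.
import random

random.seed(0)
TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}      # identical underlying rates
patrols = {"A": 80, "B": 20}                  # historical skew: A starts out patrolled 4x as much
recorded = {"A": 0, "B": 0}

for _ in range(10):                           # ten rounds of "learning" from the data
    for hood, n_patrols in patrols.items():
        recorded[hood] += sum(random.random() < TRUE_CRIME_RATE[hood] for _ in range(n_patrols))
    total = recorded["A"] + recorded["B"] or 1
    # The "model": allocate the next 100 patrols in proportion to recorded crime.
    patrols = {hood: round(100 * count / total) for hood, count in recorded.items()}

print(recorded)   # A piles up far more recorded crime despite identical true rates...
print(patrols)    # ...so it keeps getting the bulk of the patrols
```

Nothing about the neighborhoods actually differs; the skew in the historical data simply reproduces itself, which is the ‘faulty, biased data’ point in a nutshell.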

Protection of user/customer data is a very important topic and one that I will segue into in my next article!

Sources:

“Subliminal” — Leonard Mlodinow

“Hooked: How to Build Habit-Forming Products” — Nir Eyal

https://www.ted.com/talks/anil_seth_how_your_brain_hallucinates_your_conscious_reality/transcript?language=en

https://www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337/