Designing Agents of Change

Designing the Young Lady’s Illustrated Primer — the neuroscience and learning theory behind the Primer, prior art, system requirements and tradeoffs, major design issues, expert interviews, and unresolved technical problems.

Bethanie Maples
7 min read · Apr 21, 2018
Penny from Inspector Gadget with her OG computer book

Like most Sci-Fi nuts, I’ve dreamt of building the Young Lady’s Illustrated Primer ever since reading The Diamond Age*.

I went to Stanford in 2017 to explore all the ways we can use Artificial Intelligence and advanced algorithms to aid human learning and cognitive development, so I saw my d.media class at the d.school as the perfect time to finally tackle the Primer.

I had 10 weeks.

*For those not familiar with the Primer, it is considered by many to be the best example of an artificial intelligence agent for human learning/cog dev. Kind of the silver bullet for education.

Why build the Primer?

Scaling one-on-one tutoring is seen by many experts and researchers as the silver bullet for human cognitive development, and by extension a way to solve a host of economic and social issues for society. But how do you make one program fit any human mind? The solution has to be adaptive, both in terms of subject matter expertise and feedback — i.e., it has to interact with the learner in a personalized way, keep them motivated, and remove mental and psychological blocks to learning.

Inspiration: the Primer + eButler + Alexa

First I looked at the best examples of AI agents in science fiction. Obviously we have the Primer, but the eButler from Pandora’s Star is pretty close to how people will interact with AI in the future: a personal agent that manages your data, brings code to you, and is allowed to take semi-autonomous action. The closest things we have today seem to be Siri, Alexa, Replika AI, or Fin. Edwin AI is taking a crack at it, starting with English.

Prior Art: Who else is, or has, tried this?

Discussions on Quora — pretty theoretical.

This gentleman took a stab at it in 2011 and came up with a system mock-up.

The best platform for building adaptive tutoring seems to be Carnegie Mellon’s Cognitive Tutor, but (to my knowledge) no one has made a product on top of it that combines the complete set of subject areas with the level of interaction/conversation that the Primer imagined…

Inquire (http://inquireproject.com/) — I’ve been working with Prof. Vinay Chaudhri from Stanford’s AI Department/SRI on his AI-enabled textbook. Inquire takes biology textbooks and poses intelligent questions and answers that test progressively deeper levels of knowledge and understanding.

Multiple groups at Stanford are also looking at AI-enabled learning and textbooks — but that’s their story.

Based on what we know about learning science, what needs to be built into this agent?

  1. A conversational agent
  2. Adaptive responses
  3. Conversations around specific learning outcomes AND mentoring/non-task oriented themes
  4. Both kinds of conversation leading to critical thinking (both CMU and IBM Watson do this, to a degree)
  5. Learning outcomes that demonstrate deepening thinking (think Bloom’s Revised Taxonomy)
  6. Non-task conversations that lead to psychological and emotional balancing and awareness (this shit is hard, but Woebot’s CBT agent is taking a stab at it)
  7. In order to get #5, you need to know what the user is trying to learn, how they are being assessed, and what prior knowledge they have (their ‘current knowledge state’); see the sketch after this list
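
To make #2 and #7 concrete, here is a minimal sketch, in Python, of what tracking a ‘current knowledge state’ against Bloom’s Revised Taxonomy and routing between task and non-task conversation might look like. Everything here (the KnowledgeState class, route_turn, the keyword check) is a hypothetical illustration added for clarity, not part of any system named above; a real agent would use trained models for intent detection and knowledge tracing.

```python
from dataclasses import dataclass, field
from enum import IntEnum


class Bloom(IntEnum):
    """Bloom's Revised Taxonomy, ordered from shallow to deep thinking."""
    REMEMBER = 1
    UNDERSTAND = 2
    APPLY = 3
    ANALYZE = 4
    EVALUATE = 5
    CREATE = 6


@dataclass
class KnowledgeState:
    """What #7 asks for: the learning goal, the assessment, and prior knowledge."""
    goal: str                                   # what the user is trying to learn
    assessment: str                             # how they are being assessed
    levels: dict = field(default_factory=dict)  # topic -> deepest Bloom level shown

    def record(self, topic: str, level: Bloom) -> None:
        # Ratchet upward only: demonstrated depth sticks (no forgetting model here).
        self.levels[topic] = max(self.levels.get(topic, Bloom.REMEMBER), level)

    def next_target(self, topic: str) -> Bloom:
        # Adaptive response (#2): aim one level deeper than the learner has shown.
        current = self.levels.get(topic, Bloom.REMEMBER)
        return Bloom(min(current + 1, Bloom.CREATE))


def route_turn(message: str, state: KnowledgeState, topic: str) -> str:
    """Requirement #3: split turns between task tutoring and non-task mentoring.
    A real agent would use an intent classifier; this keyword check is a stand-in."""
    if any(w in message.lower() for w in ("stuck", "frustrated", "anxious")):
        return "mentoring: open a non-task, emotional-support conversation"
    target = state.next_target(topic)
    return f"tutoring: pose a {target.name}-level question on {topic}"


# A learner who has only recalled facts gets nudged one Bloom level deeper.
state = KnowledgeState(goal="cell biology", assessment="AP Biology exam")
state.record("mitosis", Bloom.REMEMBER)
print(route_turn("Can we keep going?", state, "mitosis"))
# -> tutoring: pose a UNDERSTAND-level question on mitosis
```

The upward-only ratchet in record() encodes the optimistic assumption that demonstrated depth doesn’t regress; a production tutor would also model forgetting and re-assessment.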

So I re-mapped the system requirements.

How will people adopt this agent? It’s a trust/time issue:

And the question implicit in that trust-building map is: how do you get your dataset? If you’re not a Big 5 tech company with a s**t-ton of NLP data, you can’t just jump to perfect future functionality; you have to build an initial use case to get people into your AI.

Ergo, Version 1: An AI to Make You Smarter

So I designed an offering and UI for Version 1, and a couple of things became clear:

  1. Non-sci-fi nuts don’t really have a mental model of what an AI agent can do for them, at least not today.
  2. The agent needs to bond with just one person, like in The Diamond Age. Building trust is critical.
  3. Initial functionality massively affects users’ willingness to share data and their future trust. If you pitch the agent as an expert, people demand more. If they think it’s a baby, they feed it less advanced conversation but are more likely to forgive functional errors. People were basically confused about the agent’s value.

Embodying the AI in an Identity

Based on my prior research into robots and AI interfaces, I took the stance that a named entity/agent would produce better adoption AND cognitive development than a nameless program.

So I designed eight different AI personalities, and surveyed Stanford students, tech and non-tech professionals about which one they preferred (n = 25).

  1. A Flame
  2. Conscious entity disembodied in ambient natural scenes
  3. A magical animal that grows up as your knowledge develops — no choice
  4. A wise teacher — can be culture specific
  5. An animal, can be chosen and switched a limited number of times
  6. Magical being, can be chosen and switched a limited number of times
  7. Just eyes, without the ambient nature scenes
  8. Different faces of a teacher — fun, serious
AI Agent — entity design concepts

Results: 2 and 3 were vastly more popular than the others. BRIC students preferred a teacher identity. Both tech and non-tech professionals preferred a magical being avatar, or a disembodied avatar (eyes + clouds). A normal animal, a ‘flame’ (non-human) and a ‘wise teacher’ were the least popular.

Conversational Agent and AI Expert Feedback:

Then I went and talked to experts from Google, Facebook, Replika, Woebot, PlayStation’s Magic Lab, and the Internet Archive about the Primer and my thinking. Here is what they said.

The Google answer is pretty interesting to contemplate…

Version 2: Key Challenges in building ‘Your Personal AI Tutor’

V2 became much crisper about its promised functionality.

Still, people either wanted it to work seamlessly, integrating across the Google application suite, or were not totally bought into the idea of tuning up their own agent, even if it was for their ultimate cognitive good.

So here the tale pauses. The 10 weeks were up.

The fundamental problems with building the Primer:

Misalignment between dataset owners and revenue models — Google or Amazon are clearly in the best position to do this, but they have no economic incentive to give up visibility into personal information and the ability to shape user preferences (which I believe they would need to do to gain the trust required for this agent). So gaining data access is a big challenge for a startup.

Media Multitasking — the Primer worked in theory because the human it served had essentially no other media available, so the Primer got all of its tuning data from its human over time. The reality of children’s media exposure today is that their data is scattered across multiple repositories on different sites, and their need for social contact can be (and is) fulfilled by other platforms like Snapchat and Instagram. They might not use the Primer enough for it to be as impactful as it could be if other media were not available.

Low perceived need by mature learners — the college and graduate learners I surveyed were more interested in speed to learning and assessment success than in tools for long-term cognitive enhancement. This might mean we have to deploy Primer-like agents with initial efficiency functionality that then edges into enhancement, instead of leading with the enhancement message — its payoff horizon is just too far away.

What I covered here: the learning theory behind the Primer, prior art, system requirements and tradeoffs, agent-design user test results, major design issues, expert interviews, and unresolved problems.

Hopefully this helps future developers of the Primer. Let me know how it goes.
