Imagining an AI-First Student Experience
by Big Tomorrow | March 3, 2017
This is the third installment in a larger series exploring the intersection of design and existing artificial intelligence technology through experiments, prototypes and concepts (Article 1 & Article 2). We believe this is a critically important topic for the design community and beyond, so we’re sharing what we learn along the way.
We’ve recently been exploring how artificial intelligence might better connect the dots in higher education. The short film below documents thinking around an AI-first student experience that coordinates machine learning capabilities to deliver the right information at the right time — additional details about the exploration follow.
Fragmented Student Portals
Universities are massive institutions. In addition to academics, many offer a variety of services and resources to help students make the most of their education. But these are often a tangled maze and students are left on their own to navigate impersonal, overextended student portals. For many, the experience is fragmented at best and outright broken at worst — no one has the patience to wade through an endless sitemap to renew a library book.
So why isn’t there an alternative to the flat architecture of student portals? Where is the dynamic portal-killer that understands a student’s context and can deliver the right thing at the right time? A start-up called AdmitHub offers a glimpse of what it might look like: they’ve built an AI-powered chatbot that can guide students through a university’s existing FAQ or admissions resources. And it’s proving pretty effective: Georgia State used AdmitHub to significantly decrease summer melt (a phenomenon where up to 1 in 5 admitted students fail to enroll for their first semester) through proactive outreach.
How do you extend a model like AdmitHub’s across all a university’s offerings? Would a conversational interface be sufficient?
The promise of conversational interfaces seems clear: if we could just tell a computer what we’re trying to do the way we would tell a person, that would be much better than manually searching through an index of everything the computer can do. But when free-form text is your input, how do you know what you can and can’t do?
Blending Conversational & Traditional UI
In order for a university to deliver a truly context-driven experience that is both flexible and legible, we think an AI-first interaction model needs to blend the best of conversational and traditional interfaces:
- Conversational Baseline — Most interactions would start with a conversation so the system can determine a learner’s intent in an open-ended way. Rich actions embedded into keyboard conventions help learners navigate what actions are available during a given interaction.
- Service Transitions — As a conversation narrows to a particular task, service transitions help surface new sets of actions and capabilities through keyboard conventions.
- Inline Applets — When a particular action is more complex, small apps more consistent with traditional UI are contextually surfaced inline by the system. When the action’s complete, the applet disappears to preserve the conversational flow of the interaction.
Building out an AI-first student experience would require a bit more technological capability than is currently available, but most of the conceptual pieces are in place: intent discernment, intelligent routing, leveraging existing data, and trained models.
Natural language processing (NLP) is employed to determine a student’s intent, which is then mapped to corresponding actions, scripts, or modeled responses within the system. As Shane Mac of Assist points out in a recent Medium post, there are lots of ways to say the same thing, and locking people into decision trees of canned questions ends up resembling a form more than an actual conversation.
A better way to parse intent is to listen for both intent and parameters that map to the corresponding action within the flow of a conversation. For example, saying “I want to renew a library book” and “I want to renew Gulliver’s Travels” should initiate the same book-renewal action, and in the latter case the system should capture Gulliver’s Travels as a parameter for that action. The system’s subsequent response should fill in missing parameters: “I’ll renew Gulliver’s Travels for you. When do you want to return it?”
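The intent-plus-parameter idea above can be sketched in a few lines. This is a minimal illustration, not a real NLP pipeline: the `renew_book` intent, the regex patterns, and the `next_prompt` helper are all hypothetical names invented for the example.

```python
import re

# Hypothetical intent patterns: each maps an intent name to a regex that
# optionally captures a parameter from the utterance.
INTENT_PATTERNS = {
    "renew_book": re.compile(r"renew (?:a library book|(?P<title>.+))", re.I),
}

# Parameters each intent needs before its action can run (hypothetical).
REQUIRED_PARAMS = {"renew_book": ["title", "return_date"]}

def parse_utterance(text):
    """Return (intent, params) for the first matching pattern, else (None, {})."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(text)
        if match:
            # Keep only the named groups that actually matched.
            params = {k: v for k, v in match.groupdict().items() if v}
            return intent, params
    return None, {}

def next_prompt(intent, params):
    """Ask for the first missing required parameter, or confirm completion."""
    for name in REQUIRED_PARAMS.get(intent, []):
        if name not in params:
            return f"What is the {name.replace('_', ' ')}?"
    return "Done!"
```

Both “I want to renew a library book” and “I want to renew Gulliver’s Travels” resolve to `renew_book`; only the second arrives with a `title` parameter, so the follow-up question differs.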
Once intent has been mapped to an action, the system needs to complete the action by routing the student to a resource, returned result, or additional piece of functionality. Sometimes this might be a question targeted at getting a missing parameter as mentioned above. Other times it might be a simple confirmation that the action has been completed, and other times still it might initiate a more complex action via an inline applet.
Intelligent routing also applies to intents that don’t correspond to an action in the system. If the system isn’t capable of performing an action, it should at least know enough about what the student is trying to do so that it can contextually direct them to a person or resource that can assist.
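The routing behavior described above — dispatch to an action when one exists, otherwise hand off with context — could look something like this sketch. The action registry, the contact directory, and all names in it are assumptions for illustration.

```python
# Hypothetical registry of intents the system can act on itself.
ACTIONS = {
    "renew_book": lambda params: f"Renewed {params['title']}.",
    "pay_bill": lambda params: "Opening the payment applet...",
}

# Hypothetical directory of campus resources for everything else.
HUMAN_CONTACTS = {
    "housing": "the Residential Life office",
    "default": "the student help desk",
}

def route(intent, params, topic=None):
    """Dispatch a recognized intent, or hand off with enough context to help."""
    handler = ACTIONS.get(intent)
    if handler:
        return handler(params)
    # No matching action: still use what we understood (the topic)
    # to point the student somewhere useful.
    contact = HUMAN_CONTACTS.get(topic, HUMAN_CONTACTS["default"])
    return f"I can't do that yet, but {contact} can help."
```

The key design choice is that an unrecognized intent is never a dead end: the system degrades to a contextual referral rather than a generic error.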
One of the most interesting things about AdmitHub’s model is that it can be layered atop a university’s existing infrastructure, serving as a bridge between siloed services and resources. In other words, it can make the most of what’s already there. Similarly, we imagine that an AI-first student experience could be accomplished with a phased approach that initially aims to connect and coordinate existing resources, and then progressively integrates them under one roof over time.
Interpreting in-the-moment intent is one aspect of creating a contextually aware system. Another important aspect is leveraging existing data about individual students, the student body, and other sources. This could allow the system to know where a student is in their college career and deliver the right piece of functionality at the appropriate time. Sometimes this would impact an interaction initiated by the student, like pulling up payment information on file so the student doesn’t have to re-enter anything when paying a bill, while other scenarios might be system-initiated, like generating an orientation schedule for a freshman’s first week.
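Prefilling parameters from existing data, as in the bill-payment example, might be sketched as below. The record store and field names are invented for the example; in practice they would come from existing university systems.

```python
# Hypothetical student records keyed by ID, standing in for data that
# already lives in registrar and billing systems.
STUDENT_RECORDS = {
    "s123": {"year": "freshman", "payment_method": "Visa ending 4242"},
}

def prefill(student_id, required_params, provided):
    """Fill missing action parameters from what the system already knows,
    so the student only supplies what's genuinely new."""
    record = STUDENT_RECORDS.get(student_id, {})
    filled = dict(provided)
    for name in required_params:
        if name not in filled and name in record:
            filled[name] = record[name]
    return filled
```

A bill payment that requires `payment_method` and `amount` would then only ask the student for the amount, since the payment method is already on file.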
Existing data could also be used to build predictive models about student interests, campus activity, and all sorts of other things. Models like these are critical to providing recommendations, generating personalized content, and mapping particular intents to the most relevant actions, resources, or services. This is especially important when the system tries to account for changes over time — new training data yields new recommendations, new personalized content, and new routing logic.
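The retraining point — new data yields new recommendations — can be shown with a toy popularity model. A real system would use far richer models; this sketch (class name and data included) is purely illustrative.

```python
from collections import Counter

class EventRecommender:
    """Toy popularity model: recommendations shift as new activity arrives."""

    def __init__(self):
        self.counts = Counter()

    def train(self, attendance_log):
        # Each retraining pass folds in new data, so what the system
        # recommends changes as campus activity changes.
        self.counts.update(attendance_log)

    def recommend(self, n=2):
        return [event for event, _ in self.counts.most_common(n)]
```

After an initial batch where film club dominates, a later batch of intramural sign-ups flips the top recommendation, without any change to the model code itself.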
In the near future, we hope to see an AI-first student experience that opts for personalized attention, contextual awareness, and fluid interaction over the one-size-fits-all ethos of the current student portal. If you’re in higher-ed and it’s something you want to start building, let’s talk. You can reach us at firstname.lastname@example.org.
There’s a lot of great thinking about conversational interfaces, artificial intelligence, and just-in-time apps out in the world. Here are a few highlights:
- Messenger Bots: Decision Trees vs. Web Views → Facebook Messenger’s Mikhail Larionov discusses how the Messenger Extensions SDK can be used to enable complex rich actions within a chat flow — this informed some of our thinking around inline applets.
- Random Access Navigation → Assist founder Shane Mac introduces a new model for conversation design that moves away from decision trees.
- Conversational AI and the road ahead → It’s critical to think about the difference between Natural Language Processing and Natural Language Understanding (NLU) when it comes to building a conversational interface.