Designing Contextual AI in the Browser & Beyond

Emily Sappington
Jul 11, 2018

When many people think of AI interfaces, they think of chat or voice interactions, which are useful modalities for interacting with larger agents. For more scoped, scenario-focused AI applications, however, a conversational agent may not be the best way to help users complete tasks. Our users do research online, so I wanted to show them how we build our understanding over time, while also providing a simple, playful interface to navigate.

At Context Scout, we’ve just launched our web extension, the first publicly available AI for the web browser that aids knowledge work online. Our beta product currently serves recruiters and salespeople who spend hours a day doing research online. Starting in a single sector prepares our tech to build an intelligent browser by scaling up industry by industry. Unlike at larger companies venturing into AI (such as my last team, Cortana), a startup like ours needs to be strategic about scale in both sector and design, rather than boiling the ocean. I believe scale should be a measured part of the design and navigation, and the contextual capabilities in this user experience enable that future expansion.

But first, a nod to our tech

On the left is a screenshot of one of our knowledge graphs; on the right, a design of how the extension presents that same information about a company, running over the company’s homepage and pulling in insights from other websites.

Our software is rooted in knowledge graphs and in our ability to create them for users as they browse. Explaining the intricacies of this technology would detract from how playful and simple Context Scout is to use, but I still wanted to reference it. My design takes on a molecular quality, with categories orbiting one focal topic the same way our knowledge graphs work. As users browse, our knowledge graph grows, and the categories around the subject of their research grow with it.
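To make the metaphor concrete, you can think of the graph as one focal subject with orbiting category nodes that appear and fill in as browsing surfaces new findings. The sketch below is purely illustrative (all names are hypothetical, not our actual implementation):

```typescript
// One focal topic with orbiting categories, grown as the user browses.
type Category = string;

interface KnowledgeGraph {
  focus: string;                    // the subject of the user's research
  orbits: Map<Category, string[]>;  // category -> facts found so far
}

function createGraph(focus: string): KnowledgeGraph {
  return { focus, orbits: new Map() };
}

// Called as browsing surfaces a new finding; returns true when a
// brand-new category "pops in" around the focal point.
function addFinding(g: KnowledgeGraph, category: Category, fact: string): boolean {
  const isNew = !g.orbits.has(category);
  if (isNew) g.orbits.set(category, []);
  g.orbits.get(category)!.push(fact);
  return isNew;
}
```

In this framing, the UI animation and the data structure share one event: the first fact in a category creates a new orbit, and later facts make an existing orbit grow.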

In games like The Sims, certain objects can be the subject of various actions a user can take, as seen in the example below. We currently use our circular tabs to control a view of the content, some of which is actionable. Once Context Scout expands to more domains and sectors, we’ll add more relevant actions to those categories as well. This allows our technology to scale beyond simple websites, which we think sets us up well for the future. What gets me excited as a designer is the uptick in use we saw as our users (who spend hours researching on the same few websites) discovered our more playful way to navigate information. Most AI interfaces like Siri, Alexa, and Cortana offer very open-ended input and wait for the user to initiate interactions. Context Scout’s interface instead lets you discover what contextual actions are available for a particular subject or object, rather than leaving you to wonder, “can I really say anything to this?”

Here in The Sims, users are shown the contextual actions that can be taken on a refrigerator, similar to the way we see Context Scout working, only aided by contextual search.

Our aim with Context Scout, as in The Sims, is for the orbiting buttons to show users their options, leaving little to guess at. Guesswork is one of the major pitfalls of natural language assistants, and one reason most people use Alexa as little more than an alarm clock.
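The Sims-style idea can be sketched as a simple lookup: each recognized type of subject exposes a fixed menu of contextual actions, so the interface only ever shows what is actually possible. The types and action names below are hypothetical, not our real taxonomy:

```typescript
// Each recognized subject type maps to the contextual actions it supports,
// the way a Sims refrigerator offers "Have a Snack" or "Serve Dinner".
type Action = string;

const contextualActions: Record<string, Action[]> = {
  company: ["View Funding", "Recent News", "Key People"],
  person:  ["Work History", "Social Media", "Mutual Connections"],
  product: ["Reviews", "Price History", "Alternatives"],
};

// The UI shows only the actions defined for the subject on screen,
// so the user never has to guess what they can ask.
function actionsFor(subjectType: string): Action[] {
  return contextualActions[subjectType] ?? [];
}
```

An unrecognized subject simply yields an empty menu rather than an open-ended prompt the user has to probe by trial and error.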

Making knowledge work fun

Business software is going through a renaissance of more delightful design and clearly scoped feature sets. Slack is a great example of this, and a startup we look up to at Context Scout. Most of our users spend hours a day searching, researching, and vetting information online, and our interface makes those research discoveries a more playfully gamified experience than traditional search aids do. We chose playful animations both to anticipate the information we’re hunting down and to reveal more of it as it arrives.

Context Scout’s logo as a loading state

Here our new logo animates like origami as our software loads and finds more content. It’s a metaphor for unfolding to see more of the web, revealing a clearer picture over time.

Next is an example of how our categories pop into view as Context Scout adds to our knowledge graph and builds that circular web of information.

This is the Context Scout button. It floats on top of the websites you visit, and its central focal point changes depending on the subject of the webpage you’re on. As you scroll or read, categories pop in to let you know we’ve found information from other websites.

Where the technology is going

Creating scalable tech doesn’t stop with the back end; it extends to envisioning, and then planning for, where the product can go. In larger companies, this is important to avoid users learning and re-learning interfaces as the tech evolves; in smaller companies, it’s about helping users and investors take the leap with us on where the tech will lead. As we discussed using Context Scout to offer contextual actions to users, we also thought about how else our software can help people. If, for example, you were using Context Scout to generate leads at work, the focal point of your task might be a particular company. In your free time, however, that focal point could be a bag you’re thinking of purchasing. The modular interface, like our knowledge graphs, grows and adapts based on each individual task you’re doing online.

A user is shopping for a bag on Amazon. Context Scout offers a variety of options in the side panel, one of which is Social Media (content we already source in our beta), focused solely on this product and providing images and feedback on it.

Scaling modalities

Rather than teach and re-teach users how to use AI elements as the UI changes to adapt to new technologies, I think it’s important to lay the groundwork early. Augmented and mixed reality are directions we can see Context Scout going in the future. When we think about the future of the web, we have plans to overlay content we find directly on pages. What we are building is an intelligence layer, and as we work next toward UI on top of (not just alongside) your content, we chose to envision where that could take us. Below is an exploration of how this design could recognize images in an environment, and how users could interact with the information we source.
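One way to picture the "on top of, not alongside" idea: an insight is anchored to the bounding box of a recognized element, whether that element is a DOM node on a webpage or an object detected in an AR scene. This is a speculative sketch with hypothetical names, not a description of our shipping code:

```typescript
// Place an insight bubble relative to a recognized element's bounding box.
interface Rect { left: number; top: number; width: number; height: number; }
interface Insight { label: string; text: string; }
interface OverlayPlacement { x: number; y: number; insight: Insight; }

// Anchor the bubble horizontally centered on the element, floated just
// above it by a small gap, so the insight sits on top of the content.
function placeOverlay(target: Rect, insight: Insight, gap = 8): OverlayPlacement {
  return {
    x: target.left + target.width / 2,
    y: target.top - gap,
    insight,
  };
}
```

Because the placement logic only depends on a rectangle, the same layer could, in principle, follow a recognized player on a field just as it follows a heading on a homepage.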

Concept artwork of a user wearing an MR or AR headset, so they can still see the sports game in front of them. Context Scout recognizes images and text it can work with to deliver insights, some actionable, some factual. In this case, the user has selected the baseball player up at bat to see statistics about their performance.

This is just the start. Soon we’ll announce our plans for more contextual insights tied directly onto content on the page. For now, we’re giving our users a more playful way to interact with the information we source for them, growing our knowledge graph, and expanding their research as they explore the web. The shared metaphor of our knowledge graph technology and the molecular growth of new categories is something I feel is core to our company identity, and thus our product identity. In completing Context Scout’s UX update, we have a solution that is scoped for current users’ needs yet also sets up a framework for scale in both sector and modality: a win-win for any startup.

— —

Want to try the product? Context Scout currently works on Twitter, LinkedIn, GitHub, StackOverflow, AngelList and more. Or, connect with me to talk all things design & AI.
