What could you do with a virtual research assistant?

You've most likely heard the stat before: the amount of data generated keeps growing exponentially each year, with the most recent figure I read claiming that roughly 90% of the world's data was created in the last two years*.

While that is an impressive measure of how far we have traveled since the early days of computing, I also get a foreboding sense that it will become overwhelming in the near future. I mean, with these mountains of data ready and available, how will you quickly find the right information?

The answer goes beyond our favorite search tools and starts to reshape our understanding of what search really is. Under the current keyword-based paradigm, we quickly run into limitations when trying to capture the context of a question. For example, if you search for “Jaguars”, are you interested in the car brand? …the animal? …the football team?

Keywords require you to be very specific, which becomes problematic since you can only be as specific as your own knowledge of a subject. Keywords also reveal very little about your intent. If you turned to your buddy and simply said “Jaguars”, I would be very surprised if you got the response you were looking for without some form of clairvoyance; but if you asked “Can I outrun a jaguar?” then we can start to piece together the real request. As if that weren't enough, there are still major limitations on the types and sources of data that can be referenced. Search engines have been steadily improving here by letting other mediums float to the top and suggesting alternate searches, but everything is in the public domain. What about the mountains of private or sensitive information that you can't open up to the outside world?

We started exploring solutions to this problem with the search technology company Enlyton, developing virtual research assistants that help people find the right information within specific areas of knowledge. The core of the assistant lies in Enlyton's ability to create searchable datasets from almost any source, structured or unstructured, private or public, while using natural language to find connections in a contextual way. This means they can curate a list of datasets and create robot experts for any subject. A great example is patent research: by collecting datasets from issued patents, patent applications, technology news, and business news sources, we can cut the time to research a technology down to minutes, uncovering and visualizing meta-information that would previously take weeks to put together manually. That was just the backend, however; they still needed the right interface for users to interact with, one flexible enough to fit multiple industries and use cases.

We started very simply, but through ongoing experiments, beta applications, and user feedback we introduced more complex interactions that were familiar to users yet more useful than the legacy search experience.

Here are some of the lessons learned:

It’s hard to escape the keyword mentality

This has been ingrained in users' heads since the search engine was born; changing it is like learning a new language and requires a completely different experience. People are so accustomed to the single search input that they go on autopilot if that is the main way to interact. To prevent this we took two routes: creating a search that was more conversational through a chatbot-style interaction, and creating a wizard-style interaction that splits user input into steps. Each interface had its own benefits, but it really came down to the type of data and how prescriptive we could be. Data that was easily categorized worked incredibly well in the wizard, allowing us to pinpoint where to look even further. We still haven't fully moved people away from keywords, but I see this becoming less of an issue as machine learning and VR/AR become more mainstream and interaction habits change.
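To make the wizard idea concrete, here is a minimal sketch of how step-by-step answers can be combined into one structured query before the keyword input ever happens. The step names, category options, and query shape are illustrative assumptions, not the product's actual flow:

```python
# Illustrative wizard-style interaction: each step narrows the search
# scope (with a fixed set of choices) before the user's final question.
# Step names and options below are hypothetical examples.
WIZARD_STEPS = [
    ("dataset", ["issued patents", "patent applications", "tech news"]),
    ("timeframe", ["last year", "last 5 years", "all time"]),
]

def build_query(answers, question):
    """Combine the user's step-by-step answers into one structured
    query, so the backend knows exactly where to look."""
    filters = {}
    for name, options in WIZARD_STEPS:
        choice = answers.get(name)
        if choice not in options:
            raise ValueError(f"step '{name}' needs one of {options}")
        filters[name] = choice
    return {"filters": filters, "question": question}
```

Because every step constrains the input to known categories, the backend can be far more prescriptive than with a free-form keyword box.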

Nothing beats a two-way conversation

This isn't a big surprise for those of us working with and fascinated by the latest technology, but it's still worth noting. Two-way conversations with software are changing the way we live and work. Our interactions are becoming more conversational, and while still not perfect, you may not always know whether you are interacting with software or a human. This fit our use case perfectly: we needed users to provide information as if they were talking with another human rather than typing queries into a search engine. By making the interface chat-based we inherited the habits people already use with applications such as Slack or SMS, without needing to teach a new skill.

For the virtual assistant we integrated the IBM Watson Conversation API to help us start this conversation with our users. This allowed us to begin with a very simple dialog tree and expand quickly, learning from searches, maintaining context, and tweaking responses. One important lesson learned is that navigation within a chat environment is very different from navigation in a web application. Everything needs to be easily digestible within the chat frame's viewing area, which means thinking through how a user should interact with a listing, action, or link without disrupting the session.
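The "maintaining context" point is worth a quick sketch. In Watson-style conversation services, the dialog state lives in a context object that the service returns with each response; the client echoes it back on the next turn. The helper below only builds the request bodies (the transport and any workspace IDs are left out as deployment-specific details):

```python
# Minimal sketch of carrying conversational context across turns,
# following the request/response shape of the IBM Watson Conversation
# v1 "message" API. Field names are from that API; everything else
# (how you actually send the request) is omitted.

def build_message_payload(user_text, context=None):
    """Build the JSON body for one conversational turn. Echoing the
    context from the previous response is what lets the assistant
    remember where the user is in the dialog tree."""
    payload = {"input": {"text": user_text}}
    if context:
        payload["context"] = context
    return payload

def next_turn(previous_response, user_text):
    """Carry the dialog context from one response into the next request."""
    return build_message_payload(user_text, previous_response.get("context"))
```

Forgetting to echo the context back is the classic mistake here: the service then treats every message as the start of a brand-new conversation.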

Get to the point, but also give alternatives

Speed is the primary goal of a search application: the faster you can give users exactly what they want, the more effective the tool is. However, like most search interfaces, you may have many results that are related and potentially valuable to the searcher. The balance is to give the best answer while offering alternatives, without confusing or overwhelming the user. In the virtual research assistant there are a couple of areas where we try to get to the point. First, if a user asks a question and we have what we believe is the right answer, it gets displayed more prominently. Second, a raw search result can be pages of information to sift through. To simplify this, we leveraged Enlyton's Hotspot technology to highlight the applicable content within the original document, so the context is maintained and can be quickly scanned.
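The "best answer plus alternatives" presentation can be sketched as a small piece of logic: feature the top result only when its score clears a confidence bar, and cap the alternatives list so it never overwhelms the chat frame. The 0.8 threshold, the result shape, and the cap of three are illustrative assumptions, not Enlyton's actual scoring:

```python
# Illustrative "featured answer vs. alternatives" split. Results are
# assumed to be dicts with a "score" field; threshold and limits are
# made-up values for the sketch.

def present_results(results, threshold=0.8, max_alternatives=3):
    """Split scored results into one featured answer (shown
    prominently) and a short list of alternatives, so the user sees
    the likely answer first without losing related documents."""
    ranked = sorted(results, key=lambda r: r["score"], reverse=True)
    featured = ranked[0] if ranked and ranked[0]["score"] >= threshold else None
    rest = ranked[1:] if featured else ranked
    return {"featured": featured, "alternatives": rest[:max_alternatives]}
```

When no result clears the threshold, everything stays in the alternatives list, which is an honest way of telling the user "here's what we found" rather than overcommitting to a weak answer.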

I think the most exciting part of the solution is how universally it can be applied. Whether in health care, law, customer support, education, you name it: a better way to find information means more time to solve the bigger, more important problems.

We will continue to iterate and introduce new ideas with Enlyton as different users begin to adopt the application, and we are excited to see how the technology progresses.


Interested in learning more about the Enlyton search platform's capabilities or in creating your own virtual research assistant? Visit enlyton.com for more information.

Want to build something new or integrate your platform with these services? Visit Mashbox.com for more information.

Michael
@encryptomike

SOURCES:
*https://www-01.ibm.com/software/data/bigdata/what-is-big-data.html
