Is Search Broken?
Google, through its single-line query function, has been our gateway to the Internet for as long as we have needed to digitally search the web for information. But what if the way we find information today is slowly priming us to adopt a method of search that may not scale to our needs?
How does search have to evolve to keep pace with the next billion users the Internet is bringing on board?
Our reliance on Google to access information is so heavy that most of us have become part-time experts at ‘Googling’ prodigiously. We start entering words to run a search, use auto-complete suggestions to tailor it, scan the results and then re-run searches with a slightly different mix of words to more closely surface what we’re looking for.
Effectively, we have intuitively internalized how Google’s Search works, and we enter queries in a manner that aids surfacing a piece of information. With every single search we make, we are building upon a process of trial-and-error that guides us to input just the right mix of keywords which we believe will surface the most relevant results.
We have become sophisticated searchers by course-correcting on previous countless searches.
As digital pervades our lives, more and more of the information we search for (that restaurant review or news article or funny GIF or YouTube video or friend’s wedding picture) lives on the Internet. Search functions to augment human intelligence by bringing information to our fingertips. As we try to reach further into and more firmly grasp this information, the way we express our search queries needs to keep pace.
Google’s original keyword-driven search and link-authority based approach to helping users find information was a breakthrough in surfacing and ranking relevant information. This approach has continuously evolved to maintain quality in search results while factoring in changes in how people are using the Internet and the complexity of search queries. Some of the major changes have included results driven by user-personalisation, auto-complete style instant searches, inclusion of real-time information, localising search by integrating with maps and adding places and social signals.
The biggest change, however, has been the establishment of the Knowledge Graph. Announced in 2012, this has been Google’s first major step in semantic search. Essentially, through the Knowledge Graph, Google goes beyond keywords to classify objects into different categories and illustrate the relationships between these objects. The Knowledge Graph and its relationships appear in a panel on the right side of the user’s screen when searching for a well-known individual — e.g. searching for Barack Obama surfaces his birth information, his family relations, other politicians people also searched for, and so on.
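The core idea — typed entities connected by named relationships, queried by entity rather than by keyword — can be sketched in a few lines. This is a hypothetical toy model for illustration only, not Google's actual data model or API; the entity names and fields are assumptions.

```python
# Toy sketch of the idea behind a knowledge graph: entities are typed
# objects, and each field names a relationship to a value or another
# entity. A side panel surfaces these facts directly instead of links.
knowledge_graph = {
    "Barack Obama": {
        "type": "Person",
        "born": "August 4, 1961, Honolulu, Hawaii",
        "spouse": "Michelle Obama",
        "office": "44th President of the United States",
    },
    "Michelle Obama": {
        "type": "Person",
        "spouse": "Barack Obama",
    },
}

def panel_for(entity):
    """Return the facts a side panel might show for a known entity."""
    facts = knowledge_graph.get(entity)
    if facts is None:
        return None  # unknown entity: fall back to plain keyword results
    return [f"{key}: {value}" for key, value in facts.items() if key != "type"]

print(panel_for("Barack Obama"))
```

Note that the lookup is still keyed on a single input string — which is exactly why, as described below, the wrong entity can be surfaced when the query is ambiguous.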
While the Knowledge Graph is a step into semantic search, we are ultimately constrained not just by the limited space of the query box but, more importantly, by its one-time input.
Amidst researching sources for this piece and looking for well-known individuals to illustrate the Knowledge Graph (above), I searched for the founder of YourStory, Shradha Sharma — I was greeted by an ‘intelligent’ panel based on the Knowledge Graph which showed me details of an emerging singer from Dehradun, talented no doubt, but with nothing to do with YourStory. The single-input function is thus not able to effectively understand the larger context within which I’m searching.
For search to be truly semantic and increasingly relevant, it is important for the search engine to understand the intent and context within which a search is taking place. Limiting input to a single-line query naturally restricts user input and guidance that can help provide intent and context.
At its bare basics, the function of search is to find information.
When we have a conversation or interact with somebody, we follow a strand of conversation that starts in a broad domain, and with each back-and-forth question and answer, we use and build upon context to find more relevant information.
We also use this growing contextual framework to understand the intent of the other person. Whether somebody is looking to buy something, learn more about a topic, understand the relationship between certain topics or browse information to make a more informed decision, intent becomes increasingly important in keeping the conversation relevant.
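The mechanism described above — each turn refining the query against everything said so far, rather than treating each input as an isolated single-line query — can be sketched minimally. This is an assumed illustration of the principle, not any shipping product's implementation:

```python
# Minimal sketch of conversational search: the session accumulates
# context across turns, so each new query is interpreted against the
# whole conversation rather than on its own.
class ConversationalSearch:
    def __init__(self):
        self.context = []  # keywords gathered from earlier turns

    def ask(self, query):
        """Add this turn to the context and return the effective query.

        A real engine would rank and return results here; this sketch
        just shows how the effective query narrows with each turn.
        """
        self.context.append(query)
        return " ".join(self.context)

session = ConversationalSearch()
session.ask("founders")
session.ask("YourStory")
print(session.ask("Shradha Sharma"))
```

By the third turn, “Shradha Sharma” is no longer an ambiguous name but a query scoped by “founders” and “YourStory” — exactly the disambiguation the single-box search above failed to make.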
Let’s look at a few newer approaches to search to understand the extent to which context and intent augment search:
Google Now: Baked right into Android, Google Now plugs into and links together data sources like location, email, calendar, app usage and app indexing to present intelligent information in a card view as and when needed. Google Now is a big leap in understanding and accommodating the passive and general context of a user’s requirements.
Wolfram Alpha: Unlike Google, this ‘computational knowledge engine’ fetches the answer to queries inline from external data sources by computing and extracting the relevant information.
Vurb: Like Google Now, Vurb curates information around specific objects into cards. A social element helps users collaborate on plans, but the focus for Vurb is curated organisation of information, without a capacity for understanding context or intent.
In-app searches: By their nature, in-app searches provide a closed system which gives a broader domain/context within which the search is occurring. E.g. searching within the Wooplr app assumes that the user is looking for apparel rather than documents. The intent of such a search can still range from discovery to information to transaction and beyond.
Most of the above approaches are evolutions along the lines of curating and organising information better in terms of categorising objects and drawing relationships between them. None of these approaches delve deep enough into understanding context and intent as a natural conversation would.
Now until Oculus (read “Facebook”) has us plugged into a virtual reality, indistinguishable from the real world, where Siri or Google walk around with us (I personally imagine them as dogs), ready to strike up a conversation (I shudder to think of what happens when that person you hit it off with at a bar is really just a Durex ad) and give us any information we need, what does the intermediate bridge to that reality look like?
Well, through its Search API, Apple is gaining access to more and more information that is hidden behind apps. Siri, which can understand broad contexts from location, time and historical behaviour, will start to double down on narrower contexts and understanding of intent by having conversations that mimic the back-and-forth of human interaction. This eight-second video is a glimpse of the power of such interactions.
But, more seriously, is this type of natural search something that can scale for these next billion Internet users?
Siri has not proliferated in the Indian market and is unlikely to in the immediate future. Localisation to a substantially different audience gets more complex in a conversational framework, which necessitates specificity. The accessibility of search to the next billion users of the Internet is thus also increasingly important.
This debate about search is thus not only about how to make search increasingly relevant, but also about making the gateway to information much more accessible for the next billion users.
If Google is priming us to conduct searches a certain way, then invisible barriers are being created for people who do not have the same intuitive understanding of how to surface information correctly. The behaviours of this new generation of direct-to-mobile internet users are raw and new to us; a large number of them are on Facebook and WhatsApp with no idea that they are even using the internet.
With WhatsApp being one of the most popular products in India, it is encouraging to see the barriers to entry for consumers lowering and commerce starting to happen through similar chat interfaces. A strong indication of the natural inclination to converse in India is the number of startups that have popped up this year built upon conversational user interfaces, particularly as assistants in the e-commerce space. The rapid adoption of Conversational UI products is a promising positive.
I believe a system that encourages the buildup of context and intent is a prerequisite for the next stage in the evolution of search. To enable this, search needs to be built upon a Conversational UI, which most closely resembles how humans effectively converse with each other for information.
Siri is a step in that direction, but how do you build truly semantic search for the next billion users of the Internet?