Slow search, fast search

Integrating human, organic rhythms in information seeking systems

There used to be a time when I sat down with a pen and a piece of paper, and I would think.

Now I use a keyboard and I search.

And I used to struggle with the emptiness of a white piece of paper. That’s when thinking happens, the kind of thinking that is slow, deliberate, and effortful. Sometimes, maybe even logical.

It seems like an odd quirk that the only thing that didn’t change over the years is the emptiness of a white search box. Everything else changed: the algorithms, the interaction, even what search really means. But that white search box still forces me to slow think sometimes.

“Remember what it was like to search in 1998? You’d sit down and boot up your bulky computer, dial up on your squawky modem, type in some keywords, and get 10 blue links to websites that had those words.”

Amit Singhal, senior vice president of search at Google

That basic idea of putting in a couple of keywords and getting back results that contained those exact matches worked pretty well in the early days. Until somebody figured out that relevant documents often have relevant neighbours. This idea, known in the industry as “fish search”, is what turned keywords into a kind of digital magnet, pulling on other data structures to create a space of meaning, which the machine can then interpret.
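The core of that “relevant neighbours” intuition can be sketched as a crawl that spends more energy on the links of pages that match the query. This is a minimal toy version, assuming a hypothetical link graph (`get_links`) and relevance predicate (`is_relevant`); the real fish-search heuristics are more involved:

```python
from collections import deque

def fish_search(start_urls, is_relevant, get_links, depth=3):
    """Depth-limited crawl that explores the neighbours of relevant
    documents more eagerly: a relevant page renews the crawl energy
    of its children, an irrelevant one drains it."""
    queue = deque((url, depth) for url in start_urls)
    seen, results = set(start_urls), []
    while queue:
        url, energy = queue.popleft()
        relevant = is_relevant(url)
        if relevant:
            results.append(url)
        # children of relevant pages get full energy; others decay
        child_energy = depth if relevant else energy - 1
        if child_energy > 0:
            for link in get_links(url):
                if link not in seen:
                    seen.add(link)
                    queue.append((link, child_energy))
    return results
```

With a toy graph where page `a` links to `b` and `c`, the crawl still reaches a relevant page `d` hiding behind the irrelevant `b`, because `a`’s relevance keeps the energy up.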

Going from strings to things, search is now ubiquitous. From text-based, to voice recognition, to visual matching, search has evolved into a context based system that can recognise and understand references to actual “things,” i.e. ideas or entities. And with that, the interaction changes as well.

Now, instead of inputting unnatural, mechanistic queries, you can ask almost-human-like questions. Instead of handling the system, you talk with the system.

“The future of search is a conversation with someone you trust”

— John Battelle, The Search

There’s something vaguely synaesthetic about talking in writing. As unnatural as that feels, we all adapt by transitioning into what Kahneman calls slow thinking, which activates the conscious, logical bit of the brain, because the fast part, the more primal, instinctive and emotional one, needs help.

I know how to talk to another person, but I don’t intuitively know how to talk to a computer. So I need to think. About keywords, queries, algorithms and generally about how the system works.

Talking to a machine like talking to a human is — for now, at least — still a thing of the future. The vision is a dialogue between an agent and a human, in which they know the context, proactively suggest things and interrupt each other. Like humans do.

But the truth is that we’re not quite there yet, and I’m not always ready to think about the mechanics of it all. For now, we’re stuck in a temporary loop, constantly switching between slow thinking and fast thinking, forced by the design of our machines. And that continuous swap has a rhythm of its own.


There used to be a time when the only things gravitating around my words were my thoughts. Now, each letter I type in attracts swarms of algobots that organise themselves around my words. And it’s getting harder and harder to read what they’re doing. But maybe there’s a way to feel what they’re doing.

Getting what I say has a lot to do with integrating technologies like speech recognition, natural language processing and semantic autocomplete, and getting them to align with the context I’m in.

The system takes into account the social, cultural and organisational settings in which computing and information technology are used. It uses contextual awareness to semantically infer meaning, to understand intent and to respond accordingly.

From location, time and task awareness algorithms to reasoning, planning and learning algorithms, they all need to work together and align with semantics, in ways that are not human readable.

And that’s ok sometimes. I’m not always thinking of the inner workings of the system. On the contrary, most of the time I start a search with whatever is available to me as a trigger: a sound, an emotion, a physical characteristic, a moment; it could be anything.

I start a search by thinking fast. And that’s usually when search fails: when my input is the kind you’d normally find in the long tail. But with things like semantic autocomplete and semantic snap-to-grid, the system can fail gracefully, guiding me into a slower way of thinking, so I can type in words that a machine can understand and process. Or words that others have typed in millions of times. It guides me back to the fat head, so I don’t have to slow think anymore.
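That snap back to the fat head can be pictured as nothing more than ranking completions of a partial query by how often others have searched them. A minimal sketch, assuming a hypothetical query log shaped as a `{query: frequency}` dict:

```python
def snap_to_grid(partial, query_log, k=3):
    """Suggest the k most frequent logged queries that could
    complete the partial input, nudging a long-tail phrasing
    back toward the fat head of common queries."""
    matches = [(q, n) for q, n in query_log.items()
               if q.startswith(partial.lower())]
    # most frequent first; alphabetical as a tie-break
    matches.sort(key=lambda qn: (-qn[1], qn[0]))
    return [q for q, _ in matches[:k]]
```

A real system would match semantically rather than by prefix, but the design choice is the same: the suggestions come from where the crowd already is, not from where my fast thinking started.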

Paradoxically, it makes me think so I don’t have to think. And the way that unfolds is a bit like a sine wave, that repeats each time I perform a search. So, there’s an important aspect to this type of interaction that we, as designers, should be talking more about: rhythm. A concept that is neither visual, nor linguistic, but essential to our daily lives.

In thinking, fast and slow, we are creating rhythmic patterns. If those patterns match the ones our technology imposes, the interaction becomes fluid; it disappears from your conscious mind.

Good design communicates with the broader, faster, more emotional system.

— Joichi Ito, Director, MIT Media Lab

A sense of rhythm is fundamental for any interaction, any conversation, even one with a machine.

In real world conversations, we leave out the context under the assumption that our communication partner knows the context as well. Same goes for conversations with machines. Only that it’s the machine’s job to fill in that context.

And now that we’re teaching our machines to process information contextually, maybe we should start thinking about ways of teaching them how to use that information organically.

We’re already creating interfaces that send information to, and receive control signals from, our fast system. Wearable sensors, assistive robots, embodied technology in general: all have the potential to enrich our words by adding to what we’re already collecting, which amounts to at least 57 data signals even if you’re not logged in.

So maybe rhythm can be expressed as an orienting feature, rather than just another datapoint, for structures in information seeking systems. As a descriptive force that, once you determine a rhythm’s function, can be used to make educated predictions about how it’ll manifest in a variety of situations, like the daily activities that occur in regular patterns.
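Even the crudest version of that predictive use is easy to picture: from the hours at which past searches happened, pick out the hour a person most often searches. This is a toy stand-in, with invented inputs, for treating rhythm as an orienting feature rather than raw data:

```python
from collections import Counter

def dominant_rhythm(search_hours):
    """Return the hour of day at which searches cluster most,
    given a list of hour-of-day values (0-23) for past searches.
    Ties break toward the earlier hour."""
    counts = Counter(search_hours)
    hour, _ = max(counts.items(), key=lambda hn: (hn[1], -hn[0]))
    return hour
```

A real system would look for periodicity across days and weeks, not a single modal hour, but the shape of the prediction is the same: the pattern, not the datapoint, is what gets used.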

But so does the cyclical balance of our slow and fast minds. And the transition from one state of mind to another gets reflected in our interaction with the system. So rhythm needs to be embedded in the interaction, at the level of the interface.

If we can figure out how to do that, maybe then we can talk about a real conversation between a man and a machine.