Thought experiments in search: an example of design thinking
“Searching” is the fourth single from the album Elegantly Wasted by INXS. It’s also how you navigate large online inventories of categorised products, and that has become a big part of Sainsbury’s business.
But what if we said (based on some observation of people using the Sainsbury’s website), that people are bad at searching?
Like all first-approximation statements about human behaviour, it’s a massive generalisation. Like saying men are good at DIY. But you have to start somewhere if you want to understand what motivates people to do things.
So if people need help searching, what then?
The current hypothesis
It’s fair to say that our “incumbent assumption” — on which our online shopping experience is broadly based — is that customers know what they want.
This view of search says that you enter the name of something you have already decided to buy, find it, and put it into your basket. We also tacitly assume that in doing so you intend to buy it: to check out and go.
We might even call this the “classical model” of online shopping, in which the “finding” part centres on the search page.
But a big part of UX is to ask the stupid questions.
The stupid questions are the most powerful questions. When you ask why is it this way? That’s where the power is, when you push. And that’s the breakthrough that leads to great radical innovation. — Don Norman
Do we believe that people know what they want? Is that why our customer experience is as it is?
Signals: inflection and reflection
At Sainsbury’s, we’re lucky to have access to lots of good quality analytics data. This is useful to help confirm our “models” of customer behaviour (but remember data alone can only tell us what is happening, not why).
Sometimes, we can see disconnects between what that data says and what we think are the reasons for things. In fact, part of the reason for writing this article is that right now there seems to be a disconnect between our data and what you would expect from the classical model.
The data shows customer uncertainty in the form of changing basket contents, the use of filters and of “favourites”, amongst other things. There are also clear problems with filtering itself, which by definition removes things from view and so reduces serendipity.
The more you dig, the more the incumbent assumption that people know what they want seems problematic in terms of product availability, interaction, decision making and utility.
Testing the idea of a new thing
So if we ask the stupid questions, one place that might lead us to is a “recommendation” journey.
Obviously, there are going to be various ways that such a journey might be designed. But for the sake of a worked example, let’s say it’s a system that looks at who you are and what you like, and shows you a suggested shopping basket based on our understanding of other people’s shopping habits (Big Data! Like it’s 2010 all over again!).
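To make the thought experiment concrete, here is a minimal sketch of one common way such a journey might work under the hood: neighbourhood-based collaborative filtering, where we suggest products bought by shoppers whose baskets look like yours. Everything here is hypothetical and illustrative (the function names, the toy data, the similarity measure); a real system would use purchase frequency, recency and far richer signals.

```python
from collections import Counter


def jaccard(a, b):
    """Jaccard similarity between two sets of product names."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def suggest_basket(history, target, k=3):
    """Suggest products for `target` from the k most similar shoppers.

    history: dict mapping shopper -> set of products they usually buy.
    Returns products that similar shoppers buy but `target` doesn't,
    ranked by how many of those neighbours buy them.
    """
    mine = history[target]
    neighbours = sorted(
        (s for s in history if s != target),
        key=lambda s: jaccard(mine, history[s]),
        reverse=True,
    )[:k]
    votes = Counter()
    for s in neighbours:
        votes.update(history[s] - mine)  # only products target lacks
    return [item for item, _ in votes.most_common()]


# Hypothetical shopping histories (illustrative, not real data)
history = {
    "alice": {"milk", "bread", "eggs", "tea"},
    "bob": {"milk", "bread", "eggs", "coffee"},
    "carol": {"milk", "bread", "jam"},
    "dave": {"dog food", "batteries"},
}
print(suggest_basket(history, "alice"))
```

Even this toy version illustrates the pitfall discussed below: the output is a list handed to the customer, with no role left for them to play in finding it.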
But I have the feeling there would be some pitfalls in this theoretical “recommendation” design idea.
Part of the problem might be in the absence of agency…
The activity of search is complex, and that complexity is in turn stimulating. A good outline is given in this paper, which identifies 16 information-seeking strategies and 20 search intentions. These include learning from the data set, self-improvement and the exercise of information-“foraging” skills.
We are predisposed to work for results in any domain, and resent having results handed to us (at least too much). The psychological feedback our work creates is linked to mental health, self-worth and purpose, the latter strongly linked to happiness.
So the idea of simply being told what to buy — even if the system gives us variety — seems unfulfilling. Even the laziest customer at a fast food restaurant will spend some effort trying to decide from the limited choices on the menu.
Not a good idea after all?
Well, it depends… But wait — can’t we research this?
You’ll notice that none of the above has mentioned user testing 😉
The trite reason for this is that if it were possible to just ask or observe customers to obtain the right design for a search experience, we wouldn’t need UX designers.
But it’s also true that the only way we can acquire beliefs is by observation, which in turn needs interpretation on our part. Luckily for my example, the UX of search in a commercial context is a very rich area for design. We can’t avoid testing our stated hypothesis with design ideas, whether for buying a basket of groceries, toys for a child, or a TV for the living room; the commercial imperatives are too great not to.
But whatever our observations, we have to think through the meaning of our ideas and why we believe they are or are not working.
So what?
Aside from showing that user experience design is eternally difficult, the point of all this is to show that it’s our beliefs and not the design interventions based on them that really matter.
Putting it another way: ideas are easy to come by and mostly fairly obvious. But the beliefs that give rise to them are harder to recognise and challenge. We must “ask the stupid questions”.
So in the case of search, while any particular approach doubtless has its local maximum, experimenting with the “bad at searching” idea (while preserving the satisfaction that comes from earning the results) might be worth it, in case the incumbent hypothesis isn’t as strong as we think.