Improving the usability of multi-selecting from a long list

Selecting multiple items from a list has never been a pleasant task. There are some innovative solutions that work okay on big screens, but they are usually a nightmare to use on mobile devices. Here’s the story of how I found a solution to improve tag selection for our providers.

Zina Szőgyényi
Tripaneer Techblog
8 min read · Jun 13, 2018


The existing UI solutions

When you search for multi-select UI solutions, you usually find similar patterns: give users a drop-down they can either scroll through or search in, and display the selected items as pills in the input field. Like this.

Scrolling and searching through a long list

Or you get lists of checkboxes hidden inside a drop-down. Or two lists, where you move your selection from the left one to the right. Like these.

Hybrid Dual List with Filter solution from doejo

Our system already had a component from Select2, similar to the first example. Our developers started using it in multiple areas, including the listing creation process, which is made up of multiple steps; right on the first step we ask our providers to pick 4–6 tags relevant to their listing from a very long list.

The problem with searchable multi-select

The list we display to our providers is very long. It contains about 300 items, most of which they are not familiar with. From screen recordings I saw that (1) users scrolled only a few times, then gave up on scrolling through the whole list, and (2) they started searching for what they had in mind instead.

The multi-select pill box is a good solution when the user is familiar with the content of the list and knows what they’re looking for. They can easily find items by searching, or by scrolling straight to the relevant part of the list.

When the content of the list is unfamiliar, there are multiple issues. Scrolling through a list this long and reading every item is extremely tiring. No one does that. Especially when it’s not even sorted alphabetically.

On top of that, their search doesn’t return any results 50% of the time, either because they use a different phrase or word order than the strings in the list, or because the content simply isn’t there. After a bit of trial and error they give up and move on without picking the most relevant tags.

From our content quality team I heard that the overall quality of the picked tags was really low: they were very generic and there were too few of them. It took a lot of extra research and time to help out the providers and improve the content quality and ranking of their listings.

First idea for improvement

I was seriously considering the “Hybrid Dual List with Filter” solution, as the options are exposed (easier to read) and the selection is clearly visible in the list on the right. But how would it work on mobile? We develop with a mobile-first approach even for our providers, and this solution didn’t fit.

What else? Let’s expose all the options to the providers in the form of a tag cloud, like Foursquare does with “tastes”.

Select your tastes with Foursquare

With this improvement, we hoped that it would be easier to find the relevant tags and select them. We were aware that the tag list was still too long and unstructured, but it was a small enough step to take and start learning from.

The before state — watch the size of the scroll bar
The after state — all items exposed

This was solely a UI change; no data or sorting in the backend was touched. As you can see in the example, the “Categories” are not ordered alphabetically, which concerned me a bit. Would it be good enough?

Before putting it in front of our customers, I gave both versions to colleagues along with 6 items to pick (taken from a real-life example in a screen recording) and measured the time it took them to pick those items.

I deliberately gave them the items phrased the way the provider had searched for them, not the way they appear in the list.

In the old version, 3 of the 5 colleagues gave up on the task, and the other 2 took over a minute to finish. In the new version they all managed to complete the task within a minute, using a combination of Ctrl+F search and scanning. That was a good enough result for me.

Learnings from this test

I exposed half of our providers to the tag-cloud-based feature for about a month (due to low traffic), but unfortunately there was no real improvement in either (1) the relevance of the picked tags or (2) the speed of finding them.

I had to accept that merely exposing the options was not enough, especially when the list isn’t even sorted. It requires a lot more focus and brain bandwidth, as the eye has to jump back and forth and up and down among the tags instead of following a pattern.

Iteration: the next step is a bit bigger

As my belief in exposing the options hadn’t changed, I had to find a way to bring order and pattern into the story. This required some changes in the infrastructure as well. Would it be worth it?

To keep the scope a little smaller, I reused an existing but unused information architecture design that grouped tags based on similarity and relevance. This saved us a few days of thinking, and I could jump into the iteration with my developer immediately.

We created reasonably sized groups with bold titles (so users can quickly scan those first and only look into the actual list if they find the group relevant) and neatly organized lists in equal-sized columns, which responsively changed width in percentage but kept the top-to-bottom reading direction.

At first I built the columns with display: flex;, which ordered the items from left to right instead of from top to bottom. The magic solution was column-count: 2;. Resolving this was a win on its own! (Click through to the JSFiddle for an illustration.)
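For illustration, here’s a minimal CSS sketch of the idea (the class names are made up for this example, not our actual markup):

/* display: flex fills rows from left to right; CSS multi-columns keep the top-to-bottom reading direction instead. */
.tag-group-list {
  column-count: 2;     /* split the list into two equal-width columns */
  column-gap: 32px;
}
.tag-group-list .tag-item {
  break-inside: avoid; /* keep a checkbox and its label together in one column */
}

On narrower screens a media query can drop column-count back to 1, which keeps the layout mobile friendly.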

The two grouped lists were even longer than the plain tag list though, so we decided to move these fields onto their own page instead of shoving them into a form with 4–5 other items. This also gave us the chance to monitor the new feature and its usage more closely.

Analyzing the result

After exposing half of our providers to the grouped, exposed list of tags (versus the multi-select pill box), I watched screen recordings to see how well this solution was accepted.

My impression from the sessions I watched was positive: people did scroll through the whole list, and only 1.93% of them got stuck on this page. I also wanted hard data on the number of tags picked (I wanted to see a clear improvement) and on the time it took to complete the steps (I wanted to see a decrease). Unfortunately, the experiment tool we use was built to measure changes in the customer conversion funnel, and it is not at all optimized to measure changes in the usability of a form in our admin area.

So I had to gather and analyze the data manually.

First I identified the listings created in the control group and in the treatment group.

For the control group (V0) I queried how many seconds each listing took to proceed from Step 1 to Step 2 (Step 1 contained the multi-select pill boxes), and the number of tags picked.

The treatment group (V1) had the tag picking on a new step, Step 2, so for them I had to query how many seconds it took to get from Step 1 to Step 3, as well as the number of tags picked.

I put the numbers into a Google Sheet to start the analysis and quickly ran an AVERAGE on them. The result was surprising:

A sample of the data set I worked with

10,000 extra seconds on average??? Almost 3 hours longer in the treatment group? That can’t be right. I’m not a business analyst, but something came back from my college statistics classes about removing outliers from data sets, so the result isn’t skewed by, in this case, providers who left the process for a whole day and then returned.

So what’s the best way to remove outliers that Google Sheets can easily deal with? After some research I decided to use the TRIMMEAN function, which calculates the mean of a dataset excluding some proportion of data from the high and low ends. In the calculation I excluded the top and bottom 20% to get a more realistic picture of the usage.
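Assuming the step durations sit in column A of the sheet (the range here is just illustrative), the two calculations look like this; TRIMMEAN’s second argument is the total proportion to exclude, split evenly between the high and low ends:

Plain mean, skewed by the outliers:
=AVERAGE(A2:A200)

Trimmed mean, dropping the top 20% and the bottom 20% before averaging:
=TRIMMEAN(A2:A200, 0.4)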

And the result was surprising!

Completing the 2 steps combined in the treatment group was 33% faster than the single step in the control group, and there were 4 more tags picked in the new version of the list. What a great improvement!

The conclusion

Design is contextual. It might be a simple task to pick your home country from a list of 195 items, even if you have to scroll for a while.

When it comes to unfamiliar items, it’s better to visually expose them instead of hiding them. It’s even better to do it in a logically organized way: create groups with meaningful titles, and let users zoom in on the groups they are interested in.

With less cognitive load the task takes less time to complete and leaves the user with more energy to go through the whole process.
