Responsive Design: Why and how we ditched the good old select element

How rethinking the way users make complex selections across devices completely changed our design.

We’ve all seen this and know what it does:

The standard select element as rendered in Chrome/OSX

It’s the HTML select element. The invention of select dates back to 1995 with the introduction of the HTML 2.0 specification. So most of us have never experienced designing for the web without select as an option. But it can be a really, really frustrating component to let into your designs. Let me tell you why.

Good things first

By using the select element it’s a no-brainer to create a list of selectable options. It’s easy and it’s cheap. It’s supported by all new and old browsers in use, and it comes with a lot of nice features, such as grouping options, keyboard navigation, single and multi select and reliable rendering across platforms without me having to put on my thinking hat. It just works!

So why not just use it?

At Tradeshift we’ve spent the past few months working on some soon-to-be-released updates to our user interface. Some of our core features include the creation of invoices, quotes and purchase orders. These are business documents with substantial amounts of data. Most often a human is involved in creating these business documents. Luckily, this human user has access to a lot of already existing data from various sources, which potentially makes document creation faster. All this data is predominantly represented as lists that the UI must enable the user to select from — efficiently and effectively — no matter the device.

Presenting option lists to users is most easily done with checkboxes, radio buttons and select elements. However, some limitations in these components made the design team hit a wall for a number of reasons. Here’s an excerpt from a longer list of drawbacks of using select. The drawbacks would to some extent also apply to radio buttons and checkboxes:

  • The number of selectable options we have is often counted in hundreds, which makes the standard select element hard to navigate.
    Example: When specifying the unit type on an invoice line, the complete list contains hundreds of possible units. It’s not just hours, meters, liters, kilos, pounds and pieces — but also crazy units such as hogshead, syphon, ‘theoretical ton’ and ‘super bulk bag’. Tradeshift deals with global trade and compliance and we must be able to provide all these options. Standard option selectors would turn into haystacks.
    A more common example is country selectors. I often find myself struggling to select United States in most selectors, no matter how smartly the options have been sorted. For ‘popularity reasons’ United States is often found at the top of the country list. Other times Afghanistan tops the list due to alphabetical sorting. Sometimes United States is far down the list, just after United Arab Emirates. Sigh! In addition to this, keyboard search is not available on most mobile devices. This forces the user to flick through the options manually. Searching is slightly better on desktop, but it’s still limited to matching from the first letter onwards, so typing Emirates on your keyboard is not going to give you United Arab Emirates. You get it… and we've not even started talking synonyms yet.
  • The user often has to modify the options in the lists provided.
    Example: We provide a set of default taxes that the user can apply to each invoice line item. Often, however, legislation and taxes change and we must give the user the flexibility to add and change the default options. We don't want the user to go to the engine room (aka the settings pages) while creating an invoice. For a fluent workflow, users should update properties like these in context, or we risk the product becoming harder to use than, say, a word processor template. Unfortunately, the select list cannot technically be extended with an inline interface for mingling with taxes. We could of course show a modal dialogue with an interface to modify the taxes list, and then return the user to the updated select element when editing is done. It’s an option, but quite a UX derailment that we've seen cause confusion for less experienced users.
  • The same input value can be generated from different selection paradigms.
    Example: Payment terms can be expressed as a relative measure (e.g. Net 30 days), or an absolute value (e.g. Dec. 10th, 2013). One could imagine many solutions combining radio buttons, calendars and selects. None of them seems to provide the kind of simplicity we were aiming for. We don't want two distinct inputs to select one value.
  • Select element UI interaction makes bad use of screen estate on mobile devices.
    Example: On an iPhone 4 the select element takes up 54% of the screen space (520pt of 960pt vertically). This allows barely five options to be visible in the list. This simultaneously limits gesture space to the same 54% of the screen (Android does a slightly better job in many cases, though).
More than half the space is taken up by barely five options in the select element. Flicking through many options is a pain.
  • Hierarchical data can be a real pain to deal with using the standard select element.
    Option groups, which are part of the select element’s feature set, are of limited use when you deal with complex hierarchies. Country selection offering sub-selection of states is an obvious example. Standard solutions typically involve lining up multiple select elements. So interaction goes like this: first the user picks one option in one list, then closes that list, interprets the UI adding or unlocking another select element, which must then be clicked, etc. Not totally insane in a desktop browser, but on mobile the pain grows and the visual/contextual relations are easily blurred. I recently heard the former Principal Designer at Twitter, Josh Brewer, quote someone saying that mobile is a magnifying glass for your usability problems, which seems right, and in this case it definitely corresponds with Tradeshift’s own usability studies.
  • Styling the select element is poorly supported.
    There’s a whole bunch of reasons for the historically limited options for styling the select element — and even more scripts/hacks exist to overcome these limitations. Bottom line is that if you want your selectable options to fit nicely into your design in various browsers you're pretty far into Hackland. And even if you go with one of these very nice styling scripts, you've not solved any of the interaction issues listed above — you may actually have added a few issues if your hack has changed the scroll wheel or touch behaviours or eliminated the standard “search feature”.

So in spite of the advantages mentioned initially, the many shortcomings we experienced in more complex scenarios simply left us frustrated with the standard select element.

So what can we do now that the cookie cutter solution does not make the cut?

We looked at many existing solutions, including the scripts that re-style the select element, and figured out we had to dig deeper. Please note: I don't claim we've made big inventions in the following, or that we invented the solution we ended up choosing. Variants of our final solution have been seen in many places, and the solution we picked definitely has new shortcomings that we're working on solving now — but most importantly, it allowed us much more freedom in working with user input, and we can provide a consistent experience to our users across a number of scenarios and platforms. I only claim that we had a good critical process where we evaluated the most obvious options, found them insufficient and came up with a solution through a solid RITE (Rapid Iterative Testing and Evaluation) process: describing our needs (some listed above), ideating, prototyping and end-user/acceptance testing over and over. We wanted a new UI component that provided richer interaction options while completely replacing the select element, since we didn't want a mixed user experience depending on what the user is selecting.

The solution

I'll skip the process and describe what we ultimately ended up deciding on. Mostly by using screenshots — please be aware that these are somewhat early screenshots where copy is not final. To explain, I'll use a few simple examples from the invoice creation feature, which requires a lot of selections by the user.

Basically the concept is to stack layers with the appropriate options providing ample space for rich interactions:

Phone size view of invoice creation: Stacking rich content layers allows the freedom in designing that we need

In the UI a subtle triangle indicates that there’s a list available for the field (full keyboard navigation is of course supported):

The indicator, here on the invoice due date field, tells the user that the field must be populated via a picker.

Upon clicking a field with the triangle indicator, a panel slides in smoothly (in most browsers) and the page is darkened with a translucent overlay, which focuses attention on the panel; we call this panel a picker. In this example the user clicks the invoice due field and a list of standard payment terms is presented:

User clicked the invoice due field and gets default options with current one highlighted.

If none of the standard options satisfy the user, there’s also the option to specify an absolute date by clicking the last option, specify date:

The second layer presents more fine grained options for specifying an exact due date.

This second layer presents more fine-grained options and is visually layered on top of the first layer, providing context to the user, keeping the user’s mouse and eyes in the same position while also allowing back-navigation by closing the picker (escape key or clicking/tapping ‘x’). The visual layering provides an almost breadcrumb-style sense of navigational depth. What’s missing here on the screenshots is unfortunately the smooth horizontal animations further strengthening the sense of context.

Picking a date value closes all picker layers and sets focus back to the initially activated field, invoice due, and the user can tab on:

Focus is back, user can click or tab on…
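The bookkeeping behind this flow can be modelled as a simple stack of layers: Escape pops one layer, while committing a value unwinds every layer and hands focus back to the originating field. This is an illustrative sketch with invented names, not our production code:

```javascript
// Illustrative model of stacked pickers: opening pushes a layer,
// Escape/'x' pops one layer, and picking a value unwinds the whole
// stack and returns focus to the field that opened the first layer.
function createPickerStack() {
  const layers = [];
  return {
    open(layer) { layers.push(layer); },
    back() { return layers.pop(); },          // escape key or 'x'
    depth() { return layers.length; },
    pick(value) {                             // commit a value
      const origin = layers[0] ? layers[0].originField : null;
      layers.length = 0;                      // close all layers at once
      return { value, focusTarget: origin };  // caller refocuses the field
    },
  };
}
```

The key point mirrored from the UI: no matter how deep the user has navigated, one pick collapses everything and focus returns to where the interaction began.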

Another example is clicking the unit type selector in an invoice line (the one that says PCS in the screenshot above). Here the current value is highlighted in the picker:

Only four out of hundreds of possible options are listed. Search allows the user to select from the remaining hundreds.

As mentioned, the full list of unit types runs into the hundreds. Meanwhile, many smaller companies only use a very limited set of unit types, so instead of presenting the full list we only show the most recently used ones and a search field. Searching, in this case for kilowatt, returns the options from the full set:

Searching quickly brings up options from the huge list.

Picking a value, here Kilowatt hour (KWH), closes the picker and returns the focus to the target field:

User picked a value and is now back at the initiating field.

Clicking a unit type field again now shows Kilowatt hour (KWH) as an option. Users who use a unit type once are very likely to use that unit type again, so this approach provides a settings-free way of defining custom/individual lists:

Reopening the unit picker provides the newly used unit as an option in the short list.
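This short list behaves like a most-recently-used cache: picking a unit moves it to the front of a capped list. A minimal sketch — the cap of four is a guess based on the screenshot, not a documented value:

```javascript
// Most-recently-used short list: picking a unit moves it to the
// front; the list is capped so only a handful of recent units is
// shown before the user needs to search the full set.
function recordUse(recent, unit, cap = 4) {
  const next = [unit, ...recent.filter(u => u !== unit)];
  return next.slice(0, cap); // drop the least recently used unit
}
```

Picking KWH once is enough to put it in the short list the next time the picker opens, with no settings page involved.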

There’s a ton of other examples with more complex dialogues (not least the configuration of taxes) which keep the user in context and don’t abstract away into who-knows-where settings pages. Our studies show that the user usually knows where to find, and how to use, the values added when it all happens in the same context.

The concept of pickers first appeared when we started designing the new Tradeshift from a mobile first perspective. I.e. not trying to squeeze the desktop experience into mobile, but more the other way around. On phone size devices we now also have entire invoice lines in pickers instead of presented in the “page body” as on tablet.

It adds some extra layers of pickers, but we've found that the visual cues provided for the user to establish a mental model of where things are happening are sufficient to go at least three layers deep. An example of a three-layer-deep scenario could be: Invoice line (picker on mobile) > List of applicable taxes > Add new tax to list.

Obviously, we wouldn't believe this could also be the solution on desktop if we'd not tested it. But out of the different scenarios set up for complex selection of field values in the cases we have, this one won hands down, also on desktop. We've found that, compared to using a series of select elements and modal dialogues, this solution decreases the cognitive load on the user significantly. This, by the way, reminds me of a comment Rebekah Cox (Quora’s first employee and designer) once made: “Design is what we don’t ask the user to do”. I couldn't agree more. We should free up the users’ minds to work on their business, not our tools.

This doesn't mean there’s no room for improvement. For instance, we've figured out we need some way to keep the picker closer to the field on larger resolutions instead of sticking it to the right edge of the browser.

Extended use

An extension made a bit later during the redesign process was using pickers to navigate and manipulate, with objects (such as invoices) acting as “hubs” for navigation:

Here the invoice is a hub for navigation and interaction/manipulation.

This allows us to reuse a small-screen-friendly design pattern already known by the user while not forcing the user to load another page to get the options.

Implications for the overall design

We’ve come to love the concept of pickers. We use them every time the user needs to populate a field from a set of options. We’ve done enough testing that we’re also confident that our users understand and prefer the pickers over complex select element combinations.

Using the pickers as navigation hubs allowed us to further simplify navigation and present options in context without forcing the user into subpages or, even worse, cluttering the UI into a non-decodable mess. Our lists are now cleaner, it’s easier to prioritize the screens for end-user consumption and decision making, and the synergies between desktop and mobile seem to pay off as users need to learn fewer patterns. Another benefit is that we have fewer distinct UI components to maintain.

If you had to start from scratch and the standard form elements didn't exist, would you end up designing your “multiple options selector for any platform” as it’s implemented with the select element today? Maybe not, and for us this was reason enough to reconsider.

You can follow me on twitter @mibosc


Mobile UI ergonomics: How hard is it really to tap different areas of your phone interface?

I have seen many diagrams explaining easy/hard tap zones on mobile interfaces. While I do believe these diagrams, they mostly come without any explanation of the analysis behind them. Because of this I decided to do an experiment myself, measuring the time it actually takes to tap different parts of a phone’s screen. I expect there’ll be some correlation between the time it takes to tap an area and the experienced difficulty of tapping. More on this later.

As observed by many UX practitioners, it’s normal for users to adapt the way they hold their devices to the interface of the app they’re currently using. But for this first experiment, so far, I’ve only tested single, right-hand use on an iPhone 5S (yes, I know that’s an old phone now, but reality is sometimes faster than I am).

As Steven Hoober notes in his study from 2013, How Do Users Really Hold Mobile Devices, users who interact with their phones hold them predominantly in three different ways, listed with the statistics from his research:

  • one handed (49%)
  • cradled (36%)
  • two handed (15%)

To limit the research needed for this experiment, I’ve focused on the dominant – one-handed – hold.

The Experiment

For the test, I built a small web application with 7x12 tiles (=84). Each tile turns red once, in random order. When the test person taps a red tile, it instantly turns yellow for a moment and then back to white. The respondents’ reaction times – defined as red-to-tap – are stored per tile. The average tap-time for everyone who participated in the test is calculated and then visualised for easier analysis and digestion.
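For the curious, the timing logic amounts to very little code. This is an illustrative reconstruction, not the actual test app:

```javascript
// Per-tile reaction-time bookkeeping for the tap experiment:
// record when a tile turns red, store red-to-tap deltas per tile,
// then average the samples collected for each tile.
function createTapTimer(tileCount) {
  const samples = Array.from({ length: tileCount }, () => []);
  let pending = null; // the currently red tile: { tile, shownAt }
  return {
    showTile(tile, now) { pending = { tile, shownAt: now }; },
    tap(now) {
      if (!pending) return; // ignore taps with no red tile showing
      samples[pending.tile].push(now - pending.shownAt);
      pending = null;
    },
    averageFor(tile) {
      const s = samples[tile];
      return s.reduce((a, b) => a + b, 0) / s.length;
    },
  };
}
```

In the real app the timestamps would come from tap event handlers; here they are passed in explicitly so the logic stays testable.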

Red highlighting was chosen as it’s the colour that most efficiently captures human attention. The tile size on the iPhone 5S form factor gives a good approximation of the minimum tap area suggested in Apple’s iOS design guidelines (44x44 points).

Single, right-hand experiment. Testing the agility and speed of the thumb reaching different areas of the screen.

Each test person was approached in a similar way, and asked to sit comfortably, holding the phone in their preferred hand while not resting it on any surface. The test persons were also asked not to use their other hand for any interactions with the phone during the experiment. It was stressed that the goal was to tap the flashing tiles as quickly as possible and focus on the task of tapping all tiles in the shortest amount of time.

To increase participation and invoke a sense of competition, I told the testers before the test that they would be informed how they performed compared to the others. This seemed to help them focus and increased their attention.

The Results

Single-hand use (right-hand hold)

iPhone 5S, right hand experiment with 7x12 tiles. Numbers are milliseconds from tile highlight to user tap event.

The diagram shows the average tap-time for 20 people, holding an iPhone 5S in one hand: their preferred, right hand.

It’s immediately clear that the center and the area just below it are the fastest to reach, and that it takes almost 50% more time to reach the upper and lower edges of the screen.

The vertical edge furthest away from the thumb is slightly faster to reach than the closest edge.

Contrary to some claims, all corners actually cause significant delays in tap-time.

While this pattern is probably not super-surprising to anyone, it does confirm that there’s a sweet spot in the most natural resting position of the thumb. More surprising, maybe, is that there’s a slightly slower vertical zone just before the last column of tiles on the far left. I.e. it’s actually faster to tap a tile further away when it’s bordering the phone’s edge. The reason for this could be that there are fewer neighbouring tiles the user risks tapping, plus the fact that the edge can be used for aligning the thumb.

Limitations and Sources of Error?

  • Small set of respondents (20). I hoped to have many more respondents, but figured out I’d never get to finish this post. So instead of never publishing these findings, I went for the 2nd best solution I could think of. This also means that some tiles next to each other have very different reaction times, as outlier data isn’t “ironed out” as it would be with more tests.
  • Only tested on the iPhone 5(S) form factor. It’s a relatively small phone which rests differently in the user’s hand from many other phones. So these results may not be applicable to other form factors.
  • iPhone 5 touchscreen accuracy may have caused slower tap-times along the edges due to mishits. In case of a mishit, the test person needs to tap multiple times for a single tile, which obviously increases the tap-time for that tile. Having personally supervised all the tests, I don’t believe this has been a significant issue. Ideally I’d have recorded mishits/mishit tiles to also get a measure of accuracy. Anyway, for the specific device, it’s also worth accounting for screen inaccuracies when laying out an interface.
  • The definition of tap-time may be off. The tap-time also includes the time it takes for the test person to recognize a tile changing, reflect on that change and initiate thumb movement. Visual changes along the edges may be harder to recognise, or the fingers may cover a red tile; both naturally add to the tap-time. This means it’s possible that tapping off-center elements is relatively faster for a user who already knows an interface than this experiment suggests.

Next steps

This study was a small experiment I did to confirm some assumptions I had. When time allows I’ll dig into further analysis of some areas:

  • Correlate with thumb-length: Is there an optimum thumb-length for phone-UI single hand use?
  • Correlate with age: How does age affect our ability to reach different areas of the screen?
  • Test more form factors as larger phones are becoming increasingly popular.

Hope you liked this post. If you did, I’d love a recommendation or share.
I’m also on Twitter, where I tweet about these things.


Designing with Vision

A framework to not lose your design battles

Did you ever present an awesome design concept to a group of stakeholders and then walk out disappointed? Disappointed that they just didn’t get it? Did you experience your good concepts being put down while some of the so-so ideas received praise? Did the discussion just go awry?

To my knowledge, all designers have experienced these sad situations at some point throughout their careers. And some go through this again and again.

My experience is that most of these unlucky situations can be prevented through careful planning during the design process and framing the discussion right. It’s no guarantee that your favourite concepts make it through the decision process, but at least they’ll lose in a fair battle.

But first you have to accept my premise: designers are not artists. Excellent post about this here.

It may be that some designers also work as artists, and some artists also work as designers. But it’s two very different roles — not the same.

Through our professional work we designers are mostly not seeking to exhibit our own opinions or emotions. We’re designers, who design with purpose and requirements rooted in the world around us. An artist’s inner drive and self-evaluation of a piece of work is more important than whatever the world outside thinks of it, because an artist’s work is personal. A designer designing with purpose cannot be solely responsible for evaluating his/her own work. Hence, a designer cannot just dismiss the surrounding world’s feedback.

That’s why involving other stakeholders matters — and this includes how you present your work at stakeholder meetings.

How to not present your work

There’re many ways to make yourself the host of a really bad show.

You risk meeting a shit-storm of gut feelings, uneducated opinions and top-of-mind change requests, if:

  • You don’t frame the discussion. This is when you present your design proposals without proper introduction. E.g. you start presenting your concepts immediately, first concept on first slide, with no warming up.
  • You don’t allow people to prepare up front. Typically this happens when you don’t have a meeting agenda. You invite a number of people to a meeting and name the meeting Review Designs for Project X — no agenda.
  • You use the no-meeting approach: You collect all your design concepts in a PDF. First concept on first slide. You then send an email with the PDF attached and the email body reads: Here are my design proposals. Please give me your feedback.

These approaches are surefire ways for you to end up in frustration and sadness because the stakeholders simply didn’t get it. But remember: It’s your responsibility to frame the discussion and make sure these out-of-nowhere-opinions don’t surface. You asked for it — you got it!

Here’s a model

While nothing in the world can prevent unprepared stakeholders from screwing up your evaluations, you can do a lot to frame the discussion right.

The following is a model that my design professor used for brainwashing me 10 years ago. The model still represents the backbone in my way of organising my thinking around design issues. It’s a model by Erik Lerdahl and it’s called The Vision-based Model:

The Vision-based model. I’ve tweaked its purpose to fit mine, so Mr. Lerdahl isn’t to blame if you disagree with my explanations.

I’ve probably drifted a bit away from some of the original concepts and made my own interpretation of the model based on my experience. So anyone who already has a deeper relationship with The Vision-based Model, please forgive my modifications.

Reading the Model

From top to bottom:

  • Spiritual/Intention: This is where you want to align the visionary leaders, managers and strategic stakeholders. Normally this level is directly connected to a company mission: What is it ultimately the company wants to achieve? Fill that purpose in here and remind everyone who evaluates your designs, that this is why we are here, and this is the most high-level reason we’re doing this project/running our company. A rough example: “Preserve beauty of nature”.
  • Contextual/Expression: Still making sure all key strategic stakeholders are listening, you want to discuss how to conceptually stage the company’s mission. “Preserving nature” can be done by being an aggressive grass-roots movement trying to evoke emotions on an ethical level, or by trying to rationalise based on a scientific agenda, or in 1000s of other ways. It’s important that when you end up presenting your design proposals, everyone agrees on, or understands at a contextual level, how your client/company is positioned and intends to express itself. The contextual expression rarely changes for a company/organisation and is deeply rooted in how it’s organised, funded and defines itself. Rough example continued: “Preserve beauty of nature … By being a scientific, reliable challenger”.
  • Principal/Concept: This is where the more visually oriented designers start imagining a wealth of solutions to the required expression. This is still a pre-mockup/sketching level, though. In our rough example, many concepts for being a “scientific, reliable challenger” can be imagined. It could be concepts that on purpose fall into prototypical ideas about colours (let’s make it blue), using graphs, pictures of people with lab coats and highlighting certificates of some kind (or, something that look like certificates). In our example: Expressing “scientific” by using imagery from a chemistry lab and overlaying dystopian graphs to challenge the recipient.
  • Material/Product: Here are your actual designs. If everyone agreed to your presentation of the hierarchy above, there’s a good chance you’ll get good feedback. If some disagree with some of your higher-level conclusions, it’s a good time to discuss at a higher level than the layouts of your well-intended illustrations.

When presenting your design concepts at the lower levels, it all comes down to connecting your design proposals to vision and purpose (see, non-artist approach).

Above is the model with the transitions you’ll have to make clear. The example I used was very made up and more marketing related. But it could easily be applied to more functional product design processes, e.g. for directing discussions about the visual qualities of your product and which mental model you want to provide your users.

It’s more than a presentation model

Remember this is more than a presentation model. If you’ve not used the model, or this way of thinking and structuring your design process, it’s very unlikely that it can just be retrofitted before a stakeholder meeting. So while designing, make sure you can always mentally relate your decisions and ideas to the higher levels.

It’s probably appropriate to read Lerdahl’s own description of the model. My description here is somewhat modified and is modelled around my interpretation and fitted to my way of designing and discussing design. So Lerdahl cannot be blamed for my misinterpretations/model-abuse ☺

Please click recommend if you liked this article (makes me happy and proud plus helps spread the message). Or, even better, follow me on twitter @mibosc


Responsive Design: Getting Advanced Filtering Right

A practical example

Designing user interfaces for filtering content lists can be very challenging, especially when dealing with a lot of different parameters. Think of combining parameters like date range, content types and content statuses with keyword search. Good old and well-established design patterns do exist for this scenario. But now add small screens and responsive design to the mix: Great user experience faces big threats from factors such as decreased screen estate, clumsy fingers and variable viewing distances.

A View Mode Solution

As Daniel Wiklund (@danielwi) points out in his article “View mode” approach to responsive web design, off canvas navigation works well in responsive web design because it lets the user focus on the content, when browsing content, and it lets the user focus on navigation, when the user needs to navigate from a grander overview. Wiklund goes on to exemplify how this approach can also be used for filtering products on a site and performing searches.

I noticed a lot of similarities between Wiklund’s concepts and a recent design task we completed in Tradeshift, so I wanted to share our conclusions.

Dealing with Lists of Business Documents

One of Tradeshift’s services is exchanging business documents such as invoices, credit notes, purchase orders and quotes. This means that we also have document list views for users to manage their sales and purchases.

We know most users go to the list pages to track and update the payment status of unsettled transactions, so we default to this view:

List view mode, phone size.

On typical desktop and tablet sizes the list looks like this:

List view mode tablet/desktop size

The non-filtered representation allows a person to get an overview of settled and unsettled business and work with these documents. Finding a collection of documents based on e.g. date ranges, document types, statuses and customer is another requirement, and necessitates more mechanics.

The Mechanics of filtering these Lists

Simple searches can be done by clicking the search box and typing:

Simple search on typical desktop/tablet view.

Further, a number of filters can be applied using overlay panels. We call these overlays pickers (see my article about them here) and they’re accessed by clicking the blue “Add filter” button. This slides in a picker overlay:

First level picker

Also, pickers can be visually stacked to allow hierarchical selection without losing context:

Second level picker

When the filter has been applied, it will appear in the header section of the page, just below the search bar (yellow element on screenshot):

Multiple filters applied will line up next to one another:

Editing a filter is done by clicking the yellow filter value, which opens the corresponding picker. The user is then in filter-edit mode and can, in this isolated mode, configure complex filters with the list out of the way.

This simplified representation of filters in the header leaves plenty of space for list-browsing-mode — with the filter values serving as introduction of what the list contains.

So we achieved letting the user focus on either view mode.
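Under the hood, the applied filters can be thought of as a set of predicates ANDed together with the keyword search, with the visible list being whatever documents match all of them. A minimal, hypothetical sketch — field names are invented for the example and don’t reflect Tradeshift’s actual data model:

```javascript
// Illustrative sketch: each applied filter is a named predicate, and
// the visible list is the documents matching every filter plus the
// free-text search. Field names are invented for this example.
function applyFilters(documents, filters, query = '') {
  const q = query.trim().toLowerCase();
  return documents
    .filter(doc => filters.every(f => f.predicate(doc)))
    .filter(doc => !q || doc.title.toLowerCase().includes(q));
}
```

Removing a yellow filter chip then simply means dropping its predicate from the list and re-running the query.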

How does it respond to my Phone?

Actually, the desktop designs presented above were designed after our phone size views (yes, mobile first…). Here’re examples of the phone size UI:

Search and filter behavior. When search is active the header bar slides out and the filters compress.

There are some tricks applied to make the phone size work smoothly and make better use of the limited vertical space: when the keyboard is active, the title bar slides up and out, and the applied filters compress to 50% height. Smooth animations ensure no abrupt visual disconnects when switching modes.
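The state change behind this trick is small enough to express as a pure function. A minimal sketch, assuming hypothetical names; the actual animation would be done with CSS transitions on the resulting values:

```typescript
// Sketch: given whether search (and thus the on-screen keyboard) is
// active, compute the phone-size layout described above. The title bar
// is hidden and the applied filters compress to 50% height while the
// user is typing a search.
interface PhoneLayout {
  titleBarVisible: boolean;
  filterHeightPercent: number;
}

function phoneLayout(searchActive: boolean): PhoneLayout {
  return {
    titleBarVisible: !searchActive,
    filterHeightPercent: searchActive ? 50 : 100,
  };
}
```

Deriving both changes from the single `searchActive` flag keeps the two animations in lockstep, which is what prevents the abrupt visual disconnects mentioned above.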

For ease of interpretation we stack the applied filters vertically, and we also give them the same yellow color treatment. The color treatment visually groups everything that filters the listed contents.

The stacked pickers work on phone size exactly as on desktop size, with minor styling tweaks such as a smaller header bar:

There’s a cognitive advantage in using the same design pattern on different devices: It only has to be learned once (probably only important if you have users accessing your service on different devices).

How about even more complex Cases?

We prefer appending the selected filter parameters above the list as a way to describe what the list contains. In extreme cases, say five or more parameters, we'd need to reconsider the way we've solved this, as the many parameter selections would take up so much space that the list content updates wouldn't be visible to the user.

Most scenarios in our case only involve adding one or two filters, so that's not going to be an issue here. But say we had an apartment rental site: the many possible simultaneous filter criteria would require a more compressed filter representation, and we'd most likely have to put the individual filter edit/remove functionality into a picker instead of directly into the header of the list.

A Note on Aesthetics

We've chosen to keep the "thumb-sized" UI elements in our regular desktop size designs. We could easily have bumped down the sizes a bit to design around the more precise movements of a mouse pointer vs. tapping. But the number of clickable elements is rather low, and we not only aim to make a functionally easy service but also want it to look easy to use, so we kept the same size UI elements. The bigger elements, combined with the white space allowed by the bigger screen sizes, simply make it look easier to use. We even once had a user say that Tradeshift looks easier to use than it actually is. Good for our aesthetics, but it calls for more work on the usability side of things, which this redesign represents.

If you liked this article I’d be happy if you click recommend. And/or, you can follow me on twitter @mibosc


Avoiding pseudo-principles in your design documentation

Make sure the opposite principle is also a potentially good principle.

The need for proper UX and UI design documentation increases rapidly as the number of product teams grows. As in most other situations, understanding why something should be done is more powerful than just knowing how it's done. That's why you should make sure your design documentation audience gets more than just 'how-to guides'.

An analogy: if I know why the plants need water, chances are good that I will feel as bad as the plants if I forget to water them. If I'm just told how to pour water on the plants, without knowing why, then my empathy for the task lies in just doing the task. Not doing it matters less. The plants will die, eventually.

Product end-users need a reliable mental model of your product, and so do your engineers and designers.

When creating design documentation recently, I wanted to come up with a way to ensure that what we documented would be actual decisions. In doing that, I made a rule that I call The Opposite-But-Good-Principle Rule. It helps you avoid wasting words on documenting pseudo-decisions such as "Your designed interface should be easy/pleasant/blah to use" or "Your interface should be visually accommodating/friendly/blah". Who wants the opposite? Nobody. So in many (if not all) cases, the value of these sentences approaches zero.

The Opposite-But-Good-Principle Rule

It goes: whenever you have decided on a principle, make sure the opposite principle is also a potentially good solution. Make your decision a wise choice between two opposite, potentially good ideas. If you just make a choice between an obviously bad and an obviously good idea, chances are you are not really documenting a decision, which can easily be a waste of words and time.

One Brief Example

Chosen Principle: Avoid settings pages by always embedding configuration options and settings in the primary features. Doing this keeps the user in the context of the primary flow of actions, so that […]

Opposite-But-Good-Principle: Always use settings pages for configuration to ensure the user has access to a full overview of all the configurable options, so that […]

There may be several reasons to choose one or the other, and those reasons can be expanded upon. It's your obligation when documenting your design guidelines to make sure the hard questions are answered, not the easy ones. In this example the right principle depends on your insight into your users, your product and how the two should interact for the optimum user experience. For another product, the opposite-but-good principle above might make perfect sense.

Strong, useful decisions are made when the choice is between good-only options or between bad-only options, not when it's between a good and a bad option.
