Validating aesthetics in a web design project

Marko Dugonjic
SymSoft Solutions
Feb 13, 2018 · 7 min read

Many current design research methods can help teams and organizations successfully validate interaction, the usability of user interfaces, and the findability of content. While some research methods produce fairly exact, quantifiable results, others yield results that are less refined. Regardless of the chosen method, however, if it isn’t done right, subjectivity is far more likely to creep in during the research process, potentially masking the more critical insights. So… how can we genuinely test the quality of a visual design and/or aesthetic preference? Ultimately, how can we validate whether end-users approve of a visual design?

Whether you’re tasked with designing a task-oriented interface, which almost always benefits from a straightforward and usable appeal, or a marketing-focused interface, visual adherence to the brand guidelines as well as an appealing visual design are very common project requirements. Even if branding guidelines don’t exist, there are almost always some basic visual style requirements. As such, the guidelines and/or requirements ultimately set the tone moving forward.

Nevertheless, subjectivity can still pose quite a challenge for designers and stakeholders alike when it comes to visual preference. Why? For a start, it’s tricky to apply aesthetics that will resonate equally with all end-users and customers. Furthermore, for each given design problem there are usually many different viable solutions. So which solution is truly the best fit for the problem at hand? The first question we need to ask, then, is whether we can successfully measure aesthetics at all.

Researchers Papachristos and Avouris (2011) defined visual attributes such as symmetry, order and complexity, balance, and contrast as low-level evaluative constructs. Perceived usability, credibility, trustworthiness, novelty, and visual appeal represent high-level constructs. This is an important distinction, because high-level constructs cannot be measured mathematically; they usually require qualitative feedback from real people to be properly evaluated.

But, but, but! Why would you test the aesthetics in the first place? Is it not the case that designers should be the final decision-making authority? Well… not so fast. Indeed, experienced designers bring a lot of skills, ideas, and expertise to a project. They skillfully identify and create different viable solutions. But even for the most experienced, it’s sometimes difficult to know which solution resonates the most with the target end-users.

Benefits of including end-users

Apart from the primary reasons outlined above, there are other less obvious benefits to including end-users in the process:

  1. Aligning the team’s vision and developing common understanding. After seeing real people use the design, the whole project team, including key decision-makers and stakeholders, clearly understands which design assumptions were correct and which were less relevant. That clarity helps align the team’s vision, build a shared understanding, and push everyone’s efforts in the same direction.
  2. It’s easy to estimate whether visual appeal is an important factor in the first place, and to what extent. In some projects, quick access to information heavily outweighs the need for visual appeal. In others, an overly sophisticated design can send the wrong message about the value of the product. However, even when core usability is a top priority, many websites still benefit from distinctive visual appeal and brand recognition.
  3. End customers provide us with nuanced feedback on how they perceive the design and the organization. As a result, the visual design is informed and can be improved based on input from the external audience. Sometimes a visual style preferred by the project team doesn’t resonate with the end customer, and vice versa. After all, as designer Zuzana Licko said, “we read best what we read most,” which is obviously a different experience for each and every one of us. What the designers or the stakeholders are used to seeing is not necessarily what the end customers are used to seeing.

Having hopefully established convincing enough arguments in favor of testing, let’s review some viable options for testing the aesthetics of interfaces.

Scientific methods for testing website aesthetics

There are multiple options available for validating aesthetics. However, visual design tests don’t have to be conducted as a separate activity. In fact, the examples below can easily be combined with other evaluation methods. This is the approach our design team at SymSoft prefers, as it improves efficiency and makes better use of the project budget. For instance, after the usability portion of a session is completed, we can also test the participant’s attitude toward the visual aspect of the interface design. How?

  1. Ask participants to solve a cognitive task after using the design. This is useful when comparing which of two or more solutions is more appealing. Research shows that people perform better in cognitive tests, such as the Candle problem, after being exposed to a more attractive design. For example, participants exposed to good typography afterwards performed better on Isen’s cognitive tasks as well as on subjective duration assessment (PDF). This test is quite reliable because participants aren’t aware of the connection between the goal of the test (how they perform after being exposed to a design) and the cognitive task itself. A good way to establish a baseline is to test the current design first and then compare those results with the results for the new design.
  2. Relative Subjective Duration assessment (PDF). Time flies when we’re having a good time. Studies have shown that users underestimate the duration of tasks they found pleasant to accomplish. By comparing a user’s estimated time with the real time it took them to complete the task, we can easily compare two versions of a visual design where all other factors, such as navigation labels and interaction, remain unchanged.
  3. Semantic Differential test. The semantic differential is a rating scale used to measure opinions, attitudes, and values on a psychometrically controlled scale. By offering pairs of antonyms, subjects can select a value on a scale to evaluate the interface. Two examples of scales are modern vs. traditional and appealing vs. off-putting, but we can use many more scales in the same test depending on the given brand attributes.
  4. Desirability test, originally developed by Microsoft (DOC). Each participant individually selects a number of cards (for example, three, five, or more) from a deck of cards with a different adjective written on each (one adjective per card, 60 percent positive vs. 40 percent negative adjectives). After a few participants take the test, the word cloud generated from all their selections provides a clear idea of the aesthetic perception of the interface.
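To make the Relative Subjective Duration idea from method 2 concrete, here is a minimal Python sketch of how such data might be compared across two design versions. All timing numbers and variable names below are hypothetical, invented purely for illustration:

```python
# Sketch: comparing Relative Subjective Duration (RSD) across two design
# versions. All timing numbers below are hypothetical.

def rsd(estimated_seconds, actual_seconds):
    """Relative Subjective Duration: estimated time divided by actual time.
    Values below 1.0 mean the task felt shorter than it really was,
    which studies associate with a more pleasant experience."""
    return estimated_seconds / actual_seconds

def mean_rsd(sessions):
    """Average RSD over a list of (estimated, actual) pairs."""
    return sum(rsd(est, act) for est, act in sessions) / len(sessions)

# Hypothetical sessions: (participant's estimate, measured time), in seconds
design_a = [(50, 60), (45, 70), (80, 90)]
design_b = [(75, 60), (90, 80), (70, 65)]

print(f"Design A mean RSD: {mean_rsd(design_a):.2f}")  # below 1.0: felt quick
print(f"Design B mean RSD: {mean_rsd(design_b):.2f}")  # above 1.0: felt slow
```

With navigation labels and interaction held constant, the design with the lower mean RSD was perceived as quicker to complete, hinting at the more pleasant visual treatment.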

Combine the methods above into a more robust suite of tests to push the research even further, for example, by testing different design options. However, it’s not always viable to create multiple versions of the interface. Instead, it makes more sense to apply the iterative approach to the design process.

A simpler, less biased method

When following the iterative design process, it’s helpful to use design validation for course correction, especially if we have conducted exhaustive user research upfront and it’s clear why people visit a website. The iterative design process also allows us to use a much simpler method for testing aesthetics, one that can easily be combined with either user interviews or usability tests. Essentially, we combine multiple tests into one user session.

First, we capture the interview or usability test verbatim. After the session, we extract all the adjectives participants used.

Second, before the end of each session we ask each participant to describe the interface with five adjectives:

Please use five adjectives to describe the website you just tested:

1.___________________

2.___________________

3.___________________

4.___________________

5.___________________

This simple task is open-ended, unlike the Semantic Differential or Desirability Test, which spell out options to the user and can introduce bias. With an open list, participants can give whatever answers first spring to mind.

Finally, we compare the adjectives participants used during the usability test against those they selected after the test to gather insight into their perception of the interface. The adjectives used during the test tend to carry more weight. If the two lists of adjectives match one another, and both match the list of brand attributes, we can be confident that the design resonates with its intended audience. Be aware that if the adjectives are too diverse, it means the visual communication is not clear and requires more work.

The key is to gather a sufficient volume of feedback. In our experience, fewer than five participants more often than not produces no overlaps. With five or more participants, the combined answers generally reveal numerous patterns. However, we rarely see any meaningful improvement in data resolution beyond ten participants.
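The tallying step above can be sketched in a few lines of Python. All adjectives and the brand-attribute list below are invented for illustration; a real analysis would use the verbatim answers from the sessions:

```python
from collections import Counter

# Sketch: tallying participants' five-adjective answers against the target
# brand attributes. All data below is hypothetical.

brand_attributes = {"modern", "trustworthy", "friendly", "clear"}

# Five adjectives per participant, collected at the end of each session
responses = [
    ["modern", "clean", "friendly", "fast", "simple"],
    ["trustworthy", "modern", "clear", "plain", "calm"],
    ["friendly", "modern", "bright", "clear", "simple"],
    ["dated", "modern", "clear", "busy", "friendly"],
    ["modern", "simple", "friendly", "clear", "fresh"],
]

# Count every mention across all participants
counts = Counter(adj for answer in responses for adj in answer)

# Adjectives mentioned by at least two participants form the emerging pattern
pattern = {adj for adj, n in counts.items() if n >= 2}
overlap = pattern & brand_attributes

print("Most frequent:", counts.most_common(4))
print("Overlap with brand attributes:", sorted(overlap))
```

A large overlap suggests the design communicates the intended brand attributes; a scattered tally with little or no overlap suggests the visual language needs another iteration.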

Including end-users pays off

Testing and validating designs with end customers improves the final result and builds the project team’s common understanding of users’ expectations. By testing designs, we end up delivering better websites that meet users’ needs and, as a result, advance business objectives.

Did you like this article? Do you know of some other method? Let us know in the comments.

This article was originally published on the SymSoft blog. SymSoft Solutions delivers high-performance, usable, and accessible websites for enterprises and public sector organizations. Our user experience experts have over 15 years of experience meeting customers’ needs and improving business results. Follow us for more insights like this, or talk to us about your next web design project.

(Photo: Pexels / Startup Stock Photos)


Marko Dugonjic

Design Principal at Creative Nights. Editor at Smashing Magazine. Founder of Creative Nights, Typetester, UI Workshops, and FFWD.PRO.