Microinteractions: Tiny things make a big difference

Yatrik Raithatha · Published in Bootcamp · Apr 13, 2022

Abstract

Human-computer interaction has come a long way: we interact with simple products like toasters and with complex digital products like banking apps hundreds of times a day. Those interactions translate into the usability of a product and, eventually, into user satisfaction, trust, and the overall experience of using it. Over the last two decades, designers have focused on these interactions (physical buttons, scrolling, long presses, etc.), and they continue to evolve as we move from large computer screens to mobile devices, smartwatches, and now foldable devices. This paper examines the structure of microinteractions and its relation to the interaction model, the different dynamics of aesthetic experience and how the elements of a microinteraction can be adjusted to design those experiences, and, finally, a bottom-up approach to building products.

Introduction

Microinteractions have been with us since the advent of electrical devices such as the bulb, the radio, and the doorbell; their history is as old as the history of technology. The small interactions with objects in our day-to-day life, such as using a toaster, unlocking our phones, switching a phone to silent mode, or controlling the volume, are all microinteractions. From a simple button press to shaking a wrist to wake a smartwatch, microinteractions have evolved with the advancement of technology. Some interactions that we take for granted today and that are invisible to us, like copying text or moving a file, were once novel microinteractions. Looking at the micro things has helped designers improve products, and it is interesting to see how designers have borrowed real-life metaphors to build microinteractions such as cut and paste or zooming in and out. These details not only improve the usability of products but also flatten the learning curve by making interactions intuitive.

How exactly do we define what a microinteraction is, as opposed to an interaction? The thesis Enabling Mobile Microinteractions (Ashbrook, 2010) defines it as “interactions with a device that take less than four seconds to initiate and complete,” whereas the book Microinteractions (Saffer, 2013) says, “Microinteractions are contained product moments that revolve around a single use case — they have one main task.” A whole app or product could be just a microinteraction, for example a weather app (Figure 1.1) or a toaster, because they are focused on only one thing. A feature could be built on multiple microinteractions, for example LinkedIn’s “like” button (Figure 1.2), where someone can select different emojis to react to a post. So microinteractions can be building blocks for a feature and eventually for the whole product. When interacting with a product, paying attention to these elements helps the user understand what to do, how to accomplish it, and what to expect. Well-designed microinteractions make products intuitive and enhance the user experience of the product.

Figure 1.1: A weather app could be considered a microinteraction because it is focused only on showing the weather.

Figure 1.2: The LinkedIn like button consists of two microinteractions: (a) long-pressing the like button opens up a space to select an emoji; (b) you select an emoji to react to the post.

A microinteraction can become the identity of a brand when it creates a particular experience that resonates with the brand. The term for this is “Signature Moment — A standout interaction in a product or service that leaves a lasting, memorable impression” (Janhagen, Leitch & Judelson, 2020). Facebook’s like button became the signature moment for that product. Microinteractions are not constrained to visuals; they can be sound-, haptic-, physical-, or gesture-based interactions. The Snapchat notification has its own peculiar sound, which is a signature moment of the product. Similarly, the iPhone’s single center button carried multiple microinteractions, and that too was a signature element for Apple. Microinteractions are typically used for (Narvhus, 2016):

  • Communicating feedback
  • Turning a feature on and off, like muting a phone
  • Achieving a single task, such as liking a post on Facebook
  • Controlling an ongoing process
  • Showing changes or system statuses, such as loading bars or status icons
  • Changing a setting
  • Preventing human error
  • Viewing or producing content

We will look at examples of several of these microinteractions in the following sections.

Interaction model & Microinteraction structure

Human Processor Model

Before we go into interaction models, let us examine mentalism and mental representations to better understand how humans interact with objects in the world. Newell & Card (1985) describe how three different processors (the perceptual processor, the cognitive processor, and the motor processor) work together to make sense of the world around us. The cognitive processor works with long-term memory and working memory to pull out representations of objects, feelings, actions, etc. from current knowledge in order to interpret the input coming from the perceptual processor. Once this processing is done, the cognitive processor signals the motor processor to act on the particular situation in a particular way. Mental representations are the mental models, our understanding of how things work in the real world.

Norman’s Interaction Model

Don Norman’s seven stages of action (Norman, 2013) (Figure 2) is a more lucid form of mentalism that explains how humans interact with objects to achieve a goal. A goal could be switching on a light or making an online transaction in a mobile app. Let’s dissect the simple goal of unlocking a phone into the seven stages of action:

  1. The goal is to unlock the phone.
  2. How can I unlock the phone? Type a PIN or scan a fingerprint.
  3. Which option do I choose? If I use a PIN, I specify the sequence of touches on the screen.
  4. Once I have a specification of actions, I perform the task.
  5. Then I see what happens when I hit enter.
  6. I see that I have reached the home screen.
  7. I compare against my previous experience to judge whether I have reached my goal.

At every stage of this cycle there is a process of input -> processing -> output, and at the macro level we follow the same process to achieve the goal. We can derive the interaction cycle (Norman, 2013) (Figure 3) from the seven-stage model. The interaction cycle consists of the gulf of execution, the information needed to act, and the gulf of evaluation, the information needed to understand what happened when the action was performed. The overall user experience depends on all parts of the cycle during an interaction. The goal of microinteractions is to narrow those gulfs and make products intuitive. Now let’s look at the structure of microinteractions and understand how they can help bridge those gaps.

Structure of Microinteraction

Microinteractions are powerful not just because of their nuances or small size, but also because of how they are constructed. The structure of a microinteraction consists of four parts (Figure 4) (Saffer, 2013): the trigger that initiates the microinteraction, the rules that determine how it works, the feedback that illuminates the rules, and the loops and modes, the meta-rules that affect the microinteraction. The trigger can be initiated by the user, for instance by clicking on an icon or filling out a form, or it can be system-generated, for instance a Low Power Mode dialog informing the user about a low battery.

Let’s take the example of the Duo app, which has a microinteraction for authenticating a user.

Trigger: “Send Me a Push” is the trigger for the microinteraction (Figure 5(a)).

Rules: It sends a notification with two actions (Figure 5(b)), and the user has to tap one of the two actions (Figure 5(c)).

Feedback: Once the user taps Approve, it shows green feedback saying “Logging you in” (Figure 5(d)).

Loops & Modes: If you selected “Remember me for 7 days,” the next time you log in it shows the green feedback directly instead of sending you a notification.
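
To make the structure concrete, here is a minimal sketch in TypeScript; the names and flow are illustrative assumptions, not Duo’s real API. It models the four parts: a trigger starts the microinteraction, a rule decides what happens, feedback reports the result, and the “remember me” loop/mode changes how later runs behave.

```typescript
// Sketch of the trigger -> rules -> feedback -> loops & modes structure.
// All names and behavior here are hypothetical, for illustration only.

type Feedback = { color: "green" | "red"; message: string };

class PushAuthMicrointeraction {
  rememberedUntil?: Date; // mode set by "Remember me for 7 days"

  // Trigger: the user taps "Send Me a Push".
  async trigger(): Promise<Feedback> {
    // Loop/mode: a prior "remember me" choice skips the notification entirely.
    if (this.rememberedUntil && this.rememberedUntil > new Date()) {
      return { color: "green", message: "Logging you in" };
    }

    // Rule: send a notification with exactly two actions and wait for one.
    const choice = await this.sendNotification(["Approve", "Deny"]);

    // Feedback: illuminate the outcome of the rule for the user.
    return choice === "Approve"
      ? { color: "green", message: "Logging you in" }
      : { color: "red", message: "Login request denied" };
  }

  private async sendNotification(actions: string[]): Promise<string> {
    // Placeholder for the push-notification round trip.
    return actions[0];
  }
}
```

The value of keeping the parts separate like this is that one part, say the feedback copy or the loop condition, can be adjusted without redesigning the rest of the interaction.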

Now that we have deconstructed the microinteraction and understood its structure, we can see that the structure supports the seven stages of action and is organized in a way that bridges the gap between the gulf of execution and the gulf of evaluation. A well-designed trigger helps users Plan -> Specify -> Perform the action, and a well-designed set of rules and feedback helps users Perceive -> Interpret -> Compare the result of the action against the goal. We will talk about the design of triggers, rules, and feedback in the next section to understand how microinteractions help build the desired user experience.

Returning to mentalism, the cognitive processor holds mental models of how different things work, and these models constantly evolve as we experience new objects in our daily life. From these mental models, designers can create conceptual models, high-level plans for how products and features should work. Once we have conceptual models, we can extrapolate them through the seven stages of action, which helps us create microinteractions that are intuitive to the user. This is a bottom-up approach to building an interaction -> a feature -> the product.

Aesthetic Experience & Microinteractions

In the previous section, we talked about the interaction model and the structure of microinteractions and explored how they go hand in hand to make products intuitive and usable. Now we will look at the dynamics of aesthetic experience and how microinteractions can build those experiences at the micro level of a product. The rhythmic dance of aesthetic experience has an internal, dynamic structure. Dewey identified closely related processes such as cumulation, conservation, tension, and anticipation to refer to the internal dynamics of experience (McCarthy & Wright, 2004). The overall user experience is a combination of these dynamics, and it can vary depending on the user’s previous experience and knowledge. Beyond previous experience, any aesthetic experience depends on context: the life and abilities of the user, the affordances of the artifact, and the physical and social space in which the interaction takes place (Petersen, Iversen, Krogh, & Ludvigsen, 2004). Experiences are invoked when we interact with objects and products, be it through touch, smell, visuals, or sound. Designers have focused on these interactions for the past few decades because that is where the magic happens. Interactions should not just convey meaning and complete the task; they should be thought-provoking and encourage people to think differently. Below we look at a few examples of microinteractions that invoke different dynamics of aesthetic experience.

Cumulation: Refers to the build-up that attends the temporal unfolding of an experience. I believe the Google Pay payment flow is a microinteraction that builds up the experience, with a very nice unfolding screen and a sound when the payment is done. I consider it a microinteraction because it is focused on one thing and does it with a very fulfilling experience. The interaction runs from tapping on the payment terminal -> selecting a card -> loading -> payment done (Figure 6). Here the animation and the sound give that unfolding experience.

Conservation: Refers to the tendency to hold onto some of what has gone before, be it energy or meaning. Conservation of meaning or information is necessary for a good user experience because it reduces cognitive load. Many microinteractions conserve information while users fill out forms on websites and apps. One simple example is when you copy a link and start typing in the web browser: it shows an option to paste the link directly and even shows you what you have copied (Figure 7). Notice that, of the parts of the microinteraction structure, the emphasis here is on the feedback, which communicates that what you copied is still in memory and can be used to search in the browser. This is a great example of how emphasizing different elements of a microinteraction changes the experience.

Tension: Refers both to the opposition of energies within the experience and between the people involved in it. Social media and streaming platforms use this technique: when users are looking to consume content quickly, the platforms keep the pull-to-refresh microinteraction (Figure 8). It keeps the audience engaged enough that they don’t drop off, but it also creates an experience of friction, because the user wants to consume content fast and is made to wait. In this particular microinteraction, the emphasis is on the rule, which is to show a loader until the content has loaded.
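
As a rough sketch of that rule, assuming a hypothetical feed component (fetchFeed, showLoader, hideLoader, and renderFeed are invented helpers, not any platform’s real API): the pull gesture is the trigger, the rule is to show a loader until the content request resolves, and the loader plus the refreshed list are the feedback.

```typescript
// Illustrative pull-to-refresh rule: show a loader until new content arrives.
// All helpers are hypothetical and passed in to keep the sketch self-contained.

async function onPullToRefresh(
  fetchFeed: () => Promise<string[]>,
  showLoader: () => void,
  hideLoader: () => void,
  renderFeed: (items: string[]) => void
): Promise<void> {
  showLoader(); // feedback: the spinner is what creates the waiting tension
  try {
    const items = await fetchFeed(); // rule: nothing new is shown until this resolves
    renderFeed(items);
  } finally {
    hideLoader(); // the loader always goes away, whether the fetch succeeds or fails
  }
}
```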

Anticipation: Can be seen as occurring in two temporal phases: phase one, where the user expects something from the interaction, and phase two, where the user relates to what actually happens during it. One microinteraction that comes to mind is the scratch-card reward (Figure 9) in the Google Pay India app. This particular microinteraction was a huge success in India because people love to anticipate things even when their expectations are not met; when expectations are met, the experience is rewarding. Once a payment is done, users can go to the rewards section and tap a card to scratch it, and the whole experience creates anticipation in the user’s mind. Here the emphasis is on the trigger of the action: it could have been just a button click, but instead they designed a scratch gesture as the trigger to make it fun and engaging.

As we can see from the examples above, the experience can be controlled by adjusting the elements of the microinteraction’s structure. Microinteractions are small yet powerful because they are the details of the product, and details are the differentiators. Crafted well, they can take a product to the next level.

Can microinteractions be annoying?

We have looked at the power of well-crafted microinteractions in creating an aesthetic experience; now let’s explore what can go wrong when they are not crafted carefully. One example is the mobile notification. This is not to say notifications are poorly designed microinteractions; rather, they are designed so well that they now work against users’ will. Notifications have system-initiated triggers and show just enough information to lure users into tapping them. It happens to everyone: we only wanted to check the time, we saw a notification, tapped it, and there goes half an hour. It’s not just about lost time; it is more serious, because people also glance at notifications while driving and get distracted. Another example is Facebook’s famous like button. It is also a well-crafted microinteraction and became the signature moment for Facebook, but no one knew such a small interaction could have this big an impact on people’s lives. Instagram recently had to stop showing the number of likes on posts because of the negative effect on human behavior. These examples show how small things can have a big impact.

Microinteractions can also simply be annoying. For example, you fill out a form and, when you click submit, a dialog pops up (Figure 10) saying there are errors in the form. Here the trigger is the submit button, the rule is to show a dialog, and the feedback is static information: “there are validation errors.” The form of this microinteraction is wrong. If there are validation errors, they should be reflected next to the relevant text boxes; as it stands, the feedback is poorly communicated static information that is of no use to the user because it does not specify the place or type of error. So it is really important to consider the context of use and the construction of the microinteraction.
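
As a rough sketch of the alternative described above, assuming a hypothetical signup form (the field names and helpers are invented for illustration, not a specific framework’s API), the rule can attach each error to its field so the feedback appears next to the offending input rather than in a generic dialog:

```typescript
// Illustrative inline-validation rule: attach each error to its field
// instead of showing one generic "there are validation errors" dialog.

interface FieldError {
  field: string;   // which input the error belongs to
  message: string; // what exactly is wrong, shown next to that input
}

function validateSignupForm(values: { email: string; password: string }): FieldError[] {
  const errors: FieldError[] = [];
  if (!values.email.includes("@")) {
    errors.push({ field: "email", message: "Enter a valid email address" });
  }
  if (values.password.length < 8) {
    errors.push({ field: "password", message: "Password must be at least 8 characters" });
  }
  return errors;
}

function onSubmit(values: { email: string; password: string }): void {
  const errors = validateSignupForm(values);
  if (errors.length === 0) {
    console.log("Submitting form...");
    return;
  }
  // Feedback placed at the source of each problem, not in a modal dialog.
  for (const err of errors) {
    console.log(`Show next to the "${err.field}" field: ${err.message}`);
  }
}
```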

Conclusion

Throughout the paper, we explored what microinteractions are and how they are constructed to comply with the interaction model and bridge the gap between the gulf of execution and the gulf of evaluation. We then looked at the dynamics of aesthetic experience and how we can create those experiences at the micro level of a product with microinteractions. There are a few ways for designers to approach microinteractions. One is to adopt the bottom-up strategy: think of everything as small and as little as possible, and then build features from it. Another is to think of a whole product as a microinteraction; if you want to add another feature, think of it as another product built around a single microinteraction. I believe Google Pay was built this way: it started razor-focused on payments alone, and only later introduced rewards and a few other things. Microinteractions help build a brand and its user experience. The reason we love some products and hate others is the details. Details are one way of showing users that you care about and think about them, and that is why tiny things make a big difference.

References

[1] Saffer, D. (2013). Microinteractions: Designing with Detail. Sebastopol, CA: O’Reilly Media, Inc.

[2] Ashbrook, D. L. (2010). Enabling Mobile Microinteractions. PhD thesis, Georgia Institute of Technology.

[3] Janhagen, Leitch & Judelson (2020). In Search of Signature Moments. Idean and Capgemini Invent.

[4] Narvhus, J. M. (2016). How Can Design of Microinteractions Contribute to Increase Trust in Mobile Payments?

[5] Newell, A., & Card, S. K. (1985). The Prospects for Psychological Science in Human-Computer Interaction. Human-Computer Interaction.

[6] McCarthy, J., & Wright, P. (2004). A Pragmatist Approach to Technology as Experience. In Technology as Experience (pp. 62–65). Cambridge, MA: MIT Press.

[7] Petersen, M. G., Iversen, O. S., Krogh, P. G., & Ludvigsen, M. (2004). Aesthetic Interaction: A Pragmatist’s Aesthetics of Interactive Systems. DIS ’04.

[8] Norman, D. A. (2013). The Design of Everyday Things. MIT Press.
