How Hi-Fi Prototyping Improved My Design Mindset

Using Framer helped me learn to code and find a better way to validate the user experience.

Anton Kosarchyn
Framer

--

Editor’s note: We’ve made some big changes to Framer, and this article refers to a deprecated tool. Learn more about what Framer is today →

In this article, I’m going to describe the technical difficulties I faced during the prototyping process, the design decisions I made, and how they influenced my way of thinking. I’ll provide bits of code along the way, so experienced users can reuse them. If you’re not one of them — don’t worry, coding knowledge isn’t required. Enjoy! 😊

The backstory

This journey started about a year ago when my mate Martynas asked me to help with the design of his pet project — a mobile app called “WallTip”. The app lets you set various (or custom) motivational wallpapers on your phone’s lock screen.

The wallpapers in the app are organized into thematic albums like “Travel”, “Inspiration”, “Lifestyle”, etc. When you select a picture from an album, a carousel with tips surfaces. The app lets you change a tip’s size and position, and apply a blur and color overlay to the background image. Afterward, a preview of your future lock screen is available.

Play with the prototype:
P.S: use Safari or Framer Studio for the best experience

It was around this time that I started researching high-fidelity (Hi-Fi) prototyping tools, primarily because static mockups (or even low-fidelity prototyping tools) couldn’t satisfy my requirements:

  • Feel and feedback of the real application
  • Rich animations and interactions
  • Live data import
  • Easy prototype sharing
  • Quick iterative process

In the end, my choice was easy, because only a few tools are capable of doing everything I required.

I had previously worked with Axure. It’s powerful and limited at the same time: capable of really advanced stuff, but requiring a lot of hassle and workarounds.

When I tried Origami by Facebook, it looked a bit odd with all the nodes and patches, and it made me worry about maintenance difficulties at later stages.

So I ended up with Framer.

First steps

After playing around for a couple of hours, I finally decided to use Framer as my main prototyping (and later on, design) tool for this project.

My first goal was to turn my previously static Photoshop mockups into live Framer templates, which I could later populate with the data I wanted. Of course, I could just import my layers but I made it my mission to get acquainted with coding and the Framer API. So, laying out stuff looked like a perfect beginner task.

Using code, I was able to design the main parts of the app pretty quickly. It took me only about two or three evenings, and I consider that a colossal success for a beginner! I filled the albums with mocked-up images stored in a local folder. Later, to make it more lifelike, I decided to render a separate set of images for each album and pull them from the cloud dynamically (I used Unsplash for that).
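Just to illustrate the kind of layout code involved, here’s a minimal sketch of how such an album grid could be laid out and filled from Unsplash. The layer names, sizes, and URL pattern are my assumptions, not the project’s actual code:

# Hypothetical album grid: three columns of square thumbnails
album = new ScrollComponent
    width: Screen.width
    height: Screen.height
    scrollHorizontal: false

thumbSize = Screen.width / 3

for i in [0...12]
    thumb = new Layer
        parent: album.content
        width: thumbSize
        height: thumbSize
        x: (i % 3) * thumbSize
        y: Math.floor(i / 3) * thumbSize
        # Each request returns a random themed photo from Unsplash
        image: "https://source.unsplash.com/300x300/?travel&sig=#{i}"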

Here comes the surprise

When I previewed the album for the first time, it appeared to be empty. As I was figuring out what happened, the images started to emerge slowly, one by one.

Low bandwidth simulation, no image placeholders

I discovered… bandwidth.

Damn, in hindsight it seems so obvious that images need time to load, and that the time depends on their size and the connection bandwidth. But these problems had simply never occurred to me before. So from that point on, I tried to keep them in mind.

To improve this behavior, I decided to create some kind of animated preloader to show users that the images are in the process of loading. The preloader is a simple .gif animation, done in After Effects, which is applied as the “image” property of a placeholder.

class Picture extends Layer
    constructor: (options) ->
        super _.defaults options,
            image: "images/preloader_white.gif"
            style: backgroundSize: "35%"
            backgroundColor: "rgba(0,0,0,.2)"
Low bandwidth simulation, animated placeholders

So now, when an album is opened, placeholders are displayed by default to help users understand that something is about to happen while the images are loading.

But how do we know when the image is loaded and ready to be shown?

After some brief research, I found that developers usually rely on the browser’s built-in image.onload() handler. Basically, this is a piece of code which gets fired once the image object we created is fully loaded and ready to be displayed.

We can attach a callback there which replaces preloader_white.gif with the actual image URL once it has been loaded into memory.

# This creates an empty Image object
img = new Image()

# This renders the picture when it's fully loaded in memory
img.onload = ->
    picture.style = backgroundSize: "cover"
    picture.image = img.src

# Here we assign image url
img.src = "source.unsplash.com/id"
Pictures replace placeholders on album open

This trick came in handy later when I decided to optimize my prototype. Initially, I used pretty large images even for the small previews, but the prototype’s performance and the image load time significantly impacted the UX. So I decided to load smaller images by default and replace them with larger copies once a user tapped on them.
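Roughly sketched, the swap looks like this (thumb, smallURL, and largeURL are placeholder names, not the actual project code):

# Show the small copy immediately
thumb.image = smallURL

# On tap, load the large copy in memory and swap it in once it's ready
thumb.onTap ->
    large = new Image()
    large.onload = ->
        thumb.style = backgroundSize: "cover"
        thumb.image = large.src
    large.src = largeURL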

That’s why I think tools such as Framer are important:
Because they allow designers to work in a real environment that helps them find, understand and fix user experience problems through design solutions.

Tip’s drag interaction

When I approached the design, I envisioned that users would be able to change a tip’s size and vertical position on the screen, and switch between tips by swiping to the sides. Once I implemented the tips’ basic behavior and tested how it worked, I immediately saw gaps in my design.

The first issue to arise was text alignment. When a user increases the size of a tip, its bounding box grows downward by default. This becomes a problem when the tip is located at the bottom of the screen — at some point it can go beyond the screen bounds. To prevent this, I decided to make the tip’s positioning relative.

For example, on resize:

  • If positioned at the lower 3rd of the screen, it’ll grow upward
  • If at the upper 3rd — downward (default behavior)
  • If somewhere in between, the tip will grow in both directions

It’s done by finding the highest tip of all and aligning the remaining tips with its top, middle, or bottom side. I also change the position of a small pagination indicator as the user drags or resizes a tip: when it gets closer to the bottom edge, it jumps to the top of the tip.

# Function to align the other tips depending on this tip's position
tip.alignOtherTips = ->
    for tip in tips
        # Tip is at the top 3rd
        if this.midY < screen.height / 3
            tip.y = this.y
        # Tip is at the bottom 3rd
        else if this.midY > screen.height / 3 * 2
            tip.maxY = this.maxY
        # Tip is at the middle
        else
            tip.midY = this.midY

# Align the label and tips on drag
tip.onDrag ->
    pageIndex.alignSelf(highestTip)
    this.alignOtherTips()
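The prototype’s pageIndex.alignSelf isn’t shown above, but conceptually it could look something like this sketch, where the 100pt threshold and the offsets are my assumptions:

# Keep the pagination indicator below the tip, unless the tip is near
# the bottom edge of the screen, in which case jump above it
pageIndex.alignSelf = (tip) ->
    if Screen.height - tip.maxY < 100
        @maxY = tip.y - 10
    else
        @y = tip.maxY + 10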

Tip’s color picking

Soon I discovered that the underlying images have spots of different lightness. In simple terms, an image may contain a landscape with a dark bottom part (earth) and a light top (sky), so the tip’s color should adapt to the different background lightness as it’s dragged around. After some research, I found a solution by George Kedenburg called “magic text”. I slightly modified it to use more points for sampling.

Text color adapts to background brightness

It uses an HTML canvas element to obtain the RGB values of the image’s pixels within the selected area, converts RGB to HSL, and calculates the average lightness. Then it compares the lightness of the area to a predefined threshold, let’s say 0.6 (where 0 is black and 1 is white), and changes the tip’s color accordingly.

findBgLightness = (layer) ->
    # Define variables
    x = layer.screenFrame.x
    y = layer.y
    width = layer.width
    height = layer.height
    pixels = width * height
    luma = 0

    # Read the color data of the selected area from the canvas context "c"
    # that holds the background image
    imageData = c.getImageData(x, y, width, height).data

    ###
    Loop through the imageData array by 4, where the first 3 values
    are the r, g, b values of a single pixel of the image
    ###
    for d, i in imageData by 4
        r = imageData[i] * .2126
        g = imageData[i+1] * .7152
        b = imageData[i+2] * .0722
        luma += (r + g + b) / 255 # luma value of a single pixel

    # Average lightness of the whole area
    avgLightness = luma / pixels
    return avgLightness
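With the average lightness in hand, the threshold comparison described above could be wired up roughly like this, assuming the tip is a TextLayer and using onMove as the trigger:

# Re-check the background whenever the tip is dragged to a new position
tip.onMove ->
    if findBgLightness(tip) > 0.6
        tip.color = "black" # light background, dark text
    else
        tip.color = "white" # dark background, light text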

Also, I used a luma-weighted calculation instead of plain lightness because it’s a little bit closer to what I perceive as light and dark. You can find out more about luma in this article.

My tests showed that this solution really improves text readability over images with more or less plain color areas. But it can’t help if an image is heavy on small details, which make the text above it hard to read regardless of its color. For those cases, I decided to simply add color overlay and blur functions for the image, so if a user finds the picture below the text too noisy, he or she can turn this feature on.

Blur and color overlay over the complex image
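In Framer terms, that toggle can be sketched in a few lines. The function name, blur radius, and opacity below are assumptions rather than the project’s actual code:

toggleNoiseReduction = (bgImage, overlayColor) ->
    # Soften the busy details of the photo
    bgImage.blur = 12
    # Dim it with a tinted overlay (the tint comes from the palette below)
    overlay = new Layer
        parent: bgImage
        width: bgImage.width
        height: bgImage.height
        backgroundColor: overlayColor
        opacity: 0.4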

Interface color picking

As I played with the image overlay functionality, I noticed that an overlay with a bit of color tint looks more interesting than plain transparent black. It kinda ties the room (…sorry, wallpaper) together, if you know what I mean. But then I thought: why use color for the overlay only? It could bring some tint to the whole interface, and that was worth a try.

So I decided to use two colors:

  • a dark, muted color for the overlay and backgrounds
  • a light, vibrant color for accents and active elements

Since I pick a color dynamically from the image once it loads, I decided to use the Vibrant.js library, a JavaScript port of the native Android Palette class. It processes a given image and returns one dominant color or a palette of 5–6 colors to choose from. All colors within the palette are grouped as Vibrant and Muted, and each group has 3 colors — light, dark, and something in between. Awesome! 😍

# Custom function to obtain vibrant palette
getPalette = (url, callback) ->
    # Load the image before passing it to the library
    img = new Image()
    img.crossOrigin = "anonymous"

    img.onload = ->
        # Create new swatch from the image
        vibrant = new Vibrant(img)
        swatches = vibrant.swatches()

        # Pass the swatches to the callback (provided on function call)
        callback(swatches)

    img.src = getCORSImage(url)
Interface uses colors from the picture

After a small test, I discovered that the DarkMuted and Vibrant colors work best together, so I used them. The only issue I found was that sometimes these two colors don’t have enough contrast between them, so the simplest solution was to adjust them manually. For this purpose, I converted them from RGB to HSL and cranked up the L (lightness) and S (saturation) of the Vibrant color, and did the opposite for the DarkMuted color. Here’s an example:

# Here we provide the callback (what we're going to do with swatches)
getPalette url, (swatches) ->
    # Define primary color
    if swatches["Vibrant"]?
        priClr = "rgb(#{swatches["Vibrant"]?.rgb.join(",")})"
        priClr = new Color priClr

        # Convert rgb > hsl, make adjustments, rewrite color
        priClr = priClr.toHsl()
        priClr.l = .6; priClr.s = .8
        priClr = new Color priClr

    # Provide fallback if swatch doesn't exist, eg b/w images
    else
        priClr = new Color "white"

This isn’t a perfect solution, because when cranked up, vibrant colors sometimes look “too vibrant” in certain hue ranges. The reason, I guess, is the nonlinear color perception of the human eye. Still, this solution works for 80% of cases, which is good enough for a prototype.

P.S: If you have a better solution, please let me know in comments 😊

Lessons learned:
Sometimes variables like connection bandwidth, image lightness, and image complexity can have a significant impact on UX in ways you wouldn’t usually expect. Try to prototype in a native environment with live data, change those variables, and see how they affect the outcome. This will help you build a smooth and well-thought-out UX. The devil is in the details, as you may know…

Conclusion

That was just a small part of the design and prototyping story for the WallTip app. I tested this prototype with a couple of friends and proceeded to the feedback-gathering stage. This is a crucial part of the process, as it allows the product to be adjusted toward a really great UX.

So what are the outcomes?

  • Trained myself not to give up on (seemingly) hard stuff
  • Improved myself as a designer (or developer, who knows?)
  • Learned a bunch of new tricks
  • Shared my experience
  • Had a lot of fun :)

And what’s most important:
This experience has forced me to think about “how it’s going to work” and “what kind of experience a user will have”, rather than “how it looks and acts”.

Do you dare to try? 😎

PS: If you liked the article, please ❤️ my shot on Dribbble 😋 Thanks!

--


Anton Kosarchyn
Framer

A guy next door, passionate about design, music, and new experiences