Fun With Images, Colours, and Dark Mode in iOS

Style your app in code using only pre-defined colours while maintaining proper trait support

Kevin R
Better Programming

--

Making good use of XCAssets already helps us plenty with organising and maintaining our colours and images, and helps us automatically respond to changing interface traits such as dark mode, or state such as components being disabled or highlighted.

But then the number of components grows, or the German localisation needs a different colour tint for some reason (designers, am I right?), and for users with impaired eyesight, you’d like to provide higher-contrast colours. This can add up to quite a few combinations that may each require different images. But the colours used in those images should already be defined in the same asset catalog, right? Wouldn’t it be cool to just define the colours and let the app do the rest, instead of providing all those images?

Let’s find out what it takes to programmatically create a UIButton that uses images for styling and adapts properly to changing traits, such as dark mode, all without any predefined images.

The Setup

For the sake of this explanation, let’s say you’re going all-in and want to do everything in code, even defining the colours. This also makes it easier to see how each piece works.

Let’s say the result we want to accomplish is this button style:

Button design: example on top, desired reference image on the bottom, which we would normally put in our assets catalog.

So the background consists of a solid colour, a border, and rounded corners.

For the background and border, we’ll want to define a colour that’s really more than one colour. In an asset catalog, we could just change the Appearances option to Any, Dark and add an extra resource.

The information we need to determine the properties of our environment is available as part of the UITraitCollection. This class contains everything needed to understand the device and its configuration. You can fetch it as a property from any UIView or UIViewController instance, but you can also use it to make colours and images dynamic. That’s what we’re going to do.

In our case, we’ll need to define our colours first. There’s an initialiser available to create a dynamic colour; it takes a closure that lets you return a colour that fits the situation:
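A minimal sketch of such dynamic colours using UIColor’s dynamicProvider initialiser; the colour values and names here are placeholders for your own palette:

```swift
import UIKit

extension UIColor {
    // Background colour that resolves differently per interface style.
    static let buttonBackground = UIColor { traitCollection in
        switch traitCollection.userInterfaceStyle {
        case .dark:
            return UIColor(white: 0.15, alpha: 1)
        default:
            // .light and .unspecified both fall back to the light variant.
            return UIColor(white: 0.95, alpha: 1)
        }
    }

    // Border colour, following the same pattern.
    static let buttonBorder = UIColor { traitCollection in
        traitCollection.userInterfaceStyle == .dark
            ? UIColor.systemTeal
            : UIColor.systemBlue
    }
}
```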

Note that the UIUserInterfaceStyle enum has three cases: .unspecified, .light, and .dark. Usually, you’d want to fall back to the light variant when the style is .unspecified.

There are many more properties available as part of the UITraitCollection, such as size class, display gamut, content sizes used for font scaling, etc. You could make as many combinations as you’d like.

Next up, we need to create images for a specific state. Let’s create some helper functions to build an image for a specific state that fits our requirements:
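A sketch of such a helper, assuming the sizes, radius, and border width shown here; the name and exact parameters are illustrative:

```swift
import UIKit

// Renders a rounded, bordered background image filled with a solid colour.
func makeBackgroundImage(fill: UIColor,
                         border: UIColor,
                         cornerRadius: CGFloat = 8,
                         borderWidth: CGFloat = 1) -> UIImage {
    // The smallest image that still stretches correctly:
    // both corners plus a 1-point stretchable centre.
    let side = cornerRadius * 2 + 1
    let size = CGSize(width: side, height: side)
    let renderer = UIGraphicsImageRenderer(size: size)

    let image = renderer.image { _ in
        // Inset so the stroke isn't clipped at the edges.
        let rect = CGRect(origin: .zero, size: size)
            .insetBy(dx: borderWidth / 2, dy: borderWidth / 2)
        let path = UIBezierPath(roundedRect: rect, cornerRadius: cornerRadius)

        // Fill with the background colour, then stroke the border.
        fill.setFill()
        path.fill()
        border.setStroke()
        path.lineWidth = borderWidth
        path.stroke()
    }

    // Mark the centre as stretchable so the button can resize freely.
    return image.resizableImage(withCapInsets: UIEdgeInsets(
        top: cornerRadius, left: cornerRadius,
        bottom: cornerRadius, right: cornerRadius))
}
```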

The functions take some shortcuts, but they fit our needs well. In short, they:

  • Check how big the image should be, depending on the corner radius
  • Generate an image of that size, filled with the background colour
  • Cut the corners to the correct radius and add a border

To generate an image we can use for our button, we can call it by using:
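For example, assuming the helper and dynamic colours are named as in the sketches above (these names are illustrative), we can resolve the colours for a specific trait collection and render:

```swift
import UIKit

// Resolve the dynamic colours for explicit light-mode traits, then render.
let lightTraits = UITraitCollection(userInterfaceStyle: .light)
let lightImage = makeBackgroundImage(
    fill: UIColor.buttonBackground.resolvedColor(with: lightTraits),
    border: UIColor.buttonBorder.resolvedColor(with: lightTraits))
```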

The image looks fine, but it isn’t very dynamic yet: it doesn’t adapt to any traits. To change this, we can use a UIImageAsset and register the different versions of the image. In our case, we have two images: one for light mode and one for dark mode. Let’s create an asset and register the different versions:
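A sketch of the registration, assuming the light and dark renderings from the earlier helper are called lightImage and darkImage (hypothetical names):

```swift
import UIKit

let asset = UIImageAsset()
let scale = UIScreen.main.scale

// Register a variant per interface style. The displayScale must be set
// explicitly, or the image is treated as 1x on @2x/@3x devices.
asset.register(lightImage, with: UITraitCollection(traitsFrom: [
    UITraitCollection(userInterfaceStyle: .light),
    UITraitCollection(displayScale: scale)
]))
asset.register(darkImage, with: UITraitCollection(traitsFrom: [
    UITraitCollection(userInterfaceStyle: .dark),
    UITraitCollection(displayScale: scale)
]))

// Asking for the current traits still yields a trait-aware image,
// because the returned image keeps a reference to its asset.
let buttonImage = asset.image(with: .current)
```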

Some things you might have noticed:

First, the displayScale is specifically set for each set of traits. If you don’t do this, a default of 1 seems to be used, which causes the image to be rendered incorrectly on @2x and @3x devices.

Next, we’re asking the set for an image of the .current traits, which doesn’t seem very dynamic. But the code documentation tells us why this works:

Images returned hold a strong reference to the asset that created them

This means the resulting image has a property called imageAsset, which can be queried to fetch another version of the same image. You can also use this to double-check the other image versions are really there:
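For instance, assuming the image returned from the asset is called buttonImage (a hypothetical name), you can ask its imageAsset for the dark variant:

```swift
import UIKit

// Fetch the dark variant back through the image's own asset reference.
let darkTraits = UITraitCollection(traitsFrom: [
    UITraitCollection(userInterfaceStyle: .dark),
    UITraitCollection(displayScale: UIScreen.main.scale)
])
let darkVariant = buttonImage.imageAsset?.image(with: darkTraits)
```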

And that’s it! To check out all the parts together, you can view this gist. You can simply place it into a playground to check the results.

What's Next?

Although the theory is very interesting and could work for a simple app, I wouldn’t recommend using it directly like this. The setup only renders a single image scale. Also, you either have to generate the images each time or keep them in memory, which becomes problematic as the number of images (and combinations of images) grows.

If you do want to use this approach, one way to solve these issues is to move the generation into a build phase. That way, the images are generated at compile time and can be referenced just like “normal” images.

Thanks for reading!


Kevin R
Better Programming

iOS Developer, Swift enthusiast. Working for Team Rockstars IT in the Netherlands