A deep dive into Flutter’s accessibility widgets

Muhammed Salih Guler
Published in Flutter Community · Jan 14, 2019 · 11 min read

Many mobile users have visual, physical or age-related limitations that can prevent them from seeing or using a touch screen, and others have hearing loss that leaves them unable to hear notifications and other audible alerts. According to the World Health Organization, more than a billion people are living with some form of disability, and 110–190 million of them are not able to complete their daily tasks because of the challenges they face (source). Technology can have an incredible, positive impact on these people’s lives if designed properly. It can empower those facing these challenges, enabling them to be more productive and independent.

Not all mobile users are going to interact with your application in the same ways. Therefore, it’s important to make sure your application is accessible to everyone. Implementing accessibility correctly can enhance the quality of your application, increase your number of installs and strongly impact how long your users are going to stick with you.

Here, we’re going to take a look at Flutter’s accessibility widgets and how they function.

How do you make a Flutter application accessible?

Flutter has three components that provide accessibility support:

Large Fonts

As some people age, their eyes don’t always focus as well as they used to, and many people are born with less than perfect vision. They’ll often have problems reading text at the default size many of us take for granted. Because this is such a widespread condition, affecting a billion people or more, one of the most important things to check is that your text scales properly when a user has enabled text scaling in their accessibility options.

In Flutter, text size calculations are handled automatically. The Text widget has a property called textScaleFactor and, put simply, the given font size gets multiplied by the textScaleFactor to determine the font size that is actually rendered on the screen in logical pixels.

So, if you wanted the text to be 150% of its normal size, you’d set your textScaleFactor to 1.5.
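As a minimal sketch (the widget name and font size here are just for illustration):

```dart
import 'package:flutter/material.dart';

// The second Text renders at 16.0 * 1.5 = 24.0 logical pixels.
// Note: hard-coding textScaleFactor overrides the user's own
// accessibility setting, as discussed below.
Widget buildScaledTexts() {
  return Column(
    children: const <Widget>[
      Text('Hello, Flutter!', style: TextStyle(fontSize: 16.0)),
      Text('Hello, Flutter!',
          style: TextStyle(fontSize: 16.0), textScaleFactor: 1.5),
    ],
  );
}
```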

One thing to keep in mind is that if you manually set the textScaleFactor, the automatic text size calculation from the user’s accessibility settings is overridden, so those settings won’t have any effect anymore. If this causes a large enough problem for the user, they’re probably just going to uninstall your app.

If you don’t assign any value, the property falls back to MediaQueryData.textScaleFactor for the related context or, if there is no MediaQuery in the context, to 1.0, which will not affect the size of the text.

But making the text scalable is not enough. If you don’t anticipate your users enabling larger text when you create your layout, the text might get cut off and end up causing the user more problems than if they hadn’t used accessibility at all. This is why it’s important to always double-check that your text displays properly at all accessibility settings.

Sufficient Contrast

When implementing an application interface, we should specify background and foreground colors with sufficient color contrast.

A “contrast ratio” is a measure of the difference in luminance between two neighboring colors, such as a text color and its background; it indicates how legible an interface remains even when viewed on a device in extreme lighting conditions. This ratio ranges from 1 to 21 (often written as 1:1 to 21:1), where increasing numbers mean higher contrast. There are many tools available for computing the contrast ratio of two neighboring colors, such as this color contrast ratio calculator.

The W3C recommends:

  • At least 4.5:1 for small text (below 18 point regular or 14 point bold)
  • At least 3.0:1 for large text (18 point and above regular or 14 point and above bold)
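If you want to check contrast in code, here’s a minimal sketch using Flutter’s Color.computeLuminance(), which implements the WCAG relative-luminance formula; the contrastRatio helper itself is my own, not a framework API:

```dart
import 'dart:math' as math;
import 'package:flutter/material.dart';

/// Contrast ratio of two colors, from 1.0 to 21.0, following the
/// WCAG formula (lighter + 0.05) / (darker + 0.05).
double contrastRatio(Color a, Color b) {
  final la = a.computeLuminance(); // WCAG relative luminance
  final lb = b.computeLuminance();
  return (math.max(la, lb) + 0.05) / (math.min(la, lb) + 0.05);
}

void main() {
  // White text on a material blue background.
  debugPrint(contrastRatio(Colors.white, Colors.blue).toStringAsFixed(2));
}
```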

Screen Readers

Screen readers are essential to enable the visually impaired to use your apps, and just about any other software.

For Android, Google has included a screen reader called TalkBack. With TalkBack, users perform inputs by using gestures (e.g. swiping) or an external keyboard. Each action performed by the user triggers an audible output to let the user know their gesture was successful. It can also read text for the user, who only has to touch a paragraph for TalkBack to begin reading it.

TalkBack can be turned on simply by pushing both volume buttons on the device for 3 seconds. It can also be toggled in the settings.

For iOS, Apple has a screen reader called VoiceOver. With VoiceOver, just like Talkback, users perform inputs with gestures. Like TalkBack, each action results in an audible acknowledgement of the gesture. VoiceOver can be turned on by clicking the home button three times (but you need to add VoiceOver to accessibility shortcuts first), or you can toggle it in settings.

Now that we have a screen reader we can use, let’s see what happens when we run a Flutter app and check it out. Since we get a sample app as soon as we create a new project in Flutter, we don’t need to write our own in order to try out the screen reader. One thing to keep in mind is you need to do this on a real device, an emulator isn’t going to work.

Enable the screen reader on your device and start up the default app. You’ll see the screen reader is working out of the box.

Let’s take a look at how it works.

Flutter offers us several accessibility widgets, allowing us to create a highly accessible application for all users. The first one we’ll look at here is Semantics. Semantics annotates the widget tree with a description of its child. You can add annotations to tell a visually impaired user all kinds of things, such as what the text is or whether a button is selected, and you can even tell the user what something will do when tapped or long pressed by using the onTap and onLongPress hints.

So, when you want to have a description of a widget, you can wrap it with a Semantics widget. And that’s the secret behind why the screen reader was able to read our sample application.

AppBar source code snippet
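That snippet is an image that doesn’t survive here; simplified and paraphrased from the Flutter SDK (not the exact source), the relevant part of AppBar’s build method looks roughly like this:

```dart
// Inside AppBar's build method (simplified): the title widget is
// wrapped in Semantics before being laid out in the toolbar.
if (widget.title != null) {
  title = Semantics(
    namesRoute: true, // the title also names the current route
    header: true,     // announced as a header by screen readers
    child: widget.title,
  );
}
```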

If we check out the code above, we can see that, if we have a title, it will be wrapped with a Semantics widget. Boom, surprise! Accessibility support like this is already implemented in most Flutter widgets. If we simply delete the Semantics from the source code and re-run the application, we will see that TalkBack doesn’t read the AppBar title anymore. Pretty cool, isn’t it?

But what else can this bad boy do? Let’s keep digging and find out how Semantics works.

When we create a widget tree, Flutter also creates a Semantics tree made of SemanticsNodes. Each node describes its corresponding widget to the screen reader. It can also have custom or predefined actions, from SemanticsAction.

I know, I know. So far, Semantics sounds pretty cool, but how can we create one? Let’s start by checking out the constructor:
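The constructor screenshot is missing here; abbreviated, the default constructor looks something like this in current Flutter (only a handful of its many named parameters shown):

```dart
Semantics({
  Key? key,
  Widget? child,
  bool container = false,
  bool explicitChildNodes = false,
  bool? enabled,
  bool? checked,
  bool? selected,
  bool? button,
  bool? header,
  String? label,
  String? value,
  String? hint,
  VoidCallback? onTap,
  VoidCallback? onLongPress,
  // ...and many more properties.
});
```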

You can see the standard constructor for Semantics adds a lot of properties on top of its base class, SingleChildRenderObjectWidget. Its other constructor, Semantics.fromProperties, requires a SemanticsProperties object called “properties”. According to the docs, if you want to make your Semantics object constant then this is the way to go.

Properties in the SemanticsProperties class are used to generate a SemanticsNode in the tree; but we’ll get back to that later.

We want to take the time to really understand these properties because that’s what’s going to allow us to most effectively implement accessibility in our apps and create the best experiences for our users.

Let’s check out the overview below. Keep in mind that these properties are null by default. The explanations have been written in a way that I hope is easily understood by everyone.
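The original table doesn’t survive in this version of the post, so here is an annotated sketch of some of the most commonly used properties instead (the strings and callbacks are just illustrative):

```dart
import 'package:flutter/material.dart';

Widget buildAnnotatedIcon() {
  return Semantics(
    label: 'Profile picture',   // what the widget is
    value: '50%',               // its current value, e.g. for a slider
    hint: 'Double tap to open', // what will happen on interaction
    enabled: true,              // whether it responds to input right now
    selected: false,            // whether it is currently selected
    button: true,               // announce the widget as a button
    onTap: () {},               // action for the tap gesture
    onLongPress: () {},         // action for the long-press gesture
    child: const Icon(Icons.person),
  );
}
```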

As you see, it gives us a lot of ways to describe the related widget. Let’s take an example from the Flutter SDK of how the Flutter team used SemanticsProperties.

Here are the Semantics of a ListTile widget. You can think of the ListTile as a List Item; it’s an individual item inside of a list like a single tweet in Twitter’s home screen.
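The screenshot doesn’t survive here; simplified and paraphrased from the Flutter SDK, the relevant part of ListTile’s build method is along these lines:

```dart
// Inside ListTile's build method (simplified): the tile's content
// is wrapped in Semantics using the flags from the constructor.
return Semantics(
  enabled: enabled,
  selected: selected,
  child: tileContent, // the row with leading, title, subtitle, trailing
);
```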

Let’s go over this and see what it does for our users. First of all, we can see that we don’t need to create a SemanticsProperties object separately (although we could, by using the fromProperties named constructor of the Semantics class). We can also pass some status information about the widget while creating it: the enabled and selected flags are set from the values defined in the widget’s constructor. If we create a ListTile now, the screen reader will read the text inside of it aloud, and tell us if it’s enabled or disabled / selected or not selected.

We can dynamically set the values for each ListTile in order to create custom semantics for each individual tile:
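The original snippet is also an image; a reconstruction consistent with the walkthrough below (titles and flags inferred from the screen-reader output described next) would be:

```dart
import 'package:flutter/material.dart';

// Five tiles: only the second (index 1) is enabled, and only the
// first (index 0) is selected.
Widget buildTiles() {
  return ListView.builder(
    itemCount: 5,
    itemBuilder: (BuildContext context, int index) {
      return ListTile(
        title: Text('Main title for $index item'),
        subtitle: Text('Sub title for $index item'),
        enabled: index == 1,
        selected: index == 0,
        onTap: () {},
      );
    },
  );
}
```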

The code snippet above creates a list view with 5 elements, disables all of them except the second element, and sets the selected state of the first element to true.

When we run the app with the screen reader enabled (I will use Android’s TalkBack here), it will say: “Selected main title for 0 item, sub title for 0 item disabled”. As you see, this gives the user the information we provided about each item. But of course, we need to test the other cases to be sure it is working.

If we tap once on the second item, we will hear: “Main title for 1 item, sub title for 1 item”. Since our second element is enabled and not selected, we can be sure it is working correctly too. With TalkBack, a single tap makes the screen reader read the item aloud, but it takes a double tap to actually trigger onTap.

Let’s tap the third item to test the last case. We will now hear “Main title for 2 item, sub title for 2 item disabled”. Since it is not selected and it is disabled, this checks out too, and we can be sure it is working correctly.

Now that we have a basic understanding of Semantics and how to create them, let’s take it up a notch. But, before doing that, let’s learn about the concept that we pinned earlier: SemanticsNode.

As we stated above, when we create our widget tree we create a Semantics tree along with it, and this tree is what gets used by screen readers. In the programming world, a tree is a data structure consisting of nodes and leaves.

In our case, the SemanticsNodes will be our nodes. Each SemanticsNode represents semantic data; a node might cover the semantic data for one or several widgets. Each SemanticsNode has values that can be triggered by a SemanticsAction. E.g., SemanticsProperties has parameters called increasedValue and decreasedValue for the increase and decrease actions. A node also has a key for identifying it in the list of nodes; these keys are used while the tree is rebuilt to find the correct node again. There is also an id value for identification. E.g., the id is 0 for the root node, and this value is auto-generated as we create child nodes.

Besides this, we can also find out information about a node and its relationship with other nodes. We can check if it’s merging with other nodes at any given moment with the isPartOfNodeMerging flag. Or, we can check if it’s already been merged with isMergedIntoParent. If one widget has multiple children that each have their own node, we can use mergeAllDescendantsIntoThisNode to merge all of those nodes into a single node.
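A handy way to actually see these nodes (my own suggestion, not from the original post) is Flutter’s built-in semantics debugger, which paints what the semantics tree reports on top of your UI:

```dart
import 'package:flutter/material.dart';

void main() {
  runApp(MaterialApp(
    // Paints the contents of the semantics tree on top of the UI,
    // so you can inspect nodes and merges without a screen reader.
    showSemanticsDebugger: true,
    home: const Scaffold(
      body: Center(child: Text('Hello semantics')),
    ),
  ));
}
```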

Now that we have a better understanding of the SemanticsNode, SemanticsProperties and Semantics, we can create our own custom Semantics.
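Again, the original snippet doesn’t survive here; below is a reconstruction consistent with the description that follows (the SnackBar wiring via ScaffoldMessenger is an assumption on my part):

```dart
import 'package:flutter/material.dart';

Widget buildBoxes() {
  return ListView.builder(
    itemCount: 5,
    itemBuilder: (BuildContext context, int index) {
      return Semantics(
        label: 'Container with 200 width 200 height and red background',
        enabled: index == 1,
        selected: index == 0,
        onTap: () {
          // Triggered by a double tap when a screen reader is running.
          ScaffoldMessenger.of(context).showSnackBar(
            SnackBar(content: Text('Item $index Clicked!')),
          );
          debugPrint('onTap triggered for item $index');
        },
        onScrollDown: () {
          // Triggered by the scroll-down gesture of the screen reader.
          debugPrint('onScrollDown triggered for item $index');
        },
        child: Container(
          width: 200.0,
          height: 200.0,
          color: Colors.red,
          margin: const EdgeInsets.all(8.0),
        ),
      );
    },
  );
}
```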

With the code above, we’re using the semantics label to describe each Container that is being used in the ListView. Each is a red box with 200 height and 200 width. We’ll keep the enabled and selected values from the example before, but we’ll add more controls: an onTap callback for double taps and an onScrollDown callback to test gestures. The application will show us a Snackbar that says: “Item <related position> Clicked!”. When onTap is triggered, or when you scroll down (by swiping first left then right on Android), it will also create a log entry showing the callback was triggered.

So far it’s been super cool to see how all this works, but as we’ve gone deeper we’ve come up with more and more questions. What happens when we want to merge multiple semantics into one, or when we don’t want to expose certain semantics information to the user?

Don’t worry about it, Flutter’s got you covered. You can merge the semantics of your widget’s descendants with MergeSemantics, and you can even exclude some of them by using ExcludeSemantics. In addition to these, Flutter comes with even more semantics widgets, such as BlockSemantics and IndexedSemantics. Let’s check them out.

For that, I want to extend the example we’ve been using:
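The extended snippet is missing here as well; a reconstruction consistent with the screen-reader output described below would be:

```dart
import 'package:flutter/material.dart';

Widget buildMergedBoxes() {
  return ListView.builder(
    itemCount: 5,
    itemBuilder: (BuildContext context, int index) {
      // MergeSemantics combines all descendant semantics into one node.
      return MergeSemantics(
        child: Semantics(
          label: 'Container with 200 width 200 height and red background',
          enabled: index == 1,
          selected: index == 0,
          child: Container(
            width: 200.0,
            height: 200.0,
            color: Colors.red,
            child: Column(
              children: <Widget>[
                Text('First inside text of item $index'),
                // BlockSemantics drops the semantics of widgets painted
                // before it, so the first Text above is omitted.
                BlockSemantics(
                  child: Text('Second inside text of item $index'),
                ),
                // ExcludeSemantics removes its child from the tree entirely.
                ExcludeSemantics(
                  child: Text('Third inside text of item $index'),
                ),
                Text('Fourth inside text of item $index'),
              ],
            ),
          ),
        ),
      );
    },
  );
}
```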

We’ve changed the code a little bit. We added MergeSemantics as our root. This means it merges all the available child semantics into one, and the screen reader will handle all of them at once.

Also, we put a Column with four children inside of our Container. On the second child of the Column, you can see we have used BlockSemantics. Therefore, the widgets painted before this node will be omitted and not read by screen readers.

The third child of the Column is wrapped in ExcludeSemantics. The child widget of this semantics widget will not be part of the semantics tree at all.

Let’s run the application and click the first element. The screen reader should say, “Selected Container with 200 width 200 height and red background second inside text of item 0 fourth inside text of item 0 disabled.” As you can see, it gathered all of the semantics into one, while excluding the child that we do not want to share.

We’re still missing one Semantics widget that we talked about: IndexedSemantics. IndexedSemantics helps us keep track of which children are relevant to accessibility screen readers. For example, a ListView creates an IndexedSemantics for each individual element. But in a ListView, we might have some elements that serve no purpose beyond visual presentation, e.g. dividers. To prevent the dividers from being read to the user, we might use IndexedSemantics like this:
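The snippet is missing here; a sketch along the lines of the IndexedSemantics documentation:

```dart
import 'package:flutter/material.dart';

// Only the two IndexedSemantics children count as list items; the
// dividers are invisible to accessibility tools.
Widget buildListWithDividers() {
  return ListView(
    semanticChildCount: 2,
    children: const <Widget>[
      IndexedSemantics(index: 0, child: Text('First item')),
      Divider(),
      IndexedSemantics(index: 1, child: Text('Second item')),
      Divider(),
    ],
  );
}
```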

For this example, accessibility tools will only consider the elements with IndexedSemantics while they read it out loud or jump between elements.

Conclusion

Accessibility is an important topic that should never be neglected. We should always take it into account, ensuring that we add accessibility to our applications and make them available to everyone who uses a smartphone. We can make a lot of people’s lives easier with just a little extra effort.

Since the Flutter team has already implemented Semantics in most widgets, that makes it much easier for us. But, when we create something custom, we should always be sure to add Semantics for it.

Remember, every person deserves to be able to use your application; so help them to use it!

Thank you and see you soon!

P.S. If you want to discuss this please send me a DM on Twitter or leave a comment here.

Also, I would like to thank Norbert for proofreading and Scott Stoll for spending a lot of time helping me out with proper English grammar :)
