Computers Are Hard: representing alphabets with Bianca Berning

Wojtek Borowicz
Sep 27, 2020 · 6 min read

Provided you’re storing and processing text in Unicode, it just works.

Illustration showing a keyboard connected to a monitor with characters from the Japanese alphabet on display.
Representing alphabets with Bianca Berning. Illustration by Gabi Krakowska.

If you look deep enough, all data on our computers is binary code. Strings of 0s and 1s are the only language that processors understand. But since humans are not nearly as adept at reading binary, there are multiple layers of translation that data goes through before it’s presented to us in a legible way. We rarely think about it, but it’s a herculean task.

Unlike computers, we don’t have a unified way of communicating. We speak thousands of languages, written in hundreds of scripts. We write equations and use emoji. Many engineers spent years trying to untangle the messiness of language for the purpose of representing it in software and then they spent some more years trying to agree on a common approach.

To find out more about what goes into representing languages and alphabets in software, I reached out to Bianca Berning, an engineer, designer, and Creative Director at the type design studio Dalton Maag. We talked about the history of encoding standards, how fonts come to be, and how much work it takes to create one. We also touched on tofu, but not the edible kind.

Character encodings assign numeric values to characters. They’re used because digital data is inherently numeric so anything which isn’t a number needs to be mapped to one. There have been many competing standards for encoding characters over the past hundred years, but ASCII and Unicode are the most well known.
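As a small sketch of that mapping, Python exposes the character-to-number correspondence directly through the built-ins `ord` and `chr`:

```python
# ord() maps a character to its numeric code point; chr() is the inverse.
print(ord("A"))   # 65, the same value ASCII assigns to 'A'
print(ord("€"))   # 8364 (U+20AC), far beyond any 8-bit encoding
print(chr(65))    # A
```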

ASCII, first published in 1963, used seven bits per character, meaning it could encode only 128 different characters. That is just about acceptable for English, but it struggles to faithfully represent the alphabets of other languages. Straightforward extensions of ASCII, using eight bits for 256 different characters, appeared rapidly, but they were many and incompatible. Each language (or, at best, group of languages) needed its own encoding, as 256 characters is still not enough.
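A quick illustration of that limitation: Python’s `str.encode` refuses any character outside ASCII’s 128-character range.

```python
text = "café"

# 'é' (U+00E9) has no place among ASCII's 128 code points, so strict
# ASCII encoding fails outright.
try:
    text.encode("ascii")
except UnicodeEncodeError as err:
    print(err)

# Asking the codec to substitute gives '?' in place of the lost character.
print(text.encode("ascii", errors="replace"))  # b'caf?'
```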

There were standardization efforts which led to some rationalization. For example, there is a range of ISO standards for 8-bit encodings which group languages together, such as Latin-1 (formally ISO/IEC 8859–1, first published in 1987) for the languages of Western Europe.

It was Latin-1 which was used as the basis for Unicode, whose design began in 1988. By growing that 8-bit code space to one of more than a million code points (U+0000 through U+10FFFF), there were finally enough code points for every character from every language, while retaining easy compatibility with one of the most common 8-bit encodings: Unicode’s first 256 code points are identical to Latin-1.
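That compatibility is easy to check in Python, whose strings are natively Unicode: decoding any single byte with the `latin-1` codec yields the Unicode character with the same numeric value.

```python
# Every Latin-1 byte value maps to the identical Unicode code point.
for value in range(256):
    assert bytes([value]).decode("latin-1") == chr(value)

# 'é' is byte 0xE9 in Latin-1 and code point U+00E9 in Unicode.
print(hex(ord("é")))  # 0xe9
```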


Unicode is the globally adopted standard for character encoding, maintained by the Unicode Consortium, whose members include some of the largest tech companies. Unicode supports most writing systems, current and ancient, as well as emoji 🦛. That’s why every platform supports the same set of emoji characters, even though their designs differ across iOS/macOS, Android, and Windows.
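Because the standard assigns each emoji a fixed code point, the character itself is identical everywhere; only each vendor’s glyph design differs. A small sketch using Python’s standard `unicodedata` module:

```python
import unicodedata

hippo = "🦛"
# Unicode standardizes the code point and name, not the picture.
print(hex(ord(hippo)))          # 0x1f99b
print(unicodedata.name(hippo))  # HIPPOPOTAMUS (needs a Unicode 11+ database)
```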

Control characters control the behaviour of the device displaying the text. Many are rooted in the physical nature of early teletype devices, such as backspace, which moved the carriage back one character (it didn’t delete), and bell, which rang a literal bell on the machine.
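Those teletype-era characters survive in Unicode’s very first block, still at their old ASCII code points. A brief sketch using Python’s escape sequences:

```python
import unicodedata

# Bell (BEL) and backspace (BS) kept their original ASCII values.
print(hex(ord("\a")))              # 0x7, rang the bell
print(hex(ord("\b")))              # 0x8, moved the carriage back
# Unicode classes them in general category 'Cc' (control).
print(unicodedata.category("\b"))  # Cc
```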

I’m probably not the right person to talk about implementation, but in general modern operating systems and application development environments are fully Unicode aware and compliant.

Provided you’re storing and processing text in Unicode, it just works. If you’re not, you’ll get a lot of missing-glyph “tofu” characters.
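Tofu appears when a font lacks a glyph; a sibling failure, sketched below, is mojibake, where bytes written in one encoding are decoded as another:

```python
# UTF-8 stores 'é' as two bytes; read back as Latin-1, those bytes become
# two unrelated characters, producing classic mojibake.
raw = "café".encode("utf-8")
print(raw.decode("latin-1"))  # cafÃ©
print(raw.decode("utf-8"))    # café
```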


Each font should include a .notdef glyph. It’s the glyph that appears when a website or an app is trying to display an unsupported character. Usually, the .notdef glyph is a white square (like this □), which is why it’s called tofu.

There are writing systems, such as Arabic and Devanagari, in which letter shapes vary depending on the context in which they appear. While their Unicode characters are as straightforward as any other writing system’s, they require an additional stage of processing, known as shaping, to get from a sequence of characters to correctly shaped glyphs.
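Python’s standard `unicodedata` module hints at this split between characters and shapes. The sketch below uses the Arabic letter beh: the legacy “presentation form” code points, kept only for round-tripping old encodings, normalize back to the single underlying character, while real shaping is done the other way around, by the text renderer.

```python
import unicodedata

beh = "\u0628"                 # the one canonical code point for beh
print(unicodedata.name(beh))   # ARABIC LETTER BEH

# U+FE8F encodes beh's isolated positional shape; NFKC normalization
# folds such presentation forms back to the base letter.
isolated = "\ufe8f"
print(unicodedata.normalize("NFKC", isolated) == beh)  # True
```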

It depends on the writing system and the language. As an example, Japanese keyboards have both the English QWERTY layout and hiragana indicated on their standard keyboards.

Many minority scripts don’t have a history of physical keyboards, but digital-only keyboards can often be installed to type in languages that use those scripts. It’s far from perfect and far from complete, but people have adapted.

Creating a custom font can be easy or hard depending on the scope and ambition of the project. Each font is a collection of graphic representations of characters, known as glyphs. The more glyphs there are, the more complex the behaviour, and the more diverse the script systems being supported, the more specialist knowledge and skill will be required.

To guarantee cross-platform compatibility, we’re relying, again, on industry and formal standards. The most common file format for fonts, OpenType, is an ISO standard (ISO/IEC 14496–22) and the most common encoding for accessing the glyphs is Unicode.

We refer to the technical steps by the umbrella term “engineering”. It includes everything from rules for getting from characters to glyphs, to adding extra instructions to help a glyph make best use of the available pixels when it is displayed on screen.

Alphabet written in the Comic Sans font.
Obligatory Comic Sans joke. Illustration by GearedBull.

There are some existing tools, such as Wakamai Fondue, that can provide you with a list of languages likely supported by a font. Most of them are based on Unicode’s CLDR (Common Locale Data Repository), which has the largest and most extensive collection of locale data available, but it should by no means be considered complete or absolute.

You will need to check the license agreement you agreed to when you downloaded that font to find out what you can and can’t do with it — terms vary from supplier to supplier. If you chose an open source font for your project, the same applies. The terms often allow expansion of the font but there might be restrictions on, or requirements for, how you can distribute the result.

