Encoding

Robert Mundinger · Published in CodeParticles · 6 min read · Oct 9, 2017

According to the dictionary, code is a system of rules to convert information — such as a letter, word, sound, image, or gesture — into another form or representation, sometimes shortened or secret, for communication through a channel or storage in a medium.

Encoding is simply a way for us to convey information to one another through a particular medium. You may think of code as a cryptic, esoteric concept, but we use codes every day; we’re just so used to them that we don’t think about them.

Language is our most important code. English (presumably what you speak if you’re reading this article) has 44 sounds (phonemes), represented by combinations of 26 letters, which we combine to convey meaning. We spend years in school learning to encode (write) and decode (read) our language. We have to learn these things; we’ve all seen a frustrated young toddler who can’t quite encode his thoughts into words yet, and a frustrated parent who knows their child can’t yet decode the words ‘please stop crying,’ so they just have to find other strategies to get them to shut up.

If you’re blind, you probably learned Braille, and if you’re deaf, you probably learned to encode and decode with sign language.

These are all different systems to represent the same things.

What does an image of a dog represent?

In English, it’s a dog. In Spanish, it’s a perro. In German, it’s a Hund.

Now let’s do the same with numbers. Take the number 12. That’s its decimal representation. In Roman numerals, it’s XII. In binary, it’s 1100. In hexadecimal, it’s C.
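A quick sketch in Python makes the point that the quantity stays the same while only its representation changes. The int_to_roman helper below is written just for this example (it isn’t a standard library function); binary and hexadecimal come from Python’s built-in format():

```python
def int_to_roman(n: int) -> str:
    # Greedy conversion: subtract the largest numeral value that fits.
    numerals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    result = ""
    for value, symbol in numerals:
        while n >= value:
            result += symbol
            n -= value
    return result

n = 12
print(n)                # decimal: 12
print(int_to_roman(n))  # Roman numerals: XII
print(format(n, "b"))   # binary: 1100
print(format(n, "X"))   # hexadecimal: C
```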

Similarly, if I piss someone off, there are various ways someone could communicate this to me, from facial expressions to language to emojis.

In Europe, a given weight would be written as 0.453592 kilograms. In the United States, we call that same weight 1 pound.
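The two names describe one quantity related by a fixed conversion factor (the international pound is defined as exactly 0.45359237 kilograms), so re-encoding between them is a one-liner:

```python
KG_PER_POUND = 0.45359237  # the pound is defined as exactly this many kilograms

def pounds_to_kg(pounds: float) -> float:
    """Re-encode a weight from pounds to kilograms."""
    return pounds * KG_PER_POUND

print(pounds_to_kg(1))  # 0.45359237
```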

We encode representations of land using yet another system. We call it a map.

We can also encode information through visual media. In one data visualization of a bookshelf of New York Times best-selling books from 1996 to 2016, the creator uses width to encode a book’s page count, height for the author’s age, position for the release year, color for fiction vs. non-fiction, shading for a female or male author, and symbols for a New York Times best-seller or a translated book, packing a lot of information into one picture.
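In code, each visual channel is just a mapping from a data field to a drawing parameter. Here is a minimal matplotlib sketch of the idea, using made-up records rather than the actual best-seller dataset:

```python
import matplotlib.pyplot as plt

# Made-up sample records, not the real best-seller data.
books = [
    {"year": 1998, "pages": 320, "author_age": 45, "fiction": True},
    {"year": 2004, "pages": 512, "author_age": 61, "fiction": False},
    {"year": 2013, "pages": 240, "author_age": 33, "fiction": True},
]

fig, ax = plt.subplots()
for book in books:
    ax.bar(
        x=book["year"],             # position encodes release year
        height=book["author_age"],  # height encodes the author's age
        width=book["pages"] / 300,  # width encodes page count (scaled down)
        color="steelblue" if book["fiction"] else "darkorange",  # color: fiction vs. non-fiction
    )
ax.set_xlabel("Release year")
ax.set_ylabel("Author age")
plt.show()
```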

Core Competencies

Humans typically communicate using words or visuals, so those are the tools we use to do most of our encoding and decoding.

But take a dog. Dogs have a sense of smell up to one hundred thousand times more powerful than a human’s. They have far more smell receptors, meaning they have better tools for decoding olfactory information.

A dog gets its news in the morning not by watching TV or listening to the radio, but by sniffing around. What mood is its owner in? What dogs have been around recently? Is there food nearby?

When you take a dog on a walk and it smells everything within a two-block radius, it’s gathering information about what’s been going on. And it actually writes using its urine (really).

If humans had the same sense of smell, we’d likely be able to gather information through our noses that we normally have to gain through vision (such as by decoding a facial expression) or conversation. Is this person at the bar attracted to me? Are they smiling? Are their pupils dilated? If we could smell like dogs do, we’d likely save a lot of money on wasted drinks.

Patterns

We’ve learned to mix and match decoding and encoding, and we can use technology to combine digital codes with our innate human codes.

We can put our voices on a copper wire. We can put our faces on plastic film. We can store movies in DNA. We’re working on hearing with our skin and typing with our thoughts. The better we get at encoding and decoding using a variety of materials, the crazier the possibilities get.

It starts with pattern recognition. If we can match a concept to the pattern that represents it, we have learned to decode that pattern.

vocal patterns

When I say ‘Alexa,’ a certain sound wave comes out of my mouth. Computers have gotten far better at recognizing a variety of vocal patterns and responding to them. When I say Alexa and you say it, a computer is now able to ‘understand’ despite our different-sounding voices. This tool for decoding our instructions and encoding a human-sounding response is at the forefront of the technology being brought into our homes, and it will only become more capable.
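Real wake-word detection runs learned acoustic models over audio features, but the core move, comparing an incoming pattern against a stored template, can be sketched in a few lines. The feature vectors and threshold below are invented for illustration:

```python
import numpy as np

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two equal-length feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical "vocal fingerprints"; a real system would compare learned
# features extracted from audio, not hand-written numbers.
wake_word_template = np.array([0.2, 0.9, 0.4, 0.7])
incoming_audio = np.array([0.25, 0.85, 0.35, 0.75])

if similarity(wake_word_template, incoming_audio) > 0.95:
    print("Wake word detected")
```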

[Image: a recorded sound wave. This is what Alexa can turn into groceries.]

thought patterns

Our thoughts are just patterns of electrical signals; now that we have the technology to read those signals, we can begin to decode them, mapping specific signal patterns to specific thoughts.

pattern A = thinking about an apple, pattern B = thinking about an orange
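As a toy sketch of that mapping, imagine each thought stored as a reference signal, with decoding done by nearest-neighbor matching. The patterns here are made-up numbers, not real neural data:

```python
import numpy as np

# Toy "code book" of reference brain-signal patterns (made-up numbers).
codebook = {
    "apple": np.array([0.9, 0.1, 0.3]),
    "orange": np.array([0.2, 0.8, 0.5]),
}

def decode(signal: np.ndarray) -> str:
    """Return the thought whose stored pattern is closest to the signal."""
    return min(codebook, key=lambda thought: np.linalg.norm(codebook[thought] - signal))

print(decode(np.array([0.85, 0.15, 0.25])))  # apple
```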

Facebook is working on new technology that will allow us to send messages with only thoughts. They’re doing this by decoding and categorizing our thought patterns (you’re not the only person to think this is creepy).

There will eventually be an open logbook whose entries match brain patterns to thoughts, just as a dictionary matches words to their definitions. This will be a code book, and companies will build products around it. Much as Alexa orders you a blanket when your vocal pattern matches ‘can you buy me a new blanket?’, the same will be possible when you merely have that thought.
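Extending the sketch above, a product built on such a code book would simply dispatch decoded thoughts to actions, the way Alexa dispatches parsed speech. Everything here is hypothetical:

```python
# Hypothetical handlers keyed by decoded thoughts; real products would
# attach purchases, messages, etc. to entries in the shared code book.
actions = {
    "buy blanket": lambda: print("Ordering a new blanket..."),
    "play music": lambda: print("Starting a playlist..."),
}

thought = "buy blanket"  # imagine this came from decode() above
actions[thought]()
```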
