Better Programming

Advice for programmers.

The Return of Dynamic Typing: Is It Possible?

Tarek Amr
7 min read · Jul 23, 2023



Static typing has won! Just check Quora or Stack Overflow, and you’ll see the top answers favor static typing.

JavaScript is being TypeScripted. Python and PHP are type-hinted. And the popularity of Java, Go, and Rust is freshly minted!

And if you challenge the status quo, like DHH did, you risk getting canceled.

To understand why static typing has won, one has to first understand the mindset behind this programming paradigm.

Static Typing Is a Mindset Before Being a CS Term

To elaborate on what I mean by "a mindset," let me briefly explain the following computer science terms:

Static vs. Dynamic Type Checking: In a statically typed language, variable types are known at compile time and do not change once set. Example: if x is an integer, it will remain so. In a dynamically typed language, types are checked at run time, and a variable can be rebound to values of different types.

Strong vs. Weak Typing: Strong typing enforces strict type rules, preventing implicit conversions. Example: if x is an integer, you cannot concatenate it with a string. A weakly typed language would instead convert x implicitly to make the operation succeed.

As you can see, the two dichotomies form a two-by-two matrix. A language can be dynamically yet strongly typed, e.g., Python. In such cases, values, rather than variables, are the ones that hold the types.
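To make "dynamically yet strongly typed" concrete, here is a minimal sketch in Python: the variable carries no fixed type and can be rebound freely, but the values themselves refuse implicit conversion.

```python
# Dynamic: the same name can be rebound to values of different types.
x = 42
x = "forty-two"  # allowed; the variable itself has no fixed type

# Strong: values keep their types and refuse implicit conversion.
try:
    result = "11" + 1  # raises TypeError; Python won't coerce int to str
except TypeError:
    result = "no implicit conversion"

print(result)  # -> no implicit conversion
```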

[Figure: a two-by-two matrix of static vs. dynamic and strong vs. weak typing. Image by the author.]

Combining the characteristics of the top-right quadrant gives us the following features:

We can decide what type a variable holds, and make sure the computer will never change or implicitly convert it over time.

The first three words of this sentence already give you an idea of what I mean by the static typing mindset here: “We can decide.”

This gives the programmer the authority they need to ensure the correctness of their code. This resonates with the following quote:

“The good news about computers is that they do what you tell them to do.” — Ted Nelson

By focusing on this need for correctness, you can see that static typing is a mindset that goes beyond the aforementioned computer science concepts.

Let me use Python to explain what I mean, though you can easily translate the following examples into your favorite language.

The Static Typing Mindset in Python

The obvious example is type-hints and mypy, but that’s just the tip of the iceberg.
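For readers who haven't used that workflow, here is a minimal sketch (the function name is mine): with hints in place, a checker like mypy can flag a wrong call before the code ever runs.

```python
def greet(name: str) -> str:
    # The hints document the contract: str in, str out.
    return "Hello, " + name

print(greet("Bart"))  # -> Hello, Bart

# Running `mypy` on this file would flag a call like greet(42)
# with something like:
#   error: Argument 1 to "greet" has incompatible type "int"; expected "str"
```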

Data classes vs dictionaries

Take the following example:

from typing import Dict

def get_user_details() -> Dict:
    return {
        "first_name": "Bart",
        "last_name": "Simpson",
    }

You're unlikely to see this code in bigger codebases. The programmer here got an A for their type-hinting effort, but Dict says nothing about which keys the dictionary holds.

The caller of this function will break if the keys of the dictionary are capitalized, for example, or if it returns family_name instead of last_name. Strings are too, ehm, dynamic.

Thus, this dictionary is usually replaced by an object. Objects have fixed attributes with well-defined spellings. A data class is usually preferred here since it minimizes the __init__ boilerplate.

So, you will likely encounter the following code in bigger codebases:

from dataclasses import dataclass

@dataclass
class UserDetails:
    first_name: str
    last_name: str


def get_user_details():
    return UserDetails(
        first_name="Bart",
        last_name="Simpson",
    )

I did not type-hint the function on purpose: the mindset shows through regardless of whether the hints themselves are there.

I hope this example clarified that typing is a mindset separate from what a language provides — just like Object Oriented Programming (OOP). I used to write OOP code in languages that don’t support classes. Similarly, one can apply the typing mindset to the most dynamically typed language or hack the most static language to become dynamic if they want to.
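As a sketch of what "applying the typing mindset without language support" can look like (this pattern is my own illustration, not from any particular codebase): enforce the contract at run time and fail loudly, even though nothing is hinted.

```python
def set_age(record, age):
    # No type hints anywhere, but the contract is still enforced.
    if not isinstance(age, int):
        raise TypeError(f"age must be an int, got {type(age).__name__}")
    record["age"] = age

person = {}
set_age(person, 35)  # fine

try:
    set_age(person, "35")  # the mindset catches this immediately
except TypeError as error:
    print(error)  # -> age must be an int, got str
```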

OK, time for another example.

Enums vs strings

Similarly, enums (enumerated types) tend to replace strings in certain use cases. Here's an example:

if state == "alive":
    normal_walk()
elif state == "dead":
    lie_down()
elif state == "zombie":
    zombie_walk()

For the same reasons data classes replace dicts, the code above is usually replaced with the following one:

from enum import Enum

class State(Enum):
    ALIVE = 1
    DEAD = 2
    ZOMBIE = 3


if state == State.ALIVE:
    normal_walk()
elif state == State.DEAD:
    lie_down()
elif state == State.ZOMBIE:
    zombie_walk()

Again, not a single type was hinted at here, yet the mindset prevailed.
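A small demonstration of what the enum buys you (my own illustration): a misspelled string comparison fails silently, while a misspelled enum member fails loudly the moment the line executes.

```python
from enum import Enum

class State(Enum):
    ALIVE = 1
    DEAD = 2
    ZOMBIE = 3

state = State.ALIVE

# With strings, a typo just evaluates to False; nothing crashes, nothing warns.
silently_wrong = (state == "alivee")
print(silently_wrong)  # -> False

# With enums, the same typo raises an AttributeError immediately.
try:
    state == State.ALIVEE
except AttributeError as error:
    print("caught:", error)
```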

I can think of more examples, but I hope you got my point already, and I would be happy if you let me know about your favorite examples along the same lines.

I cannot deny the merits of this mindset.

As you can see, this mindset offers a lot of benefits. It prevents unnecessary mistakes. And I benefit from it in my code every day.

Not just me; my IDE does too.

Modern IDEs, such as PyCharm, will reward you with code completion and point you to mistakes in your code if you use type hints and the static typing mindset in general.

But is this mindset a panacea for software design? Is there a dynamic typing mindset as well? What would we miss if we abandoned this alternative mindset?

To answer these questions, I need to compare two related fields: software engineering and artificial intelligence.

Symbolic AI is the Static Typing of AI

In the early days of artificial intelligence, the dominant approach was Symbolic AI. The main idea is that human programmers are the smart ones in the room.

They provide the computer with hand-crafted knowledge in the form of rules, and the computer uses its computation power to search and find solutions to the problems at hand based on the given rules.

I can summarize this approach as follows:

We can decide what rules to use when solving a problem, and make sure the computer never changes or implicitly edits them over time.

I hope you can see the similarities between my Symbolic AI definition and my static-and-strong typing definition:

We can decide what type a variable holds, and make sure the computer will never change or implicitly convert it over time.

The challenge with both mindsets is that, in pursuit of correctness, we restrict our software’s capabilities to only what we can comprehend and fit within our own mental limitations.

And if Symbolic AI is the static typing of AI, then machine learning is its dynamic typing.

In both fields, we have been flip-flopping from one side of the dichotomy to the other.

Taking a step back, one can generalize this dichotomy and trace it back to David Hume’s “Problem of Induction” and the dichotomy between deductive and inductive reasoning.

Tracing the historical origins of this dichotomy lets one borrow arguments from both sides to justify their stance. Pre-Hume, ancient Greek and Indian philosophers also offered supporting arguments. But let's now focus on one argument.

In deduction vs induction, correctness comes at the expense of generality.

The "Alive-Dead-Zombie" code above will break if it happens to encounter Jesus Christ. And we all know what happened to Symbolic AI, with the induction camp (machine learning, deep learning, and large language models) kicking its butt nowadays.

Of course, the same correctness argument holds in AI. No one fully trusts ChatGPT's outputs, but you cannot ignore it, given its usefulness.

And by the way, Ted Nelson's earlier quote has a second half:

“The bad news is that they do what you tell them to do.” — Ted Nelson

You do not want computers to always do what you tell them to do. Sometimes, you want them to do what you want them to do. And the two aren’t always the same.

That’s why I believe there is a future where computer languages will do what I want them to do.

The Computer Languages of the Future

Of course, one cannot see the future, but as William Gibson says: “The future is already here — it’s just not evenly distributed.”

Thus, hints of how computer languages will look can be found in places other than programming.

Take the following example, where one can blame this JavaScript quirk on its weak typing:

"11" + 1 // Gives us '111'
"11" - 1 // Gives us 10

Conversely, Python 2 was equally quirky, this time because of its strong typing: dividing two integers kept the result an integer rather than implicitly promoting it to a float.

3 / 2 # Gives us 1 instead of 1.5

Guess who is not as quirky as the previous two languages? ChatGPT!

Prompt: 1 + 3
> The sum of 1 and 3 is 4.

Prompt: "2" + "2"
> The expression "2" + "2" concatenates the two strings together,
> resulting in "22".

Prompt: 1/1/2020 + two weeks
> If we assume that "1/1/2020" represents January 1, 2020,
> adding two weeks to that date would result in January 15, 2020.

This gives me ideas on how future programming languages will behave.

And until these new languages materialize, I will mix and match ideas from each paradigm without being dogmatic about one and only one right approach to solving problems.
