What is declarative programming and why is it important?
Imagine an alien traveler from outer space asking you what a pancake is. You would probably say something like “Well, a pancake is when you mix some eggs, milk and flour together, stir it until it’s batter, pour some of it in a frying pan and wait until it’s dry. Then flip it over for a minute and serve with some butter and syrup.” The problem is, you never told the alien what a pancake is. You just offered a recipe. So what is a pancake? It’s a flat round cake made from starch-based batter.
Offering a recipe for a pancake is easier, and comes more naturally, than offering a definition. But when we’re programming we should always strive to offer definitions. They are shorter, simpler, and more expressive. Programming with definitions is called declarative programming, while programming with recipes (sequences of steps needed to make or do something) is called imperative programming.
Diving into the code
In Swift, the smallest demonstration of the difference between declarative and imperative programming would be this:
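A minimal sketch of that contrast, using the ‘something’ name the next paragraph refers to:

```swift
// A `let` binding can only be assigned once — it is a definition.
let something = "pancake"
// something = "waffle" // compile error: cannot assign to value

// A `var` names mutable storage; it can be reassigned at any time.
var somethingElse = "pancake"
somethingElse = "waffle"
```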
Why? The let binding signifies that the variable ‘something’ can only be assigned to once. That means ‘something’ will be defined. As in mathematics: once you say x = 5, x can never turn into 6. Not so with ‘var’. With ‘var’ we name a mutable variable, and a mutable variable can never have a definition in the strict sense, because it can be changed into another value at any time.
The easiest way to start programming declaratively is to avoid writing ‘var’. It may seem dogmatic to swear off a feature of a language, but it’s the best way to understand the benefits of declarative programming. Programming with ‘let’ bindings enables compiler optimizations, makes your code thread-safe, and brings a bunch of other benefits, but that’s not really the reason why it’s so great.
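To see what avoiding ‘var’ looks like in practice, here is a small sketch (with hypothetical names): summing some numbers imperatively needs a mutable accumulator, while the declarative version defines the total directly.

```swift
let numbers = [1, 2, 3, 4]

// Imperative: a mutable accumulator, updated step by step.
var runningTotal = 0
for n in numbers {
    runningTotal += n
}

// Declarative: the total is *defined* as the sum of the numbers.
let total = numbers.reduce(0, +)
```

The declarative version has nothing to reassign, so there is no intermediate state to get wrong.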
Let’s talk perks
Once I started writing declarative code, I slowly noticed I was having more fun. I noticed that often, when I wrote a bunch of code and pressed play, it just worked. I noticed that when something unexpected happened and I had to debug, the bug was always in a part of the program that was still written in an imperative style.
How can that be? We’ve already seen that declarative code tends to be shorter and simpler. The recipe for the pancake is longer and more complex because it involves things that aren’t really part of the pancake itself (the frying pan, the serving suggestion) but are needed to make the recipe work. A second reason imperative code is more error-prone is order. We can reorder the definition with no problem (“A pancake is made from starch-based batter, and it’s a round flat cake”), but reordering the recipe yields unexpected results (“Serve with butter and syrup, fry it until it’s dry, then mix the eggs together with milk and flour”). A third reason is that imperative code triggers side effects. Maybe the frying pan isn’t available (nil) when we try to access it, or maybe we expected the stove to be turned on while frying but it isn’t. A definition can never have such error-producing edge cases.
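The same ordering argument can be sketched in Swift (hypothetical names): independent ‘let’ definitions can trade places freely, while the order of imperative mutations is part of the program’s meaning.

```swift
// Declarative: these two definitions can be written in either order.
let butter = "butter"
let syrup = "syrup"
let toppings = [butter, syrup]

// Imperative: swapping these two statements changes the result.
var plate: [String] = []
plate.append("pancake") // must come first...
plate.append(butter)    // ...or the dish comes out assembled wrong.
```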
In the end, we want a computer program to do something, not just define things. So we’re always going to need some imperative code. But minimizing and isolating the imperative part of a program can make life a lot simpler for us developers. Not easier, but simpler.
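One way to picture that isolation, as a sketch with hypothetical names: keep the definitions pure, and push the “doing” out to a thin imperative edge.

```swift
// Pure, declarative core: just a definition, no side effects.
func pancakeDescription(toppings: [String]) -> String {
    "a flat round cake served with " + toppings.joined(separator: " and ")
}

// Imperative edge: the one place the program actually does something.
print(pancakeDescription(toppings: ["butter", "syrup"]))
```

Bugs in the pure core are easy to find because its functions always return the same output for the same input; whatever debugging remains is confined to the small imperative edge.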
This article is part of a series. The next article will be about the tools with which we write declaratively, and after that an article about when it is acceptable to fall back to imperative code.