A Comprehensive Look at Functional Programming (FP)

A programming paradigm

Allan Sendagi
The Startup
Aug 2, 2020 · 32 min read


Photo by Magda Ehlers from Pexels

This is the other major programming paradigm. If you are interested in Object-oriented programming, I have written about it here…

*This article is inspired by Andrei Neagoie’s Advanced concepts in Javascript course. I highly recommend it*

Here is what to expect from this article:

  1. Currying
  2. Partial application
  3. Pure functions
  4. Referential transparency
  5. Compose
  6. Pipe

We know that Functional Programming has existed for over 60 years. Lisp, a popular programming language, was first developed in 1958. The idea of functional programming originates from Lambda calculus, a formal system in mathematical logic.

Today, the popularity of functional programming has surged because programming languages built on this paradigm, such as Haskell, Scala, and Clojure, work well with distributed computing, where multiple machines interact with data, and with parallel computing, where machines work on the same data at the same time.

Javascript libraries like Redux and React have also popularised the idea of functional programming.

So, what is FP anyway and how can we use it as Programmers?

Just like OOP, Functional programming is all about separation of concerns. We package our code into separate chunks so that everything is well organized. Each part of our code concerns itself with one thing that it’s good at.

Where classes in OOP bundle properties and methods together, FP keeps data and functions separate. For a functional programmer, the world is data that gets interacted with; data and functions aren’t one piece or one object.


Although there is no single definition of what is or isn't functional, most functional programming languages emphasize simplicity where data and functions are concerned.

Functions operate on well-defined data structures like arrays and objects rather than belonging to them, the way methods belong to objects in OOP.

However, the goals of functional programming are the same as Object-oriented programming:

  1. Make code more clear and understandable.
  2. Make code easy to extend, so that as our app, our programs, and our developer team grow, the code remains easy to extend.
  3. Make code easier to maintain by multiple programmers.
  4. Make our code memory efficient by having these reusable functions that act on data.
  5. Keep our code DRY. We avoid repeating ourselves. Our code is clean and efficient.

Pure functions

Unlike OOP where we have the four pillars (Encapsulation, Abstraction, Inheritance, Polymorphism), with FP it all comes down to this concept — Pure functions.

The idea here is that there is a separation between the data of a program and the behavior of a program.

And all objects created in FP are immutable — once something is created, it cannot be changed. We avoid things like shared state and adhere to this principle of pure functions. As you will find out, in functional programming, there are a lot of things you can’t do.

A function always has to return the same output given the same input no matter how many times we call it; it cannot modify anything outside of itself — no side effects.


1. No side effects

Let's have an array to demonstrate
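Here is a rough sketch of what that can look like (the function name a and the array come from the discussion below):

const array = [1, 2, 3];

function a(arr) {
  arr.pop();   // removes the last item from the array that was passed in
  return arr;
}

a(array); // array itself is now [1, 2]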


This function has side effects. Here is how we can tell:

Does the function modify anything outside of itself?

Here it does. It modifies the array that lives outside of itself, in the global scope. Hence we don't know what might happen to the array; anybody can call it and change it.

If I call a(array) again, my array is again modified.

Let’s have another function with side effects
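Another sketch, slightly different but with the same problem:

function b() {
  array.push(187);  // no parameter at all; it reaches straight into the global array
}

b();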

We are still modifying the array in our global environment

2. Shared state

With side effects, you are using shared state that anything can interact with. Here the order of function calls matters, which can also cause a lot of bugs.

So how can we write something that has no side effects? Something that won’t change whatever our array is?

Well, we can copy the array inside our function
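A version with no side effects might look something like this (removeLastItem is a made-up name):

function removeLastItem(arr) {
  const newArray = [].concat(arr); // copy the array instead of pointing at the original
  newArray.pop();                  // mutate only the local copy
  return newArray;
}

const array2 = removeLastItem(array); // a new array; the original is untouched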

The newArray will be the exact same thing as our global array. Note that we have used the concat() method instead of using =, which would have given us a reference to the array — remember, objects are passed by reference.

So instead of having them point in the same direction, this way we have a new copy of the array.

Now we can just do newArray.pop() and return newArray. And we can see that the original array hasn't changed.
Although we created new state inside the function, it's a local variable. We are not modifying anything outside of our scoped world.

And because it doesn't affect the outside world, we know what to expect.

map returns a copy of the array
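For example (multiplyBy2 is another made-up name):

function multiplyBy2(arr) {
  return arr.map(item => item * 2); // map always builds and returns a new array
}

const array3 = multiplyBy2(array);
// array, array2 and array3 are three distinct arrays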

None of my arrays have changed. We have three distinct arrays. And all these functions have no side effects. They don't affect anything that's outside of their world.

But here is an interesting case:
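A made-up function that only logs:

function sayHi(name) {
  console.log('Hi ' + name); // writes to the console, i.e. to the outside world
}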

This is not a pure function. console.log() uses the browser to log something to the console. It's logging something to the outside world, modifying something outside of itself — a side effect.

3. The input should always result in the same output

Does the function always return the same output given the same input?
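For example, a pair of small sketch functions like these:

function a(num1, num2) {
  return num1 + num2; // same inputs, same output, no side effects
}

function b(num) {
  return num * 2;
}

b(a(3, 4)); // 14, and a(3, 4) is always 7 however many times we call it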

Here the result is always the same no matter how many times I run the program. This is also known as Referential transparency.

Referential transparency

Referential transparency says: if I replace a call to a with the value it returns, 7, can that have any effect on the program? The answer is no, because no matter what, if my input is the same, I am always going to get the same output.

Note that these functions have no side effects. They are not touching any outside part of their world. Parameters here are local variables.

Can we have 100% pure functions?

No, we can’t. Pure functions in a literal sense can’t do anything. A program can’t exist without side effects.

Input-output is a side effect. console.log is a side effect — that is all communication with the outside world.


We can't run any code without having the side effect of interacting with the browser; we can’t have a website with just pure functions.

Browsers have to make fetch calls, HTTP calls to the outside world. We have to interact with the DOM and manipulate what's on the website.

So we can agree that the goal here is not to make everything pure functions; rather the goal is to minimize side effects.

The idea is to organize your code so that the parts with side effects live in a specific place. That way, when you have a bug, you know right away where to look (database calls, API calls, input/output), because that is normally where side effects happen. The rest of your code should be just pure functions.

Purity is more of a confidence level. It cannot be 100%.

Because at the end of the day, we do have to have some sort of global state to describe our application. That's unavoidable.

The core of functional programming is very simple. We want to build programs that are built with small, reusable, and predictable pure functions.

Characteristics of a perfect pure function

1. A perfect function should do one task and one task only. We don't want a massive function. We want a simple function that we can test and that does one thing well.

2. It should have a return statement. Every function should return something. When we give it an input, we expect an output.

3. No shared state with other functions.

4. Immutable state. We can modify some of the state within our functions but we always return a new copy of that output. We never modify our global state.

5. The function should be predictable. If we understand with certainty what our function does, it makes our code predictable. Functional programming at the end of the day is all about making your code predictable.


Key concepts in functional programming

Idempotence

One idea that makes this clear is an elevator button. Pressing an elevator call button more than once has no bearing on the final result. Regardless of the number of times you press the button, the elevator is sent to that floor. Idempotent systems, like the elevator, result in the same outcome no matter how many times identical commands are issued.

Compare this to a function that returns a random number each time.
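A quick sketch:

function notGood() {
  return Math.random(); // a different value on every call
}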

This function always returns a random number between 0 and 1 on every call. It's not predictable.

What idempotence means is given the same inputs, a function always does what we expect it to do.

It’s a lot like pure functions but a little different. If we take the same function
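Say we give that same function a predictable job instead, such as logging its input:

function notGood(num) {
  return console.log(num); // not pure (it talks to the console), but predictable
}

notGood(5);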

Here, no matter how many times I call it with 5, it logs 5. The function that console.log’s 5 to the outside world is idempotent: with multiple calls, it's going to display the same text even though it's not pure.

Another thing that can be idempotent is deleting a user from a database. When we delete a user from a database, we delete that person once but if I keep calling the function, to delete the same user, it's going to return the same result —an empty field where there are no more users.

Idempotence is used a lot in API HTTP requests. You always expect the same result regardless of the number of times a call is made.

Idempotence is valuable in parallel and distributed computation because it makes our code predictable.

Another idea is the ability to call yourself over and over inside of yourself and you still get the same result.

Math.abs gives me a positive number or absolute number every time.
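For example, nesting the call inside itself changes nothing:

Math.abs(Math.abs(Math.abs(-50))); // still 50, no matter how deeply we nest it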

But no matter what, calling this function over and over inside of itself always returns the same thing. And so this is a guarantee of code being predictable.

Imperative vs declarative

Imperative code is code that tells the machine what to do and how to do it.

Declarative code tells it what to do and what should happen. It doesn't tell the computer how to do things.

Think about an image tag. It’s declarative. You include it in your code and you expect an image although you don't tell the machine how to display it.

A computer is better at being imperative. It needs to know how to do things. Human beings on the other hand are declarative.

When we looked at the Javascript engine, we saw how machine code is imperative.


Our machine code tells the computer, for example, to put the variable in that memory space, then take it out here, then modify it there. It's very descriptive of how to do things. As we go higher and higher up the chain into higher-level languages, we see more declarative behavior.

We don't have to say hey, this is where you should store the memory. We just declare a variable with some sort of data and we say what needs to get done but not how to do it. The computer takes care of that for us.

Another example is for loops.
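Here is a sketch of the two examples discussed below:

// Example 1
for (let i = 0; i < 1000; i++) {
  console.log(i);
}

// Example 2
[1, 2, 3].forEach(item => console.log(item));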

Which one would you say is declarative vs imperative?

Example 1 is imperative. We say: declare a variable i at zero, then loop 1,000 times, incrementing i by one each time, and finally console.log(i).
There are a lot of instructions here.

How can we make this more declarative? Example 2 is declarative.
Here we don't tell the program how to do its job. I don't tell it to increment i by 1, or to loop through things.

Another example of this is jQuery. jQuery is more imperative than what we have now, like React, Angular, or Vue.

Functional programming helps us be more declarative. We compose functions and we tell our programs what to do instead of how to do it.

But it’s important to note that at the end of the day, our declarative code is going to end up either compiling down to or being processed by something imperative, like machine code. We can't avoid side effects and data manipulation altogether.


At the end of the day, something has to manipulate the DOM on the webpage or talk to a database.

In the case of something like React, it abstracts away a lot of complexity so that we as programmers don't have to do it.

But the React library or Lisp itself eventually has to compile and do imperative things. The idea here is for us to get a level higher into declarative code so that we can be more productive.

Immutability

This means not changing the state. In OOP we saw classes that you could change. In functional programming, it’s all about immutability.

You are not changing the state but instead, you are making copies of the state and returning a new state every time.


We are cloning the obj, so our clone function is pure: no matter how many times I call it, it simply returns a copy.

But you can see that afterward I do obj.name = 'Allan' and the name gets changed to Allan. Here we are mutating the state. We are mutating data in our program.

In functional programming, the idea of immutability is very important. We can change things inside of our function but we don't want to affect the outside world in our programs.

Ideally, if we want to change the name, we would create a function to do just that.
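A sketch of that idea (clone and updateName are made-up names):

const obj = { name: 'James' };

function clone(o) {
  return { ...o };           // return a fresh copy rather than the original
}

function updateName(o, name) {
  const newObj = clone(o);
  newObj.name = name;        // change only the copy
  return newObj;
}

const updatedObj = updateName(obj, 'Allan');
console.log(obj.name);        // 'James', the original is untouched
console.log(updatedObj.name); // 'Allan'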

Here our initial obj, James, is still there. We have maintained immutability. We are returning copies every time a change is made.

Structural sharing

Now you might say that this is not very efficient. Because we are just copying things over every time we want to make a change and we run a risk of filling up our memory.

There is something called structural sharing. Functional programming languages and many libraries implement their data structures this way.


The idea behind it is that when a new object or any sort of data structure is created, we don't actually copy everything. If it's a massive object or array, that is very expensive.

Instead of copying the whole structure, under the hood only the changes that were made to the state get copied.

But the things that don't change are actually still there in memory. This is called structural sharing. With this in mind, and the fact that today's memory is fairly cheap, functional programming, and especially the idea of immutability, isn't as expensive or wasteful as it might seem.

We are just saying that hey this data is not mine. So I am just going to copy it and leave the original intact so that other people can use it as well; just like a kid in school who plays with toys but doesn't destroy them so that other kids can play with them too.


HOF and closures

A HOF is a function that does one or both of the following:
1. Takes one or more functions as arguments (these are often called callbacks).

2. Returns another function as its result.

For example,

A function that returns a function is a HOF

or

It could be a function that takes a function as a parameter
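Two quick sketches:

// a function that returns a function
const hof = () => () => 5;
hof()(); // 5

// a function that takes a function as an argument
const hof2 = (fn) => fn(5);
hof2((num) => num * 2); // 10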

Now, this also means we can do Closures.

Like objects in Javascript, closures are a mechanism to contain state.

In Javascript, we create a closure whenever a function accesses a variable outside of its immediate scope. We simply define a function inside another function and expose the inner function, either by returning it or by passing it to another function.

Now I can give this to a variable
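As a sketch (the inner function is the increment the next paragraph talks about):

const closure = function() {
  let count = 0;
  return function increment() {
    count++;       // touches state declared in the outer function
    return count;
  };
};

const increment = closure();
increment(); // 1
increment(); // 2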

Now even though the initial closure function was called and we are done with that, because of closure, this increment function remembers the variable declared in the outer scope.

The variable used by the inner function will be available to it even after the outer function has finished running.
Here we are modifying state outside of our function. This increment function is touching state or data that belongs to another function — the closure function.

This however doesn’t mean that we can't use closures. They are very powerful; we just have to be careful. Closures only make a function impure if we modify the closed-over variable.

For example,
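Here is a sketch with a count of 55, matching the discussion below:

const closure = function() {
  const count = 55;             // private: nothing outside can reassign it
  return function getCounter() {
    return count;               // we only read the closed-over variable
  };
};

const getCounter = closure();
getCounter(); // 55, every time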

We are using closures here. Also, note that we are not modifying the state like we had before but we still have access to data outside of ourselves. As long as we don't modify and mutate that data, we are still following the functional programming paradigm.

Something important here is that we just created private variables. We are able to use closures to create data privacy which is very useful.

As a user, I can't modify count=55. Maybe it is an important variable that we shouldn’t touch. But because of closures, we still have access to that variable, and we are also making sure others don't modify it.

Closures get used a lot in functional programming for this specific reason; we just have to be careful not to modify the state.

Currying

This is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions each with a single argument.

You take a function that can take multiple parameters and instead, using currying, modify it into a function that takes one parameter at a time.

Let's look at an example

We can use currying here so that our function, which normally takes multiple parameters at once, takes one parameter at a time.
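As a sketch:

const multiply = (a, b) => a * b;              // takes both arguments at once
const curriedMultiply = (a) => (b) => a * b;   // takes one argument at a time

curriedMultiply(5)(4); // 20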

Now because of closures, we have access inside of the b function to the a variable.

I am giving the function one parameter at a time.

Why is this useful?
Because now I can create multiple utility functions out of this.
For example:
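Here is a sketch, reusing curriedMultiply from above:

const curriedMultiplyBy5 = curriedMultiply(5); // a is locked in as 5 via closure

curriedMultiplyBy5(4);  // 20
curriedMultiplyBy5(10); // 50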

Now I have called this function once, and for the rest of the program it is going to remember that piece of data.

So that 10 years from now if we finally remember we have this curriedMultiplyBy5, I can use it to multiply anything we want by 5. Let's say 4.

Instead of running the full function over and over, I have run it once, and now curriedMultiplyBy5 is there for us to use.
So if it's a function that gets called many times, we only re-run this part of the function: (b) => a*b

Currying is reminiscent of those methods on prototypes shared amongst objects; both are trying to save on memory, or at least reduce the work that our computers have to do.

Partial Application

This gets confused with currying. It’s only slightly similar.

This is a way for us to partially apply a function: the process of applying a function to fewer arguments than it expects.

It means taking a function and applying some of its arguments so that it remembers those parameters (using closures), to be called later with the rest of the arguments.

We can say that this code uses partial application; but what if we had 3 parameters?

Partial application says I want to apply a portion of the parameters, for example, and then the next time I call that function, I wanna apply the rest of the arguments.

But let's see the curried version first. Here we would just add another parameter
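As a sketch:

const curriedMultiply = (a) => (b) => (c) => a * b * c;

curriedMultiply(5)(4)(10); // 200, one argument per call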

Partial application says: no, I want you to call the function once with part of the arguments and then apply the rest of the arguments to it. That means on the second call I expect all the remaining arguments.

So now if I do partialMultiplyBy5(), we have 5 bound to the first parameter using bind().

Now if I do:
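Putting the bind step and the call together, as a sketch:

const multiply = (a, b, c) => a * b * c;
const partialMultiplyBy5 = multiply.bind(null, 5); // 5 is permanently bound to a

partialMultiplyBy5(4, 10); // 200, b and c supplied together on the second call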

I have partially applied a parameter — the a parameter and then I get to call the rest of the parameters b and c. That's the main difference between currying and partial application.

Partial application says on the second call I expect all the arguments. Currying says I expect one argument at a time.

Caching

This is a way of storing values so you can use them later on. You can think of caching as a backpack that you take to school.

Instead of going all the way home when you need something, like a pencil, you have a small box on your back that holds the items you need.

Caching is a way for us to speed up programs and hold some piece of data in an easily accessible box.

Memoization

Memoization is a specific form of caching. It's used a lot in dynamic programming.

Let's say we have a function addTo80 that adds a number to 80.
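A sketch of such a function, with a console.log standing in for the expensive work:

function addTo80(n) {
  console.log('long time'); // pretend this line is a very expensive calculation
  return n + 80;
}

addTo80(5); // 'long time', then 85
addTo80(5); // 'long time', then 85
addTo80(5); // 'long time', then 85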

Now if I run this function again, I will have to go through the same steps to add n to 80.

We have done the calculation 3 times. But what if this function takes a really long time?

Every time we run this program, 'long time' runs 3 times. We go through the steps one by one even though we've done the same calculation before.

Is there a way that we can optimize this?
This is where we can use caching and memoization.

Let's improve the above function by doing something different.
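A sketch of the memoized version, with the cache kept in the global scope for now:

let cache = {};

function memoizedAddTo80(n) {
  if (n in cache) {
    return cache[n];          // we already know the answer, skip the long calculation
  } else {
    console.log('long time');
    cache[n] = n + 80;
    return cache[n];
  }
}

memoizedAddTo80(5); // 'long time', then 85
memoizedAddTo80(5); // 85, straight from the cache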

Simple property access is O(1) with a hash table.

The first time I run this function, I get 'long time'. The first pass went through the else branch and did the calculation that, hypothetically, takes a long time.

This is a simple example but this could also be a calculation that takes a really long time.

If I run this function, the first time around we log 'long time' and calculate 85; the second time around, because this value was in the cache, we didn't have to do the long calculation and we just returned it immediately.

So what is memoization exactly?

It is a specific form of caching that involves caching the return value of a function based on its parameters. If the parameters of the function don't change, the result is memoized: the function uses the cache because it has calculated the same thing before with the same parameters, and returns the cached result. If the parameters change, it calculates again.

Memoization is simply a way to remember a solution to a subproblem so you don't have to calculate it again.

Let’s improve our memoized function

Ideally, we don't want to fill the cache in the global scope. It’s good practice if the cache lives inside the function and it's not polluting the global scope.
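If we naively move the cache inside, we get something like this:

function memoizedAddTo80(n) {
  let cache = {};             // recreated on every single call
  if (n in cache) {
    return cache[n];
  } else {
    console.log('long time');
    cache[n] = n + 80;
    return cache[n];
  }
}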

The problem now is that we reset the cache every time the function gets called. So the cache becomes an empty object.

To get around this we can use closures in Javascript.
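A sketch of the closure version:

function memoizedAddTo80() {
  let cache = {};                // lives in the closure, not in the global scope
  return function(n) {
    if (n in cache) {
      return cache[n];
    } else {
      console.log('long time');
      cache[n] = n + 80;
      return cache[n];
    }
  };
}

const memoized = memoizedAddTo80();

memoized(6); // 'long time', then 86
memoized(6); // 86, the cache survives between calls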

Our function remembers that the parameter has not changed. It’s going to check the cache and say I do not need to do that calculation I already have it.

Because of closures, the inner function can access the cache that lives in the outer function. This allows us to be very efficient with our code. In this sense, we use memoization to optimize our code.

Compose and pipe

This is the most powerful concept in FP.

Compose means that any sort of data transformation that we do should be obvious. It's kind of like a conveyor belt in a factory.

First, we have data that gets processed by a function that outputs some sort of data. This then gets processed by another function that outputs that data in a new form and so on and so forth.

Composability is a system design principle that deals with this relationship between components. A highly composable system provides components that can be selected and assembled in various combinations. This makes it easy to move pieces around to get the desired output based on user-specific requirements.

Let's say I want to do something using functional programming but I also want to do two things at a time.
Let's say we have a number -50 that gets multiplied by 3. We also want to take the absolute number or remove any negative signs from it. This means that we want to do two things — two functions.

How can we compose them together like an assembly line at a factory?

Of course, there are a lot of libraries, like Ramda, that give you a compose function. In Javascript you will often see compose(); it isn't built into the language, it's just very common.

Let's build one on our own that allows us to multiplyBy3 and also take absolute. In order to do that, I need to compose these two pieces of functionality.

Let’s define our own compose function.

Our function will take the two functions (f and g), multiplyBy3 and makePositive, that act on the data we have.
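A common way to write it, as a sketch:

const compose = (f, g) => (data) => f(g(data));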

Now let's define the two functions
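They might look like this:

const multiplyBy3 = (num) => num * 3;
const makePositive = (num) => Math.abs(num);

const multiplyBy3AndAbsolute = compose(multiplyBy3, makePositive);

multiplyBy3AndAbsolute(-50); // 150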

Now if I run this, I get 150: Math.abs(-50) is 50, and 50 * 3 is 150.

Using compose, we’ve created our own assembly line where we can compose different functions together.

Remember the definition,
Composability is a system design principle that deals with the relationship between components that can be selected and assembled in various combinations.

There is a lot of power here because now we can compose functions and build them together to add extra functionality. We take a piece of data, we take it through all these functions and then we finally have some sort of data that gets outputted.

All those functions are pure and all those functions are composable.

Compose is one of the most common functions you are going to see in a programming language. If you are using functional programming, you are definitely using compose.

Pipe

Pipe is the same thing as compose except instead of going from right to left, it goes left to right.

We just swap these around so that with pipe the order of operations is different: f gets run first over the data. The first function we pass in (multiplyBy3) runs first, and then g runs over that result; makePositive runs last.
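As a sketch:

const pipe = (f, g) => (data) => g(f(data));

const multiplyBy3AndAbsolute = pipe(multiplyBy3, makePositive);

multiplyBy3AndAbsolute(-50); // still 150, but multiplyBy3 runs first this time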

For example,
fn1(fn2(fn3(50)))

With compose, this is what we can do instead:

compose(fn1, fn2, fn3)(50)

We are saying I want you to evaluate this right to left.
Take the data then apply function 3 to it. Whatever comes out of that, apply function 2 to it; whatever comes out of that, apply function 1 to it.

Pipe is just the opposite

pipe(fn3,fn2,fn1)(50)

These two calls are going to have the exact same output because the functions are the same. They can be used interchangeably; use whichever reads better to you.

And that's the power of compose. We get tiny functions that are easy to test and we put them together to have powerful results.

Arity

This simply means the number of arguments a function takes.
If we look at the compose function, it has an arity of 2; multiplyBy3 has an arity of 1.

Although it isn't a hard rule, in functional programming it's usually good practice to have fewer parameters. The fewer the parameters, the easier a function is to use, and the more flexible it becomes.

We can use things like currying, or compose and pipe, to compose these functions together.

The more parameters a function has, the harder it is to compose with other functions. It doesn't mean it's impossible, but it does become more difficult.

So when it comes to Arity, there is no hard right or wrong. But you may want to stick to one or two parameters.

Why is functional programming so great?

The separation of data and functions — or data and the effects that happen on that data.

Doing effects and logic at the same time, as in OOP, may create side effects that cause a lot of bugs. If multiple things in your program handle some piece of data at the same time, that gets really complicated and can cause many problems.

So the idea of keeping functions small, pure, and composable; doing one thing at a time and doing it well; immutability; the idea that a function takes inputs and returns outputs so that it can be used with other functions: all of this gives us a predictable program and minimizes bugs, because everything is so simple.

And as long as we are able to combine these little small functions together, we are able to create really complex programs.

It doesn’t mean that functional programming is the answer to everything. But because of its nature, it works well with distributed systems — systems that have different machines all over the world working with each other; or parallelism where multiple things have to happen in parallel. These functions are pure so there are no strange bugs.

But it also depends on the problem you have. There are times when Object-oriented programming might be better.
Say you are building a fairy tale game and you have clear object characters in the game that have some sort of state, can interact with that state, and can be interacted with by others.

Or you have an Amazon shopping cart where there is clear data that needs to get processed.

In fact, let’s build an Amazon shopping cart to apply these principles of functional programming that we’ve learned so far.

Functionality

  1. Add Item to cart
  2. Add tax to items in the cart
  3. Buy item
  4. Empty cart

These are the functions we are going to need

These are the functions that are going to affect the data that we have — the user.
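As a sketch, the user data might look like this (the exact shape is an assumption), and the functions we will build below are addItemToCart, applyTaxToItems, buyItem and emptyCart:

const user = {
  name: 'Kim',
  active: true,
  cart: [],
  purchases: []
};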

Based on what we've learned, we want to keep these functions pure. We are also applying a bunch of steps to the same data.

We can use something like compose to compose all these steps so that function purchaseItem() does all these things here.

But first, how would we approach this without compose?
Let's say we receive the user and item as parameters; we will return the user with the new item.

We know that we want to keep things immutable. We don't want to modify the user, so we will return a new object.

The third parameter is our purchased item.
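Something along these lines, using Object.assign, whose third argument carries the new data:

function purchaseItem(user, item) {
  return Object.assign({}, user, { purchases: item });
}

purchaseItem(user, { name: 'laptop', price: 344 });
// { name: 'Kim', active: true, cart: [], purchases: { name: 'laptop', price: 344 } }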

Here Kim our user has purchased a laptop with a price of 344 and it's in their purchase history.
But that's not all because remember, we have to add a tax and all the other functions we saw. Maybe Kim wants to remove something from the cart. We are being too simplistic here.

So we need to compose these functions together.

I have a factory called purchaseItem that takes the data — user and item and we are giving this data to all these functions above. So let's compose all these functions together.

Note that here we have more than two functions. This is part of the reason you should use a library like Ramda or Lodash to use compose because you don't need to implement this yourself. For now, let's do it for the purpose of learning.

We are using the spread operator to get all the other arguments

Now let's define purchaseItem.

reduce is a higher-order function. purchaseItem receives a bunch of functions, and they are composed one by one.
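A sketch (the names addItemToCart, applyTaxToItems, buyItem and emptyCart are assumed; they are defined step by step below):

const compose = (f, g) => (...args) => f(g(...args)); // ...args spreads user and item through

const purchaseItem = (...fns) => fns.reduce(compose);

purchaseItem(
  emptyCart,        // runs last
  buyItem,
  applyTaxToItems,
  addItemToCart     // runs first: compose works right to left
)(user, { name: 'laptop', price: 200 });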

Here we've built our own compose function that allows us to enter any number of parameters that we want into purchase item. Now we can compose these functions and act over the data we receive.

  1. Add Item to cart.
We are copying the prev cart array so we don't mutate data. We are passing the item we are buying
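A sketch:

function addItemToCart(user, item) {
  const updatedCart = user.cart.concat(item);   // copy the previous cart rather than mutating it
  return Object.assign({}, user, { cart: updatedCart });
}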

2. Apply Tax
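A sketch, with the tax rate chosen so that 200 becomes 260:

function applyTaxToItems(user) {
  const taxRate = 1.3;
  const updatedCart = user.cart.map(item =>
    Object.assign({}, item, { price: item.price * taxRate })
  );
  return Object.assign({}, user, { cart: updatedCart });
}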

If we run this we see that the price of the laptop went up from 200 to 260. We successfully applied a tax

3. Buy item

Here we want to move the cart item to purchases
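A sketch:

function buyItem(user) {
  return Object.assign({}, user, {
    purchases: user.purchases.concat(user.cart)  // copy the cart items into purchases
  });
}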

4. Empty cart

So now that Kim has purchased her item, we want to empty that cart.
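A sketch:

function emptyCart(user) {
  return Object.assign({}, user, { cart: [] }); // hand back a fresh, empty cart
}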

Kim has just purchased her first laptop.

Everything just works like a factory. And if we want to add new functionality such as upgrade Kim’s status, the rest of the code doesn't really care about that — we can just create a new function and just add it here.

That's the beauty of functional programming.

It’s the idea that we are building these small composable functions that are each worried about their own world. So whenever bugs happen, as long as the functions are well tested, the bug is most likely in a place where we have state. The rest of the functions are pure.

But here is the exciting part that makes functional programming so interesting. We can playback history as well.

As a retailer, you want to keep track of your data. You want to know what users did. So you need a way to figure out what happened — perhaps logs of what the user might have done that may have resulted in an error.

Using our functional programming paradigm, we have the ability to travel back and forth through time.
We can have a history variable that we can fill up at every stage.

let amazonHistory = [];

Now every time we do something, we want to add to this history.

In addItemToCart, we are pushing the user’s current state.
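A sketch, using addItemToCart as the example; the other functions could push to the history in the same way:

function addItemToCart(user, item) {
  amazonHistory.push(user);                     // record the state before we change it
  const updatedCart = user.cart.concat(item);
  return Object.assign({}, user, { cart: updatedCart });
}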

So when I inspect amazonHistory, I have the entire history of Kim and what she did. We can go back in time now and see where Kim made her purchases.

Maybe now Amazon can create new functions like getUserState() or go back in history() or goforward(). The possibilities are endless.

And although we are modifying state with amazonHistory, remember we can’t have pure functions alone. At some point, we do have to mutate data. The idea here with functional programming is to minimize those mutations.

Pipe

Remember, with compose we go from right to left. If you prefer reading from left to right, you just change compose to pipe and swap g and f.

We will have to change the orders of the functions as well in the opposite direction. You can choose pipe and compose based on what you like.
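A sketch of the piped version:

const pipe = (f, g) => (...args) => g(f(...args));

const purchaseItem = (...fns) => fns.reduce(pipe);

purchaseItem(
  addItemToCart,    // with pipe, the first function listed runs first
  applyTaxToItems,
  buyItem,
  emptyCart
)(user, { name: 'laptop', price: 200 });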

Conclusion

Functional programming gives us this idea of a pure function, and functions are useful because we limit repetition. The beauty is that once we've written a function, we can reuse it somewhere else.

Functional programming lays the foundation for creating reusable code. We can move pieces of functions around to do different things based on our needs.


And although functional programming has different concepts than OOP, at the end of the day, the goal is still the same; we want our code to be:

  1. Clear + Understandable
  2. Easy to extend
  3. Easy to maintain
  4. Memory efficient
  5. DRY

Composition vs Inheritance

Inheritance is when a superclass is extended into smaller pieces that add or overwrite things.

Here Elf uses inheritance by using "class extends":
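A sketch (the character names and properties here are made up):

class Character {
  constructor(name, weapon) {
    this.name = name;
    this.weapon = weapon;
  }
  attack() {
    return `attack with ${this.weapon}`;
  }
}

class Elf extends Character {          // Elf inherits everything Character has
  constructor(name, weapon, type) {
    super(name, weapon);
    this.type = type;
  }
}

const dobby = new Elf('Dobby', 'cloth', 'house');
dobby.attack(); // 'attack with cloth'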

Composition on the other hand is when we use smaller pieces to create something bigger. We compose functions to act on data differently.

Let's see how composition can solve some of the problems that come with inheritance. Keep in mind that the idea is not to pick one over the other.

You can use both.

Drawbacks of inheritance

With inheritance, code is structured around what something is, versus what it does, as in functional programming.

With classes, we say that a class will have data, properties, and methods that act upon that data.

But when we define things by what they are, it becomes very hard to change them later. With inheritance, we are assuming no change, yet things change all the time.

For example, let's say that from the above example, the character needs to have a sleep method added to it. Now all the classes that extend from character will have the sleep method even though they don’t necessarily need it.

Tight coupling problem

With a parent class and a child class, the coupling is very tight.

It's the opposite of reusable, modular code. Making a small change to a class has a rippling effect on all its subclasses, and this tends to break things.

So tightly coupled inheritance, where changing things in one place has rippling effects on all the other things, can be a benefit in keeping your code DRY, but it can also cause a lot of problems.

With dependencies, if you change something on a class, you have to make sure that it doesn't break anything in its subclasses because they are using inheritance.

Fragile base class problem

Because changes to the base class affect all subclasses, the base class can be very fragile. It can break our code down the road.

“‘fragile’ because seemingly safe modifications to a base class, when inherited by the derived classes, may cause the derived classes to malfunction. The programmer cannot determine whether a base class change is safe simply by examining in isolation the methods of the base class”.

Hierarchy problem

Say there were two Elves, a boss Elf and a junior Elf, with the junior Elf inheriting from the boss Elf. The problem with hierarchy is: what if, for some reason, the junior Elf now needs to sit higher in the hierarchy than the boss Elf?

And if we have more methods, it means the junior Elf is also going to inherit all the methods it doesn't need. This is also known as the classic gorilla-banana problem.

So how can we fix some of these bad inheritance principles with composition?

For starters, we can remove all the methods and compose them separately.

Here we are saying that these are the abilities that the Elf has. We have the barest Elf character possible, and we can keep adding abilities to it.

We are turning the inheritance model from what an Elf is to what an Elf does. By having a base Elf, we give the Elf abilities through things like getAttack() which uses functions to add to the character object different abilities. Now we can compose these small little pieces of functionality to describe our character.

Also here state is not internal; getAttack simply gets the character state and returns it, same with Elf.
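A sketch of that compositional version (getAttack and the property names are assumptions):

function getAttack(character) {
  return Object.assign({}, character, {
    attack: () => `attack with ${character.weapon}`,
  });
}

function Elf(name, weapon, type) {
  const elf = { name, weapon, type };  // the barest Elf possible: just state
  return getAttack(elf);               // abilities are composed on, not inherited
}

const peter = Elf('Peter', 'stones', 'house');
peter.attack(); // 'attack with stones'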

So to review,

Inheritance is when a superclass is extended into smaller pieces that add or overwrite things.
And although you can be very careful with it and make sure the base class stays very general so that we don't overload our subclasses, it can easily get out of hand as we go deeper and deeper down the inheritance chain.

And once we need to change something, it becomes really difficult. On the other hand, composition is about smaller pieces that are combined to create something bigger. We combine these boxes based on what we need to create our desired output.

And if we need to add something later on, we just add another puzzle piece by composing things together, or even remove pieces we no longer need.

So arguably composition is a better long-term solution than inheritance. This doesn't mean inheritance is always bad; there are ways you can still write great programs with it, but given the problems that might come up in the future, especially with so many unknowns, it becomes really difficult.

Composition will help us create code that is more stable as well as easier to change in the future — it's simply a better tool to use when creating software.

A programming paradigm is a way of writing code that complies with a specific set of rules: for example, organizing code into units is OOP, while avoiding side effects and writing pure functions is FP.

In OOP, an object is a box containing information and operations that refer to the same concept. The pieces of information inside the object are called attributes or state, and the operations that can happen on that state are known as methods.

In functional programming, the code is essentially a combination of functions and data is immutable; this leads to writing programs with no side effects.

A pure function cannot change the outside world and the output value of a function depends on the given arguments. This allows functional programming to have real control over the program flow.

In functional programming, functions are first-class citizens whereas in OOP, objects are first-class citizens.

We also saw the pillars of each of these paradigms.
In OOP, we had abstraction and encapsulation, which allow us to encapsulate related ideas together in objects and hide irrelevant data from the user. We also learned about inheritance and polymorphism.

In FP, we saw that it's all about the idea of pure functions and composing functions to act upon that data.

Both these paradigms have been around since the 70s.

OOP is very common in languages such as C#, Python, and Java, and FP in languages such as Clojure and Haskell.

But at the end of the day, we are not choosing one over the other. All of them are good in their own ways. They are simply different approaches to the same problem.

Although some languages favor one paradigm over the other, languages like Javascript allow you to do both.

The advantage of each paradigm lies in how you model your algorithms and data structures; the choice is simply what makes more sense for your project.

Key differences

FP is all about performing many different operations for which the data is fixed.

OOP is about few operations on common data.

In FP, we don't modify state.

OOP is very stateful.

FP functions are pure; there are no side effects. The functions we write don't make an impact on the code that is running outside of that function.

In OOP there are side effects; methods manipulate our internal state.

FP is more declarative. It's about what we want to be doing

OOP is about how we want it to be done which is more imperative.

So when should you use one over the other?

FP is quite good at processing data for large applications. If you are analyzing user data, maybe using it for a machine learning model, functional programming works well for high-performance processing because you can run it across multiple processors.

If on the other hand you have many things like characters in a game, with not too many operations, then OOP might be a better solution.

But you can use ideas from both of these to write your code. For example, the React Javascript library uses both classes and pure functions.

In all programs, there are two primary components:
The data and behavior.

OOP says: bring together the data and the behavior in a single location, called an object or class. This allows us to understand our program more easily and be more organized.

Functional programming says that data and behavior are distinctly different things and should be kept separate for clarity.
