This is the second part of an article on fundamentals of dynamic programming. In the first part, we defined dynamic programming and studied three simple examples. This post focuses on three more advanced problems: the rod-cutting problem, calculating the longest increasing subsequence and the change-making problem.

Suppose we have a rod of length ℓ (where ℓ is a positive integer) which is represented by a finite set of cardinality ℓ. We can cut it as many times as we want into sub-rods (represented by subsets) with a positive integral length, each cut incurring no…
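To make the problem concrete, here is a minimal bottom-up sketch of the classic rod-cutting recurrence, assuming cuts are free and that `prices[i]` is the price of a sub-rod of length *i* + 1 (the function name and the example price table are illustrative, not taken from the article):

```python
def max_revenue(prices, length):
    """Classic rod-cutting DP: prices[i] is the price of a rod of
    length i + 1. Returns the best total price achievable by cutting
    a rod of the given length (cuts are assumed to be free)."""
    best = [0] * (length + 1)  # best[l] = max revenue for a rod of length l
    for l in range(1, length + 1):
        # Try every length c for the first piece; the rest is a subproblem.
        best[l] = max(prices[c - 1] + best[l - c] for c in range(1, l + 1))
    return best[length]

# Example: prices for lengths 1..4
print(max_revenue([1, 5, 8, 9], 4))  # 10 (two pieces of length 2)
```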

Dynamic programming is an essential competitive programming technique. It consists of trading memory for time to avoid repeating computations. This can yield significant improvements in performance, sometimes turning an exponential-time algorithm into a linear one.

This is the first part of a series of articles. The first one illustrates the fundamentals of dynamic programming with three basic problems, the second one explores three more advanced algorithms, and three separate articles will tackle more complex problems. Here is the list of all the algorithms we’ll cover:

- Calculating the *n*-th Tribonacci number **(part 1)**
- Efficient binomial coefficient calculator **(part 1)**
- Counting…

This is the third part of a series of articles on the fundamentals of dynamic programming. In the first two parts, we studied six problems which can be solved using dynamic programming. This post tackles a more advanced problem: the integral Knapsack problem.

The Integral Knapsack Problem is an optimisation problem which has many applications and can be solved in pseudo-polynomial time using dynamic programming.

Suppose you have a knapsack with capacity *M* and a set *I* of *N* items *i*, each characterised by its weight W(*i*) and its value V(*i*). Weights are…
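As an illustration of the kind of solution the article builds up to, here is a standard one-dimensional DP sketch for the 0/1 knapsack (the `knapsack` name and the encoding of items as (weight, value) pairs are assumptions, not the article's notation):

```python
def knapsack(capacity, items):
    """0/1 knapsack: items is a list of (weight, value) pairs with
    integral weights. dp[c] = best value achievable with capacity c.
    Iterating capacities downwards ensures each item is used at most once."""
    dp = [0] * (capacity + 1)
    for weight, value in items:
        for c in range(capacity, weight - 1, -1):
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]

# Example: capacity 10, items given as (weight, value)
print(knapsack(10, [(5, 10), (4, 40), (6, 30), (3, 50)]))  # 90
```

Iterating the capacities from high to low is what makes this the 0/1 variant; going upwards instead would allow unlimited copies of each item (the unbounded knapsack).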

Linked lists are among the most fundamental data structures. In this article, we will define terminology related to linked lists, explore the differences between *singly* and *doubly* linked lists and see how we can implement them from scratch.

- Definition, operations and terminology;
- singly linked list: implementation and analysis;
- doubly linked list: operations, implementations, XOR-linking.

A *singly linked list* (usually simply called a *linked list*, or even *list*) of type `T` can be defined as a 2-tuple (*h*, *t*), where *h* is an element of type `T` and *t* is a pointer to another list of type `T`. *h* is called…
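The 2-tuple definition above translates almost directly into code; here is a minimal Python sketch (the class and function names are illustrative, and `None` stands in for the empty list):

```python
class Node:
    """One cell of a singly linked list: a head element and a pointer
    to the rest of the list (None marks the empty list)."""
    def __init__(self, head, tail=None):
        self.head = head
        self.tail = tail

def prepend(lst, x):
    """O(1) insertion at the front: the cheap operation on singly linked lists."""
    return Node(x, lst)

def to_list(lst):
    """Walk the chain of pointers and collect the elements."""
    out = []
    while lst is not None:
        out.append(lst.head)
        lst = lst.tail
    return out

lst = prepend(prepend(prepend(None, 3), 2), 1)
print(to_list(lst))  # [1, 2, 3]
```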

This article is the second part of a series on the analysis of algorithm runtime complexity. In the first part, we covered the mathematical foundations behind asymptotic analysis and analysed several simple algorithms. This article focuses on the analysis of divide-and-conquer algorithms.

We’ll start by analysing some divide-and-conquer algorithms from scratch and then derive the master theorem.

- Terminology and simple properties: divide-and-conquer algorithms, logarithmic, linearithmic and quasi-linear complexity, recurrence trees;
- examples of analysis of divide-and-conquer algorithms;
- master theorem: statement and proof.

*Divide-and-conquer* is an algorithm design paradigm which solves a problem in three steps…
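Merge sort is a canonical instance of this paradigm and of the kind of analysis the article performs; a short sketch, with the associated recurrence noted in the comments:

```python
def merge_sort(xs):
    """Merge sort: split the input (divide), sort each half recursively
    (conquer), then merge the sorted halves (combine). The recurrence
    T(n) = 2 T(n/2) + O(n) gives O(n log n) by the master theorem."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Merge step: O(n) work at each level of the recursion tree.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```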

`fold` and `foldBack` are two very useful functions that operate on several data structures in F#. They can save you from writing recursive functions in many situations. However, they might be a bit tricky to understand at first. This article will explain how they work and give numerous examples of what they can help you do. We will focus on lists, but they behave very similarly with arrays, sets and other linear structures.

This article assumes basic knowledge of F#, including manipulation of lists and arrays.

- `fold`;
- `foldBack`;
- `fold2` and `foldBack2`;
- `mapFold`, `mapFoldBack` and application to range queries.

`List.fold` …
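For readers more familiar with Python than F#, `functools.reduce` is the closest analogue of `List.fold`: it threads an accumulator through a list from left to right. This correspondence is an illustration added here, not part of the article:

```python
from functools import reduce

# F#: List.fold (fun acc x -> acc + x) 0 [1; 2; 3; 4]
# threads the accumulator left to right: (((0 + 1) + 2) + 3) + 4
print(reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0))  # 10

# foldBack processes the list right to left, so the association differs.
# F#: List.foldBack (fun x acc -> x - acc) [1; 2; 3] 0
#   = 1 - (2 - (3 - 0)) = 2
# One way to mimic it in Python is to fold over the reversed list:
print(reduce(lambda acc, x: x - acc, reversed([1, 2, 3]), 0))  # 2
```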

This article is about asymptotic notations and how we use them to estimate the runtime complexity of an algorithm. We will start by introducing the mathematical concepts behind these notations. The rest of the article will give numerous examples of analysis of some well-known algorithms to cover a wide range of techniques.

I will illustrate this article with code written in Go and F#, depending on which paradigm lends itself better to each function. …

If you are given a distribution of grades, heights or income across a population (data which typically follow a normal distribution), one of the things you may want to do is find the mean and standard deviation that best approximate your data. This is an example of something we can achieve using least-squares regression. In this article, we will see how to use this method to find the optimal parameters of some function, so as to obtain the most accurate approximation of a set of data points.

We will start by tackling the simplest case: linear least squares. It…
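For the one-dimensional linear case, the best-fit line y = ax + b has a well-known closed-form solution derived from the normal equations; a small sketch under that assumption (the function name is illustrative):

```python
def linear_least_squares(xs, ys):
    """Fit y ≈ a*x + b by minimising the sum of squared residuals.
    Closed-form solution of the normal equations for the 1-D case:
    slope = covariance(x, y) / variance(x)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x  # the best-fit line passes through the means
    return a, b

# Points lying exactly on y = 2x + 1 should be recovered exactly.
print(linear_least_squares([0, 1, 2, 3], [1, 3, 5, 7]))  # (2.0, 1.0)
```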

This article covers the mechanisms that lie behind the concept of iterators in Python. The goal is to understand what happens when you use a for loop to iterate over a data structure.

Iterables, such as lists, tuples and sets, can be iterated over using a for loop:

```python
data = {0, 2, 4, 6, 9}
for d in data:
    print(d)
# 0
# 2
# …
```

When you run this code, Python calls the `__iter__` special method of the object you are passing to it. This method returns an iterator, which is an instance of a class that implements the…
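A class can also participate in this protocol directly by implementing `__iter__` and `__next__` itself; a minimal hypothetical example (the `Countdown` class is illustrative, not from the article):

```python
class Countdown:
    """A minimal illustration of the iterator protocol: __iter__ returns
    an iterator, and __next__ produces values until StopIteration."""
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self  # this object is its own iterator

    def __next__(self):
        if self.current <= 0:
            raise StopIteration  # signals the for loop to stop
        self.current -= 1
        return self.current + 1

print(list(Countdown(3)))  # [3, 2, 1]
```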

Google owes a great part of its success to the algorithm which was originally used to rank webpages. This algorithm, PageRank, sorts all the pages of a network according to their popularity. The more webpages link to some page A, the higher its score is and the more likely it is to appear among the top results of a search. PageRank is a common example in linear algebra courses, as it is a good illustration of the applications of eigenvalues and eigenvectors. …
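A toy version of the idea can be sketched with power iteration on a damped link structure (the `pagerank` function, the damping factor 0.85 and the three-page example are illustrative assumptions; this sketch also assumes every page has at least one outgoing link):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank by power iteration: repeatedly redistribute scores
    along links, starting from a uniform vector. links[p] is the list of
    pages that p links to; every page is assumed to link somewhere."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1 / n for p in pages}
    for _ in range(iterations):
        # Each page keeps a baseline (1 - damping) / n, plus damped
        # shares of the scores of the pages linking to it.
        new = {p: (1 - damping) / n for p in pages}
        for p, targets in links.items():
            share = damping * rank[p] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

# A links to B and C; B and C link back to A: A should rank highest.
ranks = pagerank({"A": ["B", "C"], "B": ["A"], "C": ["A"]})
print(max(ranks, key=ranks.get))  # A
```

Under the hood this is just repeated multiplication by the damped link matrix, so the score vector converges to its dominant eigenvector, which is where the eigenvalue connection mentioned above comes from.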