Declarative Computation

David Muhr
Apr 17, 2016 · 5 min read


This article is part of a series of six articles.

The declarative computation model results in strict functional programming, where a component's behaviour is independent of when it is executed and of what happens in the rest of the computation, i.e. it is referentially transparent. We say that a declarative operation is:

  • independent: it does not depend on any execution state outside of itself
  • stateless: it has no internal execution state that is remembered between calls
  • deterministic: it always gives the same results when given the same arguments

These properties allow declarative components to be written, tested and proved correct independently of other components and of their own history (previous calls). Reasoning about declarative programs is simple, since every part of a declarative program can be fully understood from its inputs and outputs. The effort needed to understand the whole program is then just the sum of the efforts needed to understand its individual parts.

Our declarative computation model could include every language construct that is regarded as a declarative operation, e.g. values, records, immutable variables, functions, conditionals, pattern matching…

Two styles of declarative programming have become particularly popular: functional programming and logic programming. We will have a look at functional programming in the next article.

“Declarativeness”

To fully understand declarative programming you have to understand declarativeness. Intuitively, declarativeness is defining the what (the results we want to achieve) without explaining the how (the algorithms/details needed to achieve the results).

The least expressive form of declarative programming is to declare only the data (e.g. only records/HTML/JSON…). This is so-called descriptive declarativeness. Programming would be really simple if we could describe our programs in a purely descriptive manner.

At some point in time we realise that descriptive declarativeness is not expressive enough for our needs. We have already seen above what declarative operations are. Notice that the combination of declarative operations always results in a declarative operation and our resulting program is therefore declarative! We call this programmable declarativeness. There are two fundamentally different ways to view programmable declarativeness:

  • A definitional view, where declarativeness is a property of the component implementation. For example, programs written in the declarative computation model are guaranteed to be declarative because of properties of the model.
  • An observational view, where declarativeness is a property of the component interface. This view follows the principle of abstraction: to use a component, it is enough to know its specification without knowing its implementation. The component just has to behave declaratively, i.e. as if it were independent, stateless and deterministic, without necessarily being written in the declarative computation model.

Declarative programming techniques

There are several programming techniques that are characteristic of declarative programming.

Iterative computations

An iterative computation is a simple loop whose stack size is bounded by a constant, independent of the number of iterations. An important aspect of iterative computations is that they start with an initial state and transform that state until reaching a final state.

As an example of an iterative computation we will use Newton’s method for calculating the square root of a positive real number `x`. We start with a guess of the square root and improve it iteratively until it is accurate enough.

We should not have to write every iterative computation like the one above. There must be a part of it that we can abstract away.
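The `iterate` listing is missing from this copy of the article; a minimal sketch of such a control abstraction (in Python, with assumed parameter names) might be:

```python
def iterate(state, done, transform):
    """Repeatedly transform the state until done(state) holds --
    a functional formulation of a while loop."""
    while not done(state):
        state = transform(state)
    return state

def sqrt_iterative(x):
    # Newton's square root expressed through the abstraction: the
    # problem-dependent parts are passed in as functions.
    return iterate(
        1.0,                                   # initial guess
        lambda g: abs(g * g - x) < 1e-9 * x,   # are we accurate enough?
        lambda g: (g + x / g) / 2.0,           # improve the guess
    )
```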

We have now built a parameterised function `iterate` that takes the problem-dependent functions as arguments and abstracts over them. An abstraction like this is known as a control abstraction, i.e. an abstraction that can be used to provide a desired control flow. The control abstraction we have built is known as a `while` loop in many languages.

Recursive computations

Iterative computations are a special case of a more general kind of computation called recursive computation. Recursion in programming occurs in two major ways: in functions and in data types. A recursive function can call itself anywhere in the body and can call itself more than once. Recursive computations often resemble behaviour you might know from loops in imperative languages.

While iterative computations can be seen as simple loops with constant stack size, the stack size of recursive computations can grow as the input grows. It is important to keep the stack size from growing whenever possible, to prevent stack overflows.

A typical case of a recursive computation is the factorial function implemented below.
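The listing is not preserved in this copy; the usual recursive definition, sketched in Python:

```python
def factorial(n):
    """Recursive factorial: each call waits for the result of the next
    call before multiplying, so the stack grows linearly with n."""
    if n == 0:
        return 1
    return n * factorial(n - 1)
```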

An experienced programmer might immediately see that the above implementation of the factorial function can cause a stack overflow, using a simple technique: substitution. The substitution technique is one of the most important tools in every programmer’s toolbox. Say we want to examine `factorial 3`; let’s expand our factorial function with the substitution technique.
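The original trace is not preserved here; expanding the recursive definition step by step (a sketch) looks like this:

```python
# Substituting each call of the recursive factorial by its body:
# factorial(3)
# = 3 * factorial(2)
# = 3 * (2 * factorial(1))
# = 3 * (2 * (1 * factorial(0)))
# = 3 * (2 * (1 * 1))   # every multiplication is still pending on the stack
assert 3 * (2 * (1 * 1)) == 6
```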

Now you can see that the stack size inevitably grows with the input argument. To convert a recursive computation into an iterative one you only have to do one thing: carry the state forward at all times. Carrying the state forward is often referred to as accumulating the result, and the forwarded state is called an accumulator.
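The accumulator version of factorial from the original article is missing; a sketch in Python might be:

```python
def factorial(n, acc=1):
    """Factorial with an accumulator: the partial result is carried
    forward in `acc`, so the recursive call is the last thing done."""
    if n == 0:
        return acc
    # Tail call: nothing is left to do after it, so a language with
    # tail-call optimisation keeps the stack constant.
    return factorial(n - 1, acc * n)
```

Python itself does not optimise tail calls, so this remains a sketch of the idea rather than a stack-safe implementation.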

Our new algorithm has effectively changed the computation from `(3 * (2 * (1 * 1)))` to `(((1 * 3) * 2) * 1)`. Let’s use substitution again to prove that we are not blowing up the stack this time.
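The second trace is likewise missing from this copy; expanding the accumulator version by substitution (a sketch):

```python
# Substituting the accumulator version of factorial(3):
# factorial(3, 1)
# = factorial(2, 1 * 3)   # acc = 3
# = factorial(1, 3 * 2)   # acc = 6
# = factorial(0, 6 * 1)   # acc = 6
# = 6
# Each step rewrites to a single call: no pending multiplications pile up.
assert ((1 * 3) * 2) * 1 == 6
```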

Recursive computations are really useful in combination with recursive data types. A recursive data type is implemented in terms of itself. Lists and trees are common examples of recursive data types.
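As a small illustration (assumed, not from the original article), a linked list is a data type defined in terms of itself, and a recursive computation follows its shape naturally:

```python
# A linked list is either None (the empty list) or a pair (head, tail),
# where the tail is itself a linked list -- the type refers to itself.
def length(xs):
    """Compute the length of a linked list recursively."""
    if xs is None:
        return 0
    head, tail = xs
    return 1 + length(tail)

numbers = (1, (2, (3, None)))   # the list 1, 2, 3
```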

Higher-order programming

Higher-order programming is a collection of programming techniques that become available when using functions in programs. Functions here are lexically scoped closures. The term higher-order comes from the concept of the order of a function: a function whose arguments are not functions is of order zero; a function that has at least one function argument is of order one. There are four operations that enable higher-order programming; functions are then regarded as first-class functions. The term indicates that they are not only declared and invoked but can be used everywhere in the language as just another data type.

  • Genericity: The ability to pass functions as arguments to other functions. To make a function generic is a way to let any specific entity (an operation or value) in the function become an argument of the function. We say that the entity is abstracted out of the function body. Each time the function is called another entity can be given.
  • Instantiation: The ability of functions to return functions. These functions are often called “factories” or “generators”.
  • Embedding: The ability for functions to be embedded in data structures.
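The three abilities above can be illustrated with a short sketch (the names are assumed, not from the original article):

```python
# Genericity: the comparison entity is abstracted out of the function
# body and passed in as an argument.
def smallest(xs, less_than):
    result = xs[0]
    for x in xs[1:]:
        if less_than(x, result):
            result = x
    return result

# Instantiation: a function that creates and returns new functions.
def make_adder(n):
    return lambda x: x + n

# Embedding: functions stored inside a data structure.
operations = {"double": lambda x: 2 * x, "square": lambda x: x * x}
```

For example, `smallest([3, 1, 2], lambda a, b: a < b)` yields `1`, `make_adder(5)(2)` yields `7`, and `operations["square"](4)` yields `16`.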

Up until now we were speaking of functions, but strictly speaking what you might know as functions are often procedures. A function is a more mathematical construct and “packages” only an expression. A procedure on the other hand “packages” an arbitrary statement.

  • Procedural abstraction: the ability to convert any statement into a procedure. Any statement can be packaged in a procedure; this does not execute the statement but instead creates a procedure (a closure). Because the procedure captures its contextual environment, executing it later gives the same result as executing the statement in its original context. Procedures, however, can do more than just delay execution of a statement: they can also take arguments, which allows some of their behaviour to be influenced by the call.
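A small sketch (an assumed example, not from the original article) of procedural abstraction and delayed execution:

```python
executed = []

def make_logger(message):
    # Packaging the statement creates a closure; nothing runs yet.
    # `message` is captured from the contextual environment.
    def log():
        executed.append(message)
    return log

say_hi = make_logger("hi")   # creates the procedure
assert executed == []        # the packaged statement has not executed
say_hi()                     # executing the procedure runs the statement
assert executed == ["hi"]
```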

Nondeclarative needs

Declarative programming is very useful, but it is somewhat detached from the real world in which you have to deal with state, concurrency and I/O. Even if it’s possible to stay fully declarative, remember the rule of least expressiveness and think about the costs/benefits of staying fully declarative.

In conclusion, the basic technique for writing declarative programs is to consider the program as a set of recursive functions using higher-orderness to simplify the program structure.

Resources: CTM (Concepts, Techniques, and Models of Computer Programming by Peter Van Roy and Seif Haridi)
