
Concepts 2 — Compiletime, Runtime and Others

Dr. Timm Felden
6 min read · Jun 29, 2024


When thinking about computations in the context of programs, we usually think about what happens while the program is running. This is indeed the most common form, but there are others. Knowing these other forms helps when it comes to understanding costs, trade-offs, error messages, and the behavior of a program while it is running.

Classical programming languages are compiled before any code can be executed. For the moment, we’ll focus on such languages and generalize at the end of the article. So, there are two periods of time: the compiletime (CT), which includes everything that happens during compilation, and the runtime (RT), which includes everything that happens while a program is running. Note that the word runtime is sometimes also used for the runtime environment or language runtime, which is something completely different — in essence the implementation of the standard library — and not related to our topic here.

Compiletime happens only once per executable. Runtime happens once per execution of that executable. For most programs, this means that runtime happens a lot more often than compiletime. This, in turn, means that performing a computation during compiletime and just using the stored result at runtime reduces the total cost of the computation. Modern compilers therefore try to perform as many computations as possible at compiletime to provide you with an executable that is as efficient at runtime as possible.

Performing a computation like a function call at compiletime, however, requires that its parameters are known at compiletime. Such values are called compiletime constants. Further, the compiler needs to be able to understand and reproduce the effect of the called function. It can, for instance, understand a +, and it may understand a memory allocation, but it certainly cannot evaluate a function that queries data from a user.

So, defining something like piSquared = pi * pi can be done during compiletime. The programmer can express the value at their level of abstraction and the compiler will make sure that the correct bit pattern is computed and added to the resulting executable. Or it can deduce that the constant is never used and discard it completely — and, hopefully, warn the programmer about the pointless code. At runtime, all cost has been eliminated.

On the other hand, having the compiler compute pi * pi requires the compiler to do so on each compilation, i.e. the compiler will take a little longer to produce the final result. Therefore, excessive use of compiletime computation should be avoided. Excessive use is usually C++ template metaprogramming that creates types whose sole purpose is to perform some computation — types that exist only as intermediate state and are thrown away later. An alternative to such metaprogramming can be the use of code generators. Many languages offer only those compiletime computations that they expect to be able to handle efficiently.

Another area where compiletime computations hurt is software development and configuration-like code. While we develop software, we usually revise it until the compiler is satisfied. Then we run it. This can mean that the compiler is executed hundreds of times before one of the resulting programs is actually executed — and that program runs only once, because it still contains bugs that must be taken care of. This is something we should take into account when we pick the language for a project. Designing a new language or implementing core tooling is something most of us likely won’t do anyway. In some languages, such computations can be turned off, at least partly; debug configurations in C++, for instance, usually cater to this goal. Also, there are usually ways to run compilations asynchronously to your development, or to speed them up with incremental compilation and similar techniques.

Thus, if you just want to compute a diagram for an article, using something like R is a good choice. It is slow at runtime, but most of the time is spent setting up input processing and graphics parameters anyway. The resulting image is put into the document and, once that is done, the code is never executed again. If you are a startup and have no real customers, implementing your services in Python might work for you, because you don’t run them a lot anyway. If you are a larger company, however, a Python service might cause hardware costs to outmatch your development costs. So, you might want to choose Java or C++. And yes, startups can turn into large successful companies, invalidating once-adequate choices.

Code Generators

Some domains make heavy use of code generators. These take a domain-specific specification and translate it into source code in some output programming language, which, in turn, is then compiled into a program. Usually, these code generators perform complex and expensive computations and optimizations. Thus, some people use the term generationtime for the process. This is misleading insofar as such a generator is just another compiler, i.e. generationtime is simply a different compiletime. The irritating thing about this relation for most of us is that this different compiletime can happen at the program’s runtime. An example of this is the compilation of regular expressions into a matching automaton, which is often done at runtime even if the regular expression is constant.

Designtime, Interpretation, JIT, eval

Designtime is another related term, often encountered in the context of content creation like web pages or web applications. It is somewhat equivalent to compiletime in that it fixes the layout of input data. As such, it can be used to perform optimizations and validations on that data, if that is possible at all. Especially for web pages, it is something that happens before compilation, as the program is interpreted and compiled on every use, blurring the lines a bit. Even more, most scripting languages offer an eval function that allows compiling arbitrary code at runtime to extend the program on the fly — simply because it is possible and easy to implement in such languages. I won’t go into details here, as eval should be avoided at all costs, and for security reasons I would even discourage production use of languages that have it wherever possible.

The trickiest area is just-in-time compilation (JIT), which performs some or all compilation at runtime. The most common form is to do all the checks once and to compile into an intermediate representation (IR) that is more suitable for machine-only processing than source code. Using JIT compilation can cause computations like piSquared = pi * pi, if placed in the body of a function, to be executed a few hundred times until the compiler decides to optimize them out. Making accurate estimates of the execution cost of JIT-compiled code is, thus, very tricky. Making accurate measurements is even harder, as the code changes while it is measured. So, it isn’t even clear what an accurate measurement would be, because the user might experience the measured code only in half-optimized forms. Also, JIT compilation can start just before the end of the program, resulting in an overall extended runtime. Therefore, JIT-compiled languages are bad choices for short-running programs such as ls or rm. However, they shine on projects with lots of constant runtime configuration and dead code that can be detected and thrown away, such as web services with lots of code dependencies. Also, most implementations of JIT compilation will stop program execution while the program is recompiled. Thus, they are not really suitable for hard real-time applications, i.e. ones that require deadlines to be met, like providing a new frame in a computer game every 16 ms.


Dr. Timm Felden

Programming language enthusiast for decades. Author of Tyr. Writes about types and programming languages.