Can we call a function a googol times?
Yes, this is about optimal reductions!
Imagine if you could call a function f a googol times. As in, you implement some f and then repeatedly apply it to an input x, as in y = f(f(f(...f(x)...))), but 10^100 times, and get a result. Seems impossible, right? After all, a googol is so huge that even counting to it would take more years than the lifetime of the universe. In fact, if you could call arbitrary fs that many times, you could easily break Bitcoin keys! Yet…
A few years ago, I was experimenting with an “optimal lambda-calculus reduction algorithm”, when I found myself puzzled by the fact that it seemed to do exactly that for a specific function I had written. I even posted a highly-voted Stack Overflow question to understand what was going on. Fast forward a few years: that experience was so inspiring that I made an entire language, Formality, around the algorithm, and wrote a somewhat popular Medium article about optimal reductions, but I just noticed I never documented precisely what happened that day. So let’s do it!
A dumb algorithm
It all started with a really, really dumb implementation of exp_mod, i.e., the a^x mod N function, on the lambda calculus. Here is how the algorithm worked:
- First, I created a λ-encoded tuple holding the numbers from 0 to N-1,
- Then, I rotated it left-wise a^x times,
- Then, I returned its first element.
And that’s it. It isn’t hard to see why this stupid procedure returns a^x mod N. As an example, suppose we wanted to compute 3^2 mod 5. First, we would create an array of 5 elements, [0,1,2,3,4]. Then we would find 3^2, which is 9, and rotate the array left-wise 9 times:
[0,1,2,3,4] ... original array
[1,2,3,4,0] ... rotate left 1 time
[2,3,4,0,1] ... rotate left 2 times
[3,4,0,1,2] ... rotate left 3 times
[4,0,1,2,3] ... rotate left 4 times
[0,1,2,3,4] ... rotate left 5 times
[1,2,3,4,0] ... rotate left 6 times
[2,3,4,0,1] ... rotate left 7 times
[3,4,0,1,2] ... rotate left 8 times
[4,0,1,2,3] ... rotate left 9 times
Then we would return the first element, 4. And that’s it: 3^2 mod 5 = 4. Yay! Of course, this algorithm is reaaally slow. How slow? Run this JS program:
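The original JS program isn’t reproduced in this excerpt, but a minimal sketch of the same procedure (my reconstruction; the names rotate and exp_mod are my own) might look like this:

```javascript
// One left rotation: move the head of the array to its end.
function rotate(arr) {
  return arr.slice(1).concat([arr[0]]);
}

// The dumb exp_mod: build [0..n-1], rotate it left a^x times,
// one step at a time, then return the first element.
function exp_mod(a, x, n) {
  let arr = Array.from({length: n}, (_, i) => i);
  const times = Math.pow(a, x); // the whole a^x, not reduced mod n!
  for (let i = 0; i < times; ++i) {
    arr = rotate(arr);
  }
  return arr[0];
}

console.log(exp_mod(3, 2, 5)); // 4
// exp_mod(3, 20, 13) performs ~3.5 billion rotations — be patient.
```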
3^20 mod 13 takes 42 seconds on my notebook. Anything much bigger than 20 would take years. For perspective, GHC computes 3 ^ 2000000000 mod 13 in that same time! So, this algorithm is trash, pure garbage, and should never, ever be used. Or should it?
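For contrast, the standard fast way to compute exp-mods is square-and-multiply modular exponentiation, which a sensible implementation would use instead (a sketch with BigInt; the name modPow is my own):

```javascript
// Square-and-multiply modular exponentiation: O(log x) multiplications
// instead of a^x rotations.
function modPow(a, x, n) {
  let result = 1n;
  let base = a % n;
  while (x > 0n) {
    if (x & 1n) result = (result * base) % n; // odd bit: multiply in
    base = (base * base) % n;                 // square for the next bit
    x >>= 1n;
  }
  return result;
}

console.log(modPow(3n, 20n, 13n)); // 9
```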
A dumb algorithm, an optimal evaluator
Let’s now implement the same algorithm in Formality, an optimally evaluated functional language. This is what it looks like:
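(The Formality source itself isn’t included in this excerpt. As a hint of its λ-encoded structure, here is a rough JavaScript transliteration — my reconstruction, where church and pow stand in for what would be pure λ-terms:)

```javascript
// Church numeral n: a function that applies f to x, n times.
const church = n => f => x => {
  let r = x;
  for (let i = 0; i < n; ++i) r = f(r);
  return r;
};

// Church exponentiation: a^x is just the numeral x applied to the numeral a.
const pow = (a, x) => x(a);

// One left rotation of an array.
const rotl = arr => arr.slice(1).concat([arr[0]]);

// exp_mod via rotation: rotate [0..n-1] left a^x times, take the head.
const exp_mod = (a, x, n) =>
  pow(church(a), church(x))(rotl)(Array.from({length: n}, (_, i) => i))[0];

console.log(exp_mod(3, 2, 5)); // 4
```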
This is almost the same thing, except it counts backwards, because I messed up and was too lazy to re-implement it in the right order. So, how long does it take to compute 10^100 mod 1234? Let’s try it with fm! Install Formality, save the file above as Main.fm, and run it with fm -o Main/main:
$ time fm -o Main/main
My notebook takes 0.8s to output the answer (586)! Yes, that’s not impressive in a broader sense; after all, we can compute much bigger exp-mods with native BigNum algorithms. But we’re not using native BigNum: we’re using the dumbest function imaginable, calling it 10^100 times, more than the number of stars in the universe, and still getting a result. There is no clever compiler improving our dumb implementation; it is just the effect of the extreme laziness of optimal evaluation. If that’s not cool to you, it certainly is to me!
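One way to build intuition for what that sharing buys — this is my analogy, not literally what the optimal evaluator does — is to notice that a left rotation is a permutation, and composing a permutation with itself doubles the rotation count, so repeated squaring reaches 10^100 rotations in a few hundred compositions:

```javascript
// Compose two permutations (powers of one rotation commute, so order
// doesn't matter here).
const compose = (p, q) => p.map(i => q[i]);

// Raise a permutation to a (BigInt) power by repeated squaring.
function permPow(p, e) {
  let result = p.map((_, i) => i); // identity permutation
  let base = p;
  while (e > 0n) {
    if (e & 1n) result = compose(result, base);
    base = compose(base, base);
    e >>= 1n;
  }
  return result;
}

const n = 1234;
// Rotating [0..n-1] left once is the permutation i -> (i + 1) % n.
const rot1 = Array.from({length: n}, (_, i) => (i + 1) % n);

// First element after 10^100 left rotations, i.e. 10^100 mod 1234:
console.log(permPow(rot1, 10n ** 100n)[0]); // 586
```

Only about 333 squarings are needed to cover an exponent of 10^100, which is why the whole thing finishes instantly.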
And that’s it! The purpose of this post was to document this cool effect that allows us to run seemingly impossible programs. If that sounds fun to you, please follow the development of Formality in our official repository. The language is growing at a very fast pace and is more user-friendly than ever before. While it isn’t officially released yet, it is certainly at a point where you can actually use it without being a λ-calculus geek. Please take a look!
Thanks for reading!