The resolution of the Bitcoin experiment

I’ve spent more than 5 years being a Bitcoin developer. The software I’ve written has been used by millions of users and hundreds of developers, and the talks I’ve given have led directly to the creation of several startups. I’ve talked about Bitcoin on Sky TV and BBC News. I have been repeatedly cited by the Economist as a Bitcoin expert and prominent developer. I have explained Bitcoin to the SEC, to bankers and to ordinary people I met in cafes.

From the start, I’ve always said the same thing: Bitcoin is an experiment and like all experiments, it can fail. So don’t invest what you can’t afford to lose. I’ve said this in interviews, on stage at conferences, and over email. So have other well known developers like Gavin Andresen and Jeff Garzik.

But despite knowing that Bitcoin could fail all along, the now inescapable conclusion that it has failed still saddens me greatly. The fundamentals are broken and whatever happens to the price in the short term, the long term trend should probably be downwards. I will no longer be taking part in Bitcoin development and have sold all my coins.

Why has Bitcoin failed? It has failed because the community has failed. What was meant to be a new, decentralised form of money that lacked “systemically important institutions” and “too big to fail” has become something even worse: a system completely controlled by just a handful of people. Worse still, the network is on the brink of technical collapse. The mechanisms that should have prevented this outcome have broken down, and as a result there’s no longer much reason to think Bitcoin can actually be better than the existing financial system.

Think about it. If you had never heard about Bitcoin before, would you care about a payments network that:

  • Couldn’t move your existing money
  • Had wildly unpredictable fees that were high and rising fast
  • Allowed buyers to take back payments they’d made after walking out of shops, by simply pressing a button (if you aren’t aware of this “feature” that’s because Bitcoin was only just changed to allow it)
  • Is suffering large backlogs and flaky payments
  • … which is controlled by China
  • … and in which the companies and people building it were in open civil war?

I’m going to hazard a guess that the answer is no.


Deadlock on the blocks

In case you haven’t been keeping up with Bitcoin, here is how the network looks as of January 2016.

The block chain is full. You may wonder how it is possible for what is essentially a series of files to be “full”. The answer is that an entirely artificial capacity cap of one megabyte per block, put in place as a temporary kludge a long time ago, has not been removed and as a result the network’s capacity is now almost completely exhausted.

Here’s a graph of block sizes.

The peak level in July was reached during a denial-of-service attack in which someone flooded the network with transactions in an attempt to break things, calling it a “stress test”. So that level, about 700 kilobytes of transactions (or less than 3 payments per second), is probably about the limit of what Bitcoin can actually achieve in practice.

NB: You may have read that the limit is 7 payments per second. That’s an old figure from 2011, and Bitcoin transactions have become a lot more complex since then, so the true figure is a lot lower.

The reason the true limit seems to be 700 kilobytes instead of the theoretical 1000 is that miners sometimes produce blocks smaller than allowed, and even empty blocks, even though there are lots of transactions waiting to confirm — this seems to be most frequently caused by interference from the Chinese “Great Firewall” censorship system. More on that in a second.
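
To put those figures in context, here is the back-of-the-envelope arithmetic as a small Java sketch. The 500-byte average transaction size and the ten-minute block interval are round assumptions chosen for illustration, not measured values:

    public class CapacityEstimate {
        public static void main(String[] args) {
            // Assumed figures, for illustration only.
            double effectiveBlockBytes = 700_000;  // ~700 kilobytes of transactions per block in practice
            double avgTransactionBytes = 500;      // rough average size of a modern Bitcoin transaction
            double secondsPerBlock = 600;          // one block roughly every ten minutes

            double txPerBlock = effectiveBlockBytes / avgTransactionBytes;
            double txPerSecond = txPerBlock / secondsPerBlock;

            // Prints roughly 2.3, i.e. the "less than 3 payments per second" quoted above.
            // With full 1,000,000-byte blocks and the smaller transactions typical of 2011,
            // the same arithmetic lands near the often-quoted 7 payments per second.
            System.out.printf("~%.1f transactions per second%n", txPerSecond);
        }
    }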

If you look closely, you can see that traffic has been growing since the end of the 2015 summer months. This is expected. I wrote about Bitcoin’s seasonal growth patterns back in March.

Here are the weekly average block sizes:

So the average is nearly at the peak of what can be done. Not surprisingly then, there are frequent periods in which Bitcoin can’t keep up with the transaction load being placed upon it and almost all blocks are the maximum size, even when there is a long queue of transactions waiting. You can see this in the size column (the 750kb blocks come from miners that haven’t properly adjusted their software):

When networks run out of capacity, they get really unreliable. That’s why so many online attacks are based around simply flooding a target computer with traffic. Sure enough, just before Christmas payments started to become unreliable and at peak times backlogs are now becoming common.

Quoting a news post by ProHashing, a Bitcoin-using business:

Some customers contacted Chris earlier today asking why our bitcoin payouts didn’t execute …
The issue is that it’s now officially impossible to depend upon the bitcoin network anymore to know when or if your payment will be transacted, because the congestion is so bad that even minor spikes in volume create dramatic changes in network conditions. To whom is it acceptable that one could wait either 60 minutes or 14 hours, chosen at random?
It’s ludicrous that people are actually writing posts on reddit claiming that there is no crisis. People were criticizing my post yesterday on the grounds that I somehow overstated the seriousness of the situation. Do these people actually use the bitcoin network to send money everyday?

ProHashing encountered another near-miss between Christmas and New Year, this time because a payment from an exchange to their wallet was delayed.

Bitcoin is supposed to respond to this situation with automatic fee rises to try and get rid of some users, and although the mechanisms behind it are barely functional that’s still sort of happening: it is rapidly becoming more and more expensive to use the Bitcoin network. Once upon a time, Bitcoin had the killer advantage of low and even zero fees, but it’s now common to be asked to pay more to miners than a credit card would charge.
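
If you haven’t followed how the fee market works, here is a simplified sketch of what a miner does when there is more demand than space: sort the waiting transactions by fee per byte and fill the block until the cap is hit. The classes and numbers below are illustrative only, not Bitcoin Core’s actual selection code:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class FeeMarketSketch {
        record Tx(String id, int sizeBytes, long feeSatoshis) {
            double feePerByte() { return (double) feeSatoshis / sizeBytes; }
        }

        // Pick transactions for the next block: highest fee-per-byte first,
        // until the (artificial) size cap is reached.
        static List<Tx> buildBlock(List<Tx> mempool, int blockLimitBytes) {
            List<Tx> byFeeRate = new ArrayList<>(mempool);
            byFeeRate.sort(Comparator.comparingDouble(Tx::feePerByte).reversed());

            List<Tx> block = new ArrayList<>();
            int usedBytes = 0;
            for (Tx tx : byFeeRate) {
                if (usedBytes + tx.sizeBytes() > blockLimitBytes) continue;  // no room for this one
                block.add(tx);
                usedBytes += tx.sizeBytes();
            }
            // Everything not selected waits for a later block, or indefinitely if the
            // backlog keeps growing faster than one block's worth every ten minutes.
            return block;
        }
    }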

Why has the capacity limit not been raised? Because the block chain is controlled by Chinese miners, just two of whom control more than 50% of the hash power. At a recent conference over 95% of hashing power was controlled by a handful of guys sitting on a single stage. The miners are not allowing the block chain to grow.

Why are they not allowing it to grow? Several reasons. One is that the developers of the “Bitcoin Core” software that they run have refused to implement the necessary changes. Another is that the miners refuse to switch to any competing product, as they perceive doing so as “disloyalty” — and they’re terrified of doing anything that might make the news as a “split” and cause investor panic. They have chosen instead to ignore the problem and hope it goes away.

And the final reason is that the Chinese internet is so broken by their government’s firewall that moving data across the border barely works at all, with speeds routinely worse than what mobile phones provide. Imagine an entire country connected to the rest of the world by cheap hotel wifi, and you’ve got the picture. Right now, the Chinese miners are able to — just about — maintain their connection to the global internet and claim the 25 BTC reward ($11,000) that each block they create gives them. But if the Bitcoin network got more popular, they fear taking part would get too difficult and they’d lose their income stream. This gives them a perverse financial incentive to actually try and stop Bitcoin becoming popular.

Many Bitcoin users and observers have been assuming up until very recently that somehow these problems would all sort themselves out, and of course the block chain size limit would be raised. After all, why would the Bitcoin community … the community that has championed the block chain as the future of finance … deliberately kill itself by strangling the chain in its crib? But that’s exactly what is happening.

The resulting civil war has seen Coinbase — the largest and best known Bitcoin startup in the USA — erased from the official Bitcoin website for picking the “wrong” side and banned from the community forums. When parts of the community are viciously turning on the people that have introduced millions of users to the currency, you know things have got really crazy.


Nobody knows what’s going on

If you haven’t heard much about this, you aren’t alone. One of the most disturbing things that took place over the course of 2015 is that the flow of information to investors and users has dried up.

In the span of only about eight months, Bitcoin has gone from being a transparent and open community to one that is dominated by rampant censorship and attacks on bitcoiners by other bitcoiners. This transformation is by far the most appalling thing I have ever seen, and the result is that I no longer feel comfortable being associated with the Bitcoin community.

Bitcoin is not intended to be an investment and has always been advertised pretty accurately: as an experimental currency which you shouldn’t buy more of than you can afford to lose. It is complex, but that never worried me because all the information an investor might want was out there, and there’s an entire cottage industry of books, conferences, videos and websites to help people make sense of it all.

That has now changed.

Most people who own Bitcoin learn about it through the mainstream media. Whenever a story goes mainstream the Bitcoin price goes crazy, then the media report on the price rises and a bubble happens.

Stories about Bitcoin reach newspapers and magazines through a simple process: the news starts in a community forum, then it’s picked up by a more specialised community/tech news website, then journalists at general media outlets see the story on those sites and write their own versions. I’ve seen this happen over and over again, and frequently taken part in it by discussing stories with journalists.

In August 2015 it became clear that due to severe mismanagement, the “Bitcoin Core” project that maintains the program that runs the peer-to-peer network wasn’t going to release a version that raised the block size limit. The reasons for this are complicated and discussed below. But obviously, the community needed the ability to keep adding new users. So some long-term developers (including me) got together and developed the necessary code to raise the limit. That code was called BIP 101 and we released it in a modified version of the software that we branded Bitcoin XT. By running XT, miners could cast a vote for changing the limit. Once 75% of blocks were voting for the change the rules would be adjusted and bigger blocks would be allowed.
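
For the curious, the activation rule can be sketched roughly as follows. The real BIP 101 signalled support through the block version field and included a grace period; this simplified Java version just checks whether 75% of a recent window of blocks voted for the change:

    import java.util.List;

    public class ActivationVoteSketch {
        // Simplified: the real BIP 101 signalled support through the block version
        // field and applied a grace period once the threshold was reached.
        static boolean biggerBlocksActivated(List<Boolean> recentBlockVotes) {
            int window = 1_000;  // illustrative window of recent blocks
            if (recentBlockVotes.size() < window) return false;

            List<Boolean> lastWindow =
                    recentBlockVotes.subList(recentBlockVotes.size() - window, recentBlockVotes.size());
            long supporting = lastWindow.stream().filter(vote -> vote).count();

            // Once 75% of the window signals support, the larger limit switches on.
            return supporting >= window * 0.75;
        }
    }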

The release of Bitcoin XT somehow pushed powerful emotional buttons in a small number of people. One of them was a guy who is the admin of the bitcoin.org website and top discussion forums. He had frequently allowed discussion of outright criminal activity on the forums he controlled, on the grounds of freedom of speech. But when XT launched, he made a surprising decision. XT, he claimed, did not represent the “developer consensus” and was therefore not really Bitcoin. Voting was an abomination, he said, because:

“One of the great things about Bitcoin is its lack of democracy”

So he decided to do whatever it took to kill XT completely, starting with censorship of Bitcoin’s primary communication channels: any post that mentioned the words “Bitcoin XT” was erased from the discussion forums he controlled, XT could not be mentioned or linked to from anywhere on the official bitcoin.org website and, of course, anyone attempting to point users to other uncensored forums was also banned. Massive numbers of users were expelled from the forums and prevented from expressing their views.

As you can imagine, this enraged people. Read the comments on the announcement to get a feel for it.

Eventually, some users found their way to a new uncensored forum. Reading it is a sad thing. Every day for months I have seen raging, angry posts railing against the censors, vowing that they will be defeated.

But the inability to get news about XT or the censorship itself through to users has some problematic effects.

For the first time, investors have no obvious way to get a clear picture of what’s going on. Dissenting views are being systematically suppressed. Technical criticisms of what Bitcoin Core is doing are being banned, with misleading nonsense being peddled in its place. And it’s clear that many people who casually bought into Bitcoin during one of its hype cycles have no idea that the system is about to hit an artificial limit.

This worries me a great deal. Over the years governments have passed a large number of laws around securities and investments. Bitcoin is not a security and I do not believe it falls under those laws, but their spirit is simple enough: make sure investors are informed. When misinformed investors lose money, government attention frequently follows.


Why is Bitcoin Core keeping the limit?

People problems.

When Satoshi left, he handed over the reins of the program we now call Bitcoin Core to Gavin Andresen, an early contributor. Gavin is a solid and experienced leader who can see the big picture. His reliable technical judgement is one of the reasons I had the confidence to quit Google (where I had spent nearly 8 years) and work on Bitcoin full time. Only one tiny problem: Satoshi never actually asked Gavin if he wanted the job, and in fact he didn’t. So the first thing Gavin did was grant four other developers access to the code as well. These developers were chosen quickly in order to ensure the project could easily continue if anything happened to him. They were, essentially, whoever was around and making themselves useful at the time.

One of them, Gregory Maxwell, had an unusual set of views: he once claimed he had mathematically proven Bitcoin to be impossible. More problematically, he did not believe in Satoshi’s original vision.

When the project was first announced, Satoshi was asked how a block chain could scale to a large number of payments. Surely the amount of data to download would become overwhelming if the idea took off? This was a popular criticism of Bitcoin in the early days and Satoshi fully expected to be asked about it. He said:

The bandwidth might not be as prohibitive as you think … if the network were to get [as big as VISA], it would take several years, and by then, sending [the equivalent of] 2 HD movies over the Internet would probably not seem like a big deal.

It’s a simple argument: look at what existing payment networks handle, look at what it’d take for Bitcoin to do the same, and then point out that growth doesn’t happen overnight. The networks and computers of the future will be better than today. And indeed back-of-the-envelope calculations suggested that, as he said to me, “it never really hits a scale ceiling” even when looking at more factors than just bandwidth.

Maxwell did not agree with this line of thinking. From an interview in December 2014:

Problems with decentralization as bitcoin grows are not going to diminish either, according to Maxwell: “There’s an inherent tradeoff between scale and decentralization when you talk about transactions on the network.”
The problem, he said, is that as bitcoin transaction volume increases, larger companies will likely be the only ones running bitcoin nodes because of the inherent cost.

The idea that Bitcoin is inherently doomed because more users means less decentralisation is a pernicious one. It ignores the fact that, despite all the hype, real usage is low and growing slowly, and that technology gets better over time. It is a belief Gavin and I have spent much time debunking. And it leads to an obvious but crazy conclusion: if decentralisation is what makes Bitcoin good, and growth threatens decentralisation, then Bitcoin should not be allowed to grow.

Instead, Maxwell concluded, Bitcoin should become a sort of settlement layer for some vaguely defined, as yet un-created non-blockchain based system.

The death spiral begins

In a company, someone who did not share the goals of the organisation would be dealt with in a simple way: by firing him.

But Bitcoin Core is an open source project, not a company. Once the 5 developers with commit access to the code had been chosen and Gavin had decided he did not want to be the leader, there was no procedure in place to ever remove one. And there was no interview or screening process to ensure they actually agreed with the project’s goals.

As Bitcoin became more popular and traffic started approaching the 1mb limit, the topic of raising the block size limit was occasionally brought up between the developers. But it quickly became an emotionally charged subject. Accusations were thrown around that raising the limit was too risky, that it was against decentralisation, and so on. As in many small groups, people preferred to avoid conflict. The can was kicked down the road.

Complicating things further, Maxwell founded a company that then hired several other developers. Not surprisingly, their views then started to change to align with that of their new boss.

Co-ordinating software upgrades takes time, and so in May 2015 Gavin decided the subject must be tackled once and for all, whilst there were still about 8 months remaining. He began writing articles that worked through the arguments against raising the limit, one at a time.

But it quickly became apparent that the Bitcoin Core developers were hopelessly at loggerheads. Maxwell and the developers he had hired refused to contemplate any increase in the limit whatsoever. They were barely even willing to talk about the issue. They insisted that nothing be done without “consensus”. And the developer who was responsible for making the releases was so afraid of conflict that he decided any controversial topic in which one side might “win” simply could not be touched at all, and refused to get involved.

Thus despite the fact that exchanges, users, wallet developers, and miners were all expecting a rise, and indeed, had been building entire businesses around the assumption that it would happen, 3 of the 5 developers refused to touch the limit.

Deadlock.

Meanwhile, the clock was ticking.


Massive DDoS attacks on XT users

Despite the news blockade, within a few days of launching Bitcoin XT around 15% of all network nodes were running it, and at least one mining pool had started offering BIP101 voting to miners.

That’s when the denial of service attacks started. The attacks were so large that they disconnected entire regions from the internet:

“I was DDos’d. It was a massive DDoS that took down my entire (rural) ISP. Everyone in five towns lost their internet service for several hours last summer because of these criminals. It definitely discouraged me from hosting nodes.”

In other cases, entire datacenters were disconnected from the internet until the single XT node inside them was stopped. About a third of the nodes were attacked and removed from the internet in this way.

Worse, the mining pool that had been offering BIP101 was also attacked and forced to stop. The message was clear: anyone who supported bigger blocks, or even allowed other people to vote for them, would be assaulted.

The attackers are still out there. When Coinbase, months after the launch, announced they had finally lost patience with Core and would run XT, they too were forced offline for a while.

Bogus conferences

Despite the DoS attacks and censorship, XT was gaining momentum. That posed a threat to Core, so a few of its developers decided to organise a series of conferences named “Scaling Bitcoin”: one in August and one in December. The goal, it was claimed, was to reach “consensus” on what should be done. Everyone likes a consensus of experts, don’t they?

It was immediately clear to me that people who refused to even talk about raising the limit would not have a change of heart because they attended a conference, and moreover, with the start of the winter growth season there remained only a few months to get the network upgraded. Wasting those precious months waiting for conferences would put the stability of the entire network at risk. The fact that the first conference actually banned discussion of concrete proposals didn’t help.

So I didn’t go.

Unfortunately, this tactic was devastatingly effective. The community fell for it completely. When talking to miners and startups, “we are waiting for Core to raise the limit in December” was one of the most commonly cited reasons for refusing to run XT. They were terrified of any media stories about a community split that might hurt the Bitcoin price and thus, their earnings.

Now that the last conference has come and gone with no plan to raise the limit, some companies (like Coinbase and BTCC) have woken up to the fact that they got played. But it’s too late. Whilst the community was waiting, organic growth added another 100,000 transactions per day.

A non-roadmap

Jeff Garzik and Gavin Andresen, the two of the five Bitcoin Core committers who support a block size increase (and the two who have been around the longest), both have a stellar reputation within the community. They recently wrote a joint article titled “Bitcoin is Being Hot-Wired for Settlement”.

Jeff and Gavin are generally softer in their approach than I am. I’m more of a tell-it-like-I-see-it kinda guy, or as Gavin has delicately put it, “honest to a fault”. So the strong language in their joint letter is unusual. They don’t pull any punches:

The proposed roadmap currently being discussed in the bitcoin community has some good points in that it does have a plan to accommodate more transactions, but it fails to speak plainly to bitcoin users and acknowledge key downsides.
Core block size does not change; there has been zero compromise on that issue.
In an optimal, transparent, open source environment, a BIP would be produced … this has not happened
One of the explicit goals of the Scaling Bitcoin workshops was to funnel the chaotic core block size debate into an orderly decision making process. That did not occur. In hindsight, Scaling Bitcoin stalled a block size decision while transaction fee price and block space pressure continue to increase.

Failing to speak plainly, as they put it, has become more and more common. As an example, the plan Gavin and Jeff refer to was announced at the “Scaling Bitcoin” conferences but doesn’t involve making anything more efficient, and manages an anemic 60% capacity increase only through an accounting trick (not counting some of the bytes in each transaction). It requires making huge changes to nearly every piece of Bitcoin-related software. Instead of doing a simple thing and raising the limit, it chooses to do an incredibly complicated thing that might buy months at most, assuming a huge coordinated effort.
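
To illustrate what “not counting some of the bytes” means, here is a toy calculation. The assumption that half of a typical transaction is signature data, counted at a quarter of its real size, is picked purely so the arithmetic lands near the quoted 60% figure; the real proposal’s accounting is more involved:

    public class DiscountedBytesSketch {
        public static void main(String[] args) {
            // Toy assumptions: half of a typical transaction is signature data, and
            // those bytes are counted at a quarter of their real size.
            double signatureShare = 0.5;
            double discountFactor = 0.25;

            // Effective size of a transaction under the new accounting, as a
            // fraction of its real size.
            double effectiveSize = (1 - signatureShare) + signatureShare * discountFactor;

            double capacityGain = 1 / effectiveSize - 1;
            // Prints 60%. The real bytes still have to be stored and relayed;
            // the 1mb limit just pretends some of them are smaller than they are.
            System.out.printf("Capacity gain: %.0f%%%n", capacityGain * 100);
        }
    }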

Replace by fee

One problem with using fees to control congestion is that the fee to get to the front of the queue might change after you made a payment. Bitcoin Core has a brilliant solution to this problem — allow people to mark their payments as changeable after they’ve been sent, up until they appear in the block chain. The stated intention is to let people adjust the fee paid, but in fact their change also allows people to change the payment to point back to themselves, thus reversing it.
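
The replacement rule itself is simple. Here is a rough sketch of it; the actual policy shipping in Core has more conditions than shown, but this captures the part that matters to a merchant:

    import java.util.Set;

    public class ReplaceByFeeSketch {
        record Tx(Set<String> spentCoins, long feeSatoshis, String payee) {}

        // Should a node drop 'original' from its queue in favour of 'replacement'?
        static boolean replaces(Tx original, Tx replacement) {
            boolean conflicts = replacement.spentCoins().stream()
                    .anyMatch(original.spentCoins()::contains);          // spends the same coins
            boolean paysMore = replacement.feeSatoshis() > original.feeSatoshis();
            return conflicts && paysMore;
        }

        public static void main(String[] args) {
            Tx toShop = new Tx(Set.of("coin-1"), 5_000, "shop");
            // Same coin, slightly higher fee, but paid back to the buyer's own address.
            Tx toSelf = new Tx(Set.of("coin-1"), 6_000, "buyer");
            System.out.println(replaces(toShop, toSelf));  // true: the shop never gets paid
        }
    }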

At a stroke, this makes using Bitcoin useless for actually buying things, as you’d have to wait for a buyer’s transaction to appear in the block chain … which from now on can take hours rather than minutes, due to the congestion.

Core’s reasoning for why this is OK goes like this: it’s no big loss because if you hadn’t been waiting for a block before, there was a theoretical risk of payment fraud, which means you weren’t using Bitcoin properly. Thus, making that risk a 100% certainty doesn’t really change anything.

In other words, they don’t recognise that risk management exists and so perceive this change as zero cost.

This protocol change will be released with the next version of Core (0.12), so will activate when the miners upgrade. It was massively condemned by the entire Bitcoin community but the remaining Bitcoin Core developers don’t care what other people think, so the change will happen.

If that didn’t convince you Bitcoin has serious problems, nothing will. How many people would think bitcoins are worth hundreds of dollars each when you soon won’t be able to use them in actual shops?

Conclusions

Bitcoin has entered exceptionally dangerous waters. Previous crises, like the bankruptcy of Mt Gox, were all to do with the services and companies that sprang up around the ecosystem. But this one is different: it is a crisis of the core system, the block chain itself.

More fundamentally, it is a crisis that reflects deep philosophical differences in how people view the world: either as one that should be ruled by a “consensus of experts”, or through ordinary people picking whatever policies make sense to them.

Even if a new team was built to replace Bitcoin Core, the problem of mining power being concentrated behind the Great Firewall would remain. Bitcoin has no future whilst it’s controlled by fewer than 10 people. And there’s no solution in sight for this problem: nobody even has any suggestions. For a community that has always worried about the block chain being taken over by an oppressive government, it is a rich irony.

Still, all is not yet lost. Despite everything that has happened, in the past few weeks more members of the community have started picking things up from where I am putting them down. Where making an alternative to Core was once seen as renegade, there are now two more forks vying for attention (Bitcoin Classic and Bitcoin Unlimited). So far they’ve hit the same problems as XT but it’s possible a fresh set of faces could find a way to make progress.

There are many talented and energetic people working in the Bitcoin space, and in the past five years I’ve had the pleasure of getting to know many of them. Their entrepreneurial spirit and alternative perspectives on money, economics and politics were fascinating to experience, and despite how it’s all gone down I don’t regret my time with the project. I woke up this morning to find people wishing me well in the uncensored forum and asking me to stay, but I’m afraid I’ve moved on to other things. To those people I say: good luck, stay strong, and I wish you the best.



Graal & Truffle

An obscure research project could radically accelerate innovation in programming language design

Since the dawn of computing our industry has been engaged on a never ending quest to build the perfect language. This quest is difficult: creating a new programming language is a huge task. And too often the act of doing so fractures the programming ecosystem, leading to an endless grind in which basic tools are recreated again and again: compilers, debuggers, HTTP stacks, IDEs, libraries and an endless procession of other wheels are rebuilt for the hot new language of the day. Because perfection in language design is unobtainable and there are always new ideas to try, we are like Sisyphus: condemned by the gods to push boulders up the hill and watch them roll down again, over and over … for ever.

How can the cycle be broken? Let’s dream for a moment and imagine what it would take.

We would want something, some kind of tool or technique, that gave us the following:

  1. A way to create a new language in just a few weeks
  2. That ran as fast as the fastest other languages, automatically
  3. That had high quality debugging support, automatically (ideally without any kind of slowdown)
  4. That had profiling support, automatically
  5. That had a high quality garbage collector, automatically … but only if we wanted one
  6. That could use all the existing code that was out there, no matter what language it was written in
  7. That supported any style of language, from low level C or FORTRAN to Java to Haskell to extremely dynamic scripting languages like Python and Ruby
  8. And which could be either just-in-time or ahead-of-time compiled
  9. And heck, why not, that supports hotswap of code into a running program

Oh, and we’d want this magic tool to be open source of course. And, er, it should come with a pony. I think that’s it.


Being a smart reader, you of course already guessed that I wouldn’t be writing this article unless such a tool actually did exist. It goes by the bizarre name of Graal & Truffle. Despite sounding like it should be a pretentious hipster restaurant, it is actually a vast research project with over 40 computer scientists from across industry and academia. Together they are building a new set of compiler and virtual machine technologies that implements our wishlist above.

By creating a way for anyone to quickly create a new language without the hassle of simultaneously creating libraries, optimising compilers, debuggers, profilers, bindings to C runtimes and all the other paraphernalia that a modern language is expected to have, it promises to unleash a new wave of language innovation that — I hope — could radically reshape the industry.

And that’s what this article is about.

What are Graal & Truffle?

Graal is a research compiler. Truffle is … well, that’s kind of hard to explain, because there’s not much to compare it to. The shortest summary I can think of is this: Truffle is a framework for implementing languages using nothing more than a simple abstract syntax tree interpreter.

When creating a new language, the first thing you need is a grammar. The grammar is a description of the syntax rules of your language. By feeding the grammar to a tool like ANTLR you get a parser, and by feeding input text to the parser you get a parse tree:

In the above image, the following piece of code has been turned into a tree representing its structure by ANTLR:

Abishek AND (country = India OR City = BLR) LOGIN 404 | show name 

A parse tree, or a derivative construct called an abstract syntax tree (AST), is a natural way to express a program. Once you have a tree of objects representing the nodes in a program, the next simplest step to getting your new language up and running is to add an “execute” method to your node classes. Each node’s execute method invokes the child nodes and then combines the results to calculate the value of the expression or perform the statement. And that’s it!
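
To make that concrete, here is what such an interpreter looks like for a toy arithmetic language, written as a minimal plain-Java sketch. This shows the general idea only, not the real Truffle API, which layers annotations and node specialisation on top of exactly this structure:

    // A toy AST interpreter for arithmetic: each node knows how to execute itself
    // by executing its children and combining the results.
    abstract class Node {
        abstract long execute();
    }

    class LiteralNode extends Node {
        private final long value;
        LiteralNode(long value) { this.value = value; }
        long execute() { return value; }
    }

    class AddNode extends Node {
        private final Node left, right;
        AddNode(Node left, Node right) { this.left = left; this.right = right; }
        long execute() { return left.execute() + right.execute(); }
    }

    class Demo {
        public static void main(String[] args) {
            // The tree a parser would produce for "2 + (3 + 4)".
            Node program = new AddNode(new LiteralNode(2),
                                       new AddNode(new LiteralNode(3), new LiteralNode(4)));
            System.out.println(program.execute());  // prints 9
        }
    }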

Interpreted dynamic languages like Python, JavaScript, PHP and Ruby look the way they do because building such a language is the path of least resistance when you start from a simple parse tree. If you’re creating a language by yourself from scratch, adding complications like a static type system or an optimising compiler would slow you down a lot. The downside of doing things this way is that the result is very slow, and worse, it’s very tempting to add features that are easy to implement in simple/slow AST interpreters, but extremely difficult to make fast.

Truffle is a framework for writing interpreters with annotations and small bits of extra code in them which, when Truffle is paired with its sister project Graal, allow those interpreters to be converted into JIT compiling VMs … automatically. The resulting runtimes have peak performance competitive with the best hand-tuned language-specific compilers on the market. For example, the TruffleJS engine which implements JavaScript is competitive with V8 in benchmarks. The RubyTruffle engine is faster than all other Ruby implementations by far. The TruffleC engine is roughly competitive with GCC. There are Truffle implementations in various stages of completeness for:

  • JavaScript
  • Python 3
  • Ruby
  • LLVM bitcode, allowing C/C++/Objective-C/Swift programs to run on it
  • Another engine that interprets C source code directly instead of going via LLVM (this has some benefits described below)
  • R
  • Smalltalk
  • Lua
  • A variety of small experimental languages

To give a feel for how easy it is to write these engines, TruffleJS is only about 80,000 lines of code compared to about 1.7 million for V8.

Skip the boring bits, how do I play with it?

Graal & Truffle are a product of Oracle Labs, the part of the Java team which does VM and compiler research. You can download a “GraalVM” here. It is an extended Java Development Kit that comes with several of the above languages built in, along with drop-in command line replacements for NodeJS, Ruby, and R. It also has something called “SimpleLanguage” which is a tutorial language used for teaching the framework.

What does Graal do?

If Truffle is a framework for writing AST interpreters, then Graal is the thing that makes them fast. Graal is a state of the art optimising compiler. It sports the following features:

  • Can run either just-in-time or ahead-of-time.
  • Extremely advanced optimisations, like partial escape analysis. Escape analysis is a way of eliminating heap allocations of objects when they aren’t actually necessary. EA was made famous by the JVM, but it’s complicated and very few VMs support it. The Turbofan compiler that Chrome uses for JavaScript only started getting EA at the end of 2015. Graal features an even more advanced form of the optimisation that lets it work in more cases.
  • Recognises interpreters written using Truffle and can convert Truffle ASTs into optimised native code, using a technique called partial evaluation (a sketch of the idea appears just below). Partial evaluation of a self-specialising interpreter is called the first Futamura projection.
  • Comes with an advanced visualiser tool that lets you explore the compiler’s intermediate representation as it passes through optimisation stages.
  • Written in Java, which means it’s significantly easier to hack on and extend than a traditional compiler written in C or C++.
  • Starting with Java 9, it can be used as a JVM plugin.

The IGV visualiser
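
The partial evaluation point above deserves a concrete illustration. Conceptually, Graal takes the generic interpreter and the one fixed AST it is running and collapses the two together, so the tree-walking and virtual dispatch disappear and only the program’s own work remains. A hand-written before/after sketch of the idea (not, of course, what Graal actually emits):

    class PartialEvaluationIdea {
        // The generic interpreter walks a tree of node objects on every call. For
        // the fixed program "x + 2" it executes an AddNode, which executes a
        // VariableNode and a LiteralNode, then adds the two results together.
        //
        // Partial evaluation specialises the interpreter to that one fixed tree:
        // the node objects, field loads and virtual calls all constant-fold away,
        // and the code that actually gets compiled is equivalent to this:
        static long xPlusTwoSpecialised(long x) {
            return x + 2;
        }

        public static void main(String[] args) {
            System.out.println(xPlusTwoSpecialised(40));  // prints 42
        }
    }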

Graal is designed from the start as a multi-language compiler, but its set of optimisation techniques is especially well suited to compiling programs with high levels of abstraction and dynamism. It runs Java as fast as the existing JVM compilers do, but when applied to Scala programs it runs them about 20% faster. Ruby programs get 400% faster than the best alternative runtime (i.e. not MRI).

Polyglot

That’s pretty neat by itself, but it’s really just the beginning.

Truffle provides a language interop framework called Polyglot that allows Truffle languages to call each other, and a thing called the Truffle Object Storage Model that standardises and optimises much of the behaviour of dynamically typed objects, allowing languages to share them too. And because Graal & Truffle are fundamentally built on top of the JVM, all these languages can also call in and out of JVM bytecode-based languages like Java, Scala and Kotlin too.

The way Polyglot works is unusual. Because Truffle provides a standard framework for expressing nodes in an abstract syntax tree, calling into another language doesn’t involve any complex hand-written binding layers. Instead, invoking a function simply joins the ASTs of the two languages together. Those two ASTs are then compiled and optimised by Graal as a single unit, meaning any complexity introduced by crossing the language barrier can be analysed and eliminated.
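
In code, crossing languages looks something like the snippet below. It uses the org.graalvm.polyglot API that ships with later GraalVM releases; the exact API has moved around between versions, so treat this as an illustration rather than a reference, and it assumes the JavaScript engine is installed:

    import org.graalvm.polyglot.Context;
    import org.graalvm.polyglot.Value;

    public class PolyglotDemo {
        public static void main(String[] args) {
            // Requires a GraalVM with the JavaScript language installed.
            try (Context context = Context.create("js")) {
                // Evaluate a JavaScript function and grab a handle to it from Java.
                Value doubler = context.eval("js", "n => n * 2");
                // Calling across the language boundary: the JS function's AST and
                // the calling code can be optimised together as one unit.
                System.out.println(doubler.execute(21).asInt());  // prints 42
            }
        }
    }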

It’s for this reason that researchers decided to implement a C interpreter on top of Truffle. We normally think of C as being an inherently compiled language, but there’s no particular reason it must be so, and in fact C interpreters have a long history of usage — for instance the Shake special effects app exposed one to its users as a way to script the app.

Because scripting languages are so slow it’s very common to rewrite performance hotspots in dynamically typed programs by hand in C, using the original interpreter’s internal API to interact with the scripted code. Perversely, this technique actually makes it harder to speed up the language in general because running real programs often means running their C extensions too, and that’s very difficult when those extensions make so many assumptions about the runtime’s internals.

When the people creating RubyTruffle hit this problem they came up with a clever solution: write a special C source code interpreter that not only understands ordinary C, but also macros and other constructs that are unique to Ruby extensions. Then by merging the Ruby and C interpreters together on top of the Truffle framework, the code will be blended together into a seamless whole and the interop overhead will be optimised away. This interpreter is called TruffleC.

You can read an excellent explanation of how this works by Chris Seaton, one of the researchers behind the Truffle project, or you can read the research paper that describes it.

Making C memory safe

C programs are fast, but have a major downside: they’re a hacker’s playground because it’s way too easy to shoot yourself in the foot by mismanaging memory somewhere.

The ManagedC language is an extension of TruffleC that replaces C’s standard memory management with checked, garbage collected allocation. ManagedC still supports pointer arithmetic and other low level constructs yet eliminates a large swathe of programming errors. It costs roughly 15% peak runtime performance vs GCC, and it relies much more heavily on exploiting undefined behaviour than most C compilers do, meaning a program that works with GCC might not work on top of ManagedC, despite ManagedC complying with the C99 spec.

You can learn more about this in the paper “Memory safe execution of C on a Java VM”.

Debugging and profiling for free

A common problem people hit when implementing a new language is the lack of high quality tools. A good example of this is Golang, which has spent many years suffering from poor quality, primitive and often not-really-portable debuggers and profilers.

Another common problem is that making a program debuggable means the running code must be very close to the source, to allow a mapping from the paused machine state back to the program the developer expects to see. This implies disabling compiler optimisations which can make debugging a painfully slow experience.

Truffle provides a simple API, usable from your AST interpreter, that gives you sophisticated debugging … without slowing down your program. All compiler optimisations still apply, yet the program state still appears as expected in the debugger. This is possible because Graal & Truffle generate metadata when your source is compiled to machine code, and that metadata can then be used to deoptimise parts of the running program back to its original interpreter state. When a breakpoint, watchpoint, profiling point or any other kind of instrumentation is requested, the VM forces the program back to the slow form, adds AST nodes that implement the requested functionality and then recompiles it all back to native code, swapping the new code in on the fly.

Of course a debugger requires more than just runtime support. A user interface for it is also rather helpful. There’s a plugin for the NetBeans IDE which provides GUI support for debugging arbitrary Truffle languages.

You can read more about this in the paper “Building debuggers and other tools: we can have it all”.

LLVM support

Most Truffle runtimes interpret source code, but there’s nothing that says you have to do that. The Sulong project is creating a Truffle interpreter for LLVM bitcode.

Sulong is still very new and code run this way has many limitations. But by running bitcode with Graal & Truffle, the framework should in theory gain support for not only C, but also C++, Objective-C, FORTRAN, Swift and potentially even Rust.

Code compiled with Sulong in mind has access to a simple C interop API that lets it call into other Truffle languages using Polyglot. Again, despite the language neutral and therefore completely dynamically-typed nature of this API, at runtime it will be compiled down to nearly optimal code via aggressive use of speculation to eliminate overheads on the common/fast paths.

HotSwap

Hotswapping is the ability to redefine a program’s code whilst it is executing, without a restart. This is one of the main productivity benefits of highly dynamic languages and although I’m not sure if it’s been integrated yet, there is a research paper on adding hotswap support to the Truffle framework. As with debugging, profiling and speed optimisations, language implementors have to use the new APIs to add support to their language, but doing so is significantly lower effort than coding all the needed runtime support themselves.

What’s the catch?

As always in life, nothing is quite perfect. Graal & Truffle represent an almost complete wishlist of things you might want when implementing a new language from scratch, but they come with two big costs:

  1. Warmup time
  2. Memory usage

The process of converting an interpreter into fully optimised native code relies heavily on learning how the code being executed actually works in practice. This is of course hardly new: the notion of the code “warming up” i.e. getting faster as it runs is known to everyone who runs code on advanced VMs like HotSpot or V8. But Graal pushes speculative, profile-guided optimisation techniques far beyond the current state of the art and relies on profiling a whole lot more as a result.

That’s why the research team behind it invariably quotes only peak performance numbers: the speed of a program after it’s been running for a while. This way of measuring performance says nothing about how long it takes to reach that peak. In server side applications this is often not a big deal as it’s the peak performance that matters the most, but for other kinds of program, long warmup times can be a deal killer. We can easily see this problem in action by simply running the Octane benchmarks included with the tech preview JDK: the scores are a bit lower than Chrome’s even though Graal gives itself long (15–60 second) warmup times … that it doesn’t count towards its score.
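
You can see the warmup effect for yourself with a crude measurement like the one below: time successive runs of the same work and watch the first few iterations cost far more than the later ones. A proper harness such as JMH does this far more carefully; this is only a sketch:

    public class WarmupSketch {
        public static void main(String[] args) {
            for (int run = 1; run <= 10; run++) {
                long start = System.nanoTime();
                long sum = 0;
                for (int i = 0; i < 10_000_000; i++) {
                    sum += i % 7;  // arbitrary work for the JIT to chew on
                }
                long millis = (System.nanoTime() - start) / 1_000_000;
                // Early runs execute in the interpreter or lightly-compiled code;
                // later runs use fully optimised code and are much faster. Peak-only
                // benchmark figures report just those later runs.
                System.out.println("run " + run + ": " + millis + " ms (sum = " + sum + ")");
            }
        }
    }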

The second issue is memory usage. Programs that rely heavily on speculative optimisations require tables of metadata to be generated by the compiler, so the program can be de-optimised — mapped back from CPU state to the state of the abstract interpreter. This metadata is typically the same size as the compiled code itself, even after compression and optimisation of the data structures. In addition, the original ASTs or bytecodes must also be kept around so there’s something to fall back to if the native code bails out due to relying on an invalid assumption. All this adds up to a significant extra source of RAM consumption.

Compounding these problems is the fact that Graal, Truffle and the Truffle languages are themselves written in Java, meaning the compilers themselves need to warm up and become optimised. And as you go up the hierarchy of higher level languages, memory consumption of basic data structures tends to go up too, meaning the memory consumption of the base compiler infrastructure will also place additional load on the garbage collector.

The people writing Graal & Truffle are not unaware of these problems, and have solutions in mind. One is called the SubstrateVM. This is a whole virtual machine implemented in Java which is designed to be compiled ahead-of-time to native code using Graal & Truffle’s AOT mode. The SubstrateVM is much less advanced than HotSpot is: such a VM can’t do tricks like dynamically loading code over the internet or hotswapping, and the garbage collector is also pretty basic. But by doing a full AOT compile of not only the VM but the source code to be run along with it, the entire thing can be optimised as a whole, and a significant source of warmup time can be eliminated.

There’s one final catch worth knowing about. I said at the start I wanted everything I listed to be open source. Graal & Truffle are huge and very expensive endeavours written by skilled people who don’t come cheap. As a result, only some parts of what I’ve described are fully open source.

These bits are open and can be found on github or other repositories:

  • Graal & Truffle themselves.
  • The pluggable version of HotSpot they rely on.
  • RubyTruffle
  • Sulong (LLVM bitcode support)
  • The R, Python 3 and Lua implementations (some of these are hobby/research projects).

And these things are not open source:

  • TruffleC/ManagedC
  • TruffleJS/NodeJS API support
  • SubstrateVM
  • AOT support

TruffleJS can be downloaded for free as part of the GraalVM preview releases. I don’t know how to play with TruffleC or ManagedC, although as Sulong implements some of their functionality, it may not matter much.

Learn more

The canonical full-blown, one-stop-shop tutorial on everything Graal & Truffle is this talk: “One VM to rule them all, one VM to bind them”. It’s three hours long, so be warned, it’s only for the truly enthusiastic.

There are a couple of tutorials on writing Truffle languages worth reading:

What next?

At the start I said that if we could eliminate the boulder-rolling associated with creating a new language it’d open the door to a new wave of PL innovation. You can find a list of some initial experimental languages here, but I’m hoping that there’ll be many more in future.

By trying out your new language ideas using Graal & Truffle you gain the possibility of your language being actually useful right from day one, and therefore growing a community of users and contributors who can deploy your language into their existing projects. It enables a virtuous cycle of feedback and improvement that could significantly accelerate the path from research to production. I’m looking forward to it.


OK, what now?

The UK will soon start two years of negotiations with the EU on how to exit the union. They will probably stretch from October 2016 to October 2018.

This makes next year a critical one for the UK, so I wanted to research what might happen during it that could affect the exit negotiations. That research has left me kind of stunned. I’d read one or two polls from the rest of Europe before, but if the datasets cited below are correct 2017 could be an absolutely brutal year for the EU.

Let’s look at four political events we know will happen and examine the possible outcomes. These events are in chronological order.


Switzerland: Midnight, February 9th 2017

In 2014 Switzerland voted in a referendum to end freedom of movement with the EU. The constitutionally binding vote gave the Federal Council three years to negotiate a new arrangement that would allow for immigration quotas to be re-established. That three year deadline expires at midnight on February 9th.

Switzerland is not in the EU, but thanks to over two hundred treaties it is effectively in both the EEA (European Economic Area) and Schengen. It also pays money into the EU budget. The treaties the Swiss have signed with the rest of Europe include an ominously named “guillotine clause”. If any one of them is violated, they are all revoked automatically. The Swiss knew this and voted by a tiny margin to revoke freedom of movement anyway, presumably in the hope of finding a better deal during the post-vote negotiations.

Those negotiations never happened. The EU flatly refused to even discuss the issue. What’s more, they put an ultimatum on the table: the EU would no longer sign bilateral trade deals with the Swiss. If they want cooperation to continue, Switzerland will have to fully join the EU and recognise the supremacy of Brussels and the European courts.

This means that in February one of three things will happen:

  1. The government implements quotas, triggering a “Swixit”
  2. Another referendum is held informing the Swiss that no negotiation was possible and asking for confirmation, and that referendum is once again lost by the government, thereby also triggering a collapse of the trade deals and the imposition of tariffs on Switzerland.
  3. The referendum is won by the government and immigration quotas are not implemented.

Switzerland joining the EU seems … unlikely. Polls show support for joining at 15% and it’s even lower in Parliament. The UK referendum showed that opinions can shift quickly and dramatically, but the mood is defiant. The Swiss Parliament formally withdrew its long dormant EU membership application just one week before the Brexit vote, with one MP saying “only a few lunatics would want to join the EU now”.

It’s impossible to predict the outcome of the next Swiss EU/EEA referendum (assuming there is one), because it was so close last time. But if the Swiss do choose to get kicked out it’ll be another big blow to the EU’s prestige, and especially to Germany with which it has huge levels of trade.


The Netherlands: March 2017

Just one month after the Swiss make their final call there will be a general election in the Netherlands. Calls for an EU exit referendum there have been growing for some time and now appear hard to ignore.

This is the current Dutch Premier, Mark Rutte.

He had this to say on the topic of a NExit:

“I don’t believe there’s much interest in a referendum”

In fact a poll by Ipsos showed roughly 50/50 support for having a referendum, and that 64% would vote in if there was one. Another poll for De Telegraaf showed 88% support for holding a referendum. A third poll showed over 50% support for having a vote.

This absolute denial of polling reality by Rutte is the kind of WTF Europeans are getting used to. It makes a bit more sense when you realise that the man doesn’t actually like democracy:

“I’m totally against referendums and I’m totally, totally, totally against referendums on multilateral agreements because it makes no sense, as we’ve seen with the Dutch referendum”

Wait, what Dutch referendum?

In April the Dutch voted on a new EU/Ukraine “association agreement”, which gives Ukraine a free trade deal and steps towards visa-free access to the bloc. The agreement itself is not that critical but the vote was widely treated as a vote on the EU itself. It went against the EU by 61% to 32%, albeit on a low turnout of only 32%.

The referendum was non-binding. Rutte announced to the Dutch Parliament that he had been forbidden from talking about it by the EU itself and that therefore the vote would not be discussed at all until after the Brexit referendum, in case calling attention to it increased support for Leave.

The controversial politician Geert Wilders is promising to call a referendum if he wins and his Dutch Freedom Party (PVV) is leading in the polls. It is plausible that his party could win the elections, or at least end up as a powerful member of a coalition. If he does win there would presumably be a several month long referendum campaign throughout the summer of 2017, in parallel with the British negotiations. Dutch referendums cannot legally force changes to treaties, but neither can British referendums and that didn’t stop the Brits.

Geert Wilders

The Dutch have a lot to lose from Brussels forcing a UK/NL trade war on them. Most EU countries only trade significantly with a few others, and for the Brits that “few others” contains the Netherlands. If the Dutch are forced to impose import taxes on British goods in retaliation for the UK leaving, that will damage them too. They will be watching how the UK exit negotiations are going with a lot of interest.

Could Geert Wilders win his referendum? He’s not a man who runs from a fight. He lives under 24/7 armed guard and has previously been banned then unbanned from the UK due to his outspoken criticism of Islam. He has appeared on an al-Qaeda hit list and been the target of a fundamentalist Islamic preacher who urged followers to behead him. He’s also survived a prosecution for “inciting hatred” (he was found not guilty). He campaigns on a platform of exiting the EU and closing the Dutch borders, and it looks like he might well win.


France: May 2017

Just two months later France will hold a general election.

Thought Wilders sounded tough? Marine Le Pen makes him look like a jar of jelly. The leader of the French Front National party was only 8 years old when she survived a bombing that destroyed twelve apartments. The explosion was a terrorist attack meant to assassinate her father, a racist anti-semite. As an adult she would expel him from his own political party, and he would publicly announce that he was ashamed she was his daughter. Asked if she ever feared for her own safety, she replied “I am impermeable to fear.”

Like Geert Wilders, she’s been prosecuted and found not guilty of hate speech against Muslims. And just like Wilders she is currently leading the opinion polls. The current French President François Hollande is on track to be wiped out.

Why are they always blonde?

The French presidential elections use a complex two-round system and it’s not clear if she would win the full contest, even though she will clear the first round easily. Although her poll numbers have surged, the Front National is still unpopular with many French voters and they might unite against her in an anyone-but-Le-Pen showdown.

But Ipsos MORI’s May poll found French support for a “Fraxit” at 40%. The French actually have lower favorability scores towards the EU than the British, with only Greece having a worse opinion. Of course polls often disguise important detail — there are lots of French people who passionately believe in the EU and the idea of a federal Europe. And for those who don’t, Le Pen is their only way to get a referendum, which means many might simply prefer a more mainstream president over having a vote.

It seems possible but unlikely that France would pull out of the EU, as the Franco-German alliance is the core of the union and France rejecting it in a referendum would almost certainly end the EU for good. The British could vote out knowing the EU would continue regardless. French voters would have no such assurance.


Germany: October 2017

In October Germany has a general election. Here are the current polling figures:

Check out that light blue line at the bottom. That’s the “Alternative für Deutschland”, Germany’s only Eurosceptic party. The start of its rise matches the appointment of a new leader, Frauke Petry, a woman who got herself into hot water when she pointed out that German law allowed police to shoot refugees at the border. The AfD is eurosceptic by German standards, i.e. not very. It advocates abandoning the Euro and major immigration reform, but not a referendum and certainly not withdrawal. The name is a rebuttal to Angela Merkel, who described rescuing the Euro as “without alternative”.

Finally a non-blonde

The AfD’s current growth cannot continue forever, but if the trend seen since August 2015 holds up for another eight months then, by simple extrapolation, it will become Germany’s leading party by February: just as Switzerland to the south is deciding on its own future.

It’s hard to imagine any kind of serious anti-EU movement in Germany. Support for the EU is currently at around 80%. The migration crisis has been ended by the Balkan states erecting border walls plus the deal with Turkey, and the fact that some refugees are returning home after realising they were lied to by the people smugglers.

It’s possible that if Turkey revokes the migrant deal and large numbers start moving towards Germany again, that the politics of the country could change fast. But I don’t know if that would result in a major shift away from the EU as a whole. Germany has a lot of influence over the EU and therefore has little to gain from leaving it.

On words

Whatever the outcome of the votes are, we will be hearing a lot more about “far right” parties in the next 18 months. I put the term in quotes because it’s not clear from my research that this label makes any sense at all.

If you put aside the EU and Islam then Le Pen, Wilders and Petry are all completely different. Wilders has a pretty conventional conservative agenda (lower taxes, emphasis on education etc) but Le Pen is an ordinary socialist (protectionism, money printing, anti-privatisation). Petry doesn’t even want to leave the EU, just the Euro. If you talked to these people about something other than immigration you’d never guess that they were supposedly ideological allies.

The term “far right” has apparently been detached from the traditional meaning of left/right wing and now means anyone who supports border controls, which is not a controversial policy in most of the world. This is probably why such parties are rising throughout the continent. Watch out for journalists, academics and other self-proclaimed political experts abusing standard terminology to make you jump to conclusions.

Conclusion

I started out by wanting to know if the chance of other referendums was real and how that’d affect the UK’s exit negotiations. By now I’m wondering if there’ll be any EU left to negotiate with by the end of the talks.

The statistics are dire. Europe is on a knife edge and its leadership is complacent. In 1989 the Soviet Union had been in existence for 67 years and seemed like an immovable object. Within two years it had collapsed completely following a wave of secession referendums enabled by a law change in April 1990. The USSR shows that political unions can seem unstoppable and then crumble overnight.

The UK will soon pick a new leader and send them to negotiate with the EU, which will refuse to compromise. The negotiators are likely to be either Boris Johnson, Theresa May or Michael Gove. I feel neutral about BoJo and Gove, and actively dislike Theresa May, but none of these people are stupid and all of them are likely to prove tough negotiators.

If the UK announces that it will create a new European Trade Area with any other economically comparable countries that exit the EU, featuring no trade barriers but controlled migration, then the existence of an alternative would likely increase support for EU exit in other major European economies. By adopting a “compromise is death” negotiating strategy, the EU leaves the UK with only one card to play: sticking in the knife and helping them along.


How is the Remain campaign losing?

On Thursday the UK will hold a referendum on whether or not to leave the European Union. When the process started last year the Leave campaign had around 30% support. Polls that once said staying in was a sure bet are now divided, and many suggest that there’s a real chance the UK may exit the EU.

This blog post doesn’t try to argue for one choice over the other, but I am fascinated by the process of deciding. Why has British opinion shifted so dramatically in only six months? It’s not like the EU has changed. People are being convinced by talking to each other, not by events.


The Remain campaign should be bulletproof. It benefits from

  • The formal support of the government
  • The support of virtually all world leaders
  • The support of the economics profession, academia, and many other intellectuals
  • EU leaders proclaiming that they will have no mercy on “deserters” and that they would refuse to negotiate with Britain post exit
  • Frequent and uncontested predictions of economic doom after an exit
  • A litany of celebrities lining up to endorse it
  • The support of many leading broadsheet newspapers
  • A divided and weak Leave campaign, fronted by politicians that are not especially liked
Would you trust this man?

Yet despite this wall of advantage, somehow it isn’t working — the polls have shown a constant rise in support for Brexit. If the UK does vote to remain, it will probably be by a very small margin.

This isn’t because the Leave campaign is doing such a great job. Both campaigns have been awful. Pro-Leave campaigners have been rightfully accused of playing fast and loose with spending statistics, making absurd claims and sometimes just being offensive. Pro-Remain supporters have found themselves literally arguing that:

“Brexit could be the beginning of the destruction of not only the EU but also of western political civilization in its entirety”
— Donald Tusk

Let’s take a moment to ponder that this wasn’t posted by some idiot on Twitter, it was said by the President of the European Council himself. Game over, The Onion.

But whilst the basic things that motivate people are clear, what isn’t clear is why opinion has shifted towards Leave so rapidly. I’m going to hazard a guess or two about what’s going on.

One issue seems simple enough: pro-Remain supporters have adopted a psychologically catastrophic set of arguments which are triggering a strong backlash against their position. By arguing that the UK is weak, dependent, unable to go it alone and locked into a community of “friends” who will strongly retaliate if the British decide to move on, the EU is triggering people’s natural rebelliousness. We are all taught as children that clubs you can join but not leave are very dangerous: they’re normally called gangs or cults. And our culture teaches us to value strength, independence, knowing your own mind and having the courage of your convictions. By positioning the referendum as “if we go we’ll be made to regret it”, Cameron and Osborne have painted the EU in a way that many people would instantly consider problematic if it were a story about individuals rather than countries.

But more importantly, the British people have lost confidence in their establishment intellectuals. And for good reasons, which I will examine in a moment. So the constant barrage of pro-Remain announcements and articles by academics, experts, officials and other elites is not having the intended effect, and may even be having the opposite effect. This will be something that influences all of politics heavily in future and is worth serious attention.

On experts

Until the last couple of weeks, the most common sentiment from EU supporters has been “Of course the UK will remain. To leave would be stupid, insane and suicidal so there is no real chance of Brexit”.

How could Remain be so complacent?

The biggest reason has to be the assumption of intellectual superiority. Remain must win, so the thinking goes, because everyone-who-is-anyone supports it and disagreeing with all of those experts and authorities would be to disagree with civilisation itself.

But this assumption has proven to be dangerously misguided. The British, like many populations, have been losing confidence in their elites and are no longer minded to simply delegate complicated decisions to them. Unlike David Cameron, his cabinet colleague Michael Gove understands this trend and explained it during a national TV debate like this:

“People in this country have had enough of experts!”

This remark has been taken as de-facto evidence that anyone who is not pro-EU is mentally retarded or driven by emotion rather than reason. It triggered another round of backslapping complacency amongst Remain supporters: look at how superior we are!

The idea that Gove might actually be right, that people might actually have had enough of experts in this debate, was apparently an idea too appalling to analyse. Let’s sacrifice a few sacred cows and do so.

I suspect Gove is right, and more importantly that this behaviour is not irrational, unreasonable or anti-intellectual. We are not living in Idiocracy. It is the result of two things:

  1. When experts are asked to give their opinions on a topic they analyse it exclusively from the perspective of their specialism and ignore all other factors.
  2. The social groups that make up the intellectual elites have blown their credibility to pieces in the last ten years, with the result that many people now perceive them as self-proclaimed “experts” in name only.

Understanding this is key to understanding why the repetitive testimony of experts is not having the intended effect.

The first point is simple enough. When an economist is asked what they think of leaving the EU they give an answer based on economics only; when a business leader is asked they give an answer based on the needs of their business only, when Obama is asked he is only thinking about what America wants, and so on. Their predictions must be based on an extremely limited analysis and must assume a static environment in which nothing else changes because they are not experts in other fields nor can they predict the future.

Imagine if an economist said “I predict a tough negotiator would force the EU to cut a good deal by exploiting rising anti-EU feeling in Sweden and the Netherlands, so Brexit would be good for the economy”. This would be a political prediction well outside the remit of his/her speciality and so would no longer seem like an expert opinion, but more like the sort of opinion anyone could have. To preserve the voice of detached authority that makes someone an expert, they must act as if nothing outside their area of knowledge matters or can possibly change.

Likewise, when someone who runs a business is asked about leaving the EU, their analysis must necessarily be only about the immediate impact on their business. They can’t say, “paperwork would get more complicated but it’d be worth it to cut immigration”, even if that might be their actual opinion, because this would not be a narrow judgement only on their area of expertise — it would be a blended judgement weighing other factors which they are not an expert on.

But voters aren’t being asked to analyse the EU from a single perspective. They’re being asked to blend and weigh all factors both now and into the future as they change. This is a far tougher question than anything being posed to any expert, and means the average voter must consider many factors that each individual expert answer deliberately ignores. And even if the man down the pub can’t fully articulate that, he still intuitively understands it — making the opinion of any specific expert at best incomplete and at worst potentially misleading.

But the establishment has bigger problems than an abundance of overly narrow analyses.

Let’s examine a few groups of supposed intellectuals. They’ve all poured burning petrol on their own credibility over the past decade, with the result that tens of millions of Britons may nod politely when they speak but in reality no longer care what they think.

Economists

The Remain campaign relies on economists more than any other group: the case for staying is entirely economic. Economists have predicted that the British would be permanently worse off into the indefinite future if they were to leave the EU. These predictions of economic doom are often extremely precise, even as far out as 20 years.

Do people trust economists? No, they don’t. From an article on research by Paola Sapienza and Luigi Zingales into the gap between economists’ views and the public’s:

Ms Sapienza and Mr Zingales note that when Americans are told what economists believe before answering a question, their view scarcely budges.

The main reason people don’t trust economists is that they failed to predict the 2008 financial crisis. Not just British economists, note, but all economic advisors in every country failed to see it coming. When 2008 came not one western government was prepared. In the UK Gordon Brown was so unprepared he famously announced he had ended the boom/bust cycle completely. How is that even possible? The answer helps make clear why people hold economists in disdain.

Economists make predictions about the future based on models. The most popular model inside governments at the time of the financial crisis was called the “dynamic stochastic general equilibrium model”. This name makes it sound extremely clever. In fact, it is a pile of crap that gets laughed at by any non-expert to whom it is explained. From a 2008 research paper by the ECB:

… most workhorse general equilibrium models routinely employed in academia and policy institutions to study the dynamics of the main macroeconomic variables generally lack any interaction between financial and credit markets, on the one hand, and the rest of the economy, on the other …. These models … do not assign any role to financial intermediaries such as banks. But in reality banks play a very influential role in modern financial systems, and especially in the euro area.

Let’s repeat that. Economists failed to predict the financial crisis because their predictions ignored the existence of banks entirely.

Was this problem only obvious in hindsight? Er, no. From the DSGE Wikipedia page:

Noah Smith, economics professor and author of the blog ‘noahpinion’, observed that “DSGE fails the market test.” That is, financial modelers who would benefit directly from superior market returns uniformly do not use DSGE models, thus strongly suggesting that DSGE models are not useful for macroeconomic prediction.

So people making actual money completely ignored the academics and their models — probably because they knew it was groupthink-driven garbage. A few hedge fund managers saw what was coming and their story was eventually turned into a film. But the sort of experts that get cited by politicians did not.

A typical response to criticism of economics is this: “fine, but surely they can’t all be wrong”. Unfortunately, yes, they can. Like many academic fields, economics is an extremely small profession. There is really only one large employer of economists: the government. Whilst a few might toil in banks and hedge funds under different names (like “analyst”, “trader” etc), long-range macroeconomic prediction is really something only governments do at scale. Like all highly academic fields, your ability to get into it and thrive depends on building a reputation for respectability. In a market-driven field like consumer electronics a Steve Jobs type can tell literally the entire industry they’re wrong — and then prove it by minting money with the iPod. But in economics you can’t run experiments, only analyse tiny datasets which often yield inconclusive results. So your career success depends mostly on what your colleagues think of you and your theories. This creates a massive incentive to not rock the boat. The group starts hiring the in-crowd and rapidly self-selects for intellectual homogeneity.

Without a doubt the British do believe that leaving the EU will involve economic pain, because EU leaders have made it clear they would retaliate against an exit and trade barriers are the only tool they have. But you don’t need to be an expert to understand that trade is good. Beyond pointing out that trade is good, it’s unclear economists have much to add. Believing specific predictions about the impact on house prices or GDP 20 years from now would require us to believe that economists can accurately predict the future, despite their abject failure to do so … or even to correct the flaws that led to those failures!

So people are right to assign low weights to what economists think. By treating the single-factor analyses of a discredited group as if only retards could argue with them, Remain has undermined their case from the beginning.

Politicians and journalists

The ill repute of economics feeds another problem: when politicians and journalists present economic predictions as cast-iron facts it undermines trust in them too. And it’s not like they had a lot to start with!

Here’s an example from the UK government website:

“Treasury analysis on the EU shows UK will be worse off by £4,300 a year per household if Britain votes to leave, the Chancellor explains.”

The same speech states, “The Treasury’s rigorous analysis of the trade and investment impact of the WTO option shows that after 15 years Britain’s economy would be around 7.5% smaller.”

These are extremely specific predictions by the staff at HM Treasury. But is the Treasury’s advice credible? I guess George Osborne would say yes … oh, wait. From a BBC News article in 2010:

Mr Osborne said the newly formed independent Office for Budget Responsibility would publish economic and fiscal forecasts, rather than the government — the first of which would come out before the Budget.
He predicted it would create a “rod for my back down the line and for future chancellors” but said the current system did not produce “good Budgets”, and Labour’s economic forecasts had mostly been wrong and “almost always in the wrong direction”.
[Alistair Darling] said: “The suggestion that Treasury officials colluded with us in perverting figures is just wrong…

In 2010 one of George Osborne’s first acts as Chancellor was to establish a brand new quango for economic forecasting, the “Office for Budget Responsibility”. His justification was that the Treasury told politicians whatever they wanted to hear, and was so overrun with yes-men that this was completely unfixable. That’s pretty bad.

Worse, it’s not really clear why the OBR was expected to be somehow “not government” or less likely to pander to politicians. Sure enough, if we check the organisation’s history, we find that the initial staff was made up of 8 former Treasury officials. So independent!

There are lots of examples like this. I’m not picking a narrow fight with the specific speech quoted above. I want to make a broader point — that when politicians and columnists pump up the opinions of people they know to be unreliable to benefit from their aura of authority, they undermine not only trust in themselves but in the wider notion of experts as well.

Gove is right. People have learned the hard way that when politicians repeat apparently authoritative arguments by expert economists, they are often wrong. The trust is gone … and both groups thoroughly deserve it.


Kotlin Native

A frequent question about Kotlin is if/when it will support compilation to native binaries that run without a JVM. Usually this takes the form of a more technical question like, “will the Kotlin compiler get an LLVM backend?”

I am not on the JetBrains team and I believe they’ve already made up their minds to do this, but I don’t personally think this feature would be the best way to solve the problems that are driving these questions.

Luckily, there are better ways to solve those problems. Kotlin doesn’t need an LLVM backend: the wider JVM community is producing all the needed puzzle pieces already. Someone just has to put them together.

Why do people want native binaries?

Here’s an entirely unscientific summary of the top reasons why people so often request a native backend for managed languages, based on conversations I’ve had over the years:

  1. They think the code would run faster or use less memory
  2. They want better startup time
  3. They think deploying programs that have only one file is simpler
  4. They want to avoid garbage collection
  5. They want to run on iOS, which forbids JIT compilers by policy
  6. They want better native-code interop

This list covers two of the three reasons cited by the Scala Native website for why it exists. The other reason is that Scala Native is extending the language to support C++ like constructs that are normally forbidden by the JVM, like forced stack allocation, but building a better C++ is already being handled by Rust. I suspect that trying to convert Scala or Kotlin into C++/Rust competitors wouldn’t go well.

Sometimes people think a ‘native’ version of a managed language would fix problems that have nothing to do with compiler technology, like this guy who believes it would change font rendering. I’ve seen ‘native’ become a mythical cure-all for any frustration or problem the user might have … which isn’t surprising, given that most people have no experience with managed-to-native technologies.


A ‘Kotlin Native’ would make slower code

It is a common belief that code compiled by an ahead-of-time compiler must be faster or less memory hungry than just-in-time compiled code. I used to believe this too. It makes sense: C++ apps run fast, and C++ compilers are very slow. It stands to reason that the compilers are slow because they’re spending a lot of time optimising the code they’re producing. Surely, a compiler that can only grab a handful of spare cycles here and there whilst the app is running can never do such a good job?

Unfortunately, performance is subtle and often unintuitive. C++ compilers are slow mostly due to the #include and generics model the language uses, not due to optimisations. And a team at Oracle is adding ahead of time compilation to HotSpot. They gave a tech talk about it last year called “Java goes AOT”. Plain old AOT seems to give around 10%–20% slower code, depending on the kind of app (in the talk this is the difference between tiered and non-tiered). The reason is that virtual machines that support deoptimization, as the JVM does, can make speculative optimisations that aren’t always correct. Programs written in higher level languages benefit from this technique more, for instance Scala code benefits more than Java code does. The AOT compiler they’re using (Graal) isn’t weak or amateur: it’s competitive with GCC when compiling C code. It’s just that these advanced optimisations are really powerful.
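
To make the speculation point concrete, here is a minimal Kotlin sketch (my own illustration with made-up names, not something from the talk). With only Circle loaded at runtime, the JIT can speculate that the hot call site always dispatches to Circle.area(), inline it behind a cheap class check and deoptimise if that guess ever turns out to be wrong; an ahead-of-time compile without profile data has to keep the virtual call.

    // Sketch: a hot virtual call site that the JIT can specialise at runtime.
    interface Shape {
        fun area(): Double
    }

    class Circle(private val radius: Double) : Shape {
        override fun area() = Math.PI * radius * radius
    }

    // Hot loop: the JIT observes that every Shape here is a Circle, speculates on
    // that, and inlines area(). If another Shape implementation is ever loaded,
    // the compiled code is thrown away (deoptimised) and regenerated.
    fun totalArea(shapes: List<Shape>): Double {
        var sum = 0.0
        for (shape in shapes) sum += shape.area()
        return sum
    }

    fun main() {
        val shapes: List<Shape> = List(1_000_000) { Circle(it.toDouble()) }
        repeat(20) { println(totalArea(shapes)) }   // enough iterations to trigger JIT compilation
    }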

It’s for this reason that the HotSpot AOT mode actually supports using a mix of AOT and JIT compiled code. The app can be compiled ahead of time so the interpreter isn’t used, but in a way that self-profiles. Then the JIT compiler is still invoked at runtime to recompile the hot spots, winning back most of the performance loss. You get a much less severe warmup period, whilst still obtaining the best peak performance.

Android has learned the same lesson. The replacement for Dalvik that was introduced in Marshmallow (ART) started out as a pure AOT compiler. In Android N, it will start using a mix of AOT and JIT compilation.

Additionally, the shortest path to a Kotlin-specific native compiler backend would be LLVM. LLVM is primarily used to compile C and C++. Many optimisations that can really help high level managed languages like Kotlin simply don’t apply to C++, so LLVM doesn’t implement them at all. That’d result in an even bigger speed hit.

Sidenote: the .NET virtual machine does not have an interpreter nor does it support speculative optimisations. It’s really just a regular compiler that happens to run the first time a method is called. This is why Microsoft has tended to be more interested in AOT compilation than the Java space has been: they never exploited the potential of JIT compilation, so they have less to lose from abandoning it.

What about other forms of bloat, like memory usage, binary size and startup time?

Even those are complicated. Most of the memory usage of apps written in managed languages like Kotlin, Scala, Java, C# etc comes from their reliance on object identity, garbage collection, unicode strings and other things that native compilation doesn’t affect.

Worse, native CPU code is a lot larger than JVM bytecode. Larger code bloats downloads and uses more RAM, lowering cache utilisation. An AOT compiled “Hello World” Java app doesn’t start any faster than the regular interpreted version because even though the interpreter runs far fewer instructions per second than the CPU can, each instruction does a lot more and takes much less space in memory. Runtime of a hello world app is only about 80 milliseconds anyway, which is a relevant cost only if you’re making tiny tools for a UNIX shell.

And whilst hauling around a couple of JIT compilers and three garbage collectors adds bloat, that’s not inherent to using a virtual machine rather than compiling to native. It’s just that HotSpot is a one-size-fits-all program that is designed to run on everything from laptops to giant servers. You can make much smaller virtual machines if you’re willing to specialise.

Enter Avian

“Avian is a lightweight virtual machine and class library designed to provide a useful subset of Java’s features, suitable for building self-contained applications.”

So says the website. They aren’t joking. The example app demos use of the native UI toolkit on Windows, Mac OS X or Linux. It’s not a trivial Hello World app at all, yet it’s a standalone self-contained binary that clocks in at only one megabyte. In contrast, “Hello World” in Go generates a binary that is 1.1 megabytes in size, despite doing much less.

Avian can get these tiny sizes because it’s fully focused on doing so: it implements optimisations and features the standard HotSpot JVM lacks, like the use of LZMA compression and ProGuard to strip the standard libraries. Yet it still provides a garbage collector and a JIT compiler.

For people who want to use Kotlin to write small, self contained command line apps of the kind Go is sometimes used for, a much simpler and better solution than an LLVM backend would be to make a fully integrated Avian/Kotlin combination. Avian is hard to use right now — you’re expected to be familiar with native-world tools like GCC and make. Making a one-click JetBrains style GUI would yield programs that look to the user like they were AOT compiled: they’re single executables that only require the base OS. And using SWT you can build GUIs that look native on every platform because under the hood they are native. But you wouldn’t need to abandon the benefits of JIT compilation or garbage collection.
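
For a feel of what the developer-facing side of that combination might look like, here is a minimal SWT window in Kotlin. This is my own sketch, assuming the platform-specific SWT jar is on the classpath; the interesting part, packaging it with Avian, ProGuard and LZMA into a tiny binary, is exactly the tooling described above.

    import org.eclipse.swt.SWT
    import org.eclipse.swt.widgets.Display
    import org.eclipse.swt.widgets.Label
    import org.eclipse.swt.widgets.Shell

    // A tiny native-looking GUI: SWT wraps the platform's own widgets, so this
    // window is drawn by Win32/Cocoa/GTK rather than by a Java toolkit.
    fun main() {
        val display = Display()
        val shell = Shell(display)
        shell.text = "Kotlin + SWT"

        val label = Label(shell, SWT.NONE)
        label.text = "Hello from a (potentially) one-megabyte binary"
        label.pack()

        shell.pack()
        shell.open()
        while (!shell.isDisposed) {
            if (!display.readAndDispatch()) display.sleep()
        }
        display.dispose()
    }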

Losing garbage collection

Sometimes the people requesting a native backend want it because they want to avoid GC, and associate “native” with “not garbage collected”. There is no connection between these things: you can have garbage collected C++ and you can do manual memory management in Java (and some high performance libraries do).
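
To make that concrete, here is one standard off-heap technique available on a stock JVM, shown as a hedged Kotlin sketch of my own: direct ByteBuffers, whose backing memory lives outside the garbage-collected heap. Libraries such as Netty build pooled allocators on top of this idea.

    import java.nio.ByteBuffer

    fun main() {
        // 64 MB of native memory: the backing store is not on the GC heap, only
        // the small ByteBuffer handle is an ordinary Java object.
        val buffer: ByteBuffer = ByteBuffer.allocateDirect(64 * 1024 * 1024)

        // Manage the layout manually, e.g. treat the region as an array of 8-byte longs.
        for (i in 0 until 1024) {
            buffer.putLong(i * 8, i.toLong())
        }
        println("slot 42 = " + buffer.getLong(42 * 8))

        // Freeing the memory is now your problem (or the buffer's cleaner once the
        // handle becomes unreachable), which is precisely the trade-off manual
        // management brings back.
    }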

The problem with extensively mixing techniques is that it forks the language. A library that assumes garbage collection cannot be used without a collector and likewise, a library that expects manual management becomes a lot harder to use from code that expects a GC. You’d have to introduce smart pointers and other C++ like constructs to the language to make it really convenient.

I wouldn’t like to see Kotlin splinter into two different languages, Kotlin-GC and Kotlin-Manual. That would hurt the community and ecosystem for questionable benefits.

And the benefits are questionable. Many devs who think they can’t tolerate GC are basing their opinions on old/crappy GCs in mobile phones or (worse) web browsers. This impression is heightened by the fact that some well known garbage collected apps are written by people who just don’t seem to care about performance at all, like Minecraft or Eclipse, leading people to blame GC for what is in reality just badly written code. But there are counterexamples that show it doesn’t have to be this way: Unreal Engine is written in C++ and has used a core garbage collected game heap since version 3. It powers many of the worlds AAA titles. They can hit 60 frames per second and they are using a very basic GC. Tim Sweeney’s secret is that he cares about performance and productivity simultaneously. If they can do it, so can you.
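
The caring mostly comes down to allocation discipline. As an illustration (my own sketch with hypothetical names, not taken from any engine), here is the standard pattern: preallocate objects up front and reuse them, so the per-frame loop creates no garbage and even a basic collector has almost nothing to do.

    // A preallocated pool of particles; the update loop allocates nothing per frame.
    class Particle(var x: Float = 0f, var y: Float = 0f, var vx: Float = 1f, var vy: Float = 1f)

    class ParticlePool(size: Int) {
        private val particles = Array(size) { Particle() }   // allocated once, up front

        fun update(dt: Float) {
            // Iterating an Array compiles to an indexed loop, so no iterator
            // objects are created here either.
            for (p in particles) {
                p.x += p.vx * dt
                p.y += p.vy * dt
            }
        }
    }

    fun main() {
        val pool = ParticlePool(100_000)
        repeat(60 * 10) { pool.update(1f / 60f) }   // ten simulated seconds at 60 fps
        println("simulation finished")
    }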

iOS and native code interop

The final reasons people want a native compiler backend are iOS support and to make native code interop easier.

iOS is a good reason. That platform bans JIT compilers because it helps Apple enforce their incredibly rigid policies. But doing a native backend at the language level is the wrong approach. RoboVM is a project that built a JVM bytecode to ARM AOT compiler, and although RoboVM is now a dead project due to being acquired by Microsoft, old versions of its code are still available under an open source license. It works for any JVM language and doesn’t really suffer from this generality: a Scala or Kotlin specific ARM compiler wouldn’t do much different.

But that’s probably not the long term direction the JVM platform will go in for iOS. As HotSpot itself is getting support for AOT compilation, and HotSpot has an ARM backend too, and there’s an official OpenJDK mobile project that’s already made an iOS (interpreter only) mobile version, it would make sense for them to plug these things together and end up with a mode in which HotSpot can generate AOT iOS binaries too. I wouldn’t be surprised to see something like this announced between Java 9 and 10.

The final reason is native code interop. If you compile to native, so the reasoning goes, it’ll be easier to use C/C++ libraries like the ones your operating system provides.

But the existence of projects like JNA seems to disprove this — you can have convenient interop without generating native code yourself (there’s a small sketch of this after the list below). And there are some exciting techniques for working with native libraries coming up:

  • The OpenJDK Panama project is adding support for things like inline assembly, pointers and struct layouts directly to Java and HotSpot. Yes, if you check out the Panama branch of the hotspot repository you can actually define assembly snippets in Java and they’ll be inlined directly into usage sites, just like an __asm__ block in C would. Panama also provides a clang-based tool that parses C/C++ headers and auto generates the equivalent Java declarations. All this should be automatically available to Kotlin and Scala users too.
  • The Graal/Truffle research projects are creating Sulong, which is a way to JIT compile LLVM bitcode on top of the JVM. There is a simple C API that exists when code is run on top of Sulong that allows for near zero-overhead interop with managed code.
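
To ground the JNA point above, here is a minimal Kotlin sketch of my own, assuming JNA 5.x is on the classpath and a POSIX libc is available: calling native functions without writing or generating any native code yourself.

    import com.sun.jna.Library
    import com.sun.jna.Native

    // Map the subset of libc we want onto a plain interface; JNA generates the
    // binding at runtime, with no hand-written stubs and no native compiler backend.
    interface LibC : Library {
        fun getpid(): Int
        fun getenv(name: String): String?
    }

    fun main() {
        val libc = Native.load("c", LibC::class.java)
        println("pid  = ${libc.getpid()}")
        println("HOME = ${libc.getenv("HOME")}")
    }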

Conclusion

Kotlin doesn’t need an LLVM backend, and by extension I believe neither does Scala. Creating such a thing would be a huge and ongoing drain of manpower, take a long time, and end up duplicating work already being done elsewhere in the JVM ecosystem … very likely with worse results.

Instead, I think the right direction for the community to go is:

  1. Building a simple IntelliJ plugin that uses Avian to spit out tiny, self contained binaries, for when weight is more important than features. This would present some strong competition to Go in the command line tools space.
  2. Using the AOT mode being added to HotSpot to eliminate warmup times and (hopefully) support iOS, for times when AOT compilation really is the only solution.
  3. Waiting for JVM upgrades to reduce memory usage, support better native interop and so on. Don’t try to duplicate these efforts at the language level.
  4. Educating the developer community about how to write high performance garbage collected apps outside of the server context.

This approach isn’t flawless: the AOT mode being added to HotSpot is planned to be a commercial feature, and big upgrades like Panama are long term projects. But adding an LLVM backend to the Kotlin or Scala compilers would be a long term project too, it’d mean sacrificing other features that might be more useful, and it would likely never close the performance gap.

As always in engineering, there are no solutions — only trade offs. I’d rather have more features in the core Kotlin language/tooling than a native backend, and let other teams tackle the varying challenges involved.
