2016 will be the year of concurrency on the web
The 2 main reasons are:
- Virtual DOM has given us a programming model on the web that works well with non-immediate access to native DOM APIs.
- The UI thread has gotten too damn busy.
I’m writing this post now, before workers become mainstream, because concurrency is hard. While we have a promising programming model (as outlined above), we still need to experiment with this a lot.
If you need additional convincing that worker-based concurrency is coming, some recent examples that are blazing the trail are Henrik Joreteg’s super interesting FeatherApp and this amazing article on the software design of the Pokedex web app.
The rest of this post consists of the slightly edited notes from my talk at JSConf US 2015 Last Call, going a little deeper into how and why I think we need to invest in concurrency on the web. It explains things in the context of my current project, AMP, but I believe the message applies beyond that narrow scope.
- We need to talk about the web.
- Things have gotten into kind of a bad state and this is even more true on the mobile web.
- To me reading content on the mobile web really is a slow, clunky and frustrating experience — and this is even true for content that is supposedly optimized for mobile.
- My personal pet peeve: The page finally loaded and I start reading the article and then some ad or whatever at the top of the page starts loading and pushes everything down, so I lose my reading position. Not very awesome.
- We’ve seen average page weight increase by over 15% year over year for the last two years.
- With just some casual manual checks it is not hard to find mobile news sites that download over 10MB for a single page.
- This is why we’ve created the AMP Project.
- And by “we” I mean: Google, Twitter, Pinterest, LinkedIn and a whole lot of publishers
- We do 2 things at the core:
- Optimize content for prerendering and thus achieve instant loading for the web
- And even without prerendering achieve reliable, fast performance.
- HTML + CSS + Extra validator
- Makes things that might be slow invalid
- Focus on load time and consistent consumption experience.
- Still very webby: You can literally deploy it with FTP. Every common web browser renders AMP files without modifications
- Actual restrictions are very narrow. Basically all HTML (or equivalent custom elements) and all of CSS (with the only limitations being on animations)
- But there is a big BUT
- I’d like to spend some time on the why.
- The reason is what we call the coordination problem.
- A modern web page has many things going on.
- Lots of these things come from various third parties.
- And they are uncoordinated.
- On the other hand there are some hard limits to making a performant web page:
- The RAIL model: 16ms per frame and 50ms from idle to response.
- It is practically impossible to hit this when 20 random things on a page might take up CPU at every instant without any type of coordination.
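The 16ms budget is easy to check empirically. Here is a minimal sketch of that kind of measurement: given a series of frame timestamps (as `requestAnimationFrame` would deliver them), count how many frames blew the budget. The timestamps below are invented for illustration.

```javascript
// Count frames whose duration exceeded the budget. With a 60fps target,
// any gap between consecutive frame timestamps over ~16ms is a dropped frame.
function countLongFrames(timestamps, budgetMs = 16) {
  let long = 0;
  for (let i = 1; i < timestamps.length; i++) {
    if (timestamps[i] - timestamps[i - 1] > budgetMs) long++;
  }
  return long;
}

// Deltas here are 16, 17, 16 and 71ms — two frames blew the budget.
const timestamps = [0, 16, 33, 49, 120];
const longFrames = countLongFrames(timestamps);
```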
- Initially AMP took the easy way out.
- No JS, no coordination problem.
- It still allows third party frames, though. So to some extent the problem was only removed from the critical load path, but it still exists at runtime.
- Not as visually interesting, but I’d like to talk about analytics:
- Who has been through the following scenario:
- Business person to engineer: “Yo, we’d like to switch to this new analytics provider. They have much more awesome metrics.”
- Dev: “Cool, looks easy enough, I’ll put it in”.
- 3 days later, business person: “Uhm, on this one metric that is important to my manager we are going slightly down on this new analytics thing. Can you put the old one back in, just for this one thing?”
- Dev: “Grumble. Whatever.”
- And just like that your site ends up with literally all of them.
- For AMP I was like: “What if we instrument pages just once and allow configuring the collected data to be sent to N analytics providers?”
- So, we went and talked to many of them. And every single one was like: “That sounds like a great idea”.
- With this launching really soon in AMP, if you have 15 analytics providers your page will be exactly as heavy as if you only had one.
- With this we got analytics under control, but there is so much more to the page.
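The instrument-once idea can be sketched in a few lines. This is not AMP’s actual API; the provider endpoints and request format below are invented to show the shape of the fan-out: one measurement on the page, N requests out.

```javascript
// Hypothetical fan-out: the page runs one instrumentation layer, and each
// collected event is turned into a request per configured provider.
function buildAnalyticsRequests(event, providers) {
  return providers.map((provider) =>
    provider.endpoint +
    '?type=' + encodeURIComponent(event.type) +
    '&url=' + encodeURIComponent(event.url)
  );
}

const pageview = { type: 'pageview', url: 'https://example.com/article' };
const providers = [
  { endpoint: 'https://analytics-a.example/collect' },
  { endpoint: 'https://analytics-b.example/collect' },
];
// Two providers, but the page still only ran one instrumentation script.
const requests = buildAnalyticsRequests(pageview, providers);
```

Adding a 15th provider is one more config entry, not one more script on the page.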
A general solution for the coordination problem
- I’d like to talk about my vision as to how to solve the coordination problem in a more general fashion:
- Do some of you still remember this:
- Until the late 90s (before OS X), Macs ran with cooperative multitasking.
- The thing about cooperative multitasking is that it can be super fast.
- The currently active process gets all the CPU for as long as it wants.
- That can be awesome. E.g. for Games. No chance of some stupid background service slowing things down.
- But sometimes your program might rely on something else running. Or you just want to be nice to other programs.
- So, you yield.
- Now you won’t get back the CPU until something else yields. Which in the worst case is never.
- In practice this did not work very well.
- Which is why modern operating systems support preemptive multitasking where a scheduler controls who gets how much CPU.
The web works exactly like classic Mac OS and Windows 3.11.
- If an ad says: I’m gonna be all fancy and ray trace myself. It can do that. And stall your UI thread for seconds.
- So, we’ve solved this problem before. In a preemptive multitasking system an H.264 decoder can totally run while an app remains responsive.
We need to bring the equivalent of preemptive multitasking to the web.
- There have been many talks at JSConf about threading and how we need it to get AAA games into the browser.
- I don’t know about you, but I’d be pretty happy if I could just read a simple article without my browser falling over.
I believe that we’ve seen concurrency on the web in the wrong light so far.
It isn’t only needed for fancy games and graphics.
We need concurrency to make the web not suck.
- At this point I do not have all the answers, but I do think we need to rethink the concurrency primitives we have on the web in light of using them for, you know, everyday web pages.
- They aren’t used much.
- And they are still lacking a model for things to work together.
- My plan is to use the AMP Project to bring a feasible concurrency model to the web and at the same time use this to bring interactivity to AMP.
- Want to animate the page? Cool, write code in this worker.
- If an ad on a page wants to run JS? Cool, run it in a worker.
- AMP can start being a scheduler. Basically it would tell a worker: “Hey, we want to draw a frame,” and then the worker can do that.
- If it uses too much CPU, the scheduler can just decide: “I’m going to throttle you to 30fps.” Or: “I’m going to suspend you altogether.”
- If an ad is misbehaving or there are too many ads on a page for a particular device: Again they can be throttled.
- But on a high end device everything runs at 60fps.
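To make the scheduling idea concrete, here is a minimal sketch of a frame scheduler. The class name, budget numbers and API shape are all assumptions for illustration — this is not AMP’s actual scheduler — but it shows the core move: the page owns the frame loop and only grants draw ticks to workers that stay within budget.

```javascript
// The page decides, per animation frame, which workers get to produce a frame.
// A worker that blows the frame budget is throttled to every other frame (~30fps).
class FrameScheduler {
  constructor(frameBudgetMs = 16) {
    this.frameBudgetMs = frameBudgetMs;
    this.workers = new Map(); // id -> { frame, throttled }
  }

  register(id) {
    this.workers.set(id, { frame: 0, throttled: false });
  }

  // Called once per animation frame for each worker: may it draw this frame?
  shouldRunFrame(id) {
    const w = this.workers.get(id);
    w.frame++;
    if (w.throttled) return w.frame % 2 === 0; // every other frame only
    return true;
  }

  // The page reports how long the worker's last frame actually took.
  reportCost(id, costMs) {
    const w = this.workers.get(id);
    w.throttled = costMs > this.frameBudgetMs;
  }
}

const scheduler = new FrameScheduler();
scheduler.register('ad-frame');
scheduler.reportCost('ad-frame', 40); // the ad took 40ms: way over budget
// From now on the ad only gets every other frame, until it behaves again.
```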
- We still have big question marks with respect to the programming model that would be used inside those web workers.
- We’ll be conservative about it.
- Most likely initially we’ll allow workers to send a set of mutations once per animation frame.
- And we’ll limit what things can actually be done to things that can be accelerated on the GPU.
- But eventually as we learn more about concurrency on the web we can open things up.
- Eventually I could totally see that we’ll have AMP pages that get updated through React components living in web workers.
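The conservative mutation model can be sketched as a simple allowlist. The message shape and the set of allowed properties below are assumptions, but they illustrate the idea: a worker posts a batch of mutations once per frame, and the page only applies the ones that can be GPU-accelerated.

```javascript
// Only compositor-friendly properties make it through; anything that would
// trigger layout on the UI thread (width, top, etc.) is rejected.
const GPU_FRIENDLY = new Set(['transform', 'opacity']);

function filterMutations(batch) {
  return batch.filter((m) => GPU_FRIENDLY.has(m.property));
}

// What a worker might post once per animation frame (hypothetical shape):
const batch = [
  { selector: '#hero', property: 'transform', value: 'translateX(10px)' },
  { selector: '#hero', property: 'width', value: '300px' }, // triggers layout
  { selector: '#hero', property: 'opacity', value: '0.5' },
];
const applied = filterMutations(batch);
```

In a real setup the worker would call `postMessage({ mutations: batch })` and the page would apply `applied` inside its own `requestAnimationFrame` callback.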
- So, here is my pitch:
- With AMP we brought the web to a known basic state that cannot be slow.
- We reduced the capabilities while doing it.
- Over the coming months (and I assume it’ll be more like 12 months) we’ll bring it back.
- But we’ll do so in a fashion that will not regress performance.
The solution is coordinated, scheduled, preemptive concurrency for web components.
- It’ll be awesome.