WASM and Hamsters: A journey in pursuit of front-end performance
Hey everyone! This isn't another post showing how amazing new tech is. It's a post about the challenges of applying that tech in a real-world scenario, and how frustrating it can be.
As João already mentioned in his post, this month we started the culture of the Mad Science Weekend. The idea is quite simple: the company gives us some time out of the regular workload to work on personal, innovative projects, and we do some crazy stuff to post here and entertain you guys.
Some context first
First of all, you should know that we work with a considerable amount of data in some of our applications, and we're always trying to make things faster and less network-dependent. That being said, one of our latest challenges was to somehow replicate the Elasticsearch service on the client side, so users can get fast results even with a poor connection, or none at all. elasticlunr solved that problem for us.
elasticlunr fit our needs gracefully (with some quirks, of course). Searches were fast and worked offline. But once we tested it with our real-world data (I'm talking about over 40k entities), we found a problem: the indexing time.
The thing is: to perform that well on searches, elasticlunr indexes all the data using lots of for loops to tokenize every single field, and that takes time… and CPU. So you can see the problem here, right?
The problem is that, in the end, we still had a promise that took ~20 seconds to resolve, and the user couldn't search for anything in the meantime.
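To make the cost concrete, here's a rough sketch of what a full-text indexer has to do (illustrative only, not elasticlunr's actual internals): tokenize every field of every document in nested loops, which is exactly what burns CPU time with 40k+ entities.

```javascript
// Illustrative sketch of why indexing is CPU-bound
// (names are made up; elasticlunr's real internals differ).
function tokenize(text) {
  return text.toLowerCase().split(/\s+/).filter(Boolean);
}

function buildIndex(docs, fields) {
  const index = {}; // token -> Set of document ids
  for (const doc of docs) {          // 40k iterations in our case
    for (const field of fields) {    // times every indexed field
      for (const token of tokenize(String(doc[field] || ""))) {
        (index[token] = index[token] || new Set()).add(doc.id);
      }
    }
  }
  return index;
}

const index = buildIndex(
  [
    { id: 1, title: "Fast offline search" },
    { id: 2, title: "Search without a connection" },
  ],
  ["title"]
);
// index["search"] now contains ids 1 and 2
```

Three nested loops over tens of thousands of documents, all on the main thread: that's the ~20-second promise.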
This is the part where I should say that I managed to decrease the indexing time by 50% after compiling elasticlunr to WebAssembly, but the truth is that I can't. You can check my research results here.
Here are some things you should know about WebAssembly:
- the performance difference isn't that big nowadays unless you're dealing with tons of data. I've created a repo on GitHub comparing a Fibonacci function written in C with Binaryen, in AssemblyScript, and in vanilla JS.
As a lower-level language, C manages to be 4x faster than vanilla JS, but it's hard to write if you're not familiar with it. Meanwhile, AssemblyScript is more familiar to front-enders, but it's only 2x faster;
- there is no support for most of the mainstream functional languages yet;
- C has the best support to date;
- AssemblyScript has good support and a community, but not many features.
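For reference, the vanilla JS baseline in that kind of comparison is usually the classic recursive Fibonacci; something along these lines (the repo's exact code may differ):

```javascript
// Classic recursive Fibonacci, the usual micro-benchmark baseline.
// Deliberately naive: the exponential recursion is what makes the
// C/AssemblyScript/JS speed difference measurable.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// Timing it the simple way:
const start = Date.now();
const result = fib(30);
console.log(`fib(30) = ${result} in ${Date.now() - start}ms`);
```

The same function ported to C or AssemblyScript is what yields the 4x and 2x figures above.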
That last item is why I gave up on compiling elasticlunr (at least for now). There's no support for most of the JS features, like the JSON object, forEach loops, etc. It's like programming in C with a TypeScript syntax.
And finally, The Hamsters
After giving up on m̶y̶ ̶l̶i̶f̶e̶ ̶a̶s̶ ̶a̶ ̶d̶e̶v̶e̶l̶o̶p̶e̶r̶ the WebAssembly approach, I remembered that Abraão and Matheus had tried parallelism with web workers, with no success due to a bug in Chrome. My only other option was to reimplement things like the entire JSON object in AssemblyScript, so I figured, "why the hell shouldn't I try it again?"
After some research, I found a lovely library that helps you create and use threads with web workers: Hamsters.js, which is also the reason I'm using hamster GIFs in this post. The idea is simple: if you're about to run a for loop over a 40k-sized array, you split it into four 10k-sized arrays, run the loop on each in parallel, and then concat the results.
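The split/process/concat idea can be sketched without the library (in Hamsters.js each chunk would actually go to a web worker; here a plain map stands in for the parallel step):

```javascript
// Split a large array into N roughly equal chunks.
function chunk(array, parts) {
  const size = Math.ceil(array.length / parts);
  const chunks = [];
  for (let i = 0; i < array.length; i += size) {
    chunks.push(array.slice(i, i + size));
  }
  return chunks;
}

// 40k items -> four 10k chunks -> process each -> concat back.
const data = Array.from({ length: 40000 }, (_, i) => i);
const processed = chunk(data, 4)
  .map((part) => part.map((x) => x * 2)) // the "loop", once per chunk
  .flat(); // concat the partial results back together
```

Swap the `.map` over chunks for four workers and you have the Hamsters.js model.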
Sounds simple, right? It isn't. Hamsters.js uses web workers, and web workers have contexts separate from the main thread. That's a problem because even if you use the .toString() & eval trick to send a function to the web worker and execute it there, the execution will fail if that function needs anything from the main thread's context, like another function or even a variable. So how could we solve that?
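You can reproduce the failure without spinning up a worker. In the sketch below, `new Function` stands in for the worker's isolated scope: reviving a function from its source string strips away its closure, so any reference to the original context blows up.

```javascript
// A value that lives in the "main thread" scope.
const prefix = "Hello";

// This function closes over `prefix`.
const greet = (name) => `${prefix}, ${name}!`;

// Serialize and revive it in a scope where `prefix` doesn't exist --
// simulating what .toString() + eval does inside a web worker.
const revived = new Function(`return (${greet.toString()})`)();

greet("world");   // works: the closure is intact
// revived("world") throws ReferenceError: prefix is not defined
```

The source string looks identical, but the revived copy carries none of the lexical environment it was written in. That's exactly why functions shipped to workers must be fully self-contained.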
The thing is: Webpack is hard, but it's powerful. With that in mind, I managed to create a config file that bundles the function I needed along with all its dependencies in the import tree. It was a pain in the ass to write that file, but in the end it worked and I was able to run the loops properly. There was still a problem, though…
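The general shape of such a config is to treat the function's module as an entry point and emit one self-contained bundle that a worker can load. A minimal sketch, assuming hypothetical file names and an export called `indexDocuments` (none of these are from the actual project):

```javascript
// webpack.config.js (sketch; entry/output names are made up)
module.exports = {
  mode: "production",
  // the module that exports the function the workers need
  entry: "./src/index-documents.js",
  output: {
    path: __dirname + "/dist",
    filename: "worker-bundle.js",
    // expose the export on the loading scope so the worker can call it
    library: "indexDocuments",
    libraryTarget: "umd",
    globalObject: "this", // makes the UMD wrapper work inside a worker
  },
};
```

With a bundle like this, the worker only needs `importScripts("worker-bundle.js")` and the whole import tree comes along for the ride.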
The performance didn't change much, and I was like WHY THE HELL ISN'T THIS WORKING?
Hope never dies
After the failure with the Hamsters approach, I decided to go back to WebAssembly and do things the hard way. I've been searching for polyfills and for ways to isolate just the slower, simpler parts of the code, so I only need to migrate those to WebAssembly. It won't be easy, but I think it's possible. I opened an issue on the AssemblyScript repo, and after some talk with Daniel Wirtz, the creator, I discovered that I might be able to do what I want with the older compiler (he's working on a new one right now).
As I said, this is not a post about how amazing new tech can be. It's a post about the challenges you'll face trying to solve real-world problems with it. In the future, I should be posting more about using Hamsters.js in a case where it's actually worth it. And whether I succeed or fail with the elasticlunr migration to AssemblyScript, I'll let you guys know exactly how and why.
And by the way, Daniel Wirtz is reimplementing the AssemblyScript compiler in this repo, and he needs help with lots of trivial, simple things like implementing basic features and writing tests. Feel free to contribute!