There is a lengthy ongoing debate on how the Web could compete with the onslaught of native apps on mobile platforms — heck, there is debate on whether it even should.
If you side with Alex Russell, you might agree that transitioning away from the safety of our wired broadband and spacious desktop CRTs has made our web browsing experience… how should I put it… rather “promiscuous”.
Clicking a link or button in your mobile browser while sitting on a train is not that comfortable, rewarding experience any more. On desktop, errors and blank screens were relatively rare and mostly the fault of the services themselves; the mobile experience, in contrast, is plagued by fear and anxiety:
Will my mobile network hold up?
What if there is a tunnel coming up ahead of us?
I should have opened this in a new tab so at least I could have read the remainder of my feed while (if ever!) this loads!
Workers back in service
So we then put our trust in AppCache, hoping it could save us from this particular hell, and solve our connectivity problems once and for all.
Clearly, AppCache has failed to live up to our expectations, but at least it wasn’t all in vain — we learned a lot from our fumble. When the declarative approach failed to quench our thirst for a flexible, powerful tool for making our web properties tolerate network failures, we went the imperative route.
Navigation Controllers were born.
Never heard of those? Well, don’t blame yourself, as before long it turned out the web had a lot more issues to solve around all these new use cases than a scriptable cache could possibly cover. But it was a great start nonetheless, so they made it a new primitive, and navigation controllers morphed into something we tend to call a “bedrock API” — the Service Worker API became a thing.
It was hoped that these distinct scripts, not associated with any window per se but bound to an origin, could not just help make pages work offline (as scriptable caching proxies) but, perhaps through some extensions, could also be used to solve other use cases: push notifications, background synchronization, geofencing…
The idea of building an extensible web is not all that old. Adding primitives to the web that can then be used for polyfilling (adding newly drafted features to unsupporting browsers) & prototyping (drafting a feature, then refining and iterating on it before it is finally specced) is a remarkable idea — and service workers fit the bill perfectly. By putting new tools in the hands of authors, browsers get better feedback on use cases and can decide which issues and features to solve at a lower level — to increase adoption, performance and ease of use.
The BPG2JPG transparent service worker polyfill is a proof of concept. It was created to demonstrate the above idea: service workers can be used to build much more powerful features than just providing offline support for your app. While offline support could be thought of as “extending your browser with a new feature” — that is, being able to run your webapp when there is no connectivity — this is just one of a multitude of possibilities.
How does it work?
The original browser polyfill uses XMLHttpRequest to download the images, then replaces the <img> tags in your document with HTML5 canvases, decoding and rendering the images onto them. This, of course, results in several inconveniences (like having to deal with manually sizing those canvases later on). Moreover, while decoding uses asm.js, the image conversion blocks the UI thread, which can be a big issue on limited hardware (like mobile) and on pages with lots of images. Worse still, every image has to be decoded each and every time the page is reloaded, even if HTTP caching kicks in and the XMLHttpRequests themselves are cached.
It might still be worth doing, though, since a 40% improvement on file sizes is a huge win in terms of server load and client data transfer. You can check out the demo page to see the gains for yourself — note that image-quality-wise JPEG-encoded images are practically indistinguishable from their BPG counterparts, even when JPEG images are encoded in the next-best (and thus, larger) encoding setting (large vs medium, medium vs small, etc.)
What the transparent service worker polyfill does is move decoding into the service worker script and, instead of displaying raw bitmap data, re-encode the decoded BPG images as JPEGs. These images are then cached and returned to the browser for display.
As a result, all conversion happens in a thread separate from the UI thread, no UI blocking occurs*, and the webpage itself doesn’t need to be changed (beyond including the service worker), nor is it manipulated afterwards — the image tags transparently receive a JPEG-encoded image in place of the requested BPG ones.
Decoded/re-encoded images are also cached, so on subsequent loads images appear instantly — no re-decoding has to take place.
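To make the flow above concrete, here is a minimal sketch of such a service worker. The `decodeBPGToJPEG()` helper and the cache name are hypothetical stand-ins (the real library bundles an asm.js BPG decoder), so treat this as an illustration of the intercept/transcode/cache pattern rather than the actual implementation:

```javascript
// Decide whether a request targets a BPG image we should transcode.
function isBPGRequest(url) {
  return /\.bpg(\?.*)?$/i.test(url);
}

// Only register the handler when running inside a worker-like context.
if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('fetch', (event) => {
    if (!isBPGRequest(event.request.url)) return;
    event.respondWith(
      caches.open('bpg2jpg-v1').then((cache) =>
        cache.match(event.request).then((cached) => {
          if (cached) return cached; // already transcoded: serve instantly
          return fetch(event.request)
            .then((response) => response.arrayBuffer())
            .then((bpgBytes) => {
              // decodeBPGToJPEG() is a hypothetical helper wrapping the
              // asm.js decoder plus a JPEG encoder, off the UI thread.
              const jpeg = new Response(decodeBPGToJPEG(bpgBytes), {
                headers: { 'Content-Type': 'image/jpeg' },
              });
              cache.put(event.request, jpeg.clone()); // store for next load
              return jpeg;
            });
        })
      )
    );
  });
}
```

On a repeat request for the same image, the `cache.match()` call short-circuits the whole pipeline, which is why images appear instantly on subsequent loads.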
…how is this an improvement again?
One could argue that downloading a few hundred kilobytes of script offsets the gains from smaller images, but as I said, this is just a proof-of-concept library.
It is easy to see that for sites that use lots of images (think Facebook), using a better image compression algorithm could be a huge win — no wonder Facebook was experimenting with the WebP format on WebKit-based browsers.
The current implementation also doesn’t account for the input image quality, and re-encodes images at 80% JPEG compression. This might not be desirable, as it may result in loss of detail for some images, but that could easily be overcome by building in adaptive encoding quality (input-quality detection), or by switching to a different output format (e.g. PNG). This is all a matter of how sophisticated the lib is, and of balancing storage (cached size) against quality.
All in all, this is not a one-size-fits-all library that you could drop into your site and use in production (please, don’t use this in live production!), but it does demonstrate that such a library is possible, and even worthwhile to create.
A word on Progressive Enhancement
While you shouldn’t be using the above script in production, you could very well be using service workers in production, as Google Chrome, at least, has supported them for several versions now. You can progressively enhance your app experience and reduce bandwidth and server load for millions of your app’s users.
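Enhancing progressively starts with a feature-detected registration on the page side. A minimal sketch might look like this (the `/sw.js` path is an assumption; use whatever your worker script is called):

```javascript
// Returns true when a navigator-like object advertises service worker support.
function supportsServiceWorker(nav) {
  return Boolean(nav && 'serviceWorker' in nav);
}

// In unsupporting browsers this whole block is a no-op, so the page
// simply keeps its plain, un-enhanced experience.
if (typeof navigator !== 'undefined' && supportsServiceWorker(navigator)) {
  navigator.serviceWorker
    .register('/sw.js')
    .then((reg) => console.log('service worker registered, scope:', reg.scope))
    .catch((err) => console.error('service worker registration failed:', err));
}
```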
If you wanted to use something similar to the above code in production, you could simply include the .jpg links in your HTML and link the service worker in your page to let it do all the hard work! On platforms that don’t support service workers, the browser would simply download and display the JPG images — done! On supporting platforms, though, you could have your service worker intercept image requests, automatically request BPG images instead, and transcode them accordingly. Note that this solves the initial-load problem as well: on the very first site load (when your service worker is not yet loaded/installed), the regular JPG images will be downloaded and shown.
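That request-swapping idea could be sketched roughly like this inside the service worker. `transcodeBPG()` is again a hypothetical stand-in for the decode/re-encode step, and the sketch assumes the server hosts a `.bpg` file next to every `.jpg`:

```javascript
// Map a JPEG URL to its (assumed) BPG counterpart on the server.
function toBPGUrl(url) {
  return url.replace(/\.jpe?g(\?|$)/i, '.bpg$1');
}

if (typeof self !== 'undefined' && typeof self.addEventListener === 'function') {
  self.addEventListener('fetch', (event) => {
    const url = event.request.url;
    if (!/\.jpe?g(\?|$)/i.test(url)) return; // only touch JPEG requests
    event.respondWith(
      fetch(toBPGUrl(url))
        .then((response) => response.arrayBuffer())
        .then((bpgBytes) =>
          // transcodeBPG() is hypothetical: decode BPG, re-encode as JPEG
          new Response(transcodeBPG(bpgBytes), {
            headers: { 'Content-Type': 'image/jpeg' },
          })
        )
        .catch(() => fetch(event.request)) // fall back to the original JPG
    );
  });
}
```

The `.catch()` fallback doubles as a safety net: if the BPG variant is missing or the transcode fails, the user still gets the plain JPG.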
How about other browsers?
Developers at Mozilla are hard at work adding service worker support; you can already try the above example in Nightly versions of Firefox, though you might need to set some configuration entries manually for it to work. By using the progressive enhancement techniques outlined above, your code will work with new browsers’ service workers out of the box as they are released.
…or “How did we end up here?”
I have been following the evolution of service workers from the very beginning, and wow, it has been some journey! Service workers have grown into quite a hefty specification (especially with all the linked parts, like Fetch or Push, and prerequisites like Promises), so it can take quite some effort to wrap your head around all this shiny new stuff that’s going into the browser.
For service workers, this could have been even worse — things are starting to settle down now, but up until a few months ago the API could change right out from under you, implementations were in alpha state, bugs were abundant, and documentation was scarce and possibly outdated. In an environment like this, staying sane meant reaching out to the experts: people living and breathing the specification, like Alex Russell, Jake Archibald or Anne van Kesteren, or those working on the browser implementations, like Ben Kelly or Matt Falkenhagen.
It was a journey in and of itself!
In such a volatile environment, the MDN Fellowship turned out to be a national treasure.
For the MDN Fellowship, five fellows were selected from five different functional areas (from testing and teaching, through performance, to WebGL and service workers), who then worked together with Mozilla experts on pushing their respective areas forward and on developing teaching/learning materials to be used later in spreading knowledge in those fields.
As one of those lucky fellows, I positively cannot thank Diane Tate, Chris Mills and all the others enough — especially my mentors Anne van Kesteren & Brittany Storoz, whose help was invaluable in getting here.
MDN Content Kits
Teaching the web — one repository at a time
So what came out of the 7 weeks of the fellowship, besides the lifelong friendships formed and the service worker demos that emerged?
Kick-ass learning materials!
One of the goals of the Fellowship was for the fellows and mentors to work out an outline for teaching/learning materials covering their areas of expertise, in the MDN Content Kits format.
MDN Content Kits are condensed sources for not just learning, but teaching topics of various types and origins. They aim to serve as collected sources for teaching their respective topics, containing the slides, resources, demos and information needed for someone who is (at least vaguely) familiar with a topic to be able to give talks, run workshops, or otherwise teach it to individuals or groups.
During the Fellowship, with the help of my mentors, I managed to put together the baseline of an MDN Service Worker Content Kit, which is intended to collect service-worker-related learning and teaching materials, as well as resources for organizing workshops built around service workers, like the one based on the above library: “Implementing one’s own service worker”.
Content Kits are also collaborative, in that they encourage reuse and contributions from various fields — so if you feel competent, or have used them and have some feedback to add, be sure to contribute: send a pull request with your edits, resources, presentations etc., or file issues to follow up!
Dear Reader, if any of the above inspired you to do something cool and/or useful with service workers, yay! Please drop me a tweet (@slsoftworks), or just write about your project and include it as a response here on Medium!
Also, if you have time, pull requests are welcome for the demo repository, as well as for the Service Worker Content Kit — file issues, share your feedback, teach the web, and generally don’t forget to be awesome while doing it!
* Please note that this still blocks the service worker thread, which is not at all ideal, but we will need to wait for workers-in-service-workers before that can be resolved satisfactorily.