Issue 4: Offline badging, DevTools, Testing, Travis, Web Storage, Service Worker Scopes, Data-driven Development, Compute Engine

Addy Osmani
Totally Tooling Tears
11 min read · Apr 15, 2016

Totally Tooling Tears is a companion to Matt Gaunt and Addy Osmani’s YouTube show Totally Tooling Tips. It’s a raw weekly brain-dump of notes, issues & workarounds we’ve found building apps and libraries over the last 7 days.

Do we need a consistent way to indicate ‘offline’ support to users on mobile? — Addy

Once you have Service Worker set up for offline caching, signalling to users that sites and web apps are available without a network connection is still an ongoing educational challenge. I don’t think we’ve quite hit the sweet spot for what the best UX here looks like. On our team, we’ve been using Material Design snackbars to offer some indication of offline support (see Smaller Pictures, Voice Memos, Offline Wikipedia):

But it’s clear that user education about this offline expectation is going to require a little more work initially. Something we would personally love to see is better “offline badging” at a UA level for PWAs: something that consistently indicates a web app works offline. In our research into whether the Chrome UX team have thought about this, it looks like badging has been an ongoing exploration, albeit for static content sites (simple pages), as part of the Offline Bookmarks widget in Chrome Dev for Android.

Offline Bookmarks support

Let’s take the recent Material Design overhaul of this Bookmarks widget. When I bookmarked an item and launched it from there in recent Chrome Dev builds, I was presented with a nice offline workflow. It let me view pages I’d previously saved offline and revisit content I regularly read, and it actually worked quite well:

Offline Bookmarks in Chrome Dev. Notice some of the treatment includes indication of total size for the cached page/site.

One thing in particular that caught our eye was the use of the Omnibar for an ‘Offline’ badge. Not working on the UX team, I obviously can’t speak to their plans here, but I would love to see browser vendors explore how we could use signals (served from a secure origin, registers a Service Worker, etc.) to show something similar for PWAs.

Offline badging for sites ‘saved’ using the Offline Bookmarks feature. Not currently a feature targeted at PWAs.

Where this becomes potentially less useful, were it applied to PWAs, is when an app is launched fullscreen from the homescreen: the omnibar is no longer present there, and one could argue that native apps don’t have to worry about consistent ‘offline’ badging because users already expect them to just work in certain network conditions. I guess we’ll have to wait and see how these explorations evolve :) In the meantime, we’re likely to continue using variations on the snackbar approach.

Testing. Testing. Testing — Matt

Last episode, Matt found a lovely snippet to alter the notification permission in Firefox (i.e. set the current page’s notification permission to granted, denied or prompt). This prompted more research into Chrome, and he eventually got there.

In case you were wondering, Matt did this by opening the profile’s Preferences file on his Linux machine (gedit ~/.config/google-chrome/&lt;Profile Name&gt;/Preferences), running it through a JSON pretty-printer (most text editors get cranky when you search a single-line file) and then searching for a known URL that had notifications enabled. We mention it in case someone wants to try enabling other permissions in Chrome.

I wish we had better Web Storage debugging tools — Addy

I’ve been working on some offline improvements to an app using React, Service Workers, localStorage, sessionStorage, WebSQL and Firebase this past week. If you’re debugging storage in Chrome DevTools when using a few different storage mechanisms, you might be in for a fun time. Let’s talk about Resources:

* Firetruck is a WIP Firebase offline caching adapter I was hacking on for this project.
  • There is no way (that I could find) to clear out all possible storage mechanisms for a specific origin in one click. So if I want to clear out IndexedDB, localStorage, sessionStorage, WebSQL and the Cache API, I have to do it either 1) manually per reload (ugh, imagine doing this during development) or 2) by writing a programmatic helper for a specific set of conditions. A lot of the extensions on the Chrome Web Store that tried to offer an alternative storage-clearing mechanism are busted. This is different from wanting to clear the browser cache: I specifically just want to clear out the above offline storage mechanisms. Nolan Lawson suggested using the ClearBrowserData extension, which clears IndexedDB, WebSQL and localStorage, but it also clears out all browser data and doesn’t work against the Cache API. I’d ❤ it if we had something better built in by default.
  • Proactive performance insights into your use of storage are not available. Writing and reading hundreds or thousands of records to IDB or one of these other storage mechanisms can lead to some unexpected performance pitfalls cross-browser (e.g. on mobile, writes taking 1500ms+ for 10K records). Tools like Nolan Lawson’s browser database comparison are useful here, and the tl;dr is that we should probably be doing more of our work in Web Workers. I also wanted to call out IndexedDB: the good parts (also by Nolan), which is worth a read. A thorny issue here is that IDB sadly still has a ton of performance gotchas which the PouchDB community and beyond run into every day:
  • Nolan also brought up that it’s difficult to understand what’s happening under the hood in IDB. It’s possible to set breakpoints, but if you’re in the middle of a transaction it’s hard to see what’s going on. Occasionally, outside of a transaction, it just won’t update in DevTools (even after right-click → Refresh IndexedDB). I agree with his idea that having something like the Network tab would be very useful here: something that lists all gets/puts/transactions in chronological order.
  • IndexedDB in Resources only lets you view a fixed number of items in storage at a time, in a view that doesn’t make it particularly easy to tell whether a stringified piece of data has stored all its keys correctly or only some of them. If you want to check, you need to right-click and copy the contents of the key-value back into your editor.
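For the first point above, here’s a rough sketch of the kind of programmatic helper I mean. It’s browser-only, and `indexedDB.databases()` isn’t available in every browser, so treat that part as an assumption and hard-code your database names if you need to:

```javascript
// Rough helper to wipe the offline storage mechanisms above for the
// current origin. localStorage, caches and indexedDB are browser
// globals; this won't run under Node.
async function clearOriginStorage() {
  localStorage.clear();
  sessionStorage.clear();

  // Cache API (the caches used by service workers).
  if (typeof caches !== 'undefined') {
    const keys = await caches.keys();
    await Promise.all(keys.map((key) => caches.delete(key)));
  }

  // IndexedDB. databases() enumeration isn't universally supported.
  if (typeof indexedDB !== 'undefined' && indexedDB.databases) {
    const dbs = await indexedDB.databases();
    await Promise.all(dbs.map((db) => new Promise((resolve, reject) => {
      const req = indexedDB.deleteDatabase(db.name);
      req.onsuccess = resolve;
      req.onerror = () => reject(req.error);
    })));
  }

  // WebSQL has no enumeration or delete API; the best you can do is
  // openDatabase() per known database and DROP its tables.
}
```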

We hope to have a chat with the DevTools folks with our feedback some time soon ❤

Travis + Chrome — Matt

There are a few libraries that we have set up for testing service workers for push notifications and offline support.

In a recent update of Chrome stable there is an unmet dependency for anyone trying to get Chrome running on Travis.

If you are one of the few running Chrome on Travis, the fix is simple: switch to the trusty VM by adding the following to your .travis.yml:
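Something like this (key names as per the Travis trusty beta docs at the time; double-check against the current Travis documentation):

```yaml
# Run builds on the Ubuntu Trusty (14.04) infrastructure, which has
# the dependencies Chrome stable now needs.
dist: trusty
sudo: required
```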

More info can be found on the rather awesome Travis blog.

DevTools Service Worker Updates — Addy

One change to our DevTools Service Worker debugging workflow this week was the addition of a ‘Show all’ checkbox. It’s aimed at helping us debug Service Workers before opening up a specific page. So… let’s check it.

And… here’s what happens: it displays all registered Service Workers, so we can debug them from the same view in Resources that we’re already in.

Data Driven Development — Matt

A brain dump from Matt of an observation from web discussions in the office.

When it comes to new features on the web, one of the first questions people ask is “What is the browser support like?” or, more accurately, “What about browser X?”, particularly with some of the latest service worker features.

It’s a great question, but it’s often followed by, “Well, if browser X doesn’t support it then it’s not worth thinking about for now”. Shouldn’t that statement be followed up with: are you sure?

If you have a user base where 80% of the audience is on a particular browser, then investing tonnes of time and effort into a feature they’ll not see anytime soon doesn’t make sense, especially if you have a backlog of other work. But if a feature is supported in that browser and it’s a progressive enhancement (i.e. it doesn’t degrade the experience in other browsers), then why not implement it for the 80% of your audience? Do you really need the exact same experience across all browsers?

The on-going debate about AppCache and Service Worker — Addy

A post by @firt was making the rounds this week entitled “Service Workers replacing AppCache: a sledgehammer to crack a nut”. While many of us discussed it on Twitter, the central thesis was that he felt we needed a higher-level API on top of SW (with the ease of use of AppCache), without which we may not see the masses getting into SW (the jQuery of Service Worker, as he called it). We think this misses that the reason browser vendors are investing in low-level APIs is to enable developers to build abstractions in library-land that solve real-world use cases. Standards bodies can then take these patterns and try to write specs for them later on. That’s what the Extensible Web Manifesto is all about.

AppCache is broken, and we also need to recall that if you’re going to shoot for any sort of sanely constructed app with AppCache, you need to work via FALLBACK: i.e. serve up the same offline page for each navigated URL, wait till it’s awake, figure out what to do, and forward the user to the true URL with a client-side redirect from inside the FALLBACK document. This is the opposite of fun, thanks to master entry issues. We’ve personally found Service Worker a lot more pleasant to work with.
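For anyone who hasn’t suffered it, the FALLBACK dance looks roughly like this in a cache manifest (offline.html is a hypothetical catch-all page that inspects location and client-side redirects to the content the user actually wanted):

```
CACHE MANIFEST
# Any URL not explicitly cached falls back to the same offline page,
# which then has to work out where the user was really going.
FALLBACK:
/ /offline.html
NETWORK:
*
```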

That said, @firt raised some valid points about ease of use. I think we need to ensure that low-friction, just-as-easy-to-grok libraries and tools exist around Service Worker to help transition folks away from AppCache and move the masses over. This is one of the problems our team has been working on this year, and I can recommend checking out sw-toolbox, sw-precache, push-encryption-node and Propel for early fruits of these efforts (we always appreciate feedback!) :)

Service Worker Scopes 101 — Matt

A cheeky few notes from problems developers have been hitting:

Let’s say we have a SW file at:

/scripts/serviceworkers/blog/blog-sw.js

The minimum scope for this file would be:

/scripts/serviceworkers/blog/

What does that mean?

Well, a service worker can only “control” a page whose URL starts with that scope.

The following URLs ARE NOT controlled by blog-sw.js (their paths don’t start with the scope), for example:

/

/scripts/

/scripts/serviceworkers/

The following URLs ARE controlled by blog-sw.js, for example:

/scripts/serviceworkers/blog/

/scripts/serviceworkers/blog/posts/my-first-post/

The reason ‘/scripts/serviceworkers/blog/blog-sw.js’ will have a scope of ‘/scripts/serviceworkers/blog/’ is because if you register a service worker like so:
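That is, a registration with no scope option passed. A sketch, with a feature-detect guard and an example origin added so the snippet is safe to run anywhere:

```javascript
if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
  navigator.serviceWorker.register('/scripts/serviceworkers/blog/blog-sw.js');
}

// With no scope option, the default scope resolves to the directory
// containing the service worker file:
const defaultScope = new URL(
    './', 'https://example.com/scripts/serviceworkers/blog/blog-sw.js'
).pathname;
// defaultScope === '/scripts/serviceworkers/blog/'
```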

The default scope is the location of the service worker file.

Still with me? If so, let’s discuss how we can change this default scope.

When you register a service worker, you can pass in a scope parameter to change this default.

And let’s say we wanted ‘blog-sw.js’ to only control pages under ‘/scripts/serviceworkers/blog/posts/’, we’d do that by registering the service worker like so:
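A sketch of that call (the guard makes it safe outside a browser; the startsWith check just restates what “controlled” means):

```javascript
if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
  navigator.serviceWorker.register('/scripts/serviceworkers/blog/blog-sw.js', {
    scope: '/scripts/serviceworkers/blog/posts/'
  });
}

// "Controlled" just means the page URL starts with the scope:
const scope = '/scripts/serviceworkers/blog/posts/';
const controlled =
    '/scripts/serviceworkers/blog/posts/my-post/'.startsWith(scope);
// controlled === true
```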

That changes the scope to match any URL starting with ‘/scripts/serviceworkers/blog/posts/’.

A more common (and useful) example of the above would be to register the service worker with a scope ‘/blog/’:
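That registration attempt would look like this (a sketch; as the next paragraph explains, it won’t be allowed by default):

```javascript
// Try to give blog-sw.js a scope *above* its own directory. Without
// the Service-Worker-Allowed header, register() rejects with a
// SecurityError.
if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
  navigator.serviceWorker
      .register('/scripts/serviceworkers/blog/blog-sw.js', { scope: '/blog/' })
      .catch((err) => console.error('Registration failed:', err));
}

// The requested scope doesn't sit under the SW file's directory:
const withinDefault = '/blog/'.startsWith('/scripts/serviceworkers/blog/');
// withinDefault === false, which is why this registration is rejected.
```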

This would say that ‘blog-sw.js’ can control anything starting with ‘/blog/’. However, this will throw an error :(

The reason it throws an error is that you can’t set the scope of a service worker to a path shorter than (i.e. above) the directory containing the service worker file.

Hopefully this all makes sense so far. There is one last thing you might hear about, and that’s the ‘Service-Worker-Allowed’ header. If a server returns this header with the response for a service worker file, it can define a different, allowable scope.

With this header, you can call the register method like we wanted before:
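Putting it together: the header is set on the server’s response for blog-sw.js (shown as a comment here, since how you set it depends on your server), and the client-side call is wrapped in a hypothetical helper:

```javascript
// Server side, the response serving blog-sw.js must include:
//
//   Service-Worker-Allowed: /blog/
//
// Client side, the register call we wanted all along:
function registerBlogServiceWorker() {
  return navigator.serviceWorker.register(
      '/scripts/serviceworkers/blog/blog-sw.js',
      { scope: '/blog/' });
}
```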

Compute Engine Tears — Matt

The backstory: I got https://staging.gauntface.com/ up and running a few days ago.

Essentially, I had an issue with the PHP backend that caches the rendered HTML for a certain period of time, BUT treats the following URLs the same:

/blog

/blog?output=partial&section=page

The first of those is the full HTML page and the latter is an API response (i.e. JSON). Since they shared a cache entry, the site would end up always serving either just the HTML or just the JSON version for both.
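The actual backend is PHP, but the bug generalises. A sketch in JavaScript of cache keys that ignore versus include the query string (function names are made up for illustration):

```javascript
// Buggy: keying the cache on pathname alone makes /blog and
// /blog?output=partial&section=page collide.
function badCacheKey(url) {
  return new URL(url, 'https://staging.gauntface.com').pathname;
}

// Fixed: include the query string in the key.
function goodCacheKey(url) {
  const u = new URL(url, 'https://staging.gauntface.com');
  return u.pathname + u.search;
}

const page = '/blog';
const partial = '/blog?output=partial&section=page';
// badCacheKey(page) === badCacheKey(partial)   -> the collision
// goodCacheKey(page) !== goodCacheKey(partial) -> distinct entries
```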

That was fine; the fix was simple. However, to make changes I had to spin up a new VM on Compute Engine, and I realised the new VM wouldn’t have access to Cloud SQL: I would have to give the Compute Engine instance’s IP address access to the SQL VM.

What I wanted to do to remedy this:

  • Give all VMs in my project access to the SQL VM

What I got:

  • Option 1: Pay for a static IP address and assign that to any Compute Engine instances
  • Option 2: Use the Cloud SQL Proxy, which meant:
    • The Cloud SQL Proxy only works with second-generation SQL instances (which I wasn’t on), so:
    • Export the SQL DB from the old database
    • Create a new second-generation SQL instance
    • Import the SQL DB
    • On my Compute Engine instance, do a dance to get the Cloud SQL Proxy executable and run it
    • Figure out why my instance couldn’t access the Cloud SQL Proxy (it was a permission issue: I hadn’t set a scope of ‘sql-admin’. How ‘sql-admin’ differs from the ‘sql’ scope? Not a scooby doo.)
    • Then figure out how to get Docker using that SQL socket
    • Then figure out how to make my PHP framework use a SQL socket rather than a hostname

Anyway, it just sucked having such a long, drawn-out and painful experience setting this stuff up when all I wanted was to say: “Hey, Compute Engine, you own these things and they’re talking to that thing you also own. Can you just make an introduction and keep ’em working together? Thanks.”

Cool links this week

New Totally Tooling Tips: WebPageTest

Last, and almost certainly least, we just published a new mini-tip covering how we both use WebPageTest to performance-profile our apps. Check it.
