Engineering update on Tofino

Joe Walker
Published in Project Tofino · Nov 15, 2016


We’ve spent several months testing UI concepts, understanding Electron’s relationship with the web, trying out architectural ideas (such as a separate user agent service to handle browser data), and exploring the best way to create a browser UI using web technology. During the Tofino project timeline, the Firefox/Gecko team has also outlined an ambitious effort to really push the web platform itself forward with Project Quantum! Given all that, it’s probably time for a quick update on what we’ve learned building browser concepts outside the constraints of the current Firefox implementation.

Any list of learnings risks sounding negative (as a list of battles fought, it naturally focuses on imperfections); but if this were just a list of the obvious things we tried that turned out to work fine, there would be little point in reading or writing it.

With that in mind, here’s what we’ve learned so far.

Electron

As an application platform, Electron is fantastic for building small, simple client applications that involve a single main window. For larger applications it’s likely that you will need to fork Electron in order to get what you need. Project Tofino already runs on a fork of Electron, and so does Brave.

A few examples of issues that we have run into:

  • By default Electron ships with video and audio codecs that require licenses for use. We’re not lawyers, but it might make sense to consult one depending on your use-case.
  • Electron is not designed to build browsers. There are some cases where web behaviours are broken and can’t be fixed without breaking Electron functionality (e.g. not returning values that web content would expect).
  • The Electron process model uses one process per window and another process per tab. This leads to many processes when used at the sorts of scale we’ve seen from Firefox users. Since each process requires significant OS-level resources a browser is forced to do non-trivial process management (like clever shared forking) to keep overhead low. That’s likely to be hard with Electron.
  • The main test harness for Electron, Spectron, has proven unreliable for us. On OS X we had to disable our minimal application tests because they were failing intermittently far too often to be useful.
  • Because Electron uses a fork of Node, any native addon modules have to be recompiled to run correctly. This causes problems when running unit tests outside of Electron.

We’ve also found a couple of issues with the Node ecosystem, which are more obvious when delivering client applications:

  • If you’re shipping code built with npm, you should really check that you are OK with shipping your code in a bundle that could be considered to have been “compiled” with GPL code. Tools like license-checker can help.
  • The problems of fragile transitive dependencies and difficulties with npm-shrinkwrap are well known.

Electron is excellent for porting websites to the desktop, and is also great for prototyping a new browser. It’s clearly also possible to ship a browser to many people using Electron, but at heart Electron is designed around use-cases like Atom (obviously), VS Code, Slack, etc., so it might not be the correct platform for a long-term future browser.

User Agent Service

Firefox, like the Mozilla suite before it, is component-oriented. Chunks of code like the history store, the cookie manager, and the network library are each wrapped up in a classic COM-style interface and made available to the rest of the system in a language-agnostic way. Each of these parts — UI-centric or not — is connected in a dependency web. For example, the “new tab” page, the history view, the preferences window, and Firefox Sync all talk to the same read-write history API… and classic Firefox add-ons can talk to all of these components, from the clipboard through to preferences.

Tofino evolved into a different kind of architecture, one that reflects the different challenges we face around managing change and complexity. This architecture is layered. The web rendering engine itself is self-contained, with narrow, well-defined points of integration for the rest of the application to see what’s happening — page title changes, for example.

Similarly, storage and exploration of the user’s data is contained within a user agent service — a separate chunk of code that exposes a profile data storage service over https and websockets.

This layering gives us flexibility to explore new interfaces without tying ourselves in knots. It might enable add-ons that look more like vanilla web properties which use the user agent service instead of privileged JavaScript APIs. It also enables us to work on new kinds of data storage without the complexity of multiple direct consumers.

Creating a Browser UI using Web Technology

From a UI standpoint, we’re using modern techniques for frontend development. We’re not the first to do this by any means. Browser.html, Vivaldi, Min and many others have beaten this path. In many ways Firefox itself is a precursor to this way of doing things if you squint and pretend that XUL is a widget library for HTML.

JavaScript modules and Webpack’s filesystem watching have made for quick iteration cycles, and Babel means we can use modern JavaScript features today. This led to a world where we could write very maintainable code thanks to async/await, classes, standard imports/exports and so on, all while just pressing F5 to reload the whole browser, just as one would in a normal webpage. Hot module reloading was also useful while writing state-dependent code, like our “overview page summaries”.

We’ve chosen React and Redux for writing our UI and managing our application state. We found this combination good for writing maintainable code and quickly prototyping different views and the interactions between them. Compared to the XUL code that much of Firefox uses, we think React+Redux strongly encourages multiple developers to code in a unified style, and means that our views, stores and actions can be altered straight away by someone unfamiliar with the codebase.
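For readers unfamiliar with the pattern, here is a hand-rolled sketch of the Redux model applied to browser state. It avoids the redux dependency for self-containment, and the action names and state shape are our own illustration, not Tofino’s real ones.

```javascript
// Minimal hand-rolled version of the Redux pattern: a single store,
// updated only by dispatching actions through a pure reducer.
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);
      listeners.forEach((l) => l());
    },
    subscribe(listener) { listeners.push(listener); },
  };
}

// Reducer: a pure function from (state, action) to the next state.
function tabs(state = { tabs: [], selected: -1 }, action) {
  switch (action.type) {
    case 'CREATE_TAB':
      return {
        tabs: [...state.tabs, { url: action.url }],
        selected: state.tabs.length, // select the newly created tab
      };
    case 'CLOSE_TAB': {
      const remaining = state.tabs.filter((_, i) => i !== action.index);
      return {
        tabs: remaining,
        selected: Math.min(state.selected, remaining.length - 1),
      };
    }
    default:
      return state;
  }
}

const store = createStore(tabs);
store.dispatch({ type: 'CREATE_TAB', url: 'https://mozilla.org' });
store.dispatch({ type: 'CREATE_TAB', url: 'https://example.com' });
console.log(store.getState().tabs.length); // 2
```

Because every change flows through the reducer, views stay a pure function of state, which is what makes them easy for a newcomer to alter.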

We discovered that we needed to be proactive about performance, through automated testing and/or careful reviews.

Furthermore, developing with React outside the standard, carefully tailored environment of a web page was difficult at times. Managing non-standard DOM nodes that required non-standard attributes was unfriendly: it required writing custom wrappers, prefixing with “data-”, or hacking our way through using the magical “is” component property. This led to potential confusion, because component properties were no longer magically massaged when mapped to DOM attributes: for example, “className” had to be written as “class” instead, leading to easily fixed but frustrating bugs. The biggest problems in this department arose when dealing with <webview> nodes in Electron.
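The asymmetry can be illustrated with a toy mapper. This is not React’s actual implementation — just a sketch, under the assumption (which matched the React versions we used) that prop names are translated for known HTML elements but passed through verbatim for non-standard ones like <webview>:

```javascript
// Toy illustration of the prop-massaging asymmetry: for standard
// elements, React-style props such as "className" are translated to
// DOM attributes; for unknown elements they pass through untouched,
// so "class" must be written directly. Names here are illustrative.
const REACT_PROP_TO_ATTR = { className: 'class', htmlFor: 'for' };
const STANDARD_ELEMENTS = new Set(['div', 'span', 'a', 'input']);

function toDomAttributes(tagName, props) {
  const massage = STANDARD_ELEMENTS.has(tagName);
  const attrs = {};
  for (const [key, value] of Object.entries(props)) {
    const attr = massage && REACT_PROP_TO_ATTR[key]
      ? REACT_PROP_TO_ATTR[key]
      : key;
    attrs[attr] = value;
  }
  return attrs;
}

console.log(toDomAttributes('div', { className: 'toolbar' }));
// → { class: 'toolbar' }
console.log(toDomAttributes('webview', { className: 'page' }));
// → { className: 'page' } — which the DOM silently ignores
```

The second case is exactly the class of easily fixed but frustrating bug described above.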

Strictly adhering to the Redux model in Electron was also difficult. Single-store application state assumes a single process per application, but a web browser (in Electron or otherwise) necessarily spans multiple processes, so the easy abstractions and “best practices” that work in webpages were much harder to respect in practice. This led to an initial architecture with multiple application states, one per process, communicating over IPC. Our final approach was to mimic the web and use websockets to synchronize the multiple application states: after all, synchronizing multiple instances of the same React+Redux application is a known problem on the web, and multiple solutions exist. However, it is still not clear which is the best way to handle this issue.

Next Steps

We’re currently working on two things. Having spent several months hacking on different UI ideas, we’re shifting focus slightly to investigate more foundational problems, like “remaining performant with many tabs open”, and so on. Once we have something that we feel covers the bases for at least a subset of users, we’ll return to UI experiments; we’re shooting for a v0.1 that we’re all committed to using as our daily driver.

While we’re working on that, we’re also evolving our next-generation UI. It has an overview tab, allows collections of pages and shortcuts as a better version of bookmarks, and allows for smarter searching of your personal history.

Often posts say things like “I’d like to thank X, Y and Z for reviewing this post”. In this case I’d like to thank Dave Townsend, Richard Newman and Victor Porof for actually writing it. The rest of the Tofino team did the reviewing. I just ran it together and tweaked.