React Europe 2019 Takeaways

Edward Mortlock
Sainsbury’s Tech Engineering
12 min read · May 31, 2019

I was lucky enough to be sent by Sainsbury’s to the React Europe event held in Paris recently. Coming back, I thought a round-up of my take on the talks that resonated with me would help head off giving the same answers to everyone on my return to the office. Plus I forgot to bring back the obligatory foreign biscuits, so I’ll need to keep my head down for a while anyway.

Disclaimer (aka don’t sue me)

To give a bit more context to what interested me at the event: I am the lead developer on the Luna Design System at Sainsbury’s, which provides teams with a React component library and Sass styling for use in products across the group. This means my raison d’être is UI development, library authoring, and management, so the talks I highlight below lean towards that side of front end development.

All the talks were great and the speakers did an excellent job presenting interesting content in engaging ways. The choice of which ones I highlight is in no way meant as any kind of judgement. Check out the conference’s YouTube channel for the full series of talks.

As a final caveat, there was a generous amount of free beer available, so feel free to set me straight if my recollection of a particular talk has been affected.

Completely unrelated picture…

With all that said, let’s get on with it!


State of React — Jared Palmer

The event kicked off with Jared Palmer talking about what we can expect to see from React in the future. The key message was that recent developments have been bringing React closer to core JS, and we can expect that to continue. A big instigator of this is hooks, which played a key role in many talks at the event. Hooks let you bring logic back into the render as standard JS functions rather than relying on framework-specific solutions like lifecycle methods.

Framed around the aim of producing a UI that is both fast and pretty, Jared gave an exciting demo of how using React Suspense in combination with a caching layer will be a game changer for progressive app loading in the future. Where current approaches to loading optimisation result in spinners for each block of content, Suspense lets you coordinate those separate data fetches into a single loading state. You try to read from the cache and, if the data isn’t there yet, throw the pending fetch promise from the catch; a parent Suspense element then catches it and manages the loading state until it resolves. Unfortunately, this is still experimental, with key functionality such as SSR support in progress, but I hope we’ll be seeing more about this in the future.
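As a rough illustration of that pattern (a minimal sketch rather than the exact caching API shown in the demo, with fetchPost and readPost as hypothetical helpers):

```jsx
import React, { Suspense } from 'react';

// Extremely naive cache keyed by post id (illustrative only)
const cache = new Map();

function fetchPost(id) {
  return fetch(`/api/posts/${id}`).then((res) => res.json());
}

// Returns cached data if present, otherwise throws the in-flight promise
// so a parent <Suspense> boundary can show its fallback until it resolves.
function readPost(id) {
  const entry = cache.get(id);
  if (entry && entry.data) return entry.data;
  if (entry) throw entry.promise;
  const promise = fetchPost(id).then((data) => cache.set(id, { data }));
  cache.set(id, { promise });
  throw promise;
}

function Post({ id }) {
  const post = readPost(id); // suspends while the fetch is in flight
  return <article>{post.title}</article>;
}

export default function Page() {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <Post id="1" />
    </Suspense>
  );
}
```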

Brandon Dail gave more information on the Scheduler in his talk the next day, where he covered the science behind why waiting for the page to be ready can be better than incrementally popping in content: the just-noticeable difference threshold for content changes is much lower than our ability to judge durations, so pop-in leads to a perception of something being slower than it actually is. He also covered the priority levels the Scheduler exposes, which allow more intensive (but non-critical) tasks to be run whenever there is idle time without affecting performance.
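To give a flavour of those priorities, here is a small sketch using the experimental scheduler package that React itself is built on (the APIs are prefixed unstable_ for good reason, and the task functions are just placeholders):

```js
import {
  unstable_scheduleCallback as scheduleCallback,
  unstable_UserBlockingPriority as UserBlockingPriority,
  unstable_IdlePriority as IdlePriority,
} from 'scheduler';

// Placeholder tasks purely for illustration
const updateVisibleContent = () => console.log('updating visible content');
const prefetchNextPage = () => console.log('prefetching next page');

// Work the user is waiting on: scheduled at a high priority
scheduleCallback(UserBlockingPriority, updateVisibleContent);

// Intensive but non-critical work: only runs when there is idle time
scheduleCallback(IdlePriority, prefetchNextPage);
```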

As a bit of a rapid-fire round, Jared also highlighted some of the overhauls currently going on within the React space.

Jared finished with a great point: as React is now the incumbent project for UI we need to support developers starting out because of React, who may not have prior JS knowledge. We also need to encourage all contributors to the ecosystem whether they are working on functionality, raising issues, or correcting typos in documentation (or my post for that matter).

Fun fact: instead of saying “party on” Keanu was supposed to say “write more documentation”

Saving the Web, 16ms at a Time — Joshua Comeau

Set against the bleak prospect of mobile apps making the web redundant for a lot of people, thanks to the more integrated and snappier experiences they provide, Joshua made the case for setting the bar higher for web animations. He made the obvious but overlooked point that, for a lot of people (myself included), optimisation effort focusses almost exclusively on load times rather than on what happens once the page has loaded. The loaded page should be where the user spends the overwhelming majority of their time; if it isn’t, you should take a look at your load times.

Whilst I am a firm believer that “JavaScript all the things!” is the right choice regardless of circumstance, Joshua did highlight that animations controlled with JS run on the main thread along with all the other computations, which can lead to janky motion if other work is happening at the same time.

When you start testing out your set of swanky slideToggle animations

The most obvious alternative is to use CSS animations, but animating properties that cause the browser to update the layout or repaint elements can also be costly. By limiting yourself to opacity and transform, where the GPU can assist, you can avoid this. Sticking to this can mean compromise though: one example given was an accordion where the container snaps to full size but transform is transitioned on the contents to give the feel of motion. So whilst this is more performant, you should judge it in context, as it may not be worth sacrificing a more pleasant animation on something where the browser’s layout and paint overhead is going to be small. Another aspect of revealing new content discussed was the standard approach of using isOpen && children within your component, where in many cases using CSS display to control visibility would be more performant as it avoids remounting (sketched below).
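A minimal sketch of that remount-versus-display point (not the exact example from the talk):

```jsx
import React from 'react';

// Conditionally mounting remounts the subtree on every open/close,
// rebuilding its DOM and losing any internal state.
function PanelWithRemount({ isOpen, children }) {
  return <div>{isOpen && children}</div>;
}

// Keeping the children mounted and toggling visibility with CSS is
// cheaper: only the display value changes between renders.
function PanelWithDisplayToggle({ isOpen, children }) {
  return <div style={{ display: isOpen ? 'block' : 'none' }}>{children}</div>;
}

export { PanelWithRemount, PanelWithDisplayToggle };
```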

Another way around the paint overhead, and one that had passed me by, is using OffscreenCanvas (a canvas API decoupled from the DOM) within a Web Worker. The Worker handles the calculations on a separate thread, and the resulting operations are rendered smoothly on the source canvas. It seems this is still a Chrome-only option, so whilst interesting it’s probably only useful in very specific circumstances for now.
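The rough shape of that approach looks something like this (a sketch only; worker.js is a hypothetical file name):

```js
// main.js: hand control of the canvas to a Web Worker so the drawing
// work happens off the main thread (Chrome-only at the time of writing)
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();

const worker = new Worker('worker.js');
worker.postMessage({ canvas: offscreen }, [offscreen]);

// worker.js: draw to the transferred canvas on the worker thread
// self.onmessage = (event) => {
//   const ctx = event.data.canvas.getContext('2d');
//   const draw = (time) => {
//     ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
//     ctx.fillRect((time / 10) % ctx.canvas.width, 20, 40, 40);
//     requestAnimationFrame(draw);
//   };
//   requestAnimationFrame(draw);
// };
```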

In what was a bit of a throwback for me, he promoted sprite sheets as still being the ultimate low-cost approach to animation. So, in cases where control and flexibility aren’t priorities, using tools like Giphy Capture and EZGIF could be the best option.

Josh’s talk then dovetailed nicely into Alec Larson talking about react-spring. This animation library takes a skeuomorphic approach, tying motion to the real-world behaviour of springs and how aspects such as tension, mass, and velocity impact their movement.
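A tiny example of what that looks like with react-spring’s hooks API (the spring config values here are just the sort of thing you’d tweak, not anything from the talk):

```jsx
import React from 'react';
import { useSpring, animated } from 'react-spring';

function FadeIn({ children }) {
  // Motion is described by spring physics rather than a duration/easing pair
  const style = useSpring({
    from: { opacity: 0, transform: 'translateY(20px)' },
    to: { opacity: 1, transform: 'translateY(0px)' },
    config: { tension: 170, friction: 26, mass: 1 },
  });

  return <animated.div style={style}>{children}</animated.div>;
}

export default FadeIn;
```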

Move fast with confidence — Paul Armstrong

Paul spoke about how they’ve been working at Twitter to detect issues as soon as possible. A key motivator behind this was reducing the dead time spent waiting for a pipeline build to finally finish so a colleague can review your PR, only to discover you indented something too far.

“No John, I don’t think there’s too many line breaks”

He broke the checks down by the key stages in the development lifecycle of a particular change.

Locally

On your machine you can use the following to avoid errors before they are even committed into git:

  • Linting — a bit of an obvious one to start, but ESLint, when hooked into your editor, can be a great first line of defence against problematic code
  • Formatting — when I first saw Prettier at React London I was super excited, but then saw you couldn’t turn off semicolons, which killed off any goodwill I had towards it. Thankfully that option was added and I absolutely can’t live without it now (along with everyone else at the event, it seemed). I now can’t remember the last time I had a PR held up over whitespace — use it.
  • Typings — within Twitter they are using Flow, but from how the other talks went TypeScript came across as the runaway winner in this category. Regardless, any typed option will help prevent invalid data being passed around your application before you even run it.
  • Git Hooks — through a combination of Husky and lint-staged you can automatically lint any changed code, reformat it, and run your tests whenever you attempt a commit, either bailing if manual intervention is needed or simply updating the commit if the CLI can figure it out. To top it off, I never knew the --findRelatedTests option existed in Jest, so you can limit the delay to just the relevant tests (see the config sketch after this list).
  • More Git Hooks — in addition to using hooks for testing, Paul had the nifty setup of automatically running yarn install on checkout, ensuring any new dependencies are installed and avoiding the confusion that can otherwise follow.
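Not Twitter’s exact setup, but a minimal package.json sketch of the Husky and lint-staged combination described above, including the post-checkout install trick (paths and globs are illustrative):

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged",
      "post-checkout": "yarn install"
    }
  },
  "lint-staged": {
    "*.js": [
      "eslint --fix",
      "prettier --write",
      "jest --bail --findRelatedTests",
      "git add"
    ]
  }
}
```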

Review

  • Demo Sites — when creating a PR, a short-lived staging instance gets spun up which a reviewer can use to view the visual impact of the changes without needing to check out and build the branch. They even directed traffic to these instances as part of A/B testing for more impactful changes.
  • Minimal Build Times — to reduce the delay caused by their (initially mammoth) build times, they did a deep dive into the webpack build using node --inspect-brk to see what was causing delays. Fixing internal issues (or changing dependencies to work around them) brought builds down to close to a tenth of the original time.
  • Bundle Size Tracking — Paul produced Build Tracker, which they use at Twitter to visualise the file size impact of changes per revision. We currently use size-limit in Luna to get basic protection around this (a sketch of that config follows this list), but I’ll definitely be looking to see if the increased granularity of this tool would benefit us.
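For reference, this is roughly what our basic size-limit safety net looks like, rather than anything from Build Tracker itself (the path and budget here are illustrative):

```json
{
  "scripts": {
    "size": "size-limit"
  },
  "size-limit": [
    {
      "path": "dist/index.js",
      "limit": "50 KB"
    }
  ]
}
```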

Staging

  • Pedigree Chum — not exactly an option everyone can mimic, but dogfooding is a great way of discovering problems before they get released into the wild.
  • Error Catching — through the use of React’s error boundaries, Twitter was able to track the rate of particular errors over time by reporting into Sentry, with alerts coming into Slack for immediate awareness (see the sketch below).
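The error boundary side of that looks roughly like this (a sketch of the pattern, not Twitter’s implementation; the Slack alerting is configured on the Sentry side):

```jsx
import React from 'react';
import * as Sentry from '@sentry/browser';

class ReportingErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError() {
    return { hasError: true };
  }

  componentDidCatch(error, errorInfo) {
    // Forward the error plus React's component stack to Sentry
    Sentry.withScope((scope) => {
      scope.setExtras(errorInfo);
      Sentry.captureException(error);
    });
  }

  render() {
    if (this.state.hasError) {
      return <p>Something went wrong.</p>;
    }
    return this.props.children;
  }
}

export default ReportingErrorBoundary;
```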

Coders are the new Rock Stars — Dan Stein

Dan Stein, aka DJ Fresh, shared his amazing journey from being an electronic musician & producer to becoming a machine learning engineer. Whilst his forays into music theory went completely over my head I was still able to enjoy his message that ultimately all art is just data, so the innovators in the coding space producing beautifully structured and performant code are artists in their own right.

Some say that his greatest hits came from early forays into machine learning¹

CodeSandbox — Ives van Hoorne

I’m a massive fan of CodeSandbox; it’s completely changed how I communicate with consumers of the library, letting me set up complex demos showing how to achieve a goal or resolve an issue rather than relying on my limited ability to explain things. So, whilst it made me question my life choices that someone [redacted] years younger than me just got a couple of million in funding, it was great hearing about the project from Ives.

CodeSandbox is no longer just the web version of VSCode; it’s VSCode in the web. This has allowed for new functionality around settings, themes, and even extensions to bring the expected experience of local IDEs to cloud-based development. I’m super excited about where this project is heading with the dedicated team it now has behind it.

Accessible Experiences at Scale — Jonathan Yung

Jonathan showed off a really interesting tool in use at Facebook which detects accessibility issues during development. Using the Scheduler to fit in the in-depth DOM scanning required to discover issues, it runs in the browser, overlaying the screen with violations and their potential fixes.

Fingers crossed this will be open sourced at some point in the future as it would be of massive benefit for the work we’re doing with Luna.

Yarn 2 — Maël Nison

Maël took us through the features coming in the next major version of Yarn, which was of particular interest as we make use of the tool’s workspace functionality within Luna for our Lerna-managed monorepo.

  • DLX — following on the coattails of npx, Yarn is introducing its own variant for running one-off package binaries without installing them permanently.
  • Plugins — will be very interesting to see how this evolves, as the new extensible architecture allows for extra commands, custom resolvers & fetchers, and more through third-party plugins.
  • Constraints — dubbed the ESLint for package.json, it uses Prolog to enforce rules across the packages in a workspace, for instance ensuring that multiple versions of the same package aren’t required.
  • Up Top — yarn up triggers dependency upgrades across packages within a workspace. This can be combined with the new --interactive flag in order to get more granularity in terms of what is upgraded and how.
  • Order 66 — taking a leaf out of TypeScript’s book Yarn have revamped their error handling in order to have consistent error codes to facilitate easier searching for resolutions.
  • Zero Install — this announcement raised more questions than it answered within the group I was with, but essentially by leveraging the plug ’n’ play strategy you can feasibly commit the zipped package archives into version control, meaning you no longer have to run yarn install on checkout. However, what impact this has on lockfiles, how binaries and OS-specific dependencies are managed, and how commitish dependencies are handled is up for grabs as far as I’m aware.

Next.js — Tim Neutkens

Unfortunately, I’ve not had an excuse to use Next in a commercial environment, but practically every side project I’ve worked on in recent years has been built with it, so I was really excited to see what was coming.

Whilst there was a load of great stuff talked about, it was the API route management that stole the show for me. By simply creating a ./pages/api/posts.js file, you’ve created an endpoint for your front end to hit, using the familiar req, res handler syntax from Express. Ultimately, this means there’s potentially little need for custom servers going forward.
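A minimal sketch of what such a file could contain (the response data here is made up):

```js
// ./pages/api/posts.js
export default function handler(req, res) {
  if (req.method === 'POST') {
    // Hypothetical create logic would live here
    res.status(201).json({ created: true });
    return;
  }

  res.status(200).json([{ id: 1, title: 'Hello from /api/posts' }]);
}
```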

As an honourable mention, they have also added the ability to have dynamic route segments by prefixing a file or directory with $. As an example, ./pages/users/$id would give you the equivalent of a /users/:id route.
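Something like this hypothetical page, assuming the $ convention above and the matched segment arriving on the router’s query object:

```jsx
// ./pages/users/$id.js
import React from 'react';

function UserPage({ id }) {
  return <h1>User {id}</h1>;
}

// Assumption: the dynamic segment is exposed as query.id, mirroring the
// :id analogy above
UserPage.getInitialProps = ({ query }) => ({ id: query.id });

export default UserPage;
```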

Potential Improvements

Whilst I had an awesome time at the event, I would like to flag one area where I personally felt it could have been made even better (plus a bonus, tongue-in-cheek one for the event space).

Lightning / Sponsor Talks

Obviously being told “We’re hiring / selling!” by various companies isn’t exactly the main reason for people attending these events, but the buy-in from these companies is what makes them possible, so just pushing them out on the stage with no fanfare to get the audience to quiet down and pay attention seemed counterproductive. I did feel especially sorry for the representatives here as they had gone to a lot of effort to make their content engaging.

A similar issue was had by the speakers doing lightning talks, as they were packaged up at the end of the day on Thursday and randomly placed in the breaks on Friday. On the first day this resulted in people leaving in droves during the talks to beat the rush for something edible and a share of the free alcohol; on the second, the speakers had to compete with private conversations going on across the hall.

Grouping all the talks at the end just didn’t work, as no one wants to see hordes of people leaving while they’re on the stage, and not getting the room to quieten down made it difficult to listen and no doubt more difficult for the speakers, a lot of whom seemed very new to it. It’s a common problem, and every solution has its downsides, but I think integrating them into the core line-up would have helped. Perhaps the situation was just aggravated by the food…

Food

Catering for these sorts of events is always going to be incredibly difficult, but given how seriously the French take their food I had higher hopes than usual. Unfortunately, we ended up with a combination of imitation wax food rounded up from a local furniture store and leftovers from the Fyre Festival. A tomato slice in a hot dog bun does not an hors d’oeuvre make.

The single slice of cheese used to spread across the hundreds of sliders²

The quantity was also an issue for those willing to partake in the cuisine: people who left early during the lightning talks hoovered up whatever was available, meaning there was practically nothing left by the time I got out. Although the next day made me realise that was a blessing in disguise.

However, the beer was free, the talks were interesting, the company was great, and did I mention the beer was free? So I definitely can’t complain. A bientôt Paris (Brexit permitting of course).

Merci!

  • Massive hand to the organisers of the event and all the speakers. Fantastic job by all.
  • Thanks to the patient Parisians who put up with me butchering their native tongue.
  • Cheers to my fellow travellers: Bertie, Chris, and Dani for putting up with me and helping with the article.

Header image from https://www.react-europe.org

[1] Ok… only I said it, and I may have made it up³

[2] Probably not — image by Amin [CC BY-SA 4.0]

[3] I made it up
