Web Rebels 2018

By Espen Henriksen · Published in Shark Bytes · Jun 5, 2018

Highlights from this year’s Web Rebels conference

The entrance had a cool homemade WebRebels logo!

It’s that time of year again! Web Rebels, a wonderful conference in Oslo for everything JavaScript, just concluded after two days packed with amazing talks. We were lucky enough to be there and enjoyed every moment of it.

As developers we get so much for free from the community, so we thought this would be a great opportunity to summarize the talks for everyone who couldn’t be there and share it with you guys. To everyone out there writing awesome libraries for free, this is for you ❤️

This post has been updated with links to the YouTube videos.

I’ll also add that this post was inspired by Stéphane Bégaudeau’s React Europe summary.

The beautiful venue at Vulkan, Oslo and the neighboring food hall “Mathallen”

Day 1

Reading other people’s code

By Patricia Aas@pati_gallardo

Patricia introducing her talk

Patricia opened this year’s Web Rebels conference with her talk on tolerating (I mean reading) other people’s code. Her first, and perhaps most important point was that reading other people’s code is NOT a code review. You need to get into the mindset that you are learning, not that the code you are reading is wrong somehow. She then went on to talk about a set of techniques for getting the correct mental model for working with other people’s code.

The 10 techniques

Grepping: Find strings in the GUI and search for them in the code. From there you should be able to figure out roughly where they are placed in the logic of the code and work from there.

Where is this button: If you are curious about what a button does, you can grep for text in them and set a breakpoint in the onClick handler using the built-in browser debugger. That way you will be able to see the code it executes, what is in the call stack and what variables are declared in the various scopes of this execution. That helps you find a place you can start digging from.

Following input events: This was more intended for people who want to dig into how a GUI framework works, for example for debugging purposes. Most frameworks have events and event loops. By tracing them you can figure out how events go into the framework and how the final frame is created after they are processed.

What the tests do: If you write integration or system tests, you already have a great point of reference to start your investigation. Following these tests can teach you a lot about the interface of the application. You can probably get some idea by reading unit tests as well, but they are usually closely tied to implementation details, so they are less than ideal for getting an overview.

Refactoring: One way to learn about the code is to rewrite it! This forces you to learn how the original code worked so that you can re-implement the features it provided. This is risky, however, because refactoring is very opinionated. Don’t rewrite just because you don’t like the style it was written in!

Reading “main”: No matter what your application is, it will have a “main” function somewhere. To clarify, this is whatever function drives execution; it is not necessarily called main. This gives you a pretty good idea of how “the machine starts”.

The graphical layout: Usually the application changes its UI based on some state. It works like a state machine, moving from one state to the next. If you can figure out where the graphical layout changes, that will eventually help you internalize the mental model of the state machine.

Runtime investigation: Applications are different. Some are event driven, some deal mostly with request handling, others are command-line tools with a main loop. Try to add a feature as a learning experience and be ready to throw it out if it does not work out.

Reading a class: Find a class you are interested in and look at its methods. What are the public functions of this class? How do others use it? Find the class’ equivalent “main” method and work from there.

Retelling or Rubber Ducking: Write a (fictional) blog post. Write some documentation. Make an internal presentation. When you explain something you have to serialize the mental model in your head, and that helps you figure out where you went wrong.

Conclusion: Code should be different. Good code should be personal. Learn to appreciate other people’s code. Make everyone feel safe to be themselves.

The YouTube video

A testing story in 3 acts

By Mathieu 'p01' Henri@p01

Mathieu getting ready to tell us his story

Mathieu’s team writes the profile card at Microsoft. You might have seen it in Office 365 and other places around the Microsoft world. He told a story about how they went about developing it from a first-party to a “third-party” app that could be consumed by a bunch of other teams at Microsoft.

He divided this story into three acts. Editor’s note: This is apparently very common in narrative fiction!

Act I — Getting started: We’ve all been here. This involved setting up the service, adding CI, preparing monitoring and adding TypeScript. At this point, very few teams were using their module.

Act II — Taking off: At this point they were scaling to allow more teams to use their service. They switched from npm to yarn and flux to redux.

A huge pain for them at this phase was manual testing. They tested in six (I believe) browsers for every change, and this was really getting tedious. At this point they knew they needed to look into automated testing to prevent visual regressions. At first they considered Selenium, but it is unfortunately flaky and very slow. Cypress.io is good, but unfortunately it is currently Chrome only.

For lack of a better choice, they decided to use the Chrome devtools protocol (CDP). It is great for Chrome and has bridges for other browsers. After this, they wrote a small test framework on top of the CDP. They used pixelmatch to mark visual changes in their components in red. This meant that manual signoff for changes was reduced to four browsers (no more Chrome).

Act III — Up and up: At this point the card component was pretty widely deployed. They converted the component to a Lerna monorepo in order to split up the code into different packages. They also focused on easier onboarding of new developers.

One really amazing improvement they made was improving the testing library further with a side-by-side visual “git diff” at the pixel level. They also built a website to go through the visual diffs and sign off on them using the arrow keys. This enabled them to quickly inspect changes and further reduce their testing pain. Watch the talk to see it in action, it’s pretty rad.

Another cool thing they did was use Swagger (a REST documentation generator) to generate JavaScript client code from a Swagger API definition. Combined with TypeScript this is really powerful, as it allows you to always stay in sync with the server code and catch any drift at compile time.

Their story is continuing.

The YouTube video

Service Worker: taking the best from the past experience for the bright future of PWAs

By Maxim Salnikov@webmaxru

Maxim is once again on stage to champion service workers!

Maxim is back, and boy does he want everyone to use service workers. Jake Archibald would be proud. If you’re still not onboard the PWA train, go watch his talk.

So it has finally come: 2018 is the #YearOfPWA. Service workers are finally supported in all major browsers and on all platforms. This means that in Chrome and Firefox PWAs work and are installable on both mobile and desktop. It’s not quite there on Linux and macOS yet, but it is in the pipeline somewhere. PWAs were designed to be installed without an app store, but sometimes it is nice to see what is available in one place. PWAs are therefore coming to the Windows app store soon.

But when is the proper time to register your service worker? It is important that a PWA is a progressive enhancement, not a regressive degradation. Registering it too early can interfere with the loading of other assets, as the precaching will request a lot of assets. So the later the better, to avoid affecting the app negatively. It is usually fine to register it in the window load event.

When using service workers, it is important not to cache the worker file itself, as that can negatively affect how updated versions of the service worker are picked up. You can also set updateViaCache to none when registering, but this is only supported in Firefox for now.
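Putting those two tips together, a registration might look roughly like this (a minimal sketch; the /sw.js path is an assumption):

```js
// Register the service worker late, after the page has finished loading.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js', { updateViaCache: 'none' }) // option only honored by Firefox for now
      .then((registration) => console.log('SW registered with scope:', registration.scope))
      .catch((err) => console.error('SW registration failed:', err));
  });
}
```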

When caching large assets, you can use the Storage Estimation API to see if you have enough available space.
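The API itself is tiny; something along these lines (a hedged sketch, the 50 MB threshold is just an illustrative number):

```js
// Rough sketch: check the estimated quota before precaching large assets.
async function hasRoomFor(bytes) {
  if (!navigator.storage || !navigator.storage.estimate) return true; // no API, just try
  const { usage, quota } = await navigator.storage.estimate();
  return quota - usage > bytes;
}

// Usage: only precache a big asset if ~50 MB (an arbitrary example size) is free.
hasRoomFor(50 * 1024 * 1024).then((ok) => {
  if (!ok) console.warn('Low on storage, skipping large precache');
});
```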

When auditing your PWA, you have a few good tools you can use. You have probably heard about Lighthouse, which is now integrated into the Chrome devtools. There is also something called Sonarwhal, which helps you audit many things, from accessibility to PWA.

When you visit a website, the browser will use heuristics to determine when the user is interested in it and show an install app banner when it decides that they are. Developers can take control of this and show their own install UI by listening to the beforeinstallprompt event, calling event.preventDefault() and keeping the event around for later. No word on showing the install prompt programmatically and not via heuristics, though. Browser vendors may want to keep this feature as something under their control to prevent abuse.
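In code, that pattern looks roughly like this (a hedged sketch; the install button element is assumed):

```js
let deferredPrompt = null;

window.addEventListener('beforeinstallprompt', (event) => {
  event.preventDefault();      // suppress the browser's own banner
  deferredPrompt = event;      // keep the event so we can prompt later
  document.querySelector('#install-button').hidden = false; // assumed install button
});

document.querySelector('#install-button').addEventListener('click', async () => {
  if (!deferredPrompt) return;
  deferredPrompt.prompt();                        // show the native install dialog
  const choice = await deferredPrompt.userChoice; // { outcome: 'accepted' | 'dismissed' }
  console.log('Install prompt outcome:', choice.outcome);
  deferredPrompt = null;
});
```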

At the end, Maxim showed us a really awesome way to add support for new formats to browsers with service worker support by using a wasm module in the fetch event!

If you’re interested in PWAs, you can come join the PWA slack, where a lot of PWA-related discussion takes place.

The YouTube video

How to make accessible web when the ideal does not match reality?

By Tor Martin Storsletten, Tom Widerøe and Lotte Johansen — @twidero @lotte_johansen

Tor, Tom and Lotte getting ready to speak

Tor, Tom and Lotte from Finn kicked off this really interesting talk on accessibility by introducing themselves and what they do. At Finn they have an accessibility group that has become the domain experts on accessibility and teaches colleagues how accessibility is implemented. Interested in setting up an accessibility group at your workplace? They pretty much just did it of their own accord and eventually got the blessing of their managers.

So how do we build an accessible web? The ideal is to just use plain and semantic HTML. Browsers have worked for years to make button tags “just work” in terms of usability and accessibility. Writing a JS-only SPA is probably the worst thing you can do in terms of accessibility, but it is possible to make it accessible by following best practices.

In terms of layout, a body should always be divided into a header, main and footer, where the main tag contains anything that is unique to a particular page.

Headings are really important, being the preferred method of navigation in many screen readers. Not all pages are designed with headings, however, so this becomes an issue when you want to make your website browsable with a screen reader. One way to solve this is to add a hidden heading which is only visible to the accessibility tree. When doing so, you cannot use the hidden attribute or display: none, because that hides it from screen readers as well. You have to use something like absolute positioning to place it off-screen. Editor’s note: in Bootstrap there is a class called sr-only that does this. Copying this class into your project is a good solution.

In forms you want to connect labels to inputs using IDs. When doing so you must use unique IDs in forms, otherwise screen readers may get confused as to what is the correct text for that input element. Finn struggled with this when they had a hidden form designed for mobile with the same IDs that overrode the main form and messed up what the screen reader said.

When using the keyboard to navigate, it can get very annoying having to tab through the header every time you want to get to the main content. Adding skip links is a good solution for this. This means placing a hidden element in the beginning of your document that becomes visible on focus. When clicked, this element moves the focus into the main tag. This enables keyboard users to go directly to the main content.

Design can sometimes conflict with accessibility. Designers are often oblivious to this, and this can be a challenge. For example placing a button inside a link will make a screen reader ignore the nested element. This particular issue can be solved by adding an empty div below the outer element and adding the aria-owns attribute to it. That essentially pulls the button out of the outer element in the accessibility tree.

Some screen readers may use first-letter navigation. For example “6 varslinger” (“6 notifications”) is bad, as you would need to know the first letter to navigate to it, but “varslinger (6)” is better.

Numbers with thousand separators should use aria-label and role=text to tell the screen reader how to pronounce the number. role=text is not supported in all browsers, however, so don’t put important content inside it.

Images must always have an alt attribute. If it is decoration, then you can put alt="" to make the screen reader ignore it. SVGs do not have alt, but you can use a title tag as the first child of the svg element. That is not enough, though: you also need aria-labelledby and role="img" to set the correct semantics for legacy screen readers. In SVG you can also label individual elements in a graphic to add more contextual information. However, if you can, just keep it simple with plain HTML.

For live updates, for example updating search results while filling in a search box, you can use something called aria-live. When content inside the region changes, the screen reader will read the updated contents, no matter where the screen reader is on the page. Using aria-live=polite on “30 treff” (“30 hits”), for example, would inform the user that the result list has been updated and that there are 30 results.
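As a rough illustration (the markup and element ID here are assumptions, not Finn’s actual code), updating a polite live region from JavaScript could look like this:

```js
// Assumed markup: <p id="result-count" aria-live="polite"></p>
function updateResultCount(count) {
  // Because the element is a polite live region, screen readers will announce
  // the new text when they get a chance, wherever the user currently is.
  document.getElementById('result-count').textContent = `${count} treff`;
}

updateResultCount(30); // announces "30 treff" after the result list updates
```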

The slides can be found here.

The YouTube video

Inside V8: The choreography of Ignition and TurboFan

By Sigurd Schneider — @sigurdschn

Sigurd showing how TurboFan uses type feedback to assert optimistic assumptions to create optimized code

Sigurd is part of the V8 team, which develops the internals of the JavaScript engine in Chrome. He came to show us how it works internally, and boy is it complex! Good thing he’s great at explaining things — go watch the talk when it’s up, he’s a lot better at explaining this stuff than I am.

There are two components in the V8 JavaScript engine that handle the JavaScript execution pipeline: Ignition and TurboFan.

Ignition is an interpreter. It has a very fast start-up time and produces very small bytecode. It excels at compiling code executed at page load and infrequently executed code.

TurboFan is the optimizing compiler. TurboFan generates very fast machine code, but has to make optimizing assumptions in doing so. It excels at code executed after page load or frequently executed (hot) code.

If we did not have an optimizing compiler, which we did not have for a long time in the early web, JavaScript would be extremely slow. The ECMAScript spec requires a lot of extra code because of the dynamically typed nature of the language. An optimizing compiler can assume types based on observed usage and optimize away a lot of that extra code.

JavaScript must be a safe language, which means that no matter what you do, the VM should never crash. That means a lot of runtime checks. Runtime checks add overhead, however.

Sigurd went on to talk about internal parts of the optimization pipeline like variable shapes, type feedback and deopt loops (specifically what they are and how the V8 team attempts to avoid them).
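To make the shapes and type feedback idea concrete, here is a small illustrative sketch (my own example, not from the talk): when a call site only ever sees objects with the same shape and property types, TurboFan can specialize the code, while mixing shapes forces it back to slower, generic paths.

```js
// Both objects have the same "shape": properties x and y, both numbers.
function magnitude(p) {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

magnitude({ x: 3, y: 4 }); // type feedback: p is { x: number, y: number }
magnitude({ x: 6, y: 8 }); // same shape, the call site stays monomorphic and fast

// Passing a differently shaped object makes the call site polymorphic, and
// passing wildly different value types can trigger a deoptimization.
magnitude({ y: 4, x: 3, label: 'point' });
magnitude({ x: '3', y: '4' });
```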

He then went on to talk about a new feature in V8 for marking a function call that stops speculative assumptions for the next compile and thus prevents deopt loops.

Historically array builtins like reduce and map have not been as fast as naive ES3-style implementations like for loops. Browser vendors have been doing a lot of work in this space, however, and most builtins are now on par with similar ES3 implementations. They are still not quite as fast as for loops, but that’s no reason to stay away from them. If in doubt, choose readability over premature optimization. Let the browser engines worry about performance.

Editor’s note: In the questions after the talk, Sigurd mentioned that he was pretty new to JavaScript and had just learned about node and npm. It’s refreshing to hear that even someone as insanely smart as Sigurd is also constantly learning! 😊

The YouTube video

Thinking Reactive in JavaScript

By Ivan Jovanovic@ivanjov96

Ivan introducing his talk

Ivan wrote a blog post in 2017 that explained some downsides of React.js and why he was leaving it in favor of a more reactive library called Cycle.js. The post caused a flame war in the comments, with few people actually discussing the merits of reactive programming; most didn’t read it and simply assumed he was an idiot for ditching React. Now he’s back to make it right and kick the discussion off in the right direction.

What is reactive programming? It is “Programming with asynchronous data streams”

Ivan started off by taking us into the wild world of reactive programming. It may be a bit foreign to most programmers who are not familiar with it, so I recommend watching the talk when it’s up to get the full explanation. In essence it involves, as Ivan said, “an endless sequence of digital signals”: basically streams of data of any type. Observable is the official term for a stream, so when you see observables, think reactive programming.

Once you have set up a stream, you can call functions on that stream like map and filter to affect the buffers of data in that stream as they pass by.

Why use reactive programming? It has good performance and it is highly testable. In order to test a stream you can pass some data into it and test what comes out the other end.

Another advantage is that everything async can be handled using the same API, as it is an abstraction over async processes.

Many streams from different sources can also be merged into one stream which makes it easier to work with.

A common library that is often used in reactive programming is RxJS. It is one of the larger libraries, but it has a pretty comprehensive API. Other alternatives include xstream, which is tiny but has a smaller API, and Bacon.js.
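As a tiny taste of what that looks like with RxJS 6 (a hedged sketch of my own, not from the talk), here is a stream of click events being filtered and mapped before a subscriber reacts to it:

```js
import { fromEvent } from 'rxjs';
import { filter, map } from 'rxjs/operators';

// Every click on the page becomes a value in the stream (an Observable).
const clicks$ = fromEvent(document, 'click');

clicks$
  .pipe(
    filter((event) => event.target.matches('button')), // only clicks on buttons
    map((event) => event.target.textContent)           // transform to the button label
  )
  .subscribe((label) => console.log('Button clicked:', label));
```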

React has a lot of cons that are solved by reactive programming; read Ivan’s blog for more about this. One way to get the benefits is to include something like RxJS in your React app, for example via an RxJS middleware in redux. This helps with async actions, where you ordinarily would have to use thunks.

Ivan finished off by recommending Cycle.js again and talking about how it has a reactive dataflow, going into some depth about how that works exactly.

The YouTube video

Micro Frontends — Think Smaller, Avoid the Monolith, Love the Backend

By Michael Geers@naltatis

Michael introducing the micro frontend

We’re starting to see a trend: The technology in the frontend world moves fast, and as it does, the frontend gets bigger. The backend used to be 90% of the app, but now modern frontends are rich single-page applications.

Backend-developers have also had an issue with their codebases growing into monoliths. They solved this by scaling back the monolith and embracing microservices.

This is where Michael introduced micro frontends. Fear not, it’s not microservices in frontend! Micro frontends is like taking a slice of backend and frontend and grouping them together. Each of these cross-functional groups is assigned a specific mission like “search” and several of these compose together into an app. See the above image for what such a “Search” team could look like.

This has some clear advantages: Each team has a clear mission. There is no question what they should be focused on. They are customer focused, that is to say there are no pure backend teams that are out of touch with what the customer gets to use. And last but not least this means each team has a greatly reduced and more manageable scope.

Now imagine you have an application where you wish to compose a set of these micro frontends together. How would you go about sharing components between teams? iFrames? There are issues with iFrames that are non-trivial like accessibility and SEO, so that’s not an ideal solution.

Michael’s team went with a new friend called Web Components, specifically using custom elements to encapsulate functionality that is to be shared between teams. The support is... not great. But there exist polyfills that also enable you to use them in browsers like Firefox and Edge.
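For reference, a custom element wrapping a team’s widget looks roughly like this (a minimal sketch; the element and class names are made up):

```js
// A team ships its feature as a self-contained custom element.
class SearchBox extends HTMLElement {
  connectedCallback() {
    // Render whatever the team's framework of choice produces inside the element.
    this.innerHTML = `<input type="search" placeholder="Search…">`;
  }
}

customElements.define('team-search-box', SearchBox);

// Any other team can now drop <team-search-box></team-search-box> into its markup.
```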

If you’re curious about how well your favourite library supports custom elements, you can have a look at the Custom Elements Everywhere website.

Michael then went on to talk about what he likes to call resilient web design. This is the idea that your app should work even if the user is running ad-blockers or has JavaScript disabled. This means we can’t rely on fancy things like custom elements — we need to use fundamental web features like HTML to output something, so we need to render on the server. The easiest way to do this is using Server Side Includes (SSI).

When you are using micro frontends to get a plethora of different teams working on different pages that compose together into a single experience, you will run into issues with connecting these together. The question is: how do you avoid hard reloads between page navigations? The easiest way is to create a very thin app shell that handles routing using soft navigation.

An issue that was on several people’s minds was: how can you expect to have a consistent design system when everyone is doing their own thing? It is possible to use a global stylesheet à la Bootstrap, but that makes introducing breaking changes hard. You would have to coordinate with every team on the website in order to do so. A better solution is to create an npm package that contains either the components or the stylesheets. This is better, but does of course require a build process.

Michael ended by talking a little about performance. Naturally it is a concern that when many teams all run their apps on one site, the page can bloat pretty quickly. The main takeaway here is to be reasonable. Do you really need React.js and redux for that button component? Set up a performance budget and audit the performance regularly. #PerfMatters.

The YouTube video

What the v…DOM?

By stefan judis@stefanjudis

Stefan introducing his talk

I think most of us can agree: declarative UI is nice. It means we don’t necessarily have to worry about how, only what. React uses a declarative model, in particular in the render method, and thanks to that it has made a lot of things in the frontend really easy. But how does it work?

Let’s first talk about Babel. Babel parses source code into an Abstract Syntax Tree (AST). (Check out AST Explorer to try out how that works.) After the source code has been parsed into an AST, you can use Babel to transform it. Babel uses the visitor pattern to visit every node in the tree and exposes methods to run transformations on them. The visitor pattern is an old and well known pattern and ensures we will touch every node in the tree. JSX is just another transform like this.

Recently (especially after react) there has been a shift from the imperative to the declarative. This is possible thanks to react’s use of virtual DOM (vDOM). But how does it work?

Virtual DOMs allow only the DOM nodes that have changed to be updated. This saves a lot of work for the browser, which in turn results in better performance. Virtual DOM nodes are just objects with a few properties, but they look a lot like real DOM nodes in order to facilitate easy diffing.
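To illustrate (a simplified sketch of the general idea, not React’s actual internals), a virtual DOM node is little more than a plain object, and a JSX transform boils down to calls to a function that creates them:

```js
// What a JSX transform roughly produces: h('div', { id: 'app' }, h('p', null, 'Hello'))
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

const vnode = h('div', { id: 'app' }, h('p', null, 'Hello'));
// => { type: 'div', props: { id: 'app' }, children: [ { type: 'p', ... } ] }

// A (very) naive render step. Diffing libraries compare two such trees
// and only touch the real DOM where the objects differ.
function render(node) {
  if (typeof node === 'string') return document.createTextNode(node);
  const el = document.createElement(node.type);
  Object.entries(node.props).forEach(([key, value]) => el.setAttribute(key, value));
  node.children.forEach((child) => el.appendChild(render(child)));
  return el;
}
```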

Jason Miller has a great talk on optimizing the diffing algorithm of the vDOM.

One of the final questions Stefan tried to answer was: Is vDOM going to be around forever? Maybe? It’s impossible to predict the future. vDOM works great for now, but there are other interesting approaches out there like hyperHTML, which uses tagged template literals instead of babel transforms.

Editor’s note: Change detection is hard. There are other frameworks, like Svelte, that have interesting approaches that don’t involve a vDOM. There may also be further improvements from investing further in immutability; see this talk from Lee Byron at React Europe 2018.

The YouTube video

JS in parallel — heavy duty in the browser

By Martin Splitt@g33konaut

First slide: TBA by Martin Splitt

Announcing a new library called TBA: Taco Based Acceleration.. Just kidding.

This was a very amusing talk, Martin really owns the stage. His objective was to introduce us to parallel processing in the context of edge processing of images. Edge processing is the first stage of computer vision, it means computers are able to discern edges like walls and monster trucks in images.

As we know, JavaScript is “single threaded”. This means that if we start processing heavy workloads, like edge processing of a FHD image, we will lock the main thread and block rendering.

Web workers allow us an escape hatch from the “single threaded”-ness of JavaScript by spawning a worker. This worker runs in its own thread, and thus does not interfere with the painting in the main thread.

Initializing a worker means passing the Worker constructor the URL of a script file which contains the code it will run. Data is then passed between the threads using postMessage. Using a “transfer list” (i.e. an array as the second argument) in the postMessage call, we can transfer the data instead of copying it, which is faster. This means the passed buffer is no longer usable in the main thread after ownership is handed to the worker.

Using workers makes it possible to chunk hard work into n slices, each of which can run on its own core. So we can, for example, effectively run seven workers on a four-core machine with hyperthreading (eight logical cores), leaving one thread open for the main thread.
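A hedged sketch of that pattern (the edge-worker.js file name and the message shape are my own assumptions):

```js
// main.js — spawn one worker per logical core, minus one for the main thread.
const workerCount = Math.max(1, (navigator.hardwareConcurrency || 4) - 1);
const workers = Array.from({ length: workerCount }, () => new Worker('edge-worker.js'));

function processChunk(worker, pixels) {
  return new Promise((resolve) => {
    worker.onmessage = (event) => resolve(event.data);
    // The transfer list hands the buffer over instead of copying it;
    // pixels.buffer is detached (unusable) on this side afterwards.
    worker.postMessage(pixels, [pixels.buffer]);
  });
}

// edge-worker.js (separate file) — receives a chunk, processes it, sends it back.
// self.onmessage = (event) => {
//   const pixels = event.data;
//   // ...edge detection on this slice...
//   self.postMessage(pixels, [pixels.buffer]);
// };
```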

Watch this space: interesting things are coming, like SIMD, SharedArrayBuffer (which is disabled until we get over the whole Spectre thing), canvas access in workers and Shared Workers.

Using WebGL you can use shader code to run computation on the GPU. The GPU is really good at parallel code, so it can be used for heavy multiprocessing. The example he used was reduced from 2.6 seconds to almost instant using this method! (Editor’s note, this is what WebRender does!)

Turbo.js is a library that encapsulates writing shader code to make it easier to run code on the GPU. WebAssembly may also make a splash in the heavy duty space, but that remains to be seen.

Martin was also kind enough to give us a bonus talk because we had plenty of time before the after-party!

The bonus talk
The YouTube video

After party

At Ingensteds

The after party at Ingensteds — packed!

Afterwards we went to Ingensteds for some food, drinks and chat. The atmosphere was really great — at least until they turned the volume up so it was hard to talk 🤷. I simply prefer to be able to chat over the music, and that was pretty hard where we were sitting. It was fine when we went outside to enjoy the warm Oslo summer evening though!

The food, however, was amazing! It’s not often a vegan like myself has the option to eat over 50% of what was served, but today was the day. Hats off to the organizers, you did a great job! 👏👏

Then it was off to sleep off the booze and get ready for the next day.

Day 2

Readable Code

By John Papa@John_Papa

John Papa — “Readable code”

Most of our time is spent reading code, not writing. The ratio of reading to writing is well over 10 to 1, so it’s therefore really important that we communicate properly when writing code. You should care about the next developer that has to read your code, because it may be you. Code is intended for people, it just happens that machines execute it.

How do you make code readable?

Start with a style guide. “This is how this project will look and feel”.

Make intentions clear using clear and meaningful naming.

Organize classes and functions for readability. Smaller functions are more readable. Abstract methods into other modules and functions. This makes it easier to reuse, test and refactor.

Cute and clever names are not a great idea, especially when you want to search through the project for something.

Provide context in code when possible. As a rule of thumb, if you need to comment something for clarity, there may be a better way to write it. Comments also have a tendency to go out of date when the code changes and the comments are not updated.

In general: Use comments when explaining why something was done, any consequences and of course JSDoc.

It’s a really good idea to use automated formatters like Prettier to avoid a lot of these issues. Just make sure the team is onboard with adding a formatter, as springing changes like this on an unsuspecting team can take the most seasoned developer by surprise.

For more tips, check out the book “Clean Code” by Robert C. Martin.

The YouTube video

Hand-crafting WebAssembly

By Emil Bayes —@emilbayes

Emil showing off his “hello world” in wasm

It’s time to get low (level)! Emil came to talk about WebAssembly (Wasm) and his hobby of writing it by hand. Writing it by hand is a great learning process, and learning Wasm is useful because it is what the browser debuggers show you when they decompile the bytecode.

First of all, “WebAssembly” may not be that great of a name. It’s actually not very web-like, since you don’t have access to the browser APIs, and it’s not very assembly-like, because you cannot interact directly with the machine and its registers.

There are also some other cons to mention. There are no syscalls, meaning you cannot talk to the Operating System. There are no new hardware access APIs, so you cannot compile device drivers into Wasm. Basically there is no magic, just raw computation.

Wasm has some great upsides, however. Wasm has 64-bit integers, which are very useful in cryptography. JavaScript does not have them, which means cryptographic functions that deal with big integers (as they frequently do) get a nice performance boost from being written in Wasm. With JavaScript it can also be hard to predict what your code will compile down to; with Wasm you have surgical precision over your compiled code. Wasm may soon run anywhere: for example, it has been successfully run in ring 0, which basically means it runs as a kernel module!

Unlike JavaScript, it is designed to be a compilation target for languages like C/C++/Rust. This opens up a ton of doors like running games and native apps in the browser.

Wasm is a binary format, which makes it very hard to read. What you will most likely see when “reading Wasm” is something called WAT, the WebAssembly Text Format. WAT is meant to help debugging by giving the programmer something they can understand without having to trudge through binary code.

An example of some Wasm code represented using S-expressions

WAT uses S-expressions like Lisp, which look something like (op x y z). Here the operation is the first part of the expression, and the arguments follow, separated by spaces. Labels are denoted by $; they are human-readable strings that identify variables and are replaced by numbers post-compile. Everything is typed to avoid undefined behavior. Operators follow a type.op convention, where you need to specify the type for every operation.

When working with this, you may experience some Lisp flashbacks while handling all these parentheses. This is to be expected. If you need some help with that you can look into using the parinfer plugin to keep track of all your parentheses.

After demonstrating some Wasm, Emil showed us how to compile it. He used The WebAssembly Binary Toolkit to install some binaries which made this a lot easier. Using WABT it was easy to compile WAT to Wasm using wat2wasm.

He also showed us how to create a wrapper script which did all the heavy JavaScript lifting of compiling, instantiating and calling the Wasm module. This was also a part of WABT, called wasm2js. If you use webpack, there is also a loader for Wasm called wasm-loader which does the same as wasm2js.
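The browser side of that lifting is fairly small. A hedged sketch (the add.wasm file and its exported add function are assumptions for illustration):

```js
// Fetch, compile and instantiate a Wasm module, then call one of its exports.
async function loadAdd() {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('add.wasm'),
    {} // imports object; empty because this module needs nothing from JS
  );
  return instance.exports.add; // assumed export: (i32, i32) -> i32
}

loadAdd().then((add) => console.log(add(2, 3))); // 5
```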

Finally, Emil mentioned that the reason they wrote Wasm by hand was to write a library that exposed bindings to a C library called libsodium. Libsodium is a modern, misuse-resistant cryptography library. It’s recommended by many in the security world to avoid breaking rule number one of cryptography: “Don’t write your own crypto.” The library they wrote is called sodium-universal.

The YouTube video

Dodging Web Crypto API Landmines

By Ernie Turner@erniewturner

Ernie showing off how to generate Pseudo-random numbers in JavaScript

Ernie came all the way from Montana to Oslo to talk about the Web Crypto API! That’s dedication right there.

The Web Crypto API is a JavaScript API for basic cryptography such as hashing, encryption and signature generation. Using these APIs allows us to keep data encrypted both in transit and at rest.

As you can see in the image above, the Web Crypto API exposes methods for cryptographically strong pseudo-random number generation (PRNG). This is essential for cryptography; without it, secure cryptography cannot exist. (And no, Math.random does not count!)
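That part of the API is a single call, along the lines of:

```js
// Fill a typed array with cryptographically strong random bytes,
// e.g. for use as a salt or an initialization vector.
const randomBytes = crypto.getRandomValues(new Uint8Array(16));
console.log(randomBytes); // 16 unpredictable bytes; Math.random() is NOT a substitute
```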

There are some things to be aware of when using this API. It only works over HTTPS for security reasons. There is no way to feature detect which algorithms are supported. Browser support is decent, but IE requires a polyfill. And there is obviously no “forgot password” support, as that would amount to a cryptographic backdoor and render the entire system vulnerable. Any “forgot password” feature would have to be implemented some other way, for example using a secondary key encryption key.

Ernie then showed an example of using key derivation and encryption to store data. This approach would be useful if you were implementing anything that needed End-to-end encryption.
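In the spirit of that example, a hedged sketch of deriving a key from a password and encrypting with it might look like this (parameter choices such as the iteration count are illustrative, not recommendations from the talk):

```js
// Derive an AES-GCM key from a password with PBKDF2, then encrypt some data.
async function encryptWithPassword(password, plaintext) {
  const encoder = new TextEncoder();
  const salt = crypto.getRandomValues(new Uint8Array(16));
  const iv = crypto.getRandomValues(new Uint8Array(12));

  const baseKey = await crypto.subtle.importKey(
    'raw', encoder.encode(password), { name: 'PBKDF2' }, false, ['deriveKey']
  );
  const key = await crypto.subtle.deriveKey(
    { name: 'PBKDF2', salt, iterations: 100000, hash: 'SHA-256' }, // illustrative params
    baseKey,
    { name: 'AES-GCM', length: 256 },
    false,
    ['encrypt', 'decrypt']
  );

  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv }, key, encoder.encode(plaintext)
  );
  return { salt, iv, ciphertext }; // salt and iv must be stored alongside the ciphertext
}
```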

If you need to support browsers that don’t implement the parts of the Web Crypto spec you need, there is a polyfill you can look into. But be aware that it is impossible to polyfill a secure PRNG, so if the platform does not support that, you’re out of luck. The polyfill is slower, but that is expected, because it runs in the JavaScript VM rather than as native code.

Editor’s note: Cryptography in the browser is still a pretty new field, and there are concerns that it’s impossible to create secure cryptography in such an environment. Be aware of the challenges before you implement it. Link 1 Link 2

The YouTube video

Cracking JWT tokens: a tale of magic, Node.JS and parallel computing

By Luciano Mammino@loige

Luciano showing the three main parts of a JWT

Luciano wanted to present a technology called JSON Web Tokens (JWT). To kick things off he gave us a definition:

JWTs are a URL-safe stateless protocol for transferring claims

This was a bit of a mouthful, so he explained what that means.

URL-safe means that the token can safely be put into a URL because it consists exclusively of URL-safe characters.

Stateless means that the session data is in the token instead of a storage medium. Token validity can therefore be verified without having to interrogate a third-party service like a database.

A claim is simply some information you want to transfer, like the identity.

The token itself is split into three parts: the header, the payload and the signature. The header contains metadata like the algorithm used (alg) and the type (typ). The payload is the claim you would like to transfer, and the signature is a cryptographic signature over the header and the payload. These parts are all Base64url encoded and concatenated with a period as the separating character.
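Because of that structure, you can inspect a token with a few lines of JavaScript (a sketch; note this only decodes, it does not verify the signature):

```js
// Decode (NOT verify!) a JWT to look at its header and payload.
function decodeJwt(token) {
  const [header, payload] = token
    .split('.')
    .slice(0, 2)
    .map((part) => {
      // Convert Base64url back to regular Base64 before decoding.
      const base64 = part.replace(/-/g, '+').replace(/_/g, '/');
      return JSON.parse(atob(base64));
    });
  return { header, payload };
}

// decodeJwt('eyJhbGciOi...').payload might contain e.g. { sub: '1234', name: 'Ada' }
```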

Several signature algorithms are supported, and it used to be possible to set a “none” signature. The “none” signature is infamous, because servers that accepted it would treat unsigned tokens as valid.

He also recommended a site called jwt.io — it has a debugger for JWTs which lets you easily read their contents.

JWTs are intended to replace session cookies. The main advantage here is that there isn’t really any need for a backend database as the session is contained within the JWT.

It is not possible to invalidate tokens out of the box. This means features like “log out” buttons will not actually invalidate the session, only remove the token from the browser’s storage. If that token is used anywhere else (or stolen), it is still equally valid despite the user logging out. You can build a blacklist of tokens on your server if this is an issue, but then we are back to databases again!

Luciano then went on to demonstrate a potential weakness of JWTs. If you are able to figure out the secret, you can create your own tokens and the server will not be able to tell the difference. He demonstrated this using a brute-force library he wrote to recover the secret. To prevent your secrets from being brute forced, use a long, complex secret with a large symbol space, or use public/private keys instead.
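The core of such a brute force is just re-computing the HMAC signature with candidate secrets until one matches. A minimal sketch in Node.js (my own illustration, not Luciano’s library):

```js
const crypto = require('crypto');

// Returns the candidate secret that reproduces the token's signature, if any.
function bruteForceHS256(token, candidates) {
  const [header, payload, signature] = token.split('.');
  for (const secret of candidates) {
    const computed = crypto
      .createHmac('sha256', secret)
      .update(`${header}.${payload}`)
      .digest('base64')
      .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, ''); // Base64 -> Base64url
    if (computed === signature) return secret;
  }
  return null;
}

// bruteForceHS256(stolenToken, ['password', 'secret', 'hunter2', /* ... */]);
```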

Key rotation is also a way to limit such key recovery attacks. This is especially true if the adversary tries to brute force an old, expired token: the expiry of the token is irrelevant if the adversary recovers the secret, because they can then create their own JWTs.

Editor’s note: It may also be an idea to use httpOnly cookies instead of localStorage to prevent a potential XSS attack from exfiltrating the tokens.

Still, JWTs are considered safe, given that the cryptography around them is done correctly.

The slides can be found here.

The YouTube video

Building virtual worlds with web technologies

By Monika Kedrova — @salad_milk_soup

Monika showing us the end result of what we saw in the talk

Monika spooked everyone in the audience when she out of nowhere played a really trippy trailer from a video game. At full blast from the speakers we were greeted by singing ducks on rainbows and block-head puppet-men. Now that we were fully awake, she launched into the talk, which was about Virtual Reality (VR) in JavaScript.

Virtual reality is actually a spectrum. At one extreme you have the real world and at the other Virtual Reality; between them you have Augmented Reality and Mixed Reality. To better reflect this, a new API has been drafted to replace the WebVR API, called the WebXR Device API.

Web + VR/AR/MR = WebXR Device API

WebXR allows detecting headsets and their capabilities, as well as the orientation and position of the headset and controllers.

Monika was inspired by a game called Katamari Damacy and decided to make a similar game in A-Frame. A-Frame is an abstraction on top of WebGL and WebXR that is very simple compared to using them directly. It is based on HTML, easy to get started with, cross-platform and very performant. It is also tool agnostic, which means it works with Angular, React, Vue etc.

Place things in the scene using X, Y and Z coordinates and add materials and textures. This is all done using asset elements and referencing them via attributes on the primitive HTML elements. If you need something that is not an A-Frame primitive, you can add an external 3D model (there are loads online). The preferred format is glTF, so use that if you can.

Physics and collisions: You can get physics out of the box using the A-Frame physics component. User interactions like keyboard and acceleration events are programmed in a separate script file. This is also where behavior like collisions are handled.

A-Frame comes with debug tools that can be accessed by pressing <ctrl> + <alt> + i.

The finished product can be viewed at glitch.

The YouTube video

Building a ray tracer in Javascript

By Madlaina Kalunder@anialdam

Madlaina showcasing the huge difference various materials make

Once she walked onto the stage, Madlaina started off by pointing out that humans are visual beings, so using 3D makes sense. In some cases it can really improve the experience, especially because it makes it easier to visualize what a space looks like. An example of this is a floor plan in a housing ad.

She then went on to talk about the different materials that can be used in a scene. Specular reflections are like mirrors, diffuse reflections are energy-retaining and refraction is like glass. These all have a huge effect on how light travels around a room, and thus how ray tracing happens.

So what is ray tracing? In real life the sun shoots electromagnetic radiation at the Earth, and this is what we call sunlight. It works like rays, and god rays are a good example of this. The rays bounce off objects and at some point lose all their energy and are gone. This is how objects are illuminated.

Ray tracing works almost the exact same way, except the other way around. Imagine the camera sitting in front of the screen and the scene behind it. The camera shoots rays through every pixel of the screen to see what the rays intersect with. The rays will bounce around and eventually they should end up at a light source, which means it is visible.
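A toy sketch of that idea (my own illustration, heavily simplified from what a real ray tracer like Madlaina’s does): for each pixel, build a ray from the camera through that pixel and test what it hits.

```js
// Shoot one ray per pixel from a camera at the origin through an image plane at z = 1.
function tracePixel(x, y, width, height, scene) {
  const direction = normalize({
    x: (x + 0.5) / width - 0.5,  // map pixel column to roughly [-0.5, 0.5]
    y: 0.5 - (y + 0.5) / height, // flip so +y points up
    z: 1,
  });
  const ray = { origin: { x: 0, y: 0, z: 0 }, direction };

  // Find the closest object the ray intersects; shading and bounces are omitted here.
  const hit = scene
    .map((object) => object.intersect(ray)) // assumed to return { distance, color } or null
    .filter(Boolean)
    .sort((a, b) => a.distance - b.distance)[0];

  return hit ? hit.color : { r: 0, g: 0, b: 0 }; // background is black
}

function normalize(v) {
  const length = Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return { x: v.x / length, y: v.y / length, z: v.z / length };
}
```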

Real-time ray tracing is computationally intensive, so games use rasterization instead. Rasterization projects everything “onto the eye” instead of shooting rays, flattening the scene onto a 2D plane like the monitor. Rasterization does not really look realistic out of the box, so a lot of work needs to be done to make it look right.

Takeaways:

Models, lights and materials matter.

Ray tracing is cool, but it is not going to be realtime any time soon.

Artwork is work. For real quality you need to pay your artists.

The YouTube video

Using New Web APIs For Your Own Pleasure — How I Wrote New Features For My Vibrator using the Web Bluetooth API and the Web Audio API

By Michaela Lehr@fischaelameer

Michaela asking the question: “What is sound?”

Introducing the conference’s most creative talk! Michaela bought a Bluetooth-enabled vibrator which was marketed with “Let him control you”. She found this very limiting and decided she should be able to use the vibrator by herself. Not content with that, she embarked on a project to make the vibrator be triggered by the sounds of a video in the browser! Time to dig into the Web Bluetooth and Web Audio APIs!

Michaela identified three steps to get to happiness: First she would have to make a connection to the device. After she had set up a connection, she would need to analyze the sound of the video to detect vowels (aahs in this case). The final hurdle was to write the vibration commands via the Bluetooth API.

The way Bluetooth works is that a peripheral advertises that it’s available to make a connection until a central device (like a laptop) tries to connect to it. Then they perform a handshake and the connection is set up. This means Michaela would have to write a Bluetooth client that connects to the peripheral’s GATT server.

Michaela then went on to explain how she connected using the Web Bluetooth API. For security reasons, connecting to a device can only be done from user actions like click events, so she added this to a button’s click event. After this was done the client would be able to send commands to the server. Excellent!
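In rough strokes, a Web Bluetooth connection from a click handler looks like this (a hedged sketch; the service and characteristic UUIDs are placeholders, not the actual vibrator’s protocol):

```js
document.querySelector('#connect').addEventListener('click', async () => {
  // Must be triggered by a user gesture; the browser shows a device picker.
  const device = await navigator.bluetooth.requestDevice({
    acceptAllDevices: true,
    optionalServices: ['0000fff0-0000-1000-8000-00805f9b34fb'], // placeholder UUID
  });

  const server = await device.gatt.connect();
  const service = await server.getPrimaryService('0000fff0-0000-1000-8000-00805f9b34fb');
  const characteristic = await service.getCharacteristic('0000fff1-0000-1000-8000-00805f9b34fb');

  // Send a command to the peripheral, e.g. the "vibrate: 4;" text Michaela found.
  await characteristic.writeValue(new TextEncoder().encode('vibrate: 4;'));
});
```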

After this we started getting into the technical details of using the Web Audio API and identifying particular sounds in a waveform. This involved creating an audio context with the Web Audio API and connecting the video’s source node to an AnalyserNode, which used audio witchcraft to figure out when it was passed an “Ah”-sound. This “pipeline” of audio nodes is called an audio routing graph.
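The skeleton of such an audio routing graph is fairly compact (a hedged sketch; the vowel detection itself is the hard part and is hand-waved here):

```js
const video = document.querySelector('video');
const audioContext = new AudioContext();

// Build the routing graph: video source -> analyser -> speakers.
const source = audioContext.createMediaElementSource(video);
const analyser = audioContext.createAnalyser();
analyser.fftSize = 2048;
source.connect(analyser);
analyser.connect(audioContext.destination);

const frequencyData = new Uint8Array(analyser.frequencyBinCount);

function checkForVowel() {
  analyser.getByteFrequencyData(frequencyData);
  // Placeholder heuristic: the real detection compares the spectrum against
  // the formant pattern of an "Ah" sound, which is much more involved.
  const energy = frequencyData.reduce((sum, value) => sum + value, 0);
  if (energy > 100000) {
    // ...send the vibration command over Bluetooth...
  }
  requestAnimationFrame(checkForVowel);
}
checkForVowel();
```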

To reverse-engineer the protocol of the vibrator, she used a packet sniffing technique to read the bytes that were sent to the vibrator and converted them to readable text. Looking through the byte stream she found a message that simply said vibrate: 4;. That is all she needed to make the vibrator vibrate!

She then showed us a series of videos showing us how it worked. Check out the full video for that (once it’s up)!

Conclusion: Be creative with more web APIs! Experiment and give feedback to the spec writers!

The YouTube video

Building High Performance React Applications

By Joe Karlsson@JoeKarlsson1

Joe preaching the good word, “Only render when you really need to”

For the final talk of the conference, Joe came on to talk about performance in React applications and the implications of implementing it properly. He mentioned that Amazon at some point introduced a 100 ms delay in their application. That number might sound small, but they estimated that this simple regression reduced their total sales by 1%. For someone at Amazon’s scale, that is 1 billion dollars! Amazing!

So why focus on speeding up the client versus the server? Joe mentioned a few reasons why you might want to do this: When you improve performance on the client you see immediate results. There are no caches or CDNs that may affect your improvement, so it is easy to profile. It is also quite often much easier to do, which translates to it being cheaper and giving you more bang for your buck.

After this introduction, we turned our attention to React specifically. When working with React, you have a lot of ways to improve performance, but they all really boil down to the same point:

Only render when you need to

When React detects a change, it has to write that change to the DOM. These changes trigger repainting of the website in the browser, which in turn means they are computationally expensive. So a general rule of thumb is to avoid this where possible.

It is important to always profile as you are making improvements. React has built-in performance measurement: it has been available since React 15 and can be enabled by adding ?react_perf to the website URL. After turning it on you will see the lifecycle events in the user timing section of the Chrome devtools Performance tab. It is important to note that this only works in React dev mode.

An easy performance optimization you can make is using keys in lists. Always pass a unique identifier to the element’s key prop to allow React to optimize rendering of that list and avoid extra renders. If you don’t have a unique key you can create a composite key from a series of unique fields, or call a hash function on the object. NEVER use a non-deterministic method like Math.random to generate a key, or you will ruin your performance.
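For example (a small sketch using a hypothetical todo list):

```js
// Good: a stable, unique id as the key lets React match items between renders.
const TodoList = ({ todos }) => (
  <ul>
    {todos.map((todo) => (
      <li key={todo.id}>{todo.text}</li>
    ))}
  </ul>
);

// Bad: <li key={Math.random()}> forces every item to be re-created on each render.
```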

When you have a component which only needs to render when its props change, you can extend React.PureComponent instead of React.Component. This automatically implements a shallow comparison of props and state, and the component will not re-render unless they have changed. Great, but don’t use it everywhere.

In more advanced cases you can implement something called shouldComponentUpdate. It sits alongside the other React lifecycle methods, but lets you return true or false depending on whether or not the component should re-render. This allows you to specify exactly when a component should render. Use it sparingly, as it can break things when implemented improperly.
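A hedged sketch of both approaches side by side (the component names are made up):

```js
import React from 'react';

// PureComponent: shallow-compares props (and state) for you.
class Price extends React.PureComponent {
  render() {
    return <span>{this.props.amount} kr</span>;
  }
}

// shouldComponentUpdate: you decide exactly what counts as a meaningful change.
class SearchResults extends React.Component {
  shouldComponentUpdate(nextProps) {
    // Only re-render when the list of result ids actually changes.
    return nextProps.resultIds !== this.props.resultIds;
  }
  render() {
    // ResultList is an assumed child component.
    return <ResultList ids={this.props.resultIds} />;
  }
}
```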

Using immutable data structures is another way you can make it easier to track changes. This way you can check a reference instead of diffing a shape. Look into Immutable.js, which is the most popular immutability library right now.

Enable production optimizations in your build pipeline. This is by far the easiest way to give your app an extra boost, because it turns on production optimizations inside of React. You get this for free when the environment variable NODE_ENV is set to production at build time.
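With webpack this is typically a one-liner; a sketch using webpack’s DefinePlugin (in webpack 4, setting mode: 'production' does the equivalent for you):

```js
// webpack.config.js (sketch)
const webpack = require('webpack');

module.exports = {
  plugins: [
    new webpack.DefinePlugin({
      // Makes React's dev-only code paths dead code, so the minifier strips them.
      'process.env.NODE_ENV': JSON.stringify('production'),
    }),
  ],
};
```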

As for stateless components, there are no real performance benefits to using them, because React wraps them in a full React class behind the scenes. This will change in the future, however, as the React team has indicated that they will look into optimizing them. Currently they have the same performance characteristics as classes, though.

There are some cool things we can enable in webpack as well. Check out webpack-bundle-analyzer, which is a great way to visualize what is taking up the most space in your app. Profile often, maybe also as part of the build process. Editor’s note: Check out bundlephobia, which is a great way to check the cost of adding a module.

However, the most important takeaway is: Make it work, then make it fast. Don’t optimize prematurely! Value action over perfection.

Editor’s note: Much of this is covered in the react performance documentation. Read up on that for more information on this subject.

The YouTube video

Thanks for this year, rebels!

Alas, all great things must come to an end — and what a great thing it was! I must honestly say that it’s rare that a conference has this many interesting and talented speakers. I’m simply not used to a conference where every talk is consistently exciting and inspiring, what a lineup 😱

I want to finish by giving a big thanks to the organizers who created a truly safe and welcoming atmosphere, you’ve really outdone yourselves this year. Thanks on behalf of us attendees to all of you, you deserve to sit down, relax and feel great about this achievement.

We at OMS hope to see you all again soon ❤️
