How Readymag invented Shots to convert video into controllable image sequences

Readymag · Published in Geek Culture · 6 min read · Dec 25, 2022

A Readymag developer describes the process from idea to implementation

From passive to actionable

The web is an organic, ever-changing environment, and user expectations for websites have grown over the years. At Readymag, we do our best to help designers create fresh, immersive content that pushes the boundaries of the web. As part of this effort, we introduced Shots, a tool that turns videos into viewer-controlled image sequences. It's a great way to take interactive animations to a new level. Developer Ilya Shuvalov describes, from idea to implementation, how Shots was built at Readymag.

Searching for more interactivity

Elements in motion are much more expressive than static ones: they're easy to perceive and convey the author's intent quickly and simply. However, passively watching video and animation doesn't deeply engage users with a website. The next level of engagement is making videos and animations react to user actions such as pointer movement, scrolling, clicking and tapping.

Adding interactivity to the video widget had long been a goal at Readymag. We tried making videos play and stop on hover, or scrubbing through frames with cursor movement. However, we quickly found that this can't be done with videos hosted on most video platforms: their terms typically prohibit hiding the player controls. Moreover, the video itself may come with limitations, such as a format unsupported on the web, a bitrate that is too low or too high, huge file sizes and slow load times.

So we came up with a new idea: to make a widget that would enable interactive videos/animations in response to user actions.

Apple often uses a similar type of content on its landing pages. As a rule, these are custom handmade solutions, not a universal widget as in our case.

Manual of Diacritics

In this project, a 3D Shot fits in nicely with the overall minimalist style. It becomes a bright accent that grabs attention and helps viewers immerse themselves more deeply in the website.

Setting requirements for the future widget

We started looking into potentially applicable technologies for a new widget with a list of requirements:

  • The widget should work quickly and smoothly in all browsers, and on different types of devices — including low-power smartphones.
  • The process of uploading video/animations to the widget should be simple. From the user’s perspective, it must be as easy as uploading an image.
  • When loading a page with the widget, the first meaningful paint should appear as early as possible. That is, the user should almost instantly see something happen on the page.
  • The widget should play video/animation back and forth depending on the user’s actions: scroll up/down, hover on desktop, touchmove on mobile devices.

Exploring the field of technical limitations

The first and most obvious idea was to use the <video> tag and its JavaScript API to scrub video files back and forth. This approach seems workable, but it has a number of significant drawbacks:

  • The user might upload a video in an incompatible format. It can be re-encoded, but re-encoding takes time and significant CPU resources.
  • Encoding options vary: bit rate, resolution, number of keyframes and so on. All this affects the quality, speed and smoothness of playback.
  • Videos may not play as smoothly on older devices, especially if the visitor moves quickly from one part of the website to another.
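For illustration, the rejected <video> approach might look like the minimal sketch below. The element id and page wiring are hypothetical, not Readymag's actual code:

```javascript
// Map scroll progress (0..1) to a timestamp in the video.
function scrollToTimestamp(scrollFraction, durationSeconds) {
  // Clamp so fast scrolling never seeks past either end of the video.
  const clamped = Math.min(1, Math.max(0, scrollFraction));
  return clamped * durationSeconds;
}

// Browser-only wiring (assumes a hypothetical <video id="shot"> element).
if (typeof document !== 'undefined') {
  const video = document.getElementById('shot');
  window.addEventListener('scroll', () => {
    const fraction = window.scrollY /
      (document.body.scrollHeight - window.innerHeight);
    // Seeking on every scroll event is exactly what stutters on old
    // devices: each seek must decode from the nearest keyframe.
    video.currentTime = scrollToTimestamp(fraction, video.duration);
  });
}
```

Every scroll event triggers a seek, and seek speed depends on the video's keyframe interval, which is where the smoothness problems above come from.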

Further experiments brought us to the <canvas> element, a technology that has been around for many years and is supported by all browsers, including older ones. Canvas rendering is also very fast, even on slow devices.
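A minimal sketch of the canvas-based alternative, assuming the frames are already decoded into Image objects. All names here are illustrative, not Readymag's actual code:

```javascript
// Pick the frame index for a progress value in [0, 1].
function frameIndexFor(fraction, frameCount) {
  const clamped = Math.min(1, Math.max(0, fraction));
  return Math.min(frameCount - 1, Math.floor(clamped * frameCount));
}

// Browser-only wiring (assumes a hypothetical <canvas id="shot-canvas">).
if (typeof document !== 'undefined') {
  const canvas = document.getElementById('shot-canvas');
  const ctx = canvas.getContext('2d');
  const frames = []; // preloaded Image objects, filled in elsewhere
  window.addEventListener('scroll', () => {
    const fraction = window.scrollY /
      (document.body.scrollHeight - window.innerHeight);
    const img = frames[frameIndexFor(fraction, frames.length)];
    // drawImage of a decoded bitmap is cheap, unlike seeking a video.
    if (img) ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
  });
}
```

Painting a pre-decoded bitmap avoids the per-seek decoding cost entirely, which is why this stays smooth on low-power devices.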

Dichotomizing videos into image sequences

Eventually, we settled on the operating principle for our new widget: the user uploads a video, which is re-encoded into a set of image frames on our server. When a page with the widget is shown, these images start to load and are painted very quickly in response to the user's scroll or hover.

Taken together, the individual frames weigh considerably more than the original video, so they take longer to download. However, our loading algorithm lets viewers see the animation on screen almost instantly. For the first few seconds, playback may look a little choppy, but smoothness improves quickly as more frames load.

We use a bisection (dichotomy) algorithm: it starts by loading the first and last images in the sequence, then divides the sequence in half and loads the image in the middle. The bisection is repeated as many times as necessary until all the images are loaded.

Image frame download order
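The download order described above can be sketched as a small helper. This is an illustrative reconstruction, not Readymag's actual implementation:

```javascript
// Return the order in which to download a sequence of frameCount frames:
// first and last frame first, then the midpoints of the remaining gaps,
// pass after pass, until every index has been scheduled.
function bisectionOrder(frameCount) {
  if (frameCount <= 0) return [];
  if (frameCount === 1) return [0];
  const order = [0, frameCount - 1];
  const seen = new Set(order);
  let gaps = [[0, frameCount - 1]]; // index ranges still missing frames
  while (order.length < frameCount) {
    const nextGaps = [];
    for (const [lo, hi] of gaps) {
      const mid = Math.floor((lo + hi) / 2);
      if (!seen.has(mid)) {
        seen.add(mid);
        order.push(mid);
      }
      // Split the gap; sub-gaps of width 1 contain no unloaded frames.
      if (mid - lo > 1) nextGaps.push([lo, mid]);
      if (hi - mid > 1) nextGaps.push([mid, hi]);
    }
    gaps = nextGaps;
  }
  return order;
}
```

The first pass already gives a coarse but complete animation, and each further pass roughly doubles the temporal resolution, which is why the widget looks watchable almost immediately.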

Uploaded videos are turned into a set of images through AWS Lambda. We use the ffmpeg converter, attached to Lambda as a Layer. The video the user uploads is piped directly into ffmpeg as an input stream, so we avoid intermediate file storage and extra strain on our servers. ffmpeg converts the video into images, which are then uploaded to AWS S3.

After that, metadata about the image set (base path, dimensions, format and number of frames) is returned to the widget. High-resolution videos often produce very heavy frames, each weighing up to several megabytes, so we apply a resolution-dependent JPEG quality scale: the lower the image's resolution, the higher the quality we can afford.
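Such a quality scale might be expressed as a simple mapping. The breakpoints below are invented for illustration and are not Readymag's actual values:

```javascript
// Hypothetical resolution-to-quality curve: small frames can afford high
// JPEG quality, large frames get compressed harder to cap download size.
function jpegQualityFor(width) {
  if (width <= 640) return 90;
  if (width <= 1280) return 80;
  if (width <= 1920) return 70;
  return 60;
}
```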

lionzo.com

The project gallery seems to acquire materiality and obey its own unexpected laws of physics. Take a closer look and you'll notice that the composition of the images is quite simple and widely used in design, but the volume and the ability to interact with it set this project apart from the rest of the portfolio.

For users who aren't satisfied with the quality of converted images, we added the ability to upload a ready-made set of frames (an image sequence) in JPEG or PNG to the widget. In this case, we don't convert or resize the files. For PNG, we support an alpha-channel setting, which enables images with transparency. The sequence is displayed in the finished project as is.

Readymag users can adjust the appearance and playback of Shots. For example, they can change the frame rate depending on scroll speed or cursor movement, and adjust the start of playback depending on the widget's current position on the page.
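One way such a position-based playback setting could work, tying progress to where the widget sits in the viewport. This is a hypothetical helper, not the actual Shots settings API:

```javascript
// Progress is 0 when the widget's top edge enters at the bottom of the
// viewport and 1 when the widget has fully scrolled out at the top.
function widgetProgress(widgetTop, widgetHeight, viewportHeight) {
  const total = viewportHeight + widgetHeight; // distance travelled on screen
  const travelled = viewportHeight - widgetTop;
  return Math.min(1, Math.max(0, travelled / total));
}
```

Feeding this progress value into a frame-index lookup makes playback start exactly when the widget becomes visible, regardless of where it sits on the page.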

diagrama.co

The Shots widget helps create a focal point that sets the project's mood. It's a neat solution: the text can be read only at a certain point, which engages viewers even more.

Turning out better than imagined

Experiments with different technologies greatly shaped the final version of the Shots widget. In the course of development, the idea underwent drastic changes and eventually crystallized into a solution we hadn't considered at the beginning.

It turned out to be fundamentally better than the original idea: the Shots widget works quickly and smoothly on a variety of devices, adding a completely new type of interactivity to your projects and a new layer of customization for appearance and playback.

Photo: Ilya Shuvalov, Readymag developer

Readymag is an in-browser design tool that helps create interactive websites and online publications without the hassle of code. It works like a graphics editor with enhanced capabilities for the web: draw up the design of your project, add multimedia and animations, see how it looks on various devices, and publish it online in a single click.

