
A simple technique for real-time video Gaussian blurs that actually seems to work.

I’ve sometimes run into cases where a user-provided video needs to be Gaussian-blurred within the web page, dynamically, with a user-specified radius (similar to what Projector does, though Projector uses WebGL 2).

I’m not very familiar with WebGL, however, and what I tried in WebGL for this purpose failed (more on that later).

So, after much trial and error, what I eventually chose was the good old 2D API: CanvasRenderingContext2D. But first, let me walk through a few of the attempts I made.

CSS filters

My first attempt was to directly style the video with the blur() CSS filter.

<video
  src={userProvidedSource}
  style={{
    filter: `blur(${userProvidedBlur}px)`
  }}
/>

However, the rendering is incorrect: the edge-most pixels become semi-transparent and bleed outside the video’s boundaries.

Simulated rendering. This is what CSS filters do.

CSS backdrop-filter

The backdrop-filter CSS property seemed like a perfect solution when I first ran into this problem.

<div className="video-box">
  <video src={userProvidedSource} className="video-box__video" />
  <div
    className="video-box__backdrop"
    style={{
      backdropFilter: `blur(${userProvidedBlur}px)`
    }}
  />
</div>

Simulated rendering. Seems perfect at first glance.
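
For reference, the snippet assumes layout styles along these lines (the post doesn’t show them, so this is my assumption): the backdrop has to sit exactly on top of the video for backdrop-filter to sample the video’s pixels.

// hypothetical styles for the classes above
const videoBoxStyle = {
  position: 'relative', // establishes the positioning context
  overflow: 'hidden'
};
const backdropStyle = {
  // cover the video completely
  position: 'absolute',
  top: 0,
  left: 0,
  right: 0,
  bottom: 0
};
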

When the blur radius grows larger, however, the rendering can go awry.

The unblurred image shows through beneath the backdrop, though I don’t quite know why.

2D canvas, first attempt

Fortunately, the CanvasRenderingContext2D.filter property supports both CSS and SVG filters.

So it seemed worth drawing the whole video onto a canvas that has a filter applied to it.

// w and h are the desired display dimensions
canvas.width = w;
canvas.height = h;
/** @type {CanvasRenderingContext2D} */
const context = canvas.getContext('2d');
context.save();
// the filter applies to every subsequent drawing operation
context.filter = `blur(${userProvidedBlur}px)`;
context.drawImage(
  video,
  0, 0, video.videoWidth, video.videoHeight, // source rectangle
  0, 0, w, h                                 // destination rectangle
);
context.restore();

Unfortunately, the edge-most pixels are still semi-transparent with this method.

A demonstration of semi-transparent pixels drawn with canvas.

2D canvas, second attempt

So I drew the edge-most pixels first, stretched outward to cover the soon-to-be-transparent fringe, and then set the canvas container’s overflow to hidden to clip the excess.

const r = userProvidedBlur * 2; // to cover the transparent pixels
context.filter = `blur(${userProvidedBlur}px)`;
// draw the corners
// draw the sides
// draw the video in the center
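
The elided drawing code might look something like the sketch below. This is my reconstruction, not the post’s original code, and drawPaddedFrame is a name I made up: the canvas gains an r-pixel margin on every side, the video’s outermost pixels are stretched into that margin so the blur never samples transparency, and the overflow: hidden container crops the margin away.

function drawPaddedFrame(ctx, video, w, h, r) {
  const vw = video.videoWidth;
  const vh = video.videoHeight;
  // corners: stretch the four corner pixels into r-by-r squares
  ctx.drawImage(video, 0, 0, 1, 1, 0, 0, r, r);                   // top-left
  ctx.drawImage(video, vw - 1, 0, 1, 1, w + r, 0, r, r);          // top-right
  ctx.drawImage(video, 0, vh - 1, 1, 1, 0, h + r, r, r);          // bottom-left
  ctx.drawImage(video, vw - 1, vh - 1, 1, 1, w + r, h + r, r, r); // bottom-right
  // sides: stretch one-pixel-wide edge strips into the margins
  ctx.drawImage(video, 0, 0, vw, 1, r, 0, w, r);                  // top
  ctx.drawImage(video, 0, vh - 1, vw, 1, r, h + r, w, r);         // bottom
  ctx.drawImage(video, 0, 0, 1, vh, 0, r, r, h);                  // left
  ctx.drawImage(video, vw - 1, 0, 1, vh, w + r, r, r, h);         // right
  // finally, the video itself, centered inside the margin
  ctx.drawImage(video, 0, 0, vw, vh, r, r, w, h);
}

// resizing the canvas resets all context state, so the filter
// must be (re)assigned after the size is set
canvas.width = w + 2 * r;
canvas.height = h + 2 * r;
context.filter = `blur(${userProvidedBlur}px)`;
drawPaddedFrame(context, video, w, h, r); // nine blurred drawImage() calls per frame
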

The rendering looks almost correct, but when called repeatedly, much of the time is wasted in the drawImage() calls, which makes this approach unsuitable for real-time video rendering. According to this article, even the fastest Gaussian blur implementation has an irreducible combined time-and-space cost proportional to the area of the blurred shape; even if maximally parallelized, that still translates to roughly O(max(width, height, blurRadius)) time. To put the area term in concrete numbers: a single pass over a 1280×720 frame touches about 920,000 pixels, which at 60 fps means over 55 million pixel accesses per second.

So repeatedly calling drawImage() on a canvas with a blur filter means repeatedly re-reading pixels that have already been drawn, which might underlie the performance issues I encountered.

WebGL, failed attempt

This brought me to WebGL rendering. Unfortunately, I know little WebGL, and I did not know (and still don’t) how to implement the aforementioned algorithm in shader code, so I ended up unnecessarily accessing the texture multiple times in each invocation of the fragment shader.

// the code below is unusable in practice
// due to severe performance problems
precision mediump float;
uniform sampler2D img;   // the current video frame
uniform vec2 texSize;    // texture size in pixels
uniform float boxSize;   // box-blur kernel width in pixels
uniform float center;    // offset from the first tap to the kernel center
uniform vec2 dir;        // (1, 0) for the horizontal pass, (0, 1) for the
                         // vertical; box blurs are separable, and repeated
                         // box passes approximate a Gaussian
uniform bool flip;       // whether to flip the texture vertically

void main() {
  vec4 ret = vec4(.0);
  vec2 px = dir / texSize; // one-texel step along the blur direction
  vec2 coord = gl_FragCoord.xy / texSize.xy;
  float comp = boxSize - .1;
  if (flip) { coord.y = 1. - coord.y; }
  float fi = .0;
  // one texture fetch per kernel tap: O(boxSize) reads per fragment,
  // which is what makes this shader so slow
  for (int i = 0; i < 200; i++) {
    float offset = fi - center;
    vec2 curCoord = coord + px * offset;
    ret += texture2D(img, curCoord);
    fi += 1.;
    if (fi >= comp) {
      gl_FragColor = vec4(ret.rgb / boxSize, 1.);
      return;
    }
  }
  gl_FragColor = ret;
}

Then I gave up.

The offscreen canvas

So I returned to optimizing my earlier 2D-canvas approach.

This time I drew the edge pixels and the video itself onto an OffscreenCanvas’s 2D rendering context first, and only then drew that single composite onto the blurred, visible canvas.

What the composite looks like before clipping and blurring.

Blur once; runs on every Chromium-based browser.
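
Put together, a frame loop along these lines would do the trick. Again, this is a sketch of my own rather than the post’s code, reusing the hypothetical drawPaddedFrame() helper from the earlier sketch; here the padding margin is cropped by drawing at negative offsets instead of an overflow: hidden wrapper.

const r = userProvidedBlur * 2;
const offscreen = new OffscreenCanvas(w + 2 * r, h + 2 * r);
const offContext = offscreen.getContext('2d');
canvas.width = w;
canvas.height = h;
const context = canvas.getContext('2d');
context.filter = `blur(${userProvidedBlur}px)`;

function renderFrame() {
  // cheap: nine unfiltered draws composite the padded frame offscreen
  drawPaddedFrame(offContext, video, w, h, r);
  // expensive but done only once per frame: a single filtered draw;
  // the negative offsets place the padding outside the visible canvas
  context.drawImage(offscreen, -r, -r);
  requestAnimationFrame(renderFrame);
}
requestAnimationFrame(renderFrame);

The key point is that the padded, fully opaque composite exists before the blur is applied, so the visible region never shows a transparent fringe.
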

When called in requestAnimationFrame() callbacks to render video frames, performance also improved over my previous 2D-canvas approach, though it remains some distance from a perfect 60 fps.

A caveat

However, this approach does not work well on non-Chromium-based browsers, such as Firefox. The OffscreenCanvas API is not supported in Firefox by default, and even if manually enabled, it does not support the 2d rendering context.

A polyfill is possible using an HTMLCanvasElement, but the performance is still poor: the resulting frame rate was about 2 fps. I’m not exactly sure what went wrong.
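
For completeness, the fallback selection might look like the sketch below (my own illustration, not code from the post): prefer OffscreenCanvas when it actually offers a 2d context, and fall back to a detached HTMLCanvasElement otherwise.

function createCompositingSurface(width, height) {
  if (typeof OffscreenCanvas !== 'undefined') {
    try {
      const offscreen = new OffscreenCanvas(width, height);
      // some builds expose OffscreenCanvas without a 2d context
      if (offscreen.getContext('2d')) return offscreen;
    } catch (e) {
      // fall through to the HTMLCanvasElement path
    }
  }
  // the slow path (e.g. Firefox at the time of writing)
  const element = document.createElement('canvas');
  element.width = width;
  element.height = height;
  return element;
}
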

On those browsers, it’s hard to achieve any performance gain, however small.
