RenderNode for Bigger, Better Blurs

RenderEffects #3: Using RenderNode for faster, better blurring

Chet Haase
Android Developers
11 min read · Nov 23, 2022


The previous two articles covered, in more detail and in many more words, content that I talked very quickly about in the video Android Graphics that I made with Sumir Kataria for the recent Android Developer Summit event. This article goes beyond that content, though I did touch on it at the end of a recent conference talk at Droidcon London. So if you want the video version of this (and the previous) content, here you go:

In the previous article, I showed how to create a frosted-glass effect which both blurred and lightened a subsection of an ImageView to make a picture caption more readable. Here was the result, with the enlarged, captioned image appearing over the blurred background image gallery:

The caption is drawn with a shader which both blurs and frosts that area

But the blur used in the caption area, while valid and usable, was neither as good (because it’s not as blurry) nor as fast as we can get by using the platform’s built-in RenderEffect blur (which is already used in the background blur of the picture gallery above, as explained in the first article in this series). It sure would be nice to get a faster, blurrier version, to help pop the text out in areas like we see above at the end of the word “Ceiling,” where the black text is on top of other dark, hard-edged objects in the picture.
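As a quick reminder of what that built-in blur looks like in code, the background effect boils down to a couple of calls on the gallery's container view (a minimal sketch; galleryContainer and the radius values here are illustrative, not the exact code from the first article):

// Blur everything behind the enlarged picture (illustrative radius values)
galleryContainer.setRenderEffect(
    RenderEffect.createBlurEffect(20f, 20f, Shader.TileMode.CLAMP))

// Clear the effect again when the enlarged picture is dismissed
galleryContainer.setRenderEffect(null)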

The reason I didn’t use the “best” approach for blurring was that it is… not intuitively obvious how to do that, and I was optimizing for clear code and techniques. But now that that’s all out of the way (in the previous article), I also wanted to show how you might achieve the same effect with the built-in, and better, blur effect.

The problem(s)

The reason that the blur+frost effect isn’t straightforward comes down to the fact that I only want to shade part of that ImageView in which the enlarged picture appears. That is, I only want to blur/frost the label area at the bottom, not the entire view. RenderEffect, however, applies to the entire view; there is no way to crop the effect to just a portion of the view. So when I apply a RenderEffect blur and then the frosted-glass shader to the ImageView holding the enlarged picture, with code like this:

// The built-in blur, applied to the whole view
val blur = RenderEffect.createBlurEffect(
    30f, 30f, Shader.TileMode.CLAMP)
// The frosted-glass AGSL shader from the previous article
val shader = RenderEffect.createRuntimeShaderEffect(
    FROSTED_GLASS_SHADER, "inputShader")
// Chain the two effects together and set the result on the ImageView
val chain = RenderEffect.createChainEffect(blur, shader)
setRenderEffect(chain)

I get this result:

Applying a RenderEffect blur to a view blurs the entire view, which is… not what I wanted

This more pronounced blur is great for popping out that caption text better, but it’s… pretty awful for everything else. The user probably wants to see the picture details, and the mega blur is not helping. But blurring and frosting only a portion of the view ends up being somewhat unobvious with our current APIs.

Okay, what about using a separate View?

You might reasonably wonder (as I did when first working on the app) why I can’t simply rely on the view hierarchy to help out. That is, instead of shading the label area in the larger ImageView object, I could use a separate view sized to the caption bounds, sitting over the bottom of the ImageView, just like the existing TextView which holds the caption.

In fact, I could just use the TextView itself. Then I could blur/frost that view instead of the ImageView. Well… yes. And no. I mean, I certainly could shade the TextView and get a similar effect. Ish. But this also applies the effect to the text; shaders are run after the View renders all of its content into the view — including the text here — so I end up with something like this:

The caption effect is now being done directly in the TextView. (But it’s not being done well).

If you look closely at the label area, there are a couple of problems compared to the effect we achieved in shading a portion of the underlying ImageView. As a reminder, here’s what it should look like:

The caption effect as it should be (shader applied to underlying ImageView)

The most obvious issue is the caption text. In the first image above, the letters are washed out. This comes from the effect being applied to the entire TextView, including the text characters. Shading the TextView directly results in blurry, frosted text, which is not really what we wanted.

The other problem is a little more subtle, but very noticeable if you look at the triangular mesh of windows on the right side of the caption area. In the correct version of the image, that area is obviously (if only slightly) blurred, whereas when we use the shader on the TextView, it is not blurred at all. This happens because the shader effect runs only on the pixels of the view it is applied to, not on whatever happens to lie under those pixels on the display. From your viewing perspective it looks like the shader should affect those image pixels, but that is only because the two views are composited on top of each other on the display. At the renderer level, the contents of each view are created independently, based on that view alone, without regard to what they will sit on top of when they are drawn.

So when we apply the shader to a transparent-background TextView, the only thing that is actually blurred is the text. The transparent pixels in that view simply… remain transparent. The entire reason for the effect was to make the text more readable, so this approach is obviously taking things in the wrong direction.

The other idea I mentioned above, inserting a new view between the underlying ImageView and the top-most TextView, would fail for a similar reason. While this technique would avoid the blurry-text artifact of shading the TextView, it would still not have the correct image data to apply the blur to (since that intermediate view wouldn't contain the image data), so there would be no visible blur. The shader would be applied to whatever the color was in that placeholder view (presumably transparent pixels, as in the case of the TextView example above).

There is an additional thing we could do with an intermediate view, where we draw a cropped copy of the enlarged image into it, thus giving the shader something to actually blur and frost correctly. This would work, for the same reason that it works in the ImageView. And we wouldn’t need the cropping logic of the original shader because we are shading all of the pixels in this view. But it seems like a hack to go about manually cropping the photo and redrawing that duplicate copy to another view just to get this effect.
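If you are curious, the drawing part of that hack would look roughly like this inside the intermediate view's onDraw() (a sketch only; photo is assumed to be the same Bitmap shown in the enlarged ImageView, and captionHeight is a hypothetical constant for the strip height in bitmap pixels):

// Copy just the bottom strip of the photo into this caption-sized view,
// giving the shader real pixels to blur and frost
val src = Rect(0, photo.height - captionHeight, photo.width, photo.height)
val dst = Rect(0, 0, width, height)
canvas.drawBitmap(photo, src, dst, null)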

Maybe there’s a better way…

A Better Way… with RenderNode

What I’d really like to do (and what I tried and failed to do when I first wrote my demo app) is to chain effects together. That is, I’d like one RenderEffect using the system blur, via RenderEffect.createBlurEffect(), the same as I’m using in the underlying image gallery container. Then I want a second effect to apply a frosted-glass shader (without also applying the box blur that is in my current shader), created with RenderEffect.createRuntimeShaderEffect(). Then I could composite these effects using RenderEffect.createChainEffect(), to tell the system to apply both effects together, one after the other.

This almost works… but there is no way to specify the crop area for the label, so it simply gives a blurred/frosted look to the entire enlarged image. Again, that’s not the look I was going for.

So I cannot use chained effects. But I can do something similar by using two RenderNode objects to apply the effects manually.

So far, we have been applying our shader to an entire View. This is powerful, but has the limitation explained above where the effect is applied to, well, the entire view. So if I want an effect (such as a blur) applied only selectively, or conditionally, it is not possible. Or rather, it is possible, but only when using shader logic like I have in the current blur+frost shader, which checks the location of the current pixel and runs or skips the effect appropriately. But this per-pixel logic approach is not possible for the other RenderEffects (blur, chain, etc).
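For reference, that cropping logic is just a coordinate check inside the shader. Here is a simplified AGSL sketch of the idea (not the actual shader from the previous article; captionTop is a hypothetical uniform marking where the caption area begins):

val CROPPED_EFFECT_SHADER = RuntimeShader("""
    uniform shader inputShader;
    uniform float captionTop;  // hypothetical: y coordinate where the caption starts

    half4 main(float2 coord) {
        half4 color = inputShader.eval(coord);
        if (coord.y < captionTop) {
            // Outside the caption area: pass the pixel through untouched
            return color;
        }
        // Inside the caption area: apply the effect (here, a simple lightening)
        return mix(color, half4(1.0), 0.4);
    }
""")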

However, I can apply effects to RenderNode objects instead of Views, and use those RenderNodes to draw into a view selectively, thus achieving the goal of shading a subset of a View. RenderNode has the same setRenderEffect() API as View, so the setup for this approach is very similar.
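The parity is easy to see side by side (a sketch; blurEffect stands in for any RenderEffect you have created):

// The same effect can target an entire View...
imageView.setRenderEffect(blurEffect)
// ...or a RenderNode, whose contents and position we control ourselves
renderNode.setRenderEffect(blurEffect)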

But wait — what’s a RenderNode?

RenderNode: How Views Actually Draw Their Stuff

To quote from the reference docs on RenderNode:

RenderNodes are used internally for all Views by default and are not typically used directly.

Oh no, wait — that’s not what I meant to paste (we are most definitely going to use it directly). Here, this is better:

RenderNode is used to build hardware accelerated rendering hierarchies.

Every View object, at some point before its contents appear on the screen, records the operations and attributes to draw its contents, for delivery to the low-level renderer (Skia). It does this via RenderNode, which, as of API level 29, is exposed as public API that you can use directly. That is, you can cache commands in a RenderNode and then draw that node manually, typically to a View.

You would usually do this by recording your drawing commands into a RecordingCanvas, which you get from RenderNode.beginRecording(). You then draw to that Canvas the same as you would to a typical Canvas object, only now your drawing commands are stored in the RenderNode. You can then render that node (with those commands you stored in it) into a View with a call to Canvas.drawRenderNode().
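Here is that flow in isolation, as a toy sketch that just draws a circle (nothing from this article's app; the canvas in the last line is assumed to be the hardware-accelerated Canvas passed to some View's onDraw()):

val paint = Paint().apply { color = Color.RED }
val node = RenderNode("demo")     // the name is only used for debugging
node.setPosition(0, 0, 200, 200)  // the node needs bounds before recording

// Record drawing commands into the node
val recordingCanvas = node.beginRecording()
recordingCanvas.drawCircle(100f, 100f, 80f, paint)
node.endRecording()

// Later, replay those commands into a View's canvas
canvas.drawRenderNode(node)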

Now, back to our blur+shader example: The idea is to use two different RenderNode objects, one to render the underlying image content and one to hold the RenderEffect which blurs that content. Each will be drawn into the View separately and can be positioned to draw the effects where we want them, which will give us the crop/position capability we wanted from RenderEffect.

Let’s see how this works.

Drawing with Multiple RenderNodes

First, we create and cache two RenderNode objects, which will be reused (redrawn) whenever the view itself is drawn:

val contentNode = RenderNode("image")
val blurNode = RenderNode("blur")

(Note: “image” and “blur” are not meaningful or referred to again; they are documented as being for debugging purposes, presumably internal ones, since there is no way to access those names from the objects after they are passed in.)

Next, we need to override the onDraw() method in the ImageView where this content will appear. The purpose of overriding onDraw() is to inject the RenderNode code at the time when we are drawing the content of a view. Normally in onDraw(), we might first call the superclass’s onDraw() method to draw the standard content in a View. But in this case, we want to create a RenderNode to hold that content, so we will draw it there instead, and then use it as the source to draw from into the View:

override fun onDraw(canvas: Canvas?) {

    // Record the normal ImageView content into contentNode instead of
    // drawing it directly into the view's canvas
    contentNode.setPosition(0, 0, width, height)
    val rnCanvas = contentNode.beginRecording()
    super.onDraw(rnCanvas)
    contentNode.endRecording()

    // Draw the recorded content into the view
    canvas?.drawRenderNode(contentNode)

    // ... rest of code below
}

There are a couple of things to note above: First, we create a RecordingCanvas and ask the superclass (ImageView) to draw into that canvas. We then copy that content into the view’s canvas with a call to drawRenderNode(). This means the superclass onDraw() method is called only once; if that method has any real overhead, we pay it a single time and reuse the cached commands in the RenderNode for any further drawing.

Second, note that I could have called setUseCompositingLayer() on the RenderNode if I were redrawing from that node often, for purposes of speed and efficiency. A compositing layer caches the drawing as a bitmap (as a texture in the GPU), and future drawing operations from it become simple bitmap (texture) copies, which are very fast. The tradeoff is the extra memory consumed by that texture. In this case, I am only drawing the RenderNode twice: once to the view itself (in the code above) and a second time to the other, blurred RenderNode (in the code below). It’s not worth caching the node just for that one extra drawing operation. But it’s worth considering for your own RenderNode objects, depending on what you are doing with their content.
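For reference, enabling that layer is a single call (shown here only as a sketch; it is not used in this article's code):

// Cache this node's output in a GPU texture so repeated draws become cheap copies
contentNode.setUseCompositingLayer(true, null)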

Finally, we perform the blur. We do this by drawing from the main RenderNode into the blurred node, with an appropriate translation. This blurred node has a blur RenderEffect set on it, and is positioned and sized to be just the label area that we want blurred.

    // ... rest of code above

    // Blur only the caption strip: this node gets the blur effect and is
    // sized and positioned to cover just the label area at the bottom
    blurNode.setRenderEffect(RenderEffect.createBlurEffect(30f, 30f,
        Shader.TileMode.CLAMP))
    blurNode.setPosition(0, 0, width, 100)
    blurNode.translationY = height - 100f

    // Re-draw the recorded image content into the blur node, shifted up so the
    // bottom of the image lines up with this smaller node
    val blurCanvas = blurNode.beginRecording()
    blurCanvas.translate(0f, -(height - 100f))
    blurCanvas.drawRenderNode(contentNode)
    blurNode.endRecording()

    // Draw the blurred strip on top of the content already in the view
    canvas?.drawRenderNode(blurNode)
}

This code sets the blur RenderEffect, with a blur radius of 30 in x and y (far more, and better, than the measly 5x5 box blur of my earlier shader approach). Note that setPosition() gives this node a much smaller size than contentNode got earlier, because we only need the smaller caption area. Also note that the translationY operation moves the rendering to the bottom of the overall view, which is where the blurred caption lives.

Once again, we get a RecordingCanvas from this node. Before we draw into it (using contentNode), we back-translate from the caption location to the top of the image; this ensures that the content is positioned correctly for the larger image under the smaller/translated caption area. Finally, once the blurNode drawing is done, we render the result into the view’s canvas (on top of the existing content from contentNode) with one more call to drawRenderNode() and we’re done.

Here’s the final result. It’s pretty close to what we had originally, but you can see that the blur in the label area is much more pronounced, which helps with the readability of the caption’s text.

Better blur in the label by combining a RenderEffect blur in addition to frosted-glass shader, with two RenderNodes.

… And That’s It!

This is the end of this current series (though I reserve the option to write more shader and RenderEffect articles in the future. No promises). We managed to play around with the system blur to get a blur-behind effect that helped pop an image out from the background. Then we added AGSL shader logic to enhance the visuals in a caption for the image. Finally, we used RenderNode to take advantage of the system blur for a better (and faster!) effect, and to pare the AGSL shader logic down to just the frosted-glass part of the effect.

There are many resources out there if you want to know more about this. Here are some to start with:

  • RenderEffect: The class used to create effects for blurs, bitmaps, chains and more. These effects are set on Views or RenderNodes to change the way those objects are drawn.
  • RenderNode: The object which holds the underlying operations used to draw Views, but which can also be used directly to record and store custom drawing operations.
  • RuntimeShader: The object which holds the code for an AGSL shader.
  • AGSL: Android Graphics Shading Language. Like SkSL, but for Android. It provides a mechanism for creating very custom per-pixel drawing effects.
  • SkSL: The shading language for Skia. It’s like GLSL, but for the Skia rendering pipeline.
  • GLSL shaders: The language for fragment shaders when using OpenGL.

Play around! Create neat effects! Make better and more intuitive user interfaces! Have fun with graphics! That’s what it’s there for!

Thanks to Nader Jawad for his help in understanding and implementing the dual RenderNode technique above.
