Hit Region Detection For HTML5 Canvas And How To Listen To Click Events On Canvas Shapes

Anton Lavrenov
Feb 22, 2017

Do you need a simple onClick for your canvas shapes? Canvas doesn’t have an API for such listeners. You can listen to events only on the whole canvas, not on a part of it. I will describe two main approaches to work around this problem.

NOTE! I will not use the addHitRegion API because at the moment (in 2017) it is still unstable and not fully supported. But you may take a look at it.

Let’s start with some simple HTML5 canvas graphics. Imagine we want to draw several circles on a page.
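
A minimal sketch of that setup (the canvas element, the circles array, and the exact colors here are my assumptions; the full code is in the demo below):

const canvas = document.getElementById('canvas');
const ctx = canvas.getContext('2d');

// Keep the circles in a plain array so we can hit-test them later.
const circles = [
  { x: 40, y: 40, radius: 10, color: 'rgb(255, 0, 0)' },
  { x: 70, y: 70, radius: 10, color: 'rgb(0, 255, 0)' },
  { x: 100, y: 100, radius: 10, color: 'rgb(0, 0, 255)' },
];

circles.forEach(circle => {
  ctx.beginPath();
  ctx.arc(circle.x, circle.y, circle.radius, 0, Math.PI * 2);
  ctx.fillStyle = circle.color;
  ctx.fill();
});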

Demo: http://codepen.io/lavrton/pen/QdePBY

Now we can simply listen for clicks on the WHOLE canvas:

canvas.addEventListener('click', () => {
  console.log('canvas click');
});

But we want to listen for clicks on a circle. How do we do this? How do we detect that we clicked on a circle?

Approach #1 — use the power of math

As we have information about our circles’ coordinates and sizes, we can detect a click on a circle with trivial math. All we need is to get the mouse position from the click event and check all circles for an intersection:
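
A sketch of that check for the circles above (a point is inside a circle if its distance to the center is not greater than the radius):

canvas.addEventListener('click', (e) => {
  // Mouse position relative to the canvas.
  const rect = canvas.getBoundingClientRect();
  const mouseX = e.clientX - rect.left;
  const mouseY = e.clientY - rect.top;

  circles.forEach(circle => {
    const dx = mouseX - circle.x;
    const dy = mouseY - circle.y;
    // Inside the circle if distance to the center <= radius.
    if (dx * dx + dy * dy <= circle.radius * circle.radius) {
      console.log('circle click', circle);
    }
  });
});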

This approach is very common and widely used in many projects. You can easily find math functions for more complex shapes such as rectangles, ellipses, polygons, etc.

This approach works very well, and it can be ultra fast if you don’t have a huge number of shapes.

But it is very hard to use this approach for complex shapes, for example, lines with quadratic curves.

Approach #2 — emulating hit region

The idea of hit regions is simple — we just need to get the pixel under the clicked point and find the shape that has the same color:
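
A naive sketch of that idea, reading the clicked pixel straight from the visible canvas (this is the version the next paragraph explains is not enough):

canvas.addEventListener('click', (e) => {
  const rect = canvas.getBoundingClientRect();
  const mouseX = e.clientX - rect.left;
  const mouseY = e.clientY - rect.top;

  // Read the single pixel under the click.
  const pixel = ctx.getImageData(mouseX, mouseY, 1, 1).data;
  const color = `rgb(${pixel[0]}, ${pixel[1]}, ${pixel[2]})`;

  // Find a circle drawn with exactly that fill color.
  const clicked = circles.find(circle => circle.color === color);
  if (clicked) {
    console.log('circle click', clicked);
  }
});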

But this exact approach will not work, because we may have shapes with the same color, right? To avoid such collisions we should create a special “hit graph” canvas. It will have almost the same shapes, but each shape will be filled with a unique color. So we need to generate a random color for each circle:
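
A sketch of that setup (the hitCanvas, colorToCircle, and randomColor names are ones I made up for this example):

// A hidden "hit" canvas with the same size as the visible one.
const hitCanvas = document.createElement('canvas');
hitCanvas.width = canvas.width;
hitCanvas.height = canvas.height;
const hitCtx = hitCanvas.getContext('2d');

// Map from a unique color to its circle.
const colorToCircle = {};

function randomColor() {
  const r = Math.round(Math.random() * 255);
  const g = Math.round(Math.random() * 255);
  const b = Math.round(Math.random() * 255);
  return `rgb(${r},${g},${b})`;
}

circles.forEach(circle => {
  // Regenerate on the (unlikely) collision so every color stays unique.
  let hitColor = randomColor();
  while (colorToCircle[hitColor]) {
    hitColor = randomColor();
  }
  circle.hitColor = hitColor;
  colorToCircle[hitColor] = circle;
});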

After that, we need to draw each shape TWICE: first on the visible canvas, then on the “hit” canvas.
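
Continuing the sketch, the drawing loop could look like this:

circles.forEach(circle => {
  // 1. Visible canvas: real color.
  ctx.beginPath();
  ctx.arc(circle.x, circle.y, circle.radius, 0, Math.PI * 2);
  ctx.fillStyle = circle.color;
  ctx.fill();

  // 2. Hit canvas: same geometry, unique hit color.
  hitCtx.beginPath();
  hitCtx.arc(circle.x, circle.y, circle.radius, 0, Math.PI * 2);
  hitCtx.fillStyle = circle.hitColor;
  hitCtx.fill();
});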

Now when you click on the canvas, all you need to do is read the pixel at that point on the hit canvas and find the shape with the same color. These actions are ultra fast, and you don’t need to iterate over ALL shapes. Also, it doesn’t matter how complex your shape is. Draw whatever you want and just use a different color for each shape.
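
In the sketch above, the click handler then becomes a single pixel read and a hash lookup:

canvas.addEventListener('click', (e) => {
  const rect = canvas.getBoundingClientRect();
  const mouseX = e.clientX - rect.left;
  const mouseY = e.clientY - rect.top;

  // Read the pixel from the hidden hit canvas, not from the visible one.
  const pixel = hitCtx.getImageData(mouseX, mouseY, 1, 1).data;
  const color = `rgb(${pixel[0]},${pixel[1]},${pixel[2]})`;

  const clicked = colorToCircle[color];
  if (clicked) {
    console.log('circle click', clicked);
  }
});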

Demo with the full code:

http://codepen.io/lavrton/pen/OWKYMr

Which approach is better?

It depends. The main bottleneck of the second “hit” approach is that you have to draw shapes twice, so performance can drop by a factor of two! But drawing on the hit canvas can be simpler: you can skip shadows and strokes there, and you may simplify some shapes, for instance, replace text with just a rectangle. But AFTER drawing is finished, this approach can be ultra fast, because reading a pixel and accessing a hash of stored shapes are very fast operations.

Can they be used together?

Sure. Several canvas libraries use such a mixed approach.

It works in this way:

For each shape, you calculate a simplified bounding rectangle (x, y, width, height). Then you use the first “math” approach to filter out shapes whose bounding rectangles don’t contain the mouse position. After that, you draw the remaining candidates on the hit canvas and test the intersection with the second approach for a more accurate result.
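
A rough sketch of that combined check, reusing the names from the earlier examples (and assuming the hit canvas has already been drawn):

function findShapeAt(mouseX, mouseY) {
  // Cheap "math" filter: keep only circles whose bounding box
  // contains the mouse position.
  const candidates = circles.filter(circle => {
    const x = circle.x - circle.radius;
    const y = circle.y - circle.radius;
    const size = circle.radius * 2;
    return (
      mouseX >= x && mouseX <= x + size &&
      mouseY >= y && mouseY <= y + size
    );
  });
  if (!candidates.length) {
    return null;
  }

  // Accurate test: read the pixel from the hit canvas (approach #2).
  const pixel = hitCtx.getImageData(mouseX, mouseY, 1, 1).data;
  const color = `rgb(${pixel[0]},${pixel[1]},${pixel[2]})`;
  return candidates.find(circle => circle.hitColor === color) || null;
}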

Why not just use SVG for such a case?

Because sometimes canvas can be more performant and more appropriate for your high-level task. Again, it depends on the task, so canvas vs SVG is out of the scope of this post. And if you want to use canvas and have hit detection, you have to use something, right?

What about other events, like mousemove, mouseenter, etc.?

You just need to add some extra code to the described approaches. Once you can reliably detect the shape under the mouse, you can emulate all the other events.
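
For example, mouseenter/mouseleave can be emulated by tracking the shape under the cursor on every mousemove (a hypothetical sketch, reusing findShapeAt from the previous example):

let hoveredCircle = null;

canvas.addEventListener('mousemove', (e) => {
  const rect = canvas.getBoundingClientRect();
  const current = findShapeAt(e.clientX - rect.left, e.clientY - rect.top);

  if (current !== hoveredCircle) {
    if (hoveredCircle) {
      console.log('mouseleave', hoveredCircle);
    }
    if (current) {
      console.log('mouseenter', current);
    }
    hoveredCircle = current;
  }
});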

Are there any good ready-to-use solutions?

Sure. Just google “html5 canvas framework”. My personal recommendation is http://konvajs.github.io/. I almost forgot: I am a maintainer of this library. Konva uses only the second approach, and it supports all the mouse and touch events that we usually have for DOM elements (and even more, like drag&drop).
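
For reference, a minimal Konva-style sketch (assuming a <div id="container"></div> on the page; check the Konva docs for the exact, current API):

const stage = new Konva.Stage({
  container: 'container',
  width: 300,
  height: 300,
});
const layer = new Konva.Layer();

const circle = new Konva.Circle({
  x: 100,
  y: 100,
  radius: 30,
  fill: 'red',
});

// Shape-level events work out of the box.
circle.on('click', () => {
  console.log('circle click');
});

layer.add(circle);
stage.add(layer);
layer.draw();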
