A very late update, and read this before anything else in the post: as of Lollipop the most practical way of handling this is to use View#setClipToOutline() (and set the desired outline). It doesn't support rounding individual corners and isn't available on KitKat and below, but the setup is really minimal. The docs say clipping to an outline is a relatively expensive operation, but CardView uses it and is widely used in scrollable lists across Google's apps, so it's probably good enough for most use cases.
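For reference, a minimal sketch of the setClipToOutline() setup mentioned above; the 16f corner radius is just an example value (you'd normally convert from dp):

```java
import android.graphics.Outline;
import android.view.View;
import android.view.ViewOutlineProvider;

class CornerClipper {
    // Rounds all four corners of any view (API 21+) by clipping it
    // to a round-rect outline.
    static void roundCorners(View view, final float radiusPx) {
        view.setOutlineProvider(new ViewOutlineProvider() {
            @Override
            public void getOutline(View v, Outline outline) {
                outline.setRoundRect(0, 0, v.getWidth(), v.getHeight(), radiusPx);
            }
        });
        view.setClipToOutline(true);
    }
}
```

Because the outline is a single round rect, all four corners share the same radius, which is exactly the limitation mentioned above.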
Quick note: I'm writing this mostly to structure what I've learnt while trying to implement rounded corners for video on Android over the last couple of days. While I'm happy to share the findings, the article will contain imprecisions and errors throughout, so take what's written here with caution and don't hesitate to contact me directly or leave a comment when you find something wrong.
The Android UI framework has quite a few things that I'm thankful for as a developer (I love being able to tint an ImageView, or any asset really, with setColorFilter, or tween any object property with ObjectAnimator), but there are also some significant missing features. The ones I miss the most are probably the ability to set rounded corners or drop shadows at the View level. Every Android dev must have looked into rounding the corners of an ImageView at least once, and knows how tricky it can be to get right. And it shouldn't be that hard, at least not corner rounding: a View is ultimately rendered into a texture that is mapped onto some geometry on the GPU, so the framework would just need to create the right geometry to enable rounded corners (at the expense of drawing a few more triangles).
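To illustrate the two conveniences mentioned above, a quick sketch (the color value and duration are arbitrary examples):

```java
import android.animation.ObjectAnimator;
import android.graphics.PorterDuff;
import android.widget.ImageView;

class FrameworkNiceties {
    // Tint an ImageView's content and fade it in; both are
    // one-liners thanks to the framework.
    static void tintAndFadeIn(ImageView icon) {
        // Tint every opaque pixel of the drawable blue.
        icon.setColorFilter(0xFF2196F3, PorterDuff.Mode.SRC_IN);
        // Animate the "alpha" property from 0 to 1 over 300 ms.
        ObjectAnimator.ofFloat(icon, "alpha", 0f, 1f)
                .setDuration(300)
                .start();
    }
}
```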
Approaches to video corner rounding
I have thought on and off about how to implement rounded corners on video views for the last few months now and came up with three ways to go about it: 1) draw corner overlays that give you the impression that the view is rounding its corners, 2) apply some kind of mask to the content (e.g. through a GLSL shader that takes a vignette-like image as input, or by using the stencil buffer), or 3) control the GPU geometry onto which the video content is mapped. While 1) sounds easy and fast, it is only convenient to implement if the background is static and you don't plan to translate, scale, rotate or otherwise animate the video container or any of the overlapping views. If that's the case, 1) is perfectly reasonable, but it's still not a generic approach. 2) looks promising, but dealing with the mask in a separate image might be tricky, and I'm not sure how well the stencil buffer would handle this (admittedly because I don't know enough about it). So we're left with 3), which to me sounds worth a try. We'll simply change the geometry that is ultimately used to put pixels on the screen, and all we need to do is create that geometry correctly. Clean and easy, right? Well, maybe… let's give it a try.
Displaying video on Android
Assuming that you are going to use the MediaPlayer to handle video decoding for you (there are also ExoPlayer, MediaCodec and MediaExtractor if you need lower-level access, but I don't think it makes sense to tie you to those just for the sake of rounding corners), you basically need a Surface (or a SurfaceHolder, which is an object that holds a Surface) where the video frames can be shown as the MediaPlayer decodes them. The docs describe a Surface as:
Handle onto a raw buffer that is being managed by the screen compositor.
Which sounds a bit generic, but gives you an idea: something that lets you access a block of memory somewhere on the system where pixels that can be shown on the screen are stored. Actually, as these other docs state, it isn't quite so simple; there is a BufferQueue entity in the middle managing how and when the produced buffers are allocated and consumed.
So we need a Surface. There are two ways of obtaining one: 1) add a SurfaceView, or any of its extensions (GLSurfaceView, VideoView), to your layout and take the Surface those views create, or 2) create one yourself. To obtain a reference to the Surface created by a SurfaceView you can call SurfaceView.getHolder().getSurface(), while to create one ourselves we need a SurfaceTexture (see the public Surface constructor), which is basically a handle to a texture on the GPU.
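The two ways of obtaining a Surface can be sketched as follows (note that the Surface a SurfaceView manages is only valid once surfaceCreated() has been called on its holder):

```java
import android.graphics.SurfaceTexture;
import android.view.Surface;
import android.view.SurfaceView;

class SurfaceSources {
    // Option 1: take the Surface that a SurfaceView already manages.
    static Surface fromSurfaceView(SurfaceView surfaceView) {
        return surfaceView.getHolder().getSurface();
    }

    // Option 2: build a Surface yourself from a SurfaceTexture.
    // texName must be the id of an OpenGL texture you generated
    // (e.g. with glGenTextures) on a thread with a GL context.
    static Surface fromGlTexture(int texName) {
        SurfaceTexture surfaceTexture = new SurfaceTexture(texName);
        return new Surface(surfaceTexture);
    }
}
```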
SurfaceTexture(int texName): Construct a new SurfaceTexture to stream images to a given OpenGL texture.
Thus, in order to create a SurfaceTexture you need the id of a texture on the GPU (or you can simply add a TextureView to your layout and it will create one for you). Summing it all up, we're left with three different options to show video on the screen (assuming the MediaPlayer is used):
1) add a SurfaceView/VideoView to the layout and attach its Surface to the MediaPlayer,
2) add a TextureView to the layout, create a Surface with its SurfaceTexture and attach it to the MediaPlayer,
3) add a GLSurfaceView to the layout, generate a texture on the GPU, create a SurfaceTexture with it and then a Surface from that. Attach the Surface to the MediaPlayer so it decodes frames into the texture, and finally draw the texture on the GLSurfaceView.
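Option 2 above is the least code of the three; a sketch of the wiring (error handling and player preparation omitted):

```java
import android.graphics.SurfaceTexture;
import android.media.MediaPlayer;
import android.view.Surface;
import android.view.TextureView;

class TextureViewPlayback {
    // Wait for the TextureView's SurfaceTexture, wrap it in a Surface
    // and hand it to an already-prepared MediaPlayer.
    static void playInto(TextureView textureView, final MediaPlayer player) {
        textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
            @Override
            public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {
                player.setSurface(new Surface(st));
                player.start();
            }

            @Override
            public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}

            @Override
            public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
                return true; // we don't need the texture after the view is gone
            }

            @Override
            public void onSurfaceTextureUpdated(SurfaceTexture st) {}
        });
    }
}
```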
The first two are of course the most convenient, but we don't get any control over how the video is mapped to the screen (it would be great if TextureView let you define your own geometry and shaders), nor do we get access to where the frames are stored so we could modify them. The third approach gives us both: we know the id of the texture where the frames are being stored, and we're also mapping that texture to the screen ourselves, so we have full control over what we draw. It is a little more code, but let's see how to do it.
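The GL-side setup for the third approach can be sketched as below. This must run on the GL thread (e.g. in GLSurfaceView.Renderer#onSurfaceCreated); note that video frames land in a GL_TEXTURE_EXTERNAL_OES texture, which the renderer then samples via a samplerExternalOES uniform in its fragment shader:

```java
import android.graphics.SurfaceTexture;
import android.media.MediaPlayer;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

class VideoTextureSetup {
    // Generate an external GL texture and route the MediaPlayer's
    // output into it via a SurfaceTexture.
    static SurfaceTexture attachPlayerToTexture(MediaPlayer player) {
        int[] textures = new int[1];
        GLES20.glGenTextures(1, textures, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[0]);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        SurfaceTexture surfaceTexture = new SurfaceTexture(textures[0]);
        Surface surface = new Surface(surfaceTexture);
        player.setSurface(surface);
        surface.release(); // the player keeps its own reference

        // Each frame, call surfaceTexture.updateTexImage() on the GL
        // thread before drawing the quad that samples this texture.
        return surfaceTexture;
    }
}
```

From here the remaining work is exactly the point of the approach: defining the quad's geometry ourselves, which is what will let us round the corners.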