OpenGL ES: Render a Shape (Square) in an Android App

Chirag Prajapati
Mindful Engineering
6 min read · Mar 14, 2022

OpenGL is primarily an Application Programming Interface (API) that provides a large set of functions for manipulating graphics and images.


About OpenGL

  • OpenGL uses the Graphics Processing Unit (GPU), a hardware unit dedicated to drawing 2D/3D graphics and animations at high frame rates.
  • Nearly every Android device available in the market today includes a GPU.
  • Open Graphics Library (OpenGL) provides support for high-performance 2D and 3D graphics in Android devices.
  • OpenGL is a cross-platform graphics API that specifies a standard software interface for 2D/3D graphics processing hardware.
  • OpenGL for Embedded Systems (OpenGL ES) is a flavor of the OpenGL specification intended for embedded devices such as Android.

Prerequisites

  • Android 2.2 (API level 8) or higher, which supports the OpenGL ES 2.0 API.
  • A device that supports OpenGL ES 2.0 or a higher version.

Project Setup:

OpenGL ES requirements

  • If your application requires a specific version of OpenGL ES, you must declare that requirement by adding the following settings to your manifest.
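
For example, if the app requires OpenGL ES 2.0, the declaration could look like the following sketch (the entry goes inside the <manifest> element):

    <!-- AndroidManifest.xml: the app requires OpenGL ES 2.0 (0x00020000) -->
    <uses-feature
        android:glEsVersion="0x00020000"
        android:required="true" />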

Note: OpenGL ES is natively supported on Android, so there is no need to add any dependency to use it.

Implementation:

Add GLSurfaceView in XML file
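
A minimal layout sketch that hosts the view full screen; the file name activity_main.xml and the id glSurfaceView are illustrative (the view can also be created programmatically):

    <!-- res/layout/activity_main.xml: host the GLSurfaceView full screen -->
    <android.opengl.GLSurfaceView
        xmlns:android="http://schemas.android.com/apk/res/android"
        android:id="@+id/glSurfaceView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />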

GLSurfaceView

  • Using this view, you can draw and manipulate objects with the OpenGL ES API; it is similar in function to SurfaceView. Get a reference to the GLSurfaceView and attach a renderer to it.

Get a reference of GLSurfaceView and implement the interface
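
A possible Activity wiring, assuming the layout above and a renderer class named MyRenderer (both names are illustrative):

    import android.opengl.GLSurfaceView
    import android.os.Bundle
    import androidx.appcompat.app.AppCompatActivity

    class MainActivity : AppCompatActivity() {

        private lateinit var glSurfaceView: GLSurfaceView

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            setContentView(R.layout.activity_main)

            glSurfaceView = findViewById(R.id.glSurfaceView)
            // Request an OpenGL ES 2.0 context before attaching the renderer
            glSurfaceView.setEGLContextClientVersion(2)
            glSurfaceView.setRenderer(MyRenderer())
        }
    }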

GLSurfaceView.Renderer
This interface defines the methods required for drawing graphics in a GLSurfaceView; a skeleton implementation is sketched after the list below.

  • onSurfaceCreated(): The system calls this method once, when the GLSurfaceView is initialized. Use this method to perform actions that need to happen only once, such as setting OpenGL environment parameters or initializing OpenGL graphic objects.
  • onSurfaceChanged(): The system calls this method when the GLSurfaceView geometry changes, including changes in the size of the GLSurfaceView or orientation of the device screen. Use this method to respond to changes in the GLSurfaceView container.
  • onDrawFrame(): The system calls this method on each redraw of the GLSurfaceView. Use this method as the primary execution point for drawing (and re-drawing) graphic objects.
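
A skeleton of these callbacks might look like the following; MyRenderer is an assumed class name, and the actual drawing code is filled in over the next sections:

    import android.opengl.GLES20
    import android.opengl.GLSurfaceView
    import javax.microedition.khronos.egl.EGLConfig
    import javax.microedition.khronos.opengles.GL10

    class MyRenderer : GLSurfaceView.Renderer {

        // Called once when the surface is created: set up the GL environment here
        override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
            GLES20.glClearColor(0f, 0f, 0f, 1f)
            // compile shaders, link the program, create vertex buffers here
        }

        // Called when the surface size or the screen orientation changes
        override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
            GLES20.glViewport(0, 0, width, height)
        }

        // Called for every frame: issue the draw calls here
        override fun onDrawFrame(gl: GL10?) {
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
        }
    }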

Vertex Shader

  • If you want to draw a square, you need four vertices. Let’s say (-0.5f, 0.5f), (-0.5f, -0.5f), (0.5f, -0.5f), and (0.5f, 0.5f), as shown in the coordinate system below; a buffer holding these values is sketched after this list.
Figure: the four square vertices plotted on a coordinate grid (image: Desmos)
  • The code written in the vertex shader runs once per vertex, so to draw a square shape the vertex shader will run 4 times.
  • A vertex shader is used to emit the final position of the vertex.
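
One way to hold this vertex data is a direct FloatBuffer; the names below are chosen to match the ones referenced later in this article:

    import java.nio.ByteBuffer
    import java.nio.ByteOrder
    import java.nio.FloatBuffer

    // 2D coordinates: one (x, y) pair per vertex, listed counter-clockwise
    const val COORDINATES_PER_VERTEX = 2
    const val VERTEX_STRIDE = COORDINATES_PER_VERTEX * 4   // 4 bytes per float

    val quadrantCoordinates = floatArrayOf(
        -0.5f,  0.5f,   // top left
        -0.5f, -0.5f,   // bottom left
         0.5f, -0.5f,   // bottom right
         0.5f,  0.5f    // top right
    )

    // Native-order float buffer that OpenGL can read directly
    val quadrantCoordinatesBuffer: FloatBuffer =
        ByteBuffer.allocateDirect(quadrantCoordinates.size * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .apply {
                put(quadrantCoordinates)
                position(0)
            }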

Use this data to set the position of the vertex in a vertex shader file.
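
A sketch of the vertex shader source and of the code that points the shader’s position attribute at the buffer above; the attribute name vPosition, the helper bindSquarePosition(), and the shaderProgram parameter (the linked program from the Shader Program section below) are assumptions:

    import android.opengl.GLES20

    // Vertex shader: runs once per vertex and emits the final vertex position
    val vertexShaderCode = """
        attribute vec4 vPosition;
        void main() {
            gl_Position = vPosition;
        }
    """.trimIndent()

    // Bind the square's vertex buffer to the shader's position attribute
    fun bindSquarePosition(shaderProgram: Int): Int {
        val quadPositionHandle = GLES20.glGetAttribLocation(shaderProgram, "vPosition")
        GLES20.glEnableVertexAttribArray(quadPositionHandle)
        GLES20.glVertexAttribPointer(
            quadPositionHandle,
            COORDINATES_PER_VERTEX,   // 2 floats per vertex
            GLES20.GL_FLOAT,
            false,                    // coordinates are not normalized
            VERTEX_STRIDE,            // 8 bytes from one vertex to the next
            quadrantCoordinatesBuffer
        )
        return quadPositionHandle
    }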

  • Here, quadPositionHandle is an attribute handle (index) used to pass the quadrant (square) vertex data from the class to the vertex shader.
  • COORDINATES_PER_VERTEX is the number of coordinates in one vertex.
  • VERTEX_STRIDE is the number of bytes from the start of one vertex to the start of the next. We’re using float values for the coordinates, so multiply the coordinates per vertex by the size of a float. In our case the square uses 2D coordinates, so vertexStride = 2 coordinates * 4 bytes (size of a float) = 8 bytes.
  • quadrantCoordinatesBuffer is the float buffer of quadrant float array data.
  • gl_Position is a built-in variable used to pass the new value for the current vertex.

Fragment Shader

  • A fragment shader executes once per fragment and emits the color of a pixel. A pixel is a screen element.
  • A fragment is the portion of a given geometric primitive that covers (at least partly) a pixel.
  • gl_FragColor is a built-in variable used to pass the color of the fragment.

The fragment shader will emit the color of the fragment. So in order to draw a square with red fragments, the color vector will be (1.0, 0.0, 0.0, 1.0) -> (r, g, b, a).
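
A minimal fragment shader sketch that hard-codes that red color (the color could just as well be passed in as a uniform):

    // Fragment shader: runs once per fragment and emits a solid red (r, g, b, a) color
    val fragmentShaderCode = """
        precision mediump float;
        void main() {
            gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
        }
    """.trimIndent()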

Shader Program

  • A shader program object is the final linked version of multiple shaders combined.
  • When linking the shaders into a program, it links the outputs of each shader to the inputs of the next shader. Here, the output of the vertex shader will be the input of the fragment shader.
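
A sketch of compiling both shaders and linking them into a single program object (loadShader() and shaderProgram are assumed names):

    import android.opengl.GLES20

    // Compile a single shader (vertex or fragment) from its GLSL source
    fun loadShader(type: Int, shaderCode: String): Int {
        val shader = GLES20.glCreateShader(type)
        GLES20.glShaderSource(shader, shaderCode)
        GLES20.glCompileShader(shader)
        return shader
    }

    // Link the vertex and fragment shaders into one program object
    val shaderProgram = GLES20.glCreateProgram().also { program ->
        GLES20.glAttachShader(program, loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode))
        GLES20.glAttachShader(program, loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode))
        GLES20.glLinkProgram(program)
    }

Call GLES20.glUseProgram(shaderProgram) before drawing so that this program becomes part of the current rendering state.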

Mapping real-world coordinates to the OpenGL coordinate system:

Model Matrix

  • The model matrix defines the orientation, size, and position of the object in the scene.
  • The model matrix transforms the vertex position of the object to world space.

View Matrix (Camera View)

  • The view matrix describes the position and orientation from which the screen is looked at.
  • The view matrix transforms the world space into the view space. This transformation adjusts the coordinates of the drawn objects based on the camera view position.

Projection Matrix

  • The projection matrix describes how the 3D points of the scene are mapped to the 2D points of the screen.
  • The projection matrix transforms from view space to clip space and the coordinates in the clip space are transformed to the normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1).
  • This transformation adjusts the coordinates of the drawn objects according to the height/width ratio of the GLSurfaceView.

Android devices come with different screen resolutions. The problem is that OpenGL assumes a square, uniform coordinate system and draws the shape onto it, but the actual device screen is not square, so the shape ends up stretched, like this:

Default OpenGL coordinate system (left) mapped to a typical Android device screen (right). Image credits to https://developer.android.com/guide/topics/graphics/opengl

To solve this issue, you need to apply the projection and camera view matrices to transform the coordinate system, so your shape has the correct proportions on any device screen.

How to apply projection and camera view matrix?

  1. Create projection and view (camera view) matrices
  • The projection matrix is calculated in the onSurfaceChanged() method of the GLSurfaceView.Renderer class.
  • The camera view (view) matrix is calculated in the onDrawFrame() method of the GLSurfaceView.Renderer class.
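
Sketched against the MyRenderer skeleton from earlier (uses android.opengl.Matrix), with illustrative frustum and camera values:

    private val projectionMatrix = FloatArray(16)
    private val viewMatrix = FloatArray(16)

    override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
        GLES20.glViewport(0, 0, width, height)
        // Projection based on the surface's width/height ratio
        val ratio = width.toFloat() / height
        Matrix.frustumM(projectionMatrix, 0, -ratio, ratio, -1f, 1f, 3f, 7f)
    }

    override fun onDrawFrame(gl: GL10?) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
        // Camera at (0, 0, 3) looking at the origin, with +Y pointing up
        Matrix.setLookAtM(viewMatrix, 0, 0f, 0f, 3f, 0f, 0f, 0f, 0f, 1f, 0f)
        // drawing continues in the next steps
    }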

2. Create a view projection matrix handler in the application

  • The following code shows how to create a view projection matrix handler to access the uVPMatrix variable in the application, which is used to apply the projection and view matrices to the coordinates. Make the change shown below in the onSurfaceCreated() method of the GLSurfaceView.Renderer class.
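
A sketch of that change, assuming the linked shaderProgram from earlier and a field named vpMatrixHandle:

    private var vpMatrixHandle: Int = 0

    override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
        GLES20.glClearColor(0f, 0f, 0f, 1f)
        // compile and link shaderProgram as shown in the Shader Program section
        // Handle (index) of the uVPMatrix uniform declared in the vertex shader
        vpMatrixHandle = GLES20.glGetUniformLocation(shaderProgram, "uVPMatrix")
    }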

3. Apply projection and view matrices

  • To apply projection and camera view matrices, multiply them with each other. The following example code shows how to modify the onDrawFrame() method of a GLSurfaceView.Renderer class.
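
A sketch of the combined matrix being computed and uploaded to the shader on each frame (vpMatrix and vpMatrixHandle are the assumed names from the previous steps):

    private val vpMatrix = FloatArray(16)

    override fun onDrawFrame(gl: GL10?) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
        Matrix.setLookAtM(viewMatrix, 0, 0f, 0f, 3f, 0f, 0f, 0f, 0f, 1f, 0f)

        // Combine projection and camera view into a single view-projection matrix
        Matrix.multiplyMM(vpMatrix, 0, projectionMatrix, 0, viewMatrix, 0)

        GLES20.glUseProgram(shaderProgram)
        // Upload the combined matrix to the uVPMatrix uniform in the vertex shader
        GLES20.glUniformMatrix4fv(vpMatrixHandle, 1, false, vpMatrix, 0)

        // bind the vertex attribute and draw the square (see Draw a shape)
    }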

4. Add view projection matrix to the vertex shader

  • Here, the uVPMatrix variable allows you to apply the view projection matrix to shape position coordinates.
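
The updated vertex shader source could then look like this (vPosition is the assumed attribute name from earlier):

    val vertexShaderCode = """
        uniform mat4 uVPMatrix;
        attribute vec4 vPosition;
        void main() {
            // Apply the view-projection matrix to every vertex position
            gl_Position = uVPMatrix * vPosition;
        }
    """.trimIndent()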

Draw a shape

  • glDrawElements() draws a sequence of primitives by hopping around vertex arrays with the associated array indices.
  • It reduces both the number of function calls and the number of vertices to transfer.
  • glDrawElements() requires 4 parameters: the type of primitive, the number of indices in the index array, the data type of the index array, and the index array (buffer) itself. In this example the parameters are GL_TRIANGLES, 6 (two triangles of 3 indices each for the square), GL_UNSIGNED_SHORT, and the index buffer respectively.
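
A sketch of the index buffer and the draw call for the square (indexBuffer and drawSquare() are assumed names):

    import android.opengl.GLES20
    import java.nio.ByteBuffer
    import java.nio.ByteOrder
    import java.nio.ShortBuffer

    // Two triangles, (0, 1, 2) and (0, 2, 3), that together cover the square
    val indices = shortArrayOf(0, 1, 2, 0, 2, 3)

    val indexBuffer: ShortBuffer =
        ByteBuffer.allocateDirect(indices.size * 2)   // 2 bytes per short
            .order(ByteOrder.nativeOrder())
            .asShortBuffer()
            .apply {
                put(indices)
                position(0)
            }

    // Draw the square as two indexed triangles; call this from onDrawFrame()
    fun drawSquare() {
        GLES20.glDrawElements(
            GLES20.GL_TRIANGLES,          // type of primitive
            indices.size,                 // number of indices (6)
            GLES20.GL_UNSIGNED_SHORT,     // data type of the index array
            indexBuffer                   // the index buffer itself
        )
    }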
