
3D Graphics With OpenGL in Android

Dayo Banjo
Published in CodeX · Mar 28, 2021


Computer graphics is a central part of our lives: in movies, games, computer-aided design, virtual simulators, visualization, and even imaging products and cameras. When we play a 3D game, have a virtual reality experience, or interact with a complex data visualization, the geometry that composes the scene must be redrawn a few dozen times per second on the device. Besides the geometry, which consists of points, lines, and polygons, we typically also work in a 3D scene with textures, lighting, and virtual cameras to control the appearance of shapes and objects and to change our perspective within the scene.

OpenGL is an application programming interface for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit to achieve hardware-accelerated rendering. An OpenGL object is made up of primitives (such as triangles, quads, polygons, points, and lines), and a primitive is defined by one or more vertices. Android supports OpenGL both through its framework API and through the Native Development Kit (NDK). This article focuses on the Android framework API.

There are two foundational classes in the Android framework that let us create and manipulate graphics with the OpenGL ES API: GLSurfaceView and GLSurfaceView.Renderer. We will use OpenGL to create 3D shapes and animation. Before we can draw an object to the screen, we need to send it through the OpenGL pipeline, and to do this we use small subroutines known as shaders. Shaders tell the graphics processing unit (GPU) how to draw our data; they are small programs that are executed in parallel on the GPU for each vertex (vertex shader) or each fragment (fragment shader). There are two types, sketched in code after the list below:

1. A vertex shader: This generates the final position of each vertex and is run once per vertex. Once the final positions are known, OpenGL will take the visible set of vertices and assemble them into points, lines, and triangles.

2. A fragment shader: This generates the final color of each fragment of a point, line, or triangle and is run once per fragment. A fragment is a small, rectangular area of a single color, analogous to a pixel on a computer screen.
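
As a concrete sketch, a minimal vertex and fragment shader pair for OpenGL ES 2.0 can be embedded as Java strings and compiled with GLES20. The shader source and the compileShader helper below are illustrative, not taken from the article's project:

    import android.opengl.GLES20;

    // Illustrative GLSL ES 2.0 shader sources, embedded as Java strings.
    static final String VERTEX_SHADER =
          "attribute vec4 a_Position;      \n"   // per-vertex position attribute
        + "void main() {                   \n"
        + "    gl_Position = a_Position;   \n"   // final position of this vertex
        + "}                               \n";

    static final String FRAGMENT_SHADER =
          "precision mediump float;        \n"
        + "uniform vec4 u_Color;           \n"   // color supplied by our program
        + "void main() {                   \n"
        + "    gl_FragColor = u_Color;     \n"   // final color of this fragment
        + "}                               \n";

    // Compiles one shader of the given type
    // (GLES20.GL_VERTEX_SHADER or GLES20.GL_FRAGMENT_SHADER).
    static int compileShader(int type, String source) {
        int shaderId = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shaderId, source);
        GLES20.glCompileShader(shaderId);
        return shaderId;
    }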

The Graphics Pipeline in OpenGL

[Figure: the graphics pipeline in OpenGL]

Vertex Processor: Much as a drawing on paper connects points and lines into common geometric shapes, vertices are the points from which geometry is built. A vertex is simply a point representing one corner of a geometric object, with various attributes associated with that point. The most important attribute is the position, which represents where the vertex is located in space.

Vertex processing involves taking a model from model space to world space, transforming the world into camera (view) space, and then projecting it onto the device's screen. Various transformations can be applied via matrix operations (scaling, rotation, and translation), as well as projection (orthographic or perspective), to render the object into normalized device coordinates.
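
A hedged sketch of these matrix operations using Android's android.opengl.Matrix helper; the variable names and values here are mine, and Matrix.perspectiveM requires API 14 (Matrix.frustumM is the older alternative):

    import android.opengl.Matrix;

    static float[] buildMvpMatrix(int width, int height) {
        float[] modelMatrix = new float[16];
        float[] viewMatrix = new float[16];
        float[] projectionMatrix = new float[16];
        float[] temp = new float[16];
        float[] mvpMatrix = new float[16];

        // Model: scale, rotate, and translate the object into world space.
        Matrix.setIdentityM(modelMatrix, 0);
        Matrix.translateM(modelMatrix, 0, 0f, 0f, -2f);
        Matrix.rotateM(modelMatrix, 0, 45f, 0f, 1f, 0f);
        Matrix.scaleM(modelMatrix, 0, 1f, 1f, 1f);

        // View: place the virtual camera (eye, center, up).
        Matrix.setLookAtM(viewMatrix, 0, 0f, 1.2f, 2.2f, 0f, 0f, 0f, 0f, 1f, 0f);

        // Projection: perspective projection into normalized device coordinates.
        Matrix.perspectiveM(projectionMatrix, 0, 45f, width / (float) height, 1f, 10f);

        // Combined model-view-projection matrix, applied in the vertex shader.
        Matrix.multiplyMM(temp, 0, viewMatrix, 0, modelMatrix, 0);
        Matrix.multiplyMM(mvpMatrix, 0, projectionMatrix, 0, temp, 0);
        return mvpMatrix;
    }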

Rasterization

This stage determines the pixels covered by a primitive (e.g., a triangle) and interpolates the output variables of the vertex shader (i.e., varying variables and depth) for each covered pixel.

[Image from Wikipedia]

Fragment Processing

A fragment shader is the shader stage that processes a fragment generated by rasterization into a set of colors and a single depth value. The fragment shader is the OpenGL pipeline stage after a primitive is rasterized: for each sample of the pixels covered by a primitive, a “fragment” is generated. Lighting, shading, interpolation, and texture mapping are done here.

[Image from Vispy]

Output Merging

Once the final colors are generated, OpenGL will write them into a block of memory known as the frame buffer, and Android will then display this frame buffer on the screen.

Let's jump into some coding.

Setting up the Environment

For backward compatibility we'll use OpenGL ES 2.0, which is compatible with API 10 (Android 2.3.3, Gingerbread) and above.
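
It is also worth checking at run time that the device actually supports ES 2.0 before creating the surface view. A minimal sketch, assuming we are inside an Activity:

    import android.app.ActivityManager;
    import android.content.Context;
    import android.content.pm.ConfigurationInfo;

    // Query the supported OpenGL ES version (0x20000 corresponds to ES 2.0).
    ActivityManager activityManager =
            (ActivityManager) getSystemService(Context.ACTIVITY_SERVICE);
    ConfigurationInfo configurationInfo = activityManager.getDeviceConfigurationInfo();
    boolean supportsEs2 = configurationInfo.reqGlEsVersion >= 0x20000;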

Creating an Instance of GLSurfaceView

GLSurfaceView takes care of the grittier aspects of OpenGL initialization, such as configuring the display and rendering on a background thread, and it helps us tie rendering to the standard Android activity life cycle events such as onCreate() and onDestroy(). Rendering is done on a special area of the display called a surface, which is also sometimes referred to as a viewport.
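
A minimal setup in the activity's onCreate(), assuming a Renderer implementation named MyRenderer (the name is a placeholder):

    import android.opengl.GLSurfaceView;
    import android.os.Bundle;

    GLSurfaceView glSurfaceView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        glSurfaceView = new GLSurfaceView(this);
        glSurfaceView.setEGLContextClientVersion(2); // request an OpenGL ES 2.0 context
        glSurfaceView.setRenderer(new MyRenderer()); // rendering starts on a background thread
        setContentView(glSurfaceView);
    }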

To handle Android's activity life cycle events, we add methods to pause and resume in congruence with the activity life cycle, so that our surface view can properly pause and resume the background rendering thread as well as release and renew the OpenGL context. If we don't, our application may crash or get killed by Android.
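
The forwarding itself is two short overrides in the activity:

    @Override
    protected void onPause() {
        super.onPause();
        glSurfaceView.onPause();   // pauses the rendering thread, releases the GL context
    }

    @Override
    protected void onResume() {
        super.onResume();
        glSurfaceView.onResume();  // renews the GL context and resumes rendering
    }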

Creating the Renderer

The renderer is responsible for making OpenGL calls to render a frame. The renderer methods will be called on a separate thread by the GLSurfaceView. The GLSurfaceView renders continuously by default, usually at the display's refresh rate, but we can also configure the surface view to render only on request by calling GLSurfaceView.setRenderMode() with GLSurfaceView.RENDERMODE_WHEN_DIRTY as the argument. Since Android's GLSurfaceView does rendering in a background thread, we must be careful to make OpenGL calls only within the rendering thread and Android UI calls only within Android's main thread. We can call queueEvent() on our instance of GLSurfaceView to post a Runnable onto the background rendering thread.
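
For example, a sketch using the calls named above:

    // Render only when requestRender() is called, instead of continuously.
    glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);

    // Post work onto the background rendering thread, where OpenGL calls are safe.
    glSurfaceView.queueEvent(new Runnable() {
        @Override
        public void run() {
            // OpenGL calls go here, on the rendering thread.
        }
    });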

Creating our Objects

OpenGL expects us to send all of our vertices in a single array. A Vertex Array Object (VAO) is an OpenGL object that stores the state needed to supply vertex data: the format of the vertex data as well as the buffer objects providing the vertex data arrays. We are going to create a scene that has a floor, ball, wall, splash, and drop. We'll define vertices for the objects of our scene as follows.
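
The article's original listing is not reproduced here; as an illustrative sketch, the floor could be defined as two triangles in a flat array of (x, y, z) positions:

    // Hypothetical vertex attribute array: a square floor built from two triangles.
    float[] floorVertices = {
        // Triangle 1
        -1f, 0f, -1f,
        -1f, 0f,  1f,
         1f, 0f,  1f,
        // Triangle 2
        -1f, 0f, -1f,
         1f, 0f,  1f,
         1f, 0f, -1f,
    };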

We define our vertex data using a sequential list of floating-point numbers so that we can store positions with decimal points. This array is our vertex attribute array. A float in Java has 32 bits of precision, while a byte has 8 bits, so there are 4 bytes in every float. A FloatBuffer is used to store the data in native memory: Java provides a special set of classes that allocate a block of native memory and copy our data into it. This native memory is accessible to the native environment, and it is not managed by the garbage collector. In the storeVertexData(..) method we bind the buffer with glBindBuffer(), specifying the buffer ID, and then transfer the data into the buffer object with a call to glBufferData().
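
A sketch of what storeVertexData(..) might look like; the exact signature in the article's project is not shown, so this shape is an assumption:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;

    static final int BYTES_PER_FLOAT = 4;

    static int storeVertexData(float[] vertexData) {
        // Copy the Java array into a block of native memory, outside the
        // reach of the garbage collector, in the platform's native byte order.
        FloatBuffer vertexBuffer = ByteBuffer
                .allocateDirect(vertexData.length * BYTES_PER_FLOAT)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer()
                .put(vertexData);
        vertexBuffer.position(0);

        // Create a buffer object, bind it by ID, and upload the data.
        int[] buffers = new int[1];
        GLES20.glGenBuffers(1, buffers, 0);
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
        GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
                vertexData.length * BYTES_PER_FLOAT, vertexBuffer, GLES20.GL_STATIC_DRAW);
        return buffers[0];
    }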

We implement the methods defined by the Renderer interface (a minimal skeleton follows the list):

onSurfaceCreated(GL10 glUnused, EGLConfig config): GLSurfaceView calls this when the surface is created. This happens the first time our application is run, and it may also be called when the device wakes up or when the user switches back to our activity.

onSurfaceChanged(GL10 glUnused, int width, int height): GLSurfaceView calls this after the surface is created and whenever the size has changed. A size change can occur when switching from portrait to landscape and vice versa.

onDrawFrame(GL10 glUnused): GLSurfaceView calls this when it's time to draw a frame. We must draw something, even if it's only to clear the screen; the rendering buffer will be swapped and displayed on the screen after this method returns, so if we don't draw anything, we'll probably see a bad flickering effect.
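
A minimal skeleton tying the three callbacks together (the class name is a placeholder):

    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;
    import android.opengl.GLES20;
    import android.opengl.GLSurfaceView;

    class MyRenderer implements GLSurfaceView.Renderer {
        @Override
        public void onSurfaceCreated(GL10 glUnused, EGLConfig config) {
            // One-time setup: clear color, shader compilation, buffer uploads.
            GLES20.glClearColor(0f, 0f, 0f, 1f);
        }

        @Override
        public void onSurfaceChanged(GL10 glUnused, int width, int height) {
            // Match the OpenGL viewport to the new surface size.
            GLES20.glViewport(0, 0, width, height);
        }

        @Override
        public void onDrawFrame(GL10 glUnused) {
            // Clear the screen on every frame, then issue draw calls.
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        }
    }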
