Rendering 3D objects using Three.js in Android

Hetal Dave
Globant

Rendering 3D objects in applications provides an interactive experience for users. 3D can be used everywhere: to showcase products on an e-commerce site, develop a game, create virtual assistants, build stunning landing pages, and more!

Since handling WebGL (JavaScript API for rendering interactive 2D and 3D graphics within any compatible web browser) can be complex, libraries such as Three.js come into the picture to provide a simplified way to deal with rendering, interacting, and animating 3D objects while doing all the heavy lifting behind the scenes.

Three.js can still overwhelm developers who have never dealt with 3D objects before. Hopefully, this article will give you a basic understanding of 3D terminology and of rendering 3D objects within an Android app.

Why Three.js?

Imagine a scenario where you want to ship a 3D feature in your web, Android, and iOS applications. Using the Three.js library, we can implement the 3D rendering logic in just one place and reuse it across all platforms, avoiding separate development efforts for each.

Before setting up Three.js with Android, let’s learn some basic terminology used in the 3D world.

3D geometry

A 3D geometry is a set of instructions that describes “how to create a 3D shape”. It consists of vertices, edges, polygons, faces, and surfaces. Refer to the image below to understand it better.

Source: https://en.wikipedia.org/wiki/Polygon_mesh

Luckily, Three.js provides us with some built-in geometry classes, such as BoxGeometry, CylinderGeometry, SphereGeometry, and much more…

For example, to create a simple box, we just need to provide the width, height, and depth. We don’t need to dig into the complexities of defining all the faces ourselves.

The scene

A scene is like a container that holds all the objects used to render 3D images. Think about a theater stage, where we place the set, the actors, and the lighting. In the same way, we can assign 3D objects and lighting to a scene.

Materials

Imagine holding two balls in your hands, one made of plastic and the other made of rubber. Both have the same shape, but they look different and have different physical attributes. In the 3D world, we use materials to define the attributes of 3D objects.

With Three.js, we can either create our materials with the help of the built-in `Material` classes or load materials externally using MaterialLoader.

Three.js also supports texture maps, which are image files that can be used to give materials more detail and make them look realistic.

Mesh

A Mesh is an object that represents part of the model. It contains geometry and material. A 3D model can be made from multiple meshes.

Refer to the code below to understand all the terminology better.

// Creating a scene
const scene = new THREE.Scene();
const geometry = new THREE.BoxGeometry(10, 10, 10);
const material = new THREE.MeshStandardMaterial({ color: 0xff0000 });
const cube = new THREE.Mesh(geometry, material);

// Assigning a 3D object to our scene
scene.add(cube);
const directionalLight = new THREE.DirectionalLight(0xffffff, 1);
directionalLight.position.set(0, 2, 10);

// Adding a light source to our scene
scene.add(directionalLight);

Loaders

Sometimes, we do not need to create a 3D model file from scratch but would like to import 3D models from existing files created by 3D modeling programs. For that, we can use Loaders, which are functions that load such files and convert them into Three.js objects. We can either provide a web URL or keep it in a local Android project file inside the assets folder.

import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const loader = new GLTFLoader();
loader.load(
  // resource URL
  'https://somesite.com/3dmodels/fox.gltf',

  // called when the resource is loaded
  function ( gltf ) {
    scene.add( gltf.scene );
  },

  // called while loading is progressing
  function ( xhr ) {
    console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
  },

  // called when loading has errors
  function ( error ) {
    console.log( 'An error happened' );
  }
);

Camera

A camera is just like a window to our scene. Without a camera defined, we can’t see anything. There are multiple types of cameras in Three.js, but the two most useful ones are the orthographic camera and the perspective camera.

Orthographic camera — Orthographic cameras do not provide depth perception, so the object will look the same size no matter where it is placed in the scene. This is useful for isometric games.

Perspective camera — This camera provides depth perception to reflect the way the human eye perceives the world. So, if one object is positioned closer to us than the other, it will appear to be larger. This is the most commonly used camera for 3D rendering.

const perspectiveCamera = new THREE.PerspectiveCamera(
  50,             // fov — camera frustum vertical field of view
  width / height, // aspect — camera frustum aspect ratio
  1,              // near — camera frustum near plane
  2000            // far — camera frustum far plane
);
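An orthographic camera, by contrast, is constructed from explicit frustum bounds (left, right, top, bottom, near, far). As a sketch, these bounds are typically derived from the viewport aspect ratio; the `orthoBounds` helper and `frustumSize` parameter below are my own names for illustration, not Three.js API:

```javascript
// Sketch: deriving orthographic frustum bounds from the viewport size.
// "frustumSize" is the number of world units visible vertically.
function orthoBounds(width, height, frustumSize) {
  const aspect = width / height;
  return {
    left: (-frustumSize * aspect) / 2,
    right: (frustumSize * aspect) / 2,
    top: frustumSize / 2,
    bottom: -frustumSize / 2,
  };
}

// With Three.js this would then be used as:
// const b = orthoBounds(window.innerWidth, window.innerHeight, 10);
// const camera = new THREE.OrthographicCamera(b.left, b.right, b.top, b.bottom, 1, 2000);
```

Because the bounds scale with the aspect ratio, the rendered image does not stretch when the viewport is wider than it is tall.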

Lighting

Without lights, our scene would be pitch black. Three.js has many built-in light sources that we can use and combine to create the perfect setting. The most commonly used are the directional light and the ambient light.

Directional Light: This light is emitted in a specific direction. It behaves as if its source were infinitely far away, so all the rays it produces are parallel. A common use case is simulating daylight. Because it has a single direction, it can also cast shadows.

Ambient Light: This light illuminates all objects in the scene equally. It can't be used to cast shadows, as every spot in the scene receives the same amount of light.
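To build intuition for how the two combine, here is a plain-JavaScript sketch of the underlying Lambert shading idea. This is the math behind the lights, not a Three.js API:

```javascript
// Sketch of how ambient and directional light combine (Lambert model).
// normal and lightDir are unit vectors; intensities are in [0, 1].
function shade(normal, lightDir, ambientIntensity, directionalIntensity) {
  // dot product of two unit vectors = cosine of the angle between them
  const dot = normal.x * lightDir.x + normal.y * lightDir.y + normal.z * lightDir.z;
  // ambient reaches every surface equally; directional only front-facing ones
  return ambientIntensity + Math.max(0, dot) * directionalIntensity;
}

// A surface facing the light receives both contributions...
shade({ x: 0, y: 1, z: 0 }, { x: 0, y: 1, z: 0 }, 0.2, 0.8); // → 1
// ...while a surface facing away still receives the ambient term.
shade({ x: 0, y: -1, z: 0 }, { x: 0, y: 1, z: 0 }, 0.2, 0.8); // → 0.2
```

This is why a scene lit only by ambient light looks flat: every face gets the same value, so no edges stand out.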

Shadows

By default, Three.js turns shadows off, because casting them involves costly calculations. To enable shadows, we need to perform the following steps:

  • Turn on shadow support for the renderer and choose a shadow map type. Each type offers a slightly different visual effect and computation cost.

renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap;
  • Set the castShadow and/or receiveShadow flag to true on each mesh that should cast and/or receive shadows.
cube = new THREE.Mesh(geometry, material);
cube.castShadow = true;
cube.receiveShadow = true;
  • Set the light source's castShadow property to true so it can cast shadows. We can also tune the shadow's mapSize and bias to reduce shadow pixelation.
const spotLight = new THREE.SpotLight(0xffffff, 1);
// make it cast shadows
spotLight.castShadow = true;
spotLight.shadow.mapSize.width = 1024;
spotLight.shadow.mapSize.height = 1024;
spotLight.shadow.bias = -0.0001;

Rendering

A Renderer class is used to render all the scene components into the canvas element. It is pretty straightforward: we provide the canvas size using the setSize method and account for the device's pixel density using setPixelRatio.

// define the renderer
renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(window.innerWidth, window.innerHeight);

// append the renderer to the DOM
document.body.appendChild(renderer.domElement);

// render the scene
renderer.render(scene, camera);

Animating

We often want the user to interact with the scene, or want to create animations. To make Three.js re-render the canvas, we need to call renderer.render() after each change.

If we need frequent changes, like adding an auto-rotation effect to our scene, we can use the code below. Such a render loop is also required to play animations embedded in loaded 3D files.

function animate() {
  // Schedule the next update
  requestAnimationFrame(animate);
  // Other changes that should occur on each frame:
  // for instance, here we rotate the cube a little on every frame
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  // re-render
  renderer.render(scene, camera);
}
animate();

You should now have a basic understanding of Three.js features, so let's integrate it with an Android app.

Integrating Three.js with Android

The integration is done in two parts. Let’s understand the responsibilities of each part:

Android App

It is responsible for initiating the 3D operation we want to perform and for sending the required data to JavaScript through a JavaScript interface, which acts as a bridge to the JavaScript methods.

JavaScript

It performs the 3D operation requested by the Android app using the Three.js library and then provides a result back to the Android app.
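A minimal sketch of this browser-side code might look like the following. The `WKTalk` and `JavascriptInterface` names match the Kotlin snippets in this article, but the handler bodies here only record the request; in the real page they would do the Three.js work:

```javascript
// Minimal sketch of the browser-side bridge. The Android side injects calls
// like WKTalk({id:'addSphere', ...}) via evaluateJavascript, and replies go
// back through the injected "JavascriptInterface" object.
const taskLog = [];

const handlers = {
  addSphere: (msg) => taskLog.push(`addSphere ${msg.objID} r=${msg.radius}`),
  moveCamera: (msg) => taskLog.push(`moveCamera (${msg.x}, ${msg.y}, ${msg.z})`),
};

function reply(message) {
  // The bridge object is only present when running inside the Android WebView.
  const bridge = globalThis.JavascriptInterface;
  if (bridge) bridge.receiveMsg(message);
}

// Global entry point called from the Android side.
globalThis.WKTalk = function (msg) {
  const handler = handlers[msg.id];
  if (handler) handler(msg);
};

// Tell the Android side we are ready, so any queued messages get flushed.
reply('connected');
```

The `reply('connected')` call at the end is what triggers the Android side to flush messages that were queued before the page finished loading.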

Let’s understand the integration steps in detail:

  • Create a fragment that contains a WebView. The 3D objects are rendered in this WebView.
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".HoloFragment">

    <WebView
        android:id="@+id/webview"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_margin="0dp"
        android:padding="0dp"
        tools:layout_editor_absoluteX="0dp"
        tools:layout_editor_absoluteY="0dp" />

</FrameLayout>
  • This fragment should contain logic to set up a web view and a JavaScript interface that acts as a bridge between the Android app and JS and communicates with JavaScript methods.
val webView = view.findViewById<WebView>(R.id.webview)
val fragment = this
webView.run {
    // JavaScript must be enabled for Three.js to run inside the WebView
    settings.javaScriptEnabled = true
    webTalk = WebTalk(webView, fragment)
    webView.addJavascriptInterface(webTalk, "JavascriptInterface")
    post {
        loadUrl("https://appassets.androidplatform.net/assets/index.html")
    }
}
  • Create an activity and add a FragmentContainerView that hosts the fragment we created before.
  • The fragment class should also contain the appropriate methods to send the GLTF file or 3D rendering data (sensor readings, camera position, transition information, etc.) to JavaScript using the WebTalk class.
  • The code snippet below shows the example of calling the JavaScript method:
fun addSphere(objID: String = "sphere", radius: Float = 1f, hexColor: Int = 0xff00ff) {
    webTalk.sendMsg("{id:'addSphere', objID:'$objID', radius:$radius, color:$hexColor}")
}

fun moveCamera(x: Float, y: Float, z: Float, duration: Float = 0.5f) {
    webTalk.sendMsg("{id:'moveCamera', x:$x, y:$y, z:$z, duration:$duration}")
}

fun loadModel(url: String, mode: String = "root") {
    webTalk.sendMsg("{id:'loadModel', url:'$url', mode:'$mode'}")
}

fun moveObj(objID: String, x: Float, y: Float, z: Float, duration: Float = 0.5f) {
    webTalk.sendMsg("{id:'moveObj', objID:'$objID', x:$x, y:$y, z:$z, duration:$duration}")
}
  • Each message contains a unique ID that identifies the task, along with any other parameters required.
  • For example, in the code above, addSphere() carries the id, radius, and color information.
  • The code snippet below shows the WebTalk class, which calls the JavaScript method by sending the ID and parameter information in the message object.
class WebTalk(private val webView: WebView, private val fragment: HoloFragment) {

    private val waitingMsg: MutableList<String> = mutableListOf()
    var connected = false

    fun sendMsg(message: String) {
        if (!connected) {
            waitingMsg += message
            return
        }
        webView.post {
            webView.evaluateJavascript("if(window.WKTalk){WKTalk(${message});}") { }
        }
    }
}
  • Once JavaScript performs a task, the result is sent back to the app using the code below.
@JavascriptInterface
fun receiveMsg(message: String) {
    when (message) {
        "connected" -> {
            connected = true
            waitingMsg.forEach { sendMsg(it) }
        }

        "onModelLoaded" -> fragment.onModelLoaded()

        else -> {
            val json = Parser.default().parse(StringBuilder(message)) as JsonObject
            when (json.string("id")) {
                "onClickObject" -> {
                    val objID = json.string("objID")
                    objID?.let { fragment.onClickObject(it) }
                }
            }
        }
    }
}

The Android app also needs to include the JavaScript, index.html, and CSS files in the project's assets folder.

We can also leverage Vite.js to automatically output the generated JavaScript file to the Android project's assets path.

We can also capture data from various sensors (gyroscope, accelerometer, etc.) and send it to JS. Three.js can use this data to apply rotation and acceleration to 3D objects.
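As a hypothetical sketch, gyroscope readings (angular velocity in radians per second, forwarded from the Android side through the bridge) can be integrated into per-frame rotation deltas on the JavaScript side; the `rotationDelta` helper below is an illustration, not part of Three.js:

```javascript
// Sketch: turning a gyroscope reading (radians per second, as sent from the
// Android side) into a rotation delta for one frame of duration dtSeconds.
function rotationDelta(gyro, dtSeconds) {
  return {
    x: gyro.x * dtSeconds,
    y: gyro.y * dtSeconds,
    z: gyro.z * dtSeconds,
  };
}

// In the render loop (assuming roughly 16 ms between frames):
// const d = rotationDelta(lastGyroReading, 0.016);
// cube.rotation.x += d.x;
// cube.rotation.y += d.y;
// cube.rotation.z += d.z;
```

Scaling by the frame time keeps the rotation speed consistent even when the frame rate varies.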

Conclusion

Three.js offers a wide array of features and capabilities for creating 3D graphics and animations. By understanding and leveraging the classes and methods provided by Three.js, developers can create more immersive and visually impressive web applications.

Special Thanks to Mayank Gangwal for his guidance….

Further reading

In-depth knowledge of Three.js

Three.js official website
