Creating A 3D Virtual Expo for WebGL with Unity 3D

Rafiul hasan
Published in Brain Station 23
8 min read · Apr 15, 2021

Recently I got the chance to develop the world’s first 3D web-based virtual expo. It was fascinating to work on something fresh, but it also came with its own challenges. By this point you might be thinking, huh, what could be so challenging about making a WebGL application? Bear with me until I make it clear. If you are trying to build a performant 3D WebGL application, with or without features like traversing a 3D tour, REST-API-driven content, or streaming YouTube videos directly inside your system, this article will help you figure out how to execute these things. So here is my take on creating a virtual tour for a virtual fair on a virtual player.

Exhibitor’s Landing Page

Graphics

The primary requirement of the system was that it should be lightweight and performant on all browsers, with average consumer computers as our main target. So we couldn’t go crazy on the graphics, but we still needed to deliver something that maintained a minimum standard. We planned the 3D assets with a few key points in mind:

  1. All 3D models, including any humanoid models, will be very low-poly.
  2. All materials will only have a base map/albedo; no other maps will be added.
  3. All textures imported into the project will be compressed to the minimum size that doesn’t break the visuals.
  4. Each scene will be planned efficiently, as occlusion culling in WebGL often fails to perform as expected.
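The texture rule (point 3) can be enforced automatically in the Unity editor. Here is a minimal sketch of an asset postprocessor; the size cap and quality values are illustrative, not the exact settings we used:

```csharp
using UnityEditor;

// Editor-only sketch: clamp every imported texture's max size and enable
// crunch compression so WebGL downloads stay small. Thresholds are examples.
public class WebGLTexturePostprocessor : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        TextureImporter importer = (TextureImporter)assetImporter;
        importer.maxTextureSize = 1024;      // cap resolution (example value)
        importer.crunchedCompression = true; // crunch-compress for smaller builds
        importer.compressionQuality = 50;    // trade quality for size (0-100)
    }
}
```

Placing a script like this under an `Editor/` folder applies the settings to every texture as it is imported, so nobody has to remember to set them by hand.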
Initial Booth and Level Designs

So, now we had our very optimized 3D assets inside Unity. Everything is done. Right? No. Wrong. The lighting of the scenes was real-time, which is not very performant on WebGL, and it wasn’t consistent either. So I decided not to use any lights. Let there be dark. No, just kidding. I planned to bake the lights, which drastically increased the size of the build. Then I thought… hmm… lightmaps are just textures holding baked data. Why not compress those too? And boom!! After tweaking some compression settings we got a look that was acceptable, and the lightmap sizes were down by 93%. Now we had 3D scenes that looked good and were well optimized for WebGL, and we optimized them further during development by other means.

Traversing

What comes to your mind when you think about traversing a 3D world? A first-person controller? A third-person controller? Or maybe automatic movement, like taking a ride in a Mario Kart?
Here is the problem: most of our target audience are non-technical users who may never have heard of the control schemes mentioned above. But we wanted to give them an experience that is engaging and has the feel of a first-person controller. So I again took my pen and paper and started planning. After losing some hair while scratching my head, I came up with a camera system that serves our purpose. First, I removed all dependencies on multiple input devices and decided to use the mouse as the only input: no keyboard inputs, no complex input structure. For this system to work, I placed clickable markers around the scene. When the user clicks on a marker, the viewing camera moves to the marker’s position while maintaining a predefined height above the ground, which gives it a first-person feel without any additional inputs.
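A minimal Unity C# sketch of the marker idea (the class, field names, and values here are illustrative, not our production code):

```csharp
using System.Collections;
using UnityEngine;

// Sketch: a clickable marker that glides the main camera to its position
// at a fixed eye height, simulating a first-person move.
public class MoveMarker : MonoBehaviour
{
    public float eyeHeight = 1.7f; // height above the marker (example value)
    public float moveSpeed = 3f;   // world units per second

    void OnMouseDown()             // fires on click; needs a Collider on the marker
    {
        StopAllCoroutines();
        StartCoroutine(MoveCamera());
    }

    IEnumerator MoveCamera()
    {
        Transform cam = Camera.main.transform;
        Vector3 target = transform.position + Vector3.up * eyeHeight;
        while (Vector3.Distance(cam.position, target) > 0.01f)
        {
            cam.position = Vector3.MoveTowards(
                cam.position, target, moveSpeed * Time.deltaTime);
            yield return null;     // continue next frame
        }
    }
}
```

Because `OnMouseDown` is driven by Unity’s built-in physics raycasting, no keyboard handling or custom input mapping is needed at all.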

Moving Inside Virtual Expo

We also needed to give the user the ability to look around while stationary. So I added camera rotation, with constraints that clamp the rotation between certain values for more control over what the user can and cannot see. The user rotates the camera by dragging with the right mouse button held down.
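The clamped drag-look can be sketched like this (again, names and limits are illustrative assumptions, not the shipped code):

```csharp
using UnityEngine;

// Sketch: right-mouse-drag look with the pitch clamped between limits,
// so the user can never look somewhere we don't want them to.
public class DragLook : MonoBehaviour
{
    public float sensitivity = 2f;            // degrees per mouse unit
    public float minPitch = -30f, maxPitch = 30f;

    float yaw, pitch;

    void Start()
    {
        Vector3 e = transform.eulerAngles;    // start from the current orientation
        yaw = e.y;
        pitch = e.x;
    }

    void Update()
    {
        if (Input.GetMouseButton(1))          // right mouse button held
        {
            yaw   += Input.GetAxis("Mouse X") * sensitivity;
            pitch -= Input.GetAxis("Mouse Y") * sensitivity;
            pitch  = Mathf.Clamp(pitch, minPitch, maxPitch);
            transform.rotation = Quaternion.Euler(pitch, yaw, 0f);
        }
    }
}
```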

Camera Rotation

So now all traversal in the scene can be done with the mouse alone. But because it feels like a first-person controller, the camera can end up facing a direction we don’t want after a move. So I added a feature to the camera system that rotates the camera toward a point of interest (POI) on arrival. This let us immediately focus the user on something we wanted without breaking the immersion.
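The turn-to-POI step might look something like this sketch (the class and field names are assumptions for illustration):

```csharp
using System.Collections;
using UnityEngine;

// Sketch: once the camera reaches a marker, smoothly rotate it toward
// that marker's assigned point of interest.
public class FacePointOfInterest : MonoBehaviour
{
    public Transform poi;                 // assigned per marker in the Inspector
    public float degreesPerSecond = 90f;  // turn rate (example value)

    // Start this coroutine when the movement coroutine finishes.
    public IEnumerator FacePoi(Transform cam)
    {
        Quaternion target = Quaternion.LookRotation(poi.position - cam.position);
        while (Quaternion.Angle(cam.rotation, target) > 0.5f)
        {
            cam.rotation = Quaternion.RotateTowards(
                cam.rotation, target, degreesPerSecond * Time.deltaTime);
            yield return null;            // continue next frame
        }
    }
}
```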

Facing Point of Interest

Now I was happy with what I had. But happiness is an arbitrary thing. I thought: isn’t it still inconvenient for someone who doesn’t want to drag the camera around just to find a marker to click? So I made a navigation bar at the very bottom of the screen, so the user can move to the available spots by clicking directional buttons without even knowing where the markers are. The combination of our interactive controller and the navigation bar gave us the experience we wanted, one that serves every type of user without breaking the immersion. We also used dynamically changing mouse cursors to signal the different types of actions available to the user.

Streaming YouTube Videos Inside WebGL

Streaming Video From YouTube

Streaming a YouTube video inside Unity is not a hassle on other platforms. On WebGL it is, because browsers enforce the Cross-Origin Resource Sharing (CORS) policy, which prevents us from loading content from external origins. So I put on my researcher cap and started researching. The solution I came up with was to use the server where we deploy the WebGL application as a proxy to stream YouTube videos. The files used by the proxy are hosted on our own server, which satisfies the CORS policy, and we simply stream the video from YouTube via a link generated on our end. After tweaking here and there, I finally got YouTube videos working inside WebGL builds without much hassle. Take that, WebGL.

REST API Integration

API-Driven Dynamic Form Submission

Everything we had to display, from the available halls and sessions to all the image and video branding, came from our backend servers. As all of this data had to be loaded from the server, I needed to figure out how to optimize the fetching so that we were not sending and receiving data continuously. So I grouped the API calls into sections, giving me control over which data to load beforehand and which to load on demand, since firing a large number of API calls in a single frame can break the data flow in some situations. Booths only become interactive once their required data has finished loading, but in the meantime the user gets a view where the ad banners pop into place. Because who doesn’t like watching random images pop into placeholders? I'm looking at you, Instagram… So now I have a system that uses UnityWebRequest to perform RESTful API tasks and handle download operations with ease.
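The grouped-fetch idea can be sketched as a coroutine that walks through one group of endpoints at a time instead of firing everything in the same frame (the class name, callback shape, and URLs are illustrative assumptions):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: load one group of API endpoints sequentially, handing each JSON
// payload to a callback, so requests are spread across frames.
public class ApiGroupLoader : MonoBehaviour
{
    public IEnumerator LoadGroup(string[] urls, System.Action<string> onJson)
    {
        foreach (string url in urls)
        {
            using (UnityWebRequest req = UnityWebRequest.Get(url))
            {
                yield return req.SendWebRequest();   // one request at a time
                if (req.result == UnityWebRequest.Result.Success)
                    onJson(req.downloadHandler.text);
                else
                    Debug.LogWarning(url + ": " + req.error);
            }
        }
    }
}
```

A caller would start the “load beforehand” group on scene start and kick off the on-demand groups (for example, a booth’s banners) only when the user actually approaches that content.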

CometChat Integration

CometChat Integration

Our expo needed a communication medium between the different types of participants. In addition to text chat, they had to be able to make voice and video calls, and we needed group chat as well. So we decided to use CometChat for these purposes. But CometChat does not have an SDK for Unity 3D, which put that decision in jeopardy. Then we found a way to integrate the CometChat widget through our base HTML file and implement the various CometChat features via JavaScript functions. After some wrong decisions (hey, I'm human), I finally got it working properly. It still had some stability issues, which we addressed later.

JavaScript Integration

Booth Operations

As I mentioned earlier, we needed JavaScript functions for CometChat to work. We also needed to open new tabs, show message boxes, and use other browser-specific functionality, none of which Unity 3D supports out of the box. So we created an extension that acts as a bridge between our WebGL application and the browser, and we used it for all browser-side JavaScript operations. This helped us a lot: it gave us access to native browser functionality, so we didn’t need to re-implement any browser features inside our application.
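Unity’s standard mechanism for this kind of bridge is a `.jslib` plugin: JavaScript functions merged into the build that C# can call through `DllImport`. Here is a minimal sketch; the function name `OpenNewTab` and the file layout are our own illustrative choices, while `mergeInto`, `UTF8ToString`, and `__Internal` come from Unity’s WebGL toolchain:

```javascript
// Assets/Plugins/Browser.jslib — merged into the WebGL build by Unity.
mergeInto(LibraryManager.library, {
  OpenNewTab: function (urlPtr) {
    var url = UTF8ToString(urlPtr); // convert the C# string pointer
    window.open(url, "_blank");     // native browser tab, outside the canvas
  },
});
```

On the C# side, the function is declared as an external call into the browser:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

public class BrowserBridge : MonoBehaviour
{
    [DllImport("__Internal")]
    private static extern void OpenNewTab(string url);

    public void OpenLink(string url)
    {
        OpenNewTab(url); // only works in an actual WebGL build
    }
}
```

The same pattern extends to message boxes, CometChat calls, or any other browser API: add a function to the `.jslib`, declare it with `DllImport("__Internal")`, and call it like a normal C# method.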

Why Apple, Why?

Downgrading to WebGL 1.0

So, we were quite happy with the results so far. Let’s deploy it and everyone lives happily ever after, right? But wait, there was an iProblem (all puns intended) waiting for us. We built our system on WebGL 2.0, but guess what? The Safari browser doesn’t support WebGL 2.0. And since most of our end users would be using it on their Macs, we decided to downgrade from WebGL 2.0 to WebGL 1.0. As a result, we lost some of the graphics quality we were hoping to deliver. We also downgraded the color space from linear to gamma for the same reason, which meant that every time we needed to bake the lights we had to switch to linear and then back to gamma for the build. If you are building a WebGL application for Safari: best of luck. I hope this heads-up helps.

Conclusion

This was a fantastic experience for me, as I got to learn many new things and be part of the world’s first WebGL-based virtual expo. If you want to create a virtual tour on WebGL, I believe the points I’ve mentioned will help you formulate a plan for your system.


I’m a dreamer, currently working as a Software Engineer at Brain Station 23.