The evolution of 3D graphics for websites

Kim T
Creative Technology Concepts & Code
4 min read · Oct 8, 2019


The web has gone through a number of different solutions for displaying 3D graphics. Here I’ll take you through the ones I’ve personally used and experienced over the years, and some of the latest developments!

Virtual Reality Modeling Language

VRML allowed simple 3D environments

In the early days of the web (1994) there was the Virtual Reality Modeling Language (VRML), which let web pages display shapes, lighting and sound, and allowed developers to create interactive 3D worlds. However, it wasn’t supported in all browsers and had many limitations.

Macromedia Flash

Flash animation software

Macromedia (and later Adobe, after its 2005 acquisition) took over the web starting in 1997 with Flash, which, through an embedded plugin, was capable of more consistent and compatible 3D graphics. However, building Flash sites required licensed software, and developers needed to learn a new tool and language (ActionScript) to work with the format. Flash sites often had long loading times and could frustrate users on slow connections.


WebGL

With the launch of WebGL in 2011, together with steady improvements in JavaScript performance, we finally have a plugin-free option with great cross-browser support. We also have libraries such as three.js, which let developers quickly create a 3D environment in the browser without prior graphics experience.

However, we still face limitations: since JavaScript does not run at native speed, it is always a step behind what’s possible natively. One way to get closer to native performance in the browser is WebAssembly, but that often means writing (or porting) your own 3D engine, including models, shaders, textures, renderers and so on. As a web developer, the most compatible choice will always be the browser standards (WebGL), which are continuously improving.

Unity Engine

Here I look at two additional approaches that use Unity’s existing 3D engine in the browser.

Unity WebGL library

a) Unity WebGL

One approach is to use Unity’s built-in WebGL export feature, which creates a JavaScript-based player that you can embed in any site. It also generates WebAssembly code, which is used to speed up parts of the 3D engine. One of the main downsides is that the WebGL export only supports a subset of Unity’s features; advanced shaders, for example, are not supported.

You can communicate with the embedded player by defining functions in your Unity scripts and invoking them from the page via the Unity instance’s SendMessage method. These calls have a slight delay, so they’re not suitable for real-time functionality such as keyboard controls. This approach is a good fit for less graphics-heavy games that need faster loading times and wider compatibility.
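As a sketch of what that wiring can look like: the snippet below buffers messages until the Unity instance has finished loading, then forwards them via `SendMessage` (the Unity WebGL instance’s method, which takes a GameObject name, a method name, and a value). The `createMessenger` helper, the GameObject names, and the queueing strategy are my own illustration, not part of Unity’s API.

```javascript
// Hedged sketch: buffer SendMessage calls until the Unity WebGL
// instance is ready. createUnityInstance and
// unityInstance.SendMessage(gameObject, method, value) come from the
// Unity loader; everything else here is illustrative.
function createMessenger() {
  let unityInstance = null;
  const pending = [];
  return {
    // Call once the Unity loader's promise resolves.
    attach(instance) {
      unityInstance = instance;
      // Flush anything queued while Unity was still loading.
      pending.forEach(([obj, method, value]) =>
        unityInstance.SendMessage(obj, method, value));
      pending.length = 0;
    },
    // Safe to call from page code at any time.
    send(obj, method, value) {
      if (unityInstance) {
        unityInstance.SendMessage(obj, method, value);
      } else {
        pending.push([obj, method, value]);
      }
    },
  };
}

// Usage in the embedding page (browser only; names are hypothetical):
// const messenger = createMessenger();
// createUnityInstance(canvas, config).then((i) => messenger.attach(i));
// messenger.send('Player', 'SetColor', 'red');
```

Queueing avoids a common race where page code fires messages before the (often large) Unity payload has loaded.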

I’ve created an example project which shows how to communicate with Unity WebGL running in the browser using JavaScript:

b) Unity Standalone

Another approach is to export your Unity project as a standalone player (macOS, Windows, etc.) that runs on a dedicated server. You can then stream the rendered view and use websockets to pass information between the browser and the Unity project.

In this approach we get the exact Unity graphics, with full shaders and functionality, but the trade-off is that dedicated servers cost more and users are reliant on their connection speed. This approach is better suited to high-end graphics applications that require lots of memory, serving users on reliable connections.
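As an illustration of the websocket side, here’s one possible way to frame messages between the browser and a server-hosted Unity player. The `{ type, payload }` JSON shape, the URL, and the handler names are assumptions for the sketch, not a Unity convention.

```javascript
// Hedged sketch: JSON message framing for browser <-> Unity-server
// communication over a websocket. The { type, payload } shape is an
// assumption for this example.
function encodeMessage(type, payload) {
  return JSON.stringify({ type, payload });
}

function decodeMessage(raw) {
  const msg = JSON.parse(raw);
  if (typeof msg.type !== 'string') {
    throw new Error('malformed message: missing type');
  }
  return msg;
}

// Browser side (sketch only; the URL and updateUI are hypothetical):
// const socket = new WebSocket('wss://example.com/unity');
// socket.addEventListener('open', () => {
//   socket.send(encodeMessage('input', { key: 'ArrowUp' }));
// });
// socket.addEventListener('message', (event) => {
//   const { type, payload } = decodeMessage(event.data);
//   if (type === 'state') updateUI(payload);
// });
```

Tagging every message with a `type` keeps the protocol extensible: the Unity side can dispatch on it, and malformed frames fail fast instead of silently.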

I’ve created an example project which shows how to communicate with Unity Standalone running on a server using websockets:

Hope that helps you get started using 3D on the web!


