I am not in possession of a crystal ball. But I do have an obsessively accumulated, encyclopedic knowledge of real-time rendering techniques. The one which has most fascinated me all these years is the voxel engine. Voxels are commonly understood to be 3D cubes, but they can also be rendered as points: the point cloud models generated by 3D scanning, for example.
If you sweep a laser up and down or back and forth in successive rows, generating a point in your software every place the laser hits something, the result is a point cloud. If the points are densely packed enough, the resulting virtual object appears solid, no polygons required.
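To make that concrete, here is a toy sketch of the sweep (my own illustration, not any particular scanner's output format): a virtual "laser" is swept in successive rows over a sphere, and a point is recorded at every place it hits the surface.

```python
import math

def scan_sphere(radius=1.0, rows=50, cols=100):
    """Sweep a virtual laser over a sphere in successive rows,
    recording a point everywhere the ray hits the surface."""
    points = []
    for i in range(rows):            # sweep up and down
        phi = math.pi * (i + 0.5) / rows
        for j in range(cols):        # sweep back and forth
            theta = 2 * math.pi * j / cols
            x = radius * math.sin(phi) * math.cos(theta)
            y = radius * math.sin(phi) * math.sin(theta)
            z = radius * math.cos(phi)
            points.append((x, y, z))
    return points

cloud = scan_sphere()
print(len(cloud))  # 5000 points, all lying on the sphere's surface
```

At 50 by 100 samples the result is obviously a dot pattern; crank the row and column counts high enough and the cloud reads as a solid sphere, no polygons required.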
GPUs available today are optimized for polygons, not voxels. But if you had hardware specially designed to permit absolutely, unfathomably massive numbers of voxels, you could create some truly photorealistic scenery. Unprecedented levels of tiny, intricate detail would become possible, as well as more realistic behaviors for light, fluids, gases and so on.
The simulation above is voxel based (allow some time for it to load, very large gif). It cannot be rendered in real time due to the sheer number and density of the particles involved, but they are numerous enough to create convincing solid objects, and the accurately simulated fluid dynamics produces interactions between the water particles that resemble the real behavior of water.
The ability to more accurately model reality in this manner should come as no surprise, given that reality is also voxel based. The difference being that our voxels are exceedingly small, and we call them subatomic particles. As with a point cloud, were you to zoom in far enough you’d see lots of space between them. But zoom out enough and they take on the appearance of a solid object.
The nature of voxel objects is that they can be “hollow” like polygon objects to save processing power, but unlike polygon objects they can also be solid. A voxel object can be made of solid voxels, through and through. This would mean in a computer game that you could slice an apple in half from any direction and see an accurate cross section.
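A minimal sketch of the idea, using nothing but a 3D array of booleans (the grid resolution and radius here are arbitrary): fill a sphere solid, slice it through the middle, and the cross section is a filled disc rather than an empty shell.

```python
import math

N = 32  # grid resolution, chosen arbitrarily for this sketch

def solid_sphere(n=N, r=0.45):
    """Build a solid voxel sphere: True everywhere *inside* the
    surface, not just on it -- unlike a hollow polygon shell."""
    c = (n - 1) / 2
    return [[[math.dist((x, y, z), (c, c, c)) <= r * n
              for z in range(n)] for y in range(n)] for x in range(n)]

voxels = solid_sphere()

# "Slice" through the middle along the x axis: the exposed cross
# section is a filled disc of voxels, just like slicing a real apple.
cross_section = voxels[N // 2]
filled = sum(row.count(True) for row in cross_section)
print(filled)
```

A polygon model sliced the same way would reveal nothing but the inside of an empty skin; here every interior voxel exists and can be exposed by a cut from any direction.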
If you were exploded by an enemy missile your body would not come apart into pre-modeled polygonal chunks, but actually separate in a manner unique to that event, spilling your insides as it would in reality. Real objects are after all just atoms put together in a particular shape, and everything which happens is just an interaction between those particles, governed by physical law.
So great is the potential to replicate at least limited chunks of reality in software by this method, that those who put their hope in brain uploading specifically intend to achieve it by scanning the brain down to subatomic particle resolution, and generating an identical point cloud from that data where each point corresponds to (and is assigned the known behaviors of) each of those subatomic particles. The expectation is that this virtual brain will then resume cognition from where it left off.
Minecraft is a good example of a popular voxel based game, which leverages the gameplay potential of voxel terrain and interactions. It “cheats” in the sense that the voxels are just polygonal cubes, but the math involved in procedurally generating terrain in Minecraft will be familiar to anybody who has ever tinkered with a voxel based terrain engine from any other game.
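That math is typically some flavor of layered noise. Here is a hedged, minimal version of the idea (not Minecraft's actual generator, which has its own noise implementation): a hashed value-noise function, smoothed and summed over several octaves to produce a terrain heightmap.

```python
import math

def hash_noise(ix, iy, seed=0):
    """Deterministic pseudo-random value in [0, 1) for a grid point."""
    h = (ix * 374761393 + iy * 668265263 + seed * 2147483647) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def smoothstep(t):
    return t * t * (3 - 2 * t)

def value_noise(x, y):
    """Bilinearly interpolated value noise, the basic building block
    of heightmap-based terrain generators."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = smoothstep(x - ix), smoothstep(y - iy)
    a = hash_noise(ix, iy)
    b = hash_noise(ix + 1, iy)
    c = hash_noise(ix, iy + 1)
    d = hash_noise(ix + 1, iy + 1)
    top = a + (b - a) * fx
    bottom = c + (d - c) * fx
    return top + (bottom - top) * fy

def terrain_height(x, y, octaves=4):
    """Sum several octaves of noise, each half the amplitude and twice
    the frequency of the last, for natural-looking terrain."""
    h, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        h += amp * value_noise(x * freq, y * freq)
        amp *= 0.5
        freq *= 2.0
    return h
```

Sample `terrain_height` over a grid and you have a hilly landscape; threshold it in 3D instead of 2D and you get caves and overhangs, which is exactly the kind of tinkering the paragraph above is describing.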
Outcast (which came out in 1999 if you can believe it) is another notable example of a voxel based game, this time with no polygonal cheating, and it demonstrated the potential to produce far more detailed terrain than was possible in polygonal 3D engines of the day.
So clearly, some amazing things are possible with voxels. Because our own reality is composed of particles and their interactions, voxels are the natural path to take for efforts at simulating reality. Euclideon recently demonstrated a modern voxel engine with environments generated from 3D scans of real world locations:
The results are so stunning that it’s difficult to believe this isn’t a photograph. Understandably many called bullshit from day one. But they have since shown their engine running in real time even on a relatively modest laptop, as their central claim is to have discovered a means to greatly optimize voxel rendering.
As soon as they demonstrated the engine running in real time, the goalposts were shifted, and their critics said “Alright it can do static environments, but not moving/animated objects”. They then demonstrated animated and moving objects, so the goalposts shifted again. “None of that is skeletal. It’s all frame by frame, precalculated.”
So it went. Much as with the ongoing skepticism of the EmDrive, every time it passes a test, the skepticism only grows more furious and intense. So afraid are we to be fooled that we dare not even hope for such a fantastical breakthrough.
This is all on hardware not remotely optimized for voxels. They’ve achieved it purely through the discovery of a selective rendering method that cuts out most of the workload, determining by some dark juju what is or isn’t visible to you and culling accordingly, but using a small fraction of the processing power that normally requires.
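Nobody outside Euclideon knows exactly how their method works, but the general shape of hierarchical culling is no secret: store the voxels in an octree, and a single visibility test can reject an entire subtree, no matter how many millions of voxels it contains. A toy sketch of that idea (the node layout and the box-overlap view test here are my own simplification, not their algorithm):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    origin: tuple        # (x, y, z) corner of this node's bounding cube
    size: float          # edge length of the cube
    children: list = field(default_factory=list)  # empty for a leaf voxel

def boxes_overlap(o1, s1, o2, s2):
    """Axis-aligned box intersection test."""
    return all(o1[i] < o2[i] + s2 and o2[i] < o1[i] + s1 for i in range(3))

def visible_leaves(node, view_origin, view_size, out):
    """Collect only the leaf voxels whose bounds intersect the view
    volume. One failed test culls a whole subtree at once."""
    if not boxes_overlap(node.origin, node.size, view_origin, view_size):
        return  # cull the entire subtree
    if not node.children:
        out.append(node)
        return
    for child in node.children:
        visible_leaves(child, view_origin, view_size, out)

# Two leaf voxels: one inside a 2-unit view volume, one far outside it.
near = Node((0.0, 0.0, 0.0), 1.0)
far = Node((10.0, 10.0, 10.0), 1.0)
root = Node((0.0, 0.0, 0.0), 16.0, [near, far])

found = []
visible_leaves(root, (0.0, 0.0, 0.0), 2.0, found)
print(len(found))  # only the near voxel survives culling
```

The payoff is that rendering cost scales with what you can actually see rather than with the total size of the scene, which is presumably the spirit, if not the letter, of what Euclideon has built.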
Just imagine what will become possible with next generation video cards specially designed to push points, not polys. Imagine when those points become so small as to be imperceptible, even with 4K, 8K and 16K resolution VR headsets. The result will be truly photorealistic virtual environments, captured from real world objects and locations.
Of course when you add realtime lighting, fluid dynamics and so forth, the computing workload skyrockets. But if there is anything in this world as certain as death and taxes, it’s that computers will become more powerful. Conceive of computers a century from now which can render the entire Earth in points as small as actual subatomic particles. Or computers two centuries from now which can render the entire solar system, or galaxy.
Already, point cloud sims are used to model the collision of galaxies, or the expansion of the universe. Imagine if those simulations were really complete. If you could zoom in to any individual planet and it would be as detailed, down to the subatomic level, as actual planets are. At that point, what would distinguish it from the reality in which we reside?
This should make some sense of why Elon Musk, Stephen Hawking and others have recently been vocal about the high probability that we already reside in a simulation. Our own technology is approaching the capability of rendering large chunks of reality to the same fidelity as the one we reside in.
After all, if we can one day simulate an entire universe, probably we will not be the first to do so unless humanity is the first intelligent life ever to evolve in the universe. Probably we will also not be the last to do it, as simulating whole universes has obvious scientific merits when it comes to learning about our own universe.
If those simulations are indeed perfectly accurate, then life will arise within the sim universes for the same reasons it did in this one. Those simulated species which become intelligent will then eventually develop the technology required to run their own whole-universe simulations, and so on.
This would result in each actual universe containing many simulated universes at any given time, each of which contains many further sub-simulations, and so on, in a fractally arranged process tree. Of course it can’t extend forever as the processing power of the root sim is limited, but even then the simulated universes necessarily greatly outnumber the actual universes at the ‘top level’.
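The arithmetic behind that claim is simple. Assuming, purely for illustration, that every universe runs some fixed number of simulations and the tree bottoms out at a finite depth:

```python
def count_universes(branching=10, depth=3):
    """One real universe at the root; each universe (real or simulated)
    runs `branching` simulations, down to `depth` levels before the
    root's processing power runs out. Both numbers are arbitrary
    placeholders for illustration."""
    real = 1
    simulated = sum(branching ** level for level in range(1, depth + 1))
    return real, simulated

real, simulated = count_universes()
print(simulated / (real + simulated))  # about 0.999
```

With just ten simulations per universe and three levels of nesting, over 99.9% of all universes in the tree are simulated, and the fraction only climbs as the branching factor or depth grows.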
Probably then, the intelligent inhabitants of all those universes (at least the ones who’ve not yet reasoned all of this out) assume they exist in a real universe, just as most human beings do. What are the odds that we actually do live in one of the comparatively few real universes, rather than the vastly more numerous simulated ones? Vanishingly, remotely small.
So the next time you look up at the countless points of light in the night sky, think of the unfathomable shitload of particles each of those stars is made out of. When you look at a particularly lovely tree, think “nice graphics”. And the next time someone asks you if you think this is a game, nod.
Follow me for more like this! And why not read one of my stories?