Riley Faulkner
Nov 1, 2021 · 4 min read

Virtual reality is a fun concept, and even more fun in practice, and I'd love it if I could take a peek 15 years down the road to see how the technology advances. Sadly, we don't have the power to do that, so let's look at what we may see going forward!

The best headset you can get your hands on for a reasonable price will let you plug in, walk around your room, and track your fingers one to one. If you want to get into other peripherals, you're looking at also tracking your body (i.e., your limbs move in VR as they do in real life), your eyes (mostly for menu navigation), and your face (so your avatar's expression mirrors yours, if the application supports it). Shell out some more cash and you can get yourself a 360-degree treadmill.

The Valve Index, from Valve: widely seen as the best VR experience for the home. It supports finger tracking through its controllers and lets you move around the play area and in-sim.

Everything here gets better and cheaper each year, and soon we'll reach a point where the tech spreads into many tech-forward households, classrooms, and workplaces.

From Goldman Sachs research: as tech sectors grow, the products within them typically fall in price relative to their functionality (see: home virtual assistants, smartphones, electric vehicles).

I want to make a note here of a couple of terms: 'Full-Dive' and 'Brain-Computer Interface'. Both are common ideas in sci-fi media.

Full-Dive is a term that came from Sword Art Online, a light novel series with anime and manga adaptations, but it has become a popular buzzword in the VR space. The key point behind the term is a full disconnect from reality: all of your senses are mirrored through a computer. Think 'The Matrix' or 'Ready Player One'.

Neo's not moving here, but in The Matrix, he is. He's also using all of his senses, an important point for full-dive VR. Source: The Matrix, 1999

Brain-Computer Interfaces, or BCIs for short, are being worked on right now. The term is self-explanatory, and the idea is intuitive once you understand what VR is trying to achieve: the basic principle is sending and receiving input between a computer and someone's brain. As an example, think about replacing a keyboard with this. You click into a text field and simply think about what you wish to write. I'm sure most people think faster than they type.

From research done at the University of Houston: a brain-machine interface that allows a user to control a prosthetic arm using only his thoughts.

Both full-dive virtual reality and BCIs are cool to think about, but I'm sure most would agree that they're also quite scary.

Just looking at the plot of 'The Matrix', you can see where this tech could go wrong if it gets out of control. But how would we even get there?

People in pods, all living out virtual lives in The Matrix. Source: The Matrix, 1999

Right now, we have rudimentary BCIs. Robotic limbs controlled by nerve signals aren't exactly hooked up to your brain, but they're a bit of proof of where we can head. Here's a scary thought: what if someone were placed under medical paralysis, and then, instead of driving a robotic limb, that signal were passed off to a computer and the person controlled a digitized limb?

Another example of current BCI tech: NextMind is a company researching this area, and it has a device that works, albeit in a rudimentary way. NextMind sells a product that sits at the back of your head and looks for a very specific brain signal.

The NextMind device, from next-mind.com.

They achieve control by placing a rapidly flashing pattern on screen, or in VR, which your visual cortex responds to strongly when you focus on it. You focus on the pattern, a part of your brain lights up, the device detects this, and voila, something happens in the software.
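To make that concrete, here's a minimal sketch of the general technique behind frequency-tagged interfaces like this (not NextMind's actual algorithm, which isn't public). Each on-screen target flickers at its own rate; when you focus on one, your visual cortex produces activity at that same frequency, which shows up as a peak in the EEG's frequency spectrum. The target names, flicker rates, and sampling rate below are all made-up illustration values:

```python
import numpy as np

FS = 250  # assumed sampling rate in Hz, typical for consumer EEG
# Hypothetical targets, each flickering at its own rate (Hz)
TARGETS = {"menu": 8.0, "select": 11.0, "back": 14.0}

def detect_focus(eeg, fs=FS, targets=TARGETS):
    """Return the target whose flicker frequency dominates the spectrum."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    def power_at(f):
        # Spectral magnitude at the bin closest to frequency f
        idx = np.argmin(np.abs(freqs - f))
        return spectrum[idx]

    return max(targets, key=lambda name: power_at(targets[name]))

# Simulate two seconds of "EEG": noise plus an 11 Hz response,
# as if the user were staring at the "select" pattern.
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1.0 / FS)
signal = np.sin(2 * np.pi * 11.0 * t) + 0.5 * rng.standard_normal(len(t))
print(detect_focus(signal))  # → select
```

Real systems have to fight far worse noise than this toy simulation, but the core trick is the same: the brain's response is "tagged" with the stimulus frequency, so a spectrum peak tells you where the user is looking.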

This isn't exactly Matrix-level full control over a digital self, but it's a cool proof of concept: we have the capability to read signals from the brain non-invasively.

Now compare that tech to internet technology. Connections started off terrible: low bandwidth, not a lot of detail. Now look at where we are: lightning-fast speeds, and the data we send around is crisp and cheap. Brain-reading tech may move the same way, with the level of detail for reading brain waves going up as the hardware improves.

Moore's Law states that the transistor count in a dense integrated circuit doubles roughly every two years. Not directly related, but it mirrors how tech has evolved in recent years.

So, let's put this all together. In 25 years, will we have the capability to paralyze someone, read their brain and nerves, digitize that, and then have them experience whatever the computer makes of it? Maybe.

Part of me hopes that doesn't happen; there's a lot of fiction exploring this idea that turns me off to it. Would it be cool? Oh yeah.

Digital Shroud

Research and reflections on ubiquitous computing by students at Drexel University, covering all things smart, wearable and pervasive. Articles are by students in the class “Intro to Ubiquitous Computing” in the College of Computing & Informatics. http://cci.drexel.edu
