EEG + VR + Machine Learning = The Real Ready Player One

Markus M. Milder
Published in Full Random
3 min read · Dec 19, 2019

Although Ready Player One largely convinced me that living in VR is one of the most probable outcomes of the next 20 years, there were aspects of that world I wasn't entirely sold on. Namely, how the players manually controlled themselves in the virtual world: either with an actual console or, if you had the means, by buying a full-body rig for deeper immersion. The console struck me as too restrictive: just like with today's PlayStation controllers, you would have a very limited choice of actions, about 45. And the rig? First, do you see yourself flailing around in the air with your body strapped in, for 10 hours a day? Second, it's not portable, so playing anywhere but your own home is out of the question. And third, it's very hard to build one safely: it would require extensive safety testing, and even then it could crush you at any moment, as malfunctioning robots have done before (mainly in factories). So, what solution covers all these issues?

It seems clear that measuring brain activity and using the emerging patterns to operate electronics will be the next massively, commercially used way to interact with our devices. Neuralink comes to mind for most people, but I don't think Generation Z (born 1995–2015) is willing to have invasive electrodes implanted into their brains. Early adopters (2.5–5%), sure, but not at large scale. But what if you could instead wear an EEG cap that is non-invasive and thus has no way to influence your thoughts: only you affect what it reads. Now imagine controlling your PlayStation game with your mind. Instead of merely 45 (cumbersome) actions, you could have hundreds. That is especially essential in VR, because you want to feel truly immersed, as if you were actually living there; that is the Holy Grail of VR. And you obviously don't have merely 45 possible actions in real life, especially when you consider that a single hand has at least 7 degrees of freedom.
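To make the idea concrete, here is a minimal, hypothetical sketch of how decoded brain activity could be mapped to discrete game actions. Everything in it is an assumption of mine, not a real BCI pipeline: the feature vectors stand in for preprocessed EEG band powers, and the action names, trial counts, and nearest-centroid decoder are purely illustrative.

```python
import numpy as np

# Illustrative only: each EEG trial is assumed to be reduced to a small
# feature vector (e.g. per-channel band powers); each action gets a
# prototype pattern learned from calibration trials.
ACTIONS = ["turn_left", "turn_right", "jump", "grab"]
N_FEATURES = 8  # e.g. two frequency bands on four channels (assumption)

rng = np.random.default_rng(0)
# Simulated "true" brain patterns: one distinct mean per action.
true_means = rng.normal(size=(len(ACTIONS), N_FEATURES))

def simulate_trial(action_idx, noise=0.3):
    """Fake EEG features: the action's mean pattern plus noise."""
    return true_means[action_idx] + rng.normal(scale=noise, size=N_FEATURES)

# Calibration: learn one prototype per action as the mean of 20 trials.
prototypes = np.stack([
    np.mean([simulate_trial(i) for _ in range(20)], axis=0)
    for i in range(len(ACTIONS))
])

def decode(features):
    """Nearest-centroid decoding: the action whose prototype is closest."""
    dists = np.linalg.norm(prototypes - features, axis=1)
    return ACTIONS[int(np.argmin(dists))]

print(decode(simulate_trial(1)))  # with this separation: "turn_right"
```

The point isn't the classifier (a real system would use far more sophisticated models); it's that once brain patterns become feature vectors, adding a new action is just adding a new prototype, which is how you get from 45 buttons to hundreds of intents.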

Granted, EEG does not yet yield 100 clearly discernible patterns. The signals overlap too much, so today they can only safely be used for coarse commands, like moving a drone up or down. But that is exactly why a virtual environment makes a huge difference: unlike in the real world, there you can afford to make mistakes. And the more users the system gathers, the faster it learns, just like having more data for machine learning. So imagine you think of turning right, but your avatar turns left instead. Instead of a tragic drone accident, you simply notify the system that it made an error and tell it what you actually wanted (turn right instead of left), and the system step by step learns to separate the brain activity patterns.
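That correction loop can be sketched in a few lines. This is a toy illustration under heavy assumptions: the 2-D feature vectors, the two action names, and the running-mean prototype update are my stand-ins for a real EEG pipeline, meant only to show the shape of "tell the system what you meant and let it adjust".

```python
import numpy as np

class CorrectionLearner:
    """Toy model of learning from user corrections: whenever the user
    flags a misread intention and names the action they meant, the
    stored prototype for that action is nudged toward the observed
    features (an incremental running mean)."""

    def __init__(self, actions, n_features):
        self.actions = list(actions)
        self.prototypes = np.zeros((len(actions), n_features))
        self.counts = np.zeros(len(actions))

    def decode(self, features):
        """Guess the intended action: nearest prototype wins."""
        dists = np.linalg.norm(self.prototypes - features, axis=1)
        return self.actions[int(np.argmin(dists))]

    def correct(self, features, intended_action):
        """User feedback: 'I meant to turn right.' Running-mean update."""
        i = self.actions.index(intended_action)
        self.counts[i] += 1
        self.prototypes[i] += (features - self.prototypes[i]) / self.counts[i]

# Toy usage: two made-up, well-separated "brain patterns".
rng = np.random.default_rng(1)
learner = CorrectionLearner(["turn_left", "turn_right"], n_features=2)
left, right = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for _ in range(5):  # five corrections per action
    learner.correct(left + rng.normal(scale=0.1, size=2), "turn_left")
    learner.correct(right + rng.normal(scale=0.1, size=2), "turn_right")
print(learner.decode(right))  # "turn_right" once feedback has accumulated
```

Each correction is one labeled training example, which is why more users and more mistakes make the system better rather than worse: in VR the cost of a wrong guess is a moment of confusion, and the payoff is another data point.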

As a closing note, we should be wary of this future, as it resembles a couple of Black Mirror's VR-MacGuffin episodes with the small round white device placed on the temple, the user appearing asleep to a bystander. In episodes like the Star Trek-styled 'USS Callister', there lie many dangers, such as staying in that world until one dies. With an EEG solution those dangers are largely alleviated, because the technology cannot affect the user, as could be the case with Neuralink.

I will definitely write about how this might lead to The Matrix, but instead of brains in vats, people are simply connected to the VR, which could then be shaped into anything. And as long as the VR world keeps getting more comfortable while staying interesting enough, those living there have no motive to come out. Especially if the real world has gone to shit, just like in The Matrix. But that's for another post.
