The brain as a special kind of hard drive. Android Jones, The Vision Agency, Microdose VR, EEG, Muse, AWE2017, Nvidia’s GPU Cloud, and the Neural Lace Podcast #7

Silicon Valley Global News (SVGN.io) · 9 min read · Jun 11, 2017

June 7th, 2017. Written by Micah Blumberg: journalist, hobbyist neuroscientist since 2005, founder of the Neural Lace Podcast, and founder of Self-Aware Networks (Computational Biology: Neural Lace).
Location: The audio included at the bottom was recorded at the 8th Augmented World Expo (AWE 2017), the largest AR/VR event in the world.
Listen to the latest Episode of the Neural Lace Podcast at Soundcloud
https://soundcloud.com/user-899513447/the-neural-lace-podcast-7-guest-android-jones
Also read Fifer Garbesi's first article for VRMA, "What EEG Can Bring to Your VR Experience": https://vrma.work/2017/06/11/what-eeg-can-bring-to-your-vr-experience/

My research regards the human brain, actually any brain, as a special kind of hard drive that we will soon be able to read and write to, uploading and downloading bits of ourselves and bits of others, upgrading and downgrading any part of ourselves, sharing extremely detailed memories with friends, family, even with the legal system should we so choose.

I would say that what has changed in the past three decades, putting us on the cusp of a next-generation brain-to-computer or computer-to-brain interface, is that people around the world can share research like never before thanks to the internet. I would speculate that from 2006 to 2016 more research was done in neuroscience, and more advances were made in our collective understanding of the human brain, than in all of human history before that ten-year span. I would further speculate that in 2017 alone neuroscience research will again leap beyond all the previous years of research, going back thousands of years, combined.

I want to show you three examples of how advanced neuroscience research is getting. These examples can also be found in my Facebook group called "Self Aware Networks": https://www.facebook.com/groups/neomindcycle/

1. We are rapidly gaining new understanding about how human brain networks function at a large scale. Example: Human brain networks function in connectome-specific harmonic waves https://www.nature.com/articles/ncomms10340

2. We are rapidly gaining new understandings as to how our brains assemble information. Example: The Code for Facial Identity in the Primate Brain http://www.cell.com/cell/fulltext/S0092-8674(17)30538-X

3. We are rapidly finding new ways to stimulate the brain noninvasively, such as targeted transcranial magnetic stimulation and temporally interfering electric fields, which will eventually allow for next-generation computer-to-brain interfaces (a short sketch of the interference idea follows this list). Example: Noninvasive Deep Brain Stimulation via Temporally Interfering Electric Fields http://www.cell.com/cell/fulltext/S0092-8674(17)30584-6
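
To give a feel for the third example, here is a minimal numpy sketch of the core idea behind temporal interference (my own illustration, not code from the paper): two kilohertz-range carrier fields, each too fast for neurons to follow on their own, sum to a field whose envelope oscillates at their difference frequency, which neurons can follow.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100_000                      # sample rate (Hz) for the simulation
t = np.arange(0, 1.0, 1.0 / fs)   # one second of signal
f1, f2 = 2000.0, 2010.0           # two kHz carriers, 10 Hz apart

e1 = np.sin(2 * np.pi * f1 * t)
e2 = np.sin(2 * np.pi * f2 * t)
combined = e1 + e2                # the two fields summing where they overlap

# The envelope of the summed field oscillates at |f2 - f1| = 10 Hz, a rate
# neurons can respond to, even though each carrier alone is too fast to
# drive firing. Steering where the fields overlap steers where that slow
# envelope appears, which is what makes the stimulation targeted.
envelope = np.abs(hilbert(combined))
print(f"Beat (envelope) frequency: {abs(f2 - f1):.0f} Hz")
```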

The rapid advancement of neuroscience is logical, possible, and plausible because the internet has enabled massive parallelism among researchers across the entire globe, with the largest population of people interested in the sciences that has ever existed on the planet all working together online to share articles, discuss them, and devise new experiments from which we can collectively learn.

We have speculated in the Neural Lace Podcast (#4) that the human brain might be capable of running linear mathematical calculations faster than any off-the-shelf computer system if we were able to change its instructions, change how it operates, by giving it a new program introduced via neural lace.

Individually, a chemical synapse may be quite slow compared to a transistor in a computer, but massively parallel processing across eighty-six billion neurons is more than enough to make up for the difference in speed at the individual unit level. That is not to conclude that an individual synapse is comparable to a transistor: there is a vast amount of complexity in both the pre- and post-synapse, so it might take a great number of transistors to duplicate the information-processing complexity of a single chemical synapse.
The brain would then have the ability to run vast calculations, with vast amounts of memory, on almost any kind of data. If and when this brain hard drive, assuming it is healthy, fails to perform well at a certain kind of task, the hypothesis might be that it failed because the dominant program running at the time was optimized for some other kind of task rather than the task at hand.
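
As a rough illustration of that parallelism argument, here is a back-of-envelope sketch using commonly cited ballpark figures (86 billion neurons, roughly 1,000 synapses per neuron, an average firing rate on the order of 1 Hz); these are order-of-magnitude assumptions, not measurements.

```python
# Back-of-envelope estimate of aggregate synaptic event throughput.
neurons = 86e9                 # ~86 billion neurons (commonly cited figure)
synapses_per_neuron = 1_000    # low-end estimate; often quoted as 1,000-10,000
avg_firing_rate_hz = 1.0       # conservative average spike rate

synaptic_events_per_sec = neurons * synapses_per_neuron * avg_firing_rate_hz
print(f"~{synaptic_events_per_sec:.1e} synaptic events per second")  # ~8.6e13

# A single chemical synapse operates on millisecond timescales, millions of
# times slower than a switching transistor, yet the aggregate event rate
# above is in the same broad range as a large computing cluster, and each
# synaptic event carries far more internal complexity than a transistor flip.
```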

In 2012 I was trying to optimize my own brain, and the brains of my friends, with the Len Ochs protocol from 1980.

https://www.youtube.com/watch?v=RMcoB98xKts Len Ochs combined EEG with light effects such that your brainwaves drove the changes in the light that you saw. His patients reported significant improvements in their symptoms across the board. So in 2012 I replicated Len Ochs's experiment with a light and sound machine from MindPlace called the Procyon, an EEG device called the Emotiv, and software called Mind Workstation. With Mind Workstation I was able to add isochronic beats and binaural beats, as well as a variety of customized audio and visual effects that would reflect your own brainwaves back to you.
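
Here is a minimal sketch, in the spirit of that 2012 setup, of an EEG-mirror feedback loop. The actual rig used an Emotiv headset, a MindPlace Procyon, and Mind Workstation; read_eeg_window() and set_light() below are stand-in stubs I made up for illustration, not real APIs.

```python
import numpy as np

FS = 256  # assumed EEG sample rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def read_eeg_window(n_samples=FS):
    """Stub: return one second of (simulated) single-channel EEG."""
    return np.random.randn(n_samples)

def set_light(brightness, pulse_rate_hz):
    """Stub: forward light parameters to whatever light/sound hardware you have."""
    print(f"brightness={brightness:.2f}  pulse={pulse_rate_hz:.1f} Hz")

def band_amplitudes(window, fs=FS):
    """Mean spectral amplitude in each classic EEG band."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def feedback_step():
    amps = band_amplitudes(read_eeg_window())
    total = sum(amps.values()) or 1.0
    # Mirror the raw distribution of band energy back as light, rather than
    # reducing it to a single game score or threshold.
    set_light(brightness=amps["alpha"] / total,
              pulse_rate_hz=10.0 * amps["theta"] / total)

feedback_step()
```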

The people who tried my service reported significant improvements in their symptoms, although they were not my patients, just friends who volunteered to test my program. My equipment worked for about two years before it broke from normal use. By that point a technology called Virtual Reality was making a comeback, and I resolved to build a next-generation version of the device for Virtual Reality, so I became a VR journalist even as I continued to study neuroscience and artificial intelligence.

In August 2016 at VRLA my friend Nick Ochoa insisted that I try a dome with really cool artwork, and at the Kaleidoscope Film Festival later that year I met the people who created the artwork for that amazing dome. I met Phong and Android Jones, and besides the dome they had a product called Microdose VR.

When I told Phong about my Neuroscience research we instantly formed a lasting connection, and later on I managed to get Phong, Android and Tim Mullen, an amazing software architect (and a developer behind Glass Brain) all on the phone together to talk about EEG devices. My thought was that Tim Mullen was the best person in the world to advise these guys on the landscape of EEG devices, what they might use, and how they might use it.

Fast forward to AWE 2017, where Android Jones was showing off the latest version of Microdose VR, which now incorporates an EEG device called the Muse built into the HTC Vive's headband.

In the course of the podcast I had an agenda: to persuade Android not to use the Muse EEG signals the way a game would, but instead to simply let the full spectrum of EEG signals drive a full spectrum of light and sound effects, so that the EEG is a mirror for the user.

Using EEG as a mirror captures the essential value proposition of neurofeedback, and this idea goes back to the work I was doing between 2012 and 2014 incorporating the Len Ochs protocol from 1980, in which he used EEG to drive lights.

In 2012 I was applying Ochs's light-based EEG protocol to new light and sound effects that I crafted with Mind Workstation on my computer.

Today Len Ochs is the leading mind behind LENS, the Low Energy Neurofeedback System, a newer protocol that uses magnetic waves instead of light-based feedback to stimulate the brain.

The modern Len Ochs protocol differs from standard transcranial magnetic stimulation treatment in that it focuses on the specific dispersion of EEG amplitudes and other properties that are unique to each individual. TMS, aka transcranial magnetic stimulation, is the technology I referred to in the third example of "how advanced neuroscience research is getting" above.

While current TMS is a rather blunt tool, like a shotgun of brain stimulation (according to a neurosurgeon I asked), researchers are working on refining it so that it becomes more like a scalpel in terms of precision. To use TMS for neural lace it would need to become at least an order of magnitude more precise in the signals it sends and receives, but the good news is that scientists are working on refining the technology.

Android Jones was at AWE 2017 to showcase the latest product from the Vision Agency, which integrates Microdose VR with a Muse EEG sensor built into the headband of the HTC Vive.

This is one of many examples of how biometric sensors are going to be integrated into essentially all future AR and VR devices; AR and VR glasses will have EEG built into the headsets themselves.

In this podcast we also talk about Nvidia's new GPU cloud. Consumer neural lace will need a great deal of computing power for each and every person. The reason is that even after we have created a language for computers to talk to the human brain, introducing visual, auditory, and other sensory concepts that are not actually there, we can't send those patterns directly. We first have to listen to the patterns already present, from the environment each person is in, so that we send only the difference between the pattern that is in someone's brain and the pattern we want to be there.
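
A conceptual sketch of that "send only the difference" idea follows. Here a "pattern" is just an abstract vector of neural-activity features, and observe_current_pattern() and apply_stimulus() are placeholders for read/write hardware that does not yet exist; the point is the loop structure (measure, compute the difference, stimulate only that difference, re-measure), which is also why the per-person compute load is so high.

```python
import numpy as np

rng = np.random.default_rng(0)
state = rng.normal(size=8)       # simulated current neural activity pattern
target = rng.normal(size=8)      # the pattern we want to be there

def observe_current_pattern():
    """Placeholder readout: the current pattern plus measurement noise."""
    return state + 0.01 * rng.normal(size=state.shape)

def apply_stimulus(delta):
    """Placeholder write: idealized effect of a small corrective stimulus."""
    global state
    state = state + delta

for _ in range(200):
    delta = target - observe_current_pattern()   # send only the difference
    if np.linalg.norm(delta) < 1e-2:
        break
    apply_stimulus(0.1 * delta)                  # small nudge, then re-measure
```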

There are basically three ways to use EEG with AR and VR:

1. With artificial intelligence you can predict someone's emotions and intentions at the interface level (a minimal classifier sketch follows this list).
2. You can gamify the virtual or augmented environment.
3. You can let the EEG drive raw changes in light, sound, and tactile effects, so that you are providing a mirror for someone's brainwave signals, and they begin to form concepts about how their thoughts and emotions are brainwave signals.
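
For the first option, here is a minimal sketch of the kind of classifier involved, assuming you already have per-window EEG band-power features and labels (for example "focused" vs. "relaxed") collected during a calibration session. The features, labels, and the scikit-learn model choice are my illustrative assumptions, not part of the Muse SDK or Microdose VR.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X: one row per EEG window -> [delta, theta, alpha, beta] band power
# y: 1 = "focused", 0 = "relaxed" (labels gathered during calibration)
X_train = np.random.rand(200, 4)                        # stand-in calibration data
y_train = (X_train[:, 3] > X_train[:, 2]).astype(int)   # toy labeling rule for demo

clf = LogisticRegression().fit(X_train, y_train)

def predict_state(band_powers):
    """Map a new window's band powers to a predicted user state."""
    return "focused" if clf.predict([band_powers])[0] == 1 else "relaxed"

print(predict_state([0.2, 0.1, 0.3, 0.4]))
```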

This episode, Neural Lace Podcast #7 with guest Android Jones, was co-hosted by Micah Blumberg and Fifer Garbesi.

This audio was recorded at the 8th Augmented World Expo (AWE 2017), the largest AR/VR event in the world.

Listen to the latest Episode of the Neural Lace Podcast at Soundcloud
https://soundcloud.com/user-899513447/the-neural-lace-podcast-7-guest-android-jones

When the audio begins you can hear Fifer and Android going back and forth, and at that point I’m standing there holding a single boom microphone between them before I eventually speak up to join the conversation.

My co-host for this episode is Fifer Garbesi, who is also a journalist and field reporter for VRMA Virtual Reality Media, reporting on virtual and augmented reality devices as well as 360 cameras, photogrammetry, videogrammetry, and EEG devices. Very recently Fifer began directing her first neuroscience project, which I can't say anything about yet, except that I am helping to build it and that we are consulting major neuroscientists to make sure it is as medically accurate as possible.

Read Fifer’s latest article for VRMA called “What EEG Can Bring to Your VR Experience” https://vrma.work/2017/06/11/what-eeg-can-bring-to-your-vr-experience/
