Self Aware Networks: Computational Biology: Neural Lace

Silicon Valley Global News SVGN.io
Feb 20, 2017

The Neural Lace Talk Show is a podcast, show, and journal about science, technology, and next generation brain computer interfaces.
Contact via micah@vrma.io
The research for Neural Lace leads directly into creating artificial cortex and artificial brains. It means augmented reality and virtual reality without glasses, and it means downloading what you see, taste, feel, and hear so it can be shared with others, who can opt to upload your shared experiences.


I am doing media as part of my research effort to find the best science, technologies, and people in order to work on building next generation brain computer interfaces, artificial cortex, and artificial brains.

I am the host of The Neural Lace Talk Show.

I study everything at the intersection of Virtual Reality, Neuroscience, and Artificial Intelligence.

I believe we have never been closer to hacking into the VR system of the brain, so that we can create our own reality at the push of a button.

Have you ever wondered how neural lace might work? I have some amazing guests talking about it on my podcast; give it a listen if you have time today.

In the 4th episode of the Neural Lace Podcast, I talk to Andre Watson, the CEO of Ligandal, a genetic nano-medicine company developing personalized gene therapies: goo.gl/cgCNwX. Watson and I take a deeper dive into synapse physiology and the molecular-biological basis of consciousness.

How much do we really need to understand and observe to effectively create neural lace? Andre presents his argument for the biological basis of consciousness.

My new podcast is being recommended by high-level science folks to other high-level science folks, people with letters like Dr., PhD, or MD before or after their names. It is being listened to by the executives of major tech companies. I am getting great feedback on the new podcast, and it's getting global attention: people with five-star professional backgrounds from countries like India, Germany, and Japan are writing to me asking for more things they can read about Neural Lace related to the contents of my podcast. It's truly a podcast for the Global Silicon Valley community! The frequency at which new people are reaching out to me to talk, and to listen to the podcast, feels very special to me, like a countdown sequence to lift-off: 10, 9, 8, …

I recently have had to think hard about how much GPU power it might take to read the human mind. The fact that I have a meeting today with a major corporation in which the question will come up is part of why I have been thinking about it. In all honesty, I am not at all certain what the answer should be.

What do you think the hard number is? How much AI am I going to need to learn to read brainwaves as easily as reading a newspaper?

Someone’s answer to this was about the raw complexity of the human brain, with all its synapses and dendrites and connections.

Another person said that we will need quantum computing to save the day, because it’s just too complex otherwise.

Yet for me, neither person really attempted to answer my question. How much GPU power will be necessary to just barely crack the code of the human mind? We don’t need to brute-force our way into every secret (yet); we just need to stick a crowbar in the doors of perception long enough to take a peek. Then the computer can help model the rest in time.
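To make the question concrete, here is a rough back-of-envelope sketch in Python. Every number in it (channel count, sampling rate, decoder cost, GPU throughput) is an assumption I have plugged in purely for illustration, not a measurement; the point is only to show how such an estimate would be put together.

```python
# Rough back-of-envelope sketch (not a definitive answer): estimate the raw
# data rate of a hypothetical high-channel-count neural interface and the
# compute needed to run a modest decoder over it in real time.
# All numbers below are illustrative assumptions, not measurements.

channels = 1_000           # assumed number of recording channels
sample_rate_hz = 30_000    # assumed samples per second per channel
bytes_per_sample = 2       # assumed 16-bit samples

data_rate_bytes_per_s = channels * sample_rate_hz * bytes_per_sample
print(f"Raw data rate: {data_rate_bytes_per_s / 1e6:.1f} MB/s")

# Assume a decoder costing ~10,000 floating-point operations per channel
# per sample (a small neural network, purely hypothetical).
flops_per_channel_sample = 10_000
required_flops = channels * sample_rate_hz * flops_per_channel_sample
print(f"Decoder compute: {required_flops / 1e12:.2f} TFLOP/s")

# Compare against a GPU assumed to sustain ~10 TFLOP/s in practice.
gpu_sustained_tflops = 10.0
print(f"GPUs needed (very roughly): {required_flops / (gpu_sustained_tflops * 1e12):.2f}")
```

Under these made-up assumptions a single modern GPU would be more than enough; the real uncertainty is in the decoder cost, which could easily be orders of magnitude higher.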

My anticipation is that the depth and sophistication of the information passing through the nervous system will be incredibly complex, but we should be able to build a working model of how our minds work with the research I have planned.

I’m very interested to study how a tiny portion of brain activity corresponds to activity in other areas, such as in the environment of the individual being studied.

So in a sense I need AI to study both the person, with a new brain computer interface I am designing, and that person’s environment and their reactions to that environment: their heartbeat, their eye movements.
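As a sketch of what that could look like in software, here is a minimal Python example that lines up three hypothetical timestamped streams (a brain signal, heart rate, and eye tracking) onto one timeline so a model can relate them. All of the names, formats, and numbers are assumptions made up for illustration.

```python
# Minimal sketch: align timestamped samples from three hypothetical sources
# (brain signal, heart rate, eye tracking) onto a common timeline.
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class Sample:
    t: float      # timestamp in seconds
    value: float  # measurement (e.g., voltage, bpm, gaze x-coordinate)

def nearest(stream: list, t: float) -> Sample:
    """Return the sample in a time-sorted stream closest to time t."""
    i = bisect_left([s.t for s in stream], t)
    candidates = stream[max(0, i - 1): i + 1]
    return min(candidates, key=lambda s: abs(s.t - t))

def align(brain, heart, gaze):
    """For each brain sample, attach the nearest heart-rate and gaze samples."""
    return [
        {"t": b.t, "brain": b.value,
         "heart": nearest(heart, b.t).value,
         "gaze": nearest(gaze, b.t).value}
        for b in brain
    ]

# Toy usage with fabricated numbers, just to show the shape of the output.
brain = [Sample(0.00, 0.1), Sample(0.01, 0.3)]
heart = [Sample(0.00, 72.0), Sample(0.02, 73.0)]
gaze  = [Sample(0.005, 0.42), Sample(0.015, 0.44)]
print(align(brain, heart, gaze))
```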

There are two directions for the new brain computer interface.

There are a number of new chips coming to the market. We may or may not gain approval to implant some of these chips into the nostrils of human beings for basic research into the properties of self-awareness, research that could not be conducted on an animal, because in this study we will need the self-reflection of the person involved to give us feedback on a few small parts of the experiment.

Soon after the basic research is over, however, we will look at options for wirelessly reading signals from the thalamus region and wirelessly transmitting signals back into that area.

Regardless of how that particular direction of the research goes, there will be numerous other insights from this extremely detailed study of the human nervous system and its environment.

What I can say is that we will be able to make major advances in medical research.

We will begin to map the information channels like never before, using a computer to model the network of information throughout the nervous system: from the fingers and toes to the brain, the eyes, the voice, and the ears; to how we listen and see; to understanding how the metaphors of smell are encoded digitally in networks of neurons.

So we are going to begin to listen in on the complex information patterns of the human nervous system. It is like taking a stethoscope to the brain, to see what patterns are inside it that the computer can understand and translate for us. That is the plan.
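For a sense of what the simplest version of that "stethoscope" might do, here is a small Python sketch that takes a synthetic brain-like signal and reports how much of its energy falls into a few conventional frequency bands. The signal and the sampling rate are invented for the example; a real recording would come from the interface itself.

```python
# A minimal "stethoscope" sketch on a synthetic brain-like signal:
# estimate how much energy sits in a few conventional frequency bands.
import numpy as np

fs = 256                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds of samples
# Synthetic signal: a 10 Hz (alpha-band) rhythm plus noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

bands = {"delta (1-4 Hz)": (1, 4),
         "alpha (8-12 Hz)": (8, 12),
         "gamma (30-80 Hz)": (30, 80)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(f"{name}: relative power {spectrum[mask].sum() / spectrum.sum():.3f}")
```

Real analysis would of course go far beyond band power, but even this toy version shows the basic loop: record, transform, and let the computer report back a pattern a person can read.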

If you can help me answer some of these hard questions, if you are up late nights reading the latest paper in computational biology and also totally alert to all the advances in AI coming out of Google via DeepMind Technologies, then I want to talk to you.

I think it takes people who are students of both biology and computer science to understand where the world is going to go next.

If you are interested in discussing the science and technology that might go into building next generation brain computer interfaces, please connect with me in these groups:

Self Aware Networks: Computational Biology: Neural Lace

https://www.facebook.com/groups/neomindcycle/

Theoretical and Computational Neuroscience

https://www.facebook.com/groups/theoretical.and.computational.neuroscience/

Neurophysics+

https://www.facebook.com/groups/IFLNeuro/

Death Star Robot

https://www.facebook.com/groups/deathstarrobot/

I’m also with The Vision Agency helping to bring products like MicrodoseVR to the world.
