fNIRS (Functional Near-Infrared Spectroscopy): The Neural Lace Podcast Season 2 Episode 4

This podcast was recorded just over a week before neuroscientist Jonathan Toomim gives a talk about fNIRS at NeuroTechX San Francisco, hosted at the Red Victorian experimental hotel.

Silicon Valley Global News (SVGN.io)
May 11, 2019


The podcast can be found at https://youtu.be/DlfAOM4dx90

The talk will be held on May 16th, 2019. To RSVP, click here: https://www.meetup.com/NeuroTechSF/events/nczrxqyzhbvb/

Article, podcast, and event organized and hosted by Micah Blumberg, http://vrma.io

To learn more about fNIRS, in this podcast I spoke with Jonathan Toomim, who was in a quite literal sense grandfathered into the field: his grandfather, the late Hershel Toomim, together with Robert Marsh, was awarded the US patent on HEG (Hemo-Encephalography) in 1999.

What can neurotech enthusiasts and neuroscientists do today with fNIRS?
Answer: biofeedback, and pure measurement (like a poor man's fMRI).
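
To make "pure measurement" concrete: fNIRS shines near-infrared light into the head at two or more wavelengths and infers changes in oxygenated (HbO) and deoxygenated (HbR) hemoglobin from how much light returns, via the modified Beer-Lambert law. Here is a minimal sketch of that conversion; the extinction coefficients and pathlength values are illustrative ballpark numbers, not constants from any particular device.

```python
# Modified Beer-Lambert law: recover concentration changes of oxygenated
# (HbO) and deoxygenated (HbR) hemoglobin from light attenuation at two
# wavelengths. Coefficients are illustrative ballpark values only.
import numpy as np

# Extinction coefficients [1/(mM*cm)] at ~760 nm and ~850 nm (approximate):
# HbR absorbs more at 760 nm, HbO more at 850 nm.
E = np.array([[1.5, 3.8],   # 760 nm: [HbO, HbR]
              [2.5, 1.8]])  # 850 nm: [HbO, HbR]
d_cm = 3.0   # source-detector separation on the scalp
dpf = 6.0    # differential pathlength factor (typical adult-head value)

def hemoglobin_delta(i0: np.ndarray, i: np.ndarray) -> np.ndarray:
    """Return [dHbO, dHbR] in mM, given baseline and current light
    intensities at the two wavelengths."""
    delta_od = -np.log10(i / i0)            # change in optical density
    return np.linalg.solve(E * d_cm * dpf, delta_od)

# Example: light at both wavelengths dims slightly as oxygenation rises
print(hemoglobin_delta(np.array([1.0, 1.0]), np.array([0.97, 0.95])))
```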

Some of the references we made in the podcast:

1. Mary Lou Jepsen’s Openwater talk at the Long Now

2. Reconstructing speech from the auditory cortex:
“When people speak, or even just imagine speaking, telltale patterns of activity appear in their brain. That much was already known from decades of previous research. What has also been found is that distinct, but recognizable, patterns of signals emerge when we listen to someone speak, or imagine listening.

“So speaking, imagining speaking, listening to someone speaking, and imagining listening all produce patterns of signals. Experts have tried for years to record and decode these patterns. They see a future in which thoughts need not remain hidden inside the brain, but could instead be translated into verbal speech at will. Theorizing about this was one thing; actually doing it proved far more challenging than expected.

“Dr. Mesgarani is an associate professor of electrical engineering at Columbia’s Fu Foundation School of Engineering and Applied Science. To decode the brain signals, he and others at first focused on simple computer models that analyzed spectrograms, which are visual representations of sound frequencies.”

“Towards reconstructing intelligible speech from the human auditory cortex”
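
Since those decoding models work on spectrograms, here is a minimal sketch of computing one with SciPy, on a synthetic signal, just to make "visual representations of sound frequencies" concrete:

```python
# Compute a spectrogram: the time-frequency representation that
# speech-decoding models like Mesgarani's operate on.
# The signal here is synthetic; real work would use recorded audio.
import numpy as np
from scipy.signal import spectrogram

fs = 16000  # sample rate in Hz, typical for speech
t = np.arange(0, 1.0, 1 / fs)
# An upward-sweeping tone, standing in for a moving speech formant
audio = np.sin(2 * np.pi * (200 + 400 * t) * t)

freqs, times, power = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
print(power.shape)  # (frequency bins, time frames)
```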

3. Highly spatial EEG from Switzerland and Germany: “Now, scientists at the University of Geneva (Switzerland), in collaboration with the University of Cologne (Germany), have investigated whether a non-invasive method, electroencephalography (EEG), could be employed in tandem with mathematical algorithms to measure this brain activity externally.

“For the first time, they proved that this technique is able to record signals usually only seen by implanting electrodes in the brain.

“Using the technique, the scientists were able to quantify and record the electrical activity of the subcortical areas of four OCD and Tourette’s patients who had been given electrode implants. While doing this, the patients were also fitted with EEG so that the scientists could measure the activity of the same areas from the surface.”
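
The "mathematical algorithms" here are inverse solutions: given a lead-field matrix that maps source currents to scalp voltages, estimate the sources from the surface measurements. A minimal sketch of one classic approach, a Tikhonov-regularized minimum-norm estimate, on toy synthetic data (real pipelines derive the lead field from head anatomy, e.g. with MNE-Python):

```python
# Minimum-norm inverse estimate: recover source activity x from scalp
# measurements y = L @ x + noise, where L is the lead-field matrix.
# Toy dimensions and a random lead field, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))  # toy lead field
x_true = np.zeros(n_sources)
x_true[42] = 1.0                                 # one active deep source
y = L @ x_true + 0.01 * rng.standard_normal(n_sensors)

lam = 1e-2  # Tikhonov regularization strength
# Minimum-norm solution: x_hat = L^T (L L^T + lam*I)^(-1) y
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
print(int(np.abs(x_hat).argmax()))  # index of the strongest estimated source
```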

4. Hebbian Learning

5. Predictive Coding

6. Jeff Hawkins on cortical columns behaving as grid cells for items in the world (and more). References:
https://numenta.com/resources/videos/jeff-hawkins-human-brain-project-screencast/
https://numenta.com/neuroscience-research/research-publications/papers/a-theory-of-how-columns-in-the-neocortex-enable-learning-the-structure-of-the-world/
https://numenta.com/resources/videos/thousand-brains-theory-of-intelligence-microsoft/

/////////////////////////////////////////////////////////////////////

About the talk:

The talk will be held on May 16th, 2019. To RSVP, click here: https://www.meetup.com/NeuroTechSF/events/nczrxqyzhbvb/

On May 16th, Jonathan Toomim will give a talk on functional near-infrared spectroscopy at a NeuroTechX event hosted by the Red Victorian in San Francisco.

Jonathan Toomim, a neuroscientist by training, will talk about brain-computer interfaces and more. Specifically, he will discuss fNIRS (functional near-infrared spectroscopy) and the fNIRS device he built in 2014, before he became involved with projects like Xthinner and Blocktorrent.

Here are some links that may give you an idea of the interesting projects Toomim has been involved in.

Excerpt:
“Hemo-Encephalography: In 1999, the late Hershel Toomim and Robert Marsh were awarded the US patent on HEG. HEG uses light to observe blood in the brain through the skull. In 2009, Hershel Toomim and I had the opportunity to discuss the wave. Had he seen it using HEG? His answer was that he had looked for it, but “No”, he had not seen it. His thinking was that for cerebral homeostasis, the brain micro-manages blood flow, normalizing the wave.
“But could it be a matter of filtering, i.e. that very low frequency signals were being excluded? (The way we were able to see the wave plethysmographically was by eliminating the low-frequency filtering of the state-of-the-art heart rate variability instrument.) Hershel and I agreed to consider it. He passed away in 2011 at age 95.
“This year I learned that Jonathan Toomim, Hershel’s grandson, had continued his HEG research. I connected with Jonathan via Skype, and learned that he had developed a research instrument without filtering.”
https://coherence.com/Breathing_Blood%20Flow_And_The_Brain_Production.pdf
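
The filtering question in the excerpt is easy to demonstrate: a standard high-pass filter erases oscillations slower than its cutoff. A minimal synthetic sketch, with a made-up ~0.02 Hz "wave" riding on a ~1 Hz pulse, showing how a 0.1 Hz high-pass removes the wave (illustrative frequencies, not the actual settings of any Toomim instrument):

```python
# Why low-frequency filtering can hide a slow blood-flow "wave":
# a 0.1 Hz high-pass removes a ~0.02 Hz oscillation while leaving
# the ~1 Hz cardiac pulse intact. Synthetic data for illustration.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 10.0                         # samples per second
t = np.arange(0, 300, 1 / fs)     # five minutes of signal
slow_wave = 0.5 * np.sin(2 * np.pi * 0.02 * t)  # the slow "wave"
pulse = 0.1 * np.sin(2 * np.pi * 1.0 * t)       # cardiac component
raw = slow_wave + pulse

b, a = butter(2, 0.1 / (fs / 2), btype="highpass")
filtered = filtfilt(b, a, raw)

# The slow wave dominates before filtering and is gone after it
print(f"std before: {raw.std():.3f}, after high-pass: {filtered.std():.3f}")
```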

http://jtoomim.org/

/////////////////////////////////////////////////////////////////////

About NeuroTechX San Francisco (or NeuroTechSF for short)

We meet to discuss ideas related to neuroscience, brain-computer interfaces, deep learning, and software development; we host talks; and we have hacknights where people can come and work on neurotech-related technologies.

In general, the San Francisco NeuroTechX group divides its focus among a few key topics, all related in some sense to the research and development of neurotechnology: medical imaging technologies, including brain-machine interfaces and brain-computer interfaces; software development around brain-computer interfaces, including spatial computing such as VR, AR, and WebXR; and deep learning as applied to 3D volumes of data and to medical imaging.

This year (2019) we have pledged to spend more time studying deep learning on 3D point clouds. The goal is to learn how to do semantic segmentation or object segmentation on 3D data such as point clouds, voxels, and meshes, which might be collected with lidar, RGB-D cameras, fMRI machines, fNIRS (functional near-infrared spectroscopy), Openwater (Mary Lou Jepsen’s technology), EIT (electrical impedance tomography), or new highly spatial EEG (reference: https://www.techexplorist.com/gentle-method-unlock-mysteries-deep-brain/21231/). A PointNet-style sketch of what per-point segmentation looks like appears further below.

Applications: In short, semantic segmentation of 3D data allows (for example) a vehicle to identify objects in its environment and separate out which points belong to each object and which do not. Object segmentation has also been used to correlate neural circuits in fruit flies with fly behavior captured on high-speed camera.
https://www.hhmi.org/news/artificial-intelligence-helps-build-brain-atlas-fly-behavior

Other applications include recognizing spaces, planes, edges, and objects for augmented reality and virtual reality, such as re-skinning the couch in your room so that it appears as a wall in VR.

In January of 2019 we were inspired by a talk given by Or Litany (https://orlitany.github.io/) to shift our focus onto deep learning in a significant way. Previously, our group met to work on connecting EEG to VR: we learned to develop WebXR with A-Frame and three.js, and piped the EEG signals to the webpage via a WebSocket. Needless to say, we were successful, and now we are changing our focus to accomplish something bigger.
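
For readers curious what that pipeline looked like in spirit, here is a minimal sketch of the server side: streaming EEG-like samples to a browser over a WebSocket. The third-party `websockets` package, the port number, and the synthetic signal are all assumptions for illustration; the real project read from actual EEG hardware.

```python
# Minimal sketch of streaming EEG-like samples to a WebXR page over a
# WebSocket. Assumes the third-party "websockets" package
# (pip install websockets); the signal is synthetic, standing in for a
# real EEG amplifier feed.
import asyncio
import json
import math
import random

import websockets

SAMPLE_RATE_HZ = 256  # assumed EEG sampling rate

async def stream_eeg(websocket):
    """Send one synthetic EEG sample per tick as JSON; the browser-side
    A-Frame/three.js scene can map each value onto position or color."""
    t = 0.0
    while True:
        # 10 Hz alpha-band sine plus noise, as a stand-in for one channel
        sample = math.sin(2 * math.pi * 10 * t) + random.gauss(0, 0.2)
        await websocket.send(json.dumps({"t": t, "uV": sample}))
        t += 1.0 / SAMPLE_RATE_HZ
        await asyncio.sleep(1.0 / SAMPLE_RATE_HZ)

async def main():
    # websockets >= 11 passes only the connection object to the handler
    async with websockets.serve(stream_eeg, "localhost", 8765):
        await asyncio.Future()  # run until interrupted

if __name__ == "__main__":
    asyncio.run(main())
```

On the browser side, a plain `new WebSocket("ws://localhost:8765")` with an `onmessage` handler is enough to feed the samples into an A-Frame scene.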

This group is interested in PointNet (http://stanford.edu/~rqi/pointnet/) and in 3D cross-hair convolutional neural networks: https://medium.com/silicon-valley-global-news/3d-cross-hair-convolutional-neural-networks-5d39e2b565ca
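
For a flavor of what PointNet does, here is a heavily reduced, illustrative per-point segmentation model in PyTorch: a shared per-point MLP, a max-pooled global feature, and a head that labels each point. Toy sizes and no T-Net alignment; this is a sketch of the idea, not the authors' reference implementation.

```python
# A minimal PointNet-style per-point segmentation sketch in PyTorch.
import torch
import torch.nn as nn

class TinyPointNetSeg(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Shared per-point MLP, implemented as 1x1 convolutions over (B, C, N)
        self.local = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
        )
        # Per-point head that sees local features concatenated with the
        # max-pooled global feature (the key PointNet idea).
        self.head = nn.Sequential(
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (batch, 3, num_points)
        local = self.local(xyz)                              # (B, 128, N)
        global_feat = local.max(dim=2, keepdim=True).values  # (B, 128, 1)
        global_feat = global_feat.expand(-1, -1, local.shape[2])
        logits = self.head(torch.cat([local, global_feat], dim=1))
        return logits                                        # (B, classes, N)

# Usage: label every point of a random 1024-point cloud
model = TinyPointNetSeg(num_classes=4)
cloud = torch.rand(1, 3, 1024)
labels = model(cloud).argmax(dim=1)  # (1, 1024) per-point class ids
```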

Previous examples of our work with EEG and VR:
https://photos.app.goo.gl/5XsrPcEeVdUmVt9t6

Here is an older video of the Neurohaxor WebXR, EEG, FFT, scatterplot/spectrogram project running in WebVR, from 10/25/2018: https://www.facebook.com/worksalt/videos/2467372666622699/

This was our original event description for Neurohaxor code nights https://medium.com/silicon-valley-global-news/neurotechsf-sf-vr-360-noisebridge-91a34d788a5d

The story so far: https://medium.com/silicon-valley-global-news/noisebridge-went-to-the-maker-faire-in-this-article-you-will-learn-about-ngalac-the-93f4857d3014

Watch the Neural Lace Podcast Season 2 Episode 1, on NeuroTechX and OpenEIT, with Jean Rintoul (https://youtu.be/aexQwTpOwYc) to get the big-picture vision of what we want to accomplish.

Also watch the Neural Lace Podcast S2 E2 with Jules Urbach https://youtu.be/yMsaNsqzjFQ

Previously, we made significant progress at the July 29th, 2018 meetup, where we were able to use voltages from the skin to move objects in WebVR. https://www.facebook.com/worksalt/videos/2332211350138832/

Other links, including the GitHub repo and our online groups: https://medium.com/@vrma/list-of-links-from-neurotech-sf-vr-on-8-31-2018-7a80cfd3901b

/////////////////////////////////////////////////////////////////////


Silicon Valley Global News (SVGN.io): VR, AR, WebXR, 3D Semantic Segmentation AI, Medical Imaging, Neuroscience, Brain Machine Interfaces, Light Field Video, Drones