Neurohacking your brain with Nootropics (Draft for Comments)

Vadim Astakhov
Brain Computer Interfaces
10 min read · Mar 18, 2021

AI/ML pipeline for Nootropics Ultrasound dose refinement and targeted delivery to brain regions

Introduction

Nootropics have become very popular among people (often calling themselves Neurohackers) looking to enhance their mind, brain, and body. Neurohacking is a subclass of biohacking focused specifically on the brain: Neurohackers seek to better themselves by “hacking the brain” to improve reflexes, learn faster, or treat psychological disorders, leveraging transcranial stimulation technologies as well as nootropic supplements.

Coffee is the best-known one, but the industry is growing with many new offerings, even though the effects of most of these agents are not fully determined. Researchers need inexpensive tools to perform experiments with volunteers in which they administer certain substances multiple times. They often try to minimize doses to mitigate potential side effects while still administering enough of a substance to produce a meaningful effect (if any).

Even for nootropics considered Generally Recognized as Safe, extensive consumption might lead to psychological addiction and even side effects due to accumulation of those substances and/or their metabolites in the body. At the same time, reducing the dose might lead to no effect at all. Nootropics are usually taken as pills, which pass through the digestive system, where some of the substance is lost before entering the bloodstream. The remainder then has to cross the Blood Brain Barrier (BBB) to reach the brain and have an effect, all while being continuously metabolized.

One way to improve efficiency is to increase the absorption rate in the brain regions where nootropics are meant to act, so that they create the desired effects.

In this blog, I will demonstrate the use of Google Cloud services to set up an AI/ML pipeline for researchers exploring the claim that nootropics can be taken to improve mental performance in healthy people.

I am exploring low-energy transcranial ultrasound stimulation of specific parts of the brain to increase the percolation of nootropics through the BBB. This approach is typically used in chemotherapy, but nootropics researchers can quickly adopt it by combining low-intensity ultrasound devices with low-cost Brain Computer Interfaces for brain activity mapping (a visual demo of “Neurohacking your brain with Nootropics and Ultrasound Stimulation” can be found here).

Also, there are a couple more use cases we are exploring where this solution might be repurposed:

It might be possible to adopt this pipeline as an element of drug addiction treatment by reducing the required drug dose. (Though this proposal would require extensive research by medical professionals.)

Another use case is to leverage the pipeline to detect early signs of migraine or certain disorders like depression and to help administer low doses of prescribed drugs.

This blog will walk through the steps to set up such an environment and the required cloud-based computation resources. I’ve repurposed a solution developed in a previous blog (Game Environment as a Training Platform for Brain Computer Interfaces).

High-level overview of the pipeline

I’ve collected EEG data from Brain Computer Interfaces and pushed it to Pub/Sub. I’ve used Dataflow to move the data from the Pub/Sub topic to BigQuery, and then used BigQuery AutoML to create a model that classifies different states of my mind. This model was then used to guide nootropic supplement dosing and to target ultrasound brain stimulation while experimenting with supplement doses and effects.
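As a minimal sketch of the modeling step (in the same Node.js used for the publisher code later in this blog), the snippet below creates a classification model with BigQuery ML. The dataset, table, and column names (bci.eeg_sessions, mind_state, and the band-power columns) are hypothetical placeholders, not the actual schema:

const {BigQuery} = require('@google-cloud/bigquery');

const bigquery = new BigQuery();

// Train a classifier over EEG features already landed in BigQuery.
// Dataset, table, and column names are hypothetical placeholders.
async function trainMindStateModel() {
  const query = `
    CREATE OR REPLACE MODEL \`bci.mind_state_classifier\`
    OPTIONS (model_type = 'AUTOML_CLASSIFIER',
             input_label_cols = ['mind_state']) AS
    SELECT alpha_power, beta_power, gamma_power, theta_power, mind_state
    FROM \`bci.eeg_sessions\``;
  const [job] = await bigquery.createQueryJob({query});
  await job.getQueryResults(); // blocks until the training job finishes
  console.log('Model training complete.');
}

trainMindStateModel().catch(console.error);

Once trained, the model can be queried with ML.PREDICT from the same client to classify new EEG rows.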

Hardware used to build the solution

BCI for brain activity sensing

I’ve explored several low-cost Brain Computer Interfaces (BCIs) which collect signatures of brain activity. This information was used by the AI/ML pipeline to map signals to human psychological states before and during nootropics administration.

NextMind and MindWave were easy to use but did not provide wide coverage of the brain regions where I wanted to detect signals, so I picked Emotiv, OpenBCI, and Neurosity, which have been developed in recent years. Those systems are relatively low cost and provide advanced APIs and GUIs to collect EEG signatures and experiment with different emotional states.

Neurosity GUI at the top and OpenBCI User Interface at the bottom

Steps to build the data acquisition pipeline are described for the Emotiv BCI in the previous blog (Game Environment as a Training Platform for Brain Computer Interfaces). To repurpose this pipeline for Neurosity, a researcher will have to install the Neurosity Notion API, run the code to get raw brainwave signals from the Neurosity Notion device, and then add a function which publishes messages to a Pub/Sub topic (see details in the blog):

const {PubSub} = require('@google-cloud/pubsub');

// Creates a client
const pubsub = new PubSub();

async function publishMessage(topicName, data) {
  const dataBuffer = Buffer.from(data);
  const messageId = await pubsub.topic(topicName).publish(dataBuffer);
  console.log(`Message ${messageId} published.`);
}
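To connect the Neurosity device to this publisher, a minimal sketch might look like the following. It assumes the @neurosity/notion SDK; the device ID, credentials, and topic name are placeholders:

const {Notion} = require('@neurosity/notion');

// Device ID and credentials below are placeholders.
const notion = new Notion({deviceId: 'YOUR_DEVICE_ID'});

async function streamToPubSub() {
  await notion.login({email: 'you@example.com', password: 'YOUR_PASSWORD'});
  // Forward every raw brainwave sample to the Pub/Sub topic.
  notion.brainwaves('raw').subscribe((brainwaves) => {
    publishMessage('eeg-topic', JSON.stringify(brainwaves));
  });
}

streamToPubSub().catch(console.error);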

Similar steps can be performed if researchers use the OpenBCI Python API and leverage this example to export real-time data from the device.

Transcranial Ultrasound Stimulation Hardware

I’ve used a waterproof JSN-SR04T ultrasonic transducer module for Arduino. This transducer is small and can be easily mounted on the BCI, as shown in the picture, to perform transcranial stimulation of a specific part of the brain.

Simple tutorials for setting up the transducer can be found on the web, with code examples to run it. Researchers can experiment with different stimulation patterns by adjusting the delay-time parameters in the code that triggers the transducer (see Appendix A):

delayMicroseconds(5);

// Trigger the sensor by setting the trigPin high for 10 microseconds:
digitalWrite(trigPin, HIGH);
delayMicroseconds(10);
digitalWrite(trigPin, LOW);

Transcranial Ultrasound Stimulation should be used with caution

The idea of inducing brain activity, or even projecting complex holograms into someone’s brain to create an experience, is not new. Ultrasound technology has been used to stimulate specific brain regions and specific neuron types. Ultrasound stimulation focused on certain regions provides a mechanism for neuromodulators to cross the Blood Brain Barrier and get absorbed by neurons in that specific region.

There is a nice short video of how this technology can be used to deliver drugs (more details can be found here).

I’ve decided to experiment with transcranial ultrasound stimulation because it has been extensively studied for brain stimulation, lets me use a variety of ultrasound devices, and allows me to focus the ultrasound on specific areas of the brain.

Also, I’d like to target not only certain areas of the brain but also certain types of cells in the brain. Those cells use different types of neurotransmitters and neuromodulators, so I’ve decided to explore low-dose nootropic/food supplements which can be precursors for those neurotransmitters and neuromodulators.

When leveraging ultrasound technology, we would like to use minimal intensity with maximum specificity for brain region and neuron type; thus we are going to use specific food supplements which can act as precursors for the neuromodulators specific to certain emotional states.

Claims to explore

I’ve explored two claims: first, that nootropics affect emotional state, and second, that nootropics can enhance creativity and concentration.

For the experiment with emotions, I’ve used off-the-shelf supplements like Curcumin and 5-HTP, which are often suggested to improve mood.

I’ve decided to leverage Brain Computer Interfaces to verify my emotional state not just by subjective report but also with EEG signatures of my brain.

Experiment (Do the supplements improve mood?)

To perform this experiment, I’ve reviewed scientific publications to understand what part of the brain controls emotions and whether we can use artificial intelligence to map our moods.

Then I repurposed algorithms developed by researchers actively working on detecting emotions at scale.

Several researchers have published studies on mapping brain activity to emotional states, such as “Decoding the Nature of Emotion in the Brain” or “A Bayesian Model of Category-Specific Emotional Brain Responses”, published in PLOS.

Those works demonstrate that certain areas are more active during various emotional states, which provides a way to track emotions in the brain in real time.

Source: https://gifer.com/en/BP8u

This technology collects personalized databases of emotional states and correlates them with brain activity.

The brain has some distinguishable signatures which represent emotional states. Researchers present a subject with video and audio clips that induce various emotional states and collect EEG signatures of those states.

Comments: There are other approaches, like the affective computing developed by Affectiva, a software company that builds artificial intelligence that understands human emotions, cognitive states, activities, and the objects people use by analyzing facial and vocal expressions. The company spun out of the MIT Media Lab and created the new technology category of Artificial Emotional Intelligence (Emotion AI).

I’ve used Brain Computer Interfaces to collect EEG signals and push them to the Pub/Sub topic; Dataflow then moved those data to BigQuery. This approach was presented in the work of MIT researchers who collected EEG signals for different emotional states and used them to train an ML model for EEG emotion recognition, performing a simple classification of positive vs. negative emotional states. (Source: https://cleverpoint.pro/applications/)

The model was able to identify positive vs. negative emotions with good accuracy by recognizing the higher intensity of gamma and beta rhythms in certain parts of the brain. (Source: https://bciovereeg.blogspot.com/2016/01/identifying-stable-patterns-over-time.html)
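To make the role of band intensity concrete, here is a small, library-free sketch of estimating power in the beta and gamma bands from one window of raw EEG samples using a naive DFT. The 250 Hz sampling rate and the band edges are common defaults, not values taken from the cited study:

// Estimate signal power within a frequency band using a naive DFT.
function bandPower(samples, sampleRateHz, loHz, hiHz) {
  const n = samples.length;
  let power = 0;
  for (let k = 1; k < n / 2; k++) {
    const freq = (k * sampleRateHz) / n;
    if (freq < loHz || freq > hiHz) continue;
    let re = 0;
    let im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (-2 * Math.PI * k * t) / n;
      re += samples[t] * Math.cos(angle);
      im += samples[t] * Math.sin(angle);
    }
    power += (re * re + im * im) / (n * n);
  }
  return power;
}

// Example: one second of fake data at 250 Hz, compared across two bands.
const window = Array.from({length: 250}, () => Math.random() - 0.5);
console.log('beta power (13-30 Hz):', bandPower(window, 250, 13, 30));
console.log('gamma power (30-50 Hz):', bandPower(window, 250, 30, 50));

Per-band powers like these, computed per channel, are the kind of features that can be stored in BigQuery and fed to the classifier.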

I followed the same approach and leveraged a published protocol for inducing emotions together with the Dataset for Emotion Analysis using Physiological and Audiovisual Signals. I modified those protocols and created a personalized database of clips that induce emotional states specific to myself, based on The Anatomy of Emotions, and used 6 basic emotional states, such as (a sketch of the labeling step follows the list):

Happiness - triggered by a collection of images of loved ones
Anger - triggered by stories of someone’s abusive behavior
Sadness - triggered by memories of friends who have passed away, and so on
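Here is a minimal sketch of the labeling step, reusing the publishMessage function shown earlier. The topic name, file names, and clip durations are hypothetical; the idea is to publish a timestamped label marker when each clip starts, so EEG samples and emotion labels can later be joined by timestamp in BigQuery:

// Hypothetical clip schedule: emotion label plus clip duration in seconds.
const clips = [
  {emotion: 'happiness', file: 'loved_ones.mp4', seconds: 60},
  {emotion: 'anger', file: 'abusive_story.mp4', seconds: 60},
  {emotion: 'sadness', file: 'memories.mp4', seconds: 60},
];

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function runLabelingSession() {
  for (const clip of clips) {
    // Mark the moment the clip starts; playback itself is out of scope here.
    await publishMessage('emotion-labels', JSON.stringify({
      emotion: clip.emotion,
      file: clip.file,
      startedAt: new Date().toISOString(),
    }));
    await sleep(clip.seconds * 1000);
  }
}

runLabelingSession().catch(console.error);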

EEG signatures for various emotional states. Rhythms like alpha, beta, gamma, and theta are shown in different colors

To extract unique signatures for a specific image, I used an approach published in the scientific paper “Natural image reconstruction from brain waves: a novel visual BCI system with native feedback”, leveraging its feedback-loop idea to map EEG signatures to specific emotionally loaded images.

The software package from “Merging with AI: How to Make a Brain-Computer Interface to Communicate with Google using Keras and OpenBCI” helped me with signal processing.

The next step was to repeat the data collection process several times, create a look-up table which maps mood states to EEG signatures, and perform Emotion Classification Using Deep Neural Networks. (A very nice overview of how to use Brain Computer Interfaces to build classifiers can be found in A Beginner’s Guide to Brain-Computer Interface and Convolutional Neural Networks.)
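As an illustrative sketch of such a classifier in TensorFlow.js: the feature layout (four band powers for each of 8 channels) and the six class names are assumptions for this example, not the architecture from the cited guide:

const tf = require('@tensorflow/tfjs');

// 8 channels x 4 band powers = 32 input features; 6 basic emotions as classes.
// These shapes and names are assumptions for illustration only.
const NUM_FEATURES = 32;
const EMOTIONS = ['happiness', 'anger', 'sadness', 'fear', 'disgust', 'surprise'];

const model = tf.sequential();
model.add(tf.layers.dense({units: 64, activation: 'relu', inputShape: [NUM_FEATURES]}));
model.add(tf.layers.dense({units: 32, activation: 'relu'}));
model.add(tf.layers.dense({units: EMOTIONS.length, activation: 'softmax'}));
model.compile({optimizer: 'adam', loss: 'categoricalCrossentropy', metrics: ['accuracy']});

// Train on labeled EEG feature vectors (xs) and one-hot emotion labels (ys),
// then map the argmax of a prediction back to an emotion name.
async function trainAndPredict(xs, ys, sample) {
  await model.fit(xs, ys, {epochs: 50, shuffle: true});
  const scores = model.predict(tf.tensor2d([sample]));
  const index = (await scores.argMax(-1).data())[0];
  return EMOTIONS[index];
}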

Finally, I’ve used an EEG device to detect mood states and leveraged it to create a look-up table which I’ve called the “Brain to Emotion Map”.

After those steps were completed, I could run an experiment in which video clips were played to induce a feeling of sadness, and a low dose of nootropics was then administered. I repeated the experiment 10 times over two consecutive weeks. In 5 cases, I used the “Emotion Map” to guide the ultrasound stimulation device toward the brain regions active during specific emotional states.

Those experiments provided some interesting insights. In all 5 cases with ultrasound stimulation, I experienced a quick shift in my mood state, though I definitely do not claim these results are scientifically rigorous.

Demo of BCI use to explore ultrasound stimulation

Moreover, this approach requires further development and can only be considered a toy environment for exploring emotions and ways to privately share them.

Future development (Sleep Lab)

Nootropics proponents often claim that supplements can enhance concentration and creativity. To explore this claim, I’ve chosen supplements which are reported to enhance concentration and control during dream states. This seems an interesting use case, as it allows us to experiment with creativity through the complexity of the dream and, at the same time, test whether a supplement enhances control and concentration.

It was recently reported that lucid dreamers can hear and answer questions while still asleep, which makes this an interesting case for potential new therapies.

Experiment 2 (Do the supplements enhance concentration and creativity?)

For this experiment, I’ve explored protocols described in the book “Advanced Lucid Dreaming: The Power of Supplements”, which describes how some food supplements boost awareness and consciousness control during dreams.

I am using ZLab software and hardware from ZMax to set up my “sleep lab” at home and run experiments similar to those described in the original paper (some video materials about the original study can be found here).

I am currently experimenting with a set-up in which the GCP analytics platform can help me predict the optimal sleep protocol, such as the dose and onset time for lucid-dream supplements.

Also, I am exploring different modalities for stimulation such as devices from Vielight and some new optimized models.

I don’t speak for my employer. This is not official Google work. Any errors that remain are mine, of course.

Appendix A

/* Arduino example sketch to control a JSN-SR04T ultrasonic distance sensor with Arduino. No library needed. More info: https://www.makerguides.com */

// Define Trig and Echo pin:
#define trigPin 2
#define echoPin 3

// Define variables:
long duration;
int distance;

void setup() {
  // Define inputs and outputs
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  // Begin Serial communication at a baudrate of 9600:
  Serial.begin(9600);
}

void loop() {
  // Clear the trigPin by setting it LOW:
  digitalWrite(trigPin, LOW);
  delayMicroseconds(5);

  // Trigger the sensor by setting the trigPin high for 10 microseconds:
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  // Read the echoPin. pulseIn() returns the duration (length of the pulse) in microseconds:
  duration = pulseIn(echoPin, HIGH);

  // Calculate the distance:
  distance = duration * 0.034 / 2;

  // Print the distance on the Serial Monitor (Ctrl+Shift+M):
  Serial.print("Distance = ");
  Serial.print(distance);
  Serial.println(" cm");

  delay(100);
}
