Game Environment as a Training Platform for Brain Computer Interfaces

Vadim Astakhov
Brain Computer Interfaces
10 min read · Sep 11, 2019

In this post, I will demonstrate how to use Google Cloud Platform and the Unity game engine to build a personalized Artificial Intelligence pipeline for a non-invasive Brain Computer Interface. (A demo can be found here.)

I will walk through the process of building a demo in which the Cloud Pub/Sub service ingests data from a small BCI device that detects electrical activity from the human brain. I used Dataprep and Dataflow to transform the raw data from the Pub/Sub topic and import the transformed data into BigQuery. Then, the BQML and AutoML services were used to build Machine Learning models and train them to recognise my intentions based on patterns of electrical signals from the brain. Finally, I leveraged those models to mentally control a character/avatar in the Unity video game environment.

The pipeline is generic enough to be quickly adapted to various types of computer games.

Overview and why I am doing this

Brain Computer Interfaces (BCI) have been known for decades but still fall short of adoption by the general public. Non-invasive BCI devices require users to maintain high concentration on specific mental tasks to produce robust patterns detectable by a computer. Such concentration is easily disturbed by external factors and, overall, does not provide the level of comfort required for a technology to be adopted in everyday life.

Pre-trained Machine Learning models can simplify the use of the technology by providing a personalized AI that detects our intentions and mental commands by analysing electrical signals from the brain. Still, such a personal AI has to be trained to recognize a person's unique electrical signatures generated by brain activity.

Computer video games seem like a good environment in which to train our personal AI to "understand" the personal language of the mind. As a motivating factor, BCI can provide an additional way to operate a gaming environment. It could serve as an extra control channel, sending commands or mental states that cannot be easily transmitted through audio or touch interfaces. BCI could also be used for emotional state monitoring, either 1) during the game, in order to make adaptive and dynamic video games, or 2) during game creation, in order to maximize measures of game quality derived from a tester's mental state.

Brain computer technology has become a commodity recently. Today we have devices like OpenBCI, Emotiv, NeuroSky, Muse and Neurosity that are affordable for both developers and consumers. That makes the idea of using those devices with video games quite realistic.

The first adopters of this technology could be gamers interested in a new way of operating inside a game with "mind commands"; people interested in "emotional" communication, who might want to express their emotional state when articulation or visual symbolic communication is not adequate; and people with motor and voice disabilities who cannot use traditional keyboard, mouse or voice interfaces.

BCI Hardware

There are a few affordable, commercially available BCI devices. For this demo, I used the Emotiv Insight, which comes with well documented steps on how to build applications using the Emotiv Cortex API.

Once you have the device, you will need to install the Cortex API and download the Emotiv NodeJS examples provided on GitHub. The Cortex software can be found on the Cortex download page, where the installers include the Cortex UI application and the Cortex Service. The Cortex Service starts automatically after the installation finishes. To check that the Cortex Service is running, just turn the Emotiv Insight device on and open the Cortex UI.

If the Cortex Service works well, you can enter the Cortex UI and check the quality of the signal coming from the device. Make sure that contact quality is not too low, otherwise you will not get a good, representative signal to train your model. The Cortex UI shows electrodes with bad contact as dark dots. Try to press or move them a bit to get better contact quality (at least 70%).

Set-up for BCI data acquisition pipeline

As a next step, you will have to install a Node.js environment and download the examples provided on GitHub (numbers.js and raw.js). Those two examples let you capture signals from the device; to send them to Pub/Sub, all you have to do is add a few lines of code that publish a message to a Pub/Sub topic.

Add this line to the top of numbers.js:

'use strict';

Then add a function which will publish a message to a Pub/Sub topic (this uses the @google-cloud/pubsub client library, installed via npm):

const {PubSub} = require('@google-cloud/pubsub');

// Creates a client
const pubsub = new PubSub();

// Publishes a single message to the given Pub/Sub topic
async function publishMessage(topicName, data) {
  const dataBuffer = Buffer.from(data);
  const messageId = await pubsub.topic(topicName).publish(dataBuffer);
  console.log(`Message ${messageId} published.`);
}

Then find the line in numbers.js where a key-value map is created from the data coming from the Cortex API:

const output = Object.keys(averages)
  .map(k => `${k}: ${averages[k].toFixed(2)}`)
  .join(", ");

and change it so that the key-value pairs are quoted as JSON and the resulting JSON line is published to the Pub/Sub topic:

const output = Object.keys(averages)
  .map(k => `"${k}": "${averages[k].toFixed(2)}"`)
  .join(", ");

const message = `{${output}, "Timestmp": "${new Date().toISOString()}"}`;
publishMessage('my-topic', message);
console.log(message);
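For reference, if the averaged values are the band powers used for training later (alpha, betaL, betaH, gamma, theta), a published message looks roughly like this (the numbers here are made up for illustration):

{"alpha": "1.23", "betaL": "0.98", "betaH": "0.55", "gamma": "0.34", "theta": "1.10", "Timestmp": "2019-09-11T17:00:00.000Z"}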

Google provides a Dataflow template that can consume messages formatted as JSON lines from a Pub/Sub topic and store them in BigQuery.

You will have to create a topic named 'my-topic' in Pub/Sub and then follow the steps in the Dataflow template documentation to deploy the Dataflow job.
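The topic can be created in the Cloud Console, or from code with the same Pub/Sub client library; a minimal one-off sketch:

const {PubSub} = require('@google-cloud/pubsub');

async function createTopic() {
  const pubsub = new PubSub();
  // Creates the topic that numbers.js publishes to
  const [topic] = await pubsub.createTopic('my-topic');
  console.log(`Topic ${topic.name} created.`);
}

createTopic().catch(console.error);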

After that you can run "node numbers.js". It will start printing data from the Cortex API and also send it as messages to the Pub/Sub topic 'my-topic'; you should see your messages printed on the screen.

Then you can start the Dataflow template, which will automatically read data from Pub/Sub and upload it to a BigQuery table.

You can explore the data from the BCI device directly by querying the table in BigQuery.
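For example, a quick look at a few ingested rows (substitute the dataset and table that your Dataflow template writes to):

SELECT *
FROM `dbmigration.emotiv.eeg___2_copy`
LIMIT 10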

Set up for BCI training pipeline

I used the EmotivBCI user interface to practice simple mental commands such as moving an object left or right, or pushing and pulling it. The EmotivBCI UI presents a box that moves in different directions and walks you through a few exercises in which you are asked to think about a simple action, like moving left or right, while the box moves across the screen.

Data is collected from the BCI device while you practice those exercises. You can view the resulting profile and use it as an initial training data set for a Machine Learning algorithm. The UI also lets you explore training profiles visually and see how the mental patterns differ between mental activities.

The NodeJS code sends the data collected from the BCI device to Pub/Sub, from where it is moved to BigQuery. A timestamp was added to the records while I practised moving the mouse, and I used those timestamps to annotate the data and mark the periods when I practised specific mental commands such as moving the mouse left or right. This annotated data was used to train ML models to predict my mental commands and intentions.

I used the Dataprep service to create another Dataflow job that prepares the training data set. This Dataflow transforms the data and produces the training set for ML training.

It uses my timestamps to mark portions of the training data with flags that represent the mental commands I was practising at that time. Those flags let me map mental commands to game controls. For example, when I mentally command "left", the game avatar should turn left. This mapping is subjective, reflects my personal preferences for how to mentally control the game environment, and can be changed at will. The mapping is stored as an additional column in the transformed data and is used as the prediction target for the Machine Learning models.
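Conceptually, the labelling step looks like the query below: each row gets tagged with the command I was practising during that time window. The actual transformation was done in Dataprep, the window boundaries here are made up, and in my transformed data the label column ended up named dateformat_Timestmp1, which is why that name appears as the training target below.

SELECT
  alpha, betaL, betaH, gamma, theta,
  CASE
    WHEN Timestmp BETWEEN TIMESTAMP '2019-09-11 17:00:00' AND TIMESTAMP '2019-09-11 17:05:00' THEN 'left'
    WHEN Timestmp BETWEEN TIMESTAMP '2019-09-11 17:05:00' AND TIMESTAMP '2019-09-11 17:10:00' THEN 'right'
    ELSE 'neutral'
  END AS dateformat_Timestmp1
FROM `dbmigration.emotiv.eeg___2_copy`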

As the next step, I used BQML to build a multi-class classification model. Creating a model in BQML is very simple: just go to the console and run a query that creates a logistic regression model to classify your signals into the different mental commands, which become your classes. In this query, you define the model name, the model type, the target to predict, and the training data from your processed data set.

CREATE MODEL `dbmigration.emotiv.emotiv_unity` OPTIONS
  ( model_type='logistic_reg', auto_class_weights=true, input_label_cols=['dateformat_Timestmp1'], max_iterations=10 ) AS
SELECT alpha, betaL, betaH, gamma, theta, dateformat_Timestmp1 FROM `dbmigration.emotiv.eeg___2`

To explore how well your model performs before moving forward with the pipeline, you can query ML.PREDICT directly in the BigQuery editor.
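A minimal example, reusing the model and table names from above (BQML exposes the prediction as predicted_dateformat_Timestmp1, i.e. "predicted_" followed by the label column name):

SELECT predicted_dateformat_Timestmp1, alpha, betaL, betaH, gamma, theta
FROM ML.PREDICT(MODEL `dbmigration.emotiv.emotiv_unity`,
  (SELECT alpha, betaL, betaH, gamma, theta FROM `dbmigration.emotiv.eeg___2`))
LIMIT 10

BQML also provides ML.EVALUATE, which returns aggregate metrics such as precision, recall and accuracy for the model over a data set.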

AutoML Tables is an alternative if you would like to explore models not provided through the BQML API. You will have to go through a few steps, such as importing the transformed data set from BigQuery, defining the target variable that represents your mental command, and training your model.

After the model is trained and deployed, you can test it in the console and use the REST API to make predictions programmatically in real time.

Use ML models for BCI mental command identification

BQML and AutoML provide different ways to programmatically make predictions and identify mental commands in real time. For this blog, I demonstrate the use of BQML. The REST API call for AutoML can be copied from the console and used with minimal coding.

I created another copy of numbers.js which reads signals from the Emotiv Insight device in real time, and modified it to call the BQML model every second to predict mental commands such as moving the avatar left or right.

const {BigQuery} = require('@google-cloud/bigquery');

async function query(alpha, betaL, betaH, gamma, theta) {
  // Create a client
  const bigqueryClient = new BigQuery();

  // The WHERE clause never matches (EXTRACT(DAY ...) is always 1-31),
  // so only the UNION ALL row carrying the live values gets scored.
  const query = `SELECT predicted_dateformat_Timestmp1
    FROM ML.PREDICT (MODEL \`dbmigration.emotiv.emotiv_unity\`,
      (SELECT alpha, betaL, betaH, gamma, theta
       FROM \`dbmigration.emotiv.eeg___2_copy\`
       WHERE EXTRACT(DAY FROM Timestmp) = 155554084100
       UNION ALL
       SELECT ` + alpha + `, ` + betaL + `, ` + betaH + `, ` + gamma + `, ` + theta + `))`;

  const options = {
    query: query,
    // Location must match that of the dataset(s) referenced in the query.
    location: 'US',
  };

  // Run the query as a job
  const [job] = await bigqueryClient.createQueryJob(options);
  const [rows] = await job.getQueryResults();

  // The single result row holds the predicted mental command
  const prediction = rows[0].predicted_dateformat_Timestmp1;
  console.log('Call BQML ' + prediction);

  // right
  if (prediction.match('right')) {
    console.log('Call BQML right');
    // trigger the "right" key event here (see the RobotJS code below)
  }
}
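Below is a rough sketch of how the per-second loop can be wired up. The latestAverages object is a stand-in for wherever your script keeps the most recent averaged band powers from the Cortex stream (in my case, the same data that feeds the averages map in numbers.js):

// Stand-in for the most recent averaged band powers from the Cortex stream
let latestAverages = { alpha: 0, betaL: 0, betaH: 0, gamma: 0, theta: 0 };

// Call the BQML model once per second with the latest values
setInterval(() => {
  const { alpha, betaL, betaH, gamma, theta } = latestAverages;
  query(alpha, betaL, betaH, gamma, theta).catch(console.error);
}, 1000);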

I printed those responses to get feedback and visually verify the correspondence between my intentions and what the ML model predicted. Such visual feedback can be used to collect more data and correct the model when its predictions do not match personal expectations.

Try ML models for BCI control of the Unity game environment and model enhancement

Now that a preliminary model has been created, we can use it to operate a gaming environment. I chose the Unity game engine. To operate this environment with mental commands, I used the RobotJS library in my NodeJS code; it provides a robot that can automatically trigger mouse and keyboard events. Every time the model classified an intention, such as moving the avatar left/right or jumping, it triggered a specific key event that controls the Unity avatar's moves.

var robot = require("robotjs");

// Press and then release the "up" arrow key
robot.keyToggle("up", "down");
robot.keyToggle("up", "up");
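To tie the prediction to the game controls, one option is a small lookup from predicted command to key, roughly like this (the command names match the labels I trained on, and the key mapping is a personal choice):

// Map predicted mental commands to the keys the Unity game listens for
const keyForCommand = {
  left: "left",
  right: "right",
  jump: "up",
};

function sendCommand(prediction) {
  const key = keyForCommand[prediction];
  if (key) {
    robot.keyTap(key); // press and release the mapped key
  }
}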

The gaming environment can itself be a platform for training a Brain Computer Interface. Playing games with mental commands lets me collect more data for the ML models. When I see that my mental commands do not correspond to the ML predictions, I can intervene and use a simple mouse click to mark the prediction as wrong. I can also use the physical mouse to correct the avatar's movements: for example, if the model predicts "move left" but my intention is to move right, I can intercept the action with the physical mouse, which corrects the avatar's moves and at the same time annotates the BCI data with my real intention. That data can then be used to retrain the model.

While playing with Unity, I kept the data acquisition pipeline running. This way, I was collecting data while playing in the Unity environment and could use the new data to re-train the ML model. This recurrent approach let me improve the predictive power of the personal AI model over time and operate the Brain Computer Interface more precisely. It was great fun to play games and train a personal BCI at the same time.
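With BQML, retraining simply means re-running the training statement over the enlarged table, for example with CREATE OR REPLACE MODEL:

CREATE OR REPLACE MODEL `dbmigration.emotiv.emotiv_unity` OPTIONS
  ( model_type='logistic_reg', auto_class_weights=true, input_label_cols=['dateformat_Timestmp1'], max_iterations=10 ) AS
SELECT alpha, betaL, betaH, gamma, theta, dateformat_Timestmp1 FROM `dbmigration.emotiv.eeg___2`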

(Those who are interested in a real-time demo can find it here: Operate Unity gaming environment with Brain Computer Interface.)

Those who might be interested in generalizing this pipeline to other Brain Computer Interface devices should consider leveraging Google Cloud IoT Core, which provides a unified interface for integrating small devices with Google Cloud.
