Emoto: Software Overview and Instructions

Gautam Bose
6 min read · Jul 16, 2019


Build this ‘sick’ emoto controlling application!

So you want to build Emoto, and actually have it move.

These instructions assume you have a familiarity with Python, JavaScript, very basic iOS development (i.e. being able to open Xcode and build an app), and command-line interfaces. They will show you how to create ‘actions’ as demonstrated in the concept video.

Introduction

The software we will publish for Emoto is a prototype system. It is the system we used internally to film the concept video, and is therefore very rough and alpha in areas. It is also one part of a greater software system that was originally planned for Emoto, which is detailed here.

Currently, the only way to manipulate Emoto is direct manipulation, i.e. all movements are directly tied to inputs on the Controller App. All the actions seen in the video are puppeteered with a Controller App in real time by us, usually Lucas or Marisa.

1. Raspberry Pi Setup

The first thing required to get Emoto moving is to set up the Raspberry Pi Zero. There are a couple of preliminary setup steps, common to any Raspberry Pi, that are linked below.

Flash Raspbian

Download Raspbian Stretch Lite here. This is the version of Raspbian that we tested on.

Install it on your Raspberry Pi’s SD card according to these instructions.
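If you’d rather flash from the command line instead of a GUI tool, a typical macOS invocation looks something like this (the .img filename and disk number are illustrative; double-check the disk identifier with diskutil list first, since dd will happily overwrite the wrong disk):

diskutil unmountDisk /dev/disk2
sudo dd if=raspbian-stretch-lite.img of=/dev/rdisk2 bs=1m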

Wi-Fi and Networking Setup

After installing Raspbian, plug your Pi into a monitor, keyboard, and power. You’ll first need to set up the Pi on your network.

Set up the Raspberry Pi on your home Wi-Fi network according to these instructions.
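If you prefer to configure it by hand, on Raspbian Stretch this boils down to appending a network block to /etc/wpa_supplicant/wpa_supplicant.conf (substitute your own network name and password):

network={
    ssid="YourNetworkName"
    psk="YourNetworkPassword"
}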

Additionally, enable SSH and I2C:

  1. Enter sudo raspi-config in the terminal
  2. Select Interfacing Options
  3. Navigate to and select SSH
  4. Choose Yes
  5. Select Ok
  6. Select Interfacing Options again
  7. Navigate to and select I2C
  8. Choose Yes
  9. Select Ok
  10. Choose Finish
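Alternatively, if you’d rather script these steps, recent versions of raspi-config have a non-interactive mode that should accomplish the same thing (0 means “enable” here):

sudo raspi-config nonint do_ssh 0
sudo raspi-config nonint do_i2c 0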

Emoto Instructions

Install pip for Python 3. Raspbian Stretch already includes a compatible version of Python 3.

sudo apt-get install python3-pip -y

Clone the EmotoKit repository:

git clone https://github.com/emotoConcept/EmotoKit.git

Move into the py_zero_lib directory and install the required Python packages. This is where all the driver code for the Emoto body lives, as well as all the code that interfaces with the servo hat, the eyes, and the Emoto control app.

cd EmotoKit/piServer/py_zero_lib/
pip3 install -r requirements.txt

Test-run the main script (make sure the Servo Hat is plugged into the Pi):

python3 PositionControl.py

If there are no errors, then this section of the setup is complete! Otherwise, refer to our troubleshooting guide. You can now unplug the Pi and continue assembling the hardware.

2. Build the Controller Application

The controller application requires Expo to build. To install Expo, follow the directions here.

*ahem* this step is one of the most ‘alpha’ parts of the software

In order for the controller application to communicate with the Raspberry Pi inside Emoto, the app needs to know the Pi’s IP address.

To find it, start up Emoto and give the Raspberry Pi ~20 seconds to boot and join your network, then look for its IP address using a network scanner. While making our video we relied on the free version of LanScan, but there are several good alternatives available, such as Advanced IP Scanner.

Look for the device with vendor “Raspberry Pi Foundation”; unless you have other Pis currently on your network, that will be your device. Don’t worry if you can’t see the hostname.

Copy that IP address down; you will need it for several things later on (such as SSHing into the Pi when it is tucked into the base of the robot).

Open this JavaScript file in a text editor of your choice:

/EmotoKit/control.poser.app/components/AnimationList.js

Edit the variable at the top of the constructor function in this file to contain your Raspberry Pi’s IP address as a string.
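The edit looks roughly like this. Note that this is only a sketch: the actual variable name in AnimationList.js may differ from the piAddress used here, and the render body is stubbed out.

// components/AnimationList.js (sketch; variable name is illustrative)
import React from 'react';
import { View } from 'react-native';

export default class AnimationList extends React.Component {
  constructor(props) {
    super(props);
    // Replace with your Raspberry Pi's IP address, kept as a string
    this.piAddress = '192.168.1.42';
  }

  render() {
    // The real component renders the animation list; stubbed for illustration
    return <View />;
  }
}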

Most routers have an option to configure a static IP for local devices; this is a good idea, as it saves you from having to change this string each time the Pi’s IP changes. Save and exit.

Now you’re ready to build the app.

cd /EmotoKit/control.poser.app/
npm install
expo start

From there, follow the Expo instructions to build the app onto the Expo client on either an iOS or Android device, or use a simulator. We found that multitouch input enabled more dynamic animations, so using a real phone or tablet is preferred.

If the app loads with no errors and shows a white screen, you’re ready to move on to the next step. Don’t be alarmed by the blank screen; the app will only update once it’s connected to the Emoto body, which we will configure later.

3. Eyes Setup

As with the controller app, the eyes require you to input the IP address of the Raspberry Pi. To do this, open the file:

EmotoKit/VideoEyes/App.js

Again, input the Pi’s IP right at the top of the constructor.

What to look for in the code
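The shape of the edit is the same as in the controller app. Again, a sketch only: the real variable name in App.js may differ, and the render body is a stand-in.

// VideoEyes/App.js (sketch; variable name is illustrative)
import React from 'react';
import { View } from 'react-native';

export default class App extends React.Component {
  constructor(props) {
    super(props);
    // Replace with your Raspberry Pi's IP address, as a string
    this.piAddress = '192.168.1.42';
  }

  render() {
    // The real app renders the eye videos; stubbed for illustration
    return <View />;
  }
}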

After that, install the npm modules:

cd EmotoKit/VideoEyes/
npm install

Then browse to EmotoKit/ios in Finder and open VideoEyes.xcodeproj in the latest version of Xcode (10.2.1 at the time of writing).

Configure provisioning, and then build and run.

If everything builds successfully, you should see Emoto’s eyes blinking up at you from the phone!

4. Putting it all Together

The final step in running Emoto is to get these three systems working together. The first thing you should do is run the PositionControl server on the Pi Zero, which starts with SSHing into the Pi from your computer.

ssh pi@<Your Pi's IP Here>
# when prompted, enter the default pi password: raspberry
cd EmotoKit/piServer/py_zero_lib
python3 PositionControl.py

When that script is running, start up the VideoEyes application on the device that is going in Emoto. You should see a connection confirmation in the Raspberry Pi’s console. If you don’t, re-check the IP setup steps and Wi-Fi connections.

Then start up the controller application. You will see a list of preloaded eye animations. Pick one and you will be brought to the remote-control screen. From there you will be able to move and animate the robot.

The two controls are an XY slider for the base and main arm, and a second slider on the left used for the phone rotation:

Gifs coming soon!

The recording buttons allow you to save and playback animations of the robot body paired with eyes sequences.

Gifs coming soon!

You can now move and pose Emoto as you please!

5. Epilogue (Advanced)

Adding more personal eye sequences is possible, if a little difficult. The eyes were hand-animated in After Effects and are played back as videos. If you want to display your own content on the screen, you can create your own sequences of videos with near-seamless transitions between them in the VideoEyes app as follows:

  1. Load videos into the EmotoKit/VideoEyes/assets/ folder in a structure that works for you (you can see our samples)
  2. Create a new JS object in the file EmotoKit/VideoEyes/assets/sequences.js as follows (the list of animation elements can be as long as you want)
My_New_Animation = {
  name: 'Custom Animation',
  sequence: [
    {anim: require('../assets/My Video1.mp4')},
    {anim: require('../assets/My Video2.mp4')}
  ]
}

Lastly, append the new JS object you’ve created to the ‘sequences’ array at the bottom of the file:

sequences = [My_New_Animation, opening, ...]

Reload the VideoEyes and controller apps (shake the device until the dev menu comes up, then press Reload) and your new sequence will appear in the controller. Now you will be able to record custom movements for that sequence.
