Visualizing an iOS device in Blender Through Quantum Entanglement

I might have the definition of “quantum entanglement” wrong.

The task was straightforward: get the device’s current orientation. Easy.

UIDevice.current.orientation

Next.

Wait. That didn’t work. Oh, you need to call beginGeneratingDeviceOrientationNotifications() first. Okay. Next.

Huh, that didn’t work either?

That's when I realized these notifications aren't generated while the device is rotation locked. My camera app needs to know the orientation all the time, so I'd have to calculate it directly from the device's accelerometer. Uh oh.

My writings, while technical, tend to be high level and leave out the implementation details. Disappointment has been noted. This article goes in depth.

Getting Core Motion up and running isn't actually that difficult. A simple import CoreMotion, after which you create a motion manager with CMMotionManager(). Then start receiving values by passing a closure and a queue to the motion manager's startDeviceMotionUpdates.
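Stitched together, that setup looks something like this. This is a minimal sketch; the 30 Hz update interval and printing the gravity vector are my choices, not from the original:

```swift
import CoreMotion

let motionManager = CMMotionManager()
motionManager.deviceMotionUpdateInterval = 1.0 / 30.0  // 30 Hz is an arbitrary choice

// Pass a queue and a closure; the closure fires once per motion update.
motionManager.startDeviceMotionUpdates(to: OperationQueue.main) { motion, error in
    guard let motion = motion else { return }
    // gravity is the accelerometer-derived vector we'll later use for orientation
    print(motion.gravity.x, motion.gravity.y, motion.gravity.z)
}
```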

Simple enough.

Okay so now that we’ve got these values, what do we do with them? That’s the tougher part. If we print everything to console then we’re quickly overrun with data. I figured it would be better to have these values visible on screen.

But wait, what if the values were shown on a graph?

Forget the graph! Blender is open source, I bet it’s easy to extend. What if we visualized the values with that?

After this thought process I went over to Blender.org and downloaded the latest version. Don't worry, the API is pretty stable; the upcoming code should work on whichever version you have.

Once I had Blender downloaded I wasted no time Googling for “iPhone 3D model”. Clicked on the first result, and it even came with a .blend file. Fantastic.

So I opened that up, and Blender launched with a beautiful iPhone 6 model in the middle. Excellent! So let’s get this thing quantum entangled with our real iPhone.

I have no idea what most of this screen means.

I’d never used Blender before, so what followed next was a bunch of Googling and trying to figure out the right search terms to get just enough knowledge required for doing what I wanted.

Eventually I figured out the next step was to use a Text Editor panel. So I changed the Timeline panel at the bottom into one, by clicking on the little clock icon at the bottom left of the Blender window.

Once there, clicking the little + button at the bottom center makes a new text file. If you're following along, name it Motion Server.

Blender plugins are written in Python. The first thing to do in a new programming environment is to print something to the console. So typing in print("hi") and clicking on Run Script is how that’s done.

You should see that text come up... nowhere. I even checked the macOS Console app. Nothing there. Okay, so let's change the panel on the bottom right into a Python Console.

Fresh console.

Now we could just type print("hi") in there, but we want to use what we wrote earlier, so those keystrokes were not in vain. That's done by copying and pasting this into the console: exec(compile(bpy.data.texts['Motion Server'].as_string(), 'Motion Server', 'exec'))

Pressing enter runs that snippet, then displays “hi” in the console. Success!

Blender isn't much of a code editor though, so let's use our favorite external editor. We can call an external file by replacing the print("hi") with the following code:
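The text block needs only a small amount of glue: resolve server.py relative to the saved .blend file and exec it. This is a common Blender pattern, and it assumes the .blend file has been saved so that bpy.data.filepath is set:

```python
import bpy
import os

# Run server.py from the same folder as this .blend file
filepath = os.path.join(os.path.dirname(bpy.data.filepath), "server.py")
exec(compile(open(filepath).read(), "server.py", "exec"))
```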

The next step is making a new server.py file in the same folder as the .blend file. That's where our real code will live. Now we can open it in our editor of choice, be it Atom, Sublime, or Word 2007.

I’m no Python wiz, so I had to Google everything from how to make arrays to how to parse JSON. Eventually, after enough copy and paste, I got a server running.

We're using Python's bindings to C socket code, so it's actually scarily low level. We bind a TCP socket listening on all interfaces, then loop: accept new connections and call select to check for readable ones. All of this happens in a new thread so we don't block Blender's main thread; block that, and Blender just locks up, which is no fun. What's nice is that re-running the script overrides our global functions, so we can update the functionality without dropping clients. Makes for fast iteration.
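The original server embed isn't reproduced here, but the description above maps onto a sketch like this. The port, the newline-delimited JSON framing, and the body of receivedMotionData are assumptions; inside Blender, the callback would drive the model, e.g. set bpy.data.objects['iPhone'].rotation_quaternion:

```python
import json
import select
import socket
import threading

def receivedMotionData(data):
    # In Blender this would reference the current Scene's iPhone object, e.g.:
    #   bpy.data.objects['iPhone'].rotation_quaternion = (
    #       data['w'], data['x'], data['y'], data['z'])
    print("motion:", data)

class MotionServer:
    def __init__(self, host="0.0.0.0", port=9845, callback=receivedMotionData):
        self.callback = callback
        self.clients = []
        self.running = True
        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server.bind((host, port))
        self.server.listen(1)
        self.port = self.server.getsockname()[1]

    def serve(self):
        # Loop: accept new connections and use select to find readable ones.
        while self.running:
            readable, _, _ = select.select([self.server] + self.clients, [], [], 0.1)
            for sock in readable:
                if sock is self.server:
                    client, _ = self.server.accept()
                    self.clients.append(client)
                    continue
                data = sock.recv(4096)
                if not data:
                    self.clients.remove(sock)
                    sock.close()
                    continue
                # Each message is one line of JSON.
                for line in data.decode("utf-8").splitlines():
                    if line.strip():
                        self.callback(json.loads(line))

    def start(self):
        # Run in a background thread so Blender's main thread stays responsive.
        thread = threading.Thread(target=self.serve, daemon=True)
        thread.start()
        return thread
```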

Once a connection is readable, the data is read, parsed as JSON, and the resulting object is handed to receivedMotionData. That function references the current Scene's iPhone object. Let's rename our model using the Outliner panel in the top right so the code works correctly.

Cube isn’t a cube, let’s rename it.

Find the Cube object, right-click it, and choose Rename. Rename it to iPhone. Now let's look at server.py.

It looks like a lot of code at first glance.

Put the above code into server.py and then press up in the Python Console to recall the last exec statement. Running that, the console should now say “Starting server”. Now let’s run a quick test. Make a new file called client.py and fill it with the following code:
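Since the client embed isn't reproduced here either, a sketch of client.py might look like this. The host, port, and quaternion field names are assumptions that have to match whatever server.py listens for; the quaternion (0, 1, 0, 0) is a 180-degree rotation about the X axis:

```python
import json
import socket

def send_quaternion(host, port, w, x, y, z):
    """Send one rotation quaternion to the Blender server as a line of JSON."""
    payload = json.dumps({"w": w, "x": x, "y": y, "z": z}) + "\n"
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload.encode("utf-8"))

if __name__ == "__main__":
    # 180 degrees about the X axis: flips the model upside down.
    send_quaternion("127.0.0.1", 9845, 0.0, 1.0, 0.0, 0.0)
```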

If you’re using Sublime, you can run it from the Tools -> Build menu item. Otherwise run it from a Terminal with python /path/to/client.py

Looking at Blender, you should see the iPhone not change at all. This is because the script above sets the iPhone's rotation using quaternion values, but the model is currently using Euler angles for rotation. Let's correct that. Change the Python Console panel to Properties, then click the orange cube icon at the top of that panel. In the middle, under Transform, click where it says XYZ Euler and select Quaternion. Now try running client.py again.
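If you'd rather skip the clicking, the same change can presumably be made from the Python Console in one line (this assumes the object has already been renamed to iPhone):

```python
bpy.data.objects['iPhone'].rotation_mode = 'QUATERNION'
```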

This is what success looks like.

You should see the iPhone instantly flip upside down. Don’t be alarmed, this is what we want. Now, without further ado, let’s get this model reflecting the rotation of your real iPhone.

We’ll need to send motion data from the iPhone to the computer running Blender. Thankfully we don’t need to go down to the raw C socket level in Swift, because Foundation has an abstraction.

You can drop the following code into a new iOS project to replace the default ViewController. Make sure to replace the host variable with your computer’s local IP address.
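The original embed isn't reproduced here, so this is a sketch of what such a ViewController could look like. The MotionData field names, the port, and the update interval are assumptions that must match the Blender-side script; the stream pair and Core Motion calls are standard Foundation and CoreMotion API:

```swift
import UIKit
import CoreMotion

// Hypothetical payload shape; field names must match what server.py parses.
struct MotionData: Codable {
    var w: Double, x: Double, y: Double, z: Double
}

class ViewController: UIViewController, StreamDelegate {
    let host = "192.168.1.10"  // replace with your computer's local IP address
    let port = 9845            // must match the server's port
    let motionManager = CMMotionManager()
    var inputStream: InputStream?
    var outputStream: OutputStream?

    override func viewDidLoad() {
        super.viewDidLoad()

        // Foundation's abstraction over the raw C socket pair.
        Stream.getStreamsToHost(withName: host, port: port,
                                inputStream: &inputStream,
                                outputStream: &outputStream)
        inputStream?.delegate = self
        inputStream?.schedule(in: .main, forMode: .common)
        inputStream?.open()
        outputStream?.open()

        motionManager.deviceMotionUpdateInterval = 1.0 / 30.0
        motionManager.startDeviceMotionUpdates(to: .main) { [weak self] motion, _ in
            guard let self = self, let q = motion?.attitude.quaternion else { return }
            let data = MotionData(w: q.w, x: q.x, y: q.y, z: q.z)
            guard var json = try? JSONEncoder().encode(data) else { return }
            json.append(0x0A)  // newline-delimit each JSON message
            json.withUnsafeBytes { buffer in
                guard let base = buffer.bindMemory(to: UInt8.self).baseAddress else { return }
                _ = self.outputStream?.write(base, maxLength: json.count)
            }
        }
    }

    // Read and discard anything the host sends back.
    func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
        if eventCode == .hasBytesAvailable, let input = aStream as? InputStream {
            var buffer = [UInt8](repeating: 0, count: 1024)
            _ = input.read(&buffer, maxLength: buffer.count)
        }
    }
}
```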

The code is pretty straightforward. It opens up a socket to the host, then for each motion update it creates a MotionData value, sets the properties on it, encodes it into JSON and sends it to the script running in Blender. It reads any data the host sends and discards it.

You should now have a fully functioning visualization. Congratulations.

It works!

So how did I end up getting the orientation information from the motion manager? That's left as an exercise for the reader. Just kidding, that code is here.

Like what you read? Give John Coates a round of applause.