What Can the Misty II Platform Do?

Anything a mobile computer with cameras, microphones, speakers, a display, and multiple extensible sensors can do.

Johnathan Ortiz-Sonnen
Misty Robotics
7 min read · Apr 14, 2020


There are a few standard options for controlling a Misty II robot.

To control Misty manually and experiment with the robot’s functionality, you can send individual HTTP requests to more than 150 endpoints. Each endpoint invokes a command that performs an action, returns data, or starts a process (move the head, move the arms, drive, start recognizing faces, take a picture, capture speech, play audio, display an image, change the LED, and so on).
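For a taste, here’s roughly what two of those requests look like in JavaScript. Endpoint paths and payload fields are written from memory of Misty’s REST API reference, so double-check them against the docs for your robot’s software version:

```javascript
// Sketch: change Misty's chest LED, then drive forward for two seconds.
// Run with Node 18+ as an ES module; replace <robot-ip> with your robot's
// IP address on the local network.
const robot = "http://<robot-ip>/api";

// Turn the chest LED green.
await fetch(`${robot}/led`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ Red: 0, Green: 255, Blue: 0 }),
});

// Drive straight ahead at 50% linear velocity for 2000 ms.
await fetch(`${robot}/drive/time`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ LinearVelocity: 50, AngularVelocity: 0, TimeMS: 2000 }),
});
```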

Then, for autonomous behavior, you can build “skills” with Misty’s JavaScript and .NET SDKs. Skills utilize the same set of commands available as HTTP endpoints, but they run directly on the robot, issuing commands and processing sensor data via the skill system built into Misty’s software. Misty can start skills on command, or automatically on boot. You can run one skill at a time, or several skills at once. A skill can share data with other skills, start and stop other skills, and broadcast events for other skills to receive. Each skill can be as simple or complex as you make it; the only limit is your programming ability and imagination.
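For a feel of what that looks like, here’s a minimal on-robot skill sketch using the JavaScript SDK. (Every JavaScript skill is paired with a JSON meta file that names it and sets its permissions; the calls below follow Misty’s JavaScript API as documented, but treat the exact signatures and asset names as things to verify on your robot.)

```javascript
// Minimal skill: light the LED, then react whenever a bump sensor fires.
misty.Debug("Skill started");
misty.ChangeLED(0, 255, 0); // chest LED to green

// Subscribe to bump-sensor events; callbacks are named "_" + the event name.
misty.RegisterEvent("BumpSensed", "BumpSensor", 250, true);

function _BumpSensed(data) {
  misty.Debug("Bump event: " + JSON.stringify(data));
  // "s_Joy.wav" is one of the robot's default system sounds; check your
  // robot's audio assets for the exact name.
  misty.PlayAudio("s_Joy.wav", 80);
}
```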

And advanced users can do even more. Misty II is not a closed system that limits developers to the capabilities exposed in its API. Rather, the Misty II platform is a collection of sensors, actuators, processors, and applications that inventive developers can extend in countless creative ways.

To see what we mean, have a look at this high-level diagram of Misty’s processing architecture:

[Diagram: Misty’s processing architecture, showing the 410 processor (Windows 10 IoT Core), the 820 processor (Android), and the robot’s Arduino-compatible backpack.]

Misty uses standard ecosystems like Android and Windows IoT Core that can execute any additional applications a developer installs — not just JavaScript and C# skills. The key point here is that, with the right kind of knowledge, you can extend the functionality of Misty’s skill system by deploying custom UWP apps to Misty’s 410 processor, custom Android apps to the 820, custom .ino sketches to the robot’s Arduino-compatible backpack, and any other apps to any other external system set up to communicate with Misty.

With this in mind, we can restate the question “What can Misty do?” as, “What can a mobile computer with cameras, microphones, speakers, a display, and multiple extensible sensors do?” And that leads to a pretty broad range of possibilities.

Here are a few examples:

Monitor the environment

Misty’s UART serial port lets you stream information into (and out of) Misty’s skill system from any external hardware you connect. This makes the robot an ideal candidate for jobs that require autonomous environmental monitoring. Here’s how you could build out this use case (a skill sketch follows the list):

• Connect one or more environmental sensors (temperature, humidity, or CO2, to list a few options) to a Misty Backpack for Arduino.
• Write an Arduino sketch to read the sensor data and transmit it via UART to Misty’s software.
• Write a skill with Misty’s .NET or JavaScript SDK that drives the robot around a predefined path, reads sensor data from the microcontroller, and writes that data to a server.
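As a hedged sketch of that last step, here’s the Misty-side half in JavaScript. It assumes the Arduino sketch prints one JSON object per line over UART; the field names are invented for illustration:

```javascript
// Listen for data the backpack writes to UART. With AddReturnProperty, the
// raw serial string shows up in the event's AdditionalResults array.
misty.AddReturnProperty("BackpackData", "SerialMessage");
misty.RegisterEvent("BackpackData", "SerialMessage", 0, true);

function _BackpackData(data) {
  // Assumes the Arduino sends lines like {"tempC":21.4,"rh":40}.
  var reading = JSON.parse(data.AdditionalResults[0].Message);
  misty.Debug("Temp: " + reading.tempC + " C, humidity: " + reading.rh + " %");
  // From here you could drive a patrol route with commands like
  // misty.DriveTime(), and push readings to a server with
  // misty.SendExternalRequest() (see the API docs for its full signature).
}
```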

Create maps and navigate with ROS

Misty’s software doesn’t use the Robot Operating System (ROS), but that doesn’t mean you can’t use ROS with your Misty II projects. There are several inventive ways to incorporate Misty’s data into ROS nodes (a rough bridge sketch follows the list). For example, you might:

• Install ROS on Misty’s 820 processor.
• Route data from the Structure Core depth sensor in Misty’s visor to a ROS mapping node to create maps.
• Use ROS nodes for path planning and navigation.
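Alternatively, if installing ROS on the 820 is more than you need, you can bridge Misty’s event streams into an existing ROS graph from any machine using rosbridge (which speaks JSON over WebSocket). The raw depth stream isn’t exposed over Misty’s public WebSocket server, so this hedged Node.js sketch forwards IMU events instead; the topic name and message type are arbitrary choices:

```javascript
// Bridge Misty's IMU event stream into ROS via rosbridge. Requires
// `npm install ws`; <robot-ip> and <ros-host> are placeholders.
const WebSocket = require("ws");

const misty = new WebSocket("ws://<robot-ip>/pubsub");
const ros = new WebSocket("ws://<ros-host>:9090"); // rosbridge's default port

misty.on("open", () => {
  // Subscribe to IMU events, throttled to one message per 100 ms.
  misty.send(JSON.stringify({
    Operation: "subscribe",
    Type: "IMU",
    DebounceMs: 100,
    EventName: "imu",
  }));
});

ros.on("open", () => {
  ros.send(JSON.stringify({
    op: "advertise",
    topic: "/misty/imu",
    type: "std_msgs/String",
  }));
});

misty.on("message", (raw) => {
  const event = JSON.parse(raw.toString());
  // Skip the registration confirmation (its message field is a string).
  if (typeof event.message !== "object" || ros.readyState !== WebSocket.OPEN) return;
  // A real pipeline would map fields into sensor_msgs/Imu; publishing the
  // raw payload as a string keeps this sketch short.
  ros.send(JSON.stringify({
    op: "publish",
    topic: "/misty/imu",
    msg: { data: JSON.stringify(event.message) },
  }));
});
```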

Create artificial neural networks (ANNs) for object recognition, activity detection, and more

Misty can detect, learn, and recognize faces out of the box. For tasks beyond that (object recognition, gesture recognition, fall detection, and more), consider building custom ANNs and deploying them to the robot. Here’s an example of how this might work (a rough sketch of the networking step follows the list):

• Develop a model in your framework of choice (TensorFlow, for example).
• Create an Android application to run the model.
• Shut down Misty’s camera service so that your own Android app can access data from the RGB camera directly.
• Use Misty’s .NET SDK to code a C# skill that communicates with the Android app and reads your ANN data via TCP/IP.
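The last step calls for a C# skill built with Misty’s .NET SDK. To show the wiring without the skill scaffolding, here’s the same pattern sketched as an external Node.js app instead; the TCP port and the shape of the detection messages are invented for illustration:

```javascript
// Consume newline-delimited JSON detections from the Android app's TCP
// server and react through Misty's REST API. Node 18+ (built-in fetch).
const net = require("net");

const socket = net.createConnection({ host: "<robot-ip>", port: 9000 });
let buffered = "";

socket.on("data", async (chunk) => {
  buffered += chunk.toString();
  let newline;
  while ((newline = buffered.indexOf("\n")) >= 0) {
    const detection = JSON.parse(buffered.slice(0, newline));
    buffered = buffered.slice(newline + 1);
    if (detection.label === "person" && detection.confidence > 0.8) {
      // Endpoint path written from memory of the API reference; verify it.
      await fetch("http://<robot-ip>/api/tts/speak", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ Text: "Hello there!" }),
      });
    }
  }
});
```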

Build external “robot applications”

If Misty’s 410 and 820 don’t offer enough processing power (or if it makes more sense to run most of your code on another device), use another computer to do the heavy lifting. The sketch after this list shows the basic pattern.

• Build an application for any device that can send HTTP requests to Misty. Use any languages, frameworks, and services you like.
• Stream sensor data to your app from Misty’s WebSocket server, and issue commands to the robot via HTTP requests. Customize your app’s logic and user interface to best suit your needs.
• Run your external robot application on the same local area network as Misty, or configure the environment to allow your app to communicate with Misty from another network.
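Here’s a minimal hedged sketch of such an application: it watches Misty’s front-center time-of-flight sensor through the WebSocket server and halts the robot when something gets close. Event and payload field names follow the docs as best I recall them, so verify them against your robot:

```javascript
// Tiny external "robot application": subscribe to time-of-flight events and
// stop the robot on close obstacles. Requires `npm install ws` and Node 18+.
const WebSocket = require("ws");
const robot = "<robot-ip>";

const ws = new WebSocket(`ws://${robot}/pubsub`);

ws.on("open", () => {
  ws.send(JSON.stringify({
    Operation: "subscribe",
    Type: "TimeOfFlight",
    DebounceMs: 100,
    EventName: "frontTof",
    // Only pass along readings from the front-center sensor.
    EventConditions: [
      { Property: "SensorPosition", Inequality: "=", Value: "Center" },
    ],
  }));
});

ws.on("message", async (raw) => {
  const event = JSON.parse(raw.toString());
  if (typeof event.message !== "object") return; // registration confirmation
  if (event.message.distanceInMeters < 0.3) {
    // Obstacle within 30 cm: halt all drive motion via the REST API.
    await fetch(`http://${robot}/api/drive/stop`, { method: "POST" });
  }
});
```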

Get creative with 3D mapping

You can use Misty’s base API to operate the robot’s Structure Core sensor, creating individual maps and tracking the robot’s position within them. These maps tend to max out at ~800–1000 sq ft with the Misty Standard (or ~1600–2000 sq ft with the Misty Enhanced). That might not be large enough for some of the bigger spaces (eldercare facilities, offices, libraries, schools) where Misty works. So, here are a few ways you can build on this base mapping functionality (a sketch of the first approach follows the list):

• Use Misty’s mapping APIs to create multiple maps with the robot’s Structure Core sensor and built-in mapping system. Then, download the 3D mesh files for each individual map to a workstation, and stitch them together to make a single, large, three-dimensional map of the space.
• Write your own Android application for Misty’s 820 processor. Use Misty’s base API to disable the SLAM service, and use the native Structure Core SDK from Occipital to incorporate the depth sensor’s data into your own app. Utilize this data in your own system for creating and using maps, or put the depth sensor to work in other unique ways.
• Or, write your own Android application that connects to the Structure Core sensor and streams the data to an external computer with more processing power, for even better map generation.
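As a hedged sketch of the first approach, the start/stop/download cycle might look like this from a workstation. The SLAM endpoint paths are written from memory of the API reference, and real mapping code also needs to wait for the sensor to acquire its pose, so treat this as an outline:

```javascript
// Map one section of a space, then pull the result down for stitching.
// Run with Node 18+ as an ES module; <robot-ip> is a placeholder.
const robot = "http://<robot-ip>/api";

await fetch(`${robot}/slam/map/start`, { method: "POST" });

// ... drive Misty around this section of the space ...

await fetch(`${robot}/slam/map/stop`, { method: "POST" });

// Download the finished map's data; repeat per section, then stitch the
// exported files together offline into one large map.
const map = await (await fetch(`${robot}/slam/map`)).json();
console.log(map.result);
```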

Plug your own sensor (or another device) into Misty’s USB port

Misty’s backpack USB port isn’t wired up to the skill system in the same way as the serial port, and the base API doesn’t yet include functionality for reading and writing data over USB in your skills. But the USB port is connected to the robot’s 410, and because the 410 runs Windows 10 IoT Core, there are other ways you can incorporate data from the USB port into your robot solutions. For example:

• Use the backpack USB port to connect a new sensor or another device to Misty — an Xbox controller, a webcam, a headset, or some other external hardware. Look for devices with USB drivers supported on Windows IoT Core, or for devices that can communicate with Misty’s 410 over an Ethernet- or serial-to-USB adapter. (Note that while the backpack port uses a USB 2.0 interface, transfer rates work at USB 1.1 speeds.)
• Build a custom Universal Windows Platform (UWP) app, and deploy your app to Misty’s 410 processor. Use this app to communicate with the new sensor and handle all the processing.
• Use Misty’s .NET SDK to code a skill that communicates with your UWP app to bring this data back into Misty’s own ecosystem of skills and APIs.

Enable inter-robot communications for multi-robot tasks and research

You can write your own application for Misty’s 410 or 820 that shares sensor and location data with other robots within range.

• Use this to research inter-robot communications, or to build solutions that involve multi-robot tasks (for example, surveillance, mapping, or inspection).
• Combine this pool of shared robot data with data from other, non-robot external devices within range to improve the effectiveness of your robot solution.

Use Misty to control large displays and other multimedia systems

You can deploy a UWP application to Misty’s 410 that sends images and sounds to an external multimedia display for large spaces and events. Incorporate this into use cases that have Misty act as a greeter, a concierge, or an entertainment provider during events. For example:

• Code Misty to recognize and greet visitors to your office, guide them to a conference room, and boot up any technology that’s required for the meeting.
• Have Misty roam around during trade shows, parties, and other events to take pictures of people and show them on a screen.
• Supplement the information desk at your event by coding Misty to read announcements over the PA system, display maps on a nearby screen when asked where a particular booth is, and perform other useful tasks.

All this is just a glimpse of the different ways a clever person (or a clever group of people) can extend Misty’s functionality well beyond the robot’s base APIs. It’s in your power to adjust, combine, and transform these suggestions, or to build entirely new solutions for your particular use case.

For more conversation like this, join us in the Community forums. We’d love to hear from you.
