The Misty II Launches Today

Get 50% off until May 31st!

The Misty II launch video

Almost four months ago at CES, we announced the Misty I Developer Edition prototype robot, the first step on our long-term journey to turn robots into our friends, coworkers, and household members.

That journey began with a hand-built, fragile robot intended for the most avid software developers and makers. These pioneers got access to early software and still-evolving hardware. Since then we’ve been hard at work refining, building, and shipping the Misty I Developer Edition, and we’ll continue to do that until the end of 2018.

Today, though, is our “Apple II” moment. The moment when we shift from the hand-built Misty I to the mass-produced, robust, designed-for-manufacturing Misty II.

Misty II has been in development for the past two years. We’ve been hard at work in partnership with Jetta, our longstanding manufacturing partner from our Sphero days. Jetta are masters at making robots that actually work. I’ve been on site with them for at least 24 months out of the last seven years and have watched them build over three million robots. Now it’s Misty’s turn.

It’s hard to describe how difficult it is to make a robot, let alone a robot that works well. One that can be made repeatably in the thousands or millions. One that is a meaningful development platform for everything from block-based programming for students… to simple APIs for software developers solving complex business problems… to deep neural network machine learning for Ph.D. candidates.

It’s hard to make a robot that has quiet brushed motors that last and draw relatively little battery power, so the robot can do more by itself for longer. It’s hard to make a robot that feels alive. (Thanks to Misty’s amazingly mobile, 3-degrees-of-freedom neck, it’s almost freaky how alive the robot can look when the head is twisting and turning so naturally.)

It’s hard to make a robot with two cell phone processors, connected by an internal LAN bridge, that cooperate seamlessly: doing self-driving and face/object perception (on the Snapdragon 820), handling millisecond- and microsecond-scale real-time input from sensors and output to motors, and running third-party developers’ skills and code (on the Snapdragon 410).

Speaking of self-driving… it’s ridiculously hard to do self-driving and other “SLAM” capabilities well. Most robot companies use LIDAR, a 2D system whose scan plane typically sits about 2–5 inches off the ground. It’ll detect the bottom five inches of your office or house, but certainly not anything above that. We’re using a 3D depth sensor with stereo infrared cameras (from Occipital), which provides a beautiful, rich three-dimensional map of the environment, along with sophisticated embedded (not cloud-based) computation. So not only does the robot know what room it’s in, but the 3D mapping of objects (say, an oven) can actually be used to support software skills for those objects. That same self-driving capability is what lets the robot find its wireless resonance charging station when it runs low on juice, which is no small accomplishment either.

It’s hard to do face recognition. Ask Apple. While our face recognition is not as powerful as Apple’s (they charge $1K just for the processing power to do one person’s face), it is powerful enough to train on the family, an office of 50 people, or a teacher’s class of 30 students, and then do roll call in the morning. It’s powerful enough that developers can access the Qualcomm Snapdragon Neural Processing Engine for AI to train the robot on particular objects and then recognize them (your refrigerator, or oven, or dog, or fire extinguisher).
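
To make that concrete, here’s a minimal sketch of what a roll-call skill could look like in JavaScript. The misty.* calls, event names, and data fields below are illustrative assumptions about a robot skill API, not documented Misty interfaces:

```javascript
// Hypothetical morning roll-call skill. All misty.* calls, event names,
// and data fields are illustrative guesses, not a verbatim API.
const expected = ["Ada", "Grace", "Alan"]; // faces trained beforehand
const seen = new Set();

misty.StartFaceRecognition(); // assumed: begin streaming recognition events
misty.RegisterEvent("FaceRecognition", "FaceRecognition", 1000, true);

// Assumed callback convention: _<EventName>(eventData)
function _FaceRecognition(data) {
  const name = data.personName; // assumed field name
  if (name && !seen.has(name)) {
    seen.add(name);
    misty.Speak("Good morning, " + name + "!"); // assumed text-to-speech call
  }
  const missing = expected.filter((n) => !seen.has(n));
  misty.Debug("Still waiting on: " + (missing.join(", ") || "no one"));
}
```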

And it’s really hard to make a robot with personality and emotion. We have Ph.D.s at the leading edge of Human-Robot Interaction building our emotion engine. They’ve invented (and we have patents pending for) a system for personality and emotion that goes way beyond the state of the art. Most robot companies that attempt to include emotion or personality in their product use, essentially, complex decision trees: when the robot gets bored, say, or angry, it makes the exact same sound and motion every time. Our emotion engine is instead designed in a multi-dimensional space of affect and arousal, and it’s built around OCEAN, the five-factor model of personality traits. Our robot responds to stimuli in the affect/arousal space, with responses determined and scaled by the skills it’s running and its environment. So a given stimulus toward “angry” will cause the robot’s responses to differ each time, because its beginning state is always different.
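
For intuition, here’s a toy sketch of that idea: emotional state as a point in a two-dimensional affect/arousal space, with responses shaped by OCEAN trait values. Every name and number here is made up for illustration; the actual (patent-pending) engine is far more sophisticated:

```javascript
// Toy model: a 2-D affect/arousal state, modulated by OCEAN traits.
const personality = { O: 0.7, C: 0.5, E: 0.9, A: 0.6, N: 0.3 }; // 0..1 traits
let state = { affect: 0.0, arousal: 0.0 }; // current position, each in -1..1

function clamp(x) { return Math.max(-1, Math.min(1, x)); }

function applyStimulus(stimulus) {
  // Illustrative trait effects: neuroticism amplifies negative affect,
  // extraversion amplifies arousal swings.
  const negBoost = stimulus.affect < 0 ? 0.5 + personality.N : 1;
  state.affect = clamp(state.affect + stimulus.affect * negBoost);
  state.arousal = clamp(state.arousal + stimulus.arousal * (0.5 + personality.E));
}

// Because the response depends on the *current* position in the space,
// the same stimulus produces different behavior on different occasions.
function respond() {
  if (state.affect < -0.5) {
    return state.arousal > 0 ? "sharp beep, quick head turn"
                             : "low grumble, slow head droop";
  }
  return state.arousal > 0.5 ? "excited chirp" : "calm idle motion";
}

applyStimulus({ affect: -0.4, arousal: 0.6 }); // e.g., bumped by a chair
console.log(respond());
```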

Hi. Come check me out at mistyrobotics.com

It’s hard to build a robot that can sense the direction from which sound is coming, properly detect the sound, and pass that off to a software developer for use with third-party voice assistant services.
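
For the curious: sound direction is classically estimated by comparing when a sound arrives at each microphone in an array. Here’s a generic two-microphone sketch of that cross-correlation technique, as an illustration of the general approach rather than Misty’s implementation:

```javascript
// Estimate time-difference-of-arrival (TDOA) between two microphone
// signals by finding the lag that maximizes their cross-correlation,
// then convert that lag to an angle of arrival (far-field model).
function estimateDirection(left, right, sampleRate, micSpacingMeters) {
  const speedOfSound = 343; // m/s at room temperature
  // Only lags consistent with the mic spacing are physically possible.
  const maxLag = Math.ceil((micSpacingMeters / speedOfSound) * sampleRate);
  let bestLag = 0;
  let bestScore = -Infinity;
  for (let lag = -maxLag; lag <= maxLag; lag++) {
    let score = 0;
    for (let i = 0; i < left.length; i++) {
      const j = i + lag;
      if (j >= 0 && j < right.length) score += left[i] * right[j];
    }
    if (score > bestScore) { bestScore = score; bestLag = lag; }
  }
  // A lag of bestLag samples corresponds to a delay in seconds, which
  // maps to an angle via sin(theta) = delay * c / spacing.
  const delaySec = bestLag / sampleRate;
  const sinTheta = Math.max(-1, Math.min(1, (delaySec * speedOfSound) / micSpacingMeters));
  return (Math.asin(sinTheta) * 180) / Math.PI; // degrees off-center
}

// Usage: estimateDirection(leftSamples, rightSamples, 44100, 0.1)
```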

It’s hard to build a robot that has all this and is hardware-extensible. That means any maker with a big idea who is struggling to build their own robot no longer has to struggle. We’ve built the robot; they can adapt the arms, add a backpack, a trailer, headgear, whatever.

And most significantly, it’s hard to make a robot this sophisticated easy to program for regular software developers! From the beginning, though, we wanted all of this advanced capability to be outrageously easy for the average web, mobile, or enterprise developer, or for the STEM student, to access. Our mantra has been:

“If you could program a robot in 30 minutes to do something meaningful, would you?”

The developers we’ve spoken with have said “Hell, yes!”

We’ve made our programming system in three layers, from “middle school to Ph.D.”

Layer 1 is all about the beginning student (or the prototyping developer). Using Blockly, Google’s visual programming system, beginning developers can choreograph their robot with all of its basic functions. We also provide an “API Explorer” that similarly lets developers test Misty’s behavior and access the live data streams she sends over WebSocket connections.
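
As a rough illustration, tapping one of those live WebSocket streams from Node.js could look something like the sketch below. The endpoint path, subscription message format, and event type are assumptions for illustration; check the API Explorer for the actual protocol:

```javascript
// Sketch of subscribing to a live robot data stream over WebSocket.
// Endpoint, message shape, and event type are assumed, not documented.
const WebSocket = require("ws"); // npm install ws

const ws = new WebSocket("ws://<robot-ip>/pubsub"); // assumed endpoint

ws.on("open", () => {
  // Assumed subscription message: request time-of-flight sensor data.
  ws.send(JSON.stringify({
    Operation: "subscribe",
    Type: "TimeOfFlight",
    EventName: "tof-watch",
    DebounceMs: 250,
  }));
});

ws.on("message", (raw) => {
  const msg = JSON.parse(raw.toString());
  console.log("live sensor reading:", msg); // inspect the stream
});
```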

Layer 2 is built for web, mobile, and enterprise developers, who can use JavaScript or Python (other languages will be supported for Misty’s API as well) to easily dive into the robot’s sophisticated capabilities without getting mired in the challenges of deep robot programming.
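
To give a flavor of Layer 2, here’s what a 30-second “hello world” might look like: one HTTP call that drives the robot forward. The /api/drive endpoint and its parameters are hypothetical stand-ins, not Misty’s documented API:

```javascript
// Hypothetical Layer 2 example: drive the robot with one HTTP call.
// Endpoint and parameter names are illustrative assumptions.
// Requires Node.js 18+ for the built-in global fetch.
const robot = "http://<robot-ip>"; // your robot's address on the LAN

async function nudgeForward() {
  const res = await fetch(`${robot}/api/drive`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ linearVelocity: 20, angularVelocity: 0 }),
  });
  console.log("drive command accepted:", res.ok);
}

nudgeForward();
```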

Layer 3 plumbs the depths all the way down into the world of the Ph.D., where advanced software developers can build machine learning models, explore the edges of human-robot interaction, or even (eventually) switch out some of the core parts of the robot’s capabilities.

And it just keeps getting better. The Misty II is made for hackers and makers, too. We’ve given it USB and serial port expandability via a nifty “backpack” on its back. We’ve even built an Arduino version of this backpack: just magnetically snap it on, and you have access to hundreds of Arduino shields. We’ve designed the one-degree-of-freedom arms to be easily detachable, and with published CAD files you can print your own arms that carry something specific, or that even have a laser or video projector at the end. A trailer hitch enables the Misty II to pull lightweight payloads, and a magnetically attached earpiece adds a bit of style. Now your robot can do hundreds of other physical things!

When robot people read about these capabilities, most of them assume Misty carries a pretty high cost. “That robot must cost five to ten thousand dollars!” Not so. We’ve put seven years of robot-building knowledge into creating the perfect blend of advanced capability and performance in an affordable package.

I’ve heard some people say, “Surely there have to be cheaper robots that are easy to program and do all this stuff!” Or, “I could build all of this myself for $800!”

When we set out to build this robot, we searched and searched. We found many STEM/education robots geared toward learning programming, sometimes with a bit of flexibility on the mechanical side; on the mid to high end, examples include Lego Mindstorms, Makeblock, Meccano, and the Alpha 1S. They teach coding, but you can’t really use them as a professional development platform to do useful things in a business, or to handle autonomous tasks at home.

If you want more advanced capabilities, you can buy a Roomba that does its tasks well, but isn’t highly flexible. You can also buy robots that can start to do useful, flexible tasks, but their price points are well out of reach of consumers: in the tens of thousands of dollars for the PR2, Baxter, Pepper, etc. They also take weeks to learn how to use — even if you’re an expert.

Then you have DIY robots, where you’re really on your own. You have to buy a mobile base; select your processor, sensors, and other components; and then integrate it all. The closest thing to a complete solution, largely for academic purposes, is the TurtleBot 3 at $1,400–$1,800, depending on the version. Integrating components is incredibly difficult, though, and just getting something that can drive around a home and avoid all of the crazy obstacles there without getting stuck will take many months. And in the end, the software skills you make for a DIY robot are pretty much one-off creations that can’t be shared with others. Not to mention you haven’t even gotten to useful tasks, because you’re still trying to figure out how to keep the robot from getting stuck on your socks.

We recently talked with a FIRST Robotics regional director who told us, “Many of our team leaders have students who are looking for more. They’ve done all they can with the Lego EV3 robot, and they can’t find anything else.” There was real excitement from this person, someone who knows the STEM student market well.

Where we’re at

For Apple, the Apple II marked the turning point from startup to world changer. Compared to other products of the time, it was well built. Affordable. Super easy to program. It became the standard in schools, small businesses, and offices everywhere.

More importantly, it sparked one of the most significant revolutions of our time. Inventors by the thousands were set free by having access to an affordable, easily programmable personal computer for the first time. They invented hundreds of uses for “regular users”: spreadsheets, databases, games, and, eventually, email, social media, and more.

Announcing the Apple II. By Apple Computer Inc., Cupertino, CA. [Public domain], via Wikimedia Commons

This is Misty Robotics’ Apple II moment. The moment we give you — regular software developers, students, and makers — the ability to access an incredibly advanced robot and do something with it in 30 minutes (and then do really powerful things with it over time). This is the well-built, affordable, and super easy to program robot that will change the world. The “killer app” for robots is waiting for you to invent it.

And this awesome robot will never be more affordable than it is today. At 50% off, and already priced below comparable robots, it’s a great deal.

We’d love to have you join this journey with us. To teach us. To code with us. To hack with us and make with us. I’d be honored and thrilled if you would.

To get yours today, visit www.mistyrobotics.com.