More Dancing Robots — Thoughts and Some Responses

James J. Ward
Jan 4, 2021

The buzz over the “dancing” robots hasn’t abated over the last few days, which shows just how many people either loved the video or, like me, didn’t. What’s even more interesting is that so many people have taken to writing about what they saw, rather than simply moving on to the next big meme. Good conversation is all about dialogue, rather than monologue, and so answering some of those responses to my original post seems fair. Doing so, of course, requires that I dive into the comments section — what could possibly go wrong?

I can do what with what??

We Interrupt this Programming…

Quite a few people pointed out that I wasn’t accurate about the nature of the programming and engineering feat in the video. They have a point — I should clarify that when I say the intricacy of the movement is a result of programming and engineering, I don’t mean that the movements themselves were expressly set out for performance. Although it’s possible that a team of programmers would take the time to code, line by line, every sequence of the routine, ML systems allow for a far more hands-off approach to this kind of robotic activity. In other words, the engineers and programmers set out the broad strokes of what needs to be done, and the processors and algorithms that drive the movements do the rest. And there’s no denying it’s a technical marvel.
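
To make that division of labor concrete, here’s a minimal, hypothetical sketch in Python. None of these names (Move, MotionController) come from Boston Dynamics’ actual software; the point is only the shape of the relationship: humans author the high-level routine, and a lower-level control layer (often trained or tuned with ML) works out the actual motion.

```python
# Hypothetical illustration only -- not Boston Dynamics' actual API.
# Humans write the high-level routine; the controller fills in the rest.

from dataclasses import dataclass

@dataclass
class Move:
    name: str          # e.g. "twist", "shuffle-step"
    duration_s: float  # how long the move should last

class MotionController:
    """Stand-in for the learned/optimized layer that turns a named
    move into concrete joint trajectories, balance corrections, etc."""
    def execute(self, move: Move) -> None:
        # In a real system this would solve for torques and footfalls
        # at hundreds of Hz; here we just narrate it.
        print(f"executing '{move.name}' for {move.duration_s}s "
              f"(controller handles balance, timing, joint targets)")

# The part humans actually author: the broad strokes.
routine = [Move("twist", 2.0), Move("shuffle-step", 1.5), Move("arm-wave", 2.5)]

controller = MotionController()
for move in routine:
    controller.execute(move)  # the machine decides *how*, never *whether*
```

Notice that nothing in this sketch decides whether to dance; that decision lives entirely in the human-authored routine, which is the point the next paragraph takes up.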

But while I can, and should, be more precise in the language I use here, the “programming v. ML system” point is a distinction without a difference. Irrespective of the means by which the robot moves, the ends for which it moves remain the same: an external source directed that it should do so. Think of the difference this way: even if the robots were capable of carrying out a dance entirely of their own design (a process that would likely combine supervised and unsupervised learning), they still wouldn’t have the means to decide to dance, and that’s fundamentally different from what the humans involved do. As they teach you in the first year of law school, the most important question is always “who decides?”

There’s another important facet to the “AI/ML” response, which is that we’re taking the product of human activity and treating it as though it were a natural object. Humans do this all the time; it’s called reification, and it’s highly problematic because it shields human responsibility from view and, therefore, from criticism or analysis. Consider what happens with AI decisionmaking — humans make choices (what data should be included in a set, how a program should identify associations within the data, and what the outcomes of those associations mean), but when the algorithm produces outcomes that are biased, prejudicial, harmful, or even just unhelpful, the humans disappear from view and it is just “the algorithm” or “the AI” deciding. How can a robot be biased? They’re totally impartial! True — and in that way, railroads are impartial too: if you set the tracks in a certain direction, that’s always the way they’ll go. The point is to interrogate who is laying down the tracks, and why, and hiding behind reified “AI,” “ML,” or “algorithms” isn’t going to help us do that.
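
To see how the humans disappear, here’s a deliberately tiny Python sketch, with entirely made-up data and thresholds. Every “algorithmic” outcome in it traces back to a human choice: what counts as the data, which associations the program draws, and what those associations are taken to mean.

```python
# Toy illustration of reification: every "algorithmic" outcome below
# traces back to a human choice. Hypothetical data and thresholds.

# Human choice #1: which historical records count as "the data".
# (Here, past approvals that already skew toward one group.)
history = [
    {"group": "A", "income": 60, "approved": True},
    {"group": "A", "income": 40, "approved": True},
    {"group": "B", "income": 60, "approved": False},
    {"group": "B", "income": 80, "approved": False},
]

# Human choice #2: which associations the model is allowed to learn.
# Using past approval rates per group bakes the old skew in.
approval_rate = {}
for g in {r["group"] for r in history}:
    rows = [r for r in history if r["group"] == g]
    approval_rate[g] = sum(r["approved"] for r in rows) / len(rows)

# Human choice #3: what the learned association is taken to mean.
def algorithm_decides(applicant: dict) -> bool:
    return approval_rate[applicant["group"]] > 0.5

print(algorithm_decides({"group": "B", "income": 90}))  # False:
# "the algorithm" denied them, but humans laid every rail of that track.
```

The denial at the end looks like the algorithm’s impartial verdict, but it’s just the skew in the hand-picked history, faithfully reproduced; the rails were laid long before the train ran.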

It’s Not You (Robot), It’s Me

That brings us to another criticism — why be so nitpicky about all of this? “Everyone knows the robots aren’t really deciding to dance, so you’re making a lot of fuss over nothing.” Well, maybe. But the vast majority of people I spoke to, whether they liked the video or not, reported the same initial reaction: this is amazing, and so funny.

You may be surprised that that was my first reaction, too. I thought it was funny and really impressive. But the important work of figuring out how to respond to a rapidly changing world doesn’t happen on first impression — we have to consistently challenge how we think and question our initial responses, because very often we’re encountering something designed to make a very good first impression. Think about how you respond when someone says they’re going to show you a magic trick. If you’re like me, you become extremely attentive to what they’re doing, trying to identify the sleight of hand. If they can still pull off the trick, it’s all the more impressive.

Sorry, “illusion.” Not trick.

When you’re thinking critically, then, you’re more likely to identify problems — like whether and how human rights and humanoid robots intersect. For instance, Josh Gellers’s observation that we grant corporations rights is instructive in the same way, inasmuch as the privileges we grant to businesses reveal our values (efficiency, risk-taking, profit) and our ethical blind spots (absolving personal responsibility, exploitativeness, politicizing business). What those priorities say is telling, and it reveals where we need to focus our critical attention if we want to make sure that we’re not only acting ethically but also creating spaces where both people and ethics can flourish.

There’s also definitely a sense that alarm about robots and ethics in AI is, well, alarmist. They really are just robots dancing to Motown; wouldn’t you say that a blog post referencing MacIntyre or deontology is a little much?

“Any of you robots tries to dance with me…and I’ll deactivate you.”

I get it — it’s about fun and enjoyment, and leave it to a lawyer to come in and ruin the enjoyment of the robot troupe. The fun isn’t the problem — it’s the uncritical response to the video that worries me. That is, for me, the content of the video can be entertaining, but the reasons behind it, and the social systems those reasons both establish and unveil, are deeply problematic. “But you can’t criticize this, the robots aren’t used for wrongful purposes.” Even if we assume that’s true (which is a big assumption), it’s an argument against a straw man — my critique never reached the question of uses, because that’s a different ethical question. Consequentialism takes care of pretty much everything there anyway: killer robots are unethical because of what they do and what they might do. My argument is more fundamental, even if it’s no fun.

What’s Really Going On Here?

Sander van Dijk notes in his article that Human-Robot Interaction (HRI) is about understanding, and his point about examining how we treat robots is well made: my original article gave the field short shrift. It’s true that HRI research shows the positive interactions humans do have with robots, and that the researchers who work most closely with robots often have the most caring, empathetic relationships with them. That’s anthropomorphism at work, and it’s a sign of the better aspects of human behavior. Scholars like Julie Carpenter and Hiroshi Ishiguro have written extensively on these questions.

No choreographers have chimed in yet, unfortunately.

The issue, though, is not that humans who routinely interact with robots have positive experiences and feelings. I don’t think (and I don’t want to suggest) that we should stop building advanced robots, or even humanoid ones. To the extent that Sander’s points are procedural, technological, and functional, I agree with him entirely. But my arguments are sociological and philosophical; I’m less worried about how experts and individuals in controlled circumstances treat and experience robots.

Instead, I’m worried about the structural effects of introducing humanoid robots and what that says about us. Remember, Boston Dynamics isn’t building Spot or the bro-bots for general use, so why create a mass-distributed, meme-worthy video about them? Why is it necessary to make the public feel, generally, better about how agile, fast, and responsive humanoid robots are? The corporate priorities we discussed above include the profit motive, of course. So why does Boston Dynamics think the time, money, and effort it put into making this video will be worth it? Because they want to make you feel more comfortable if a humanoid robot starts working at your shop? Because they don’t want you to be generally concerned about what they’re up to? I don’t have the answers to these questions, but, ethically, we have to ask them and not simply assume the video was just for S’s and G’s. If you’re seeing something slick and polished, assume, like Chekhov’s gun, that you’re seeing it for a reason.

Let’s talk about something fun, like the Overton window.

Ultimately, the most important aspect of all of this is the conversation. Discussing, disputing, and even disagreeing about the ethics of what we’re presented with is an essential part of fighting for the right outcomes, even if we aren’t aligned on what those are. More importantly, it ensures that we don’t unthinkingly build systems and technologies that embed bias, perpetuate wrongs, and undermine human autonomy and wellbeing. Trenchant criticism of AI and ML systems, for instance, notes that the training data and architectures used frequently and systematically ignore some groups (racial minorities, women, those with disabilities) or favor others (people who often match the social, cultural, racial, or economic traits of the programmers and coders).

Often, there’s no evidence of intentional bias: the programmers aren’t setting out to do wrong, to oppress, to humiliate, to diminish. Most of them are just trying to do good work — and the same goes for those who design robots or teach them to move. But biases (even unintentional ones) and the broader implications of the work we do are blind spots, and we can’t recognize our blind spots on our own. We need each other to do that, to keep each other honest, and to identify how we can, and must, do better. That will take technologists, engineers, ethicists and, yes, even lawyers to get right. Even when it’s dancing right in front of you, it’s never about the robot: it’s about us.

Pictured: Science and Ethics. Not Pictured: Lawyers

Originally published at https://wardpllc.com on January 4, 2021.
