Boston Dynamics — A message of hope

Dr. Adam Hart
the digital ethicist
4 min read · Nov 22, 2019
Spot (image copyright Boston Dynamics)

It is interesting to observe that, unlike some less grounded factions of the AI software community, where each competing group is perhaps seeking to achieve the “singularity moment”, the more grounded robotics community continues the long tradition of automation alongside human operators and governors.

Since the industrial revolution, when the Jacquard loom replaced the need for skilled human weavers by using instructions contained on punched cards, reliably repeating the labour-intensive weaving patterns for power looms, there has been a tension between replacement and augmentation.

The natural fear of being replaced, of being made redundant, is a very real one, backed by historical evidence. For example, where longbowmen in the middle ages were reputedly trained from childhood to pull ridiculously stiff oaken bows, developing distended bones in their torsos as a consequence, the crossbow made that skill mostly redundant almost overnight. Anyone with a few days of training could operate and reload a crossbow, and it had a devastating effect at range. All thanks to a combination of previously uncombined technologies in a compact package: a cog, a lever and sprung steel.

The reportage of the contemporary software-only AI research trajectory is one that seemingly seeks ever-deeper ways to replace human insight and logic; in this analogy, humans are the longbow and AI is the crossbow.

The fantastical spectre played upon by the media, of a not-so-far-away future in which a never-sleeping, pervasive computing intelligence, faster and better than us, can out-circle human intelligence with black-box logic and without human say-so, is a dismal and boring one. Current uses of DNNs for deepfakes, facial recognition, voice replication and other surveillance- and control-type use cases do not inspire confidence in a human-centric augmentation orientation for AI.

Enter Boston Dynamics.

Established in 1992, with origins at MIT and DARPA funding, and now owned by SoftBank Group following Alphabet’s (formerly Google X’s) divestment in 2017, its pedigree is stellar, and its products’ locomotion is quite unlike anything seen before.

Most recently, Wired reported they are making Spot, their robotic dog-like assistant, available for lease to companies with suitable use cases.

There are two key statements in this report that impart a sense of utility and a sense of control for this AI-powered robot:

  1. They are going to “figure out” with their customers which use cases are most suitable for Spot; and
  2. Those use cases fit into an augmentation class, where a human handler has control over the robot.

A continuing research mentality means they are going to work with their customers to figure out the best use cases. Unlike some projects in the AI software community (cf. DeepMind), their public statements do not speak of a shimmering future in which humans are relieved of the burdensome task of thought; instead they take the more practical engineering route of augmentation.

If an experienced human “handler”, to continue the Spot canine theme, doesn’t have the physical prowess to undertake certain difficult tasks but can “train” Spot’s DNN to undertake them with improved accuracy through positive “reinforcement”, great.
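To make the “positive reinforcement” idea concrete, here is a minimal, purely illustrative sketch of a human-in-the-loop reward loop. It is not Boston Dynamics’ SDK or Spot’s actual training pipeline; the action names, the score_attempt stand-in for handler feedback, and the simple preference update are all assumptions chosen for illustration.

```python
# Illustrative sketch only: a toy "positive reinforcement" loop in which a
# human handler scores each attempted task and the robot's preference for
# that task is nudged accordingly. All names here are hypothetical.
import random

ACTIONS = ["inspect_valve", "read_gauge", "photograph_corrosion"]
preferences = {a: 0.0 for a in ACTIONS}  # learned preference per task


def choose_action(epsilon: float = 0.2) -> str:
    """Mostly pick the currently preferred task, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(preferences, key=preferences.get)


def score_attempt(action: str) -> float:
    """Stand-in for the human handler's judgment: +1 good, -1 poor."""
    return 1.0 if action == "read_gauge" else -1.0


LEARNING_RATE = 0.1
for _ in range(50):
    action = choose_action()
    reward = score_attempt(action)                  # handler stays in the loop
    preferences[action] += LEARNING_RATE * reward   # positive reinforcement

print(max(preferences, key=preferences.get))  # settles on the rewarded task
```

The point of the sketch is the shape of the loop, not the arithmetic: the handler’s judgment remains the source of the reward signal, which is exactly the augmentation relationship described here.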

And over time, such a robot can repeat inspections or other repetitive labour tasks, with evidence of accuracy through externalized (not black-box) human observation of its behaviour; that makes more sense than an a priori fairy tale of an AI singularity replacing or controlling human decision-making.

It makes more sense from the perspective that, since old and new technologies fundamentally co-exist, advanced robotics technology belongs in an augmentation model. Augmentation is more practical. It’s not as if we’re in a dystopian scenario where only a handful of humans are left and AI-aware robots need to do all the thinking and heavy lifting to keep the planet “running”.

There are many hazardous environments where humans have been required to operate. Removing us from those environments may alleviate future human suffering. And there’s still the very difficult question of pricing the cost of sustainable operations to consider.

This is the one key difference in these forward, speculative trajectories, perhaps stemming from the difference between engineering that uses AI techniques such as visual pattern recognition as an aid to robotic functioning, and a branch of computer science that invents new AI techniques and algorithms, such as deep reinforcement learning for beating computer games.

One operates in the real world, where problems are grounded in the real, and the math adapts to the real. The other operates in a digital world, where problems are framed and bounded by theoretical mathematical realities.

So, if in our tech work we as a profession believe that usable advancement is primarily achieved within use cases related to augmentation, that’s a hopeful message, and one that Boston Dynamics is also pursuing.
