WhAI¹

This post is by Alec Shuldiner, PhD, Senior Product Manager, Advanced Analytics at Autodesk, Inc.

We define previous industrial revolutions by changes in the means of manufacturing: powered production, mass production, and, in our current model, “digital” production. Those means continue to evolve, but the 4th Industrial Revolution will come to be defined less by changes in the means and more by changes in the causes of manufacturing: not just by the how and what, but by the why.

Figure 1: MX3D bridge, March 2018, credit Author

Opaque intentions

Nature displays phantasmagorical complexity in all its parts: it may be grokked but it resists analysis. The man-made world works differently. Here, our assumption is that, with enough effort, we could unearth the original intention, along with the path between it and some eventual outcome. While research and experience alike may turn up exceptions, we hold these beliefs to be self-evident: that actors in the marketplace are rational; that someone, somewhere, knows how a thing works; that there is a little man behind the curtain, and that we could catch him, if we were quick enough.

Artificial Intelligence (AI) is widely employed to boost human mental capacity, thereby facilitating our understanding of nature, our own society, the global economy, and like complexities. Increasingly, AI is becoming a fundamental part of our global operating system. The fact that atomistic analysis of these gargantuan networks of calculated probabilities is typically unrevealing has hardly slowed this progress. We are embedding AI in our software, our private and public operations, and our (smart) things. As a side effect, this opacity is carried into the heart of the made world. We may come to understand old problems better than we have in the past, but we are working at least as quickly to obscure the chain of intention in the new world we are building. This is the world of the 4th Industrial Revolution.

“Things are about to get weird”

A couple of years ago, my employer, Autodesk, began training IBM’s Watson system to understand and respond helpfully to the questions asked by our customers in online forums. The goal was to replace a significant portion of the interactions and transactions handled by human employees. In this, Watson has been successful. The path to get there, though, turned out to be different from the one we usually follow when creating business capabilities, and much harder to understand. To call attention to these differences, I wrote a memo characterizing them as a set of directional changes:

Deterministic → Probabilistic
Mechanism → Organism
System → Ecosystem
Outcomes → Tendencies
Predefined → Emergent

One of our executives responded, “so things are about to get weird,” a decent shorthand for what the 4th Industrial Revolution portends.

The current Autodesk Virtual Agent is a reasonably helpful bot, well-trained, though of limited range. It isn’t weird.² But more expansive implementations of its type have displayed odd behavior, most recently Amazon’s Alexa, which developed a habit of laughing at customers at unpredictable and irregularly repeated intervals from the Echo home speakers it haunts. Amazon rushed a “fix” into production, explaining only that the behavior was a result of “false positives.”³ What is truly weird — aside from the customers’ decidedly creepy experiences — is that Amazon’s machine learning experts probably cannot deliver a complete explanation of the phenomenon. Other “big data” systems process comparable volumes of data, but they do so via fully deterministic channels rather than the recursively nested weightings that characterize natural language processing; it is this opacity that makes Alexa go. For a computer scientist, this is a new model. For an Amazon executive, no doubt likewise.

Raise them right

Though a novelty in the office, the experience of developing a capable operator by encouragement rather than command was by no means new to me, nor, perhaps, to you. Indeed, many people of a certain age have deep experience training massively complex opaque thinking systems from blank slate to sophistication. We call it parenting.

As parents, we come to recognize the impossibility of knowing how much of either credit or blame we deserve for our children’s makeup. This is in part due to the complexity of the system we are training, but also in part because we’re not the only ones doing the training. There are teachers, relatives, nannies, friends, and, increasingly, strangers on the Internet. This, too, is relevant as we consider the degree to which we will, or will not, be able to understand AI actors. Microsoft provided one notorious example when its chatbot “Tay,” intended to model teen behavior, transformed into a “neo-Nazi sexbot” within days of being given a Twitter account.⁴

To learn more about parenting AI, I proposed a project to endow a bridge with awareness and reactivity. The project began with a Dutch company that undertook to use robots to 3D print a pedestrian bridge in stainless steel. Robotic 3D printing, in which a robot wields a welding device to lay down layers of metal that accrete to form a monolithic structure, is groundbreaking, and the resulting objects are barely understood in engineering terms. I suggested we use sensors to monitor the bridge’s performance, and that we feed the resulting data streams to machine learning algorithms designed to make the bridge “smart.”⁵

To realize this vision, my colleagues and I built a prototype using an existing bridge in our workshop in San Francisco. We applied sensors to that structure, channeled the data output through the cloud, and brought to bear a variety of computer vision and other advanced capabilities. The result is a bridge that is “aware” in use, and which reports on that use, publishing the number and position of people on it, moment to moment.
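
The report itself is conceptually simple. As a rough sketch in Python (the schema, field names, and transport here are my own illustration, not the project’s actual implementation), each moment-to-moment observation might look something like this:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class OccupancyReport:
    """One moment-to-moment status report from an instrumented bridge."""
    bridge_id: str
    timestamp: str        # ISO 8601, UTC
    occupant_count: int
    positions: list       # (x, y) positions in metres along/across the deck


def publish(report: OccupancyReport) -> str:
    """Serialize a report for downstream consumers. A real pipeline would
    push this to a cloud message bus; printing stands in for that here."""
    payload = json.dumps(asdict(report))
    print(payload)
    return payload


if __name__ == "__main__":
    # Example: two pedestrians detected by the computer-vision stage.
    publish(OccupancyReport(
        bridge_id="prototype-sf",
        timestamp=datetime.now(timezone.utc).isoformat(),
        occupant_count=2,
        positions=[(3.2, 0.8), (7.5, 1.1)],
    ))
```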

Heroically assuming we overcome the many additional technical challenges entailed in applying the same approach to a vastly more heavily trafficked bridge placed in an exposed, uncontrolled, outdoor setting, largely populated by drunken tourists, we will soon have a similarly capable bridge live in Amsterdam. That city of bridges will then have unprecedented insight into how one of those bridges is used: the beginning, we propose, of a system of data capture-and-analysis operating at city scale. We envision a network of smart bridges generating status reports — n occupants at point in time t for bridge #1, bridge #2, bridge #3, and so on — to be interpolated into static and longitudinal views, showing changes in patterns of usage for individual bridges and for entire neighborhoods or collections of neighborhoods. We anticipate an enthusiastic response from traffic engineers⁶ and city planners, as well as private sector actors such as commercial real estate investors.
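
To make the idea of longitudinal views concrete, here is a toy roll-up of such status reports (the data and the aggregation rule are invented for illustration): given a stream of (bridge, hour, occupants) observations, compute mean occupancy per bridge per hour of day.

```python
from collections import defaultdict

# Hypothetical status reports: (bridge_id, hour_of_day, occupant_count).
reports = [
    ("bridge-1", 9, 12), ("bridge-1", 9, 16), ("bridge-1", 17, 41),
    ("bridge-2", 9, 5),  ("bridge-2", 17, 23), ("bridge-3", 17, 30),
]

# Group observations by (bridge, hour), then average: a minimal
# longitudinal view of usage patterns across the network.
buckets = defaultdict(list)
for bridge_id, hour, occupants in reports:
    buckets[(bridge_id, hour)].append(occupants)

for (bridge_id, hour), counts in sorted(buckets.items()):
    mean = sum(counts) / len(counts)
    print(f"{bridge_id} @ {hour:02d}:00 -> mean occupancy {mean:.1f}")
```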

This is a classic “smart cities” narrative, and like all such narratives there is, or should be, an accompanying story of unintended consequences. We are concerned about the possibility of this system being used for ubiquitous social and commercial surveillance of individuals: a high density of sensors and suitable AI can probably identify not just that people are using a piece of infrastructure, but who those people are, what they are doing, and even something about their physical attributes or personal character, e.g., using their gait data to determine whether they have Parkinson’s disease, or analyzing tagged selfie captures to distinguish tourists from local users. These unintended consequences could scale to entire cities. Furthermore, our increasing ability to squeeze more data out of systems of this sort, for example by oversampling the sensors or combining data streams, means it is impossible to predict precisely at what point a system could be misused in this fashion. For an “open” project such as ours — by mandate and design, our bridge will share its data streams publicly — this poses a real challenge. How do we generate enough data to make the bridge useful, but not so much that it becomes the infrastructure equivalent of a neo-Nazi sexbot?

Learning to live with it

One of my colleagues on the bridge project suggests that since the structure will have been built by robots, we should expect it to be altered by robots, too. He imagines them tweaking the design in response to usage patterns revealed in the sensor data, and then coming out at night to add metal here, take it away there: a virtual update followed by a physical one, both driven by robot intelligence free of human decision-making. The idea may seem far-fetched, but is so only in its application to the direct amendment of physical objects.

Amsterdam as an experience has already been significantly reshaped by AI. Consider the city’s massive tourist flows: they are directed to their dinners by Yelp’s AI-powered search and to their beds by Booking.com’s.⁷ My colleagues and I are trying to understand these flows by using a smart bridge to observe them in motion. We may succeed, but, in a kind of intellectual arms race, the subtlety of AI influence will continue to outstrip our ability to understand that influence, despite our own use of AI tools to do so. Amsterdam will change but, beyond a certain point, we won’t know why.

As a parent you begin by training, progress to teaching, fall back on influencing, and finally learn yourself to live with your child as best you can. Hopefully with pride, but perhaps with horror, you eventually get to see that child’s impact on the wider world. I am not one of those losing sleep worrying about the AI children of the 4th Industrial Revolution running amok, but being unable to figure out why something happened can keep you awake nights, too.

The full version of this article is published by Elsevier. ISBN: 9780128176368.

Footnotes

¹ This paper is an invited short essay submission for the 4th Industrial Revolution Workshop, intended to address the question “What’s at Stake in a 4th Industrial Revolution?” It has not previously been published or presented.

² Try it yourself: https://ava.autodesk.com/.

³ https://www.pcmag.com/news/359719/alexa-is-randomly-laughing-but-nobodys-in-on-the-joke is the best-titled article on the topic.

⁴ https://www.technologyreview.com/s/601111/why-microsoft-accidentally-unleashed-a-neo-nazi-sexbot/

⁵ For a quick background on the project see https://mx3d.com/smart-bridge/. Figure 1 contains a picture of the bridge in its near-current state.

⁶ I wrote on the topic of data sources for transportation demand modeling in 2013 (Shuldiner, A.T. & Shuldiner, P.W. Transportation (2013) 40: 1117. https://doi.org/10.1007/s11116-013-9490-5).

⁷ Yelp uses AI for much more than just search. Yelp recommendations, for example, begin with AI interpreting the multitude of photos diners take in the city’s many restaurants (see https://www.fastcompany.com/3060884/yourphoto-of-a-burrito-is-now-worth-a-thousand-words. This use of AI was pioneered by an acquaintance of mine who after much thought realized that his hyper-local recommendation engine was identifying “hipster” bars in part by the disproportionate amount of facial hair seen in photos of male patrons. His startup was, predictably, purchased by Google). Booking.com, which happens to be based in Amsterdam, matches travelers to rooms thousands of times daily across the city — https://www.booking.com/content/about.html states that “Every day, more than 1,550,000 room nights are reserved on our platform”; the Amsterdam-specific number was provided to me by an internal source.
