Engineering Needs Qualitative Methods

Amy LaViers
14 min read · Nov 20, 2017


In my research group, the Robotics, Automation, and Dance (RAD) Lab, we have been talking a lot about “power” lately. One thing we realize is that technology — especially the area of technology we work in (robotics) — gets a bit of a bad rap in accessible literature, like the news (1, 2, 3, 4,…), popular books, and videos made for clicking. And, the more we observe this phenomenon, the more we see that technologists are the root (or at least part) of this problem. The way we portray our work within our community and to others translates directly to popular understanding — and misunderstanding — of technology. In robotics, which is often related to AI, this is coming to an uncomfortable head in which imaginative fiction is linked to real technology. To combat that, and to be responsible stewards of the public funds that make our work possible, our lab does regular outreach activities.

Recently, images created by well-meaning group members and shown at one of these activities portrayed computers as the “brains” of robots. For one, robots don’t have brains; the brain is a human organ that is the subject of extensive study and is not, overall, well understood. Thus, not only were these images factually incorrect; the metaphors they tried to draw were, largely, socially irresponsible. Here are the slides. Do you notice that one is not like the others?

Three of the four depict engineered tools, machines; one of the four depicts a complex human organ (indeed curiously melded with a circuit board). This outlier is more a shot of science fiction than of technology. And it’s a comparison made in many outlets, even at our own university. Such an erroneous comparison is counterproductive to the goal of technology: to empower users.

Thus, this is a topic I “harp” on within my group regularly. These students, undergraduates at the University of Illinois at Urbana-Champaign, had just finished a summer-long internship in my group, where we regularly discuss the importance of the verbiage used to describe our work and emphasize the differences between humans and robots. It’s part of the material our group studies: we use qualitative methods like Laban Movement Analysis and Bartenieff Fundamentals to understand how humans move as an integral part of our work in robotics. So, we are constantly discussing how expressive and versatile human movement is and how limited and rigid robotic movement is. This is a necessary step in our efforts to create expressive robotic systems.

As such, I was fairly incensed — but not surprised — when I saw the slides. Engineers come to my lab with next to zero training in qualitative methods. They consistently undervalue the role of writing because they aren’t trained to value it: exams don’t feature essays, lab reports are cursorily graded by TAs who check for plots and equations rather than the quality of writing, and many students describe their interest in engineering as “wanting problems to have a right answer”. Moreover, some students are, often unconsciously, drawn to robotics by things they’ve read in science fiction books more than by anything they’ve seen in class.

I think there are a couple of unspoken myths swirling around, which exacerbate this tendency to draw biological metaphors.

Myth 1: numbers and equations are inherently objective. This is not true — especially in applied science. In research, engineers proffer multiple technical models to explain the same physical phenomena. The “right” model tends to be very dependent on context.

Myth 2: engineers only deal with objective things. This is not true. My group is a particular example, as we research subjective phenomena, but even in a very classical sense, engineers use their own, personal past experiences to make judgements all the time. It’s easy to believe this myth as an engineer because objective things are easier to grade; thus, engineers tend to be tested on objective topics using quantitative methods, which are also easier to grade. However, there is a whole world of objective description via qualitative methods (paper/report writing), subjective description via quantitative methods (approximating a physical quantity), and subjective description via qualitative methods (describing the perspective of stakeholders for a given engineered system) that professional engineers engage in regularly.

Thus, not only was the slide socially irresponsible; it was objectively wrong. The next time my lab uses these slides, the following transformation will occur:

While the “brain” is a tempting metaphor to use, it’s not an accurate one. Robots are powerful tools, but they are notoriously poor at replicating human strengths. For high-volume, repetitive tasks that require precision and repeatability, robots are very useful tools and outperform their human counterparts. For low-volume, complex tasks, which include any tasks outside of controlled, factory environments, robots perform extremely poorly where humans navigate easily.

This error doesn’t happen only in our lab. Consider the news coverage of the monumental achievement of Google DeepMind (the company name itself contains another such direct comparison to biology) in which a computer beat an expert Go player with an algorithm named AlphaGo. First, note that the computer did not manipulate its own pieces on the board (this is still not easy). Second, compare the coverage by the Wall Street Journal and the paper in Nature that documented the achievement. The latter is meant for technical experts familiar with similar algorithms. Let’s zero in on the use of verbs originally attributed to human capabilities, like “learn” (and “learning”, “learns”, “learned”) and “teach”. Here are a few excerpts of coverage of the AlphaGo achievement in the press (emphasis is mine):

The list goes on, and it also includes millions of white-collar jobs formerly thought to be safe. For decades, progress in artificial intelligence lagged behind the hype. In the past few years, AI has come of age. Last spring, for example, a computer program defeated a grandmaster in the classic Asian board game of Go a decade sooner than had been expected. It wasn’t done by software written to play Go but by software that taught itself to play — a landmark advance. Future generations of college graduates should take note.
“A Guaranteed Income for Every American,” The Wall Street Journal, June 3, 2016

Just two years ago, most experts believed that another decade would pass before a machine could claim this prize. But then researchers at DeepMind — a London AI lab acquired by Google — changed the equation using two increasingly powerful forms of machine learning, technologies that allow machines to learn largely on their own. Lee Sedol is widely regarded as the best Go player of the past decade. But he was beaten by a machine that taught itself to play the ancient game.
“Google’s AI Takes Historic Match Against Go Champ with Third Straight Win,” Wired, March 12, 2016

Humanity didn’t stand a chance…. AlphaGo uses programming modeled on neural processes to replicate human instincts, and has also learned through millions of matches against itself.
“AlphaGo Software Storms Back to Beat Human in Final Game,” The Wall Street Journal, March 15, 2016

On the other hand, the Nature article never uses the words “taught” or “teach” in its text; the word appears only once, in the bibliography. And all but one instance of the word “learning” is modified by “supervised”, “reinforcement”, “policy gradient”, “temporal-difference”, or “machine”. These modifiers rarely make it to popular publications or slide decks.
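For readers who want to probe this kind of claim themselves, the analysis amounts to simple word counting. Below is a rough Python sketch (not the tool used for this article; the sample text and modifier list are placeholders) that tallies how often “learning” appears and how often it carries a technical modifier:

```
# Rough, illustrative word-count check (placeholder text and modifier list,
# not the actual analysis of the Nature paper).
import re

MODIFIERS = {"supervised", "reinforcement", "gradient", "temporal-difference", "machine"}

def modifier_counts(text):
    """Count uses of 'learning' and how many are preceded by a technical modifier."""
    words = re.findall(r"[\w-]+", text.lower())
    total = modified = 0
    for i, w in enumerate(words):
        if w == "learning":
            total += 1
            if i > 0 and words[i - 1] in MODIFIERS:
                modified += 1
    return total, modified

sample = "AlphaGo combines supervised learning and reinforcement learning; it is learning to play."
print(modifier_counts(sample))  # -> (3, 2): two of three uses carry a technical modifier
```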

Like the brain, the processes named by the verbs “learn” and “teach” are features of biological creatures (humans, dogs, etc.) that are not well understood. Like the brain, learning is the subject of intense academic study, and there is little consensus, with many competing theories, on how the process works. In contrast, engineers at Google DeepMind know the exact process they went through to program AlphaGo, and while they leveraged randomness in the final instance that beat Lee Sedol, they can back out the exact computations, transformations, and iterative updates used by the computer to succeed. The process is entirely transparent, mechanistic, engineered. Since people built the computer and designed the algorithm, we know exactly how it works — even if the results sometimes surprise us. Unfortunately, “we” is limited to a select few with a very particular type of training.

Even “among ourselves,” engineers fall back on these shortened ways of referring to the mechanisms of technology. A more recent paper on AlphaGo is entitled “Mastering the game of Go without human knowledge” (the title itself is trivially false, since humans designed the algorithms described within it, as well as the game of Go itself). Moreover, the paper characterizes the computer winning against trained Go players as mastery of “the most challenging of domains”, which is a nearly impossible claim to verify.

Another example of how these verbs are not just wrong but counterproductive inside technical research occurred in my group during a recent workshop with three dance professionals. One engineering student was describing a Google algorithm that operates over web images to classify objects. The student said “it looks at these images” in his description. To the technical members in the room, it’s immediately clear what that shorthand means because we have experience in image processing (as most introductory CS courses cover this task in some capacity). But to our dance collaborators, the only association they can make is their own experience of “looking”. When I stopped the student and asked him to unpack this verb for the “uninitiated”, we created fodder for a much more meaningful conversation. Once he explained that images are represented as matrices of RGB values, within which the algorithm checks for correlations, the artists — and the engineers — in the room could make meaningful suggestions about what trends the algorithm should look for to achieve a stated purpose.
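To make that unpacking concrete, here is a minimal sketch of what “looking at” an image means to a program: the image is a matrix of red/green/blue numbers, and the algorithm computes numerical comparisons (here, a normalized correlation against a small template) over those numbers. This is an illustration of the idea, not the actual Google system or any particular classifier:

```
# A minimal sketch of "looking": an image is a height x width x 3 matrix of
# RGB values, and one crude way to "look for" a pattern is to slide a small
# template over the image and measure how strongly the pixel values correlate.
import numpy as np

def as_matrix(pixels):
    """Represent an image as an array of RGB intensities scaled to [0, 1]."""
    return np.asarray(pixels, dtype=float) / 255.0

def correlation_map(image, template):
    """Normalized correlation between the template and every image patch."""
    ih, iw, _ = image.shape
    th, tw, _ = template.shape
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    t = template - template.mean()
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum()) + 1e-12
            scores[r, c] = (p * t).sum() / denom  # near 1.0 = patch resembles template
    return scores
```

No seeing, no understanding: just arithmetic over a grid of numbers, which is exactly the point worth making explicit for collaborators outside the field.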

We have to dispel the notion that engineering is fundamentally hard to understand, or harder than other disciplines. In working with collaborators outside of engineering, which is increasingly common, it is easy to see the fallacy of this prevalent idea, which is used to excuse skimming over the mechanistic details of a process and superficially labeling it with a familiar verb, like “look”. Jargon exists in every field and, with enough effort, can be decoded for those outside it.

Robotics, as separate from AI, is particularly rife with irresponsible associations. Even the term derives from art, not science. While computers compute (a verb unambiguously designed to describe a process functionally different from human thinking), robots move, learn, think, have arms, have legs, walk, dance, have gender, etc. In the images below, a robotic manipulator — often called a robot “arm” (left) — is compared to a human arm (center). This association carries into the design of platforms, as in the “bicep” — which serves no mechanical purpose — on the Aldebaran NAO humanoid platform (right). Indeed, a friend outside the field read an earlier version of this article and said: “A computer is not a brain — that I’m very sure is true — but a robotic arm and a human arm are very similar in everything other than one isn’t connected to a brain.” My brain exploded a little bit, but they were proving my point: such “suitcase words” have a profound impact on the public.

Note that even the “shoulder” joint of a humanoid robot is a far cry from a human shoulder. It is typically formed from three connected servo motors, each of which rotates about a single axis. These servos are limited by software and their mechanical arrangement to roughly mimic motion in the coronal plane (as in jumping jacks), motion in the sagittal plane (as in swinging the arms during walking), and rotation inside the socket. For example, the NAO humanoid’s proximal “shoulder” joint can move between -119.5 and 119.5 degrees in two degrees of freedom and between -88.5 and 2 degrees in a third. Raise your hand as if about to ask a question in class; rotate, from the shoulder, so that the palm faces in the opposite direction. There; you’ve outperformed the NAO. If you want to do a little more work, try to feel the sliding action of the scapula across your ribcage; this gives yet another point of motion that robot “arms” do not even attempt to recreate. Indeed, many factory robots exhibit a much greater range of motion (like swinging a full 360 degrees) than any single human joint — further undermining any comparison between the two.
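For a sense of how spare this is in software terms, here is a toy sketch of a three-servo “shoulder”: a short list of named single-axis joints with hard angle limits, using the ranges quoted above as illustrative numbers (the joint names are hypothetical, not an official NAO specification):

```
# Toy model of a humanoid "shoulder": three single-axis joints with hard
# angle limits. Joint names and limits are illustrative (taken from the
# ranges quoted above), not an official NAO specification.
from dataclasses import dataclass

@dataclass
class RevoluteJoint:
    name: str
    min_deg: float
    max_deg: float

    def clamp(self, target_deg: float) -> float:
        """Commands outside the limits are simply cut off at the boundary."""
        return max(self.min_deg, min(self.max_deg, target_deg))

shoulder = [
    RevoluteJoint("shoulder_axis_1", -119.5, 119.5),
    RevoluteJoint("shoulder_axis_2", -119.5, 119.5),
    RevoluteJoint("shoulder_axis_3",  -88.5,   2.0),
]

# A pose a human reaches trivially may simply be clipped away:
print([j.clamp(150.0) for j in shoulder])  # -> [119.5, 119.5, 2.0]
```

Compare that handful of numbers to a ball-and-socket joint plus a gliding scapula, and the word “arm” starts to look like a very generous metaphor.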

At a recent technical conference, I asked a preeminent AI researcher: “how many degrees of freedom do you think a human has?” Eventually, he dismissed the question as “the wrong question to ask”. But I think it’s an important one, and the difference in mechanical capability between robots and humans is, by my estimate, at least millions of orders of magnitude.

Interestingly, in his downgrading of my question, the aforementioned AI researcher first asked, “do you mean the body or the mind?” I replied, “they’re part of the system”. And truly, it seems that people who don’t have a regular movement practice — even some who do — forget that. It’s evidenced in my lay friend’s comment (about the brain versus the arm) too. Broadly, we forget that the incredible imaginations of our minds are facilitated through our very incredible, not well-understood bodies. We think of an activity like pursuing a PhD as one of the mind. Yet, how does one get a PhD? By executing a very specific sequence of movements: typing on a keyboard; measuring liquid into a pipette; speaking about one’s research; shaking hands with colleagues; and walking between buildings on campus. By the same token, we forget about the intelligence of an NFL wide receiver who creates physical feats of athletic prowess by first understanding a playbook; then intuiting the motives of those around him; deciding which route to pursue; and creating a training routine for himself. Separating mind and body is an imperfect abstraction, and, as it intersects with the field of robotics, it seems to buttress the idea that a machine might share common mechanisms or traits with biological systems.

In the RAD Lab, we study how to impart human design processes for curating motion into robotic algorithms (this includes how to choreograph the user’s response to a particular robotic system). I sit right on that line of human and machine, and thus, have realized we have to work double time to combat this. When someone describes my work as “making robot motion more human-like”, I work to articulate all the ways we fail at that, which are easy to see. For example, when we make a desired motion in simulation, it comes off muffled and boring on hardware (we’re adding sound and lights and context to combat this). When I call one of our platforms “he”, my students jump to correct me: “it”! When I give exams, students have to write paragraphs describing where the simplistic systems they answer questions about fail to model real world phenomena (for example, every dynamics class uses a notion of frictionless springs — which don’t exist!). I give embodied movement workshops to help engineers understand the expertise of our colleagues in dance and the complexity of human movement, which roboticists tend to simplify to support the success of their work. I work with dance professionals on a regular basis. That means I publish with them too — despite the fact that their co-authorship has been called a “conflict of interest” on papers where I leverage their observational expertise (this position is predicated on the notion that human motion is simple enough for singular, “ground-truth” readings).

Such experience came in handy when viewing the recent Boston Dynamics achievements with the Atlas robot. News outlets are quick to laud the backflip, calling it “stunning”, “perfect”, “jaw-dropping”, “Olympics-ready”, “full-tilt insane”, and “the beginning of the end”. Even more technical outlets have discussed the platform’s performance in terms of death. It is a beautiful feat of engineering, but it is one that is so much more interesting when the behavior is described in greater detail. Let’s see if I can do it without co-opting too much human anatomy!

The robot progresses over and onto several boxes in the environment with an amazing amount of flexion over the most distal, lower link (“feet”), creating a surprising shift of shape that does not result in a shift of weight, preceding each jump onto a block. This action seems necessary to keep the weight of the heavy, rigid power unit inside the base of support (or support polygon) while bending low enough to generate sufficient upward force to clear the obstacle. A human engaged in a similar task tends to involve the spine and arms in their action in a more integrated way, which allows for less flexion in the ankles. Next, spinning around upright is achieved through a striking rotation purely in the transverse plane, disconnecting the upper and lower parts of the platform. There is no sense of cross-lateral connectivity through the rigid core in this moment — although it seems that the right upper manipulator (“arm”) is moved toward the lower left manipulator (“leg”), which may simulate this connection commonly discussed in dance and other practices.
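The “base of support” idea mentioned above has a simple computational core: balance roughly requires that the projection of the center of mass onto the ground stays inside the polygon formed by the contact points. Here is a minimal, hypothetical illustration (not Boston Dynamics’ controller; the numbers are made up):

```
# Minimal illustration of the "support polygon" idea: is the ground projection
# of the center of mass inside the convex polygon of the contact points?
# (Not Boston Dynamics' controller; all numbers are hypothetical.)
def inside_convex_polygon(point, vertices):
    """True if `point` lies inside the convex polygon `vertices` (counter-clockwise)."""
    px, py = point
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Cross product: which side of the edge (x1, y1) -> (x2, y2) is the point on?
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

# Hypothetical rectangular contact region under both "feet", in meters.
support_polygon = [(-0.10, -0.15), (0.10, -0.15), (0.10, 0.15), (-0.10, 0.15)]
com_projection = (0.02, 0.05)  # projected center of mass on the ground plane
print(inside_convex_polygon(com_projection, support_polygon))  # True -> statically balanced
```

The deep flexion at the “feet” is one way to keep that projection over the contact region while the heavy power unit folds low enough to jump.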

The subtle lean forward of the power pack, from the proximal actuators (“hips”), right before the backflip is also interesting. This creates a shift of weight, moving the center of gravity backward off the block, and possibly creating a more compact form before rotation. Then the lower extremities extend before all the limbs extend in sequence — upper (“arms”) then lower (“legs”) — and then pull in tight for rotation. Thus, it appears the fall is initiated by the lowest, most distal actuators, those farthest from the power unit. Can you imagine: initiating a backflip from your ankles?! A human strategy would again involve more coordination in the arms and core. For a human, the sequence of action is totally different: the arms move first, then the pelvis pops up, and the legs help with the jump in an integrated manner. And, most important are the hundreds of crisscrossing muscle fibers in the body’s center, which provide a nearly continuous net of available action next to the pelvis; from this point of view, the feet are just along for the ride.

Finally, the upper manipulators rise on landing in a clear imitation of a gymnast’s dismount. Here, the lack of rising shape quality in the core creates an ironic twist on the expression of “accomplishment” conveyed by the lifted arms. It’s not the lifting of the chest and arching of the back that we’re accustomed to seeing from gymnasts making this familiar gesture (compare Atlas and gymnast Nina McGee below).

Thus, at least from one angle, what seems to be so interesting about watching the Atlas backflip is that it is so *different* from our movement patterns — yet still effective. Moreover, observations like the one above about the differing strategies of the core have led to technical advances from my group, like this initial design, and have spawned research efforts to develop core-located actuation in humanoids. These observations are inherently qualitative and are best created, understood, and explored (all “brain” terms) through embodied movement (action of the “body”).

Bottom line: to appreciate the vast capacity of our bodies (and minds!) and to contextualize technology’s role in them, engineers need to value qualitative methods and we all need to dance more.

Amy LaViers is an assistant professor of mechanical engineering at the University of Illinois at Urbana-Champaign, director of the Robotics, Automation, and Dance (RAD) Lab, and a Certified Movement Analyst (CMA).

An earlier version of this post was first shared on the RAD Lab blog: http://radlab.mechse.illinois.edu.
