Dr. Nancy Cooke Of Human Factors and Ergonomics Society On The Future Of Robotics Over the Next Few Years
An Interview With David Leichner
Have an interest in technology and be an early adopter of new technologies.
With the shortage of labor, companies are now looking at how robots can replace some of the lost labor force. The truth is that this is not really a novel idea, as companies like Amazon have been using robots for a while now. What can we expect to see in the robotics industry over the next few years? How will robots be used? What kinds of robots are being produced? To what extent can robots help address the shortage of labor? Which jobs can robots replace, and which jobs need humans? In our series called “The Future Of Robotics Over The Next Few Years,” we are talking to leaders of Robotics companies, AI companies, and Hi-Tech Manufacturing companies who can address these questions and share insights from their experience. As a part of this series, I had the pleasure of interviewing Dr. Nancy Cooke.
Nancy J. Cooke, Ph.D., is the Director of the Center for Human, AI, and Robot Teaming and a professor in Human Systems Engineering at the Polytechnic School, one of the Ira A. Fulton Schools of Engineering at Arizona State University. Dr. Cooke has a Ph.D. in Cognitive Psychology from New Mexico State University and is a past President of the Human Factors and Ergonomics Society (HFES.org). Her research interests include the study of individual and team cognition and its application to human, AI, and robot teaming, manned-unmanned teaming, and empirical assessments of teams and teamwork. Dr. Cooke specializes in the development, application, and evaluation of methodologies to elicit and assess individual and team cognition. Her work is funded primarily by DoD.
Thank you so much for joining us in this interview series! Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started in robotics?
First, I’d like to be clear that I am not a robotics expert. However, I’m an expert in human cognition and behavior, and my research seeks to understand how robots can best work with humans to complement their skills or do tasks that humans do not want to do. As a Ph.D. student, I conducted research on human expertise and methods to elicit knowledge from experts for use in expert systems, which at the time were the “shiny objects” coming out of the artificial intelligence field. In my early career, I was asked to apply knowledge elicitation to teams and started investigating the problem of team cognition. About 10 years later, I started considering teams made up of humans, AI agents, or robots. The Department of Defense has been very interested in human-machine teaming and has facilitated the growth of this research program and the center that I direct at Arizona State University.
Can you share the most interesting story that happened to you since you began your career?
Luckily, there are so many interesting stories to choose from. At times, I am a job voyeur, learning about what people do and observing them at work so that I can bring their job into the lab and study it under controlled conditions. However, in 2015 I had the very interesting opportunity to collect human factors data on the cognitive workload of two gas balloonists, Troy Bradley and Leonid Tiukhtyaev, who made a record-breaking Two Eagles flight from Saga, Japan to the Baja California peninsula in Mexico. Following the adventurous flight was fascinating; the data collection, however, was a bit problematic. I should have anticipated that when the pilots were under high levels of cognitive load, they would decline to take on additional cognitive tasks. I learned a lesson about data collection in the field, but being involved in the Two Eagles project was amazing.
Can you please give us your favorite “Life Lesson Quote”? Can you share how that was relevant to you in your life?
“Opportunities don’t happen, you create them.” — Chris Grosser
When I was young, I told myself that I did not want a job as an engineer. I was not interested in public speaking or travel, and I wanted a mindless 9–5 job. Luckily, I kept an open mind, which led me to enjoy graduate school, where I was forced to do public speaking, ultimately making me less scared of it. Today I am a professor of human systems engineering, I give many lectures and talks, travel a lot, and certainly do not have a mindless job. It is important not to shut the door on opportunities, but to seek them out.
Ok wonderful. Let’s now shift to the main focus of our interview. Can you tell our readers about the most interesting projects you are working on now?
I am working on a DARPA-funded project that aims to imbue artificial intelligence with social intelligence so that AI can assist future teams. We use Minecraft to represent a collapsed office building in which three people conduct an urban search and rescue task or a bomb disposal task. Social intelligence involves having a theory of mind, that is, an understanding of the beliefs and intents of others, and is a challenge for AI.
In another project, I’m working toward developing an artificial intelligence agent that can monitor a large, distributed human-machine system, identifying system anomalies and recommending mitigation strategies to a human. These systems could be DoD systems of humans, unmanned vehicles, and robots working across land, sea, air, and cyber domains, or teams operating in space, including the Mars rover, the International Space Station, a lunar colony, and mission control on Earth. It is an example of artificial intelligence doing something that it can do (i.e., taking in large amounts of system sensor data and making sense of it), but that a human cannot.
How do you think this might change the world?
Socially intelligent AI would, by definition, be a better teammate: one that understands what its human partner needs without requiring explicit communication. AI as a mission manager could be very useful as our world becomes increasingly interconnected. For instance, AI could alert us to weaknesses in a system such as the power grid so that the system could be strengthened before a tipping point is reached.
Keeping “Black Mirror” in mind, can you see any potential drawbacks about this technology that people should think more deeply about?
AI that is socially intelligent could also be used to deceive or influence humans toward evil objectives. Further, if the AI manager was hacked, it could start giving advice that would hurt the system rather than strengthen it. As with all technology, it can be used for good or evil.
What are the three things that most excite you about the robotics industry? Why?
- Robots/exoskeletons that enhance human performance or correct disabilities.
- Robots that protect humans by searching for IEDs or bombs.
- Robots that do household chores (e.g., vacuuming).
What are the three things that concern you about the robotics industry? Why?
- Attempts to replicate humans in humanoid robots (instead of complementing them).
- Vehicles that purport to be autonomous, but are not, and end up killing people.
- Robots that create more work for humans.
As you know, there is an ongoing debate between prominent scientists (personified as a debate between Elon Musk and Mark Zuckerberg) about whether advanced AI has the potential to pose a danger to humanity in the future. What is your position on this?
All technology has the potential to be used for good or evil. Certainly, ethics and regulations need to be considered in the development of AI, but people wishing to do evil will always find a way around them.
My expertise is in product security, so I’m particularly interested in this question. In today’s environment, hackers break into the software running robotics systems for ransomware, to damage brands, or for other malicious purposes. Based on your experience, what should manufacturing companies do to uncover vulnerabilities in the development process and safeguard their robotics?
Careful validation and verification, along with red teaming that attempts to break the security. However, it’s important to be mindful that these techniques are not foolproof.
Given the cost and resources that it takes to develop robotics, how do you safeguard your intellectual property during development and also once the robot is deployed in industry?
My job is doing research and the best way to safeguard that research is through publication in peer-reviewed journals.
Fantastic. Here is the main question of our interview. What are your “5 Things You Need To Create A Highly Successful Career In The Robotics Industry”?
- Get a college education. The challenges of AI and robots are multidisciplinary, so the major is not important. Breadth is important at this stage.
- Have an interest in technology and be an early adopter of new technologies.
- Read articles and books about robots and talk to people in the industry.
- Get a job or internship at one or more places in the field.
- Decide on a specialization and earn an advanced degree toward that.
Please note, I’m not in the industry and took a slightly different path, mostly focusing on academia and earning BA, MA, and Ph.D. degrees. That said, I have long been interested in AI and have done a lot of reading on it since my early days in college.
As you know, there are not that many women in this industry. Can you advise what is needed to engage more women in the robotics industry?
There are many efforts geared toward increasing the number of women in STEM. Hopefully, these efforts will encourage and help bring more women to robotics. Additionally, seeing other women involved in robotics should in turn attract more women.
You are a person of great influence. If you could inspire a movement that would bring the greatest amount of good to the greatest number of people, what would that be? You never know what your idea can trigger. :-)
I would like to see a strong movement geared toward AI for human well-being.
How can our readers further follow your work online?
Authority Magazine readers can follow my work online via this link: https://search.asu.edu/profile/559491
Thank you so much for the time you spent doing this interview. This was very inspirational, and we wish you continued success.
About The Interviewer: David Leichner is a veteran of the Israeli high-tech industry with significant experience in the areas of cyber and security, enterprise software and communications. At Cybellum, a leading provider of Product Security Lifecycle Management, David is responsible for creating and executing the marketing strategy and managing the global marketing team that forms the foundation for Cybellum’s product and market penetration. Prior to Cybellum, David was CMO at SQream and VP Sales and Marketing at endpoint protection vendor, Cynet. David is the Chairman of the Friends of Israel and Member of the Board of Trustees of the Jerusalem Technology College. He holds a BA in Information Systems Management and an MBA in International Business from the City University of New York.