The Human Factor in AI and Robotics: A Case for Understanding Generative AI

By Jackson F. Bremen and John M. Bremen

Artificial Intelligence (AI) dates back almost 90 years to John von Neumann (who devised the computing architecture still used today) and the logician and computer pioneer Alan Turing, whose famous “Turing Test” highlighted the concept of machines becoming indistinguishable from humans. Today, the test has only grown in relevance, as AI-driven technological advances have touched nearly every industry and left many business leaders dazed at headlines reading “AI Program Passes the Bar Exam” and “Local Art Show Won by Program.”

One subset of AI that has captured a great deal of media attention is generative AI: tools that create new text, images, audio, or other media. Public discussion frequently raises concerns that these tools will soon mean the end of work as we know it. The reality is that AI and robots are both job creators and job eliminators, generally replacing lower-skilled roles with higher-skilled ones. Further, the “automation paradox” suggests that the more sophisticated and complex technology becomes, the more vital its human users become. AI and robotics have already replaced many types of roles (some production-line factory workers, manual machinists, and others), but they have also created entire categories of crucial new positions (programmers, technicians, and more). As is true whenever a new technology is introduced, the role of people will evolve; workers, leaders, and organizations that do not adapt risk being left behind.

Contrary to the popular belief that these tools will simply displace today’s workforce, a more nuanced view is that the future of work will be shaped by those who learn to use them effectively early on. Generative models have incredible power, but communicating with them effectively, crafting prompts that produce useful content, and knowing where they work best are skills many people currently overlook.

In this article, the authors offer an overview of AI for non-experts and advise on how AI tools can be used most effectively alongside the essential contributions of human beings.

Definition

In general, AI is any system in which a computer uses algorithms to solve problems, evaluate information, or make decisions. To many people’s surprise, AI is already everywhere today, in familiar tools such as spreadsheets, autocorrect, and GPS, as well as in more advanced systems such as airplane autopilots and self-driving cars. Indeed, AI sits at the root of much modern automation: the algorithms controlling most manufacturing robots and advanced production lines include AI. Each of these examples uses similar technologies applied in dramatically different ways.

Generative AI

One hugely popular application of AI today is called generative AI because of its ability to create content such as text, images, music, and more. As the work these programs produce increasingly rivals that of their human counterparts, organizations looking to avoid being “left behind” and to protect their employees must first understand how the technology works. Just as no baby is born fluent in language, no AI tool is magically created with the power to generate media; it must be trained on pre-existing data.

Training an AI model involves feeding vast amounts of text, images, or other media into a complex system of mathematical equations. As the program “learns” to distinguish different pieces of media, the parameters of these equations adjust to reflect that growing understanding and to optimize the eventual creation of new content. Suppose a language model is being trained on English. As it reads hundreds of thousands of sentences, its algorithm observes patterns and grows better at predicting what comes next. For instance, it could learn that when it sees the phrase “The cat in the,” it should write “hat.” While this example is highly simplified, it highlights that models are only as good as their input data and create outputs based on what they have seen.
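To make this concrete, here is a minimal sketch (in Python, purely illustrative, using a made-up training text) of “learning to predict what comes next”: it counts which word follows each two-word phrase in the training data. Real generative models use vastly larger neural networks, but the underlying principle, that outputs reflect patterns in the training data, is the same.

```python
from collections import Counter, defaultdict

# Tiny illustrative "training corpus"; a real model sees billions of words.
training_text = (
    "the cat in the hat . the cat sat on the mat . "
    "the cat in the hat came back ."
)

# Count which word follows each two-word context in the training text.
counts = defaultdict(Counter)
words = training_text.split()
for w1, w2, w3 in zip(words, words[1:], words[2:]):
    counts[(w1, w2)][w3] += 1

def predict_next(w1: str, w2: str) -> str:
    """Return the word most often seen after the context (w1, w2)."""
    return counts[(w1, w2)].most_common(1)[0][0]

print(predict_next("in", "the"))  # -> "hat", exactly as in the example above
```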

In addition to demystifying the capabilities of AI for business leaders, this technical understanding is crucial for anyone planning to use these tools. Without knowing how the tools function at a basic level, users will endure endless frustration whenever an AI tool produces faulty, unexpected, or even dangerous outputs. With the right awareness of the data AI models were trained on and their intended uses, however, users can effectively utilize and collaborate with generative AI tools.

The following are seven actions required to make AI tools (including generative AI) work effectively, productively, and ethically during these early stages:

1. Verify AI-produced content: Many users have been surprised by the inaccuracy of generative AI tools, particularly their confident “hallucinations”: fabricated statements presented as fact in response to prompts. Because of how these models are trained, slightly different pieces of information that look similar can become linked and merged unexpectedly, producing outputs that contain misinformation. For now, human users retain the critical role of verifying AI-produced content, as the sketch below illustrates.
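Here is a minimal sketch of that human-in-the-loop role; the function names are hypothetical placeholders, not a real API:

```python
# Minimal sketch of human-in-the-loop verification for AI-generated text.
# `generate_draft` is a hypothetical stand-in for any generative AI call;
# the point is that nothing is published until a person reviews it.

def generate_draft(prompt: str) -> str:
    # Placeholder for a real model call; output may contain hallucinations.
    return "Revenue grew 40% last year, according to the annual report."

def human_approved(text: str) -> bool:
    reply = input(f"Please verify before publishing:\n{text}\n[y/N] ")
    return reply.strip().lower() == "y"

draft = generate_draft("Summarize last year's results.")
if human_approved(draft):
    print("Published:", draft)
else:
    print("Draft rejected; check the claims against primary sources.")
```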

2. Keep technology ethical and transparent: Along with verifying accuracy, human users have an important role in respecting privacy, maintaining transparency, and not using information obtained without its creators’ consent. If users and stakeholders are not properly informed of how models work and educated on their limitations, unethical sharing of knowledge and misappropriated information may proliferate. Large AI models by their nature act as “black boxes”: it can be hard to evaluate accurately what a model will produce and which sources or training criteria shaped a particular output. National governments and the organizations that create models are working on regulation in this area, but it will likely be years (if ever) before practical, globally consistent standards are developed and implemented, with legislation lagging the cutting edge of the technology.

3. Keep confidential information safe: Proper usage standards should include guidelines and procedures to keep confidential data from being exposed to the public. To let AI better reference and use proprietary information when generating content, models can be trained on internal, protected documents. However, this carries risks: the model may accidentally release information, whether as confidential details in a generated report that reaches a broad audience or as protected intellectual property leaked through a customer-facing chatbot. Organizations must develop and implement policies and checks to prevent leaks and ensure the safe usage of tools and platforms; a simple illustration follows below.
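As one illustration, here is a minimal sketch of a pre-submission check that redacts known-confidential strings before a prompt leaves the organization. The denylist and document-ID pattern are hypothetical; a real policy would be far broader:

```python
import re

# Hypothetical examples of confidential terms and an internal document-ID
# format; real deployments would maintain these centrally.
DENYLIST = ["Project Falcon", "Q3 acquisition target"]
DOC_ID_PATTERN = re.compile(r"\bDOC-\d{6}\b")

def scrub(prompt: str) -> str:
    """Redact known-confidential strings before sending a prompt to an
    external AI service."""
    for term in DENYLIST:
        prompt = prompt.replace(term, "[REDACTED]")
    return DOC_ID_PATTERN.sub("[REDACTED-ID]", prompt)

print(scrub("Summarize the status of Project Falcon in DOC-123456."))
# -> "Summarize the status of [REDACTED] in [REDACTED-ID]."
```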

4. Provide context and handle outliers: Modern AI effectively synthesizes the content on which it is trained but still lacks situational awareness and judgment. A recent real-life example: Bay Area cousins of the authors were featured in an article describing a stand-off they had with a self-driving car one evening while driving home from dinner. As on many narrow two-way San Francisco streets, there was room for only one car to pass between the vehicles parked on either side. Usually, drivers make eye contact and one backs up, or pulls into a nearby driveway, to let the other pass. The autonomous vehicle kept coming, stopped, backed up two inches, then stopped again and started flashing its lights. The human driver finally backed down the street and turned around. While AI tools learn more each day, there remains a vital role for human users to provide context. Roles such as AI Prompt Engineer, individuals highly skilled at leveraging generative models, have begun to emerge. People in such roles get the most out of a model by supplying context and specifying the desired output in ways that are highly efficient and structured for the model, though potentially non-obvious to a human; a sketch of such a prompt appears below.
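For instance, here is a hypothetical prompt template of the kind a prompt engineer might build, making role, context, constraints, and output format explicit (the scenario and field names are illustrative only):

```python
# Hypothetical structured prompt template. Prompt engineers spell out
# role, context, constraints, and output format explicitly, which is
# efficient for the model even if it reads stiffly to a human.

def build_prompt(context: str, task: str, output_format: str) -> str:
    return (
        "You are a cautious driving assistant.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond only as {output_format}, with no extra commentary."
    )

prompt = build_prompt(
    context="Narrow two-way street; cars parked on both sides; "
            "one lane free; an oncoming car is waiting.",
    task="Decide whether to yield or proceed, and explain why.",
    output_format="a JSON object with keys 'action' and 'reason'",
)
print(prompt)
```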

5. Address bias and provide compassion: Because generative AI models are trained on content that includes the biases of the humans who created it, models learn historical biases (including analytical biases such as recency bias and social biases such as discrimination against marginalized groups). AI is only as good as the data it is trained on: if bad data goes in, bad outputs come out. While AI can learn to produce output that mimics empathy, it lacks the subjective feelings underlying genuine empathy, understanding, and compassion. Human users must first be cognizant of the information they give models to avoid introducing bias. They must then interpret and frame outputs with realistic expectations, addressing bias in real time and retaining a layer of compassion when using AI to produce work.

6. Invent and innovate: While generative AI tools can create new content based on existing information, learn from it, and improve code and processes, they currently cannot invent wholly new concepts or creative ideas. AI is remarkably good at replicating and contributing to areas in which it has been trained, but generative models are less useful for creating genuinely novel information, and it remains to be determined whether they can create fundamentally new ideas. As such, humans remain crucial in creative fields and in devising brand-new solutions.

7. Complete the work: Generative AI tools have many different uses. For some jobs, a tool may perform certain aspects of the work, but users still need to complete tasks by adding their insights and shaping outputs based on their skills and experience. In virtually all cases, it is up to the human user to finish the job. For example, initial tests of GitHub Copilot, an AI tool that writes code as a developer types (like a very sophisticated autocomplete), show that developers may work 55% faster when using it, as they need fewer keystrokes to achieve their goals. Developers can therefore spend more time planning the conceptual, macro aspects of software rather than writing lines of code. However, the tools make mistakes and cannot yet formulate large, complex applications. As such, human users remain necessary to direct AI at a high level, fix mistakes, and frame work appropriately, as the sketch below illustrates.
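As a hypothetical illustration (not an actual Copilot suggestion), here is the kind of plausible-looking completion such a tool might produce, with the human contribution shown as the fix a reviewer would add:

```python
# Hypothetical illustration of "completing the work": an autocomplete-style
# suggestion that looks right but fails on an edge case a human must catch.

def average(values: list[float]) -> float:
    # Suggested completion: `return sum(values) / len(values)`.
    # That one-liner crashes on an empty list; the human reviewer adds
    # the guard the tool omitted.
    if not values:
        return 0.0
    return sum(values) / len(values)

print(average([3.0, 4.0, 5.0]))  # -> 4.0
print(average([]))               # -> 0.0 instead of a ZeroDivisionError
```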

Today, we have access to generative AI tools with many benefits. In a business context, they may assist with writing job descriptions, creating computer code, drafting sales plans, developing marketing messages, creating operations task lists, generating research, and answering routine employee and customer questions. As the tools mature and become relevant in more areas, the role of human users must adapt to maximize their benefits while mitigating their risks. Moving forward, users should understand the basics of how these tools learn and generate content so they can counter ethical issues and work with AI most efficiently and effectively.

About the authors

Jackson F. Bremen is a third-year student in Northwestern University’s McCormick School of Engineering, majoring in Computer Engineering, minoring in Cognitive Science and Data Science, and serving as co-President of the Northwestern University Robotics Club. John M. Bremen is Chief Strategy, Innovation & Acceleration Officer for WTW, a global advisory company providing solutions in the areas of people, risk, and capital.

The contents represent their views, and none of the information conveyed should be considered advice from them or their respective organizations.

Special thanks to Katie Mumford for helping to proofread and edit the piece.
