Dr Nick Byrd Of Stevens Institute of Technology: How AI Is Disrupting Our Industry, and What We Can Do About It

An Interview With Cynthia Corsetti

Cynthia Corsetti
Published in Authority Magazine
Jan 1, 2024 · 19 min read


Don’t just offer criticism, invite it. If we want to prevent what the critics are concerned about, some critics should be invited to advise the development of new technology. Some do not have the foresight to invite criticism, but others are so interested in the benefits of disagreement and criticism that they have standardized and implemented techniques like red teaming (CIA 2009).

Artificial Intelligence is no longer the future; it is the present. It’s reshaping landscapes, altering industries, and transforming the way we live and work. With its rapid advancement, AI is causing disruption — for better or worse — in every field imaginable. While it promises efficiency and growth, it also brings challenges and uncertainties that professionals and businesses must navigate. What can one do to pivot if AI is disrupting their industry? As part of this series, we had the pleasure of interviewing Dr. Nick Byrd, Assistant Professor of Philosophy, Affiliate Faculty in the Institute for AI, Stevens Institute of Technology.

Dr. Byrd has been an Assistant Professor at the Stevens Institute of Technology since 2021. He focuses on decision science, experimental philosophy, and applied ethics. A philosopher-scientist, Byrd studies how changes in judgements, decisions, and beliefs affect reasoning and well-being. His pursuits blend philosophical considerations with technology, and in doing so, attract the attention and financing of those who’d like to translate intellectual principles into real-world applications.

Thank you so much for joining us in this interview series. Before we dive into our discussion, our readers would love to “get to know you” a bit better. Can you share with us the backstory about what brought you to your specific career path?

Engineering is what I thought I would do, but I ended up doing quantitative cognitive science.

I grew up woodworking, spent some summers in the building trades, and started college in an engineering track. However, an undergraduate logic course made me realize I am interested in more than just engineering decisions; I am also interested in decisions about what we ought and ought not to do (economically or morally) and decisions about what we should and shouldn’t believe (about geopolitics, medicine, religion, etc.). And graduate courses in cognitive science showed me how much more I could learn about decision-making with computational, scientific, and statistical tools. So by the time I earned a Ph.D., I was running psychological experiments about critical thinking, morality, politics, public health, religion, and other domains.

What do you think makes your company stand out? Can you share a story?

Critical innovation may be what sets Stevens apart. New tools always introduce both risks and rewards. Some people focus only on the risks, but Stevens attracts people who care about both: we want to identify the risks, but we also want to improve the risk:reward ratio. For example, Stevens has faculty who not only acknowledge the concerns about generative AI in higher education, but also train large language models to tutor students better than off-the-shelf models. These people who set Stevens apart prioritize both criticism and innovation: hence, critical innovation.

Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?

Probably not intelligence, efficiency, or charm. I’ve never been the smartest, fastest, or most likable of my peers. And, of course, I was born into great circumstances relative to the rest of the world and I’ve benefitted enormously from good luck ever since.

Nevertheless, perseverance has been instrumental in my career. However, my persistence may not have been rational. For example, I submitted dozens of graduate school applications over the course of about four years before I received anything other than radio silence or rejection from Ph.D. programs. And when I was finally offered admission, it was only because other candidates declined their offers, allowing waitlisted people like me to accept. I had a similar approach to the job market. In my final year as a doctoral student, I applied to nearly 300 jobs and received only about three offers — a “success” rate of about 1%. A more rational and efficient person might have found a better way to invest the enormous amount of time I spent on these applications. A more charming person may have had stronger applications or interviews.

Let’s now move to the main point of our discussion about AI. Can you explain how AI is disrupting your industry?

The introduction and widespread adoption of AI tools like ChatGPT have continued to disrupt the higher education sector. The generative AI doomsayers have confidently speculated about how generative AI will undermine educators’ goals, while the generative AI influencers quickly pivoted from posts about research to posts about prompt engineering tips and the like. I don’t fit neatly into either of these groups. I have concerns and have been modifying my courses since 2022 — sometimes to thwart generative AI, but sometimes to embrace it.

My current thinking is that we should not immediately trust or deploy what AI generates for us. Rather, I think generative AI is a tool; and the value of a tool largely depends on how we use it. There are surely ways to use generative AI well.

One revelation from the generative AI boom is that many assessments in higher education have probably not been measuring the kind of learning and skill we had hoped. Large language models have finally made it abundantly clear that a decent essay can be created via mimicry rather than intelligence, comprehension, or critical thinking. So I hope one result of the AI boom is that we start subjecting our educational methods and assessments to valid tests. As educators, we may have plausible ideas about how to educate and test our students. Generative AI may even help us refine those ideas, but until those ideas survive a range of tests, they’re just ideas.

Which specific AI technology has had the most significant impact on your industry?

In addition to disrupting the teaching side of higher education, large language models have also created new avenues in research.

Machine text annotation. Some decision scientists record transcripts of people’s entire decision-making process to understand each part of it — as opposed to just the final answer (Byrd et al. 2023). These transcripts are rich with insight. For instance, you may expect that when people encounter a counterexample, they will be more likely to change their mind. However, it takes research assistants months or years to go through thousands of decision transcripts to determine which ones considered a counterexample — and that’s just one question we may have about the transcripts. A large language model may be able to annotate transcripts about as well as human assistants, but more quickly and less expensively. This raises questions about whether and how large language models can be used to annotate and quantify decision transcripts and other qualitative data (e.g., Ding et al. 2023).
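To make this concrete, here is a minimal sketch of what LLM-assisted annotation could look like, assuming the OpenAI Python client; the model name, prompt wording, and example transcript are illustrative placeholders, not the pipeline used in the cited studies.

```python
# A minimal sketch of LLM-assisted transcript annotation.
# Assumes the OpenAI Python client (openai>=1.0); the model name,
# prompt wording, and example transcript are illustrative placeholders,
# not the pipeline used in the cited studies.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANNOTATION_PROMPT = (
    "You are annotating a think-aloud decision transcript. "
    "Answer YES or NO: does the participant consider a counterexample "
    "to their initial answer at any point?\n\nTranscript:\n{transcript}"
)

def considered_counterexample(transcript: str) -> str:
    """Ask the model to label one transcript (ideally 'YES' or 'NO')."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model
        messages=[{
            "role": "user",
            "content": ANNOTATION_PROMPT.format(transcript=transcript),
        }],
        temperature=0,  # favor stable, reproducible labels
    )
    return response.choices[0].message.content.strip()

transcripts = [
    "I first thought 10 cents... but if the ball were 10 cents, "
    "the total would be $1.20, so it must be 5 cents.",
]
print([considered_counterexample(t) for t in transcripts])  # e.g., ['YES']
```

In practice, any such machine labels should be validated against human annotators on a subset of transcripts before being trusted at scale.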

Machine Psychology. Large language models are trained on massive amounts of human thought. This is partly why they are so good at mimicking human writing. But are LLMs so good at mimicking human thinking that they can become psychology research participants? Some researchers have already begun giving LLMs the same prompts that have been given to humans in psychological experiments about law and morality — and the response patterns are often strikingly similar to what has been observed from human participants (Almeida et al., 2023). Results like these raise questions about how LLMs can — or should — replace some human participants (Crockett and Messeri 2023).
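For illustration, a “silicon participant” study might be sketched as follows, again assuming the OpenAI Python client; the vignette, rating scale, and model name are invented placeholders rather than Almeida et al.’s materials.

```python
# A minimal sketch of the "machine psychology" approach: give an LLM the
# same vignette shown to human participants and sample many responses to
# compare the resulting distribution with human data. The vignette, scale,
# and model name are invented placeholders, not Almeida et al.'s materials.
from collections import Counter
from openai import OpenAI

client = OpenAI()

VIGNETTE = (
    "A driver exceeds the speed limit to rush an injured friend to the "
    "hospital. On a scale from 1 (not at all wrong) to 7 (completely "
    "wrong), how wrong was the driver's action? Reply with one number."
)

samples = []
for _ in range(50):  # repeated sampling stands in for a participant pool
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": VIGNETTE}],
        temperature=1.0,  # keep variability, as across human participants
    )
    samples.append(response.choices[0].message.content.strip())

print(Counter(samples))  # distribution to compare against human ratings
```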

Can you share a pivotal moment when you recognized the profound impact AI would have on your sector?

Everyone’s reading, writing, and editing skills have been discounted since OpenAI’s ChatGPT was launched in the fall of 2022. Large language models generate a first draft faster than anyone I know — not just students, but also academics and professional writers. At first, I felt only concern about this. But upon reflection, I felt excitement because I remembered that I have never been an outstanding communicator, and relatively poor communicators (like me) may have the most to gain from integrating large language models into tools like word-processing software. Of course, the best writers may have the most to lose from the adoption of large language models. I still find that troubling.

How are you preparing your workforce for the integration of AI, and what skills do you believe will be most valuable in an AI-enhanced future?

I have no illusions that my ideas or the skills I teach will be the most valuable in an AI-enhanced future, but here are two ways I have been acclimating to the integration of AI in higher education.

1. Logic mapping. Since fall 2022, I’ve required students to visually map the logic of both assigned readings and their own papers. My students have always been allowed to consult peers when writing their papers. Now, if they consult a generative AI system (or lazily let the generative AI write the whole paper), they still need to distill the logic of the essay into an argument map (something that requires some understanding and skill). Mapping an argument can also reveal its flaws, allowing the student an opportunity to improve upon the ideas they’ve learned from the reading, their peers, or a chatbot. And, of course, mapping a bad argument or mapping an argument badly can also reveal what a student hasn’t yet learned. So far, I am satisfied with the results. Students’ ability to create good argument maps varies about as much as students’ ability to (re)construct good arguments did before the popularity of generative AI. I am not yet sure if argument mapping provides other benefits, but I am studying this in my research. Until that research is published, interested readers can consult the history of research on the topic to make their own decision (e.g., Twardy 2004). For now, argument maps are relatively standardized and the generative AI systems I have tested cannot yet reliably produce them. So argument map assignments seem better for evaluating students’ critical thinking than traditional essay assignments. Moreover, students seem to understand debates better when they are well mapped, either by them or someone else. Interested readers can see a crowdsourced argument map on the topic of AI in higher education on Kialo.

2. Testing generative AI. In the fall of 2023, I applied an idea from Cameron Buckner to my Philosophy of Mind course. Students had to put GPT-4 through various cognitive tests to determine whether it had achieved the “human-level reasoning” that some Microsoft researchers claimed it had. Then students had to map an argument about whether the chatbot had achieved human-level reasoning and then map their best objection to their argument. The class remained divided about whether GPT-4 had achieved human-level reasoning, but convincing my students was never part of my goal. I just wanted to get students into the habit of engaging critically with assistive technology like large language models. Just like we need to reflectively override automated spelling and grammar suggestions in our word processing apps — actually, Microsoft, that is a word(!) — we need to reflectively engage with and potentially discount the output of large language models. After all, I expect most students to use generative AI — and they should, given that many of their future employers will expect them to use it. I’d like to prepare students to use generative AI well. We often encounter decisions that can benefit from Socratic discussion. However, we often have to make these decisions before we can talk to a human interlocutor. My colleagues’ and my research suggests that solitary reflection fails to correct some of our faulty impulses in 20 to 77 percent of the cases tested so far (Byrd et al., 2023). There is a growing body of evidence, however, that more social reflection can improve thinking. So if we can train people to use generative AI Socratically, then people may have more opportunities to engage in and benefit from social reflection than their human social networks allow.

What are the biggest challenges in upskilling your workforce for an AI-centric future?

Academia is remarkably conservative: its improvements in research and teaching are often generational (e.g., hiring new generations of faculty as existing generations retire) or else reactive to outside forces (like generative AI). And even when universities are forced to react to new technology or ideas, the reaction may not keep pace with advancements happening outside universities. Think about how online education has been made more accessible and enjoyable by the likes of Khan Academy or Codecademy. Although many universities have attempted to make their educational content more accessible, I rarely find university content that does as well at combining instruction, assessment, and gamification. So I will not be surprised if the best or first AI-based education and learning platforms are generated outside universities (even if the ideas, skills, and teams were cultivated at universities).

What ethical considerations does AI introduce into your industry, and how are you tackling these concerns?

Ethics of AI is often just an extension of existing ethics. Consider two examples:

Sourcing. Many concerns about generative AI in education seem to trace back to a concern about sourcing — the chatbot will generate responses without being able to explain the source of each part, failing to give credit where it is due (or, just as bad, giving credit to the wrong people or people who don’t exist). However, the average college course was already rife with opportunities for students to assimilate ideas without tracing them to their source. If courses traced all their ideas back to their origin, each class would spend a huge portion of its time on the history of philosophy — even in computer science, business, and biology courses! I doubt that most instructors have spent heaps of time in class tracing the history of each idea their students encounter. And yet I haven’t heard a chorus of concern about this lack of sourcing. This makes me wonder if some concerns about sourcing can be traced to algorithm aversion: a resistance to relying on algorithms even if they perform at least as well as humans (Dietvorst, Simmons, and Massey 2015).

Employment. Technology often automates certain types of labor and knowledge work in ways that can reduce the demand for certain kinds of employment — printing presses and digital calculators probably put a lot of people out of work. Large language models may make professional writers, writing instructors, and editors feel like their livelihoods are under threat. And students who are about to graduate with a degree in writing may be wondering if they will need to pivot before graduation. I can relate to some of this. I have watched the perceived value of traditional philosophy wane for years: the proportion of philosophy degrees has fallen, and some philosophy departments have had to close their doors. I am about as invested in Philosophy as one can be, and even I have found myself wanting more than the logical rigor and open-minded inquiry that philosophers offer. So I began learning new skills and methods and partnering with people in other fields. Fortunately for me, there are universities (like mine) that have embraced more scientific approaches to fields like Philosophy. Moreover, some evidence suggests that empirical and experimental philosophy have become the dominant and more impactful form of philosophical research (Knobe 2015). Perhaps other fields and professions will experience similar changes as generative AI becomes more widely adopted.

What are your “Five Things You Need To Do, If AI Is Disrupting Your Industry”?

How about four?

1. Critique it. There’s plenty to worry about as new technology is adopted. We tend to adopt technology faster than we imagine or test all of its consequences. Yet the movie Oppenheimer reminded us how developers of nuclear weapons may have been discussing even extremely low-probability risks (such as igniting the atmosphere) before deploying the technology. Of course, we will not (and cannot) foresee or prevent all risks. And even when risks are understood, technology can be used in regrettable ways. But insofar as criticism is actionable, it may help people identify and mitigate some risks during the adoption process.

2. Don’t just offer criticism, invite it. If we want to prevent what the critics are concerned about, some critics should be invited to advise the development of new technology. Some do not have the foresight to invite criticism, but others are so interested in the benefits of disagreement and criticism that they have standardized and implemented techniques like red teaming (CIA 2009).

3. Team up with experts, practitioners, and stakeholders. Unlike many forms of criticism, developing technology is a team sport, requiring a diverse set of expertise, skills, experience, and viewpoints. Even the most successful corporations may not be considering enough input when they develop products and services with substantial societal impact. Rather than consider just stockholders, we need to consider other stakeholders as well (Freeman 1994). Otherwise, exuberance, profitability, and power will crowd out other important considerations.

4. Test, evaluate, and publish. Universities are a great place for skills and teams to form and for initial testing to begin. But ideas need to graduate from academic research to real-world testing and evaluation — ideally in the fields where they will be deployed. And any AI development that has benefitted from public funding should (as a condition of the funding) be required to publish the results of its testing and evaluation — and make it freely available to the taxpayers who paid for it (Tollefson & Van Noorden 2022). Even private funding organizations have realized the need for this kind of open access requirement (Bill & Melinda Gates Foundation 2021).

What are the most common misconceptions about AI within your industry, and how do you address them?

Misconception 1: Students are unduly impressed by generative AI

Some people in Higher Education seem to think that we need to convince students that artificial intelligence isn’t very intelligent. They find stats like “32% of students intend to use or continue using AI to complete assignments” and conclude that students trust generative AI too much.

But students are smart. They probably know that today’s generative AI is not very intelligent or reliable. Their use of generative AI is not a sign that they’re impressed by it. Students do lots of things that they know are unwise:

  • procrastinate
  • bring distractions to class
  • skip the assigned reading and/or skip class
  • copy others’ work or pay others to do their homework
  • buy and sell class notes
  • pirate copyrighted course materials (e.g., annotated tests, assignments, slides, etc.)

Sermonizing about how these methods are unwise or risky would waste just as much time as lecturing them about how generative AI can fail them. Students already know.

Misconception 2: Conventional assessments like essays and online tests are valid

One reason that so many students use unreliable methods in our classes is that many can get away with it. They can pass our tests of critical thinking or comprehension with little to no critical thinking or comprehension — so can generative AI, it seems. That means our tests aren’t valid. This shouldn’t be surprising. Very few assessments in education are formally tested.

Curiously, there is an entire academic field dedicated to developing and validating curricula and assessments! Some of these specialists are in academia (e.g., Educational Psychology or Quantitative Methods) and some of these specialists are in industry (e.g., the Educational Testing Service). These specialists can test how well professors’ writing prompts and test questions measure what the professors think they do. They may even be able to develop and validate new curricula that make it easier for people to learn, remember, and apply the material in the first place.

Alas, I have encountered only a handful of faculty who employ the quantitative methods of educational psychology to validate any part of their curriculum. And I have yet to find a university-wide course assessment program that measures anything beyond students’ perceived learning (as opposed to, say, behavioral measures of learning).

So perhaps it is often rational for students to take shortcuts when completing our unvalidated busywork so that they can save time and energy for more rewarding opportunities.

Misconception 3: Full-time human instruction is essential for education

Every university I have worked for has a Writing Center. These centers’ goal is to provide support to students who need to complete writing assignments. You can probably imagine how helpful these centers have been for many populations — not least first-generation college students, students who learned English as a second language, students from under-resourced high schools, and differently-abled students.

However, you have probably known for many years how to make your writing passable, even without a Writing Center, thanks to software that automatically generates suggestions that improve your writing (e.g., Microsoft Word’s built-in readability scoring and grammar suggestions). With the popularization and rapid advancement of large language models, we have even more ways to get instant feedback on our writing. Crucially, these software writing aids are available even late at night when Writing Centers are closed and many students are starting to write their papers, presentations, and other projects that are due before the Writing Center re-opens the next day.

So Writing Centers and their educated, gifted, and outstandingly patient personnel may feel like they are facing an existential crisis. And that may be true if Writing Centers categorically ban generative AI. However, if they embrace AI and teach students how to use it as a Socratic interlocutor, they may be providing students with habits that they can apply long after graduation, when they no longer have access to a Writing Center.

Misconception 4: Using generative AI is passive, lazy, etc.

Sure. We can lazily use generative AI. Students can ask ChatGPT to write their term papers so they can do something unedifying instead. Professors can (and do) ask it to generate more unvalidated materials rather than quantitatively test and refine widely used curricula and share the results with their colleagues. However, if I employ someone to indulge my laziness, that is not an indictment of that person — it is an indictment of me.

Of course, we can also use generative AI like a forum where we learn from people with experience that we lack, find out about our blind spots, ask for alternative perspectives or objections, etc. For example, I can sometimes figure out how to code something myself. However, I can often make my code more efficient, reproducible, and comprehensible by seeking input from other coders with different experiences than mine. So, like many coders, I sometimes seek input on sites like Stack Overflow when coding. Many of the benefits of online forums are also benefits of seeking input from a generative AI: You can learn

  • any time of day (even when your peers or professors are sleeping)
  • from a wider range of experience and knowledge than yours (or your entire network’s)
  • which of your assumptions are faulty
  • when you’re not even asking the right questions (such as when you’re getting good answers that don’t actually solve your problem)

So it may be best for professors, students, and Writing Centers to think of generative AI like an immortal, never-sleeping Socrates. Its mechanisms and outputs can be as biased and fallible as any instructor’s. But — like any instructor — it can nonetheless serve as a third party that can reframe or even improve our thinking if we ask it to generate objections, alternative perspectives, counterfactuals, and the like. This may help us spot our faulty impulses, reflect on society’s default assumptions, consider overlooked stakeholders, etc.
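For readers who want to experiment, here is a minimal sketch of such a Socratic setup, assuming the OpenAI Python client; the system-prompt wording and model name are illustrative assumptions, not a validated recipe.

```python
# A minimal sketch of using a chat model as a Socratic interlocutor:
# the system prompt asks only for objections, perspectives, and questions,
# never answers or rewrites. The prompt wording and model name are
# illustrative assumptions, not a validated recipe.
from openai import OpenAI

client = OpenAI()

SOCRATIC_SYSTEM = (
    "Act as a Socratic interlocutor. Do not answer for the user or revise "
    "their text. Instead, raise the strongest objections, offer alternative "
    "perspectives, name overlooked stakeholders, and ask questions that "
    "expose unstated assumptions."
)

claim = "Universities should categorically ban generative AI."

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SOCRATIC_SYSTEM},
        {"role": "user", "content": claim},
    ],
)
print(reply.choices[0].message.content)
```

The design choice is simply to constrain the model to question and object rather than answer, shifting the cognitive work back onto the user.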

We already pay handsomely for biased, fallible feedback when we enroll in graded courses, hire consultants, or seek cognitive therapy. It may not be irrational to turn to generative AI for similar feedback when we don’t have access to a human that can provide better input. However, it probably is irrational to categorically ban generative AI, given its potential for Socratic reflection.

Can you please give us your favorite “Life Lesson Quote”? Do you have a story about how that was relevant in your life?

I can’t say I’ve found lasting value in quotes. I’ve found plenty of quotes that felt profound or inspiring, but I’ve never found a quote that provided further benefit. I have found some value in relatively short arguments and quantitative analyses of data, but short quotes are often uninformative or even misinformative without their full context. So, overall, I am unsure whether the benefits of quote trading outweigh its risks, on average.

Off-topic, but I’m curious. As someone steering the ship, what thoughts or concerns often keep you awake at night? How do those thoughts influence your daily decision-making process?

Geopolitics — not that I am steering any geopolitical ships. Humans have developed enormous potential to make the world worse. And even if the world has tended to become better over human history — notice that was a hypothesis rather than a description — the preventable and bad things that still occur seem unacceptable. Moreover, the probability that the world could become dramatically worse seems large enough to worry about.

You are a person of great influence. If you could start a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)

Interesting thought experiment in that opening premise! If I were a person of great influence, I doubt my ideas would be better than the best ideas we already have.

Generally, I am impressed by movements that transparently quantify their impact (good and bad) and iteratively test ways to have a better impact. But I wonder if that kind of movement is possible. After all, it works only insofar as practically everyone would consider some impacts good and some other impacts bad. I hope that such agreement exists and is sustainable, but it’d take enormous amounts of evidence to test my hope. Some of the movements I admire may be associated with the label “Effective Altruism”. However, (a) it is easier to claim to be part of that movement than it is to achieve that movement’s goals and (b) there are probably many people who do not identify as effective altruists that are nonetheless participating in the kind of movement that I have in mind. So “doing the most amount of good” may defy the label even if it largely aligns with proposals that have been dubbed “effective altruism”.

How can our readers further follow you online?

I maintain a website with a blog, podcast, and links to all my online profiles at byrdnick.com, but my university also has a static faculty webpage about me.

Thank you for the time you spent sharing these fantastic insights. We wish you only continued success in your great work!

About the Interviewer: Cynthia Corsetti is an esteemed executive coach with over two decades in corporate leadership and 11 years in executive coaching. Author of the upcoming book, “Dark Drivers,” she guides high-performing professionals and Fortune 500 firms to recognize and manage underlying influences affecting their leadership. Beyond individual coaching, Cynthia offers a 6-month executive transition program and partners with organizations to nurture the next wave of leadership excellence.
