The Challenges of Enterprise AI and LLM Adoption — The Human Factor

Szabolcs Kósa
Jul 27, 2023 · 12 min read


Information technology, since its inception, has presented us with a series of transformative leaps. Each evolution, from the dawn of Enterprise Resource Planning (ERP) and the Internet to Software as a Service (SaaS), cloud technology, agile methodologies, and digital transformation, has remodeled the way enterprises function.

Now, we stand at the brink of another technological revolution brought about by Generative AI and Large Language Models. Their emergence signals the need for a renewed examination of the intricate relationship between advanced technology and its human users within the enterprise environment.

This chapter continues our discussion of the challenges of AI adoption within enterprises, following the two preceding parts. This time, however, we zero in on the human element of the equation. The integration of Generative AI and Large Language Models is not just a technical endeavor. It represents a paradigm shift with deep implications for people at every level of an organization, one that goes beyond the usual adaptation to new tools and into the nuanced interactions between these intelligent systems and their human users.

Our focus, therefore, is on understanding this complex interplay. We examine the potential barriers to adoption from both human and organizational perspectives, underscoring the intricate dynamics that such technological advancements introduce to the workplace.

Depersonalization of Human Connection

Exploring the realm of AI-augmented communication presents us with an ironic paradox. These digital assistants strive to replicate our modes of communication, but in their pursuit of perfection, they may create an eeriness that disrupts our authentic interactions. There’s a strange beauty in human communication, marked not only by our eloquence but also our occasional faux pas — the typos, the autocorrect mishaps, the unintended puns. Our missteps, our errors, make us human.

Envision a workplace where every chat and email is impeccably polished, with flawless punctuation and not a typo in sight. Such perfection, particularly in communication, can become alienating. A robot can mimic our words, but it cannot recreate our shared human experiences. When AI tries to perfectly imitate human communication, it teeters on the edge of the uncanny valley. Consider a well-meaning ‘LOL’ coming from an AI bot: does it know the joy of a belly laugh or the absurdity of a dad joke? And when it attempts to replicate our distinctive idioms, the effect is akin to hearing a parrot quote Shakespeare; it just doesn’t sit right. The cringe factor comes from the dissonance between the algorithmic rigidity of AI and the fluid spontaneity of human interaction. This is particularly evident in the realm of customer service.

As AI and large language models become more advanced in their ability to respond to customer queries and mimic human-like interaction, they risk falling into the unsettling realm of the uncanny valley. This refers to a concept from robotics where a robot that nearly resembles a human can appear eerie or off-putting. In a customer service context, this phenomenon can adversely affect customer relationships and brand perception. An AI that seems too human-like can unsettle customers, making interactions feel artificial and disturbing, rather than comforting and genuine. Balancing the efficiency and scalability of AI with the need for authentic, human-like interactions is a challenge that businesses need to navigate carefully.

As we increasingly lean on AI assistants within our digital workspaces, we may inadvertently introduce a fresh layer of intricacy. These tools, always on and always responsive, might end up talking to each other non-stop. One can envision returning to the office in the morning to be greeted by a deluge of automated exchanges filling up your inbox, the byproduct of an unremitting digital ping-pong match played out by overzealous AI assistants in your absence. Such endless correspondence adds to the cognitive strain of managing an already complex digital workspace. There is also the risk of critical information being lost or misinterpreted in the automated chatter, not to mention the security and privacy implications of such autonomous interactions.
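To make this ping-pong failure mode concrete, here is a minimal Python sketch, not any vendor’s API, of two auto-responders exchanging messages and of the kind of hop-count budget that could bound the loop. The Message structure, the MAX_HOPS policy, and the escalate-to-human behavior are all illustrative assumptions.

```python
# Sketch: why two always-on assistants can reply to each other indefinitely,
# and how a simple hop counter bounds the thread. Hypothetical, not a real API.
from dataclasses import dataclass
from typing import Optional

MAX_HOPS = 4  # illustrative policy: park the thread after 4 automated replies

@dataclass
class Message:
    sender: str
    body: str
    hops: int = 0  # automated replies accumulated in this thread so far

def auto_reply(assistant: str, incoming: Message) -> Optional[Message]:
    """Reply automatically unless the thread has spent its hop budget."""
    if incoming.hops >= MAX_HOPS:
        return None  # stop replying; a human should look at the thread
    return Message(
        sender=assistant,
        body=f"Re: {incoming.body}",
        hops=incoming.hops + 1,
    )

# Two overzealous assistants left alone overnight:
msg: Optional[Message] = Message(sender="alice-bot", body="Q3 report follow-up")
responders = ["bob-bot", "alice-bot"]
turn = 0
while msg is not None:
    print(f"{msg.sender} (hop {msg.hops}): {msg.body}")
    msg = auto_reply(responders[turn % 2], msg)
    turn += 1
print("Hop budget reached; thread parked for human review.")
```

Without the budget, the while loop above would never terminate, which is exactly the overnight inbox deluge described above.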

While the automation of communication holds promise for increased efficiency and productivity, it also represents a potential minefield of challenges. Our laughs, our frustrations, our casual banter, even our blunders — they form the melody of our conversations, the rhythm of our interactions. Excessive automation can strip our exchanges of their inherent human elements — the nuances, the empathy, the understanding that makes communication truly meaningful.

Emotional and Social Impact

The increasing incorporation of AI into the workplace has the potential to extend far beyond merely boosting operational efficiencies. It could also fundamentally alter the nature of the human work experience. The move toward an AI-assisted, monitored, and measured environment may cause unease and anxiety among employees.

If AI’s role extends to analyzing employees’ written work, emails, or chat communications, it could be perceived as an intrusion into a traditionally human domain, encroaching on what employees regard as their ‘safe zones’ of communication. Employees may feel they are under the scrutiny of an ever-watchful digital entity, adding a layer of stress to their interactions.

This technological progress not only opens new doors to employee oversight but also potential paths for misuse and coercive control. The ability to record meetings and transcribe them, complete with name tagging, is an example of such a double-edged advancement. While beneficial for record-keeping and clarity, it can also be misused for invasive scrutiny.

Another intriguing development, fueled by the increasing sophistication of AI, could be a shift in employees’ roles: from solely assigning tasks to AI to also finding themselves on the receiving end of AI-generated tasks. This could fundamentally alter the power dynamics within the workspace. After all, it’s one thing for a human to issue tasks to a digital assistant; it’s quite another for that assistant to return the favor. This new reality could engender discomfort or unease as employees adjust to the novel experience of being ‘managed’ by an AI. Such a shift might not only reshape their daily workflow but could also trigger introspective questions about their role and value within the organization. The idea of AI stepping into a traditionally human space may be disconcerting for many, presenting a unique challenge in the transition toward AI integration in the workplace.

A particularly potent concern is the looming fear of job security, largely fueled by the relentless media hype surrounding AI. This fear often exists independently of any CEO’s agenda or strategic direction; the mere possibility, suggested by speculative news and trend reports, has the power to unsettle employees. And as we’ve noted earlier, such fears are not entirely baseless, given the potential impacts of over-reliance on AI.

These issues could potentially foster a range of negative impacts within the workplace. Employees might resort to disrupting AI systems, either subtly through misinformation or overtly through system interference, as a form of rebellion against perceived AI dominance. The constant comparison with AI might induce a form of ‘imposter syndrome’ among employees, leading to stress and potential burnout. It could also result in an ‘AI divide’ within teams, causing feelings of exclusion and demotivation among those who struggle with the technology.

AI’s expanding influence can also ripple through HR processes such as performance evaluations and hiring. There’s a risk that over-reliance on AI-derived data could overshadow essential human attributes such as emotional intelligence and creativity. Similarly, AI algorithms acting as primary filters in hiring might overlook genuine talent because of algorithmic biases or a limited grasp of human nuance.

Over-Dependence and Complexity

The landscape of business operations, increasingly dominated by AI, prompts a critical conversation about overdependence and its implications. We find ourselves in an era where the delegation of cognitive tasks to AI agents is pervasive, and this raises legitimate concerns. Are our critical thinking and decision-making abilities at risk of atrophy in the face of increasing reliance on AI? As AI’s presence expands, there is a very real danger that we forget how to navigate problems without an AI crutch. Like the panic when your GPS dies and you have no idea which exit to take on the highway (it has happened to me more than once), we risk standing at a cognitive junction, scratching our heads without AI’s directions. Our over-reliance might just drive our decision-making skills off the map!

The evolving workplace landscape is growing exponentially more complex as we integrate an array of autonomous agents, each dedicated to specialized tasks. This evolution paints a picture of a hybrid human-AI ecosystem. Though revolutionary, this transformation can leave human employees grappling to navigate an intricate, AI-saturated work environment that bears little resemblance to traditional models.

This complexity can be disconcerting, potentially intensifying feelings of disconnection and unease. It’s akin to being dropped into a maze, where familiar paths are replaced by a web of AI-driven processes that can seem alien and intimidating. Human employees may experience a heightened sense of being out of place, an anachronism in their workspace. The result is a challenge we can’t ignore: balancing the undeniable power of AI with the need for a workspace where humans continue to feel engaged, valuable, and at ease.

The intersection of complexity and overdependence could become more pronounced in the future. Autonomous AI agents, incorporating large language and other generative models in their architecture, might be used in fundamentally different ways than traditional human workflows. These agents, not yet fully integrated into business operations, could be pivotal in the times to come, replacing manual methods with advanced, AI-driven processes.

Historically, human teams have been able to react quickly, adapt, and find a ‘workaround’ in disruptive situations. However, in a future dominated by AI agents, this immediate adaptability could be compromised. If an AI system experiences issues, our ability to mount a rapid, effective response may be tested.

In such a future, disruptions could escalate from minor inconveniences to severe operational impediments. Entire processes could be halted until the AI agent is restored, impacting business continuity, productivity, and possibly even revenue streams.

This potential future scenario presents us with a significant challenge. We might be forging a path toward highly specialized and advanced AI-powered systems, but this trajectory could inadvertently lead us into situations where we are less prepared to deal with disruptions. The traditional ‘workaround’ strategies we often employ may no longer be applicable or effective.
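One way to keep a workaround available is to design it in from the start. The sketch below is a hypothetical Python example, not a pattern prescribed anywhere in this series: an AI-driven step degrades to a manual review queue instead of halting the whole process. The AgentUnavailable exception and the invoice-classification agent are invented for illustration.

```python
# Sketch: degrade to a human path when the agent fails, instead of stopping
# the process. All names here are hypothetical.
import random

class AgentUnavailable(Exception):
    """Raised when the autonomous agent cannot be reached."""

def classify_with_agent(invoice: dict) -> str:
    # Stand-in for a call to an autonomous AI agent; fails randomly here
    # to simulate an outage.
    if random.random() < 0.5:
        raise AgentUnavailable("agent endpoint not responding")
    return "approved"

manual_review_queue: list[dict] = []

def classify_invoice(invoice: dict) -> str:
    """Try the agent first; fall back to the human queue on failure."""
    try:
        return classify_with_agent(invoice)
    except AgentUnavailable:
        manual_review_queue.append(invoice)  # the engineered 'workaround'
        return "queued_for_human_review"

for i in range(4):
    print(f"invoice {i}: {classify_invoice({'id': i, 'amount': 100 * i})}")
print(f"{len(manual_review_queue)} invoice(s) waiting for a human reviewer.")
```

The point is not the code itself but the design decision it encodes: the manual path exists because someone chose to build it before the outage, not during it.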

Shadow AI and Bring Your Own AI

The continuous development of corporate technology brings with it complexities that mirror obstacles faced in the recent past. As we venture further into the world of artificial intelligence, we are experiencing a sequence of challenges similar to those that accompanied previous IT innovations. Much like the infamous ‘Shadow IT’ phenomenon of yesteryear, a new form of subterranean activity could emerge in the corporate landscape. Let’s call it Shadow AI, for want of a better term.

Shadow AI can be likened to its IT counterpart in more ways than one. When the wheels of corporate governance turn too slowly, or when red tape becomes a hindrance, some internal teams may choose to take matters into their own hands. These well-intentioned yet unsanctioned AI initiatives can lead to the same kind of risks and dilemmas that Shadow IT once posed — lack of transparency, unforeseen security risks, potential compliance issues, and problems with operating the technology in the long run.

Meanwhile, as AI tools become more personal and ubiquitous, another trend begins to unfold: the AI equivalent of the Bring Your Own Device era, when employees brought their personal laptops or tablets to work. In the quest for efficiency and smart work tools, employees may independently introduce AI assistants into their professional space. For the purpose of our discussion, let’s refer to this as Bring Your Own AI.

The idea of Bring Your Own AI (BYOAI) poses a unique challenge in an era where AI assistants are easily accessible and affordable for forward-thinking employees. The clear advantage of this undisclosed BYOAI practice lies in the enhanced productivity it can foster. Employees can leverage their AI tools to work more efficiently, potentially outperforming their colleagues who don’t use such aids. When these personal tools are discreetly utilized within traditionally rigid corporate settings, significant control issues arise.

These personal AI tools operate outside corporate boundaries, posing risks of unintended exposure of sensitive information. Interoperability is not the concern here, since the tools work independently of corporate systems; it is precisely that independence that intensifies the security risk.
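To see what is lost when AI use goes undisclosed, consider the kind of pre-send control a sanctioned deployment can enforce and a personal tool simply bypasses. The following Python sketch is a toy illustration, not a real data-loss-prevention product; the redaction patterns and the send_to_external_model stub are assumptions made for the example.

```python
# Sketch: a minimal outbound-prompt gate of the sort a sanctioned AI gateway
# might apply. Patterns and the model call are illustrative, not exhaustive.
import re

REDACTION_PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key":     re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace sensitive spans before a prompt leaves the corporate boundary."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, findings

def send_to_external_model(prompt: str) -> str:
    # Stand-in for a call to any third-party assistant; a real gateway would
    # also log findings and might block the request outright.
    return f"(model response to: {prompt!r})"

prompt = "Summarize the deal for jane.doe@example.com, card 4111 1111 1111 1111."
safe_prompt, findings = redact(prompt)
print("Redacted:", findings)  # -> ['email', 'card_number']
print(send_to_external_model(safe_prompt))
```

A personal assistant on a personal device sees none of this: the raw prompt, sensitive spans included, goes straight to the external service.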

The intricacies of “shadow AI” and “Bring Your Own AI” (BYOAI) may first manifest in subtle yet profound ways within a traditional corporate structure. It may not immediately dawn on you why your outsourced or international workforce has suddenly exhibited a marked improvement in language proficiency and communication finesse. Or why your team of analysts consistently delivers their findings ahead of schedule with unexpected efficiency.

These subtle transformations, while potentially enhancing productivity, can be symptoms of the undisclosed use of AI tools. Such a scenario underscores the dual nature of AI: it carries the promise of superior work outcomes, yet it simultaneously heightens the risk of exposing sensitive information.

These developments, while not explicitly defined or widely recognized yet, highlight the persistent tension between fast-paced technological innovation and the sometimes slower-moving corporate structures.

In a growing trend of caution against the unchecked use of AI, several major corporations have imposed restrictions on the use of AI assistants like OpenAI’s ChatGPT. Tech giant Apple has laid down a firm stance, prohibiting employees from utilizing AI assistants like ChatGPT as it develops its own rival model. Meanwhile, retail powerhouse Walmart has issued clear directives, urging employees not to disclose any company information to these AI tools and emphasizing a meticulous review of AI-generated outputs. South Korean conglomerate Samsung has made a sweeping move, banning the use of most generative AI tools, including ChatGPT, following an incident where sensitive data was uploaded to the system. This burgeoning trend of restrictions signals a cautionary approach to AI integration in business operations and sets the tone for how corporations may navigate this challenging terrain moving forward.

The Productivity Paradox

The concept of the Productivity Paradox was first articulated by Nobel laureate Robert Solow, who famously stated, “You can see the computer age everywhere but in the productivity statistics.” The paradox, later examined in depth by Erik Brynjolfsson, described a scenario in which increased use of computers did not immediately translate into productivity gains. Brynjolfsson hypothesized that this was due to a necessary lag phase: the time needed for complementary innovations and process adaptations to take hold. This framework is highly relevant when considering the adoption of enterprise-grade Artificial Intelligence and Large Language Models.

It’s important to clarify that this paradox isn’t an established reality in AI adoption, but rather a plausible scenario that might surface, considering our experiences with technology. The correlation between tech investments and productivity outcomes has often proven complex and less linear than anticipated. Adopting AI and LLMs in an enterprise is not a simple plug-and-play exercise. Similar to the lag phase experienced in the early days of IT, we can expect a delay before witnessing their full productivity impact. The real benefits of AI and LLMs will start to appear as organizations restructure their business processes, evolve their strategies, and upskill their workforce to exploit these advanced technologies effectively.

Potential missteps or myopic approaches during this implementation phase could, hypothetically, result in unrealized benefits or unintended consequences. For instance, it’s conceivable that a company might automate a process without factoring in the necessity of human-AI collaboration, leading to disarray and diminished efficiency.

The danger of cargo culting, exemplified by failed Agile experiments and digital transformations reduced to rigid, familiar project boxes, is a significant concern as AI adoption expands. These transformations fail when organizations superficially imitate new methodologies without grasping their fundamental principles.

The term “cargo cult” originates from the behavior of certain indigenous groups in the South Pacific during World War II, who observed air force personnel performing routine tasks on the landing strip and mimicked these activities in hopes of attracting cargo planes with valuable goods.

In the context of AI adoption, a cargo cult approach would involve hasty implementation of AI technology without considering the necessary organizational, procedural, and cultural changes. The consequence could be the creation of a façade of AI functionality while failing to realize its true potential and encountering unforeseen challenges such as biased decision-making and security vulnerabilities.

This chapter concludes our initial focus on the challenges of adopting Generative AI and Large Language Models within the enterprise in our “Pragmatist’s Guide to Enterprise AI and LLM Adoption” series. Starting with regulatory and control considerations, moving on to technical, security, and privacy issues, and most recently, exploring the human and organizational facets, we’ve touched on a wide array of complex aspects.

I realize that the narrative is far from complete, given the sheer complexity and rapidly evolving nature of the topic. Nonetheless, I hope that my perspective has served to highlight some key concerns and to illustrate that the issue extends beyond the often simplistic interpretations presented by media hype.

Having spent a good amount of time shedding light on the myriad challenges involved, I believe it’s time to shift our focus toward the opportunities that lie ahead. The software ecosystem is in a state of constant evolution, and it offers a fascinating landscape for enterprises willing to navigate its complexities.

That’s the direction we’ll be heading in the next segments of this series. I’ll be discussing the variety of strategies enterprises can employ to not just cope with but to effectively leverage the power of this digital transformation. I’ll aim to present insights into the evolving software ecosystem and outline practical tactics to help tackle the challenges we’ve identified.

I look forward to continuing this journey with you, turning the page from challenges to strategies, from obstacles to opportunities, in the world of enterprise AI adoption.


Szabolcs Kósa

IT architect, digital strategist, focused on the intersection of business and technology innovation