AI’s 2023: A Breakthrough Year Unlike Any Other

Vinit Tople
AI monks.io
May 9, 2024

Applications of AI have been around us for the past 20+ years, and the concept for over half a century. Even so, there is no denying that the world will view AI in terms of before and after November 2022, for obvious reasons. Going by that pivotal moment, 2023 was arguably the first year of AI, or at least of Generative AI (and the Large Language Models that enable it). This article is a look back at the year with an overview of the key highlights.

One-of-a-kind Breakthrough

History is replete with examples of technologies having breakthrough moments before going on to transform the world in some way or another. Nothing, however, comes remotely close to AI’s breakthrough in terms of what followed within just the next 12 months. Breakthroughs trigger an interest spike, debates about pros and cons, expert predictions about the timeline of actual impact, investor and corporate assessments of hype vs. real potential, and so on. 2023 saw all of that, but what stood out was how expansive, and yet how short, this phase was relative to its historical equivalents.

While debates about its promise swelled, the AI juggernaut climbed the wall of skepticism, relegating that skepticism to the periphery relatively quickly. More specifically, the tone shifted from “whether it will have a serious impact” in the first half of the year to “when and how much” by the second half (and much sooner for those in the know). A major contributor to that swift change in tone was likely the consumer impact, which was disproportionately high for a first year. The swath of industries penetrated right off the bat (software, medical, legal, education, marketing, sales, and others) left little room for debate about the credibility of its long-term promise.

Unprecedented Pace of Consumer Adoption

Within two months, ChatGPT acquired 100M users, the fastest ramp of any product of the internet age; TikTok had taken 9 months, Instagram 2.5 years, WhatsApp 3.5 years. From songwriting to seeking medical or legal advice, users thronged to the tool, drawing the fury and concerns of the purists in every field. Doctors, lawyers, employers, and teachers rushed (understandably) to warn about the limitations and undesirable effects of Generative AI: unhelpful or unethical at best, and harmful at worst. Nevertheless, adoption continued and only accelerated.

The undertone of this unabated progression, despite the warnings, was best summarized through an adaptation of a timeless MasterCard ad campaign from the 90s: “There are some things that Generative AI can’t solve. For everything else, there is ChatGPT, Claude, Bard, and so on.” The ‘everything’ in this context was a tremendous opportunity, even on a personal level. For example, you wouldn’t take legal action based on an LLM-powered bot’s advice, but you could limit your time with lawyers to the necessary minimum by leveraging these tools to do the pre-meeting research significantly faster and more effectively. There are examples galore of such “middle ground” applications. There were 5, 10, 15% productivity gains in a host of jobs and tasks. The temptation to secure these was, and will remain, too hard to resist, despite the limitations, warnings, and even the inevitable harms. It’s not just high school kids ignoring the warning bells; ask any software programmer or marketing analyst, and the proof is evident in their response.

Corporate Panic

While the above represented the swarming of users (the consumers) to Generative AI, there was a parallel rush in corporate boardrooms. Three broad categories of companies felt an immediate impact:

  • First, Big Tech (META, AMZN, AAPL, GOOG, MSFT, NVDA): AI and LLMs were not new to them, but the expectations were sky high. Anything less than being considered the leader, or one of the leaders, was a serious brand risk. To compound that, the margin of error was extremely low; Google lost $100B in market capitalization after its chatbot gave a wrong answer. Given these risks, along with the ‘land grab’ situation and the associated opportunity cost, it was ‘code red’ across the board at Big Tech. Product strategies and roadmaps were radically reshuffled. Herculean tasks on extremely tight timelines, bordering on the unrealistic, expectedly became the norm.
  • Second, the AI startups: this was their moment. Whether they built the latest LLMs or the components surrounding them (the middleware, so to say), the interest poured in, in the form of funding, acquisitions, and partnerships.
  • Third, companies from non-IT industries: this included everyone from leading universities to Consumer Tech to Smart Home to Automotive manufacturers. There was a brand and financial risk in not participating in the Generative AI race, or at least not demonstrating an intent to. For these companies, AI and LLMs were (understandably) not historically woven into their fabric, and corporate boardrooms filled with presentations on “What’s an LLM?” and “What does it mean to us?”

The above three categories were at the epicenter of the 2023 AI quake, but the ripple effects continue to spread well beyond them.

The Magical Black Box of LLMs

Next, the technology itself: LLMs, the magical black box, the AI models that enable Generative AI and triggered it all. AI models have been around for decades, but Large Language Models are a relatively recent stage in their evolution. This stage was enabled by a specific approach called the ‘transformer architecture,’ introduced in a now-iconic 2017 publication titled ‘Attention Is All You Need.’ Contrary to an initial claim that was quickly and rightly dismissed for the most part, LLMs are not ‘just another’ technology. Every other technology, from the wheel to the steam engine to the computer, was designed by humans to do what humans wanted it to do. Not LLMs. For LLMs, humans only provide the data and a way to learn. The result is a brain (the model) that has learned from the data provided to it, but exactly what it learns is beyond not just the full control, but even the full understanding, of the very scientists who supplied the data and defined the learning algorithms. The consequence is an incredibly powerful brain with an element of unpredictability in its behavior. While the industry continues to make inroads on this front (understanding what the model has learned and how it uses it), this unpredictability remains, and will continue to remain, the biggest impediment to LLM adoption and its most potent risk once adopted.
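To make “humans only provide the data and a way to learn” a little more concrete, here is a deliberately tiny sketch in Python. It is a toy word-counting model, nowhere near a real transformer, and the miniature corpus and names are made up purely for illustration; the point is only that the program’s behavior comes from the data, not from hand-written rules.

```python
from collections import defaultdict, Counter
import random

# The "data" humans provide: any text would do; here, two toy sentences.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# The "way to learn": count which word tends to follow which word.
# (Real LLMs learn billions of parameters via gradient descent on next-token
# prediction, but the principle is the same: behavior emerges from the data.)
model = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    model[prev_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation, word by word, from the learned statistics."""
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog" -- nobody wrote these rules by hand
```

Scale the same idea from word counts over a dozen words to hundreds of billions of parameters trained on a large slice of the internet, and you get the black box described above: the learning recipe is easy to state, yet nobody can fully enumerate what the resulting model has absorbed.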

While the ‘inside’ of the LLMs remained a black box, that did not deter the industry from learning everything around it. Terminology that was esoteric until now basked in its newfound identity: terms such as Foundational Models, Model Parameters, SFT, RLHF, RAG, Prompt Engineering, Instruction Tuning, Hallucination, Responsible AI, and so many more littered conversations from office hallways to social meetups. 2023 was for AI what the 1970s were for computing and software: new terms were being coined on the fly. A mouse was only an animal until the 70s, and a hallucination was only a medical condition for most people until last year. The surge in new terminology, and the guarantee of its future relevance, sent employees and even executives beelining to online courses in an attempt to ‘upgrade’ themselves with the latest technology.

Macro Implications

The AI breakout was not limited to companies, employees, and consumer or corporate applications, nor was it associated only with benign anecdotes. Concerns about its ethical applications and job losses are legitimate and serious. Even existential threats to humanity are not just a movie concept or hyperbole anymore; 350+ AI experts and business leaders, many of whom had much to gain from the technology, came together to sign a public statement: “Mitigating the risk of extinction from AI should be a global priority.” It is not without reason that countries and governments rushed to propose changes, not just to regulations but potentially even to constitutions. Finally (and perhaps a shallow point given the broader context), AI contributed meaningfully to keeping the stock market afloat in a year that, by all measures at the start of 2023, was destined for a recession.

Just the Beginning

Despite this major impact, 2023 just scratched the surface of Generative AI. As mentioned earlier, debates continue about whether it is a force for good or otherwise. The verdict is not in yet, but given the extent of the awareness and investment in the space, I personally remain an optimist. What is beyond debate is that a tectonic shift has been set in motion (arguably for humanity), the promise of which cannot be overstated. And 2023, though an epochal year, was just the first year of Generative AI.

If you enjoyed the read, please show love through claps or by sharing/subscribing!

#ArtificialIntelligence #GenerativeAI #GenAI #ImpactofAI #AIBreakthrough #AI2023 #LargeLanguageModel #LLM

