AI & I — Personal Reflections on A Decade of AI Mainstream Adoption

Alexander Kremer
Picus Capital
Feb 28, 2023
Figure: In 1997 chess grandmaster Garry Kasparov lost against IBM’s Deep Blue in a highly publicized match; this high-profile way of showcasing advances in AI to a lay audience became a template for many researchers in the years that followed (image from AI Business)

My personal experience over the past decade has allowed me to witness the mainstream adoption of artificial intelligence (AI) in both consumer and enterprise settings across Europe and Asia. These observations help make sense of the current hype surrounding AI-generated content (AIGC) systems such as ChatGPT, Stable Diffusion and others. Moreover, this article identifies opportunities and challenges that lie ahead for the field of AI.

The wider adoption of AI has gone through boom-and-bust cycles. The launch of ChatGPT in November 2022 sparked a boom in public interest in AI not seen before, with the OpenAI system reaching 1 million users in just five days and 100 million users within two months. Enthusiasts see the end of human superiority over intelligent machines looming; pessimists question the ability of ChatGPT and similar systems to solve basic tasks requiring no more than common sense; yet others consider them merely writing aids.

It is worth noting that the field of AI was established as an academic discipline almost 70 years ago. From theoretical concepts to powerful systems used by millions of people, progress has been aided by advancements in three key areas: algorithms, data, and hardware. Therefore, this piece will first examine these three building blocks of modern AI systems. It will then analyze how AI has made its way into our everyday lives over the past decade through a first-hand account. Finally, it will conclude with important lessons for the current AI boom and specific areas of interest moving forward.

Figure: Summary of lessons from witnessing a decade of AI mainstream adoption

From Alan Turing to ChatGPT

ChatGPT was developed by OpenAI, which was founded in 2015 by Sam Altman, Greg Brockman, Ilya Sutskever, Elon Musk and others, with early funding from backers including Peter Thiel. The company aims to promote friendly AI while making its research and patents available to the public. Initially a non-profit, OpenAI transitioned to a “capped-profit” structure in 2019. This transition marked a turning point for the company, leading to the creation of systems such as GPT-3 (2020), a language model trained on internet-scale text datasets, and DALL-E (2021), a model that generates images from text.

However, it was the release of ChatGPT at the end of November 2022 that sparked the current hype around AI. Unlike many of its predecessors, ChatGPT was not released as a research paper or API but rather as a product, a chatbot-type interface that allowed millions of people to experience AI first-hand. This innovation via distribution has demonstrated the potential of AI to the wider public, marking a significant milestone in the development of AI technology.

As early as 1950, computer scientist Alan Turing proposed a test of whether a machine can exhibit human-like intelligent behavior, in other words, AI. In Turing’s setup, a human evaluator holds text-based conversations with two agents and must judge which of them is the machine. If the evaluator cannot reliably distinguish the machine from the human, the machine passes the test. Accordingly, many projects in the decades that followed, including IBM’s Watson (which evolved from the DeepQA research project) and now OpenAI’s ChatGPT, have focused on showing that interactions with a machine can feel human. With ChatGPT arguably capable of passing the Turing test, we may finally have reached a point where interactions with an AI system are indistinguishable from those with a human.

What on the surface appears like a singular achievement, however, is actually the result of years and years of progress in multiple domains, first and foremost in algorithms, data and hardware.

More Sophisticated and Tailored System Architectures

Over the past decade, the field of machine learning has witnessed a significant shift in the types of models being used. While classical methods such as linear regression and k-nearest neighbors are able to detect patterns in data, the focus has shifted towards Artificial Neural Networks (ANNs). ANNs have been around since the 1940s and are loosely modeled after the human brain, with neurons connected across layers by synapse-like weights. Their key feature is that they can be trained from example data, particularly when built with multiple hidden layers, an approach known as deep learning. Various ANN architectures have been developed for specific use cases: Convolutional Neural Networks (CNNs) are effective for image recognition, while Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) long dominated speech recognition. A more recent and significant development is the Transformer architecture, introduced in 2017 by a Google Brain team, which has largely replaced RNNs in natural language processing (NLP). Transformers form the foundation for systems such as GPT-3, which has 175 billion parameters. ANNs, and Transformer models specifically, are the first key pillar enabling ChatGPT. Moreover, different training modes, such as supervised and unsupervised learning, are now applied at different stages of system development, for instance pre-training versus fine-tuning. All of this illustrates the versatility of ANNs for creating more sophisticated and tailored system architectures.
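
To make the ideas of hidden layers, trainable weights and Transformer blocks a bit more concrete, below is a minimal sketch in Python using the open-source PyTorch library. It is purely illustrative: the layer sizes and data are placeholders of my choosing and bear no relation to the scale of the actual systems discussed here.

```python
# A minimal sketch (illustrative only, not OpenAI's architecture) of an ANN
# with multiple hidden layers in PyTorch, plus a single supervised training step.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),   # input layer -> hidden layer 1
    nn.ReLU(),
    nn.Linear(32, 32),   # hidden layer 1 -> hidden layer 2
    nn.ReLU(),
    nn.Linear(32, 2),    # hidden layer 2 -> output layer (2 classes)
)

x = torch.randn(8, 16)                      # a batch of 8 example inputs
y = torch.randint(0, 2, (8,))               # their example labels
loss = nn.CrossEntropyLoss()(model(x), y)   # how wrong is the model?
loss.backward()                             # gradients nudge the synapse-like weights

# For sequence data such as text, a Transformer encoder block (the kind of
# component that GPT-style models stack at vastly larger scale) is available
# out of the box:
block = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
out = block(torch.randn(8, 10, 64))         # 8 sequences of length 10
```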

An Ocean of Data

When ANNs were first conceptualized in the 1940s, their creators could hardly have imagined the digital world we inhabit today. For text, audio, pictures, videos and other media, digital storage turned out to be the most convenient way to keep information, replacing much of the earlier analog technology. The Consumer Internet, IoT and other technologies have further accelerated this trend, with trillions of digital interactions and signals logged every day. This year alone, humankind will probably create and store more data than in all the years 2010–2017 combined. Modern AI systems such as ChatGPT, Stable Diffusion and DALL-E have notably been trained on large-scale (often labeled) public datasets taken from the internet (e.g., web-scraped image collections, including Getty Images content, used to train Stable Diffusion). With that, data has become the fuel driving the current AI boom, and it represents the second key pillar of modern AI systems.

Cheaper and Customized Hardware

After decades of general-purpose chip design, the current AI boom is powered by Graphics Processing Units (GPUs). Built with parallel architectures, GPUs are better suited than Central Processing Units (CPUs) because the matrix operations at the heart of ANNs closely resemble the computations required to render graphics. Nvidia’s A100 series in particular has proven powerful in this regard. GPUs, alongside reconfigurable field-programmable gate arrays (FPGAs), can train deep learning models 100x faster or more than traditional CPUs, a potential that only began to be widely exploited after around 2009. Additionally, the last few years have seen the emergence of application-specific integrated circuits (ASICs), such as Google’s Tensor Processing Unit (available for external usage since 2018), which are even more specialized in design and thus further accelerate ML-related calculations. While these advances have allowed for tremendous progress in AI, training and maintaining large ML systems is still expensive. OpenAI, for instance, reportedly spent over 1B USD building the systems behind ChatGPT and is said to spend 1M USD per day on hardware costs alone to keep it running. To conclude, more tailored and cheaper IT infrastructure can be considered the third key pillar enabling the current AI boom.
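
As a rough illustration of why GPUs matter: deep learning training is dominated by large matrix multiplications, which a GPU executes in parallel. The sketch below (PyTorch, with arbitrary matrix sizes of my choosing; actual speed-ups depend entirely on the hardware at hand) times the same operation on the CPU and, if one is present, on an Nvidia GPU.

```python
# Time one large matrix multiplication on the CPU and, if present, on a GPU.
# Sizes are arbitrary; real training workloads chain millions of such operations.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.time()
_ = a @ b                                   # matrix multiply on the CPU
print(f"CPU matmul: {time.time() - t0:.3f}s")

if torch.cuda.is_available():               # requires an Nvidia GPU and CUDA
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()                # GPU calls are asynchronous
    t0 = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()
    print(f"GPU matmul: {time.time() - t0:.3f}s")
```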

Enabled by the above-mentioned advancements in algorithms, data and hardware, ChatGPT and other AIGC systems showcase the outstanding progress made in fields such as NLP, reasoning and knowledge representation. Beyond that, however, ChatGPT and its peers also represent an advance in the field of human-computer interaction (HCI). Indeed, these systems are establishing a new way for humans to interact seamlessly with a computer via interactive dialogue.

The Mainstream Adoption of AI Across Consumer and Enterprise: A Personal Journey

Throughout the past decade, I have witnessed first-hand how the key developments outlined above have translated into the mainstream adoption of AI across consumer and enterprise products.

IBM: Analytics Showing Up on the CEO Agenda (2011)

When I joined IBM over 10 years ago, the company was considered a pioneer in AI. IBM had built that reputation largely by bringing AI to greater public recognition, following the victory of its Deep Blue system over world chess champion Garry Kasparov in a highly publicized match in 1997. Building on this reputation, IBM launched the Smarter Planet campaign in 2008, demonstrating how information technology and analytics could make decision-making by businesses and governments smarter. The success of IBM’s new Advanced Analytics and Cloud product lines showed that making sense of data and meeting rising needs for flexible, stable and cost-efficient IT infrastructure (AWS was still a c. 0.5B USD revenue business then vs. 60B USD+ today) were key trends at the time, and arguably remain so to this day. A major breakthrough followed in 2011, when IBM’s latest AI system, Watson, won the American quiz show Jeopardy! against the two all-time highest-earning contestants, Ken Jennings and Brad Rutter. This was considered such a breakthrough because Jeopardy! requires contestants to reason backwards from given (often humorous) answers to the right question.

Figure: IBM’s Watson coming out on top competing against Ken Jennings and Brad Rutter in 2011 (image from Computer Weekly)

Following the success in the game show, IBM launched a major suite of ML systems targeting various industries and use cases. For a time, Watson became synonymous with AI, and IBM used the strong brand name to cater to an increasing demand to bring ML into every aspect of enterprise life, with first implementations announced in lung cancer treatment at Memorial Sloan Kettering Cancer Center. While Watson was an externally-facing business serving IBM’s clients, I was working for a period of time with the internal applied analytics group called Business Performance Services (BPS). At BPS, we were a group of analysts working with senior business leaders on the one hand and IBM Research on the other. The group’s mandate was to develop systems (often built on ML models) and streamline processes to solve key business challenges in areas such as sales force efficiency, enterprise resource planning and project delivery. It was a fascinating work environment, with long-running projects often requiring both extensive analysis and intense stakeholder management. Indeed, the group made significant contributions to enterprise productivity at IBM.

These developments made it very clear to me that by solving headline challenges, such as beating a human champion in a particular game, researchers have a much easier time demonstrating tangible advances in AI to a broad audience. I also believe those years marked the starting point of a much broader interest among enterprises in ML/AI. Finally, my own experience at BPS showed me that implementing ML requires much more than just good algorithms.

McKinsey & Company: Automation and Making Sense of Data Hits the Mainstream (2014)

Joining McKinsey in 2014 from IBM, I continued to witness the trend that had started at IBM with the Advanced Analytics and Watson service lines: companies collecting ever-increasing amounts of data and trying to make sense of it. It was also during this period that Google’s DeepMind delivered the next AI breakthrough, when the AlphaGo system beat Lee Sedol, one of the world’s best Go players, in a high-profile, live-streamed five-game series.

Figure: Lee Sedol lost four out of five games against AlphaGo in 2016 (image from The New Yorker)

Over my tenure at McKinsey, I observed a number of changes in client needs: a) McKinsey Analytics was a small internal practice when I joined but experienced significant growth during my four-year tenure with the Firm, as clients increasingly sought this type of service; b) McKinsey also acquired the British data analytics firm QuantumBlack in 2015, when it had roughly 50 people, and expanded the practice significantly by hiring more than 500 data scientists, consultants and ML engineers in the years that followed; and finally c) McKinsey equipped all of its consultants with new data visualization and modeling tools such as Tableau and Alteryx, accompanied by training sessions to enable consultants to deliver more insights to clients by leveraging data, modeling and visualization. In short, the fact that McKinsey, which (unlike IBM) serves all types of large traditional enterprises, including those with low IT spending, reorganized itself in these ways showed that data and analytics had hit the enterprise mainstream, even for traditional companies.

I realized that analytics / data science / ML (terms often used interchangeably) was increasingly a subject of interest for more traditional companies, too. However, despite the tools we were equipped with, the value we could create as consultants was still often limited to pilot use cases, given time constraints and the limited readiness of our clients’ overall technology stack.

Mobvoi: Bringing a Consumer AI-First Product to Market (2017)

The technological advances in AI did not stay with large corporations only, however. They spread into smaller enterprises and, at the same time, emerging start-ups were pushing the boundaries. Mobvoi is an AI pioneer in China, backed by some of the leading VC firms and even Google, that has developed strong technology in NLP, speech-to-text and other areas. Mobvoi embedded its technology in a set of consumer hardware products such as a smartwatch and a smart home speaker, competing against Huawei, OPPO, Xiaomi and others. As it turned out, however, while it was initially exciting for consumers to interact with such an AI-first product, it was a hard sell in the long term in a highly competitive space lacking clear use cases and unique apps/content. Moreover, plans for monetization via content and third-party apps never fully materialized. In fact, the same later happened to Amazon, which reportedly lost billions of dollars with its Alexa division and finally announced major cuts in 2022.

Figure: TicWatch leveraged advances in ML made by Mobvoi (image from GSMArena)

At the same time, Mobvoi was pursuing other avenues to bring its technology to market. These included, following an investment by Volkswagen (VW), developing an in-car voice assistant together with VW. The Mobvoi technology eventually became available in Passat-series cars in China, and VW ultimately acquired the joint unit (VW-Mobvoi) in full and integrated it into CARIAD.

My experience at Mobvoi showed the limitations of AI as a product feature in itself. Rather, a product must incorporate AI seamlessly into a broader set of key differentiators, at least in consumer products, to convince customers not only to purchase it but to keep using it.

JD.com: AI Superpowers and the Personalized Product Feed (2018)

In 2018 I moved from Mobvoi to JD.com, one of China’s largest internet companies and e-commerce players. It was also the year in which Li Kaifu launched his book AI Superpowers, which sparked renewed debate about and interest in AI. Independently of that but in the same spirit, JD.com had announced its ABC strategy just a few months earlier: A for Artificial Intelligence, B for Big Data and C for Cloud. Following the launch of this strategy, three new senior executives were hired and large departments established. Over the following years, however, the strategy only partly achieved its aim of infusing the three technologies deeply into the business, both internally and externally.

Figure: Li Kaifu sparked a renewed mainstream debate about AI when he launched his book in 2018 (image from London School of Economics)

As a Technical Product Manager in the more traditional Search & Recommendation department at that time, I was involved in dealing with the aftermath of a major product change the company had just executed: moving from a curated product selection on the app’s main page to an endless recommended product feed, enabled by a deep learning model. This change was inspired by the content feed that Douyin (TikTok) had successfully pioneered and that Alibaba had later imitated in the Taobao app. It not only changed the user experience but also shifted customer behavior, with an increasing share of traffic coming from Recommendations.
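
For intuition, the core of such a recommendation step can be sketched as scoring items against a learned user representation. The snippet below is purely illustrative (random placeholder embeddings, not JD.com’s actual system, where deep models learn these vectors from behavioral data):

```python
# Illustrative only: rank products for one user by embedding similarity.
# In a real system, user and item vectors are learned by a deep model from
# click, purchase and browsing data; here they are random placeholders.
import torch

n_items, dim = 1000, 64
item_embeddings = torch.randn(n_items, dim)     # one vector per product
user_embedding = torch.randn(dim)               # vector summarizing one user's behavior

scores = item_embeddings @ user_embedding       # similarity of every product to the user
next_screen = torch.topk(scores, k=10).indices  # top 10 products for the endless feed
print(next_screen.tolist())
```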

An important conclusion at the time was that embedding ML into existing products can fundamentally change user behavior, something we witnessed first-hand at JD.com as the traffic share shifted from Search (a linear model) to Recommendation (a deep learning model). However, the black-box nature of deep learning also created certain challenges for us internally. In particular, business users and brands were asking why the model did what it did, and it is genuinely hard to provide satisfying answers about why an ANN behaves the way it does.

Picus Capital: Pre-AIGC, ChatGPT and Beyond (2021)

In 2021, I joined Picus Capital as a Partner to invest in early-stage start-ups in the Greater China region. Around the same time, Google’s DeepMind achieved a new breakthrough in its work on predicting protein structures with AlphaFold 2, subsequently publishing a Nature paper, open-source code and a searchable database. Since then, we have invested in a number of exciting companies working with ML/AI and related technologies. Among others, these include BodyPark, an AI-enabled online group workout service; Naturobot, which enables enterprises to automate repetitive processes leveraging AI and other technologies; and, most recently, TensorChord, which is building tools that make ML engineers more efficient.

Figure: BodyPark has deeply integrated AI into its product offering (image from BodyPark)

Since last year, following the launch of AIGC systems such as DALL-E and Stable Diffusion, we have seen an uptick in entrepreneurs building applications on top of these models to target specific domains, e.g., game design or content generation for marketing purposes. There has also been renewed interest in building the tool suite / infrastructure layer that provides a better experience for ML engineers (MLOps). With ChatGPT, moreover, we have seen a rush of entrepreneurs focused on building local versions of large language models (notably Wang Huiwen, one of the co-founders of Meituan, and Li Zhifei, CEO and co-founder of Mobvoi), as well as big tech players like Baidu getting involved. Building a local version of ChatGPT is obviously a much larger undertaking, but it is inspiring to see that both longstanding entrepreneurs and investors now have the confidence and vision to back such large entrepreneurial projects.

What I witness through my work nowadays is that ML has become an essential part of many digital consumer and enterprise products. Consequently, it is often no longer even necessary to mention it as a defining feature, as it is simply embedded in the offering. It also shows how these major boom cycles help funnel resources, talent and money into the AI space in pursuit of aspirational goals.

Key Takeaways and Implications

After more than a decade spent across various companies and regions, it is clear to me that ML/AI has become an integral part of our daily lives, both as consumers and in the enterprise world. Throughout my journey, I have learned that the challenges of building and deploying ML/AI systems are complex, multifaceted and extend well beyond purely technical issues. The takeaways from my personal journey over the past decade may prove more universal than one might think, and some of these lessons will surely apply to the current AI boom sparked by the launch of ChatGPT and other AIGC systems last year.

Lesson 1: The Definition of AI Is Evolving as Systems Become More Sophisticated

Over the past decade, our understanding of AI has rapidly evolved. When Alan Turing proposed the Turing test in 1950, punch cards were the state-of-the-art technology for data processing. Today, we are amazed by the generative work of AIGC systems such as ChatGPT, Stable Diffusion, and DALL-E, pushing the boundaries of what we consider AI to be (this is sometimes also referred to as the AI Effect). As machines continue to learn from data, automate tasks, make predictions, and become generative agents, we will undoubtedly continue to redefine AI and raise the bar.

Lesson 2: Progress Towards AI Is Not Really Going Through Booms and Busts

The hype around AI is often driven by high-profile breakthroughs like IBM’s Watson, DeepMind’s AlphaGo and AlphaFold, and OpenAI’s ChatGPT. However, behind the headlines, there is a long-term process towards smarter machines driven by advancements in algorithms, data collection, and hardware. While significant breakthroughs may occur in waves, it is important to recognize the ongoing progress that is pushing AI capabilities forward, step by step.

Lesson 3: Increasingly, AI Is Accessible for Everyone

As AI adoption grows, access to ML technology is becoming easier. One key factor is the increasing availability of MOOCs and online training programs, which make it easier for people to learn the necessary skills. In addition, open-source frameworks and algorithms with large supporting communities have made it easier to develop and deploy ML models. Furthermore, large collections of labeled data are readily available for training purposes. Finally, hardware for AI/ML model development, training, and maintenance (such as GPUs, FPGAs, and ASICs) can be accessed through cloud computing in a shared manner, so that owning the hardware physically is no longer needed.
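
To illustrate how low the barrier has become, the snippet below loads a pre-trained model through the open-source Hugging Face transformers library. This is an example of my choosing; the exact model pulled by the default pipeline may change between library versions.

```python
# Run a pre-trained sentiment model in a few lines; the weights are downloaded
# automatically and no labeled data, training or GPU is required.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # pulls a default pre-trained model
print(classifier("Deploying ML models has never been easier."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```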

Lesson 4: AI Is a Feature, Not a Product Itself

In both B2C and B2B scenarios, what matters most is the user experience delivered by a product or service, not the technology used to build it. Users may not distinguish whether data processing, automation, analytics, ML, generated content, or even humans in the back end are performing a given task. While AI-enabled products may initially attract users out of curiosity, users will only keep using them if the technology is integrated seamlessly alongside other features. In fact, advertising the AI-first nature of a product may not be the best approach. In the spirit of this lesson, it is exciting to see how quickly Microsoft and OpenAI have moved to integrate ChatGPT into Bing.

Lesson 5: Building on Top of an Immature Stack Limits the Value Extracted From AI Projects

To truly leverage the power of AI, companies need to build customized systems on top of a cutting-edge stack that is deeply integrated with their business processes and infrastructure. The critical prerequisites include hardware, data infrastructure, talent, and enterprise processes that incorporate insights from ML systems. Conversely, building on top of an immature stack without the right infrastructure in place limits the value extracted from AI. Enterprises therefore require a strategic approach to infusing AI systems into their internal processes and external products, one that takes into account the organization’s long-term objectives and builds a roadmap for success.

Lesson 6: AI May Change the Way a Product Is Used

One of the most significant advantages of ML-powered systems is their ability to perform tasks more efficiently and effectively than humans. However, the transformative power of AI extends beyond that. By implementing ML-infused algorithms, companies can fundamentally change how their products are used. For example, an ML-powered recommendation system can drastically shift and increase website traffic and sales. It is important to understand the impact of such changes and to roll out ML systems step by step while closely monitoring key performance indicators to ensure they align with business objectives.

Lesson 7: The Explainability Problem Is Real

ANN-based systems like ChatGPT learn and refine their model parameters from massive amounts of input data to provide highly sophisticated answers to queries. However, these answers can sometimes be surprising and difficult to explain, with no sources directly identifiable, creating a challenge in both B2C and B2B scenarios. Until this challenge (often referred to as the explainability problem) is addressed, users may be hesitant to adopt AI in critical applications.
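
One simple (and admittedly partial) way practitioners probe such black boxes is to perturb individual inputs and observe how the output shifts; dedicated tooling such as SHAP or LIME builds on related ideas. A toy sketch of the technique, using an untrained network purely for illustration:

```python
# Occlusion-style probe: zero out one input feature at a time and measure how
# much the model's output moves. The network here is untrained and purely
# illustrative; the point is the probing technique, not the model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
x = torch.tensor([[0.5, -1.2, 3.0, 0.1]])
baseline = model(x)

for i in range(x.shape[1]):
    probe = x.clone()
    probe[0, i] = 0.0                        # "occlude" feature i
    shift = (model(probe) - baseline).abs().item()
    print(f"feature {i}: output shift {shift:.4f}")
```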

In light of these learnings, I am extremely excited about what the current generation of founders are building in the ever-evolving ML/AI space and affiliated areas such as data, analytics and automation. Looking forward, at Picus Capital, we are particularly enthusiastic about new ML/AI ventures that: a) leverage systems like ChatGPT and Stable Diffusion to develop deep, domain-specific implementations (“application layer”); b) build orchestration layers for apps that utilize multiple generative models for customized implementations and maintenance (“LLMOps”, others); or c) create tools and toolchains that enable engineers to streamline their ML system development and deployment, reducing skill requirements and costs (“MLOps & infra”).

About: I am a Partner at Picus Capital, where I am the Head of China. Previously, I was a Director at JD.com, one of China’s largest e-commerce companies. Before that, I worked at Mobvoi, a Chinese AI company, McKinsey & Company, and IBM. The opinions in this article reflect my own views only. Please reach out if you are an entrepreneur, industry practitioner, or fellow investor.
