Artificial Intelligence: The Next Dot-Com Bubble?

Waterloo Business Review
8 min read · May 15, 2020

From IBM Deep Blue’s chess victory over Kasparov in 1997 to the democratization of natural language processing through assistants like Siri, history has borne witness to the promise of artificial intelligence and its ability to herald a transformative paradigm. With first-round early-stage funding for AI startups rising 200% in 2017 and over 9.3 billion USD in venture capital raised last year, the field has amassed critical attention by building the narrative of the “data economy” and touting its potential to design truly intelligent consumer products at scale. Yet with mentions of “AI” implementations in public company earnings reports rising 700-fold in 2015, while the gap between usable machine intelligence and theoretical computing still places the field at a nascent stage, a conundrum arises about the sustainability of the domain’s investment practices. Are we headed down a recurrent cycle of momentum investing that has decoupled the market value of AI assets from their intrinsic worth? While AI is not a technology bubble, the disconnect between its investing patterns and research trends positions it as the next economic bubble, paralleling patterns from the dot-com bust of the early 2000s.

Unprecedented investor interest in AI is the primary agent seeding the inflation of market value, with Q3 2019 data from the National Venture Capital Association highlighting over 13.5 billion USD in venture capital raised across 965 AI startups in the United States alone. With China’s most valuable AI startup, SenseTime, clinching a 7.5 billion USD valuation and PwC placing North American VC growth for AI near 72% (Figure 1), the number of market makers, and consequently stakeholders and equity deals, has grown exponentially over the previous decade (Figure 2). Furthermore, government research funding has proven catalytic for public interest, as both Germany’s Industrie 4.0 project in 2011 and the United States’ R&D Plan for AI in 2016 garnered significant media traction. Similarly, China’s commitment to build an AI industry exceeding 150 billion USD by 2030 is a prime example of the sheer magnitude of government support. In addition, MMC Ventures estimates that firms claiming “AI products” attract over 50% more funding on average, even though 40% of these corporations have no fruitful artificial intelligence architecture in place to begin with. This investment euphoria, specifically the heightened business and funding interest, is starkly reminiscent of the dot-com bubble of the 1990s. As with AI, investors were drawn to the internet era as a beacon of monumental societal transformation. While claims such as heightened social interconnectedness and the growth of big-tech behemoths stand validated, many investors lost billions through excessive speculation in Internet stocks that fed a 400% rise in the NASDAQ Composite. The implosion of the bubble was particularly devastating because euphoria had clouded investors’ judgement of the fundamental principles of value investing. Hence, such momentum investing serves as a cautionary tale.

Three notable “AI winters” have occurred over the past century and serve as potent parallels, each precipitated by upheavals in research funding. The first traces to the onset of the Cold War, when the US government poured millions into developing AI for machine translation of Russian documents. The National Research Council subsequently eliminated funding, criticizing the project for being “more expensive, less accurate, and slower” than human translation, since the theoretical struggle of word-sense disambiguation had been skirted by unbridled optimism. The second dates to the 1973 Lighthill Report, commissioned by the British Science Research Council, which delivered a scathing critique of AI’s inability to achieve its “grandiose objectives”. This was demonstrated in the underestimation of the “combinatorial explosion” that rendered most algorithms inapplicable to real-world scenarios, and it caused a dismantling of AI research in England. History repeated itself when Roger Schank and Marvin Minsky cautioned against excessive business enthusiasm for AI in 1984, only for the LISP machine market to collapse in 1987, reiterating the potential dangers of the burgeoning interest also evident today.

The booms and troughs of public sentiment in the field point to only two credible outcomes for well-funded AI companies. First, the field could truly revolutionize the technology ecosystem, allowing investments to deliver the expected returns to VC firms through acquisitions or IPOs. Second, the true rate of research progress may not match the overzealous hype, causing companies not only to take longer to exit than traditional investor horizons allow, but also potentially to see external funding dry up. The disconnect between expectations and reality in the field points unequivocally to the second. The lengthening of investor exit times is owed to three critical factors limiting AI’s deployment in society: practical implementation and data infrastructure constraints for businesses, difficulties with generalized and transfer learning, and legal and ethical constraints on model decision-making.

The most significant implementation constraint stems from AI being viewed as a “technology strategy” instead of a “business strategy”, endangering economic returns. Several firms attempt to build out complete in-house analytics teams and invest heavily in data infrastructure and talent to “cash in” on the AI hype, focused on slashing costs and securing swift returns, a recurring strategic misstep in failed technological endeavours. Building end-to-end AI solutions that add bottom-line value is a long-term project that involves holistically integrating analytics into the existing network of people, ideas, and communication. Fundamentally, it mandates a rewiring of the firm and an addressing of the cultural and organizational barriers that AI faces in company-wide adoption. But in most businesses that are not inherently digital, traditional mindsets and business practices run counter to the data-driven approach AI necessitates, undermining long-term, broad AI adoption. These long horizons are compounded by supervised learning’s requirement for large datasets of business activity. Most firms lack adept data-warehousing mechanisms, which limits the insights that can be drawn to streamline operations. Newer collection systems must therefore be designed, lengthening AI project timelines until enough data is collected to generate actionable insights. Such implementation constraints are compounded by the difficulty and cost of obtaining large datasets, coupled with the extensive human intervention needed to label individual data points. Thus, as firms continue to misapply technologies for short-term tactical advantages instead of long-term benefits, they risk premature and unstable AI initiatives and impede their ability to capture expected investor returns. Such failures push businesses to reconsider the premise of AI, allowing a market correction to take effect.
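To make the data requirement concrete, the sketch below illustrates the labelled-data bottleneck on synthetic data (not drawn from any of the firms or studies cited above): a simple classifier is trained on progressively larger labelled subsets, and the point at which held-out accuracy stops improving hints at how much labelled business activity must be warehoused before a supervised model yields dependable insight.

```python
# Illustrative learning-curve sketch: how held-out accuracy grows with the
# amount of labelled data. Synthetic data stands in for real business records.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "business activity" dataset: 20 features, 2 classes.
X, y = make_classification(n_samples=20000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

for n_labelled in [100, 500, 2000, 10000]:
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n_labelled], y_train[:n_labelled])   # train on a labelled subset
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{n_labelled:>6} labelled examples -> test accuracy {acc:.3f}")
```

For messy, high-dimensional business data the plateau arrives far later than in this toy setting, which is precisely why collection systems and labelling pipelines come to dominate project timelines.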

The second significant constraint on artificial intelligence is that research in the field remains at a nascent stage, untethered from investor sentiment. This is particularly true for generalized learning, which refers to broad and adaptable machine intelligence that is not narrowly scoped to a specific task. Central to the concept is Moravec’s Paradox: while AI algorithms often exceed human ability on a restricted classification or regression problem, they have significant difficulty generalizing across drastic changes and thus lack the perception and mobility innate to the human experience. Dr. Andreas Tolias has similarly argued that the absence of an “evolutionary drive”, coupled with poor inductive bias in training algorithms, prevents AI from adapting its experience of societal interaction to enhance learning across unrelated tasks. Poor transfer learning has two inevitable results: the need for excessive learning experiences to master small-scope tasks, and an inability to reason with purpose by mimicking human psychological processes. As a result, constant human supervision is necessary to monitor model health. The inability to reason effectively causes shortcomings in real-world deployment where ‘edge cases’ demand a more nuanced deductive approach; the setbacks in IBM Watson’s objective of cancer diagnosis and self-driving car accidents involving Waymo reiterate the daunting trajectory the field has yet to traverse. A potent example is adversarial attacks: researchers from KU Leuven discovered sub-par ‘edge-case’ handling in person-detection systems for security surveillance, such that a person standing still with a rectangular multi-color patch becomes invisible to the detector. Such gaping inexplicability breeds resistance to immediate implementation, further lengthening investor exit times and deepening the structural disconnect explored above.
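The KU Leuven work relies on a printed patch held in front of the body; as a simpler, hedged illustration of the same family of attacks, the sketch below applies the well-known fast gradient sign method (FGSM) to nudge an input just enough to change a classifier’s output. The model and image here are untrained placeholders, not the surveillance system referenced above.

```python
# Minimal FGSM adversarial-example sketch (a simpler relative of the printed
# adversarial patch): shift the input along the sign of the loss gradient so
# the classifier's prediction can flip while the change stays imperceptible.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder classifier and input; a real attack would target a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
image = torch.rand(1, 1, 28, 28)            # stand-in for a camera frame
label = torch.tensor([3])                   # the class the model should predict

image.requires_grad_(True)
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()                             # gradient of the loss w.r.t. the input

epsilon = 0.1                               # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The same gradient-following logic, scaled up and constrained to a printable region, is what makes physical patch attacks on person detectors possible.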

These scenarios warrant greater model interpretability, leading to the third challenge we identified: the host of legal and ethical considerations that ensue from explainability. Using artificial intelligence to answer subjective, value-laden, and open-ended questions with poor interpretability undermines the moral compass as well as the ethical and legal practices that underpin such decisions. When algorithms attempt to decide whom to hire, which convicts are likely to reoffend, and which applicants are creditworthy without explainability or reasons underlying their predictions, they magnify technological asymmetry and societal inequity. Such models face resistance in society because of the underlying ‘proxy’ variables they rely on to make predictions: models use zip codes to deploy police personnel, grammar to determine loans, or credit scores to judge responsibility, whilst zip codes may be a confounding indicator of race, credit scores of wealth, and grammar of immigration status. Such inexplicable models, with their seeded injustice, become, as Cathy O’Neil poignantly declares, “weapons of math destruction”. However, there is currently no legislation to combat these consequences, which severely limits the models’ sustained acceptance and prompts the “need for the government” to enact “regulations in Canada” (Mahdi Amri, Deloitte Canada AI). As Rashida Richardson, Director of Policy Research at New York University’s AI Now Institute, unequivocally argues, “When you’re using any black box algorithm, you don’t even know the standards that are embedded in the system or the types of data that may be used by the system that could be at risk of perpetuating bias”. Without appropriate legal infrastructure and guiding principles for effective AI, recommendations from machine learning systems face rampant distrust and resistance on ethical grounds, delaying the scaling and embedding of such intelligent algorithms into society.
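As a hedged illustration of the kind of scrutiny these concerns demand, the sketch below runs permutation importance on synthetic loan-approval data with a hypothetical zip_code feature; it is not an audit of any real system, but it shows how one might detect that a model’s predictions lean heavily on a proxy variable.

```python
# Sketch: use permutation importance to check whether a model leans on a
# proxy feature (here, a synthetic "zip_code" column) when scoring applicants.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicant data: income, debt ratio, and a zip-code proxy for income.
income = rng.normal(60, 15, n)
debt_ratio = rng.uniform(0, 1, n)
zip_code = (income > 60).astype(float) + rng.normal(0, 0.3, n)   # correlated proxy
approved = (income - 40 * debt_ratio + rng.normal(0, 5, n)) > 35

X = np.column_stack([income, debt_ratio, zip_code])
features = ["income", "debt_ratio", "zip_code"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, approved)
result = permutation_importance(model, X, approved, n_repeats=10, random_state=0)

for name, importance in zip(features, result.importances_mean):
    print(f"{name:>10}: {importance:.3f}")   # a large zip_code score flags proxy reliance
```

A disproportionately large importance score for zip_code would be the first signal that the model has encoded a stand-in for race or wealth rather than genuine creditworthiness.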

To conclude, the frenzy in the AI ecosystem is an indisputable parallel to the economic swings underpinning the dot-com bubble: an accelerating adoption of new and potentially revolutionary technologies, excessive interest, capital flows predicated on disconnected targets, followed by a burst. If the AI winters linked to the DARPA funding cuts of the mid-1970s and the 1987 LISP market crash provide any indication, we may be at the tipping point. Businesses love narratives that can be sold to investors, but such narratives often get ahead of themselves; practical considerations around transfer learning, labelling, deployment, and legal compatibility are often skirted in deference to the prevailing euphoria. As the gap between expectations and reality continues to widen, the bubble will implode, and the house of cards will collapse.

Waterloo Business Review

A student-run publication dedicated to providing insights into business strategy, entrepreneurship, and global current events.