An artistic representation of AI as an employee surrounded by different devices and entities interacting with it.

Some musings about AI. Part 1: Introduction

Navigating the Deepmindfield

How to cut through the noise of AI in the media

Drew Lyall
8 min read · May 6, 2024


Regardless of where you source your news, chances are you will have seen at least a few articles lately about AI.

If my news feeds are to be trusted, in recent weeks AI has solved nuclear fusion, developed treatments for various diseases, identified how memory works in the brain, and delivered revolutionary new defence systems. Meanwhile, it is also going to make this the most challenging US election campaign ever, has insurmountable gender and race bias, will destroy our workforces, and is little more than plagiarism on a mass scale.

All of this is, of course, propelled further by the big characters at the helm of the AI companies and their outlandish claims. My personal favourite is the claim that Generative AI systems (the colloquial, catch-all term for the recently emerged systems that use patterns in training data to inform new creations) are actually windows into alternate dimensions.

As at many points in human history, we tend to either aggrandise or demonise the things in life we don’t quite understand. So how do we parse through the headlines to find the facts amongst the fantastical? What will its impact actually be? Let’s start by breaking down the contents of the mysterious black box which is ‘AI’ with a brief history and a few definitions.

Acronym Soup

Like most tech movements, AI is dripping in acronyms and jargon. Here are the main ones you should know:

  • AI: Artificial Intelligence. The process by which a computer simulates human-like intelligence (how successfully it does so is another matter).
  • AGI: Artificial General Intelligence. The point at which an AI becomes capable of acting like a human brain. It is currently highly theoretical, but it is the stated goal of most AI research companies, such as OpenAI and Google DeepMind.
  • ML: Machine Learning. Algorithms that use large quantities of data to find patterns and make predictions (see the short sketch after this list).
  • Neural Network. No acronym for this one, but important to define nonetheless. Neural networks are a component of Machine Learning in which data is processed by layers of interconnected nodes that simulate the neurons in the human brain (hence the name).
  • NLP: Natural Language Processing. A set of technologies that attempt to allow a computer to at least appear to understand normal written prose and respond in kind, e.g. chatbots and home assistants.
  • LLM: Large Language Model. These are what have caused the recent explosion in news around AI and include OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Meta's Llama. A language model attempts to go further than traditional NLP: rather than just parsing a command into a predictable result, it attempts to semantically understand what is being asked.
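
To make the "learning from data" idea a little more concrete, here is a minimal sketch in Python (the data points and numbers are invented purely for illustration). It fits a straight line to a few example points and then uses that line to make a prediction, which is, in miniature, what much larger ML systems do at vastly greater scale.

    # A toy illustration of "learning from data": fit a straight line
    # y = w*x + b to a handful of made-up points using gradient descent,
    # then use the fitted line to make a prediction. The data and numbers
    # are invented purely for illustration.
    data = [(1, 3.1), (2, 4.9), (3, 7.2), (4, 8.8), (5, 11.1)]  # (x, y) pairs

    w, b = 0.0, 0.0        # model parameters, starting from a blank guess
    learning_rate = 0.01

    for step in range(5000):
        # Gradient of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        # Nudge the parameters in the direction that reduces the error
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b

    print(f"learned model: y = {w:.2f}x + {b:.2f}")
    print(f"prediction for x = 6: {w * 6 + b:.2f}")

A neural network is the same idea taken further: instead of two parameters and a straight line, it adjusts millions or billions of parameters arranged in interconnected layers.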

AI was Born in the '50s

Throughout its seventy-year history, AI has undergone periods of hype and scepticism, but its trajectory has, in general, been one of steady progress and innovation. The idea was first explored by Alan Turing in his paper "Computing Machinery and Intelligence", but the term itself is credited to John McCarthy, who coined it during a workshop he ran on the subject at Dartmouth College in 1956.

A timeline showing, at a high level, the major events in the field of AI between the coining of the term in the 1950s and the emergence of generative AI in recent years.

Over the next decade or so, the field saw significant strides as researchers demonstrated that AI algorithms could handle a wide range of problems, including proving mathematical theorems. However, the 1970s and '80s saw a period known as the "AI Winter": unrealistic expectations went unmet, funding was cut, and the limitations of the technology of the day were exposed.

Advances in computational power and algorithmic techniques drove a fresh resurgence throughout the remainder of the 20th century. Neural networks gained prominence as powerful tools for pattern recognition and machine learning, laying the groundwork for modern deep learning techniques. The early 21st century then saw exponential growth in AI research and applications. Deep learning algorithms revolutionised fields such as computer vision, natural language processing, and speech recognition, and companies like Google, Meta, and Microsoft invested heavily in AI research, leading to the development of advanced AI-powered products and services.

Impact on Work

Some of the most alarming reporting about AI concerns its impact on employment. The Institute for Public Policy Research recently suggested that eight million UK jobs could ultimately be lost, with back office, entry-level, and part-time jobs at the highest risk (source). However, this would be far from the first time that society has had to cope with the impact of transformational technology. The pattern has repeated roughly once a generation: the industrial revolution, the shift from manufacturing to service economies, the rise of digital, and now AI. In each case, the economy eventually adapted and new kinds of work emerged. While that reads positively at a macro level, it comes at a very real individual cost. Jobs that were once considered "safe" and highly skilled become repeatable and automated, forcing previously higher-earning workers to either re-train or accept lower-paying roles.

A representation of an AI as a typical business employee

However, it isn’t always a straight line between technology and the unemployment queue. A good example of this from history is the Ford Model-T. When Henry Ford introduced production lines to his factories, the number of cars being produced per worker almost trebled from eight to twenty-one. However, the increased efficiency reduced the cost by more than half, dramatically increasing consumer demand and leading to an increase in overall jobs. In the 1960s the National Commission on Technology, Automation, and Economic Progress concluded that while technology certainly has the capability to destroy jobs, it does not destroy work. Perhaps the most challenging question to answer in this era of such rapid change is whether or not that statement holds true. Or more specifically, are we getting closer to a point where there simply isn’t enough work to go around?

How Intelligent is it Really?

AI applications are increasingly pervasive, impacting industries ranging from healthcare and finance to transportation and entertainment. But how intelligent are they, really?

In a 1980 article, the philosopher John Searle proposed a thought experiment called 'The Chinese Room'. Imagine yourself in a room with a single door. Someone outside is slipping messages written in Chinese characters underneath it. Inside the room, you have a set of rules telling you how to respond. By following the instructions you can write a perfect reply and post it back. You may not know a word of Chinese, but to the person outside the door, you appear fluent. This is effectively how any computer functions: it receives external input (the Chinese characters coming in), performs some kind of processing (the instructions), and produces some output (the characters going back out). Searle's point is that a machine following nothing more than instructions will never actually understand Chinese. It is an apt analogy when you consider large language models: ChatGPT is the one inside the room and we are the ones on the other side of the door. So, in a world dominated by appearances, the question remains: does it matter whether the system is actually intelligent if the appearance is enough? It is also worth considering the ethical implications. What if the Chinese characters were technically correct and flawlessly written, but at the same time highly insulting?
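
To make the analogy concrete, here is a minimal sketch in Python of a 'room' that follows a small, hand-written rule book (the rules and phrases are invented for the example). The code produces plausible-looking replies without understanding a single word of what it reads or writes.

    # A toy "Chinese Room": reply to a message by looking it up in a fixed
    # rule book. The program matches symbols; it has no idea what any of
    # them mean. Rules and phrases are invented for illustration.
    RULES = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely today."
    }

    def respond(message: str) -> str:
        # Follow the instructions: find the message in the rule book and
        # post back the prescribed reply (or a stock fallback).
        return RULES.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(respond("你好吗？"))  # Looks fluent to the person outside the door

A large language model swaps the hand-written rule book for statistical patterns learned from enormous amounts of text, but Searle's question still applies: following patterns, however sophisticated, is not obviously the same thing as understanding them.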

Extracting Facts from the Fantastical

This analogy highlights the dangers of mindlessly accepting the messages that come out from under the door as truth.

We all need tools that help us tell fact from fiction, or even from the downright fantastical.

1. Avoid sensationalism. In a world where clicks translate to revenue, headlines and reporting trend towards extremes. Articles that make bold claims about the transformative powers of AI might well contain a sprinkling of truth, but more often than not they are overhyped or reactionary.

2. Check your sources. This rule obviously goes far wider than the field of AI, but we should always look to reputable sources, publications, journals, or industry reports. Watch out, especially, for articles that appear objective but are actually paid promotions.

3. Be aware of the regulatory landscape. A useful way to calm the peaks and troughs is to stay informed about regulation. The European Union's recent AI Act, for example, lays the groundwork for the responsible use of these technologies, aiming to ensure trust without stifling innovation. Certain emerging high-risk or potentially harmful products are being banned or placed under significantly tighter controls.

4. Watch for non-explanatory writing. AI is an enormously complex field of study, and the reality is that very few people are in a position to deeply understand how the technology works. Be wary, then, of reporting filled with absolute statements about what AI can or will achieve without a word of explanation of how. Often, a limited summary or a handful of acronyms is followed by a non-sequitur about how AI will solve a persistent problem or translate to revenue for any business smart enough to adopt "AI".

What Next?

It's very easy to have a knee-jerk reaction to the flood of poor-quality AI-generated articles and "think-pieces" that have started to clog up my various news feeds. The accessibility of the first mad dash of products and services in the LLM space (if we can call these light-touch GPT wrappers products) has done little to endear me to the technology, and there's an easy argument to be made that it's pulling interest and, more importantly, funding away from more interesting AI research topics.

However, I tend to be more optimistic and believe that the rising tide will lift all ships. Taken as a whole, I expect the increased interest to result in a net benefit. Capitalism will do what it does: the initial flood of products will die away, and we'll be left with those that provide real, tangible benefits. Meanwhile, the companies behind the successful tools will reinvest, and the whole sector will continue to advance.

Jobs will certainly change and maybe we really are walking towards the fabled post-work society (regardless of whether you believe that’s dystopian horror or utopian bliss). But throughout history, we have repeatedly proven that humans are enormously capable of adapting to change.

This is the first of what will be a series of posts on the subject of Artificial Intelligence. In my next article, I'll explore some of the deeper philosophical issues as AI increasingly becomes the end consumer of many of the IT industry's products, and question the benefits AI brings versus its impact on our planet.


Drew Lyall

Head of Technology for Ascent. I've been leading teams of engineers and running small businesses for two decades.