Mind Amplifiers: The Cognitive Revolution Ascending to New Frontiers Beyond AI

Lars Nordenlund
Survival of The Strategic Fittest
8 min read · Jun 21, 2023


Chapter 2: Disruptive Shifts in Technology: AI Automation & Generation

Few would dispute that Artificial Intelligence is fundamentally changing the business landscape across most industries, while pushing those industries to gather greater volumes of data than ever before. In this chapter we will look behind the AI technologies to see how this disruptive change is reshaping people's cognitive behaviors and why it matters for the Survival Of The Strategic Fittest.

To understand the impact and the radical shifts transformative AI can have on businesses and customer behavior in the global marketplace, we need to understand the fundamental drivers behind AI technologies and the intent with which they have been created, developed, and applied throughout decades of evolution.

AI’s Evolution To Get To Today’s Disruptive Revolution

Let's review the (relatively long) history of AI's evolutionary stages, marked by significant milestones leading to a tech revolution in today's era of AI cognitive computing.

AI emerged as a field of research in the 1950s, focusing initially on rule-based systems and symbolic reasoning. Over the years, advancements in computer processing power, data availability, and algorithmic breakthroughs have propelled AI into groundbreaking new territories.

Several factors have contributed to making this tech revolution possible. First, the exponential growth in computing power and the advent of specialized hardware, such as graphical processing units (GPUs) and tensor processing units (TPUs), have significantly accelerated AI training and inference processes. This has allowed for the processing of vast amounts of data in real time, enabling AI systems to perform intelligent analysis and propose recommendations at lightning speed.

Second, the explosion of data storage and cloud access in the digital age has played a crucial role. The availability of massive datasets has provided the fuel for training AI models, enabling them to extract patterns and insights from complex information. Furthermore, the advent of big data technologies and cloud computing platforms has facilitated the storage, processing, and analysis of enormous volumes of data, making it accessible to AI systems.

Lastly, advancements in algorithmic research and techniques have been pivotal. Deep learning architectures, combined with techniques such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have enabled breakthroughs in computer vision, natural language processing, and speech recognition. Reinforcement learning has also allowed AI agents to learn through trial and error, leading to significant advancements in areas like robotics and autonomous systems.

The convergence of these factors has set the stage for the tech revolution we are witnessing today. AI Cognitive computing has the potential to transform industries and societies, with applications ranging from personalized healthcare and autonomous vehicles to smart cities and virtual assistants. As research and development in AI continue to progress, we can expect further advancements that will shape our future and drive the next wave of innovation.

AI Cognitive Computing Mimics Human Functions As a Mind Amplifier With More Power And Capacity

AI cognitive computing shows particular promise for applications where a computer shouldn't necessarily replace human decision-making but should instead interact with humans in a way that augments and supplements their decisions, like a cognitive mind amplifier.

AI cognitive computing aims to replicate human thought processes and decision-making capabilities. It involves the development of algorithms and models that can perceive, reason, learn, and interact with the environment in a manner similar to human cognition.

In recent years, the rise of deep learning and neural networks has revolutionized the field. Deep learning algorithms, inspired by the structure and function of the human brain, enable machines to learn from vast amounts of data and make complex recommendations. This breakthrough has fueled the development of AI Cognitive computing, where machines possess the ability to understand, reason, and learn like humans.

A prominent example of AI cognitive computing is generative AI, which includes natural language processing systems that can understand and generate human language, such as ChatGPT, which in its own words is "an AI-powered chatbot developed by OpenAI, based on the GPT (Generative Pretrained Transformer) language model. It uses deep learning techniques to generate human-like responses to text inputs in a conversational manner."
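
To make this concrete, here is a minimal sketch of what generating text with a pretrained transformer language model looks like in code. It assumes the open-source Hugging Face transformers library and uses GPT-2 purely as an illustrative stand-in, since ChatGPT itself is available only as a hosted service, not as downloadable model weights.

```python
# A minimal sketch of text generation with a pretrained transformer
# language model (assumes the Hugging Face transformers library;
# GPT-2 is an illustrative stand-in, not ChatGPT itself).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Cognitive computing helps people make better decisions because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```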

Beyond language, machine learning models can recognize patterns and make predictions, computer vision systems can interpret and analyze visual data, and autonomous vehicles can navigate complex environments with multi-sensor systems. These examples demonstrate the ability of AI cognitive computing to mimic human cognitive functions and solve real-world problems across various domains.

Let's dive into what this means.

Cognitive Computing vs Traditional AI/Machine Learning Approaches

Cognitive computing is all about interaction with humans, often providing insights that are discovered or “learned” autonomously (we call these insights “unsupervised” — more on that later) which can then be used by humans to make better decisions. These could be broad-reaching business decisions, such as choosing an emerging market to increase portfolio investment; or decisions related to safety, such as locking down accounts that appear fraudulent; or even life-saving decisions, such as choosing the right treatment for a cancer patient.

Traditional approaches to machine learning and artificial intelligence (which do not involve AI cognitive computing) learn from labeled training data. Humans must manually add these labels and then test whether the model can use them to accurately predict the labels on new, unlabeled data. We call this type of learning "supervised", because humans are supplementing the data provided to the machine, rather than the machine supplementing the data a human needs to make a decision. In the case of fraud analysis, humans would supervise the model by manually labeling transactions as either fraudulent or non-fraudulent, and then test whether the model can accurately predict the label for new transactions. A recent example of this in the medical field involved researchers labeling thousands of breast biopsy slides as benign or malignant to successfully train a neural network to recognize cancerous tissue on new patient biopsies. While this traditional approach may help detect cancer earlier and lead to better treatment, it is also tedious. New cognitive computing approaches are proving even more helpful because they don't require labeled data.
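
As a rough illustration of this supervised workflow, the sketch below trains a classifier on human-provided fraud labels and checks its accuracy on held-out transactions. It assumes scikit-learn, and the synthetic data and feature names are purely illustrative.

```python
# A minimal sketch of supervised learning for fraud detection
# (assumes scikit-learn; the data and features are synthetic and
# purely illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Illustrative features: [amount, hour_of_day, merchant_risk_score]
X = rng.normal(size=(1000, 3))
# Human-supplied labels: 1 = fraudulent, 0 = legitimate
y = (X[:, 0] + X[:, 2] > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Check whether the model reproduces the human labels on unseen data
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```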

“Unsupervised” Neural Networks Shouldn’t Make Decisions But Provide Strong Data-driven Recommendations

Cognitive computing models are commonly implemented using neural networks because such systems learn to perform tasks by considering examples, similar to the way the human brain learns to perform a task. The difference is that AI can scale the complexity of the data and the speed of processing far beyond human limits.

Complex data sets are fed in, and these models must automatically learn characteristics of the data that may be too complex, tedious, or costly for humans to identify themselves. Without labels, the neural network discovers information about the underlying structure of the data. We call this "unsupervised" learning. Just as a human observes situations and correlates the present with historic data (experience), the AI model can process and correlate a practically unlimited amount of data compared to a human. Again, just like a human mind amplifier.

Going back to our fraud detection example, it might look like this: the neural network learns the underlying structure of data that represents "regular" behavior in a dataset of transactions. When abnormal behavior occurs, the neural network does not recognize it. Since these abnormal transactions are not understood by the neural network, it automatically labels them as fraud (or as suspicious and to be reviewed). In the case of biopsy tissue analysis, a neural network would learn the "structure" of healthy tissue slides, so when it sees tissue it doesn't recognize, it automatically labels that tissue as cancerous (or as irregular and to be reviewed).
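
One common way to implement this kind of unsupervised flagging is an autoencoder: a neural network trained only on "regular" examples, which then reconstructs unfamiliar inputs poorly. The sketch below assumes TensorFlow/Keras, and the synthetic transactions and the review threshold are illustrative choices, not a production recipe.

```python
# A minimal sketch of unsupervised anomaly flagging with an
# autoencoder (assumes TensorFlow/Keras; data is synthetic and
# purely illustrative).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(seed=0)

# Unlabeled "regular" transactions, e.g. [amount, hour, merchant score]
normal = rng.normal(size=(1000, 3)).astype("float32")

# A tiny autoencoder: compress to 2 dimensions, then reconstruct
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(2, activation="relu"),
    tf.keras.layers.Dense(3),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=20, batch_size=32, verbose=0)

# Reconstruction error on the training data sets a "regular" baseline
train_errors = np.mean(
    (autoencoder.predict(normal, verbose=0) - normal) ** 2, axis=1
)
threshold = np.percentile(train_errors, 99)

# New transactions: a few regular ones plus one obvious outlier
new = np.vstack([rng.normal(size=(5, 3)), [[8.0, 8.0, 8.0]]]).astype("float32")
errors = np.mean((autoencoder.predict(new, verbose=0) - new) ** 2, axis=1)

for i, err in enumerate(errors):
    # High error means "not recognized" -> suggest, don't decide
    status = "flag for human review" if err > threshold else "looks regular"
    print(f"transaction {i}: reconstruction error {err:.2f} -> {status}")
```

Note the design choice: a high reconstruction error only routes a transaction to human review; the model itself never blocks anything.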

As you can imagine, this approach is not foolproof. The accuracy of the labels, predictions, or classifications that a neural network can assign to data in an unsupervised way is often lower than with supervised methods, prompting the need for human review or decision-making. While removing the need for tedious human labeling of thousands (or even millions) of data points is enticing, the outcome of an unsupervised analysis can often only be a suggestion to the user, who must then make a decision. This is where cognitive computing comes in.

AI Cognitive Computing Brings It All Together

Let’s look at a very practical example of cognitive computing, using anomaly detection in physical spaces.

Consider a large oil refinery, where hard hats are worn in most areas of the site. Cameras placed around the refinery could use machine vision models to detect whether people within each space are wearing hard hats, and then feed that information into a neural network that learns whether each space requires a hard hat, drawing on historical data that correlates accident occurrence in each area.

Then, when people are recognized as not wearing hard hats in spaces that generally require them, or in a high-risk area, the neural network alerts users, who decide how to ensure the regulations are met within those spaces, minimizing risk for the workers and lowering insurance costs for the business. Once again, the key here is that the neural network does not make a decision, but simply suggests to the end user that workers should probably be wearing hard hats in a certain area. This is what cognitive computing is all about: using unsupervised machine learning techniques that supplement the human decision-making process.
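
A simplified sketch of that "suggest, don't decide" pattern might look like the following. The per-person hard-hat detections would come from a real machine-vision model (not shown here); the zone statistics, function names, and alert logic are illustrative assumptions.

```python
# A sketch of the "suggest, don't decide" pattern for the hard-hat
# example. Per-person detections would come from a machine-vision
# model (not shown); zone statistics, names, and thresholds here are
# illustrative assumptions.
from collections import defaultdict

# Historical observations per zone: how often people wore hard hats
zone_history = defaultdict(lambda: {"with_hat": 0, "without_hat": 0})

def record_observation(zone, wearing_hat):
    key = "with_hat" if wearing_hat else "without_hat"
    zone_history[zone][key] += 1

def zone_likely_requires_hat(zone, min_ratio=0.9):
    # The system learns a zone's norm from historical behavior
    stats = zone_history[zone]
    total = stats["with_hat"] + stats["without_hat"]
    return total > 0 and stats["with_hat"] / total >= min_ratio

def review_frame(zone, detections):
    # detections: one boolean per detected person (True = hat worn)
    violations = detections.count(False)
    if violations and zone_likely_requires_hat(zone):
        # The system raises a suggestion; a human makes the decision
        print(f"ALERT {zone}: {violations} person(s) without hard hats "
              f"in a zone that normally requires them. Please review.")

# Illustrative usage: build up synthetic history, then check a frame
for _ in range(95):
    record_observation("refinery-unit-3", wearing_hat=True)
for _ in range(5):
    record_observation("refinery-unit-3", wearing_hat=False)

review_frame("refinery-unit-3", detections=[True, False, True])
```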

Now let's look at a much more radical and game-changing AI cognitive computing use case in the field of entertainment: the development of highly realistic virtual actors. With AI algorithms capable of analyzing and understanding human behavior, speech patterns, and facial expressions, filmmakers can now create entirely digital characters that are nearly indistinguishable from real actors.

This technology has opened up a world of possibilities in filmmaking, allowing directors to bring back beloved actors from the past or create entirely new characters that push the boundaries of imagination. For instance, imagine a science fiction movie where a virtual actor convincingly portrays an alien creature with intricate facial expressions and emotions, captivating audiences with its authenticity and blurring the line between reality and fiction.

This use of AI cognitive computing in entertainment has revolutionized the filmmaking industry, expanding creative possibilities and immersing audiences in extraordinary cinematic experiences.

How Do You Get On This Big Wave of AI Cognitive Computing?

AI cognitive computing is already having a high impact and disrupting most industries. The strategic opportunity opens up when you understand the underlying patterns of a big wave like AI, so you can ride it instead of being crushed by it.

First, it is about assessing the impact within your specific context of industry, business model, and organizational capabilities, to understand the near horizon for emerging AI technologies and what the disruptive shifts mean for the cognitive behaviors of customers and markets.

Second, you can start exploring a transformational vision for what the new market position and business architecture look like, rather than traditional strategy that focuses on yesterday's obvious moves or tomorrow's distant future.

Then you are ready to design the business model by translating insights into winning strategies, innovation excellence, and pragmatic, achievable plans for transformation, and to become the next category leader.


Lars Nordenlund
Survival of The Strategic Fittest

Strategist, advisor, and entrepreneur with a global mindset. 20+ years of CxO experience building companies in Silicon Valley ventures and global enterprises.