Generative AI? Not so fast.

The Singularity Group
Published in SeekingSingularity
5 min read · Jan 20, 2023

Genius or parrot? While AI is writing its own history, experts are cautioning that all that glitters is not gold (yet). An update on our ongoing research into Generative Artificial Intelligence.

What happened?
Since our last report on the shifting AI innovation landscape, generative AI (GAI) has been causing quite the hype. GAI-based Software as a Service (SaaS) applications are mushrooming across areas of creative work such as text, video, audio, and image generation, image-to-image, text-to-image, and text-to-speech translation, and code writing. In large part, these rapidly emerging services are built on a handful of increasingly powerful AI models, such as OpenAI's GPT-3 and DALL·E 2, Google's LaMDA and BERT, Stability AI's Stable Diffusion, and Midjourney, several of which share an underlying neural network architecture, the Transformer, which Google open-sourced in 2017.

AI expert, Dr. Yash Raj Shrestha, Head of the Applied AI Lab at the University of Lausanne and Academic Director of the Strategy and AI Laboratory (SAIL) at ETH Zurich, notes that “GAI can potentially take assistive technology to a whole new level by enabling users to incorporate generated outputs into their creative work, reducing application development time and bringing powerful capabilities to even non-technical users.”

Popular tools such as OpenAI’s ChatGPT are receiving increased attention from the public and large corporations, and the news media report widely on a generative AI “breakout.” Yet the hype has more and more AI experts voicing critiques about how “intelligent” and truthful current versions of GAI really are. “Large language models that power GAI recombine bits and pieces of existing knowledge. The generation pipeline of existing models lacks an understanding and verification step for the creation of new knowledge,” explains Shrestha.

So what?
The main concern with generative AI is a misunderstanding of what it is modeled to do, which in practice comes closer to plausibly mimicking existing content than to telling the truth or understanding the world, a behavior that has led experts to dub these models "stochastic parrots." Current generative AI models are great at generating plausible content, especially content optimized for search advertising, though as users increasingly note, their creations often turn out to be factually false.
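The "stochastic parrot" point can be made concrete with a deliberately tiny sketch: a bigram model that generates text purely by chaining word pairs observed in its training corpus. The corpus and names below are invented for illustration, and real large language models use Transformer networks rather than bigram counts, but the underlying limitation is the same: the output is recombined from existing text, with no step that checks whether it is true.

```python
import random
from collections import defaultdict

# Toy training corpus (illustrative only). The "model" below can only
# recombine fragments of this text; it has no notion of truth.
corpus = (
    "the model generates plausible text . "
    "the model has no notion of truth . "
    "plausible text is not verified knowledge ."
).split()

# Count which words follow which (a bigram table).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a word sequence by chaining observed word pairs."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 8))
```

Every sentence this sketch produces is locally plausible, because each adjacent word pair occurred somewhere in the corpus, yet the recombined whole may assert something the corpus never said. Scaled up by many orders of magnitude, that is the gap between fluency and truthfulness the experts above describe.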

In a small experiment, we asked ChatGPT to provide us with an overview of scientific papers proposing typologies of innovation. The answer seems confident and convincing, but upon closer inspection, the references that ChatGPT spits out appear to be made up.

Users of ChatGPT are met with a warning of its limitations, which include the risk of incorrect information, harmful and biased content, and limited knowledge of recent events. OpenAI’s CEO, Sam Altman, recently came out on Twitter with a statement that “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.”

These issues aside, Shrestha notes that several other concerns plague the current generation of GAI: “Another, and possibly more detrimental problem, is that GAI generated text can be easily used for the large-scale spread of misinformation. Finally, from a sustainability perspective, large language models require large amounts of energy for training, raising environmental concerns.”

What's next?

Research into more traceable and trustworthy GAI is ongoing. Ultimately, fundamental shifts in the capabilities of GAI will be needed to come closer to critical human abilities such as learning rules and symbol manipulation. For the moment, even though one cannot fully rely on its output and its applications are still limited, GAI is rapidly gaining ground and showing promise in creative work. Other domains will undoubtedly follow suit in the years ahead.

Singularity Think Tank AI expert Alexander Stumpfegger, Head of Consulting at CID, notes that “a main obstacle for successful AI applications has often been the need for extensive training of machine learning models. ChatGPT and other language models come pre-trained on a more general level. While they are not ready to chat freely and write as correctly as it might seem at first glance, they can help business and creative software manufacturers provide smart assistance to users at scale, without needing individual training. Improved user experience and efficiency gains will help retain and expand their business.”

We are closely monitoring this exciting field as it reshapes industries and the way humans and machines collaborate.

About The Singularity Group

The Singularity Group (TSG) makes applied innovation investable in listed equities. TSG is the initiator of the Singularity Index™ (Bloomberg ticker: NQ2045), a global, all-sector benchmark and gold standard for applied innovation. The Singularity Strategies include The Singularity Fund (UCITS Lux) and the Singularity Small&Mid (UBS AMC). The Swiss investment boutique works closely with the Singularity Think Tank, a network of entrepreneurs and academics with deep insights into innovation value chains. Their input forms the foundation of TSG’s proprietary innovation scoring system that quantifies the engagement of companies within a set of curated Singularity Sectors worldwide across all market capitalizations and industries. The Singularity Score defines how much value listed companies are generating through applied innovation.

For more information on how our Think Tank’s insights fuel our strategy, please visit www.singularity-group.com.
