How To Think About Generative AI? Part 4

Kushal Bhagia
All In Capital
Jul 6, 2023 · 9 min read

Eight Long-Term Trends Amidst the Short-Term Hype Cycle — Part 4

This is the final part of a four-part series, with all three previous parts released here on Medium and on Twitter. Follow us to stay updated or share any feedback! If you have not read Parts 1, 2, or 3, read them here.

Twisted and tangled question marks — The Future of AI, by MidJourney

Last week, we covered how the marginal cost of intelligence will approach zero and the abundance of services it will create. We also laid out a roadmap of which services will see disruption first, which will see it later, and why. Next, we covered how AI tooling and services will present a golden opportunity for founders to build things that power the gold rush of AI adoption. If you missed it, read it here. Now on to the next two trends —

Trend 7: Cambrian Explosion

The Cambrian explosion was a rapid diversification of life that occurred during the Cambrian period, about 541 to 485 million years ago. During this time, most of the major animal phyla that exist today appeared in the fossil record. The event was so rapid that it is often referred to as the “biological big bang.” Scientists estimate that life on Earth went from a few hundred species to millions of species by the end of it, and suspect it was driven by a set of key enabling factors: higher oxygen levels, new ocean basins, harder exoskeletons such as shells that protected animals from predators, and new locomotion capabilities like swimming and walking that let animals explore new habitats.

What we are seeing now in the AI world is a Cambrian explosion of ideas, companies, people, and markets at a pace never before seen in technology. A similar set of enabling factors (base models, GPU power, large datasets, capital, etc.) has come together to set off a Cambrian explosion in the technology world through recombinant innovation.

Recombinant innovation is the creation of new applications by combining existing technologies. It links and leverages existing assets to define solutions that were previously unimagined. The iPhone, for instance, was a huge leap forward for technology, and part of that leap came from combining multi-touch screens, camera technology, and cellphones. Apple’s genius lay in designing and guiding the complex supplier networks that produced the components and assembled the iPhone, and the rest is history.

In the context of Generative AI, recombinant innovation will stem from combining the knowledge of specific fields such as pharmacology, bio-informatics, finance, and more. Some of the early examples we are seeing are with AutoGPT and BabyAGI.

Let’s Talk About AutoGPT and BabyAGI

Both AutoGPT and BabyAGI have taken the developer and tech community by storm. They are autonomous task-management systems built on top of APIs such as OpenAI’s and Pinecone’s. Their unique ability is that they can create their own set of tasks from a pre-defined goal, prioritise those tasks, execute them, and verify their completion. While they are not (yet) perfect, they show how complex applications and functions can be built by composing existing tools and models, as the sketch below illustrates.
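To make this concrete, here is a minimal sketch (not the actual AutoGPT or BabyAGI code) of the kind of loop these systems run, assuming OpenAI’s Python SDK. The objective, prompts, and the ask() helper are illustrative, and the vector-store memory (e.g. Pinecone) that the real projects use is omitted for brevity:

```python
# A minimal BabyAGI-style task loop (illustrative only).
# Assumes the openai Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from collections import deque
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt to the model and return its text reply."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

objective = "Research and summarise the top 3 open-source vector databases."

# 1. Ask the model to break the objective into a task list.
tasks = deque(
    line.strip("- ").strip()
    for line in ask(f"List 3-5 short tasks to achieve: {objective}").splitlines()
    if line.strip()
)

results = []
while tasks:
    task = tasks.popleft()
    # 2. Execute the current task, giving the model prior results as context.
    result = ask(f"Objective: {objective}\nCompleted so far: {results}\nDo this task: {task}")
    results.append((task, result))
    # 3. Ask the model whether new tasks are needed and re-prioritise the queue.
    new = ask(
        f"Objective: {objective}\nRemaining tasks: {list(tasks)}\n"
        f"Given the result '{result[:300]}', list any new or reordered tasks, one per line, or 'NONE'."
    )
    if new.strip().upper() != "NONE":
        tasks = deque(line.strip("- ").strip() for line in new.splitlines() if line.strip())

print(results[-1][1])  # the output of the final task
```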

Here are three use cases of how Generative AI can be combined with other fields to create value:

  1. Generative AI in drug discovery: Combining generative AI with computational chemistry and pharmacology can accelerate the discovery of new drugs and therapeutic compounds. For example, generative AI algorithms can create large libraries of potential drug candidates based on desired properties, allowing researchers to identify promising candidates more efficiently. We are compiling a list of the most promising research in Generative AI & Drug Discovery, which can be found here.
  2. Generative AI in materials science: Integrating generative AI with materials science can revolutionise the discovery and design of new materials with unique properties. For example, generative AI algorithms can simulate the synthesis of new materials and predict their properties, leading to the development of advanced materials for various industries. We are compiling a list of the most promising research in Generative AI & Materials Science, which can be found here.
  3. Generative AI in personalised medicine: Integrating generative AI with genomics and proteomics can enable the development of personalised treatments based on an individual’s genetic makeup. For example, generative AI algorithms can predict drug responses for specific genetic profiles, leading to more effective and tailored medical treatments. We are compiling a list of the most promising research in Generative AI & Personalised Medicine, which can be found here.

Multi-modality and Inter-operability

The next wave of growth in Generative AI applications will come from Multi-modality as well as Inter-operability of AI models.

Multi-Modality

Today, ChatGPT “reads” text. But imagine if it could “see” images, videos, or games and “hear” audio. Now imagine it could produce output in all those formats, not just text. That’s multi-modality: the integration of distinct AI capabilities such as natural language processing (NLP), computer vision (CV), video processing, data-chart interpretation, and more into a single, cohesive system. This convergence allows AI models to process, analyse, and generate information across various forms, enhancing their versatility and applicability.
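For a sense of what this looks like in practice, here is a hedged sketch of a multimodal request, assuming OpenAI’s Python SDK and a model that accepts image inputs. The model name and image URL are placeholders:

```python
# A hypothetical multimodal request: text + image in, text out.
# Assumes the openai Python SDK and a model that accepts image inputs.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for any multimodal-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the trend shown in this sales chart."},
            {"type": "image_url", "image_url": {"url": "https://example.com/q2-sales-chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```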

Inter-Operability

Now imagine that this AI model can also access IDEs, apps like MS Excel, and any other tool/website/app you can think of. That’s inter-operability: the ability of AI models to seamlessly talk to and use conventional apps, exchange data with them, and control their functions. This capability is essential to boosting the efficiency and efficacy of AI’s output and functions. A case in point is OpenAI’s launch of “plugins” that can be added to ChatGPT.
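Plugins, and the related “function calling” feature, give the model a menu of external tools it can choose to invoke. Below is a minimal sketch, assuming OpenAI’s Python SDK; the get_cell helper and the spreadsheet are hypothetical stand-ins for a real Excel or Google Sheets integration:

```python
# A minimal tool-use sketch: the model decides when to call an external function.
# Assumes the openai Python SDK; get_cell() is a hypothetical stand-in for a real integration.
import json
from openai import OpenAI

client = OpenAI()

def get_cell(sheet: str, cell: str) -> str:
    """Hypothetical helper; a real integration would read from Excel/Google Sheets."""
    return "42000"  # dummy value for the sketch

tools = [{
    "type": "function",
    "function": {
        "name": "get_cell",
        "description": "Read one cell from a named spreadsheet",
        "parameters": {
            "type": "object",
            "properties": {
                "sheet": {"type": "string"},
                "cell": {"type": "string"},
            },
            "required": ["sheet", "cell"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is the Q2 revenue in cell B7 of 'FY24 Forecast'?"}],
    tools=tools,
)

call = resp.choices[0].message.tool_calls[0]  # the model asks us to run get_cell(...)
args = json.loads(call.function.arguments)
print(get_cell(**args))                       # our code executes the call and returns the value
```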

Implications of Multi-Modality and Inter-Operability

  • Enhanced decision-making
  • Better user experience
  • New apps otherwise impossible without AI
  • Streamlined workflows
  • Empowerment of non-expert users

Trend 8: Next Wave of Growth Will Come from Non-Large Models

Adding More Compute is Not a Permanent Source of Growth

Source: Wired

Recently, Sam Altman, the CEO of OpenAI, said at an MIT event that the “age of giant AI models is already over”. The comment reflects a growing consensus in the tech community that indefinitely scaling up models won’t help. A co-founder of Cohere agreed with Altman, stating that there are “lots of ways of making transformers way, way better and more useful, and lots of them don’t involve adding parameters to the model.”

Even OpenAI’s technical report on GPT-4 suggests diminishing returns from scaling up model size and feeding in more data. (Source: GPT-4 Technical Report, arXiv:2303.08774)

There is a school of thought that back-propagation, the method commonly used to train the transformers behind models such as GPT-4, may not be how Artificial General Intelligence is ultimately created. Transformers are, at their core, statistical autoregressive models, introduced in the paper “Attention Is All You Need” by Vaswani et al. in 2017. Unlike biological learners that continuously learn from and adapt to uncertainty, these systems are rigid, inscrutable, and fixed in time, requiring large amounts of data and compute to update themselves and learn anything new.
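To see what “statistical autoregressive” means in practice, here is a small sketch using the Hugging Face transformers library, with GPT-2 as a small stand-in for models like GPT-4. The model produces text one token at a time, each prediction conditioned only on the tokens generated so far:

```python
# Autoregressive generation, one token at a time.
# Assumes the transformers and torch packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The Cambrian explosion was", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits        # scores for every token in the vocabulary
        next_id = logits[0, -1].argmax()  # greedily pick the most probable next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```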

Is Plasticity the Next Big Approach?

Plasticity is a term used in both neuroscience and AI, but with slightly different meanings.

In neuroscience, plasticity refers to the brain’s ability to change and adapt in response to different experiences and environments. It is the brain’s ability to reorganize itself by forming new neural connections or altering existing ones, based on changes in behavior, environment, or injury.

Neural plasticity in the brain is a fundamental process that underlies learning, memory, and recovery from brain injury. It plays a critical role in shaping the development of the brain, as well as in allowing the brain to adapt to new situations throughout life.

In AI, plasticity is the ability of an artificial neural network to modify its structure and adapt to new tasks or input data. Plasticity in AI is a relatively new concept that has emerged in recent years as a way to address some of the limitations of traditional machine learning algorithms.

In contrast to traditional machine learning algorithms, which are typically trained on a fixed dataset and are optimized for a specific task, plastic neural networks are designed to be more flexible and adaptable. They can learn from new data and adjust their structure to better fit the new task, allowing them to generalize to new situations and perform well on a variety of tasks.

Plasticity in AI is still an active area of research, and there are many different approaches to implementing plasticity in artificial neural networks. Some of these approaches include dynamic neural networks, neuro-evolution, and meta-learning, among others.
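As one concrete illustration (a toy sketch, not taken from any of the papers cited here): in the “differentiable plasticity” line of work, each connection carries a fixed weight plus a Hebbian trace that keeps updating as the network sees new inputs, so the layer can keep adapting after training is done. In PyTorch, that might look roughly like this:

```python
# A toy "plastic" layer: fixed weights plus an ever-changing Hebbian trace.
# Simplified sketch; the full approach also backpropagates through the trace.
import torch
import torch.nn as nn

class PlasticLinear(nn.Module):
    def __init__(self, n_in: int, n_out: int, eta: float = 0.1):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(n_in, n_out))      # fixed (slow) weights
        self.alpha = nn.Parameter(0.01 * torch.randn(n_in, n_out))  # how plastic each connection is
        self.eta = eta                                               # Hebbian update rate
        self.hebb = torch.zeros(n_in, n_out)                         # fast, ever-changing trace

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = torch.tanh(x @ (self.w + self.alpha * self.hebb))
        # Hebbian rule: connections between co-active units strengthen on every step.
        self.hebb = ((1 - self.eta) * self.hebb
                     + self.eta * (x.t() @ y) / x.shape[0]).detach()
        return y

layer = PlasticLinear(8, 4)
out = layer(torch.randn(2, 8))  # the trace updates even without a gradient step
print(out.shape)                # torch.Size([2, 4])
```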

An important paper on plasticity in AI is “Learning to learn by gradient descent by gradient descent” by Andrychowicz et al., which was published in 2016. The paper introduces a meta-learning approach that allows neural networks to learn how to learn, by optimizing their own learning algorithms.

Sources: The Future of Deep Learning (keras.io); Learning to Learn by Gradient Descent by Gradient Descent (arXiv:1606.04474); arXiv:2002.06177

TruthGPT(?)

The word on the street is that because current LLMs are probabilistic models trained on specific data, their answers reflect only that data and not “the world”. In other words, the gap between the training data and reality can lead Generative AI to internalise inconsistencies and deficiencies in how it thinks, identifies, decides, and reproduces.

While there is no clarity on how the gap between AI models running on training data and “reality” will be bridged, it is a core area of focus for technologists. The ideal is an AI that can identify the true nature of a complex situation rather than reproduce what humans have said about it. Think of a hypothetical situation in the 5th century: an LLM in the year 450, trained on the data of that world, including royal decrees, inscriptions, manuscripts, etc. If that LLM were asked about the shape of the Earth, it would likely have said that the Earth is flat, since its training data would have heavily favoured a flat Earth. At best, it might have added that even though the Earth is “flat”, some cultures and scholars have “theorised that it may be round”, as only a handful of texts of that era touched upon the non-flatness of the Earth.

Going forward, it is essential to create systems that can investigate and identify truth. While this will happen in degrees, and uncovering celestial mysteries is still an incredibly long shot, the emergence of a truth-seeking ability in AI models could help guide policymakers better, pass better judgements than at present, enable corporate boardrooms to make better decisions, and so on.

The other side of Truth — Eurocentrism in LLMs

While this is still an active area of research, present-day LLMs, including GPT-4, have been observed to be Eurocentric. What this means is that every question, every prompt, every idea is answered through a Europe/Anglo-first lens.

Take this as an example: when asked to suggest 10 names for a startup inspired by mythology, all 10 were derived from European mythology (Norse, Celtic, Greek, or Roman), while none was derived from Indian, East Asian, Sumerian (Middle Eastern), or Aztec mythology.

You will find similar answers when you ask who theorised that the Earth is round. While Egyptian, Indian, and Greek scholars had theorised about the shape of the Earth as early as 500 BCE, LLMs mention only the Greek theorists unless specifically asked about other cultures.

If GPT-4 is the first step towards AGI, we should ensure it represents all of us equally and does not treat Europe/USA as “us” and the rest of the world as “the other”.

On that note, we have come to the end of our series on Generative AI! This space is evolving so fast, and there is so much more to learn and keep track of, that we are sure we have missed something despite trying our best to be as exhaustive as possible.

So do write to us with your thoughts/feedback/brickbats! And if you think jamming with us will help you with your startup — please feel free to reach out to us! We are always excited to chat with Indian founders pushing the boundaries of what’s possible! 😊

This is the final part of a four-part series on long-term trends in Generative AI. If you are building in this space or need to bounce ideas around, we are happy to chat.

Thanks to Sibesh from Maya, and to Rohit and Harish from Segmind, for helping proofread this!

Connect with the authors:

Kushal Bhagia (LinkedIn, Twitter, kb[at]allincapital.vc)

Sparsh Sehgal (LinkedIn, Twitter, sparsh[at]allincapital.vc)
