The Next Frontier in AI: A new central nervous system for the Internet

Sebastian Jorna
7 min read · Apr 8, 2023


Bridging the Gap between LLMs and Existing Software Tools

As artificial intelligence continues to rapidly evolve, large language models (LLMs) hold the potential to revolutionize the digital landscape. However, to truly unleash their full potential, we must build an infrastructure that seamlessly connects these powerful AI models with existing software tools. Drawing inspiration from Peter Thiel’s concept of vertical innovation, we can construct a “spinal cord” for the internet, linking LLMs with existing software ecosystems and fostering a new era of intelligent web applications.

Current AI trends

There are four main trends that together create the urgency for this new central nervous infrastructure.

  1. Compressing information into knowledge: Solved
  2. LLM Wars: The marginal cost of intelligence is dropping to zero, fast
  3. LLM Agents: Hooking into the existing software cloud
  4. LLM End-to-End workflows: The distance between the ends is growing

1. Compressing information into knowledge: Solved

By scaling up the transformer architecture from the 2017 landmark paper, “Attention Is All You Need”, and feeding it most of today’s internet, LLMs have condensed vast amounts of human information into accessible knowledge.
How good are these models and their downstream applications actually? Instead of losing ourselves in the avalanche of interesting demos, let’s focus on the actual adoption of these models as a proxy for their perceived value.

OpenAI, the company behind ChatGPT and DALL·E 2, has recently been valued at nearly $20bn. Moreover, not only did ChatGPT break a record by reaching its first 1m users in only 5 days, it also reached 100m users in only 2 months!

GitHub’s Copilot, powered by OpenAI’s Codex, has been used by 1.2 million developers, with 40% of code being written by Copilot when enabled. Microsoft’s integration of GPT-4 into its core 365 Office suite further highlights LLMs’ capabilities.

We have figured out how to compress information into knowledge, and the result is so good that we are seeing it distributed into the real economy at unprecedented speed and scale.

2. LLM Wars: The marginal cost of intelligence is dropping to zero, fast

Unlike operating systems such as iOS or Android, LLMs will, in the limit, converge in terms of capabilities. Why is that? For one, research has shown that LLMs get better the bigger they are and the more data they are trained on. As the race for supremacy among LLMs intensifies, their capabilities will converge because they train on largely the same data: the internet.
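
This scaling behavior has even been quantified: the Chinchilla paper (Hoffmann et al., 2022) fits a parametric loss in which both parameter count and training tokens push performance toward a shared floor. A minimal sketch in Python, using the paper’s fitted constants:

```python
# Chinchilla parametric loss (Hoffmann et al., 2022): bigger models (N) and
# more training tokens (D) both drive the loss toward its irreducible floor E.
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28  # constants fitted in the paper
    return E + A / n_params**alpha + B / n_tokens**beta

# A 70bn-parameter model trained on 1.4 trillion tokens (Chinchilla itself):
print(chinchilla_loss(70e9, 1.4e12))
```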

If this were all there was to it, we would end up with a handful of huge LLMs holding some sort of oligopoly; in essence, the dynamic we have witnessed over the last couple of years (see the published analyses of compute trends in machine learning).

However, there is more to it. Recent research such as SparseGPT has demonstrated that LLMs can be pruned to at least 50% sparsity in one shot, without any retraining and with minimal loss of accuracy; not unlike the synaptic pruning our own brains undergo between infancy and adulthood.
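
SparseGPT itself uses a layer-wise, second-order pruning solver, but the core idea of one-shot sparsification can be illustrated with simple magnitude pruning. A minimal sketch (not the SparseGPT algorithm):

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Zero out the smallest-magnitude entries until `sparsity` of them are zero."""
    k = int(weight.numel() * sparsity)
    threshold = weight.abs().flatten().kthvalue(k).values
    return torch.where(weight.abs() > threshold, weight, torch.zeros_like(weight))

layer = torch.randn(1024, 1024)   # stand-in for one weight matrix of an LLM
pruned = magnitude_prune(layer, 0.5)
print(f"sparsity: {(pruned == 0).float().mean().item():.0%}")  # ~50%
```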

There are many other engineering efforts that make the creation of powerful LLMs more accessible. One of those was the release of Stanford’s 7bn-parameter Alpaca model, a fine-tune of Meta’s LLaMA 7B that cost just ~$600 to reach performance very similar to OpenAI’s GPT-3.5!

As the value and accessibility of LLMs increase, so does supply, driving the marginal cost of intelligence closer to zero.

3. LLM Agents: Hooking into the existing software cloud

LLMs can benefit from the same software tools that amplify human capabilities. Luckily, in the last 10+ years we have seen a massive migration of these software tools from on-prem to the cloud. This means that most are now accessible via APIs and an internet connection!

The Feb 2023 Toolformer paper demonstrated the impressive results of LLMs that teach themselves to use external tools via simple APIs.
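
The paper’s core mechanism is that the model learns to emit inline API-call markup in its own text, which a runtime then executes and splices back in. A minimal sketch of that execution step, with a hard-coded completion standing in for actual model output:

```python
# Sketch of the Toolformer idea: detect inline API-call markup such as
# [Calculator(403*7)], execute it, and splice the result back into the text.
# In the paper, the model itself learns to emit these calls; here the
# completion string is hard-coded for illustration.
import re

def calculator(expression: str) -> str:
    # Only allow digits and basic arithmetic operators before eval'ing
    if not re.fullmatch(r"[\d+\-*/(). ]+", expression):
        raise ValueError("unsupported expression")
    return str(eval(expression))

TOOLS = {"Calculator": calculator}

def execute_api_calls(text: str) -> str:
    def run(match: re.Match) -> str:
        tool, arg = match.group(1), match.group(2)
        return TOOLS[tool](arg)
    return re.sub(r"\[(\w+)\((.*?)\)\]", run, text)

completion = "The warehouse holds 403 pallets of 7 boxes, i.e. [Calculator(403*7)] boxes."
print(execute_api_calls(completion))
# -> "The warehouse holds 403 pallets of 7 boxes, i.e. 2821 boxes."
```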

Just over a month later, OpenAI shook the world again by releasing its plugin store, essentially productizing the Toolformer approach.

On the open-source side, LangChain has been making it easier to leverage LLMs with agents that access and interact with other sources of computation or knowledge, including third-party apps. The meteoric rise in GitHub stars speaks for itself.
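
For flavor, here is roughly what the LangChain agent quickstart looked like in early 2023 (the library’s APIs have evolved since; this assumes the langchain, openai, and google-search-results packages are installed and OpenAI/SerpAPI keys are set):

```python
# Early-2023 LangChain agent quickstart: an LLM that can search the web
# and do math by routing through tools via the ReAct prompting pattern.
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)                           # deterministic completions
tools = load_tools(["serpapi", "llm-math"], llm=llm)  # web search + calculator

# "zero-shot-react-description" wires ReAct-style reasoning around the tools
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("Who is the CEO of OpenAI, and what is 7 to the power of 0.5?")
```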

Hooking into third-party tools is becoming table stakes for LLMs and downstream AI-workflows.

4. LLM End-to-End workflows: The distance between the ends is growing

The amount of work LLM applications can do without human interference is catching many off guard. In the early ChatGPT demonstrations, the models were primarily focused on simple one-shot chat interactions, where they could provide straightforward answers to user queries. However, a number of clever design decisions have had a tremendous effect on increasing the number of independent steps the AI can take between the prompt and the final goal.

One catalyst was the ReAct framework, as described in the paper ReAct: Synergizing Reasoning and Acting in Language Models. This framework of Reasoning and Acting allows for chained questions and answers: the LLM can now break down complex problems into smaller subproblems, just as a human would. The resulting increase in LLM accuracy across benchmarks is quite profound.
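
The pattern is easy to sketch: the model alternates Thought and Action lines, the runtime executes each Action and appends an Observation, and the loop repeats until a final answer emerges. Below is a minimal toy version in which a scripted stand-in replaces the real LLM, and the lookup tool is hypothetical, so the loop runs end to end:

```python
# Toy ReAct loop: Thought -> Action -> Observation, repeated until Final Answer.
import re

def lookup_population(city: str) -> str:
    # Hypothetical tool; a real agent would hit a search or database API.
    return {"Paris": "2.1 million", "Lyon": "0.5 million"}.get(city, "unknown")

TOOLS = {"lookup_population": lookup_population}

SCRIPTED_REPLIES = iter([
    "Thought: I need Paris's population.\nAction: lookup_population[Paris]",
    "Thought: I now know the answer.\nFinal Answer: About 2.1 million people live in Paris.",
])

def fake_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call, so this example is self-contained.
    return next(SCRIPTED_REPLIES)

def react(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = fake_llm(prompt)
        prompt += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        match = re.search(r"Action: (\w+)\[(.+?)\]", reply)
        if match:
            tool, arg = match.groups()
            prompt += f"Observation: {TOOLS[tool](arg)}\n"  # ground the next step
    return "no answer"

print(react("How many people live in Paris?"))
```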

I personally wonder to what extent we can emulate Yann LeCun’s framework for “Autonomous Machine Intelligence” by combining LLMs with existing third-party tools and clever ReAct-based prompt templates. How far can LLMs push us on the road to AGI?

AutoGPT, which took GitHub by storm at the beginning of April, is a powerful demonstration of how far an AI can run between the initial prompt and the delivery of the final project.

Putting it all together — The opportunity to build the central nervous infrastructure for the new internet

There is an opportunity to build the spinal cord for the new internet: the nerve highway that connects the brain to the rest of the body, allowing signals and instructions to be sent back and forth. In this analogy, the new “spinal infrastructure” connects the LLM brains to the rest of our digital world via API calls.

Due to the trends outlined above, I expect to see:

  • An explosion of capable LLMs
  • A need for LLMs to connect to third-party tools
  • An increasing AI-user base interacting with existing software, as end-to-end workflows expand

To facilitate and monetize this new user group, one can create an independent infrastructure that enables massive API communication flow between AIs and existing SaaS businesses, similar in spirit to what Google did for search or Stripe for payments.
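
As a sketch of what such infrastructure could look like at its simplest: a metered proxy that forwards an AI agent’s call to a SaaS API, charges per use, and takes a small platform cut. Everything here (endpoint names, the provider registry, the 2% fee) is hypothetical:

```python
# Minimal sketch of a "spinal" API broker, assuming FastAPI and httpx.
import httpx
from fastapi import FastAPI, Header

app = FastAPI()
PROVIDERS = {"weather": "https://api.example-weather.com/v1/forecast"}  # hypothetical registry
PRICE_PER_CALL = {"weather": 0.002}  # USD per call, set by the provider
PLATFORM_FEE = 0.02                  # the small % tax on the API flow

usage_ledger: dict[str, float] = {}  # ai_client_id -> accrued charges

@app.post("/call/{provider}")
async def call_provider(provider: str, payload: dict, ai_client_id: str = Header(...)):
    # Forward the AI agent's request to the SaaS API on its behalf
    async with httpx.AsyncClient() as client:
        resp = await client.post(PROVIDERS[provider], json=payload)
    # Meter the pay-per-use cost plus the platform's cut
    charge = PRICE_PER_CALL[provider] * (1 + PLATFORM_FEE)
    usage_ledger[ai_client_id] = usage_ledger.get(ai_client_id, 0.0) + charge
    return {"result": resp.json(), "charged_usd": charge}
```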

While OpenAI is building its plugin store, the LLM app-store analogy is a dangerous one. The reason we have only two large app stores is that each is tied to a specific operating system, iOS or Android. Compared to an OS, LLMs’ lack of long-term differentiation means many LLMs will want access to third-party software plug-ins. However, those third-party software providers won’t write dedicated plug-ins for every different LLM. As such, there is a strong case for building an independent, standardized infrastructure.

Quasar — Massive API information exchange between AIs and existing software tools

Win-Win and network effects

Win for LLMs:

  • Easy access to the power of third-party tools as per the toolformer paper
  • A single, standardized platform for finding the best APIs for specific use cases.
  • Enabling API usage without end-user subscriptions to specific software, opting for a pay-per-use model.
  • Live visibility into API usage costs, so AI projects can automatically stay within budget.
  • Abstracting and resolving API authentication issues for LLMs

Win for Existing Software businesses:

  • Monetizing API usage for non-subscription members, with the freedom to set API pricing
  • Streamlining the process for launching API plug-ins based on their existing documentation
  • Getting instant distribution access to millions of monetizable AI workflows. This is especially interesting for the new generation of “dark kitchen” software companies that will exclusively focus on AI users, instead of humans who require expensive UI/UX.

Network effects:

  • The more plug-ins are available, the more attractive it becomes for LLM-based applications.
  • The more LLMs use the platform, the larger the instant, monetizable distribution for plug-in owners.

How we win:

  • Implementing a low-friction, small percentage tax on the API flow that AIs already pay for on a per-use basis.

Conclusion

By building the spinal cord of a new internet, we can create a thriving business with significant network effects, riding the tailwinds of AI-driven innovation. Unlocking the full potential of LLMs will usher in a new era of intelligent web applications, revolutionizing the digital landscape through a unified and independent API infrastructure.

The time is now to seize this opportunity and become the driving force behind the AI-powered future of the internet. Together, we can shape the next generation of technology and ensure a more connected, intelligent, and efficient world.

If you are interested in discussing, or joining in building, the future of our digital realm, feel free to drop me a message.

Twitter: @Sebastian_rtj
