Iceberg sitting in the water. The submerged half is a computer core that suggests Artificial Intelligence while the sky is littered with small squares that resemble apps.

Build Web Apps with No Backend Using LLMs

Alex RG
6 min read · Jul 3, 2023

Imagine building a web app without writing any backend code or setting up a database. Imagine just telling a powerful Large Language Model (LLM) what the app does and letting it handle the rest. Imagine how amazing and convenient that would be. One LLM Backend, a couple of lines of code, some nice CSS, a few buttons and 🧨BAM💥 you’ve got a working app! Perhaps you’d even call that a revolution for web development?

As of this writing there are only a few proof-of-concept (POC) projects out there testing exactly this. If you want to check them out, there are React-Redux-ChatGPT, LLM-Strategy and Backend-GPT: GPT is all you need for the backend (a humorous play on Attention Is All You Need, the paper that introduced transformers).

If three POCs seem like a paltry amount, fear not: as time marches on, LLM capabilities increase, costs go down, and more people pile into the space, we are all but guaranteed to see more prototypes like these pop up until one day they reach production-ready status.

When it comes to innovation, business has much to learn from design. The philosophy in design shops is, ‘try it, prototype it, and improve it’.
Roger Martin — Author

The Benefits of an LLM Backend

The main benefit of an LLM Backend is simple: it reduces the development time and complexity of building web apps. There is no need to write any backend code, set up a database, manage schemas, or implement routes. The LLM handles all of that based on the user’s instructions, whether very detailed or super simple.

This could be a great tool for front-end developers to develop and test against realistic (but fake) backends without having to coordinate with the API team. It certainly would not be the first time different disciplines found themselves at different stages of the same project.

It could also be useful for prototyping and experimenting with different app ideas quickly and easily.

Another benefit of an LLM Backend is that the data is flexibly “stored”. The LLM can create sublists, convert to key-value pairs, manage multiple linked todo lists, etc. Because it was simply fed an idea of the data at the beginning, the data schema is entirely malleable on the fly!

Better yet, the backend logic is also entirely flexible. The LLM can handle any API call that makes sense for the app, even one that was never written before. For example, it can add_five_household_chores() or deleteAllTodosDealingWithFood() or sort_todos_by_estimated_time() and it will do exactly that! (Although adding 5 household chores is an interesting problem 🤔: is it going to add ONE todo with the text “Do five household chores” or 5 todos with one new chore each, such as vacuuming, dusting, etc…)

A todo list app has been tested, but it’s just one example of what’s possible. Other ideas include a translation app, a note-taking app, board game apps such as chess or checkers, a tutoring app for history or languages… Honestly, with LLM backends improving the way they are, the possibilities will (perhaps) one day be limitless. Fortunately for us humans, this approach is not quite there yet.

Two robots playing soccer in a field. The first robots kicks the ball into the net and the second robot hilariously jumps after the ball has already rolled by.
LLMs not quite up to the task, but the effort is there!

How an LLM Backend (currently) works

The web app backend has a single catch-all API route. The backing store is a simple JSON file that contains the initial state of the app. In this example, the app is a todo list with some predefined items.

Here’s the current state of the database:

{
  "todo_items": [
    {
      "title": "buy eggs and BACON for breakfast",
      "completed": true
    },
    {
      "title": "mow that damn lawn",
      "completed": false
    }
  ]
}

The route and payload (along with the JSON database) feed into a templated prompt that interprets the route as state operations on the database. The prompt uses OpenAI’s Codex as the LLM, but any other model that can generate and execute code could work (looking at you, Hugging Face).

For example, if someone wants to add a new item to the list, they can send a POST request to /todos with a body like this:

{
  "title": "write a business proposal",
  "completed": false
}

The LLM then receives a prompt like this:

This is a todo list app.
The current state of the database is:

{
  "todo_items": [
    {
      "title": "buy eggs and BACON for breakfast",
      "completed": true
    },
    {
      "title": "mow that damn lawn",
      "completed": false
    }
  ]
}

The user wants to add a new item with title:
'write a business proposal' and completed false.
Update the state of the database and
Return a response for the client.
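A prompt like this can be rendered by a small template helper; `build_prompt` is an illustrative name of my own, not a function from the POC projects.

```python
import json

def build_prompt(app_description, state, instruction):
    # Assemble the templated prompt from the app description, the
    # current JSON database, and the interpreted user instruction.
    return (
        f"This is a {app_description}.\n"
        "The current state of the database is:\n\n"
        f"{json.dumps(state, indent=2)}\n\n"
        f"{instruction}\n"
        "Update the state of the database and\n"
        "Return a response for the client."
    )
```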

The LLM then generates and executes some code to update the state and return a response, like this:

{
  "state": {
    "todo_items": [
      {
        "title": "buy eggs and BACON for breakfast",
        "completed": true
      },
      {
        "title": "mow that damn lawn",
        "completed": false
      },
      {
        "title": "write a business proposal",
        "completed": false
      }
    ]
  },
  "response": {
    "message": "Added a new item with title 'write a business proposal' and completed false",
    "success": true
  }
}

The client then updates the UI with the new state and displays the response message to the user.
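On the server side, applying that output comes down to parsing the model’s JSON, persisting the `state` half, and forwarding the `response` half to the client. A minimal sketch, assuming the LLM returns valid JSON in the shape shown above (`apply_llm_result` is a hypothetical name):

```python
import json

def apply_llm_result(raw_output, db_path):
    # Parse the model's JSON output, overwrite the JSON database file
    # with the new state, and return only the client-facing response.
    result = json.loads(raw_output)
    with open(db_path, "w") as f:
        json.dump(result["state"], f, indent=2)
    return result["response"]
```

In practice you would also validate the parsed output before trusting it, since nothing forces the model to return well-formed JSON every time.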

Of course, it doesn’t look like much when you simply read the code the LLM produces. There’s nothing quite like experiencing the magic of a functional application without a hand-written backend. That’s really something special!

The Challenges of an LLM Backend

Of course, LLM Backend is not perfect. There are still many challenges and limitations to overcome before it can be used for production-ready apps.

Here are a few of the limitations that a real backend has already solved but that challenge LLMs in their current form.

Technical Challenges

  • Maintaining user state: Keeping track of different users and their data, and handling authentication and authorization efficiently.
  • Tackling complex tasks: Handling tasks that require more than one API call or involve multiple data sources, and ensuring the LLM deals with edge cases and errors gracefully.
  • Data persistence: Ensuring data is not lost or corrupted, and providing a way to back up and restore it if needed.
  • Single point of failure: Keeping the LLM available and reliable, and handling network failures or model outages.
  • Token limit: Handling inputs and outputs that exceed the LLM’s token limit, and chunking and concatenating data without losing information or coherence.
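The token-limit point can be mitigated naively by packing lines into size-bounded chunks. A rough sketch, under the common (but approximate) assumption of about four characters per token; `chunk_text` is an illustrative helper, not part of any POC:

```python
def chunk_text(text, max_tokens, chars_per_token=4):
    # Naive chunker: split on newlines and pack lines into chunks that
    # stay under an estimated character budget. Real tokenizers differ,
    # so the chars_per_token ratio is only a rough heuristic.
    budget = max_tokens * chars_per_token
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > budget:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks
```

A production system would count real tokens with the model's own tokenizer and split on semantic boundaries (whole JSON records, not raw lines).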

Security Challenges

  • Defending against malicious users: Preventing users from injecting malicious instructions or accessing sensitive data, and ensuring the LLM does not generate harmful or inappropriate responses.

Cost and Performance Challenges

  • Cost and performance: Optimizing the cost and performance of using an LLM as a backend, and dealing with the latency and throughput of LLM API calls.

These are not insurmountable problems, but they require careful consideration and creative solutions. Fortunately, as LLMs become more advanced, accessible, and affordable, these challenges should become easier to solve.

Three stacks of servers in their metal shelving sitting on top of a lush green hill with a sunrise off to the left. Light sky with only a few clouds and distant forests in the background.
LLM Backend Development has a bright future ahead!

The Future of Backend Development

LLM Backend is a glimpse into the future of backend development. It shows how LLMs can handle most of the backend logic and memory for web apps, while developers can focus on the frontend design and user experience.

This does not mean that LLMs will replace traditional backends entirely, but rather complement them as a new layer in the tech stack. Developers might one day use LLMs for rapid prototyping, experimentation, and customization, while relying on conventional backends for stability, security, and scalability.

Some systems just don’t deserve the time investment of a full-blown app up front. Worse, good ideas never get started because the upfront cost is high.
Simon Hørup Eskildsen — Developer & Consultant

Want more info?

Using LLM in the Frontend in combination with the Backend could really open up some interesting avenues. If you’d like to read more about it, check out The Future of Applications.
If you want to dive into the Backend stuff, go read the blog post about the React-Redux-ChatGPT example here.

_________________________________________________________________

Blog post slug: build-web-apps-with-no-backend-using-llm

Meta Description: Explore the revolutionary concept of LLM Backend and its potential to reshape backend development. Discover the flexibility, adaptability, and possibilities it offers, and get ready to challenge traditional paradigms. Dive into the world of language-driven backends and unlock a new era of software innovation.
