The Future of Application Development is Generative

Ben Lower
Fixie.ai Blog
Aug 4, 2023 · 6 min read


Unless you’ve been on a six-month technology detox or one of those extended isolation retreats, you’ve no doubt heard about how AI is eating the world. More specifically, how Large Language Models (LLMs) have gotten good enough to power truly natural-language computing. We believe that LLMs are transforming how we think about building applications.

We believe LLMs, over time, will enable developers to write and maintain less code. Instead of developers and designers having to create and maintain the user interface (UI) for applications, new software development frameworks built to harness the power of LLMs will be able to generate the user interface for the application. Gone will be the days of endlessly tweaking and testing new UI elements based on ever-evolving user requirements. Generative UIs will enable end-users to have dynamic, bespoke experiences tailored to their exact needs…all in the moment.

Three Building Blocks for Generative Apps

We think there are three major building blocks that are needed to power generative AI applications. Two of these building blocks are becoming established best practices in the AI community. The third is more experimental but something we think will become commonplace. These building blocks are: Document Question + Answering (DocsQA), Tools, and Generative UI. Let’s look at each of them and why they are important.

Building Block #1: Document Question + Answering (DocsQA)

LLMs: Finite Knowledge, Stuck in Time

Every LLM is trained on a particular set of information. Granted, the amount of information is massive (at or near internet scale), but the data captured represents a snapshot in time. This means that models must be updated as new information becomes available. This can be a limiting factor if you are trying to use a model to help with a specific domain that is new or has recently changed. For example, models that came out a couple of years ago don’t have any training on LK-99 (the newly claimed room-temperature superconductor), or other current events.

Adding Contextual Knowledge with DocsQA

Remember that scene in The Matrix when Neo instantly acquires the ability to do kung-fu? DocsQA is effectively the same thing (note: this is also known as Retrieval Augmented Generation (RAG) in the AI community).

A clip from the movie “The Matrix” where Neo states, “I know kung-fu.”

DocsQA gives the LLM access to new knowledge and information. Let’s take a look at how it works.

Overview of DocsQA. A diagram that shows a simple interaction between an end-user and an AI-powered application and how this is enabled by an LLM and content that is chunked into a vector database.
Overview of DocsQA

In this example, we have crawled all the support and product web pages for a company and turned them into what we call a “corpus”. In practice, our corpus could be made up of content from documents, PDFs, websites, or even media like audio and video. The corpus consists of chunks of all the content. The chunks are turned into embeddings in a vector database.

When an end-user asks about returning a damaged product they received, our application is able to find relevant information from the corpus and then have the LLM generate the right response. We’ve effectively taught the LLM about our company’s return and support policies and procedures. Perhaps not as fun as kung-fu, but much more practical in this case.
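The retrieval flow above can be sketched in a few lines. This is a minimal, self-contained illustration: the `embed()` function here is just a term-frequency stand-in for a real embedding model, and the two corpus chunks are invented examples standing in for crawled support pages.

```typescript
// Sketch of the DocsQA retrieval step. embed() is a toy stand-in for a
// real embedding model: it builds a term-frequency vector so the example
// runs without any external service.

type Chunk = { text: string; vector: Map<string, number> };

function embed(text: string): Map<string, number> {
  const vector = new Map<string, number>();
  for (const token of text.toLowerCase().match(/[a-z]+/g) ?? []) {
    vector.set(token, (vector.get(token) ?? 0) + 1);
  }
  return vector;
}

// Cosine similarity between two sparse vectors.
function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, normA = 0, normB = 0;
  for (const [token, weight] of a) {
    dot += weight * (b.get(token) ?? 0);
    normA += weight * weight;
  }
  for (const weight of b.values()) normB += weight * weight;
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

// Index the corpus: each chunk of crawled content becomes a vector.
const corpus: Chunk[] = [
  'Damaged products can be returned within 30 days for a full refund.',
  'Our stores are open Monday through Saturday, 9am to 6pm.',
].map((text) => ({ text, vector: embed(text) }));

// At query time, retrieve the most relevant chunk; in a full DocsQA
// pipeline this chunk is passed to the LLM as context with the question.
function retrieve(question: string): string {
  const queryVector = embed(question);
  return corpus
    .map((chunk) => ({ chunk, score: cosine(queryVector, chunk.vector) }))
    .sort((a, b) => b.score - a.score)[0].chunk.text;
}

const context = retrieve('How do I return a damaged product?');
```

In production the toy `embed()` would be replaced by a real embedding model and the in-memory array by a vector database, but the shape of the flow — chunk, embed, retrieve, then prompt — stays the same.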

Building Block #2: Tools

We just looked at how DocsQA can give LLMs access to more knowledge and information. Now, let’s look at the next building block: Tools. This refers to enabling the application to take action on behalf of the end-user by calling out to conventional APIs or software functions.

Here’s an example of what this might look like in our application:

Tools overview. Diagram shows an end-user engaging with an AI-powered application. Also shows pseudocode for how external APIs can be provided to the LLM as tools to add more functionality to the application.
Overview of Tools

Tools can be used standalone or in conjunction with DocsQA. For example, we might want our app to be able to give great answers based on a corpus of support and help documentation and also be able to use tools to complete meaningful actions (e.g. start the return process).

In the example above, we are providing new tools to the LLM. These tools are based on conventional APIs and take our app from simply informing the end-user about how to do a return to enabling action in the moment by generating the return label and letting the customer get on with their day.
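A minimal sketch of this pattern looks something like the following. The tool names, descriptions, and the `generateReturnLabel` function are all illustrative placeholders; in a real app each tool would wrap a conventional API.

```typescript
// Sketch of the Tools pattern: register tools, describe them to the model,
// and execute the structured call the model emits.

type Tool = {
  description: string;
  func: (args: Record<string, string>) => string;
};

// Illustrative tools; in practice each func would call a real API.
const tools: Record<string, Tool> = {
  generateReturnLabel: {
    description: 'Create a prepaid return shipping label for an order.',
    func: ({ orderId }) => `label-for-${orderId}.pdf`,
  },
  lookupOrder: {
    description: 'Fetch the status of an order by its ID.',
    func: ({ orderId }) => `Order ${orderId}: delivered`,
  },
};

// The tool descriptions go into the prompt. When the model decides a tool
// is needed, it emits a structured call like the JSON below, and the
// application executes it on the user's behalf.
function dispatch(modelOutput: string): string {
  const call = JSON.parse(modelOutput) as {
    tool: string;
    args: Record<string, string>;
  };
  return tools[call.tool].func(call.args);
}

const result = dispatch(
  '{"tool": "generateReturnLabel", "args": {"orderId": "12345"}}'
);
```

The key design point is that the model never touches the API directly: it only names a tool and its arguments, and the application stays in control of what actually executes.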

Building Block #3: Generative UI (Gen UI)

The third building block nicely fits in and extends what we’ve seen above with DocsQA and Tools. Consider the following:

Generative UI (“Gen UI”) diagram. Shows pseudo code for providing new JSX components that the LLM can use to dynamically create a rich UI in the application.
Overview of GenUI

Here we have provided a set of JSX components to the LLM and enabled a richer user experience to be dynamically created in response to what the end-user needs. Instead of rendering the UI only in markdown, we are now providing radio buttons for selection and rendering the shipping label in a more visually appealing way.

We believe GenUI will fundamentally change how applications get built and consumed. When the model can produce UI dynamically, the app becomes much more flexible (versus hand-crafting the UI). This means apps can handle a much wider range of use cases. We are no longer limited by what we are able to implement by hand, but, instead, only by the model’s capabilities and the breadth of the components we provide.
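The core mechanic can be sketched without a framework at all. In this illustrative example (component names and props are invented, and render functions return HTML strings so the sketch stays self-contained), the model’s output names a registered component plus its props, and the app renders it — a React app would map the same output to real JSX components instead.

```typescript
// Sketch of GenUI: the app registers a small set of UI components, and the
// model responds with a component choice instead of plain markdown.

type Props = Record<string, string | string[]>;

// Illustrative component registry; a real app would register JSX components.
const components: Record<string, (props: Props) => string> = {
  RadioGroup: ({ options }) =>
    (options as string[])
      .map((o) => `<label><input type="radio" name="choice"/>${o}</label>`)
      .join('\n'),
  ShippingLabel: ({ orderId }) =>
    `<div class="label">Return label for order ${orderId}</div>`,
};

// Parse the model's structured output and render the component it chose.
function renderModelOutput(modelOutput: string): string {
  const { component, props } = JSON.parse(modelOutput) as {
    component: string;
    props: Props;
  };
  return components[component](props);
}

const ui = renderModelOutput(
  '{"component": "RadioGroup", "props": {"options": ["Refund", "Exchange"]}}'
);
```

The app’s flexibility then scales with the registry: adding a component to the set the model knows about adds a whole class of experiences without hand-writing new screens.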

Introducing AI.JSX

AI.JSX is a new framework designed by Fixie from the ground up to be completely natural for anyone familiar with React and JSX. AI.JSX makes each of these three building blocks easily accessible. AI.JSX has support for DocsQA, enables declaring and giving the LLM access to Tools, and makes it easy to create UI components and then have the LLM use those to do GenUI. We think this makes AI.JSX the most exciting way to build AI-powered applications.

AI.JSX, open sourced under an MIT license, makes it easy to bring conversational AI into new or existing applications written in JavaScript. AI.JSX (like LLMs) is still evolving as we work with the developer community and our customers to figure out the best way to bring the tremendous power of AI into existing and completely new apps.

Welcoming JavaScript + React Developers to the Party

Most of the action for developers and data scientists working with AI and machine learning (ML) has been in the Python ecosystem. Python is a terrific language and there are tons of libraries and resources available to work with LLMs and to build applications on top of them.

We love Python and make extensive use of it here at Fixie. However, the action in creating leading end-user experiences is in JavaScript and React. That’s why we built and released AI.JSX: to empower the tens of millions of JS + React developers to easily add LLMs to their collective tool belt. AI.JSX seamlessly integrates with existing or new React apps. Our goal is for AI.JSX to feel completely “javascripty” and delightful to JS + React devs.

An AI.JSX application is built using JSX components. Prompts for the LLM are also constructed via components, which provides a big productivity boost. The JSX components in an AI.JSX app can be combined to generate complex prompts, LLM integrations, and even UI elements. Because it’s just JSX, AI components can be blended with UI elements. This enables us to declare some UI components in React and then let the AI generate the UI for our application.
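The prompts-as-components idea can be illustrated with plain functions standing in for JSX components, so the sketch runs without AI.JSX itself. The component names and the “Acme Corp” persona here are invented for illustration.

```typescript
// Sketch of composing a prompt from components. Each "component" returns a
// fragment of the prompt; fragments compose the same way UI components do.

const SystemMessage = (text: string) => `[system] ${text}`;
const UserMessage = (text: string) => `[user] ${text}`;

// A higher-level component composed from smaller ones, just as a React
// component is composed from its children.
const SupportAgentPrompt = (question: string) =>
  [
    SystemMessage('You are a helpful support agent for Acme Corp.'),
    SystemMessage('Answer only from the provided policy documents.'),
    UserMessage(question),
  ].join('\n');

const prompt = SupportAgentPrompt('How do I return a damaged product?');
```

The productivity win is the same one React gives UI: shared prompt fragments become reusable, composable pieces instead of hand-concatenated strings scattered through the codebase.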

Here are a couple images that show AI.JSX and GenUI in action:

Example: Rendering without GenUI

Animated GIF showing a chat experience that is text/markdown only. Not using Generative UI.
Text-only Rendering

Example: Rendering with GenUI

Animated GIF showing a chat experience that uses Generative UI to have the UI be more visually appealing and better designed than text/markdown only.
Generative UI in Action

Start Building with AI.JSX

We are in the early innings of generative apps and things are moving quickly. We want your help and input in evolving AI.JSX! You can start building with AI.JSX today, and we welcome contributions. Check out the Getting Started Guide and join us on Discord if you have questions or suggestions on how we can improve AI.JSX.
