Artificial Intelligence — How to create the new generation of applications

Mayda Kurdian
6 min read · Jun 24, 2024


How It Started

In November 2022, OpenAI not only released its new generative AI model but also introduced a chat interface to interact with it.
This addition made an enormous difference!

It democratized access to AI, allowing anyone to experience its power without any technical knowledge.

Only two months later, ChatGPT had broken all adoption records, reaching 100 million users faster than any consumer app before it!

Other companies accelerated their AI efforts, so today we have many AI models (LLMs) and tools at our disposal.

Generative AI began transforming everything, particularly how we conceive solutions and how we create the new generation of applications.

How It’s Going

Let’s look at a case to understand the nature of these changes.

Building an Application in 2022

My client, Walter, asked me to develop a nutrition application to help users simplify meal preparation and achieve their dietary goals.

The key requirement was:

Users should be able to input the ingredients in their fridge, and the app would provide recipes based on them, considering their dietary profiles and goals.

1st Version

I started working on this application in mid-2022.
The app follows a classic schema:

  • Users can enter their profiles and nutritional requirements, which will be stored in the database.
  • When they need a recipe, they input the ingredients they have.
  • The program then finds recipes in the database that can be made with those ingredients and displays them to the user.

Note that we have to collect ingredients and a large set of recipes to populate the database and keep that information up to date.
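The matching step at the heart of version 1 can be sketched like this. It is a minimal illustration; the data structures and field names are hypothetical, and in the real app the recipes would come from the database:

```python
# Sketch of the version-1 recipe lookup: return every recipe
# whose ingredients are all among the ones the user entered.

def find_recipes(recipes, available_ingredients):
    """Return recipes whose required ingredients are all available."""
    available = {i.lower() for i in available_ingredients}
    return [
        r for r in recipes
        if {i.lower() for i in r["ingredients"]} <= available
    ]

recipes = [
    {"name": "Guacamole", "ingredients": ["Avocado", "Lime", "Salt"]},
    {"name": "Omelette",  "ingredients": ["Eggs", "Butter"]},
]

print(find_recipes(recipes, ["avocado", "lime", "salt", "eggs"]))
```

Note that this exact-match approach is what makes the data-maintenance burden so heavy: the app only knows the ingredients and recipes we loaded into it.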

This is what the app does. I built the app and installed it for Walter so he could start testing it!

2nd Version

A few days later, I called Walter to ask how it was going.

“Pretty well,” said Walter. “However, I have a little problem: I input avocados into the app, but it’s not giving me any recipes. I have a tree full of avocados in my backyard, and they’re going to waste!”

“Let’s see,” I said.

I looked into the problem and found that the ingredient “avocado” was indeed in the database but listed as “aguacate.” That’s another name for avocado; in fact, in some places, it’s also known as “palta”.

To avoid this problem, we needed to add a new feature to the app: synonym handling for ingredients.
Therefore, I modified the database structure to support synonyms, adapted the program logic to manage them, and delivered version 2 to Walter.
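The synonym fix boils down to a normalization step applied before lookup. Here is a sketch; the mapping below is illustrative, and in the real app it would live in the database:

```python
# Sketch of ingredient-name normalization via a synonym table.
# Every known synonym maps to one canonical ingredient name.
SYNONYMS = {
    "aguacate": "avocado",
    "palta": "avocado",
    "courgette": "zucchini",
}

def normalize(ingredient):
    """Map any known synonym to its canonical name."""
    name = ingredient.strip().lower()
    return SYNONYMS.get(name, name)

print(normalize("Aguacate"))  # avocado
print(normalize("Lime"))      # lime
```

With this step in place, users can enter “aguacate” or “palta” and the lookup still finds recipes filed under “avocado”.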

3rd Version

A week later, Walter called me and said, “Mayda, a friend tested the app. He is on the keto diet, but the app doesn’t give him appropriate recipes.”

Although I had considered various dietary needs, I missed keto.
So I had to go back, update the database to tag keto-friendly ingredients, and find a new set of keto recipes to add.

Building the Application in 2023

If Walter had asked me for this application in 2023, I would have done it differently.

In 2023, I knew AI models (LLMs) were available to include in my apps.
I also knew these models have extensive knowledge about ingredients, diets, and countless recipes, knowledge that is refreshed as new model versions are trained.

So, I decided to incorporate an LLM into my application to manage ingredients and recipes. To do that, I had to interact with the model, create prompts, and process results. However, it saved me from having to program a lot of functions and eliminated the need to maintain a large amount of data in my app.

This approach would let me deliver a much more powerful app at a lower cost.

I would still use traditional methods to record user profiles, dietary preferences, and restrictions. This information is user-specific, must be maintained by the app, and is vital as input to the LLM.

My new app schema works differently, mainly in these aspects:

  • I don’t need to populate the database with ingredients and recipes. I will use all the knowledge that the LLM already has.
  • When users need a recipe, they enter their ingredients, as in version 1. However, now the app will do something different: It will create a prompt with the user profile + ingredients + some instructions, and ask the LLM for recipes.
  • The app will receive and format the recipes generated by the LLM, show them to the user, and allow them to keep interacting: users can ask for more suggestions or information.
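The prompt-building step above can be sketched as follows. The profile fields and instruction text are hypothetical, and the actual LLM call (omitted here) would go through whichever provider the app uses:

```python
# Sketch of assembling the recipe prompt from
# user profile + ingredients + instructions.

def build_recipe_prompt(profile, ingredients):
    """Combine the stored profile with the user's ingredients."""
    restrictions = ", ".join(profile["restrictions"]) or "none"
    return (
        "You are a nutrition assistant.\n"
        f"User profile: {profile['diet']} diet, goal: {profile['goal']}, "
        f"restrictions: {restrictions}.\n"
        f"Available ingredients: {', '.join(ingredients)}.\n"
        "Suggest up to 3 recipes that use only these ingredients and fit "
        "the profile. For each, list the steps and an estimated calorie count."
    )

profile = {"diet": "keto", "goal": "weight loss", "restrictions": ["lactose"]}
prompt = build_recipe_prompt(profile, ["avocado", "eggs", "spinach"])
print(prompt)
```

This string would then be sent to the LLM, and the response parsed into recipe cards for the UI. Notice how the keto case that forced version 3 of the old app now costs nothing: the diet is just another line in the prompt.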

Additionally, I could include other valuable functions at little cost, such as calorie counts, ingredient substitutions, preparation methods, and more.

With AI, I could build my application faster, at a lower cost, and with far more power!

Despite all these benefits, I was not satisfied that users had to enter their ingredients every time they wanted recipes. How could I simplify that?

What if I add this new input method?

If the user takes a photo of their fridge contents, the app can infer the ingredients based on the photo. The user might need to adjust the inferred info, but this simplifies the task greatly. I could even add options for input via video or audio too!
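Preparing the photo for a vision-capable LLM can be sketched like this. I use an OpenAI-style multimodal message format here as an assumption; adapt the payload to whatever provider the app actually uses:

```python
import base64

# Sketch of packaging a fridge photo for a vision-capable LLM,
# using an OpenAI-style multimodal chat message (an assumption).

def build_vision_message(image_bytes):
    """Build a user message asking the model to list visible ingredients."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text",
             "text": "List the food ingredients visible in this fridge photo."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }

msg = build_vision_message(b"\xff\xd8fake-jpeg-bytes")
print(msg["content"][0]["text"])
```

The model's reply becomes the pre-filled ingredient list, which the user reviews and corrects before the app asks for recipes.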

I ended up with an app that is easier to build, use, and maintain, and infinitely more powerful!

Walter was over the moon!

This example shows how AI has transformed how we conceive and build technological solutions. Incorporating AI components into our applications has enormous benefits:

For developers: It simplifies creation and maintenance, allowing us to add features that were previously impossible.

For users: They get a much more powerful application with a greatly improved UX at a lower cost.

The first step to imagining and creating the next generation of applications is to deeply understand the capabilities and limitations of AI.

For this, the best approach is to experiment!
It’s not even necessary to program; you can experiment with chat interfaces and playgrounds offered by different LLMs.

Some AI features that might spark your creativity

Natural Language Processing: Text analysis, text generation, summarization, extraction of main ideas, searching for specific content, and translation into various languages.

Interaction: Maintain and process context, and resolve requests to sustain a conversation.

Vision: Image and video analysis (identify objects, people, situations, contexts, and emotions) and image and video generation (from text, and vice versa).

Audio: Transcription, translation, conversation extraction, emotion analysis, and audio generation.

Data Analysis: Identify patterns, trends, and complex relationships; suggest and perform analyses; make predictions.

Problem Solving: Identify problems, and consider and evaluate options.

Learning: Adapt to new situations.

Final Thoughts

We are witnessing a paradigm shift.

Applications are evolving their role: they not only automate tasks and process data but also become intelligent assistants that expand the capabilities and possibilities of their users.

Many people wonder if AI will take jobs away from knowledge workers. The answer is maybe, but only for those who do not use AI to enhance their skills and expand their capabilities.

Therefore, we now face two critical responsibilities:

IT professionals must provide this new generation of applications to our users.

Users should demand them from us.

AI Series

This article is part of a series on the fundamentals of AI. I will be posting more in-depth explorations of these topics, analyzing various use cases, including some that my team and I have worked on.

If you’re a techie, this can help you determine when and how to use AI in your apps and realize its value.

If you’re not, you’ll understand its potential and know what to ask for and expect from the new generation of AI-driven applications.

Found this article useful? Follow me (Mayda). I post periodically about App Design, AI, UX, R&D, and Neuroscience, aiming to turn complexity into clarity — first in my mind, and hopefully in yours.


Mayda Kurdian

Engineer in Computer Science, creating technology for people. Design, AI, UX, R&D. Passionate about turning complexity into clarity. Writer in progress.