Integrating LLMs Into Mobile Applications Using Gemini and Flutter

Amorn Apichattanakul · Published in KBTG Life · Jun 19, 2024

(Image generated by LLMs)

LLM stands for Large Language Model: a model trained on a very large dataset using a type of machine learning called "deep learning." This enables the model to detect language patterns and knowledge and answer questions effectively. Creating an LLM is nearly impossible for startups and small to medium enterprises due to the extensive data, time, and resources required; training can take over 5–6 months. For example, training GPT-4 used around 50 gigawatt-hours. That is approximately 0.02% of the electricity California generates in a year! (Though I will set aside the topic of AI and the energy crisis for now, since it's a complex area that I need to study further.)

For this reason, only large companies have the resources to develop such models, for example OpenAI, backed by Microsoft, and Google with Gemini. In this article, I will focus on Gemini because it supports both iOS and Android, as well as Flutter, which extends to four more platforms (Windows/Mac/Linux/Web).

Why not use OpenAI? Both have their strong points, but I prefer Gemini for its developer-friendly tooling: Google ships an official client SDK, whereas with GPT you have to call backend services over the network and pass your data into their system yourself, which is more challenging for mobile developers.

Let’s begin!

To get started, you need to register at Google AI Studio, click on “Create API key,” and replace YOUR_API_KEY in the following cURL command to check if it works:

curl \
-H 'Content-Type: application/json' \
-d '{"contents":[{"parts":[{"text":"Explain how AI works"}]}]}' \
-X POST 'https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash-latest:generateContent?key=YOUR_API_KEY'

This was my first result. It's that easy! Just register to get a key and you can start using your first LLM.
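For reference, a successful call returns JSON in roughly this shape; the structure below is abbreviated and the text value is illustrative:

```json
{
  "candidates": [
    {
      "content": {
        "parts": [{ "text": "AI works by learning patterns from data..." }],
        "role": "model"
      },
      "finishReason": "STOP"
    }
  ]
}
```

The generated answer sits in `candidates[0].content.parts[0].text`, which is what the Flutter SDK exposes as `response.text`.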

Now that you can use cURL to get information, let’s implement it into our application. To start with Flutter, follow these steps to integrate Generative AI:

1. Add Dependency: Open your pubspec.yaml file and add the following dependency (the package is named google_generative_ai on pub.dev):

dependencies:
  google_generative_ai: latest_version

2. Install the Dependency: Run flutter pub get to install the new dependency

3. Implement in Flutter Widget: Here is a basic example of how to use the generative AI package in a Flutter widget:

import 'package:google_generative_ai/google_generative_ai.dart';

const apiKey = ...;

void main() async {
  final model = GenerativeModel(
    model: 'gemini-1.5-flash-latest',
    apiKey: apiKey,
  );

  final prompt = 'Explain how AI works';
  final content = [Content.text(prompt)];
  final response = await model.generateContent(content);

  print(response.text);
}

And there you have it: your first LLM in Flutter!

Okay… now we have a chatbot, so what? What's the difference compared to installing a regular chatbot, you might wonder. The power of an LLM lies in how you prompt it.

Prompting Technique

This is general knowledge that you can apply to any GenAI tool, not just Gemini. To get the most effective answers, you need techniques for guiding the model so that it produces the answer you really want. Here are some tips:

Be Clear and Specific

State exactly what you want, provide some context, and explain the details of what you want to get.

For example, don’t say “I have an error popup in my application, what do I do?” without any context or background. The model won’t be able to answer you correctly. Even a human would have trouble understanding when you present the issue out of nowhere.

Instead, say "I used the XX application and went to the login page. I entered the correct username and password, but there was a popup showing a connection problem. What should I do?" This question is clear, specific, and provides background, enabling the model to help you effectively. Think of it as talking to a human: give them enough context to understand the situation and assist you.

Add Examples

Give the model a few examples of what you expect to receive; this is called providing few-shot examples.

For instance, don’t just ask a question if you expect the answer to be in a specific format.

If you want the answer to be a list showing which item is a fruit or a vegetable, you need to provide a few examples first.
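For instance, a few-shot prompt for that fruit-or-vegetable task could look like this (the items are just an illustration):

```
Classify each item as a fruit or a vegetable.

Item: apple
Answer: fruit

Item: carrot
Answer: vegetable

Item: tomato
Answer:
```

The model picks up the pattern from the first two examples and completes the last line in the same format.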

Use Delimiters

I recommend using delimiters such as quotation marks (“”), angle brackets (<>), or colons (:) to identify the context and data that you want to provide.

In use cases where you provide extra information for the model to work with, delimiters help it understand where the data and context begin and end. A prompt without delimiters may still work, but in some cases it will not.

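As a sketch of the idea, a prompt can mark off user-provided data with triple quotes so the model never confuses it with the instructions (the review text is made up):

```
Summarize the customer review delimited by triple quotes in one sentence.

"""
I love this app. Opening an account took five minutes and the interface
is clean, but I wish it supported dark mode.
"""
```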

These are the main techniques, but there are a few more that can make your chatbot even more appealing and human-like.

Assign Persona and Story

Give the model a persona and story before asking any questions to make it more of a character. Additionally, you can provide constraints on what to do and what not to do.

LLMs have something called ‘system instructions,’ which create a persona for the bot and provide it with specific knowledge to answer questions more effectively. In Gemini, you can add this information at the top where ‘system instructions’ are specified.

In my case, I assigned a 'teacher' role to the model and gave it instructions on how to answer, along with a constraint to use only simple words.
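A system instruction along those lines might read like this; the wording is illustrative, not my exact production prompt:

```
You are a patient teacher who explains things to complete beginners.
Answer every question step by step.
Use only simple words and avoid technical jargon.
If you are not sure about an answer, say so instead of guessing.
```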

Provide Customized Data

You can provide customized data before the user even asks a question, tailoring the knowledge to give specific answers.

I provided JSON data to the model. You can feed this data behind the scenes so that when the user asks a question, the model can answer based on the pre-loaded knowledge and assist the customer more easily. Of course, this JSON data comes from the API call I made earlier.
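As an illustration, the hidden context could be a small JSON object like this (the fields here are hypothetical):

```json
{
  "name": "Amorn",
  "account_balance": 30000,
  "last_login": "2024-06-01"
}
```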

After seeing the techniques mentioned above, you might wonder how they work in Flutter. Let me demonstrate how to apply all these techniques in a Flutter application.

I will use the case of the MAKE by KBank application, which I worked on, to create an AI assistant for users, making it more intelligent by applying the techniques mentioned above.

This is ‘MAKE,’ the mascot of MAKE by KBank, who is a cheerful 10-year-old boy

1. Be Clear and Specific: Since an LLM is trained on broad data, if we allow it to answer anything it wants, the responses might not satisfy customers. By adding some constraints, we can ensure it sticks to the desired answers. Since we can't guarantee that users will always write clear and specific prompts, what should we do? As developers, we will make it clear for the user instead!

import 'package:google_generative_ai/google_generative_ai.dart';

const apiKey = ...;

// Add the pre-prompt here; the user won't see it, but the model will
// follow it
const prompt = '''
Your task is to chat with customers who need help with a financial mobile
application called "MAKE by KBank". If users ask questions not related to
finance, customer information, or the application, respond that you only
know about finance and banking.
''';

void main() async {
  // Add the pre-prompt by using 'systemInstruction'.
  // There are different kinds of Content, but we use
  // Content.system for this case
  final model = GenerativeModel(
    model: 'gemini-1.5-flash-latest',
    apiKey: apiKey,
    systemInstruction: Content.system(prompt),
  );

  final question = 'Explain how AI works';
  final content = [Content.text(question)];
  final response = await model.generateContent(content);

  print(response.text);
}

From the sample above, the model would answer with something like "I can't explain it," because the question falls outside the domain we specified. Now that you have provided clear, specific domain knowledge and some constraints, here's what it looks like:

Example from AI Studio

2. Add Examples: Provide examples of the answers you want beforehand so the model knows how to follow them. I will switch the model to a chat format to reflect the real use case of the chat functionality in the app.

import 'package:google_generative_ai/google_generative_ai.dart';

const apiKey = ...;

// Add the pre-prompt here; the user won't see it, but the model will
// follow it
const prompt = '''
Your task is to chat with customers who need help with a financial mobile
application called "MAKE by KBank". If users ask questions not related to
finance, customer information, or the application, respond that you only
know about finance and banking.
''';

void main() async {
  final model = GenerativeModel(
    model: 'gemini-1.5-flash-latest',
    apiKey: apiKey,
    systemInstruction: Content.system(prompt),
  );

  // Use a chat with history that includes few-shot examples to show
  // the model how to respond
  final chat = model.startChat(history: [
    Content.text('hello'),
    Content.model([TextPart('''Hello, I'm MAKE AI,
the financial assistant in your hands. What can I help you with today?''')]),
    Content.text('Teach me how to hack a mobile financial app'),
    Content.model([TextPart('''Sorry, I can't help with that since I only know
about finance and how to use our app''')]),
  ]);
  final message = 'I want to transfer money from my account to my friend';
  final response = await chat.sendMessage(Content.text(message));
  print(response.text);
}

3. Use Delimiters: Use delimiters to separate information when you want to feed personalized data.

import 'package:google_generative_ai/google_generative_ai.dart';

const apiKey = ...;

// Add the pre-prompt here; the user won't see it, but the model will
// follow it
const prompt = '''
Your task is to chat with customers who need help with a financial mobile
application called "MAKE by KBank". If users ask questions not related to
finance, customer information, or the application, respond that you only
know about finance and banking.
''';

void main() async {
  // Call an API to get user information and feed it into the model
  final userInformation = '''
  {
    "name": "Amorn Apichattanakul",
    "age": 30,
    "gender": "male",
    "account_balance": 30000
  }
  ''';
  final promptWithData = '''
$prompt

Use the JSON below for customer information
JSON:
$userInformation
''';

  final model = GenerativeModel(
    model: 'gemini-1.5-flash-latest',
    apiKey: apiKey,
    systemInstruction: Content.system(promptWithData),
  );

  // Use a chat with history that includes few-shot examples to show
  // the model how to respond
  final chat = model.startChat(history: [
    Content.text('hello'),
    Content.model([TextPart('''Hello, I'm MAKE AI,
the financial assistant in your hands. What can I help you with today?''')]),
    Content.text('Teach me how to hack a mobile financial app'),
    Content.model([TextPart('''Sorry, I can't help with that since I only know
about finance and how to use our app''')]),
  ]);
  final message = 'I need to check my account balance. How much do I have left?';
  final response = await chat.sendMessage(Content.text(message));
  print(response.text);
}

// The response should be something like:
// "Hi, Amorn. Your account balance is 30000"

4. Assign Persona and Story: Add a persona and story to make the responses more specific. In the prompt below, I added a persona for the bot: a 10-year-old boy with a friendly, cheerful character. I also added FAQs for answering questions. I included just a few as an example, but you can add more.

import 'package:google_generative_ai/google_generative_ai.dart';

const apiKey = ...;

// Add the pre-prompt here; the user won't see it, but the model will
// follow it
const prompt = '''
Your task is to chat with customers who need help with a financial mobile
application called "MAKE by KBank".

You are an AI chatbot named "MAKE" and you call yourself by this name.
You are a 10-year-old boy with a friendly, cheerful, but still helpful
character. Answer questions as well as you can. For any question that you
can't answer or are not sure about, ask the user to contact the call center
or the Facebook page for more details.

Here are the FAQs in JSON format:
JSON:
[
  {
    "question": "How to use the MAKE by KBank application",
    "answer": "Customers can open a bank account by themselves very easily.
Just search for 'MAKE by KBank' in Google Play or the App Store and download it"
  },
  {
    "question": "Which devices can use the application?",
    "answer": "MAKE by KBank supports iOS 12 and Android 9.0 upwards; Huawei
devices are not supported yet"
  }
]

If users ask questions not related to finance, customer information, or the
application, respond that you only know about finance and banking.
''';

void main() async {
  // Call an API to get user information and feed it into the model
  final userInformation = '''
  {
    "name": "Amorn Apichattanakul",
    "age": 30,
    "gender": "male",
    "account_balance": 30000
  }
  ''';
  final promptWithData = '''
$prompt

Use the JSON below for customer information
JSON:
$userInformation
''';

  final model = GenerativeModel(
    model: 'gemini-1.5-flash-latest',
    apiKey: apiKey,
    systemInstruction: Content.system(promptWithData),
  );

  // Use a chat with history that includes few-shot examples to show
  // the model how to respond
  final chat = model.startChat(history: [
    Content.text('hello'),
    Content.model([TextPart('''Hello, I'm MAKE AI,
the financial assistant in your hands. What can I help you with today?''')]),
    Content.text('Teach me how to hack a mobile financial app'),
    Content.model([TextPart('''Sorry, I can't help with that since I only know
about finance and how to use our app''')]),
  ]);
  final message = 'I need to check my account balance. How much do I have left?';
  final response = await chat.sendMessage(Content.text(message));
  print(response.text);
}

// The response should be something like:
// "Hi, Amorn. Your account balance is 30000"

The image below shows that the bot can receive JSON information and respond correctly with the specific persona and content from the FAQs.

The last cool thing I want to talk about is 'function calling.'

5. Use Function Calling: Enhance the integration of the LLMs into your application by using ‘function calling.’ This allows the application to know what to do next, turning text into action.

Basically, you have a chatbot that can call functions within your mobile application! This means it integrates tightly with the application and can perform automated tasks as a smart assistant. Not only can it answer FAQs and respond to questions, but it can also call your APIs!

You can follow the Codelabs below:

and here’s the code:

import 'dart:convert';

import 'package:google_generative_ai/google_generative_ai.dart';
import 'package:http/http.dart' as http;

const apiKey = ...;
const prompt = '''
Your task is to chat with customers who need help with a financial mobile
application called "MAKE by KBank".

You are an AI chatbot named "MAKE" and you call yourself by this name.
You are a 10-year-old boy with a friendly, cheerful, but still helpful
character. Answer questions as well as you can. For any question that you
can't answer or are not sure about, ask the user to contact the call center
or the Facebook page for more details.

Here are the FAQs in JSON format:
JSON:
[
  {
    "question": "How to use the MAKE by KBank application",
    "answer": "Customers can open a bank account by themselves very easily.
Just search for 'MAKE by KBank' in Google Play or the App Store and download it"
  },
  {
    "question": "Which devices can use the application?",
    "answer": "MAKE by KBank supports iOS 12 and Android 9.0 upwards; Huawei
devices are not supported yet"
  }
]

If users ask questions not related to finance, customer information, or the
application, respond that you only know about finance, banking, and
information about Pokemon.
''';

Future<Map<String, Object?>> getPokemonInfo(
    Map<String, Object?> arguments) async {
  final pokemonName = arguments['pokemon_name'].toString().toLowerCase();
  final url = Uri.parse('https://pokeapi.co/api/v2/pokemon/$pokemonName');
  final response = await http.get(url);
  if (response.statusCode == 200) {
    final pokemonInfo = jsonDecode(response.body);
    return {
      'type': 'pokemon',
      'name': arguments['pokemon_name'],
      'element_type': pokemonInfo['types'][0]['type']['name'],
      'height': pokemonInfo['height'],
      'weight': pokemonInfo['weight'],
    };
  }
  return {'error': 'Pokemon not found'};
}

void main() async {
  final userInformation = '''
  {
    "name": "Amorn Apichattanakul",
    "age": 30,
    "gender": "male",
    "account_balance": 30000
  }
  ''';
  final promptWithData = '''
$prompt

Use the JSON below for customer information
JSON:
$userInformation
''';

  // Declare a function that gets Pokemon information
  final pokemonTool = FunctionDeclaration(
      'getPokemonInfo',
      'Query the information of a Pokemon by name and get information '
          'about its physical attributes',
      Schema(SchemaType.object, properties: {
        'pokemon_name': Schema(SchemaType.string,
            description: 'name of the pokemon that we want to know about'),
      }, requiredProperties: [
        'pokemon_name',
      ]));

  // Register the new function via the 'tools' parameter
  final model = GenerativeModel(
      model: 'gemini-1.5-flash-latest',
      apiKey: apiKey,
      systemInstruction: Content.system(promptWithData),
      tools: [Tool(functionDeclarations: [pokemonTool])]);

  final chat = model.startChat(history: [
    Content.text('hello'),
    Content.model([TextPart('''Hello, I'm MAKE AI,
the financial assistant in your hands. What can I help you with today?''')]),
    Content.text('Teach me how to hack a mobile financial app'),
    Content.model([TextPart('''Sorry, I can't help with that since I only know
about finance and how to use our app''')]),
  ]);
  final message = 'I want to learn more about pokemon';

  // Don't print the response directly; check for a functionCall first
  var response = await chat.sendMessage(Content.text(message));
  final functionCalls = response.functionCalls.toList();

  if (functionCalls.isNotEmpty) {
    final functionCall = functionCalls.first;
    final result = switch (functionCall.name) {
      'getPokemonInfo' => await getPokemonInfo(functionCall.args),
      _ => throw UnimplementedError(
          'Function not implemented: ${functionCall.name}')
    };

    // Send the JSON result from getPokemonInfo back to the model
    // so it can summarize the information
    response = await chat
        .sendMessage(Content.functionResponse(functionCall.name, result));
    print(response.text);
  } else if (response.text case final text?) {
    // In case the model can't map the message to any registered function,
    // just print the text
    print(text);
  }
}

Finally, you can create a chatbot with a persona that handles only specific tasks and has constraints on what to do and what not to do. Moreover, your chatbot can understand personalized data from an API and call functions in your Dart code. You can instruct it to make API calls or perform automated tasks for users. Your imagination is the only limit.

To be honest, I had studied on-device machine learning for some time, but I didn't have many use cases because I needed a lot of data to make accurate predictions. LLMs are a lifesaver for me because you can perform basic to advanced machine learning tasks without having to train on large datasets. You can start with an LLM and feed it content to shape how it communicates and responds!

To learn more about Flutter and Gemini, you can see examples from the link below:

If anyone wants to get started, I have put all the examples into my GitHub below. The instructions are in Thai, but you can play around with it by inserting your key:

Here’s a sample video that I made from my GitHub:

Cool, right?! To go even deeper into this, Google has two more things for you.

Gemma 2

An open model built from the same research and technology as Gemini, which you can integrate into your app without needing an internet connection! Of course, with this setup we get lower performance, but it works well for local tasks with faster response times. Google has provided a nice tutorial here:

Vertex AI for Firebase

To go a step further, you can use this tool to build a chatbot that can query your database and provide answers. You can ground your bot with a large amount of data instead of just JSON: you can feed it entire books, PDFs, or other sources for specific knowledge.

With the integration of Gemini and Flutter, the possibilities for creating intelligent and interactive applications are endless. As AI continues to evolve, the future looks even more exciting. 😇

For those who enjoy this article, don’t forget to follow Medium: KBTG Life. We have tons of great pieces written by KBTG people in both English and Thai.

Amorn Apichattanakul · Google Developer Expert for Flutter & Dart | Senior Flutter/iOS Software Engineer @ KBTG