MVVM Android and ChatGPT integration.

Guilherme Cunha
6 min read · Feb 1, 2024


Hi Devs!

Today we'll build an Android Jetpack Compose app with MVVM architecture and a ChatGPT integration. It will be a simple app: a random text generator.

First of all, let's create the project in Android Studio: choose the Empty Activity template with Jetpack Compose and give the app a name.


Well, now let's create the packages that organize our MVVM architecture. All packages hang off the main package, in this case com.example.yourappname. Right-click the main package and choose New > Package; here we'll create Model, View and ViewModel.

First let's look at the ui.theme package, which is created automatically when you start the project. It contains three files, Color, Theme and Type; we'll customize the Color and Theme files, defining our colors and adding them to our theme.

Let’s create two colors in the Color file.

val background = Color(red = 40, green = 225, blue = 222, alpha = 0xFF)
val purple = Color(red = 135, green = 40, blue = 225, alpha = 0xFF)

And we'll add them to the Theme file; note that we can replace the automatically generated colors with our own.

private val DarkColorScheme = darkColorScheme(
    primary = background
)

private val LightColorScheme = lightColorScheme(
    primary = background
)

@Composable
fun TalkEnglishTheme(
    darkTheme: Boolean = isSystemInDarkTheme(),
    // Dynamic color is available on Android 12+
    dynamicColor: Boolean = true,
    content: @Composable () -> Unit
) {
    val colorScheme = when {
        dynamicColor && Build.VERSION.SDK_INT >= Build.VERSION_CODES.S -> {
            val context = LocalContext.current
            if (darkTheme) dynamicDarkColorScheme(context) else dynamicLightColorScheme(context)
        }

        darkTheme -> DarkColorScheme
        else -> LightColorScheme
    }
    val view = LocalView.current
    if (!view.isInEditMode) {
        SideEffect {
            val window = (view.context as Activity).window
            window.statusBarColor = colorScheme.primary.toArgb()
            WindowCompat.getInsetsController(window, view).isAppearanceLightStatusBars = darkTheme
        }
    }

    MaterialTheme(
        colorScheme = colorScheme,
        typography = Typography,
        content = content
    )
}

Now let's structure our code, starting with the Views. We'll use two views so we can see how screen transitions work in Android with Jetpack Compose.

The first view will be called StartView.

@Composable
fun StartView(viewModel: MainViewModel) {
    Box(
        modifier = Modifier
            .fillMaxSize()
            .background(background),
        contentAlignment = Alignment.Center
    ) {
        Button(
            onClick = { viewModel.navigateTo("GameView") },
            modifier = Modifier.size(width = 200.dp, height = 50.dp),
            colors = ButtonDefaults.buttonColors(purple)
        ) {
            Text("Start")
        }
    }
}

@Preview(showBackground = true)
@Composable
fun PreviewStartView() {
    val viewModel = MainViewModel()
    StartView(viewModel)
}

Our first view is a simple screen with just one button in the center, which will take us to the second screen. You can see that we used our colors, background and purple.

@Composable
fun GameView(viewModel: MainViewModel) {
    val sentText = viewModel.generatedText.value
    Box(
        modifier = Modifier
            .fillMaxSize()
            .background(background),
        contentAlignment = Alignment.Center
    ) {
        Column(
            horizontalAlignment = Alignment.CenterHorizontally,
            verticalArrangement = Arrangement.Center,
            modifier = Modifier.padding(16.dp)
        ) {
            Text(
                text = sentText,
                modifier = Modifier.padding(bottom = 15.dp)
            )
            Row(
                horizontalArrangement = Arrangement.spacedBy(16.dp)
            ) {
                Button(
                    onClick = { viewModel.generatedNewText() },
                    modifier = Modifier.size(width = 160.dp, height = 50.dp),
                    colors = ButtonDefaults.buttonColors(purple)
                ) {
                    Text(text = "NEW")
                }
            }
        }
    }
}

@Preview(showBackground = true)
@Composable
fun PreviewGameView() {
    val viewModel = MainViewModel()
    GameView(viewModel)
}

Both views receive an instance of the ViewModel, because the ViewModel will manage the screen transitions and the ChatGPT integration in our application, so we'll leave the ViewModel for last. The views follow the composable pattern: the second view, GameView, has a Column that holds two more components, a Text and a Row, and the Row holds the Button we'll use to generate a random text.

Let's create the Model first. In this file we only need what goes into the ChatGPT request and response. The request sends a prompt, a maximum number of tokens, and the model name; here we'll use davinci-002. (Note that the Completions endpoint and davinci-002 are considered legacy by OpenAI; the flow shown here is the same with the newer Chat Completions API, only the request and response shapes change.)

data class RequestBody(
    val prompt: String,
    val max_tokens: Int,
    val model: String
)

data class OpenAIResponse(
    val choices: List<Choice>
)

data class Choice(
    val text: String
)
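For reference, the request body that these classes serialize to looks like this on the wire (a sketch; Gson uses the property names as JSON field names, which is why max_tokens deliberately keeps its snake_case spelling):

```json
{
  "prompt": "Say a english sentence.",
  "max_tokens": 50,
  "model": "davinci-002"
}
```

On the way back we only declare choices[].text, so any extra fields in the real response (id, usage, and so on) are simply ignored by Gson.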

Now let's create our service file. This is an interface: right-click the package, choose New > Kotlin Class/File, and select Interface. I called this file OpenAiService.

interface OpenAiService {
    @Headers("Content-Type: application/json")
    @POST("/v1/completions")
    suspend fun createCompletion(
        @Header("Authorization") apiKey: String,
        @Body body: com.example.talkenglish.Model.RequestBody
    ): Response<OpenAIResponse>
}

In this interface we declare our headers and the endpoint. Note that I used a POST method, because we are going to send data to the OpenAI service, and the function takes an Authorization header via an apiKey parameter of type String.

The apiKey parameter is the key you created on the OpenAI website, and the Authorization header sends that key as a bearer token. Then there is a body, the RequestBody we created in the Model file, carrying the prompt, max_tokens and model. (Our RequestBody is fully qualified in the signature, most likely to avoid a name clash with okhttp3.RequestBody.)
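Retrofit, the Gson converter, and the logging interceptor used in this project are not part of the Android SDK. A sketch of the Gradle dependencies, with versions that are illustrative rather than pinned recommendations:

```kotlin
// app/build.gradle.kts (versions illustrative; check the libraries' release pages)
dependencies {
    implementation("com.squareup.retrofit2:retrofit:2.9.0")
    implementation("com.squareup.retrofit2:converter-gson:2.9.0")
    implementation("com.squareup.okhttp3:logging-interceptor:4.12.0")
}
```

Also remember to add `<uses-permission android:name="android.permission.INTERNET" />` to AndroidManifest.xml, otherwise every request will fail immediately.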

class MainViewModel : ViewModel() {
    private val _currentScreen = MutableStateFlow("StartView")
    val currentScreen: StateFlow<String> = _currentScreen
    private val _generatedText = mutableStateOf("Initial Text")
    val generatedText: State<String> = _generatedText

    fun navigateTo(screen: String) {
        viewModelScope.launch {
            _currentScreen.value = screen
        }
    }

    private val okHttpClient = OkHttpClient.Builder()
        .addInterceptor(HttpLoggingInterceptor().apply {
            level = HttpLoggingInterceptor.Level.BODY
        })
        .addInterceptor { chain ->
            val request = chain.request().newBuilder()
                .header("Authorization", "Bearer ${ApiKey.OPENAI_API_KEY}")
                .build()
            chain.proceed(request)
        }
        .build()

    private val openAIApi = Retrofit.Builder()
        .baseUrl("https://api.openai.com/")
        .client(okHttpClient)
        .addConverterFactory(GsonConverterFactory.create())
        .build()
        .create(OpenAiService::class.java)

    fun generatedNewText() {
        viewModelScope.launch {
            // first the prompt
            val prompt = "Say a english sentence."

            // second the response
            val response = openAIApi.createCompletion(
                "Bearer ${ApiKey.OPENAI_API_KEY}",
                RequestBody(prompt, 50, "davinci-002")
            )
            if (response.isSuccessful && response.body() != null) {
                _generatedText.value = response.body()!!.choices.first().text.trim()
            } else {
                Log.e("MainViewModel", "Request failed: ${response.errorBody()?.string()}")
            }
        }
    }
}

This is the ViewModel, where we'll manage our entire application. First there are some private properties that hold the current screen state and store the text generated by the AI. The navigateTo function performs the transition from one screen to another, that's all.

We created an okHttpClient that logs requests and attaches our key as a bearer token through an interceptor; we'll pass this client to Retrofit. (Since the interceptor sets the Authorization header on every request, also passing the key to createCompletion is redundant but harmless: the interceptor's header() call replaces the earlier value.)
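The code references an ApiKey object that isn't shown in the article; a minimal sketch, where the object name and placeholder value are my assumption (in a real app, don't hard-code the key in source control; load it from local.properties or BuildConfig instead):

```kotlin
// ApiKey.kt -- hypothetical holder for the OpenAI key; never commit a real key
object ApiKey {
    // Replace with the key from your OpenAI dashboard
    const val OPENAI_API_KEY = "sk-REPLACE_ME"
}
```
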

The Retrofit builder receives the OpenAI base URL, our okHttpClient, and a Gson converter, and finally creates an implementation of our OpenAiService.

Finally we create our generatedNewText() function, where we define a fixed prompt and call openAIApi.createCompletion with the bearer token and the body of our request. Then we just do a check: on success, the Text component on the GameView screen is replaced by the new text created by the AI; on failure, we log the error. (Note that createCompletion can also throw, e.g. with no network; in a real app you'd wrap the call in a try/catch.)

class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            TalkEnglishTheme {
                val viewModel: MainViewModel = viewModel()
                AppScreen(viewModel)
            }
        }
    }
}

@Composable
fun AppScreen(viewModel: MainViewModel) {
    val currentScreen = viewModel.currentScreen.collectAsState().value

    when (currentScreen) {
        "StartView" -> StartView(viewModel)
        "GameView" -> GameView(viewModel)
    }
}
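String routes work fine for two screens, but a typo in a route string compiles silently and just shows nothing. A hedged alternative (my suggestion, not from the article) is a sealed class, so the when over screens becomes exhaustive at compile time:

```kotlin
// Hypothetical type-safe alternative to the string routes used above
sealed class Screen {
    object Start : Screen()
    object Game : Screen()
}

// A when over a sealed type must cover every case, or it won't compile
fun routeName(screen: Screen): String = when (screen) {
    Screen.Start -> "StartView"
    Screen.Game -> "GameView"
}
```
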

@Preview(showBackground = true)
@Composable
fun GreetingPreview() {
    TalkEnglishTheme {
        val viewModel = MainViewModel()
        AppScreen(viewModel = viewModel)
    }
}

And finally our MainActivity, which hosts our two screens, StartView and GameView; remember that the navigation between them is managed by the ViewModel.

I'll post some screenshots of the application below. Enjoy the process and create what you want: this is just an idea, and you can change it and build a lot of things.

Thank You and let's code!
