Hello Spring AI

Felix Do
6 min read · Dec 19, 2023


In the middle of 2023, Spring unveiled an experimental Spring AI project designed to facilitate the creation of AI applications within the Spring framework. By December 2023, the project had successfully graduated from its experimental phase, officially becoming a recognized and supported project.

In this series on Spring AI, we will delve into its technical aspects, focusing on practical usage and a deep exploration of the Spring AI Core implementation.

Understanding Spring AI

Before we proceed, it’s essential to acquaint ourselves with Spring AI. Rather than duplicating information already available in articles and official documentation, I’ll direct you to valuable resources:

  • Spring AI Reference: Stay updated on the latest releases and project changes. This comprehensive resource covers the core concepts behind the project and the motivation driving the Spring team's development effort. It also introduces foundational AI concepts such as models and prompts.
  • Introduction to Spring AI: Craig Walls provides a practical video guide on setting up a basic project and exploring fundamental concepts in Spring AI.

Hello Spring AI

In this series, my focus is not only Spring AI but also Spring Boot itself, with Spring AI serving as the demonstration vehicle. Together, we'll build a basic AI application with Spring AI, integrating it with either OpenAI or Azure OpenAI. After that, we'll dig into the inner workings of Spring AI Core, the details of Spring Boot's auto-configuration, and more.

In the first article of this series, we’ll create a basic application exposing a REST API to pose questions, obtaining answers from either OpenAI or Azure OpenAI.

Here’s what we’ll cover:

  1. Setting up a Spring AI project.
  2. Exposing a REST API that accepts a question and returns a response generated by the OpenAI API or the Azure OpenAI API.

Setting Up Your Project:

First, use Spring Initializr to create a Spring Boot application. The code in this article also uses Spring Web and Lombok, so include those dependencies when generating the project.
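
If you prefer the command line to the web UI, the Spring Initializr REST API can generate an equivalent skeleton. This is just a convenience sketch; the project name and coordinates below are illustrative placeholders, not the article's actual settings:

# Generate a Gradle project with the Web and Lombok dependencies (illustrative coordinates)
curl https://start.spring.io/starter.zip \
  -d type=gradle-project \
  -d dependencies=web,lombok \
  -d groupId=com.felix \
  -d artifactId=hellospringai \
  -d name=hello-spring-ai \
  -o hello-spring-ai.zip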

At the time of this article, Spring AI is not available in Spring Initializr but can be accessed through a Snapshot release. Follow these steps to add Spring AI to your project using Gradle or Maven:

## Gradle
repositories {
    mavenCentral()
    maven { url 'https://repo.spring.io/snapshot' }
}

dependencies {
    ...
    implementation 'org.springframework.experimental.ai:spring-ai-azure-openai-spring-boot-starter:0.7.1-SNAPSHOT'
    implementation 'org.springframework.experimental.ai:spring-ai-openai-spring-boot-starter:0.7.1-SNAPSHOT'
}

## Maven
<repositories>
    <repository>
        <id>spring-snapshots</id>
        <name>Spring Snapshots</name>
        <url>https://repo.spring.io/snapshot</url>
        <releases>
            <enabled>false</enabled>
        </releases>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>org.springframework.experimental.ai</groupId>
        <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
        <version>0.7.1-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.experimental.ai</groupId>
        <artifactId>spring-ai-azure-openai-spring-boot-starter</artifactId>
        <version>0.7.1-SNAPSHOT</version>
    </dependency>
</dependencies>

Now, you should be able to build your project using ./mvnw compile or ./gradlew assemble.
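
If you want to confirm that the snapshot artifacts actually resolved, listing the project's dependencies is a quick sanity check (an optional step, not something the setup above requires):

## Gradle
./gradlew dependencies --configuration runtimeClasspath | grep spring-ai

## Maven
./mvnw dependency:tree | grep spring-ai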

Adding the Controller:

This controller calls the AI platform's API with the prompt input and returns the response from OpenAI or Azure OpenAI.

package com.felix.hellospringai.controller;

import com.felix.hellospringai.model.AnswerResponse;
import com.felix.hellospringai.model.QuestionRequest;
import lombok.RequiredArgsConstructor;
import org.springframework.ai.client.AiClient; // AiClient as shipped in the 0.7.x experimental starters
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequiredArgsConstructor
public class BasicPromptController {

    private final AiClient aiClient;

    // Accepts a question and returns the answer generated by the configured AI platform
    @PostMapping("/questions:ask")
    public ResponseEntity<AnswerResponse> ask(@RequestBody QuestionRequest questionRequest) {
        return ResponseEntity.ok(AnswerResponse.builder()
                .answer(aiClient.generate(questionRequest.getQuestion()))
                .build()
        );
    }
}
---
package com.felix.hellospringai.model;

import lombok.Data;

@Data
public class QuestionRequest {
    private String question;
}
---
package com.felix.hellospringai.model;

import lombok.Builder;
import lombok.Data;

@Data
@Builder
public class AnswerResponse {
    private String answer;
}

The controller handles HTTP POST requests to "/questions:ask", receiving a payload containing a question, using an AiClient to generate an answer from the AI platform's API, and responding with the result in the answer field of the response payload.

  • @RequiredArgsConstructor, from Lombok, automatically generates a constructor that injects dependencies declared as final fields. In this case, it injects an AiClient instance.
  • The aiClient.generate method is invoked with the question obtained from the questionRequest. This method issues an HTTP request to the OpenAI API or the Azure OpenAI API, depending on your configuration.
  • ResponseEntity.ok(AnswerResponse.builder() ... .build()) constructs an AnswerResponse instance using the builder pattern and wraps it in a successful (ok) ResponseEntity. The response body will contain the generated answer. You can verify this wiring without an API key using the test sketch after this list.
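
To see this request/response flow in isolation, here is a minimal test sketch. It assumes spring-boot-starter-test is on the classpath and that AiClient lives in org.springframework.ai.client, as in the 0.7.x snapshot; it illustrates the wiring and is not part of the article's original code.

package com.felix.hellospringai.controller;

import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.when;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.jupiter.api.Test;
import org.springframework.ai.client.AiClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.http.MediaType;
import org.springframework.test.web.servlet.MockMvc;

// Exercises the controller with a stubbed AiClient, so no API key or network call is needed.
@WebMvcTest(BasicPromptController.class)
class BasicPromptControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private AiClient aiClient; // replaces the real client in the test context

    @Test
    void returnsAnswerGeneratedByAiClient() throws Exception {
        when(aiClient.generate(anyString())).thenReturn("stubbed answer");

        mockMvc.perform(post("/questions:ask")
                        .contentType(MediaType.APPLICATION_JSON)
                        .content("{\"question\": \"What is generative AI?\"}"))
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.answer").value("stubbed answer"));
    }
}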

Configuring API Keys:

Before testing the endpoint, it's essential to configure the application with the API keys for the respective AI platforms. I've set up two distinct profiles: one for OpenAI and another for Azure OpenAI. For security, all sensitive information is passed in through environment variables.

# application.yaml
spring:
  config:
    activate:
      on-profile: azure-openai
  autoconfigure:
    exclude: org.springframework.ai.autoconfigure.openai.OpenAiAutoConfiguration
  ai:
    azure:
      openai:
        api-key: ${AZURE_OPENAI_API_KEY}
        endpoint: ${AZURE_OPENAI_ENDPOINT}
        model: ${AZURE_OPENAI_MODEL}
---
spring:
  config:
    activate:
      on-profile: openai
  autoconfigure:
    exclude: org.springframework.ai.autoconfigure.azure.openai.AzureOpenAiAutoConfiguration
  ai:
    openai:
      api-key: ${OPENAI_API_KEY}

Where do you get these values from?

See the footnotes section at the end of this article for how to obtain them.

Testing the Project

Start your project using one of the following commands:

### OpenAI

## Gradle
OPENAI_API_KEY=<replace your key here> \
./gradlew bootRun --args='--spring.profiles.active=openai'

## Maven
OPENAI_API_KEY=<replace your key here> \
./mvnw spring-boot:run -Dspring-boot.run.arguments="--spring.profiles.active=openai"


### Azure OpenAI

## Gradle
AZURE_OPENAI_API_KEY=<replace your apikey> \
AZURE_OPENAI_ENDPOINT=<replace your endpoint> \
AZURE_OPENAI_MODEL=<replace your model> \
./gradlew bootRun --args='--spring.profiles.active=azure-openai'

## Maven
AZURE_OPENAI_API_KEY=<replace your apikey> \
AZURE_OPENAI_ENDPOINT=<replace your endpoint> \
AZURE_OPENAI_MODEL=<replace your model> \
./mvnw spring-boot:run -Dspring-boot.run.arguments="--spring.profiles.active=azure-openai"

You can then test the endpoint with the provided example:

# Request
curl --location 'http://localhost:8080/questions:ask' \
--header 'Content-Type: application/json' \
--data '{
"question": "What is generative AI?"
}'

# Response
{
"answer": "Generative AI refers to a type of artificial intelligence that is designed to generate new content, such as text, images, music, or videos, that is similar to or indistinguishable from content created by humans. Unlike other AI models that are trained to recognize patterns or make predictions, generative AI models are trained on large datasets and learn to create new content by generating and combining elements from the training data. These models can be used for various applications, including content creation, virtual assistants, chatbots, and even deepfake technology."
}
(Screenshot: testing the endpoint using Postman)

Depending on your AI platform, the response may vary, and it will likely differ each time you invoke the endpoint.
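
If you only want the answer text on the command line, you can pipe the same request through jq, assuming jq is installed; this is an optional convenience, not part of the setup above:

curl -s --location 'http://localhost:8080/questions:ask' \
--header 'Content-Type: application/json' \
--data '{"question": "What is generative AI?"}' | jq -r '.answer'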

That’s it.

Conclusion

This marks the end of the first article in our series. You’ve learned how to run a Spring Boot application, interact with OpenAI or Azure OpenAI APIs, and configure your project for either option.

In the next article, we’ll explore the inner workings of Spring AI as it integrates with various AI platforms. Stay tuned for more insights!

Source code:

Footnotes:

How to obtain API Key from OpenAI

To use the OpenAI API, you need to have credit in your account.

Visit the billing page at https://platform.openai.com/account/billing/overview and add your payment method. Fund a minimum of $5, which should be sufficient for testing the project since you only pay for what you use. After successfully funding the account, generate a new API key.

How to obtain the API key, endpoint, and model name in Azure OpenAI

You need to follow this video to create an Azure OpenAI resource and deploy a model in Azure.

  • When deploying the model, don't select the model version shown in the video above; instead, choose model version 0301. Remember the deployment name so that you can pass it as the model name in the configuration.
  • To obtain the API key and endpoint, navigate to Azure OpenAI and access the “Keys and Endpoints” section.
  • To view your model deployments, navigate to “Model Deployments,” and click on “Manage Deployment.” This action will redirect you to Azure AI Studio, where you can access a list of deployed models.
