Building an End-to-End Generative AI Application — Day 1

Uva rani Jagadeesan
3 min read · May 29, 2024


Alright, I have been involved in a couple of LLM projects and a few hackathons. My learning approach has always been to jump directly into the codebase and learn hands-on. There’s nothing better than learning by doing.

Even though I have experience with certain topics, I always revisit basic concepts. Over time, this habit has strengthened my foundations.

With that said, my aim for this ‘side project’ is to build an end-to-end generative AI application. I plan to use a basic stack: Python, the LangChain library, an LLM (OpenAI), and possibly Streamlit. This way, I can focus more on the concepts than the code.

On day 1, let’s focus on simple steps. Today, we are going to:

1. Get an OpenAI account and obtain an OpenAI API key.
2. Create a YAML file to store the key.
3. Create a simple Q&A application to get responses from the LLM (why don't we make it write a haiku?).

Yep, that’s it.

Set up an OpenAI account and key

Head to https://platform.openai.com/apps

Click on apps and provide your billing details. Don't worry, it won't cost you money.

By default, it gives you a $5 credit, which is sufficient for tons of experimentation.

Next, generate your secret key and keep it safe. Don't expose it anywhere.

Create a YAML file to store the OpenAI key

Next, we are going to create a YAML file just to store this key:

openai_key: <<your-key>>

Save it as api_credentials.yml.

Write a simple Q&A chat with the LLM

To follow along with the code, please refer to the .ipynb notebook uploaded to my Git repo.

Start by installing the necessary libraries.
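If you're working in a notebook, a cell like the one below should do it (this package list is my assumption of the minimal set; check the notebook for the exact versions used):

!pip install langchain langchain-openai pyyaml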

Import ChatPromptTemplate from the LangChain library, and load the key from the YAML file.
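Here is a minimal sketch of those steps (I'm assuming the api_credentials.yml file and openai_key name from earlier; the variable names are my own, and I also import ChatOpenAI, which we'll need in a moment):

import os
import yaml
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Read the key from the YAML file we created earlier
with open('api_credentials.yml') as f:
    credentials = yaml.safe_load(f)

# ChatOpenAI picks the key up from this environment variable
os.environ['OPENAI_API_KEY'] = credentials['openai_key']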

Then choose the model; here we are choosing gpt-3.5-turbo. The temperature parameter controls the randomness of the model's response (keep it low for more deterministic, accurate results).

# temperature=0.0 keeps responses deterministic and focused
model = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.0)

Let’s focus on the main code where the magic happens. We’ll create a simple prompt template and pass it to our model to generate a response:

PROMPT = "Write a haiku poem on {topic}"
prompt = ChatPromptTemplate.from_template(PROMPT)

chain = (prompt | model)

response = chain.invoke({"topic": "India"})
print(response.content)

So, we write our request in the prompt variable. The chain links the prompt to the model.

Finally, we invoke the chain by passing the dynamic parameter and get the response.

And below is a simple flow chart of how the code works.

You can replace {topic} with any value, and even swap the prompt for any other context, and it will still produce output.
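For instance, here's a quick sketch with a different prompt template (the prompt text is just my own illustration, not from the original notebook):

PROMPT = "Explain {topic} in one sentence"
chain = ChatPromptTemplate.from_template(PROMPT) | model
print(chain.invoke({"topic": "the temperature parameter"}).content)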

I have added additional code in my Git repo that loops the above code over multiple prompts.
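As a rough sketch of what such a loop could look like (the topics list here is my own example, not the exact code from the repo):

# Rebuild the haiku chain and run it over several topics
haiku_chain = ChatPromptTemplate.from_template("Write a haiku poem on {topic}") | model
for topic in ["India", "the monsoon", "machine learning"]:
    response = haiku_chain.invoke({"topic": topic})
    print(f"--- {topic} ---")
    print(response.content)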

And yes! You are done with your first code interaction with an LLM.

Haiku response!

In my next article, we will include conversational history so the LLM can remember previous conversations.

I would love to hear your comments and feedback. Thank you!
