A Non-Coder's Adventure With OpenAI's API

Tiago Rosa
9 min read · Mar 22, 2024

--

Generative AI is everywhere, and its capabilities are almost miraculous. At this very moment, anyone with an Internet connection can leverage the power of these ultra-sophisticated models to create high-quality, novel content with just a few keystrokes. This technology is changing the world right before our eyes, and it is evolving so fast that it's hard to keep up with the news.

Recently, I've been spending a fair amount of time interacting with ChatGPT, asking Gemini to do some market research for me, and creating illustrations for my presentations with the image generation feature in Microsoft's free Copilot app for Android. But, as a techie/nerd/geek, I also wanted to dive a little deeper and find out how applications can interact with these LLMs to enable functionality that can actually generate value for users.

Now, let me qualify the "non-coder" tag I've given myself in the title: I have not written code professionally for over 10 years, with the very rare exception of a Python script here and there to automate some report generation, and the extremely occasional PR with translation updates or some other very simple change to enterprise code. I do play around with pet projects from time to time, but I would not dare call myself a Developer or a Software Engineer by any stretch of the imagination.

So, despite my rusty coding skills, I decided to find out how the OpenAI API works, and give it a whirl.

Step 1: Sign Up, Create an API Key and Add Credits

I started out by going to platform.openai.com and creating an account. Pretty straightforward. I confirmed my e-mail and I was all set.

Landing page of the signed-in area of platform.openai.com

Now, in order to use the API, I had to create something called an API Key. As the name suggests, an API Key is something that "unlocks" access to the API, and at the same time identifies who's using it. I just had to go to "API keys" on the lefthand menu, validate my phone number, and voilà!

API keys page on platform.openai.com

OpenAI's API is not free. There's a cost associated with hitting their models, and it varies by model type, version, and number of tokens.

Tokens are like the LEGO blocks of LLMs. When you send a text prompt, it's broken down into many smaller parts (words, pieces of words, characters, etc.), each of which is called a token. Incredibly enough, the same happens with images or video.

Each model has a limit on the number of tokens it can process at a given time (covering both input and output). In simple terms, the more tokens you pass through a model, the more you pay.

I went ahead and added $5 to my account. For gpt-3.5-turbo, OpenAI charges something like $0.50 per million input tokens. So, for the purposes of my small adventure here, five bucks will go a long, long way.
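To get a feel for the numbers, here's a back-of-the-envelope estimate (my own sketch, using the $0.50-per-million-input-tokens figure above and the common rule of thumb of roughly 4 characters per token):

```python
def estimate_input_cost(prompt: str, price_per_million: float = 0.50) -> float:
    """Rough input cost estimate using the ~4-characters-per-token heuristic."""
    approx_tokens = max(1, len(prompt) // 4)
    return approx_tokens / 1_000_000 * price_per_million

# Even a hefty 4,000-character prompt costs a tiny fraction of a cent:
cost = estimate_input_cost("x" * 4_000)
print(f"${cost:.6f}")  # → $0.000500
```

Actual token counts vary by model and text, but the order of magnitude is what matters here: five dollars buys millions of input tokens.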

Usage page, where you can track your spending and API activity

Step 2: Write some code to actually call the API

Now that I'm all set up and paid for, I can actually write some code to call the API and do something fun with the results. At some point in my life, someone was brave (or foolish) enough to pay me a salary to write Python code, and I learned to love the language for its simplicity and elegance. It's like a Swiss army knife of libs and integrations, which makes it so easy to build simple things fast without too much setup grunt work.

Needless to say, Python of course has a library for OpenAI. So, getting started with it is as simple as running the following on my terminal:

pip install openai

And then opening up a .py file and importing it:

import openai

The API has a lot of options and parameters that you can check in detail in the docs. But, in a nutshell, it works like this:

  1. Choose a model (for example: gpt-3.5-turbo or gpt-4)
  2. Write down a message to send to it (i.e. the prompt)
  3. Define how "creative" you want the model to be when responding (through a setting called temperature)
  4. Set how many tokens you want the model to use to respond (i.e. how lengthy and verbose you want the response to be)

Once steps 1 through 4 are done, you can send a request, and OpenAI will give you a response. This response is then yours to have fun with.
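The four steps above map directly onto parameters of the Chat Completions endpoint. Here's a sketch of the request (the actual call needs a configured client and a valid key, so it's commented out):

```python
# Steps 1-4 expressed as API parameters
request_params = dict(
    model="gpt-3.5-turbo",  # step 1: choose a model
    messages=[{"role": "user", "content": "Recommend a book on philosophy."}],  # step 2: the prompt
    temperature=1,          # step 3: "creativity" (higher = more varied output)
    max_tokens=500,         # step 4: cap on the length of the response
)
# response = client.chat.completions.create(**request_params)
# print(response.choices[0].message.content)
```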

I wrote a simple Python function to do all of the above:

def get_completion(messages, model="gpt-3.5-turbo", temperature=1, max_tokens=500):
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens
    )
    return response.choices[0].message.content

Of course, before I can actually use it, I need my API Key to get through the door. Since one is never, never EVER supposed to leave secret keys hanging around in plain sight, I did what every good citizen is supposed to do: I created a hidden .env file and pasted the key there. I also made sure that file is explicitly excluded from version control!
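For reference, the setup is just two small files (the names are the usual conventions):

```
# .env — contains the secret, never committed
OPENAI_API_KEY=sk-your-key-here

# .gitignore — makes sure of that
.env
```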

import os
import openai
from dotenv import load_dotenv

# Loads OPENAI_API_KEY from the hidden .env file into the environment
load_dotenv()

# The client picks up OPENAI_API_KEY from the environment automatically
client = openai.OpenAI()

def get_completion(messages, model="gpt-3.5-turbo", temperature=1, max_tokens=500):
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens
    )
    return response.choices[0].message.content

Running the "get_completion" function we just defined and checking the output

Step 3: Do something valuable and fun with it (turn it into a mini-product)

I've recently managed to ramp up my reading habit. I picked up my old Kindle and started cranking through a backlog of books I had bought but not read, at a pace of 3 to 4 a month. Once I got through those, I wanted to keep the momentum going and find new, interesting books to read.

Whenever you finish a book on Kindle, the device shows you a list of suggested next reads. But these suggestions are usually directly tied to the theme of the book you've just read. And I like to mix things up a bit. For example: I read a book about philosophy, then jump to product management, then to behavioral economics, and then back to philosophy.

So, as mostly everyone has been doing since early 2023, I thought about picking the brains of an LLM to get interesting book suggestions. Since I was playing around with OpenAI, this turned out to be a perfect example of a nice little task I could accomplish with my new superpowers.

Enter Flask, a super simple Python web framework. With Flask, you can build a functioning web app in just a few lines of code:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return "Hello, World!"

if __name__ == "__main__":
    app.run(debug=True)

Once you have your web app, you can run it locally on your computer, or deploy it somewhere and access it through the Internet. While getting a "Hello World" app running in Flask is a breeze, you can also build more robust things with it by including modern JavaScript frameworks for the frontend, for example.

I had been meaning to check out Vue.js for a while. Now, I had never read a single line of a Vue component before but, since I was riding the LLM waves, I thought about getting help from my friend Gemini to get started:

Asking Gemini to write a Vue.js component for me (complete with dynamic actions such as a spinner)

After some back and forth on the chat, alongside an introductory Flask + Vue.js tutorial, I was able to get to this:

Very simple landing page for "GPT Book Recommendations" — written by Gemini and lightly edited by me

With a few more steps of the Flask tutorial, I was able to define an endpoint called "/books" to be fired by the "Get Recommendations" button. It essentially gets the input from all of the form fields, collates it with a predefined "system message" (a little backstory that you pass to the model so it knows how to respond more appropriately), and calls our get_completion function to pass it along to OpenAI:

system_message = """
The user has read a few books recently and wants to discover \
a new book to read next.
You need to suggest a next book for the user to read, \
which has a high likelihood of matching their interests, \
based on the books they have recently read.
Write a nice review of the book containing basic information like \
the title, the authors and a very brief (one sentence long) description \
of why you chose that specific book as a recommendation, outlining \
the similarities between the recommended book and the books the user \
has recently read.
"""

from flask import request  # alongside the Flask import above

@app.route('/books', methods=['POST'])
def book_recoms():
    post_data = request.get_json()
    books_read = post_data['books']
    completion = openAiSandbox.get_completion(
        [
            {"role": "system",
             "content": system_message},
            {"role": "assistant",
             "content": f"""
The user has recently read these 5 books: \
{books_read[0]},
{books_read[1]},
{books_read[2]},
{books_read[3]},
{books_read[4]}
"""
             },
            {"role": "user",
             "content": "Suggest a next book for the user to read"},
        ]
    )
    return completion
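The message assembly inside that endpoint can also be factored into a small pure helper, which makes the payload shape easier to see (my own refactor sketch; `build_messages` is a hypothetical name, not part of the app's actual code):

```python
def build_messages(system_message: str, books_read: list) -> list:
    """Assemble the Chat Completions payload: context, reading history, and the ask."""
    books = "\n".join(f"- {title}" for title in books_read)
    return [
        {"role": "system", "content": system_message},
        {"role": "assistant", "content": f"The user has recently read these books:\n{books}"},
        {"role": "user", "content": "Suggest a next book for the user to read"},
    ]

messages = build_messages("You recommend books.", ["Sapiens", "Thinking, Fast and Slow"])
```

A pure function like this is also trivially unit-testable without ever hitting the API, which keeps the feedback loop (and the bill) small.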

Finally, I can wire everything together to get a fancy little webapp that suggests my next book:

Listing the last 5 books I've read and getting a recommendation for the next one

My very basic Python and JS skills, augmented by a code-writing LLM, were enough to cook this up in about an afternoon. I've hit "submit" a few hundred times already, and my bill at OpenAI has run up to about 6 cents! Those initial 5 dollars will still last me a long time.

Why am I writing about this? Tools such as GitHub Copilot are already being widely adopted in the software industry, and more audacious bets like Devin are also on the horizon. While I don't think AI will completely replace human developers, I do see a future where the different roles that currently exist in the software industry (Developer, Product Manager, Quality Analyst, etc.) will kind of fuse together, with people using AI to supplement the skills that they don't currently have.

Suppose a Product Manager at a large digital firm has an idea for a new feature that could enhance their product. Instead of having to wait until a couple of Developers have time available to build a prototype, a PM with a general understanding of APIs and basic prompt-writing abilities can perhaps create a working prototype all on their own, make it available to a few internal users, and collect some metrics and learnings independently. Combine this with developer platforms such as Backstage, which bring to life push-button deployment of new apps with batteries included, and you have a powerful recipe.

If and when the idea proves valuable and it's time to go beyond the prototype, then of course it's time to actually craft software with quality in mind. Here, AI-generated code alone won't cut it. But the prototype itself has served its purpose.

There's a case to be made for Generative AI becoming a PM's best friend, wildly enhancing their productivity and the speed at which they can test and learn things.

With GenAI, the traditional distinction between a "coder" and a "non-coder" can become softer. Rapid prototyping made easier by LLMs can become a key skill in the market for every typically "non-technical" role. The power of this technology is indisputable; it's up to us to harness this power to build the future that we want to live in.

The full "BookRecommendationAI" code can be found on my GitHub: https://github.com/trosa/BookRecommendationAI


Tiago Rosa

Thoughts on technology, product and people. Principal Consultant @ Thoughtworks