Build and deploy a GPT-3 classifier in 15 min

Jason Fan · Published in Buff AI
6 min read · Feb 1, 2023

ChatGPT is cool. While its API isn’t public yet, there’s still a lot you can do with GPT-3, its predecessor.

In this article we’ll set up a support desk that takes in user input and uses GPT-3 to figure out whether it’s related to customer support and, if so, which department it should be routed to.

This performs surprisingly well on a variety of inputs. You can see a live demo here. For production use cases, check out Buff, our open source project that helps companies build, deploy, and monitor conversational interfaces with LLMs.

We’ll use Google Cloud Functions for the backend and GitHub Pages to host the client, so it will only take 15–30 minutes to get the app running.

Step 1: Create an OpenAI account and generate an API key

First off, you will need an OpenAI account. Create one here. As of Jan 31 OpenAI gives new accounts $18 in free credits, but you can also choose to add a credit card to the account since the free credits run out quickly. Create a new API key for your application and make sure to copy it somewhere secure.

Step 2: Deploy the backend as a Google Cloud Function

If you don’t have one already, create a Google Cloud account, then navigate to Cloud Functions and click Create Function.

Leave everything as the default, with three exceptions:

  • Choose 2nd gen as the environment
  • Select Allow unauthenticated invocations under HTTPS
  • Paste in your OpenAI API key as the OPENAI_API_KEY runtime environment variable under Runtime, build, connections and security settings
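The function will read that key from the environment at runtime. As a quick sanity check, a small helper like the sketch below makes a missing key fail loudly at startup (the `getApiKey` name is ours, not part of any library):

```javascript
// Reads the OpenAI key from the environment and throws early if it's missing.
// Taking `env` as a parameter (defaulting to process.env) keeps it easy to test.
function getApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error("OPENAI_API_KEY is not set");
  }
  return key;
}
```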

Step 3: Update the prompt and server-side code

You’ll see two files, index.js and package.json. Import the OpenAI library, configure and instantiate the client, and add the dependencies to package.json. Set the Node version to >16, otherwise you may run into errors with Next.js.

index.js

const functions = require('@google-cloud/functions-framework');
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

functions.http('helloHttp', (req, res) => {
  res.send(`Hello ${req.query.name || req.body.name || 'World'}!`);
});

package.json

{
  "dependencies": {
    "@google-cloud/functions-framework": "^3.0.0",
    "openai": "^3.1.0"
  }
}

Next, flesh out the function passed into functions.http(). Note that the handler is now async so we can await the OpenAI call:

functions.http('helloHttp', async (req, res) => {
  // Ensures that we can send requests to this cloud function from our frontend
  res.set('Access-Control-Allow-Origin', '*');
  if (req.method === 'OPTIONS') {
    // Send response to OPTIONS requests
    res.set('Access-Control-Allow-Methods', 'GET, POST');
    res.set('Access-Control-Allow-Headers', 'Content-Type');
    res.set('Access-Control-Max-Age', '3600');
    res.status(204).send('');
    return; // Stop here for preflight requests
  }
  if (!configuration.apiKey) {
    res.status(500).json({
      error: {
        message: "OpenAI API key not configured, please follow instructions in README.md",
      },
    });
    return;
  }

  const inquiry = req.body.inquiry || '';
  if (inquiry.trim().length === 0) {
    res.status(400).json({
      error: {
        message: "Please enter a valid inquiry",
      },
    });
    return;
  }

  // Calls the OpenAI Completions API. You can adjust the temperature to change how
  // conservative GPT-3 will be when classifying.
  try {
    const completion = await openai.createCompletion({
      model: "text-davinci-003",
      prompt: generatePrompt(inquiry),
      temperature: 0.6,
    });
    console.log(completion.data);
    res.status(200).json({ result: completion.data.choices[0].text });
  } catch (error) {
    // Consider adjusting the error handling logic for your use case
    if (error.response) {
      console.error(error.response.status, error.response.data);
      res.status(error.response.status).json(error.response.data);
    } else {
      console.error(`Error with OpenAI API request: ${error.message}`);
      res.status(500).json({
        error: {
          message: 'An error occurred during your request.',
        },
      });
    }
  }
});

Finally, implement the generatePrompt function. GPT-3 performs best with a few examples of the kind of output that’s expected. You can tweak the prompt with additional examples to improve performance, or change it completely!

function generatePrompt(inquiry) {
  return `You are an employee at Home Depot tasked with classifying the type of issue a customer is reporting. If the issue does not appear to be related to Home Depot or its products, say 'Unrelated' instead of making something up.

You must choose from the following options:
- Order Status
- Shipping and Delivery
- Products and Services
- Pricing and Promos
- Payments
- Account
- Returns
- Policies and Legal
- Corporate Information
- Speak to a Human

Question: Can I return orders online?
Answer: Returns

Question: When will my order arrive?
Answer: Shipping and Delivery

Question: Let me speak to someone
Answer: Speak to a Human

Question: I can't sign in to my account
Answer: Account

Question: I need to get a refund, my drill bit arrived broken
Answer: Products and Services

Question: Do you stock the ryobi cordless sander?
Answer: Products and Services

Question: ${inquiry}
Answer:`;
}

Step 4: Deploy and test the cloud function

Your function on GCP should now contain just index.js and package.json. Press Deploy. Feel free to test the function from the Testing tab, but if you’ve followed the instructions so far, the cloud function should work as long as the deployment succeeds.

You can find the URL for your backend on the Function details page.
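Once you have the URL, you can hit the function from any Node 18+ script, since recent Node versions ship a global fetch. This is just a sketch: the URL is a placeholder, and buildRequest/classify are helper names we made up for the example.

```javascript
// Builds the POST options the cloud function expects: a JSON body
// with a single `inquiry` field.
function buildRequest(inquiry) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ inquiry }),
  };
}

// Sends an inquiry to the deployed function and returns the label GPT-3 picked.
async function classify(functionUrl, inquiry) {
  const response = await fetch(functionUrl, buildRequest(inquiry));
  const data = await response.json();
  return data.result.trim();
}

// Example (uses the placeholder URL from the next step):
// classify("https://name-of-your-function-abcdef3.a.run.app", "When will my order arrive?")
//   .then(console.log);
```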

Step 5: Deploy a user interface with GitHub Pages

Now we just need some way for users to interact with the cloud function.

Clone the OpenAI quickstart repo. All the client-side code is in pages/index.js and the server-side code is in pages/api/generate.js. We’ll ignore most of the content in generate.js since we’ve already implemented the backend as a Google Cloud Function.

In index.js, change the URL from api/generate to the URL of your cloud function backend.

const response = await fetch("https://name-of-your-function-abcdef3.a.run.app", ...

In the cloned GitHub repo, go to Settings then Pages, and choose to deploy with GitHub Actions.

Next.js should automatically show up as an option. Click Configure and commit the new nextjs.yml workflow file.

Finally, go to the Actions tab in the repo, click on Deploy Next.js site to Pages, and run the workflow.

Conclusion

That’s it! You’ve just deployed a GitHub web page and cloud function that allows anyone visiting your page to interact with GPT-3.

As a next step, you can try adjusting the UI and prompts for other use cases like

  • Coming up with jokes based on a topic
  • Recommending polite ways of saying the same thing to a customer
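For the joke idea, all that changes is the prompt builder. Something like this hypothetical variant of generatePrompt would do (the wording is ours, so tune it to taste):

```javascript
// A hypothetical prompt builder for the joke use case; swap it in for
// generatePrompt in the cloud function.
function generateJokePrompt(topic) {
  return `Write a short, family-friendly joke about the topic below.

Topic: ${topic}
Joke:`;
}
```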

One thing to note is that by default the GPT-3 Completions API will return a max of 16 tokens. This might be too small for some use cases, so tweak it as needed. And if you’re interested in using GPT or other large language models for conversational interfaces at work, check out our open source project Buff, which makes this easy.
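To raise that cap, pass max_tokens to createCompletion. A minimal sketch, assuming the same openai v3 client used in the cloud function (buildCompletionParams is a helper name we invented):

```javascript
// Builds the parameters for openai.createCompletion(). The API defaults
// max_tokens to 16, so we raise it explicitly for longer outputs.
function buildCompletionParams(prompt, maxTokens = 64) {
  return {
    model: "text-davinci-003",
    prompt,
    temperature: 0.6,
    max_tokens: maxTokens,
  };
}
```

In the handler you would then call openai.createCompletion(buildCompletionParams(generatePrompt(inquiry))).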
