Using Apigee API Proxy to streamline a GenAI Hackathon

Romin Irani
Google Cloud - Community
Sep 7, 2023

If your organization is planning a Gen AI Hackathon, it is advisable to streamline access to the foundation models. By that we mean the following:

  • Wrap foundation model access with an API rather than exposing the models directly.
  • Manage (Provision, Approve, Disable) API Keys to participating teams.
  • Track API Key usage.
  • Monitor API Metrics (Latency, Errors) in a centralized way.
  • Manage Rate Limits to the API.
  • Analyse Prompts sent to your foundational models.

To achieve the above, an API gateway is the right tool for the job. In this blog post, we will walk through a step-by-step tutorial on setting up an Apigee API Proxy to streamline access to the Gen AI APIs during a hackathon.

Our solution is shown below:

The solution details are as follows:

  1. We develop our API first as a Cloud Function that will wrap the Vertex AI Text Bison model.
  2. We then configure the Apigee API Platform in our Google Cloud project and create an API Proxy for our Cloud Function. This API Proxy is available at the /predictionapi endpoint.
  3. All applications developed during the hackathon will invoke the /predictionapi endpoint and therefore go through the Proxy.
  4. The Proxy authenticates via API keys; on successful authentication, the call is propagated to the Cloud Function and the response is returned.
  5. We can utilize off-the-shelf functionality available in the Apigee API Platform for API metrics and more.

Prerequisites

  • You have a Google Account and a Google Cloud project with a billing account.

Let’s deploy the API first

We shall first deploy a sample API that wraps a key Foundation model in Vertex AI. We will then manage access to this API via the Apigee API Proxy.

From the previous solution diagram, the step that we are trying to do in this section is highlighted below:

The Text Bison foundation model is well suited to a variety of generative AI tasks, such as summarization, sentiment analysis, entity extraction, and content creation.

To expose this as an API, we are going to use a Cloud Function to wrap its functionality. For this, we will use one of the Gen AI Application Templates for Google Cloud, made available in the repository here:

We are specifically going to use the text-predict-cloudfunction, a Google Cloud Function that wraps the call to the predict method of the Text Bison foundation model.
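For orientation, the core of such a function can be sketched as follows. This is a minimal sketch assuming the Vertex AI Python SDK; the function names, model version and generation parameters here are illustrative, not the exact repository code:

```python
import json


def extract_prompt(body: dict) -> str:
    # Pull the "prompt" field out of the JSON request body,
    # rejecting missing or empty prompts early.
    prompt = (body or {}).get("prompt", "").strip()
    if not prompt:
        raise ValueError("request body must contain a non-empty 'prompt'")
    return prompt


def predict_text(request):
    # HTTP entry point, deployed as a Cloud Function.
    # The Vertex AI SDK is imported lazily to keep cold starts small;
    # on Cloud Functions, vertexai.init() infers project and region
    # from the runtime environment.
    import vertexai
    from vertexai.language_models import TextGenerationModel

    vertexai.init()
    prompt = extract_prompt(request.get_json(silent=True))
    model = TextGenerationModel.from_pretrained("text-bison@001")
    result = model.predict(prompt, temperature=0.2, max_output_tokens=256)
    return (
        json.dumps({"response_text": result.text}),
        200,
        {"Content-Type": "application/json"},
    )
```

The important point is the shape of the contract: a JSON body with a `prompt` field in, a JSON body with a `response_text` field out, which is what the proxy and the sample curl calls below rely on.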

To setup this Cloud Function, follow the steps specified in the README file here:

Again, it is not essential that you deploy this exact Cloud Function, but ensure that you do have an API available publicly over HTTPS that you can invoke. If you use the text-predict-cloudfunction, you will have an endpoint in the following format:

https://$GCP_REGION-$GCP_PROJECT.cloudfunctions.net/predictText 

When you invoke the above API via curl by passing in a prompt, it sends back the response received from the Text Bison Foundation model. A sample run is shown below:

curl -m 70 -X POST https://$GCP_REGION-$GCP_PROJECT.cloudfunctions.net/predictText \
-H "Content-Type: application/json" \
-d '{
"prompt": "Give me a famous quote of Plato?"
}'



{"response_text": "\"The greatest wealth is to live content with little.\""}

Steps to setup the Apigee API Proxy

Setting up the Apigee API Proxy requires the following steps:

  1. Configure Apigee Organization in your Google Cloud Project.
  2. Create the API Proxy for our Gen AI Cloud Function.
  3. Manage Keys for several Developer Teams that need to access our Gen AI Cloud Function.

Create Apigee Organization in your Google Cloud Project

Skip this section if you have already created an Apigee organization.

The first step is to set up Apigee in your Google Cloud project. Go to the following URL: https://console.cloud.google.com/apigee/welcome. If an Apigee organization is not yet set up, you should see a screen similar to the one below.

Click on SET UP APIGEE above. This will lead you through a setup wizard, the first step of which is to enable the required APIs. Go ahead and click on ENABLE APIs.

The next step is to Set up networking (Go with the defaults):

The next step is to Configure hosting and encryption (Ensure that you select the Google Cloud region that is appropriate to your requirements):

The next step is Customize access routing, the screen is shown below:

In this, we are not using our own domain name but going with one that is automatically generated for us, using nip.io. Additionally, click on the Edit button. For this example, we have gone with public internet access as shown below:

Go ahead with the next step and it will start provisioning Apigee for your project.

Note: This step takes a while. It took about 30–40 minutes for me.

Once complete, you will see the following screen:

We won’t be deploying any sample API, since we have already done that via our Cloud Function. As part of setting up the Apigee organization, an environment group and an environment have been created for you. Here is the environment group; note the host names, which will form the host name of our API (more on that later). You will also notice that an environment (test-env) has been set up.

If you visit the Environments section, you will notice that there is one environment (test-env), as shown below:

If you click on the test-env environment, you can check out the environment details as shown below:

Notice that two nodes were provisioned, with all the Apigee software set up on them.

We are all set now to create the API Proxy.

Create the API Proxy for our Gen AI Cloud Function

The step that we are now going to address in this section is shown below:

To create the API Proxy in our test-env , follow the steps given below:

Go to Apigee → API proxies and click on CREATE. The screen is shown below.

Enter/Select the following values:

Click on NEXT. In this step, choose the environment to deploy the API Proxy into. Since we have the test-env created for us, we can select that.

Click on CREATE. This will create the API Proxy and you can see the details as given below:

Go to the DEVELOP tab above to see the Proxy + Target endpoints along with PreFlow and PostFlow for the API request/response. A sample screenshot is shown below:
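For reference, the endpoint definitions Apigee generates look roughly like the sketch below (two separate configuration files in the proxy bundle). The names and placeholders are illustrative, and the XML generated for you may carry additional elements:

```xml
<!-- Proxy endpoint: receives calls on /predictionapi and routes them
     to the target endpoint named "default" -->
<ProxyEndpoint name="default">
  <HTTPProxyConnection>
    <BasePath>/predictionapi</BasePath>
  </HTTPProxyConnection>
  <RouteRule name="default">
    <TargetEndpoint>default</TargetEndpoint>
  </RouteRule>
</ProxyEndpoint>

<!-- Target endpoint: forwards the request to the Cloud Function backend -->
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <URL>https://GCP_REGION-GCP_PROJECT.cloudfunctions.net/predictText</URL>
  </HTTPTargetConnection>
</TargetEndpoint>
```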

Testing the API Proxy

Earlier, we looked at environment groups and the specific group created for test-env, along with its hostname. The hostname is in the format <IP_ADDRESS>.nip.io, so our Proxy endpoint will be in the following format:

https://<IP_ADDRESS>.nip.io/predictionapi

Using curl, we can give our API Proxy a shot. Use your specific IP address from the environment group host name and run the curl command below:

curl -m 70 -X POST https://<IP_ADDRESS>.nip.io/predictionapi \
-H "Content-Type: application/json" \
-d '{
"prompt": "What are the best places to visit in the United States?"
}'

This should give you back a successful response, if all goes well.

API Monitoring

Visit Proxy development → API monitoring to view multiple charts containing key metrics for the API Proxy. Some of the metrics captured automatically include Proxy Total Traffic, Proxy Error Rate, Proxy Latencies Percentile, Proxy Policies Percentile, and more.

A sample screenshot is shown below:

Analytics — API Metrics

You will also find the Analytics section interesting. It provides metrics on proxy performance, error rates, and more. You can also create custom reports, selecting from dozens of standard metrics calculated and made available for you. You can get API metrics from the Analytics → API Metrics section in the Cloud Console.

A sample screenshot is shown below:

Manage Keys for several Developer Teams that need to access our Gen AI Cloud Function

We are now in the final steps of our solution, where we need to issue the API keys and set up our API Proxy to verify them as part of the flow.

Based on our solution diagram, we are now going to address the following steps. We have highlighted both of them, since we will be modifying the API Proxy configuration to add a VerifyAPIKey policy in the incoming request flow.

Add Verify API Key Policy

Our first step is to add the Verify API Key policy to the API Proxy. Go to Proxy development → API proxies. Select the predictionapiproxy and go to the DEVELOP tab. Select the PreFlow for the Proxy endpoints as shown below:

Then click on the + sign next to the PreFlow as shown above. Add the policy step for Verify API Key as shown below. Give the name and display name as shown (or select your own names).

Click on ADD to complete adding the policy step. In the diagram, select VA-1 in the PreFlow and, in the XML shown for the policy definition, change the ref from request.queryparam.apikey to request.header.apikey.

This means that we will now have to pass a header containing the API key in the HTTP POST call to the Proxy. The API Proxy will extract this value and validate it before allowing the call to go through.
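After the edit, the policy definition should look roughly like this. The policy name VA-1 matches the step created earlier, though the exact boilerplate attributes Apigee generates for you may differ:

```xml
<VerifyAPIKey continueOnError="false" enabled="true" name="VA-1">
  <DisplayName>Verify API Key-1</DisplayName>
  <!-- Read the key from the "apikey" request header
       instead of a query parameter -->
  <APIKey ref="request.header.apikey"/>
</VerifyAPIKey>
```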

Remember to SAVE and then DEPLOY the new revision of the API Proxy.

This completes updating the Proxy to validate the API Keys.

Test the Verify API Key Policy

If we invoke the curl command against our proxy again with a wrong API key, we should get an invalid API key error. A sample run is shown below:

curl -m 70 -X POST https://<IP_ADDRESS>.nip.io/predictionapi \
-H "Content-Type: application/json" \
-H "apikey: 1" \
-d '{
"prompt": "What are the best places to visit in the United States?"
}'

{"fault":{"faultstring":"Invalid ApiKey","detail":{"errorcode":"oauth.v2.InvalidApiKey"}}}

Issue API Keys

The final step now is to issue API keys via the Apigee service. To do that, we first need to understand the logical relationships between a few entities.

The first entity we need to create is an API Product. An API Product is used to bundle your APIs and make them available to app developers for consumption. In our case, think of the API Product as bundling our single API Proxy.

So let’s go and create this API Product first. Go to Distribution → API products and click on the CREATE button. The screen is shown below:

Give a name and display name to the Product. Ensure that you have selected the right environment and set the access to Public. Scroll down to an important section (Operations), where we include our API Proxy. Click on ADD AN OPERATION and enter the values as specified below:

You will notice that we have selected the API Proxy deployed in the previous section. The path is the /predictionapi endpoint, and the HTTP method selected is POST. Click on SAVE, then come back and click on SAVE for the Product.

This will create the Product as shown below:

If you click on the above product, you will see the details as shown below:

The next step now is to issue the keys. There are two more key entities in the Apigee ecosystem: Developers and Apps. Each Developer (think of them as a team in the hackathon) can have one or more Apps, and each App has an API key (credential).

So, first up, create the Developer. Go to Distribution → Developers and click on the CREATE button. Give some values for your first Developer (team). A sample record is shown below:

Now we can create an App that this team will be working on. Go to Distribution → Apps and click on CREATE. A sample application definition for Team 1 is shown below:

Click on ADD CREDENTIAL and ensure that you select the right PRODUCT that you created in the previous step.

Click on ADD and then, on the Create App page, click on CREATE. This will create the application. Note that the credentials (API keys) are listed and, most importantly, correctly associated with the Product in which you have defined the API Proxy.

This means that the API Key will be validated against the Product that contains the API Proxy.

This completes the configuration. You can now use the same curl command, this time with a valid API key, and you should find that the whole flow works:

curl -m 70 -X POST https://<IP_ADDRESS>.nip.io/predictionapi \
-H "Content-Type: application/json" \
-H "apikey: <YOUR_API_KEY>" \
-d '{
"prompt": "What are the best places to visit in the United States?"
}'

<OUTPUT FROM THE API>
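Hackathon teams that prefer to call the Proxy from application code rather than curl could use something along these lines. This is a sketch using only the Python standard library; the host, key, and function names are placeholders, not part of the tutorial's repository:

```python
import json
import urllib.request


def build_request(proxy_host: str, api_key: str, prompt: str) -> urllib.request.Request:
    # Build the POST request exactly as the curl command above does:
    # a JSON body carrying the prompt, plus the "apikey" header that
    # the VerifyAPIKey policy checks.
    return urllib.request.Request(
        f"https://{proxy_host}/predictionapi",
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"Content-Type": "application/json", "apikey": api_key},
        method="POST",
    )


def call_prediction_api(proxy_host: str, api_key: str, prompt: str) -> str:
    # Send the request through the Apigee proxy and unwrap the
    # "response_text" field returned by the Cloud Function.
    req = build_request(proxy_host, api_key, prompt)
    with urllib.request.urlopen(req, timeout=70) as resp:
        return json.loads(resp.read())["response_text"]
```

Because the key travels in a header rather than the URL, it stays out of access logs and shared command histories, which matters when many teams are using issued credentials.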

Debugging Issues

If there is an issue and the flow does not work, check out the DEBUG tab in the API proxy definition. A sample screen is shown below:

Click on START DEBUG SESSION. This brings up the session details, where you need to specify the environment in which you will be testing. In our case, it is the default test-env that was set up during the Apigee organization setup.

Click on START. Send across a sample curl request and you will notice that request appearing as shown below:

Click on the specific transaction to view the complete request/response along with the PreFlow and PostFlow processing.

Conclusion

I hope this clarifies how you can set up an Apigee API Proxy to streamline the development of applications that need to hit your foundation model API during a hackathon. The same approach applies equally well to a range of API hackathons.
