Choosing Between a DIY Approach and API Gateways: Finding the Right Balance for API Management

Mert Simsek
Published in Beyn Technology · 14 min read · Mar 28, 2024

In the realm of API management, developers often face a dilemma: leverage an API gateway, or implement the same functionality directly in application code. This decision carries implications for flexibility, performance, cost, and maintainability. In this blog post, we'll explore the trade-offs between adopting API gateways and building DIY solutions within application code, helping you find the optimal approach for your API management needs.

In today's fast-paced world of software development, the management of APIs plays a critical role in the success and scalability of modern applications. At the heart of API management lie crucial decisions about how to handle API traffic, security, and optimization, and one of the most fundamental is whether to rely on a robust API gateway or to craft bespoke implementations directly within the application code.

(Image source: https://info.support.huawei.com/info-finder/encyclopedia/en/API+Gateway.html)

API gateways offer a comprehensive suite of features designed to streamline API management tasks. These include routing requests to appropriate backend services, implementing authentication and authorization mechanisms, enforcing rate-limiting policies, caching responses for improved performance, and transforming request and response payloads to meet specific requirements. By centralizing these functionalities within a dedicated gateway layer, developers can achieve greater consistency, scalability, and security across their APIs.

On the other hand, opting for a do-it-yourself (DIY) approach involves implementing these functionalities directly within the application codebase. This approach offers developers unparalleled flexibility and control over the behavior of their APIs. By customizing each aspect of API management according to their specific needs, developers can fine-tune performance, integrate seamlessly with existing systems, and adapt rapidly to evolving requirements. However, the decision between API gateways and DIY solutions is not binary, and it entails careful consideration of various factors.

Let's look at a rate-limiting example in Go. In this example, we import the rate package from golang.org/x/time, which provides token-bucket rate limiting. We create a limiter that allows up to 3 requests per second (with a burst of 1), then send 5 requests sequentially and check whether the limiter allows each one. If it does, we process the request; otherwise, we tell the user that the rate limit has been exceeded. So, is it really worth doing this inside the project itself?

package main

import (
    "fmt"
    "time"

    "golang.org/x/time/rate"
)

func main() {
    // Create a rate limiter that allows up to 3 requests per second,
    // with a maximum burst size of 1
    limiter := rate.NewLimiter(3, 1)

    // Send 5 requests sequentially
    for i := 0; i < 5; i++ {
        // If the limiter allows the request, proceed
        if limiter.Allow() {
            fmt.Println("Request processed")
        } else {
            fmt.Println("Rate limit exceeded. Please try again later.")
        }
        time.Sleep(200 * time.Millisecond)
    }
}
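
In a real service, a single global limiter like the one above is rarely enough: you usually need one limiter per client, concurrency-safe access to them, eviction of idle entries, and tests for all of it. As a rough illustration of how quickly the DIY version grows, here is a minimal sketch (not from the original post, and with hypothetical names and limits) of per-client rate limiting keyed by IP address:

package main

import (
    "fmt"
    "net"
    "net/http"
    "sync"

    "golang.org/x/time/rate"
)

// clientLimiters hands out one token-bucket limiter per client IP.
// A production version would also evict idle entries.
type clientLimiters struct {
    mu       sync.Mutex
    limiters map[string]*rate.Limiter
}

func (c *clientLimiters) get(ip string) *rate.Limiter {
    c.mu.Lock()
    defer c.mu.Unlock()
    l, ok := c.limiters[ip]
    if !ok {
        l = rate.NewLimiter(3, 1) // 3 requests/second, burst of 1, per client
        c.limiters[ip] = l
    }
    return l
}

// rateLimitMiddleware rejects requests from clients that exceed their limiter.
func rateLimitMiddleware(next http.Handler) http.Handler {
    limiters := &clientLimiters{limiters: make(map[string]*rate.Limiter)}
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ip, _, err := net.SplitHostPort(r.RemoteAddr)
        if err != nil {
            ip = r.RemoteAddr
        }
        if !limiters.get(ip).Allow() {
            http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
            return
        }
        next.ServeHTTP(w, r)
    })
}

func main() {
    handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "Request processed")
    })
    http.Handle("/", rateLimitMiddleware(handler))
    fmt.Println("Server listening on :8080")
    http.ListenAndServe(":8080", nil)
}

Every one of these details (keying, locking, eviction, response codes) is something a gateway gives you as configuration instead of code.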

Here is another example in Go, this time checking an "api-key" header, which is common in API communication. We create an HTTP server that requires an API key to authenticate requests. The keys are stored in a map, and each incoming request is checked against this map; if the key is valid, the request is processed, otherwise an HTTP 401 Unauthorized status is returned. The apiAuthMiddleware function serves as middleware that handles API key authentication before passing the request on to the actual handler. But again: what is API-key checking doing inside a player service or a product service? At this point, we are also taking on its testing burden and its performance cost.

package main

import (
    "fmt"
    "log"
    "net/http"
)

// A map to store the valid API keys
var apiKeys = map[string]bool{
    "api-key-1": true,
    "api-key-2": true,
}

// HTTP middleware function to authenticate API keys
func apiAuthMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        apiKey := r.Header.Get("X-API-Key")

        // Check if the API key is valid
        if _, ok := apiKeys[apiKey]; !ok {
            http.Error(w, "Unauthorized", http.StatusUnauthorized)
            return
        }

        next.ServeHTTP(w, r)
    })
}

func main() {
    // Wrap the handler with the API authentication middleware
    http.Handle("/", apiAuthMiddleware(http.HandlerFunc(handler)))

    // Start the server
    fmt.Println("Server listening on :8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}

// Sample handler function
func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello, World!")
}

Implementing capabilities such as rate limiting and API key authentication at the API gateway level offers several advantages over handling them within individual projects. In particular, handling rate limiting and authentication at the gateway enhances scalability: the gateway acts as a centralized point for managing incoming requests, allowing it to efficiently handle high volumes of traffic, distribute load evenly, and enforce access controls without overburdening individual services. This ensures that the system can handle spikes in traffic and accommodate future growth seamlessly. Overall, leveraging the capabilities of the API gateway for rate limiting and API key authentication simplifies management, improves scalability, and ensures consistency, making it a preferred approach for modern API architectures.

We’ll delve deeper into the advantages and challenges of both approaches, exploring real-world examples and best practices to help you navigate this decision-making process effectively. Whether you’re building a small-scale application with tight budget constraints or architecting a large-scale microservices ecosystem, finding the right balance between convenience, performance, cost, and maintainability is essential for achieving long-term success in API management. Let’s dive in and uncover the insights that will empower you to make informed decisions tailored to your unique requirements and objectives.

  1. Routing: API Gateways can efficiently route incoming requests to the appropriate backend services based on predefined rules and paths (a minimal DIY sketch of this follows the list).
  2. Authentication and Authorization: They facilitate secure access to APIs by implementing various authentication mechanisms such as OAuth, API keys, or JWT tokens. Additionally, they enforce access control policies to ensure that only authorized users or applications can access protected resources.
  3. Rate Limiting: API Gateways can enforce rate-limiting policies to prevent abuse or overloading of backend services by limiting the number of requests a client can make within a specified time frame.
  4. Caching: By caching responses from backend services, API Gateways can improve the performance and scalability of APIs by reducing the load on backend systems and minimizing response times for frequently accessed data.
  5. Request and Response Transformation: They can transform incoming requests or outgoing responses to comply with specific formats or standards, such as converting between different data formats (e.g., JSON to XML) or filtering sensitive information before sending responses to clients.
  6. Logging and Monitoring: API Gateways offer built-in logging and monitoring capabilities to track API usage, monitor performance metrics, and detect and troubleshoot errors or anomalies in real-time.
  7. Traffic Management: They enable traffic shaping and load balancing to distribute incoming requests evenly across multiple backend servers or microservices, ensuring high availability and scalability of APIs.
  8. Security: API Gateways provide features like SSL/TLS termination, request validation, and threat protection to enhance the security posture of APIs and protect against common security threats such as SQL injection or cross-site scripting (XSS).
  9. Versioning and Lifecycle Management: They facilitate API versioning and lifecycle management, allowing developers to introduce changes or updates to APIs seamlessly while ensuring backward compatibility and minimal disruption to existing clients.
  10. Integration with External Services: API Gateways can integrate with external services such as identity providers, logging platforms, or third-party API management tools to extend their functionality and streamline API management workflows.
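
To make the routing item above concrete, here is a minimal sketch (not from the original post; the backend addresses are placeholders) of what even the most basic DIY routing layer looks like in Go, built on the standard library's httputil reverse proxy:

package main

import (
    "fmt"
    "log"
    "net/http"
    "net/http/httputil"
    "net/url"
)

// newProxy builds a reverse proxy to a backend service.
// The target URLs used below are placeholders for this sketch.
func newProxy(target string) *httputil.ReverseProxy {
    u, err := url.Parse(target)
    if err != nil {
        log.Fatal(err)
    }
    return httputil.NewSingleHostReverseProxy(u)
}

func main() {
    // Path-based routing rules, hardcoded for the sketch
    http.Handle("/users/", newProxy("http://localhost:9001"))
    http.Handle("/orders/", newProxy("http://localhost:9002"))

    fmt.Println("DIY gateway listening on :8080")
    log.Fatal(http.ListenAndServe(":8080", nil))
}

Even this toy version immediately raises the questions a gateway answers out of the box: health checks, retries, header rewriting, TLS termination, and so on.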

By leveraging these capabilities, API Gateways simplify and streamline the management of APIs, allowing developers to focus on building and delivering high-quality services without getting bogged down by cumbersome processes or infrastructure concerns. If developers choose to implement the functionalities typically provided by an API Gateway directly within their software project, several drawbacks and negative aspects may arise:

  1. Increased Complexity: Incorporating routing, authentication, rate limiting, caching, request/response transformation, logging, monitoring, traffic management, security, versioning, and integration functionalities directly within the software project significantly increases its complexity. Developers must handle each of these aspects individually, leading to a more intricate and convoluted codebase.
  2. Higher Development and Maintenance Costs: Building and maintaining these functionalities within the software project require substantial development effort and ongoing maintenance. Developers need to allocate additional resources for implementing, testing, debugging, and updating each feature, resulting in higher development costs and longer time-to-market.
  3. Lack of Specialization and Best Practices: API Gateways are purpose-built tools designed specifically for managing API traffic and implementing common API management functionalities. By reinventing these functionalities within the software project, developers may overlook industry best practices, optimization techniques, and specialized features provided by dedicated API Gateway solutions.
  4. Scalability Challenges: Managing API traffic and ensuring scalability can be challenging without the infrastructure and optimizations provided by API Gateways. Handling increased traffic, distributing load across servers, and maintaining high availability become more complex and error-prone when implemented directly within the software project.
  5. Security Risks: Implementing security features such as authentication, authorization, and rate limiting without the expertise and built-in security measures of API Gateways may introduce security vulnerabilities and risks. Developers must ensure proper handling of sensitive data, protection against common security threats, and adherence to security best practices, which can be error-prone and time-consuming.
  6. Limited Flexibility and Extensibility: While developers have more control over the implementation of functionalities within their software project, this approach may limit flexibility and extensibility compared to using an API Gateway. Incorporating new features, integrating with external services, and adapting to changing requirements become more challenging and time-consuming without the modular and extensible architecture of API Gateways.
  7. Reduced Performance and Efficiency: Without the optimizations and performance enhancements provided by API Gateways, the software project may experience decreased performance and efficiency, particularly under high loads. Handling API traffic, managing caching, and optimizing response times require careful implementation and may not achieve the same level of performance as dedicated API Gateway solutions.

While implementing API management functionalities within the software project offers increased control and customization, it also comes with significant drawbacks, including increased complexity, higher development and maintenance costs, scalability challenges, security risks, limited flexibility, and reduced performance. Developers should carefully weigh these factors and consider the trade-offs before deciding whether to build these functionalities in-house or leverage dedicated API Gateway solutions like Kong.

Kong Gateway

Kong Gateway is a powerful tool that empowers developers to efficiently manage their APIs while providing various functionalities to streamline the API management process. With Kong, developers can easily set up routing rules to direct incoming requests to the appropriate backend services, ensuring efficient traffic distribution and load balancing. Additionally, Kong offers a wide range of authentication and authorization plugins, allowing developers to enforce security policies and control access to API endpoints effectively. Rate limiting capabilities in Kong enable developers to set usage limits for APIs, preventing abuse or overloading of backend systems.

Furthermore, Kong's caching functionality helps improve API performance by caching responses from backend services, reducing response times for frequently accessed data.

One of the key features of Kong is its extensibility through plugins. Developers can leverage various plugins available in the Kong ecosystem to customize and enhance the functionality of their API gateway. Whether it's request and response transformation, logging and monitoring, or integration with external services, Kong provides a rich set of plugins to cater to diverse requirements.

Moreover, Kong’s robust security features, including ACL, JWT validation, and rate limiting, help protect APIs against common security threats and ensure data integrity and confidentiality. Overall, Kong API Gateway simplifies the complexities of API management, offering developers a scalable, flexible, and secure solution to efficiently manage their APIs and deliver high-quality services to their users.

For installation: https://github.com/Kong/kong

  1. To start, clone the Docker repository and navigate to the compose folder:

git clone https://github.com/Kong/docker-kong
cd docker-kong/compose/

  2. Start the Gateway stack using:

KONG_DATABASE=postgres docker-compose --profile database up

Creating a service

In Kong, creating a service involves defining an upstream service that represents the backend service or microservice to which incoming requests will be proxied. This process enables Kong to route incoming requests to the appropriate backend service based on predefined rules and configurations. Let’s break down the steps to create a service in Kong:

curl -i -s -X POST http://localhost:8001/services \
--data name=example_service \
--data url='http://httpbin.org'

Once you’ve created a service within Kong Gateway, each service is assigned a distinct identifier, often referred to as its ID. This ID serves as a unique reference point for the service within Kong’s system. Alternatively, if you specified a name during the service creation process, you can also use that name to identify the service in subsequent interactions.

To access and inspect the current configuration and status of a service, you’ll utilize its designated service URL. This URL structure follows a consistent pattern: /services/{service name or ID}. By appending the specific name or ID of the service to the base /services endpoint, you construct the URL necessary to access detailed information about that particular service.

curl -X GET http://localhost:8001/services/example_service

Creating a route

Routes in Kong Gateway dictate the behavior of incoming requests, determining how they are proxied and directed within the gateway’s infrastructure. To associate a route with a particular service, you can create a new route by issuing a POST request to the designated service URL.

In the context of configuring a new route, let’s consider a practical example where we want to direct traffic to the example_service service that was previously created. Suppose we wish to configure a route for requests targeting the /mock path.

curl -i -X POST http://localhost:8001/services/example_service/routes \
--data 'paths[]=/mock' \
--data name=example_route

Next, update the route to add a tag:

curl --request PATCH \
--url localhost:8001/services/example_service/routes/example_route \
--data tags="tutorial"

Rate Limiting

The rate limiting plugin is installed by default on Kong Gateway, and can be enabled by sending a POST request to the plugins object on the Admin API:

curl -i -X POST http://localhost:8001/plugins \
--data name=rate-limiting \
--data config.minute=5 \
--data config.policy=local

Send 6 mock requests:

for _ in {1..6}; do curl -s -i localhost:8000/mock/anything; echo; sleep 1; done

Once the limit is exceeded, Kong responds with this message:

{ "message": "API rate limit exceeded" }

Rate limiting can also be activated at the service level or at the route level.

Service-Level Rate Limiting: At the service level, rate limiting is applied globally to all routes associated with a particular service. This means that regardless of the specific endpoint or path being accessed within the service, the rate limit rules defined at the service level will be enforced uniformly. Service-level rate limiting is ideal for scenarios where consistent rate limiting policies need to be applied across multiple endpoints or when rate limiting is primarily based on the overall traffic volume directed towards a particular service.

Route-Level Rate Limiting: Conversely, route-level rate limiting allows administrators to define rate limit rules that are specific to individual routes or endpoints within a service. This granular approach enables fine-tuning of rate limiting policies based on the unique characteristics and requirements of each API endpoint. Route-level rate limiting is particularly useful when different endpoints within the same service may have distinct usage patterns or when certain endpoints require more restrictive rate limits compared to others.
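
Scoping works by attaching the plugin to a specific service or route instead of creating it globally. Since the Admin API is plain HTTP, this can be driven from any language; here is a hedged Go sketch (not from the original post) that enables route-level rate limiting on the example_route created earlier, assuming a local Admin API on port 8001:

package main

import (
    "fmt"
    "net/http"
    "net/url"
)

func main() {
    // POSTing to /routes/{name or id}/plugins scopes the plugin to that route only;
    // POSTing to /services/{name or id}/plugins would scope it to the whole service.
    resp, err := http.PostForm(
        "http://localhost:8001/routes/example_route/plugins",
        url.Values{
            "name":          {"rate-limiting"},
            "config.minute": {"5"},
            "config.policy": {"local"},
        },
    )
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println("Admin API responded with:", resp.Status)
}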

By offering both service-level and route-level rate limiting capabilities, Kong Gateway empowers administrators to implement flexible and tailored rate limiting policies that suit the needs of their APIs and applications. Whether enforcing global rate limits across entire services or applying targeted rate limits to specific endpoints, Kong’s rate limiting plugin provides a comprehensive solution for controlling API traffic and ensuring optimal performance, reliability, and security.


Proxy Caching

Kong employs caching as a key strategy to enhance performance and optimize response times for API requests. By leveraging the Proxy Cache plugin, Kong accelerates performance by storing and serving cached responses, thereby reducing the need for repeated requests to upstream services.

The Proxy Cache plugin operates by caching responses based on various configurable criteria, including response codes, content types, and request methods. This allows Kong to intelligently cache responses that meet specific criteria, improving efficiency and reducing latency for subsequent requests.

curl -i -X POST http://localhost:8001/plugins \
--data "name=proxy-cache" \
--data "config.request_method=GET" \
--data "config.response_code=200" \
--data "config.content_type=application/json" \
--data "config.cache_ttl=30" \
--data "config.strategy=memory"

Then check the cache status header on a request:

curl -i -s -XGET http://localhost:8000/mock/anything | grep X-Cache

From now on, our responses will be served from the in-memory cache.

The proxy caching plugin can be used in various scenarios to enhance the performance and scalability of API endpoints. Here are some common situations where the proxy caching plugin can be beneficial:

  1. Frequent Read Operations: When API endpoints serve predominantly read operations and the response data doesn’t change frequently, proxy caching can significantly reduce response times by serving cached responses instead of hitting the backend server for every request.
  2. Static Content Delivery: If your API serves static content such as images, CSS files, or JavaScript files, proxy caching can cache these resources at the edge, reducing latency and offloading traffic from the backend servers.
  3. Reducing Server Load: By caching responses at the gateway level, proxy caching reduces the load on backend servers, allowing them to handle more requests and improving overall system performance and scalability.
  4. Handling Burst Traffic: In scenarios where there are sudden spikes in traffic, proxy caching can help absorb the increased load by serving cached responses, preventing backend servers from becoming overwhelmed.
  5. Improving User Experience: Caching frequently accessed data or resources closer to the user improves the user experience by reducing latency and response times, leading to faster page loads and smoother interactions with the application.
  6. API Rate Limiting: Proxy caching can be used in conjunction with rate limiting to cache responses for requests that exceed the rate limit, ensuring that even when rate limits are reached, users still receive responses without overwhelming the backend servers.
  7. Content Delivery Networks (CDNs): When integrating Kong with a CDN, proxy caching can be used to cache content at the CDN’s edge locations, further reducing latency and improving content delivery performance globally.

Enable authentication

Authentication serves as the gatekeeper of access, ensuring that only authorized individuals or systems can interact with a resource. In the realm of API management, authentication takes center stage as API Gateway Authentication, acting as the guardian of data flow between clients and upstream services.

Within Kong Gateway’s extensive plugin ecosystem, you’ll find a diverse array of authentication methods tailored to meet various security needs. These methods encompass the most prevalent and trusted standards in the industry, ensuring robust protection for your APIs and services.

Among the common authentication mechanisms supported by Kong Gateway are:

  • Key Authentication: A straightforward method where clients provide an API key for authentication, granting access based on the validity of the key.
  • Basic Authentication: A simple yet effective method where clients authenticate themselves using a username and password combination, which is then encoded and transmitted with each request.
  • OAuth 2.0 Authentication: A widely adopted standard for delegated authorization, allowing clients to obtain access tokens to interact with protected resources on behalf of a user.
  • LDAP Authentication: An advanced method that integrates with Lightweight Directory Access Protocol (LDAP) servers, enabling centralized authentication and user management across distributed systems.
  • OpenID Connect: A modern authentication protocol built on top of OAuth 2.0, providing identity layer capabilities and facilitating secure authentication and single sign-on (SSO) across applications and services.

Create a new consumer

curl -i -X POST http://localhost:8001/consumers/ \
--data username=luka

Assign the consumer a key

curl -i -X POST http://localhost:8001/consumers/luka/key-auth \
--data key=top-secret-key

Enable key authentication globally

curl -X POST http://localhost:8001/plugins/ \
--data "name=key-auth" \
--data "config.key_names=apikey"

Let's see the result. We receive a 401 status code until we send a valid key with the request.
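
As a quick check, here is a small Go sketch (not from the original post) that sends one request without a key and one with it; it assumes the Kong proxy on localhost:8000, the /mock route, and the key name and value configured above:

package main

import (
    "fmt"
    "net/http"
)

// sendRequest calls the mock route through the Kong proxy, optionally
// including the API key assigned to the consumer above.
func sendRequest(withKey bool) {
    req, err := http.NewRequest(http.MethodGet, "http://localhost:8000/mock/anything", nil)
    if err != nil {
        fmt.Println("building request failed:", err)
        return
    }
    if withKey {
        req.Header.Set("apikey", "top-secret-key")
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Printf("with key = %v -> %s\n", withKey, resp.Status)
}

func main() {
    sendRequest(false) // expected: 401 Unauthorized
    sendRequest(true)  // expected: 200 OK
}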

To Sum Up

In summary, Kong Gateway offers a robust solution for optimizing API performance, scalability, and reliability through its plugin ecosystem, from rate limiting and key authentication to proxy caching. By handling these concerns at the gateway level, Kong enables organizations to significantly reduce response times, alleviate server load, and enhance the overall user experience. With the ability to cache frequently accessed data, handle burst traffic, and serve static content efficiently, Kong Gateway empowers businesses to deliver fast, reliable, and scalable API services to their users. Whether it's accelerating read operations, improving content delivery, or integrating with CDNs, Kong's caching capabilities complement its traffic-management features to provide a comprehensive solution for optimizing API endpoints. Overall, Kong Gateway serves as a powerful tool for modernizing API infrastructure, driving innovation, and delivering exceptional digital experiences to users worldwide.


Mert Simsek
Beyn Technology

I’m a software developer who wants to learn more. First of all, I’m interested in building, testing, and deploying automatically and autonomously.