Protocols Demystification

Karthik Joshi
13 min read · Jun 22, 2024


Have you ever wondered how different web protocols stack up in real-world applications? In this blog, we’ll take a hands-on journey through HTTP/1.1, HTTP/2, gRPC, and UDP by building a simple To-Do app. You’ll discover the strengths and quirks of each protocol and learn why understanding them can supercharge your development skills.

Introduction to the OSI Model

The OSI (Open Systems Interconnection) model is a conceptual framework used to understand and standardize the functions of a telecommunication or computing system regardless of its underlying internal structure and technology. It divides the communication process into seven layers, each with specific functions and protocols.

  1. Application Layer: This is the topmost layer where user interaction occurs. Protocols like HTTP, FTP, and SMTP operate here to facilitate user-level applications such as web browsing and email.
  2. Presentation Layer: This layer is responsible for data translation, encryption, and compression. It ensures that data is in a usable format and appropriately presented to the Application layer.
  3. Session Layer (TLS): This layer manages application sessions or connections. It handles establishing, maintaining, and terminating communication sessions, often using protocols like TLS for security.
  4. Transport Layer (TCP, PORT): The Transport layer ensures end-to-end communication and error recovery. TCP (Transmission Control Protocol) operates here to provide reliable data transfer through segmentation and reassembly of data packets, identified by specific ports.
  5. Network Layer (IP, Packets): The Network Layer (IP, Packets) is primarily responsible for data routing, forwarding, and addressing. Protocols like IP (Internet Protocol) handle the delivery of packets from the source to the destination across multiple networks.
  6. Data Link Layer (Frames): The Data Link layer ensures reliable data transfer across a physical network link. It breaks packets into frames and handles error detection and correction. Devices like switches operate at this layer.
  7. Physical Layer (Radio waves, Ethernet): The lowest layer deals with the physical connection between devices, including the transmission and reception of raw bit streams over a physical medium such as radio waves or Ethernet cables.

Understanding the OSI model is crucial for grasping how different protocols and devices interact within a network:

  • Stateful Protocols: These protocols, like TCP and WebSocket, maintain a connection state between the communicating parties, ensuring a reliable and ordered delivery of data.
  • Stateless Protocols: Protocols such as UDP, HTTP, and WebRTC do not retain session state, making them simpler and faster but less reliable compared to stateful protocols.

Additionally, the OSI model helps classify the operation of various networking devices and services:

  • Routers operate on the Network layer (Layer 3), directing data packets based on IP addresses.
  • Switches function on the Data Link layer (Layer 2), handling data frames within the same network.
  • CDNs (Content Delivery Networks) work on the Application layer (Layer 7), optimizing content delivery to users.
  • VPNs (Virtual Private Networks) span the Network and Transport layers (Layers 3 & 4), providing secure communication over potentially insecure networks. A VPN client encrypts traffic and forwards it to a VPN server, which decrypts it and serves the request on the client's behalf. Because of this extra hop, and because VPN servers are often hosted in another region, VPNs are generally slower than going directly through your ISP.

Protocols

TCP: TCP is a core protocol of the Internet Protocol Suite and operates at the OSI model's Transport layer (Layer 4). It is known for providing reliable, ordered, and error-checked delivery of a data stream between applications.

Pros:

  • Flow Control: Ensures that the sender does not overwhelm the receiver by sending data too quickly.
  • Congestion Control: Manages network congestion to prevent excessive packet loss and ensure fair bandwidth distribution.
  • Retransmission: Lost or corrupted data packets are retransmitted, ensuring complete and accurate data delivery.
  • Reliability: Guarantees the integrity and delivery of data through error-checking and acknowledgment mechanisms.

UDP (User Datagram Protocol):

UDP is another Transport layer protocol but operates in a simpler, connectionless manner, focusing on minimal latency.

Pros:

  • Low Latency: Data is sent without the overhead of establishing and maintaining a connection, leading to minimal delays.
  • No Need for Connection Establishment: Eliminates the need for handshaking procedures, making it suitable for real-time applications.
  • Stateless: Each data packet is independent, reducing the complexity of maintaining connection states and making it more scalable.

IP (Internet Protocol):

IP operates at the Network layer (Layer 3) and is responsible for addressing and routing packets between devices on different networks.

Key Points:

  • Fragmentation: An IP packet can be at most 65,535 bytes, but the typical Maximum Transmission Unit (MTU) of a link is 1,500 bytes, so larger packets are split into fragments that fit within the MTU.
  • IPv4 and IPv6: IPv4 uses 32-bit addresses, while IPv6 uses 128-bit addresses to accommodate more devices.
  • Subnet Masking: Determines whether a destination IP address is within the same network; if not, data is sent to a router (the gateway), which may apply NAT (Network Address Translation) on the way to a public network.
  • ICMP (Internet Control Message Protocol): Used for diagnostic and error-reporting purposes, facilitating tools like PING and traceroute.

TLS (Transport Layer Security):

TLS ensures secure communication over a computer network; in OSI terms it sits between the Transport and Application layers and is commonly mapped to the Session layer, as noted above.

Key Versions:

  • TLS 1.2: Commonly uses RSA-based key exchange, where the server's public/private key pair protects the secret from which the session key is derived.
  • TLS 1.3: Mandates (Elliptic-Curve) Diffie-Hellman key exchange: each side combines its own private key with the peer's public value to derive a shared secret, providing forward secrecy and a faster handshake.

HTTP Protocols

HTTP 1.0:

  • New TCP Connection: Every HTTP request/response is handled over a new TCP connection. This approach is inefficient for multiple requests to the same server.
  • Status Code String Version: Status codes such as “HTTP/1.0 200 OK” are sent in plain text as part of the response headers.

HTTP 1.1

  • Keep-Alive Header: Introduced to allow multiple requests and responses to be sent over a single TCP connection, reducing latency.
  • Pipelining: Allows sending multiple requests without waiting for each response, but it was found problematic due to potential order mismatches and wasn’t widely adopted.

HTTP 2

  • No Status String: Status codes are sent as numeric values instead of string versions.
  • Binary Data Handling: Uses binary framing to efficiently handle data on the wire, improving performance.
  • Secure by Default: Encourages the use of HTTPS, making encryption and security measures standard.
  • Protocol Ossification: Strict interpretation of the protocol by servers, clients, and middleboxes makes it hard to evolve the protocol beyond what is already defined.
  • Multiplexing: Enables multiple requests and responses to be sent in parallel over a single TCP connection, reducing latency and improving efficiency.
  • Compression: Supports both header and data compression to further optimize data transfer.
  • Head-of-Line Blocking (HOL): Multiplexing removes HTTP-level HOL blocking, but a single lost TCP packet still stalls every stream on the connection, because TCP delivers bytes strictly in order.

HTTP 3 (based on QUIC)

  • UDP-based Protocol: Uses UDP instead of TCP for faster connections and better performance in high-latency environments.
  • Congestion Control and Flow Control: Built-in mechanisms to manage and optimize data transfer over unreliable networks.
  • No IP Fragmentation: QUIC handles packet loss and retransmissions at the application layer, avoiding issues with IP fragmentation.
  • Stream Level Ordering: Allows for ordered delivery of data streams, ensuring that data arrives in the correct order at the application layer.

Work is already underway to bring QUIC to Node.js; there is an official PR in progress.

Let’s see the Todo application on an HTTP/2 server

We’ll set up a Node.js server using the HTTP/2 protocol, configure Nginx to serve the client application with HTTP/2 and use self-signed certificates for HTTPS.

Prerequisites

  1. Node.js: Ensure you have Node.js installed.
  2. Nginx: Install Nginx on your system.
  3. OpenSSL: Install OpenSSL for generating self-signed certificates.

Step 1: Generate Self-Signed Certificates

First, generate self-signed certificates using OpenSSL. Run the following command in your terminal:

openssl req -x509 -new -newkey rsa:2048 -nodes -keyout localhost.key -out localhost.crt -days 365

Step 2: Create the HTTP/2 Server

Create a new Node.js project and point the server at the certificate and key files.

const http2 = require('node:http2');
const fs = require('node:fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('localhost.key'),
  cert: fs.readFileSync('localhost.crt'),
});

const todos = [{ title: "Complete the HTTP/2 To-Do application", isCompleted: false, id: 1 }];

server.on('error', (err) => console.error(err));

server.on('stream', (stream, headers) => {
  stream.respond({
    'content-type': 'application/json',
    ':status': 200,
  });

  let chunks = [];
  if (headers[':method'] === 'GET' && headers[':path'] === '/todos') {
    stream.end(JSON.stringify(todos));
  } else if (headers[':method'] === 'POST' && headers[':path'] === '/addtodo') {
    stream.end(JSON.stringify({ response: 'Hello World from POST method' }));
  } else if (headers[':method'] === 'PUT' && headers[':path'] === '/updateTodo') {
    stream.end(JSON.stringify({ response: 'Hello World from PUT method' }));
  } else if (headers[':method'] === 'DELETE' && headers[':path'] === '/deleteTodo') {
    stream.end(JSON.stringify({ response: 'Hello World from DELETE method' }));
  } else {
    stream.end(JSON.stringify({ response: 'Hello World' }));
  }

  // The request body is still readable after we respond; collect it here.
  stream.on('data', (chunk) => {
    chunks.push(chunk);
  });
  stream.on('end', () => {
    const body = Buffer.concat(chunks).toString();
    if (body.includes('title')) {
      todos.push(JSON.parse(body));
    } else if (body.includes('todoid')) {
      const todoIndex = todos.findIndex((i) => i.id === JSON.parse(body).todoid);
      if (todoIndex !== -1) {
        todos[todoIndex].isCompleted = !todos[todoIndex].isCompleted;
        console.log({ updatedTodos: todos });
      }
    } else if (body.includes('deletetodo')) {
      const todoIndex = todos.findIndex((i) => i.id === JSON.parse(body).deletetodo);
      if (todoIndex !== -1) {
        const deletedTodos = todos.splice(todoIndex, 1);
        console.log({ deletedTodos });
      }
    }
    chunks = [];
  });
});

server.listen(8443);

Step 3: Configure Nginx for HTTP/2

Create an Nginx configuration file to serve the client application and enable HTTP/2.

server {
    listen 443 ssl http2;
    server_name localhost;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_certificate /path/to/localhost.crt;
    ssl_certificate_key /path/to/localhost.key;

    root /path/to/client;
    index index.html;

    location / {
        try_files $uri /index.html;
    }
}

Replace /path/to/localhost.crt and /path/to/localhost.key with the actual paths to your certificate and key files, and /path/to/client with the path to your client files.

Step 4: Create the Client Application

Create an index.html file in your client directory with the following content:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>HTTP/2 To-Do App</title>
</head>
<body>
  <div id="todo-container_wrapper"></div>
  <input type="text" id="todo_title" placeholder="Add a new to-do">
  <button id="todo_btn">Add To-Do</button>
  <script src="client.js"></script>
</body>
</html>

Create a client.js file with the following content:

let todos = [];

(async () => {
  const response = await fetch("https://localhost:8443/todos");
  const body = await response.json();
  todos = body;
  addTodosToUI(todos);
})();

const updateCompletionStatus = async (id) => {
  try {
    const response = await fetch("https://localhost:8443/updateTodo", {
      method: "PUT",
      body: JSON.stringify({ todoid: id }),
    });
    console.log(response);
  } catch (error) {
    console.log(error);
  }
};

const deleteTodo = async (id) => {
  try {
    const response = await fetch("https://localhost:8443/deleteTodo", {
      method: "DELETE",
      body: JSON.stringify({ deletetodo: id }),
    });
    console.log(response);
  } catch (error) {
    console.log(error);
  }
};

function addTodosToUI(todos) {
  const container = document.querySelector("#todo-container_wrapper");
  if (todos.length) {
    let todosstr = ``;
    todos.forEach((item) => {
      todosstr += `
        <li class="list-group-item" id="todo_item_${item.id}">
          <div>
            <div class="d-flex align-items-center gap-4">
              <input
                class="form-check-input me-1"
                type="checkbox"
                ${item.isCompleted ? 'checked' : ''}
                id="todo_checkbox_${item.id}"
              />
              <label
                class="form-check-label d-flex align-items-center gap-4"
                for="todo_checkbox_${item.id}"
              >
                <span>Completion Status: </span>
                ${item.isCompleted
                  ? `<span class="text-success">Done</span>`
                  : `<span class="text-danger">Pending</span>`}
              </label>
              <button type="button" class="btn btn-sm btn-danger" id="todo_delete_btn_${item.id}">
                <span class="material-symbols-outlined">delete</span>
              </button>
            </div>
            <div class="d-flex w-100 justify-content-between">
              <h5 class="mb-1">${item.title}</h5>
            </div>
          </div>
        </li>
      `;
    });
    container.innerHTML = todosstr;
    todos.forEach((element) => {
      document.getElementById(`todo_checkbox_${element.id}`).addEventListener("change", async (_) => {
        await updateCompletionStatus(element.id);
        const todoIndex = todos.findIndex((i) => i.id === element.id);
        if (todoIndex !== -1) {
          todos[todoIndex].isCompleted = !todos[todoIndex].isCompleted;
        }
        addTodosToUI(todos);
      });
      document.getElementById(`todo_delete_btn_${element.id}`).addEventListener("click", async () => {
        await deleteTodo(element.id);
        const todoIndex = todos.findIndex((i) => i.id === element.id);
        if (todoIndex !== -1) {
          todos.splice(todoIndex, 1);
        }
        addTodosToUI(todos);
      });
    });
  }
}

const btn = document.querySelector("#todo_btn");
btn.addEventListener("click", async (_) => {
  const todoEl = document.getElementById("todo_title");
  const todotitle = todoEl.value;
  const todo = { title: todotitle, id: Date.now(), isCompleted: false };
  if (todotitle.trim() !== "") {
    try {
      const response = await fetch("https://localhost:8443/addtodo", {
        method: "POST",
        body: JSON.stringify(todo),
      });
      todoEl.value = "";
      console.log(response);
      todos.push(todo);
      addTodosToUI(todos);
    } catch (error) {
      console.log(error);
    }
  } else {
    alert("No title found");
  }
});

Step 5: Start the Server and Nginx

Start your Node.js server:

node server.js

Restart Nginx to apply the new configuration:

nginx -s reload

Open https://localhost:443 in the browser. The browser will still flag the page as insecure because it doesn’t trust self-signed certificates; if you host the client application on a service, you can use the certbot CLI to obtain trusted certificates for the hosted domain.

HTTP2 Todo application

gRPC

gRPC is a modern, open-source, high-performance remote procedure call (RPC) framework that can run in any environment. It is designed to make inter-service communication more efficient and scalable, using HTTP/2 for transport and Protocol Buffers (protobuf) as its interface description language, and it supports many programming languages.

Key Features of gRPC

  1. HTTP/2-Based Protocol: gRPC uses HTTP/2 as its underlying transport protocol, which provides several benefits, such as multiplexing, flow control, header compression, and more efficient binary framing. This allows for more efficient communication between services.
  2. One Protocol for Everything: gRPC is versatile and can handle multiple types of communication patterns within a single protocol. This includes:
  • Unary RPC: The client sends a single request to the server and receives a single response.
  • Server Streaming RPC: The client sends a request to the server and gets a stream to read a sequence of messages back. The client reads from the stream until there are no more messages.
  • Client Streaming RPC: The client writes a sequence of messages and sends them to the server, which processes the stream and sends back a single response.
  • Bidirectional Streaming RPC: Both client and server send a sequence of messages using a read-write stream. The two streams operate independently, allowing both sides to read and write in whatever order they like.

3. Binary Streaming Under the Hood: Unlike traditional REST APIs which typically use JSON over HTTP/1.1, gRPC uses binary streaming. This makes gRPC communication more efficient and faster because binary data is more compact and quicker to serialize/deserialize than text-based formats like JSON.

4. Schema-Based: gRPC uses Protocol Buffers (protobuf) as its interface definition language (IDL) for defining service methods and message types. This schema-based approach ensures that the services and messages are strongly typed and allows for automatic code generation across different programming languages.

Let’s see the Todo application in gRPC

Prerequisites

  1. Node.js and npm: Ensure you have Node.js and npm installed.
  2. Docker: Install Docker on your system.
  3. Protocol Buffers: Install the Protocol Buffers compiler (protoc) and the gRPC-Web code generator plugin (protoc-gen-grpc-web).

Step 1: Define the gRPC Service

Create a .proto file describing the schema of the Todo and the methods exposed to the client.

syntax = "proto3";

package todo;

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply) {}
  rpc AddTodo (addTodoParams) returns (Todo) {}
  rpc gettodos (empty) returns (TodoList) {}
  rpc deleteTodo (deleteTodoParams) returns (empty) {}
  rpc updateTodo (updateTodoParams) returns (Todo) {}
}

message empty {}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

message addTodoParams {
  string title = 1;
  string id = 2;
}

message updateTodoParams {
  string id = 1;
}

message deleteTodoParams {
  string id = 1;
}

message Todo {
  string title = 1;
  string id = 2;
  bool isCompleted = 3;
}

message TodoList {
  repeated Todo todo = 1;
}

Step 2: Generate gRPC Web Files

Use the protoc CLI to generate gRPC web files for your proto definition. Run the following command:

protoc -I=. todo.proto --js_out=import_style=commonjs:./client/src --grpc-web_out=import_style=commonjs,mode=grpcwebtext:./client/src

Step 3: Create the gRPC Server

Set up a basic gRPC server in Node.js. Create a new Node.js project and install the necessary dependencies:

npm init -y
npm install @grpc/grpc-js @grpc/proto-loader

Then create a server.js that loads the proto definition and implements each RPC:

const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const packageDefinition = protoLoader.loadSync('todo.proto', {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true,
});
const todoProto = grpc.loadPackageDefinition(packageDefinition).todo;

const todos = [{ title: "Complete gRPC To-Do application", isCompleted: false, id: "1" }];

const server = new grpc.Server();

server.addService(todoProto.Greeter.service, {
  SayHello: (call, callback) => {
    callback(null, { message: 'Hello ' + call.request.name });
  },
  AddTodo: (call, callback) => {
    const newTodo = call.request;
    todos.push(newTodo);
    callback(null, newTodo);
  },
  gettodos: (_, callback) => {
    callback(null, { todo: todos });
  },
  deleteTodo: (call, callback) => {
    const index = todos.findIndex(todo => todo.id === call.request.id);
    if (index !== -1) {
      todos.splice(index, 1);
    }
    callback(null, {});
  },
  updateTodo: (call, callback) => {
    const index = todos.findIndex(todo => todo.id === call.request.id);
    if (index !== -1) {
      todos[index].isCompleted = !todos[index].isCompleted;
      callback(null, todos[index]);
    } else {
      callback(null, {});
    }
  },
});

server.bindAsync('0.0.0.0:9090', grpc.ServerCredentials.createInsecure(), () => {
  console.log('Server running at http://0.0.0.0:9090');
  server.start();
});

Step 4: Create the React Frontend

Create a new React application:

npx create-react-app grpc-todo-client
cd grpc-todo-client

Install the gRPC web dependencies:

npm install grpc-web google-protobuf

Create a grpc_client.js file in the src directory, where we initialize the proto methods that make requests to the gRPC server:

import { GreeterClient } from './todo_grpc_web_pb';
import { addTodoParams, deleteTodoParams, empty, updateTodoParams } from './todo_pb';

const client = new GreeterClient('http://localhost:8080');

export const getTodos = (callback) => {
  const request = new empty();
  client.gettodos(request, {}, (err, response) => {
    if (err) {
      console.error(err);
      return;
    }
    callback(response.getTodoList());
  });
};

export const addTodo = (title, id, callback) => {
  const request = new addTodoParams();
  request.setTitle(title);
  request.setId(id);
  client.addTodo(request, {}, (err, response) => {
    if (err) {
      console.error(err);
      return;
    }
    callback(response);
  });
};

export const deleteTodo = (id, callback) => {
  const request = new deleteTodoParams();
  request.setId(id);
  client.deleteTodo(request, {}, (err, response) => {
    if (err) {
      console.error(err);
      return;
    }
    callback();
  });
};

export const updateTodo = (id, callback) => {
  const request = new updateTodoParams();
  request.setId(id);
  client.updateTodo(request, {}, (err, response) => {
    if (err) {
      console.error(err);
      return;
    }
    callback(response);
  });
};

Step 5: Configure Envoy Proxy

Create an envoy.yaml file to set up a proxy. Browsers cannot speak raw gRPC over HTTP/2, so the client sends gRPC-Web requests over HTTP/1.1 to the proxy; Envoy translates them into HTTP/2 calls to the gRPC server, forwarding the request path and parameters, then relays the server's HTTP/2 response back to the browser over HTTP/1.1.

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                stream_idle_timeout: 0s
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: todo_service
                            max_grpc_timeout: 0s
                            max_stream_duration:
                              grpc_timeout_header_max: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                http_filters:
                  - name: envoy.filters.http.grpc_web
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
                  - name: envoy.filters.http.cors
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: todo_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      load_assignment:
        cluster_name: todo_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      # from inside Docker, this resolves to the host machine
                      address: host.docker.internal
                      port_value: 9090

The todo_service cluster points Envoy at the gRPC server listening on port 9090; the grpc_web, cors, and router HTTP filters perform the gRPC-Web translation described above.

Step 6: Create the Dockerfile for Envoy

Create a Dockerfile for the Envoy server:

FROM envoyproxy/envoy-dev:latest
COPY envoy.yaml /etc/envoy/envoy.yaml
RUN chmod go+r /etc/envoy/envoy.yaml

Step 7: Build and Run the Envoy Docker Container

Build and run the Docker container:

docker build -t envoy-todo .
docker run -d -p 8080:8080 envoy-todo

Step 8: Run the gRPC Server and React Client

Start the gRPC server:

node server.js

Run the React client application:

npm start
gRPC todo application

Conclusion

Building a To-Do application using different protocols like HTTP/1.1, HTTP/2, gRPC, and UDP has been a rewarding experience. This journey has underscored the importance of understanding various protocols, enabling us to choose the right one for optimizing performance, reliability, and scalability.

By mastering these protocols, software engineers can:

  1. Optimize Performance: Choose the most suitable protocol to minimize latency.
  2. Enhance Reliability: Ensure dependable communication tailored to application needs.
  3. Improve Scalability: Use advanced protocols for efficient, multi-stream handling.

Explore the complete project on GitHub: Repo link

Thank you for reading! I hope this guide inspires you to delve deeper into networking protocols and their applications in software development. Happy coding!
