[CNet-03] “Hello World” web server with Kubernetes as orchestrator

Habibie Faried
Nov 4 · 4 min read

After our successful attempt at concurrent socket connections in the previous article (https://medium.com/@habibiefaried/cnet-02-forking-for-concurrent-access-db5962f14e2b), it’s time to learn how to speak HTTP.

We will not cover request processing in this article. As the title suggests, I’d like to keep it simple: the server returns “Hello World” no matter what the client requests.

Basic 200 HTTP Response

Referring to https://www.tutorialspoint.com/http/http_responses.htm, I would like my web server to return this:

HTTP/1.1 200 OK
Server: HelloWorldServer
Content-Length: 51
Content-Type: text/html
Connection: Closed

<html>
<body>
<h1>Hello, World!</h1>
</body>
</html>

Here are some notes that you need to consider:

  • The Content-Length header must match the length of the body in bytes (counting from the <html> tag onward)
  • The headers (the first five lines above) and the body must be separated by a blank line, i.e. two consecutive newlines (\n\n). Strictly speaking, HTTP/1.1 wants \r\n line endings (so \r\n\r\n for the separator), but in practice most clients tolerate bare \n.

Let’s look at the code:

strcpy(buff,"HTTP/1.1 200 OK\nServer: HelloWorldServer\nContent-Length: 51\nContent-Type: text/html\nConnection: Closed\n\n<html>\n<body><h1>Hello, World!</h1>\n</body></html>\n");

Note how the whole response — headers, blank line, and body — is packed into a single string. Wait, no em-dashes: note how the whole response (headers, blank line, and body) is packed into a single string.

Full code: https://github.com/habibiefaried/sysprousingc/blob/master/webhello.c

Deployment

This is what my Dockerfile looks like:

FROM ubuntu
EXPOSE 8181

RUN useradd -ms /bin/bash srv
ADD . /home/srv/code

WORKDIR /home/srv/code
RUN apt-get update -y && apt-get install -y build-essential && gcc webhello.c -o webhello && chmod +x webhello && chown -R srv:srv .

USER srv
CMD ["/home/srv/code/webhello"]

As usual, I put that on Jenkins and build the image. Then I create the Kubernetes deployment; here is the YAML file:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: sysprohelloworld
spec:
  ports:
  - port: 8181
    protocol: TCP
    targetPort: 8181
  selector:
    run: sysprohelloworld
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: sysprohelloworld
  name: sysprohelloworld
spec:
  replicas: 5
  selector:
    matchLabels:
      run: sysprohelloworld
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: sysprohelloworld
    spec:
      containers:
      - image: <yourregistry>/sysprohelloworld:latest
        name: sysprohelloworld
        ports:
        - containerPort: 8181
        resources:
          limits:
            memory: 25Mi
            cpu: 25m
          requests:
            memory: 10Mi
            cpu: 10m
status: {}

And yes, I capped each pod at just 25 MiB of RAM and 25 millicores of CPU (2.5% of one core), to show how lightweight the server is.

Anyway, if you don’t understand any of this Docker and k8s stuff, that’s OK. You can just compile the code on your local machine and try it yourself.

Testing

Let’s test with basic cURL against the instance I deployed on my server:

# curl -vvv http://habibiefaried.com:8181
* Rebuilt URL to: http://habibiefaried.com:8181/
* Trying 164.68.113.243...
* TCP_NODELAY set
* Connected to habibiefaried.com (164.68.113.243) port 8181 (#0)
> GET / HTTP/1.1
> Host: habibiefaried.com:8181
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: HelloWorldServer
< Content-Length: 51
< Content-Type: text/html
< Connection: Closed
<
<html>
<body><h1>Hello, World!</h1>
</body></html>

Then open it in a browser:

Website opened

Resilience Testing

I’m using flood.io, configured to hit the website with 22,000 virtual users, and the k8dash dashboard to monitor the pods’ resource usage. This is the beginning:

After a minute

As flood.io continues to flood the servers, there comes a point when all of the pods are overloaded:

As you can see, some containers actually restarted. That happens when a pod exceeds the memory limit it was given: k8s is a smart orchestrator, and it kills and respawns a container whenever it goes over its memory limit (going over the CPU limit, by contrast, only gets it throttled, not killed).

Conclusion

The test is over; let’s check the results:

The end of the test
How many restarts happened during the tests

Let’s check the user stats at peak time:

Peak time

As you can see: 1,000 concurrent users at a rate of 4,200 requests per minute (70 requests per second, or roughly 14 per pod). Yet the response time stays low, at 380 ms.

That’s a good result for 5 pods capped at 25 MiB of RAM and 25 millicores of CPU each, isn’t it? Can we consider this project a success?

Of course we can! A success!

And yes, this is the end of the series on network programming in C. I hope you enjoyed all of the articles I’ve written for it.

Feel free to comment, share, clap, and send me your suggestions.

See you!

Further study

You can check this code (https://github.com/habibiefaried/habibiefaried.com/) to see how I built my own website (https://habibiefaried.com) in C. There is real request processing in there. Check it out!
