GSoC’21: Coding Phase Sixth Week
I hope you all are safe and doing well! I am writing this blog to share the sixth week of my coding phase in GSoC’21.
So, I am currently in the second phase of the coding period and have passed the first evaluation.
I got a note from the mentors to comment more on the code and use diagrams wherever possible, since my work mostly involves writing .yaml files for Kubernetes and the code should be easy for future contributors to understand. I have incorporated this feedback and started adding comments to the Dockerfiles and .yaml files.
This week I worked on building two Docker images for the application:
- Dashboard+Coordinator, which includes the UI and the coordinator
- Worker Node
First, I tried to create a single Docker image that contains all three components (Dashboard+Coordinator+Worker) with the ClamAV daemon running in the background. For this, I used Ubuntu as the base image.
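For illustration, a minimal sketch of what such an all-in-one Dockerfile could look like is below; the package list, file paths, and the start.sh entrypoint script are assumptions for the sake of the example, not the exact contents of the Scan8 repository.

```dockerfile
# Hypothetical all-in-one image: Dashboard + Coordinator + Worker with ClamAV.
# Paths, package choices, and start.sh are illustrative placeholders.
FROM ubuntu:20.04

# Install ClamAV and its daemon, then pull the latest virus signatures
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y clamav clamav-daemon && \
    freshclam && \
    rm -rf /var/lib/apt/lists/*

# Copy the application code (Dashboard, Coordinator, Worker) into the image
WORKDIR /app
COPY . /app

# start.sh is assumed to launch clamd in the background and then the application
COPY start.sh /app/start.sh
RUN chmod +x /app/start.sh

# The dashboard is assumed to listen on port 8080 inside the container
EXPOSE 8080
CMD ["/app/start.sh"]
```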
The main requirement is that the application needs Redis and MongoDB connections at startup, and running Redis and MongoDB inside the same container is not a good approach. So I created a Docker network and joined all three containers to it: Redis, MongoDB, and the Scan8 container. Once I started all three containers, voila! Everything was running perfectly. When I ran the Scan8 container with the -p flag and exposed it to the outside world on port 8080, I could see the dashboard in the browser. I then quickly submitted a folder to the application for scanning, and it worked fast and fine.
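The commands were roughly of the following shape (the network, container, and image names here are placeholders, not necessarily the exact ones I used):

```sh
# Create a user-defined bridge network so the containers can reach each other by name
docker network create scan8-net

# Start Redis and MongoDB on the shared network
docker run -d --name redis --network scan8-net redis:latest
docker run -d --name mongo --network scan8-net mongo:latest

# Start the Scan8 container and publish the dashboard to the host on port 8080
docker run -d --name scan8 --network scan8-net -p 8080:8080 scan8:latest
```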
Blockers faced
When I tried to decouple the application into the two parts mentioned above (Dashboard+Coordinator and Worker node), I ran into an error: the Coordinator requires the Worker node's scan job file, and there is no way for one Docker container to call another container's methods without an API. The Coordinator and Worker also cannot reside in the same container, because the application would then break at scale, and it would not follow our application architecture. Currently, we are working on decoupling the application.
Coming up
In the coming week, I will be working on creating the Docker images and running the application on K8s with HPA (Horizontal Pod Autoscaler) on my own system.
Stay tuned for further updates, and feel free to connect with me on LinkedIn if you have any doubts, feedback, or even otherwise!
Join the Gitter channel of Scan8 for more insights into the project.