Sprint 1: Setting Up

Degoldie Sonny
Published in slovneek
Feb 27, 2019

Greetings! My name is Degoldie Sonny, one of the Hackers in Slovneek. I mainly develop the back-end side of our project.

Our first sprint was quite an experience. I sort of expected it, but it's still worth mentioning how overwhelming it was. Setting up a project is hard because there is a lot to consider to make sure future development goes as smoothly as possible.

There are a few things I learned during this sprint, mostly around controlling the development flow with git, developing with Test Driven Development in mind, setting up CI/CD together with Docker, and working with an unfamiliar DBMS, Oracle Database.

Git Flow: Integrated Coding in a Team

Over the past two and a half years majoring in Computer Science, I have used git extensively to manage team projects. The only thing I can say about git is: I can't imagine how tedious team projects would be without it.

We started by cloning our project's online repository, created by our lecturers and hosted on the Faculty's private GitLab server:

git clone <repo-url>

At the time, there wasn’t a staging branch. So I created one and pushed it to the online repository:

git checkout -b staging
git push -u origin staging

I then began setting up the back-end side of our project, while simultaneously setting up its CI/CD and deployment. I placed the work in branches called ‘US-0-set-up-backend’ and ‘US-0-set-up-cicd’. (Note: US-0 isn’t actually a User Story; it is named that way to indicate that it precedes all User Stories.)

git checkout -b <branch-name>
git add <files>
git commit -m "<commit message>"
git push -u <remote-name> <branch-name>

After all this, I submitted a merge request to staging and waited for my peers to review the code.

Test Driven Development: Discipline in Nature

TDD isn’t a foreign concept to me, but it is still hard to maintain.

In theory, TDD ensures working code at all times, warning developers as soon as a function fails to produce its intended response. In practice, TDD gets hard once you incorporate aspects that aren’t obvious at first, such as how a function interacts with models. It requires near-perfect knowledge of what a function’s implementation will look like before you even write it. But when that is achievable, TDD helps you avoid most major bugs.

In this project, TDD is enforced by classifying each of the commits into one out of four possible types:

  • [CHORES], which are commits for code outside the implementation of user stories, such as DevOps-related work or project initialization.
  • [RED], which are commits that create tests for functions to be implemented later, along with stubs for those functions to refer to.
  • [GREEN], which are commits that implement the functions whose tests were created in the [RED] commits.
  • [REFACTOR], which are commits that integrate new code with the existing codebase, in the event of a mismatch or a conflict.
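To make the [RED] → [GREEN] cycle concrete, here is a minimal sketch in Python. The `make_slug` helper is a hypothetical example, not an actual function from our project:

```python
# [RED] commit: the test comes first, against a stub that only raises.
def make_slug(title):
    raise NotImplementedError  # stub for the [RED] commit

def test_make_slug():
    assert make_slug("Sprint 1: Setting Up") == "sprint-1-setting-up"

# Running test_make_slug() at this point would fail, which is the point
# of a [RED] commit: the test defines the intended behavior up front.

# [GREEN] commit: replace the stub with a real implementation.
import re

def make_slug(title):
    # Lowercase, then collapse runs of non-alphanumerics into hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

test_make_slug()  # the same test now passes
```

A later [REFACTOR] commit could then restructure `make_slug` freely, with the existing test guarding against regressions.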

Aside from commit discipline, code coverage is an important metric. Code coverage measures how much of the code is exercised by the tests. It is important that coverage stays high, which means our code is well tested and most of its paths are executed.

CI/CD and Docker: Autonomous Integration, Delivery and Deployment

This part took longer than necessary.

There were a few things I struggled with, mostly learning Docker and Docker Compose. Problems also arose because we were given only one repository, even though our tech stack splits the project into two servers (front-end and back-end), effectively requiring two different CI/CD setups in different folders.

Docker is an extremely convenient tool, yet I personally found it hard to understand at first. It enables execution and distribution of software in a simple, self-contained way, so deploying to a brand-new server is easier than ever. The software is packaged into an isolated environment called a container, which you can run, test, or deploy through Docker commands. But setting up a project with Docker can be mentally taxing, as it requires a lot of prior knowledge.
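As an illustration, a container image for a Django back end can be described with a Dockerfile along these lines. This is only a sketch; the module path and port are placeholders, not our actual project layout:

```dockerfile
# Minimal sketch of a Dockerfile for a Django back end.
FROM python:3-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the source code.
COPY . .

# Serve the app; "backend.wsgi" is a placeholder module path.
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "backend.wsgi"]
```

Building this image (`docker build`) and running it (`docker run`) gives the isolated container described above.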

Working with GitLab CI/CD has also been mentally taxing, despite my experience with it from previous courses. Introducing Docker deployment made our CI/CD a bit more complex than usual, on top of the fact that the two folders require two disjoint CI/CD configurations.
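One way to drive two sub-projects from a single repository is a single `.gitlab-ci.yml` with separate jobs per folder. The sketch below is an assumption about such a layout; the folder names, images, and commands are placeholders, not our exact configuration:

```yaml
# Sketch: one .gitlab-ci.yml covering both halves of the repository.
stages:
  - test

test-backend:
  stage: test
  image: python:3-slim
  script:
    - cd backend
    - pip install -r requirements.txt
    - python manage.py test

test-frontend:
  stage: test
  image: node:lts
  script:
    - cd frontend
    - npm ci
    - npm test
```

Each job `cd`s into its own folder, so the two pipelines stay disjoint while living in one file.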

Oracle Database: Powerful Yet Resource Heavy

Why did we attempt to use Oracle Database? We were told it was optional (albeit highly suggested if possible), but we tried anyway.

During my time learning Oracle Database, I felt like we weren’t ready for it. Oracle is certainly powerful, providing many features including added security and better optimization. However, a few things made Oracle Database harder to use than other popular DBMSs like MySQL or PostgreSQL:

  • Program size: a bit too big (at least 2GB required for the XE version, 4GB for the CE version).
  • Memory: since it uses Java, it requires a lot of memory to run.
  • Commitment: it requires a pricey investment to actually use Oracle Database in a deployment environment (which doesn’t fit well with our PPL course).

After discussing it thoroughly, we decided not to use Oracle Database during development and to use PostgreSQL instead. In the end, it shouldn’t really matter, since we mostly rely on Django to handle database queries.
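That is because switching Django between DBMSs is mostly a configuration change. A sketch of the relevant `DATABASES` fragment in `settings.py` (the database name, user, and host below are placeholders, not our real credentials):

```python
# Sketch of Django's DATABASES setting pointed at PostgreSQL.
# All credential values here are placeholders; a real setup would
# read them from environment variables.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "projectdb",    # placeholder database name
        "USER": "projectuser",  # placeholder user
        "PASSWORD": "secret",   # placeholder password
        "HOST": "localhost",
        "PORT": "5432",
    }
}
```

With the ORM handling queries, only this `ENGINE` line (and the matching driver package) ties the project to a particular DBMS.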

Like I said before, this first sprint was quite an experience, a valuable yet stressful one. There is some stuff I could have done more efficiently, and maybe I could have contributed a bit more to the project, but I’d say it’s enough for a first sprint.

That’s all I have to say for this sprint, stay tuned for the next one!
