How and When a Microservices Architecture Can Streamline Your Next Hackathon Project

Abhishek Sharma
8 min read · Jul 11, 2020

TL;DR:

  • Microservices are a collection of individual services that are developed and deployed independently, and they usually use REST APIs to communicate with each other. You can use different tech-stacks to build these individual services.
  • By using microservices at a hackathon, you can team up with anyone who will be able to own their service from end to end, even if they want to use a different tech-stack. Hence, you form teams based on ideas and not on technologies.
  • Microservices save effort during deployments, as everyone on the team individually deploys their own service, so the team never needs to learn a common deployment step. Also, if a single service fails, only that service gets re-deployed, not the entire application. This does lead to more deployments overall, since every service is deployed individually.
  • A monolithic architecture makes more sense when the application does not have many moving parts, when the entire team wants to work on the same tech-stack, or when your application requires advanced dev-ops procedures and you don’t need to scale your services independently. In these cases, you will save time by having just one person do all of the deployments.

Hackathons are a great place for people to collaborate and build interesting projects. I have attended four traditional 36-hour hackathons, and after my last one at Stanford’s TreeHacks, I realized how a microservices architecture can truly streamline a hackathon project and even help you team up with people without worrying about which technologies you all know in common. In this article, I will share how a microservices architecture helped my team and me rapidly build a system in which 7 different services worked together to produce the desired result.

Adrian Cockcroft, former Director of Web Engineering at Netflix, defines a microservices architecture as a “service‑oriented architecture composed of loosely coupled elements that have bounded contexts” (Source Link). In this architecture, every service is an individual, stand-alone application with its own code base. Services are independently built, tested, and deployed, and because they have bounded contexts, they never need to know how the other applications work internally; they only need to know each other’s external API contracts in order to interact.
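
For instance, the only thing a consumer needs to know about a hypothetical “answer” service is its contract: the endpoint and the shapes of its request and response, never its internals. A minimal sketch (the contract fields here are made up for illustration):

```javascript
// A hypothetical external API contract for an "answer" service.
// A consumer depends only on this shape, never on the service's internals.
const answerContract = {
  method: "POST",
  path: "/api/answer",
  request: { question: "string" },
  response: { answer: "string", confidenceScore: "number" },
};

// A consumer-side helper that checks a response against the contract,
// so the consumer can fail fast if the other service changes its shape.
function matchesContract(response, contract) {
  return Object.entries(contract.response).every(
    ([field, type]) => typeof response[field] === type
  );
}
```

As long as both sides honor this contract, either team can rewrite their service internals, or even swap tech-stacks, without breaking the other.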

Now, at Stanford’s TreeHacks, we were a team of three software engineers and one machine learning engineer, and we used microservices to build a system that allowed users to talk to a voice assistant through phone calls. Users could ask general health-related questions to this assistant and get answers in return. Any questions that the assistant was not able to answer would get forwarded to a Web-UI from where authorized users could provide answers to them. Finally, these answers would get fed back to the assistant’s knowledge base for future use.

For this entire system to function, we had set up 7 individual services that were interacting with each other using APIs, and each one of us in the team owned one or more of these services. I discuss them in detail here:

Twilio’s Phone Service: A phone number, provided by Twilio, acted as the entry point to our system, and users would call this number to ask their question. Twilio would forward the question’s voice recording to a Node JS server running on a Virtual Machine, which would interact with other services to get an answer and return that answer to Twilio as text. Twilio would then convert this text answer to voice before playing it back to the user.
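
On Twilio’s side, replying to a call boils down to returning TwiML, a small XML vocabulary, from your webhook. A minimal sketch of such a response, hand-rolled here instead of using Twilio’s Node helper library, with the answer text as a placeholder:

```javascript
// Build a minimal TwiML response that speaks an answer back to the caller.
// Twilio reads the <Say> text aloud to the user on the phone call.
function buildTwimlResponse(answerText) {
  // Escape XML special characters so arbitrary answer text stays well-formed.
  const escaped = answerText
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
  return `<?xml version="1.0" encoding="UTF-8"?><Response><Say>${escaped}</Say></Response>`;
}
```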

Microsoft Azure’s VM: An ngrok tunnel through this VM exposed the webhook that Twilio’s phone service needed, so users could make phone calls to it, and the Node JS server running on the same VM received each user’s question as a voice recording from Twilio. The Node JS server first called Google Cloud Platform’s Voice to Text Converter to retrieve a text transcript of the question, and then sent this transcript to Microsoft’s QnA Maker. The QnA Maker matched the question against its knowledge base and returned an answer along with a confidence score attached to it. If the confidence score was higher than a threshold value, the Node JS server forwarded the answer back to Twilio, and Twilio relayed it to the user. If the score was lower than the threshold, the Node JS server instead returned an error message to Twilio, which told the user to check back again later, and concurrently the server made an API call to store the unanswered question in a MongoDB Atlas database that was accessible via a Web-UI.
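
The routing decision that the VM’s Node JS server makes can be captured in one small function. A sketch, where the 0.5 threshold is illustrative rather than the value we actually used:

```javascript
// Decide how to route a QnA Maker result based on its confidence score.
// Returns what to say to the caller and whether to queue the question
// for the domain experts in the Web-UI.
function routeAnswer(qnaResult, threshold = 0.5) {
  if (qnaResult.score >= threshold) {
    return { reply: qnaResult.answer, storeForExperts: false };
  }
  return {
    reply: "Sorry, I don't know that yet. Please check back again later.",
    // The server also POSTs the question to the MongoDB Atlas database.
    storeForExperts: true,
  };
}
```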

Web UI: A Web-UI served as an admin panel where authorized domain experts could view the unanswered questions stored in the MongoDB Atlas database, provide answers to them, and peer-review each other’s answers by upvoting them. Once an answer had enough upvotes, the Web-UI would make an API call to post that question-and-answer pair back to the QnA Maker, which would add it to its knowledge base, so any user asking the same question again would now receive an actual answer. This Web-UI was developed using TypeScript, React JS, and Material-UI, and it was deployed on GitHub Pages.
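
The promotion rule on the Web-UI’s side is equally small: once an answer crosses an upvote threshold, its question-and-answer pair is ready to be posted back to the QnA Maker. A sketch, with a placeholder threshold of 3 upvotes:

```javascript
// Return the question-and-answer pairs that have earned enough upvotes
// to be promoted into the QnA Maker's knowledge base.
function pairsReadyForPromotion(entries, minUpvotes = 3) {
  return entries
    .filter((e) => e.answer && e.upvotes >= minUpvotes)
    .map((e) => ({ question: e.question, answer: e.answer }));
}
```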

MongoDB Atlas: MongoDB Atlas is a cloud-hosted database, and it contained all of the questions that the QnA Maker could not answer, the answers provided to those questions, and the upvotes those answers received through the Web-UI.

Node & Express API: This Node JS server was separate from the one running on the VM. It connected to the MongoDB database and exposed three REST endpoints for interacting with it: one for storing unanswered questions, one for updating those questions with answers, and one for fetching the entire list of questions and answers. This server was hosted on Heroku.
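
The three endpoints map naturally onto three route handlers. A sketch with the handlers written as plain functions over an in-memory array standing in for the MongoDB collection; the route paths and field names are illustrative, not our actual API:

```javascript
// Handlers for the three hypothetical REST endpoints, written as plain
// functions over a shared store. In Express, each would be wired to a route,
// e.g. app.post("/questions", (req, res) => res.json(storeQuestion(db, req.body.question)));

// POST /questions: store a question that the QnA Maker could not answer.
function storeQuestion(store, question) {
  const entry = { id: store.length + 1, question, answer: null, upvotes: 0 };
  store.push(entry);
  return entry;
}

// PUT /questions/:id: update a stored question with an expert's answer.
function answerQuestion(store, id, answer) {
  const entry = store.find((q) => q.id === id);
  if (entry) entry.answer = answer;
  return entry ?? null;
}

// GET /questions: the full list of questions and answers for the Web-UI.
function listQuestions(store) {
  return store.slice();
}
```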

Microsoft’s QnA Maker: This was our knowledge base, containing pairs of questions and answers generated automatically by feeding the QnA Maker thousands of articles. It received questions as requests from the Node JS server on the VM and returned answers along with a confidence score. It also exposed an API endpoint through which other applications could add new question-and-answer pairs to its knowledge base.

Google Cloud Platform: This service was responsible for converting the user’s question from voice to text.

This is how our entire system was built, and we got it running end to end without any noticeable latency in the workflow. Here is a link to a video showing our project in action (Video Link). The only feature we were not able to implement was posting question-and-answer pairs back to the knowledge base, because we ran into authentication issues when calling the QnA Maker’s API endpoint.

We were really glad to have used the microservices approach for building this complex system. There was a stronger sense of ownership over our individual services, and everything was developed in parallel: since we each knew how our own service would behave and what data it would provide to the others once deployed, we could develop against dummy data, and everything fit together perfectly in the end. We also saved a lot of valuable time by being free to use our preferred technologies, coding styles, and deployment procedures. Most importantly, whenever our system’s workflow broke after completion, we could quickly figure out which service the fault was coming from, and the engineer who owned that service would fix and re-deploy it without needing to re-deploy any of the others. This made sure that everyone else could focus on their own tasks without being blocked.

We did face one major challenge with the microservices approach: the deployment of our Web-UI. We were barely an hour from our submission, and the person in charge of the Web-UI was running into deployment issues. Only after a couple of tries did we finally get the web page up and running on GitHub Pages. In that moment, coupling the Web-UI with our Node and Express API and deploying them together onto Heroku seemed like it would have served us better, but I later realized that we would never have faced this problem if we had simply tested all of our deployments ahead of time.

There are still merits to building monolithic applications at hackathons. A monolithic architecture is a better fit when all of the team members prefer working on a common tech stack and the application does not have many moving parts that need to be broken down into individual services. It also makes more sense when your project involves complex dev-ops procedures and your team does not need to scale any of its internal services independently. In that situation, your team saves time by having a single deployment pipeline implemented by one person.

I hope this article has given you a better understanding of how and when a microservices architecture can streamline your next hackathon project. I’d love to know your thoughts as well, so please share your comments in the section below, and feel free to ask me any questions. Thank you so much!

If you are interested in learning more about our project, then please visit these links

Also, here are some resources that I found really helpful while learning about microservices
