The Intra-MIC Hack: A Word from the Teams (Part 1)

SRM MIC · Published in SRM Machine Intelligence Community · 6 min read · Sep 30, 2020
Cute bot is cute :D

We at SRMMIC conducted an intra-MIC hackathon from 5th September to 18th September as an effort to inculcate a sense of belonging among the new members. The motivation was to encourage everyone, old and new, to participate, interact with people, and come up with a new hack in two weeks! There were no specific themes; everything machine intelligence and beyond was fair game. A total of 12 teams participated, and the four teams featured in this blog are Hydrogen, Helium, Beryllium and Boron.

Team Hydrogen

Team members: Rusali Saha, Saisha Shetty, Pranav B Kashyap

What was your hack about?

To beat boredom during pandemic times, when we are all locked in at home, movies are our best friends, and this has led to people watching more shows and films online. With so much to watch, we decided to create a filter that would help people figure out what they like. We worked on a project named “Machines and Sentiments”, in which we deployed a model that can figure out what kind of feedback a user has given for a movie. We also implemented a movie recommender system along with a chatbot named ROBO.

What did you learn along the way?

The basics of NLP and the workings of a simple chatbot were fundamental aspects of the project, so we knew we had to start there. We tackled the necessary NLP concepts such as stemming, tokenization, part-of-speech tagging and parsing. There was also the deployment side, so we learnt how to deploy an ML model on the web using Streamlit. But of course, the biggest element was teamwork, collaboration, and rapid growth in a plethora of minor technical skills.
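
For a sense of what those preprocessing steps look like in practice, here is a minimal sketch using NLTK; the library choice and the sample review are ours for illustration, and the team’s actual pipeline may differ.

```python
# A rough sketch of tokenization, POS tagging and stemming with NLTK.
# The review text is made up for illustration.
import nltk
from nltk.stem import PorterStemmer

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

review = "The movie was surprisingly good, I loved the soundtrack!"

tokens = nltk.word_tokenize(review)                 # tokenization
tags = nltk.pos_tag(tokens)                         # part-of-speech tagging
stems = [PorterStemmer().stem(t) for t in tokens]   # stemming

print(tags)
print(stems)
```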

What difficulties did you face?

Deployment was certainly the biggest thorn in our side, since we had no prior experience in web dev. Consequently, working on the front-end interface and making the experience pleasing for a prospective user was a challenge as well.

Any special comments?

Overall, this hackathon was a great experience. We gained a lot of knowledge in a short period of time and had a blast working with each other. It was truly a great initiative. Looking forward to more such events organised by SRMMIC!

Well, that’s got my movie weekend sorted!

Check out the hack here!

Team Helium

Team members: Aryan Kargwal, Abhishek Saxena, Anoushka Halder

What was your hack about?

We wanted to do something for non-designers who want to make their projects look aesthetic. Our project ‘Consillio’ was centered around CPPNs (Compositional Pattern Producing Networks), a type of neural network that can generate abstract patterns and unique color gradients for wallpapers or banners. Our plan was to add features like neural style transfer to make it an ideal place for stochastic designs, while keeping it accessible and easy to use. We built a clean user interface around it which made creating aesthetic designs as simple as a few clicks, with results usable anywhere from t-shirts to backgrounds for Instagram posts.
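
For the curious, here is a toy sketch of the CPPN idea: a tiny, randomly initialised network maps each pixel’s coordinates to a color, which is what produces those smooth abstract gradients. The network sizes and activations below are illustrative, not Consillio’s actual settings.

```python
# Toy CPPN: map pixel coordinates (x, y, r) through a small random MLP to RGB.
import numpy as np
from PIL import Image

W, H, HIDDEN = 512, 512, 32
rng = np.random.default_rng(42)

# Coordinate grid normalised to [-1, 1], plus distance from the centre.
ys, xs = np.mgrid[0:H, 0:W]
x = xs / W * 2 - 1
y = ys / H * 2 - 1
r = np.sqrt(x ** 2 + y ** 2)
inp = np.stack([x, y, r], axis=-1).reshape(-1, 3)

# Two random fully connected layers with tanh, then a sigmoid RGB head.
h = np.tanh(inp @ rng.normal(0, 1.5, (3, HIDDEN)))
h = np.tanh(h @ rng.normal(0, 1.5, (HIDDEN, HIDDEN)))
rgb = 1 / (1 + np.exp(-(h @ rng.normal(0, 1.5, (HIDDEN, 3)))))

Image.fromarray((rgb.reshape(H, W, 3) * 255).astype(np.uint8)).save("pattern.png")
```

Changing the random seed gives a completely different pattern, which is what makes CPPNs handy for generating endless unique wallpapers.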

What did you learn along the way?

CPPNs were completely new to all of us, and we picked them up during the course of the hack. We also spent a major amount of time reading about Flask, handling POST and GET requests, and basically stitching all the web components and Python together in one place, since Flask on its own isn’t made to deliver interactive content.
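
As a rough illustration of that Flask side, a minimal route handling GET and POST might look like the sketch below; the route name and form field are placeholders, not Consillio’s real API.

```python
# Minimal Flask sketch: a GET shows a form, a POST triggers generation.
from flask import Flask, request, send_file

app = Flask(__name__)

@app.route("/generate", methods=["GET", "POST"])
def generate():
    if request.method == "POST":
        seed = int(request.form.get("seed", 0))   # form field from the UI
        # ... run the CPPN with this seed and write pattern.png ...
        return send_file("pattern.png", mimetype="image/png")
    return "<form method='post'><input name='seed'><button>Go</button></form>"

if __name__ == "__main__":
    app.run(debug=True)
```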

What difficulties did you face?

We had a hard time integrating our pre-made HTML pages with Flask and had to start entirely from scratch as they weren’t working. Time management was another challenge, with classes taking up about half the day. Our network also required a decent amount of computational power, which added to the difficulties.

Any special comments?

It was a great overall learning experience. Deployment isn’t taught everywhere, and our hack focused on that aspect. However, if we had had a few extra days, or had managed our time better, our hack could have come out much better. Notwithstanding, it was a great hackathon, and it was great to participate with such wonderful people!

The hunt for the perfect generic wallpaper ends here

Check out the hack here!

Team Beryllium

Team members: Irfhana Zakir Hussain, Harshit Aggarwal, Kunal Mundada

What was your hack about?

Both inside and outside of the community, we read a lot of papers. Although we can usually understand a research paper after a couple of close reads, there are a lot of visual learners and students with learning disabilities who would prefer a summarized mind map of a research paper to more easily grasp its contents. Plus, it can help anyone who wants a quick overview of a paper before diving deep into it. We got our inspiration while going through a dozen research papers in search of a good idea for the hack; that’s when we realized how boring the task was. And so we did the “Visual Research Paper”!

What did you learn along the way?

We primarily learnt how to work with APIs using Flask, XML processing and NLP libraries. Of course, Streamlit was fundamental to building the front end. We were quick on our feet; when one approach didn’t work, we would try something else that could accomplish the task.
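
As a small illustration of the XML-processing step, pulling fields out of an XML document with the standard library looks roughly like this; the tags here are invented, and the actual metadata format the team parsed may differ.

```python
# Tiny sketch: extract a title and abstract from an XML snippet.
import xml.etree.ElementTree as ET

xml_doc = """
<paper>
  <title>An Example Paper</title>
  <abstract>This is where the abstract text would live.</abstract>
</paper>
"""

root = ET.fromstring(xml_doc)
print(root.findtext("title"))
print(root.findtext("abstract"))
```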

What difficulties did you face?

Getting the data in a structured format from a PDF was astoundingly tough. Another difficult area was mapping the data into an interactive mind map. The BERT summarizer currently in use hogs extreme amounts of RAM, which would definitely affect the user experience. We scrapped so many approaches, but that was okay! Everything we tried can be reused in another project because of all the knowledge we gained.
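
For reference, extractive summarisation with a BERT-based summarizer looks roughly like the sketch below. We assume the bert-extractive-summarizer package here; loading the underlying BERT model is exactly the RAM-hungry step mentioned above.

```python
# Sketch of extractive summarisation (pip install bert-extractive-summarizer).
# The input file name is a placeholder.
from summarizer import Summarizer

paper_text = open("paper_section.txt").read()

model = Summarizer()                    # loads a pretrained BERT under the hood
summary = model(paper_text, ratio=0.2)  # keep roughly 20% of the sentences
print(summary)
```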

Any special comments?

The great thing about a research project started in a hack is that you have a baseline from which you can improve your project. We’re really excited to take this one forward!

If the Mindmap looks this scary, imagine what the actual paper is like ( ⚆ _ ⚆ )

Check out the hack here!

Team Boron

Team members: Harsh Sharma, Prathamesh Deshpande, Sashrika Surya

What was your hack about?

An intrinsic part of maintaining a healthy lifestyle is eating right. Trying to keep track of calories every time before you eat can become a cumbersome task. So we decided to build something to help, as well as encourage people to keep track of their food and ensure they eat healthy every day. We came up with our project NutriTrack, a model that estimates the calories in your food from just an image, at the click of a button. We wanted to give people a complete solution and insight into how many calories in total they are consuming for a particular food, hoping that it would help our users plan their diet better.

What did you learn along the way?

First of all, we learnt how to manage a full GitHub repository while working in a team, dealing with several branches off the master branch, issues arising from merge conflicts, and so on. We learnt annotating images using the LabelImg software, implementing Mask R-CNN, YOLOv4 and the newer YOLACT architecture for better segmentation, manipulating image pixels, calculating depth in an image, deploying our model on Streamlit and, most importantly, team coordination.
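
To give a flavour of the detection-to-calories idea, here is a rough sketch that loads a YOLOv4 detector with OpenCV’s DNN module and maps detections to a calorie lookup. The weight files, class list and calorie table are placeholders, not NutriTrack’s actual model or data.

```python
# Rough sketch: YOLOv4 food detection via OpenCV DNN + a calorie lookup.
# No NMS is applied, so overlapping boxes may be double-counted in this sketch.
import cv2
import numpy as np

CLASSES = ["apple", "banana", "pizza"]                  # hypothetical class list
CALORIES = {"apple": 95, "banana": 105, "pizza": 285}   # kcal per typical serving

net = cv2.dnn.readNetFromDarknet("food.cfg", "food.weights")  # placeholder files
img = cv2.imread("plate.jpg")

blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

total = 0
for out in outputs:
    for det in out:            # det = [cx, cy, w, h, objectness, class scores...]
        scores = det[5:]
        cls = int(np.argmax(scores))
        if scores[cls] > 0.5:
            label = CLASSES[cls]
            total += CALORIES[label]
            print(f"found {label}: ~{CALORIES[label]} kcal")

print(f"estimated total: ~{total} kcal")
```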

What difficulties did you face?

There were quite a number of difficulties in our way, a major one being losing a lot of files from our master branch during a merge from another branch. We had to make a custom dataset for the model (YOLOv4). Later, deploying the whole model as a web app on Streamlit was quite challenging, to say nothing of choosing the perfect model architecture for the purpose, training it and getting it running.

Any special comments?

This hack helped us solidify our learnings as well as work with a team whose members were all at the same level. All our opinions were heard and discussed, and we had a lot of fun overall. The mini hack was a smooth success, and we should conduct more intra-community events to strengthen team bonding.

Long overdue dieting plan, here I come!

Check out the hack here!

Stay tuned for the posts from the other teams!
