Hack4Impact members win at PennApps XIII
“Learn about health, science, physics, and much more with this collaborative augmented reality app!”
Hack4Impact developers Abhinav Suri and Hunter Lightman, together with their team of Penn freshmen, placed in the top 10 at PennApps XIII, held January 22nd–24th, 2016.
To add context to the video of their presentation and their project description below, we sat down for a conversation with Abhi and Hunter.
What stimulated your interest in working with Cardboard/VR?
AS: We were throwing around a lot of ideas before the hackathon, from a messaging app to unite all messaging services (see: https://xkcd.com/927/) to building a 3D-printed hand. We definitely wanted to do something with hardware, but we didn’t know how to wire things together. Hunter mentioned he had six spare Google Cardboard devices, so we decided to go with that. Initially, we were planning on making an interactive chess game where both players could see the same board and interact with it in real time. But halfway through the hackathon we decided we needed to make something that could be used outside of gaming, where most of the other VR/AR apps out there live. I think that is what set us apart from our competition.
HL: All of Abhi’s reasons are true. Though my motivation for pursuing AR/VR in the first place (before we knew exactly what we were making) was that (1) I’d never used it before, (2) it sounded like it’d be challenging (which was important for me; I wanted to learn something in the process), and (3) VR is really cool.
Did your involvement in Hack4Impact influence your PennApps experience? How do you envision LeARn shaping your future involvement in Hack4Impact?
AS: Our involvement in H4I influenced us tremendously, especially during our ‘pivot’ period, when we decided to go with the more socially relevant uses of our hack. Additionally, H4I taught us how to work well in teams, delegating tasks and managing a git repo that didn’t turn into an absolute mess at the end of 36 hours. As far as how LeARn can shape our future in Hack4Impact, I definitely think it can serve as an example of how concepts that are super-technical and seemingly esoteric can always be repurposed to benefit normal people.
HL: Again, as for why LeARn specifically, I definitely agree with Abhi. It just sounded much cooler, more fun, and more interesting to make something with a potential for impact, and H4I has had an impact on that perspective, for sure.
LeARn was conceived around a simple notion: what if we could use Google Cardboards to create a cheap, dynamic, and useful educational tool? Any student who has ever attended a primary or secondary school will know how intensely boring a chemistry or anatomy lecture can be. As we dozed off in those very same classes in high school, our experience sowed the seeds for what would become LeARn. This app has the potential to disrupt the way students view lectures, literally.
What it does
LeARn allows users to interact with physics simulations, plot 3D graphs, view MRIs, and watch the molecular structure of a chemical compound float before their very eyes. The user can move and manipulate these projections via an online client, affecting factors like scale, rotation, and position. Most importantly, when, say, a teacher makes these changes, all other users viewing the same object will also see them take effect. This collaborative aspect is one of the key features of the application.
How we built it
LeARn is built using a panoply of technologies: Unity, C#, and Vuforia handle 3D rendering, physics, and stereoscopic projection; Node.js, Express.js, and Socket.io keep our backend running smoothly; and HTML, CSS, and Materialize comprise our front-end stack.
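The collaborative behavior described above, where a change made by one user (say, a teacher) is mirrored to every other viewer of the same object, boils down to a broadcast relay on the server. In the real app that role is played by Socket.io; the sketch below shows the same pattern with a plain in-memory hub so the flow is visible without any dependencies. The event shape (a `transform` with scale, rotation, and position) and the client names are illustrative assumptions, not LeARn's actual protocol.

```javascript
// Minimal sketch of the collaborative-sync relay: one client emits a
// transform update, and the hub rebroadcasts it to every other client
// viewing the same object (mirroring Socket.io's socket.broadcast.emit).
class SyncHub {
  constructor() {
    this.clients = new Map(); // clientId -> callback that receives updates
  }

  join(clientId, onUpdate) {
    this.clients.set(clientId, onUpdate);
  }

  // Relay a transform change to every client except the sender.
  publish(senderId, transform) {
    for (const [id, onUpdate] of this.clients) {
      if (id !== senderId) onUpdate(transform);
    }
  }
}

// Usage: a "teacher" rescales the model; both "students" see the change,
// while the teacher does not receive an echo of their own update.
const hub = new SyncHub();
const seen = [];
hub.join('teacher', () => {});
hub.join('studentA', (t) => seen.push(['A', t.scale]));
hub.join('studentB', (t) => seen.push(['B', t.scale]));
hub.publish('teacher', { scale: 2.0, rotation: [0, 90, 0], position: [0, 1, 0] });
console.log(seen); // [ [ 'A', 2 ], [ 'B', 2 ] ]
```

Excluding the sender from the broadcast is the detail that keeps the system stable: clients apply only remote changes, so the teacher's own manipulation is never double-applied.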
Challenges we ran into
LeARn was very difficult to build for a variety of reasons. Nobody on our team had worked in Unity before or even knew C#, so to call it a learning experience is an understatement. We used complex and highly detailed models, and optimizing them to render well on mobile presented a significant challenge in and of itself. We are truly pushing the boundaries of Google Cardboard technology, having had to cut apart our Cardboards just to expose the camera and allow the app to function.
Accomplishments that we’re proud of
Having gone into this hackathon with no experience in augmented reality or Unity, we are all immensely proud of how far we were able to take this app, and we’re thrilled with the product thus far. The collaborative aspect of the app is particularly interesting, and certainly no small technical feat.
What we learned
As previously mentioned, we went into this hackathon with little to no experience in many of the technologies and languages we were working in. We know a great deal more now than we did, and although we are very much still learning, we feel we’ve come a long way.
What’s next for LeARn
Although we’re satisfied with the progress of LeARn thus far, there are a few more features and fixes we’d like to implement in the future. For one, we began to implement voice control via Houndify. Unfortunately, we didn’t have time to bring that feature to fruition, but we would like to see it completed down the road. We’d also just like to devote more time to improving the user experience and visuals.