Four other UNT students and I participated in (and won first place at!) the 2016 24-hour AT&T Mobile App IoT hackathon, where we created “Audience”, a context-driven dynamic-volume speaker system, along with an app to control it!

My Role

The great thing about this project is that we were all able to contribute a significant amount and learn a lot about previously unexplored areas of app design and development. I was in charge of designing the user interface and interactions, both within the app itself and in how users would interact with the changing environment introduced by the product. I also got to help develop the front end a bit, which introduced me to Node.js!

We walked into this hackathon having no idea what we’d be working on.

24 hours isn’t a lot of time to think of a product idea, let alone create a functioning application! During the opening ceremonies, the organizers announced that they would be providing multiple APIs as well as some great hardware, including Arduinos, Harman Omni speakers, too many sensors to count, and AT&T’s M2X and Flow applications. With these in mind, we came up with “Audience”, a context-driven dynamic-volume speaker system. Audience works from settings the user defines, so the volume of the speakers changes based on the current context.

For example, say you’re hosting a mixer and want to encourage conversation. If the music gets too loud, guests will need to talk over the speakers to hear each other. Audience’s decibel sensors pick up on the increased audio level in the room and automatically decrease the music volume. The app itself is a Heroku-hosted Node.js application: AT&T M2X collects the room’s sound level from the decibel sensors, and AT&T Flow communicates that data to a Harman speaker, which automatically adjusts its volume as needed, if the user has enabled it.
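The core idea is simple enough to sketch in a few lines of Node.js. This is only an illustrative simplification, not the hackathon code itself (which routed sensor readings through M2X and Flow); the function name, thresholds, and volume scale here are all assumptions:

```javascript
// Simplified sketch of Audience's volume logic (hypothetical values;
// the real app streamed sensor data through AT&T M2X and Flow).

// Given the current speaker volume (0–100), the ambient room level in
// decibels, and the user's target conversation level, return an
// adjusted speaker volume.
function adjustVolume(currentVolume, ambientDb, targetDb, step = 5) {
  if (ambientDb > targetDb) {
    // Room is louder than the user wants: turn the music down.
    return Math.max(0, currentVolume - step);
  }
  if (ambientDb < targetDb - 5) {
    // Room is comfortably quiet: nudge the music back up.
    return Math.min(100, currentVolume + step);
  }
  return currentVolume; // Within the comfort band: leave it alone.
}

// Example: guests are talking over the music at 72 dB while the user
// asked for a ~60 dB conversation level, so the volume steps down.
console.log(adjustVolume(80, 72, 60)); // 75
```

In a loop fed by live sensor readings, repeated small steps like this converge on a volume that keeps the room near the user’s target level.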

The Process

Like I said before, 24 hours isn’t much time to create an entire product from scratch. That said, we certainly fit as much as we possibly could into those 24 hours. We started out tossing around ideas, everything from food-expiration notification apps to apps that make university parking easier. We finally landed on Audience and hit the ground running.

We started a design sprint (cheetahs sprint, right?), ideating how users would use and interact with the app. I started working out the onboarding for the app, but we quickly realized I didn’t have time to fully flesh out the onboarding process, so no high-fidelity mockups were made. I then moved on to designing the interactions between the user and the speakers. This was a really fun concept to work on because a number of factors had to be taken into account: how users would access speakers, what control functionality the app would need, and how we could implement a simple programming function within the app so anyone could set how the speakers react to the ongoing environment.
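To give a sense of what that user-facing “programming function” might look like, here is a hypothetical preset structure in Node.js. The field names, rule shape, and numbers are purely illustrative assumptions, not the app’s actual data model:

```javascript
// Hypothetical shape for a user-created preset (illustrative only).
const mixerPreset = {
  name: "Mixer Night",
  speaker: "Living Room Harman",
  rules: [
    // "When the room gets louder than 65 dB, drop the volume by 5."
    { when: { ambientAbove: 65 }, then: { volumeDelta: -5 } },
    // "When the room quiets down below 55 dB, raise the volume by 5."
    { when: { ambientBelow: 55 }, then: { volumeDelta: 5 } },
  ],
};

// Apply the first rule whose condition matches the current room level.
function applyPreset(preset, ambientDb, currentVolume) {
  for (const rule of preset.rules) {
    const { ambientAbove, ambientBelow } = rule.when;
    if (ambientAbove !== undefined && ambientDb > ambientAbove) {
      return currentVolume + rule.then.volumeDelta;
    }
    if (ambientBelow !== undefined && ambientDb < ambientBelow) {
      return currentVolume + rule.then.volumeDelta;
    }
  }
  return currentVolume; // No rule matched: keep the volume as-is.
}
```

A rule list like this is easy to render as a simple “when X, do Y” form in the UI, which is what makes the feature approachable for non-programmers.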


After creating the user journey, I started wireframing the application. We wanted the app to be so simple anyone could use it, so for presentation purposes we designed around two theoretical onboarding procedures: either the user already had Harman speakers in their home and bought a kit containing the sensors, or the Harman speakers were a version that theoretically contained sensors already and the app automatically synced with the Harman API, providing a list of speakers already connected to the Wi-Fi.


Once the wireframes were done, we tested them with other hackers. We were pleased to find that the large majority of users made it through the process of choosing a speaker, and while a large percentage went straight to a preset mode, the ones who decided to create their own custom preset had few problems doing so.

24 hours, tons of snacks and caffeine, and one half-hour nap later, presentation time came around with seconds to spare. And we killed it. Jerad talked about the technology and I demoed the app itself while the others answered questions, and we ended up tying with another team for first place.

Conclusion and Takeaways

We learned a lot within those 24 hours, one thing being that 24 hours isn’t much time to work with new technology, let alone learn how to do some of the things we did. When it comes down to it, I think we should’ve stuck with technology we knew so we could deliver a more fleshed-out app, rather than using certain companies’ technologies for brownie points. In the end, though, we came up with a really cool product that could be genuinely useful in the future, and it wouldn’t have been possible without my awesome teammates.

Written by

Multidisciplinary designer with a psychology background and a focus on UX living in Dallas, Texas.
