AT&T Entertainment Hackathon (aka My First Hackathon)

Therese Arcangel
Published in Human Friendly
Mar 22, 2018 · 6 min read
Hackathon team, clockwise: Therese Arcangel, Christine Tayaba, Tanvi Juneja, Joey Lin, Erick Olivares, and Brandon Carter (missing: Anita Chen)

On January 19, 2018, I took part in my first hackathon, the AT&T Entertainment Hackathon. These are my insights into my team’s prototype as well as the hackathon experience. I acted as lead UI/UX designer and product owner/manager.

Prompt

Come up with a new and innovative way for users to experience, consume, or create entertainment/media, all in less than 24 hours. Award categories included a Top 3 for Overall Idea, Best Second Screen App, and Best Marketing Concept.

The Inspiration

A company called KlugTV had created a second screen app that made it easy for users to learn about characters on TV and purchase products seen on any given show in real time, as they watch that program. It was an innovative approach that enhanced e-commerce for stores and products affiliated with a show or episode, as well as how users discovered and purchased products.

KlugTV’s second screen app showing the user getting real time information about what they’re watching on TV.

This concept primarily benefits businesses, as it helps them market to consumers in a more creative, seamless, and effortless way. Imagine if this same technology could be used to benefit viewers consuming political coverage and news from various outlets.

How can second screen apps change the current news landscape, and how will they influence the current climate of “fake news”?

This is where UREKA comes in.

Use UREKA to identify audio from your TV and translate it into quick facts that you can further explore.

UREKA

UREKA is a second screen app intended to bridge people of all levels of understanding and education in real time through speech recognition.

With UREKA, users can tune into political debates, broadcast news, or even an infomercial for a newly marketed medicine, and use their second screen to identify the audio and pull keywords, information, and recent articles pertaining to the content they are watching.

The Problem

“Fake news” has always been around, but it has become an even bigger issue since the previous election.

How can people stay on top of legitimate news, and how can they inform themselves about both political figures and topics in real time as they watch or listen to debates, news segments, and other media coverage?

Quick brainstorming and ideation

The User

Our user is someone described as “the uninformed who wishes to be informed.” Like people who make New Year’s resolutions, this person would like to follow through, but the resolution (in this case, the choice to become more informed about political events and the world around them) simply falls off their priority list. Life happens, and they forget. And as hard as they try, it is difficult for this person to fit a new routine into their already busy schedule.

For a more in-depth look, please see below:

UREKA Persona

Our Focus

To tackle this, we determined key needs:

  • The ability to search quickly
  • The ability to integrate audio-recognition technology
  • A determination of which APIs we could pull from

Current Search Methods

To bring true value, we had to pinpoint the key differences and benefits of learning through UREKA versus how people currently search topics.

One of the biggest pain points we hypothesized was a person’s ability to accurately remember what they want to search for, as well as remembering to conduct the search at all. Things that can affect a person’s recall accuracy include:

  • The clarity of the audio they are listening to
  • The time it takes to open a phone browser and enter a search query
  • The intention to search later rather than now, which is then forgotten

These methods leave large room for error and blind spots.

Affordances

Through this app, the user can create quality searches and remove the room for error inherent in current search methods, which rely on memory, the clarity of the audio, and the person’s ability to type quickly should they want to look up something they’re watching.

  • Information automatically retrieved based on audio (think something along the lines of Shazam)
  • The ability to identify speakers and pull up bios to help assess credibility and background, e.g. politicians, news anchors, etc.
  • Retrieval of recent articles pertaining to the topic
  • A visual transcript of the audio with highlighted keywords and names that the user can click to conduct a deep-dive discovery (see the sketch after this list)
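To make that last affordance concrete, here is a minimal sketch of how a transcript could be split into plain and tappable keyword segments once keywords have been extracted. The function name and data shapes are hypothetical illustrations, not our hackathon code.

```javascript
// Hypothetical sketch: mark extracted keywords inside a transcript so the UI
// can render them as tappable links. The keyword list would come from the
// keyword-analytics step; names and shapes here are illustrative only.
function highlightKeywords(transcript, keywords) {
  // Build one regex that matches any keyword, longest first to avoid partial hits.
  const escaped = keywords
    .slice()
    .sort((a, b) => b.length - a.length)
    .map((k) => k.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'));
  const pattern = new RegExp(`(${escaped.join('|')})`, 'gi');

  // Split the transcript into alternating plain/keyword segments.
  return transcript
    .split(pattern)
    .filter(Boolean)
    .map((segment) => ({
      text: segment,
      isKeyword: keywords.some((k) => k.toLowerCase() === segment.toLowerCase()),
    }));
}

// Example: the front end could render isKeyword segments as tappable chips.
const segments = highlightKeywords(
  'Tonight the candidates discussed DACA and border security.',
  ['DACA', 'border security']
);
console.log(segments);
```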

The Scenario

Our initial goal was to focus on a scenario where our user is just tuning into a debate. It is the third debate in a series of four presidential debates; Trump is talking about DACA, and our user wants to learn more about it. In this scenario, the points of interaction we witness are:

  • The user initiating speech recognition
  • The app identifying the speaker and outputting text from the audio
  • The user clicking on a keyword highlighted within the text from the app
  • The user moving to a dedicated info page regarding DACA
Scenario of user watching Trump discuss the state of DACA.
Audio has been recognized and output as soundbite text. The speaker has also been identified.

UREKA will then pull up relevant articles and hashtags. The point of this is also to speed up the search via motion: the user can swipe right on a hashtag to conduct a deep-dive search, swipe down to save it for later, and swipe left to discard it.

Example of dedicated info page.
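As a rough illustration of that gesture model (again, a hypothetical sketch rather than our actual implementation), the mapping from swipe direction to action could be as simple as:

```javascript
// Hypothetical sketch of the hashtag gesture model described above.
// Direction names and action types are illustrative; the real app would
// wire these into whatever gesture library the front end uses.
const SWIPE_ACTIONS = {
  right: (hashtag) => ({ type: 'DEEP_DIVE', query: hashtag }),      // open a deep-dive search
  down: (hashtag) => ({ type: 'SAVE_FOR_LATER', query: hashtag }),  // save to a reading list
  left: (hashtag) => ({ type: 'DISCARD', query: hashtag }),         // dismiss the suggestion
};

function onHashtagSwipe(direction, hashtag) {
  const handler = SWIPE_ACTIONS[direction];
  return handler ? handler(hashtag) : null; // ignore unsupported directions
}

console.log(onHashtagSwipe('right', '#DACA')); // { type: 'DEEP_DIVE', query: '#DACA' }
```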

Process

The process during this hackathon was unique: we had less than 24 hours to onboard our teammates, figure out our technical capabilities, and decide which scenario and user flow we wanted to focus on.

First I pitched the idea to my team and made sure we were all aligned on the same vision. My main concern as project lead/project manager was maintaining this alignment. Because there was so little time, we needed to set up timely check-ins with both the design team and the developers: the design needed to stay compatible with the technical capabilities as they were being discovered.

Design Studio

Sketches, medium, high-fidelity

Sketches by Anita Chen
Sketches by Christine Tayaba
My personal sketches.
Lo-fidelity Wireframes

Technical Capabilities

On our team, we were lucky to have two front-end developers and one back-end developer. We wanted to utilize speech recognition as well as the capability to auto-retrieve information based on each soundbite.

Tech we implemented includes:

  1. Voice recognition: Google Cloud Speech API (Node.js client)
  2. Keyword analytics: Google Cloud Natural Language API (https://cloud.google.com/natural-language/)
  3. Metadata crawler
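For readers curious how these pieces fit together, here is a minimal sketch assuming the standard @google-cloud/speech and @google-cloud/language Node.js clients, Google Cloud credentials configured in the environment, and a short pre-recorded audio clip. It is an illustration of the pipeline, not our exact hackathon code.

```javascript
// Minimal sketch: transcribe a soundbite, then extract entities as keywords.
const fs = require('fs');
const speech = require('@google-cloud/speech');
const language = require('@google-cloud/language');

async function soundbiteToKeywords(audioPath) {
  // 1. Speech-to-text on a short LINEAR16 audio clip.
  const speechClient = new speech.SpeechClient();
  const [speechResponse] = await speechClient.recognize({
    audio: { content: fs.readFileSync(audioPath).toString('base64') },
    config: { encoding: 'LINEAR16', sampleRateHertz: 16000, languageCode: 'en-US' },
  });
  const transcript = speechResponse.results
    .map((r) => r.alternatives[0].transcript)
    .join(' ');

  // 2. Entity analysis on the transcript to surface people, organizations, and topics.
  const languageClient = new language.LanguageServiceClient();
  const [entityResponse] = await languageClient.analyzeEntities({
    document: { content: transcript, type: 'PLAIN_TEXT' },
  });
  const keywords = entityResponse.entities
    .sort((a, b) => b.salience - a.salience)
    .map((e) => e.name);

  return { transcript, keywords };
}

// Example: the keywords could then feed the metadata crawler and article lookup.
soundbiteToKeywords('soundbite.raw').then(console.log).catch(console.error);
```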

Presentation

Presentation can be seen here

Working product can be seen and interacted with here.

Next Steps

  1. Integrate APIs from Twitter (see the sketch after this list)
  2. Explore APIs from notable news outlets such as CNN and others
  3. Explore more voice recognition technology to increase the workable distance between the user and the screen (for the current prototype we had to be close to the screen for the tech to work)
  4. Refine the design into high-fidelity wireframes and create a clean, high-fidelity prototype.
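To give a sense of what the first next step might look like, here is a hypothetical sketch that pulls recent tweets for a recognized hashtag using Twitter’s standard v1.1 search endpoint. The endpoint is real, but the surrounding code and token handling are illustrative only.

```javascript
// Hypothetical sketch of the Twitter integration from next step 1:
// fetch recent tweets for a hashtag surfaced by UREKA, using the
// standard v1.1 search endpoint with an app bearer token.
const https = require('https');

function searchRecentTweets(hashtag, bearerToken) {
  const url =
    'https://api.twitter.com/1.1/search/tweets.json?result_type=recent&q=' +
    encodeURIComponent(hashtag);

  return new Promise((resolve, reject) => {
    https
      .get(url, { headers: { Authorization: `Bearer ${bearerToken}` } }, (res) => {
        let body = '';
        res.on('data', (chunk) => (body += chunk));
        res.on('end', () => {
          try {
            // The v1.1 search response wraps matching tweets in a `statuses` array.
            resolve(JSON.parse(body).statuses || []);
          } catch (err) {
            reject(err);
          }
        });
      })
      .on('error', reject);
  });
}

// Example: pull tweets for a hashtag the user swiped right on.
// searchRecentTweets('#DACA', process.env.TWITTER_BEARER_TOKEN).then(console.log);
```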

About Therese

I’m a UI/UX designer interested in empowering individuals and enhancing human experiences. I’m currently freelancing and working as a UX design intern at QUE-UE.

Currently open to full-time UI UX opportunities.

Contact Info:

theresearcangel@gmail.com

http://arcangel.design/
