Rabble is Relaunched — Round Up Your Crew
DISCUSS. SPORTS. LIVE.
The past two years have been quite a journey. I’ve gone from contracting with Rabble to being their CTO, single-handedly built and maintained their iOS apps, and worked across the technical stack from one end to the other.
Almost two years ago to the day, I released v1.0.0 of Rabble. The web app had been out for a short time. The basis of the original product was scheduling a live audio broadcast around a sports event or TV show. You would essentially “call the game” while other participants listened to your commentary and watched the muted event on their TV.
When I was first pitched the idea, I got it right off the bat. I remember my grandfather sitting in his recliner, watching the Tennessee Volunteers play football with the TV muted, listening to the local AM station call the game. Why? Because the local guys were fanatics about UT and knew the players, coaches, and fans better than any corporate broadcaster.
My best friend Ben was their CTO and co-founder at the time and had built an amazingly customized streaming stack using HLS / RTMP. It was a little ahead of its time, and very similar, albeit at a fraction of the scale, to what Facebook did with Live. Ours was audio only. We dropped the video on the floor, which is an important note in the evolution to “Rabble 2.0”.
The iOS app was pretty straightforward to build. I had two main targets, both of which were written in Objective-C:
The only external framework I used was Flurry. Honestly, who wants to write their own analytics code?
From May 2015 through the first quarter of 2016, I iterated over bug fixes and new features. Most notably: reminders, broadcasting from the app, a custom audio player, universal links, and a full UI overhaul.
After a series of meetings about the future of the product and examining user feedback, we came to the conclusion that it was time to adjust. At the heart of the change would be moving from 1:N to N:N broadcasting. This time with video.
Who would have known that one variable change would move the product in an entirely different direction.
Birthing a New Baby
The main question that rattled around in my brain for weeks as we continued the initial product planning was: “How do we leverage as much of the existing stack and code base as possible with such a product shift?”
The answer was… we couldn’t. I decided that if you are going to be a monkey, you should be a gorilla. We would also take this as an opportunity to get off AWS and move over to GAE. In hindsight, this was one of those decisions that paid off in both money and time.
Out with the Old and in with the New
Rabble 1.0 Stack:
Rabble 2.0 Stack:
Interesting Problems to Solve
During one of our early product planning meetings, our lead engineer (services and Android) and I sat down to prioritize development based upon level of effort (LOE). Without a doubt, the most complex portion of the app is not the streaming of many-to-many video, but keeping all of the clients synced with published / unpublished streams, as well as the read and write transactions to Firebase.
We had to keep in mind the application states of the other clients participating in the conversation and their associated roles: host, cohost, or viewer. When there is a change to the conversation, everyone needs to know about it and act accordingly. This could be the host leaving, a new cohost request, a cohost request being accepted, rejected, or blocked, an incoming comment, etc. During each of these scenarios there are numerous delegate methods called to manage the A/V, as well as Firebase listeners.
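One way to think about this problem is as a single pure decision function that every client runs when a conversation event arrives, so that all participants react consistently to the same change. Here is an illustrative sketch in TypeScript, not the actual Rabble code: the role names and event kinds come from the description above, while the action names and the `react` function itself are hypothetical.

```typescript
// Roles and conversation events as described in the post.
type Role = "host" | "cohost" | "viewer";

type ConversationEvent =
  | { kind: "hostLeft" }
  | { kind: "cohostRequested"; userId: string }
  | { kind: "cohostAccepted"; userId: string }
  | { kind: "cohostRejected"; userId: string }
  | { kind: "cohostBlocked"; userId: string }
  | { kind: "comment"; userId: string; text: string };

// Local actions a client must take (e.g. tear down A/V, update UI).
// These names are hypothetical, for illustration only.
type ClientAction =
  | "endConversation"
  | "showCohostPrompt"
  | "startPublishing"
  | "refreshParticipants"
  | "appendComment"
  | "none";

// Pure function: given my role, my user id, and an incoming event,
// decide what this client should do. Centralizing the decision keeps
// host, cohost, and viewer clients in sync on every state change.
function react(myRole: Role, myId: string, ev: ConversationEvent): ClientAction {
  switch (ev.kind) {
    case "hostLeft":
      return "endConversation"; // everyone tears down when the host leaves
    case "cohostRequested":
      return myRole === "host" ? "showCohostPrompt" : "none";
    case "cohostAccepted":
      // The accepted user starts publishing A/V; everyone else refreshes.
      return ev.userId === myId ? "startPublishing" : "refreshParticipants";
    case "cohostRejected":
    case "cohostBlocked":
      return "none";
    case "comment":
      return "appendComment";
  }
}
```

In the real app each returned action would fan out into the delegate methods and Firebase listeners mentioned above; the value of modeling it this way is that the hard part, deciding what each role does on each event, lives in one easily testable place.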
There are still a few edge cases lingering out there that could put a user in a weird state, but Sterling and I spent months building a client-driven system that is stable and robust.