Some Musings on the SRG SSR Hackdays 2018: The Pitches

Source: Radio Télévision Suisse YouTube Channel

Disclaimer: As mentioned in the main post, the fact I could even attend this event was a massive surprise. Neither my tech nor my media knowledge is amazing, as this article will probably highlight.

As this got quite big quite fast, I decided to split it into a separate post. Below you can find a short description of each pitch as well as my opinions on the program presented in it. I tried to keep things short, but at 27 pitches this post still became quite lengthy. Some entries are longer than others because I had more to say or knew more about the subject, but I tried to give each pitch its fair share of words.

If any participants (or anybody else) read this and spot any mistakes on my part, please let me know and I will correct them. I am also always happy to read the opinions of others, so comment away. Furthermore, please excuse the lack of visuals on this one; next time I’ll bring a camera to the event.


RoboJournalism

This tool was based around the idea of using an AI to create audiovisual content instead of written content. The showcased version, created at the Hackdays, used various inputs to generate highlight reports of minor league football, which it then narrated via text-to-speech. The presenter mentioned issues with sport-specific terms and slang, which the program could not yet handle properly.
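
The team didn’t go into implementation details, but for illustration, here is a tiny Python sketch of how a template-based version of this could work; every field and phrase in it is made up by me:

```python
# Hypothetical template-based sketch; the team's actual inputs, wording and
# TTS pipeline are unknown, so every field and phrase here is invented.
def commentary(match: dict) -> str:
    """Turn structured match data into a short spoken-style summary."""
    home, away, loc = match["home"], match["away"], match["location"]
    hs, as_ = match["score"]
    if hs > as_:
        result = f"{home} beat {away} {hs}-{as_}"
    elif hs < as_:
        result = f"{away} won {as_}-{hs} away against {home}"
    else:
        result = f"{home} and {away} drew {hs}-{as_}"
    return f"In {loc}, {result}."

text = commentary({"home": "FC Example", "away": "SC Placeholder",
                   "location": "Musterhausen", "score": (3, 1)})
print(text)  # this string would then be handed to a text-to-speech engine
```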

Having this as the first presentation set a high bar. The tool worked quite well during the presentation, and while you could tell the reports were automatically generated, I was impressed by how natural the commentary sounded considering the tool had only been worked on for a single day. Adding text-to-speech to a highlight reel made it a lot more interesting to follow, even if the information transmitted this way was just the location of the game and the result.

I have read reports that AI will slowly take over human jobs even in journalism, with minor league sports journalism likely being the first field affected, and this was a good example of how that could happen. While I doubt there is much interest (or many timeslots) for minor league sports in broadcast media, I can see a tool like this being used online to cover as many recorded games as possible in a short timeframe, probably internationally. Whether a tool like this will ever make it big is unclear, but it served as a good example of what the future of reporting sports results may look like, which in my opinion is an important part of what the Hackdays aimed to achieve.

Understanding Switzerland through its News

Next up was Geneva’s first presentation, which used the RTS’ archives to visualize data. The team showed how the tool classified the data by multiple categories, including location, keywords and type of article. Through the latter two it could also identify popular topics (in this case, tennis).

Included in this tool was a machine learning program for facial identification. It would determine who was visible in an article and use this to create further outputs. The example used during the pitch was tennis, with the tool visualizing the difference in popularity between Roger Federer and Stanislas Wawrinka over a span of multiple years.
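
The archive schema wasn’t shown, so purely as an illustration, here is a rough Python sketch of the “popularity over time” idea, using invented records and simple tags instead of the team’s facial recognition:

```python
# My own guess at the "popularity over time" step: the archive schema is
# invented, and people are matched via simple tags rather than face detection.
from collections import Counter

archive = [
    {"year": 2014, "people": ["Roger Federer"]},
    {"year": 2014, "people": ["Roger Federer", "Stanislas Wawrinka"]},
    {"year": 2015, "people": ["Roger Federer"]},
    {"year": 2015, "people": ["Stanislas Wawrinka"]},
]

def mentions_per_year(records, person):
    """Count how many archive items a person appears in, per year."""
    counts = Counter(r["year"] for r in records if person in r["people"])
    return dict(sorted(counts.items()))

for player in ("Roger Federer", "Stanislas Wawrinka"):
    print(player, mentions_per_year(archive, player))
```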

This is definitely useful for data collection and analysis, to the point where I strongly assume that programs similar to it are already used in many places. The addition of facial identification (which worked quite well according to the presenters) was interesting, as this means that visual archives are likely easier to analyse.

MeteoMom

The idea behind MeteoMom was “short term weather on the go”. Integrated into Amazon’s Alexa, the tool took weather data and translated it into something simple. Instead of telling you about temperature and humidity, it would, for example, tell you to “Wear a coat and grab an umbrella” when going out. The tool was based on SRF’s Meteo API.

During the pitch the presenters noted that converting the API’s data into language for MeteoMom to tell its user was the main hurdle. They also included a few ideas for what could be done in the future, namely severe weather warnings, details for other locations and a proper “back and forth” in which the tool would explain its decisions to the user in simple terms.
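
To illustrate what that translation step might look like, here is a minimal Python sketch with a made-up forecast format (SRF’s Meteo API will obviously look different):

```python
# Minimal sketch of the "translate weather data into advice" step, assuming a
# simplified forecast dict; the real API's fields are unknown to me.
def advice(forecast: dict) -> str:
    tips = []
    if forecast.get("temperature_c", 20) < 10:
        tips.append("wear a coat")
    if forecast.get("rain_probability", 0.0) > 0.5:
        tips.append("grab an umbrella")
    if not tips:
        return "You're good to go as you are."
    return "Before you head out: " + " and ".join(tips) + "."

print(advice({"temperature_c": 7, "rain_probability": 0.8}))
# -> Before you head out: wear a coat and grab an umbrella.
```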

I can see some use in this, as it slots nicely into what Alexa, Google Home and others are already doing. Hearing about temperature, chance of rain and so on is nice, but the more personal “please wear a coat” makes these tools feel more human and accessible, which is something they benefit from.

Polis Mapping Workshop

This entry is the first in a sadly somewhat lengthy list of pitches where the livestream, bad mic quality and the accent barrier made it difficult to understand what exactly was happening. The tool was based on SRF’s Politics API (called Polis, from what I later gathered). It took relevant data and visualized it in a dropdown menu as well as various maps, both realistic and abstract.

I wasn’t too amazed by this tool, personally. I acknowledge that a lot of effort probably went into it and that the people behind it created it in a short amount of time, yet this kind of visualization is already used in so many places. The fact this entry was referred to as a workshop makes me think it may have been a demonstration of an already-existing tool rather than something developed at the Hackdays; if so, that wasn’t communicated clearly.

Rap Local

My personal winner of “best presentation”, this pitch started off with one of the presenters rapping. The tool itself aimed to take the “endless horrifying rant collection” (their words, not mine) that is the internet and its community and make interacting with them easier. To this end users could select text in an article (similar to here on Medium) and leave a reaction, which was then listed on a separate page. The available reactions were “!” and “?”, the former being positive and marking something important to the user, the latter being negative or indicating a question.

The tool hoped to create a “rap battle style” (again, their words) of community interaction, with immediate action and reaction.
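
For illustration, here is a guess (entirely mine) at how such selection-anchored reactions could be stored, written as a small Python sketch:

```python
# A guess at the data model behind selection-anchored reactions; the actual
# implementation wasn't shown in the pitch.
from dataclasses import dataclass

@dataclass
class Reaction:
    article_id: str
    start: int    # character offset where the selected passage begins
    end: int      # character offset where it ends
    kind: str     # "!" (important/positive) or "?" (question/negative)
    user: str

reactions = [
    Reaction("article-42", 120, 180, "!", "alice"),
    Reaction("article-42", 300, 350, "?", "bob"),
]

def reactions_for(article_id: str):
    """Everything that would be listed on the article's reaction page."""
    return [r for r in reactions if r.article_id == article_id]

print(reactions_for("article-42"))
```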

I like the concept behind this one; Medium’s ability to highlight sections of articles is something I wish more pages would introduce. I feel the !/? system may be a bit too simple, however, as it doesn’t communicate much, and a “negative” reaction could also just mean somebody doesn’t quite understand what an article is talking about. Nevertheless, this is one of the tools/concepts that I can see having the most real-life application.

infaux Darius

This program was essentially a version of Face2Face, deepfakes and other tools of the same style. Taking a recording of one person’s face, it could then transfer that face onto another person. Considering the amount of time spent on it, the result was quite impressive.

Application-wise this one is more of a “what could journalists soon have to deal with” than anything they themselves will use, in my opinion. (Let’s hope James O’Keefe never hears of this.) Where I can see some use in the field of journalism is in deconstructing this technology, as knowing how it works may help the journalists of tomorrow analyse whether a video has been tampered with in this way. The second option is a bit stranger: instead of having separate anchors, you could have a single person who isn’t actually real doing the job (possibly interesting for online platforms), or replace anchors who are sick or unavailable, which just opens another can of worms when it comes to ethics in media…

Of course, for other media professionals tools like this may open some new avenues that could be used in interesting ways too.

Blockchain Reporter

One of the two eventual winners, this tool, as the name makes clear, jumped onto the recent blockchain hype. According to the presenters, it aimed to “democratize the news” by allowing “verified reports by anonymous reporters”. As a decentralized mobile app, its core audience would be journalists facing censorship, who could use the blockchain to send their reports out into the world.

Journalists would be verified through their private keys, which would also allow the direct donation of Ether, as the tool was built on Ethereum infrastructure.
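
I don’t know how the team implemented this, but as a rough sketch, signing and verifying a report with an Ethereum key could look something like the following (using the eth_account Python library); whether their app works anything like this is my assumption:

```python
# Rough sketch of the "verified via private key" idea using eth_account;
# the team's actual protocol is unknown to me.
from eth_account import Account
from eth_account.messages import encode_defunct

reporter = Account.create()  # key pair a reporter would hold on their phone
report = encode_defunct(text="Protest at the main square, 14:00, ~2000 people.")

signed = Account.sign_message(report, private_key=reporter.key)

# Anyone can later check that the report really came from that key, and
# Ether donations could be sent to the same address.
author = Account.recover_message(report, signature=signed.signature)
print(author == reporter.address)  # True
```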

My personal concern with this one (echoed by another guest I talked to) is the anonymity. There has to be a way to confirm what somebody claims, and I feel this tool works, in a way, against that (though I of course don’t know exactly what the programmers were considering). Furthermore, the tool requires a dedicated mobile app, which would likely be easy to track and possibly influence or ban, at which point you may as well just rely on Tor to get your information out.

Still, it’s an interesting glimpse of how blockchain may be used in media in the future. If you are interested in this subject and want some further reading, the Columbia Journalism Review recently released an interview with the founder of po.et, a blockchain-based tool for content creators, as well as an article on Civil, a tool aimed specifically at journalists.

Personal News Extension

My notes on this one aren’t great as I spent a bit too much time writing down my thoughts on Blockchain Reporter, but in essence this was a Chrome extension which allowed you to enter topics and websites you find interesting. It would then provide highlights each day, serving as what I suppose is best called an eclectic, personalized newsletter.

I quite like the concept behind this; picking highlights and showing them to you daily, e-mail-newsletter style, is something I can get behind. However, it’s nothing ground-breaking in my opinion. RSS has been around for a long time and does pretty much this. Still, I imagine a Chrome extension like this one already exists (if not, then somebody get around to making it!), which suggests there’s demand for it.

TV to SM

I didn’t quite understand this one, unfortunately. As far as I can tell, this program (which may have been developed previously?) would do market research and then send info about the current day’s TV program to potential viewers on social media (Facebook in this case).

This was a bit of a strange one, because from what I can tell it’s less of a “we did this during these 24 hours” and more of a “this is what our company has been working on and what we plan to do with it”, which doesn’t really capture the Hackdays’ main point, in my opinion. Nevertheless, it clearly has real-life application (otherwise it probably wouldn’t be commercially developed), even if I’m unsure whether Messenger notifications about today’s TV program really are necessary. Probably nice as a reminder for something you really want to watch live, but beyond that, I’m not so sure.

VideoMLTech

My personal favourite when it comes to applicability and the eventual third-place finisher, VideoMLTech used machine learning to analyse raw video footage. It cut out unusable parts that were, for example, blurry or had the recorder’s hand in them, and then showed which parts had been cut. The tool worked quite well when presented, with most of the remaining footage being something that could be used further.
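
The team used machine learning; as a much simpler stand-in to illustrate the idea, here is a Python sketch that flags blurry frames with OpenCV’s variance-of-Laplacian heuristic (my choice of method, not theirs):

```python
# Simple stand-in for the team's ML model: flag blurry frames by measuring
# image sharpness with the variance of the Laplacian.
import cv2

BLUR_THRESHOLD = 100.0  # tune per camera and footage

def blurry_frames(video_path: str, step: int = 15):
    """Return indices of sampled frames (every `step` frames) judged too blurry."""
    cap = cv2.VideoCapture(video_path)
    flagged, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            if sharpness < BLUR_THRESHOLD:
                flagged.append(index)
        index += 1
    cap.release()
    return flagged

print(blurry_frames("raw_footage.mp4"))
```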

I love the idea behind this one. If you have a few hours of recorded footage and don’t want to manually cut out all the unusable parts, you can just let the tool do it and then check what was cut. As with everything related to machine learning it’s not perfect, but this is one of the pitched tools I could see myself using regularly, which in my opinion is an important aspect when it comes to pitching software. It was also one of the few tools to really use machine learning, from what I gathered, thus scoring some extra points on my personal “media and AI” scale.

Jass we can

Next up was an idea based on DeepMind’s AlphaGo: an AI which could play Jass (on the off chance non-Swiss people read this, there’s a link). In essence, the tool could take an input of cards and then determine the best order in which to play them, working around the fact that there is hidden information while doing so.

This presentation combined terms and concepts from both Jass and advanced AI. I’m not particularly well versed in either of the two, so my notes are a bit short. I don’t see a lot of use in media for this one, beyond maybe running it during a televised Jass event instead of hiring experts to comment on everybody’s most likely move.

MatchMe

This one was pitched as “Tinder for News”. The program showed you a headline, and by swiping you decided whether to ignore it or add it to your reading list, with each swipe improving the tool’s knowledge of your reading habits and thus its suggestions.
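
One plausible way (my guess, not the team’s method) to turn swipes into better suggestions is a simple per-keyword score, sketched here in Python:

```python
# Purely my guess at the learning step: right swipes boost a headline's
# keywords, left swipes lower them, and new headlines are ranked by the sum.
from collections import defaultdict

scores = defaultdict(float)

def swipe(headline_keywords, liked: bool):
    """Update keyword scores from a single swipe."""
    for kw in headline_keywords:
        scores[kw] += 1.0 if liked else -0.5

def rank(candidates):
    """Order unseen headlines by the learned keyword scores."""
    return sorted(candidates,
                  key=lambda c: sum(scores[kw] for kw in c["keywords"]),
                  reverse=True)

swipe(["tennis", "federer"], liked=True)
swipe(["blockchain"], liked=False)
print(rank([{"title": "Federer wins again", "keywords": ["tennis", "federer"]},
            {"title": "New crypto token", "keywords": ["blockchain"]}]))
```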

This is a concept I’ve come across before here on Medium. While that article’s a few years old by now, much of what was said in it was executed in MatchMe, and I will simply link my own comment on that article to describe my opinion on the concept: https://medium.com/@ivan.anderegg/its-an-interesting-thought-experiment-and-something-that-i-could-working-in-practice-6b5235e6e966

Make a Bunny Smile

This project was pitched by one of SRF’s in-house developers, which may mean we’ll see it in action soon. Using a webcam, the tool tracked a viewer’s facial expressions while they were watching a video and showed the developer the viewer’s emotions as the video progressed.
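
The pitch didn’t go into how the expressions are read; as a crude illustration, here is a Python sketch that simply logs when OpenCV’s bundled smile detector fires on the webcam, which is far simpler than whatever the real tool does:

```python
# Crude stand-in for expression tracking: sample the webcam for a few seconds
# and record whether a smile was detected inside a detected face.
import time
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def smile_timeline(seconds: float = 10.0):
    """Return (timestamp, smiling?) samples while the viewer watches a video."""
    cap, samples, start = cv2.VideoCapture(0), [], time.time()
    while time.time() - start < seconds:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        smiling = False
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            roi = gray[y:y + h, x:x + w]
            smiling = len(smile_cascade.detectMultiScale(roi, 1.7, 20)) > 0
        samples.append((round(time.time() - start, 1), smiling))
    cap.release()
    return samples

print(smile_timeline(5))
```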

Apart from the fact that this sort of webcam usage throws up some red flags for me (it would either have to be approved by the user every time they watch a video or just run in the background), I like this one. Tracking a viewer’s or reader’s emotions while they consume a piece of media is interesting from an audience-targeting perspective; it would likely help in visualizing just what attracts viewers in a video.

WatchMe

This tool, from what I understand, let you input your favourite movies/TV shows and would then suggest similar movies and TV shows which would be shown on TV soon.

This presentation was quite short, so there’s not much more to say. I like the idea of only suggesting what will be on TV soon, which separates the program from countless ones that also recommend new things based on your favourites.

Captain Caption

Captain Caption was created around the concept of providing automatic multilanguage closed captions for an input video. By not only doing speech-to-text but also analysing background music and noises, it could provide automatic captions quickly. The pitch included some well-done visuals which showed both the confidence level of the tool’s various outputs and what it would display at any specific point in the video.
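
To illustrate the confidence aspect, here is a small Python sketch that assumes a made-up word-level speech-to-text output and groups it into timed caption lines, flagging low-confidence ones for review (the real tool’s data format is unknown to me):

```python
# Assumed word-level STT output; the actual format used by the tool is unknown.
words = [
    {"word": "Guten", "start": 0.0, "end": 0.4, "confidence": 0.95},
    {"word": "Abend", "start": 0.4, "end": 0.9, "confidence": 0.91},
    {"word": "Bern",  "start": 1.1, "end": 1.6, "confidence": 0.48},
]

def to_captions(words, max_len=2.5, min_conf=0.6):
    """Group words into timed caption lines and flag low-confidence ones."""
    captions, current = [], []
    for w in words:
        current.append(w)
        if current[-1]["end"] - current[0]["start"] >= max_len or w is words[-1]:
            captions.append({
                "start": current[0]["start"],
                "end": current[-1]["end"],
                "text": " ".join(x["word"] for x in current),
                "needs_review": any(x["confidence"] < min_conf for x in current),
            })
            current = []
    return captions

print(to_captions(words))
```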

Anybody that’s used automatic captions on YouTube knows they don’t work amazingly well, but I’m optimistic about both this specific concept and the future of automatic captions in general. While I doubt they will ever replace the people who provide captions live (hopefully they won’t), being able to generate proper closed captions on the fly could be useful for backing those people up or jumping in during timeslots where nobody is available.

La Bocca della Verità — Mouth of Truth

This tool was meant to help with researching a single topic. After inputting a keyword, the program would show a wide variety of articles, which it automatically labelled with keywords that could also be used to filter the articles. Further labels could be added by users, with a reputation system in which labels from commonly upvoted users would rank higher.
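
Here is my guess, as a small Python sketch, at how reputation-weighted labels could be ranked; the team’s actual scoring wasn’t described in detail:

```python
# My guess at a reputation-weighted label ranking; the real scoring is unknown.
user_reputation = {"alice": 42, "bob": 3, "carol": 17}

article_labels = [
    {"label": "glyphosate", "added_by": "alice", "upvotes": 12},
    {"label": "agriculture", "added_by": "bob", "upvotes": 2},
    {"label": "EU", "added_by": "carol", "upvotes": 5},
]

def ranked_labels(labels):
    """Labels from well-reputed, frequently upvoted users float to the top."""
    def score(entry):
        return entry["upvotes"] + 0.5 * user_reputation.get(entry["added_by"], 0)
    return sorted(labels, key=score, reverse=True)

for entry in ranked_labels(article_labels):
    print(entry["label"])
```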

I like this one. It cuts down on time if you want a quick overview of a subject, which is something I’m always fond of. There are some concerns regarding potential abuse of the system (because the internet is an “endless horrifying rant collection” and troll factory), but the Steam Store’s user tag system has shown that this part can work (somewhat well). I’m not sure how the potential exclusion of websites, whether intentional or not, would be handled, though, which could also cause some problems.

Politcalendar

Using SRF’s Politics API and a private database built by the developers, this tool aimed to list all political events in Switzerland (as far as I could tell); with the current data this mainly meant votes and elections. The developers plan to continue working on the tool, and the pitch included a list of features they aim to add in future versions. The website the tool runs on is actually live as of the time of writing: http://termine.politik.ch/

I like this one. It’s nothing massive, but it’s nice to have a centralized source that lists all votes happening on any given date. With what the developers are planning to include in the future this will probably turn into a nice tool to quickly figure out what’s happening, something that there might be demand for.

Méthylation

This team was made up of people currently working for Temps Présent, RTS’s investigative magazine. Their project aimed to show what goes into the production of a report through what the pitch called “a game”, where players choose a role in the production process and then make a variety of decisions. During the pitch they too mentioned that they’d continue working on it in the future.

I like this one on two levels. The first: I like games, especially about things I’m interested in. The second: this could help improve media competency by giving people (especially teenagers and young adults) an easy way to see what goes into a report, while keeping them invested by making it a game instead of just a video or text. I’ll certainly be watching where this one goes in the future.

Wordclipper

My notes sum this one up as “automated YouTube Poop”. Wordclipper would take a video as input and run it through speech-to-text. Afterwards the user could input a text, which the tool then recreated by taking sections of the video where the individual words were said. This worked quite well, even though I’m not sure whether the original video was turned into one-word-long clips manually or automatically.
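
For illustration, here is a Python sketch of the re-assembly step, assuming the speech-to-text pass already produced word-level timestamps (the actual video cutting would then use these start/end pairs):

```python
# Assumed transcript format with word-level timestamps; the tool's real
# pipeline and data model weren't shown.
transcript = [
    {"word": "we", "start": 1.2, "end": 1.4},
    {"word": "can", "start": 1.4, "end": 1.7},
    {"word": "do", "start": 5.0, "end": 5.2},
    {"word": "this", "start": 5.2, "end": 5.6},
]

index = {}
for w in transcript:  # first occurrence of each word wins
    index.setdefault(w["word"].lower(), (w["start"], w["end"]))

def clip_plan(sentence: str):
    """Map each requested word to the video segment where it was spoken."""
    plan = []
    for word in sentence.lower().split():
        if word not in index:
            raise ValueError(f"'{word}' was never said in the source video")
        plan.append((word, *index[word]))
    return plan

print(clip_plan("we can do this"))
```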

As I mentioned above, the first thing that came to mind were the YouTube Poop videos from years ago, when taking single words and creating sentences out of them seemingly was the internet’s favourite pastime. (This is still done by some people, by the way.) Wordclipper is a gimmick, but it’s a pretty cool one at that. Is there any real application for it beyond making those content creators’ work easier? I doubt it. Is it fun? Yes.

#WeAreTataki Chatbot

Tataki is not just a Japanese way of serving fish but also a social media youth portal in the Romandie. This entry was based around a chatbot with the three different personalities the portal uses, which users could then interact with; in the pitch this was demonstrated through a little game. The presenters noted that part of what the bot could do is similar to Buzzfeed’s personality tests.

This one goes onto the list of teams creating something they have clear plans for, which means it’ll probably see usage. I’m personally not too big a fan of these kinds of chatbots, but I can see how this one could help with user interaction and participation by messaging users on Facebook and engaging with them there, which is what the team was aiming for.

Warhol TV

Warhol TV’s main function was to take a video, run it through Speech-to-Text and then create a gif of the subtitled video. Beyond this, it could do the same thing but then take it a step further and find a popular gif that matched every word, which it then included in the resulting gif.

This one’s also what I’d classify as a fun gimmick. While the presenters explained the benefits of gifs, I’m a bit more sceptical, as anything beyond a few seconds in length just isn’t worth turning into a gif, in my opinion. Still, out of all the pitched projects this one probably has some of the best chances of actually seeing use, in its case on social media.

Smart Badge Voice Assistant

The second eventual winner, this one was quite impressive, and its win was totally deserved in my opinion. By linking Google Home, an iPad and a printer, this tool printed badges under full voice control. It ran into some of the usual troubles associated with voice control during the pitch, but that’s to be expected.

As I mentioned above, Smart Badge’s victory was deserved. The tool does one thing, and it does it well. I can see some application at parties or in places where having to enter something digitally may be a bit of a problem. It’s not groundbreaking, but putting up a few of these might help streamline some events in the future.

MX3ForMe

Up next was MX3ForMe, which aimed to recommend new songs on MX3, the SRG SSR’s platform for Swiss music, to its users. Working with the MX3 API and tracking what a user listened to or watched, the tool would suggest new songs. To do this, it used user-to-user collaborative filtering, in which items one person likes are recommended to other people who like the same things.
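
Since user-to-user collaborative filtering was named explicitly, here is a minimal Python sketch of the technique; the MX3 data in it is invented:

```python
# Minimal user-to-user collaborative filtering; the listening data is made up.
listens = {
    "ana":  {"Band A", "Band B", "Band C"},
    "beni": {"Band B", "Band C", "Band D"},
    "cla":  {"Band E"},
}

def jaccard(a, b):
    """Similarity between two users' listening sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user, k=1):
    """Recommend items liked by the k most similar other listeners."""
    others = sorted((u for u in listens if u != user),
                    key=lambda u: jaccard(listens[user], listens[u]),
                    reverse=True)[:k]
    seen = listens[user]
    return {item for u in others for item in listens[u]} - seen

print(recommend("ana"))  # -> {'Band D'}, via the similar listener "beni"
```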

At first I was fairly sceptical of this as I didn’t actually know what MX3 was (I’m a cultural savage, clearly). But now that I finally remembered, my main reaction is: if this isn’t already a thing, make it one, SRG! Again this is nothing groundbreaking, but on a platform for relatively unknown artists like MX3, this style of recommendation can be what pushes a band out of obscurity.

Mojo Wirecast Kit

From what I could gather from the pitch (not amazingly presented, unfortunately), this was about a tool currently being developed in-house by RTS. Designed for mobile journalism, it could take files and stream them to multiple places at the same time, if I understood correctly.

As this presentation was both short and not particularly well explained, I don’t have much of an opinion on the program itself, especially as it wasn’t presented in action in any way. I (for now) don’t know what tools are available when it comes to mobile journalism, but I imagine that if it’s being developed, there’s a demand for it. Again, if I understood it correctly and this one was already developed before the Hackdays, it misses the event’s aim in my opinion, which is a shame.

Heidi

Heidi was a digital assistant in the style of Alexa and Google Home, focused specifically on the SRG’s radio channels. Completely voice activated, Heidi (which in this case was a wooden box) could be asked to play a specific channel. This would also offer the chance to gain some additional customer data.

I like this one, and I can see it having real-world application (and apparently the concept already exists, which is unsurprising). With many radios becoming more and more digital, using them and switching channels will become more difficult for people without any tech know-how, whereas telling a program to turn on your favourite channel will always be simple. There are a few other uses for voice-activated tools, and this one fits into a nice (yet very small) niche.

Afterhack

This one wasn’t a pitch but an explanation of what would come now that the Hackdays were over: debriefing, communication between organizers and participants, people on social media offering their opinions about the event and the projects (heh, I guess I was part of the pitches in a way) and, in some cases, further development and how said development might happen.

This was quite interesting, as it offered some background on what would happen after the event, something I’m personally quite interested in. While it didn’t go into specific details much, it raised a few nice points, such as the organization of the current data (GitHub, mind maps or just scattered pieces of paper) and of the teams, some of which were made up of people who had had no previous contact. It wasn’t a pitch per se and thus should probably have been placed in a different timeslot; in my opinion it would have been best placed as the last presentation (which it was originally intended to be, from what I can tell) or between the presentations and the winner ceremony.

Ratingsbot

The last entry to function purely by voice control, Ratingsbot used voice recognition to let users browse their account’s various metrics. This worked quite well in the presentation, with the option to choose the channel and timeframe. From what I could tell, the only metric the tool could handle at the moment was popularity, which you could then inspect further for each entry.

This one was more of a gimmick again in my opinion, as I personally don’t see much use for voice control when it comes to a metrics tool. Nevertheless it was interesting to see what can be done with the tool and how specific you could get with search terms.

Community Event

Community Event’s pitch suffered from one major issue: the developer didn’t show up. There was a presentation which talked about the idea; however, the team unfortunately could not present an actual program (and was disqualified, from what I gathered). As for the idea itself: the program aimed to “connect people around an event” by offering shared group livestreams and letting people interact over audio. This could then be further extended to let people do live commentary for a larger audience, according to the presenter.

This one could function quite well. Shared livestreaming already exists, as do tools which let you livestream to a large audience, which shows that it can work in practice; the question is just whether there is enough demand for a separate application that does all of this. It’s something I could see myself using in some situations if it ended up being properly optimized.


That sums up my personal opinions on the various pitches. I don’t have any real insight into the media world, so obviously take these with a grain of salt, but perhaps this article will create some visibility for these concepts, as the livestream is no longer available to watch. If you are looking for a way back to my main post about the event, click here.
