Raul Incze
Published in Cognifeed
Sep 13, 2019


This week in machine learning: Facebook launched a new deepfake challenge, researchers detect heart failure 💔 from a single heartbeat with 100% accuracy, and more!

In the spotlight 🔦: Sonero — an AI-powered speech coach.

A deepfake example from Wikipedia.

If you remember from our last TWML, a deepfake voice was used to impersonate a CEO and steal some money. In an attempt to help researchers develop better tools to detect such deepfakes, Facebook released a new dataset last week.

This is a constantly evolving problem, much like spam or other adversarial challenges, and our hope is that by helping the industry and AI community come together we can make faster progress.

Be careful how you treat your AI personal assistant, as it might be your best friend when you get older! In her article, Tanya Basu explores how Alexa and Google Assistant are being embraced by the elderly faster than any other tech gadget before them. Digital assistants seem to have become the perfect portal for accessing information about anything on the internet… including their pensions.

It’s about convenience, and they like that they can talk to a robot 24/7 and ask questions. But they also say they have a new friend at home. In the morning they get up and say good morning, and when they go to bed, they say good night. Nobody wanted to give it back.

Of course, conversational technology is not perfect yet, and the road to perfection might be hindered by regulations, soon to be put in place, on using audio conversations as training data.

Who would have guessed that humans dislike being bossed around by algorithms? You did? Oh… well, now it’s official! A joint study by New York University and the University of Warwick shows that Uber drivers dislike the constant surveillance, lack of transparency and dehumanization they face when engaging in gig-economy work. Given that a lot of companies are using algorithms to manage their remote workers, this might prove to be a big issue for the mental well-being of the workforce of the future.

WIRED published a very intriguing write-up on how Cold War analogies and misguided references to “arms race” scenarios are hurting emergent technologies, AI included. This framing hinders development by warping policy-making around an ominous fear and encourages governments to over-focus on AI’s military applications.

With great interconnection between the US and Chinese technology sectors, science and technology research is anything but zero-sum.

We agree with WIRED on the matter and believe that tech research, at least when it comes to AI, knows no borders in today’s global environment. The “arms race” analogy is a bit naive.

Have you always wanted to start a let’s play channel but have no personality? (… one that appeals to kids, of course.) Or maybe you’re not funny enough to do the voice-over commentary? No worries! Soon enough a machine learning model will generate it for you!

To be fair, I was hoping for a video with a robotic voice over it, speaking something close to gibberish. This is more of an image captioning task: frames are clustered together and split into scenes, after which a caption is generated for each scene. It seems to work well enough, but I’m missing my WOW factor.
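The scene-splitting step can be approximated very crudely by cutting wherever consecutive frames differ a lot. A minimal sketch with synthetic “frames” (this is illustrative only, not the actual model from the video):

```python
import numpy as np

def split_scenes(frames, threshold=0.5):
    """Group consecutive frame indices into scenes: start a new scene
    whenever the mean absolute pixel difference to the previous frame
    exceeds the threshold (a hard cut)."""
    scenes, current = [], [0]
    for i in range(1, len(frames)):
        diff = np.abs(frames[i] - frames[i - 1]).mean()
        if diff > threshold:
            scenes.append(current)
            current = []
        current.append(i)
    scenes.append(current)
    return scenes

# Three dark frames, then a hard cut to three bright frames.
frames = [np.zeros((4, 4))] * 3 + [np.ones((4, 4))] * 3
print(split_scenes(frames))  # [[0, 1, 2], [3, 4, 5]]
```

Each resulting scene would then be handed to a captioning model, which is where the “commentary” text comes from.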

Mihaela Porumb et al. have published their research in Biomedical Signal Processing and Control on using convolutional neural networks to identify congestive heart failure (CHF) with 100% accuracy from a single heartbeat.

Being somewhat of a medical illiterate, I scoffed at the results at first… The model isn’t predicting anything in the future, but rather labeling heart failure only after the fact. But apparently this is hard to do right now, so it’s a big deal. And the 100% accuracy is really impressive; we don’t see this type of result too often in a mostly probabilistic field such as machine learning.
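The core idea is feeding the raw ECG samples of one heartbeat through convolutional layers instead of hand-crafting features. As a toy sketch of that building block (a single 1D convolution + ReLU + max pooling in NumPy — not the authors’ architecture, and the kernel here is made up for illustration):

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D convolution: slide the kernel along the signal."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def heartbeat_feature(beat, kernel):
    """One conv layer + ReLU + global max pooling -> a single scalar feature.
    A real CNN stacks many such filters and learns the kernels from data."""
    activations = np.maximum(conv1d(beat, kernel), 0.0)  # ReLU
    return activations.max()                             # global max pooling

# Toy example: a hypothetical 'peak detector' kernel responds to a
# sharp QRS-like spike but stays silent on a flat segment.
kernel = np.array([-1.0, 2.0, -1.0])
spiky = np.array([0.0, 0.1, 1.0, 0.1, 0.0])
flat = np.zeros(5)

print(heartbeat_feature(spiky, kernel))  # 1.8
print(heartbeat_feature(flat, kernel))   # 0.0
```

A classifier head on top of many such learned features is what ultimately outputs the CHF / healthy label.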

Startup Spotlight

I’ve had to do a lot of public speaking lately — from pitching in front of investors to speaking at meetups… And, to be honest, I suck at it. I’ve been using my friends and team to get feedback and try to improve. While I appreciate their valuable input, they are not public speaking coaches who can give precise feedback and tips on what to improve and how.

Sonero aims to fill in for that public speaking coach I so desperately need. It uses various machine learning models to analyze your speech (both content and delivery) and the visual cues while you deliver it, predicting attributes such as pacing, energy and facial expressivity.
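Of those attributes, pacing is the simplest to picture: given a timestamped transcript, it boils down to words per minute. A hypothetical sketch (not Sonero’s actual code):

```python
def words_per_minute(segments):
    """segments: list of (text, start_sec, end_sec) transcript chunks,
    e.g. from a speech-to-text service. Returns overall speaking pace."""
    total_words = sum(len(text.split()) for text, _, _ in segments)
    total_secs = sum(end - start for _, start, end in segments)
    if total_secs == 0:
        return 0.0
    return 60.0 * total_words / total_secs

segments = [
    ("thanks everyone for coming today", 0.0, 3.0),
    ("let me start with a quick demo", 3.5, 6.5),
]
print(words_per_minute(segments))  # 12 words over 6 s -> 120.0 WPM
```

A coach-style tool would then compare this number against a comfortable range (and track how it drifts over the course of the talk), which is much more precise feedback than a friend’s “maybe slow down a bit”.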

Don’t forget to show them some love on Product Hunt too!

Oh… one more thing before we wrap everything up. If you’re working in an organisation (startup, company, corporation, etc.) and you’re having difficulties calculating how much implementing AI would cost, you might want to check out our new article:

… fresh out of the oven. That is all for this week! Until next one, stay curious!


Fighting to bring machine learning to as many products and businesses as possible, automating processes and improving the living experience.