Best content on data analytics & AI of May 2018

1. Google Duplex content

[Screenshot of the Duplex tech demo]

2. A.I. Is Harder Than You Think, by Gary Marcus and Ernest Davis, New York Times

Gary Marcus is one of the most critical voices among AI researchers regarding the field's current approach. He wrote the heavily discussed paper Deep Learning: A Critical Appraisal. In this New York Times article he and Davis use the Google Duplex demo to criticize the progress of the AI field, conclude that the dominant approach to AI doesn't work, and advocate first learning more about the fundamental elements of human understanding. While their critique has some merit, I think you could apply it to almost any emerging technology. There were plenty of people advocating new approaches when things like the smartphone or the car were invented, but that didn't stop those fields from advancing.

3. Facebook Adds A.I. Labs in Seattle and Pittsburgh, Pressuring Local Universities, The New York Times

As I've written in earlier posts, big dollars from the big tech companies are draining universities. From a professor's perspective it's completely understandable: if you have been working at a university for years, on a meager salary and with all the limitations and frustrations of university research life, I can imagine it's very appealing to join a big company with an astronomical salary, a great data set and all the tools you need. From a societal perspective, however, it becomes very worrying if universities are drained of their best researchers. It has now come to the point that if you want to work with the top scientists, you have to join Google, because that's where they are and where you can learn from them. If these companies have a monopoly not only on datasets but also on data talent, it becomes really scary.

4. Stanford's explanation of how DeepFake videos work

We've seen quite a lot of generated video and audio lately (examples include the Obama and Trump videos, or the porn videos featuring celebrities). In this video, researchers explain in plain English how these models work and how video is generated.

5. Article: AI Doubling Its Compute Every 3.5 Months, Synced

According to Moore's law, computational power doubles every two years. Advances in both the software and hardware layers have made it possible to go much faster than that: it now takes only approximately 3.5 months for the compute used in the largest AI training runs to double. If you look behind the numbers there are some caveats, but it's very impressive regardless.
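A quick back-of-the-envelope comparison shows just how different those doubling rates are (the 3.5-month figure is the article's; the arithmetic is my own sketch):

```python
# Compare Moore's-law growth with the reported AI-compute trend.
# Both are exponential; only the doubling period differs.

def growth_per_year(doubling_months: float) -> float:
    """Multiplicative growth factor over 12 months, given a doubling period."""
    return 2 ** (12 / doubling_months)

moore = growth_per_year(24)       # doubling every 2 years
ai_compute = growth_per_year(3.5) # doubling every 3.5 months

print(f"Moore's law: ~{moore:.2f}x per year")   # ~1.41x
print(f"AI compute:  ~{ai_compute:.1f}x per year")  # ~10.8x
```

In other words, the trend the article describes is roughly an order of magnitude of growth per year, versus Moore's law's roughly 41 percent.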

6. Podcast: Making intelligence intelligible with Dr. Rich Caruana, Microsoft Research Podcast

One of the big problem areas for AI's progression will be explainability. Caruana has been working on this problem for over a decade. In the U.S. there have been a couple of curious cases involving recidivism models: some people sued the state or the company that made these models to get an explanation, 'why do you think I have a higher chance of recidivism?'. But courts always ruled in favor of the company, either to protect IP or to avoid making criminals smarter in a cat-and-mouse game. In the EU this is different: the GDPR forces companies to be able to explain their algorithms. Research in this area is very important, because how can you explain a model with millions of parameters? Potentially this can become a real bottleneck for exploration too; if DeepMind had to fully explain their models, AlphaGo wouldn't be as smart as it is now. Caruana made a 'student' model that can mimic the original algorithm (in some specific cases), and although it is not as sophisticated as the real model, it should be able to explain it.
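The 'student' idea can be illustrated with a minimal sketch (my own toy example, not Caruana's actual method): train a small interpretable model on the predictions of a black-box model, so the simple model learns to mimic the complex one's decisions.

```python
# Toy "student mimics teacher" sketch: a shallow decision tree is trained
# on the predictions of a black-box random forest, not on the true labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# "Black box" teacher: accurate but hard to explain.
teacher = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The student is fit to the teacher's *predictions*, so it imitates the
# teacher's decision surface; being a depth-4 tree, it is human-readable.
student = DecisionTreeClassifier(max_depth=4, random_state=0)
student.fit(X, teacher.predict(X))

fidelity = accuracy_score(teacher.predict(X), student.predict(X))
print(f"student/teacher agreement: {fidelity:.2f}")
```

The resulting tree won't match the forest everywhere, but wherever it does agree, its if-then rules serve as an explanation of what the black box is doing.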

7. Article: Facebook is designing its own chips to help filter live videos., Bloomberg

It's interesting to see that companies like Facebook have such big use cases for specific A.I. tasks that off-the-shelf chips carry too much unnecessary overhead. It seems we are moving beyond the von Neumann architecture: it's no longer the general-purpose CPU that does the job, but much more specialized chips. Google already designed the TPU, and now Facebook arguably takes it even further with this step. Chip design is a very interesting area in general, and there is more room for smaller innovators than before. It will be interesting to see how Nvidia, Intel and AMD respond to these changes.
If you are interested in this topic, I would recommend listening to 'Clouds, catapults and life after the end of Moore's Law with Dr. Doug Burger'.

8. Cambridge Analytica: how did it turn clicks into votes?

After all the noise around the Cambridge Analytica/Facebook "leak", it's good to finally read what they really did. To be honest, they just took the Obama 2008 approach, cut out the ethical boundaries, and tuned it to the 2016 tech stack. No rocket science at all. But it does show how easy it is to abuse data if you don't actively try to protect it. Companies will be collecting enormous amounts of data about you, and with home automation and new analytics systems this will only grow exponentially. Do you know what happens with the data from your smart watch, TV, fridge, thermostat, etc.? Chances are, it's stored on some server belonging to that vendor.

9. Article: How Frightened Should We Be of A.I.?

The New Yorker wrote a well-informed piece on AI safety research and AGI, including typical New Yorker comics. I think the piece demonstrates very clearly why it's not a good idea to leave the development and regulation of AI to Silicon Valley alone, because Silicon Valley is not at all a good representation of society.

10. MOOC: Google’s practica: Image Classification

Google published their first practicum, on image classification, and promised to publish more practica in the future. This one teaches you how to actually use, program, and optimize image classification models.
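To give a taste of what such a course covers, here is a toy image-classification sketch (my own illustration using scikit-learn's built-in digits dataset, not the practicum's actual TensorFlow code):

```python
# Toy image classification: recognize 8x8 grayscale digit images (0-9).
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1797 images, each flattened to 64 pixel values
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# A simple linear classifier already does well on this easy dataset;
# the practicum works with convolutional networks on harder images.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

The practicum itself goes further, into convolutional architectures and optimization tricks, but the pipeline (load labeled images, fit a model, evaluate on held-out data) is the same.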


Book: The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity by Byron Reese

In the same vein as Human + Machine: Reimagining Work in the Age of AI by Daugherty & Wilson, this book aims to be of a more practical nature. I would recommend Human + Machine first, but this is still a recommended read.

Article: Prefrontal cortex as a meta-reinforcement learning system | DeepMind

DeepMind draws a lot of inspiration from the human brain to improve its methods, and some of its researchers have a background in neuroscience (among them CEO Demis Hassabis). Their newest insight came from looking at the brain's reward system and its use of dopamine.

Podcast: Netflix’s Justin Basilico on How Entertainment and AI Intersect, Nvidia AI Podcast

While Basilico of course couldn't go very deep into the technical solutions that Netflix uses, it is fascinating to hear how a company like Netflix, with its huge troves of data, approaches AI.


