Every month I publish a blog post summarising what I think are the 10 highlights in content on AI & data at large. I write this monthly overview with two objectives in mind: first, to structure and process my own thoughts, and second, to give back to the community. You can consume years' worth of great content, completely free. There are some great initiatives that gather the best articles, and they have been very helpful to me. I hope this monthly blog is helpful to you as well.
1. Google Duplex content
Yes, this month was really about the Google Duplex tech demo. If you haven’t watched it yet:
When I read the Google blog post “Semantic Experiences by Ray Kurzweil & Rachel Bernstein”, I was really impressed, but I didn’t foresee that they could also build this with it. Read the Google blog post about the underlying techniques and ideas. Some of my thoughts on this topic (as mentioned on Twitter):
1. This will be an amazing tool for credit card scammers, Nigerian princes, etc. If this tech really passes the Turing test, it will be hard to arm yourself against it, with or without tech.
2. The horrified/scared public reaction by some seems to debunk the original notion of the uncanny valley. If the notion were true, we would only welcome the ‘humanness’.
3. I can’t wait to use this tech to handle and filter part of my incoming and outgoing mail and phone calls. It would be great if this bot could unsubscribe me from a service without me having to wrestle through sales reps. Sweet revenge for all those call center menus.
4. Governments really need to get their act together on regulation, and need serious tech consultancy and training. You can already see misguided regulation coming up, while this certainly needs regulation done well.
5. While the tech has great potential to close the gap between rich and poor, unfortunately it’s likely to only widen it.
6. This tech demo will stir up the AI hype even more. Keep in mind that this was a tech demo, and one that the company with arguably the most AI talent under one roof built over the last couple of years. Nothing outside of tech demos comes close to it. Compare it to Alexa, for instance, where you often need multiple tries to get it right. It’s very impressive, and really cool long term. But people will overrate AI short term because of demos like these.
7. Google completely overshadowed Microsoft’s and Facebook’s tech shows with this demo alone. And it does seem that Google is at least a bit ahead of its competitors.
2. A.I. Is Harder Than You Think, by Gary Marcus and Ernest Davis, New York Times
Gary Marcus is one of the most critical scientists among AI researchers regarding the current approach to AI. He wrote the heavily discussed paper Deep Learning: A Critical Appraisal. In this New York Times article, he and Davis use the Google Duplex demo to criticize the progress of the AI field and conclude that the dominant approach to AI doesn’t work. They advocate first learning more about the fundamental elements of human understanding. While their critique has some merit, I think you could apply it to any emerging technology. There were plenty who advocated looking for new approaches when things like the smartphone or the car were invented, but that didn’t stop those fields from advancing.
3. Facebook Adds A.I. Labs in Seattle and Pittsburgh, Pressuring Local Universities, The New York Times
As I’ve written in earlier posts, big dollars from the big tech companies are draining universities. From a professor’s perspective it’s completely understandable. If you have been working at a university for years, for a meager salary, with all the limitations and frustrations of university research life, I can imagine it’s very appealing to join a big company with an astronomical salary, a great data set, and all the tools you need. From a societal perspective, however, it becomes very worrying if universities are drained of their best researchers. It has now come to the point that if you want to work with the top scientists, you have to join Google, because that’s where they are and where you can learn from them. If these companies hold a monopoly not only on datasets but also on data talent, it becomes really scary.
We’ve seen quite a lot of generated video and audio lately (examples are the Obama/Trump videos or the porn videos with celebrities). In this video, researchers explain in plain English how these models work and how video is generated.
5. Article: AI Doubling Its Compute Every 3.5 Months, Synced
According to Moore’s law, computational power doubles every two years. Advances in both the software and hardware layers have made it possible to go much faster than that: it now takes only approximately 3.5 months for the compute used in AI to double. If you look a bit behind the numbers there are some caveats, but it’s very impressive regardless.
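To make the difference concrete, here is a small back-of-the-envelope sketch. The doubling periods are the ones mentioned above; the five-year horizon is just an illustration I picked, not a figure from the article:

```python
# Compare growth implied by Moore's law (doubling every ~24 months)
# with the reported AI-compute trend (doubling every ~3.5 months).

def growth_factor(months: float, doubling_period_months: float) -> float:
    """Factor by which capacity grows after `months`, given a doubling period."""
    return 2 ** (months / doubling_period_months)

years = 5
months = years * 12

moore = growth_factor(months, 24.0)      # Moore's law pace
ai_compute = growth_factor(months, 3.5)  # reported AI-compute trend

print(f"After {years} years, Moore's law gives ~{moore:.0f}x more compute")
print(f"After {years} years, the 3.5-month trend gives ~{ai_compute:,.0f}x")
```

Over five years that is roughly a 6x gain at Moore's-law pace versus well over a hundred-thousand-fold gain at the 3.5-month pace, which is why the caveats behind the numbers matter so much.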
6. Podcast: Making intelligence intelligible with Dr. Rich Caruana, Microsoft Research Podcast
One of the big problem areas for AI’s progression will be explainability. Caruana has been working on this problem for over a decade. In the U.S. there have been a couple of curious cases with recidivism models. Some people sued the state or the company that built these models to get an explanation: ‘why do you think I have a higher chance of recidivism?’. But courts always ruled in favor of the company, either to protect IP or to avoid making criminals smarter in a cat-and-mouse game. In the EU this is different: the GDPR forces companies to be able to explain their algorithms. Research in this area is very important, because how do you explain a model with millions of parameters? Potentially this can become a real bottleneck for exploration too: if DeepMind had to fully explain its models, AlphaGo wouldn’t be so smart now. Caruana built a ‘student’ model that mimics the original algorithm (in some specific cases); although not as sophisticated as the real model, it can be used to explain the model.
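The student-model idea can be sketched in a few lines. The ‘black-box’ risk scorer, the feature names, and the decision-stump student below are all hypothetical illustrations I made up, not Caruana’s actual method (his student models are far more capable), but the pattern is the same: query the opaque model for labels, fit a simple model to those labels, and read the explanation off the simple model.

```python
# Toy sketch of training an interpretable "student" to mimic a black box.
import random

def black_box_risk(prior_offenses: int, age: int) -> int:
    """Stand-in for an opaque model: returns 1 (high risk) or 0 (low risk)."""
    score = 0.4 * prior_offenses - 0.05 * age + 1.0
    return 1 if score > 0.5 else 0

def fit_stump(samples, labels):
    """Brute-force the single-feature threshold rule that best mimics the labels."""
    best = None
    for feat in range(len(samples[0])):
        for threshold in sorted({s[feat] for s in samples}):
            preds = [1 if s[feat] >= threshold else 0 for s in samples]
            agreement = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if best is None or agreement > best[0]:
                best = (agreement, feat, threshold)
    return best  # (agreement with black box, feature index, threshold)

random.seed(0)
# Query the black box on sampled inputs to create training labels for the student.
samples = [(random.randint(0, 10), random.randint(18, 70)) for _ in range(500)]
labels = [black_box_risk(p, a) for p, a in samples]

acc, feat, threshold = fit_stump(samples, labels)
feature_names = ["prior_offenses", "age"]
print(f"Student rule: high risk if {feature_names[feat]} >= {threshold} "
      f"(agrees with the black box on {acc:.0%} of samples)")
```

Here the student is a single threshold rule, so the ‘explanation’ is just that rule plus how often it agrees with the black box; the real work is in making a student rich enough to be faithful while staying interpretable.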
7. Article: Facebook is designing its own chips to help filter live videos, Bloomberg
It’s interesting to see that companies like Facebook have such big use cases for specific A.I. tasks that off-the-shelf chips carry too much unnecessary overhead. It seems we are moving beyond the von Neumann architecture: it’s no longer all-purpose CPUs doing the job, but much more specialized chips. Google already designed the TPU, and now Facebook arguably takes it even further with this step. Chip design is a very interesting area in general; there is more room for smaller innovators than before. It will be interesting to see how Nvidia, Intel, and AMD respond to these changes.
If you are interested in this topic, I recommend listening to ‘Clouds, catapults and life after the end of Moore’s Law with Dr. Doug Burger’.
After all the noise around the Cambridge Analytica Facebook “leak”, it’s good to finally read what they really did. To be honest, they just took the Obama 2008 approach, cut out the ethical boundaries, and tuned it to the 2016 tech stack. No rocket science at all. But it does show how easy it is to abuse data if you don’t actively try to protect it. Companies will keep collecting enormous amounts of data about you, and with home automation and new analytics systems this will only grow exponentially. Do you know what happens with the data from your smart watch, TV, fridge, thermostat, etc.? Chances are it’s stored on some server belonging to that vendor.
The New Yorker wrote a well-informed piece on AI safety research and AGI, including typical New Yorker comics. I think the piece demonstrates very clearly why it’s not a good idea to leave the development and regulation of AI to Silicon Valley alone, because it’s not at all a good representation of society.
10. MOOC: Google’s practica: Image Classification
Google published their first practicum, on image classification, and promised to publish more practica in the future. This one teaches you how to actually use, program, and optimize image classification models.
Book: The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity by Byron Reese
In the same vein as Human + Machine: Reimagining Work in the Age of AI by Daugherty & Wilson, this book aims to be of a more practical nature. I would pick up Human + Machine first, but this one is still worth reading.
DeepMind draws a lot of inspiration from the human brain to improve its methods; some of its researchers have a background in neuroscience (among them CEO Demis Hassabis). Their newest insight came from looking at the brain’s reward system: dopamine.
Podcast: Netflix’s Justin Basilico on How Entertainment and AI Intersect, Nvidia AI Podcast
While Basilico of course couldn’t go very deep into the technical solutions Netflix uses, it is fascinating to hear how a company like Netflix, with its huge troves of data, approaches AI.