Deep Hunt — Issue #54

Here are the highlights of the week: OpenAI’s bot beats top Dota 2 player; Andrew Ng’s Next Trick: Training a Million AI Experts; Captioning Novel Objects in Images; Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm

Avinash Hindupur
Deep Hunt
4 min read · Aug 12, 2017


News

OpenAI’s bot beats top Dota 2 player so badly that he quits

An artificial intelligence has beaten one of the world’s top Dota 2 players in single combat today. Danil Ishutin, better known by his gaming handle “Dendi,” threw in the towel in the middle of a second game against a bot that OpenAI created, one that had been beating him handily.

DeepMind and Blizzard open StarCraft II as an AI research environment

Along with Blizzard Entertainment, DeepMind announces the release of the StarCraft II Learning Environment (SC2LE), a suite of tools to accelerate AI research in the real-time strategy game and make it easier for researchers to focus on the frontiers of this field.

NVIDIA Gives Away More V100s to AI Researchers

NVIDIA hands out V100s, the world’s most powerful GPUs offering more than 100 teraflops of deep learning performance, to 15 recipients from major labs across the world.

Articles

Andrew Ng’s Next Trick: Training a Million AI Experts

Millions of people should master deep learning, says leading AI researcher and educator Andrew Ng, who has launched deeplearning.ai. You can watch the series of interviews he did with top AI leaders as part of these online courses here.

Tutorials, Tools and Tips

Captioning Novel Objects in Images

Current visual description or image captioning models work quite well, but they can only describe objects seen in existing image-captioning training datasets. So what can be done about novel objects? That is the question this work tackles.

EffectiveTensorflow: Tensorflow tutorials and best practices.

Here is a cool resource for learning all about TensorFlow best practices, along with a bunch of tutorials.

Jeff Dean’s Lecture for YC AI

Jeff Dean is a Google Senior Fellow in the Research Group, where he leads the Google Brain project. He spoke to the YC AI group this summer. Watch the talk and read his slides here.

Research

MIT: Visual Importance

Knowing where people look and click on visual designs can provide clues about how the designs are perceived, and where the most important or relevant content lies. This research makes great strides in predicting that visual importance.

Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm

Through emoji prediction on a dataset of 1,246 million tweets, each containing one of 64 common emojis, this group of researchers obtains state-of-the-art performance on 8 benchmark datasets for sentiment, emotion and sarcasm detection using a single pretrained model.
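The core idea is distant supervision: emojis act as noisy labels for pretraining, and the learned representation then transfers to related downstream tasks. Here is a deliberately tiny toy sketch of that idea in plain Python — the real model is a pretrained neural network with a far richer representation, and all data and mappings below are made up for illustration:

```python
from collections import Counter, defaultdict

# Toy "tweets" with emojis as distant labels (the actual work uses
# 1,246 million tweets and 64 emojis; these examples are invented).
pretraining_tweets = [
    ("what a great day", "😂"),
    ("this is great fun", "😂"),
    ("i am so sad today", "😢"),
    ("sad news this morning", "😢"),
]

# "Pretraining": learn word -> emoji co-occurrence counts.
word_emoji = defaultdict(Counter)
for text, emoji in pretraining_tweets:
    for word in text.split():
        word_emoji[word][emoji] += 1

def emoji_representation(text):
    """Transfer step: represent new text by the emoji counts its words evoke."""
    scores = Counter()
    for word in text.split():
        scores.update(word_emoji[word])
    return scores

# Downstream task: map the dominant predicted emoji to a sentiment label.
emoji_polarity = {"😂": "positive", "😢": "negative"}

def predict_sentiment(text):
    rep = emoji_representation(text)
    if not rep:
        return "neutral"  # no known words, no signal
    top_emoji, _ = rep.most_common(1)[0]
    return emoji_polarity[top_emoji]

print(predict_sentiment("great fun today"))  # -> positive
print(predict_sentiment("sad day"))          # -> negative
```

The point of the sketch is the pipeline shape: no sentiment labels are ever used during pretraining — the emojis themselves supply the (noisy) supervision, and the downstream task only adapts the resulting representation.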

If you like what you are reading, please follow and recommend it to your friends, or give a shoutout on Twitter! I’d be glad to hear your suggestions and recommendations @deephunt_in or in the comments below!



Dreamer, @iitguwahati alum. Creator of @deephunt_in, Organiser @ DeepLearningDelhi | Interested in all things data and machine learning.