Interesting Stuff in AI, Machine Learning, and Deep Learning 2017–10 #5

Shan Tang · Published in BuzzRobot · Oct 31, 2017
These Images are Generated by a Deep Learning GAN

“This is my shortest article ever. I am speechless. Here’s a video about this:”

It is based on the paper “Progressive Growing of GANs for Improved Quality, Stability, and Variation”:
http://research.nvidia.com/sites/default/files/pubs/2017-10_Progressive-Growing-of//karras2017gan-paper.pdf
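
The core idea named in the title, growing both generator and discriminator from low to high resolution during training while fading new layers in smoothly, can be pictured with a small schedule sketch. This is a hypothetical illustration of that idea, not the authors’ code; the phase length, resolutions, and fade-in rule below are made up.

```python
import math

def progressive_schedule(step, steps_per_phase=10_000, start_res=4, max_res=1024):
    """Return (resolution, fade_in_alpha) for a global training step.

    Illustrative only: each phase doubles the output resolution, and the
    newly added layers are faded in over the first half of the phase
    (alpha ramps from 0 to 1).
    """
    last_phase = int(math.log2(max_res // start_res))
    phase, phase_step = divmod(step, steps_per_phase)
    if phase > last_phase:                      # finished growing
        return max_res, 1.0
    resolution = start_res * (2 ** phase)
    if phase == 0:
        return resolution, 1.0                  # nothing new to fade in yet
    return resolution, min(1.0, 2.0 * phase_step / steps_per_phase)

# A generator/discriminator pair would add a block each time `resolution`
# doubles and blend outputs as (1 - alpha) * upsampled_old + alpha * new.
for s in (0, 12_000, 15_000, 95_000):
    print(s, progressive_schedule(s))
```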

The Real Story of Automation Beginning with One Simple Chart

“There’s a chart I came across earlier this year, and not only does it tell an extremely important story about automation, but it also tells a story about the state of the automation discussion itself. It even reveals how we can expect both automation and the discussion around automation to continue unfolding in the years ahead. The chart is a plot of oil rigs in the United States compared to the number of workers the oil industry employs, and it’s an important part of a puzzle that needs to be pieced together before it’s too late.”

The 10 Top Recommendations for the AI Field in 2017

“Today we released our second annual research report on the state of artificial intelligence. Since last year’s report, we’ve seen early stage AI technologies continue to filter into many everyday systems: from scanning faces at airport security, to recommending to hire someone, to granting someone bail, to denying someone a loan. This report was developed for our annual AI Now Experts’ Workshop, which included 100 invited researchers across relevant domains, and it reflects a range of views that were discussed at the event.”

A List of Chip/IP for Deep Learning (keep updating)

“Machine learning, especially deep learning technology, is driving the evolution of artificial intelligence (AI). In the beginning, deep learning was primarily a software play. Starting in 2016, the need for more efficient hardware acceleration of AI/ML/DL was recognized in academia and industry. This year, we have seen more and more players jump into the race, including the world’s top semiconductor companies, a number of startups, and even tech giants such as Google. I believe it is very interesting to look at them together, so I built this list of AI/ML/DL ICs and IPs on GitHub and keep it updated. If you have any suggestions or new information, please let me know.”

How to unit test machine learning code.

“Over the past year, I’ve spent most of my working time doing deep learning research and internships. And a lot of that year was making very big mistakes that helped me learn not just about ML, but about how to engineer these systems correctly and soundly. One of the main principles I learned during my time at Google Brain was that unit tests can make or break your algorithm and can save you weeks of debugging and training time.”
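
One concrete example of the kind of test the author is advocating is checking that a single training step actually updates every trainable parameter. A minimal sketch in PyTorch; the model, data, and loss here are placeholders of my own, not the post’s code.

```python
import torch
import torch.nn as nn


def test_training_step_updates_parameters():
    """All trainable parameters should change after one optimization step."""
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    x, y = torch.randn(8, 10), torch.randn(8, 1)
    before = [p.clone() for p in model.parameters()]

    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    for old, new in zip(before, model.parameters()):
        assert not torch.equal(old, new), "a parameter was never updated"


if __name__ == "__main__":
    test_training_step_updates_parameters()
    print("ok")
```

A test like this catches silent failures such as frozen layers, a misconnected optimizer, or a loss that never reaches part of the graph.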

First assessment of learning-to-rank

“The Search Platform Team has been working on improving search on Wikimedia projects with machine learning. Machine learned-ranking (MLR) enables us to rank relevance of pages using a model trained on implicit and explicit judgements. In the first test of the learning-to-rank (LTR) project, we evaluated the performance of a click-based model on users searching English Wikipedia. We found that users were slightly more likely to engage with MLR-provided results than with BM25 results (assessed via the clickthrough rate and a preference statistic). We also found that users with machine learning-ranked results were statistically significantly more likely to click on the first search result first than users with BM25-ranked results, which indicates that we are onto something. The next step for us is to evaluate the model’s performance on Wikipedia in other languages.”
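
The headline comparison they describe, clickthrough rate for machine-learned-ranked results versus BM25 results plus a significance check, reduces to comparing two proportions. A rough sketch of that calculation; the bucket sizes and click counts below are invented for illustration and are not Wikimedia’s numbers.

```python
import math

# Hypothetical per-bucket counts: searches, and searches with at least one click.
buckets = {
    "mlr":  {"searches": 50_000, "clicked": 21_500},
    "bm25": {"searches": 50_000, "clicked": 20_900},
}

ctr = {name: b["clicked"] / b["searches"] for name, b in buckets.items()}
print({name: round(rate, 4) for name, rate in ctr.items()})

# Two-proportion z-test for "MLR clickthrough rate > BM25 clickthrough rate".
n1, x1 = buckets["mlr"]["searches"], buckets["mlr"]["clicked"]
n2, x2 = buckets["bm25"]["searches"], buckets["bm25"]["clicked"]
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (x1 / n1 - x2 / n2) / se
p_value = 0.5 * math.erfc(z / math.sqrt(2))   # one-sided
print(f"z = {z:.2f}, one-sided p = {p_value:.4f}")
```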

Common Sense, Cortex, and CAPTCHA

“What kind of generative model would suffice for common sense? One way to approach this is to instead ask: what kind of model does the human visual system build? In our recent Science paper, we take a step towards answering these questions by demonstrating how clues from the cortex can be incorporated into a computer vision model we call the Recursive Cortical Network (RCN) [4]. In this blog post, we describe RCN in the context of common sense, cortex, and our long-term research ambitions at Vicarious.”

How Humanity Can Build Benevolent Artificial Intelligence

“Artificial Intelligence gets a bad rap. Any time an AI appears in a movie, we can safely predict that it will turn malevolent in the second act. Twenty minutes into Westworld or I, Robot, we all knew what was coming — the AI will turn evil, forcing humans to fight them as enemies.”

As deep learning frameworks converge, automation possibilities unfold

“Getting productive on a new DL project may require that DL developers be cross-trained on a different modeling framework. However, this requirement is becoming more cumbersome as the range of open-source and commercial DL frameworks grows.”

Ranking Popular Deep Learning Libraries for Data Science

“At The Data Incubator, we pride ourselves on having the most up to date data science curriculum available. Much of our curriculum is based on feedback from corporate and government partners about the technologies they are using and learning. In addition to their feedback we wanted to develop a data-driven approach for determining what we should be teaching in our data science corporate training and our free fellowship for masters and PhDs looking to enter data science careers in industry. Here are the results.”
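
A “data-driven approach” of this kind usually means scoring each library on several observable activity signals and combining the standardized scores. A toy sketch of such a ranking; the libraries, signals, and counts below are invented for illustration and are not The Data Incubator’s data.

```python
import statistics

# Hypothetical activity counts per library (e.g. GitHub stars, Stack Overflow
# questions, relative search interest). Invented numbers, for illustration only.
signals = {
    "tensorflow": {"github": 70_000, "stackoverflow": 18_000, "search": 100},
    "keras":      {"github": 20_000, "stackoverflow":  6_000, "search":  45},
    "caffe":      {"github": 21_000, "stackoverflow":  2_700, "search":  25},
    "pytorch":    {"github":  9_000, "stackoverflow":  1_500, "search":  20},
}

def standardize(values):
    """z-score raw counts so signals on different scales are comparable."""
    mean, stdev = statistics.mean(values), statistics.pstdev(values)
    return [(v - mean) / stdev for v in values]

names = list(signals)
metrics = ["github", "stackoverflow", "search"]
zscores = {m: standardize([signals[n][m] for n in names]) for m in metrics}

# Overall score: mean of a library's standardized signals.
scores = {n: statistics.mean(zscores[m][i] for m in metrics)
          for i, n in enumerate(names)}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {score:+.2f}")
```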

Learning a Hierarchy

“We’ve developed a hierarchical reinforcement learning algorithm that learns high-level actions useful for solving a range of tasks, allowing fast solving of tasks requiring thousands of timesteps. Our algorithm, when applied to a set of navigation problems, discovers a set of high-level actions for walking and crawling in different directions, which enables the agent to master new navigation tasks quickly.”
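
The structure described here, a high-level controller that picks among learned low-level behaviors and holds each one for many primitive timesteps, can be sketched as a two-level control loop. A hypothetical skeleton, not OpenAI’s implementation; the environment, sub-policies, and master policy below are placeholders.

```python
import random

class DummyEnv:
    """Trivial stand-in for a navigation environment with a gym-like interface."""
    def __init__(self, n_actions=4):
        self.n_actions, self.t = n_actions, 0
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, action):
        self.t += 1
        reward = 1.0 if action == 0 else 0.0   # pretend action 0 moves toward the goal
        return float(self.t), reward, self.t >= 500, {}

class RandomSubPolicy:
    """Placeholder for a learned low-level skill (e.g. 'walk north')."""
    def __init__(self, n_actions, seed):
        self.rng, self.n_actions = random.Random(seed), n_actions
    def act(self, observation):
        return self.rng.randrange(self.n_actions)

def run_hierarchy(env, sub_policies, master_policy, hold=200, total_steps=2_000):
    """The master picks a sub-policy every `hold` steps; the chosen sub-policy
    emits primitive actions in between."""
    obs, total_reward = env.reset(), 0.0
    for step in range(total_steps):
        if step % hold == 0:
            current = sub_policies[master_policy(obs)]
        obs, reward, done, _ = env.step(current.act(obs))
        total_reward += reward
        if done:
            obs = env.reset()
    return total_reward

env = DummyEnv()
subs = [RandomSubPolicy(env.n_actions, seed=i) for i in range(3)]
master = lambda obs: 0   # a real master policy would be learned, not fixed
print(run_hierarchy(env, subs, master))
```

In the real system the low-level skills themselves are learned; the quoted payoff is that new tasks can then be mastered quickly by recombining those skills.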

Weekly Digest Oct. 2017 #1

Weekly Digest Oct. 2017 #2

Weekly Digest Oct. 2017 #3

Weekly Digest Oct. 2017 #4


Shan Tang

Since 2000, I have worked as an engineer, architect, or manager on different types of IC projects. Since mid-2016, I have been working on hardware for Deep Learning.