A.I. Articles of the Week, Jan. 2018 #4

Shan Tang
Published in BuzzRobot
5 min read · Jan 22, 2018

The Google Brain Team — Looking Back on 2017 (Part 1 of 2)

The Google Brain Team — Looking Back on 2017 (Part 2 of 2)

The Google Brain team works to advance the state of the art in artificial intelligence by research and systems engineering, as one part of the overall Google AI effort. Last year we shared a summary of our work in 2016. Since then, we’ve continued to make progress on our long-term research agenda of making machines intelligent, and have collaborated with a number of teams across Google and Alphabet to use the results of our research to improve people’s lives. This first of two posts will highlight some of our work in 2017, including some of our basic research work, as well as updates on open source software, datasets, and new hardware for machine learning. In the second post we’ll dive into the research we do in specific domains where machine learning can have a large impact, such as healthcare, robotics, and some areas of basic science, as well as cover our work on creativity, fairness and inclusion and tell you a bit more about who we are.

Cloud AutoML: Making AI accessible to every business

Cloud AutoML helps businesses with limited ML expertise start building their own high-quality custom models by using advanced techniques like learning2learn and transfer learning from Google. We believe Cloud AutoML will make AI experts even more productive, advance new fields in AI and help less-skilled engineers build powerful AI systems they previously only dreamed of.
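Cloud AutoML's internals are not public, but the transfer-learning idea it builds on is easy to sketch: reuse a pretrained backbone and train only a small task-specific head on your own labels. The snippet below is a minimal Keras illustration of that idea, not AutoML itself; the class count and data are placeholders.

```python
# Minimal transfer-learning sketch (illustrative stand-in for what Cloud
# AutoML automates): frozen pretrained backbone + small trainable head.
import tensorflow as tf

num_classes = 5  # placeholder: e.g. your own product categories

# Pretrained feature extractor with weights frozen, so only the head learns.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # supply your own data
```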

Alexa, What Are You Doing with My Family’s Personal Info?

Amazon Alexa, Google Assistant and several smart-home technologies that debuted at last week’s CES add convenience but also raise privacy concerns

No, machines can’t read better than humans

Headlines have claimed AIs outperform humans at ‘reading comprehension,’ but in reality they’ve got a long way to go

How to Design Social Systems (Without Causing Depression and War)

“In my note to Mark Zuckerberg (which you probably want to read first), I urged his team and other technologists to reimagine their products as “practice spaces” — virtual places where people practice the kinds of acts and relationships they find meaningful.”

Getting Started with TensorFlow: A Machine Learning Tutorial

TensorFlow is an open source software library created by Google that is used to implement machine learning and deep learning systems. These two fields comprise a family of powerful algorithms that share a common goal: allowing a computer to learn how to automatically spot complex patterns and/or make the best possible decisions. If you’re interested in the details of these systems, you can learn more from the Toptal blog posts on machine learning and deep learning.
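For a flavor of what "letting a computer learn patterns" looks like in TensorFlow, here is a tiny example that fits a line to noisy data. The tutorial itself targets the older graph-and-session API, so treat this eager-style sketch as illustrative rather than a copy of its code.

```python
# Learn the slope and intercept of y = 3x + 2 from noisy samples.
import tensorflow as tf

# Synthetic data: y = 3x + 2 plus a little noise.
x = tf.random.uniform([200, 1], -1.0, 1.0)
y = 3.0 * x + 2.0 + tf.random.normal([200, 1], stddev=0.1)

w = tf.Variable(0.0)
b = tf.Variable(0.0)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

for step in range(200):
    with tf.GradientTape() as tape:
        pred = w * x + b
        loss = tf.reduce_mean(tf.square(pred - y))  # mean squared error
    grads = tape.gradient(loss, [w, b])
    optimizer.apply_gradients(zip(grads, [w, b]))

print("learned w, b:", w.numpy(), b.numpy())  # should approach 3 and 2
```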

In defense of skepticism about deep learning

“In a recent appraisal of deep learning (Marcus, 2018) I outlined ten challenges for deep learning, and suggested that deep learning by itself, although useful, was unlikely to lead on its own to artificial general intelligence. I suggested instead that deep learning be viewed ‘not as a universal solvent, but simply as one tool among many.’”

Introduction to Various Reinforcement Learning Algorithms. Part I (Q-Learning, SARSA, DQN, DDPG)

Reinforcement Learning (RL) refers to a family of machine learning methods in which an agent receives a delayed reward at the next time step as feedback on its previous action. It has mostly been applied to games (e.g. Atari, Mario), where it performs on par with or even better than humans. More recently, combined with neural networks, these algorithms have become capable of solving more complex tasks, such as the pendulum problem.
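As a concrete anchor for the first algorithm in the article's list, here is a minimal tabular Q-learning loop on a toy five-state corridor. The environment and hyperparameters are made up for illustration and are not taken from the article.

```python
# Tabular Q-learning on a toy corridor: start at state 0, reward +1 for
# reaching state 4. Actions: 0 = left, 1 = right.
import random

N_STATES, ACTIONS = 5, [0, 1]
alpha, gamma, epsilon = 0.1, 0.9, 0.3   # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    """Move left/right; reaching the last state gives reward 1 and ends."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next-state value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)  # action 1 (right) should end up with the higher value in states 0-3
```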

I trained fake news detection AI with >95% accuracy, and almost went crazy

We built a fake news detector with above 95% accuracy (on a validation set) using machine learning and natural language processing, which you can download here. In the real world, the accuracy might be lower, especially as time goes on and the way articles are written changes.
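The excerpt does not spell out the pipeline, so purely as an illustration of the common recipe such detectors follow (TF-IDF text features feeding a linear classifier), here is a toy scikit-learn sketch. The texts, labels, and model choice are placeholders, not the authors' actual system.

```python
# Toy fake-news classifier: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Scientists publish peer-reviewed study on climate trends",
    "SHOCKING: miracle cure the government doesn't want you to see",
    "Central bank announces quarter-point interest rate change",
    "You won't believe this one weird trick to get rich overnight",
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Miracle trick doctors don't want you to know"]))  # likely [1]
```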

The light and dark of AI-powered smartphones

Analyst Gartner put out a 10-strong listicle this week identifying what it dubbed “high-impact” uses for AI-powered features on smartphones that it suggests will enable device vendors to provide “more value” to customers via the medium of “more advanced” user experiences.

What AI can and can’t do (yet) for your business

Artificial intelligence is a moving target. Here’s how to take better aim.

Best of CES 2018: The one company vital to gaming, self-driving cars, and AI

Quartz’s time at the Consumer Electronics Show in Las Vegas has come to a close, and we’ve reflected on a week of being inundated with gadgets, technology, and pitches.

Optimizing Mobile Deep Learning on ARM GPU with TVM

TVM addresses the difficulty of deploying to different hardware by introducing a unified IR stack, with which optimization for different hardware targets can be done easily. In this post, we show how we use TVM/NNVM to generate efficient kernels for the ARM Mali GPU and do end-to-end compilation. In our tests on a Mali-T860 MP4, our method is 1.4x faster than the Arm Compute Library on VGG-16 and 2.2x faster on MobileNet. Both graph-level and operator-level optimization contribute to this speedup.
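The post itself uses the NNVM-era API, which TVM has since replaced. Purely as an illustration of the compile-then-run flow it describes (define or import a model, build it for a target, execute the generated kernels), here is a small sketch against the current Relay API on a local CPU target so it stays self-contained; the Mali-specific target and schedules the post describes sit below this level.

```python
# Rough sketch of TVM's compile-and-run flow (current Relay API, local CPU).
# Targeting Mali would mean an OpenCL target instead of "llvm" plus the
# Mali-tuned schedules described in the post.
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# A tiny one-layer network expressed directly in Relay IR.
data = relay.var("data", shape=(1, 8), dtype="float32")
weight = relay.var("weight", shape=(4, 8), dtype="float32")
net = relay.nn.relu(relay.nn.dense(data, weight))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], net))

# Graph-level and operator-level optimization happen inside relay.build.
lib = relay.build(mod, target="llvm")

dev = tvm.cpu()
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data", np.random.rand(1, 8).astype("float32"))
module.set_input("weight", np.random.rand(4, 8).astype("float32"))
module.run()
print(module.get_output(0).numpy().shape)  # (1, 4)
```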

My Journey Into Data Science and Bio-Informatics — Part 1: Programming

“Algorithms are the new drugs, and doctors the new technology prescribers.” — Hugh Harvey, radiologist

A List of Chip/IP for Deep Learning (keep updating)

Machine learning, especially deep learning, is driving the evolution of artificial intelligence (AI). In the beginning, deep learning was primarily a software play. Starting in 2016, the need for more efficient hardware acceleration of AI/ML/DL was recognized in academia and industry. This year, we saw more and more players jump into the race, including the world’s top semiconductor companies, a number of startups, and even tech giants such as Google. I believe it is very interesting to look at them together, so I built this list of AI/ML/DL ICs and IPs on GitHub and keep it updated. If you have any suggestions or new information, please let me know.

Weekly Digest Dec. 2017 #1

Weekly Digest Dec. 2017 #2

Weekly Digest Dec. 2017 #3

Weekly Digest Dec. 2017 #4

Weekly Digest Jan. 2018 #1

Weekly Digest Jan. 2018 #2

Weekly Digest Jan. 2018 #3


Shan Tang
BuzzRobot

Since 2000, I have worked as an engineer, architect, or manager on different types of IC projects. Since mid-2016, I have been working on hardware for deep learning.