
Interesting Stuff in AI, Machine Learning, and Deep Learning, 2017-10 #3


Shan Tang · Published in BuzzRobot · 5 min read · Oct 17, 2017


1. China’s AI Awakening

The West shouldn’t fear China’s artificial-intelligence revolution. It should copy it. China might have unparalleled resources and enormous untapped potential, but the West has world-leading expertise and a strong research culture. Rather than worry about China’s progress, it would be wise for Western nations to focus on their existing strengths, investing heavily in research and education. The risk is missing out on an incredibly important technological shift. Yes, companies like Google and Facebook are making important strides in AI today, but this isn’t enough to reboot a whole economy. Despite the fanfare around AI, there are few economic signs — such as increased productivity — that most of the economy is taking advantage of the technology yet. Large segments of the economy beyond Silicon Valley, like medicine, service industries, and manufacturing, also need to sign on.

2. WaveNet launches in the Google Assistant

Just over a year ago we presented WaveNet, a new deep neural network for generating raw audio waveforms that is capable of producing better and more realistic-sounding speech than existing techniques. At that time, the model was a research prototype and was too computationally intensive to work in consumer products.
But over the last 12 months we have worked hard to significantly improve both the speed and quality of our model, and today we are proud to announce that an updated version of WaveNet is being used to generate the Google Assistant voices for US English and Japanese across all platforms.
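
For readers curious about what sits underneath this, the core building block described in the original WaveNet paper is a stack of dilated causal convolutions over the raw waveform. The sketch below is a minimal, illustrative PyTorch version of that block only; the channel count, dilation schedule, and class name are my own choices, and this is not the production model shipped in the Assistant.

```python
# Minimal sketch of WaveNet-style dilated causal convolutions (illustrative only).
import torch
import torch.nn as nn

class DilatedCausalStack(nn.Module):
    def __init__(self, channels=32, dilations=(1, 2, 4, 8, 16)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=2, dilation=d) for d in dilations
        )

    def forward(self, x):
        # x: (batch, channels, time). Left-pad each layer so the convolution is
        # causal: an output sample never depends on future input samples.
        for conv in self.layers:
            pad = (conv.kernel_size[0] - 1) * conv.dilation[0]
            x = torch.relu(conv(nn.functional.pad(x, (pad, 0))))
        return x

stack = DilatedCausalStack()
print(stack(torch.randn(1, 32, 1600)).shape)  # -> torch.Size([1, 32, 1600])
```

Stacking exponentially growing dilations is what gives the model a large receptive field over the waveform without a correspondingly large number of layers.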

3. Programmable Chips Could Move Machine Learning Out of the Cloud

Efinix, a semiconductor start-up based in Santa Clara, is trying to create a programmable chip that can be customized for machine learning used in applications like robotic arms, autonomous drones, security cameras, and driverless cars.

4. The Curse of Dimensionality

How do machines see? Or, in general, how can computers reduce an input of complex, high-dimensional data into a more manageable number of features? Extend your open hand in front of a nearby light source, so that it casts a shadow against the nearest surface. Rotate your hand and study how its shadow changes. Note that from some angles it casts a narrow, thin shadow. Yet from other angles, the shadow looks much more recognizably like the shape of a hand.
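
The shadow analogy is exactly a projection onto fewer dimensions, and techniques such as PCA pick the "viewing angle" that preserves the most structure. Below is a small sketch (my own illustration, not from the article) comparing an arbitrary axis-dropping projection with a PCA projection of a synthetic 3-D point cloud; the data and numbers are made up purely for illustration.

```python
# Compare how much structure (variance) two "shadows" of 3-D data preserve.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic cloud: wide in x and y, very thin in z (like a flat hand).
points = rng.normal(size=(1000, 3)) * np.array([5.0, 3.0, 0.2])

# Naive shadow: drop the x axis entirely (an unlucky viewing angle).
naive_shadow = points[:, 1:]

# PCA shadow: project onto the two directions of largest variance.
pca = PCA(n_components=2)
pca_shadow = pca.fit_transform(points)

print("variance kept by naive shadow:", naive_shadow.var(axis=0).sum() / points.var(axis=0).sum())
print("variance kept by PCA shadow:  ", pca.explained_variance_ratio_.sum())
```

The naive projection throws away most of the cloud's spread, while the PCA projection keeps nearly all of it, which is the "recognizable hand shape" angle in the analogy.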

5. Inside Vicarious, the Secretive AI Startup Bringing Imagination to Computers

By reinventing the neural network, the company hopes to help computers make the leap from processing words and symbols to comprehending the real world. That’s the belief motivating the founders of Vicarious, an enigmatic AI company backed by some of the most famous and successful names in Silicon Valley. Vicarious is developing a new way of processing data, inspired by the way information seems to flow through the brain. The company’s leaders say this gives computers something akin to imagination, which they hope will help make the machines a lot smarter.

6. Teachable Machine

This experiment lets anyone explore how machine learning works, in a fun, hands-on way. You can teach a machine using your camera, live in the browser, with no coding required. You train a neural network locally on your device, without sending any images to a server. That’s how it responds so quickly to you. Watch this video to learn more:
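
Conceptually, the experiment records a handful of examples per class and matches new camera frames against them on top of a pretrained feature extractor. The sketch below is my own rough Python illustration of that idea, not the project's code (which runs in JavaScript in the browser); the embed function is a hypothetical stand-in for the image-embedding network.

```python
# Conceptual sketch: few-shot classification of camera frames by nearest neighbor.
import numpy as np

def embed(frame: np.ndarray) -> np.ndarray:
    # Hypothetical placeholder for a pretrained image-embedding network.
    return frame.reshape(-1).astype(np.float32)

class FewShotClassifier:
    def __init__(self):
        self.examples, self.labels = [], []

    def add_example(self, frame, label):
        # Called while the user holds an object up to the camera.
        self.examples.append(embed(frame))
        self.labels.append(label)

    def predict(self, frame):
        # Classify a new frame as the label of its nearest stored example.
        q = embed(frame)
        dists = [np.linalg.norm(q - e) for e in self.examples]
        return self.labels[int(np.argmin(dists))]
```

Because the stored examples never leave the device, everything needed for prediction stays local, which is why the demo can respond immediately.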

7. Learning Diverse Skills via Maximum Entropy Deep Reinforcement Learning

Deep reinforcement learning (deep RL) has achieved success in many tasks, such as playing video games from raw pixels (Mnih et al., 2015), playing the game of Go (Silver et al., 2016), and simulated robotic locomotion (e.g. Schulman et al., 2015). Standard deep RL algorithms aim to master a single way to solve a given task, typically the first way that seems to work well. Therefore, training is sensitive to randomness in the environment, initialization of the policy, and the algorithm implementation. This phenomenon is illustrated in Figure 1, which shows two policies trained to optimize a reward function that encourages forward motion: while both policies have converged to a high-performing gait, these gaits are substantially different from each other.
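
For reference, the maximum-entropy objective behind this line of work augments the expected return with an entropy bonus on the policy, so the agent is rewarded for keeping multiple good solutions alive rather than collapsing onto the first one it finds. In the notation commonly used in the soft Q-learning literature (with α a temperature weighting the entropy term):

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
\left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]
```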

8. How To Stop Worrying And Love The Great AI War Of 2018

Artificial intelligence is the biggest opportunity in business since mobile, for tech giants and upstarts alike. It’s also the most confusing. We’re here to help.

9. Google’s Search for the Sweet Spot

With every Google product release, we learn more about the company’s overarching strategy. Following this week’s hardware event, it’s safe to say they’re betting the farm on artificial intelligence.

10. The Seven Deadly Sins of AI Predictions

Mistaken extrapolations, limited imagination, and other common mistakes that distract us from thinking more productively about the future. We are surrounded by hysteria about the future of artificial intelligence and robotics — hysteria about how powerful they will become, how quickly, and what they will do to jobs.

11. A theory of working memory without consciousness or sustained activity

Working memory and conscious perception are thought to share similar brain mechanisms, yet recent reports of non-conscious working memory challenge this view. Combining visual masking with magnetoencephalography, we investigate the reality of non-conscious working memory and dissect its neural mechanisms. In a spatial delayed-response task, participants reported the location of a subjectively unseen target above chance-level after several seconds. Conscious perception and conscious working memory were characterized by similar signatures: a sustained desynchronization in the alpha/beta band over frontal cortex, and a decodable representation of target location in posterior sensors. During non-conscious working memory, such activity vanished. Our findings contradict models that identify working memory with sustained neural firing, but are compatible with recent proposals of ‘activity-silent’ working memory. We present a theoretical framework and simulations showing how slowly decaying synaptic changes allow cell assemblies to go dormant during the delay, yet be retrieved above chance-level after several seconds.
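
The "activity-silent" proposal can be caricatured in a few lines: a stimulus transiently potentiates the synapses of one cell assembly, firing then stops during the delay, and a weak nonspecific probe later reads the memory out of the slowly decaying synaptic trace. The toy simulation below is my own illustration of that idea and is far simpler than the simulations in the paper; all constants are arbitrary.

```python
# Toy "activity-silent" memory: the delay has no persistent firing, yet the
# encoded location can still be decoded from a decaying synaptic trace.
import numpy as np

n_delay_steps = 40            # delay period, arbitrary time steps
tau_synapse = 60.0            # slow decay constant of the synaptic trace

target_location = 3           # encoded item, one of 8 possible locations
trace = np.zeros(8)
trace[target_location] = 1.0  # encoding transiently potentiates this assembly

for _ in range(n_delay_steps):        # delay: no firing, only passive decay
    trace *= np.exp(-1.0 / tau_synapse)

probe = np.ones(8) * 0.1              # weak, nonspecific retrieval input
readout = probe * (1.0 + trace)       # the potentiated assembly responds most
print("decoded location:", int(np.argmax(readout)))  # -> 3, above chance
```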

Weekly Digest Sept. 2017 #1

Weekly Digest Sept. 2017 #2

Weekly Digest Sept. 2017 #3

Weekly Digest Sept. 2017 #4

Weekly Digest Oct. 2017 #1

Weekly Digest Oct. 2017 #2


Shan Tang
BuzzRobot

Since 2000, I have worked as an engineer, architect, or manager on different types of IC projects. Since mid-2016, I have been working on hardware for deep learning.