Interesting Stuff in AI, Machine Learning, and Deep Learning 2017–10 #2

Shan Tang · Published in BuzzRobot · 4 min read · Oct 8, 2017

A List of Chip/IP for Deep Learning (continuously updated)

1. A Brain Built From Atomic Switches Can Learn

A tiny self-organized mesh full of artificial synapses recalls its experiences and can solve simple problems. Its inventors hope it points the way to devices that match the brain’s energy-efficient computing prowess.

2. The Seven Deadly Sins of Predicting the Future of AI

We are surrounded by hysteria about the future of Artificial Intelligence and Robotics: hysteria about how powerful they will become, and how quickly, and hysteria about what they will do to jobs. Below I outline seven ways of thinking that lead to mistaken predictions about robotics and Artificial Intelligence. We find instances of these ways of thinking in many predictions about our AI future. I will first list the four general topic areas of such predictions that I notice, along with a brief assessment of where I think each currently stands.

3. STYLE2PAINTS

The AI can color a sketch according to a given, specific color style. The code can be found here.

4. Nonlinear Computation in Deep Linear Networks

We’ve shown that deep linear networks, as implemented using floating-point arithmetic, are not actually linear and can perform nonlinear computation. We used evolution strategies to find parameters in linear networks that exploit this trait, letting us solve non-trivial problems. Neural networks typically consist of stacks of linear layers, each followed by a nonlinearity such as tanh or the rectified linear unit. Without the nonlinearity, consecutive linear layers would in theory be mathematically equivalent to a single linear layer. So it is a surprise that floating-point arithmetic is nonlinear enough to yield trainable deep networks.
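The underlying point is easy to see at the level of a single floating-point operation. Here is a minimal sketch of that effect (my own illustration, not OpenAI's experiment; the offset value is an arbitrary choice):

```python
import numpy as np

# For a truly linear map f, f(x + y) == f(x) + f(y) holds exactly.
# With float32 rounding, even two layers that are linear on paper
# fail this test: adding and then subtracting the same offset
# should be the identity, but small inputs get rounded away.
x = np.float32(1e-8)          # a small signal
offset = np.float32(1.0)      # an arbitrary, illustrative offset

y = (x + offset) - offset     # mathematically equal to x
print(x)  # 1e-08
print(y)  # 0.0 -- the signal vanished in the rounding

# The rounding error depends on the input's magnitude, so the map
# x -> (x + offset) - offset behaves nonlinearly near zero. That
# slack is what evolution strategies can exploit in "linear" nets.
```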

5. GPUs vs. TPUs — Can NVIDIA Hold On To Its Lead?

Now that deep learning has turbocharged NVIDIA’s (NVDA) data center business fivefold in the past two years, competition is brewing. As of 2017, nearly a dozen startups have launched with the goal of building dedicated chips for deep learning applications. In addition, large public tech companies such as Alphabet, Intel, AMD, Qualcomm, and Apple plan to enter the market for deep learning chips. Given this intense and growing competition, can NVIDIA maintain its lead in the market for deep learning chips? And which of the newcomers are best positioned to succeed?

6. Experts Weigh in on Fairness and Performance Trade-Offs in Machine Learning

Artificial intelligence plays an increasingly important role in informing public policy: everything from the likelihood of an individual becoming homeless to the likelihood of a student dropping out of school can be predicted through algorithms today. But how do these algorithms generate their predictions? Are the results fair? And how do we define “fair” when we balance technology and government decisions?
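To make one candidate definition concrete, here is a minimal sketch of demographic parity, one common fairness metric (the data is hypothetical, and this is not necessarily the definition the article's experts settle on):

```python
import numpy as np

# Demographic parity asks that the rate of positive predictions be
# equal across groups. Toy, hypothetical model decisions and groups:
preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rate_a = preds[groups == "A"].mean()  # positive rate for group A: 0.75
rate_b = preds[groups == "B"].mean()  # positive rate for group B: 0.25

# The demographic-parity gap: 0 would mean parity under this definition.
print(abs(rate_a - rate_b))  # 0.5 -> far from parity
```

Enforcing a gap of zero typically costs predictive accuracy, which is exactly the fairness-versus-performance trade-off the experts discuss.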

7. Google and Uber’s Best Practices for Deep Learning

There is more to building a sustainable Deep Learning solution than what is provided by Deep Learning frameworks like TensorFlow and PyTorch. These frameworks are good enough for research, but they don’t take into account the problems that crop up with production deployment. I’ve written previously about technical debt and the need for more adaptive, biologically inspired architectures. To support a viable business using Deep Learning, you absolutely need an architecture that supports sustainable improvement in the presence of frequent and unexpected changes in the environment. Current Deep Learning frameworks provide only one part of a complete solution.

8. Real-Time Recognition of Handwritten Chinese Characters Spanning a Large Inventory of 30,000 Characters

Handwriting recognition can enhance user experience on mobile devices, particularly for Chinese input given the relative complexity of keyboard methods. Chinese handwriting recognition is uniquely challenging, due to the large size of the underlying character inventory. Unlike alphabet-based writing, which typically involves on the order of 100 symbols, the set of Hànzì characters in Chinese National Standard GB18030–2005 contains 27,533 entries, and many additional logographic characters are in use throughout Greater China.

9. Intel Gears Up For FPGA Push

Chip giant Intel has been talking about CPU-FPGA compute complexes for so long that it is hard to remember sometimes that its hybrid Xeon-Arria compute unit, which puts a Xeon server chip and a midrange FPGA into a single Xeon processor socket, is not shipping as a volume product. But Intel is working to get it into the field and has given The Next Platform an update on the current plan.

10. Will the Future of AI Learning Depend More on Nature or Nurture?

NYU’s Yann LeCun and Gary Marcus debate whether AI needs more built-in cognitive machinery, similar to that of humans and animals, to achieve comparable intelligence.

11. GANs are Broken in More than One Way: The Numerics of GANs

Last year, when I was on a mission to “fix GANs,” I had a tendency to focus only on what the loss function is and to completely disregard the issue of how we actually find a minimum. Here is the paper that has finally challenged that attitude: Mescheder, Nowozin, and Geiger (2017), The Numerics of GANs. I reference Marr’s three levels of analysis a lot, and I enjoy thinking about problems at the computational level: what is the ultimate goal we do this for? I was convinced GANs were broken at this level: they were trying to optimize for the wrong thing, or seek equilibria that don’t exist, etc. This is why I enjoyed f-GANs, Wasserstein GANs, instance noise, etc., while being mostly dismissive of attempts to fix things at the optimization level, like DCGAN or improved techniques (Salimans et al., 2016). In my defense, in most of deep learning the algorithmic level is sorted: stochastic gradient descent. You can improve on it, but it’s not broken; it doesn’t usually need fixing.
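The optimization-level failure the paper analyzes can be reproduced in a few lines. A toy sketch (mine, not the paper's code): on the bilinear game where one player minimizes x·y over x and the other maximizes it over y, the unique equilibrium is (0, 0), yet simultaneous gradient descent/ascent spirals away from it:

```python
import math

# Bilinear game: a generator-like player minimizes x*y over x,
# a discriminator-like player maximizes x*y over y.
x, y = 1.0, 1.0
lr = 0.1

for _ in range(100):
    gx, gy = y, x                      # d(xy)/dx = y, d(xy)/dy = x
    x, y = x - lr * gx, y + lr * gy    # simultaneous updates

# Each step multiplies the distance from (0, 0) by sqrt(1 + lr^2) > 1,
# so the iterates spiral outward instead of converging.
print(math.hypot(x, y))  # ~2.33, up from the initial sqrt(2) ~ 1.41
```

The paper's proposed fix (consensus optimization) modifies exactly this update rule, which is why the numerics matter independently of the choice of loss function.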

Weekly Digest Sept. 2017 #1

Weekly Digest Sept. 2017 #2

Weekly Digest Sept. 2017 #3

Weekly Digest Sept. 2017 #4

Weekly Digest Oct. 2017 #1


Shan Tang

Since 2000, I have worked as an engineer, architect, or manager on different types of IC projects. Since mid-2016, I have been working on hardware for Deep Learning.