A.I. Articles of the Week, Aug. 2018 #1

Shan Tang
3 min read · Aug 7, 2018


Natural Language Processing is Fun!

How computers understand human language

Program Synthesis in 2017–18

…this post — a high-level overview of the recent ideas and representative papers in program synthesis as of mid-2018.

How decision trees work

Decision trees are one of my favorite models. They are simple, and they are powerful. In fact, most high-performing Kaggle entries are a combination of XGBoost, a gradient-boosted decision tree library, and some very clever feature engineering.
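As a small aside, here is a minimal scikit-learn sketch (my own illustration, not code from the linked article) of fitting a single decision tree on a toy dataset:

```python
# Fit a shallow decision tree on the iris dataset and measure held-out accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree keeps the model interpretable: each internal node is just a
# threshold test on a single feature.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)
accuracy = tree.score(X_test, y_test)
```

Libraries like XGBoost build on this idea by combining many such trees into a boosted ensemble.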

Ten Techniques Learned From fast.ai

Right now, Jeremy Howard, the co-founder of fast.ai, holds the 105th highest score for the plant seedling classification contest on Kaggle, but he's dropping fast. Why? His own students are beating him. And their names can now be found across the tops of leaderboards all over Kaggle.

Using Uncertainty to Interpret your Model

As deep neural networks (DNN) become more powerful, their complexity increases. This complexity introduces new challenges, including model interpretability.

When Recurrent Models Don’t Need to be Recurrent

We discuss several proposed answers to this question and highlight our recent work that offers an explanation in terms of a fundamental stability property.

L1: Tensor Studio

L1: Tensor Studio is a live-programming environment for differentiable linear algebra. A playground for tensors.

Good First Impressions According to Data Science

Making a good first impression is hard. I made a model that can predict how good of an impression you are making based on a video clip submission.

Embrace the noise: A case study of text annotation for medical imaging

In this post we'll discuss the recent paper TextRay: Mining Clinical Reports to Gain a Broad Understanding of Chest X-rays, focusing on the best practices the paper exemplifies with regard to labeling text data for NLP.

Want Less-Biased Decisions? Use Algorithms.

And that is the most relevant question for practitioners and policy makers: How do the bias and performance of algorithms compare with the status quo? Rather than simply asking whether algorithms are flawed, we should be asking how these flaws compare with those of human beings.

Data’s day of reckoning

We can build a future we want to live in, or we can build a nightmare. The choice is up to us.

Evolutionary algorithm outperforms deep-learning machines at video games

Neural networks have garnered all the headlines, but a much more powerful approach is waiting in the wings.

Despite Pledging Openness, Companies Rush to Patent AI Tech

A List of AI Chip/IP

Machine learning, and especially deep learning, is driving the evolution of artificial intelligence (AI). In the beginning, deep learning was primarily a software play. Starting in 2016, the need for more efficient hardware acceleration of AI/ML/DL was recognized in both academia and industry. This year, more and more players, including the world's top semiconductor companies, a number of startups, and even tech giants such as Google, have jumped into the race. I believe it is very interesting to look at them together, so I built this list of AI/ML/DL ICs and IPs on GitHub and keep it updated. If you have any suggestions or new information, please let me know.

Weekly Digest May. 2018 #1

Weekly Digest May. 2018 #2

Weekly Digest May. 2018 #3

Weekly Digest May. 2018 #4

Weekly Digest May. 2018 #5

Weekly Digest Jun. 2018 #1

Weekly Digest Jun. 2018 #2

Weekly Digest Jun. 2018 #3

Weekly Digest Jun. 2018 #4

Weekly Digest Jul. 2018 #1

Weekly Digest Jul. 2018 #2

Weekly Digest Jul. 2018 #3

Weekly Digest Jul. 2018 #5


Shan Tang

Since 2000, I have worked as an engineer, architect, or manager on different types of IC projects. Since mid-2016, I have been working on hardware for deep learning.