
— by Junling Hu

Since the success of deep learning in computer vision in 2012, we have seen its rapid extension to many AI domains: from speech recognition to machine translation, from game playing to robotics, deep learning has shown a versatile ability to tackle different types of AI problems.

The most notable development is in natural language processing (NLP), where BERT, a deep learning model based on the transformer, has demonstrated superhuman performance on almost all NLP problems, such as machine translation, named entity recognition, sentiment analysis, and question answering. In the SQuAD (Stanford Question Answering Dataset) competition, the winning programs on the leaderboard have surpassed human performance (92% vs. …


— by Clay Morrison

Technology has rapidly evolved since the turn of the century. With the internet now widely available, 3D printers allowing us to create whatever we want, and virtual reality a viable means of entertainment, the future is already on our doorstep.

But while these changes have helped to make our lives far easier, there's a potential downside too. As AI (artificial intelligence) continues to advance, the probability of these robotic rivals taking human jobs grows ever higher.

The biggest threat is to the transport and storage industry, where it's predicted that as many as 56% of jobs are at risk of being taken over. …



Łukasz Kaiser joined Google Brain in 2013, moving from the French National Center for Scientific Research. At Google Brain, he co-designed neural models for machine translation, parsing, and other algorithmic and generative tasks, and co-authored the TensorFlow system and the Tensor2Tensor library.

Advanced seq2seq learning

Deep learning researchers then started eyeing natural language processing, an AI research field concerned with human-machine interaction through language. It was, however, a difficult challenge.

“When neural networks first came out, they were built for image recognition, to process inputs with the same dimensions of pixels. Sentences are not the same as images,” Łukasz says.

The magic of deep learning did not reach NLP until Google Brain researchers Ilya Sutskever, Oriol Vinyals, and Quoc Le proposed sequence-to-sequence learning in their 2014 paper Sequence to Sequence Learning with Neural Networks. It is an end-to-end encoder-decoder architecture built on the recurrent neural network (RNN) and long short-term memory (LSTM), which maps sequential data such as texts and documents into a fixed-length vector. …
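The encoder-decoder idea can be sketched in a few lines. Below is a minimal, illustrative PyTorch version (the layer sizes and names are my own choices, not from the paper): an LSTM encoder compresses a variable-length source sequence into a fixed-length state, and an LSTM decoder generates the target sequence conditioned on that state.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder sketch: an LSTM encoder summarizes the
    source sequence into a fixed-length (hidden, cell) state, which
    initializes an LSTM decoder that scores target tokens step by step."""
    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src, tgt):
        # Encode: the final state is the fixed-length summary of src.
        _, state = self.encoder(self.embed(src))
        # Decode: condition the target sequence on that summary.
        dec_out, _ = self.decoder(self.embed(tgt), state)
        return self.out(dec_out)  # per-step logits over the vocabulary

model = Seq2Seq(vocab_size=100)
src = torch.randint(0, 100, (2, 7))  # batch of 2 source sequences, length 7
tgt = torch.randint(0, 100, (2, 5))  # corresponding target prefixes, length 5
logits = model(src, tgt)
print(logits.shape)  # torch.Size([2, 5, 100])
```

Note how the source length (7) never appears in the decoder: everything it knows about the input passes through the fixed-length state, which is exactly the bottleneck the paper's architecture relies on.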



On November 11 at the AI Frontiers Conference in the San Jose Convention Center, California, Google Cloud Platform (GCP) will present a full-day training on image understanding with TensorFlow.

Image understanding is an AI technique that interprets images to figure out their content, including what the objects are, their spatial relationships to each other, and so on. It now plays an essential role in numerous fields, from Facebook automatically creating hashtags for your photos to personal album organization on your smartphone.
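As a rough illustration of what such a system computes, here is a toy convolutional classifier in TensorFlow (the layer sizes and the 10-class output are arbitrary choices of mine, not taken from the training program): it maps a batch of images to a probability distribution over object classes.

```python
import tensorflow as tf

# A toy convolutional classifier, the typical starting point for
# image-understanding tasks (all sizes here are illustrative).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 object classes
])

images = tf.random.uniform((4, 64, 64, 3))  # batch of 4 fake RGB images
probs = model(images)
print(probs.shape)  # (4, 10): one class distribution per image
```

Real systems go further, predicting bounding boxes and relationships between objects rather than a single label, but the image-in, interpretation-out shape of the computation is the same.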

With the increasing adoption of image understanding across various services and products, the industry now demands a large pool of talent skilled in image understanding. …



Udacity opened a three-month NLP Nanodegree program this year. The AI Frontiers Conference has teamed up with Udacity to present a shorter version of the program, delivered directly by members of Udacity. The training touches upon text processing, feature extraction, topic modeling, and NLP with deep learning.

While we have seen tremendous growth in applications powered by speech recognition and computer vision over the past few years, NLP, an area of artificial intelligence concerned with interactions between computers and human languages, represents the next technological breakthrough. Last year at the AI Frontiers Conference, Deeplearning.ai & Landing.ai …



I remember the first time I saw Photoshop, more than 10 years ago: my designer was trying to modify a picture for our website to match the site's color, green. He clicked on the blue sweater of the woman in the picture (and then clicked on a color on the side), and magically the whole sweater turned green while the rest of the picture stayed the same. I was awed.

Over 90 percent of the world’s creative professionals use Photoshop, an Adobe product. Over 12 million people subscribe to Adobe’s Creative Cloud, the suite of Photoshop, Premiere Pro, After Effects, etc. Each day Adobe receives hundreds of millions of highly-produced images and videos from all over the world. …



AI has come to the game industry.

Last year, Electronic Arts established an R&D division called SEED. The team leverages AI to explore new technologies and creative opportunities for future games. Recently it showcased its latest work on real-time ray tracing and self-learning AI agents that can play Battlefield.

And in the digital world, the billion-dollar game company Epic Games created a believable virtual human in a collaborative effort with CubicMotion, 3Lateral, Tencent, and Vicon. …



— by Junling Hu

The most exciting AI development of 2018 is AutoML, which automates the machine learning process. In January this year, Google released AutoML Vision. Then in July, Google launched AutoML for machine translation and natural language processing. Both packages have been used by companies such as Disney in practical applications.

Google’s AutoML is based on Neural Architecture Search (NAS), invented at the end of 2016 (and presented at ICLR 2017) by Quoc Le and his colleagues at Google Brain. In this article I will review the historical context of AutoML and the essential ideas of NAS.

Historical context

In the last six years (since 2012), AI has taken the world by storm. The term machine learning has become an almost magic phrase, implying some process that can automatically solve problems that humans cannot. Be it showing ads, making recommendations, or detecting a fraudulent transaction, machine learning has become synonymous with automation. …



Last month Sony re-launched its AIBO robot dog, which took Mario Munich on a trip down memory lane: it seems not that long ago, in a research project he was involved in, that he coded speech recognition and object detection into an AIBO. That was in the early 2000s, before Sony discontinued AIBO in 2006.

Today Munich is the senior vice president of iRobot, a global consumer robot company whose vacuum cleaning robot Roomba has become a household name. …



Dressed in a well-tailored suit and a white shirt, Kai-fu Lee walked into his Beijing-headquartered office on April 25, where his Chinese venture capital firm Sinovation Ventures announced a new $500 million fund to back early-stage and growing tech companies in both China and the U.S. With this addition, Sinovation Ventures manages a total of $2 billion across six funds.

Founded in 2009 by Lee, Sinovation Ventures is now one of the most aggressive Chinese venture funds, a standing attributed to Lee’s great foresight. The fund has invested in market leaders including photo touch-up and selfie phone producer Meitu, online education firm VIPKID, online Q&A forum Zhihu, AI and computer vision unicorn Megvii (Face++), chip upstart Horizon Robotics, and bike-sharing firm Mobike. …

About

AI Frontiers

We showcase cutting-edge AI research results that are deployed at large scale. Join our online weekly meetings at http://meetup.com/aifrontiers
