Google Machine Learning news summary

Don Dodge · Root Access
4 min read · Nov 28, 2016


In my job at Google I read and learn as much as I can about Artificial Intelligence, Machine Learning, and Deep Learning. Along the way I will share news and links that might interest you.

Google says its artificial intelligence has taught itself to ‘translate between languages that it doesn’t even know’ (Daily Mail, November 27) “Google has built an algorithm that enables it to translate between languages that it doesn’t know. The so-called ‘zero-shot translation’ technology is a self-taught method of translating whereby Google Brain — the research collaboration that specializes in ‘deep learning’ projects — uses artificial intelligence to translate between languages that it doesn’t know. The translations are made possible by use of a new system called the Google Neural Machine Translation, according to a blog posting by Google Brain. The end result is that instead of requiring painstaking effort and resources to translate between two languages — which was the case when Google Translate first began over 10 years ago — the GNMT is now capable of using a single system that can translate between languages it doesn’t know.” http://www.dailymail.co.uk/news/article-3976052/Google-says-artificial-intelligence-taught-translate-languages-doesn-t-know.html
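The mechanism behind this, per the accompanying research paper, is disarmingly simple: one model is trained on many language pairs at once, with an artificial token prepended to each source sentence naming the desired target language. Zero-shot translation is then just a matter of requesting a pair the model never saw in training. A minimal sketch of that data setup, with illustrative sentences and a stubbed-out model call:

```python
# Multilingual NMT data prep in the style of Google's zero-shot paper:
# one shared model, with a target-language token prepended to each source
# sentence. The sentences and the commented-out model call are illustrative.

def tag_source(sentence: str, target_lang: str) -> str:
    """Prepend the artificial target-language token the model is trained on."""
    return f"<2{target_lang}> {sentence}"

# Training pairs the model actually sees (English<->Japanese, English<->Korean):
train_pairs = [
    (tag_source("Hello, how are you?", "ja"), "こんにちは、お元気ですか？"),
    (tag_source("こんにちは、お元気ですか？", "en"), "Hello, how are you?"),
    (tag_source("Hello, how are you?", "ko"), "안녕하세요, 잘 지내세요?"),
    (tag_source("안녕하세요, 잘 지내세요?", "en"), "Hello, how are you?"),
]

# Zero-shot request: Korean -> Japanese was never in the training data,
# but the shared model can still be asked for it via the target token.
zero_shot_input = tag_source("안녕하세요, 잘 지내세요?", "ja")
# model.translate(zero_shot_input)  # -> Japanese output, learned implicitly
```

Because the encoder and decoder are shared across all tagged pairs, the model generalizes to the Korean-to-Japanese request without ever seeing direct parallel data for it.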

Google Adds Artificial Intelligence Hotshots To Lead New Data Crunching Team (Fortune, November 15) “The search giant said Tuesday that it had hired two high-profile AI researchers to lead a new machine learning unit that’s part of its Google Cloud business. The two new hires are Fei-Fei Li, the director of Stanford University’s Artificial Intelligence Lab; and Jia Li, the head of research for Snap, the parent company of popular social messaging app Snapchat.” http://fortune.com/2016/11/15/google-fei-fei-li-snapchat-machine-learning/

Google Cloud Machine Learning family grows with new API, editions and pricing (Google Cloud Platform Blog, November 15) Google announces the Cloud Jobs API, GPUs for Google Cloud, and new pricing for the Cloud Vision API.

Cloud Jobs API: “In order to provide the most relevant recommendations to job seekers, Cloud Jobs API uses machine learning to understand how job titles and skills relate to one another and what job content, location, and seniority are the closest match to a jobseeker’s preferences. You can learn more about how it works here. The API is intended for job boards, career sites and applicant tracking systems. Early adopters of Cloud Jobs API are Jibe, Dice and CareerBuilder.”

Google Cloud GPUs: “Beginning in 2017, Google Cloud will offer more hardware choices for businesses that want to use Google Cloud Platform (GCP) for their most complex workloads, including machine learning. For Google Compute Engine and Google Cloud Machine Learning, businesses will be able to use GPUs (Graphics Processing Units) that are highly-specialized processors capable of handling the complexities of machine learning applications.”

Cloud Vision API pricing: “Google has been leveraging the latest hardware and tuned algorithms to significantly improve the performance of our Cloud Machine Learning services. Cloud Vision API now takes advantage of Google’s custom TPUs, our custom ASIC built for machine learning, to improve performance and efficiency. These improvements have enabled us to reduce prices for Cloud Vision API by ~80%. By offering the API at a more affordable price-point, more organizations than ever will be able to take advantage of Cloud Vision API to power new capabilities.” https://cloudplatform.googleblog.com/2016/11/Cloud-Machine-Learning-family-grows-with-new-API-editions-and-pricing.html
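For a sense of what the newly cheaper Cloud Vision API does, here is a minimal sketch of a label-detection request against its public REST endpoint; the API key and image file below are placeholders:

```python
# Minimal Cloud Vision API label-detection call via the public REST endpoint.
# YOUR_API_KEY and photo.jpg are placeholders, not real credentials or data.
import base64
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

with open("photo.jpg", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": content},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

resp = requests.post(ENDPOINT, json=body)
# Each label comes back with a description and a confidence score.
for label in resp.json()["responses"][0].get("labelAnnotations", []):
    print(label["description"], round(label["score"], 2))
```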

Google Assistant Will Trigger The Next Era of AI (BackChannel, October 25) “The Assistant is a single software system that will be implemented across multiple Google platforms, including the Pixel phone and the Google Home device. Though Google already interprets voice commands in products like voice search in the Google app, the Assistant is different: Google sees it as the apotheosis of its efforts to answer questions and perform functions. The company sees the Assistant as an evolution of many products, including Search, Maps, Photos, and Google Now. Sample queries the company offers display the product’s intended breadth: Show me pictures of the beach. Play dance music on the TV. Tell me about my day.” https://backchannel.com/google-our-assistant-will-trigger-the-next-era-of-ai-3c72a4d7bc75

Google DeepMind and Blizzard announce release of StarCraft II as an AI research environment (DeepMind Blog, November 4) “DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how. Games are the perfect environment in which to do this, allowing us to develop and test smarter, more flexible AI algorithms quickly and efficiently, and also providing instant feedback on how we’re doing through scores. StarCraft is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real-world. The skills required for an agent to progress through the environment and play StarCraft well could ultimately transfer to real-world tasks.” https://deepmind.com/blog/deepmind-and-blizzard-release-starcraft-ii-ai-research-environment/
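The research API itself was still to come at the time of writing, but environments like this expose the standard reinforcement-learning loop of observation, action, and score-based reward that the blog post describes. A toy sketch of that loop follows; the environment class and action names are hypothetical stand-ins, not DeepMind's actual interface:

```python
# Generic agent-environment loop of the kind an RL research environment
# exposes. StarCraftEnv and the action strings are hypothetical stand-ins.
import random

class StarCraftEnv:
    """Toy stand-in: each step returns (observation, reward, done)."""
    def reset(self):
        self.t = 0
        return {"minimap": None, "minerals": 50}  # observation

    def step(self, action):
        self.t += 1
        reward = random.random()   # stand-in for the game-score feedback
        done = self.t >= 100       # toy episode ends after 100 steps
        return {"minimap": None, "minerals": 50}, reward, done

env = StarCraftEnv()
obs = env.reset()
done, total = False, 0.0
while not done:
    action = random.choice(["move", "gather", "build", "attack"])
    obs, reward, done = env.step(action)
    total += reward
print(f"Episode return: {total:.1f}")
```

A learning agent replaces the random action choice with a policy trained to maximize that cumulative score, which is exactly the feedback signal the post highlights.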

Semi-Supervised Knowledge Transfer For Deep Learning From Private Training Data (Penn State, Google, Google Brain and OpenAI, November 7) “Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, we demonstrate a generally applicable approach to providing strong privacy guarantees for training data.” https://arxiv.org/pdf/1610.05755v3.pdf
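The paper's approach, Private Aggregation of Teacher Ensembles (PATE), trains an ensemble of "teacher" models on disjoint partitions of the sensitive data, then labels public data for a separate "student" model using a noisy vote among the teachers, so no single record can swing a label. A sketch of that noisy-max aggregation, with toy vote counts and an illustrative noise parameter:

```python
# Noisy-max label aggregation in the style of the PATE paper: each teacher
# votes for a class, Laplace noise is added to the vote counts, and the
# argmax becomes the (differentially private) label given to the student.
# The vote counts and gamma value here are toy illustrations.
import numpy as np

rng = np.random.default_rng(0)

def noisy_label(teacher_votes, num_classes, gamma=0.05):
    """teacher_votes: predicted class per teacher; gamma scales privacy noise."""
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(scale=1.0 / gamma, size=num_classes)  # Lap(1/gamma)
    return int(np.argmax(counts))

votes = rng.integers(0, 10, size=250)  # toy: 250 teachers voting over 10 classes
print("Private label for one query:", noisy_label(votes, num_classes=10))
```

Smaller gamma means more noise and stronger privacy at the cost of label accuracy, which is the trade-off the paper's analysis quantifies.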

Google DeepMind AI can lip read TV shows better than a pro (New Scientist, November 21) “A project by Google’s DeepMind and the University of Oxford applied deep learning to a huge data set of BBC programmes to create a lip-reading system that leaves professionals in the dust. The AI system was trained using some 5000 hours from six different TV programmes, including Newsnight, BBC Breakfast and Question Time. In total, the videos contained 118,000 sentences. The AI vastly outperformed a professional lip-reader who attempted to decipher 200 randomly selected clips from the data set. The professional annotated just 12.4 per cent of words without any error. But the AI annotated 46.8 per cent of all words in the March to September data set without any error.” https://www.newscientist.com/article/2113299-googles-deepmind-ai-can-lip-read-tv-shows-better-than-a-pro/
