SyncedReview
Microsoft India Proposes Varuna: Scalable, Low-Cost Training of Massive Deep Learning Models

The performance of contemporary AI systems on natural language processing (NLP) tasks would have been difficult to imagine just a few years ago. The 2018 debut of the massive language model BERT was a game-changer, but even BERT-large's 340 million parameters have since been eclipsed by OpenAI's GPT-3, with its 175 billion parameters.


Synced


AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global
