AI @Facebook F8 | Self-Supervision, Fairness, Inclusivity and PyTorch 1.1

Synced | Published in SyncedReview | 7 min read | May 2, 2019

Still reeling from a string of damaging news reports accusing it of enabling misinformation, data abuse and violent content, Facebook is hoping its recent AI R&D efforts can help it climb out of the mess it’s in.

On the second day of its annual F8 developer conference, executives from the social media giant framed AI as a weapon in Facebook’s battle against objectionable content. “Our goal is to reduce the prevalence by taking action on violent content proactively within a few minutes,” said Facebook CTO Mike Schroepfer.

Now 15 years old, Facebook has reached a crossroads. The company seems determined to shift from a very public, town square style network to more of a private messaging platform. As Founder and CEO Mark Zuckerberg declared in his keynote speech yesterday, “The future is private.” A number of new features and major redesigns were announced at F8 — a Messenger desktop app, a revamped Facebook interface, and a new logo — marking the company’s most significant pivot of the past five years.

The changes were prompted in large part by global criticism of Facebook’s perceived inability to curb the circulation of violent content and misinformation. Sri Lanka banned Facebook for nine days after the Easter bombings there that killed more than 250 people, accusing it of spreading hate speech and spawning violence.

The public also remains skeptical regarding Facebook AI technology’s effectiveness in flagging and blocking inappropriate content. Last month the New Zealand mosque shooter livestreamed his deadly attacks on Facebook, and there were charges that the gruesome video remained online because Facebook’s AI system failed to detect the content.

All this hasn’t stopped Facebook from pinning its future on emerging AI. “We are a long way from perfect here, either because our technology isn’t smart enough, we didn’t operationalize it right, or clever people figured out a way around our latest generation … Solutions will never be perfect, but we have to get going,” said Schroepfer.

Facebook CTO Mike Schroepfer

Understanding content with less supervision

“AI is the best tool to keep people safe on our platforms,” Facebook Director of Artificial Intelligence Manohar Paluri told the F8 audience, adding that an effective way to achieve that goal is enabling Facebook’s AI system to “understand content and work effectively with less labeled training data.”

Recent progress in this area includes a single natural language processing model (a multilingual embedding space) that Facebook developed to detect harmful content across languages without requiring language-specific training data. Its computer vision system can now recognize more image components using a technique called Panoptic Feature Pyramid Network (Panoptic FPN). Its video understanding system can analyze video clips more accurately without compromising efficiency; Facebook trained the system on hashtagged Instagram videos to achieve state-of-the-art accuracy of 82.8 percent.

Facebook’s NLP system maps similar sentences in multiple languages in a shared embedding space.
Panoptic FPN
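
The practical payoff of a shared multilingual embedding space is that a classifier built from labeled examples in one language can score content in others. Below is a minimal, purely illustrative Python sketch of that idea: the encode() function is a hypothetical stand-in for Facebook’s multilingual sentence encoder (here it just returns random vectors), and the example sentences and labels are made up.

```python
import numpy as np

def encode(sentence):
    """Hypothetical multilingual sentence encoder: maps text from any supported
    language into one shared vector space. A random stand-in, not Facebook's model."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

# Labeled examples in English only.
labeled = {
    "buy followers cheap, click this link": 1,  # violating
    "happy birthday, grandma!": 0,              # benign
}
vectors = {text: encode(text) for text in labeled}

# Score a sentence in another language by its nearest labeled neighbour in the
# shared space; no Spanish-specific labeled data is needed.
query = encode("compra seguidores baratos, haz clic aqui")
nearest = max(vectors, key=lambda text: float(query @ vectors[text]))
print("predicted label:", labeled[nearest])
```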

While supervised learning has laid the foundation for most of its technological breakthroughs, Facebook is now increasingly interested in self-supervised learning, a variant of unsupervised learning that leverages large amounts of unlabeled data. Facebook Chief AI Scientist Yann LeCun is a firm advocate of this approach. “The next AI revolution will not be supervised or purely reinforced. The future is self-supervised learning with massive amounts of data and very large networks,” said LeCun at the 2019 International Solid-State Circuits Conference (ISSCC).

Examples of Facebook’s self-supervised learning efforts include training NLP models to predict masked words in a sentence and training speech recognition models to pick the correct version of an audio clip from among distractors. The latter method uses roughly 150 times less labeled data (80 hours versus 12,000 hours) than the previous best comparable system.
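
To give a concrete picture of the masked-word objective, here is a toy PyTorch sketch that hides one token of a sentence and trains a tiny model to recover it from the remaining context. The vocabulary, sentence and architecture are illustrative assumptions, not Facebook’s production setup; the point is that the labels come for free from the text itself.

```python
import torch
import torch.nn as nn

# Toy vocabulary and sentence; purely illustrative.
vocab = ["<mask>", "the", "cat", "sat", "on", "mat"]
stoi = {w: i for i, w in enumerate(vocab)}
sentence = ["the", "cat", "sat", "on", "the", "mat"]

class TinyMaskedLM(nn.Module):
    """Predicts a hidden word from the average embedding of its context."""
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, token_ids):
        context = self.embed(token_ids).mean(dim=0)
        return self.out(context)

model = TinyMaskedLM(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    pos = torch.randint(len(sentence), (1,)).item()      # pick a word to hide
    target = torch.tensor([stoi[sentence[pos]]])
    masked = [stoi["<mask>"] if i == pos else stoi[w] for i, w in enumerate(sentence)]
    logits = model(torch.tensor(masked)).unsqueeze(0)     # shape (1, vocab_size)
    loss = loss_fn(logits, target)                        # supervision comes free
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```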

Fairness is a process

Facebook is endeavouring to convince the public that it fundamentally cares about fairness, and that issues such as its mishandling of meddling in the 2016 US presidential election won’t recur. The company has devoted years of research to finding a balance between “how to give everyone a voice” and “how to protect a community from harm,” said Facebook Director of Applied Machine Learning Joaquin Quiñonero Candela.

A specific use of AI to fight election interference is the civic content classifier, a machine learning system designed to predict how likely it is that a piece of content involves a civic issue and to prioritize that content for human reviewers. In a multilingual country like India, Facebook has to calibrate its civic content classifier so that public discussions across different languages and regions are treated fairly.

Facebook also noted that even today’s advanced AI systems can miss some misinformation. To address this, the company has built a decision-threshold mechanism that weighs how costly a given type of misinformation might be and flags content accordingly, based on the AI model’s prediction score.
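
A minimal sketch of how such a cost-aware threshold might work is shown below. The categories, cost weights and cutoff values are hypothetical illustrations, not Facebook’s actual rules: the only idea carried over from the description above is that higher-cost categories get flagged at lower model scores.

```python
# Hypothetical cost-aware flagging rule: the more harmful a category of
# misinformation could be, the lower the model score needed to send it to
# human review. Categories, costs and cutoffs are illustrative only.
BASE_THRESHOLD = 0.9

COST = {
    "health_hoax": 5.0,
    "doctored_image": 3.0,
    "mislabeled_satire": 1.0,
}

def should_flag(category: str, model_score: float) -> bool:
    threshold = BASE_THRESHOLD / COST.get(category, 1.0)
    return model_score >= threshold

print(should_flag("health_hoax", 0.4))        # True: low bar for costly harm
print(should_flag("mislabeled_satire", 0.4))  # False: higher bar for low-cost cases
```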

“Fairness is a process. We need to systematically surface the hard questions about fairness, resolve these questions through a process, and record the process and decisions involved,” said Candela.

Facebook Director of Applied Machine Learning Joaquin Quiñonero Candela

Inclusive AI built into Portal

Facebook also announced that Portal, the US$199 smart display it released in 2018, will ship internationally this fall and will support WhatsApp. Its built-in AI algorithms factor in inclusivity to ensure users are treated fairly regardless of their age, gender or colour.

“We have to really understand our diverse product community and the most critical user problems when working with AI. Inclusive means not excluding anyone,” said Facebook Technical Business Lead of AR/VR Software Lade Obamehinti.

Obamehinti is leading a Facebook development team that has established a three-part framework for inclusive AI: user studies that look into people’s responses to new products and features; algorithm development that ensures fairness in data collection, training, and model evaluation; and system validation that improves performance and user experience.

Facebook Technical Business Lead of AR/VR Software Lade Obamehinti explains the goal of inclusive AI.

PyTorch updates

Since its debut in 2016, Facebook’s open source AI framework PyTorch has gained traction thanks to its flexibility and power. At the first-ever PyTorch Developer Conference last year, PyTorch 1.0 was introduced to help developers and researchers address four major challenges: extensive reworking, time-consuming training, Python programming language inflexibility, and slow scale-up.

Yesterday at F8 Facebook released PyTorch v1.1 with key features including:

  • TensorBoard: Support for visualization and model debugging with TensorBoard, a web application suite for inspecting and understanding training runs and graphs. TensorBoard is accessed from PyTorch via “from torch.utils.tensorboard import SummaryWriter” (see the usage sketch after this list).
  • JIT compiler: Improvements to just-in-time (JIT) compilation, including various bug fixes as well as expanded capabilities in TorchScript, such as support for dictionaries, user classes, and attributes.
  • New APIs: Support for Boolean tensors and custom recurrent neural networks.
  • Distributed Training: Improved performance for common models such as CNNs; added support for multi-device modules, including the ability to split a model across GPUs while still using DistributedDataParallel (DDP); and support for modules in which not all parameters are used in every iteration (e.g. control flow such as adaptive softmax).
TensorBoard
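
For readers who want to try the new TensorBoard integration, here is a minimal usage sketch. The log directory, tag names and logged values are placeholders; in a real training script the scalars would be actual loss values.

```python
import torch
from torch.utils.tensorboard import SummaryWriter  # new in PyTorch 1.1

writer = SummaryWriter(log_dir="runs/f8_demo")  # any log directory works

# Log a placeholder training curve; in practice these would be real loss values.
for step in range(100):
    writer.add_scalar("train/loss", 1.0 / (step + 1), step)

# Weight histograms can be logged the same way.
layer = torch.nn.Linear(10, 2)
writer.add_histogram("linear/weight", layer.weight.detach(), global_step=0)

writer.close()
# Then run `tensorboard --logdir runs` and open the browser UI to inspect the run.
```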

Facebook also announced two new tools for adaptive experimentation:

  • BoTorch: A research framework built on top of PyTorch to provide Bayesian optimization, a sample-efficient technique for sequential optimization of costly-to-evaluate black-box functions (see the sketch after this list).
  • Ax: An ML platform enabling researchers and engineers to systematically explore large configuration spaces in order to optimize machine learning models, infrastructure, and products.
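
To give a sense of the Bayesian-optimization workflow, here is a minimal single-step sketch using BoTorch, based on its public API around that time. The toy objective, bounds and optimizer settings are illustrative assumptions, not a recommended configuration.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy black-box objective standing in for an expensive experiment (assumption).
def objective(x):
    return -((x - 0.3) ** 2).sum(dim=-1, keepdim=True)

# A handful of initial evaluations.
train_x = torch.rand(8, 1, dtype=torch.double)
train_y = objective(train_x)

# Fit a Gaussian-process surrogate to the observations so far.
gp = SingleTaskGP(train_x, train_y)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_model(mll)

# Maximize Expected Improvement to choose the next configuration to evaluate.
ei = ExpectedImprovement(gp, best_f=train_y.max())
bounds = torch.tensor([[0.0], [1.0]], dtype=torch.double)
candidate, _ = optimize_acqf(ei, bounds=bounds, q=1, num_restarts=5, raw_samples=32)
print(candidate)  # the suggested next point; evaluate it and repeat
```

Ax wraps this kind of loop in a higher-level interface for configuring and managing experiments at scale.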

Facebook F8 ran April 30 to May 1 at the McEnery Convention Center in San Jose, California.

Journalist: Tony Peng | Editor: Michael Sarazen
