Partnership on AI

By Jasmine Wang

AI is by no means the first field to face the challenges of responsibly publishing and deploying high-stakes, dual-use research. This is the second post in a series examining how other fields have dealt with these issues and what the AI community can learn. It is presented as part of the Partnership on AI’s work on Publication Norms. Visit our website for more information.

If you found out “one of computing’s most basic safeguards” was compromised, who could you tell without endangering the world’s data? That was the question the tech industry faced in 2017 after Jann…

By Emily Saltz, Claire Leibowicz, and Claire Wardle

As part of its ongoing effort to address “misinformation that could cause physical harm,” Facebook removed a surprising coronavirus-related post last June. The offending image showed the rock band The Cure with the caption “I’m no expert on COVID-19, but this is The Cure” — an obvious joke. Facebook says its moderators, working in concert with AI, took down 7 million posts spreading COVID-19 misinformation last year. In that context, the loss of a single silly pun may not seem like a tragedy. …

Lessons on publication norms for the AI community from biosecurity

By Jasmine Wang

AI is by no means the first field to face the challenges of responsibly publishing and deploying high-stakes, dual-use research. This post is the first in a series examining how other fields have dealt with these issues and what the AI community can learn. It is presented as part of the Partnership on AI’s work on Publication Norms. Visit our website for more information.

In the spring of 2012, Ron Fouchier contemplated a decision that could put him in prison for up to six years or cost him over $100,000 USD in fines…

By Jonathan Stray

Imagine, for a moment, that you’re one of the biggest media organizations in the world. Every day, your journalists create countless videos, articles, podcasts, and more. This wealth of information greatly exceeds what any one person could watch, read, and listen to in 24 hours. So, with all that content, how do you decide what to show your audience?

This is one of many tough questions that two public broadcasters, the British Broadcasting Corporation (BBC) and the Canadian Broadcasting Corporation (CBC), face each day. (The BBC and CBC are both Partner organizations in the Partnership on AI.)…

Field guide by Emily Saltz, Lia Coleman, and Claire Leibowicz

By Claire Leibowicz and Emily Saltz

Machine learning tools for generating synthetic media are becoming more and more accessible. We’ve written about how the availability of these tools can allow the creation of synthetically generated media to mislead and cause harm. Even lower-tech, cheapfake techniques — like those recently used on videos of Joe Biden — can be used to alter the perception of public figures.

Yet the same tools can have powerful expressive capabilities. For artists, they have opened a new creative field for expressing themselves and commenting on life and technology. Derrick Schultz’s AI-generated works leverage artwork…

By Emily Saltz (PAI), Pedro Noel (First Draft), Claire Leibowicz (PAI), Claire Wardle (First Draft), Sam Gregory (WITNESS)

“The real question for our time is, how do we scale human judgment? And how do we keep human judgment local to human situations?”

–Maria Ressa, Filipino-American journalist and founder of Rappler, speaking on “Your Undivided Attention.” (Note: Ressa was found guilty of ‘cyberlibel’ in the Philippines on June 15, 2020 for Rappler’s investigative journalism in what is seen by many as a major blow to the free press.)

What MediaReview is, and why it matters

Digital platforms have a manipulated media problem: mis/disinformation through the use of misleading…

By Emily Saltz, Tommy Shane, Victoria Kwan, Claire Leibowicz, Claire Wardle

To label or not to label: When might labels cause more harm than good?

Manipulated photos and videos flood our fragmented, polluted, and increasingly automated information ecosystem, from synthetically generated “deepfakes” to the far more common problem of older images resurfacing and being shared out of context. While research is still limited, some empirical evidence suggests that visuals tend to be both more memorable and more widely shared than text-only posts, heightening their potential to cause real-world harm at scale — just consider the numerous audiovisual examples in this running list of hoaxes and misleading posts about police brutality…

The Partnership on AI (PAI) strives to bring diverse organizations together in pursuit of responsible artificial intelligence. We have seen the positive change that can occur when Partners from different geographies and areas of expertise gather to deliberate and evolve the practice of responsible AI.

Today, we are thrilled to be expanding our community by bringing on board four international members to the Partnership.

With these Partners, we see new opportunities to advance responsible artificial intelligence. The Alan Turing Institute and the Wadhwani Institute for Artificial Intelligence strengthen our capacity to collaborate on applying AI for socially…

From time to time, the Partnership on AI publishes Issue Briefs and Discussion Papers on topics that our community cares about which are inspired by or build upon our prior work in specific areas. These papers are authored by members of our staff Research Team and/or Research Fellows affiliated with our organization. The content herein does not reflect the views of any particular member organization of the Partnership on AI.

By Riccardo Fogliato, Alice Xiang, and Alex Chouldechova

In an effort to protect the health and safety of inmates and the Federal Bureau of Prisons (BOP) personnel in the wake of…

Collecting and using demographic data to detect algorithmic bias is a challenge with many legal and ethical implications. How might those concerned about bias within algorithmic models address it?

By McKane Andrus, Elena Spitzer, Alice Xiang

A lack of clarity around the acceptable uses for demographic data has frequently been cited by PAI partners as a barrier to addressing algorithmic bias in practice.

This has led us to ask the question, “When and how should demographic data be collected and used in service of algorithmic bias detection and mitigation?”

To field preliminary responses to this question, PAI…

The Partnership on AI is a global nonprofit organization committed to the responsible development and use of artificial intelligence.
