Lighting up the “dark ads”

The Russian interference in the 2016 US election, and the quest for transparency in targeted political advertising on social media — a chronology

Political propaganda is not unique to social media. Nor to “populists”. It is not the exclusive realm of bots and hackers. Lies don’t only spring from Macedonian or Russian “troll farms”. Not everything is “fake news” — sorry.

There is no “post-truth” era, nor algorithm to make it to the White House. There’s no technology to destroy all facts: only humans, at times, unfortunately can.

Even then, other humans sooner or later rebel, and a distinction is drawn. The Internet is powerful, arguably the greatest and fastest revolution in the history of mankind — but it’s not powerful enough to destroy the difference between true and false in every human judgment. The principle of non-contradiction, and logic more broadly, remain, even if most fall for the traps — the biases in reasoning — described by behavioral psychology.

Yes, the Internet brings about previously unseen dynamics and challenges to the democratic processes. And yes, we definitely have to investigate the role of “fake news” on social media in shaping public opinion.

And yet, this doesn’t mean we should allow propaganda about Internet propaganda. As Matt Rosoff put it on CNBC, “You can use the tech industry as a scapegoat for an election result you didn’t like, (…) but I’m not buying it.”

Blaming tech is simplistic, and lacks conclusive evidence. Especially when it translates into a fundamental, overarching — and unverified, possibly unverifiable — assumption: that without the scientific management of targeted lies on social media by both the Trump campaign and emissaries of Vladimir Putin, without the possibility for both hate and white supremacism to roam freely among the lands of Facebook, Google, YouTube, Twitter, Pinterest, Instagram and even Pokémon Go, Trump wouldn’t be president.

This is a story that both “liberal” pundits — not all of them, but many — and Trump campaign managers can agree upon.

The raw unchecked power of “virality” has changed politics forever. Cambridge Analytica is the new ideology: gather as much information as you can about each specific target voter, possibly even thousands of “data points” per person. Compute, according to needs. And ultimately flood each potential voter on social media with finely tuned slogans, memes, “snaps”, “stories”, gifs, bots, chatbots, and every other form of communication available to the casual bystander of the “hyperconnected” era — provided it’s all about whatever it is he or she personally wants to hear.

Information we give about ourselves with our every click. But also stolen information. Hacked information. Fake information. Manipulated information. Soon, manufactured videos of anybody saying things they never said, as if they were really saying them. The killer app of the “fake news” era, only for real-time video.

The scenario can turn to fears of a looming dystopia pretty easily.

Social media, portrayed as untiring vehicles of democracy during the “Arab Spring”, have quickly turned into a tool with which nationalists, “populists” and terrorists alike can overthrow “the system”, and break the “liberal” order.

Anyone can do it. It can happen anywhere. You can be next.

Behind the “moral panic” that took that order by storm in November 2016 with president Trump, there’s a fundamental fear of a global resurgence of authoritarianism through manipulation on the Internet.

As if “democracies” were a model of Internet freedom.

As if the “crisis of democracy” coincided with the “rise of Facebook”.

And yet, there’s no need to make, or even criticize, such bold claims to find reasonable a request for greater transparency in targeted political ads on social media — especially when “greater” means “more than zero”, as the inquiry into Russian meddling in the 2016 US presidential election showed pretty clearly.

Instead of endlessly, and pointlessly, discussing the effects of “the Internet” on “democracy”, we should start small, and try to address one narrower issue that the era of “the Internet” does pose to “democracy”: the fact that, as things stand now, anyone can use social media as a megaphone for their political purposes, with no obligation to inform the target about who pays for the messages, who the other targets are, and what the broader campaign, as a whole, actually consists of.

In some cases, even if it is a racist, Islamophobic or fascist campaign.

This is what’s been emerging from the whole Russian-“dark-ads”-on-Facebook affair. The ads are “dark” not because they are visible only to their targets, and impermanently — all social media marketing campaigns work that way. They are “dark” because we should know more about them, and easily could, but we don’t.

They are “dark” because they are invisible to the eye of the citizen, not the user.

So, this is about democracy.

That’s why the public has a right to know how the data of each user are related to targeted techniques aimed at politically indoctrinating each and every one of them, individually.

Institutions have the obligation to step in and discuss, at the very minimum, why the instrument of mass communication of 2017 — social media — should escape any meaningful transparency requirement, as it has until now, when other mass media don’t and didn’t.

The platforms have a real obligation, too. Not to give up on targeted advertising, which forms the core of their business model, but to help put an end to unchecked political targeted advertising — starting with automatically checked political targeted advertising.

Most importantly, this responsibility entails giving up another wild and yet unquestioned assumption of Silicon Valley-ism: that commercial and political advertising are the same thing.

They are not.

Here’s a chronology of how everyone came to notice.

(Zuckerberg’s live broadcast on greater transparency for targeted political ads on Facebook)

Apr 26, 2011 — Facebook lawyers write a letter to the Federal Election Commission in which the company “seeks confirmation that its small, character-limited ads qualify for the ‘small items’ and ‘impracticable’ exceptions, and do not require a disclaimer under the Federal Election Campaign Act or Commission regulations”.

What this means is that transparency of political ads on social media is at stake, and Facebook is effectively arguing against it.

June, 2011 — The FEC stalls, and Facebook escapes transparency and disclosure requirements. Reuters journalist David Ingram, six years later, recalls the facts in a tweetstorm:

Facebook has persuaded the FEC to treat its political ads as if they were skywriting, or pencils, or buttons
That means FB ads don’t need disclaimers. This in turn makes it easier for interest groups or other advertisers to hide their identity
In 2011, as Facebook advertising was still in its infancy, their lawyers wrote to the FEC to inquire about political disclaimers
Facebook’s lead lawyer was Marc Elias. His other clients have included Hillary Clinton and other high-profile Democrats.
The lawyers had a request for the FEC: Could you please confirm Facebook’s view that it is not subject to disclaimer requirements?
One argument was that internet advertising was relatively new and innovation shouldn’t be burdened by regulation.
Also, ad disclaimers can be intrusive; hence the exemption for “small items” like pencils and impractical situations like skywriting.
One way to look at the question before the FEC was: Is Facebook more like TV, or more like skywriting?
The FEC, a deeply divided agency, debated the subject before deadlocking 3–3 in June 2011
Half the FEC wanted to give Facebook a blanket exemption.
Half opposed an exemption but wanted to give Facebook lots of ways to comply (for example, linking to a page with a disclaimer).
The deadlock meant Facebook basically got what it wanted: The FEC would not require disclaimers.
One way to look at this: The one time that I’m aware of when Facebook weighed in on U.S. campaign regulation, it lobbied against it.

Oct, 2014 — Democratic Vice Chair Ann M. Ravel asks for more disclosure of the funding of online political ads. The FEC has just suffered its latest impasse, with the three Republican members firmly against regulation, and the three Democrats, Ravel included, arguing instead in favor of it.

“Some of my colleagues seem to believe that the same political message that would require disclosure if run on television should be categorically exempt from the same requirements when placed on the internet alone”, writes Ravel in a letter to the FEC.

“As a matter of policy, this simply does not make sense. … This effort to protect individual bloggers and online commentators has been stretched to cover slickly produced ads aired solely on the Internet but paid for by the same organizations and the same large contributors as the actual ads aired on TV.”

As Quartz reports, Republicans respond by equating “money spent on political advertising on the internet” with “free speech”, thus effectively treating regulation as censorship.

And it works: “The end result was, for the coming years, the commission would do nothing to address who was spending money on political advertising on the web, even as Facebook’s audience and influence grew in the US.”

Ravel, for her part, has to face the backlash of the hate campaign — death threats on Twitter and via email included — that alt-right websites and their followers launch against the “censor”.

Oct, 2016 — ProPublica reveals that Facebook “gives advertisers the ability to exclude specific groups it calls “Ethnic Affinities.”” Reporters Julia Angwin and Terry Parris Jr. show it by purchasing an ad through the social network’s advertising portal, in the housing category: “The ad we purchased was targeted to Facebook members who were house hunting and excluded anyone with an “affinity” for African-American, Asian-American or Hispanic people.”

(Source: ProPublica)

The ad gets approved within 15 minutes. Facebook replies by arguing that “ethnic affinity” is not the same as race, and that instead the definition — indicating a collection of pages and posts liked or engaged with by a user — was born out of a “multicultural advertising” effort.

Apr 27, 2017 — Facebook publishes a white paper on what it labels “Information Operations”, or

(…) actions taken by organized actors (governments or non-state actors) to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome. These operations can use a combination of methods, such as false news, disinformation, or networks of fake accounts aimed at manipulating public opinion (we refer to these as “false amplifiers”).

The word “ads” is nowhere to be found in the paper. And yet, in it Facebook does acknowledge that this is mostly a story of “inauthentic” human propagandists, not political bots:

There is some public discussion of false amplifiers being solely driven by “social bots,” which suggests automation. In the case of Facebook, we have observed that most false amplification in the context of information operations is not driven by automated processes, but by coordinated people who are dedicated to operating inauthentic accounts.

May, 2017 — TIME’s cover story digs into “Russia’s Social Media War on America”.

(Source: TIME)

In it, a passage reads: “The intelligence officials have found that Moscow’s agents bought ads on Facebook to target specific populations with propaganda”. The revelation comes with an interesting explanation, attributed to a “senior intelligence official”: “they do that just as much as anybody else does.”

Can Mark Zuckerberg confirm? “A Facebook official says the company has no evidence of that occurring”, writes TIME’s Massimo Calabresi.

Jun 22, 2017 — When political scientists ask for data on political ads, Facebook refuses any disclosure, citing privacy reasons. “Advertisers consider their ad creatives and their ad targeting strategy to be competitively sensitive and confidential,” says Rob Sherman, Facebook’s deputy chief privacy officer, in an interview with Reuters.

“In many cases, they’ll ask us, as a condition of running ads on Facebook, not to disclose those details about how they’re running campaigns on our service. From our perspective, it’s confidential information of these advertisers.”

There’s no distinction between commercial and political ads: “We try to have consistent policies across the board, so that we’re imposing similar requirements on everybody.”

Reuters is clear about what this implies: “Details such as the frequency of ads, how much money was spent on them, where they were seen, what the messages were and how many people were reached would remain confidential under the company’s corporate policy”. As in 2011, Facebook is arguing that “dark ads” should remain in the dark.

Jul 14, 2017 — Brad Parscale, former head of Trump’s digital campaign team, is interviewed by investigators at the House Intelligence Committee. He then tweets out a statement highlighting the importance of direct cooperation from Facebook, Google and Twitter staff in the success of Trump’s campaign: it is, he argues, the very reason Trump made it to the White House.

Interestingly, Wired recalls that

(…) the Trump campaign ran up to 50,000 variants of its Facebook ads a day, learning which ones resonated best with voters. It also deployed so-called “dark posts,” non-public paid posts that only appear in the News Feeds of the people the advertiser chooses.
Parscale has credited that collaboration with delivering Trump’s victory. “Facebook and Twitter were the reason we won this thing,” Parscale told WIRED shortly after the election. “Twitter for Mr. Trump. And Facebook for fundraising.”

To start understanding whether this is true, we should at least have evidence of the actual “reach” of the ads. That same day, Facebook reminds the public — certainly not for the first time — that its own ad metrics, even if we were ever to know them for a specific candidate’s targeted political ads, are flawed.

In fact, “the social network’s advertising platform claims to reach millions more users among specific age groups in the U.S. than the official census data show reside in the country”, writes the Wall Street Journal. Facebook, for example, claims to be potentially reaching 41 million people in the 18–24 age segment, while the latest census data put the whole category at 30 million.

In response Facebook argues, through a spokesperson, that its audience reach estimates “are not designed to match population or census estimates”.

Jul 20, 2017 — Senate Intelligence Committee Vice Chair, Mark Warner, is looking into the possibility that Russian interference occurred through targeted ads on social media. Again, Facebook denies: “We’ve been in touch with a number of government officials, including Sen. Warner, who are looking into the 2016 US Presidential election,” a Facebook spokesman tells CNN. “We will continue to cooperate with officials as their investigations continue. As we have said, we have seen no evidence that Russian actors bought ads on Facebook in connection with the election”.

Sep 6, 2017 — The Washington Post reveals that Facebook “has discovered that it sold ads during the U.S. presidential campaign to a shadowy Russian company seeking to target voters”, and company officials informed Congressional investigators about it.

In a major turn for the social network, the same day Facebook’s Chief Security Officer, Alex Stamos, confirms the scoop and provides the first details:

In reviewing the ads buys, we have found approximately $100,000 in ad spending from June of 2015 to May of 2017 — associated with roughly 3,000 ads — that was connected to about 470 inauthentic accounts and Pages in violation of our policies. Our analysis suggests these accounts and Pages were affiliated with one another and likely operated out of Russia.

Most of the ads, writes Stamos, “didn’t specifically reference the US presidential election, voting or a particular candidate”. Rather, they more subtly aimed at “amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights”.

Most ran in 2015, not in 2016, and about a quarter of them were geographically targeted.

Stamos also claims that Facebook immediately shut down those “inauthentic” accounts and Pages.

But who exactly was behind them? Facebook officials point to a Russian “troll farm”, the infamous Internet Research Agency exposed by Adrian Chen in a widely read 2015 New York Times Magazine reportage.

Sep 7, 2017 — Facebook did not reveal the content of the ads in its announcement, and pressure for full disclosure is quickly mounting on the platform. Facebook, however, doesn’t seem to like the idea. So ProPublica, long involved in the issue, starts an effort to crowdsource the content — while at the same time providing the clearest explanation of what “dark ads”, the heart of the matter, really are.

The nature of online advertising is such that ads appear on people’s screens for just a few hours, and are limited to the audience that the advertiser has chosen. So, for example, if an advertiser micro-targets a group such as 40-year-old female motorcyclists in Nashville, Tennessee, (Facebook audience estimate: 1,300 people) with a misleading ad, it’s unlikely anyone other than the bikers will ever see those ads.
With online ads, “you can go as narrow as you want, as false as you want and there is no accountability,” said Craig Aaron, president and CEO of Free Press, a public interest media and technology advocacy group.
ProPublica wants to change that. Today we are launching a crowdsourcing tool that will gather political ads from Facebook, the biggest online platform for political discourse. We’re calling it the Political Ad Collector — or PAC, in a nod to the Political Action Committees that fund many of today’s political ads.

The tool is initially deployed in cooperation with Spiegel Online, Süddeutsche Zeitung and Tagesschau, and ProPublica will also provide “a public database that will allow the public to see them all”.

Sep 8, 2017 — Facebook officially confirms that it is not releasing the ads. “Due to both federal law and the fact that investigations are ongoing with the relevant authorities, we’re unable to share the ads,” a Facebook spokesperson tells Business Insider. “The spokesperson declined to say which legislation kept the company from disclosing the ads”, the website reports.

Digiday, in the meantime, speculates about how the Russian buyers could have slipped through Facebook’s authenticity checks:

With Facebook’s self-service tools, a troll farm could quickly submit hundreds of different ad creatives that vary slightly from each other, said Brendan Gahan, founder of ad agency Epic Signal. By submitting a lot of similar ads, the trolls could A/B test which messages slip through Facebook’s detectors.

Virtual private networks (VPNs) might be involved too:

To avoid detection, political operatives could also hide their identities by masking their IP addresses with virtual private networks, said independent marketing tech adviser Nate Elliott. Using VPNs could make it appear like the computers purchasing inventory to display divisive ads are scattered throughout various locations and not working together, when in reality, they could operate from the same building.

Elliott concludes with a frightening remark:

“Under normal circumstances, Facebook’s ad quality team would never even know these ads existed”.
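The variant-flooding tactic Gahan describes can be sketched roughly like this — a purely illustrative Python example, with invented slogans and no real ad API, just to show how trivially hundreds of near-identical creatives can be generated:

```python
# Illustrative sketch of the variant-flooding tactic described by
# Brendan Gahan: generate many near-identical ad creatives and submit
# them all, A/B-testing which messages slip through review.
# All slogans and data here are invented for illustration.
from itertools import product

HEADLINES = ["Secure our borders", "Protect our jobs", "Defend our values"]
CALLS_TO_ACTION = ["Share now", "Join us", "Spread the word"]
IMAGES = ["flag.jpg", "eagle.jpg", "crowd.jpg"]

def generate_variants(headlines, ctas, images):
    """Every combination of headline, call-to-action and image
    becomes one slightly different ad creative."""
    return [
        {"headline": h, "cta": c, "image": i}
        for h, c, i in product(headlines, ctas, images)
    ]

variants = generate_variants(HEADLINES, CALLS_TO_ACTION, IMAGES)
print(len(variants))  # 3 * 3 * 3 = 27 near-identical creatives
```

Three short lists already yield 27 creatives; with a dozen entries per list, the same three lines of combinatorics produce thousands — which is why self-service tools are so hard to police.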

Sep 9, 2017 — Quoting an analysis by a Facebook “expert”, The Daily Beast writes that “Russian-funded covert propaganda posts on Facebook were likely seen by a minimum of 23 million people and might have reached as many as 70 million”. This means almost three out of ten American voters might have been exposed to the ads.

Journalists also quote Dennis Yu, CTO and co-founder of BlitzMetrics, as he explains how Russia might have “maximized its impact with a basic strategy practiced by Facebook marketers”: bet low on everything, go all in on what goes viral.

In practice, here’s what it means:

Seed a new Facebook post with a tiny buy as low as $1 a day, then watch Facebook’s ad console and see if the post catches fire. If it doesn’t, write it off and start on the next post. But if people begin engaging with the post in a serious way, you go all in.
“One out of every 100 posts, you’re going to get that home run,” said Yu, whose clients include GoDaddy and the Golden State Warriors. “Then you’re going to boost the heck out of that sucker. You’re going to put $10,000 on it. And Facebook’s algorithm already knows who to show it to, like the friends of the people who already liked it… It’s a risk-free lottery. The minimum cost is $1 per day.”
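The “risk-free lottery” Yu describes reduces to a very simple budget rule. Here is a minimal sketch in Python — the thresholds, engagement figures and post names are all made up for illustration:

```python
# Sketch of the "seed low, boost winners" strategy Dennis Yu describes:
# put $1/day on every post, watch engagement in the ad console, and pour
# the budget into the rare post that catches fire.
SEED_BUDGET = 1.0        # dollars per day per post, the cheap seed phase
BOOST_BUDGET = 10_000.0  # dollars thrown at a "home run"
VIRAL_THRESHOLD = 0.05   # engagement rate that counts as catching fire

def allocate_budget(posts):
    """posts: list of (post_id, engagement_rate) observed after seeding.
    Returns a {post_id: boost_budget} allocation."""
    budgets = {}
    for post_id, engagement in posts:
        if engagement >= VIRAL_THRESHOLD:
            budgets[post_id] = BOOST_BUDGET   # go all in
        else:
            budgets[post_id] = 0.0            # write it off, move on
    return budgets

observed = [("post_1", 0.01), ("post_2", 0.002), ("post_3", 0.08)]
print(allocate_budget(observed))
# Only post_3 crossed the threshold, so only it gets boosted.
```

The asymmetry is the point: a hundred failed seeds cost about as much as one lunch, while the single winner inherits the full war chest — and, as Yu notes, Facebook’s own algorithm then finds the lookalike audience for free.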

It’s difficult to pin down the actual number of those reached by the messages, let alone of those actually persuaded, without knowing the exact reach of each individual post — in detail.

That’s why the pressure is on Facebook. Only Facebook might — should — know.

Lastly, The Daily Beast points out that even though Facebook did provide some clue about geographical targeting, it did not give any idea about which precise areas were most affected by the ads. Possibly “swing states”?

(Facebook’s CSO) Stamos was silent on which physical areas — and audiences — were targeted, particularly as the election loomed. Clinton lost Michigan by 11,612 votes and Wisconsin by just over 27,000.

The same day, The Daily Beast also reveals in another story that Russian propaganda already took what a former FBI agent calls “the next step” in foreign meddling: influencing real action in another country. “Russian operatives hiding behind false identities used FB’s event-management tool to remotely organize and promote political protests in the U.S., including an August 2016 anti-immigrant, anti-Muslim rally in Idaho”, reads the article.

One such event

was “hosted” by “SecuredBorders,” a putative U.S. anti-immigration community that was outed in March as a Russian front. The Facebook page had 133,000 followers when Facebook closed it last month.

Sep 11, 2017 — University of North Carolina professor Daniel Kreiss, in The Intercept, raises a fundamental issue: not only is Facebook not doing enough, it is also too late. “I just find it stunning that we’re learning about these ad buys 10 months after the election is over as opposed to when they would’ve been consequential”. The New York Times will later reveal that many voiced the same doubts even from within the company: “Why are we only writing about this now?”

Sep 12, 2017 — The same New York Times investigation builds on previous findings from The Daily Beast to detail the workings of propaganda Page “Secured Borders”, defined as “one of hundreds of fake Facebook accounts created by a Russian company with Kremlin ties to spread vitriolic messages on divisive issues”:

The Secured Borders page, a search for archived images shows, spent months posing as an American activist group and spreading provocative messages on Facebook calling immigrants “scum” and “freeloaders,” linking refugees to crime and praising President Trump’s tough line on immigration.

Also, Sen. Warner claims that what we’re witnessing is “just the tip of the iceberg”. Social media, he argues, are “the wild, wild West” of political ads. And “the amount of advertising and use of these social media platforms in elections is only going to go exponentially up.”

Sep 13, 2017 — Mathematician and author Cathy O’Neil has an ideal-world solution on Bloomberg:

Maybe what we need is a sort of internet-age analog to the ideal of public broadcasting — a new and parallel internet optimized for the citizen rather than for the consumer. Picture a space that isn’t polluted with ads, that allows people to search for, say, information about health from sources that have nothing to sell them. We would go there to learn, listen and engage, and go back to the commercial internet only when we wanted to buy something. It might start locally, at the city or neighborhood level, curated and informed by local standards and customs. The people who created content could be compensated by their communities.

Sep 14, 2017 — ProPublica, in a breaking news story, finds that ads on Facebook can be targeted to “Jew Haters”:

Until this week, when we asked Facebook about it, the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of “Jew hater,” “How to burn jews,” or, “History of ‘why jews ruin the world.’”

ProPublica knows for sure, because it managed to buy three “promoted posts” targeted to those categories — $30 total, all again approved within 15 minutes.

In the story, Facebook apologizes: “we’ve removed the associated targeting fields in question”.

(Source: ProPublica)

How can this happen? Well, because algorithms:

Unlike traditional media companies that select the audiences they offer advertisers, Facebook generates its ad categories automatically based both on what users explicitly share with Facebook and what they implicitly convey through their online activity.

Facebook CEO Mark Zuckerberg had pledged better scrutiny of hate on the platform, boldly stating there’s no room for it on the social network. And yet, here are targeted ads for Nazis.

ProPublica nails it:

Facebook apparently did not intensify its scrutiny of its ad buying platform. In all likelihood, the ad categories that we spotted were automatically generated because people had listed those anti-Semitic themes on their Facebook profiles as an interest, an employer or a “field of study.” Facebook’s algorithm automatically transforms people’s declared interests into advertising categories.

Nor did Facebook’s response eradicate racist abuse of its ad-targeting function from the platform. Instead, we learn that it goes much further: a Slate investigation, published just an hour later, finds that many more hate categories can be targeted through the automated system — among them, “Kill Muslimic Radicals”, “Ku-Klux-Klan” and “The school of fagget murder & assassination”. Many anti-Semitic target groups are still there, too.

(Source: Slate)

Not only did Facebook allow all of these categories,

Many were auto-suggested by the tool itself — that is, when we typed “Kill Mus,” it asked if we wanted to use “Kill Muslim radicals” as a targeting category.

As Recode explains, this is what happens “when the algorithms that drive Facebook’s business and determine what you see, and don’t see, in News Feed, aren’t properly managed”.

Facebook immediately gives another example of mismanagement when its policy spokesperson, Andy Stone, tells CNN that there was “no sale support” involved in the deal that allowed the Kremlin to deploy its strategy through “inauthentic” accounts. The company seems to be making the case that “it was just the machine talking to the Russians”, as CNN puts it. But that’s exactly the problem — not a way out of it.

The article also raises a crucial point: the persistence of the “dark ads” problem. Once you become a target, you’re forever a target — until you’re so brainwashed there’s no need for propaganda anymore.

Any Facebook ad buy can be an entry point to long-term user targeting.
Once an ad buyer identifies a small target audience and succeeds in getting people in that audience to “like” or engage with a post, they can continue targeting those individuals and their friends. Those users may then participate in spreading fake news or misinformation by sharing it on their timeline, in which case the advertising is no longer necessary.
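The self-sustaining dynamic CNN describes above — seed audience engages, then engagers and their friends become the next pool, with no further ad buy needed — can be sketched as a tiny set operation over a friend graph. The graph and names below are invented for illustration:

```python
# Sketch of the long-term targeting dynamic: once a seed audience
# engages with a post, the advertiser can re-target those users and
# their friends; engaged users then spread the content organically.
# This toy friend graph is entirely made up.
FRIENDS = {
    "ann": {"bob", "cat"},
    "bob": {"ann", "dan"},
    "cat": {"ann"},
    "dan": {"bob", "eve"},
    "eve": {"dan"},
}

def next_audience(engaged):
    """Engaged users plus all of their friends form the next
    targeting pool — advertising becomes unnecessary once sharing
    takes over."""
    pool = set(engaged)
    for user in engaged:
        pool |= FRIENDS.get(user, set())
    return pool

seed = {"ann"}  # users who engaged with the first cheap ad buy
print(sorted(next_audience(seed)))  # ['ann', 'bob', 'cat']
```

Iterating `next_audience` a few times reaches the whole connected graph — which is precisely why a single cheap entry point can seed a long-lived, ad-free propagation loop.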

The company tries to put things into (its own) perspective, by replying that “an extremely small number of people were targeted in these campaigns”. Also, “we are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue”.

An implicit admission that currently there is none.

Sep 15, 2017 — BuzzFeed News proves that Google “allows advertisers to specifically target ads to people typing racist and bigoted terms into its search bar”. It’s basically the same issue highlighted by ProPublica and Slate on Facebook; similarly, “Google will suggest additional racist and bigoted terms once you type some into its ad-buying tool”.

(Source: BuzzFeed News)

And yet it’s different, argues Alex Kantrowitz:

There are major differences between Facebook’s and Google’s ad systems that make Google’s system harder to police. On Facebook, you essentially pick targeting criteria from Facebook’s catalogue of information about people — their gender, location, interests, and more. On Google, you target ads to terms you anticipate will be typed in the search box. So Google’s universe of potential ad-targeting contains many more unknowns.

Sep 16, 2017 — Twitter can be exploited to target messages to racist categories too, finds The Daily Beast. Before quickly fixing the “bug”, its advertising platform “allowed prospective marketers to target millions of users interested in derogatory words such as “n**ger” and “wetback.”” More precisely, its targeting system returned “26.3 million users who may respond to the term “wetback,” 18.6 million to “Nazi,” and 14.5 million to “n**ger.””

(Source: The Daily Beast)

Sep 17, 2017 — Facebook has given “copies of ads and related information it discovered on its site linked to a Russian troll farm, as well as detailed information about the accounts that bought the ads and the way the ads were targeted at American Facebook users” to Special Counsel Robert Mueller and his team, writes CNN confirming a WSJ article.

Congress, though, is still in the dark:

Facebook did not give copies of the ads to members of the Senate and House intelligence committees when it met with them last week on the grounds that doing so would violate their privacy policy, sources with knowledge of the briefings said. Facebook’s policy states that, in accordance with the federal Stored Communications Act, it can only turn over the stored contents of an account in response to a search warrant.

Sep 19, 2017 — “Twitter will also be called upon” by Congress investigators “to explain its role in the 2016 Russian meddling campaign”, writes Mother Jones.

Sep 20, 2017 — “Regulators and policymakers are waking up”, writes AdExchanger. “FEC Vice Chairwoman Caroline Hunter publicly stated that the Russian-sponsored ads on Facebook could trigger an enforcement action, although passing new regulations to address online campaigns is not on the agenda, at least not yet.”

In the piece, Mark Jablonowski of digital targeting firm DSPolitical also raises two important points. The first echoes the “tip of the iceberg” idea:

Spending $100,000 over a couple of years on a little over 3,000 ad variants — that sounds like a test to me, testing messages to figure out where to go and spend the rest of the money.

The second poses a serious challenge to lawmakers trying to inject transparency in social media ads funding:

You may know what credit card paid for it, but you don’t know who is depositing money into the account. It’s easy for anyone to set up a shell corporation to funnel money through in order to place ad buys, not even talking about how easy it would be for state-sponsored organizations to do it.

On the same day, The Daily Beast uncovers a Facebook group of Russian provocateurs, “Being Patriotic”, who “tried to organize more than a dozen pro-Trump rallies in Florida during last year’s election”.

The Page had around 200,000 members before being shut down by Facebook and, according to the report, “brought dozens of supporters together in real life”.

Also, Chief Operating Officer Sheryl Sandberg announces, in a Facebook post, some changes in targeting criteria for ads. “We’re clarifying our advertising policies and tightening our enforcement processes”, she writes. In reply to critiques pointing at algorithmic failures, she then says that Facebook is “adding more human review and oversight to our automated processes”.

However, many are troubled by the candid admission that what ProPublica uncovered — “Jew haters” as a targeting category — was until then unknown to the platform. Writes Sandberg:

We never intended or anticipated this functionality being used this way — and that is on us. And we did not find it ourselves — and that is also on us.

Facebook should have known better.

Sep 21, 2017 — Facebook announces it is now sharing the content of the ads with congressional investigators too.

But that’s soon overshadowed by another, arguably bigger announcement — this time from Mark Zuckerberg. In a Facebook Live broadcast, the CEO tries to make his case for the platform: “I don’t want anyone to use our tools to undermine democracy”, he writes, while at the same time admitting that there’s no definitive solution: “I wish I could tell you we’re going to be able to stop all interference, but that wouldn’t be realistic”. Facebook will at least try harder and faster, however, as it has in France and Germany.

Zuckerberg then makes a bold claim, promising “to create a new standard for transparency in online political ads”.

How? Here’s Zuckerberg’s plan:

When someone buys political ads on TV or other media, they’re required by law to disclose who paid for them. But you still don’t know if you’re seeing the same messages as everyone else. So we’re going to bring Facebook to an even higher standard of transparency. Not only will you have to disclose which page paid for an ad, but we will also make it so you can visit an advertiser’s page and see the ads they’re currently running to any audience on Facebook.

Six years after lobbying the FEC against applying to its ads what is “required by law” for TV and other media, Facebook makes a U-turn and vows to go even further than current regulation — by strengthening the ad review process, increasing investment in election integrity, expanding partnerships with election commissions all over the globe, and sharing more threat information “with other tech and security companies”.

Many note that Zuckerberg is pushing last-minute self-regulatory efforts, possibly to preempt increasing calls for proper regulation. Politico, for example, reports that Sen. Mark Warner and Sen. Amy Klobuchar “are seeking co-sponsors on proposed legislation that would require Facebook, Google and other digital platforms to disclose more information about political advertisements and the buyers behind them”.

How, exactly?

The two Democrats are writing legislation that would require web platforms with more than one million users to publicly disclose the names of individuals and organizations that spend more than $10,000 on election-related advertisements.
The sites would also have to provide a copy of the advertisement, and disclose details about the targeted audience, the number of people who view the ad, the time and date it was published, the amount of money charged and the buyer’s contract information.

In the meantime, others object that Facebook’s effort is limited to “currently running” ads — and, therefore, we would still have no idea about past campaigns or their targets even with the new transparency requirements in place.

Some pundits even argue that the “dark ads” scandal isn’t a bug, but a feature of Facebook’s business and existential model.

Here’s Julia Carrie Wong, in The Guardian, on Sep 22:

Facebook’s systems didn’t fail when they allowed shadowy actors connected to the Russian government to purchase ads targeting American voters with messages about divisive social and political issues. They worked. “There was nothing necessarily noteworthy at the time about a foreign actor running an ad involving a social issue,” Facebook’s vice-president of policy and communications, Elliot Schrage, wrote of the Russian ads in a blogpost.

And here’s Zeynep Tufekci, in The New York Times, the day after:

Here’s the hard truth: All these problems are structural. Facebook is approaching half-a-trillion dollars in market capitalization because the business model — ad-targeting through deep surveillance, emaciated work force, automation and the use of algorithms to find and highlight content that entice people to stay on the site or click on ads or share pay-for-play messages — works.

Sep 22, 2017 — The Editorial Board of The New York Times argues that legislation is surely needed, but it’s not enough. What it takes to open up the “dark ads” society is to replace the FEC with a “new agency” — one that actually works:

It was already illegal for foreigners to purchase political ads when Russian agents started their Facebook campaign. Stronger enforcement of those rules would help, but that would require broader reforms, like replacing the Federal Election Commission, a toothless watchdog often paralyzed by partisan gridlock, with a new agency that has an odd number of members, including a nonpartisan election law expert.

Sep 26, 2017 — “Facebook cannot say for certain whether profiles or pages connected to Russia purchased ads during the French and German election campaigns, a company official told BuzzFeed News.”

Sep 27, 2017 — The Daily Beast breaks another story from the batch of targeted Russia ads, this time involving the impersonation of an actual organization, “United Muslims of America”, with inauthentic Facebook, Instagram and Twitter accounts.

Using the account as a front to reach American Muslims and their allies, the Russians pushed memes that claimed Hillary Clinton admitted the U.S. “created, funded and armed” al-Qaeda and the so-called Islamic State; claimed that John McCain was ISIS’ true founder; whitewashed blood-drenched dictator Moammar Gadhafi and praised him for not having a “Rothschild-owned central bank”; and falsely alleged Osama bin Laden was a “CIA agent.”
(Source: The Daily Beast)

Considering previous revelations from the website, this means that Russian propaganda was at the same time promoting “political rallies aimed at Muslim audiences” and Islamophobic messages and gatherings — just to a different audience.

Sep 28, 2017 — Twitter publishes the results of its own internal inquiry, announcing it has found some 200 accounts linked to the 470 Facebook identified as stemming from a Russian meddling effort. The company is cooperating with congressional investigators, promising greater transparency and concluding that it welcomes “the opportunity to work with the FEC and leaders in Congress to review and strengthen guidelines for political advertising on social media.”

But that’s not nearly enough. Sen. Warner, writes The New York Times, “called Twitter’s briefing for congressional investigators ‘very disappointing,’ and accused company officials of ignoring extensive evidence of nefarious Russian activity.”

The company’s presentation “showed an enormous lack of understanding from the Twitter team of how serious this issue is, the threat it poses to democratic institutions and again begs many more questions than they offered,” Mr. Warner said, adding, “Their response was frankly inadequate on every level.”

One of the many unanswered questions is whether the ads were more frequently targeted to “swing states”. A newly released working paper by Oxford Internet Institute’s Philip Howard hints at a confirmation: “Average levels of misinformation were higher in swing states than in uncontested states”.

Sep 29, 2017 — “Facebook has shared some details about the Russian-operated profiles it discovered on its platform with Google”, writes Recode.

Oct 1, 2017 — The investigation is now aiming at what Politico labels “the secret inner workings of (…) online platforms” — at the very “mechanics” behind the social media era. The opening of the “Black Box society”, whatever the degree, seems finally at hand.

Oct 2, 2017 — Facebook delivers the ads to Congress.

Also, the company “will hire 1,000 additional people to the internal team that reviews and removes Facebook ads, according to details shared via email with Recode by a Facebook spokesperson.”

CNN adds that Facebook will also “increase authenticity requirements on political ads by asking advertisers to confirm the business or organization they represent”. But only for ads “that mention candidates by name” — which is not the case for most of the Russian-bought messages.

Also, in revealing that Russian propagandists used ‘Custom Audiences’ and other Facebook tools to identify susceptible voters and then target their messages “by demographics, geography, gender and interests”, the Washington Post provides a crystal-clear explanation — by Jonathan Albright, research director of the Tow Center for Digital Journalism — of how the Kremlin’s targeting of its propaganda ads worked:

(…) hundreds of Russian sites were loaded up with ad tracking software, known as cookies, that would allow them to follow any visitor across the Web and onto Facebook.
The Custom Audiences tool enabled Russian advertisers to feed information from those cookies, which are long strings of numbers that advertisers collect, into Facebook’s systems, which could match them with the accounts of particular Facebook users.
The Facebook users were then shown ads featuring divisive topics that the Russians wanted to promote in their Facebook news feeds, which displayed the ads alongside messages from friends and family members.
As targeted users clicked on the Facebook ads, the system would eventually take them to Web pages outside Facebook, where they would be tracked with more-aggressive forms of tracking software, Albright said.
“A lot of this content is simply for tracking,” Albright said. “You need to get people out of the social networks, off the platforms, because that’s the place where you can attach the advanced ad technology.”
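The matching step Albright describes — identifiers collected off-platform, hashed, then looked up against the platform’s own account index — can be sketched in a few lines. This is a rough, hypothetical illustration: the names and data are invented, and the real pipeline is proprietary.

```python
import hashlib

def normalize_and_hash(identifier: str) -> str:
    # Custom-audience-style matching typically hashes identifiers
    # (emails, cookie IDs) before they are compared server-side.
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()

def match_audience(collected_ids, platform_accounts):
    # platform_accounts: mapping of hashed identifier -> internal account id.
    # Only identifiers the platform already knows produce a match.
    hashed = (normalize_and_hash(i) for i in collected_ids)
    return [platform_accounts[h] for h in hashed if h in platform_accounts]

# Hypothetical cookies gathered by tracking scripts on external sites:
collected = ["cookie-123", "cookie-456", "cookie-999"]
accounts = {normalize_and_hash(c): f"user-{n}"
            for n, c in enumerate(["cookie-123", "cookie-456"])}

print(match_audience(collected, accounts))  # ['user-0', 'user-1']
```

The matched accounts then become the audience that sees the ad in its news feed, as the Post’s account describes.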

Oct 3, 2017 — Researchers Daniel Kreiss and Shannon McGregor detail, in a BuzzFeed article, the main results of a forthcoming paper — ‘Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google With Campaigns During the 2016 U.S. Presidential Cycle’ — that shows “how representatives at these firms serve as quasi-digital consultants to campaigns, shaping digital strategy, content, and execution”.

What does this mean? “Google, Twitter, and Facebook, we had people who were down there constantly and constantly working with us”, explains Gary Coby, director of digital advertising and fundraising for the Trump campaign. But it’s not only him. It’s everyone:

During and since the election, we formally interviewed dozens of staffers working on all the major 2016 campaigns, along with representatives of the big tech companies, to understand how campaigns use these platforms to reach the electorate. All of them echoed Coby’s comments that Google, Facebook, and Twitter play active roles in electoral politics.

And that is an issue, possibly even bigger than Russian bots and “dark ads”. “The entirely routine use of Facebook by Trump’s campaign and others — a major part of the $1.1 billion of paid digital advertising during the cycle — is likely to have had far greater reach than Russian bots and fake news sites”, they write.

In other words: Facebook is politics. And it is increasingly difficult — arguably impossible — to do politics without Facebook actively helping. As they conclude in the abstract of their paper:

(…) political communication scholars need to consider social media firms as more active agents in political processes than previously appreciated in the literature.

Google is increasingly under fire, as Bloomberg reports that House investigators are focusing on Russian ad buying on YouTube and Gmail.

Other companies still escape scrutiny, though, notes Recode: while Snap “hasn’t found any Russia-backed ads on its platform”, we still know nothing about what occurred on platforms such as Reddit and Yahoo.

Oct 4, 2017 — Four sources interviewed by CNN claim that Russian ads “specifically targeted Michigan and Wisconsin, two states crucial to Donald Trump's victory”. Trump won the former by 10,700 votes, and the latter by 22,700. The subtext here is that, in such a context, ad targeting — and therefore the foreign meddling — might have made the difference.

Oct 6, 2017 — Facebook makes a last update to its “Hard Questions” post dedicated to the Russian ads issue. The company estimates that “10 million people in the US saw the ads”, claims that 56% of the impressions “were after the election”, and that “roughly 25% of the ads were never shown to anyone”. Of the more than 3,000 ads shared with Congress, 5% were on Instagram — a response to a Fast Company story revealing that “at least three suspicious Instagram accounts”, with almost 190,000 followers in total, were involved. Lastly, “For 50% of the ads, less than $3 was spent; for 99% of the ads, less than $1,000 was spent.”

But is Facebook’s 10 million people estimate credible? No, answers Albright: “The best way to understand this from a strategic perspective is organic reach.”

In other words, to understand Russia’s meddling in the U.S. election, the frame should not be the reach of the 3,000 ads that Facebook handed over to Congress and that were bought by a single Russian troll farm called the Internet Research Agency. Instead, the frame should be the reach of all the activity of the Russian-controlled accounts — each post, each “like,” each comment and also all of the ads. Looked at this way, the picture shifts dramatically. It is bigger — much bigger — but also somewhat different and more subtle than generally portrayed.

Consequently, the exposure could easily have been much bigger too:

For six of the sites that have been made public — Blacktivists, United Muslims of America, Being Patriotic, Heart of Texas, Secured Borders and LGBT United — Albright found that the content had been “shared” 340 million times. That’s from a tiny sliver of the 470 accounts that have been made public. Even if those sites were unusually effective compared to the 464 others, Albright’s findings still suggest a total reach well into the billions of “shares” on Facebook.
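The extrapolation behind Albright’s estimate is straightforward arithmetic. A quick sketch using only the figures above (the 10× discount factor is an arbitrary assumption, included just to stress-test the conclusion):

```python
# Albright's six published Pages: 340 million shares between them.
shares_from_six = 340_000_000
per_page = shares_from_six / 6

# Naive linear extrapolation to all 470 known accounts:
naive_total = per_page * 470  # roughly 26.6 billion shares

# Even if the six were 10x as effective as the remaining 464,
# the total still lands well into the billions:
conservative_total = shares_from_six + (per_page / 10) * 464

print(round(naive_total / 1e9, 1), round(conservative_total / 1e9, 1))
```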

Albright has a last fundamental insight:

To the extent there is a discernible political motive in them, the goal seemed less to inspire enthusiasm for one candidate than to dampen support for voting at all. This fits with what many other researchers and investigators have said about the Russian disinformation campaign, that it drove directly at the fractures in American society and sought to widen them.

BuzzFeed’s Alex Kantrowitz also highlights a crucial point: Facebook declares that none of the ads used Custom Audiences targeting based on email addresses — a detail crucial to understanding whether any link between the Kremlin’s “dark ads” targeting effort and the Trump campaign can be demonstrated.

More details also emerge on how the political ads targeting actually worked in the 2016 campaign. Here’s what Trump campaign digital director, Brad Parscale, tells CBS News:

Parscale says he used the majority of his digital ad budget on Facebook ads and explained how efficient they could be, particularly in reaching the rural vote. “So now Facebook lets you get to…15 people in the Florida Panhandle that I would never buy a TV commercial for,” says Parscale. And people anywhere could be targeted with the messages they cared about. “Infrastructure…so I started making ads that showed the bridge crumbling…that’s micro targeting…I can find the 1,500 people in one town that care about infrastructure. Now, that might be a voter that normally votes Democrat,” he says. Parscale says the campaign would average 50–60,000 different ad versions every day, some days peaking at 100,000 separate iterations — changing design, colors, backgrounds and words — all in an effort to refine ads and engage users.
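The volumes Parscale cites are plausible with simple combinatorics: varying a handful of creative elements multiplies quickly into tens of thousands of iterations. A toy sketch (the element lists are invented for illustration, not the campaign’s actual creatives):

```python
from itertools import product

# Hypothetical creative elements; the campaign varied design, colors,
# backgrounds and words, per Parscale's description.
headlines = [f"headline-{i}" for i in range(10)]
images    = [f"image-{i}" for i in range(10)]
colors    = [f"color-{i}" for i in range(10)]
ctas      = [f"cta-{i}" for i in range(5)]
audiences = [f"audience-{i}" for i in range(10)]

# Every combination is a distinct ad variant to test:
variants = list(product(headlines, images, colors, ctas, audiences))
print(len(variants))  # 50000 -- in the 50-60,000-a-day range Parscale cites
```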

As the scandal keeps mounting, Business Insider notes that not only is Facebook’s value unaffected — it is actually growing. Wall Street doesn’t seem to care about what Washington and the media denounce of Silicon Valley, after all.

While Facebook has suffered a battering of public-relations scandals in recent weeks, from its admission of spreading roughly 3,000 Russia-linked ads to allowing hateful automated ad-targeting options like “Jew-hater,” the company’s roughly $500 billion public market cap has largely gone unpunished. In fact, the company’s stock is trading at historical highs.
Facebook’s stock price (Source: Business Insider)

Oct 7, 2017 — Another announcement from Facebook, this time revealed in an email to Axios:

Facebook is going to require ads that are targeted to people based on “politics, religion, ethnicity or social issues” to be manually reviewed before they go live (…). That’s a higher standard than that required of most Facebook ads, which are bought and uploaded to the site through an automated system.

Facebook’s PR is hard at work, trying to paint a different, more benign picture of the company.

Oct 9, 2017 — In another spectacular U-turn, Google, too, announces it has found “evidence that Russian operatives exploited the company’s platforms in an attempt to interfere in the 2016 US election”, writes The Independent. “Tens of thousands of dollars” were spent on “spreading disinformation” across YouTube, Google search, the DoubleClick ad network and Gmail, the company declares.

One month earlier, a Google spokesperson had given the media the exact opposite answer: “we’ve seen no evidence this type of ad campaign was run on our platforms”.

In the meantime, a New York Times investigation uncovers greater details of the workings of Russian-led Pages “Being Patriotic”, “Secured Borders” and “Blacktivist”. Albright refers to it as “cultural hacking”.

Graphika founder John Kelly has the deepest insight:

Rather than construct fake grass-roots support behind their ideas — the public relations strategy known as “Astroturfing” — the Russians sought to cultivate and influence real political movements, Mr. Kelly said.
“It isn’t Astroturfing — they’re throwing seeds and fertilizer onto social media,” said Mr. Kelly. “You want to grow it, and infiltrate it so you can shape it a little bit.”

Oct 11, 2017 — Microtargeting firm Cambridge Analytica enters the scenario. According to The Daily Beast, the company, best known for its alleged involvement in the Brexit and Trump campaigns — and the related dystopian claims about profiling each voter with thousands of individual data-points — “is now facing scrutiny as part of an investigation into possible collusion between the president’s team and Russian operatives”.

Cambridge Analytica is also facing scrutiny in the UK.

Facebook’s PR efforts are also faltering. A clear example is how the company fails to provide any meaningful answers to 12 legitimate questions from the New York Times — right after its CSO ranted on Twitter about the media getting Facebook wrong, and how they should be speaking to those “who have actually had to solve these problems and live with the consequences” instead.

On BuzzFeed, Kantrowitz notes that Facebook executives are increasingly turning to tweetstorms as a crisis management tool.

Another issue is raised by The Verge: Facebook, writes Casey Newton, structurally “rewards polarizing political ads”. This is what happens when engagement is the only metric of democracy and you treat “commercial and political speech as equals”, as Facebook does. If the price of a political message changes with its virality — just as when marketing a product — then the most ideologically extreme, filter-bubbled, divisive, sentiment-arousing message becomes not only effective propaganda (through targeting), but (cost-)efficient too.
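The dynamic Newton describes can be illustrated with a toy engagement-weighted ad auction. The actual ranking formula Facebook uses is not public, and the numbers below are invented; the point is only the shape of the incentive: if rank is bid times predicted engagement, a divisive ad with twice the engagement wins the slot at half the bid.

```python
def effective_rank(bid_usd: float, predicted_engagement: float) -> float:
    # Toy engagement-weighted auction: rank = bid x predicted engagement.
    # Real ad-auction formulas are proprietary and more complex.
    return bid_usd * predicted_engagement

bland    = effective_rank(bid_usd=2.00, predicted_engagement=0.01)
divisive = effective_rank(bid_usd=1.00, predicted_engagement=0.04)

# The divisive ad outranks the bland one while bidding half as much:
print(divisive > bland)  # True
```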

Actually, Russian ads were so effective they ended up on Pinterest, writes the Washington Post. “We believe the fake Facebook content was so sophisticated that it tricked real Americans into saving it to Pinterest,” says Pinterest head of public policy Charlie Hale.

How? Here are a couple of examples:

A Pinterest board dedicated to Ideas for the House, for example, features an image of a police officer with copy that says “Georgia Police Officer was Fired for Flying the Confederate Flag.” The Pinterest user found the image on Twitter, where it was originally posted by an account called Being Patriotic. (…)
In another example, a pro-Texas Pinterest board titled Texas Life includes a photograph of a man wearing a cowboy hat with a caption encouraging people to “like and share if they want to stop the Islamic invasion of Texas.” In the upper-right hand corner of the image is a tiny icon saying “Heart of Texas,” and a link to a Facebook page (…) Pinterest’s data shows that the image was pulled from Instagram, which is owned by Facebook.

Oct 12, 2017 — Albright is contacted by three Facebook officials. They are concerned about his recently published research on the potential reach of the Russian ads — far bigger than the company’s estimates. More than that: they make sure his experiment cannot be replicated.

Here’s how the Post reports the facts:

(Albright) was not pleased to discover that they had done more than talk about their concerns regarding his research. They also had scrubbed from the Internet nearly everything — thousands of Facebook posts and the related data — that had made the work possible.
Never again would he or any other researcher be able to run the kind of analysis he had done just days earlier.

A “bug”, according to the company. “Public interest data”, according to Albright.

Some 77,000 signatories to a petition for ads transparency seem to agree.

Facebook may “owe Americans the truth”, but it won’t be that simple to get it to tell it. Sheryl Sandberg, in an exclusive interview with Axios, highlights another complication: If the Russian-backed ads had not been bought by “inauthentic” accounts posing as Americans, “most of them would be allowed to run”.

However, Sandberg is in favor of a full release of the ads.

Oct 13, 2017 — CNN gets inside the workings of another propaganda Page, “Don’t Shoot Us” — a possible reference to the “Hands Up Don’t Shoot” slogan. “Posing as part of the Black Lives Matter movement”, the Page “used Facebook, Instagram, Twitter, YouTube, Tumblr and Pokémon Go and even contacted some reporters in an effort to exploit racial tensions and sow discord among Americans” — at the same time “galvanizing African Americans to protest and encouraging other Americans to view black activism as a rising threat”.

Even exploiting the popular Nintendo-Niantic augmented reality game:

The website (…) links to a Tumblr account. In July 2016, this Tumblr account announced a contest encouraging readers to play Pokémon Go, the augmented reality game in which users go out into the real world and use their phones to find and “train” Pokémon characters.
Specifically, the Don’t Shoot Us contest directed readers to go find and train Pokémon near locations where alleged incidents of police brutality had taken place. Users were instructed to give their Pokémon names corresponding with those of the victims. A post promoting the contest showed a Pokémon named “Eric Garner,” for the African-American man who died after being put in a chokehold by a New York Police Department officer.
Winners of the contest would receive Amazon gift cards, the announcement said.

On Facebook, the group’s Page — some 250,000 strong — publicized “at least one real-world event designed to appear to be part of the Black Lives Matter Movement”. The protest was supposed to take place outside the police department where the officer who shot Philando Castile worked, in Saint Paul, Minnesota.

Twitter, in the meantime, is also accused of deleting crucial information:

Twitter has deleted tweets and other user data of potentially irreplaceable value to investigators probing Russia’s suspected manipulation of the social media platform during the 2016 election, according to current and former government cybersecurity officials.

Oct 16, 2017 — Hillary Clinton proclaims that we are at the dawn of a “cyber cold war”:

In addition to hacking our elections, they are hacking our discourse and our unity. We are in the middle of a global struggle between liberal democracy and a rising tide of illiberalism and authoritarianism. This is a kind of new cold war and it is just getting started.

Oct 18, 2017 — Facebook Head of Messenger, David Marcus: “When you design a platform that reaches 2 billion people every month, sometimes bad things happen.”

Like the ones The Washington Post finds:

Russian operatives used a fake Twitter account that claimed to speak for Tennessee Republicans to persuade American politicians, celebrities and journalists to share select content with their own massive lists of followers, two people familiar with the matter said.

Controversial political consultant Roger Stone and embattled former Trump National Security Adviser Michael Flynn are among the top figures who shared material from the account, named @ten_GOP.

“They were trying to influence influencers,” says Albright of the operation.

More “bad things” happened, though: the (predictable) fact, admitted by Marcus, that some Russian agents were also “using Messenger to communicate with their users”. And the finding, from ProPublica’s ads-gathering tool, that — notwithstanding the company’s reassurances and increased vigilance — some targeted ads from the extremist right-wing party AfD made it to the platform, as well as a concerted targeting effort from an unspecified entity called “Greenwatch”, aimed at mocking the German Green Party.

After all, Facebook actively helped the AfD’s digital team design its targeting strategies, as Bloomberg wrote in September.

But the company, as well as Google, also helped target anti-refugee ads from the conservative nonprofit ‘Secure America Now’, especially in swing states like Nevada and North Carolina. Among them: “a pair of controversial faux-tourism videos, showing France and Germany overrun by Sharia law”, as well as “ads that linked Democratic Senate candidates with Syrian refugees and terrorists”, Bloomberg reports.

And there’s more:

Facebook’s collaboration with Secure America Now went beyond optimizing its ad reach, and included efforts to test new technology. In one instance, Facebook used the Secure America Now campaign to try out a vertical video format, which the Facebook reps were eager to see used on a large scale.

Oct 19, 2017 — Thanks to The Daily Beast, we now know why Sen. Warner was so mad at Twitter after the congressional hearing. All the company presented the investigators with was a thumb drive containing a batch of 1,800 tweets from the Kremlin’s outlet Russia Today. None of them helped “shed any light on the 201 Twitter accounts suspected to be Kremlin imposters that the company publicly identified in a blog post”.

We also have the first details of the first piece of legislation actually proposed on online ads transparency: proponents Warner and Klobuchar are joined by John McCain in the “Honest Ads Act”, of which Recode sees a preview copy.

And writes:

the new Senate bill — obtained by Recode before its official introduction on Thursday — seeks to impose new regulations on any website, web application, search engine, social network or ad network that has 50 million or more unique U.S. visitors in a majority of months in a given year.
For campaigns that seek to spend more than $500 on total political ads, tech and ad platforms would have to make new data about the ads available for public viewing. That includes copies of ads, as well as information about the organizations that purchased it, the audiences the ads might have targeted and how much they cost.
The new online ad disclosure rules would cover everything from promoted tweets and sponsored content to search and display advertising. And it includes ads on behalf of a candidate as well as those focused on legislative issues of national importance, according to a copy of the bill.

Will they succeed?

Possibly. The New York Times, however, reports (Oct 23) that

the tech industry, which has worked to thwart previous efforts to mandate such disclosure, is mobilizing an army of lobbyists and lawyers — including a senior adviser to Hillary Clinton’s campaign — to help shape proposed regulations.

The “senior adviser” is, in fact, the same lawyer that helped Facebook escape ads disclosure regulations back in 2011.

On Twitter, Daniel Kreiss raises two issues. First, targeted political ads might not be as effective as we presume; and second, would we be having the same conversation if Hillary, instead of Donald, had made it to the White House, and as narrowly?

Oct 24, 2017 — Twitter announces “steps to dramatically increase transparency for all ads” on the platform. The “industry-leading” effort will be coordinated through a novel “Transparency Center”, detailing

All ads that are currently running on Twitter, including Promoted-Only ads
How long ads have been running
Ad creative associated with those campaigns
Ads targeted to you, as well as personalized information on which ads you are eligible to receive based on targeting.

The Center also features a “special section” for electioneering that will include

All ads that are currently running or that have run on Twitter, including Promoted-Only ads
Disclosure on total campaign ad spend by advertiser
Transparency about the identity of the organization funding the campaign
Targeting demographics, such as age, gender and geography
Historical data about all electioneering ad spending by advertiser

The updates will first roll out in the US, and then be implemented globally.

Oct 27, 2017 — Facebook is adopting new transparency measures for ads too. “Starting next month”, writes VP of Ads Rob Goldman, “people will be able to click ‘View Ads’ on a Page and view ads a Page is running on Facebook, Instagram and Messenger — whether or not the person viewing is in the intended target audience for the ad”.

Also, for “federal-election related ads” Facebook will:

Include the ad in a searchable archive that, once full, will cover a rolling four-year period — starting from when we launch the archive.
Provide details on the total amounts spent.
Provide the number of impressions delivered.
Provide demographics information (e.g. age, location, gender) about the audience that the ads reached.

Political advertisers will also have new identification requirements, including the verification of “both their entity and location”.

Zuckerberg calls this “one of many important steps forward” the company intends to make, with more to come “soon”.

And yet, “soon” might not be enough, as the changes will take months to take effect. “We will start this test in Canada and roll it out to the US by this summer, ahead of the US midterm elections in November, as well as broadly to all other countries around the same time”, writes Goldman.

What of the votes taking place in other countries before the full implementation of the transparency measures?

Oct 30, 2017 — “At least 60 rallies, protests and marches were publicized or financed by eight Russia-backed Facebook accounts,” writes The Wall Street Journal. Again, cutting deep into the country’s emotional fractures.

Also, Facebook itself suggested to advertisers — during the 2016 election — how to politically target a “fractured” USA through 14 electoral segments. It’s “a blueprint for exploiting the country’s divisions”, writes BuzzFeed’s Kantrowitz.

According to a political advertising sales pitch obtained by BuzzFeed News, Facebook carved the US electorate into 14 segments — from left-leaning “youthful urbanites” to a pro-NRA, pro–Tea Party group it bizarrely labeled as “the great outdoors.” It detailed their demographic information — including religion and race in some cases — and offered them to political advertisers via Facebook’s sales teams. For advertisers using Facebook’s self-serve platform, the segments could be reached by purchasing larger bundles ranging from “very liberal” to “very conservative.”
(Source: BuzzFeed News)

It’s not as sophisticated as the micro-targeting deployed, for example, by the Trump campaign, but it can still be useful “to those without access to proprietary data”, argues a Democratic operative interviewed by BuzzFeed.

Oct 31, 2017 — Ahead of their upcoming Senate testimony, Facebook, Google and Twitter revise the estimates of the reach of Russian ads on their platforms, reports say.

Facebook, in particular, has gone from dismissing comments on the influence of the platform on the election (a “pretty crazy idea”, said Zuckerberg immediately after Trump was elected), to regretting those comments (“Calling that crazy was dismissive and I regret it”), then admitting the Russian-backed ads could have reached around 10 million users, and finally putting that estimate at 126 million in its testimony.

“One hundred and twenty fake Russian-backed pages created 80,000 posts that were received by 29 million Americans directly but then amplified to a much bigger potential audience by users sharing, liking and following the posts”, writes the Guardian.

And yet, the company is still trying to downplay their importance. In the testimony, Colin Stretch, a lawyer for the company, writes:

Our best estimate is that approximately 126 million people may have been served one of their stories at some point during the two-year period. This equals about four-thousandths of one percent (0.004%) of content in News Feed, or approximately 1 out of 23,000 pieces of content.

Twitter will also announce it has found “more than 2,700 accounts” tied to the Russian Internet Research Agency — a tenfold increase over the 200 previously disclosed.

“Google, which previously had not commented on its internal investigation, will break its silence”, writes Recode.

In a forthcoming blog post, the search giant confirmed that it discovered about $4,700 worth of search-and-display ads with dubious Russian ties. It also reported 18 YouTube channels associated with the Kremlin’s disinformation efforts, as well as a number of Gmail addresses that “were used to open accounts on other platforms.”

Again, none of this seems to be having a financial impact on the tech giants:

A SurveyMonkey/Axios poll confirms: most users are still in love with Big Tech, no matter the scandal.

(Source: Axios)

Just before the hearing, Business Insider reveals that a small number of the ads made it to other sites.

Facebook, Google and Twitter provide their first testimony on Capitol Hill, in front of the Senate Judiciary Committee. The companies are represented by general counsel Colin Stretch (Facebook), deputy general counsel Sean Edgett (Twitter), and information security director Richard Salgado (Google). Some would have preferred the CEOs to testify — but that won’t happen.

During the two-and-a-half-hour Q&A, many things emerge.

Among them:

  • Internal investigations at all three companies are still ongoing, which means that we — and they — still don’t have a full picture of what happened on the platforms, and that more revelations may be yet to come
  • Facebook claims that Russian-backed ads were about “fomenting discord about the validity of his election”, and announces “it will double its safety and security staff to 20,000, including contract workers, by the end of 2018”.
  • Twitter argues that fewer than 5 percent of its 330 million active user accounts are “false, spam or automated”. This figure includes the hotly debated “political bots” — and many think the estimate is too low
  • The content of some of the ads is revealed to the public. For example:
(Source: Recode)
This tweet depicts a Photoshopped image of actor and comedian Aziz Ansari encouraging voters to submit their vote for president via Twitter, which is not a legitimate way to vote in a U.S. election. Sen. Richard Blumenthal called it “a deliberate misleading of people.” Twitter says it took down this tweet, “and all other tweets like it,” but could not say how many people may have tried to vote via Twitter.
  • Confirming what COO Sandberg stated in the Axios interview, Facebook argues that the problem with Russian-backed ads is not their content, but the fact they’ve been promoted by inauthentic accounts:
Foreign agents using their real names on Facebook can try to cause chaos as long as they’re acting within Facebook’s rules, Facebook General Counsel Colin Stretch’s testimony revealed. “It wasn’t so much the content — although to be clear much of that content is offensive and has no place on Facebook — but the real problem with what we saw was its lack of authenticity,” Stretch said. If Kremlin agents use real accounts and abide by the platform’s rules, that implied, they could go ahead and post away.
  • The three companies repeatedly stress that the numbers show Russian operations account for only a minimal share of the content on their platforms. From Facebook’s testimony:
Our best estimate is that approximately 126 million people may have been served content from a Page associated with the IRA at some point during the two-year period. This equals about four-thousandths of one percent (0.004%) of content in News Feed, or approximately 1 out of 23,000 pieces of content.
  • Sen. John Kennedy (R-LA) gets Facebook to admit it can’t know all of its advertisers: “How does Facebook, which prides itself on being able to process billions of data points and instantly transform them into personal connections for its users”, he asks, “somehow not make the connection that electoral ads, paid for in roubles, were coming from Russia?”
  • None of the companies explicitly support the ‘Honest Ads Act’ that’s being devised by Warner, Klobuchar and McCain to regulate political ads on social media
  • Democratic Senator Sheldon Whitehouse raises an important point, a tough one for both platforms — and their self-regulatory efforts — and upcoming legislation: “How do you deal with the problem of a legitimate and lawful but phony American shell corporation, one that calls itself say ‘America for Puppies and Prosperity,’ that has a drop box as its address, and a $50 million check in its check book that it’s using to spend to manipulate election outcomes?”. Twitter admits “it’s a problem” it still hasn’t figured out how to solve.

As Recode notes, however, “legislators missed one key point”: that this is not just about ads.

Sure, Russian trolls ran a limited number of ads on Google, about 3,000 of them on Facebook and then targeted Twitter through organizations like RT, a news agency supported by the Kremlin. But there’s a whole universe of other content — organic posts, stories, tweets and more that cost nothing to publish — that received far less attention.
That included the roughly 80,000 organic posts on Facebook that appeared in roughly one-third of Americans’ news feeds. It included bots on Twitter, which received some attention during the hearing but little follow-up, even after the company offered a low estimate that only about 5 percent of accounts are automated fakes. This was the real trouble — not only because organic content has vast reach, but because it is seemingly impossible to regulate it in a way that doesn’t create conflict with the First Amendment.

Also, the first hearing didn’t provide additional clues as to how effective the Russian propaganda effort was in actually influencing the vote.

Sen. Mazie Hirono, a Hawaii Democrat, asked: “In an election where a total of about 115,000 votes would have changed the outcome, can you say that the false and misleading propaganda people saw on your Facebook didn’t have an impact on the election?” Stretch responded: “We’re not well positioned to know why any one person or an entire electorate voted the way that it did.”

Nov 1, 2017 — Facebook, Google and Twitter face two more hearings, in front of the Senate and House Committees. And finally, Committee members make a sample of the Russian-backed ads available to the public.

(Source: Sen. Mark Warner, on Twitter)

To many, they demonstrate the “sophistication” of Kremlin trolls and propagandists, showing “a striking ability to mimic American political discourse at its most fractious”, and also “a shrewd understanding of how best to use Facebook to find and influence voters most likely to respond to the pitches”, writes The Washington Post. It’s “a surgical deployment of incendiary content across all three platforms with the aim of dividing Americans at critical moments in the election season”, adds the New York Times.

Facebook’s Stretch concurs: “It was undertaken by people who understand social media. These people were not amateurs.”

On Twitter, however, users point out how weak American democracy must be to actually be endangered by trolls and memes of this sort:

But “that was far from the only Christian-focused meme” shared by the 217,000-likes-strong “Army of Jesus” Page, argues The Daily Beast, while revealing its Instagram presence too.

Many of Army of Jesus’ posts simply encouraged viewers to like or re-share content, much more than other Russian government-linked troll groups, allowing them to grow an outsized audience at no immediate cost.
One particular post implored users to “Like for a Christian American,” in an effort to target religious Facebook users and grow subscriber growth. Once subscribed, later posts would ask users to “Like for Trump and Ignore for Hillary.”

The ads focused on appealing to independence

on paranoia and hatred toward “illegal aliens”

on “keeping social divides fresh” on LGBT issues

and on police violence, boosting both Black Lives Matter

and Blue Lives Matter.

Also, from the hearings we learn that:

  • Twitter banned 106 accounts for creating some 700 “vote-by-text” tweets, apparently targeted at Clinton voters.
  • Stretch adds 16 million people — plus 4 million prior to October 2016 — reached by Russia-backed ads on Instagram to the 126 million previously reported for Facebook. This makes for a total reach of 150 million users across both platforms — many more than the initial 10 million estimate. And yet, some are not convinced:
  • Twitter changes its mind overnight, and decides to support the “Honest Ads Act”:
  • Sen. Richard Burr and Sen. Warner voice more doubts about the 5% estimate of fake and automated accounts “pushing disinformation” on the platform given by Twitter the previous day. The company will have to provide more data
  • Sen. Dianne Feinstein, in a strongly-worded attack on the platforms, argues that this is “the beginning of cyber warfare”:
  • Sen. Marco Rubio asks whether foreign meddling in the election actually amounts to a violation of Google, Facebook and Twitter’s Terms of Service:
  • Sen. Warner asks: were the 30,000 accounts that Facebook took down ahead of the French election also active during the US election? The answer comes via Twitter, from Facebook’s CSO, Stamos:
  • Among the targets is Maine Gov. Paul LePage:
In 2016, Kremlin-backed disinformation at times accused him of trying to “kill blacks,” and a year later, another Russia-tied page took aim at LePage’s critics, saying that “liberals are now acting like terrorists.”
  • However, when Rubio asks if users were “targeted by name”, all three companies say it didn’t happen
  • Facebook acknowledges that the Clinton and Trump campaigns spent a combined $81 million on the platform
  • All three companies admit they don’t know how much they spend on combating bots and disinformation on their platforms. Zuckerberg, absent from the hearing, offers an indirect answer while presenting his company’s (astonishing) Q3 2017 financial results:
Our community continues to grow and our business is doing well. But none of that matters if our services are used in ways that don’t bring people closer together. We’re serious about preventing abuse on our platforms. We’re investing so much in security that it will impact our profitability. Protecting our community is more important than maximizing our profits.
  • As previously seen, this is not just about ads: it’s about organic content. What to do about it?
  • “Don’t let nation-states disrupt our future, you’re our front line of defense”, argues Sen. Burr, echoing other comments that would enlist tech companies as US patriots and, in particular, as instruments of US foreign policy:

Some fundamental questions however remain. First, we don’t really know how the platforms established whether an ad was linked to the Kremlin or not.

Second, a fundamental contradiction remains at the heart of the companies’ defense line: