Tracer Newsletter #4 (11/03/19) - McAfee warns how deepfakes could circumvent cybersecurity protocol

Henry Ajder
Sensity
Dec 5, 2019

Welcome to Tracer, the newsletter tracking the key developments surrounding deepfakes/synthetic media, disinformation, and emerging cybersecurity threats.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

The Recap

Important stories, papers, and recent developments

McAfee warns how deepfakes could circumvent cybersecurity protocol

In a keynote speech at the RSA cybersecurity conference, McAfee’s CTO Steve Grobman and Chief Data Scientist Celeste Fralick warned of the threat posed by increasingly realistic deepfakes.

What is the fear?

The key concern voiced by Grobman and Fralick is that highly realistic deepfakes may enable a new wave of social engineering attacks that capitalise on our inability to distinguish deepfakes from real audiovisual media. Grobman, in particular, emphasised the “alarming rate” of progress being made with deepfakes, and the immediacy of the threat to individuals and businesses.

Accessible and Automated

The manipulative potential of deepfakes was demonstrated onstage with a deepfake ‘lip-sync’ video of Grobman speaking Fralick’s words. Fralick emphasised that the deepfake was created with free, open-source software, and that the process was completed over a weekend using publicly available footage of Grobman.

Spear phishing and the challenge of deepfakes to cybersecurity

The main use case singled out by Grobman is how deepfakes could enhance spear phishing attacks, where bad actors pose as an individual familiar to a victim in order to obtain valuable or compromising information. Deepfakes that accurately impersonate family or friends may enable spear phishing attacks to expand out of the email domain and into phone and video calls. Grobman also acknowledged that current means of protecting against these attacks may fail, as high-quality deepfakes would likely fool the image classifiers used to verify audiovisual media.

Black activists fear being delegitimised online by ‘blackfishing’ bots

With the 2020 US presidential election looming, black activists fear that social media bots posing as black voters will undermine genuine black social media users and their political activism.

What has sparked these concerns?

The current controversy centres on the use of the hashtag #ADOS (American Descendants of Slavery), a political movement that encourages black presidential candidates to engage with policy supporting black Americans whose ancestors were enslaved during the transatlantic slave trade. The hashtag has reportedly been hijacked by bad actors seeking to spread disinformation and manipulate voter behaviour surrounding certain Democratic presidential candidates.

Blackfishing and the weaponisation of black voices online

Whilst the #ADOS hashtag and movement are legitimate, bad actors who create ‘blackfishing’ bots that fraudulently pose as black people threaten to create universal suspicion as to whether politically active black accounts are real. This has led some activists to tweet under #notabot and post pictures of themselves holding signs with their Twitter account details. Many have also called on social media platforms to invest more resources into tackling the problem before the height of campaigning.

It’s not the first time weaponised misinformation has targeted black voices in the US

Whilst blackfishing has been observed as early as 2014, early cases primarily involved human trolls creating fake accounts and operating them manually. The 2016 US election saw Russian state actors automate blackfishing on an unprecedented scale, deploying bots that systematically criticised Hillary Clinton whilst also posting inflammatory adverts on Facebook under the banner of the Black Lives Matter movement.

Google’s controversial AI call assistant launches at restaurants in the US

Google’s AI call assistant Duplex is being rolled out following its controversial demo at I/O 2018.

What is Duplex?

Announced in May last year, Duplex is Google’s AI assistant that can place calls on behalf of a user, performing tasks such as booking a haircut or a table at a restaurant. The demo generated significant coverage when first released, owing to the synthetic voice’s uncanny realism and its replication of ‘human’ elements of speech such as pauses and ‘umms’/‘ahhs’.

How is it currently being used?

Duplex is currently only available to Google Pixel 3 smartphone users, across the 43 states that do not have legislation blocking its implementation. At launch, it will be solely operational for booking restaurant reservations where an online booking system is unavailable. However, Google has previously indicated a desire to roll out a more universal version that can book doctors’ appointments and other services.

Ethical questions still remain

Duplex generated significant controversy when announced, with much of this focusing on the perceived deception of an AI assistant that sounds indistinguishable from a real person. Whilst Google has since assured that users will be notified they are speaking with Duplex before taking the call, pushback remains against synthetic media that imitates, and arguably ‘devalues’, real human interaction.

Other key developments

1) French President Emmanuel Macron has proposed a “European Agency for the Protection of Democracies” to combat “cyber-attacks and manipulation” aimed at EU member states. (Guardian)

2) A US cyber operation disabled the internet access of a notorious Russian ‘troll factory’ during the US mid-term elections last year, in order to counter the spread of weaponised misinformation. (Vox)

3) YouTube is rolling out a fact-checking feature where searches for sensitive topics, such as conspiracy theories or vaccinations, are given a truth rating at the top of the search page. (BuzzFeed News)

4) A clip from a video game released in 2009 went viral in India, presented as footage of the Indian military’s missile strike in Pakistan that heightened tensions between the two countries. (Boom)

5) In a blog post, Mark Zuckerberg outlined plans to shift Facebook’s ethos from ‘the town square to the living room’, emphasising small group interactions, privacy, and temporary posts. (Washington Post)

6) A deepfake of a Chinese actress created significant controversy within China. Whilst the creator insisted it was meant to be educational, he went on to apologise and delete the video. (Sixth Tone)

Audio-Visual

Videos, podcasts, and demos

Automatic face ageing through deep reinforcement learning

A recent paper by a team of researchers from US and Canadian universities outlines a new deep learning algorithm that can ‘age’ people based on existing video footage. This is demonstrated on a variety of celebrities, including a surprisingly accurate example of Donald Trump, who have been synthetically aged using the algorithm.

GANSynth: Learning to produce musical notes on different instruments

Google AI’s GANSynth is a GAN generator that creates audio samples from noise, which in turn can be used to produce high-fidelity and locally coherent music samples. Such samples are usually achieved with auto-regressive models of the WaveNet family, but have previously proven difficult to generate with GANs. A key ingredient of this contribution appears to be the audio representation: magnitudes and frequencies in the spectral domain. The end result is synthetic music that accurately replicates the individual timbre of different instruments.
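To make the “magnitudes and frequencies in the spectral domain” idea concrete, here is a minimal, illustrative Python sketch of that kind of representation, computed for a single synthetic note. All parameter choices (sample rate, FFT size, the 440 Hz test tone) are our own assumptions for demonstration, not GANSynth’s actual configuration.

```python
import numpy as np
from scipy.signal import stft

# Illustrative only: a spectral representation in the spirit of GANSynth --
# per-frame magnitudes plus unwrapped phase differences ("instantaneous
# frequencies") -- computed for a synthetic test tone.

sr = 16000                              # sample rate in Hz (an assumption)
t = np.arange(sr) / sr                  # 1 second of audio
audio = np.sin(2 * np.pi * 440 * t)     # a single 440 Hz sine "note"

# Short-time Fourier transform: a complex spectrogram of the signal
_, _, Z = stft(audio, fs=sr, nperseg=1024, noverlap=768)

magnitude = np.abs(Z)                   # how loud each frequency is per frame
phase = np.angle(Z)                     # raw phase: hard for a generator to model
# Frame-to-frame phase difference, unwrapped to remove 2*pi jumps --
# a smoother, more learnable target than raw phase.
inst_freq = np.diff(np.unwrap(phase, axis=1), axis=1)

print(magnitude.shape, inst_freq.shape)
```

The design intuition is that magnitude and instantaneous frequency vary smoothly over time for pitched instruments, which suits a GAN that generates the whole spectrogram at once, whereas raw waveforms or raw phase do not.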

Photo tampering throughout history

Whilst deepfakes may have refocused attention on the manipulation of visual media, the practice of doing so is as old as photography itself. This slideshow assembled by Fourandsix Technologies chronologically illustrates famous cases of photo tampering from 1860 to the present day.

Bold Ideas

Arguments and opinions that got us thinking

What Beauty GAN can show us about individuality and homogeneity

Michael Dempsey explores how Beauty GAN, an algorithm that creates new makeup styles, offers a window into how recreating the perfect ‘familiar’ also drives desire for the aesthetically unique.

Why low use of GPS tweets suggests a challenge for digital signatures

Kalev Leetaru argues that the low use of GPS-tagged tweets, and users’ preference for location privacy, pose a challenge to the adoption of secure digital signatures in the fight against deepfakes.

Want to receive new editions of Tracer direct to your inbox? Subscribe via email here!

Working on something interesting in the Tracer space? Let us know at info@deeptracelabs.com

To learn more about Deeptrace’s technology and research check out our website
