Spoofing face recognition with deepfake

Korvin AG
Predict
Published in
4 min readApr 10, 2018

(Originally appeared at: https://korvinag.com/spoofing_face_recognition_with_deepfake_march_07_2018.html )

It is now widely discussed that faking media with FakeApp is pretty much possible. Some even calls it a grave danger to the security of nations. But that’s just the tip of the iceberg. Why? Because it is as easy to fool the surveillance systems as to make a revenge porn video, that’s why.

The DeepFake — FakeApp story is developing and now even the mainstream is all about it. While most commentators are talking about how politicians and ex-lovers could be blackmailed or revenged its most interesting aspect certainly isn’t this.

It is that the video surveillance systems of today now have a very powerful countermeasure.

FakeApp is certainly neither the first, nor the last piece of code that is using commercial AI or other adaptive logic to fool some control mechanisms. In this case the control mechanism is no other than the viewer of the doctored video, who might or might not believes in what is on the screen. The actress of the porn video might be the celebrity, whose face it appears to have or it might be just a fake. Damages done, responsible parties will be searched for and eventually someone will be named. Perhaps he who created FakeApp. Protracted legal battles will ensue and the dust will be settled.

But in another arena, the consequences of this new technology are much more widespread. Within the lairs of big bad security organizations this technology is in the talks for a while now. Right after the Mumbai Attacks of 2008, some started to understand that very often the only use-able evidence is surveillance video footage. Back then it was supplemented with OK voice intercepts, but in the audio there was little to actually ID the terrorists. Not that the low-quality videos were much better, but this started a thinking about the possibility to somehow spoof the cams.

Not long after this it became clear that the quality of the surveillance video and the integration into a workable system is key to evade identification. In 2010, a hit squad assassinated a leading figure of Hamas in Dubai. Under just a couple of hours the local security services were able to crack the case and to ID the suspects, even though they were using all methods to cover their tracks. And not to mention, they were all hardcore professionals of the trade.

It was the moment when everybody started to worry, whether the new era of mass surveillance brings about a change in the ways how covert ops could still be run.

Or at least they were rethinking how such ops should be run in a country where there are a dozen CCTVs for each resident.

And one of the solutions (apart from efforts to further lessen the operators’ digital footprint) was to find a way to hack surveillance systems and alter the evidences.

In the ensuing work it was found that most mass surveillance systems are pretty easy to hack, since almost none of them has been designed to be robustly resisting intrusions. And all around a World new techniques appeared that are targeting these. Since their vulnerability vector is usually their frontend, it was sometimes enough to crack a small number of Wi-Fi routers or set-top recorders.

This job surely paid back its costs, as it gave insight for some into the lives of many. One notable case is when the Dutch AIVD hacked into the Russian APT29 Cozy Bear’s office video surveillance system, while they were hacking into the servers of the Democrats in the USA.

But all these are not the full pack. No, because the big time faking campaigns of the past years gave a strong rise for the need to fake videos. We read fake news, we use fake geospatial imagery and even some screenshots from video games to prove points at the bargaining table. And it works. And if it works to make it unable to bring to justice who shot down a commercial airliner over the Donbas, well it will be good enough for any other scenarios as well.

Now let’s make an educated guess that FakeApp is not the first such software to fake videos using CGI. For example because such software had been around the film and computer game industry for ages now.

What we need to ask ourselves is that are there some who started mating these two techniques? I mean the access to video surveillance and video faking? Because if so, then there are much to worry about for those countries who invested heavily in this field. And also, it means that Big Brother could be fooled …

… by the technically adapt.

And what would be a possible attack scenario? Perhaps this to-do list proves an insight:

1.) Gain access to the CCTV’s system, presumably over its closed Wi-Fi network.
2.) Locate the recorder’s HDD or SSD storage unit.
3.) Understand the file naming sequence of recorded stills and footages.
4.) Locate the camera providing cover for a given field-of-view.
5.) Download a footage where there are no people seen, just the background.
6.) Download the footage that needs doctoring.
7.) Fake the footage using FakeApp or other CGI software.
8.) Upload the resulting file with the same file attributes to the CCTV system’s storage unit.
9.) Job done, someone else was there! :)

As this vector needs no attempts to go about the seriously encrypted communication with those databases containing digitized samples of ID’d persons, chances are that this will be possible without any 0days or other sophisticated tools.

Thanks for reading and please feel free to share:
Korvin AG

( If You want to contact me regarding this article or for any other reason, You could do so by sending an email to info@korvinag.com )

--

--

Korvin AG
Predict
Writer for

Ex-military pro-stability thinker and a serious believer of a brighter future.