Facebook’s Cryptocurrency, Cyberwarfare Escalation With Russia, and Deepfake Countermeasures
Facebook announced Libra, a product that may or may not be a cryptocurrency, depending on who you ask. The announcement has not been met with the enthusiasm Facebook was probably hoping for: Charles Arthur called it “Alarming and Unnecessary,” and Jennifer Grygiel suggested that regulators block the product outright. Wired published a less opinionated, more explanatory piece about the proposed currency. My own article about Libra is forthcoming, but suffice it to say that I don’t trust Facebook as far as I can throw them.
Meanwhile, traditional banks are flirting with cryptocurrency as startup crypto firms continue to grapple with regulators. All signs point to the continued domestication of cryptocurrency: the anonymity-preserving cypherpunk vision has been largely co-opted by mainstream powers. Even Facebook, the most flagrant privacy violator around, has now thrown in with cryptocurrency, signifying a sea change in how the technology is being applied, by whom, and to what ends.
This week the New York Times broke the news that the U.S. has inserted malware into Russia’s electric grid. The move is being branded as retaliation: for years the F.B.I. and D.H.S. have warned that Russia has been inserting malware into the U.S. grid. At the same time, Wired’s article on the subject is aptly titled “How Not To Prevent a Cyberwar With Russia.” The argument in favor of the latest action reminds me quite a lot of the Cold War doctrine of mutually assured destruction, further confirmation that history may not repeat itself exactly, but it does rhyme.
This escalation could also be a sign that U.S. intelligence believes Russia is behind Triton, malware that targets industrial safety mechanisms and shut down a Saudi Arabian oil plant in 2017. The hacking group behind Triton, known as Xenotime, has also been found probing the U.S. grid. Triton is incredibly dangerous and could do much more than hamper an electrical grid; researchers have said the malware could easily have caused the Saudi oil plant to explode.
While many AI researchers agree that we don’t have the forensic tools necessary to effectively combat deepfakes, this week Adobe announced a new piece of software that may help. In conjunction with researchers at UC Berkeley, Adobe wrote about their new AI, which can detect whether a photo has been manipulated in Photoshop. The research is promising and timely: Congress held its first hearing on deepfake technology last week, and anxiety about fake news and other propaganda is rising as the 2020 U.S. election approaches. It appears as though Russia has continued to run voter influence campaigns in the E.U., and we have every reason to believe they’ll be up to the same tricks in advance of 2020.
Some tidbits: Working as a Facebook moderator is still horrifying. California may force Uber and Lyft drivers to be classified as employees rather than contractors, with big implications for both the drivers and the companies. NASA’s Jet Propulsion Lab was the victim of cyber espionage, and a rogue Raspberry Pi created the vulnerability. A Florida city paid a $600,000 ransom after falling prey to a ransomware attack. And Wired profiled a pro-social use of facial recognition software: combating child trafficking.