Go Phish: Unexpected Hacks and Attacks Deliver Unwelcome Surprises

This week in Fraud Trends, February 28, 2020.

Christopher Watkins
DataVisor
4 min read · Feb 28, 2020


Successful fraud prevention today requires comprehensive, non-stop protection: no let-ups, no breaks, no vulnerabilities. An attack can come at any time, from anywhere, and anyone can be a victim.

We learned about some surprising victims this week. For example, in this story from CNN:

“Corcoran fell for a phishing scam. It’s common, too: Nearly 30,000 people reported being a victim of that type of scam last year. Together they reported nearly $50 million in losses, according to the FBI’s 2018 Internet Crime Report.”

In the aftermath of the attack, Barbara Corcoran offered some sage advice on Twitter:

The folks over at the Desjardins Group got a bit of a surprise this week as well:

“Original estimates by the Quebec-based financial institution set the cost of recovering from the breach at $70m. The co-operative has now said that the final breach bill is likely to be $108m.”

In one of the bigger stories this week, the company holding some 3 billion of our photos got a bit of an unpleasant surprise:

“Clearview AI, the company whose database has amassed over 3 billion photos, has suffered a data breach, it has emerged. The data stolen in the hack included the firm’s entire customer list–which will include multiple law enforcement agencies–along with information such as the number of searches they had made and how many accounts they’d set up.”

The fallout from the Clearview AI story is going to continue for a long, long while, and new stories are already hitting the news, like this one:

“Clearview AI told the Times that its app finds matches up to 75 percent of the time, though the app has yet to be tested by any independent organizations, like the National Institute of Standards and Technology. In December, NIST released a report that found current facial recognition technology came up with false positives more often for Asian and black people when compared with false positives for white people.”

False positives of any kind are a clear sign that something isn’t working when it comes to detection solutions and approaches. DataVisor published a new e-book this week that offers a deep dive into why false positives happen, why current solutions fall short, and how machine learning can reduce false positives without introducing risk or CX friction.

A critical piece of the fraud prevention puzzle is a paradigm shift: rather than focusing entirely on identifying bad actors, teams must also identify and understand good users. This shift is particularly important when it comes to addressing the problem of false positives.
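To make that shift concrete, here is a minimal Python sketch of how a “good user” trust signal might temper a raw fraud-risk score before a transaction is flagged. The feature names, weights, and thresholds are illustrative assumptions for this post, not DataVisor’s actual method.

```python
# Hypothetical sketch: temper a raw fraud-risk score with a "good user" trust
# score, so established users with clean histories are not flagged on raw risk
# alone. All weights and thresholds are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class UserActivity:
    fraud_risk: float       # 0.0 (benign) .. 1.0 (almost certainly fraud)
    account_age_days: int
    successful_orders: int
    chargebacks: int


def trust_score(user: UserActivity) -> float:
    """Crude proxy for 'known good' behavior: tenure plus clean purchase history."""
    tenure = min(user.account_age_days / 365.0, 1.0)
    history = min(user.successful_orders / 50.0, 1.0)
    penalty = min(user.chargebacks * 0.25, 1.0)
    return max(0.5 * tenure + 0.5 * history - penalty, 0.0)


def should_flag(user: UserActivity, risk_threshold: float = 0.8) -> bool:
    """Flag only if risk stays above the threshold after earned trust is applied."""
    adjusted_risk = user.fraud_risk * (1.0 - 0.5 * trust_score(user))
    return adjusted_risk >= risk_threshold


# Same raw risk score, very different outcomes: the long-tenured customer with
# a clean history is not flagged, while the brand-new account is.
loyal = UserActivity(fraud_risk=0.85, account_age_days=900, successful_orders=60, chargebacks=0)
brand_new = UserActivity(fraud_risk=0.85, account_age_days=3, successful_orders=0, chargebacks=0)
print(should_flag(loyal))      # False -- earned trust pulls adjusted risk below threshold
print(should_flag(brand_new))  # True  -- no earned trust, raw risk stands
```

In a production system the trust signal would come from a trained model over far richer behavioral features, but the point of the sketch stands: modeling good users explicitly gives you a principled way to lower false positives without simply loosening the fraud threshold.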

That said, it’s never a bad thing when we do identify a fraudster:

“Josh Minkler with the U.S. Attorney’s Office says 32-year-old Tuong Quoc Ho stole information from hundreds of individuals to open PayPal and eBay accounts and sell items that were purchased with stolen credit card information. He collected more than $2 million.”

Given our focus on fraud prevention in the big data age, we talk a great deal about data, and about those who work with it. So, for this week’s tweet of the week, we’ve got a little something for the data scientists!

Tweet of the Week!

One last item: Will you be attending MRC this year? If so, we’re calling all fraud fighters to Booth #207 at MRC 2020! If you can spot the hidden threat, you’ll win a free T-shirt!

Additional Reading:

See you next week, for another edition of This Week in Fraud Trends!
