Why Web3 Needs To Learn From Web2

rootsoftrust · Published in DataBulls · 8 min read · Aug 11, 2023

From hacks to the latest debacle at Worldcoin, web3 companies need to apply the same security and risk management processes that the law already requires of web2 companies.

Don’t get me wrong, I’m an avid web3 fan. I have watched it defy critics, spur new research into cryptography, distributed processing and trustless computing, and create new economic and business models. But I’ve also seen it let itself down with some complete howlers that would not be acceptable in a commercial establishment built on web2, so why should they be acceptable in a web3 company that wants to scale and increase adoption? The question to ask about Worldcoin is not what happened but, as an ex-colleague put it, “How did they get this far BEFORE being raided?”

Having a blockchain-centred view of security and risk without accepting the regulations that safeguard the rights of the people you want to adopt your product is probably doomed to failure, lawsuits and prosecutions. Security and privacy are more than 51% of validators behaving well and keccak256. To explain my concerns, I will pick a couple of companies that stood out to me, starting, of course, with Worldcoin.

I first became aware of Worldcoin when the same ex-colleague put the following link in a Slack channel: https://www.technologyreview.com/2022/04/06/1048981/worldcoin-cryptocurrency-biometrics-web3/

It was concerning from a human dignity point of view and also from a privacy law standpoint. In my career I have held a number of roles, including Data Protection Officer, so I see red flags when companies collect biometric data (the most sensitive type of personally identifiable information) and ship it to countries outside the EU, especially when the company has an office inside the EU. Possibly in answer to the above allegations, Worldcoin requested an audit by two blockchain-centric companies that focused only on the code (and even that unearthed critical issues).

This is where web3 companies fall over: any ISO 27001, SOC 2 or data privacy auditor with a web2 background would have spotted within seconds of reading their (probably non-existent) DPIA and data protection policies that a raid was not only possible but imminent. What’s disappointing about Worldcoin is their apparent lack of consideration for data privacy regulations around the globe.

But then, Worldcoin may not be the only ones who don’t understand the ramifications of commercial decisions. I attended EthCC Paris virtually and became aware of Gnosis Pay, an idea to tie an IBAN, a debit card (Visa) and a blockchain wallet into one. It sounds like a great idea, so I had a look at their website. The first alarm bell was the phrase “compliant with EU regulations thanks to the full suite of payment licenses from our partners.”

Hmm. “Partners” and “outsourcing” are tricky words under the European Banking Authority’s regulations. The EBA wanted to clamp down on “empty” companies that claim to provide financial services such as payments but outsource all the critical processes. After the introduction of PSD2, which gives not just banks but other organisations the right to access and process payments, a lot of my work involved helping these same organisations create third-party management programs, because they had outsourced all their functions without realising that the EBA doesn’t care much about your outsourcing model: it has its own, which you are legally obliged to follow and which makes you just as responsible for a function whether it’s in-house or outsourced.

To be fair, Web2 payment service providers still get this wrong, so I can imagine it being a shock for Web3 companies. Either way, you still have to implement thorough security practices (an ISO 27001 or SOC 2 certificate usually goes some way to appeasing a regulatory body), transaction monitoring, strong customer authentication (SCA) and audit programs (internal audits and, depending on what’s outsourced, audits of third parties every year or two).

Not only does this have to be in the contract with the third party, but the regulator must also have a right of audit over the third party (and a right to audit you). It’s better to discuss these things upfront so that you know who’s paying for what; otherwise things get awkward, like eating out at a Michelin-starred restaurant with each party expecting the other to pay. The EBA has released a number of documents covering the above, including internal governance, PSD2, third-party management, and ICT risk and security management.

When you see a company advertise “Security First, enjoy peace of mind with the highest security standards available” without backing it up, run for the hills. What standard exactly? SOC 2, ISO, PCI, their own? Is there an independent audit to prove it? Are there bug bounty, vulnerability disclosure, threat hunting and threat intelligence programs, which are usually linked to “advanced” security? I’m not saying a certificate means 100% secure, but external audits do help weed out the deluded who think everything’s fine until it isn’t. I go into this more in the final “bugbear” section.

I would have preferred to see “PCI DSS certified” rather than “VISA certified”. Basically, if you issue cards from any of the PCI members (e.g. Visa, Mastercard), you have to comply with their security standard (which, to be fair, is a cut-down version of ISO/SOC 2 focused on keeping cardholder data secure, with tough luck for everything else). It’s not just the security side that worries me here, but also the re-engineering that has to happen when people try to bolt on security afterwards (it rarely works; it’s cheaper to start again). I think I’ll hold off on becoming a customer until I see more facts.

Maybe I shouldn’t have installed it on my work laptop…

Finally, my biggest bugbear: the hacks that have led to the loss of hundreds of millions of euros, pounds or dollars (whichever currency, they’re big losses) in cryptocurrency ecosystems and, more importantly, a loss of faith in Web3. Why does this cause a tired eye roll? Firstly, the hot air surrounding them. Discussing a week that included the disclosure of attacks on Conic Finance, Coinspaid and Alphapo, someone asked if it was all the work of nation-state actors. But isn’t this always the excuse? It sounds to me like an adult being beaten up by a 14-year-old schoolgirl but claiming they were jumped by 10 men with baseball bats. Why am I mentioning this?

Here’s Coinspaid’s response to being attacked: https://coinspaid.com/tpost/0zx28tmj51-coinspaid-is-back-to-processing-after-be It mentions “we suspect Lazarus Group, one of the most powerful hacker organisations, to be involved”, but read “10 men with baseball bats.” What makes me say this? For one thing, having been involved in incident response both during and after attacks. These usually involve known attack patterns that could have been avoided, or user behaviour that could have been corrected. If you click a dodgy link, it doesn’t matter whether it’s a nation state or a bored schoolgirl.

Coinspaid was apparently under attack for six months before an employee fell victim to a social engineering campaign built around a fake job offer (as an interim CISO for a company trying to become compliant, I once organised a phishing campaign based on job offers; it always works). Having said that, six months is a long time in which to heighten security and increase monitoring if you know you’re under attack. Did they? It’s also surprising that a company regulated by a financial regulator didn’t have work-only laptops for developers and an acceptable use policy.

Coinspaid also says Lazarus got away with a record low, even though at 37 million, Coinspaid lost more than the 23 million they claim Alphapo lost. Apparently Coinspaid is now “getting up and running … in the new secured environment.” Wait, what? So the old one wasn’t? They recorded high levels of network traffic on July 7th but claim to have been attacked on July 22nd. What steps were taken in between? Why didn’t the incident response team act? Is there a team? Or did they only now start looking through log files?

Operational security is just as important in Web3.

What I say next isn’t new, but here goes. Firstly, develop a risk management program. Know your assets: how they interact, what they depend on and who owns them. Which ones are worth securing and which aren’t? Which risks affect Web3 processes more than Web2 (e.g. a heavier reliance on open source)? What external risks cannot be controlled (e.g. financial and privacy regulation)? Develop threat models for valuable assets and processes, using common models such as STRIDE and OWASP.
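The inventory-and-score step above can be sketched in a few lines. This is a minimal illustration, not a framework: the asset names, owners and the 1–5 impact/likelihood scales are hypothetical, and a real program would feed them from an asset register rather than hard-coding them.

```python
from dataclasses import dataclass, field

# STRIDE categories: Spoofing, Tampering, Repudiation,
# Information disclosure, Denial of service, Elevation of privilege
STRIDE = ["S", "T", "R", "I", "D", "E"]

@dataclass
class Asset:
    name: str
    owner: str
    impact: int  # 1 (low) .. 5 (critical) if this asset is compromised
    likelihood: dict = field(default_factory=dict)  # STRIDE letter -> 1..5

    def risk_scores(self):
        # Classic risk = impact x likelihood, per STRIDE category
        return {cat: self.impact * self.likelihood.get(cat, 0) for cat in STRIDE}

# Hypothetical web3 assets; the numbers are illustrative only
signer = Asset("hot-wallet signing service", "platform team", 5,
               {"S": 4, "T": 3, "I": 4, "E": 4})
frontend = Asset("marketing site", "web team", 1, {"T": 2, "D": 3})

# Rank assets by worst-case STRIDE risk to decide what is worth securing first
ranked = sorted([signer, frontend],
                key=lambda a: max(a.risk_scores().values()), reverse=True)
print([a.name for a in ranked])  # signing service first
```

Even a toy table like this forces the two questions the paragraph above asks: who owns the asset, and is it worth securing.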

After developing a risk framework and identifying assets, create policies on privacy, security and risk. Implement device management and acceptable use practices. Have an incident response team that responds to anomalies and doesn’t stand down until there’s a clear picture of what is happening.

Web2 or Web3, a secure SDLC process is a necessity. Implementing access control, segregation and testing (not just functional testing) on code is not a bad thing. On code reviews of Node.js projects, I was always surprised how many developers hadn’t even run npm audit. Integrate open-source scanners into CI/CD pipelines and hash your code. No releases until fully tested, reviewed and audited.
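As an illustration of the “gate the pipeline on scanner output, then hash the release” idea, here is a minimal sketch. To stay self-contained it parses a simplified report shaped like `npm audit --json` output instead of invoking npm, and the severity policy and sample numbers are assumptions, not a recommended threshold.

```python
import hashlib

def audit_gate(report: dict, max_severity: str = "moderate") -> bool:
    """Return True only if no finding exceeds the allowed severity.

    `report` mimics the metadata.vulnerabilities counts that
    `npm audit --json` emits (simplified here for illustration).
    """
    order = ["info", "low", "moderate", "high", "critical"]
    allowed = order.index(max_severity)
    counts = report.get("metadata", {}).get("vulnerabilities", {})
    return all(counts.get(sev, 0) == 0 for sev in order[allowed + 1:])

def release_hash(artifact: bytes) -> str:
    # Hash the build artifact so the deployed code can be verified later
    return hashlib.sha256(artifact).hexdigest()

# Example: one high-severity finding should block the release
sample = {"metadata": {"vulnerabilities":
          {"info": 0, "low": 2, "moderate": 1, "high": 1, "critical": 0}}}
print(audit_gate(sample))                      # gate fails, no release
print(release_hash(b"bundle-v1.0.0")[:16])     # record this alongside the release
```

The point is that the gate is a hard stop in CI/CD, not a report someone reads later; the hash gives you something to compare against if you ever suspect the deployed code was tampered with.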

Monitor, monitor, monitor, so that you know when someone’s on your system. Use machine learning to help detect anomalies. Run your own social engineering and phishing campaigns to see if your security awareness training is really working. And use segregation, device management and PAM solutions just in case it’s not.
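A full ML pipeline is beyond a sketch, so here is a simple z-score baseline standing in for the anomaly detection mentioned above: anything far outside the historical norm gets flagged for the incident response team. The traffic figures are invented; the Coinspaid point is that a spike like this on July 7th should have triggered a response, not a note in a log file.

```python
import statistics

def anomalies(baseline, current, threshold=3.0):
    """Flag values more than `threshold` standard deviations above the
    baseline mean: a crude statistical stand-in for an ML detector."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    return [v for v in current if stdev and (v - mean) / stdev > threshold]

# Hypothetical hourly outbound-traffic volumes in MB (illustrative numbers)
normal_hours = [100, 110, 95, 105, 98, 102, 99, 101]
today = [103, 97, 100, 940, 101]  # one hour with an exfiltration-like spike

print(anomalies(normal_hours, today))  # -> [940]
```

The detector is trivial; what matters operationally is that its output is wired to a team that investigates and doesn’t stand down until the spike is explained.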

These are only a few of the tasks needed to protect the business and its customers. But as I said, it’s not new.


ISO 27001 Lead Auditor and Lead Implementer, and believer in blockchain GRC