Open Source Hacks And Other People’s Money: Why Crypto Needs Community-Developed Internal QC Standards

Drew Hinkes
Nov 9, 2017

Both the attack on The DAO and the second Parity multisig loss were the result of coding errors or system design choices that created major vulnerabilities. These events directly harmed third parties who relied on code, written by open source communities, that was intended to protect their value but fell short.

In the case of The DAO, multiple parties outside the platform’s core development team identified “zero day” flaws and published them to the world; but because of developer decisions and the delegation of control over the codebase to an under-engaged participating community, the code was allowed to remain in production until the inevitable attack and financial loss occurred. In the case of the second Parity wallet hack, an apparently inexperienced user was able to exploit a previously undiscovered vulnerability to delete a shared software library, which locked every multi-signature wallet created on Parity after July 20 and froze all of the value held in those wallets.
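For readers unfamiliar with the mechanics, the sketch below is a deliberately simplified, hypothetical model of the vulnerability class at issue (written in Python for readability; it is not Parity’s actual Solidity code, and the names `SharedWalletLibrary`, `MultisigWallet`, `init_wallet`, and `kill` are illustrative inventions): a shared library whose initialization is left unprotected, so any caller can claim ownership and destroy the component that every dependent wallet relies on.

```python
# Illustrative sketch only: a toy model of an unprotected shared library
# that many thin wallets depend on. Any caller can initialize it, become
# its "owner," and destroy it -- freezing every dependent wallet.

class SharedWalletLibrary:
    def __init__(self):
        self.owner = None          # never locked down after deployment
        self.destroyed = False

    def init_wallet(self, caller):
        # BUG: no check that the library was already initialized or that
        # the caller is authorized, so anyone can become the owner.
        self.owner = caller

    def kill(self, caller):
        # Only the "owner" may destroy the library -- but anyone can become owner.
        if caller == self.owner:
            self.destroyed = True


class MultisigWallet:
    """Thin wallet that delegates all of its logic to the shared library."""

    def __init__(self, library, balance):
        self.library = library
        self.balance = balance

    def withdraw(self, amount):
        if self.library.destroyed:
            raise RuntimeError("library destroyed: funds are frozen")
        self.balance -= amount
        return amount


if __name__ == "__main__":
    library = SharedWalletLibrary()
    wallets = [MultisigWallet(library, balance=100) for _ in range(3)]

    # An arbitrary user claims ownership of the uninitialized library and kills it.
    library.init_wallet(caller="random_user")
    library.kill(caller="random_user")

    # Every wallet that delegated to the library is now frozen.
    try:
        wallets[0].withdraw(10)
    except RuntimeError as exc:
        print(exc)  # -> library destroyed: funds are frozen
```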

These events share one critical commonality: a failure by platform developers to detect and correct cataclysmic coding errors before putting code into production, and thus a failure to protect third parties. Together, these mistakes affected more than $400 million USD of other people’s money and inflicted significant collateral reputational damage across the entire community. Who should answer for these mistakes? Should it be the parties making design decisions, the parties actually implementing the vulnerable code, or the party exploiting the vulnerabilities? Likely, everyone involved shares some liability. While litigators, judges, and legislators will answer that question, the community of open source developers should consider what steps it can take to avoid future incidents.

The community of open source coders working on projects that control third parties’ economic assets should create its own standards of care based upon best practices for software development, security, and implementation. These standards can protect that community if they are determined by the developers themselves, and if they are enforced and adhered to by the community’s participants.

Certain core concepts should be addressed, including formal verification, testing, peer review, and standardized handling of zero day disclosures. Should projects be required to have a quality engineer (QE) responsible for reporting that certain basic QC tasks have been completed? What qualifications should the QE have? How much peer review and auditing is necessary?

Should open source developers of systems that handle other people’s value be subject to the same scrutiny as commercially developed, closed source, centrally managed software projects? Is that a reasonable expectation? If not, what set of compromise expectations makes economic sense?

Drew Hinkes

General Counsel/Co-Founder @ Athena Blockchain, #Bitcoin #blockchain #SmartContracts #Floridian #privacy @propelforward