“Can’t be evil” is a good idea. As you point out, we can use math and decentralization to ensure that no single actor can “make the call” on any given piece of content. The people running the system never have to struggle over the ethics of a single takedown, because they simply can’t take anything down.
They “can’t be evil” for a definition of evil that includes censorship.
Locking the internet open matches my own politics, so it’s easy for me to be biased toward it. However, there are a lot of smart, ethical people who disagree. There are a lot of open, stable, and functional countries that put heavy restrictions on certain kinds of speech. From that perspective, a “can’t be evil” that guarantees that hate speech, revenge porn, or other vile content will continue to exist is actually “must be evil”, not “can’t be evil”.
What’s missing here is a discussion of who gets to define the evils we are trying to prevent. The way this technology is being developed lets a few actors make an even bigger call on behalf of all of society: by building censorship resistance into the foundations, we are deciding that it is an unmitigated good, now and for all time. Are we sure that’s the case? And what happens if we change our minds later?
We need better governance models to guide the development of these technologies, and we need to get more people involved in the process. Otherwise we wind up with another small group of people deciding what counts as evil, but on a much bigger scale.
