How will you ensure you don’t unintentionally cause significant harm, or enable others to do so? What monitoring, accountability processes, threat modeling, etc. will you use, and how will you keep scaling up these processes (which are cost centers…) as your impact scales?
We need new institutions to handle these challenges and to enable both accountability and best-practice sharing — and these institutions are finally starting to form (funding cycles are slow…). Policing is a slightly different problem, and may not be the right frame.
Credibility scores are critical: for platform usage decisions and accountability; for publisher accountability; as training data for machine learning; and more.
Without credibility scores, how can we even measure progress toward misinformation goals? The crucial property for these scores is that they are meaningful and defensible — that they…
How about the “pollution” framing? Information marketplace issues are often negative externalities of useful new technology.
The analogy can even go deeper. “Fake news” makes people act less intelligently, just like leaded gasoline. Harassment and spam destroy communities, just like industrial waste.