Perhaps at some point in AI development, the military will conclude that leaving the technology in private hands, even extraordinarily wealthy ones (or maybe especially extraordinarily wealthy ones), is too much of a national security risk. Maybe then AI will be doled out to the private market at acceptable levels, with only the really powerful stuff reserved for the military and their trusted contractors. With a system like this, some of the various risks could be more fully managed: hedging against national security threats, wealth inequality, bad actors with access to the tech, and accidental runaway AI.
Also, about the political ad concept: I think social media companies providing filters would be a good first step. But eventually, consumers may have personal AI assistants that know their human so well they could alert them to manipulation and misinformation, along with a host of other useful feedback.
Until the AI assistants are compromised…
