Given: “In other words, the same sorts of machine learning and sub-domains of AI that can be used to fight fake news can also be used by others to propagate new types of misinformation,” I would say the conclusion needs to be stronger. Both journalists and the reading/viewing public need to understand not only AI itself, but how it can be used, and especially abused, to promulgate disinformation.
This is particularly difficult because “fake” can often be used as a label for “contrary to my beliefs and biases”. Heavy emphasis on the ability to fake news can lead to the dismissal of inconvenient or uncomfortable truths.
I would thus suggest that one of the ways AI should be used is to create more sophisticated news filters: filters that, rather than reinforcing bubbles, serve to bust them. I wrote about this some time back. The idea is that instead of a feed selected 100% on what I have liked before, i.e., what is already inside my bubble, my news results would run something like 80% inside my bubble, 15% just beyond its edges, and 5% insights from well outside it. That last category can even be tuned to favor items that, when presented to other readers like me, were selected often despite being well outside the bubble.
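As a rough illustration of that 80/15/5 mix, here is a minimal sketch in Python. The pool names, the `mix_feed` function, and the weighted-sampling approach are all my own assumptions for illustration; a real system would score articles by their distance from the reader's interest profile rather than receive pre-sorted pools.

```python
import random

def mix_feed(inside, edge, outside, size=20,
             weights=(0.80, 0.15, 0.05), seed=None):
    """Build a feed of `size` items: roughly 80% from inside the
    reader's bubble, 15% from just beyond its edges, and 5% from
    far outside it. (Hypothetical sketch, not a real ranking API.)"""
    rng = random.Random(seed)
    # Copy the pools so the caller's lists are not mutated.
    pools = [list(inside), list(edge), list(outside)]
    feed = []
    for _ in range(size):
        # Pick a pool by the configured weights, skipping any that ran dry.
        available = [(p, w) for p, w in zip(pools, weights) if p]
        if not available:
            break
        pool = rng.choices([p for p, _ in available],
                           weights=[w for _, w in available], k=1)[0]
        # Draw one item at random from the chosen pool.
        feed.append(pool.pop(rng.randrange(len(pool))))
    return feed
```

The 5% "way outside" slot is where the tuning mentioned above would live: instead of sampling that pool uniformly, one could rank it by how often similar readers engaged with those out-of-bubble items.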