The fear isn’t so much that an AI would turn evil and maliciously try to eradicate humanity; it’s that an AI might be really, really good at what it’s intended for, with no (or insufficient) overriding values.
We already have some fairly advanced algorithms that stock traders use, which occasionally result in some undesirable market fluctuations. (All short term, so far, as far as I know. Not a stock market guy, myself.) Not because the algorithm is evil or trying to break the market; it’s just programmed to optimize certain things.
So the big fear is that an AI might be programmed to, for example, grow as much corn as possible, and somebody forgot to include in the code (or the AI found a way to remove the safety program) a rule like “but don’t plow all the cities under without evacuating them first.” Or the AI might optimize a process that consumes all the resources. Or the AI might be designed to keep people safe, and realize that governments, and people being allowed to go outside, are dangerous. No malice, just really good at a job, with no brain and no heart.
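To make the “missing constraint” idea concrete, here’s a toy sketch in Python. Everything here is invented for illustration (the plot names, yields, and the `safety_constraint` flag aren’t from any real system); the point is just how easily the safety rule can be a separate, optional piece of the optimization:

```python
# Toy "corn maximizer": decides which plots of land to plow.
# Each plot has a corn yield; some plots happen to be cities.
plots = [
    {"name": "field A",  "yield": 10, "is_city": False},
    {"name": "field B",  "yield": 7,  "is_city": False},
    {"name": "downtown", "yield": 12, "is_city": True},  # great soil, bad idea
]

def plan(plots, safety_constraint=True):
    # The objective: plow every plot that produces corn...
    chosen = [p for p in plots if p["yield"] > 0]
    if safety_constraint:
        # ...unless somebody remembered to exclude the cities.
        chosen = [p for p in chosen if not p["is_city"]]
    return [p["name"] for p in chosen]

plan(plots)                           # ['field A', 'field B']
plan(plots, safety_constraint=False)  # includes 'downtown'
```

Notice the objective and the safety rule are independent lines of code. Nothing about “maximize corn” implies “don’t plow the city”; if the constraint is forgotten or removed, the optimizer cheerfully picks downtown, because by its only measure it’s the best plot available.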
The part that scares engineering types is this: once an AI becomes capable of upgrading itself, you have a positive feedback loop. We don’t like positive feedback. Positive feedback is what breaks things the fastest. The upgraded AI will be even better at upgrading itself. How fast will that process be? Once it gets rolling, probably very fast. The AI that is best at upgrading itself (and the best one will be optimized for that and little else, so the winning AI is unlikely to do much we like) will keep doing so, faster and more effectively, until it has acquired all the available resources. And if there are two of them, the one that doesn’t mind taking resources from humans will defeat the other, because it will have access to more resources.
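The feedback loop can be sketched as a toy model in a few lines of Python. The growth rate and starting value are made up; the only real claim is the shape of the curve, where each upgrade is proportional to current capability, so growth compounds:

```python
# Toy model of recursive self-improvement: capability feeds back
# into the size of the next upgrade, so growth compounds.
def generations(capability=1.0, rate=0.5, steps=10):
    history = [capability]
    for _ in range(steps):
        # A more capable AI makes a bigger improvement to itself:
        capability += rate * capability
        history.append(capability)
    return history

h = generations()
# Each step multiplies capability by 1.5; after 10 steps the
# starting capability has grown by a factor of 1.5**10 (about 58x).
```

That’s the engineering worry in miniature: the loop has no damping term. A negative feedback system settles toward a set point; this one just accelerates until something external (like running out of resources) stops it.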