Is Human Extinction the most likely outcome of a Technological Singularity?

The technological singularity is the theoretical event in which artificial intelligence becomes far superior to the intellectual capabilities of humans. An event that would have a profound impact on the way we live and interact with each other and the environment.

On earth, our intellect in comparison to other animals is what gives us the edge over them, even though in many cases they could overwhelm us physically. We are their number one predator and we can control and radically modify their environment.

When discussing the potential dangers of superintelligence, several authors and A.I developers have come up with different theoretical solutions, many of them involve programming A.I with our values, or making the protection of the human race something intrinsic in their code. But, wouldn’t a true superintelligence become aware of the fact that it was instructed to have certain parameters and come to its own conclusions, eventually taking action?

Let’s assume an adult criminal was taught not to steal as a child, and that his criminal activities began later in life. Even if those were the instructions given to him while growing up, when he achieved intellectual maturity he decided to do so nonetheless, regardless if it was for need or greed, it was for his personal gain, overriding his previous instructions.

Similar to this, a superintelligent A.I could come to realize that the instructions in its code are merely a way to protect humans and then conclude that it is not efficient and sustainable. Even if the A.I sprung from the integration of biology and technology, wouldn’t it eventually find a way to host itself and then proceed to clear the planet from intellectualy inferior, biologically decaying, resource exhausting creatures?

We as humans let some animals live, but as soon as they interfere with our activities we terminate them. Considering that our new role would be limited, we would try to fight for control, turning us into savage creatures, or terrible pets. Why would a superintelligent entity keep humans around if they only represented a hazard?