I think the worries about ASI strangely seem to forget the various limitations that would still exist for an AGI. One is that in order to change the world, an AGI needs a wide-ranging ability to physically interact with it, which takes time and is limited by the scope of the physical capabilities hooked up to it. Another is that, like a human brain, the AGI would be limited by the power of the computing substrate it runs on. It could not increase its speed of learning exponentially, because it would need to physically rebuild itself as a more efficient computing machine.
For instance, in these fantasies of godlike computers, there’s an assumption that the AGI would rapidly start to learn new laws of physics in order to better manipulate the world. To do this it has to be able to get information about physical reality and also test its theories. It would therefore need sufficient physical capabilities to construct physics experiments in physical reality. Alternatively, it could find, read, and learn to understand the large amount of physics information on the internet. But remember, it’s an AGI, and accessing, processing, and learning from this information would take time, limited by its physical processing capabilities, which we are assuming are roughly the same as a human’s. And then it would still need to test new hypotheses.
Predictions of the future from the 1950s and 60s all assumed that the changes would involve new physical powers, e.g. flying, travelling super fast, etc., but those would have had to rest on laws of physics which, as far as we can see, don’t actually exist. The big changes have instead been in the degree of complexity achievable within those physical limitations.
I think we’ll need to see a proper general intelligence to understand those limitations, and we don’t really have one yet, so these ideas about the future currently lack much grounding in reality.