I’m fascinated by all the potential branches and the positive and negative outcomes they could make more or less probable. E.g., I think my scenario is more probable where genetic engineering breakthroughs lag AI breakthroughs. We would remain plain old humans while AIs surpass us and entertain us to death. But if genetic engineering keeps pace, we could re-engineer our own code to adapt to life in space and initiate a new era of human exploration, hand in hand with our AI partners.
But genetic engineering could also lead to frightening scenarios where some elements of humanity are boosted and merged with AI to the point that they become the living, malicious AI we fear.
Should we be more afraid of a strong AI making bad decisions, or of a human (with all the emotional baggage of the lizard and primate brain) boosted with strong AI making bad decisions?
And perhaps those who choose to be entertained to death are simply evolutionary dead ends, while those who find meaning in life beyond being plugged in are the real future of humanity.