Appropriate Caution But Overlooks Key Variables
Doc Huston

Hi Doc, I appreciate your comment.

So far there’s no such thing as coevolved AGI, and nobody is “playing by these rules.” I doubt I’m the only person designing a coevolved AGI, but it’s not a real thing yet. If it were, I imagine we’d have heard of it, and we might both be contributing to its development (evolution).

Until there’s such a thing as coevolved AGI, every AGI effort is lab-grown and thus carries all the risks of an invasive species or a contagious virus. Those AGI efforts, whether academic, commercial, or military, might explode onto the scene, but with predictably destructive results, because they did not coevolve. Important details would be missed. Blind spots would be baked in. Defenses could not be put up in time.

A coevolved AGI might also explode, but due to the nature of widely distributed natural selection, it’s much more likely to be a controlled explosion like the ones we’ve seen many times in the past. We might say that life exploded, and it was OK. We might also say that electricity exploded, then computers exploded, then the Internet exploded, and so on. Modern times are very different from 30, 70, and 100 years ago, and yet we’ve managed to keep our wits about us because the explosions were slow enough for us to adapt to the changes (and for the changes to adapt to us).

In sum, a coevolutionary approach is much more likely to result in a controlled explosion than the alternative. AGI approached this way may not even result in superintelligence; we might get halfway there and realize the danger with the help of our personal AGI better halves. A few people (and/or their AGIs) might go rogue or become careless and initiate an explosion at any time, but at least a counterbalance could arise out of the widely distributed partial AGI that is already in the process of evolving. Thus the whole situation is more likely to remain under control: defenses and counterbalances become baked in, as in every other evolutionary system.

The question, in my mind, is whether we continue with absolutely no defenses against the risks of AGI, putting ourselves in line for catastrophe, or whether we start building defenses. I say we build defenses and go straight to what we already know will work.