If you give A1 a big computer, then it can run the extensive process of reflection itself. So if you agree that extensive reflection works, then saying “if the bootstrapping process works, then A1 could have just solved the problem using a big computer” is not a compelling argument against the bootstrapping process.
The argument that the bootstrapping process works is: at every step, the process seems to make the agent significantly smarter in most or all relevant respects without changing its values. So if we keep going, we end up with a very smart agent that shares the original agent’s values.
So it seems like we basically have to evaluate this claim about the bootstrapping process directly.