I’m not sure constructing such an agent “manually” (without using self-improvement) is feasible within the relevant timeframe.
I think that the feasibility of avoiding goal-directed agents entirely is questionable.
— Vadim Kosoy

Weak act-based agents need not have any model of their operator; indeed, most existing AI systems are act-based. To find this argument compelling, I would want to see a more explicit presentation of it, and an explanation of why it doesn't prove too much.
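To make the contrast concrete, here is a minimal Python sketch of the two shapes of agent at issue. All names and signatures are my own illustrative assumptions, not anything from the discussion: the act-based policy scores single next actions and contains no operator model or long-horizon objective, while the goal-directed policy searches over plans against an explicit utility.

```python
from typing import Any, Callable, Sequence

def act_based_step(state: Any,
                   score_action: Callable[[Any, Any], float],
                   candidate_actions: Sequence[Any]) -> Any:
    # A "weak" act-based agent: pick the single next action that a learned
    # evaluator rates highest. Nothing here represents the operator or any
    # long-horizon goal; most supervised-learning systems have this shape.
    return max(candidate_actions, key=lambda a: score_action(state, a))

def goal_directed_step(state: Any,
                       transition: Callable[[Any, Any], Any],
                       utility: Callable[[Any], float],
                       candidate_plans: Sequence[Sequence[Any]]) -> Sequence[Any]:
    # A goal-directed agent: search over multi-step plans and pick the one
    # whose predicted end state maximizes an explicit utility function.
    def rollout(plan: Sequence[Any]) -> Any:
        s = state
        for a in plan:
            s = transition(s, a)
        return s
    return max(candidate_plans, key=lambda p: utility(rollout(p)))
```

On this framing, the question pursued below is whether the first shape can be made competitive with the second.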

If goal-directed agents are much more effective than act-based agents, then there will indeed be lots of pressure to build them. This holds whether they are more effective at the AI design task, or at any other task.

But the goal is to design act-based systems that are competitive with their goal-directed counterparts. And if that project succeeds, then (by hypothesis) the act-based agents will be equally effective at building other act-based agents. So the question is whether that project can succeed.

You suggest that maybe it can’t succeed at low capability levels. I grant that this would be a serious problem if true, but don’t find this particular argument convincing (see first paragraph).
