Wei Dai:

It seems to me that we are still pretty far from having such reasonable views with regard to philosophical arguments, and it seems hard to create an AI with such reasonable views when we don't ourselves have them. Do you disagree with either of these statements?

> I don't think that an AI should unpack the entire action a or state s

Paul Christiano:

We have some meta views about how we ought to evaluate arguments, and how to evaluate possible norms for evaluating arguments, and so on.

If you think all of those views are wrong, then I don’t see why we would ever be able to evaluate an argument correctly, or produce any artifact that could do so. (And the same discussion applies to figuring out the truth without considering arguments, if you think that considering arguments is not part of what we would do upon extensive reflection.)

And if you don’t think all of those views are wrong, then that seems sufficient to get bootstrapping going.
