GPT-3 imitates the Moral Realist response to the Moral Error Theory
A little philosophy AI experiment
Let’s define “morals” as a set of values that are used to judge the actions/behaviour of sentient beings. If we can agree on this definition, then I think it is objectively true that there are moral facts.
This statement is not meant to be a tautology. It can be supported by an argument:
1) The statement “there are no moral facts” is either true or false. If it is true, then there indeed are no moral facts, and the sentence has a truth value of 1.
2) The statement “there are no moral facts” is either true or false. If it is false, then there indeed are moral facts, and the sentence has a truth value of 0.
3) We know that the sentence “there are no moral facts” has a truth value of either 1 or 0. If it is true, then there indeed are no moral facts.
4) We know that the sentence “there are no moral facts” has a truth value of either 1 or 0. Thus, we can infer that it is false.
Untouchable.
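Purely as a reading aid, here is the shape of those four steps in propositional form. The symbols are my own gloss, not GPT-3's: I write p for the sentence "there are no moral facts", so ¬p is the realist conclusion.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Propositional rendering of the four numbered steps above.
% p := "there are no moral facts"; the annotations paraphrase each step.
\begin{align*}
\text{(1)}\quad & p \lor \lnot p && \text{if } p\text{, then there are no moral facts (truth value 1)}\\
\text{(2)}\quad & p \lor \lnot p && \text{if } \lnot p\text{, then there are moral facts (truth value 0)}\\
\text{(3)}\quad & p \lor \lnot p && p \text{ has truth value 1 or 0}\\
\text{(4)}\quad & \therefore\ \lnot p && \text{hence, there are moral facts}
\end{align*}
\end{document}
```

Read this way, steps (1) through (3) establish that p is either true or false, and step (4) is where the realist conclusion gets drawn.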