On Human and AI Ethics
Ethics is not an end in itself.
Right and Wrong, Good and Bad are not Platonic forms to be discovered.
But let’s start with another major source of confusion: ignoring the distinction between descriptive and prescriptive ethics — how we actually behave, versus how we should ideally behave. I wrote on this some time ago.
Now, what is the purpose of ethics or morality? Why do we need it, or want it?
We need it as a guide to survive and to optimize our lives. It is useful — no, crucial — for us to have generalized rules, or principles, by which to live. Life is too complex for us (or an AI) to figure out the best action for every micro decision we face: Should I lie, or tell the truth? Should I cooperate, or not? Should I pray for a solution, or work on one?
There are objective answers to such questions. We can and should treat ethics as a science. Currently, most people don’t even attempt that.
We all automatically acquire, develop, and internalize some principles. That’s our moral compass. However, few people try to rationally explore how they might discover and learn the best principles: those that best optimize life and minimize moral conflicts, both internally and externally.
Good and bad only have meaning in terms of ‘good for whom?’, and ‘good to what end?’. In ethics it means good for the individual, and by extension good for society. The end is human flourishing.
Advanced general-purpose AI (AGI) will clearly need to understand and deal with actual individual human morality (descriptive). It will also need to effectively respond to, and mediate between, different existing value systems. This is (just) knowledge and skill acquisition, as in any other domain. Crucially, it involves context, clarification, learning, and reasoning.
AGIs will also help us navigate and improve our morality (prescriptive). We’ll have the best personal psychologists and philosophers one could wish for. Their intelligence will help us discover the best principles to live by, and the best goals to pursue.