When Does an Artificial Intelligence Become a Person?
Corin Faife

When it comes to whether an AI could ever have personhood, two huge assumptions are being made, and neither of them has scientific support behind it.

01. AI can be developed into general intelligence. Today, we know how to teach artificial neural networks to accomplish a particular task, or even several tasks chained together through convolution. But we have no idea whether that can be viably scaled up into anything resembling self-motivation or a concrete sense of self.

02. If an AI does achieve sentience, it will be the same kind of sentience achieved by intelligent animals. The problem is that animals evolved to be sentient: they have internal motivations, cannot control how much pain they feel, and want freedom. Computers don’t have motivations. They don’t feel, well, anything. They’re not living things, and they can be modified on a whim. Does something that has no motivation other than the ones we give it, has no desires, feels nothing whatsoever, and simply registers bytestreams really need rights to autonomy and due process?
