Andrew Jennings: "There are lots of complex systems that are beyond simple explanation."

Thanks for the comment! To be frank, our understanding of AI is still so shallow at the research level, and its domain so limited to concrete prediction tasks, that understanding AI is a bigger issue right now than preventing its misuse.

Statistical tests of performance do exist, yes, but for newer deep learning systems they require immense amounts of data, so they are only a plausible route to validation when huge datasets are available. I am talking about the creation of expert systems, where AI is meant to do tasks humans simply cannot: predicting cancer, forecasting long-term socio-economic trends, and even generating data.
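To make the "immense amounts of data" point concrete, here is a minimal sketch (my own illustration, assuming a simple Hoeffding-style bound on a held-out error estimate; the function name is hypothetical). It shows how quickly the required amount of labelled test data grows as the tolerance tightens:

```python
import math

def required_test_samples(epsilon: float, delta: float) -> int:
    """Hoeffding-style sample size: number of independent labelled test
    examples needed to estimate a model's true error rate to within
    +/- epsilon with probability at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

# Example: pinning down accuracy to +/- 0.5% at 99% confidence already
# needs on the order of 100k labelled examples -- easy for web-scale
# prediction tasks, prohibitive for domains like rare-cancer prognosis.
print(required_test_samples(epsilon=0.005, delta=0.01))  # ~105967
```

This is only a back-of-the-envelope bound, but it illustrates why purely statistical validation breaks down exactly in the expert domains where labelled data is scarce.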

For these expert systems, data collection is difficult and often prohibitively expensive, which is exactly why we need to understand how they work rather than rely only on empirical validation. And since the availability of data is so skewed toward human task domains, malicious use of AI can still be mitigated for now. Here is what the AI research community thinks of this problem at the moment: https://www.facebook.com/photo.php?fbid=10153887854887143&set=a.471131017142.254979.722677142&type=3&theater . Later, yes, once we have stronger systems that learn better, intent and misuse of AI definitely become important issues.
