Ethical Machine Learning — Just Because We Can? Should We?
As data scientists, we have the ability to create models that interact with people!
These models can recommend products more effectively, create better customer experiences, decrease diagnosis errors, increase revenue, and so much more.
However, we have to remember that these algorithms, models, and systems often interact with people!
Not only that, but sometimes we don't fully know what our models will do.
For instance, neural networks make decisions inside a black box of linear algebra and GPUs.
Ethical Algorithms?
What if a predictive model decides to target alcoholics with alcohol ads?
Is it right for companies like Facebook to run live experiments on their users like they did in 2014?
Where do we draw the line?
How do we know what is ethical?
We wanted to share our talk from a few days ago, which discussed how we experienced a similar moral dilemma when a company reached out to us asking us to develop an algorithm that had plausible negative effects on customers.
Here was our abstract:
Abstract: Non-technical companies are slowly finding ways to increase their business value using the increased speed of computing and statistics. The problem is, business has always been more concerned with increasing the bottom line than with social impact. It is one thing when we joke about large e-commerce sites selling us that extra toaster. But what about when companies whose products have been proven harmful reach out to data scientists and attempt to have them develop systems that increase profit for a product with a negative social impact, or when companies use data science to manipulate the customer rather than benefit them? Should we? Is it right to forget about the social impact just to make an extra dollar?
