The Dangers of AI
We’ve all seen movies where AI takes over the world (I, Robot is probably my favorite), but what are its potential harms today? Let’s start by understanding where these dangers can arise in the first place. Modern AI relies on various black-box algorithms: they get the desired results, but the reasoning behind why they perform as well as or better than humans is often lost in the process, or rarely ever evaluated.
Now you might be wondering: if we control the results, how is it going to take over the world? The answer is that it probably won’t. What can go wrong, though, is that in pursuit of the results a company or organization wants, an AI may cross moral or legal boundaries without anybody knowing or realizing it, not even the companies themselves.
For example, Instagram’s AI tries to predict which posts will keep you using the app for the longest time. If, like most people, you spend time on the app without wanting to, it’s probably because the algorithm has figured out how to trick your limbic system into believing the app is essential for your survival. There are two sides to this coin: the AI could learn that you use the app more when you’re happy and do whatever it can to keep you happy, or it could learn that you use the app more when you feel sad, threatened, or angry and do whatever it can to make you feel those emotions (chances are it’s a mix of the good and the bad).
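To make that concrete, here’s a minimal sketch of what an engagement-maximizing recommender could look like. Everything in it is hypothetical: the feature names, the synthetic data, and the model choice are my own invention for illustration, not a description of Instagram’s actual system.

```python
# A minimal, hypothetical sketch of an engagement-maximizing recommender.
# Feature names, data, and model choice are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic training data: one row per impression, with features the
# platform might infer about your state when a post was shown.
# Columns: inferred_anger, inferred_sadness, inferred_happiness, post_novelty
X = rng.random((1000, 4))

# Invented ground truth: sessions run longer when negative emotion is high.
minutes_on_app = (5 + 8 * X[:, 0] + 6 * X[:, 1] + 2 * X[:, 2]
                  + rng.normal(0, 1, 1000))

model = GradientBoostingRegressor().fit(X, minutes_on_app)

# Ranking candidates: the model never asks *why* a post keeps you around;
# it simply surfaces whatever maximizes predicted minutes, even if that
# means posts that provoke anger or sadness.
candidate_posts = rng.random((10, 4))
ranking = np.argsort(-model.predict(candidate_posts))
print("Posts shown first:", ranking[:3])
```

Notice that the objective (minutes on app) is chosen by humans, but the path the model finds to maximize it never has to be explained to anyone.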
AI is cutthroat: it cares about results and nothing else. That is exactly where problems can arise. It could start exploiting the limbic system in a manner never seen before, essentially manipulating us into doing what it wants (which is still decided by humans), but in ways that humans have little to no control over.
Let us examine another situation to further the point. Recently, a group of scholars examined data from an app that offered loans based on a paragraph submitted by each applicant explaining why they needed the money. They found that applicants who mentioned “God” or “hospital” defaulted on their payments more often than anyone else. Now consider a situation where a state-of-the-art black-box algorithm decides whom to grant loans to: chances are it would deny anyone who needs the money for medical expenses, or anyone who writes that they will pay it back by God’s grace. Since people rarely examine how the best results are obtained, this can lead to the morally questionable decisions described above. But when we use data analysis or methods whose reasoning is known, the company can conclude that it does not want to base decisions on those factors, and can instead come up with a new way for applicants to explain their need for the money, even if the black-box results looked better on paper.
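Here’s a minimal sketch of that difference, assuming a simple bag-of-words default predictor; the applications, labels, and words are all invented for illustration. With a linear model, the weight each word carries is right there to audit; a black box trained on the same text would offer no such table.

```python
# Hypothetical sketch: an interpretable loan-default predictor whose
# per-word weights can be audited. All data here is invented.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

applications = [
    "need money for hospital bills, will repay by God's grace",
    "expanding my small bakery, steady monthly revenue",
    "God willing I will repay after my hospital stay",
    "buying inventory for my shop before the holiday season",
]
defaulted = [1, 0, 1, 0]  # invented labels: 1 = defaulted

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(applications)
model = LogisticRegression().fit(X, defaulted)

# Because the model is linear, each word's contribution is visible.
# A reviewer can spot that "god" and "hospital" carry weight and decide
# those are factors the company refuses to judge applicants by.
weights = model.coef_[0]
words = vectorizer.get_feature_names_out()
for i in np.argsort(-np.abs(weights))[:5]:
    print(f"{words[i]}: {weights[i]:+.2f}")
```

The company can then veto those features and redesign the application form, which is exactly the kind of deliberate, reasoned decision a black box never gives you the chance to make.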
In conclusion, the more the world migrates towards black-box algorithms, the more we lose the reasons behind our actions. It’s human nature to ask why; are we going to let go of that nature when it comes to AI? If you liked this post, let me know, because I plan on writing another one on the more advanced dangers of AI and A/B testing.