AI and its Risks

Muskan Arya
Published in The Bridgespace · Jun 1, 2021

Muskan Arya, Lady Shri Ram College for Women

While AI has developed at a rapid pace and been put to use in many fields, such as engineering, teaching, art and, most importantly, medicine, there is a flip side to the coin: it also poses certain risks. As AI grows more ubiquitous and sophisticated, the voices warning against its current and future dangers grow louder. Whether it is the increasing automation of certain jobs, autonomous weapons operating without human oversight, or gender and racial bias stemming from outdated or skewed data sources, to name just a few, unease abounds on several fronts.

1. Job Automation: This is often treated as the most immediate concern. People have been losing jobs to machines since the beginning of industrialisation; the greater worry now is the degree to which AI can replace human capital. Industries built on predictable, repetitive tasks can expect greater disruption than those requiring more specialised skills. According to a 2019 Brookings Institution study, 36 million people work in jobs with “high exposure” to automation, meaning that before long at least 70% of their tasks could be done using AI.

2. Privacy, Security and ‘Deepfakes’: The malicious use of AI could threaten digital security (criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (non-state actors weaponising consumer drones) and political security (privacy-eliminating surveillance, profiling and repression, or automated, targeted disinformation campaigns). AI also enables a surge of audio and video “deepfakes”, created by manipulating voices and likenesses. These could be used to make public figures appear to say things they never said. In that case it becomes difficult to know what is true; one can no longer believe one’s own eyes and ears, and audio or video evidence loses its credibility.

3. AI bias and widening socio-economic inequality: Widening socio-economic inequality sparked by AI-driven job loss is another cause for concern. Along with education, work has long been a driver of social mobility. However, when it is a certain kind of work, the predictable, repetitive kind that is prone to AI takeover, research has shown that those who find themselves out in the cold are far less likely to seek or receive retraining than those in higher-level positions with more money. Various forms of AI bias are detrimental, too. Olga Russakovsky, a computer science professor at Princeton, has said that bias goes well beyond gender and race. In addition to data bias and algorithmic bias (the latter of which can “amplify” the former), AI is developed by humans, and humans are inherently biased.

While these are only a few of AI’s risks, many more pose a threat to society. One way to mitigate them is government regulation: a public body with the insight and oversight to confirm that firms and individuals are developing AI safely. Without restricting progress and growth in AI, the way it is used should be monitored; the development of an AI weapon, for instance, should not go unnoticed. While society makes great strides towards the future, it is important to keep that growth in check so that it works for the greater good at all times.
