A Look Into the Future: What Will the Dangers of the Artificial Intelligence Industry Look Like in 10 Years?

Daviemwangi
6 min read · Aug 22, 2022


Introduction

Artificial Intelligence (AI) is a group of technologies that excel at extracting insights and patterns from huge data sets. AI then applies those insights and patterns to predict what drives results, and over time it learns to improve its predictions.

AI is impressive because it can adapt to past results and new information to improve its abilities on its own. Some AI tools are designed to learn and advance without direct intervention, although they still have to be created and managed by people.
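
To make the idea of "learning from data" concrete, here is a minimal sketch in Python (using scikit-learn with synthetic data of my own choosing, purely as an illustration, not a reference to any specific system). It fits a model on past examples and shows its predictions generally improving as it sees more of them.

```python
# A minimal sketch of "learning from data": a model fitted on past
# observations makes predictions, and more data generally improves them.
# The dataset and model choice here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (100, 1_000, len(X_train)):  # train on growing slices of "history"
    model = LogisticRegression(max_iter=1_000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>5} examples -> accuracy {model.score(X_test, y_test):.3f}")
```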

Despite its benefits, the technology can cause harm if it is not managed carefully. Because AI can endanger people, its risks deserve close examination now so that we can anticipate and manage them in the near future.

There are several dangers that AI contributes to today and will contribute to over the next 10 years. Let's take a closer look at them.

Job losses

As AI becomes more advanced, it will automate jobs currently performed by people. For this reason, job automation is the most immediate concern society has about AI.

We can also ask to what degree AI will replace certain jobs. Jobs that require workers to perform repetitive and predictable tasks will be affected most. A report by McKinsey & Company projects that by 2030 there will be about 800 million job losses due to automation.

According to a study by the Brookings Institution, 36 million jobs are exposed to automation, with the majority of their tasks shifting to AI, especially in retail sales, warehouse labor, hospitality, and market analysis. This raises the question of where the people AI replaces will go. Some believe AI will create enough new opportunities to balance the equation, but that is unclear, since many displaced workers will not have the educational qualifications those new jobs require.

Others argue that AI will shift people from repetitive, physical jobs to work that requires strategic and creative thinking, and that people will have more time to spend with their families.

However, those benefits will most likely go to individuals who already have educational qualifications and rank higher financially, which will widen income inequality even further.

If jobs become automated and robots take over, the robots will not need salaries the way human employees do. They will also reduce errors and save time and money, so companies will make massive profits and grow richer, while the workers they replace grow poorer.

AI Bias

Humans create technology, and humans are inherently biased. Therefore, AI can have various forms of bias, which can be detrimental.

Humans can be biased against certain genders, races, or religions, and that bias seeps into the data that AI learns from. For instance, e-commerce giant Amazon discovered that a machine learning tool it used for recruiting was biased against women. The tool had been trained on resumes submitted over the previous decade, and because most of the people hired in that period were men, the model learned to favor male candidates.
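
To show how that can happen mechanically, here is a small, hypothetical sketch (the data is synthetic and the feature names are my own assumptions, not a reconstruction of Amazon's actual tool): a model is trained on historical hiring decisions that disadvantaged women, and it learns to penalize the gender-correlated feature.

```python
# Hypothetical illustration of bias leaking from historical data into a model.
# The data is synthetic; feature names and proportions are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)               # genuine qualification signal
is_female = rng.integers(0, 2, size=n)   # gender-correlated feature

# Historical "hired" labels: skill matters, but past decisions also
# systematically disadvantaged women.
hired = (skill - 0.8 * is_female + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on the gender feature comes out strongly negative:
# the model has absorbed the historical bias.
print("coefficients [skill, is_female]:", model.coef_[0])
```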


There have also been incidents where big tech companies' systems showed bias. For instance, Google Photos' image recognition software labeled two African-American individuals as gorillas, a clear case of racial bias causing the AI to label people wrongly.

The private sector is betting hard on AI in its pursuit of profit above everything else. Many companies believe profit is what they are supposed to aim for, which keeps them from thinking through the consequences of the technology.

Terrorism

AI is a great technological advancement; however, it could become dangerous over the next ten years if it is not managed properly, because it can give terrorists an upper hand in carrying out attacks around the world.

Terror groups have already tapped into this kind of technology: in 2016, ISIS used a drone to carry out an attack that killed two people. This raises the question of whether we will be safe with AI in the hands of dangerous people.


Humanity is also at risk over the next ten years from a global AI arms race. If any major military power pushes ahead with AI weapon development, such a race becomes almost unavoidable, and autonomous weapons could become the norm in our world.

Autonomous weapons can be built from cheap, readily available components, so mass-producing them will be easy and affordable. That will make them available on the black market, and in the wrong hands they could be used to control entire populations.

Warlords could use them to carry out ethnic cleansing. Autonomous weaponry could also destabilize countries, subdue populations, and selectively assassinate or execute particular ethnic groups.

However, there are also ways in which AI can make wars safer for people, especially civilians, without developing new machinery for killing them.

Stock Market Instability

Ever-advancing algorithmic and high-frequency trading could pose a huge risk to the stock market over the next ten years. In the next decade it could cause a major financial crisis in the markets and might even contribute to the downfall of the entire financial system, Wall Street, you guessed it.

You may be wondering what algorithmic trading is. It is trading executed by a computer that analyzes and places trades according to pre-programmed rules, unencumbered by the intuition or emotion that can cloud a person's judgment. These computers execute trades at extremely high volume, frequency, and value, which can contribute to huge losses and market volatility.
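
As a toy illustration of trading by "pre-programmed rules," here is a minimal sketch of a rule-based strategy (a moving-average crossover on synthetic random-walk prices; the rule and data are my own illustrative choices, not a description of any real trading system).

```python
# Toy sketch of an algorithmic trading rule: buy when the short-term average
# price crosses above the long-term average, sell when it crosses below.
# Prices are synthetic and the rule is purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
prices = 100 + np.cumsum(rng.normal(scale=0.5, size=500))  # random-walk prices

def moving_average(x, window):
    return np.convolve(x, np.ones(window) / window, mode="valid")

short = moving_average(prices, 10)
long_ = moving_average(prices, 50)
offset = len(short) - len(long_)  # align the two series on the same end dates

for t in range(1, len(long_)):
    prev = short[offset + t - 1] - long_[t - 1]
    curr = short[offset + t] - long_[t]
    if prev <= 0 < curr:
        print(f"t={t}: BUY  at {prices[49 + t]:.2f}")
    elif prev >= 0 > curr:
        print(f"t={t}: SELL at {prices[49 + t]:.2f}")
```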


High-Frequency Trading (HFT) involves a computer placing thousands of trades at super-fast speeds, aiming to sell again within a short period to earn small profits on each trade. This technology is raising lots of concerns in our markets, because its activity can deal markets huge hits, which in turn cause investors to lose money.

HFT's biggest problem is that it does not account for the interconnectedness of markets, or for the fact that human emotion and logic still drive a large part of them.

HFT algorithms aren't always reliable, and they can cause losses large enough to bankrupt a company. For instance, Knight Capital Group had a glitch in its HFT computers that streamed thousands of erroneous trade orders into the New York Stock Exchange, causing havoc at the firm. The volatility the glitch created cost the firm nearly $460 million in a single day and put it on the verge of bankruptcy. The firm ultimately had to be acquired by another company, and it learned a huge lesson from the ordeal.

Overcoming AI Dangers

There is a strong campaign to regulate AI and prevent malicious systems from causing destruction or harm. Even Tesla's CEO Elon Musk has said at a conference that, while he is not normally an advocate of regulation, a technology that can endanger people should be examined with an eye toward reducing those dangers.


The future depends on the ability of social and technology specialists to work with individuals from diverse backgrounds. Developers of AI must integrate the experiences, concerns, and insights of people across genders, cultures, ethnicities, and socio-economic groups, and this collaboration must begin in the early stages of development and deployment to have an impact.

AI is the future, and I believe it will change the world; however, regulation should be implemented so that it is kept under control and does not cause harm.

If you want me to write for you, you can hire me.
Feel free to contact me anytime on LinkedIn or via email: daviemwangi20@gmail.com.
If you don't want to miss my articles, please subscribe to my email notifications.
Due to Medium's country restriction, I'm not a member of the Partner Program. If you love the article, you can support me by buying me a coffee.

Clap, share, and comment on the article if you enjoyed it.
