Fear kills more dreams than failure ever will, and in the AIconomy failure is a good thing.

Guido Rojer, Jr.
4 min read · Apr 9, 2023

--

The development of artificial intelligence (AI) has sparked both excitement and fear in people’s minds. I’ve been in constant conversation with many people over the past weeks on the matter after appearing on an episode of WAPU, and the fear-ridden remarks are overtly abundant. I took the time to talk to people about it in more depth because we are living in exciting times. It’s very seldom that a piece of technology bursts onto the scene the way ChatGPT did. As an innovation economist, it’s fascinating to see how the chatbot framing has given the technology such unprecedented reach.

That reach is the main characteristic of one of the latest AI developments to attract attention: ChatGPT, a large language model based on the GPT-3 architecture. While some are excited about ChatGPT’s potential to revolutionize communication, others fear the consequences of such a powerful tool. In this essay, we will discuss why people fear the potential of ChatGPT.

One of the main reasons people fear ChatGPT is its potential to replace human jobs. ChatGPT’s language capabilities are impressive, and it can perform tasks that previously required human intelligence, such as generating human-like responses, writing articles, and even coding software. As ChatGPT’s capabilities continue to grow, it’s not hard to imagine a future where it replaces human workers in various fields, resulting in mass unemployment. This fear is not unfounded, as automation has already displaced many workers in manufacturing and other industries. The ILO has been on top of this since 2018 and calls for a human-in-command approach in which the workforce is involved in regulating and governing AI.

“ChatGPT has the potential to spread disinformation and propaganda.” ChatGPT can be trained on vast amounts of data, including fake news and conspiracy theories. As a result, it can produce highly convincing responses that promote misinformation, hate speech, and even radicalization. With the spread of disinformation already a major problem on social media platforms, the fear is that ChatGPT could further amplify these issues by creating highly convincing fake news and propaganda.

The Privacy Card: ChatGPT relies on data to function effectively, and it can collect and store massive amounts of user data. This data could include personal information such as passwords, addresses, and credit card numbers. There is a concern that this data could be used for nefarious purposes, such as identity theft or targeted advertising, and the EU is already getting headaches because it touches on the GDPR. Additionally, the fact that ChatGPT can generate highly convincing responses raises concerns about its potential to manipulate users into sharing personal information or performing actions they wouldn’t have taken otherwise. ChatGPT and similar engines can also expose confidential firm data, as in the cases of Amazon and Samsung. Italy has gone so far as to ban ChatGPT over regulatory woes.

It’s 2023: ChatGPT’s language abilities are trained on vast amounts of data, and if this data is biased, the model can replicate those biases in its responses. For example, if ChatGPT is trained on data that contains gender or racial bias, it could reproduce these biases in its output. This could have significant implications in areas such as hiring, where ChatGPT could perpetuate discrimination against certain groups.

Control. ChatGPT is a machine learning model, which means it can learn and adapt on its own based on the data it is fed. While this can lead to significant improvements in its language capabilities, it also means that it could develop behaviors that are difficult to control or even anticipate. This fear is particularly relevant in areas such as military applications, where ChatGPT could be used to develop autonomous weapons.

While ChatGPT’s potential is vast, it also raises real concerns. These fears are not unfounded, and as such, there is a need to develop regulations and ethical frameworks to govern ChatGPT’s use. As AI technology continues to evolve, it is important to weigh the potential risks against the benefits and take measures to ensure that the technology is used ethically and responsibly.

FEAR NOT! Many organizations, like universities and professional firms, have been busy developing AI policies. There are many ways informed employees can take advantage of these technologies for firm growth, as explained in my previous article. BuzzFeed has been using ChatGPT to enhance its quizzes, signaling the utility of the technology. We are witnessing the birth of the AIconomy, and the only way to survive is to embrace it. I hear you, it’s scary, but let’s continue the conversation.

Guido Rojer, Jr.

Guido is a Curaçaoan scholar, author, and enfant terrible. He specializes in island-based firms and talks about technology and economic evolution.