Regulations and Ethics in the Tech Industry

Nick Sakunworaratana
b8125-spring2024
Apr 9, 2024

Over the past decade or two, the tech industry has been booming. With the mindset of “move fast and break things,” this growth has brought several problems along with it. Although regulations are being put in place, lawmakers move far more slowly than the industry grows, and the technical expertise they lack causes regulation to fall significantly behind. As a result, large tech companies often choose to prioritize growth over ethics. Several class-action lawsuits have followed from these choices, including the Facebook and Google user-privacy lawsuits.

As of now, lawmakers are still questioning how to catch up with industry growth and whether laws are being broken. Public opinion also tends to be divided over how much the industry should be regulated. The key is striking a balance between preventing consumer harm, encouraging fair competition, and respecting the rule of law on one side, and driving innovation and promoting economic development on the other.

Consumer Privacy
The invasion of consumer privacy and safety has been one of the most discussed topics in regulations and ethics in the tech industry. Because computers communicate over interconnected networks (the “inter-networking” that gives the internet its name), everything we do online involves sending and decoding messages between computers on those networks. The hosts of these networks can therefore track the activities of everyone using them. Concerns regarding consumer privacy include tracking, the crossing of site boundaries, and identity.
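To make the cross-site tracking concern concrete, here is a minimal, purely illustrative sketch of the mechanism: a third-party tracker embedded on many unrelated sites receives the same browser cookie on every page load, so visits to different sites collapse into one profile. All names here are hypothetical, not any real ad network’s API.

```python
class Tracker:
    """Simulates a third-party ad network whose script is embedded on many sites."""

    def __init__(self):
        self.next_id = 0
        self.profiles = {}  # cookie id -> list of sites visited

    def on_page_load(self, browser_cookies, site):
        # If the browser has no tracker cookie yet, assign one.
        if "tracker_id" not in browser_cookies:
            browser_cookies["tracker_id"] = f"user-{self.next_id}"
            self.next_id += 1
        uid = browser_cookies["tracker_id"]
        # The same cookie arrives from every site that embeds the tracker,
        # so activity on unrelated sites is linked into a single profile.
        self.profiles.setdefault(uid, []).append(site)
        return uid


tracker = Tracker()
cookies = {}  # one user's browser cookie jar

tracker.on_page_load(cookies, "news.example")
tracker.on_page_load(cookies, "shopping.example")
uid = tracker.on_page_load(cookies, "health.example")

print(tracker.profiles[uid])
# ['news.example', 'shopping.example', 'health.example']
```

The point of the sketch is that no single site needs to know the others exist: the shared cookie alone is enough to cross site boundaries, which is exactly why browsers have been restricting third-party cookies.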

The Facebook class-action lawsuit serves as a very good example of privacy concerns within the tech industry. Meta, Facebook’s parent company, was found in 2018 to have allowed the data of 87 million users to be harvested by Cambridge Analytica without the users’ consent. (1) Cambridge Analytica used this data to build psychological profiles of American voters in support of Donald Trump’s 2016 presidential campaign. These profiles were sold to political campaigns to tailor ads targeting voters and influence their decisions. The data was so powerful that researchers say private information from users’ profiles, their friends’ profiles, and their activity on Facebook could reveal more about a person than their parents or romantic partners knew.

Furthermore, there were even reports of possible links between Cambridge Analytica and Russia. (2) The prospect of Russia using this data to influence American voters poses a serious threat to national security and raised deep public concern. In response, Facebook agreed to a record $5 billion settlement with the Federal Trade Commission (FTC) and committed to completely revamping its approach to privacy.

AI and Privacy Concerns
AI’s role in today’s society has become increasingly significant, especially with tools like ChatGPT that let anyone access the power of AI. After lawsuits such as the Facebook class action, the public has become more aware of privacy concerns related to AI. AI tools learn from whatever information is given to them, meaning essentially anything entered could end up in training data. Despite growing awareness of these privacy concerns, people continue to enter personal information into ChatGPT without understanding the implications.

There are already signs that major AI players are cutting corners to harvest data in the race to the top. (3) OpenAI developed Whisper, a speech-recognition tool, and used it to transcribe YouTube videos to gather training data for its AI models, a practice that violates YouTube’s rules against the independent use of video content. Google and Meta are reportedly following suit.

Final Note
Lawmakers are still behind in their capability to keep up with the growth of the tech industry, despite the lessons learned from what happened with Facebook. The rapid development of AI raises the question: how should this space be regulated, and by whom? Lawmakers are clearly struggling to keep pace, and the people who understand the space best mostly work for the big players in the AI industry. This leads to further questions: should companies like OpenAI and DeepMind be the ones setting the regulations? Is it acceptable for a company that develops a technology to also regulate its own space? How can we trust them? These are valid concerns that many people share.

In my opinion, there are three ways this could play out: first, AI companies regulate themselves; second, lawmakers continue to regulate the space; and third, a mixture of both. I believe the third option is the best approach. We cannot allow checks and balances to be carried out by the same entity, and lawmakers will not catch up with the pace of AI on their own. Having both parties work together may sound like a simple answer, but in practice, given the potential conflicts of interest, it is uncertain how this will play out.

Sources:
(1) https://www.cnbc.com/2022/12/23/facebook-parent-meta-agrees-to-pay-725-million-to-settle-privacy-lawsuit-prompted-by-cambridge-analytica-scandal.html
(2) https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html
(3) https://www.nytimes.com/2024/04/06/technology/tech-giants-harvest-data-artificial-intelligence.html
