Should Data Collection Be More Tightly Regulated?

Charles Hua
SI 410: Ethics and Information Technology
Feb 18, 2023

Are you being watched? It’s not just your computer or phone camera that you need to worry about. Big tech companies are collecting and analyzing vast amounts of data on our daily activities. From the keywords we search on Google, to the photos we like on Instagram, to the products we buy on Amazon, our digital footprints are constantly being tracked.

From the class reading “Theorizing Affordances: From Request to Refuse,” we know that technology products shape our social behaviors to some degree. Big tech companies encourage, or even demand, that users input more personal information so that they can provide more personalized content and ads. This data collection may seem harmless or even beneficial at first glance. After all, personalized ads and product recommendations make shopping and browsing more convenient. But the truth is that the data collection practices of big tech companies have far-reaching effects on social and political systems, as well as on individual privacy and autonomy. Thus, the data collection activities of big companies must be more strictly regulated, through measures such as a consistent yet flexible punishment scale for data breaches and greater user autonomy over personal data. On top of these regulations, promoting more advanced privacy technologies would further improve data safety.

One primary concern is the potential for misuse or abuse of personal data, which a consistent but flexible punishment scale could help address. The more data companies collect from us, the more power they have to influence our decisions and behaviors. This can be as simple as personalized “you might like” ads for products we don’t need, or as complex as using psychological and sentiment analysis to manipulate our political views. A typical example of data misuse is the Cambridge Analytica scandal. In 2018, Cambridge Analytica harvested data from millions of Facebook users without consent and used it to target political advertising. The Federal Trade Commission later fined Facebook 5 billion dollars for sharing the data, and Cambridge Analytica declared bankruptcy in 2018. Facebook’s parent company, Meta, later settled litigation over the scandal by paying 870 million dollars. Besides this well-known scandal, there are other cases in which Facebook was not fined for data breaches. In one such incident, about 533 million Facebook users’ information, including sensitive data such as personal phone numbers, was exposed by hackers. I cannot find any record of Facebook being fined for that incident.

These scandals remind us that even when data is collected and used for seemingly good purposes, such as improving product recommendations or user experience, there is always the risk of unintended consequences. They also expose a regulatory problem: current rules are not consistent and flexible enough to punish all data breaches by big companies. The government should implement guidelines that map the severity of a data leak to corresponding consequences, such as fines or criminal charges. These guidelines should be flexible enough to address a variety of data breaches and should be regularly updated as new threats emerge. For example, in the second case above, the data was not leaked intentionally, yet the consequence was as severe as an intentional leak, since many users’ data was compromised. A proportionate punishment should therefore be issued even for unintentional incidents. This way, companies will be “forced” to adopt stringent data protection measures and regularly conduct security checks to identify potential vulnerabilities.

Another concern is the impact of data collection on individual privacy and autonomy. The more data companies have on us, the more they know about our personal lives, interests, and behaviors. This can make it difficult to control our personal information and can erode our sense of privacy and autonomy. To address this issue, we, as users, should have more autonomy over our own information.

Weee was recently reported to have accidentally leaked users’ info

I recently suffered from a data leak myself. I used to purchase snacks and fresh groceries from an online shopping site called “Weee,” since it offered tons of products imported from Asia that could not be bought in other grocery stores. In recent weeks, I kept receiving spam text messages and emails, and the senders apparently knew who I was, because they texted me in my native language (which implies that they at least knew part of my identity). Later, I received an email from Weee informing me that my information had been leaked. The email is below:

That was when I realized I had no autonomy over my data. I don’t even know how the company uses my data or who my personal information is shared with. I was quite angry and uninstalled the app immediately after reading the email. After calming down, I wondered what consumers could do to prevent big tech companies from using our data in ways we do not want. The reality is that there is not much we as consumers can do besides being cautious when inputting personal information; some government intervention must take place to offer consumers more autonomy over their personal information.

Though it is sometimes unavoidable to share personal information with tech companies for a better experience, consumers should have complete autonomy over their data. Under current Federal Government regulations, users have limited influence over how companies utilize their personal information. Although certain regulations, like CAN-SPAM and TCPA, give users some degree of control by allowing them to choose whether to receive personalized advertisements, they do not offer complete protection for customers’ data. Facial recognition technology is a good example. The technology is becoming increasingly ubiquitous in daily life, from unlocking our phones to security cameras in public spaces. However, our facial features are often collected and analyzed without our knowledge or consent, and few regulations govern their use. Companies can potentially use this technology to track our movements, monitor our behaviors, and make decisions about us without our input or knowledge. This lack of control over our personal data can seriously affect our privacy and civil liberties.

To address this issue and ensure that users can enjoy social media while retaining authority over their data, it is crucial to establish regulations that empower users to determine which information companies can gather and how it can be utilized. This would give users greater power over their personal data and help safeguard their privacy. One solution could be implementing laws similar to the European Union’s General Data Protection Regulation (GDPR). This would give users the right to know what data is being collected about them, who is collecting it, and how it is being used. Users would also have the right to delete their data or to restrict how it is used. Companies and governments would have to obtain consent from users before collecting data and provide clear and concise information about their data collection practices. In the facial recognition example, any facial scanning app could be required to ask users, before any scanning, whether they consent to their face being used as part of a training dataset.

In addition to regulatory measures, developing and promoting privacy-enhancing technologies can also address the ethical concerns surrounding big tech companies’ data collection practices. Decentralized platforms that use blockchain technology, for example, could give users complete control over their personal data, allowing them to decide who can access it and how it is used. Such platforms could also give users greater transparency and accountability, making it easier for them to hold companies and governments responsible for their data collection practices. Other technologies, such as differential privacy, can be used to protect user privacy while still allowing for data collection and analysis. Differential privacy involves adding noise to data sets to prevent anyone from identifying any particular individual while still allowing general trends and patterns to be identified. This can help protect user privacy while allowing companies to benefit from data analysis (with consumers’ consent, of course). Overall, the development and promotion of privacy-enhancing technologies can be an effective way to address the ethical concerns surrounding big tech’s data collection practices and to give users more control over their personal data. Combining regulatory measures with technological solutions can create a more ethical and responsible data ecosystem that respects user privacy and autonomy.
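To make the differential privacy idea concrete, here is a minimal sketch of the Laplace mechanism, one standard way to add noise to a query result. The dataset (a list of hypothetical user ages) and the `dp_count` helper are illustrative assumptions, not any particular company’s implementation; a count query has sensitivity 1, so Laplace noise with scale 1/ε is enough for ε-differential privacy.

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponential samples with mean
    # `scale` follows a Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(values, predicate, epsilon):
    """Answer "how many values satisfy predicate?" with
    epsilon-differential privacy. A count query has sensitivity 1
    (adding or removing one person changes it by at most 1), so
    Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: ages of ten users of a shopping app.
ages = [23, 35, 41, 29, 52, 47, 31, 60, 38, 44]
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0)
print(round(noisy, 2))  # close to the true count of 5, plus random noise
```

Notice the trade-off the parameter ε controls: a smaller ε adds more noise (stronger privacy, less accurate analytics), while a larger ε does the opposite, which is exactly the balance between individual privacy and useful data analysis described above.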

In conclusion, the data collection practices of big tech companies are a major ethical issue that must be addressed. While data collection can have benefits, such as personalized recommendations and improved user experience, the potential for harm and abuse is significant. Companies should ensure that our personal data is used ethically and responsibly and that we, as members of the online community, retain control over our lives and decisions. To achieve this goal, stricter regulations on data collection should be introduced, including a more flexible punishment scale and greater user autonomy over how data is used. Promoting cutting-edge privacy technologies on top of these regulations would be an ideal solution to current data collection issues.
