The Dark Side of Big Tech’s Billion-Dollar Empire

Over the past two decades, the rise of Big Tech companies like Google, Facebook, and Amazon has seemed unstoppable. They are among the most powerful companies in the world, each valued at close to a trillion dollars. However, much of their success lies in their dark side: the collection and exploitation of user data, which they use to create personalized advertising and content for profit. These practices have fueled the growth of their trillion-dollar empires, but at a cost to society, as they breach privacy and raise other ethical concerns.

The sad reality is that users of these Big Tech platforms are unaware that their time spent online, whether creating and sharing content or simply browsing social media, generates data that is collected, analyzed, and monetized by these companies. This exploitation of personal data has given Big Tech power and control over our lives, as they can use our data to influence and shape public opinion. As a user of these platforms, I find it astonishing how much power Big Tech companies have over me, especially given how unaware I have been of what goes on behind the scenes.

The Power Of Big Tech Companies

Big Tech companies have become some of the most powerful corporations in the world. They have the ability to influence and control our lives simply because of their access to our personal data. Our data is what keeps these companies in existence: they have the power to collect, analyze, and exploit it in ways that shape our lives, for instance by streamlining our decision-making. These companies use our data to create personalized advertising and content that earns them profits. In recent years, we have seen how users’ data has been used to influence public opinion during political campaigns, highlighting the devastating impact of Big Tech’s power on democratic processes. For example, during the 2016 U.S. elections, users’ data was used to influence and shape public opinion. Something I relate to when it comes to personalized content is how, after I search for something like a trip to Europe, a couple of seconds later I see an ad for the very same thing.

However, the power held by Big Tech companies and the power held by their users is grossly imbalanced. Users may not fully understand the implications of the data they generate, or may feel pressured to share more data than they would like in order to access social or economic benefits. After all, everyone wants the best experience they can have online. Big Tech companies, on the other hand, continue to exploit user data for their own benefit, without regard for their users. Their business model is founded on user data, and that data is often acquired through unethical means such as manipulating user behavior. This exploitation of personal data has raised the ethical question of whether the benefits these companies offer are enough for users to trade away their privacy.

Ethical Issues Brought Upon The Power Of Big Tech Companies

Big Tech companies provide users with free access to their platforms in exchange for their data. However, the significant power imbalance between users and these companies creates real risks. Since these companies have access to users’ personal information, it is their responsibility to ensure that this information is protected. Nevertheless, repeated data breaches suggest that the companies are not doing enough to safeguard users’ data.

Privacy is one of the biggest ethical concerns raised by Big Tech companies’ control of data. Because users generate a great deal of data, much of it personal, on these companies’ platforms, it is the companies’ responsibility to keep that data private and protected. However, incidents like the Cambridge Analytica scandal show that this isn’t always the case, putting users’ privacy and wellbeing at risk.

The Cambridge Analytica scandal showed just how lightly Big Tech companies took users’ data privacy: in the scandal, profiles were accessed without consent. Consider the story of Christopher Deason. Mr. Deason was completing online surveys when he granted one of the sites access to his Facebook account. He stresses that if he had known granting permission would lead to data collection, he would never have accepted. What is more concerning is that the data collected included the profiles of 205 of his friends, without their knowledge or consent. That information, consisting of names, birth dates, and locations, was added to Cambridge Analytica’s database.

Another concern is discrimination and bias. Big Tech companies use algorithms to make decisions based on the data they collect, so an algorithm’s decisions will be only as fair or unfair as that data. Problems arise when a program is trained on information about one group of people and not others. This was evident when facial recognition technology first emerged and identified lighter-skinned faces far more reliably than darker-skinned ones. It is therefore important to ensure that algorithms are trained in a way that is fair and does not discriminate. As a citizen of the Republic of Kenya, it was saddening to see how facial recognition failed my fellow Africans.
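The point above can be illustrated with a deliberately simplified sketch. The data and the “recognizer” here are entirely hypothetical: the model just memorizes the most common outcome in a training set dominated by one group, which is enough to show how skewed training data yields skewed accuracy.

```python
# Toy sketch with hypothetical data: a trivial "recognizer" that
# memorizes the most common label in its training set, mimicking how
# a model trained mostly on one group serves that group best.
from collections import Counter

# Training data skewed toward group_a (90% of examples).
train = [("group_a", "match")] * 90 + [("group_b", "no_match")] * 10

# The "model": always predict the overall most common label.
most_common = Counter(label for _, label in train).most_common(1)[0][0]

def predict(_sample):
    return most_common

def accuracy(group, expected):
    # Fraction of that group's samples the model labels correctly.
    samples = [s for s in train if s[0] == group]
    correct = sum(predict(s) == expected for s in samples)
    return correct / len(samples)

print(accuracy("group_a", "match"))     # 1.0: perfect for the majority
print(accuracy("group_b", "no_match"))  # 0.0: useless for the minority
```

The model looks accurate overall (90% of all samples are classified correctly), which is exactly why aggregate accuracy can hide the complete failure on the underrepresented group.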

Furthermore, Big Tech companies have the power to filter the information users see on their platforms. Algorithms decide what content users see, which can limit their exposure to alternative perspectives and ideas. This is concerning because algorithms can inadvertently reinforce users’ existing beliefs without presenting other viewpoints, much as individuals with different political affiliations watch different news channels that echo what they already believe. In my opinion, this is worrisome because algorithms can train users to believe only what they already think is correct, without giving them the opportunity to learn from other perspectives.

In Relation To The Power Chapter by D’Ignazio & Klein

The issue of Big Tech companies holding power over their users’ data echoes the themes of “The Power Chapter” by D’Ignazio and Klein in their book Data Feminism. Throughout the reading, the authors discuss the importance of data and how it can be manipulated in ways that produce bias and discrimination. The first point of overlap is algorithmic bias. Earlier in this blog, it was mentioned that an algorithm’s decisions will be as fair or unfair as the data it is trained on. The reading offers a perfect example: Amazon’s algorithm for screening first-round job applicants developed a stronger preference for male applicants because it was trained on the resumes of male applicants. The authors argue convincingly that algorithms trained on biased data can harm users, especially those who are underrepresented. The reading covers other algorithmic biases as well, such as the Google search engine returning totally different results based on skin color, an example of discrimination. Through the reading we see that poorly trained algorithms do not benefit society, as they can tarnish the image of entire groups of people. To me, it is crucial for Big Tech companies to audit and improve their algorithms to ensure fairness and reduce the harm caused by discrimination and bias.

What Do Big Tech Companies Get In Return?

The selfish collection and exploitation of user data is how these Big Tech companies earn revenue. From the data collected, they build a detailed profile of each user, which is sold to advertisers. Advertisers use this information to create personalized content for each individual, since the profile includes details such as the user’s interests. This targeted advertising achieves higher success rates than traditional advertising, which means advertisers are willing to pay a premium for it.

The rise of Big Tech has brought with it power and control over users’ data, raising ethical concerns about privacy, discrimination, and filter bias. While these companies offer users free access to their platforms in exchange for data, the power imbalance between users and Big Tech means that users’ data may be exploited without their knowledge or consent. The potential for data breaches, and the use of algorithms to filter and shape users’ experiences, add to these concerns. In light of these issues, Big Tech companies must address these ethical concerns and improve their algorithms to ensure fairness and privacy for all users. As discussed in “The Power Chapter” by D’Ignazio & Klein, using biased data to train algorithms has consequences and worsens existing power imbalances. Big Tech companies should therefore prioritize ethical considerations in their business practices and ensure that users’ data is protected and used ethically.
