Google Search Taking Heat for Being Unfair, Using Personal Information

by Nassim Benchaabane


The Gist:

Google has some technology geeks up in arms over the possibility that it is knowingly violating user privacy and discriminating among the people who use its search engine.

The tech giant prides itself on a search engine that uses algorithms and information about you to tailor search results and advertisements to what you’re looking for. But increasingly, people are finding that Google’s automated system is deciding what they’re looking for, and sometimes its decisions are sexist or racist. Other times the search engine uses personal information Google never made clear it was collecting.

U.S. consumer advocacy group Consumer Watchdog filed an official complaint July 7 with the Federal Trade Commission, arguing that Americans have a “right to be forgotten” from Google’s search results.
As everything moves to automation, these incidents remind us that we can’t always trust robots to do our work for us, and that we need to know more about how our personal data is used by tech companies.


Wait, but I searched for cat videos on Google and it worked. What are we talking about here?

The fact that Google and other companies track user movement and collect personal data to target advertisements and search results is old news. But new data suggests that Google’s use of that information isn’t so user friendly.

We’re talking about two things: Google’s algorithm-powered search engine, and the targeted advertising that appears along with it.

Algorithms are computer processes and formulas that do the Internet searching for you, using “clues” like the search terms you used or the time and date of your search.

Targeted advertising uses information about your Internet browsing habits to show you ads for stuff you might want. As you visit different websites, Google stores information about each one to understand what you’re interested in, then matches advertisements to those interests.
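To make the idea concrete, here’s a toy sketch of how interest-based ad matching might work. It is purely illustrative: the sites, categories and ads below are all made up, and Google’s real system is vastly more sophisticated.

```python
# Toy illustration of interest-based ad targeting. NOT Google's actual
# system; every site, category and ad here is invented for the example.

# Each visited site is mapped to an inferred interest category.
SITE_CATEGORIES = {
    "caraccessories.example": "autos",
    "hikingtrails.example": "outdoors",
    "resumetips.example": "jobs",
}

# Each ad declares which interest category it targets.
ADS = {
    "autos": "Ad: winter tires on sale",
    "outdoors": "Ad: lightweight tents",
    "jobs": "Ad: executive career coaching",
}

def infer_interests(browsing_history):
    """Collect interest categories from the sites a user has visited."""
    return {SITE_CATEGORIES[site] for site in browsing_history
            if site in SITE_CATEGORIES}

def pick_ads(browsing_history):
    """Return the ads whose target category matches the user's interests."""
    interests = infer_interests(browsing_history)
    return [ad for category, ad in ADS.items() if category in interests]

history = ["caraccessories.example", "resumetips.example"]
print(pick_ads(history))
```

The point of the sketch is that the user never tells the system their interests; the system infers them from where the user has been, which is exactly why surprising or sensitive inferences can end up driving the ads you see.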

Wait, how can a search engine discriminate between users?

You could google it, but picture this: You’re searching for jobs using Google. If you’re a man, you’ll get targeted advertisements for better, higher-paying jobs. If you’re a woman, you’re not feeling so lucky.

Google’s algorithms and targeted advertisements are designed to give you search results and ads that are the most relevant to what you’re looking for regardless of who you are. But now we’re learning that sometimes the search engine is limiting your options because of who you are, using personal information you don’t know it’s collecting.

How do we know this?

Researchers from Carnegie Mellon and the International Computer Science Institute built a tool from scratch called AdFisher to probe Google’s targeted ads on third party websites and published their findings in April. The results: users that Google thought were men (and these were fake Web users in the study, not real people) were much more likely to be shown ads for high-salary executive jobs when they visited a news website.

AdFisher sent out thousands of automated Web browsers on selected trails across the Internet to generate activity that targeted-advertising networks would interpret as particular interests. It then logged which ads were shown where and measured statistically significant differences in how ads were targeted by interest category or demographic.

In other words, it simulated Internet browsing.

The results are suggestive rather than conclusive, and Google is examining the study’s methodology to understand its findings. But other tools designed to unravel targeted-advertising systems are in the works, and the White House published a report in 2014 warning of data analytics’ potential to “eclipse civil rights protections” in how personal information is used in the marketplace.

What’s wrong with targeted ads?

We know about the job-search discrimination (ads for high-salary executive jobs were shown 1,825 times to users thought to be men but only 318 times to users thought to be women), but there are other ways the algorithms and targeted ads affect your experience on the Web.
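As a back-of-the-envelope check, those impression counts are wildly unlikely to be a coincidence. The sketch below runs a simple chi-square goodness-of-fit test on them, assuming the simulated “male” and “female” groups were the same size (an assumption on our part; the study used its own, more careful statistical machinery):

```python
# Back-of-the-envelope significance check on the study's reported counts.
# Assumes equal-sized simulated male and female groups, which is our
# simplifying assumption, not a claim about the study's exact design.

male_impressions = 1825    # executive-job ads shown to "male" agents
female_impressions = 318   # executive-job ads shown to "female" agents

total = male_impressions + female_impressions
expected = total / 2  # if gender made no difference, expect a 50/50 split

# Chi-square goodness-of-fit statistic with one degree of freedom.
chi_square = sum((observed - expected) ** 2 / expected
                 for observed in (male_impressions, female_impressions))

# 3.84 is the critical value for p < 0.05 at one degree of freedom.
print(f"chi-square = {chi_square:.1f}, significant: {chi_square > 3.84}")
```

The statistic comes out in the thousands, orders of magnitude past the 3.84 threshold, which is why a skew this large cannot plausibly be chance.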

So I should stop using Google because they’re an evil company? I’m not a fan of Bing.

Hold on, no one said Google is evil. It’s difficult to assign blame for what’s happening. And with thousands upon thousands of web pages out there, it makes sense for Google to want to use high-powered math and automation to filter for the best results.

Google is the largest, most powerful search engine on the Internet, and it could be that the company has lost a little control over its algorithmic system. In fact, the researchers from Carnegie Mellon and the International Computer Science Institute think that’s more likely.

What do you mean?

Algorithms are an example of machine learning: they’re programmed by people and taught to learn from our behavior. The idea is that if you search for “cat space train” multiple times, the algorithm will start to anticipate what you’re looking for and bring up better pictures of cats, space and trains, faster. It’s a chicken-and-egg problem: it’s unclear whether the machines are feeding us biased results because we search in discriminatory ways or whether bias was intentionally coded into the algorithms.
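A stripped-down illustration of that “learning from behavior” idea, purely hypothetical and nothing like Google’s actual ranking, is a suggester that counts your past queries and completes new ones from your own history:

```python
from collections import Counter

class QuerySuggester:
    """Toy query completion that 'learns' from a user's search history.
    Purely illustrative; real search ranking is far more complex."""

    def __init__(self):
        self.history = Counter()

    def record(self, query):
        """Remember that the user ran this query once more."""
        self.history[query] += 1

    def suggest(self, prefix):
        """Return past queries starting with `prefix`, most frequent first."""
        matches = [(q, n) for q, n in self.history.items()
                   if q.startswith(prefix)]
        return [q for q, _ in sorted(matches, key=lambda qn: -qn[1])]

suggester = QuerySuggester()
for _ in range(3):
    suggester.record("cat space train")
suggester.record("cat food")

print(suggester.suggest("cat"))
```

Even this tiny loop shows how bias can creep in without anyone coding it deliberately: the system simply ranks higher whatever users have already searched for most.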

And Google’s advertising system is complex. Google uses data to target ads, but the advertising agencies and businesses that buy ad space also have control over which demographics they want to reach. They can use information about Internet browsing habits, or personal data they collected on their own, to make their advertisements even more specifically targeted. In the job-search study, for example, it could be Google that’s being sexist in targeting specific genders, or it could be the advertisers engineering their ads to be so selectively targeted toward men.

Shouldn’t Google be allowed to show whatever search results it wants? It is GOOGLE Search after all.

While it might not seem to directly affect your Internet search habits, think about how people use information to make decisions. When millions of people are being fed selective information, it has tangible effects on the decisions they make.

But more importantly, it shows that automated search and targeted advertisement systems aren’t transparent. We don’t understand how they work, how much of our personal information is collected or how it’s being used.

You’re supposed to be able to check your Ads Settings, a Google transparency tool, if you feel like opting out of targeted ads based on your interests, but nowhere does Google inform you that your “interests” include potentially sensitive information, like when you’re looking for information on substance abuse.

Google’s selective search engine also affects business. In April, European Union regulators opened a huge antitrust case against the California-based company, accusing it of using its algorithms to promote its own services over those of its competitors to dominate the market.

Isn’t there, like, some kind of law against this I could Google search?

Though the U.S. Federal Trade Commission forced Google in 2012 to agree to stop using its vast online influence to win an unfair business advantage, Google’s practices weren’t outright illegal.

The examples of Google’s selective searching and ad targeting don’t break any specific privacy rules (although they do contradict some company policies, like the one forbidding targeting based on a person’s medical information). The Consumer Watchdog complaint filed July 7 with the FTC asks for the ability to remove our digital footprints from the Internet.

Can I google a solution to this?

If you’re concerned and want to get off the grid, you can see what Google’s got on you here.





Liked what you saw? Browse more complete, concise and contextual gists on our interactive world map at Gistory.co
