In the News: Safer Consumer Data, an AI Case in the Supreme Court, and AI for Layoffs

The Editors at Hoyalytics
Published in Hoyalytics · Feb 27, 2023 · 5 min read

This week, learn how data clean rooms are bridging the gap between consumer privacy and market research. A case against Google in the Supreme Court could determine the future of AI search engines. And will AI be used to decide whom companies lay off? As you read, consider the ethical ramifications of each of these stories.

What is a Data Clean Room?

By: Meredith Lou

Source: ShareThis

In an increasingly digital age, accelerated by the COVID-19 pandemic, marketers are shifting more and more spending toward digital media platforms, both search and social. Digital marketing brings more personalized ads, more accurate performance attribution, and better ad measurement. However, with new data privacy laws and restrictions on cookies, marketers are being forced to change how they track and use consumer data. To keep personalizing ads with analytics, companies are turning to data clean rooms: data-exchange platforms where they can access customer datasets without exposing consumers' personal profiles and underlying identifying characteristics. A data clean room is essentially a direct marketplace where ad sellers and buyers can integrate consumer information.

Before these regulations, marketers made full customer datasets available for media companies to use in targeting ads. Now, ad agencies retain ownership of their data and choose which anonymized records to share in the clean room. Clean room collaborators can incorporate third-party data, update existing data in real time, and manage who is able to access it. With cookies limited, attribution is more complicated: it is hard to tell exactly when a user converted. Clean rooms may help by letting multiple parties compare across datasets to determine which actions led to a conversion. As clean rooms evolve, companies should take advantage of the platform while it is still new by beginning to integrate the required interfaces.
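As a rough illustration of the idea (a toy sketch, not any vendor's actual protocol), the matching step can be thought of as two parties pseudonymizing their identifiers with a shared secret before comparing datasets, so that only aggregate statistics ever leave the clean room. All names, emails, and the salt below are invented for the example:

```python
import hashlib

def hash_id(email: str, salt: str) -> str:
    """Pseudonymize an identifier with a shared salt before it enters the clean room."""
    return hashlib.sha256((salt + email.lower()).encode()).hexdigest()

SALT = "agreed-upon-secret"  # hypothetical secret shared only by the collaborators

# Advertiser's side: customers who converted (made a purchase)
advertiser = {hash_id(e, SALT) for e in ["ana@example.com", "bo@example.com"]}

# Publisher's side: users who were shown the ad campaign
publisher = {hash_id(e, SALT) for e in ["bo@example.com", "cy@example.com", "di@example.com"]}

# Inside the clean room, datasets are joined on the hashed IDs; only the
# aggregate count is reported, never the raw identifiers themselves.
exposed_and_converted = len(advertiser & publisher)
print(f"Exposed users who converted: {exposed_and_converted}")
```

Each party learns how many exposed users converted without either side handing over its raw customer list, which is the basic privacy trade the clean room model offers.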

The Case of Gonzalez v. Google

By: Chris Tengey

Source: The Supreme Court of the United States

Last Tuesday, the Supreme Court heard oral arguments in Gonzalez v. Google, a case on whether big tech companies such as YouTube can be sued for allowing terrorists to use and post on their platforms. The ramifications within the tech world are obvious: a ruling in favor of Gonzalez could cripple search engines as we know them. The future of AI-powered chatbots such as OpenAI's ChatGPT and Google's Bard is also tied to the case. The case centers on Section 230 of the Communications Decency Act of 1996, which shields platforms from liability for content created by their users. It is the statute that makes it difficult to sue YouTube for defamatory content posted on it, since YouTube is not liable for third-party content.

Until now, regular search engines, and by extension AI language models, could rely on Section 230 when they linked to inaccurate information. Since there is no legal precedent on AI search engines, the legal consensus remains ambiguous. Courts will eventually need to determine whether these systems are producing their own content or simply drawing on someone else's information from the internet. The Gonzalez case could establish that a web service loses Section 230 protections by displaying user-generated content in a way that promotes it. Because this is the first time the Supreme Court has interpreted Section 230, the decision could make AI search development a precarious area. Some justices, such as Clarence Thomas, have already argued for revisiting Section 230. The current Court has already overturned landmark precedent, so the reversal of another protective measure would not be all that unexpected. The courts are reconsidering the basics of technology law on a completely new frontier, and as users of the internet, we all need to remain cognizant of how the decision may change our use of AI search systems forever.

Layoffs may be decided by another two-letter acronym, and it’s not HR…

By: Jason Yi

Source: Indian Express

As nearly 12,000 Googlers lost their jobs, many were skeptical of Google’s claim that there was “no algorithm involved” in its decisions. In fact, in a January survey of 300 HR leaders, 98% said that software would help inform their layoff decisions. This new class of workplace AI products curates a “skills inventory” that tracks which work experiences, certifications, skill sets, and traits are indicative of a high-performing employee. While these technologies started as a way to match people to the projects they would be most successful on, some are now being applied to layoffs. One type of AI used within HR departments analyzes performance data, which many HR leaders say is an important factor in deciding whom to lay off. However, this doesn’t come without its own slew of problems.

Brian Westfall, a senior HR analyst at Capterra, a software review site, is concerned that these algorithms can inadvertently bake bias into their recommendations. He says, “If an organization has a problem with discrimination…people of color may leave the company at higher rates, but if the algorithm is not trained to know that, it could…suggest more of them for cuts.” With issues as impactful and sensitive as layoffs, there are calls to tread lightly, especially because many of the algorithms used are black boxes, whose inner workings cannot be inspected. As AI becomes more influential in decision-making, more questions about its ethics arise. Yet as these tools become more sophisticated, HR leaders will only adopt them more widely. While some herald these algorithms as the future of HR, others worry that they shouldn’t be at the helm of such sensitive decisions. Where do you stand? Would you let your job be decided by an AI?
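Westfall’s point can be made concrete with a deliberately tiny, entirely hypothetical example: a scoring rule that looks only at neutral-seeming features like tenure can still disadvantage a group whose tenure is shorter for reasons the model never sees, such as past discrimination driving attrition. The data and scoring rule below are invented purely for illustration:

```python
# Hypothetical employees: (name, group, tenure_years, projects_led).
# Suppose the "minority" group has shorter average tenure because of
# historically higher attrition -- a fact absent from the features.
employees = [
    ("A", "majority", 6, 4),
    ("B", "majority", 5, 3),
    ("C", "minority", 2, 3),
    ("D", "minority", 1, 2),
]

def layoff_score(emp):
    """Toy scoring rule: lower score => suggested for cuts."""
    _, _, tenure, projects = emp
    return tenure + projects

# Rank everyone and suggest the bottom two for layoffs.
ranked = sorted(employees, key=layoff_score)
suggested_cuts = [name for name, group, *_ in ranked[:2]]
print(suggested_cuts)
```

Both suggested cuts come from the same group, even though the rule never mentions group membership. This is the mechanism behind the warning: the bias enters through correlated features, not explicit ones, which is why black-box layoff tools draw calls for caution.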


The Editors at Hoyalytics

A group of Georgetown University undergraduates eager to learn data science together. Twitter: @HoyAlytics | Publication: https://medium.com/hoyalytics