Disrupting Your Search: The Human Impact of Google’s AI Integration and Microsoft’s OpenAI Investment

rahul bhattacharya
ETHIX
5 min read · May 17, 2023


On May 11, at its annual developer conference, Google I/O, Google announced a range of new products and features. Going through the announcements highlighting generative AI integrations, one concern stood out: as Microsoft and Google race to dominate AI-powered search, AI chatbot search is challenging the existing semantics of search itself. Google made several announcements about AI in its search engine, including new features and a renewed commitment to responsible AI. One of its key announcements, the Search Generative Experience (SGE), is a new AI-powered search experience that combines and summarizes information from around the web in response to search queries. This, Google claims, will make it easier for users to find the information they are looking for and reduce the time they spend scrolling through search results.

As a user of Google’s and Microsoft’s search engines, I find that the recent announcements about integrating Bard into Google Search, together with Microsoft’s multibillion-dollar investment in OpenAI, raise serious concerns. Search engines are an essential tool for representation and identity, as they allow users to access information that reflects their interests and values. The integration of AI into search, however, has raised concerns about bias, censorship, and the marginalization of certain communities. Complaints from small business owners alleging that AI chatbot search is biased against small businesses highlight the potential negative impact of AI-powered search on smaller players, and with it on the livelihoods of these businesses and their owners. As it stands, the recently announced Search Generative Experience (SGE) seems likely to marginalise small businesses even further within the search engine ecosystem.

Google’s development of AI algorithms has been increasingly criticized for perpetuating gender and racial biases. The latest announcement, integrating a filter built on Jigsaw’s Perspective tool into Google Search, raises multiple questions and problems. Perspective is an API that uses machine learning to score text for perceived toxicity, letting platforms filter or rank content according to that score. Deployed as a search filter, it has the potential to amplify Google’s dataset and algorithmic biases while perpetuating censorship in the guise of protection from toxicity.

Under the guise of helping narrow down results, such filters can be used to censor or skew what users see. Governments, for example, could use them to surface only results that support their own agenda. Excluding results that do not match a particular perspective introduces bias, and filters of this kind can also create echo chambers, where users only see information that confirms their existing beliefs. Over time this isolates users from other viewpoints and entrenches the biases they already hold.
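To make the concern concrete, here is a minimal sketch of threshold-based filtering, assuming each result already carries a toxicity score between 0 and 1 of the kind Perspective produces. The data, field names, and threshold are illustrative, not Google’s actual implementation:

```python
# Illustrative only: a simple threshold filter over search results,
# assuming toxicity scores in [0, 1] like those Perspective returns.

def filter_results(results, threshold=0.7):
    """Drop results whose toxicity score meets or exceeds the threshold."""
    return [r for r in results if r["toxicity"] < threshold]

results = [
    {"url": "https://example.com/a", "toxicity": 0.10},
    {"url": "https://example.com/b", "toxicity": 0.85},  # silently excluded
    {"url": "https://example.com/c", "toxicity": 0.40},
]

kept = filter_results(results)
print([r["url"] for r in kept])
```

Nothing signals to the user that a result was removed: the threshold, and the model behind the score, silently decide what is visible.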

The potential issues of perspective filters are not limited to bias, censorship, and tracking. There are also concerns about how these filters could impact important social issues such as social justice, diversity, and inclusion. For example, perspective filters could be used to exclude results related to social justice issues or to perpetuate existing biases related to race, gender, sexual orientation, and other identities. This could have serious implications for marginalized communities, who may already face significant challenges in accessing accurate information and resources.

Microsoft’s deepening control of OpenAI, together with its own history of censorship, raises concerns about the future of AI development and accountability. Microsoft is a major player in the search engine market, and its stake in OpenAI gives it even more control over how people find information online. It could use that power to favour its own products and services in search results, harming competition and innovation. Microsoft also has a record here: in 2014, its Bing search engine was found to be filtering Chinese-language search results in line with Chinese government censorship, even for users outside China. This raises concerns that Microsoft could use its control over AI chatbot search engines to censor information it deems objectionable.

Even before the Microsoft deal, OpenAI faced legal challenges related to the use of its AI technologies, including lawsuits over copyright infringement and over the collection and processing of personal data without users’ consent. One of the most significant is the class action brought by Matthew Butterick, which alleges that OpenAI’s technology, as deployed in GitHub Copilot, creates and distributes derivative works of copyrighted code without permission. A parallel lawsuit by Getty Images against Stability AI makes similar allegations about generative models trained on Getty’s copyrighted images. These cases highlight the challenges copyright holders face in the age of AI and the need for legal frameworks that can effectively address them. Meanwhile, OpenAI’s lack of transparency around its censorship policies makes it difficult for users to hold the company accountable for its decisions.

The shift towards profit-driven applications of AI can have a significant impact on the representation of underrepresented and marginalized communities, perpetuating existing inequalities and biases. Facial recognition technology and predictive policing algorithms have been criticized for perpetuating racial biases and over-policing communities of colour. Ethical and social considerations, transparency, and accountability must be prioritized in the development of AI technologies to ensure that they promote inclusion, equity, and sustainability.

Microsoft and Google, as two of the largest tech companies in the world, have a significant responsibility to prioritize fairness in their machine learning models. Continued research and development, along with an ongoing commitment to responsible AI, will be necessary to ensure that these technologies are developed and used in a way that benefits society as a whole. Investing in these techniques is not only important for ethical and moral reasons but also for legal compliance.

Data scrubbing and bias mitigation are critical techniques for reducing bias in machine learning models, and getting them wrong has serious implications for users. Machine learning models are used to make important decisions that affect people’s lives, such as hiring decisions, loan approvals, and access to benefits. If these decisions are based on inaccurate or biased data, individuals can be unfairly impacted, perpetuating existing inequalities.
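As an illustration, one widely used bias-mitigation step is “reweighing”: assigning each training example a weight so that group membership and outcome look statistically independent, which upweights underrepresented group/label pairs during training. The sketch below uses a toy dataset; the function and variable names are mine, not any particular vendor’s pipeline:

```python
# A minimal sketch of the "reweighing" bias-mitigation technique,
# on toy data. Each example gets weight P(group) * P(label) / P(group, label).
from collections import Counter

def reweigh(labels, groups):
    """Compute instance weights that make group and label independent."""
    n = len(labels)
    label_counts = Counter(labels)
    group_counts = Counter(groups)
    pair_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] * label_counts[y]) / (n * pair_counts[(g, y)])
        for g, y in zip(groups, labels)
    ]

# Toy data: group "a" receives positive labels far more often than group "b".
labels = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
weights = reweigh(labels, groups)
# Underrepresented pairs, such as positive examples from group "b",
# receive weights above 1 and are upweighted during training.
```

The point is not that this one formula fixes fairness, but that mitigation is an explicit, auditable step a company can choose to invest in or skip.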

Power dynamics in AI development raise important questions about the ethics and accountability of the corporations involved. Inclusive and equitable design practices are essential to ensure that AI technologies promote fairness and equity for all stakeholders, and this requires transparency, accountability, and engagement with stakeholders from diverse backgrounds. Achieving these goals will take a genuine commitment to diversity and inclusion, increased scrutiny of corporations, and a focus on developing AI technologies that benefit all members of society.
