The Impact of AI and Big Tech on Society

--

By: Dave Ladouceur, June 2024

All references in appendix — Copyright © 2024 — Regenerative Development Corp — All Rights Reserved. Used with permission only.

The rapid advancement of artificial intelligence (AI) and the unprecedented growth of Big Tech companies have brought about significant changes in our daily lives. However, these developments come with a range of serious challenges and threats. The pervasive influence of AI and Big Tech is jeopardizing humanity in several critical ways, with long-term detrimental impacts on society. Here, we outline eight major issues, supported by research and references, followed by an introduction to our solution.

1. AI-Driven Manipulation

AI algorithms are designed to manipulate user behavior for profit by analyzing vast amounts of personal data and creating detailed user profiles. These systems predict and influence user actions to maximize engagement and profit, often leading users to spend more time on platforms and make impulsive decisions. This manipulation is subtle and leverages the user’s vulnerabilities without their knowledge, raising significant ethical concerns about transparency and the need for human oversight.
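To make the mechanism concrete, here is a deliberately simplified Python sketch of an engagement-first ranking loop. All names, weights, and the toy scoring formula are hypothetical illustrations, not any platform's actual code; real systems use large learned models, but the objective has the same shape: rank by predicted engagement, with no term for truthfulness or user well-being.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    emotional_intensity: float  # 0..1: how provocative the content is
    topic_match: float          # 0..1: similarity to the user's interest profile

def predicted_engagement(post: Post, session_minutes: float) -> float:
    """Toy engagement model: provocative, well-targeted content scores highest.

    Note what is absent: nothing here measures whether the content is true,
    healthy, or age-appropriate. The objective is attention alone.
    """
    fatigue = 1.0 / (1.0 + session_minutes / 60.0)  # longer sessions favor stronger stimuli
    return (0.6 * post.emotional_intensity + 0.4 * post.topic_match) / fatigue

def rank_feed(posts: list[Post], session_minutes: float) -> list[Post]:
    # Sort purely by predicted engagement -- the heart of the manipulation concern.
    return sorted(posts, key=lambda p: predicted_engagement(p, session_minutes), reverse=True)

feed = rank_feed(
    [Post("calm-news", 0.2, 0.9), Post("outrage-bait", 0.95, 0.6)],
    session_minutes=45,
)
print([p.id for p in feed])  # ['outrage-bait', 'calm-news']
```

The research summarized below documents how production systems exhibit this pattern at scale.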

  • “The dark side of artificial intelligence: manipulation of human behaviour” explains how AI can manipulate behavior by exploiting human vulnerabilities through strategic planning and prediction of actions for profit​ (Bruegel)​.
  • “Characterizing Manipulation from AI Systems” delves into how AI’s influence on human behavior is tied to its ability to predict and manipulate user behavior​ (ar5iv)​.
  • “How AI Can Be Used to Manipulate People” covers the use of AI for creating deepfakes and controlling access to information, thus manipulating behavior and creating echo chambers​ (Psychology Today)​.

These manipulative practices underscore the urgent need for stricter regulations and ethical standards in AI development.

2. Misinformation Spread

AI significantly amplifies the spread of misinformation by creating and disseminating fake content, such as deepfake videos and images. These sophisticated AI-generated media can easily deceive viewers and are often used to spread false narratives quickly and broadly through social media platforms, undermining public trust and democratic processes.

  • “How generative AI is boosting the spread of disinformation and propaganda” by MIT Technology Review highlights how generative AI tools are used to create and spread misinformation​ (MIT Technology Review)​.
  • “The big election year: how to stop AI undermining the vote in 2024” by the World Economic Forum discusses the risks of AI in spreading disinformation during elections, destabilizing society by questioning the legitimacy of election results​ (World Economic Forum)​.
  • “How AI Threatens Democracy” by the Journal of Democracy outlines how AI-generated propaganda can influence political perceptions and erode trust in democratic institutions​ (Journal of Democracy)​.
  • “The role of artificial intelligence in disinformation” by Cambridge Core examines how AI techniques facilitate the creation and dissemination of disinformation, posing ethical and regulatory challenges​ (Cambridge)​.

The rampant spread of misinformation through AI-generated content threatens democratic institutions and public trust, highlighting the necessity for robust countermeasures.

3. Mental Health Crisis

Social media algorithms contribute to mental health issues by promoting content that maximizes user engagement through emotionally charged material. This leads to increased anxiety, depression, and addiction, particularly among younger users who are exposed to constant streams of distressing and addictive content.

  • “Social media use linked to decline in mental health” by MIT Sloan shows a significant link between social media use and increased rates of anxiety and depression among college students, particularly due to social comparison and the pressure of online interactions​ (MIT Sloan)​.
  • “A Psychiatrist’s Perspective on Social Media Algorithms and Mental Health” by Stanford HAI discusses how social media algorithms contribute to mental health problems like loneliness, anxiety, and depression by creating addictive engagement patterns and facilitating harmful behaviors such as cyber-bullying and disordered eating​ (Stanford HAI)​.
  • “Social media and mental health: The impact on Gen Z” by McKinsey highlights the inverse relationship between screen time and psychological well-being, particularly among younger generations, with increased social media use linked to poorer mental health outcomes​ (McKinsey & Company)​.
  • “Social media use and mental health during the COVID-19 pandemic in young adults” by BMC Public Health presents meta-analysis results showing that increased time spent on social media is associated with higher levels of anxiety and depression among young adults​ (BioMed Central)​.

The mental health crisis exacerbated by AI-driven social media underscores the need for changes in education, greater mental health awareness, and new systems.

4. Harming Children

AI-driven social media and content recommendation systems have a profound impact on children, significantly changing their outlook, purpose, and virtues. These systems expose children to harmful and addictive content, eroding their real-world social skills and emotional well-being, leading to long-term negative effects.

Exposure to Harmful Content

Children are particularly vulnerable to the effects of AI-driven content recommendations. These systems often prioritize engagement over well-being, leading to the promotion of content that can be harmful or inappropriate for young audiences. This exposure can result in increased anxiety, depression, and other mental health issues. According to Dr. Julie Fratantoni, the brain’s reward network compels children to keep checking social media, seeking out the dopamine hit that comes with another like or friend request. This nonstop flow of dopamine decreases attention, focus, and quality of sleep, while also reducing in-person interactions​ (Center for BrainHealth)​.

Addictive Behaviors

The addictive nature of social media, driven by AI algorithms, can lead to excessive use and social isolation. Adolescents are faced with more opportunities and traps than ever before, which can contribute to lower self-esteem and identity confusion. As explained by experts at Child Mind Institute, the constant pressure to maintain an idealized online persona can be exhausting and lead to feelings of inadequacy and anxiety. Additionally, the hyperconnected nature of social media means that children never get a break from the demands of online relationships, which can lead to emotional depletion and increased anxiety​ (Child Mind Institute)​.

Impact on Development

The influence of social media on children’s development extends to their outlook on life and their core values. Constant exposure to curated and often unrealistic portrayals of life can distort children’s perceptions of reality and self-worth. This can hinder the development of a healthy sense of identity and purpose. Furthermore, the need for instant gratification fostered by social media can undermine the development of patience, perseverance, and other virtues essential for long-term success and well-being​ (American Psychological Association)​​ (Center for BrainHealth)​.

The adverse effects on children’s development call for urgent reforms in content moderation and digital literacy education.

5. Environmental Damage

The environmental impact of AI technologies is an often-overlooked issue that has significant implications for sustainability. The development and operation of large AI models require substantial computational power, leading to considerable energy consumption and resource depletion.

Energy Consumption

Training large AI models, such as those used in natural language processing and image recognition, consumes vast amounts of energy. This energy consumption not only contributes to greenhouse gas emissions but also places a significant strain on power grids. For instance, training the GPT-3 model, a precursor to ChatGPT, consumed approximately 1,300 megawatt-hours of electricity, equivalent to the monthly energy consumption of 1,450 average U.S. households​ (LL MIT)​. The computational power required for sustaining AI’s rise is doubling roughly every 100 days, with projections indicating that AI could use more power than the entire country of Iceland by 2028​ (World Economic Forum)​.
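As a quick sanity check on those figures, the household comparison follows from simple arithmetic (assuming the roughly 890 kWh that an average U.S. household consumes per month), and a 100-day doubling time implies roughly a twelvefold increase per year:

```latex
\frac{1{,}300\ \text{MWh}}{1{,}450\ \text{households}} \approx 897\ \text{kWh per household-month},
\qquad
P(t) = P_0 \cdot 2^{\,t/100\ \text{days}} \;\Rightarrow\; \frac{P(365\ \text{days})}{P_0} = 2^{3.65} \approx 12.6.
```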

Carbon Footprint and E-Waste

The carbon footprint associated with AI development is substantial. The energy-intensive process of training and running AI models leads to significant greenhouse gas emissions. According to a report from Stanford University, the carbon emissions from training a single AI model can be comparable to the lifetime emissions of five cars (Stanford). AI-related energy consumption could be 10 times greater by 2027 compared to 2023 levels, highlighting the urgent need for sustainable AI practices (Nature). Additionally, the electronic waste (e-waste) produced by AI technology, including the disposal of power-hungry GPUs and other hardware, poses serious environmental challenges. E-waste contains hazardous chemicals like lead, mercury, and cadmium, which can contaminate soil and water supplies (Earth.Org).

Mitigating Environmental Impact

To reduce the environmental impact of AI, several strategies can be implemented. These include optimizing AI algorithms to be more energy-efficient, using renewable energy sources to power data centers, and promoting the recycling and reuse of electronic components. For example, implementing power-capping techniques during the training and inference phases of AI models can reduce energy consumption by about 12% to 15%, with minimal impact on task performance​ (LL MIT)​.
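As a hedged illustration of the power-capping technique, the sketch below uses NVIDIA's nvidia-smi command-line tool to lower a GPU's power limit before a training run and to verify the new cap. The 250 W value and GPU index 0 are hypothetical, appropriate caps are hardware-dependent, setting them generally requires administrator privileges, and this is a generic sketch rather than the specific tooling from the MIT Lincoln Laboratory study.

```python
import subprocess

def set_gpu_power_cap(gpu_index: int, watts: int) -> None:
    """Cap a GPU's power draw via nvidia-smi (typically requires root).

    Power capping trades a small amount of training speed for a meaningful
    reduction in energy consumption and heat.
    """
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)],
        check=True,
    )

def read_power_status(gpu_index: int) -> str:
    """Return current draw and limit, to confirm the cap took effect."""
    result = subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=power.draw,power.limit", "--format=csv,noheader"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    set_gpu_power_cap(gpu_index=0, watts=250)  # hypothetical cap; tune per hardware
    print(read_power_status(gpu_index=0))      # e.g. "231.40 W, 250.00 W"
```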

Figure: CO2-emissions comparison from “The AI Index 2023 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA.

Although there have been some efficiency improvements recently, the chart clearly shows the substantial environmental impact of AI model training, which surpasses the CO2 emissions of a passenger's air travel, a year of an average human life, a year of an average American life, and even the lifetime emissions of a U.S. car, including its fuel.

Addressing the environmental footprint of AI is critical for ensuring the sustainability of technological advancements. How long will it take us to reach a “20-watt challenge,” the approximate power consumption of the human brain (which, incidentally, leaks a lot of power because it idles in a ready-to-execute mode; see Why-does-the-brain-use-so-much-energy)? Tell you what: I will spot you 100 watts.

6. Privacy Violations

The extensive tracking and data mining practices of AI systems pose significant threats to individual privacy and autonomy. These practices often occur without explicit user consent, leading to widespread privacy violations. Despite billions of dollars in fines, Big Tech companies continue to exploit personal information for profit, manipulating data and selling it to the highest bidder. Here are some high-profile cases illustrating the severity of these violations and the regulatory efforts to address them.

Prosecution of Privacy Violations

Clearview AI: Clearview AI has faced multiple lawsuits for scraping over 3 billion photos from social media and other internet platforms without users’ consent to create a facial recognition database. This database has been sold to law enforcement agencies and private entities, leading to significant privacy concerns. Clearview was fined €20 million by the French Data Protection Authority and ordered to delete the data of individuals residing in France​ (European Data Protection Board)​​ (Data Privacy Manager)​.

Meta (Facebook and Instagram): Meta has faced numerous fines for GDPR violations, including a €405 million fine for mishandling teenagers’ data on Instagram. The Irish Data Protection Commission (DPC) found that Meta failed to protect children’s data, automatically displaying contact information and not conducting a Data Protection Impact Assessment (DPIA) where necessary. Additionally, Meta was fined €390 million for unlawful data processing and forcing users to consent to personalized advertising​ (Data Privacy Manager)​​ (Search Engine Journal)​.

Amazon: Amazon received a record €746 million fine from the Luxembourg National Commission for Data Protection (CNPD) for violating GDPR principles, including inadequate data processing practices and failing to protect users’ personal data. This case highlights the severe repercussions companies face for non-compliance with data protection regulations (Data Privacy Manager).

Google: Google was fined a record €4.34 billion ($5 billion) by the European Union for abusing its dominance through its Android mobile operating system. The EU found that Google required manufacturers to pre-install Google Search and Chrome as a condition for licensing its app store, made payments to manufacturers and mobile network operators to exclusively pre-install Google Search, and prevented manufacturers from selling devices running on alternative versions of Android. These practices were deemed to restrict competition and consumer choice, resulting in the largest fine ever imposed by the EU for antitrust violations​ (Fox Business)​​ (TNW | The heart of tech)​.

TikTok: TikTok was fined €345 million by the Irish DPC for GDPR violations related to handling children’s data. The platform failed to implement effective age verification processes and collected personal data from underage users without valid parental consent, exposing children to significant privacy risks (secureprivacy.ai).

Threat of Future Violations

The future threat of privacy violations by AI systems remains significant. As AI technology advances, the ability to collect, analyze, and exploit vast amounts of personal data will only increase. This poses several potential risks:

  • Spear-Phishing: AI systems trained on personal data can be used to create sophisticated spear-phishing attacks, targeting individuals with highly personalized and deceptive messages​ (Haynes Boone)​.
  • Surveillance: AI-driven surveillance technologies, such as facial recognition and behavior tracking, could lead to pervasive monitoring of individuals without their consent, eroding personal freedoms and privacy​ (American Civil Liberties Union)​.
  • Data Exploitation: Companies may use AI to aggregate and analyze data from multiple sources, potentially revealing sensitive information about individuals’ behaviors, preferences, and identities without their knowledge or approval​ (American Civil Liberties Union)​.

Privacy is a fundamental right of our democracy, and it has been abused for too long.

7. Algorithmic Bias

Algorithmic bias in AI and Big Tech software poses significant challenges, leading to unfair and discriminatory outcomes. These biases can be unintentional, stemming from the data used to train the algorithms, or intentional, based on the design and implementation choices made by developers. Here are some real-world examples and insights into the causes and impacts of algorithmic bias.

Causes of Algorithmic Bias

Data Bias: Algorithms are only as good as the data they are trained on. If the training data contains historical biases or reflects societal prejudices, the AI system can inadvertently perpetuate these biases. For example, an AI system trained on resumes predominantly submitted by men may develop a preference for male candidates, as seen in Amazon’s hiring algorithm, which favored resumes containing words more commonly associated with male applicants​ (IBM — United States)​​ (Learn R, Python & Data Science Online)​.

Implicit Bias: AI systems can pick up on subtle, implicit biases present in the data. For instance, facial recognition algorithms have been found to have higher error rates for people with darker skin tones. This issue was highlighted by Joy Buolamwini’s research, where facial analysis software failed to accurately detect her face due to the lack of diverse training data​ (IBM — United States)​​ (Learn R, Python & Data Science Online)​.

Sampling Bias: This occurs when the data used to train the algorithm does not represent the entire population accurately. An example of this is LinkedIn’s job-matching AI, which was found to recommend senior positions more often to men than to women, despite their qualifications. This happened because the algorithm interpreted behavioral patterns and engagement metrics that varied between genders, leading to biased recommendations​ (MIT Technology Review)​.
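A common way to surface this kind of bias is to compare selection rates across groups, for instance with the “four-fifths rule” used in U.S. employment contexts, under which a ratio of selection rates below 0.8 flags potential adverse impact. The Python sketch below applies that rule to invented screening numbers, purely for illustration:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who were selected."""
    return selected / applicants

def disparate_impact_ratio(group_a: tuple[int, int], group_b: tuple[int, int]) -> float:
    """Ratio of the lower group selection rate to the higher one.

    Under the common "four-fifths rule", a ratio below 0.8 is treated as
    evidence of adverse impact that warrants investigation.
    """
    low, high = sorted([selection_rate(*group_a), selection_rate(*group_b)])
    return low / high

# Hypothetical outcomes from an automated resume screen: (selected, applicants)
men = (90, 300)    # 30% selected
women = (45, 300)  # 15% selected

print(f"Disparate impact ratio: {disparate_impact_ratio(men, women):.2f}")
# -> 0.50, well below the 0.8 threshold: a signal to audit the model and its training data
```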

Impacts of Algorithmic Bias

Healthcare: AI systems used in healthcare have been found to exhibit racial biases. For instance, predictive algorithms used to guide healthcare decisions often return lower accuracy results for minority groups, which can lead to unequal treatment and outcomes​ (IBM — United States)​.

Criminal Justice: AI-powered predictive policing tools often rely on historical arrest data, which can reinforce existing patterns of racial profiling and disproportionate targeting of minority communities. This can lead to biased law enforcement practices and unjust treatment of individuals based on race​ (IBM — United States)​.

Online Advertising: Biases in search engine ad algorithms can reinforce job role gender biases. Research has shown that Google’s advertising system displayed high-paying job ads to males more often than to females, even when both genders had similar qualifications​ (IBM — United States)​.

Algorithmic bias in AI and Big Tech software remains a significant challenge, with far-reaching impacts across various sectors. These biases, whether stemming from data, design, or sampling issues, result in discriminatory practices that disproportionately affect minority and underrepresented groups. The persistent issue of algorithmic bias underscores the need for rigorous oversight and accountability in AI development to ensure fair and equitable outcomes.

8. Concentration of Power in Big Tech and AI

The concentration of power in Big Tech companies, especially within the AI industry, is a significant concern. This dominance impacts innovation, competition, and consumer choice, as a few large firms control vast resources and critical technologies. Here are some detailed insights into this issue.

Evidence of Concentration

  • Generative AI: The MIT Technology Review discusses how the rise of generative AI could further entrench Big Tech’s dominance. Companies like Google and Microsoft already hold significant influence over the AI ecosystem through their control of foundational technologies and infrastructure. This concentration could stifle competition and limit the diversity of AI development (MIT Technology Review).
  • Market Dominance: The UK Competition and Markets Authority (CMA) has expressed concerns over the dominance of a few firms in the AI market. The CMA notes that Big Tech firms control critical inputs such as cloud infrastructure and data processing units, which are essential for developing AI models. This control enables them to set terms that can limit competition and innovation​ (ITPro)​.
  • Antitrust Cases: High-profile antitrust cases, such as those against Google and Amazon, highlight how these companies use their market power to stifle competition. The DOJ’s case against Google for monopolizing digital advertising and the European Commission’s investigation into Amazon’s use of marketplace data to favor its own products are prime examples of how Big Tech exploits its position to dominate markets​ (AI Now Institute)​.
  • Influence Over Ecosystems: Big Tech’s control extends beyond digital markets: these companies leverage their platforms to enter and disrupt other sectors, such as finance and healthcare. This encroachment on various industries underscores the expansive reach of their power (Tech Monitor).
  • Regulatory Challenges: Despite efforts to regulate Big Tech, such as those by the FTC under Lina Khan’s leadership, these companies often maneuver to protect their market positions. For instance, Microsoft’s partnerships and investments in AI startups ensure that they remain influential in AI development, potentially at the expense of smaller competitors​ (SIEPR)​.

Impact on Small Businesses

The dominance of Big Tech poses significant challenges for small businesses. These companies often rely on the platforms, tools, and services provided by Big Tech, which can lead to dependency and vulnerability. For instance, small businesses may face higher costs and limited access to essential digital infrastructure, such as cloud services and online marketplaces, due to the monopolistic practices of tech giants. This can stifle innovation and make it difficult for small businesses to compete and thrive​ (AI Now Institute)​​ (SIEPR)​.

Impact on Cities

The concentration of power in Big Tech affects urban development and the digital landscape of cities. Smart city initiatives often rely on technologies provided by major tech companies, which can lead to a lack of local control and innovation. Cities may become dependent on these companies for critical infrastructure, such as data analytics, surveillance, and connectivity solutions. This dependence can limit the ability of cities to implement tailored solutions that address local needs and priorities​ (Tech Monitor)​.

Impact on Regenerative Development and Circular Economies

Regenerative development and circular economies emphasize sustainability, resource efficiency, and local resilience. The monopolistic practices of Big Tech can hinder these efforts by centralizing control and limiting access to technologies that support circular economy initiatives. Small and medium enterprises (SMEs), which are often at the forefront of regenerative practices, may struggle to compete against the vast resources and market influence of Big Tech. This can reduce the diversity and innovation necessary for sustainable development and impede the transition to circular economies​ (ITPro)​​ (SIEPR)​.

The concentration of power in Big Tech and AI poses a significant threat to fair competition, innovation, and sustainable development. Companies like Google, Microsoft, Amazon, and Facebook leverage their vast resources and control over critical technologies to maintain and expand their dominance. This not only stifles smaller competitors and limits consumer choice but also affects urban development and the implementation of regenerative and circular economy initiatives. Addressing this concentration requires robust regulatory measures and a commitment to fostering a more diverse and competitive ecosystem.

Conclusion

The issues highlighted above illustrate the profound and far-reaching impacts of AI and Big Tech on society. The problem goes beyond the companies themselves; it encompasses how the technology has been developed, marketed, and designed to create dependency and addiction for the sake of technology itself. The result is the range of societal harms outlined here: manipulation, misinformation, mental health crises, harm to children, environmental damage, privacy violations, algorithmic bias, and the concentration of power. Together, they threaten the ethical and equitable development of technology.

Solution: The Big Reset — Computer Science 2.0

Computer science is going to be replaced: we are working on new platforms that combine all the needs of a modern society into one self-aware system based on virtues and purpose. Computer Science 1.0 is a virus in our world; it is unnatural, with needless layers of complexity and duplication, built on no foundation of life. It is the antithesis of regenerative frameworks and holism, and a prime example of reductionist thinking. Our attempts to socialize and globalize access and control have produced overly complex, fake, and manipulative systems built to drive revenue, with helping people as a side effect. Computer Science 1.0 was never designed for planetary systems and lacks a foundation for systems thinking, where efficiency and potential outweigh fragmented problem-solving.

Computer Science 2.0 will be rooted in systems thinking and a planetary ontology, where all living systems have a voice. It will eliminate the need for invasive Big Tech algorithms and instead focus on a harmonious global set of nested, independent systems. This model supports all living systems, integrates privacy-first principles, and provides clean, vetted, and accurate information. Modeled after nature, it will cost significantly less: perhaps as much as 90% of the existing technology stack can be eliminated, because it leverages a self-writing language based on an initial Intelligence DNA that allows for organic learning grounded in virtues.

In this future, everyone can have a Personal Assistant Liaison (PAL) integrated into the city’s fabric; we built one even before LLMs arrived. New education modules will allow all learners to succeed using whatever techniques they are best equipped to use. This is not problem-solving or reductionist thinking all over again; it is learning through exploration in a safe environment, fostering human relationships and thwarting the mental health damage caused by AI-driven social media. Cities will use trust-based protocols to defend against digital threats, adapting their models and priorities like aspens in an aspen grove: all interconnected, yet independent in their specific needs.

If you relate to this story, I would greatly appreciate you clicking the 👏 button. You can hold it down for up to 50 claps for more exposure.

Join us in a future that is not far away, where we live in harmony with the planet and shift power to communities for a regenerative life. Contact us.

About Regenerative Development Corporation (RDC): Regenerative Development Corporation specializes in pioneering sustainable, regenerative urban and community development practices. Integrating advanced technology, including the Future Cities Platform, and emphasizing carbon-neutral building materials, RDC commits to creating resilient ecosystems and vibrant communities. Our work extends beyond traditional development, focusing on education and empowering stakeholders to engage in regenerative practices that ensure economic vitality, environmental sustainability, and social well-being. Committed to innovation and collaboration, RDC is setting new standards for a sustainable future. For more insights into our transformative projects, visit rdc.re, see Introduction to RDC, or contact us.

References of “The Impact of AI and Big Tech on Society”

The Impact of AI and Big Tech on Society (1.6 ed.). (2024). [PDF file]. Retrieved from [URL].

1. AI-Driven Manipulation

o Bruegel. (n.d.). The dark side of artificial intelligence: Manipulation of human behaviour. Retrieved from https://www.bruegel.org/blog-post/dark-side-artificial-intelligence-manipulation-human-behaviour

o Carroll, M., Chan, A., Ashton, H., & Krueger, D. (2023). Characterizing Manipulation from AI Systems. In Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’23), October 30-November 1, 2023, Boston, MA, USA. https://doi.org/10.1145/3617694.3623226

o Psychology Today. (2023, April). How AI Can Be Used to Manipulate People. Retrieved from https://www.psychologytoday.com/us/blog/freedom-of-mind/202304/how-ai-can-be-used-to-manipulate-people

2. Misinformation Spread

o MIT Technology Review. (2023, October 4). How generative AI is boosting the spread of disinformation and propaganda. Retrieved from https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/

o World Economic Forum. (2024, January). The big election year: how to stop AI undermining the vote in 2024. Retrieved from https://www.weforum.org/agenda/2024/01/ai-democracy-election-year-2024-disinformation-misinformation/

o Kreps, S., & Kriner, D. (2023). How AI Threatens Democracy. Journal of Democracy, 34(4), 122–131. Retrieved from https://www.journalofdemocracy.org/articles/how-ai-threatens-democracy/

o Cambridge Core. (n.d.). The role of artificial intelligence in disinformation. Retrieved from https://www.cambridge.org/core/journals/data-and-policy/article/role-of-artificial-intelligence-in-disinformation/7C4BF6CA35184F149143DE968FC4C3B6

3. Mental Health Crisis

o MIT Sloan. (n.d.). Social media use linked to decline in mental health. Retrieved from https://mitsloan.mit.edu/ideas-made-to-matter/study-social-media-use-linked-to-decline-mental-health

o Stanford HAI. (n.d.). A Psychiatrist’s Perspective on Social Media Algorithms and Mental Health. Retrieved from https://hai.stanford.edu/news/psychiatrists-perspective-social-media-algorithms-and-mental-health

o McKinsey & Company. (n.d.). Social media and mental health: The impact on Gen Z. Retrieved from https://www.mckinsey.com/mhi/our-insights/gen-z-mental-health-the-impact-of-tech-and-social-media

o BioMed Central. (n.d.). Social media use and mental health during the COVID-19 pandemic in young adults. Retrieved from https://bmcpublichealth.biomedcentral.com/articles/10.1186/s12889-022-13409-0

4. Harming Children

o Center for BrainHealth. (n.d.). How social media can impact your child’s brain. Retrieved from https://centerforbrainhealth.org/article/how-social-media-can-impact-your-childs-brain

o Child Mind Institute. (n.d.). How Using Social Media Affects Teenagers. Retrieved from https://childmind.org/article/how-using-social-media-affects-teenagers/

o American Psychological Association. (2023, September). Protecting teens on social media. Retrieved from https://www.apa.org/monitor/2023/09/protecting-teens-on-social-media

5. Environmental Damage

o LL MIT. (n.d.). AI models are devouring energy: Tools to reduce consumption are here if data centers will adopt them. Retrieved from https://www.ll.mit.edu/news/ai-models-are-devouring-energy-tools-reduce-consumption-are-here-if-data-centers-will-adopt

o World Economic Forum. (2024, April). How to manage AI’s energy demand today, tomorrow, and in the future. Retrieved from https://www.weforum.org/agenda/2024/04/how-to-manage-ais-energy-demand-today-tomorrow-and-in-the-future/

o Nature. (n.d.). AI’s power use. Retrieved from https://www.nature.com/articles/d41586-019-02004-6

o Earth.Org. (n.d.). The environmental impact of artificial intelligence. Retrieved from https://earth.org/the-environmental-impact-of-artificial-intelligence/

6. Privacy Violations

o European Data Protection Board. (2022). French SA fines Clearview AI EUR 20 million. Retrieved from https://www.edpb.europa.eu/news/national-news/2022/french-sa-fines-clearview-ai-eur-20-million_en

o Data Privacy Manager. (n.d.). 5 biggest GDPR fines so far 2020. Retrieved from https://dataprivacymanager.net/5-biggest-gdpr-fines-so-far-2020/

o Search Engine Journal. (n.d.). Meta fined €414M for EU privacy law violations. Retrieved from https://www.searchenginejournal.com/meta-fined-414m-for-eu-privacy-law-violations/475639/

7. Algorithmic Bias

o IBM — United States. (n.d.). Shedding light on AI bias with real-world examples. Retrieved from https://www.ibm.com/blog/shedding-light-on-ai-bias-with-real-world-examples/

o Learn R, Python & Data Science Online. (n.d.). What is Algorithmic Bias? Retrieved from https://www.datacamp.com/blog/what-is-algorithmic-bias

o MIT Technology Review. (2021, June 23). LinkedIn AI bias. Retrieved from https://www.technologyreview.com/2021/06/23/1026825/linkedin-ai-bias-ziprecruiter-monster-artificial-intelligence/

8. Concentration of Power in Big Tech and AI

o MIT Technology Review. (2023, April 18). Generative AI risks concentrating Big Tech’s power: Here’s how to stop it. Retrieved from https://www.technologyreview.com/2023/04/18/1071727/generative-ai-risks-concentrating-big-techs-power-heres-how-to-stop-it/

o ITPro. (n.d.). UK competition watchdog says it has very real concerns over a Big Tech concentration of power in the AI market. Retrieved from https://www.itpro.com/business/policy-and-legislation/uk-competition-watchdog-says-it-has-very-real-concerns-over-a-big-tech-concentration-of-power-in-the-ai-market

o SIEPR. (n.d.). FTC’s Lina Khan warns Big Tech over AI. Retrieved from https://siepr.stanford.edu/news/ftcs-lina-khan-warns-big-tech-over-ai

--

Dave Ladouceur
Regenerative Development Series — Humanity 2.0

AI and Regenerative Visionary | The Big Reset | AI for Good | Regenerative Development