Why We Need More Social Entrepreneurs Shaping AI

Published in Ashoka Changemakers · Feb 21, 2024

by Hanae Baruchel & Smita Gupta

Artificial intelligence apps like ChatGPT, Midjourney, and now Sora have taken the world by storm over the past year. Known as generative AI and moving at breakneck speed, the technology powering these apps opens up new business opportunities and raises key ethical questions. Suddenly, the potential benefits and harms of AI have become everyone's business: the topic is no longer relegated to technologists and policymakers alone.

A spreading doom-and-gloom narrative about the existential threats of AI, whether real or exaggerated, is leaving too many people feeling powerless to shape these emerging technologies. How do we reverse this trend, regain our agency, and respond to the new challenges and opportunities AI poses as things move ever faster?

Flying under the radar is a growing group of social entrepreneurs leveraging AI for social impact. With ethics at the center of their work, they constitute both an early warning system for the unintended consequences of technology and an innovation engine, constantly building new ways to address age-old and emerging challenges alike.

Now is the time to pay attention to these bright spots, the lessons they are teaching us about building tech that works for humanity, and the role we can all play.

Designing with everyone in mind & the role of language access

Whose voices are we missing? This is typically the first question on a social entrepreneur's mind, and one that is seldom raised by technologists. India's OpenNyAI is a case in point. Its community of lawyers, technologists, designers, entrepreneurs, and artists develops AI public goods to transform the experience of justice in India (Nyai means justice in Hindi).

In a country of 1.4 billion people where less than two percent of the population has access to legal aid because of cost or language barriers, AI has the potential to dramatically reduce transaction costs and level the playing field by helping people understand critical information, such as the rights they hold.

To this end, the OpenNyAI team released its Jugalbandi software stack in 2023. It combines the power of ChatGPT with voice-to-voice Indian language translation to give every Indian access to information on 200 government welfare schemes, in 10 Indian languages, with no literacy requirements.
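To make the design concrete, here is a minimal sketch of what such a voice-to-voice pipeline could look like: speech recognition, translation into English, an LLM query, then translation and speech synthesis on the way back. Every function name below is a hypothetical placeholder, not OpenNyAI's actual code.

```python
from typing import Callable

# Illustrative sketch of a voice-to-voice welfare-assistance pipeline,
# in the spirit of Jugalbandi. Every stage is injected as a callable;
# all names here are hypothetical placeholders, not OpenNyAI code.

def make_welfare_assistant(
    speech_to_text: Callable[[bytes, str], str],   # (audio, language) -> text
    translate: Callable[[str, str, str], str],     # (text, src, tgt) -> text
    ask_llm: Callable[[str], str],                 # question -> plain answer
    text_to_speech: Callable[[str, str], bytes],   # (text, language) -> audio
) -> Callable[[bytes, str], bytes]:
    """Build a function that answers spoken questions with spoken answers."""

    def answer(audio_question: bytes, user_language: str) -> bytes:
        # 1. Transcribe the question in the user's own language.
        question = speech_to_text(audio_question, user_language)
        # 2. Translate it into English for the underlying model.
        question_en = translate(question, user_language, "en")
        # 3. Ask the LLM for a jargon-free answer about welfare schemes.
        answer_en = ask_llm(question_en)
        # 4. Translate back and synthesize speech: no literacy required.
        answer_text = translate(answer_en, "en", user_language)
        return text_to_speech(answer_text, user_language)

    return answer
```

Any real deployment would plug in production speech, translation, and LLM services; the point is simply that the model sits inside a language-access pipeline rather than facing users directly.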

For the first time, a farmer in rural India can have a conversation about the support they are entitled to under Indian law, free of jargon and delivered by voice in their own language.

Halfway across the world in the United States, Melissa Malzkuhn's Motion Light Lab is zeroing in on a similar insight, with a focus on deaf language and culture. Her team is pioneering fluently moving sign-language avatars, leveraging AI and motion-capture technology to increase access for deaf communities. Imagine a world where signing avatars can be embedded in everything from your favorite TV show to your Zoom meeting.

From winner-takes-all to fair ownership rights

Questions of copyright infringement and worker rights have also been front and center in the debate about AI's potential harms. In summer 2023, The Atlantic revealed the existence of "Books3," a dataset of 183,000 pirated books used to train large language models. Many authors and publishers are now suing for copyright infringement. OpenAI came under similar scrutiny in early 2023, when Time magazine revealed the poor working conditions of data labelers in Kenya.

Social entrepreneurs show us there is a better way. Karya is a start-up building ethically sourced datasets in Indian languages that can be used to train large language models (LLMs) and other AI systems. Though these languages may have millions of speakers, too little digital content exists online for internet scraping, the standard method for building LLMs, to serve as a primary data source.

Karya employs a cadre of workers from India’s poorest communities with a focus on marginalized castes, genders, and religions to build and label new data sets.

In an industry riddled with poor worker conditions and exploitation, Karya set out to do things differently.

Not only does Karya pay its workers some of the highest wages in the global data industry, it is also committed to paying them royalties on data resales (where licensing conditions allow). Today, over 30,000 workers have completed 30 million paid digital tasks through Karya.

Other innovators like Regi Wahyu of Hara, who works to bring more than 30,000 smallholder farmers in Indonesia out of poverty, and Sharon Terry of Genetic Alliance, which administers the world’s largest bio-data bank for rare genetic diseases, have set up similar data cooperative models to share benefits and ownership rights more equally.

With the rise of AI automation, and the risk of job losses becoming apparent in every industry, we need more alternative models like these that assign ownership rights fairly. They can serve as blueprints when drafting new standards, policies, and laws, a need highlighted in Article 10 of the EU AI Act and in the EU Data Governance Act.

Complementing AI with human judgment

In spite of its tendency to “hallucinate,” generative AI’s new capabilities heighten the temptation to replace humans with technology. Instead, we must double down on the importance of complementing technology with human judgment.

This is a lesson social entrepreneur Hera Hussain learned through a carefully designed process of trial and error. She founded Chayn, the first fully digital organization addressing gender-based violence globally. The organization creates trauma-informed online resources, used by nearly half a million survivors of gender-based violence, in 14 languages. It also provides micro-courses on topics such as collecting evidence of domestic abuse, self-paced mental health courses, and one-on-one peer support.

In 2017, the Chayn team developed its own chatbot so users could query Chayn’s resources more quickly and intuitively, in a bite-sized format. They consulted survivors at every stage and made design decisions to ensure people understood they were speaking with a bot and not a human.

Nonetheless, after three years of user testing, they made the decision to take down the chatbot because no matter how much they tweaked the design, people were using the bot for crisis response — something the technology was ill-equipped to do.

They have since replaced the chatbot with one-on-one chats staffed by trained humans, even though this is more expensive: the stakes were simply too high to get it wrong. The experience highlights two key lessons: keep humans in the loop for high-risk use cases, and let AI serve a suggestive or advisory function rather than take decisive action.
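As a purely illustrative sketch of that routing rule (hypothetical names and keywords throughout, not Chayn's actual system), the pattern can be as simple as a triage layer in which the AI is only ever advisory and anything that looks like a crisis goes straight to a person:

```python
from typing import Callable

# Illustrative human-in-the-loop triage sketch (hypothetical; not
# Chayn's actual system). The AI may only *suggest* resources; any
# message that looks like a crisis is handed to a trained human.

CRISIS_KEYWORDS = {"emergency", "unsafe", "hurt myself", "in danger"}

def looks_like_crisis(message: str) -> bool:
    lowered = message.lower()
    return any(keyword in lowered for keyword in CRISIS_KEYWORDS)

def route_message(
    message: str,
    suggest_resources: Callable[[str], str],   # AI: advisory only
    escalate_to_human: Callable[[str], str],   # trained responder
) -> str:
    if looks_like_crisis(message):
        # High stakes: the AI must not act on its own here.
        return escalate_to_human(message)
    # Low stakes: the AI suggests; decisions stay with the user.
    return suggest_resources(message)
```

In practice, crisis detection would need to be far more careful than a keyword list; what matters is the structure, in which the model never owns the high-stakes path.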

Building new safeguards and plugging the policy implementation gap

The recent EU AI Act and the barrage of announcements, declarations, voluntary commitments, and executive orders to regulate AI and manage its risks signal a welcome acknowledgement: we need to build new safeguards for the 21st century.

Often missing from the conversation, however, are the voices of social entrepreneurs. Yet, thanks to their proximity to the communities they partner with, they can serve as an early warning system for the unintended consequences of technology.

For example, Anna-Lena von Hodenberg, founder of HateAid, works with victims of online abuse. Her organization has won landmark strategic litigation cases against social media companies for their subpar content moderation practices and has guided thousands of victims to mental health and legal support.

With one in two young women becoming targets of digital violence on social media, HateAid recognized that the EU's Digital Services Act (DSA) was falling short on protecting women online. Their campaign put image-based sexual violence and doxxing on the agenda, leading to more transparency and better complaint mechanisms for victims of digital violence in the DSA.

In addition to new policies, we need new tools and practices that help us measure and mitigate harms.

For example, what if we made platforms pay for the harms they produce? What if we imagined a tax on polarization, akin to a carbon tax?

Helena Puig, founder of Build Up, is championing this idea. Her organization leverages technology to transform conflict in the digital age. They now aim to create a reliable, AI-assisted way to measure platforms' polarization footprint, in partnership with peacebuilders and academics.

With generative AI eroding our ability to differentiate truth from lies and human-made content from deepfakes, we need these kinds of measurement tools now more than ever. Even if policymakers manage to catch up with the pace of change, they will clearly need new instruments to guide them and to facilitate the implementation of safeguarding policies.

Social entrepreneurs’ role in shaping the future of AI

We should feel optimistic about the proliferation of social entrepreneurs adopting AI for impact. But the reality is that there are still too few of us actively engaging with these technologies. To enable responsible experimentation, social entrepreneurs need easier access to technical testing environments and infrastructure (e.g., subsidized access to GPUs and cloud servers).

We also need more players who recognize the importance of investing in social entrepreneurs’ learning curve so they can start building for humanity. How can we enable the world’s leading changemakers to adopt AI and shape its future for the better? This is the question animating our work.

For more on this topic, check out our recently released edition of the Social Innovations Journal that documents 10 Impact AI case studies from around the world. The journal provides perspectives from social entrepreneurs working on media, health, justice, gender-based violence and more.

Hanae Baruchel is a partner at Ashoka — the world’s largest network of social entrepreneurs. She leads the organization’s Tech for Humanity initiative — a global, cross-sector effort to maximize the impact potential of technology and minimize its social and environmental harms.

Smita Gupta co-leads the OpenNyAI mission and is part of Agami, a justice innovation field catalyst in India. She is a lawyer who has worked at the intersection of AI, justice, and law for the last two years, leading the development of natural language processing models, generative AI stacks, and a robust community of Justice AI innovators and entrepreneurs.

