Nonprofits, not Silicon Valley startups, are creating AI apps for the greater good
By Shannon Farley, Co-Founder & Executive Director of Fast Forward
Predictions for the potential of artificial intelligence wax poetic, promising everything from solving climate change to curing disease, yet the everyday applications can make it seem far more mundane, like a glorified clock radio.
Thankfully, the future may be closer than we think. And the miraculous feats are not happening in Silicon Valley X-Labs — in a plot twist, nonprofits are leading the charge in creating human-centered applications of the hottest AI technologies. From the simplest automated communications to contextual learnings based on analysis of deep data, these technologies have the potential to rapidly scale and improve the lives of our most underserved communities.
Take chatbots, for example: a new spin on mobile messaging that has historically been human-powered. Organizations like TalkingPoints and mRelief have for years used simple mobile messaging to meet users where they are. Recently, tech nonprofits have been taking a new approach. Raheem.ai, a Facebook Messenger bot for reporting and rating experiences with police officers, walks users through reporting police incidents and provides follow-on support. The interactions are simple, but powerful. DoNotPay, “the world’s first robot lawyer,” started out as a bot to appeal parking tickets and now helps tenants fight negligent landlords, and even helps the homeless find and apply for social services. These chatbots eliminate the friction of traditional reporting and serve as legal empowerment in your pocket.
Crisis Text Line still relies on a human-to-human volunteer model, but the tech nonprofit has the largest open source database of youth crisis behavior in the country, and it has used AI to dramatically shorten response time for high-risk texters, from 120 seconds to 39. Crisis Text Line leveraged machine learning to discover that the word “ibuprofen” is 16 times more likely than the word “suicide” to predict the need for emergency aid. Now, using AI, messages containing the word “ibuprofen” are prioritized in the queue.
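Crisis Text Line has not published its ranking code, and its real model is surely far richer than keyword matching. Still, the core idea of weighting high-risk terms and serving the highest-scoring message first can be illustrated with a minimal priority-queue sketch; the weights below are invented, echoing only the reported finding about “ibuprofen”:

```python
import heapq

# Hypothetical severity weights -- illustrative only. The relative values
# echo the reported finding that "ibuprofen" is a far stronger predictor
# of emergency need than "suicide".
RISK_WEIGHTS = {"ibuprofen": 16.0, "suicide": 1.0}

def risk_score(message: str) -> float:
    """Sum the weights of any risk keywords present in the message."""
    words = message.lower().split()
    return sum(RISK_WEIGHTS.get(w, 0.0) for w in words)

class TriageQueue:
    """Serve the highest-risk texter first; break ties by arrival order."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker preserves first-come ordering

    def push(self, message: str) -> None:
        # heapq is a min-heap, so negate the score for max-first ordering.
        heapq.heappush(self._heap, (-risk_score(message), self._counter, message))
        self._counter += 1

    def pop(self) -> str:
        return heapq.heappop(self._heap)[2]

q = TriageQueue()
q.push("I need someone to talk to")
q.push("I took a bottle of ibuprofen")
q.push("I have been thinking about suicide")
print(q.pop())  # the ibuprofen message is served first
```

The point of the sketch is the reordering itself: once a model (or even a crude keyword table) assigns risk, a priority queue replaces first-come, first-served with highest-risk, first-served.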
Machine learning even allows you to select the energy source that powers your home appliances. WattTime creates software that enables smart hardware devices to prioritize clean energy with a simple flip of a switch. Its product uses machine learning to predict moments when the grid has a surplus of clean energy and signals smart devices, like thermostats, to draw power then. This means your A/C may turn on five minutes earlier or later than it typically would, because the algorithms time your appliances’ power draw to capitalize on excess clean energy from sources like wind turbines, minimizing the use of dirty power.
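The decision each device makes is simple once the grid signal exists: run flexible loads when the marginal generator is clean, defer when it is dirty. The sketch below is a minimal, hypothetical version of that switch; the `GridSignal` type, field name, and threshold are assumptions for illustration, not WattTime’s actual API:

```python
from dataclasses import dataclass

@dataclass
class GridSignal:
    """Hypothetical snapshot of grid conditions; WattTime's real data
    feed and field names will differ."""
    marginal_emissions: float  # lbs of CO2 per MWh for the next unit of demand

def should_run_now(signal: GridSignal, threshold: float = 900.0) -> bool:
    """Run flexible loads (A/C, water heater) when the marginal generator
    is cleaner than the threshold; otherwise defer a few minutes."""
    return signal.marginal_emissions < threshold

# A surge of wind power pushes marginal emissions down, so the device runs now;
# when a dirty peaker plant is on the margin, the device waits.
print(should_run_now(GridSignal(marginal_emissions=400.0)))   # True
print(should_run_now(GridSignal(marginal_emissions=1500.0)))  # False
```

The machine learning lives upstream of this check, in forecasting `marginal_emissions` a few minutes ahead; the device-side logic stays deliberately dumb and cheap.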
Quill, a free online tool that helps students measurably improve grammar and writing, discovered that natural-language processing was essential to remedy students’ struggles with sentence fragments. Using open source tools and online training programs, Quill’s technical team built its own fragment-detection algorithm powered by a combination of machine learning and natural-language processing. Quill’s methodology is exemplary for resource-constrained tech nonprofits. It leveraged Wikipedia to amass a dataset of 100,000 high-quality sentences, integrated the natural-language processing tool spaCy to break the sentences down, and incorporated TensorFlow for data classification.
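A key step in a pipeline like Quill’s is turning a corpus of clean sentences into labeled training data for a classifier. One plausible (and deliberately crude) way to do that is to synthesize fragments by deleting spans from well-formed sentences; the sketch below is an assumption about the approach, not Quill’s actual code, and a real pipeline would use spaCy parses to make linguistically informed deletions:

```python
import random

def make_fragment(sentence: str, rng: random.Random) -> str:
    """Synthesize a fragment by chopping words off the start or end of a
    well-formed sentence -- a crude stand-in for the parse-aware deletions
    a tool like spaCy would enable."""
    words = sentence.rstrip(".").split()
    cut = rng.randint(1, max(1, len(words) // 2))
    kept = words[cut:] if rng.random() < 0.5 else words[:-cut]
    return " ".join(kept).capitalize() + "."

def build_dataset(sentences, seed=0):
    """Label each clean sentence 1 and a synthesized fragment 0,
    yielding (text, label) pairs ready for a classifier."""
    rng = random.Random(seed)
    data = []
    for s in sentences:
        data.append((s, 1))                      # complete sentence
        data.append((make_fragment(s, rng), 0))  # synthesized fragment
    return data

corpus = [
    "The committee approved the budget.",
    "Researchers published their findings yesterday.",
]
for text, label in build_dataset(corpus):
    print(label, text)
```

With pairs like these in hand, the classification step (TensorFlow, in Quill’s case) becomes a standard supervised-learning problem over text features.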
The result? Quill’s fragment-detection algorithm accurately detected sentence fragments 84 percent of the time, and that figure will only continue to improve. Other tech nonprofits, like Dost Education, plan to use natural-language processing down the line to monitor their impact assessments with teachers and parents.
While many instances of AI pool internally sourced data, data mining allows organizations to execute deep research faster, or to scrape mass information on their target market and make product decisions based on behaviors and trends. The Pulitzer Prize-winning reporting on the Panama Papers illustrates the growing importance of data mining in investigative journalism. At 2.6 terabytes, the leak was the largest in the history of journalism, and data mining was essential for the team of journalists digging through it.
Transparency Toolkit, a Berlin-based tech nonprofit, launched its first tool, ICWatch, which implements data mining to scrape information from publicly available profiles and resumes to identify individuals involved in activities ranging from government surveillance to drone strikes. The organization runs several different tools and projects designed to democratize the big data playing field for human rights activists and journalists.
Does it matter that these companies are nonprofits?
Yes. As the cost of implementing AI drops, it will become ubiquitous across software. The AI use case is especially significant for nonprofits because their incentives align with collecting data and opening it up. Effective AI requires massive amounts of data. Profit motives can restrain a company’s incentive to open its data; nonprofits face no such constraint. Open data serves the broader purpose of public education and knowledge sharing. As tech nonprofits deploy these technologies and open source their findings, they can deepen the capacity of all AI applications.
However, corporations have a role to play, too. Businesses like Google and Accenture are leveraging their internal AI talent to build tools for positive impact. Google.org is working with Pratham Books’ StoryWeaver, a platform that connects readers, authors, illustrators and translators to massively expand the number of children’s e-books available in mother tongues. Through an integration with the AI-powered Google Translate API, StoryWeaver is expanding its library to 200,000 titles in 60 languages.
Accenture sees Responsible AI as both an opportunity and a responsibility for business, government and technology leaders to apply the technology in the right way, using human-centric design principles such as accountability, transparency and fairness. Accenture Labs in Bangalore, in collaboration with the National Association for the Blind, is developing a workforce accessibility solution called Drishti that uses Responsible AI to empower the visually impaired.
The tech-for-good use cases for AI are endless, ranging from refugee aid to bankruptcy filings to predictive solutions in child welfare. We are still in the early days of true AI implementation, but in the tech nonprofit sector, the future looks bright.
This article originally appeared in Recode.