Want Better AI? Give it a Better Purpose.

Eric Sapp
Public Democracy
Dec 20, 2019 · 12 min read

Three years ago, I began work on a project that would change my life and the trajectory of our company, Public Democracy. Our goal was to find injured veterans with severe PTSD online and then move them through an online engagement funnel. The amazing team I worked with saved lives, and we helped launch a ground-breaking legal case that will make it much harder for terrorists and dictators to access funding in the future. But to accomplish our mission, we had to jettison many of the traditional rules for big data and machine learning our online targeting and recruiting algorithms were built upon.

Looking back, I’ve come to believe that the reason our effort succeeded was that we taught our machines a form of empathy — not how to mimic human empathy, but how to begin to process data and set goals in an empathetic way. What’s fascinating is that once our algorithms learned these skills, they “taught” those same skills to the broader community of algorithms that govern the internet.

As an AI-driven third wave of computing crests on our global horizon, we need to take a much closer look both at the data environment in which our algorithms are learning and at the goals we set for them. Data privacy matters, but how we are training our AI probably matters more.

How AI learns and what goals it is learning toward have a huge impact on how algorithms function and what they are capable of achieving. Yet these two conditions get far too little attention from engineers and the public as we move toward ever more intelligent machines.

I’m going to focus this piece on the opportunity and new solutions that are possible when we get those two conditions right. But I’d be remiss if I failed to mention the dangers of continuing down our current path (or, even worse, of allowing the Chinese government to take us down theirs!).

A child raised by materialistic, angry, narcissistic parents will grow up with a skewed concept of relationships and human emotions. It’s no different with machine learning. And yet, consider the data environment (at least in the West) that our machines “are raised in,” and what they are designed to optimize toward by the giant tech monopolies that have shaped the internet to sell our attention and products:

  • attention metrics govern the internet, determining each user’s worth by how many ad impressions we can be served;
  • consumer behavior is used as a proxy for individual “interests,” confusing what we buy with what we value;
  • what is trending is prioritized over what people care most deeply about, replacing what matters with what is novel or shocking; and
  • social platforms generate mountains of social data based on algorithms built from “Hot or Not,” which were designed to prey on and encourage insecurity, judgement, and narcissism.

It should come as no surprise that when algorithms learn about humanity in such an environment, those algorithms contribute to an increasingly lonely and divided society where so many are left desperately seeking purpose and connection.

Clearly, we don’t want the AI that controls so much of the human experience raised in an environment best suited to create sociopaths. This is especially true as we enter an era where AI is itself developing a new generation of AI freed from the limits of initial human programmers, with intelligence and learning beyond our own ability to comprehend and shape.

In short, building Skynet would be bad…and what the Chinese are doing with social scoring may be even worse.

Others have written on the threats of bad AI, however, and we have already published a great post on how changing the type of data we collect can dramatically change that learning environment.

So in this piece, I’m going to tell a story of hope — of the amazing opportunity for humanity if we get AI right.

AI Can Reflect The Better Angels of Our Nature

Humans are hardwired from our early cave-dwelling days to pay attention to what we fear and what angers us. A system built to attract and sell attention will tend toward those darker angels of our nature. That is the problem.

But we are also hardwired from our earliest origins to commit to and connect with others around what we care most deeply about. That instinct is where the solution and potential for big data, AI, and our dawning Digital Age can be found. And our work with injured vets at Public Democracy gave us a glimpse of what is possible when AI learns in a data environment reflecting those better angels of our nature.

So let’s turn our attention to what is possible when we get the goals for AI learning right — and the brighter future we glimpsed when better data and compassionate goals shaped the environment our machines learned in.

Lessons from the Ebola Outbreak

Before our team at Public Democracy ever imagined building a data-driven network of trust and understanding with veterans, we saw both the potential of better data and the limits of current algorithms during an effort to mobilize Americans in response to the 2014 Ebola outbreak.

When the Ebola outbreak happened, Public Democracy had just completed our earliest psychometric values models. Those models were the foundation of what would later become the Values Data that we recently released through LiveRamp. They gave us an understanding of who in our database was most likely to engage out of a sense of justice, compassion, a desire to protect children, or patriotic responsibility.
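To make that concrete, here is a minimal sketch of the idea. Everything in it is hypothetical rather than drawn from our actual models: score each person on a few values dimensions and pick audiences by the appeal most likely to resonate.

```python
# Hypothetical sketch of a values-based audience model.
# Field names, scores, and the threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class ValuesProfile:
    justice: float           # modeled 0.0-1.0 affinity for each values dimension
    compassion: float
    child_protection: float
    patriotic_duty: float

def likely_to_engage(profile: ValuesProfile, appeal: str, threshold: float = 0.7) -> bool:
    """True if this person's modeled values suggest the given appeal will resonate."""
    return getattr(profile, appeal) >= threshold

audience = {
    "person_a": ValuesProfile(justice=0.9, compassion=0.4, child_protection=0.3, patriotic_duty=0.6),
    "person_b": ValuesProfile(justice=0.2, compassion=0.8, child_protection=0.9, patriotic_duty=0.5),
}

# Send the "protect children" framing only to the people modeled to care most about it.
recipients = [pid for pid, p in audience.items() if likely_to_engage(p, "child_protection")]
print(recipients)  # ['person_b']
```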

We teamed with Oxfam to test these models in support of a Google fundraising drive for Ebola response, sending emails encouraging people to watch a video about the Ebola crisis and then following up with an ask to give.

I’ll never forget the feeling of sitting with my team as the first big email went out, and watching the YouTube view count jump from 8 to 20 to 115 and beyond every few seconds as we refreshed. Just as we started high-fiving, though, the count froze as YouTube tried to make sense of what was happening.

If you’ve ever watched Google try to “improve accuracy” and validate views, it’s informative. Counts will jump wildly up and down every few hours as multiple Google algorithms correct each other and try to make sense of the data they are processing. With Ebola, Google gave up and ultimately just left the count frozen and “under review.”

I understand why Google needs to validate counts, and our pattern was a clear outlier. But our campaign ultimately generated half a million dollars in support from the 4.7 million Americans who engaged with our emails. We had found a way to connect and empower people through the values they held most dear, but — and this is very important — our pattern was such an outlier that it could not be reconciled with reality as the algorithms that govern the internet understood it. So they gave up.

How Injured Veterans Showed Us The Way

A year later — as Russian operatives were beginning to teach AI built by American companies that what Americans wanted was to fear each other and be fed content that divides us — Public Democracy began to teach Google’s AI a very different lesson about how to bring communities together around veterans with severe PTSD.

Our challenge with the veteran project was twofold. Our first task was to find the right vets online. Then we had to figure out how to equip those vets with the experience they needed to join a mass-tort lawsuit against the European banks that funded the terrorists responsible for their injuries. To join, the vets would have to provide their social security number, medical records, and (often for the first time in their lives) share the story of the attack that led to their injury. That would be a heavy lift for anyone, but most of our vets had severe PTSD, and intense paranoia is a major symptom of that condition.

We started our campaign using traditional targeting algorithms that are descendants of programs built to solve logic problems, win zero-sum games, and achieve measurable goals. From the algorithm’s perspective, it needed to find someone who qualified online, find the moments they were most likely to engage with our content, and then start moving them through a funnel of engagement toward signing our contract. The first 200+ targeting models we designed kept trying to optimize that process to lower the cost and deliver more contracts. Each new model tried to win — to get the vets to do what we wanted. They all failed.
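In rough terms, every one of those models was being judged against a single question: how cheaply can you produce a signed contract? The sketch below is illustrative only (the model names, spend figures, and contract counts are invented), but it captures the objective they were all chasing, and what that objective ignores.

```python
# Illustrative only: the kind of objective our first 200+ targeting models were scored against.
# A model "wins" by minimizing cost per signed contract; nothing else counts.

def cost_per_contract(ad_spend: float, contracts_signed: int) -> float:
    """Traditional conversion objective: dollars spent per completed sign-up."""
    if contracts_signed == 0:
        return float("inf")   # a model that converts nobody is worthless under this metric
    return ad_spend / contracts_signed

# Hypothetical results from three candidate targeting models.
candidates = {
    "model_017": {"spend": 12_000.0, "contracts": 3},
    "model_094": {"spend": 9_500.0,  "contracts": 0},
    "model_142": {"spend": 15_000.0, "contracts": 5},
}

best = min(candidates, key=lambda m: cost_per_contract(candidates[m]["spend"],
                                                       candidates[m]["contracts"]))
print(best)  # 'model_142' -- the "winner" under a metric that ignores trust entirely
```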

We did quickly succeed in bringing injured vets to the site (we can now find more veterans than the VA!), but they wouldn’t sign. In fact, if we got them to the contract, the vets never came back.

If AI is from Mars, most humans are probably from Venus. For those not familiar with the best-selling relationship book Men Are from Mars, Women Are from Venus, the premise is that Martians have one way of dealing with things and Venusians another. Martians (mostly men) want to fix. Venusians (mostly women) want to talk and connect. Neither path is better or worse. They are just different. Gender stereotypes aside, the book’s ultimate conclusion is important: the best way for a Martian to fix things is to understand the Venusian well enough to realize that what she needs isn’t a solution. The way to fix her problem is to ask questions and listen.

That’s what we needed to do with our algorithms. Rather than trying to get the vets to the solution we had for them, we forced our algorithms to start listening. This created vital new insights and delivered the communal experience humans often need to move forward.

We knew about the paranoia associated with PTSD. The fact that 80%+ of our website traffic clicked our privacy policy was an early validation of that challenge. But two other key observations influenced the decision that ultimately led our algorithms to the solution. People who went to the contract quickly (even if they then spent considerable time reading it) never returned to the site. Others, however, would spend hours clicking through all our resources and come back again and again.

So we decided to brute force our algorithms in a different direction. We realized the only chance we had at getting our vets to join was to stop trying to get them to join.

Reminiscent of Luke Skywalker switching off his targeting computer during the attack run on the Death Star, we stopped letting the machines control the process and instead guided them toward what we felt was necessary. To do that, we shut off the ability of our recruitment algorithms to direct vets to the sign-up page.

Think about that for a second. We were running a digital recruitment campaign where the people we were recruiting couldn’t sign up and our algorithms couldn’t measure sign-up conversions. And after 200 failures, this strategy is what finally worked.

Instead of optimizing for sign-ups, we set the algorithm to learn and test how many paths we could take vets through and where they preferred to spend their time. It became clear that the vets yearned for community and trusted only other vets who’d been in Iraq.
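Here is a hedged sketch of what that swap looked like in principle. It is not a literal description of our system; the path names, the reward shaping, and the epsilon-greedy approach are all illustrative assumptions. The point is that the learner is rewarded for how deeply a vet engages and whether they come back, while a sign-up contributes nothing to the reward at all.

```python
# Illustrative epsilon-greedy sketch: choose which content path to offer next,
# rewarding time spent and return visits. Sign-ups are deliberately absent from the reward.
import random

paths = ["vet_stories", "benefits_resources", "community_forum", "privacy_faq"]
stats = {p: {"pulls": 0, "reward": 0.0} for p in paths}

def engagement_reward(minutes_on_path: float, returned_later: bool) -> float:
    """Reward engagement depth (capped) plus a bonus for coming back."""
    return min(minutes_on_path / 30.0, 1.0) + (0.5 if returned_later else 0.0)

def choose_path(epsilon: float = 0.1) -> str:
    """Mostly offer the path vets have engaged with most deeply; sometimes explore."""
    if random.random() < epsilon or all(s["pulls"] == 0 for s in stats.values()):
        return random.choice(paths)
    return max(paths, key=lambda p: stats[p]["reward"] / max(stats[p]["pulls"], 1))

def record(path: str, minutes: float, returned: bool) -> None:
    stats[path]["pulls"] += 1
    stats[path]["reward"] += engagement_reward(minutes, returned)

# One simulated step of the feedback loop.
offered = choose_path()
record(offered, minutes=45.0, returned=True)
```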

But online community means social networks, which led to another problem. Facebook has built a big data fence around itself to keep a monopoly on ad revenue, yet it didn’t have strong enough data to find our vets. In fact, neither Facebook nor Google had enough data for our algorithms to learn how to both identify and build trust with qualifying clients.

This led us to bridge the Google/Facebook data divide: once people spent a certain amount of time on our page, we recruited them into a non-curated community on Facebook. Ultimately, the funnel ran from our website to the Facebook community, back to the website, then back to Facebook, where vets could speak with another vet who served as the gatekeeper before finally sending them to the website for their signature, which in turn handed them back to a human to complete the collection of information.

Once we finally found the path, something fascinating started to happen. As we built data on the vets who would join and the best moments to find them (e.g., someone searching for benefits using “Veterans Affairs” was 10x more likely to engage than someone — even the same person at a different moment — searching “VA”), our system began to cut steps from the process. After a few months, we dropped Facebook entirely from the funnel. In the end, even the human engagement component was not necessary for many of the vets to join.
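To give a feel for the kind of moment-level signal the system learned to weight, here is a tiny illustrative sketch. The multipliers are invented; only the rough 10x contrast echoes the example above, and the point is that readiness belongs to the moment, not the person.

```python
# Illustrative readiness heuristic, echoing the "Veterans Affairs" vs. "VA" signal above.
# Multipliers are made up for the sketch; the point is moment-level, not person-level, scoring.

READINESS_MULTIPLIERS = {
    "veterans affairs benefits": 10.0,  # deliberate, spelled-out search: far more likely to engage
    "va benefits": 1.0,                 # habitual shorthand: usually not the right moment
}

def moment_score(base_affinity: float, search_query: str) -> float:
    """Score a single moment of intent, not the person in general."""
    return base_affinity * READINESS_MULTIPLIERS.get(search_query.lower(), 1.0)

same_person = 0.05  # hypothetical baseline affinity
print(moment_score(same_person, "VA benefits"))                # 0.05
print(moment_score(same_person, "Veterans Affairs benefits"))  # 0.5 -- same person, different moment
```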

Ultimately, we had so many veterans and Gold Star families signing up that we needed to pause the system. But when we stopped the ads, the sign-ups didn’t stop. The internet kept sending these vets to us organically in their moments of need and when they were emotionally prepared to join. Our algorithms had taught the internet what these vets needed.

We spent four of the hardest months of my life trying and failing and trying again and again to find the path that would work for these vets. Neither Google nor Facebook had the data to accomplish that at first. But once we got our algorithms to stop trying to sell vets on our deal and instead help support their need for community and affirmation, hundreds and then thousands began to sign up. Through that different process and very different set of algorithmic goals, we ultimately created an empathy-based data lodestone that reshaped how the internet understood these individuals and the moments when they were ready for help and community.

Just as humans are hardwired to pay attention to what is scary and commit to what we value, we also gain understanding of our world through stories. So I’ll conclude with two stories that I hope will provide helpful lenses for understanding why our program succeeded and how that success can be replicated.

I referenced Star Wars already (and have confidence JJ won’t let us down with the finale today!), but I’m actually more of a Trekkie. And Lt. Commander Data from Star Trek epitomizes the concept I’ve been outlining for how we can achieve data-dependent empathy.

Commander Data is a machine, and his story arc in The Next Generation begins with his inability to understand emotion, the illogic of humans, and even simple idioms and other aspects of human communication.

Data wants to be human, so much so that he even tries an emotion chip the brilliant cyberneticist Noonien Soong created as a way to code emotion into him, but that wasn’t the solution. In fact, [spoiler alert], it was Data’s recognition of the dangers of being programmed to appear human, and his willingness to give up that goal, that helped him avoid the Skynet-esque mistakes his “brother” Lore made.

Over the course of the series, Data realized he’d never truly understand humanity or grasp the moral dimensions of empathy through better code alone. Instead he ultimately “learned” to relate to people by spending time with them, practicing petting his cat Spot, and doing the many other seemingly pointless things that fill our lives but ultimately add up to something of great worth and meaning. In polls of Star Trek fans, Data is often ranked the most human of all the characters in the series.

I’ll close with a second reference, to the wonderful movie Groundhog Day, in which Bill Murray’s character tries every possible permutation to get Rita to fall for him. After failing countless times, it is only when he gives up on that goal and instead focuses on what Rita and others in the community need that he finally understands love and, at last, achieves his goal.

Our contemporary digital environment enables a virtual Groundhog Day for understanding both the people around us and the best moments to engage them in the online spaces we all navigate. The problem is that, like Bill Murray’s initial goal with Rita, our systems are basically all trying to use that knowledge to seduce users.

But there is a better way. We’ve glimpsed it in our past Ebola outreach, our injured veterans work, and other data projects in support of the common good. And through new projects we’re now pursuing to support veterans, enhance healthcare, and improve economic development, Public Democracy is doing more to develop this work every single day. But we’re just one company. A lot more can be done…if we all work to change what the algorithms are doing.

No matter how much data an algorithm can crunch or how many permutations it tries, many of our biggest problems can be solved only by seeking to understand and serve others, rather than by trying to figure out how to get them to do what we want. That’s true for life, and it’s also true for data and AI.
