Making our Marq: Social 2.0 (Part 2)

Marcel Tan
Published in SEP Berkeley
7 min read · Sep 20, 2020
Photo by Prateek Katyal on Unsplash

The first part of this two-part series, Making our Marq: A New Consumer Curve, explored the concept of jumping to the next curve by changing human behaviour. This post dives into how Sohil and I are tackling misinformation through Marq (SkyDeck HotDesk S20) with the new consumer curve in mind.

√Some good

Nobody really likes the idea of throwing money at a problem. It sounds foolish to resort to money as a solution when other avenues may exist, if not for practical reasons, then at least because money is supposedly the root of all evil.

There are, of course, special exceptions. The U.S. government had to throw an enormous amount of money at the economy in 2008 with the $431 billion TARP bailout¹ and now again in 2020 with the $2 trillion CARES stimulus package.² One could easily argue that such cases were justified by the threat of imminent collapse of the banking system and the SME ecosystem, respectively.

When it comes to softer and less catastrophic situations (like making humans behave better), however, throwing money at the problem sounds especially wrong. It immediately conjures up images of wealthy kingmakers “buying democracy” and of negligent NGOs disbursing aid money into the hands of corrupt bureaucrats.

So you just shouldn’t, that’s the “rule.”

With that said, making the leap to social media 2.0 — a new consumer curve — may involve breaking the “rule” for all the right reasons. Here’s our story on making money the root of some good.

Photo by Eden Rushing on Unsplash

The impetus

I had never written a line of code before coming to Berkeley. It didn’t even cross my mind that Computer Science was a major until college application season rolled around.

Instead, I was a history nerd all throughout primary and secondary education. In primary school, I was particularly fond of playing the historical real-time strategy game, Age of Empires. Then in middle school, I became enraptured by Europe’s transition from the medieval period to the Renaissance era (Da Vinci is still up there on my power rankings). By the time I got to high school, I was weirdly fascinated by World War II military strategy. I spent a chunk of time researching and writing about Operation Torch, the pivotal yet overlooked Anglo-American invasion of North Africa.

To this day, I still think of myself as a writer and social scientist by trade — far from the Bay Area techie archetype. I’m also quite an unlikely startup person, having seriously considered the path of a career diplomat.

One interest that underlay all my previous curiosities, though, was understanding and improving human behaviour, specifically with regard to solving collective action problems. Collective action problems are situations in which individuals fail to cooperate toward a mutually beneficial outcome because their individual interests conflict.

It turned out that software, with its strong network effects and wide offering of public goods, was inescapable when it came to collective action problems.

Photo by dole777 on Unsplash

I spent two years in college doing sales at The Daily Californian, Berkeley’s news organisation. Over time, I grew very frustrated by the volume of misinformed and inflammatory comments posted on our online articles. While I admit that Berkeley can be a hotbed of political clashes, this wasn’t a geographically or demographically contained occurrence. The same phenomenon seemed to be present across all popular social media.

Nonetheless, I found myself drawn to reading the comments on these social media articles and Twitter wars all the same. I would leave comments sections feeling mentally bloated and sluggish, as if I had just eaten a 10-piece McNuggets meal. In many ways, this was “fast food” for the modern consumer, one scroll and one “like” at a time. I quickly realised through multiple conversations that many of my peers felt the same way about their consumption of online discourse.

Consequently, I teamed up with my good friend and fellow SEP member, Sohil Kshirsagar. Sohil had previously worked on purplesource, a news aggregator that paired articles with different political leanings on the same topic, and the two of us had already collaborated on a project to make conversations about current events easier. Being on the same wavelength, we got to work immediately by talking to over 140 social media power users.

We discovered through our customer interviews that these users love reading online comments sections. In truth, the comments section can be an immensely valuable place. Some constructive commenters call out original posters for their misinformation and fallacies. Others add value by offering personal insight into a complex topic.

People gravitate to replies on social media because they are naturally interested in hearing what others think. However, we also found that the majority of people were dissatisfied with having to sift through a pile of misinformed and inflammatory answers just to find one quality answer. So we mused, why not construct online discourse such that the valuable parts of crowdsourced information are maintained while the gunk is kept out?

The experiment

We realised that people have little incentive to think critically before answering or cosigning answers online because “likes” and upvotes on traditional social media platforms are unlimited resources. As such, these online currencies have been devalued in the same way that legal tender is devalued by the printing of money.

In economist speak, online answers are non-rival and non-excludable goods, which makes them subject to the free-rider problem: everyone benefits from well-vetted answers, but no individual bears any cost for skipping the vetting.

Following that logic, our hypothesis for tackling misinformation was that online moderation systems required a scarce resource to incentivise people to do their due diligence before writing or cosigning answers. Naturally, we went with the most basic, universal scarce resource out there — money.

Marq was born.

Literally throwing money at the problem of misinformation might understandably seem heretical. But that is exactly what we set out to do: Sohil and I designed a Q&A moderation system that incentivised honesty and critical thinking while discouraging the growth of echo chambers.

To test our hypothesis, we ran an in-person experiment on the UC Berkeley campus. We gave 101 strangers some money and then asked them if they wanted to keep the money or invest it in answers to a simple question that we pre-wrote (“What is the best place to study in college? Library or home?”). The results were promising:

  1. 95% of participants chose to invest the money in answers
  2. Most participants said they invested in answers based on their honest opinion, not perceived popularity
  3. Participants performed their due diligence when choosing an answer to invest in (some spent ~2 minutes deciding)

This experiment showed us that a carrot-and-stick system could effectively nudge people into thoughtfully shopping for premium answers to questions.
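Marq’s exact payout rules are beyond the scope of this post, but to make the mechanics concrete, here is a minimal sketch in Python of one possible stake-and-settle scheme. The function name, the rule that backers of the winning answer split the losing pools in proportion to their stakes, and how the “winning” answer gets chosen are all illustrative assumptions on my part, not Marq’s actual implementation.

# Hypothetical stake-weighted settlement: backers put money behind answers;
# when the question closes, backers of the winning answer recover their stakes
# plus a proportional share of everything staked on the other answers.
def settle_question(stakes, winning_answer):
    """stakes: {answer_id: {user_id: amount}} -> {user_id: payout}."""
    winners = stakes[winning_answer]
    winning_pool = sum(winners.values())
    losing_pool = sum(
        amount
        for answer, backers in stakes.items()
        if answer != winning_answer
        for amount in backers.values()
    )
    payouts = {}
    for user, amount in winners.items():
        # A backer's reward scales with how much they risked on the winner.
        payouts[user] = amount + losing_pool * (amount / winning_pool)
    return payouts

# Toy example using the question from our campus experiment:
stakes = {
    "library": {"alice": 2.0, "bob": 1.0},
    "home": {"carol": 3.0},
}
print(settle_question(stakes, winning_answer="library"))
# {'alice': 4.0, 'bob': 2.0}

Whatever the exact rules, the point is the same: cosigning an answer is no longer free, so a careless “upvote” carries a real cost and a thoughtful one carries a real reward.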

Nonetheless, most people still don’t think that misinformation is soluble. The online forum crowd defaults to assuming that community size and the quality of online discourse are inversely correlated (as illustrated in the cases of Quora, Yahoo Answers, and Reddit). Their argument is that “it’s just human nature” and hence inevitable that growing Q&A communities will eventually post and cosign content that panders, slanders, or otherwise adds no value.

But if it is indeed “just human nature,” then we can redesign the incentive system such that people will interact with social media in ways that encourage constructive discourse. Just like how products can be redesigned, human behaviour can be re-engineered for the better.

Other companies have attempted to tackle misinformation through deep tech applications (e.g., spam filters, “stupid” filter,³ bot account identification). While their pursuit is noble, their products don’t necessarily make the Internet think harder as a whole. As a political scientist, I view this as a myopic approach — you cannot solve human problems created by tech with pure tech. Misinformation is a human incentive problem at its core.

Dr. Paul Farmer, Harvard medical anthropologist and cofounder of Partners In Health, captures this best in his celebrated Kennedy School speech on accompaniment as policy.

“Accompaniment does not privilege technical prowess above solidarity or compassion or a willingness to tackle what may seem to be insuperable challenges. It requires cooperation, openness, and teamwork of the sort so many of you cherish.” — Dr. Paul Farmer⁴

The insights we earned over the course of this journey of exploration and experimentation have made us deeply attuned to the urgency of tackling misinformation. Even more importantly, they have awakened us to the real possibility of solving an “insoluble” problem.

Some smarts, lots of empathy, and sheer force of will — all ingredients needed to supplant existing attention economies with one based on quality and trust.

Marq my words, we will see social 2.0 through in a world of humans and computers.

About Marcel

Marcel studies Political Science and Business Administration at UC Berkeley. He makes puns and has a 1210-day Duolingo streak. Together with Sohil, he cofounded Marq, a Q&A platform that rewards users with payouts for investing money in constructive answers.

Marq is in the SkyDeck HotDesk (S20) program. The beta version is now open to the public with limited spots.

References

[1] https://www.cbo.gov/publication/43662

[2] https://www.nytimes.com/article/coronavirus-stimulus-package-questions-answers.html

[3] http://stupidfilter.org/

[4] https://www.lessonsfromhaiti.org/press-and-media/transcripts/accompaniment-as-policy/ (I’m using an edited book version)

Thank you to Sohil Kshirsagar, Claire Liu, Jennifer Lu, Keshav Rao, and Justin Duan for reading drafts.
