WTF Quora: how a platform eats itself

Joan Westenberg
6 min read · Jun 3, 2024

Quora had a simple premise: it would be a place to share knowledge and expertise, where curious minds could pose questions on any topic imaginable and receive thoughtful, well-informed answers from the platform’s erudite community.

Think Yahoo Answers, for tech employees and Gladwellian intellectuals.

Founded in 2009 by two former Facebook employees, Quora quickly took off, featuring answers from an impressive roster of Silicon Valley luminaries, Ivy League academics, industry insiders, and enthusiastic hobbyists eager to weigh in on everything from the mundane to the esoteric.

By 2014, it was valued at north of $900 million.

Its success was built on the idea that platforms shouldn’t just connect us to information but to the people who held that information in their heads.

On Quora, you could do more than learn about quantum physics or the history of the Peloponnesian War; you could study them through the personal notes and replies of bona fide experts.

Quora’s stated intention was to create a sophisticated ideal of the internet, where people would altruistically gather and engage in stimulating discourse and knowledge sharing.

As of 2024, it has — arguably — failed.

The reasons?

  1. The overwhelmingly destructive power of digital marketing and growth hackers.
  2. The technology that investors and users had been promised would take the platform to new heights: artificial intelligence.

It began innocuously enough. As Quora’s community swelled into the millions, the company installed algorithmic systems to help sort and rank the growing flood of questions and answers.

Whereas previously, the best answers bubbled up through a robust system of user upvotes — a wisdom-of-the-crowds approach not unlike Reddit’s — now algorithms played a heavy hand in determining which content was surfaced and promoted.

Helpful, well-crafted responses were still rewarded with placement at the top, but the AI began to exhibit some quirks. Answers with more extreme views started receiving outsize attention.

Users discovered they could game the system by writing with more sentiment and zeal, even if the underlying substance was — shall we say — lacking.

Slowly but surely, the algorithms were reshaping the culture and dynamics of the community.
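The shift described above can be made concrete with a toy sketch. Nothing here is Quora’s actual ranker: the field names, weights, and numbers are invented purely to illustrate how an engagement-weighted score can invert the ordering that a pure upvote count would produce.

```python
# Two hypothetical answers to the same question. "sentiment_intensity"
# stands in for how emotionally charged the writing is (0.0 to 1.0).
answers = [
    {"text": "Measured expert answer", "upvotes": 120, "clicks": 300,
     "sentiment_intensity": 0.2},
    {"text": "Extreme hot take", "upvotes": 40, "clicks": 900,
     "sentiment_intensity": 0.9},
]

def upvote_rank(a):
    # Wisdom-of-the-crowds: community approval is the only signal.
    return a["upvotes"]

def engagement_rank(a):
    # Rewards raw clicks, amplified by emotional intensity, with no
    # reference to community approval at all.
    return a["clicks"] * (1 + a["sentiment_intensity"])

by_upvotes = max(answers, key=upvote_rank)
by_engagement = max(answers, key=engagement_rank)

print(by_upvotes["text"])     # Measured expert answer
print(by_engagement["text"])  # Extreme hot take
```

Same two answers, opposite winners: once the objective swaps approval for engagement, the incentive to write with "sentiment and zeal" falls straight out of the math.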

As Quora ploughed forward with AI-driven enhancements, things got weird. Unscrupulous users — the digital marketers and growth hackers who lurk in the shadows of every platform — realised the AI could be exploited.

They could dupe the system into heavily promoting their content by packing their answers with SEO-friendly keywords and exaggerated claims.

Moreover, they learned that the AI struggled to discern between human-crafted text and machine-generated content — meaning a user could feed a question into a language model, paste the output into Quora verbatim, and watch the algorithm shower it with upvotes and attention. Almost overnight, the site began to drown in a swamp of spammy, incoherent, AI-generated text.
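The keyword-stuffing exploit is easy to see with a deliberately naive relevance scorer. This is a hypothetical illustration, not any real search system: a ranker that merely counts query-term matches hands the top slot to whoever repeats the keywords most.

```python
import string

def naive_relevance(query: str, answer: str) -> int:
    """Count how many times each query term appears in the answer."""
    terms = query.lower().split()
    cleaned = answer.lower().translate(
        str.maketrans("", "", string.punctuation))
    words = cleaned.split()
    return sum(words.count(t) for t in terms)

query = "best productivity tips"
genuine = "Focus on one task at a time and take regular breaks."
stuffed = ("Best productivity tips! These productivity tips are the best "
           "productivity tips for the best productivity.")

print(naive_relevance(query, genuine))  # 0
print(naive_relevance(query, stuffed))  # 10
```

The substantive answer scores zero because it never parrots the query; the stuffed one scores ten while saying nothing. Any ranker that leans on surface patterns rather than meaning is open to exactly this attack.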

Simultaneously, some of Quora’s most beloved contributors, the academics and experts — who should have been prioritised as its lifeblood — began to drift away.

The AI overwhelmingly favoured answers written in confident, declarative statements, and many of the platform’s brainiest users were put off by this. They had always understood their role as adding context, not issuing decrees from on high.

Wary of being misrepresented by the algorithms or unwilling to pander to them, the quality contributors logged off. Those who remained watched with dismay as their thoughtful, nuanced responses were buried beneath a rising tide of zero-calorie content — not wholly inaccurate, but lacking any real nutritional value.

As Quora’s woes compounded, its leadership found itself in an ironic predicament.

A company that had built its name on celebrating knowledge and expertise was now at the mercy of machines that could not discern the difference between information and wisdom, nor between text that was factual and text that mimicked the patterns of factual-sounding writing.

In their relentless pursuit of engagement, the algorithms surfaced content that “earned” clicks and views but slowly eroded the culture of substance and erudition that had once defined Quora. By chasing short-term metrics, the company was jeopardising its long-term viability as a source of signal in a world saturated with noise.

Quora’s final capitulation was the launch of its chatbot platform, Poe. Pretentiously (and excruciatingly) an acronym for “Platform for Open Exploration,” Poe integrates AI models, including those from OpenAI, Anthropic, Meta, and Google.

Users can now engage in back-and-forth dialogue with these chatbots, receiving instant answers to their queries. While this is pitched as a step forward in “convenience and accessibility,” to many, it’s the final nail in the coffin of a promising, human-centric platform.

The introduction of Poe, backed by significant funding from populist-cosplaying investors like Andreessen Horowitz, shows how completely Quora has surrendered to the AI trend. The platform that once prided itself on being a home for human-generated content has now become a farm for AI-powered responses. The monetisation features of Poe, which allow bot creators to earn money through subscriptions and per-message fees, incentivise the proliferation of AI-generated content at the expense of authentic human interaction.

So we arrive at the real reason behind Quora’s demise — and the cautionary moral of this post. In outsourcing more and more of the community’s mechanics and dynamics to artificial intelligence systems — no matter how advanced — the company failed to appreciate AI’s limitations, while being blinded by its touted abilities.

Powerful as language models are at identifying textual patterns and predicting the next word in a sequence, they are rather poor at sniffing out the hallmarks of human expertise like reasoning, sound argumentation, and substantive knowledge. This deficit has proved to be the platform’s undoing. As one user put it, Quora had devolved from “a parliament of experts to a robotic ghostwriter’s content farm.”

Quora isn’t alone in this. Platforms like YouTube and Facebook are grappling with how to square the immense promise of AI-powered recommendation and moderation with the unnerving reality that these systems fail to make essential distinctions we humans intuit naturally. They cannot differentiate between authoritative information, compelling misinformation, shitposting, incisive commentary and attention-capturing outrage bait.

At their best, these algorithms can surface delightful serendipity — expanding our horizons in ways that enrich our understanding. At their worst, they create perverse incentives and reward the wrong things, polluting a platform’s culture and eroding trust in its community.

But back to Quora. As the dispiriting trends accelerated — dwindling expert engagement, worsening content quality, a grinding erosion of community cohesion — user growth first plateaued, then nosedived.

Anecdotally from my own circle, those who remain spend less and less time on the site, repelled by the increasingly frothy, self-referential noise.

Potential new users who stumble on Quora are unlikely to be impressed by the fluffy, repetitive entries awaiting them.

This one-two punch — existing users churning while new user signups stagnated — has sent Quora into a downward spiral from which it will likely never recover. It was, in effect, a brilliant own goal.

By 2024, Quora is a shell of its former self, attracting few users who genuinely love the platform. For a fleeting moment, Quora had offered a glimpse of what an internet of substance could look like — and then, as quickly as it had come, that vision was horse-traded away.

As a former Quora user, I’m left with an unsettling question: in our blind march toward an AI-intermediated future, is it even possible to imbue our algorithmic tools with the discernment and wisdom to curate for more than just shallow engagement?

How do we preserve fragile ecosystems of expertise in the face of machines that struggle to meaningfully distinguish knowledge from mere information and opportunists who will use and abuse those systems for their own gain?

For all the capabilities of LLMs, they still lack fundamental capacities and cannot replace human engagement, communication, and thinking. The Quora that could have been is very much gone.

But other bright-eyed tech founders would do well to remember the difficulty and importance of building online communities that elevate substance over shitty, cheap engagement and the very human wisdom we risk losing when they throw it all away.
