
Good Intentions Only Mean So Much in Startup World

How the trend of responsible social media platforms failed to make a dent in the industry

Ever since the tech world started imploding after scandal upon scandal, not least of which was the one that got our reality-star-in-chief elected, there have been innumerable startups trying to capitalize on the desire for a more ethical internet, especially in the social media realm. But none of those startups has been particularly successful to date.

I spent two years building a social media platform with “good intentions”, and didn’t get very far. Those two years were not wasted, since they gave me a solid set of product development skills for my career, but suffice it to say I’ve put the startup on hold.

That’s because, after two years, I finally realized: people use products for their entertainment value, and not for their intentions.

Any application, in the real world, has to make money to survive (unless it’s a nonprofit, which has its own set of obstacles). And right now the best way to earn money with an app is by monetizing people’s attention. The best way to garner attention is through sensationalism and clickbait, which thrive on our existing social platforms. That was the premise I, and many others, attempted to challenge through various startup efforts. But any new product that hopes to improve the reliability of social media necessarily has to rethink the way it makes money. Attention means engagement, engagement means ads, ads mean profit. And as yet, no social media solution has come up with a better way to earn a profit than ads.

Until either A) people’s general consciousness reaches a point where they would rather be informed than entertained (which is unlikely to happen), or B) a startup comes up with a way to earn money faster and more efficiently than the ad model, the ecosystem of misinformation will prevail.

This is the unfortunate truth we have to live with.

So far, startups in this space have come up with a variety of novel business models:

  • Points system: users can purchase and redistribute a digital currency, similar to a freemium mobile game (e.g., Okuna)
  • Brand image security: prevents ads from being placed alongside undesirable content (e.g., Factmata)
  • Aggregate subscriptions: get past multiple paywalls with one monthly fee (e.g., INKL)

And of course, there have been some great ideas that have not been monetized at all, and have simply managed to stay afloat via venture capital (e.g., Nuzzel).

At the other end of the spectrum, there are non-profit solutions that have also attempted to tackle the problem from a more academic angle. I think it is fair to say that, while potentially useful for understanding the problem better, such solutions are often less user-friendly, and as such, can never replace the incumbent platforms, meaning they’ll never make a real dent in solving the problem.

So now we have a big picture of the ecosystem, and an understanding of the disheartening truth that good intentions don’t make money: all of the above startups and projects are struggling or gaining little traction.

Where do we go from here? If we can’t come up with a sustainable market solution or a non-profit one, what’s left? That’s the million-dollar question.

To get to the answer, we first need to understand one thing: while current social media platforms are a primary enabler of the spread of misinformation and clickbait, they are not the cause of the problem. The root of the problem is… you and me! We, the users of those platforms, tend to spread unverified content, usually unknowingly, simply because it’s so easy and tempting to do so. I’m guilty of it, and you probably are too; there’s no shame in admitting that the internet is filled with metric tons of convincing information that looks legit, but which we later find to be inaccurate, whether through negligent reporting or deliberate trolling. Spreading fake news is not only done by criminals, and that’s the first step to understanding why platforms have had such a hard time controlling it. There is no quantifiable way to pinpoint the perpetrators, nor the content. It’s one big intertwined, multidimensional matrix of people and information, guilty and innocent, true and untrue.

Visualize this as a cake: Facebook is the bready part, and fake news is the frosting, the sweetest part, but also the one that will clog your arteries. Now we have to squeeze something in between: something to slow people down between the moment of their initial impulse to share and the actual act of sharing. We have to add another layer to that cake.

When you take a bite of the cake, you get all the layers at once, but we can try to make that middle layer so tasty that, on their second bite, people will slide off the icing and only eat the healthy part. Eating the entire slice will be tempting at first, because sliding off the frosting takes an extra step. But eventually, bakeries will make cake without the icing, and the icing factories will go out of business, or shift to a healthier form of cake topping without all the artificial colorings and preservatives.

What that layer will look like, I’ll leave to you to figure out and experiment with. The important thing is that this solution is not intended to replace existing platforms, but to work with them. Any direct competitor, even one with a better “mission”, won’t work.

My leaning would be something along the lines of a browser extension (e.g., B.S. Detector). Something simple yet powerful, which would not label content as fake or true (which is impossible anyway; if a human has a hard time doing it, so will a machine), but would instead quantify it in terms of the topics involved and the claims it makes, and compare those claims against similar pieces of content from different publications, databases, studies, and so on, adding context to the content-sharing experience. That context would give users the information they need to make a better decision about whether or not to share a piece of content on social media. How do we make money off of that? Simple: a one-time fee or a subscription, just like an ad blocker. Don’t expect to become the next Facebook with such a product, but you might gain some traction as a small but sustainable software business, and maybe even get acquired by one of the big ones.
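To make that idea a little more concrete, here is a minimal sketch of what such an extension’s content script could look like, written in TypeScript. Everything specific in it is an assumption made for illustration: the claim-context.example.com endpoint, its response shape, and the stance labels are hypothetical, not an existing service or API.

```typescript
// content-script.ts (WebExtension content script, hypothetical sketch)
// Runs on article pages, pulls out the headline, asks a (hypothetical)
// claim-context service for related coverage, and injects a small panel
// so the reader sees context before sharing.

interface RelatedItem {
  publication: string; // e.g. "Reuters" (illustrative)
  headline: string;
  url: string;
  stance: "supports" | "disputes" | "unrelated"; // coarse context, not a true/false verdict
}

interface ContextResponse {
  topic: string;
  related: RelatedItem[];
}

// Hypothetical endpoint; no such service exists as of this writing.
const CONTEXT_API = "https://claim-context.example.com/v1/context";

// Grab the most likely headline: og:title first, then the first <h1>.
function extractHeadline(): string | null {
  const og = document.querySelector<HTMLMetaElement>('meta[property="og:title"]');
  if (og?.content) return og.content;
  return document.querySelector("h1")?.textContent?.trim() ?? null;
}

async function fetchContext(headline: string): Promise<ContextResponse | null> {
  try {
    const res = await fetch(`${CONTEXT_API}?q=${encodeURIComponent(headline)}`);
    if (!res.ok) return null;
    return (await res.json()) as ContextResponse;
  } catch {
    return null; // fail quietly; context is additive, never blocking
  }
}

// Render a small panel listing how other outlets cover the same claim.
function renderPanel(ctx: ContextResponse): void {
  const panel = document.createElement("div");
  panel.style.cssText =
    "position:fixed;bottom:16px;right:16px;max-width:320px;padding:12px;" +
    "background:#fff;border:1px solid #ccc;border-radius:8px;font-size:13px;z-index:99999;";
  const items = ctx.related
    .map(r => `<li><a href="${r.url}">${r.publication}</a>: ${r.headline} (${r.stance})</li>`)
    .join("");
  panel.innerHTML = `<strong>Before you share: "${ctx.topic}"</strong><ul>${items}</ul>`;
  document.body.appendChild(panel);
}

(async () => {
  const headline = extractHeadline();
  if (!headline) return;
  const ctx = await fetchContext(headline);
  if (ctx && ctx.related.length > 0) renderPanel(ctx);
})();
```

The key design choice in this sketch is that nothing gets blocked or labeled as false; the extension only surfaces how other sources treat the same claim and leaves the decision to share with the reader.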

I put all my writing out without paywalls because it’s more meaningful to me to share my experiences and inspire others to do the same. If you’d like to support my mission, please consider buying me a coffee.
