Meditations on Thinkspot

and design considerations

Tim Pastoor
Jun 18, 2019

Dear Jordan (and anyone else interested in the subject),

My name is Tim Pastoor. I am an independent identity and reputation systems researcher from the Netherlands. Over the past five years I have been explaining, to anyone willing to listen, what the drawbacks are of existing identity and reputation systems such as Facebook, Twitter, Google, Amazon, and Uber, and how we could potentially solve them.

I believe I understand the intentions behind Thinkspot, and can only encourage the efforts. Nonetheless, I also believe some of the pitfalls that come with existing systems apply to Thinkspot as well. I say this having studied only S2E12 of “The Jordan B. Peterson Podcast,” titled “A Conversation with Joe Rogan / Part Two,” to better understand the philosophy and design choices behind Thinkspot (straight from the source). More importantly, I believe most of the issues I will address can be solved relatively easily. And if that sounds too good to be true, allow me to explain.

Over the past few years I’ve written articles on topics such as:
- how we moved from tribes into civilization and now back into (electronic) tribes, why this is happening, and where we might be heading, in “There Will Be More Intermediaries. Not less.”,
- the essentials of where I believe we went wrong and how to fix it, in “Fixing Orwellian Reputation Systems”,
- a long-form reply to Twitter’s RFP for their “Health” metric, in the hope that some of the smart people working there would either prove or break my theories, in “On Twitter’s “Health” Metric”,
- how I was locked out of Twitter by their AI without violating their Terms of Service (ToS) (this has happened to me three times now), and how I have warned people about this scenario for years, in “How I was locked out of Twitter”,
- and years of tweets about related problems and how I believe we could fix them.


This isn’t intended as a shameless self-plug though, so enough about me. The reason I share this is to explain my urge to provide (hopefully) constructive feedback. So, let’s get to Thinkspot.

I applaud the effort to build more censorship-resistant social media platforms, and features such as livestreams, an alternative to Patreon, fair monetization models for content creators, and context-specific quoting and annotating of copyrighted works sound like good building blocks for an alternative social media platform to me. However, from where I sit, none of this solves the issues that plague most of the reputation systems we use today: Sybil attacks, trolls, sockpuppets, online harassment, and so on.

Traditional reputation systems (and even smaller platforms such as forums, for that matter) start from the premise that everyone and everything is trusted at first. From there, they try to filter out whatever they deem irrelevant, mostly through human moderation. Companies like Google and Facebook hire tens of thousands of moderators these days, costing them billions of dollars per year. From my analysis, this is the problem Thinkspot wants to rid itself of: human moderation.

This isn’t the first time in recent history that big corporations have struggled with the fact that it would be more efficient (both cost- and risk-wise, to them and their users) to automate human switches out of their electronic networks and out of their offices. For example, the U.S. government granted AT&T a monopoly on the U.S. telephone network on one condition: everyone who signed up had to be connected to the network, and thereby be able to communicate with anyone else on it. It didn’t matter whether you were located in a downtown metropolis or somewhere on a hill in Kansas. After some quick napkin math, the smart folks at AT&T realized that for this to happen they would have to employ a significant part of the American population as switchboard operators. This incentivized them to invest in Bell Labs, and in the transistor. The invention of the transistor led to punching in a number on your telephone and being (automatically) connected to anyone else on the network, without relying on a human to switch the connection mechanically, and ultimately to the mother of all geodesic networks: the Internet. The fact that the same government later broke up AT&T is outside the scope of these writings, but you may understand there are parallels to be drawn here as well.

So, how does this relate to existing reputation systems and Thinkspot?

Reputation systems are born out of a need to find a needle in a haystack. In the early days of the Internet, before search engines and social media platforms, there were directories of websites: basically the Yellow Pages for the web, where websites were archived by category. Since the amount of data on the Internet grew exponentially, it quickly became infeasible to curate all this information by hand. And so reputation systems were born to solve for curation, and search engines came along with them.

To summarize (and butcher) how search engines work: they index and gather all the data they can find, present whatever they deem relevant, and filter out whatever they find irrelevant. Social media platforms essentially use similar technology under the hood: they allow anyone to sign up and spread content, and blacklist any content they consider ‘bad.’ This is how reputation systems offer a good solution to a hard problem, while creating a seemingly even harder one.

As I said, I believe this can be solved relatively easily. So, here’s my suggestion:

Don’t filter content for your users before you display it to them. Humans are still better at deciding for themselves who they want to listen to, within what context, and when. How can this solve the aforementioned issues though?

Instead of approaching everything and everyone as trusted from the get-go, perhaps we could start from the whitelisting principle. Users would add those they already trust (maybe piggybacking off existing networks?) and then filter their search results to content from those connections only. And if your friends / connections / followees don’t have the answer you’re looking for, perhaps their friends do. Or theirs. This way, we could filter by trusted sources and degrees of separation.
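To make this concrete, here is a minimal sketch in Python of such a degrees-of-separation filter. The data structures and names are purely hypothetical, just to illustrate the principle, not anything Thinkspot has announced:

```python
from collections import deque

def trusted_within(graph, me, max_degree):
    """Collect everyone reachable from `me` through trusted
    connections, up to `max_degree` hops away (1 = my own
    whitelist, 2 = friends of friends, and so on)."""
    trusted = {}  # user -> degree of separation
    queue = deque([(me, 0)])
    while queue:
        user, degree = queue.popleft()
        if degree == max_degree:
            continue
        for friend in graph.get(user, set()):
            if friend not in trusted and friend != me:
                trusted[friend] = degree + 1
                queue.append((friend, degree + 1))
    return trusted

def filter_results(results, trusted):
    """Keep only results authored by someone in my trust
    neighborhood; everything else simply isn't shown."""
    return [r for r in results if r["author"] in trusted]

# Example: I trust alice and bob; alice trusts carol.
graph = {"me": {"alice", "bob"}, "alice": {"carol"}}
trusted = trusted_within(graph, "me", max_degree=2)
# -> {'alice': 1, 'bob': 1, 'carol': 2}

results = [
    {"author": "carol", "text": "a useful answer"},
    {"author": "stranger", "text": "spam"},
]
print(filter_results(results, trusted))  # only carol's answer survives
```

Note that nothing is deleted from the network itself; the stranger’s post still exists, it just never enters your view.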

This is how humans filtered their ‘search results’ for thousands of years, until the Internet came along: through their already trusted sources (a neighbor recommending a car or a car dealer, the local butcher recommending his finest meat, or you no longer recommending the local barber after they botch your haircut). Think about it: how many people who deliberately spread misinformation do you have in your Rolodex or contact list?

An example. Let’s say you are deciding whether or not to buy some computer parts from a seller on eBay or Amazon. There are a million ratings from people you don’t know (you don’t even know whether there are real people behind them) saying something negative about the seller. According to them, he’s the worst. Now we place one rating alongside them, from your best friend, who (you believe) knows a lot about computers and says this is the best seller they’ve ever come across. They’ve had many interactions in the past. Your friend says shipping always went perfectly, and that the after-sales service is superb when things somehow do go wrong. Who do you trust now? Those million ‘people’ you don’t know, or your best friend?
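Put in code: if each rating is weighted by how much you trust its author, a million ratings from strangers carry no weight next to a single rating from a trusted friend. A minimal sketch, with hypothetical names and weights of my own choosing:

```python
def trust_weight(degree):
    """Hypothetical weighting: direct connections count fully,
    friends-of-friends count less, strangers count for nothing."""
    return {1: 1.0, 2: 0.5, 3: 0.25}.get(degree, 0.0)

def weighted_score(ratings, trusted):
    """ratings: list of (author, score) pairs, score in [-1, +1].
    trusted: author -> degree of separation (see earlier sketch)."""
    total = weight_sum = 0.0
    for author, score in ratings:
        w = trust_weight(trusted.get(author))
        total += w * score
        weight_sum += w
    return total / weight_sum if weight_sum else None  # None: no trusted signal

# A million negative ratings from strangers vs. one from a friend:
ratings = [(f"stranger{i}", -1.0) for i in range(1_000_000)]
ratings.append(("best_friend", +1.0))
print(weighted_score(ratings, trusted={"best_friend": 1}))  # -> 1.0
```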

From my perspective, this is where the solution lies: stranger danger. This is how you keep fake news, misinformation, trolls, and sockpuppets out of your social network. Another example.

Let’s say you add me to your contact list and I am now one of your trusted connections. Anything I say will be shown to you, and all messages from untrusted parties are discarded. However, whatever my friends (2nd-degree connections) or even their friends (3rd-degree) say can also be shown to you. If I then create a million fake identities (1st-degree connections to me, 2nd-degree connections to you) and let them all give each other good ratings, when you visualize your network you will see that it’s me who connects you to those million fake identities. You could then down-vote me out of your personal network to get rid of the whole Sybil swarm that lies behind me, from your perspective. You can thereby blacklist anything or anyone from your personal network yourself, without it having any impact on anyone else’s personal network. Unless, that is, there were some sort of reputation mechanism involved where you could label me as “toxic” (or whatever) and broadcast that information to your own network. Those who trust you could then decide for themselves how to act on this information, and whether to block me or continue listening to me and my Sybil swarm. Generally, they will have an incentive to remove me from their networks too, since my intention isn’t to spread relevant information, from their perspective. A smart person once said, “make friends with people who want the best for you.” ;-)
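The same toy model shows in code why this works: the entire swarm hangs off a single trusted link, and cutting that link makes the swarm invisible to you. (A thousand Sybils stand in for the million here; all names are hypothetical.)

```python
from collections import deque

def reachable(graph, me, max_degree=3):
    """Everyone visible to `me` within max_degree hops."""
    seen, queue = set(), deque([(me, 0)])
    while queue:
        user, degree = queue.popleft()
        if degree == max_degree:
            continue
        for peer in graph.get(user, set()):
            if peer not in seen and peer != me:
                seen.add(peer)
                queue.append((peer, degree + 1))
    return seen

# tim is your 1st-degree connection; he created a swarm of Sybils
# (your 2nd degree) that all rate each other highly.
graph = {
    "you": {"tim", "alice"},
    "tim": {f"sybil{i}" for i in range(1_000)},
}
print(len(reachable(graph, "you")))  # 1002: tim, alice, and the swarm

graph["you"].discard("tim")          # one click: down-vote tim
print(len(reachable(graph, "you")))  # 1: only alice remains
```

Nobody else’s view changes; only the edge from you to me is gone, and with it everything that was only reachable through me.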

To make this possible, the only solution seems to be giving users better tools to filter for what they find relevant themselves, and to curate their own personal networks further. I don’t believe filtering out results that drop below a 50/50 score will help in any way: the threshold is arbitrarily picked, and besides, it’s easy and cheap to create ‘fake’ accounts with the sole purpose of changing the outcome of any voting mechanism on platforms like these. Even worse, this is generally where the trouble starts: platform owners deciding what is relevant to their users and what isn’t.

When you ponder “stranger danger” long enough, you will probably realize that you don’t have anyone in your personal network (think of those you follow on Twitter, the contact list on your phone, or your Rolodex for that matter) who spreads fake news or other types of misinformation (according to you). And even if you do, if you’re, say, a journalist or researcher, you might have a very good reason to connect with them. Or maybe you don’t listen to what they have to say about the news, but only to sports-related stuff: content within another context. In the end, it’s all very subjective, and computer code consists of purely objective logic. It’s practically impossible to use objective logic to solve for subjective judgment. And the subjective value of a trusted source is always greater than the subjective value of an untrusted source, as circular as that may sound. The reason I point this out is that existing systems don’t assume this, by design.

In other words, as I see it, the trick to designing a quality social media platform is to create it in such a way that everyone decides for themselves who to listen to, when, and what for. Don’t like someone or something they’re saying? Then don’t add them to your network. And even if you add someone to your network who links you to others that spread information you find undesirable, you can visualize your network and remove that link from the equation with a single click.

Instead of relying on third parties to blacklist bad actors from our personal networks, we should start with tools to whitelist those we already trust, and only listen to them and those they trust. On traditional systems, we often listen to those we don’t trust, and then complain about how, say, ‘toxic’ they are.

However, any centralized platform puts itself in a position where it can be forced to remove content, since it controls the platform and the associated data. The only (technical) solution to this problem is peer-to-peer (P2P) networking, where all “peers are equally privileged, equipotent participants in the application.” It is practically impossible to take down content from a network like BitTorrent or Bitcoin, let alone to take down the network itself.

There are already projects out there working on decentralized and P2P social media, such as Mastodon, Iris, and Aether. I believe systems like these can show us how to build truly censorship-resistant platforms, by design. Hopefully, Thinkspot will have a P2P back-end as well, and will allow anyone to run a front-end themselves. What I also hope is that users will have to create their own (cryptographic) identities, so that any data they create and share is truly controlled by them, and can be shared end-to-end encrypted with their trusted connections. Not only would this be more risk-efficient for the user, since their data would become mobile and could be taken elsewhere, but also for Thinkspot, since they’d have to store a lot less personal data. In other words: consider personal data toxic waste instead of a gold mine.
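As a sketch of what such a user-held identity could look like, here is a minimal example using the Python cryptography library and Ed25519 keys. The library and key choice are my assumptions, not anything Thinkspot has announced:

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The user generates and keeps the private key themselves; the
# public key *is* their identity, shareable with any front-end or peer.
identity = Ed25519PrivateKey.generate()
public_identity = identity.public_key()

post = b"Anything I publish is signed by me, not by the platform."
signature = identity.sign(post)

# Any peer holding the public key can verify authorship,
# with no central database of accounts required.
try:
    public_identity.verify(signature, post)
    print("post verified: authored by this identity")
except InvalidSignature:
    print("post was tampered with or forged")
```

Because the keys live with the user, the identity (and everything signed with it) can move to any other front-end or network, which is exactly what makes the data mobile.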

When it comes to advertising and whitelisting, you could theoretically promise advertisers near-100% reach into their target markets. After all, if I have to sign up in order to receive your spam, is it really spam?

For example, if you add an advertiser to your network and request advertisements on one or more specific subjects only, they then have an incentive to deliver only content you deem relevant, without needing any other personal information from you. Once their information is no longer relevant to you, you have an incentive to down-vote/unfollow the advertiser out of your personal network. From that moment on, none of their messages will be displayed to you. This could create a level playing field for privacy-friendly, opt-in ads, besides the other planned mechanisms that would allow for (better) monetization models.
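A minimal sketch of such an opt-in subscription filter, again with hypothetical names:

```python
# Hypothetical opt-in ad subscriptions: the user names the advertiser
# AND the topics; the advertiser only ever reaches those subscribers.
subscriptions = {"you": {"acme_ads": {"mechanical keyboards"}}}

def wanted_ads(user, ads):
    """Show an ad only if the user subscribed to this advertiser
    on this topic; everything else is discarded, unseen."""
    subs = subscriptions.get(user, {})
    return [ad for ad in ads
            if ad["topic"] in subs.get(ad["advertiser"], set())]

ads = [
    {"advertiser": "acme_ads", "topic": "mechanical keyboards"},
    {"advertiser": "acme_ads", "topic": "diet pills"},
    {"advertiser": "random_spammer", "topic": "mechanical keyboards"},
]
print(wanted_ads("you", ads))  # only the first ad gets through

# Unfollowing the advertiser is just deleting the entry:
del subscriptions["you"]["acme_ads"]
print(wanted_ads("you", ads))  # []
```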

In the end, it all comes down to common sense, in my opinion, and hopefully this article is enough food for thought to come up with a better design than the one traditional systems apply. I believe in the good intentions behind Thinkspot, but as you would say: “good intentions are only the beginning.” I would just hate to see you fall into the same pitfalls as everyone else.

To anyone who has any comments, questions, and/or suggestions after reading this, feel free to contact me on Twitter, Iris, Mastodon, or perhaps one day on Thinkspot. :-)

Hopefully this helps, and I wish you all lots of censorship-resistance!

All the best,

Tim

Disclaimer: I intentionally haven’t mentioned “blockchain” or “AI” / “ML,” since I don’t believe they offer real solutions to the aforementioned problems. At least, not nearly as much as they can help us with solving other issues.

Changelog:

June 20, 2019 — added Aether to the list of P2P social media examples.
