Here’s what’s wrong with algorithmic filtering of your social feeds

Algorithmic filtering has come to Twitter, as many suspected it would. And while it likely won’t kill Twitter, despite what some hysterical Twitter users seemed to fear, it is not a magical solution to all of Twitter’s problems. It also comes with some pretty obvious social downsides that are worth talking about.

As the hashtag #RIPTwitter started trending following BuzzFeed’s initial report of the new features on Saturday, the corporate Twitter machine went into defensive mode: co-founder and CEO Jack Dorsey responded with a series of tweets saying he was listening and that Twitter values the traditional timeline, while noted Twitter investor Chris Sacca promised there was “zero chance” the chronological view would disappear.

The new offering is essentially an expansion of the “While you were away” feature, which has been available for some time, and puts what Twitter feels are tweets you might be interested in at the top of your feed. Interestingly enough, it will start out as an opt-in feature, but eventually it will become the default view, and that’s important.

The argument from defenders of a filtered feed is twofold: 1) since many new users find Twitter confusing and it takes time to find accounts worth following, giving them an algorithmically sorted feed (i.e., with tweets ranked by a computer program) is a good onboarding strategy; and 2) almost everyone who follows more than a handful of people already misses plenty of tweets, so sorting things via an algorithm isn’t really much different, and is probably better.
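To make that distinction concrete, here is a minimal, hypothetical sketch of the two views. The tweet fields, the score formula, and the weights are invented for illustration; Twitter has not published how its ranking actually works.

```python
from dataclasses import dataclass
from datetime import datetime

# A toy tweet model. Fields, formula, and weights are illustrative
# assumptions, not anything Twitter has disclosed.
@dataclass
class Tweet:
    author: str
    text: str
    posted_at: datetime
    likes: int = 0
    retweets: int = 0

def chronological(tweets):
    """The traditional timeline: newest first, nothing reordered or hidden."""
    return sorted(tweets, key=lambda t: t.posted_at, reverse=True)

def ranked(tweets, now=None):
    """A filtered timeline: a score chosen by the platform decides the order."""
    now = now or datetime.now()

    def score(t):
        age_hours = (now - t.posted_at).total_seconds() / 3600
        # Arbitrary weights: here, engagement counts for more than recency.
        return 2.0 * t.retweets + 1.0 * t.likes - 0.5 * age_hours

    return sorted(tweets, key=score, reverse=True)
```

The difference is not that one feed is sorted and the other is not; it is that in the second case the order comes from a formula someone else wrote, and a new user only ever sees the output.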

So is the fuss over filtering just another molehill that users are turning into a mountain? Is it the same as changing the star that represented favorites into an exploding heart — just another fuss that will blow over in time? Perhaps, although users of social services often come to accept many things that might not be good for them. Even the former CTO of Facebook, Adam D’Angelo, acknowledges that there are problems with a filtered feed.

If we need an example of both the benefits and the risks of a filtered feed (even one that is theoretically optional for users), we already have a pretty massive one, namely Facebook. Many supporters of Twitter’s move argue that Facebook users complained about filtering at first too, then went along with it, and engagement at the social network continued to soar. In other words, no big deal.

It’s worth noting, however, that while Facebook allows users to opt out of algorithmic filtering, the opt-out setting is difficult to find (and it automatically resets to the filtered view after a certain period of time). As a result, most people don’t opt out, because they don’t even know the option exists. In user-interface design, defaults are everything.

A survey by researchers from the University of Illinois showed that 60% of users didn’t even know that Facebook filters their feed at all. Some might wonder if that’s such a bad thing. Another former Facebook chief technology officer, Bret Taylor, noted on Twitter that an algorithmic feed “was always the thing people said they didn’t want but demonstrated they did via every conceivable metric.”

So if users enjoy their new filtered experience on Twitter, then who cares whether it’s being structured without their knowledge? Usage goes up, everyone is happy. Where’s the problem?

In a nutshell, the problem with filtering is that the algorithm — which of course is programmed and tweaked by human beings, with all their unconscious biases and hidden agendas — is the one that decides what content you see and when. So ultimately it will decide whether you see photos of refugees on the beach in Turkey and shootings in Ferguson or ice-bucket videos and photos of puppies.
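To see how a human choice becomes a coded one, consider a hypothetical tweak to the toy ranker sketched above: add a model-predicted engagement signal and a cutoff. The signal and the numbers are invented for illustration, but the mechanism is the general one.

```python
def filtered_feed(tweets, predicted_engagement, top_n=10):
    """Show only the items a model scores highest. Everything below the
    cutoff is effectively invisible, even though no editor 'decided' that
    and the user never asked to hide it."""
    scored = sorted(tweets, key=predicted_engagement, reverse=True)
    return scored[:top_n]

# If the engagement model has learned, or been tuned, to favor upbeat
# content, puppy photos outrank photos from Ferguson without anyone
# making that call on a given day. The decision lives in the weights.
```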

Does that have real-world consequences? Of course it does, as sociologist Zeynep Tufekci has pointed out in a number of blog posts. It can serve to reinforce the “filter bubble” that human beings naturally form around themselves, and that can affect the way they see the world and thus the way they behave in that world.

Defenders of Twitter and Facebook make the point that newspapers and other forms of media do this kind of filtering and selection all the time. But those outlets at least theoretically have a journalistic mission of some kind (in addition to just wanting to sell newspapers). Does Facebook or Twitter have a commitment to journalism, or accuracy, or any of the other goals media outlets have? We don’t really know.

Twitter, at least, has shown in the past that it cares about freedom of the press and is willing to stand up in court to defend those principles. But will that carry over to how it filters your timeline? As a for-profit company, it has commercial and political considerations to weigh as well.

Facebook, meanwhile, has argued in the past that it doesn’t choose what to show you — that you, the user, do that by clicking and liking and sharing. In effect, Facebook says that the algorithm is just a reflection of what you have already said you want. In other words, it is specifically rejecting the idea that it plays any kind of editorial role in what users see. But this seems like dancing around the issue.

By definition, algorithmic filtering means that you are not the one who is choosing what to see and not see. A program written by someone else is doing that. And while this may be helpful — because of the sheer volume of content out there — it comes with biases and risks, and we shouldn’t downplay them. As social platforms become a larger part of how we communicate, we need to confront them head on.

Note: This post originally appeared at Fortune.com