I think it is perception and intention that are the issue, i.e., that we perceive Google as claiming to police content.
In other respects, not much has changed. PageRank originally aimed for relevant content, but it probably also overvalued popular content (i.e., pages with lots of in-links). I do not see how this is fundamentally different from, say, checking which reputable entities interact with a given source to help with ranking and quality control. What I see as a problem is the fundamental stance that certain opinions are not just less relevant, but *wrong*, where information gets deleted. Or when relevance scoring somehow works along lines like political affiliation, but without oversight or transparency. So far I think we both agree.
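To make the point about popularity concrete, here is a minimal sketch of the PageRank idea on a hypothetical toy graph (this is the textbook power-iteration formulation, not Google's actual production system): a page's score comes from the scores of the pages linking to it, so a heavily linked page floats to the top whether or not its content is the most relevant.

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Base score every page gets regardless of in-links.
        new = {p: (1 - damping) / n for p in pages}
        # Each page passes a share of its rank to the pages it links to.
        for p, outs in links.items():
            if not outs:
                continue
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

# "popular" collects in-links from everyone else, so it ends up ranked
# highest -- nothing here measures the actual quality of its content.
toy_web = {
    "popular": ["a"],
    "a": ["popular"],
    "b": ["popular"],
    "c": ["popular"],
}
scores = pagerank(toy_web)
```

The "reputable entities" variant I mention above is structurally the same move: it just changes whose in-links count for how much.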
Here’s where it gets tricky. Algorithmic methods that aim to be impartial are not. Whatever metric you put in will bias the output one way or another, and whichever data you feed it will manifest that to some extent. Data collected from a male-dominated field won’t represent the female population. That doesn’t mean the system was designed to output sexist results, but it does anyway. When your data is constantly evolving, this gets pretty hard to predict. This is why we need to accept that there is no such thing as an unbiased algorithm. What we can advocate for is a set of values that we wish for algorithms to work toward. Designers of both systems and algorithms will eventually find the best practices, but it’ll take some experimentation, or less formalized trial and error.
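The male-dominated-field example can be shown in a few lines. This is a hypothetical illustration with made-up numbers: a simple average fitted on a skewed sample quietly tracks the majority group, with no malicious metric anywhere in sight.

```python
# Hypothetical "typical applicant height" estimate, fitted on a sample
# where one group dominates. All numbers are invented for illustration.
male = [178, 180, 183, 175, 181, 179, 182, 177, 184]  # 9 samples
female = [165]                                         # 1 sample

skewed_sample = male + female
estimate = sum(skewed_sample) / len(skewed_sample)

male_mean = sum(male) / len(male)
female_mean = sum(female) / len(female)
# The estimate sits close to the male mean and far from the female mean,
# even though the arithmetic itself is perfectly "neutral".
```

Nothing in the code is biased in intent; the bias rides in entirely on the composition of the sample, which is the point.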
So a fair system is probably one where as many (legal?) points of view as possible get represented, and where the factors that contribute to ranking are made transparent to users. For systems with many variables and complex algorithms, that transparency is hard to achieve in practice. However, the closer we get to best practices for the values ranking and filtering systems should espouse, the closer we can get to making that sort of transparency happen.
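One sketch of what "factors made transparent" could mean in practice, assuming a hypothetical system with only a handful of linear factors (real rankers have far more, which is exactly why this is hard): every result carries a per-factor breakdown alongside its total score, so a user can inspect why an item ranked where it did.

```python
# Hypothetical factor names and weights, chosen only for illustration.
WEIGHTS = {"relevance": 0.6, "reputation": 0.3, "freshness": 0.1}

def score(item):
    """Return (total score, per-factor breakdown) so ranking is auditable."""
    breakdown = {f: WEIGHTS[f] * item[f] for f in WEIGHTS}
    return sum(breakdown.values()), breakdown

docs = [
    {"name": "a", "relevance": 0.9, "reputation": 0.2, "freshness": 0.5},
    {"name": "b", "relevance": 0.4, "reputation": 0.9, "freshness": 0.9},
]
ranked = sorted(docs, key=lambda d: score(d)[0], reverse=True)
```

With dozens of interacting, machine-learned signals the breakdown stops being this legible, which is the gap between the principle and current practice.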