Surveillance for recommendation systems

In the epilogue of the book Radical Markets, the authors speculate that we may one day replace the market with an AI system that recommends everything people need. They worried that the temptation to abuse such a system would be overwhelming, so they argued that the recommendation system should be governed in a democratic, decentralized, distributed, and auditable way. I think part of this can be achieved by using zero-knowledge proofs like zk-SNARKs. (You can read the following articles for an introduction to zk-SNARKs: [1][2])

Disclaimer: I’m not an expert on zero-knowledge proofs. There may be errors in what follows.

Let’s assume that a certain group (a company, a government, etc.) runs the recommendation system. We must make the system run objectively, without any bias, so that it cannot be quietly controlled by the group. How can we achieve this?

(Let’s assume that the data is unbiased and that only the algorithm can be biased by malicious actors. We also know that even an objective recommendation algorithm can promote political polarization, but let’s set that aside for now.)

First of all, we need to confirm that the algorithm is not biased, so we can request that the group open the algorithm to the public. However, even if the algorithm is public, we still need to ensure that the group is actually running it in the recommendation process. This is where zero-knowledge proofs come in.
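As a heavily simplified sketch of the first step, opening the algorithm can be paired with a public commitment to its exact source, so everyone can verify they are auditing the very program the group claims to run. The SHA-256 hash below is my own illustrative stand-in for a commitment posted on a public network; it is not part of any particular zk-SNARK scheme:

```python
import hashlib

# Hypothetical illustration: the group publishes the source of its
# recommendation algorithm along with a SHA-256 commitment (e.g. on a
# public blockchain). Anyone can recompute the hash and confirm that
# the code they audited is the code the group committed to.
published_source = b"def recommend(user_data): return top_items(user_data)"

commitment = hashlib.sha256(published_source).hexdigest()

def matches_commitment(source: bytes, commitment: str) -> bool:
    """Check that a given source text matches the published commitment."""
    return hashlib.sha256(source).hexdigest() == commitment

print(matches_commitment(published_source, commitment))         # True
print(matches_commitment(b"a tampered algorithm", commitment))  # False
```

A commitment alone only pins down *which* code was published; proving that this code was actually *executed* for a given recommendation is the job of the zero-knowledge proof discussed next.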

The group may publish a zero-knowledge proof on a public network showing that a recommendation was produced by the open algorithm. However, proving that the open algorithm was run on *some* inputs is not enough: the group could publish such proofs while serving users recommendations from a different, biased algorithm. Therefore, the group should publish a zero-knowledge proof for each user’s recommendation, so that every recommendation actually delivered can be checked against the open algorithm.
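To make the per-user idea concrete, here is a toy sketch of the publishing pattern. A real deployment would use an actual zk-SNARK circuit proving “this output equals the open algorithm applied to this user’s input” without revealing the input; the functions and names below (`publish_proof`, `check_proof`, `ALGO_COMMITMENT`) are hypothetical hash-based stand-ins that only illustrate the bookkeeping, not the cryptography:

```python
import hashlib
import json
from dataclasses import dataclass

# Hash of the open algorithm, assumed already published (hypothetical value).
ALGO_COMMITMENT = "deadbeef"

@dataclass
class RecommendationProof:
    user_id: str
    output_digest: str    # digest of the recommendation the user received
    algo_commitment: str  # ties this proof to the open algorithm

def _digest(recommendation: list) -> str:
    return hashlib.sha256(json.dumps(recommendation).encode()).hexdigest()

def publish_proof(user_id: str, recommendation: list) -> RecommendationProof:
    """The group posts one proof per user to the public network."""
    return RecommendationProof(user_id, _digest(recommendation), ALGO_COMMITMENT)

def check_proof(proof: RecommendationProof, recommendation: list) -> bool:
    """A user (or auditor) checks the recommendation they actually saw
    against the proof posted for them."""
    return (proof.output_digest == _digest(recommendation)
            and proof.algo_commitment == ALGO_COMMITMENT)

proof = publish_proof("alice", ["item_42", "item_7"])
print(check_proof(proof, ["item_42", "item_7"]))  # True: matches the posted proof
print(check_proof(proof, ["biased_item"]))        # False: a swapped recommendation is caught
```

The key design point this illustrates is granularity: one proof per delivered recommendation, so a user who receives something different from what was proven can detect the mismatch.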

However, the data can still be a problem, as we all know: garbage in, garbage out. Even so, can’t we at least ask the companies to open the recommendation algorithm? I think we can.