The flip side is that we may not be able to clearly see the proper rules to establish parity and fairness in the system in the first place. Seeing clearly is hard. Systems that work agnostically for the whole are ideal, but most people cannot and will not think that way. They look to establish clan/tribal law and impose their will on the whole at every juncture. This is a powerful force in human dynamics.
The machine laws may also become tyrannical even with the best intentions, because humans are just not very good at large-scale systems thinking or seeing outside of their own viewpoints. We could end up establishing a system that enslaves us all if we are not careful, and maybe even if we are.
There is also a school of thought that says humans are hopelessly flawed in ways we cannot transcend, and that we'll just end up creating more flawed systems even if they look promising from this early vantage point.
An example might be something like AI for sentencing (though I was not talking about AI in this post, this is the most common example of a negative use of a promising technology that I can think of currently). If the only cases the AI can study are hopelessly filled with bias and corruption from fifty years of bias and corruption in arrest patterns, then that is all current AI systems will be able to learn, and hence they would end up re-establishing a corrupt system in a new, more pernicious and suffocating way.
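To make the mechanism concrete, here is a toy simulation (entirely hypothetical data and group labels, not any real model or dataset) showing how a system trained on records shaped by biased enforcement learns the bias itself: two groups behave identically, but one is recorded far more often, so any model fit to the records concludes that group is riskier.

```python
import random

random.seed(0)

# Toy "historical cases": the recorded outcome reflects biased arrest
# patterns, not actual behavior. Both groups have the same true rate.
def make_cases(n=10000):
    cases = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        offended = random.random() < 0.10       # identical true rate for both groups
        # Biased enforcement: group B is far more likely to be recorded.
        arrest_rate = 0.9 if group == "B" else 0.3
        recorded = offended and random.random() < arrest_rate
        cases.append((group, recorded))
    return cases

# A minimal "model": it simply learns each group's recorded rate from history.
def fit(cases):
    rates = {}
    for g in ("A", "B"):
        records = [rec for grp, rec in cases if grp == g]
        rates[g] = sum(records) / len(records)
    return rates

model = fit(make_cases())
print(model)  # group B's learned "risk" comes out roughly 3x group A's
```

The model is not wrong about the data it saw; it is faithfully reproducing a distortion that was baked in upstream, which is exactly the worry.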
I would rate these risks as moderate to very high.
Humans are great at not living up to their potential.
There are also a number of ways this can play out where corrupt powers find a way to warp and stunt the growth of the system, essentially making the current system even more draconian, which leads to all kinds of bad things for humanity.