The trouble with AI is trust?

Cracking the deep, cheap secret of neural nets

Possibly the most important AI news last week was another big score for astrophysics and the academic republic of Cambridge, MA. In "Why does deep and cheap learning work so well?", Henry W. Lin and Max Tegmark explained why neural networks have been so effective at "unlock[ing] almost every lock that we've tried". Lin's explainer on Quora continues the metaphor with "magical keys". The summary at MIT Tech Review frames the issue without resort to magic (sadly): "There is no mathematical reason why networks arranged in layers should be so good". Lin and Tegmark "show constructively that polynomials can be approximated arbitrarily accurately with a neural net of fixed size. This is in contrast to a bunch of older works which show that if you allow the size of your neural net to grow to infinity, you can approximate anything you want."
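To get a feel for the flavor of the construction, here is a minimal sketch (my own illustration, not the authors' full proof) of the kind of gadget such arguments rest on: a fixed four-neuron "multiplication unit" that approximates x*y using any smooth activation with a nonzero second derivative at zero. The choice of softplus and of the scale parameter `lam` are my assumptions for the example; shrinking `lam` shrinks the error.

```python
import math

def softplus(u):
    # Smooth activation with nonzero curvature at the origin:
    # softplus''(0) = sigmoid'(0) = 1/4.
    return math.log1p(math.exp(u))

def approx_mul(x, y, lam=0.001, sigma=softplus, sigma_pp0=0.25):
    """Approximate x*y with four fixed activation units.

    Taylor-expanding sigma around 0 gives
        sigma(a) + sigma(-a) ~ 2*sigma(0) + sigma''(0) * a**2,
    so the combination below isolates
        (x + y)**2 - (x - y)**2 = 4*x*y,
    with error vanishing as lam -> 0.
    """
    s = (sigma(lam * (x + y)) + sigma(-lam * (x + y))
         - sigma(lam * (x - y)) - sigma(-lam * (x - y)))
    return s / (4 * sigma_pp0 * lam ** 2)

print(approx_mul(2.0, 3.0))  # close to 6
```

Since sums are what a linear layer does for free, a network that can multiply with a fixed number of neurons can build up any fixed polynomial at fixed size, which is the spirit of the quoted result.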

But we still put our trust in black boxes

On the applied side of artificial intelligence and algorithmic decision making, John Mattison of Kaiser Permanente asserted that the most important question of the 21st century will be: Whose black box do you trust? Tim O’Reilly responded this week with four criteria for assessing which algorithms to trust.

I am skeptical that the assumption of choice applies to algorithmic decision making. Consumers and platform users often know far too little about how algorithms work. Even when they understand one platform, they may not understand its competitors, precluding any meaningful comparison. Further, once a particular decision tree has proven profitable, it tends to be adopted across an entire industry, so a platform that avoids the popular strategy may simply not exist. Would you like to read news produced by a newsroom that is not influenced by click-through rates? Please let me know when you find one.

Juking the stats on college rankings

In an excerpt from her new book, Cathy O’Neil explains how the formulas (read: algorithms) that determine college rankings have led colleges and parents to juke the stats. Bucknell and Emory fiddled with the SAT scores they submitted. One set of wealthy parents paid a consultant more than $25,000 to get their child into NYU. Of course we can expect black boxes to produce information asymmetries and attempts at profiteering, à la Akerlof’s used-car salesmen.

Wikipedia’s bots fighting Sisyphean culture wars

Taking a closer look at bot-vs.-bot competition, Milena Tsvetkova et al. find that “Even Good Bots Fight”, using Wikipedia data to show that Wikipedia’s maintenance bots are constantly adding and deleting each other’s edits. What’s more, “just like humans, Wikipedia bots exhibit cultural differences”. As Pedro Domingos points out in his “10 Myths about Machine Learning”, “not all learning algorithms start with a blank slate; some use data to refine a preexisting body of knowledge”. And that’s how we get bots tirelessly waging quiet editorial culture wars on Wikipedia.