We often want to supplement what we see with what someone else sees: we ask our friends to read what we write, check our math, and review our code. We want to borrow the perspective of someone who “knows” stuff but looks at the same thing from a different point of view, with the caveat that we need a good idea of how the points of view differ, so that we can integrate what we have seen with what they have seen into a better product. Having an interface through which we can ask for the Google algorithm’s input is, in principle, a great idea. Google certainly “knows” a lot of stuff, and algorithms almost invariably have a different “perspective.” Done right, Google can dispense incredibly valuable insights.

The problem is whether we can productively appreciate the difference in perspective. Google will be no more right than we are, except in the narrow sense of minimizing a particular cost function (or functions; optimization problems for neural networks with many nodes can get kinda complicated). To make productive sense of the difference in perspective, we need at least a good idea of how the perspectives differ. One major problem is that, while we know neural networks are often more “right” (because they can optimize complex loss functions better than the alternatives), how exactly they get things “right” is often mysterious. Without understanding the how, can we productively appreciate the difference in perspective that Google’s algorithms offer, as a supplement to the alternative perspectives?
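The point that an algorithm is “right” only relative to its chosen cost function can be made concrete with a toy sketch (the numbers are made up, and this is obviously not Google’s actual objective): the same data yields different “best” answers depending on which loss you minimize.

```python
# Toy illustration: "right" depends on the cost function you chose.
# Hypothetical observations with one outlier.
data = [1.0, 2.0, 3.0, 100.0]

def squared_loss(c):
    # Sum of squared errors: heavily penalizes the outlier.
    return sum((x - c) ** 2 for x in data)

def absolute_loss(c):
    # Sum of absolute errors: much less sensitive to the outlier.
    return sum(abs(x - c) for x in data)

# Brute-force search over candidate one-number summaries of the data.
candidates = [i / 10 for i in range(0, 1001)]
best_sq = min(candidates, key=squared_loss)    # lands on the mean, 26.5
best_abs = min(candidates, key=absolute_loss)  # lands near the median, ~2.5

print(best_sq, best_abs)
```

Both answers are perfectly “right” under their own objective, yet they disagree wildly about the data; the disagreement tells you about the loss functions, not about who is correct.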