Has DeepMind Really Passed Go?
Gary Marcus

What DeepMind and all other digital A.I. systems appear to have trouble with, no matter how many analytical and decision-making layers they possess, is the simple fact that they do not think the way we do. We are analog; they are binary. We see shades of gray, while they see only yes or no.

I suspect what may be needed to bridge the “gap” is a differently “wired” processor or two in the system that can deal effectively with analog processes, perhaps as a filter or as a comparative-refinement “plug-in”.

Go is a very analog game in which there may be three or more “right” moves (unlike Chess), and the player makes a preference-generated decision rather than a “most right” decision. I play Go and am trounced far more often than I win. Yet I still play it because I like the challenge it presents. It can be frustrating, even maddening, and success ultimately depends on the personality types of the players. This is not something that can be programmed or chosen; it has to develop over time. Perhaps somehow interconnecting DeepMind with one of the “learning”-patterned experimental computing systems that has a sensory system as its input would help (provided the systems are compatible).
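As a toy illustration only (not a description of anything DeepMind actually does), a “preference-generated” choice among several near-equal moves could be modeled as sampling in proportion to hypothetical move scores, rather than always taking the single top-rated move; the move names and scores below are invented:

```python
import math
import random

def preference_pick(move_scores, temperature=1.0):
    """Sample a move in proportion to its score (softmax selection)
    instead of always taking the single highest-scoring one.
    `move_scores` maps a move label to a hypothetical evaluation."""
    moves = list(move_scores)
    weights = [math.exp(move_scores[m] / temperature) for m in moves]
    return random.choices(moves, weights=weights, k=1)[0]

# Three near-equal "right" moves: any of them may be chosen,
# with the slight score differences only nudging the odds.
candidates = {"approach": 0.52, "pincer": 0.51, "tenuki": 0.50}
print(preference_pick(candidates))
```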

Honestly, while DeepMind is very smart as a computing system, it may be too “smart” for its own good. Over-analysis can make even the fastest program sets stumble, and Go is very seductive in this way: it invites over-analysis while offering (as I noted above) more than one equally right result. Perhaps DeepMind needs to be willing to risk a move and learn from it rather than always choose the best result, especially when there are several “best results” on offer. I wonder whether letting it lose and learn might be the best way for it to refine its Go play; after all, that is how everyone else learns the game.
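Again purely as a hedged sketch, and not a claim about how DeepMind is actually trained, the “risk a move and learn from losing” idea resembles a simple explore-and-update loop; the move labels and win rates here are made up for the simulation:

```python
import random

def play_and_learn(values, counts, win_prob, games=1000, epsilon=0.1):
    """Epsilon-greedy sketch of 'risk a move and learn from it':
    usually play the currently best-rated move, sometimes gamble on
    another one, and update the estimate from the win or loss that
    follows. `win_prob` holds hypothetical true win rates used only
    to simulate game outcomes."""
    moves = list(values)
    for _ in range(games):
        if random.random() < epsilon:
            move = random.choice(moves)                  # risk a non-"best" move
        else:
            move = max(moves, key=lambda m: values[m])   # current best guess
        won = random.random() < win_prob[move]           # simulated game result
        counts[move] += 1
        # Incremental average: losses pull the estimate down, wins pull it up.
        values[move] += (float(won) - values[move]) / counts[move]
    return values

values = {"A": 0.0, "B": 0.0, "C": 0.0}
counts = {"A": 0, "B": 0, "C": 0}
true_rates = {"A": 0.48, "B": 0.52, "C": 0.50}   # hypothetical win rates
print(play_and_learn(values, counts, true_rates))
```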
