I read through a decades-old paper criticizing a few unhelpful features of the AI field and its researchers. My reactions and thoughts are kept here as a record.
As a student, I am still very much outside the AI field and out of contact with its vanguards. The overall theme of the communications I am exposed to is one of confidence, excitement and urgency about the state of AI and its progress towards A-General-I. Only 30 years and we’ll have our deity in circuitry* apparently, so I’d better hurry up and join the party.
I’ve only read through a few ‘naysayer’ type pieces, and all of them, including this one, Drew McDermott’s Artificial Intelligence Meets Natural Stupidity, have been really enjoyable for me. I hate thinking that I am naive about something which is important to me, and a little too often I get the feeling about AI that it’s too good to be true. …
I read through Digital Reasoning’s Technology Review of their Synthesys System, which is their main commercial product offering.
Digital Reasoning (DR) starts off by outlining their playground, the booming world of unstructured data. Unstructured data, apparently a whole new class of economic asset, is filled with mysteries, and stories, and conspiracies. DR sets itself up as a company building new approaches to understanding human communication in unstructured datasets, work that will have huge utility to the financial and intelligence communities in particular (their target customers, I assume).
As described, DR’s Synthesys machine learning platform seems to be a straightforward application of supervised learning techniques to natural language, with some unsupervised learning as well. They mention that unsupervised techniques are used in their analytics, but never expand on it. …
Read through Google Brain’s AMA on r/machinelearning if you have forgotten what true yearning feels like. Man what a great job to have. Maybe one day guys, maybe one day.
“Do you need a PhD?” is a painfully common question asked of ML researchers at Google, and the answer is pretty clear: no, you don’t strictly need one, since 22% of the Google Brain team don’t have one, though the clear majority do.
Chris Olah is a clear outlier, having not completed a university degree. Damn impressed with you Chris.
About 5% of the questions in the AMA were related to education for a career in ML. That’s around 40 questions, all with about 1 up-vote each. …