Tony Wilson
Jul 27, 2017 · 1 min read

Hmm, I side with Elon Musk here. Admittedly, I really think we’re in more of the machine learning era and a long way from AI, but it is a powerful technology that warrants caution. I find it especially worrisome that we do not truly understand human intelligence (e.g., thoughts, memories, consciousness) and that we naturally tend to anthropomorphize everything; approaching AI from this perspective gives us roughly a one-in-three chance of a good outcome.

Generally speaking, a situation gets better, gets worse, or stays the same. Let’s apply this to the AI issue. Better = AI leads to benevolent artificial intelligence (e.g., robots). Worse = AI leads to artificial intelligence that is in direct, hostile conflict with humanity. Stays the same = AI leads to an artificial intelligence similar to our projections of AI and humanity intersecting, as seen in science fiction — life imitates art quite frequently with respect to sci-fi, and even in our sci-fi ruminations, half of the stories end badly for humanity.

Pragmatically speaking, we should proceed with caution in this area.
