I appreciate the overall warning in this post, but my own experience with technology has been very different. As a curious person, the mapping technology you say makes people less aware of their surroundings is exactly why I have a great mental map of my town, and of every town I’ve ever spent more than a week in. I love…
I think there’s more at stake than just trust — the humans who work alongside AI want to be able to learn from it. Explainability helps us see new connections that we wouldn’t have made before, and helps us understand how to use AI to solve even more complex problems.
But it still seems like a backwards approach to build in explainability after the fact.
Hi Mark, great analysis! I think you’d be interested in the business model of Mind AI, which lets individual knowledge workers license their intellectual property by contributing to our ontology database. It’s an AI-meets-blockchain approach that addresses many of the problems you’ve identified in this piece.
That’s cool, but I stand by my original statement: your list left out some of the best follows on CT. Publishing a second list of “women in X” never makes up for the harm of omitting influential women from the original list.