Feb 23, 2017
Interesting future-focused thesis topic. Watch as I put my foot in my mouth, but there is something a bit off here and I cannot seem to put my finger on it yet. Here are my thoughts, Joël.
- “A Future Without Choice” sounds terrifying.
- A.D. only works when all three actors are well aligned. Hmm, this makes me think of the Project Management Triangle: cost, scope, and schedule; pick two.
- “Designing Against the Experience Bubble” assumes that an experience bubble is bad. Safety, comfort, and community are some good reasons that bubbles exist (also great for entertaining kids). Maybe the focus should be less on designing “against” the bubble and more on learning to expand it.
- “Extended Intelligence” — yes, very much so. Good call.
- Making interactions more human-like also sounds like a road we (designers) have traversed before. How do we avoid the uncanny valley and skeuomorphism? Sometimes a more human-like personality is a bad thing when it comes to building trust. The Polar Express scared a lot of kids…and me.
- Trust through control and transparency works for some users, not all. Those willing to hack their way through APIs and algorithms might like this focus, but making it a guideline for designers seems dangerous to me. Yes, make control and transparency a goal, but don’t directly link it to trust. Trust is built differently for different people. Also, didn’t you say something about “A Future Without Choice” earlier?
- You mentioned that “Rams, Nielsen (1998), Norman (2013) and Shneiderman (2009) are insufficient for automation because principles regarding transparency, control, loops and privacy are missing.” What are you referencing, and doesn’t Norman go deep into the need for visibility? I do believe these “older” design principles still hold up well when it comes to automation.
Despite all that I’ve written, I am glad you’re bringing future thinking to UX. Keep writing + sharing, Joël!
