Support Utopia

A Customer Service Product Fable

Aeons ago, our forefathers left the shores of NoSupportLandia, a land where their intents could not easily be actualized in the native UI, and whose administrators mostly ignored their pleas for service and their many recurring feature requests…

It was a perilous journey, but they held in their hearts a vision which nourished them — a vision of Support Utopia, a shining City on a Hill where the pleas of the people were heard, understood and answered in a timely and courteous manner, and the Product-Itself provided clear well-documented pathways to actualize user intent such that their requests became fewer and fewer until they eventually disappeared…

Support Utopian Ideals

At the core of their vision, these brave Crusaders believed that someday an intelligent UI/UX would, could, or should exist which creates and updates its own multimedia documentation and answers both natural language questions and keyword searches related to its functionality…

  • No longer would support technicians be forced to write documentation from scratch and maintain up-to-date descriptions of an ever-evolving product [the map is not the territory] — the product would describe itself programmatically, and be capable of automatically creating and updating text instructions, screenshots, animations and short videos demonstrating its features and functionality.
  • Updates to the product would auto-generate changelogs keyed to UI elements and functionality. When changes were deployed, the product UI itself would give recommendations for review to support technicians, who would simply check what little documentation did exist to make sure everything was in agreement, and then okay it, adding their 💖 of approval.
  • All information about the UI and how it works would be contained in one set of underlying data points which populated/broadcast information to all subsidiary corpora/documentation on other platforms/systems (e.g., bots).
  • No longer would a Help Center exist wholly apart from a product — it would be embedded experientially throughout. A user could query/inspect an element directly to gain information about how best to utilize it. (3D Touch?)
  • Tickets, feedback and feature requests could be opened from anywhere, and the activity path which led to creating a ticket would be appended to a transcript for analysis… leading to more accurate, element-specific measurements of bugs (including steps to repro, plus the ability to accurately measure against the documented purpose/expected behavior of elements) and enabling granularity of feature requests linked to specific elements/activities (i.e., user intents lacking apparent paths to actualization).
  • Automated natural language chat would be available in-product at any point, without requiring sign-in or third-party accounts, to solve the majority of questions before they ever became tickets. Where tickets were necessary, automated chat would collect all relevant user information (including account info, browser, device, OS, initial problem description, etc.) required to move forward before a ticket was ever filed for review by a human agent.
  • Tickets, when they came in, would be automatically sifted using natural language processing, and macros/triggers would fire automatically for the simplest tickets, or flag for review more complex inquiries and actions requested by users. Ticket-answering macros would in effect be powered by the same natural language question-answering technology as automated chatbots.
  • Agents would have access to cleanly organized mostly self-updating workflows, rule sets, macros, admin tools, prior actions taken (jurisprudence), open issues — all from ONE interface — to enable them to accurately assess what would be the best path to resolution.
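The sifting step above can be pictured with a toy sketch: match a few phrases in the ticket body and either fire a canned macro or flag the ticket for a human. Everything here — the intents, the macros, the function name — is hypothetical, a minimal illustration rather than a real natural language pipeline.

```python
# Toy ticket-sifting sketch. All intents, phrases, and macros below are
# invented for illustration; a real system would use proper NLP.

CANNED_MACROS = {
    "password_reset": "Here's how to reset your password: ...",
    "billing_date": "You can find your billing date under Settings > Billing.",
}

INTENT_KEYWORDS = {
    "password_reset": ["reset my password", "forgot password"],
    "billing_date": ["when am i billed", "billing date"],
}

def sift_ticket(body: str) -> dict:
    """Auto-answer the simplest tickets; flag everything else for an agent."""
    text = body.lower()
    for intent, phrases in INTENT_KEYWORDS.items():
        if any(phrase in text for phrase in phrases):
            # Simple ticket: the macro fires automatically.
            return {"intent": intent, "auto_reply": CANNED_MACROS[intent]}
    # Complex or unrecognized inquiry: route to a human.
    return {"intent": "unknown", "flag_for_human": True}
```

The same matching logic could, in principle, back both the in-product chatbot and the ticket-answering macros, which is the point of the bullet above.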

Holistic Integrated Support

In short, the Support Utopian ideal is a wonderland where the Support Technician(s) can simply kick back and “relax” because most of the work is done for them…

The Support Utopian drinks 🍹 and rests 😴 easy knowing that support is integral to building good 💩, and that the product is more or less self-documenting; the tickets are almost self-answering; the bugs and feature requests largely self-reporting.

Once the feedback loops are in place, the Support Utopian can concentrate on what’s really important…

Is Support Utopia possible?

Definitely maybe. But probably not without a new paradigm of how support is integrated into product development…

Self-Documenting UI

Not being a programmer myself, I’ve been looking into this idea of a self-documenting UI to see if that’s really even possible and what that would look like if it were.

  • Self-documenting code seems to be a “thing.” The idea is that in your code, things are arranged in such a way that any new coder could come in and easily understand it…

I guess what I’m proposing would be that:

Self-documenting code informs and integrates directly with what becomes publicly available documentation, surfaced directly in the UI.
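As a rough illustration of what “the code describes itself” could mean: in Python, docstrings are attached to the very objects they document, so a generator can pull user-facing help straight out of the code rather than from a separate manual. The button class and its wording below are made up for the example.

```python
import inspect

class NewStoryButton:
    """Write a new story.

    Clicking this button opens a blank story editor.
    Equivalent API call: publishPost.
    """

    def on_click(self):
        """Open a blank editor for a new story."""

def ui_help(element) -> str:
    # The help text lives in the code itself, so the docs can't
    # drift away from the element they describe.
    return inspect.getdoc(element)
```

Because the description travels with the element, updating the code updates the documentation in the same commit — a small-scale version of the self-describing product imagined above.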

UI Inspectors

Sounds kind of abstract, but if you open the Inspector in your browser, you get a glimpse of one conceptual pathway by which UI elements might give information about themselves:

I don’t really understand all of that myself, but I see it as a demonstration that useful meta-documentation could be somehow built into UI elements…

Or take the Accessibility Inspector on Mac. Borrowing the screenshot from link to demonstrate:

It enables you to hover over Mac desktop UI elements and get more information about them, including their names and position within a hierarchy of actions/menus, etc. (Which I came across while reading about using AppleScript to control Sonos with macros in Keyboard Maestro. Presumably VoiceOver must hook into this info as well…)

For my money, that’s starting to look almost like something you’d see in JSON, name-value pairs describing an object — but in this case describing the appearance and functionality of a UI element:

{
  "name": "button_newStory",
  "display": "Write a new story",
  "action": "links to",
  "graphic": "none",
  "size": "160x40",
  "api-equivalent": "publishPost",
  "child-options": "none"
}

Self-Documenting APIs

Speaking of macros, JSON, and users re-writing app functionality to suit them, I also found some interesting things around so-called self-documenting RESTful APIs, a la Swagger and YAML (neither of which I properly understand — so maybe I’m way off base here…)

Check out this demo of a pet store API in the Swagger UI, and a screenshot from same:

I envision documentation not unlike this created programmatically from UI elements which convey their functionality through JSON (or whatever). From it, you could even generate simple targeted example screenshots and animations of functionality… Access to such docs would be integrated into the UI elements where they appear “in the wild.”
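To gesture at how UI metadata might feed API docs: the same kind of metadata could be folded into a minimal OpenAPI-style structure like the ones Swagger renders. The path, method, and mapping below are invented for the example — a real spec requires far more detail.

```python
def to_openapi_stub(meta: dict) -> dict:
    """Build a minimal OpenAPI-like path entry from UI element metadata.

    Hypothetical mapping: the element's "api-equivalent" becomes the
    operationId and path, and its display text becomes the summary.
    """
    op = meta["api-equivalent"]
    return {
        "paths": {
            f"/{op}": {
                "post": {
                    "operationId": op,
                    "summary": meta["display"],
                }
            }
        }
    }

stub = to_openapi_stub({
    "display": "Write a new story",
    "api-equivalent": "publishPost",
})
```

Feed a structure like this to a Swagger-style renderer and you get browsable, element-linked API docs generated from the same data points that describe the UI.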

Putting it all together⛵

So the support documentation (available on inspection of any element of the UI) would be baked into the code at a really micro level, and would auto-populate data points in a knowledge base/help center on which the rest of the support channels would ultimately be based.

If you can link all that back into ticketing and natural language processing, and close the loop for bugs and feedback, I feel like Support Utopia could become more than just a dream. It could become reality…

Keep on dreamin’!💤