Connecting with real-world entities: Is structured content missing a trick?
Joe Pairman

Joe, I am more and more coming to the conclusion that there is a deeper problem here which is preventing people from adopting this obviously useful technology. People are having a very hard time learning to think in hypertext.

They are, to be sure, having no problem at all reading and researching in hypertext. Reading hypertext, and even reading linear texts as if they were hypertext (an activity largely enabled by Google), comes very naturally to us.

Writing in hypertext, not so much.

The problem with writing hypertext is that, by its very nature, a hypertext page is a node in a network connected by a web of subject affinities. It is hard to write an effective page without working straight through it in a linear flow. But it is very hard to maintain that flow if you have to stop and think through all the junction points in your content. You can't go forwards and sideways at once, yet that is just what hypertext demands.

Soft linking, of course, provides a technical answer to this problem: simply mark up the subject affinity and move on. But by itself this does not seem to conquer the sense of lostness people feel when trying to write nodes of a hypertext.
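To make that concrete, here is a minimal sketch of what a soft-linking pass might look like. Everything in it is hypothetical (the {phrase|subject} markup, the subject index, and the resolver are invented for illustration, not taken from any particular tool): the author marks the affinity and keeps writing, and linking decisions happen in a separate, later pass.

```python
import re

# Hypothetical soft-link syntax: the author writes {phrase|subject}
# inline and keeps going; no URL, no target page, no decision to make.
SOFT_LINK = re.compile(r"\{(?P<phrase>[^|{}]+)\|(?P<subject>[\w-]+)\}")

# A subject index maintained outside any one document; in a real
# system it would be generated from the content set itself.
SUBJECT_INDEX = {
    "gearbox": "/components/gearbox",
    "lubrication": "/maintenance/lubrication",
}

def resolve(text: str) -> str:
    """Turn each marked affinity into a hard link if the subject is
    indexed, or plain text if not. Either way, the author's linear
    flow was never interrupted."""
    def repl(m: re.Match) -> str:
        target = SUBJECT_INDEX.get(m["subject"])
        return f'<a href="{target}">{m["phrase"]}</a>' if target else m["phrase"]
    return SOFT_LINK.sub(repl, text)

print(resolve("Check the {gearbox|gearbox} before every {run|operation}."))
# Check the <a href="/components/gearbox">gearbox</a> before every run.
```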

A complex hypertext is hard to visualize and hard to verify. It has so many pathways: how can we be sure that they are all correct, all lucid, and all useful? Beyond a pretty small scale, you can't. How is an author to get the feedback they crave that they have finished their work (work in the sense of opus) correctly?
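A toy back-of-the-envelope calculation (the numbers below are invented) shows why verification stops scaling: the links grow linearly with the content, but the pathways through them do not.

```python
# Invented numbers, purely illustrative: verifying every link is
# linear work, but verifying every pathway a reader might actually
# take through the content is combinatorial.
pages = 200          # a modest content set
links_per_page = 5   # outbound links per page
depth = 4            # hops a reader might follow in one session

links_to_check = pages * links_per_page     # 1,000
pathways = pages * links_per_page ** depth  # 200 * 5^4 = 125,000

print(f"links to verify: {links_to_check:,}")
print(f"four-hop reading pathways: {pathways:,}")
```

Checking that every link resolves is automatable; checking that every pathway is lucid and useful is not.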

The wider uses of annotating real-world entities go beyond linking, of course, but that just makes the feedback problem worse. The more interconnected the content becomes, the harder it is for an author to understand how it behaves.

The problem of individual contributors being unable to understand the full behavior of a system is not unique to content, of course. It is present in all sorts of engineering applications, where it is more or less certain that the full behavior of the system is unknown and unknowable, and where the best you can do is to mitigate risk and build in feedback loops at all levels, so that you learn more about the machine during its deployment. (One of the lessons of the Hubble Space Telescope project was that service missions are essential.) Beyond a certain point of complexity, all development is iterative, continuing on the product in the field.

But authors are not used to building systems like this. It is not part of their culture, and not part of their toolset. The idea of finality and authority attached to published work makes the idea of iterative refinement of published work hard to swallow. (Writers are very used to dealing with other types of uncertainty, especially regarding how different readers will interpret content.)

But we do need to get past this. Hypertext is an indelible part of our culture and work today. Failure to support the hypertext reading habits of our readers, and to exploit the ability to link content to objects in the real world, is costing us substantial opportunities. Works that are final, authoritative, linear, out of date, and wrong are simply not an acceptable standard anymore.

New tools that natively support hypertext are going to be a necessary part of this evolution, but the tools themselves are not enough. We need to find a way for writers to get comfortable with the idea of hypertext, and with the necessary loss of individual control, over both the reader and the content, that goes with it.
