Different economics require different standards processes
This is utterly true. Procedurally, the W3C process favours implementation. But it also doesn’t present the whole picture.
First off, the W3C is usually dealing with specifications that have very particular economic dynamics.
Second, the W3C has, off the top of my head, three different processes, two of which have evolved explicitly because the standard W3C process wasn’t working well.
1. The economics
Historically, the W3C has tried to solve two different problems.
The first is to harmonise the implementation of a particular feature of the web stack across vendors. For example, several browser vendors see the value in addressing a particular problem area and need to make sure that the solution is compatible across browsers. This has sort of worked because browser vendors realised that incompatible APIs and features reduced the value of their browser to pretty much everybody. Compatibility maximises adoption and value because they share a platform.
The key here is that you already have more than one browser vendor who has figured that the marginal profit (marginal value added minus marginal cost of implementing) of a feature is high enough to be worthwhile.
The second problem the W3C has tried to solve is when it sees the web stack as incomplete in some way or wants to push it in a particular ideological direction. These tend to be efforts that are high on ideology and low on economic rationale: early flavours of RDF, various flavours of XHTML (XHTML2 was a big enough flop to merit its own mention), and similar messes.
The W3C has generally been much less successful at solving the second type of problem than the first (those efforts often end up as utter disasters). The reasons should become clear when you look at the processes.
2. The processes
The W3C’s normal specification process, in broad strokes: a committee of interested vendors assigns editors they trust to write a specification; that specification then becomes a firmer and firmer recommendation as more and more vendors implement it and hash out incompatibilities.
This works well when the economic picture is clear: the value of the feature is obvious to all participants and there’s no particular economic advantage for any one of them to hold the others back.
The process doesn’t work well under a few circumstances:
- The marginal profit of a feature is unclear, i.e. browser vendors either aren’t sure if the web community will value the feature or aren’t certain about how hard it will be to implement.
- The feature clearly has no near term marginal profit but the W3C (or somebody else) is driving the feature because they firmly believe in some long term picture.
- Vendors are divided because the marginal profit of the feature varies a lot from vendor to vendor. For example, Apple often doesn’t have the same economic dynamic as Google, and Mozilla’s values tend to differ considerably from the rest.
The second process at the W3C came about because of tensions between vendors and the W3C: the consortium pushed for XHTML2 while vendors wanted a more pragmatic approach. (This simplifies a complex situation a lot, I know.) That tension led to the WHATWG and its set of specs, followed by a reconciliation and harmonisation process of sorts. The end result is a complicated mess that is extremely confusing to outsiders. Obviously, HTML5 walked its own walk when it came to W3C specification processes.
Reusing the WHATWG/W3C hybrid process for other specs is clearly inadvisable, but it is evidence that there has been considerable dissatisfaction in the past with how the W3C has done things.
The third process is, in my view, a consequence of two realisations:
- The W3C is institutionally a really bad judge of what’s a good idea.
- The marginal profit for a lot of potential web features is very hard to gauge at the outset. I.e. the ‘is it worth implementing?’ question is hard to answer.
This has led to the rise of a community-oriented standards process in which a community incubates and debates a feature before it graduates to a standards track at the W3C. IndieWebCamp has done this with Webmention and Micropub. It has also been done informally with several specs that were developed in ad hoc outside communities before being brought to the W3C. Chromium/Blink has formalised this as its preferred specification path by adopting an incubation-first standards policy, directing participants to take proposed web specs through the Web Platform Incubator Community Group. It should be noted that, since this is a community group, participation is open to all, unlike the W3C’s standard working group format.
The community incubation process is a quick, iterative way for developers and vendors to discover how hard a feature is to implement, which design for the feature is the most viable, and how much value it would add to the platform: it allows for a dramatically simpler discovery of a feature’s marginal profit than the other processes do.
As my prior series of posts and tweets should make abundantly obvious, I think that a merger of the IDPF and the W3C is a bad idea and I’m sceptical of the direction that the specs driven by the publishing industry at the W3C are taking.
The digital side of publishing suffers from a very hazy economic picture. Layoffs have been abundant in the past year. A lot of its digital talent, including people with standardisation expertise, is now unemployed. Sales have been going down. Amazon’s dominance makes the ROI on tech investment unclear. Publishing industry processes aren’t particularly compatible with web industry processes. We don’t know if any of these issues can be addressed by new specifications at all. We don’t know if the various publishing industries will adopt new specifications even if they get specified. We don’t know if any vendor will implement them. We don’t know if publishers will use them even if they do get implemented. The marginal profit of new specifications is utterly unknown. Not acknowledging these uncertainties and the risk they pose to tech specification is irrational.
The W3C’s standard process is not going to work under these circumstances. Adding the IDPF and its cluster-mess of a format into the mix will not improve the odds one bit.
The only viable process under these circumstances is the community incubation model. Take the features that the publishing industry needs and that might interest browser vendors through the Web Platform Incubator Community Group. In parallel, the W3C should set up a Publication Incubator Community Group, structured in the same way, for specifications that are less interesting to browser vendors but are publishing-industry specific and relevant to non-browser vendors.
Going the community route is more democratic, more transparent (transparency has been sorely lacking in publishing industry standards work), better suited to publishing industry budgets, and, in my opinion, more likely to succeed.
Excluding the community by continuing with a closed process is not a good idea when the digital side of publishing is facing so many uncertainties in its near and distant future.