Centripetal Standardization:

Top-Down and Bottom-Up Vectors of Value Creation

--

Image credit: Joey Gao

Introduction

Standardization of capability-based credentials has the potential to better preserve and communicate the value of an individual’s skills over time and across geographic, cultural, and linguistic barriers. An analogous project, the Diagnostic and Statistical Manual of Mental Disorders (DSM), has been invoked as a model for credential standardization because it represents an effort to arrive at a common language to describe mental illness and prescribe clinical interventions, placing doctors and patients around the globe in conversation with one another to build an ever-evolving understanding of mental health.

The DSM evolved in response to a social need to catalog, document, and alleviate psychological and behavioral impairment in the United States. The obverse question is now being asked by some educators and industry actors: is it possible to standardize capabilities? The impetus for this question is emerging from a market economy in which potential employers, investors, and partners want to understand how an individual’s skills will translate into value for their businesses. These actors are increasingly finding the traditional degree model of measuring achievement, based on grades and credit hours, to be an imprecise indicator of skill — and therefore of value. However, no more precise system of measuring skill has yet emerged.

In this white paper, I attempt to think through this impasse by developing a sociologically-informed account of standardization processes, which are in fact social processes of value creation. I draw on examples of standardization to construct a theory of “centripetal standardization,” which articulates how value is created through simultaneous top-down and bottom-up processes of exchange between individuals and communities attempting to meet their needs and secure desired outcomes while negotiating the openings and closures of trust. The paper goes on to hypothesize that the resulting standards can be mapped along a power-law distribution, which reflects not only the inertia of cultural inheritance but also differential authority distribution between the communities that comprise vectors of standardization.

Neither the top-down nor the bottom-up approach to value creation is “better.” Rather, they solve for different aspects of the ongoing social process of standardization. In some instances, however, standardization processes may skew far toward one pole or the other, potentially to the detriment of the communities making use of the resulting standards. For this reason, foregrounding standardization as a centripetal process — dependent on vectors emerging from opposite sides to coalesce around a center-in-motion — should give educators, industrialists, craft practitioners, and government a framework for stewarding standardization toward standards that are precise, widely accessible, and reliable. At the same time, such standards must remain agile, able to adapt quickly to the needs of the communities who make use of them and are impacted by them.

1. Defining Standards and Standardization

Standardization is an ongoing social process that produces bases for comparison.

Standards are bases for comparison which enable exchanges of information conveying what is valued and how it is valued by the communities which engage in standardization.

Standardization must be considered prior to standards themselves. This is because standards are not fixed, eternal entities, but the result of ongoing social processes. From a functional standpoint, standards serve the purpose of facilitating exchange in the context of uncertain and fluctuating values, a state of affairs which characterizes any form of group sociality.

1.1 Standards of Content and Standards of Exchange Value

In order to facilitate exchange, standards may convey information about two things:

  1. What is being exchanged
  2. For what it is being exchanged

The first type of information — “What is being exchanged” — refers to a standard of content. For example, when I say that I want to trade my car for something else, I am relying on a standard cultural understanding of what a car is. However, the word “car” can encompass a whole range of meanings which may or may not include what I mean by car, or what my potential trade partner means by car, in this particular instance. This is where legal instruments like legislation and contracts can establish more precise standards when necessary.

The second bit of information — “For what it is being exchanged” — refers to a standard of exchange value. That is, it conveys the terms in which the subjective worth of the first object can be expressed. I may choose to trade my car for a diamond ring, or a thousand sacks of flour, or $3,000 in cash, depending on the social context and its appraisal by standardizing parties (an antique car broker, Kelley Blue Book, or Sotheby’s). Alternatively, I may be coerced into exchanging my car for something which I do not consider to be of equal or proximate value in order to avoid an unpleasant outcome — which then factors into the exchange value equation.

1.2 Currency as Translating Idiom of Exchange

Price is a designation of exchange value. Accordingly, price is a standard. Most people are familiar with price fluctuations and understand that they convey information about the changing worth of an object or set of objects over time. Price is, in turn, often expressed in terms of currency, which itself is an object fluctuating in value — currency itself is a standard.

Prices expressed in terms of currency have the advantage of facilitating exchange in much more precise ways than prices expressed in terms of barter-based object equivalencies because currency functions as a translator, able to create more reliable equivalencies between very different categories of thing, the exchange of which may otherwise involve protracted and uncertain negotiation — often to the point of infeasibility. Although many currencies in circulation today were once pegged to material standards of content, like gold or silver, the content of most currencies has now been reduced entirely to a representation of the fiat of a nation-state. In other words, fiat currencies are substrate-independent, though they may be represented by various authorized substrates (e.g. paper, coins, digital tokens); their only real content, however, is sovereign will: they are “legal tender” whose acceptance the state is legally obligated to enforce. The overriding function of fiat currencies is, of course, indexing exchange value. Their exchange value is, in turn, influenced by sovereign will qua monetary policy, though monetary policy affords only limited control over precise exchange values.

With the advent of Bitcoin, however, a quasi-currency now exists whose content has been shorn even of the will of any sovereign entity. Though it is materially mediated through machines forming a peer-to-peer network running a cryptographic protocol establishing a blockchain of transactions, it is not recognized as legal tender by nation-states, which accordingly do not guarantee transactions conducted through it. Bitcoin’s exchange value is, therefore, dependent exclusively upon the degree of its adoption as an idiom of translation for exchanges of all kinds. This adoption is itself tied to the mutability resistance of Bitcoin’s blockchain: how hard it is to change the past by rewriting the chain in such a way as to convince all other computational participants that the amended chain is the ground truth.
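The intuition behind this mutability resistance can be illustrated with a toy hash chain. The sketch below is a deliberately simplified stand-in, not the Bitcoin protocol itself (which adds proof-of-work, difficulty adjustment, and network consensus): each block commits to the hash of its predecessor, so rewriting an earlier transaction breaks every subsequent link.

```python
import hashlib
import json


def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


def append_block(chain: list, transactions: list) -> None:
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})


def chain_is_consistent(chain: list) -> bool:
    """Check that every block still points at its predecessor's current hash."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain))
    )


ledger: list = []
append_block(ledger, ["Alice pays Bob 1 BTC"])
append_block(ledger, ["Bob pays Carol 0.5 BTC"])
print(chain_is_consistent(ledger))  # True

ledger[0]["transactions"] = ["Alice pays Mallory 1 BTC"]  # attempt to rewrite history
print(chain_is_consistent(ledger))  # False: the tampering is detectable
```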

The absence of state backing has proved to be not an obstacle for Bitcoin but an asset: cryptocurrency has leveraged significant public mistrust of sovereign power, including doubts about its ability to enforce, fairly and responsibly, a monetary policy that benefits the public rather than exclusively social elites. Bitcoin is thus perhaps the first example of truly “pure” (that is, contentless) exchange value in human history — precisely in response to the corruptibility of top-down regimes of standardization.

1.3 Grades and Credit Hours: Translating Idioms of Exchange

But currency is not the only medium which can facilitate exchange as an idiom of translation; any unit which serves as a rubric of value may do so. In the context of credentials, which is our subject here, two traditional units of value have been employed in recent history: the grade and the credit hour. Both have been used to index skill in learning contexts.

However, as students shift from a context where skill is translated into grades and credit hours to a context in which skill is exchanged for currency (i.e. employment), many are discovering that the worth of their skills in terms of currency may have little to no correlation with how those skills stack up in terms of grades and credit hours. Individuals with many credit hours of instruction behind them and high grades are not necessarily more valuable to employers than those with fewer credit hours and lower grades.

As imperfect as grades and credit hours are for indicating the content of skills or their exchange value, they continue to be employed as indices of skill in large part because they are denominated in quantities and therefore easily translatable (that is, exchangeable) into currency values (price) by the education industry. Accordingly, because degrees are functions of both grades and credit hours, their exchange value can also be quantified with relative ease.

The translation of skill acquisition into currency, and vice versa.
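As a rough illustration of this quantifiability (a generic formulation, not any particular institution’s grading or pricing model), a grade point average weights grade points g_i by credit hours c_i, and a degree’s price can be approximated from the same credit-hour denomination:

```latex
\text{GPA} = \frac{\sum_i g_i\, c_i}{\sum_i c_i},
\qquad\qquad
\text{degree price} \approx \Big(\sum_i c_i\Big) \times \text{tuition per credit hour}
```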

For their part, potential employers, investors, and business partners often lack the time or wherewithal to explicitly measure skill. A degree or certification then becomes a general (though unsatisfactory) indicator of the graduate’s capacity to persevere through a long-term project to completion with some success. Moreover, the institution which conferred the credential may also index other social values like prestige and social class, giving their graduates an advantage in the professional opportunity market. Prestigious institutions may then factor the value of their prestige into the price of tuition.

It has become increasingly clear in recent years that while prestige certainly fuels economies by virtue of its exchange value, it doesn’t necessarily produce other valued outcomes: utility; profitability; agility; long-term growth; innovation; loyalty; camaraderie; skillful leadership. A growing number of representatives of various industries claim that they can’t find graduates with the skills they need to fill important positions. In light of these concerns, is there a way to standardize the content of capabilities that better indicates what an individual can actually do?

2. Content Standardization as a Process of Verification

Content standardization is a process that establishes, verifies, and modifies what is meant by a particular standard in a particular social context. Accordingly, individuals who rely on or are impacted by a standard generally trust that process of standardization if they trust the standard that is its outcome. Trust in the process of standardization means, in turn, trust in at least some of the social vectors by which it proceeds:

Vectors of standardization are the currents of meaning-making formed by aggregates of differently-weighted authoritative actors, including communities of verification and impact, which coalesce around shared indexes of value.

In the sections that follow, I outline what is meant here by communities of verification and impact and how they constitute vectors of standardization.

2.1 Communities of Verification: Top-Down Vectors of Standardization

Because no individual can independently verify every standard on which they rely, individuals distribute their trust among social bodies whose responsibility is verifying certain standards of content. Philosopher of language Hilary Putnam noticed this pattern long ago with regard to one set of social standards: words. He proposed a “hypothesis of the universality of the division of linguistic labor,” which he clarifies in more detail as follows:

“Every linguistic community exemplifies the sort of division of linguistic labor just described: that is, possesses at least some terms whose associated ‘criteria’ are known only to a subset of the speakers who acquire the terms, and whose use by the other speakers depends upon a structured cooperation between them and the speakers in the relevant subsets.” (From “The Meaning of ‘Meaning.’” In Mind, Language and Reality. Cambridge: Cambridge University Press, 1975. 215–271, p. 228.)

In other words, I may not know the difference between an oak tree and an elm tree, but there are people who do, and they are the social “keepers” of this distinction. They are what this white paper calls a “community of verification.” The vector of standardization emerging from communities of verification is top-down; it reflects the distribution of social authority to that body to define what is meant by particular standards of content.

A community of verification is a social body to which authority has been delegated to define standards of content. This community may be as small as one person or as large as all of humanity, though it is usually a small subset of members of the social group that makes use of the standards in question (see communities of impact, below). Communities of verification influence standardization processes from the top down, usually on the basis of knowledge, experience, and skill. These characteristics are, in turn, verified by members of a community or communities of verification — and secondarily by communities of impact.

2.2 Communities of Impact: Bottom-Up Vectors of Standardization

However, trust in communities of verification is often tenuous, not only because those communities may be unknown or only dimly known to the individual, but because they are bodies whose interests the individual may perceive to be at odds with their own. Yet there are always simultaneous, bottom-up vectors of standardization: these emerge from wider “communities of impact” who make use of, engage with, or rely upon the standards in question. Although communities of verification may exert leveraged impact because of the authority delegated to them, that impact may be heavily influenced or even overridden by common usage or pushback from communities of impact. In short, communities of verification are authorized not only by other communities of verification but by communities of impact — and that authorization may be contested or revoked.

As authorizers of both communities of verification and standards themselves, communities of impact may have significant leverage in standardization processes. For example, as Ethan Zuckerman has recently written, the impetus for the co-design movement has emerged precisely in response to the disjunction between product standards developed by communities of verification composed of engineers and technologists and the actual use of those products by communities of impact. A stronger bottom-up vector of standardization may be precisely what is needed in situations where a product or service simply is not catching on — or is being used in ways that are completely at odds with the intentions held by the community of verification. The disciplines of consumer and market research are dedicated to making bottom-up standardization vectors more impactful.

A community of impact is a social body that makes use of, engages with, or relies upon standards. Communities of impact often delegate standardization authority to communities of verification; however, this delegation is never complete. Accordingly, communities of impact may influence standardization processes from the bottom up by checking top-down standards against their own experiences and needs, and by employing or not employing them in particular use cases. In some cases, communities of verification may even become coextensive with communities of impact, resulting in a standardization process skewed more heavily toward the bottom-up vector.

2.3 Centripetal Standardization and the Power-Law Distribution

There is no hard and fast social boundary between communities of verification and communities of impact; these terms simply describe ideal poles of how the “authorizing function” is socially distributed. Because processes of standardization are always both top-down and bottom-up, standardization itself may be described as “centripetal”: the result of forces moving in opposite directions which in combination produce a circular vector coalescing around a center-in-motion.

Centripetal standardization is an ongoing social process that produces bases for comparison as a result of inputs from differently-weighted authoritative actors.

Complicating matters, however, is the reality that communities of verification and impact are often highly fragmented, which leads to multiple competing standards of value circulating in the same social space. This fragmentation may be remedied by conscious community-building initiatives, such as the formation of a governing (standardizing) body (e.g. a government committee, professional association, nonprofit, or advocacy group). However, this move often produces indifference, resistance, or defection on the part of competing communities of verification. Moreover, without authorization from communities of impact, creating communities of verification that aspire to dictate the use of standards is likely to be largely ineffective, or to amount to a violent imposition.

This paper hypothesizes that the distribution of values produced by this interaction of vectors of standardization in a social context can be characterized by a power-law curve:

Classic power-law distribution. Credit: Wikipedia.

A power-law distribution reflects a functional relationship between two quantities in which one quantity varies as a power of another. It is sometimes used to describe real-world probability distributions.
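Stated symbolically (a generic formulation rather than a claim about any particular dataset), a power law relates two quantities by an exponent, and a power-law probability distribution takes the corresponding form:

```latex
y = k\,x^{\alpha},
\qquad\qquad
p(x) \propto x^{-\alpha} \quad (\alpha > 1)
```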

This hypothesis suggests a promising avenue for future research: mapping distributions of credential content and seeing whether they do, in fact, describe a power-law relationship. If the relationship holds, the standards that fall within the first 20% of the curve would likely be heavily determined by more leveraged communities of verification with significant buy-in from communities of impact. The remaining 80%, or so-called “long tail,” would likely contain standards produced by less leveraged communities. Although frequently unknown to those outside of niche communities of practice or purpose, and at times regarded as less “reputable” by mainstream opinion (which accords significant weight to the first 20% of the curve), long-tail communities can be important sources of contestation and innovation in standardizing processes while also carrying forward unique methods, bodies of knowledge, and constructive outputs.
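One way such a mapping might be prototyped is sketched below. The adoption counts are hypothetical placeholders, not empirical data, and a log-log least-squares slope is only a rough heuristic; a rigorous test would require maximum-likelihood fitting and goodness-of-fit checks.

```python
# Rough first-pass check of the power-law hypothesis against a hypothetical
# dataset of credential adoption counts (e.g. holders per credential),
# ranked from most to least widely held. Numbers are illustrative only.
import numpy as np

adoption_counts = np.array([50000, 12000, 7000, 3100, 900, 400, 180, 75, 30, 12])
ranks = np.arange(1, len(adoption_counts) + 1)

# On log-log axes a power law appears approximately linear; the slope estimates the exponent.
slope, intercept = np.polyfit(np.log(ranks), np.log(adoption_counts), 1)
print(f"estimated exponent: {-slope:.2f}")

# Share of total adoption captured by the top 20% of credentials (the "short head").
head = int(np.ceil(0.2 * len(adoption_counts)))
print(f"top 20% share of adoption: {adoption_counts[:head].sum() / adoption_counts.sum():.0%}")
```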

2.4 Standardization Case Study: The DSM

In the United States, a community of experts has arisen over time in response to the social demand to document, understand, and ameliorate mental illness. Its origins lie in the 19th century, when federal government census takers sought to understand the prevalence of mental disabilities in the American population. Over the decades that followed, the categories used to document mental illness became more precise, and the study of mental illness developed into a medical science with both research and clinical practices.

This community of scientists and clinicians established academic training programs, professional societies, and government bodies with the intent to standardize and collectively develop the understanding of mental health and mental health interventions. The resulting communities of verification are highly leveraged in determining standards for what qualifies as a mental illness, accounts of etiology and prognosis, and clinical intervention. However, the production of standard diagnostic criteria involves considerable collection of evidence from many broad communities of impact all over the world — patients; research subjects; affected families; populations qua survey data; etc. Scientific practice demands that conclusions be revised in light of new evidence, which can come from anywhere. Accordingly, the DSM is an ever-evolving artifact documenting standards for the identification and treatment of mental illness. These standards are developed in both top-down and bottom-up fashion and, through the DSM’s influence on practitioner education and licensing, tend to fall within the first 20% of the power-law distribution.

However, the top 20% of a power-law distribution never exhausts the social process of standardization; the long tail includes initiatives of knowledge-creation and practice that exceed the social reach or methodological limits of the processes that produce the standards falling within the first 20%. The DSM framework has never been accepted as a universal standard by all researchers and clinicians, even those working within the communities of verification that give rise to it. This is sometimes due to profound differences in philosophical and political orientation among professional practitioners. In addition, many approaches to the production of knowledge and therapeutic practices for addressing mental illness originate from outside of these communities of verification. Accordingly, countless alternative approaches to mental health proliferate, some of which are scientific practices while others are more pastoral or intuitive in orientation. Many of these have their own standardization and certification practices, all of which are collectively located in the “long tail” of the power-law distribution.

The DSM is, of course, far from the only example of ongoing standardization processes; they are ubiquitous. Technical specifications; business best practices; legal regimes; social norms; and educational testing and subject-matter content standards are all the products of standardization processes. Yet can standardization processes extend to capabilities themselves? It is this question — the one with which we began this paper — that will be the focus of the following section.

3. Can Capability Content be Standardized?

So far, we have reviewed how standardization works and taken the DSM as an example of this ongoing social process. Now we address the question of whether it is possible to design credentials that accurately reflect capabilities. To answer this question, first let us define “credential”:

A credential is a verifiable attestation that a certain set of capabilities, experiences, or characteristics has been attained by the credentialed person.

The purpose of a credential is to create confidence and trust in a person’s capabilities, experiences, or characteristics. The content of what is credentialed can be anything from registering a sole proprietor DBA (Doing Business As) to completing one’s Commercial Pilot certificate with Pilot in Command cross country instrument rating. Many credentials can only be conferred by communities of expert practitioners, such as a PhD in Biochemistry or an MD’s board certification, while others can be conferred by anyone, or even conferred automatically once a person completes certain triggering actions. For example, millions of people have been ordained as clergy through the Universal Life Church by simply clicking a link on its website; in most U.S. states, this legally authorizes them to perform weddings.

As is clear from these examples, some credentials are easier to standardize than others. The more complex the social function of a credential, the more complex the process of standardization becomes. Capabilities may be one of the more difficult content types to standardize because their social functions often shift rapidly. Accordingly, efforts at capability-based standardization have so far produced either broad capability ranges or precise, practice-based definitions. In what follows, I examine a few current attempts at capability standardization, beginning with approaches that skew top-down and ending with a few examples that skew bottom-up, in order to better understand how both vector types function in this context.

3.1 Top-Down Standardization Efforts in Europe and the United States

Today, a major effort at standardizing capabilities-based credentialing is taking place in the European Union, where the European Qualifications Framework (EQF) is attempting to establish equivalencies across national and institutional boundaries. Since 2012, every credential issued within the EU has carried a reference to its EQF level. The communities of verification behind the EQF are national Accreditation and Quality Assurance bodies, which are themselves audited extensively by the EU before they are granted entry into the organizations responsible for the EQF standardization process: the European Quality Assurance Register (EQAR) and the European Association for Quality Assurance in Higher Education (ENQA).

To gauge capability, the EQF draws on what could be called “level of instruction,” which roughly corresponds to educational tier. Thus, Level 1 corresponds to Primary School, while Level 8 corresponds to a PhD. Accordingly, the EQF also draws heavily on degree equivalencies developed by the European Credit Transfer and Accumulation System (ECTS), which is an attempt to standardize what degrees mean across EU member countries. The presupposition behind this rubric is that the higher the level of education, the higher the level of qualification. While there is some truth to this, it by no means exhausts the range of capabilities an individual cultivates through non-classroom means such as work experience, family responsibilities, and informal invention.

A similar framework for standardizing capabilities, Connecting Credentials, has been devised in the United States by the Lumina Foundation, a nonprofit whose stated aim is to increase the number of Americans with postsecondary education credentials. The Connecting Credentials framework also consists of eight levels, which closely reflect those of the EQF (although it is unclear whether the Lumina Foundation drew on the EU as a model for credential standardization). However, unlike the EQF, it breaks skills into three categories: Specialized Skills, Personal Skills, and Social Skills. In addition to developing Connecting Credentials, the Lumina Foundation has also worked closely with the Association of American Colleges & Universities (AAC&U) to develop a Degree Qualifications Profile (DQP), which specifies the skills a student should have upon receipt of a Bachelor’s Degree, Master’s Degree, and PhD. The DQP thus mirrors the impetus behind the ECTS as well.

These projects are formidable; however, their capability classifications are highly general owing to the broad scope of their intended applicability. They also proceed along different social trajectories, as suggested by the different ways the communities of verification involved are structured. In the EU, the communities of verification comprise a transnational governing structure which leverages organizations of national governance to derive legally enforceable standards. In the US context, it is nonprofit organizations, working closely with volunteer educational institutions, that are at the forefront of capability-based credential standardization.

All of these communities of verification, moreover, are located within the education sector. In other words, no industry-based communities of verification set standards for the skills a college graduate must have in order to be employable in entry-level positions in their industry. That is why, from an employer’s perspective, it can still be unclear how, say, a Level 4 credential or a Master’s Degree translates into the ability to do the things required to be successful in a specific job. At present, then, firms comprise communities of impact rather than communities of verification with regard to education credentials that fall within the top 20% of the power-law distribution. In practice, this means they are taken into account by universities and educational credential standardization bodies, but distantly and imprecisely.

In response, corporations often construct communities of verification outside the formal education system by developing their own credentialing frameworks: for example, the Nanodegrees that AT&T and Google offer through Udacity, Microsoft Certifications, and Motorola’s Six Sigma (developed by Bill Smith at Motorola but made famous by Jack Welch at GE). However, many industry credentials are designed precisely not to be translatable into other industrial contexts in order to preserve the market advantage of the company issuing the credential by locking in its labor force. Thus, from the student’s perspective, an industry credential doesn’t necessarily have a higher exchange value in the marketplace than a formal education credential. In fact, a formal education credential may function as a de facto prerequisite for later industry credentials by gating access to the very jobs in which those industry credentials can be acquired.

The aforementioned initiatives are all centripetal; however, they are skewed toward their top-down vectors by relying on tightly-delimited communities of verification that exercise overriding influence in the standardization process. In what follows, I examine a few examples of capability standardization that skew toward more bottom-up approaches.

3.2 A Few Bottom-Up Approaches

New practice-specific capability-based credentialing frameworks are emerging that expand their communities of verification from a select body of expert practitioners to the entire body of practitioners of a discipline: that is, what is usually one of the most immediate communities of impact becomes coextensive with a community of verification. This move has been made possible at larger scales by an explosion of educational platforms experimenting with alternative certifications (such as the schools and companies using edX and Coursera), learning marketplaces (such as Quora, Stack Overflow, or Kaggle), and the countless mentoring, learning, and doing platforms (Skillshare, MasterClass, Instructables, and P2PU) that go beyond simply delivering information. Many of these services are themselves built atop open-source software initiatives, which are proving to be a major avenue for democratizing access to processes of authentication and standardization. Like the Bitcoin blockchain, open-source development rests on the premise that transparency and distributed verification will check the myopia and lack of agility engendered by concentrations of power. However, verifying the processes and information that are opened up to public access requires a certain level of skill, which protects the standardization process from distortion by people who lack the capacity to discern value from the vantage point of a practitioner.

One recent example to consider is Kaggle, a platform created by data scientists to issue competency-based micro-credentials on the basis of specific achievements. This results in an “achievement dashboard” that is similar to a portfolio, but specific to the needs of data science practitioners. It tracks performance across three categories of expertise: competition, kernels, and discussion:

Demo profile on the Kaggle platform, July 2016.

These categories of expertise then become the framework against which further skill in the practice of data science is built over time. In this way, the platform moves beyond static credentials to a record of the continuous accumulation of merit over time. Of course, Kaggle consists of top-down vectors as well. A small subset of data scientists and developers created the platform and specified the rules of engagement by which advancement occurs within it. Thus, there are multiple layers of community verification at work within the broader community of Kaggle-involved data scientists.
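A hypothetical sketch of how such layered, activity-based scoring might work follows. The categories echo those named above, but the weights, tier thresholds, and function names are invented for illustration and should not be read as Kaggle’s actual progression rules.

```python
# Hypothetical activity-based capability scoring across the three categories the
# text mentions. All weights, tiers, and names are illustrative assumptions,
# not Kaggle's actual progression system.
from collections import defaultdict

CATEGORY_WEIGHTS = {"competition": 10, "kernels": 5, "discussion": 1}  # assumed weights
TIERS = [(0, "novice"), (50, "contributor"), (200, "expert"), (1000, "master")]  # assumed thresholds


def score_profile(achievements: list[tuple[str, int]]) -> dict:
    """Aggregate (category, count) achievement records into per-category scores and tiers."""
    totals: dict = defaultdict(int)
    for category, count in achievements:
        totals[category] += CATEGORY_WEIGHTS.get(category, 0) * count
    return {
        category: {
            "score": score,
            # TIERS is sorted ascending, so the last satisfied threshold gives the current tier
            "tier": [name for threshold, name in TIERS if score >= threshold][-1],
        }
        for category, score in totals.items()
    }


print(score_profile([("competition", 6), ("discussion", 40)]))
# {'competition': {'score': 60, 'tier': 'contributor'}, 'discussion': {'score': 40, 'tier': 'novice'}}
```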

Another example of capability standardization skewed toward the bottom-up vector is the Open Science Framework, developed by the Center for Open Science. The OSF began with a project to crowdsource studies attempting to reproduce the findings of psychology studies in the wake of revelations about rampant misrepresentation of data and significant results in the field. However, it has since become a free, publicly-accessible platform in which researchers all around the world can share data, papers, and methods, inviting collaboration and critique from their community of peers regardless of political or institutional borders. Its just-launched publishing platform, SocArXiv, collects preprints of articles and datasets in the social sciences, creating an open-access forum for scholarly literature and dialogue. Like Kaggle, the OSF quantifies participation, although not nearly in as precise or robust a fashion (currently it is limited to “activity points”):

Author’s profile on the SocArXiv platform, July 2016.

This rapidly-developing platform will likely undergo significant change as it is adopted more widely by social scientists in the coming years. It is a promising example of bottom-up capabilities-based credentialing, as the ability to contribute to the platform isn’t limited to those with institutional credentials but is open to anyone who has research to share. The community of practice then conducts organic “peer review” insofar as it labels research found to be unreliable or irreproducible as such for public view, thus vouching for the skill of individual participants. The platform is also hospitable to the provisionality of research findings, allowing researchers to create versioned records of projects by continuing to upload new data and revised interpretations of existing research. In this way, it becomes one of the conditions for the further development of research capabilities.

Examples of other bottom-up-heavy standardization processes are community healthcare initiatives that can be located in the “long tail” of the power-law distribution — those alluded to at the end of the DSM case study above. By this I mean healthcare that is more intuitive and pastoral, often curated by members of spiritual or religious traditions. Closest to the first 20% of the curve are practices like doula certification, some of which occur as part of formal medical education programs (including nursing certification programs) and some of which are performed as unlicensed community education initiatives. Particularly in the latter case, “certification” may be as much a product of word of mouth and reputation as it is of a formal seal of approval. Analogous to doula certification programs are religious psychotherapy training programs, which leverage scientific findings in the field of psychotherapy alongside a theological tradition. Deeper into the long tail are practices such as life coaching, traditional/folk healing, and magic, some of which offer certifications of skill, while many others do not.

3.3 Standardization Is a Social Process That May or May Not Be Scientific

What makes these long-tail standardization practices potentially more heavily bottom-up than traditional formal education is that there is often less of a divide between the communities of verification and communities of impact in terms of subject-matter knowledge and experience; there also may be significant overlap between them. However, the obverse is also often true. That is, the long tail can spur stronger top-down efforts at standardization, like legal or symbolic regimes guarded by exclusive communities of verification or even authoritarian or cult-like organizations in which standardization proceeds by sovereign fiat.

This bipolarity characterizes community practices in which scientific methods have not made effective inroads: many competing standardization processes cause standards to proliferate widely, and in response, attempts arise to curb this multiplicity through practices of top-down control. Neither outcome is necessarily more effective or satisfying for solving the problems or meeting the aspirations that have engendered the development of long-tail communities of practice in the first place.

In light of these examples from the “long tail” of mental health treatment in the United States, it is important to foreground another characteristic of standardization: it is agnostic with regard to whether the resulting standards “hold” in a scientific sense. In other words, human societies are always standardizing knowledge and practices as part of the production of social norms, but such norms are not, by themselves, sufficient sources of knowledge in a scientific sense. Moreover, the scientific process is incremental: it usually addresses only delimited cases that fall short of the full aspirations of long-tail communities.

In other words, the power-law distribution can be as much of an engine of politics and celebrity as it is of science. Not everything that lands in the top 20% of the distribution merits such placement from the standpoint of creating foundations of knowledge and practice that hold. This is why the character and practices of communities of verification and impact are so important — it is social collectives which are still the cradles and crucibles of human progress, despite radical changes in their technologically mediated forms of organization.

Conclusion

This paper has outlined a sociological account of standardization practices with the aim of providing a framework for individuals and organizations attempting to think through how a process of standardizing capabilities may proceed in a way that meets the needs of different stakeholders in always interconnected and evolving spaces of exchange and commerce. The framework suggests that identifying the appropriate communities of verification and communities of impact is a fruitful place to begin. This identification foregrounds the social agents who engage in the standardization process, and therefore the communities to which the standards must in fact be useful.

However, identifying the communities involved in the standardization process, whether with prescriptive or descriptive intent, never exhaustively maps or enables control over the social vectors and mechanisms through which it actually proceeds. This is important to keep in mind because it helps prevent the pitfall of attempting to socially engineer the process of standardization in an overdetermined fashion. One way of understanding Stewart Brand’s famous utterance that “Information wants to be free. Information also wants to be expensive. . . . That tension will not go away,” is as an indicator of the vectors of standardization described here. Value-creation processes will always be underway; scarcity is a relational property that appears wherever anything is unevenly distributed. However, network effects always exceed top-down control. And networks are always nested in wider networks. The ultimate (human) community of impact in any standardization process is humanity as a whole, and its individual members are singular in the ways they form connections with other humans and non-humans and constitute value over time.

Nevertheless, delimiting communities of verification and impact creates heuristics which can help direct agentive work to bring processes of standardization more in line with, for example, scientific best practices. It can also serve as a simple feasibility check. For example, declaring all chemists around the world a community of verification for the purposes of a standardization project may prove either feasible or infeasible depending on geopolitical realities, institutional capabilities, technological resources, and other factors.

Identifying these communities is only one aspect of the work. As alluded to above with the example of scientific best practices, standards perform a social function. To revisit the discussion of the function of standards from the first section of this paper, standards facilitate exchange; however, whether this is an exchange of information/knowledge, objects, intangibles, currency, method, or anything else is not predetermined. The end in question, however, may profoundly influence the procedural means by which vectors of standardization establish the associated standards: it may call for high-touch, centralized mechanisms like the research- and committee-heavy process of developing the DSM, or low-touch, distributed mechanisms like the activity-based capability scoring performed by algorithms and community in the case of Kaggle.

In other words, the answer to the question with which this paper began — Is it possible to standardize capabilities? — is yes. However, the centripetal nature of value creation suggests that, if capability standardization is to prove broadly useful, it cannot be an exclusively top-down exercise — that is, decided and enforced by communities of verification alone. It must leverage significant inputs from both major vectors of standardization. Such a task is facilitated by technological innovations that allow for increased peer-to-peer practitioner collaboration on a massive and highly distributed scale. It remains to be seen how these tools will be recombined and evolved to make more intentional what are always ongoing, preexisting social processes of normalization and innovation.
