Thinking outside the domain

Why social science, not computer science, is the next internet phenomenon

Mike Neuenschwander
24 min read · Aug 25, 2023

Introduction

From a high level, identity and access management (IAM) seems an eccentric, even abstruse segment of the technology industry — particularly to dispassionate observers of the technology. Identification technologies are familiar enough, given the pervasiveness of IDs in everyday life; but many aspects of identity technology defy simple description. IAM seems at once the most important aspect of distributed computing and yet difficult to cost justify. Privacy concerns are everyone’s problem but no one’s responsibility. Such paradoxes raise the question: Why should such ubiquitous technology evade simple description?

The dichotomies that arise in attempts to rationalize the function of identity in networking (and in society) suggest that larger forces are at play than contemporary theory acknowledges, forces that remain mostly unaccounted for. So, what component of identity remains uncovered by technology standards, marketing white papers, identity laws, and IdM suites?

The (perhaps non-obvious) problem with the question is actually in the premise. The question promotes an identity-centric viewpoint, suggesting identity is the center around which computing evolves. But such a fixation on identity is unwarranted. Identity information does help us solve important problems of exclusion in online communities, but it is nonetheless just the beginning.

Threats of vandalism, terrorism, and exploitation are real, but building better identity systems is only a partial solution — and one that relies on an unruly resource of identity information. Rigorously pursuing the source of identity is tantamount to a launch into a heart of darkness; and along the way, the system increasingly introduces a new set of dangers. A new generation of technologies must emerge to foster pro-social behaviors by supporting natural processes of recognition, reciprocity, and community awareness.

Identity is the start, but community is the goal

Identity information is in use almost everywhere today. Consumers, citizens, and employees around the globe are continually interacting with national identification cards, passports, drivers’ licenses, payment cards, social security cards, and customer rewards cards. And behind the scenes, a significant amount of personal information is traded and sold, largely without the knowledge of the data subjects. Most people typically don’t consider the value of the information they’re supplying to a web site when registering or purchasing items. It’s also extremely difficult to assess the potential for negative consequences that could arise as the identity information they provide is leveraged for marketing (including unsolicited email messages); and even sites with strong privacy policies can inadvertently expose customer information to resourceful attackers.

Amid rising anxiety over identity theft and privacy invasion, communities often look to improving the accuracy of identification technology in hopes of improving security. But the impulse to rely on identification systems, though a natural response to such social dilemmas, also substantially increases societal risks, often without materially improving security. Accordingly, societies appear to face a Catch-22 that guarantees the system will degrade over time, leaving only a choice for how the demise is to be accomplished — whether by fire or ice. On the one hand, society can become dangerously over-identified; on the other, social disorder ensues.

The type of problem that identity management poses is in fact a well-documented phenomenon in social science called a “social dilemma” (sometimes called a collective action problem). Peter Kollock describes social dilemmas as “the study of the tension between individual and collective rationality. In a social dilemma, individually reasonable behavior leads to a situation in which everyone is worse off.” Identity management actually qualifies as a social dilemma on multiple levels — or rather it’s a nested social dilemma. At a basic level, ironically, identity management belongs to a set of structural approaches for solving social dilemmas: identity systems are a means of establishing social order and regulating the use of shared resources. But identification systems also rely on shared resources — namely personal information — and so introduce a second-order problem of managing a common pool resource.

Treating identity systems as social dilemmas provides insightful explanations for how society interacts with identity. For example, collaborative action theory (a branch of social dilemma research) predicts the ambivalence that many individuals and organizations take toward identity management, because ignoring the problem is individually rational behavior. Problems such as identity theft are clearly beyond the capacity of any one person or one organization to solve; and few organizations are liable for the consequences (or “externalities”) of their activities. From an economic perspective, because identity information is essentially a common pool resource, businesses will find little incentive to invest in solving identity problems unilaterally. As such, motivation to collaborate around identity problems is low.

But approaching the root dilemma as a collaborative action problem offers some hope of finding feasible solutions to the Catch-22 identity systems pose. From a social perspective, at issue is not how organizations can further improve the accuracy and reliability of user credentials, but rather how society can create a pro-social environment that fosters collaborative action. Clearly, the solution can’t be formulaic. But a growing canon of research into successful resolutions of social dilemmas demonstrates that collaborative arrangements are likely when certain conditions are met. Advances in social science, economics, game theory, and complexity theory contribute to informing the debate on how to organize for collaborative action.

It behooves all participants involved with digital identity, then, to take notice of this research and create technologies that go beyond identity management to provide pro-social environments online. New technologies must support natural processes of recognition, community awareness, and reciprocity that promote pro-social behaviors.

Escape from Freedom: The Role of Identity in Social Dilemma Resolution

A social dilemma is a situation in which individuals are motivated to make choices that create a suboptimal result for everyone. Generally, social dilemmas concern the use of shared or “common pool” resources. For example, natural resources such as air and water are common pool resources, because they aren’t owned by anyone, but used by all. Dilemmas arise as users of a common pool resource derive its benefits but don’t bear the corresponding costs of their use. In economic terms, consumers of the resource create externalities — they open a tab that society has to repay.

The operative metaphor for this scenario is the tragedy of the commons, put forward by the ecologist Garrett Hardin in 1968. (In the metaphor, the “commons” refers to a grazing area shared by several shepherds.) Hardin argues that the fate of any commons — barring the introduction of regulatory control — is its eventual demise through overuse and abuse of the resource. The commons suffers this inexorable fate because self-interest is at odds with, and prevails against, societal interest. As Hardin puts it, “freedom in a commons brings ruin to all.”

The classical approach to avoiding a tragic fate is through eliminating opportunities for exploitation of resources through coercion and strict regulation schemes. In sociological terms, the social dilemma requires a structural solution. Generally, the solution comes in the form of an appointed authority that watches over the resource and regulates its use.

Public roads provide a cogent example of solving social dilemmas through structural solutions. How does a society get the funds and organizational wherewithal to build roads in the first place and then to maintain them? Usually, the funds are collected by a tax authority. And once the roads are built, how does society maintain order on them? Again, through a network of authorities that test drivers’ awareness of the rules and a police force that enforces these rules. Both the collection of funds and the regulation of use are collaborative action problems, and both are negotiated through structural interventions.

Dilemmas in the Digital Commons

Online applications exhibit many of the behaviors of common pool resources. Applications create shared spaces — commons areas — that would be quickly overrun without the ability to control access. The Internet is replete with tragedies of online commons. Early on, Usenet groups were quickly overrun with gratuitous and seedy advertisements. Later, e-mail accounts became overwhelmed by spam. Internet sites exploit a variety of tricks to fare more favorably in search engines. And promoters pester surfers with innumerable pop-up windows. Some exploits become so egregious, their names make headlines in the popular press (think “ransomware,” “phishing,” and “spyware”).

In cases where applications are privately owned (or under control of a single governance board), responding to dilemmas of this type is typically a matter of restricting access to a limited community. Identity management technologies are among the methods used to establish order in resource usage in private domains. The resource owners seek to regulate access based on individual identity. Such a scheme then requires administrators to create accounts, issue identification credentials, assign privileges, and manage these artifacts through time.
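The administrative lifecycle described above (create accounts, issue credentials, assign privileges, and manage them through time) can be sketched as a toy model. All names here (`Directory`, `grant`, `check_access`) are illustrative inventions, not the API of any particular IAM product:

```python
# Minimal sketch of identity-based access control in a private domain.
# The class and method names are hypothetical, for illustration only.

class Directory:
    def __init__(self):
        self.accounts = {}          # username -> set of privileges

    def create_account(self, username):
        """Administrator creates an account (issues a credential)."""
        self.accounts[username] = set()

    def grant(self, username, privilege):
        """Administrator assigns a privilege to an existing account."""
        self.accounts[username].add(privilege)

    def revoke(self, username, privilege):
        """Privileges must also be managed through time."""
        self.accounts[username].discard(privilege)

    def check_access(self, username, privilege):
        """Access is regulated on the basis of individual identity."""
        return privilege in self.accounts.get(username, set())

d = Directory()
d.create_account("alice")
d.grant("alice", "read:reports")
print(d.check_access("alice", "read:reports"))    # True
print(d.check_access("mallory", "read:reports"))  # False
```

Even this trivial model shows why the approach carries ongoing cost: every account, credential, and privilege is an artifact that an administrator must keep current as the community changes.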

In public spaces, governments usually take on the role of resource regulation — over both individual and organizational behavior in the commons. Governments can enact and enforce rules that curb destructive behaviors of individuals and organizations over collective pool resources. For example, privacy regulations place parameters around corporate use of personal information. Identification also plays a significant role in governmental regulation, law enforcement, and justice systems. These systems rely on the ability to identify responsible individuals and hold them accountable for any violations of social order.

Compounding the complexity of the situation, identity management is also instrumental in monitoring resource regulators. Trusted insiders have privileged access to the resource they’re meant to protect, and oftentimes these gatekeepers use their position for their personal gain. Auditing and monitoring controls — such as those required in the Sarbanes-Oxley Act — also rely on identity information.

In short, identity systems provide the basis for regulating the use of shared resources. They belong to a class of technologies (structural solutions) that instill social order in an otherwise chaotic commons. In this role, identity systems are the byproducts of resolving social dilemmas.

Whose Data is it, Anyway? Identity Information as a Common Pool Resource

Some privacy advocates frame their case as a simple matter of property rights: a person is the natural owner of information regarding herself. In practice though, such an approach leads to ridiculous consequences. For example, no person “owns” her friends’ impressions of her (see Bob Blakely’s Blog). And trying to establish who owns what information about a person is an exercise in futility: identity information is collected, created, and distributed by so many interested parties that it becomes impossible to find uncontested claims to any piece of information.

The economic explanation for such wrangling for control over identity information is simple: identity information behaves as a common pool resource. Similar to open-water fishing operations where fishermen compete for the catch, a great number of interested parties are continually capturing information about individuals. Governments monitor individuals’ behavior to watch for signs of criminality or terrorism; pharmaceutical researchers gather personal information to develop new drug therapies; advertisers gather personal information to improve the effectiveness of marketing campaigns; and (as stated above) IT organizations use identity data as the means of access control in shared online spaces.

The Non-Exclusionary Nature of Identity Information

Common pool resources such as identity information are marked by two properties: exclusion is difficult and costly, and use is subtractable. Identity information is non-exclusionary because of the difficulty of keeping the information secret or private. It is also subtractable (a rival good) because use of the information can degrade its value to others who might use the resource.

Certain aspects of a person’s identity are difficult to change. Fingerprints, home address, social security number, and name change infrequently, if ever. This stability makes the data a valuable resource, and using such data can improve identity vetting and identity assurance processes — but only if such information isn’t available as general knowledge. If a simple web search can reveal a person’s home address, telephone number, last mortgage payment amount, and fingerprint image, the value of this information to knowledge-based authentication mechanisms is dramatically reduced.

When identity systems use stable data as secrets (such as in challenge-response and knowledge-based authentication), they can create significant, often unseen, costs for individuals. Because stable information by definition is not readily changed (unless falsified), if the information becomes degraded or compromised, the system requires other stable information to replace it. Credit card transactions on the Internet provide a familiar example of this phenomenon: to guard against credit fraud, businesses increasingly require customers to provide accurate personal information (such as mailing address, phone number, and mother’s maiden name). Although the practice improves the assurance of a particular transaction (satisfying the interest of the seller), it also creates greater opportunity for personal information to be compromised. Once that happens, businesses will have to rely on other personal information, such as the name of the high school a person graduated from or a person’s fingerprint information, to increase the surety of a transaction. And so the cycle continues.
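The escalation cycle can be made concrete with a small sketch. The question pool and breach handling below are hypothetical simplifications for illustration, not a description of any real knowledge-based authentication (KBA) product:

```python
# Sketch of knowledge-based authentication degrading over time.
# Once a stable fact leaks, the system must fall back to another one,
# because stable facts can't be rotated the way passwords can.

class KBA:
    def __init__(self):
        # Ordered pool of stable facts used as shared secrets (illustrative).
        self.secrets = ["mailing address", "mother's maiden name",
                        "high school name", "fingerprint"]
        self.compromised = set()

    def current_challenge(self):
        """Challenge with the first stable fact that hasn't leaked yet."""
        for s in self.secrets:
            if s not in self.compromised:
                return s
        return None  # the pool of stable facts is exhausted

    def report_breach(self, secret):
        """A leaked stable fact is burned for everyone who relied on it."""
        self.compromised.add(secret)

kba = KBA()
print(kba.current_challenge())        # mailing address
kba.report_breach("mailing address")
kba.report_breach("mother's maiden name")
print(kba.current_challenge())        # high school name
```

The finite `secrets` list is the point: each breach consumes a non-renewable piece of the commons, which is what makes the cycle a tragedy rather than a nuisance.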

A similar situation has occurred in the United States regarding the use of Social Security Numbers (SSNs). The SSN became an easy and convenient way for businesses to uniquely identify an individual, and so they came into broad use in the financial, healthcare, and higher education sectors. But overuse of this identifier degraded its value and also revealed the interdependency that had developed across the market, as the SSN became the means of exploitation in identity fraud.

Use of physically verifiable information has the advantage of simple verification. Almost anyone can compare a picture on a passport to the bearer and assess the validity of the identification. But the scheme is usually based on information that is difficult to revoke or reissue and it is often based on information that is non-exclusionary and therefore easily duplicated and falsified.

The Trouble with Identity: Assessing the Externalities of Domain Centrism

Identity information constitutes a Hardinean commons, subject to the kind of tragedies Hardin warned about. But the nature of information — particularly in a digital format — obscures the obviousness of this assertion, because the costs of replication are extremely low. That is, once data is captured in a digital format, the cost of each additional person consuming the information is essentially nothing. So from an access perspective, there are no externalities in using an informational resource that would lead to its eventual overuse and depletion. For example, once Burton Group [now Gartner] publishes this report online, the incremental cost of producing copies of the report is negligible (although the initial cost is high). Whether this report circulates to a few hundred or several million readers, the cost to Burton Group remains roughly similar.

But other characteristics of personal information can drive the cost of use to very high levels. Because identity information is intertwined with access to financial, informational, and physical resources, misuse of such information can result in significant losses to individuals and society. Identity information also represents a shared state, and once the quality of information is degraded, all participants suffer from the loss. For example, if a person’s credit score is falsified, several parties that rely on the accuracy of that information pay a cost for that abuse of the informational resource. And as a common pool resource, identity information can’t be privatized, and so brought under a stringent control regime.

Society, it seems, is bound in a vicious cycle in its use of identity information. Social order necessitates the use of identity information; but the use of identity information creates a second-order social dilemma. From the perspective of any single domain owner, the unrestricted use of identity information is rational behavior. Each domain is free to create the kind of system that best regulates the resource. But at a collective level, such practices are burdensome and harmful to society.

No Domain is an Island: The Logical Fallacy of Isolation

Economic theory describes individuals’ behavior as the actions of egoists, each attempting to maximize his or her utility. Egoists pay little attention to downstream effects of their actions — provided those effects continue flowing downstream. Although the egoist theory is arguably an oversimplification of human behavior, it serves as a framework for understanding (among other things) the behavior of businesses and organizations in their use of identity information and identification.

As discussed above, use of identity information has negative side-effects to individuals and society. However, use and refinement of identity information also generates value. And because the value created is also identity information, it becomes part of the common pool — whether the domain owner is aware of it or not.

Social security cards, driver licenses, and student identification cards provide examples of positive externalities in identity information. When a government or organization makes the effort to formalize a relationship (by creating an artifact such as an identification card) others can reuse the artifact to reduce their costs and risks in identity vetting. In economic terms, organizations are free-riding by making use of a resource for which they make no contribution.

But is free-riding in an identity commons a bad thing? After all, the fabric of society is built on such networking of relationships. And if the costs of identity vetting decrease with the number of connections a person makes, the net result should be to increase the value of information in the commons. Single sign-on (SSO) and federation technologies make use of these positive externalities.

But the issue comes not in the reuse of the resource (since the costs of copying information are essentially zero) but in the misalignment of meanings of identity artifacts. A social security number reused in financial accounts and health insurance claims takes the identifier far outside its original context, with significant risks. Similarly, the charter of a state’s department of motor vehicles is to promote safety on the roads, so the department can’t assume the cost of running full background checks on each applicant.

Even in generating identity information, then, no domain acts in isolation. Formalizing a relationship invites reuse and projection of that relationship in downstream and unforeseen contexts. Relationships have value to others outside the relationship, because they create reputation. Internet sites such as eBay rely heavily on reputation as a basis for initiating new relationships.

Domains share a common lot in the use of identity information in other ways, too. For example, they are subject to external forces, such as regulatory and governance oversight. These effects are most noticeable as policies change, requiring each domain to retool to comply with new regulations.

From Identity to Relation: Redefining the Dilemma

The dilemma confronting society regarding the use of identity information is twofold:

  • How can society effectively regulate the use of personal information as a common pool resource?
  • How can society regulate the use of any common pool resource without exacerbating the dilemma of personal information use?

Social dilemmas are essentially coordination problems. To solve the dilemma in an optimal way, a large percentage of the community must make a concerted, coordinated effort or discontinue a behavior. The effects of individual action or sporadic effort are negligible in social dilemmas. The group must act collectively to produce an optimal outcome. Solving social dilemmas therefore requires the introduction of a coordinating paradigm, enabling community members to act collectively. As one social scientist put it, “for a social unit to be preserved and avoid anarchy, individual behavior has to be controlled by informal and formal rules that permit cooperative activity” [citation here].

Philosophers and social scientists have debated the merits of various control systems in solving social dilemmas for centuries. Hardin’s conclusion that society’s only option for avoiding commons tragedies was “through coercion, mutually agreed upon” harks back to the ideas of Thomas Hobbes, who argued that a totalitarian regime provides the best solution to society’s coordination problems [citation here]. Many of the control structures in place today — particularly on computer networks — take a Hobbesian approach to regulating shared spaces. These control mechanisms are typified by centralized authority, strong identification, stringent access controls, and administrative oversight. Must identity information also be managed in this way? Or is it possible to secure a network and identity information without formal hierarchy and without identification serving as the primary coordination paradigm?

Contemporary research on social dilemmas suggests that such options are available. While identity and authority are important elements of regulatory control structures, they are more effective when accompanied by several pro-social features. As Edella Schlager puts it, “Mounting evidence suggests that Hardin’s cattleherders do not generally describe the situation faced by many natural resource users” [citation here]. This research also challenges the effectiveness of Hobbesian control structures on at least two levels. First, researchers have found that such control structures are effective for domains of known size, but as a system becomes complex the model breaks down. Second, the kind of draconian measures required to make these systems highly effective are inherently counter-social, creating a host of second-order dilemmas and reducing the natural cohesion of communities.

Scaling Issues with Centralized Coordination

Poor scalability of access control mechanisms is a known issue in the computing industry. The cost of creating and enforcing strict rules can quickly outrun the value returned. Ideally, Hobbesian control structures create an environment in which it’s impossible to act antisocially. The British television series Red Dwarf dramatized such a concept in an episode called Justice, in which the characters encounter Justice World, an abandoned penal colony. Part of the facility is known as the Justice Zone, in which it is impossible for anyone to commit a crime. (Notably, this feat is accomplished by making the perpetrator the immediate victim of his own injustice — in economic terms, the Justice Zone immediately internalizes any externalities an agent creates.)

Establishing such a high degree of social order is theoretically possible under well controlled conditions and at great expense. But justice zone methodology doesn’t scale well in real-life scenarios. For example, sociologists studying regulation of fisheries note the absurdity of identifying all the fish in the ocean and then apportioning property rights to fishers. Although today’s regulatory schemes are not nearly that ambitious, cost is clearly one of the major constraints on their expansion.

Many environmental calamities, from exhausted fisheries to climate change, are caused by nations and individuals using resources without heeding the long-term consequences. (Image from Dave Cutler.)

The scale of identity information now in play on computer networks is enormous and immeasurable. Nations can and do enact laws governing the use of identity information, but enforcing these laws can be difficult and expensive. Centralized enforcement even at the enterprise level is similarly problematic, because policy makers are often too far removed from the resources in question, so rules are created and applied at a coarse-grained level.

Loss in Growth: The Asociality of Formal Control Systems

Identity systems are structural solutions that often replace social solutions. Stated another way, they enable communities to rely on automation rather than coordination. Structural solutions to coordination problems enable communities to scale to become complex and massive structures. However, structural solutions also change the perception of the problem from a collaborative to a facilitated problem. The danger with this approach is that communities become increasingly reliant on the technology as the coordinating paradigm and lose the cohesion inherent in the outmoded social solution.

David M. Messick offers a simple example of this process.

Some years ago I lived in a city in which I had to drive home from the office through a five-way intersection that was very congested during rush hour. What was surprising was that all drivers knew that traffic could move efficiently and safely through this intersection if a simple alternation rule was followed, a rule that was essentially a turn-taking rule. People were vigilant in scrutinizing the traffic and making eye contact with each other. In more than a year of passing through this intersection during rush hour, I never witnessed an accident or a near-accident.

Someone in a position of authority decided that it would be better if the intersection were governed by traffic lights, and these were installed. The lights then regulated the flow of traffic through the intersection. The result was that the time required to pass through the intersection nearly doubled…. Prior to the lights, nearly all drivers saw the intersection as a collaborative problem that had to be solved…. [citation here]

For all their benefits, identity management systems also contribute to a blind reliance on technology. When privilege management becomes a computer problem rather than a social problem, people are less likely to be on the lookout for suspicious activity. Social bonds that can improve the security of a group are likely neglected in the structural solution.

A Contemporary Theory of Collaborative Action

Over the last few decades, social scientists have begun studying common pool resources that, in the absence of a central or formal governing authority, defy the outcome predicted by Hardin’s theory. Such “discoveries” of cases in which communities resolved complex coordination problems without centralized planning caused researchers to revisit the premise of theories regarding natural resource degradation. On closer inspection, researchers found that self-governed systems relied heavily on social arrangements that emerged from groups of appropriators of the resource. A growing body of evidence suggests that such arrangements are also much more efficient and effective than Hobbesian control measures. But it also became clear to researchers that although self-governing arrangements occur in nature, they are rare.

Reflecting on this research, Edella Schlager writes that “appropriators are active participants in creating the dilemmas that they face, and under certain conditions, if given the opportunity, active participants in resolving them. They are not inevitably or hopelessly trapped in untenable situations from which only external agents can extricate them.” So, given that solutions to social dilemmas are possible without consolidated authority, what environmental conditions best encourage spontaneous formation of collaborative solutions?

In 1990, Elinor Ostrom advanced a theory on what conditions improved the likelihood of multilateral, durable collaborative action regarding common pool resources. Ostrom claims that appropriators of a common pool resource are likely to engage in collective action to initially set up rules of conduct under the following conditions:

  1. The appropriators perceive they will be harmed in some way if no action is taken
  2. A fair solution can be found through which all appropriators will be affected in similar ways by the rules
  3. The durability of the relationship is believed to be high (low defection rates among appropriators)
  4. The cost of participation is reasonably low
  5. Most appropriators share social norms of reciprocity and trust
  6. The group of appropriators is stable and, preferably, small

Ostrom further claims that ongoing governance over common pool resources is sustainable if the following design principles are adhered to:

  1. Exclusion — The group must be able to guard the resource from free riding, theft, or vandalism.
  2. Rationality — The agreed-upon rules must be attuned to the context of the resource.
  3. Involvement — Members have avenues to participate in modifying operational rules.
  4. Monitoring — Effective monitoring and auditing of policies are in place.
  5. Enforcement — Sanctions can be imposed on violators of the rules.
  6. Arbitration — Appropriators have access to low-cost but effective conflict resolution.
  7. Autonomy — The rights of appropriators to devise their own institutions are not challenged by external governmental authorities.

These principles enable communities to form and have sufficient motivation to act collectively. They enable pro-social behaviors to emerge in individuals, creating the cohesion necessary for collaboration.

Collective Action in Action: Examples of Collaborative Action on the Internet

A great number of Internet sites have put theories of common pool resources to the test — and many with tragic results. Countless online game sites, message boards, and mailing lists have reenacted the tragedy of the commons in glaring detail. But some have not. On further inspection of the successful sites, it becomes clear that pro-social features encourage collective action, which in turn fosters the longevity of the site. In addition, although identification schemes play a role in these solutions, most of the contribution comes through the introduction of pro-social features.

Standards organizations and open source projects are also communities in which members of the community contribute both to the content and to the regulation of contribution. Once a standard or software product is completed, anyone is able to use the resource. But not all standards or open source products achieve wide scale adoption. This circumstance is illustrative of collective action problems: value comes only if a sufficient number of people agree to and abide by a standard or rule. No one wants to expend effort committing to a standard that is unlikely to succeed, so people wait to sense a groundswell of support for a standard before committing to it.

Wikipedia is an awe-inspiring example. The entries in Wikipedia are user contributed — that is, an ad hoc community collectively contributes to this common pool resource. Joining the community requires surprisingly little identity vetting. And no single person or organization is responsible for regulating contribution or use of the site. Instead, contributors themselves continually monitor the site, and community members respond to acts of vandalism against it.

Similarly, eBay is a frequent target of exploitation, with threats coming continuously from around the globe. In spite of all this, the site continues to remain trustworthy and available. Key to the stability of the system is eBay's reputation mechanism, which enables members of the eBay community to rate each other's reliability. Sellers and buyers can then assess for themselves the reliability of the other party and can affect each other's reputations based on the outcome of the transaction.
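The mutual-rating mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration of the general pattern — distinct raters leaving post-transaction feedback that accumulates into a public score — not eBay's actual implementation; all class and method names are invented:

```python
from collections import defaultdict

class ReputationLedger:
    """Accumulates post-transaction feedback between trading partners."""

    def __init__(self):
        self.scores = defaultdict(int)    # member -> net feedback score
        self.history = defaultdict(list)  # member -> list of (rater, rating)

    def rate(self, rater, subject, rating):
        """Record +1, 0, or -1 feedback after a completed transaction."""
        if rating not in (-1, 0, 1):
            raise ValueError("rating must be -1, 0, or +1")
        self.scores[subject] += rating
        self.history[subject].append((rater, rating))

    def score(self, member):
        """Net reputation visible to prospective counterparties."""
        return self.scores[member]

ledger = ReputationLedger()
ledger.rate("buyer_a", "seller_x", +1)
ledger.rate("buyer_b", "seller_x", +1)
ledger.rate("buyer_c", "seller_x", -1)
print(ledger.score("seller_x"))  # prints 1
```

The essential design point is that the score is produced by the community of counterparties rather than by a central vetting authority — reputation substitutes for identity proofing.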

Spam is a continual problem for e-mail providers, who have to combat it on both the sending and receiving sides. Automated engines have difficulty detecting spam messages, because the contents are becoming increasingly sophisticated. Providers therefore enable users to flag spam in their inboxes. With a simple action on the part of each user, a very complex problem becomes manageable. E-mail providers have also used viral marketing as a means of gaining new accounts.
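The crowd-flagging approach can be sketched as a simple consensus counter: each user's "report spam" action is cheap, and the provider aggregates distinct reports against a threshold. The threshold value and class names here are assumptions for illustration — real providers use far more elaborate weighting:

```python
from collections import defaultdict

class SpamConsensus:
    """Aggregates per-user 'report spam' actions into a shared verdict."""

    def __init__(self, threshold=3):  # assumed cutoff; providers tune this
        self.threshold = threshold
        self.flags = defaultdict(set)  # message_id -> set of reporting users

    def report(self, message_id, user):
        self.flags[message_id].add(user)

    def is_spam(self, message_id):
        # Count distinct reporters, so one user can't flood the counter.
        return len(self.flags[message_id]) >= self.threshold

consensus = SpamConsensus()
for user in ("alice", "bob", "carol"):
    consensus.report("msg-42", user)
print(consensus.is_spam("msg-42"))  # prints True
```

Note that monitoring here is done by the appropriators themselves (Ostrom's fourth principle): no administrator reads the mail; the community's distributed judgments do the work.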

Social media and instant messaging (IM) providers also rely on social networking and user monitoring to maintain order. Obtaining accounts from these providers requires virtually no identity proofing — and usually not even any payment. These systems also have very little centralized oversight, as it would be cost prohibitive to provide such security. Users can create their own communities on these sites by contacting people they know and mutually agreeing to allow message exchanges. Users can warn, block, and hide from abusive users, forming the basis for a reputation system.

Several industry efforts, such as Decentralized Identifiers (DIDs) and self-sovereign identity, have begun developing technologies that engage users in the management of identity information. These technologies attempt to improve identity management through stronger social arrangements. Self-sovereign identity promises to internalize some of the principles of Ostrom's theory. In particular, user-centric identity systems acknowledge that optimizing the network for administrators (or the governing regulatory body) is usually not the optimal solution. Studies suggest that government-managed resources are less efficiently managed than appropriator-managed ones. Moreover, when appropriators (network users in this case) have influence on the rules and use of a system, contribution levels increase and appropriators provide positive reinforcement of the resource.

Social networks such as LinkedIn.com afford users awareness of others in the network and let them legitimize relationships through acknowledgements and endorsements.

A New Identity: Revisiting Strategies for Resource Control on Computer Networks

Although today’s identity management systems are sophisticated and reliable, their design center is a domain-centric, formal control system. Of course, such functionality is critical to resource management, but few enterprise identity management products also foster pro-social, collaborative action, following Ostrom’s principles. As IdM vendors refine control mechanisms for stronger authentication and finer-grained access control, they should also take notice of alternative approaches to resource management that Internet providers have developed.

Facilitating Collaborative Action on Computer Networks

Exclusion is a key principle for community relationships; as Robert Frost put it, “good fences make good neighbors.” The technology industry’s focus on identity as the instrument of exclusion is certainly understandable, if misguided. Identity is an important ingredient to exclusivity — the means of establishing a domain boundary where no physical alternative remains. But pushing identification systems to extremes engenders an array of social problems, for the simple fact that identity information is itself a common pool resource. Technologies for identity and access policy are necessary but not sufficient means of solving social dilemmas. As Edella Schlager points out, “Exclusion, while critical, is insufficient to ensure long-term commitment to the rules” (Edella Schlager, Collective Cooperation in Common Pool Resources, pg. 118).

The identity management market therefore stands on the cusp of a change that will introduce pro-social technologies aimed at improving user participation, fostering communities, and thereby increasing the cohesion and security of domains. As the market makes this long transition, the jargon of the industry will change as well. The following image depicts how relationship, community, and collaborative action will eclipse identity, domain, and control as the foci of identity systems.

Butler Lampson’s quip that “all problems in computer science can be solved by another level of indirection” seems to counter a trend toward directness in identity systems. Of course, David Wheeler’s rephrasing would add that indirection “will usually create another problem.” Nevertheless, the use of identity information today doesn’t support sufficient indirection. And identity isn’t even the basis for trust or cooperation in the first place. Efforts to continually refine identity front-load the problem, as if positive, unambiguous identification were all that’s required to solve social dilemmas.

Robert Axelrod makes the point rather emphatically that cooperation can occur among enemies, in the absence of trust, altruism, and even strong identity. According to Axelrod, “the foundation of cooperation is not really trust, but the durability of the relationship.” He further explains:

Much more can be said about the conditions necessary for cooperation to emerge, based on thousands of games in the two tournaments, theoretical proofs, and corroboration from many real-world examples. For instance, the individuals involved do not have to be rational: The evolutionary process allows successful strategies to thrive, even if the players do not know why or how. Nor do they have to exchange messages or commitments: They do not need words, because their deeds speak for them. Likewise, there is no need to assume trust between the players: The use of reciprocity can be enough to make defection unproductive. Altruism is not needed: Successful strategies can elicit cooperation even from an egoist. Finally, no central authority is needed: Cooperation based on reciprocity can be self-policing….

For cooperation to prove stable, the future must have a sufficiently large shadow…. For example, what made cooperation possible in the trench warfare of World War I was the fact that the same small units from opposite sides of no-man’s-land would be in contact for long periods of time, so if one side broke the tacit understandings, then the other side could retaliate against the same unit.

Robert Axelrod, The Evolution of Cooperation.
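Axelrod's point about reciprocity can be made concrete with the strategy that won both of his computer tournaments, TIT FOR TAT: cooperate on the first move, then mirror the opponent's previous move. A minimal simulation of the iterated prisoner's dilemma, using the standard tournament payoffs, shows how reciprocity alone — no trust, no messages, no central authority — makes defection unproductive when the "shadow of the future" is long:

```python
# Standard prisoner's dilemma payoffs from Axelrod's tournaments:
# mutual cooperation: 3 each; mutual defection: 1 each;
# lone defector: 5; lone cooperator ("sucker's payoff"): 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then echo the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Run an iterated game and return the two cumulative scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two reciprocators prosper over a durable relationship...
print(play(tit_for_tat, tit_for_tat, 10))    # prints (30, 30)
# ...while a pure defector exploits TIT FOR TAT only once.
print(play(tit_for_tat, always_defect, 10))  # prints (9, 14)
```

The defector gains a small edge in any single pairing but, as Axelrod showed, strategies like ALWAYS DEFECT starve once reciprocators find each other — which is precisely why durable relationships, not identity vetting, anchor cooperation.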

For enterprise domains, Internet applications, and government services, a new kind of pro-social technology is required. These technologies must help users perceive their online dealings as occurring within social contexts. They should create greater awareness of other users and a greater sense of community. Technologies will therefore enable resource users and contributors (appropriators, in economic terms) to participate in monitoring the community and to rely on reputation and reciprocity as the basis for trust.

Pro-social technologies must also acknowledge the context of cascading realms of authority and amorphous relational boundaries. Network architects must therefore be cognizant of how to enable localized communities that federate into larger structures. In other words, formal and centralized control structures aren’t by any means outmoded in the evolving model. As Schlager puts it, “evidence suggests that governments may be of greatest benefit to appropriators by providing a supportive environment that encourages appropriators to devise their own solutions to the dilemmas that they face” (Schlager, pg. 115). Technologies must therefore foster cooperative activity at local, regional, and global levels.

Note: this article originally appeared in February 2007. Small edits have been made to anachronistic examples.

Books cited

1 Peter Kollock. “Social Dilemmas: The Anatomy of Collaboration.”

2 Josefina Figueira-McDonough. Community Analysis and Praxis: Toward a Grounded Civil Society. Taylor and Francis, 2001.

3 Mark Van Vugt, Mark Snyder, Tom R. Tyler and Anders Biel (editors). Cooperation in Modern Society. Routledge, 2000.
