You’ve seen the refrain: “Whose values will be built into AI?”
Behind this question, often posed with a hint of foreboding, lies an important assumption: that one set of values will be selected as the winner by some elite decision maker.
It’s a view of power that is state-centric and zero-sum. And increasingly it looks like no one’s values will determine the course that many of the most powerful algorithms take.
Instead, algorithms are learning to cooperate with other algorithms and with humans. This mutual cooperation happens on the fly, and it produces positive-sum interactions.
The possibility of machines creating positive-sum interactions reveals the fundamental flaw in state-centric thinking. …
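To make "mutual cooperation on the fly" concrete, here is a minimal sketch using the iterated prisoner's dilemma, a standard toy model of positive-sum interaction. The strategies and payoffs are the textbook ones, not a model of any real deployed system.

```python
# Iterated prisoner's dilemma: two tit-for-tat agents settle into
# mutual cooperation, a positive-sum outcome. A toy illustration only.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first; afterwards, mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def play(rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = tit_for_tat(hist_b)
        b = tit_for_tat(hist_a)
        score_a += PAYOFFS[(a, b)]
        score_b += PAYOFFS[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Two cooperators earn 3 per round each -- better than the 1 per round
# of mutual defection, and no central authority picked the outcome.
```

No regulator dictated the cooperative equilibrium here; it emerged from the agents' interaction, which is the point of the state-centric critique above.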
Last week I argued that Permissionless Innovation, which treats regulation as a battle between government and entrepreneurs, is seriously misguided. Of course, it takes a theory to beat a theory. And the "battle" metaphor at the heart of the Permissionless Innovation view has merit: it is simple, intuitive, and (to some) compelling.
I’m proposing an alternative metaphor to model regulation. It possesses, without any loss of simplicity or elegance, much more explanatory power.
Regulation is best thought of as the “rules of a game.” Regulators are people who set those rules.
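The metaphor can be made concrete with a toy model (the actions, payoffs, and fine below are invented for illustration): a regulator does not pick the winner; it re-prices the game, and the players' own best responses shift.

```python
# Toy model of "regulation as rules of a game": the regulator adjusts
# payoffs, players pick their own best responses. Numbers illustrative.

def best_response(payoffs):
    """Return the action with the highest payoff."""
    return max(payoffs, key=payoffs.get)

# Baseline rules: cutting corners pays more than complying.
firm_payoffs = {"cut_corners": 10, "comply": 6}
baseline_choice = best_response(firm_payoffs)   # "cut_corners"

# The regulator doesn't command compliance; it changes the rules,
# e.g. by attaching a fine to cutting corners.
FINE = 7
firm_payoffs["cut_corners"] -= FINE
regulated_choice = best_response(firm_payoffs)  # "comply"
```

The firm's decision procedure never changes; only the rules it plays under do. That is the explanatory gain over the "battle" framing.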
At this point, we needn’t get too caught up in the definition of “rule,” “game,” and so forth. Instead, I want to emphasize just how much stuff is incorporated when we think about regulation as the rules of a…
“Permissionless Innovation” isn’t the answer
The world of tech regulation is incredibly lively. Last week, a new bitcoin lobbying outfit, based in D.C., announced its formation. The S.E.C. rejected the first bitcoin ETF. And SXSW hosted a panel titled “Self-Driving Cars and the Policy Maze.”
Despite all the activity, the tech regulatory space is not in good shape. Idea-wise, it’s floundering. In particular, tech regulation lacks a helpful framework for understanding these regulatory developments. Over the long haul, this lack of direction will impede progress.
One prominent framework for understanding tech regulatory developments is the “Permissionless Innovation” view proposed by Adam Thierer, research fellow with the Technology Policy Program at the Mercatus Center. It’s a great slogan, and when I first encountered it, I was excited. …
Lately I’ve been thinking more about the ethics of autonomous systems, and in particular the interaction between group-level norms (society) and system-level norms (machines).
Over the weekend I noticed something in this vein that I had missed: Cristiano Castelfranchi's A Cognitive Framing for Norm Change. The chapter is from Springer's series on Coordination, Organizations, Institutions, and Norms in Agent Systems, which emerged from the 2015 International Conference on Autonomous Agents and Multiagent Systems. AAMAS 2015 was focused in particular on
the design and construction of open systems [in order] to devise governance mechanisms that foster interactions that are conducive to achieve individual or collective goals. …
Of all the areas of policy that "nudges" have touched, one conspicuous absence is accessibility. This is surprising, because there's a deep connection between nudging and accessibility: both are about creating the right kind of environments.
Nudges work by making it more likely that individuals will make the right choices. This requires policymakers to create environments that promote the right preconditions for choice. Want folks to eat healthier? Create a cafeteria line where the healthiest foods are prominently positioned. In other words, nudging is all about environment. …
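The cafeteria example can be sketched as a toy simulation. The position-bias weights below are invented for illustration; the point is only that reordering the same menu, with the same choosers, changes outcomes.

```python
import random

# Toy model of choice architecture: choosers pick from a cafeteria line
# with a bias toward items placed first. Weights are purely illustrative.

def choose(line, weights=(0.5, 0.3, 0.2), rng=random):
    """Pick one item, weighting earlier positions more heavily."""
    return rng.choices(line, weights=weights[:len(line)], k=1)[0]

def healthy_share(line, trials=10_000, seed=0):
    """Fraction of simulated choosers who end up picking the salad."""
    rng = random.Random(seed)
    picks = (choose(line, rng=rng) for _ in range(trials))
    return sum(p == "salad" for p in picks) / trials

salad_last  = ["fries", "pizza", "salad"]
salad_first = ["salad", "pizza", "fries"]
# Same menu, same choosers; only the ordering (the environment) differs,
# and healthy_share(salad_first) comes out well above healthy_share(salad_last).
```

Nothing is banned and no prices change; the nudge works entirely through the environment, which is the connection to accessibility drawn above.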
Today, around five billion devices are connected to the internet. We live squarely within the age of big data. The promise of the age is that core social problems can be solved with the petabytes of data we produce. But consider: Gartner anticipates that internet-connected devices will increase by a factor of five by 2020. Cisco estimates a factor of 10. Soon, virtually all objects will be smart and intercommunicative. With the Internet of Things (IoT), physical and digital worlds will merge.
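Spelled out, the two projections are back-of-envelope multiplications on the five-billion baseline:

```python
# The device-count projections from the text, made explicit.
# Baseline and multipliers come from the cited estimates; this is
# back-of-envelope arithmetic, nothing more.

BASELINE = 5e9  # ~5 billion connected devices today

projections = {
    "Gartner (5x by 2020)": BASELINE * 5,
    "Cisco (10x)": BASELINE * 10,
}

for source, count in projections.items():
    print(f"{source}: {count / 1e9:.0f} billion devices")
# Gartner (5x by 2020): 25 billion devices
# Cisco (10x): 50 billion devices
```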
The IoT will fundamentally reshape the frontiers of cooperation. It will create increasing economic and social interdependency, which will encourage the growth of shared goals among cross-cutting segments of the population. In the private sector, new forms of cooperation will flourish. Meanwhile, the IoT will offer governments powerful regulatory tools. These developments will give rise to a complicated set of ethical questions about the legitimacy of state action, coercion, and paternalism. …
If Pokémon Go is a fitness app, it’s without a doubt the most popular of its kind.
But is Pokémon Go a fitness app? Vox says yes:
Unlike most games, which engage only your thumbs, Pokémon Go requires you to walk, run, and even jump — all great forms of exercise. Gizmodo noted that this may even be driving a “pandemic” of sore legs, since so many users have complained about pain from their Pokémon “workouts.”
And, according to John Hanke, CEO of Niantic, the company behind Pokémon Go, all of this activity is by design:
A lot of fitness apps come with a lot of “baggage” that end up making you feel like “a failed Olympic athlete” when you’re just trying to get fit, Hanke says. “Pokémon Go” is designed to get you up and moving by promising you Pokémon as rewards, rather than placing pressure on you. …
A list of ways that disclosure — a prominent nudge — can fail to achieve its policy aims:
It’s hard to think of a more successful recent policy innovation than the nudge. The simple idea is to create policy that’s “built for people.” For too long, it felt like policy interventions were built for every group except the people. Laws were hard to read; they imagined that human decision-making is completely rational; regulation was overly formal, rigid, and anything but intuitive. Today, for every law that still adopts this traditional approach to policymaking, there is a nudge to challenge it.
I’m a huge fan of nudges. I think the core values they represent are the future of effective policymaking. In fact, I think we’re only at the very beginning of policymaking that’s “built for people.” Eventually, all successful policy will meet people where they are, instead of where policymakers imagine them to be. …
On Sunday I suggested that the fate of smart contracts will rest on social engineering just as much as it will rest on technical engineering. Tonight, I want to offer a handful of proposals on how to facilitate the social side of smart contracts.
We conceptualize contracts and promises as containing an implicit “reasonableness” clause. If I promise to meet you for lunch on Thursday, implicit in that promise is the idea that if, say, we have an earthquake, then lunch is off. Here’s how you can test claims like this. Ask yourself: Would the person who breached the contract or promise be subject to rebuke? If I didn’t show up at lunch on Thursday after an earthquake earlier that day, would you send an annoyed text? You might want to know that I’m OK; but you wouldn’t be annoyed that I didn’t show up for lunch. …
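One way to read the implicit "reasonableness" clause in code: non-performance counts as breach (and warrants rebuke) only when no excusing event occurred. The event names and the `Promise` structure below are hypothetical, a sketch of the idea rather than any real smart-contract platform's API.

```python
from dataclasses import dataclass, field

# Sketch of a promise with an implicit "reasonableness" clause:
# breach = non-performance with no reasonable excuse. The excusing
# events and the structure here are hypothetical.

EXCUSING_EVENTS = {"earthquake", "hospitalization", "state_of_emergency"}

@dataclass
class Promise:
    promisor: str
    obligation: str
    events: set = field(default_factory=set)  # what actually happened

    def is_breach(self, performed: bool) -> bool:
        """Rebuke is warranted only if unexcused non-performance."""
        excused = bool(self.events & EXCUSING_EVENTS)
        return (not performed) and not excused

lunch = Promise("me", "meet for lunch Thursday")
assert lunch.is_breach(performed=False)      # no excuse: annoyed text

lunch.events.add("earthquake")
assert not lunch.is_breach(performed=False)  # excused: no rebuke
```

The social-engineering challenge is precisely that the set of excusing events is open-ended in human practice but must be enumerated, or delegated to some oracle, in code.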