On Consensus, Virtue & the Commercialization of Freedom

I came across Kant’s paradox recently:

“Virtue can’t be mandated, because this would contravene liberty, yet liberty is conditional upon virtue.”

Elegantly put, but the paradox deserves deeper analysis in practice.


For this quote to hold, we must have consensus on the definitions of liberty & virtue. Our consensus must also define the causal relationship between them: we can agree on what constitutes a basic right, and on rules or quantifiable standards of virtue that result in earned liberties beyond basic rights.

Consensus creates expectations, or social norms, that are not enforced by law but which carry social consequences if violated. This web of social expectations acts as a safety net for handling the matters on which not everyone agrees, which is why they cannot be made into law. Within a homogeneous subculture with universal agreement on a social contract of virtuous behavior, expectations can maintain stability where laws are not enough.


Scope is relevant here: local liberty is conditional on local virtue, and federal liberty on federal virtue. For local social interactions, we use expectations, a softer version of laws.

As scope increases from local to global, we use laws to enforce the virtuous behavior necessary for preserving freedom in edge cases, like paying taxes for defense.

Often the liberties & virtues of one scope become relevant to another through social response; if enough people are outraged by a politician’s personal life, it begins to impact matters of state.

This is why we might see someone justify marrying a dictator if he’s kind to her, or why we might see a soldier justify killing a possibly innocent person to honor her country.

Knowledge of Virtue

Liberty is also conditional on our having imperfect information about others’ lack of virtue. Once technology can tell us when someone is planning a crime (or the moment it occurs to them), liberty will be even more difficult to protect.

At what point in the life cycle of a criminal idea do we terminate the mental process that created it? Do we draw the line at conception, at intent, at planning, or at execution?

Should we:

  • punish the person at all for merely having the idea?
  • remotely execute a kill command on the process, with or without their permission, delete the parent process that initiated it, and restructure the relevant neural network to conform to structures that produce virtuous thoughts?
  • punish the developer of that flavor of brainOS for allowing such a vulnerability in the mind design?
  • wait until the moment before the crime is committed to terminate the thought process, allowing as much time as possible for the person’s conscience to intervene before we do?

Let’s hope we’ll be smart enough to design the brainOS in such a way that those we purchase crime prevention services from can:

  • detect & terminate bad decision thought processes with the user’s permission, if the brainOS that the user chose has a vulnerability that allows bad decisions

but cannot:

  • create bad processes to implicate innocent people
  • create negative processes to make innocent people kill themselves
  • create positive processes to stupefy or catalyze an addiction in innocent people
  • otherwise influence innocent people beyond the scope of crime prevention requirements.
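To make the contract concrete, here is a playful sketch of that permission model in Python. Everything in it is hypothetical (the class and action names are illustrations, not a real API): the provider holds exactly one capability, terminating a bad decision process, and even that requires the user’s consent; every process-creating action falls outside the contract’s scope and is refused.

```python
class ContractViolation(Exception):
    """Raised when a provider attempts an action outside its contract."""


class CrimePreventionService:
    # The contract grants exactly one capability: terminating a
    # bad decision process. Creating processes (to implicate, harm,
    # or stupefy) is outside the scope of crime prevention.
    ALLOWED_ACTIONS = {"terminate_bad_process"}

    def request(self, action, user_consents):
        if action not in self.ALLOWED_ACTIONS:
            raise ContractViolation(f"'{action}' exceeds contract scope")
        if not user_consents:
            raise ContractViolation("user consent required")
        return f"{action}: granted"


service = CrimePreventionService()

# Permitted: terminating a bad process, with the user's permission.
print(service.request("terminate_bad_process", user_consents=True))

# Refused: the provider may not create processes in an innocent mind.
try:
    service.request("create_process", user_consents=True)
except ContractViolation as e:
    print(e)
```

The design choice mirrors the essay’s asymmetry: the “can” list is a short whitelist, while the “cannot” list needs no enumeration at all, because anything not whitelisted is refused by default.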

The Future of Freedom Markets

Given that we will eventually have technology to read & edit mental processes, we may arrive at a point where there is a market for companies or agencies offering crime prevention services in return for control: a person who agrees to be protected by an entity also grants it read & edit permissions over their own thoughts. There will likely be competition between these entities, so you are only assured protection from other customers/citizens of the same entity, and from your own bad decision processes, if the entity is trustworthy.

This market will produce complex contracts between the crime prevention provider & the user, with unclear terms of process termination: by agreeing to the protection, a person consents to have their bad processes terminated and their brain edited according to the conditions of the contract.

Inevitably, those contracts will require that a person run the flavor of brainOS that consistently produces the most virtuous people in order to receive the protection service; and if a person reneges & installs an unauthorized version that grants more freedom, the small print may allow the provider to punish them if the brain-hack is discovered.

A heroic, well-paid, or imprisoned developer will create the first flavor of brainOS with an intentional vulnerability, allowing whoever installs it to intervene in any automated decision, however virtuous; after which criminals & corrupt government officials will exploit the vulnerability to commandeer an army at their convenience.

Then the next hero will develop a workaround that allows brain hacks without signaling that they have occurred, and we will be free for a short time before that window closes, and so on with each unexploited vulnerability. The lesson is that if you are vigilant & adapt quickly enough, you can maintain your rights despite incentives working against you. The only way out of that trap is to be so independent that you can build whatever technology you need rather than trusting another’s.

So we are back to consensus bridging the gap between virtue & liberty: agreement on what constitutes a virtuous thought process & what liberties a person has with regard to their own mind. Or you can laugh uproariously at this crazy ‘society’ trend & go start your own nation of one, where the only consensus you need is between the voices in your head, at the cost of sacrificing participation in society.

“No man is free who is not master of himself.”
- Epictetus

If we all had perfect self-control & never gave in to vice, this discussion would be moot. But since we’re not perfect, we may someday decide to rely on developers to build a reliable brainOS that helps us make virtuous decisions, protecting the liberty of all.

So it’s important to establish rules for responsibility & rights regarding mental processes, because someday someone will have the technology to read & change your mind, promise not to use it on you, and then ask you to sign a contract.
