Author’s note: I’m currently in the process of transferring a bunch of content from an archive over to Medium. The following was the theory and charter I laid out in spring 2017 before starting a group house experiment in Berkeley in fall 2017. The experiment is over now; I’ll be reposting the retrospective tomorrow.
This IS a rationality post (specifically, theorizing on group rationality and autocracy/authoritarianism), but the content is quite cunningly disguised beneath a lot of meandering about the surface details of a group house charter. If you’re not at least hypothetically interested in reading about the workings of an unusual group house full of rationalists in Berkeley, you can stop here.
Section 0 of 3: Preamble
Purpose of post: Threefold. First, a lot of rationalists live in group houses, and I believe I have some interesting models and perspectives, and I want to make my thinking available to anyone else who’s interested in skimming through it for Things To Steal. Second, since my initial proposal to found a house, I’ve noticed a significant amount of well-meaning pushback and concern à la have you noticed the skulls? and it’s entirely unfair for me to expect that to stop unless I make my skull-noticing evident. Third, some nonzero number of humans are gonna need to sign the final version of this charter if the house is to come into existence, and it has to be viewable somewhere.
What is Dragon Army [Barracks]? It’s a high-commitment, high-standards, high-investment group house model with centralized leadership and an up-or-out participation norm, designed to a) improve its members and b) actually accomplish medium-to-large scale tasks requiring long-term coordination. Tongue-in-cheek referred to as the “fascist/authoritarian take on rationalist housing,” which has no doubt contributed to my being vulnerable to strawmanning but was nevertheless the correct joke to be making, lest people underestimate rather than overestimate the constraints they were signing up for. Aesthetically modeled after Dragon Army from Ender’s Game (not HPMOR), with a touch of Paper Street Soap Company thrown in, with Duncan Sabien in the role of Ender/Tyler and Eli Tyre in the role of Bean/The Narrator.
Why? Current group housing/attempts at group rationality and community-supported leveling up seem to me to be falling short in a number of ways. First, there’s not enough stuff actually happening in them (i.e. to the extent people are growing and improving and accomplishing ambitious projects, it’s largely within their professional orgs or fueled by unusually agenty individuals, and not by leveraging the low-hanging fruit available in our house environments). Second, even the group houses seem to be plagued by the same sense of unanchored abandoned loneliness that’s hitting the rationalist community specifically and the millennial generation more generally. There are a bunch of competitors for “third,” but for now we can leave it at that.
Section 1 of 3: Underlying models
The following will be meandering and long-winded; apologies in advance. In short, both the house’s proposed aesthetic and the impulse to found it in the first place were not well-reasoned from first principles — rather, they emerged from a set of System 1 intuitions which have proven sound/trustworthy in multiple arenas and which are based on experience in a variety of domains. This section is an attempt to unpack and explain those intuitions post-hoc, by holding plausible explanations up against felt senses and checking to see what resonates.
Problem 1: Pendulums
This one’s first because it informs and underlies a lot of my other assumptions. Essentially, the claim here is that most social progress can be modeled as a pendulum oscillating decreasingly far from an ideal. The society is “stuck” at one point, realizes that there’s something wrong about that point (e.g. that maybe we shouldn’t be forcing people to live out their entire lives in marriages that they entered into with imperfect information when they were like sixteen), and then moves to correct that specific problem, often breaking some other Chesterton’s fence in the process.
For example, my experience leads me to put a lot of confidence behind the claim that we’ve traded “a lot of people trapped in marriages that are net bad for them” for “a lot of people who never reap the benefits of what would’ve been a strongly net-positive marriage, because it ended too easily too early on.” The latter problem is clearly smaller, and is probably a better problem to have as an individual, but it’s nevertheless clear (to me, anyway) that the loosening of the absoluteness of marriage had negative effects in addition to its positive ones.
Proposed solution: Rather than choosing between absolutes, integrate. For example, I have two close colleagues/allies who share millennials’ default skepticism of lifelong marriage, but they also are skeptical that a commitment-free lifestyle is costlessly good. So they’ve decided to do handfasting, in which they’re fully committed for a year and a day at a time, and there’s a known period of time for asking the question “should we stick together for another round?”
In this way, I posit, you can get the strengths of the old socially evolved norm which stood the test of time, while also avoiding the majority of its known failure modes. Sort of like building a gate into the Chesterton’s fence, instead of knocking it down — do the old thing in time-boxed iterations with regular strategic check-ins, rather than assuming you can invent a new thing from whole cloth.
Caveat/skull: Of course, the assumption here is that the Old Way Of Doing Things is not a slippery slope trap, and that you can in fact avoid the failure modes simply by trying. And there are plenty of examples of that not working, which is why Taking Time-Boxed Experiments And Strategic Check-Ins Seriously is a must. In particular, when attempting to strike such a balance, all parties must have common knowledge agreement about which side of the ideal to err toward (e.g. innocents in prison, or guilty parties walking free?).
Problem 2: The Unpleasant Valley
As far as I can tell, it’s pretty uncontroversial to claim that humans are systems with a lot of inertia. Status quo bias is well researched, past behavior is the best predictor of future behavior, most people fail at resolutions, etc.
I have some unqualified speculation regarding what’s going on under the hood. For one, I suspect that you’ll often find humans behaving pretty much as an effort- and energy-conserving algorithm would behave. People have optimized their most known and familiar processes at least somewhat, which means that it requires less oomph to just keep doing what you’re doing than to cobble together a new system. For another, I think hyperbolic discounting gets way too little credit/attention, and is a major factor in knocking people off the wagon when they’re trying to forego local behaviors that are known to be intrinsically rewarding for local behaviors that add up to long-term cumulative gain.
But in short, I think the picture of “I’m going to try something new, eh?” often looks like this:
… with an “unpleasant valley” some time after the start point. Think about the cold feet you get after the “honeymoon period” has worn off, or the desires and opinions of a military recruit in the second week of a six-week boot camp, or the frustration that emerges two months into a new diet/exercise regime, or your second year of being forced to take piano lessons.
The problem is, people never make it to the third year, where they’re actually good at piano, and start reaping the benefits, and their System 1 updates to yeah, okay, this is in fact worth it. Or rather, they sometimes make it, if there are strong supportive structures to get them across the unpleasant valley (e.g. in a military bootcamp, they just … make you keep going). But left to our own devices, we’ll often get halfway through an experiment and just … stop, without ever finding out what the far side is actually like.
Proposed solution: Make experiments “unquittable.” The idea here is that (ideally) one would not enter into a new experiment unless a) one were highly confident that one could absorb the costs, if things go badly, and b) one were reasonably confident that there was an Actually Good Thing waiting at the finish line. If (big if) we take those as a given, then it should be safe to, in essence, “lock oneself in,” via any number of commitment mechanisms. Or, to put it in other words: “Medium-Term Future Me is going to lose perspective and want to give up because of being unable to see past short-term unpleasantness to the juicy, long-term goal? Fine, then — Medium-Term Future Me doesn’t get a vote.” Instead, Post-Experiment Future Me gets the vote, including getting to update heuristics on which-kinds-of-experiments-are-worth-entering.
Caveat/skull: People who are bad at self-modeling end up foolishly locking themselves into things that are higher-cost or lower-EV than they thought, and getting burned; black swans and tail risks end up making even good bets turn out very very badly; we really should’ve built in an ejector seat. This risk can be mostly ameliorated by starting small and giving people a chance to calibrate — you don’t make white belts try to punch through concrete blocks, you make them punch soft, pillowy targets first.
And, of course, you do build in an ejector seat. See next.
Problem 3: Saving Face
If any of you have been to a martial arts academy in the United States, you’re probably familiar with the norm whereby a tardy student purchases entry into the class by first doing some pushups. The standard explanation here is that the student is doing the pushups not as a punishment, but rather as a sign of respect for the instructor, the other students, and the academy as a whole.
I posit that what’s actually going on includes that, but is somewhat more subtle/complex. I think the real benefit of the pushup system is that it closes the loop.
Imagine you’re a ten year old kid, and your parent picked you up late from school, and you’re stuck in traffic on your way to the dojo. You’re sitting there, jittering, wondering whether you’re going to get yelled at, wondering whether the master or the other students will think you’re lazy, imagining stuttering as you try to explain that it wasn’t your fault —
Nope, none of that. Because it’s already clearly established that if you fail to show up on time, you do some pushups, and then it’s over. Done. Finished. Like somebody sneezed and somebody else said “bless you,” and now we can all move on with our lives. Doing the pushups creates common knowledge around the questions “does this person know what they did wrong?” and “do we still have faith in their core character?” You take your lumps, everyone sees you taking your lumps, and there’s no dangling suspicion that you were just being lazy, or that other people are secretly judging you. You’ve paid the price in public, and everyone knows it, and this is a good thing.
Proposed solution: This is a solution without a concrete problem, since I haven’t yet actually outlined the specific commitments a Dragon has to make (regarding things like showing up on time, participating in group activities, and making personal progress). But in essence, the solution is this: you have to build into your system from the beginning a set of ways-to-regain-face. Ways to hit the ejector seat on an experiment that’s going screwy without losing all social standing; ways to absorb the occasional misstep or failure-to-adequately-plan; ways to be less-than-perfect and still maintain the integrity of a system that’s geared toward focusing everyone on perfection. In short, people have to know (and others have to know that they know, and they have to know that others know that they know) exactly how to make amends to the social fabric, in cases where things go awry, so that there’s no question about whether they’re trying to make amends, or whether that attempt is sufficient.
Caveat/skull: The obvious problem is people attempting to game the system — they notice that ten pushups is way easier than doing the diligent work required to show up on time 95 times out of 100. The next obvious problem is that the price is set too low for the group, leaving them to still feel jilted or wronged, and the next obvious problem is that the price is set too high for the individual, leaving them to feel unfairly judged or punished (the fun part is when both of those are true at the same time). Lastly, there’s something in the mix about arbitrariness — what do pushups have to do with lateness, really? I mean, I get that it’s paying some kind of unpleasant cost, but …
Problem 4: Defections & Compounded Interest
I’m pretty sure everyone’s tired of hearing about one-boxing and iterated prisoners’ dilemmas, so I’m going to move through this one fairly quickly even though it could be its own whole multipage post. In essence, the problem is that any rate of tolerance of real defection (i.e. unmitigated by the social loop-closing norms above) ultimately results in the destruction of the system. Another way to put this is that people underestimate by a couple of orders of magnitude the corrosive impact of their defections — we often convince ourselves that 90% or 99% is good enough, when in fact what’s needed is something like 99.99%.
There’s something good that happens if you put a little bit of money away with every paycheck, and it vanishes or is severely curtailed once you stop, or start skipping a month here and there. Similarly, there’s something good that happens when a group of people agree to meet in the same place at the same time without fail, and it vanishes or is severely curtailed once one person skips twice.
In my work at the Center for Applied Rationality, I frequently tell my colleagues and volunteers “if you’re 95% reliable, that means I can’t rely on you.” That’s because I’m in a context where “rely” means really trust that it’ll get done. No, really. No, I don’t care what comes up, DID YOU DO THE THING? And if the answer is “Yeah, 19 times out of 20,” then I can’t give that person tasks ever again, because we run more than 20 workshops and I can’t have one of them catastrophically fail.
(I mean, I could. It probably wouldn’t be the end of the world. But that’s exactly the point — I’m trying to create a pocket universe in which certain things, like “the CFAR workshop will go well,” are absolutely reliable, and the “absolute” part is important, and I’d rather just do extra work myself than risk someone else screwing it up.)
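The arithmetic behind “if you’re 95% reliable, that means I can’t rely on you” is easy to make explicit. A minimal sketch — the 95% figure and the 20-workshop scale come from the text above; the assumption that each task succeeds or fails independently is mine, for illustration:

```python
# Chance that a 95%-reliable person completes n independent tasks
# with zero failures (independence assumed purely for illustration).
def flawless_run(p_success: float, n: int) -> float:
    return p_success ** n

p = flawless_run(0.95, 20)
print(f"{p:.2f}")  # ≈ 0.36 — about a 64% chance of at least one dropped task
```

In other words, "19 times out of 20" per task translates to roughly two-to-one odds of a catastrophic failure somewhere across twenty workshops, which is the point of the "absolute" framing.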
As far as I can tell, it’s hyperbolic discounting all over again — the person who wants to skip out on the meetup sees all of these immediate, local costs to attending, and all of these visceral, large gains to defection, and their S1 doesn’t properly weight the impact of those distant, cumulative effects (just like the person who’s going to end up with no retirement savings because they wanted those new shoes this month instead of next month). 1.01^n takes a long time to look like it’s going anywhere, and in the meantime the quick one-time payoff of 1.1 that you get by knocking everything else down to .99^n looks juicy and delicious and seems justified.
But something magical does accrue when you make the jump from 99% to 100%. That’s when you see teams that truly trust and rely on one another, or marriages built on unshakeable faith (and you see what those teams and partnerships can build, when they can adopt time horizons of years or decades rather than desperately hoping nobody will bail after the third meeting). It starts with a common knowledge understanding that yes, this is the priority, even — no, wait, especially — when it seems like there are seductively convincing arguments for it to not be. When you know — not hope, but know — that you will make a local sacrifice for the long-term good, and you know that they will, too, and you all know that you all know this, both about yourselves and about each other.
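The 1.01^n-versus-0.99^n intuition can be checked with a few lines of arithmetic. The per-round rates and the one-time 1.1 payoff are taken from the text; the particular horizons printed are my choice:

```python
# Steady cooperation compounds at 1% per round; a defection pays 1.1x
# once but knocks the ongoing rate down to a 1% loss per round.
def steady(n: int) -> float:
    return 1.01 ** n

def defect(n: int) -> float:
    return 1.1 * 0.99 ** n

for n in (5, 50, 100):
    print(n, round(steady(n), 3), round(defect(n), 3))
# The one-time payoff looks better only briefly: steady compounding
# overtakes it within about five rounds, and the gap widens from there.
```

This is the shape of the argument: the defector's curve starts ahead and decays, the cooperator's curve starts behind and compounds, so the S1-visible comparison (round zero) points in exactly the wrong direction.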
Proposed solution: Discuss, and then agree upon, and then rigidly and rigorously enforce a norm of perfection in all formal undertakings (and, correspondingly, be more careful and more conservative about which undertakings you officially take on, versus which things you’re just casually trying out as an informal experiment), with said norm to be modified/iterated only during pre-decided strategic check-in points and not on the fly, in the middle of things. Build a habit of clearly distinguishing targets you’re going to hit from targets you’d be happy to hit. Agree upon and uphold surprisingly high costs for defection, Hofstadter style, recognizing that a cost that feels high enough probably isn’t. Leave people wiggle room as in Problem 3, but define that wiggle room extremely concretely and objectively, so that it’s clear in advance when a line is about to be crossed. Be ridiculously nitpicky and anal about supporting standards that don’t seem worth supporting, in the moment, if they’re in arenas that you’ve previously assessed as susceptible to compounding. Be ruthless about discarding standards during strategic review; if a member of the group says that X or Y or Z is too high cost for them to sustain, believe them, and make decisions accordingly.
Caveat/skull: Obviously, because we’re humans, even people who reflectively endorse such an overall solution will chafe when it comes time for them to pay the price (I certainly know I’ve chafed under standards I fought to install). At that point, things will seem arbitrary and overly constraining, priorities will seem misaligned (and might actually be), and then feelings will be hurt and accusations will be leveled and things will be rough. The solution there is to have, already in place, strong and open channels of communication, strong norms and scaffolds for emotional support, strong default assumption of trust and good intent on all sides, etc. etc. This goes wrongest when things fester and people feel they can’t speak up; it goes much better if people have channels to lodge their complaints and reservations and are actively incentivized to do so (and can do so without being accused of defecting on the norm-in-question; criticism =/= attack).
Problem 5: Everything else
There are other models and problems in the mix — for instance, I have a model surrounding buy-in and commitment that deals with an escalating cycle of asks-and-rewards, or a model of how to effectively leverage a group around you to accomplish ambitious tasks that requires you to first lay down some “topsoil” of simple/trivial/arbitrary activities that starts the growth of an ecology of affordances, or a theory that the strategy of trying things and doing things outstrips the strategy of think-until-you-identify-worthwhile-action, and that rationalists in particular are crippling themselves through decision paralysis/letting the perfect be the enemy of the good when just doing vaguely interesting projects would ultimately gain them more skill and get them further ahead, or a strong sense based off both research and personal experience that physical proximity matters, and that you can’t build the correct kind of strength and flexibility and trust into your relationships without actually spending significant amounts of time with one another in meatspace on a regular basis, regardless of whether that makes tactical sense given your object-level projects and goals.
But I’m going to hold off on going into those in detail until people insist on hearing about them or ask questions/pose hesitations that could be answered by them.
Section 2 of 3: Power dynamics
All of the above was meant to point at reasons why I suspect trusting individuals responding to incentives moment-by-moment to be a weaker and less effective strategy than building an intentional community that Actually Asks Things Of Its Members. It was also meant to justify, at least indirectly, why a strong guiding hand might be necessary given that our community’s evolved norms haven’t really produced results (in the group houses) commensurate with the promises of EA and rationality.
Ultimately, though, what matters is not the problems and solutions themselves so much as the light they shine on my aesthetics (since, in the actual house, it’s those aesthetics that will be used to resolve epistemic gridlock). In other words, it’s not so much those arguments as it is the fact that Duncan finds those arguments compelling. It’s worth noting that the people most closely involved with this project (i.e. my closest advisors and those most likely to actually sign on as housemates) have been encouraged to spend a significant amount of time explicitly vetting me with regards to questions like “does this guy actually think things through,” “is this guy likely to be stupid or meta-stupid,” “will this guy listen/react/update/pivot in response to evidence or consensus opposition,” and “when this guy has intuitions that he can’t explain, do they tend to be validated in the end?”
In other words, it’s fair to view this whole post as an attempt to prove general trustworthiness (in both domain expertise and overall sanity), because — well — that’s what it is. In milieux like the military, authority figures expect (and get) obedience irrespective of whether or not they’ve earned their underlings’ trust; rationalists tend to have a much higher bar before they’re willing to subordinate their decision-making processes, yet still that’s something this sort of model requires of its members (at least from time to time, in some domains, in a preliminary “try things with benefit of the doubt” sort of way). I posit that Dragon Army Barracks works (where “works” means “is good and produces both individual and collective results that outstrip other group houses by at least a factor of three”) if and only if its members are willing to hold doubt in reserve and act with full force in spite of reservations — if they’re willing to trust me more than they trust their own sense of things (at least in the moment, pending later explanation and recalibration on my part or theirs or both).
And since that’s a) the central difference between DA and all the other group houses, which are collections of non-subordinate equals, and b) quite the ask, especially in a rationalist community, it’s entirely appropriate that it be given the greatest scrutiny. Likely participants in the final house spent ~64 consecutive hours in my company a couple of weekends ago, specifically to play around with living under my thumb and see whether it’s actually a good place to be; they had all of the concerns one would expect and (I hope) had most of those concerns answered to their satisfaction. The rest of you will have to make do with grilling me in the comments here.
Power and authority are generally anti-epistemic — for every instance of those-in-power defending themselves against the barbarians at the gates or anti-vaxxers or the rise of Donald Trump, there are a dozen instances of them squashing truth, undermining progress that would make them irrelevant, and aggressively promoting the status quo.
Thus, every attempt by an individual to gather power about themselves is at least suspect, given regular ol’ incentive structures and regular ol’ fallible humans. I can (and do) claim to be after a saved world and a bunch of people becoming more the-best-versions-of-themselves-according-to-themselves, but I acknowledge that’s exactly the same claim an egomaniac would make, and I acknowledge that the link between “Duncan makes all his housemates wake up together and do pushups” and “the world is incrementally less likely to end in gray goo and agony” is not obvious.
And it doesn’t quite solve things to say, “well, this is an optional, consent-based process, and if you don’t like it, don’t join,” because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked. In short, if someone’s building a coercive trap, it’s everyone’s problem.
“Over and over he thought of the things he did and said in his first practice with his new army. Why couldn’t he talk like he always did in his evening practice group? No authority except excellence. Never had to give orders, just made suggestions. But that wouldn’t work, not with an army. His informal practice group didn’t have to learn to do things together. They didn’t have to develop a group feeling; they never had to learn how to hold together and trust each other in battle. They didn’t have to respond instantly to command.
And he could go to the other extreme, too. He could be as lax and incompetent as Rose the Nose, if he wanted. He could make stupid mistakes no matter what he did. He had to have discipline, and that meant demanding — and getting — quick, decisive obedience. He had to have a well-trained army, and that meant drilling the soldiers over and over again, long after they thought they had mastered a technique, until it was so natural to them that they didn’t have to think about it anymore.”
But on the flip side, we don’t have time to waste. There’s existential risk, for one, and even if you don’t buy x-risk à la AI or bioterrorism or global warming, people’s available hours are trickling away at the alarming rate of one hour per hour, and none of us are moving fast enough to get All The Things done before we die. I personally feel that I am operating far below my healthy sustainable maximum capacity, and I’m not alone in that, and something like Dragon Army could help.
So. Claims, as clearly as I can state them, in answer to the question “why should a bunch of people sacrifice nontrivial amounts of their autonomy to Duncan?”
1. Somebody ought to run this, and no one else will. On the meta level, this experiment needs to be run — we have like twenty or thirty instances of the laissez-faire model, and none of the high-standards/hardcore one, and also not very many impressive results coming out of our houses. Due diligence demands investigation of the opposite hypothesis. On the object level, it seems uncontroversial to me that there are goods waiting on the other side of the unpleasant valley — goods that a team of leveled-up, coordinated individuals with bonds of mutual trust can seize that the rest of us can’t even conceive of, at this point, because we don’t have a deep grasp of what new affordances appear once you get there.
2. I’m the least unqualified person around. Those words are chosen deliberately, for this post on “less wrong.” I have a unique combination of expertise that includes being a rationalist, sixth grade teacher, coach, RA/head of a dormitory, ringleader of a pack of hooligans, member of two honor code committees, curriculum director, obsessive sci-fi/fantasy nerd, writer, builder, martial artist, parkour guru, maker, and generalist. If anybody’s intuitions and S1 models are likely to be capable of distinguishing the uncanny valley from the real deal, I posit mine are.
3. There’s never been a safer context for this sort of experiment. It’s 2017, we live in the United States, and all of the people involved are rationalists. We all know about NVC and double crux, we’re all going to do Circling, we all know about Gendlin’s Focusing, and we’ve all read the Sequences (or will soon). If ever there was a time to say “let’s all step out onto the slippery slope, I think we can keep our balance,” it’s now — there’s no group of people better equipped to stop this from going sideways.
4. It does actually require a tyrant. As a part of a debrief during the weekend experiment/dry run, we went around the circle and people talked about concerns/dealbreakers/things they don’t want to give up. One interesting thing that popped up is that, according to consensus, it’s literally impossible to find a time of day when the whole group could get together to exercise. This happened even with each individual being ostensibly willing to make personal sacrifices and doing things that are somewhat costly.
If, of course, the expectation is that everybody shows up on Tuesday and Thursday evenings, and the cost of not doing so is not being present in the house, suddenly the situation becomes simple and workable. And yes, this means some kids left behind, but the whole point of this is to be instrumentally exclusive and consensually high-commitment. You just need someone to make the actual final call — there are too many threads for the coordination problem of a house of this kind to be solved by committee, and too many circumstances in which it’s impossible to make a principled, justifiable decision between 492 almost-indistinguishably-good options. On top of that, there’s a need for there to be some kind of consistent, neutral force that sets course, imposes consistency, resolves disputes/breaks deadlock, and absorbs all of the blame for the fact that it’s unpleasant to be forced to do things you know you ought to but don’t want to do.
And lastly, we (by which I indicate the people most likely to end up participating) want the house to do stuff — to actually take on projects of ambitious scope, things that require ten or more talented people reliably coordinating for months at a time. That sort of coordination requires a quarterback on the field, even if the strategizing in the locker room is egalitarian.
5. There isn’t really a status quo for power to abusively maintain. Dragon Army Barracks is not an object-level experiment in making the best house; it’s a meta-level experiment attempting (through iteration rather than armchair theorizing) to answer the question “how best does one structure a house environment for growth, self-actualization, productivity, and social synergy?” It’s taken as a given that we’ll get things wrong on the first and second and third try; the whole point is to shift from one experiment to the next, gradually accumulating proven-useful norms via consensus mechanisms, and the centralized power is mostly there just to keep the transitions smooth and seamless. More importantly, the fundamental conceit of the model is “Duncan sees a better way, which might take some time to settle into,” but after e.g. six months, if the thing is not clearly positive and at least well on its way to being self-sustaining, everyone ought to abandon it anyway. In short, my tyranny, if net bad, has a natural time limit, because people aren’t going to wait around forever for their results.
6. The experiment has protections built in. Transparency, operationalization, and informed consent are the name of the game; communication and flexibility are how the machine is maintained. Like the Constitution, Dragon Army’s charter and organization are meant to be “living documents” that constrain change only insofar as they impose reasonable limitations on how wantonly change can be enacted.
Section 3 of 3: Dragon Army Charter (DRAFT)
Statement of purpose:
Dragon Army Barracks is a group housing and intentional community project which exists to support its members socially, emotionally, intellectually, and materially as they endeavor to improve themselves, complete worthwhile projects, and develop new and useful culture, in that order. In addition to the usual housing commitments (i.e. rent, utilities, shared expenses), its members will make limited and specific commitments of time, attention, and effort averaging roughly 90 hours a month (~3hr/day plus occasional weekend activities).
Dragon Army Barracks will have an egalitarian, flat power structure, with the exception of a commander (Duncan Sabien) and a first officer (Eli Tyre). The commander’s role is to create structure by which the agreed-upon norms and standards of the group shall be discussed, decided, and enforced, to manage entry to and exit from the group, and to break epistemic gridlock/make decisions when speed or simplification is required. The first officer’s role is to manage and moderate the process of building consensus around the standards of the Army — what they are, and in what priority they should be met, and with what consequences for failure. Other “management” positions may come into existence in limited domains (e.g. if a project arises, it may have a leader, and that leader will often not be Duncan or Eli), and will have their scope and powers defined at the point of creation/ratification.
Initial areas of exploration:
The particular object-level foci of Dragon Army Barracks will change over time as its members experiment and iterate, but at first it will prioritize the following:
- Physical proximity (exercising together, preparing and eating meals together, sharing a house and common space)
- Regular activities for bonding and emotional support (Circling, pair debugging, weekly retrospective, tutoring/study hall)
- Regular activities for growth and development (talk night, tutoring/study hall, bringing in experts, cross-pollination)
- Intentional culture (experiments around lexicon, communication, conflict resolution, bets & calibration, personal motivation, distribution of resources & responsibilities, food acquisition & preparation, etc.)
- Projects with “shippable” products (e.g. talks, blog posts, apps, events; some solo, some partner, some small group, some whole group; ranging from shortterm to yearlong)
- Regular (every 6-10 weeks) retreats to learn a skill, partake in an adventure or challenge, or simply change perspective
Dragon Army Barracks will begin with a move-in weekend that will include ~10 hours of group bonding, discussion, and norm-setting. After that, it will enter an eight-week bootcamp phase, in which each member will participate in at least the following:
- Whole group exercise (90min, 3x/wk, e.g. Tue/Fri/Sun)
- Whole group dinner and retrospective (120min, 1x/wk, e.g. Tue evening)
- Small group baseline skill acquisition/study hall/cross-pollination (90min, 1x/wk)
- Small group circle-shaped discussion (120min, 1x/wk)
- Pair debugging or rapport building (45min, 2x/wk)
- One-on-one checkin with commander (20min, 2x/wk)
- Chore/house responsibilities (90min distributed)
- Publishable/shippable solo small-scale project work with weekly public update (100min distributed)
… for a total time commitment of 16h/week or 128 hours total, followed by a whole group retreat and reorientation. The house will then enter an eight-week trial phase, in which each member will participate in at least the following:
- Whole group exercise (90min, 3x/wk)
- Whole group dinner, retrospective, and plotting (150min, 1x/wk)
- Small group circling and/or pair debugging (120min distributed)
- Publishable/shippable small group medium-scale project work with weekly public update (180min distributed)
- One-on-one checkin with commander (20min, 1x/wk)
- Chore/house responsibilities (60min distributed)
… for a total time commitment of 13h/week or 104 hours total, again followed by a whole group retreat and reorientation. The house will then enter a third phase where commitments will likely change, but will include at a minimum whole group exercise, whole group dinner, and some specific small-group responsibilities, either social/emotional or project/productive (once again ending with a whole group retreat). At some point between the second and third phase, the house will also ramp up for its first large-scale project, which is yet to be determined but will be roughly on the scale of putting on a CFAR workshop in terms of time and complexity.
Should the experiment prove successful past its first six months, and worth continuing for a full year or longer, by the end of the first year every Dragon shall have a skill set including, but not limited to:
- Above-average physical capacity
- Above-average introspection
- Above-average planning & execution skill
- Above-average communication/facilitation skill
- Above-average calibration/debiasing/rationality knowledge
- Above-average scientific lab skill/ability to theorize and rigorously investigate claims
- Average problem-solving/debugging skill
- Average public speaking skill
- Average leadership/coordination skill
- Average teaching and tutoring skill
- Fundamentals of first aid & survival
- Fundamentals of financial management
- At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill)
- At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)
Furthermore, every Dragon should have participated in:
- At least six personal growth projects involving the development of new skill (or honing of prior skill)
- At least three partner or small-group projects that could not have been completed alone
- At least one large-scale, whole-army project that either a) had a reasonable chance of impacting the world’s most important problems, or b) caused significant personal growth and improvement
- Daily contributions to evolved house culture
Speaking of evolved house culture…
Because of both a) the expected value of social exploration and b) the cumulative positive effects of being in a group that’s trying things regularly and taking experiments seriously, Dragon Army will endeavor to adopt no fewer than one new experimental norm per week. Each new experimental norm should have an intended goal or result, an informal theoretical backing, and a set reevaluation time (default three weeks). There are two routes by which a new experimental norm is put into place:
- The experiment is proposed by a member, discussed in a whole group setting, and meets the minimum bar for adoption (>60% of the Army supports, with <20% opposed and no hard vetoes)
- The Army has proposed no new experiments in the previous week, and the Commander proposes three options. The group may then choose one by vote/consensus, or generate three new options, from which the Commander may choose.
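The adoption bar in the first route is concrete enough to express directly. Here's a minimal sketch; the function name and ballot representation are my own, and I'm assuming the percentages are computed over all members (abstentions included), which the charter doesn't explicitly specify:

```python
def norm_adopted(support: int, opposed: int, abstain: int, hard_vetoes: int) -> bool:
    """Check a proposed norm against the charter's adoption bar:
    >60% of the Army in support, <20% opposed, and no hard vetoes."""
    total = support + opposed + abstain
    if total == 0 or hard_vetoes > 0:
        return False
    return support / total > 0.60 and opposed / total < 0.20

# A 10-person Army: 7 in favor, 1 opposed, 2 abstaining, no vetoes
print(norm_adopted(7, 1, 2, 0))  # → True

# Same vote, but one member exercises a hard veto
print(norm_adopted(7, 1, 2, 1))  # → False
```

Note that under this reading, a 6-2-2 split fails on both counts: 60% support doesn't clear the ">60%" bar, and 20% opposition doesn't stay under the "<20%" ceiling.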
Examples of early norms the house is likely to try out from day one (hit the ground running):
- The use of a specific gesture to greet fellow Dragons (house salute)
- Various call-and-response patterns surrounding house norms (e.g. “What’s rule number one?” “PROTECT YOURSELF!”)
- Practice using hook, line, and sinker in social situations (three items other than your name for introductions)
- The anti-Singer rule for open calls-for-help (if Dragon A says “hey, can anyone help me with X?” the responsibility falls on the physically closest housemate to either help or say “Not me/can’t do it!” at which point the buck passes to the next physically closest person)
- An “interrupt” call that any Dragon may use to pause an ongoing interaction for fifteen seconds
- A “culture of abundance” in which food and leftovers within the house are default available to all, with exceptions deliberately kept as rare as possible
- A “graffiti board” upon which the Army keeps a running informal record of its mood and thoughts
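The anti-Singer rule above is effectively a small protocol, so it can be sketched as one. Everything here (the function name, the example housemates, and representing "physically closest" as a pre-sorted list) is invented for illustration:

```python
def resolve_help_call(housemates_by_distance, can_help):
    """Walk housemates from physically closest to farthest; each must
    either take the task or explicitly pass ("Not me/can't do it!")."""
    for name in housemates_by_distance:
        if can_help(name):
            return name  # this Dragon takes responsibility
        # otherwise they say "Not me/can't do it!" and the buck passes
    return None  # nobody in the house could help

# Hypothetical example: B is closest but busy, so the buck passes to C
helper = resolve_help_call(["B", "C", "D"], lambda n: n != "B")
print(helper)  # → C
```

The point of the design is that responsibility is never diffuse: at every moment exactly one person holds the buck, and declining requires an explicit, audible pass.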
Dragon Army Code of Conduct
While the norms and standards of Dragon Army will be mutable by design, the following (once revised and ratified) will be the immutable code of conduct for the first eight weeks, and is unlikely to change much after that.
- A Dragon will protect itself, i.e. will not submit to pressure causing it to do things that are dangerous or unhealthy, nor wait around passively when in need of help or support (note that this may cause a Dragon to leave the experiment!).
- A Dragon will take responsibility for its actions, emotional responses, and the consequences thereof, e.g. if late will not blame bad luck/circumstance, if angry or triggered will not blame the other party.
- A Dragon will assume good faith in all interactions with other Dragons and with house norms and activities, i.e. will not engage in strawmanning or the horns effect.
- A Dragon will be candid and proactive, e.g. will give other Dragons a chance to hear about and interact with negative models once they notice them forming, or will not sit on an emotional or interpersonal problem until it festers into something worse.
- A Dragon will be fully present and supportive when interacting with other Dragons in formal/official contexts, i.e. will not engage in silent defection, undermining, halfheartedness, aloofness, subtle sabotage, or other actions which follow the letter of the law while violating the spirit. Another way to state this is that a Dragon will practice compartmentalization — will be able to simultaneously hold “I’m deeply skeptical about this” alongside “but I’m actually giving it an honest try,” and postpone critique/complaint/suggestion until predetermined checkpoints. Yet another way to state this is that a Dragon will take experiments seriously, including epistemic humility and actually seeing things through to their ends rather than fiddling midway.
- A Dragon will take the outside view seriously, maintain epistemic humility, and make subject-object shifts, i.e. will act as a behaviorist and agree to judge and be judged on the basis of actions and revealed preferences rather than intentions, hypotheses, and assumptions (this one’s similar to #2 and hard to put into words, but for example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior). Another way to state this is that a Dragon will embrace the maxim “don’t believe everything that you think.”
- A Dragon will strive for excellence in all things, modified only by a) prioritization and b) doing what is necessary to protect itself/maximize total growth and output on long time scales.
- A Dragon will not defect on other Dragons.
There will be various operationalizations of the above commitments into specific norms (e.g. a Dragon will read all messages and emails within 24 hours, and if a full response is not possible within that window, will send a short response indicating when the longer response may be expected) that will occur once the specific members of the Army have been selected and have individually signed on. Disputes over violations of the code of conduct, or confusions about its operationalization, will first be addressed one-on-one or in informal small group, and will then move to general discussion, and then to the first officer, and then to the commander.
Note that all of the above is deliberately kept somewhat flexible/vague/openended/unsettled, because we are trying not to fall prey to GOODHART’S DEMON.
- The initial filter for attendance will include a one-on-one interview with the commander (Duncan), who will be looking for a) credible intention to put forth effort toward the goal of having a positive impact on the world, b) likeliness of a strong fit with the structure of the house and the other participants, and c) reliability à la financial stability and ability to commit fully to longterm endeavors. Final decisions will be made by the commander and may be informally questioned/appealed but not overruled by another power.
- Once a final list of participants is created, all participants will sign a “free state” contract of the form “I agree to move into a house within five miles of downtown Berkeley (for length of time X with financial obligation Y) sometime in the window of July 1st through September 30th, conditional on at least seven other people signing this same agreement.” At that point, the search for a suitable house will begin, possibly with delegation to participants.
- Rents in that area tend to run ~$1100 per room, on average, plus utilities, plus a 10% contribution to the general house fund. Thus, someone hoping for a single should, in the 85th percentile worst case, be prepared to make a ~$1400/month commitment. Similarly, someone hoping for a double should be prepared for ~$700/month, and someone hoping for a triple should be prepared for ~$500/month, and someone hoping for a quad should be prepared for ~$350/month.
- The initial phase of the experiment is a six month commitment, but leases are generally one year. Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/utilities/house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected. After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of “keep paying until you’ve found your replacement.” (This will likely be easiest to enforce by simply having as many names as possible on the actual lease.)
- Of the ~90hr/month, it is assumed that ~30 are whole-group, ~30 are small group or pair work, and ~30 are independent or voluntarily-paired work. Furthermore, it is assumed that the commander maintains sole authority over ~15 of those hours (i.e. can require that they be spent in a specific way consistent with the aesthetic above, even in the face of skepticism or opposition).
- We will have an internal economy whereby people can trade effort for money and money for time and so on and so forth, because heck yeah.
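The per-person rent figures above follow from base rent plus utilities plus the 10% house-fund contribution, split across a room's occupants. A quick check; note that the ~$170/room utilities figure is my own placeholder chosen to reproduce the charter's ballpark numbers, and I'm assuming the 10% applies to the whole room total rather than rent alone:

```python
def monthly_cost_per_person(base_rent: float, utilities: float, occupants: int) -> float:
    """Worst-case monthly commitment per person for one room:
    (base rent + utilities) plus a 10% house-fund contribution, split evenly."""
    room_total = (base_rent + utilities) * 1.10  # 10% house fund on top
    return room_total / occupants

# Assuming ~$1100 base rent and ~$170/room utilities (placeholder):
for occupants, label in [(1, "single"), (2, "double"), (3, "triple"), (4, "quad")]:
    # the single comes out near the charter's ~$1400 figure
    print(f"{label}: ~${monthly_cost_per_person(1100, 170, occupants):.0f}/month")
```

Under those assumptions the single lands at ~$1397, the double at ~$699, and the quad at ~$349, consistent with the rounded figures in the bullet above (the ~$500 triple figure evidently includes some extra margin).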
Conclusion: Obviously this is neither complete nor perfect. What’s wrong, what’s missing, what do you think? I’m going to much more strongly weight the opinions of Berkeleyans who are likely to participate, but I’m genuinely interested in hearing from everyone, particularly those who notice red flags (the goal is not to do anything stupid or meta-stupid). Have fun tearing it up.
(sorry for the abrupt cutoff, but this was meant to be published Monday and I’ve just … not … been … sleeping … to get it done)