A Tale of Two Dystopias: Order and Chaos on the Electronic Frontier

Kevin Bankston
22 min read · Oct 27, 2016


Or, “What Science Fiction Taught Me (and Can Teach You) About the Future of Technology, Policy, and the Encryption Debate”

This article was originally delivered as a keynote address for The Frontiers of Cybersecurity Policy and Law Conference at The Robert S. Strauss Center for International Security and Law, University of Texas at Austin Law School, February 5, 2016, and published in the American Journal of Criminal Law. Please cite as A Tale of Two Dystopias: Order and Chaos on the Electronic Frontier, 44 Am. J. Crim. Law 156 (2016). A note on notes: the footnotes from the law journal version of this article have been converted for this version into links wherever possible and appropriate, with some references modified, removed or added as necessary.

Good evening everyone, and thank you to Bobby Chesney and the teams at the Strauss Center and UT Law for having me here and for organizing an exceptional day of panels. I’m admittedly at a bit of a loss to find myself here at my alma mater, the University of Texas, nominally keynoting — lecturing! — to a room filled with so many of the people I have worked with (or against) over the past fifteen years. There are mentors here, mentees, colleagues from my former organizations, friends, frenemies, esteemed opponents I’ve litigated against…. It’s an incredible group, which leaves me incredibly humbled and more than a little freaked out.

If this were some public keynote with a bunch of civilians I could just do my usual song and dance on the issue of the moment — most likely encryption and “Going Dark,” repeating arguments you’ve all heard before — and that would be that. But I owe you guys something more than that: some grand holistic theory of everything that pulls together all the threads from today’s panels, or some deeply personal and idiosyncratic take that gets to the meat of why I do what I do. I foolishly will try to do a little bit of both tonight, and will probably fail to do either one effectively, but success or failure, I hope the next 25 minutes will provide some light entertainment and some minor food for thought to complement your dinners.

I’m going to talk about the path that started for me here at UT and brought me to this work — perhaps prompting you to stop and reflect a bit on how you got here — and I’m going to talk about where I think we are now as a community, and about where I think we are going. Amongst other things, this talk will be a bit of a love letter to — and a friendly argument with — the Electronic Frontier Foundation (EFF), the organization where I spent most of my career and that basically raised me. And as will be no surprise to anyone here who knows me well, it’s also going to address one of my great passions: science fiction, and how it can be used as a tool to help us think about today’s problems.

Yep. Science fiction. Get used to it, non-nerds.

The topic of science fiction came up just a couple of weeks ago at another cybersecurity event, the Hewlett Foundation’s cybersecurity grantees’ meeting in Los Angeles. There, thanks to the foundation’s president Larry Kramer and its cybersecurity program officer Eli Sugarman, I was lucky enough to meet Walter Parkes, the screenwriter of WarGames and Sneakers and the producer of Minority Report amongst many other hit movies. If you want a good example of the feedback loop between fiction, especially science fiction, and real-world tech and policy, you can’t get much better than Walter, since WarGames inspired the Computer Fraud and Abuse Act that our panelists were arguing about earlier, Sneakers inspired countless security engineers and hackers and spooks to do what they do, and the movie version of Minority Report is probably the second most-cited fictional work in policy discourse about surveillance and privacy (and an inspiration for a lot of modern interface design to boot). What’s the first most-cited work? I think we all know the answer, but we’ll get to that.

Walter, being from Hollywood, is all about story. Beginning, middle, and end, strong protagonists, with clear conflicts and relatable goals, and right now he’d tell me to stop with all the throat clearing and begin the beginning. So here’s where it starts, for me:

Fade in on the UT campus, spring of 1993, my freshman year. It was easily one of the happiest years of my life. Coming from Southern Louisiana, a place where progressive bookish sci-fi nerds who don’t like football didn’t really fit in, I finally felt like I was home, and my only job was reading and thinking and writing for no other purpose than to explore — a freedom for which I thank Plan II, UT’s incredible liberal arts honors program. It was then that I was put on the path that led me here, today, though I didn’t know it at the time. I expected I’d just end up an English professor — until that spring of 1993, which changed the course of my life. And it wouldn’t have happened without science fiction.

It’s a truism at this point that science fiction often predicts or inspires real world developments, whether it’s famous sci-fi author Arthur C. Clarke coming up with the idea of communications satellites in the 40s, or sadly less famous sci-fi author John Brunner predicting computer worms in the 70s, or William Gibson in his early-80s cyberpunk classic Neuromancer coining the term “cyberspace” long before any of us were exposed to the modern Internet. And if you don’t know what cyberpunk is, well, you’re in luck, because 18 year-old me would be super-excited to tell you all about it right now because he’s totally a huge fan!

The cyberpunk science fiction of the 80s and 90s tossed out the spaceships and aliens and shiny exciting far futures of previous generations to look closer to home, painting a dark dystopian picture of a globally networked near future where multinational corporations were as powerful as states, states looked a lot more like corporations, and ordinary people’s lives were dictated by transnational networks of data and capital far beyond their control or comprehension. But the prototypical hacker protagonist, that anti-establishment hero, was not like those ordinary people. Only he — unfortunately it was almost always a he — could get behind the data, and make the technology work for him, and turn the tables on the powerful. Indeed, the hacker protagonist had become such an established trope by the early 90s that Neal Stephenson’s cyberpunk classic Snow Crash — which came out in the fall of my freshman year — had a hacker protagonist jokingly named Hiro Protagonist.

The teenage me just gobbled this stuff up. So too did several of the original founders of EFF, which was being born just as I was finishing high school. Frankly, if you want to understand why EFF was founded, who its constituency is, and why its culture is what it is now — if you want to understand the birth of the digital rights movement in the US, really — it very much helps to have read the cyberpunk fiction of the 80s and 90s. Indeed, EFF’s tag line for its recent anniversary wears the organization’s science-fictional origin on its sleeve. That slogan? “EFF: fighting dystopia for 25 years.”

But I digress, and Walter Parkes is getting impatient. Suffice to say, cyberpunk left me well primed when four things happened to set me on the path that brought me here.

The first thing was simply this: I got on the Internet for the very first time, to use email for the very first time. And coming fresh off of Gibson’s “cyberspace” and Stephenson’s “metaverse,” it kind of blew my mind.

Second, I read and was inspired by The Hacker Crackdown, a non-fiction book by science fiction writer (and Austinite) Bruce Sterling. It was all about the emerging hacker subculture; the Secret Service’s overreaching response to it, including the seizure of all the computers at Austin-based Steve Jackson Games (since they hosted a BBS to which someone had posted some proprietary AT&T documents); and about the foundation of EFF, which was initially created to raise the money and find the lawyers to fight for Steve Jackson Games to get its computers back. Reading The Hacker Crackdown was kind of like reading one of the cyberpunk novels that Sterling and Gibson and Stephenson were writing at the time, but it was real. And just to tie it all up in a nice bow, one of Steve Jackson’s most popular role-playing games at the time, inspired by those same novels, was called Cyberpunk. That book, well, blew my mind.

Third thing: being a fan of Bruce Sterling, I also picked up the first issue of Wired Magazine since he was on the cover, and then its second issue. That second issue had on the cover the so-called “cypherpunks,” including one of the founders of EFF, who were fighting the first round of the original Crypto Wars. (Again, note the EFF/cyberpunk connection.) Those magazines, which I distinctly remember buying at the UT student union not far from here, illustrated in a way that I hadn’t quite realized before that many of the technologies I’d been reading about in science fiction were real, or very close to it. And…it…blew…my…mind.

After that, fourth and finally: my computer nerd dorm neighbors, who are still some of my best friends, showed me, the liberal arts wussie whose primary understanding of technology came from reading sci-fi, this funny new computer program. It was called Mosaic — the first web browser most of us had ever seen, Netscape’s predecessor, and the first concrete glimpse of what the next twenty years would look like. It — yes, that’s right — blew my mind.

As an English Major, my first response was: wow, I would love to see a hypertext edition of James Joyce’s Ulysses! (I was big on Joyce at the time.)

My second response, though, was: I don’t want to be an English professor. I want to help build this. I want to help defend it and grow it. And if you’d told me then that I’d actually end up doing that by working for nearly a decade at the EFF in San Francisco — and would eventually end up running a whole new digital rights shop in Washington, DC, called the Open Technology Institute, fighting for a more open and secure Internet — I would have slapped you in the face and called you a liar for getting my hopes up.

Yet here we are. Flash-forward to now, twenty-odd years later. And in a lot of ways, we’re navigating the cyberpunk future that was predicted in the 90s, although with some unanticipated bright spots (see the fall of the Soviet Union, which loomed large as a state power in 80s cyberpunk), some tragic curveballs (see 9/11), and some technological shifts that we didn’t predict (see the rise of the smartphone).

But again, Walter is nagging me: what’s the story, Kevin? These nice people want to know: who are the protagonists and what’s the conflict? Cut to the chase! And I’ll do that, though it’s not a chase. It’s a race. It’s a race against two futures — two dystopias — and the protagonists of this tale of two dystopias are us, here, in this room, and everyone else in our community that is trying to preserve the security of the Internet.

One of the dystopias is from a science fiction novel that you all know: it’s 1984, which I pointed to earlier as the most-cited fiction in privacy policy circles. 1984 was also cited by Walter Parkes in LA as a great example of how narrative can take a set of complex ideas and simplify them, package them, and create a powerful shorthand that everyone understands. “Big Brother” has become our go-to signifier when warning of a technology-enabled surveillance society. And it is still a relevant warning, as we’re looking down the barrel of the kind of totalitarian surveillance dystopia that 1984 portrays, but updated from a telescreen-centric to an Internet-centric mode — a world where all of the public expression and private communication flowing over the Internet can easily be monitored and manipulated and made to serve the interests of the state. We see the most virulent strains of this future already reaching into our present, especially in Russia and China, whose governments are wholly unapologetic in trying to turn the Internet into the most powerful tool of social control ever devised. We see a more friendly, velvet-gloved, Brave New World-ish version of that future infiltrating the West, as intelligence services have begun to live on the Internet backbone, and mass surveillance has replaced targeted surveillance, and many are left wondering where our dreams in the 90s of the Internet as an open frontier have gone. (Imagine here a footnote to Jennifer Granick’s great keynote at Black Hat last year, “The End of the Internet Dream,” or to the less apt but still hilarious “Dream of the 90s” sketch from the comedy show Portlandia, both of which I recommend wholeheartedly.)

Or, to reference another much-beloved science fiction: after several decades of mostly unconstrained growth of the global open Internet, the Empire is Striking Back. States are acting, in some cases very aggressively, to reassert their digital sovereignty.

In contrast to 1984, the second dystopia is one that none of you have ever heard of, from a science fiction book that none of you have read (though it, and the other books in the trilogy of which it is a part, are available for free online if you Google for them). It’s called Maelstrom, by Peter Watts, one of my favorite sci-fi novelists writing today. What is the Maelstrom, you ask? Well, imagine today’s Internet with all its emergent insecurities, all the spooks and spies and hackers and criminals and state attackers, and botnets and phishing scams and mass data breaches and backdoors and bad guys. Imagine all the ugly trends outlined in Professor Ron Deibert’s Black Code, a harrowing portrait of the modern Internet that I highly recommend, and then multiply them by a thousand. Imagine us building new layer upon new layer over the chaotic, crumbling foundation that is the Internet, continually trying to make it more secure and continually failing, as the botnets we talked about in today’s panel grow and multiply into a pervasive infestation of self-evolving automated agents, an electronic war of all against all, red in tooth and claw, a digital state of nature. Imagine the Internet as a failed state, where no one wants to go but where everyone has to live, because we’ve built our entire society on top of it. That is the Maelstrom.

And those are our two dystopias. 1984 versus the Maelstrom. Order vs. Chaos. The Scylla and Charybdis that we as a community need to navigate, because none of us want either of those futures. Sure, some of us fall a bit more on the order side or the chaos side — or as Dahlia Lithwick put it in one of her more popular columns, some of us are order muppets and some of us are chaos muppets. (Google it.) But I know that my oft-time opponents in government ultimately don’t want to live in the liberty-eroding 1984, any more than I want to live in the anarchic chaos of the Maelstrom where no one is safe. And we need to work together to plot a course that avoids those futures. Or, even worse, a combination of those futures, where the totalitarian states of 1984 ride atop the Maelstrom, and the only choice left to anyone — if they have a choice at all — is to be left in the utter chaos outside the walls of the state, or to embrace Big Brother. It’s that result that I actually fear the most.

So I look at the landscape, as many of you do, and try to figure out how to move the right levers to avoid those futures. But every lever you can move makes another lever move in the other direction. For every action, there’s an equal and opposite reaction — indeed, many of those actions and reactions are things that have been in the news, and that we have been talking about in our panels, just in the past few days. Take, for example, a company like Facebook turning on HTTPS for all of its traffic. That encryption prevents foreign countries — like, say, the UK, just in the news today on this issue — from sniffing their citizens’ Facebook messages as they pass through their territory to and from US servers. That in turn prompts attempts by those countries to either regulate against encryption, or demand that the data be stored locally, or try to apply their laws outside of their territory to demand the handover of data stored in the US, backed by the threat of throwing Facebook execs in jail if they ever visit. Or, consider the example of users turning to end-to-end encrypted apps like iMessage or WhatsApp — which prompts demands that the companies not offer such encryption or build a backdoor into it, and also prompts government stockpiling of vulnerabilities to hack into the phones as an alternative to wiretapping. Pushback on one issue leads to more pushback on another issue, back and forth, tit for tat, a never-ending game of whack-a-mole.

So how do we respond? Well, we discussed one of the current strategies during yesterday’s dinner panel: the idea of “letting off steam” and trying to give the governments more of the data they say they need in the least harmful ways, to avoid the most harmful ways. Last night we discussed the example of trying to come up with a new framework for cross-border data transfers. If we can come up with a way for Google or Facebook or whoever to voluntarily hand over data to the UK in a way that is consistent with human rights and perhaps even raises standards, the thinking goes, then we can help head off bad things like data localization or anti-encryption mandates or American Internet execs getting thrown in jail. Some privacy-minded technologists and advocates have similarly argued that if we can just routinize and standardize government hacking under clear and protective standards — I mean, they are doing it already anyway, just under weak, unclear, secret standards, right? — then we can head off bad anti-encryption policies.

That’s the thinking, anyway, and I have to admit that I’m very sympathetic to this approach. I’m one of the people trying to figure out what that reasonable compromise might be on cross-border data transfers. I’m one of those people who, without endorsing the idea of government hacking, wants to expose the practice to real regulation and ensure that it is included in the calculus as we continue to have this never-ending “Going Dark” debate. I’m scared we won’t make any progress and will continue to lose ground on these issues if we’re not willing to have those sorts of conversations. I’m scared we’ll fail to protect the technologies that we need to protect ourselves.

That said, I’ll also admit that I have some serious concerns about this approach because I fear the governments will not be placated by our concessions. I think we may let off steam by conceding in one area, yet still fail to reduce the pressure elsewhere. The cross-border issue is a good example: we’re willing to compromise to arrive at a human rights-respecting solution when it comes to accessing stored content in criminal investigations, and yet it’s clear from today’s story in the Washington Post that the UK’s priority is a wiretapping solution for its national security investigations. The privacy advocates who are willing to engage in the cross-border data discussion are doing it because we’re concerned about the alternative of forced data localization, and yet data localization very well may happen anyway — due to that national security wiretapping need that we’re unwilling to address, or due to an ultimate failure of the EU data protection safe harbor that allows US companies to take in European data, or due to a variety of factors we can’t necessarily predict. Similarly, we may end up legitimizing government hacking through our attempts to expose and regulate it, while still not achieving the goal of getting law enforcement to stop pushing for limits on encryption.

This is why many of my friends and mentors — including folks in this room, like ACLU’s Chris Soghoian or my fellow EFF alum Jennifer Granick — would say “Don’t compromise. Just fight it. Just fight it all. Fight on every front and don’t give any ground.” That’s basically the way I was raised at EFF, and they do have a point. Why should we be working to give governments more access to data, when the overall trend is in the direction of a golden age of surveillance, where the information they don’t have access to is just a small portion of a massive amount of communications data that didn’t even exist twenty years ago and that they can now easily obtain? Why should we be the reasonable compromisers when the state’s appetite for data is essentially unquenchable, because the operating theory in law enforcement and intelligence seems to be that every scrap of data that exists or will exist should be technically accessible to them at any time, that they have a natural right to it, even though until very recently nearly the entire mass of human communication was always outside of their reach? Hell, electronic eavesdropping of any sort was an incredibly rare oddity until the back half of the last century, and yet now we are asked to build our entire world around the idea of guaranteed government access to all of our private communications and records? Screw them!

…is what they would say.

And I hear them. And I know where they are coming from. “Just fight it” was where I was as a younger man, as a litigator. Being young, and being a litigator, makes it easy to see the world in such blacks and whites. And it may be where I end up as an older man, assuming I’ve tired of the horse-trading and compromise and shifting alliances of the policy game, and have lost faith that the rules of law and reason can actually restrain the powerful.

But right now I will do whatever I can to protect the right to encrypt above all others, even if that means compromising on some other things, even if pushing them back on that front means they push us back on other fronts. Because here’s the nub: only encryption helps us avoid both dystopias. Only encryption. It protects us both from the Maelstrom of an insecure Internet and the 1984 of an unconstrained state that can see into every nook and cranny. It offers the middle way between order and chaos, the middle way between security of the state and security from the state. And right now that’s where I’m pointing all my guns, right down the middle, in defense of crypto.

And maybe I will tire of the middle, at some point. Maybe I will finally get sick of participating in discussions about cross-border data exchange and government hacking that make the “just fight it all” side of me worry. But right now, it’s where I’m standing, and I invite you to join me there. Perhaps you’d prefer not to. Your mileage may vary, and your calculus may be different, and I’ll just have to keep having all of these arguments, both with the people I’m fighting and with the people who think I’m not fighting hard enough.

But it’s late and it’s been a long day, so let’s not argue about it now. Walter is telling me that this story has climaxed, and it’s about time to wrap things up. But before I let you go, let me offer two closing thoughts that might be useful in your thinking about our future, inspired by two science fiction novels about the future of our thinking.

The first is a newer sci-fi novel called Nexus, by Ramez Naam. Naam is an interesting character. He spent 13 years at Microsoft leading teams developing Internet Explorer and Outlook and their search engine Bing, but really wanted to be a sci-fi writer. Nexus is his first book, and although it’s maybe not the most literary novel — it reads like an airport techno-thriller on sci-fi steroids — its premise is a nice take on an old idea, the hybridization of man and machine. The titular technology, Nexus, is essentially a nano-technological drug that integrates with your brain and Internet-enables it, allowing — amongst other things — people to silently communicate with each other, to share complex skills and masses of knowledge in an instant, and even to create group minds, just as you can network computers.

It’s pretty far out, but it’s not totally nuts. In fact, Naam has some fascinating afterwords in the book and its sequels laying out the current state of brain/computer interface research, and there are a lot of surprising things happening in labs right now, such as: paralyzed people moving robot limbs using their thoughts; blind people whose vision has been restored by inputting electrical signals directly into their visual cortices; fMRI brain scanners that ‘read’ what a person is seeing and can reconstruct the video; and brain-damaged rats whose memories have been restored with computer chips implanted in the hippocampus — whose memories can be recorded and replayed in their brains at any time! Just last week, I read a story about researchers using electrodes to monitor a person’s brain and then matching that data against data collected from previous brains to figure out in near real-time what the person is looking at. And then there is perhaps my favorite brain research story, about the two University of Washington researchers who in 2013 cooperatively played a video game by networking their freaking brains. One researcher could see a video game display but had no controls, while the other had the controls but no display. When the one watching the display wanted to shoot, an EEG cap on his skull would transmit a signal across campus, where a magnetic stimulator on the other researcher’s head would send a targeted pulse through his motor cortex and cause his finger to twitch and hit the fire button.

Amazing, right? I know, it’s still a far cry from having Internet-enabled brains, but that kind of technology is foreseeable — perhaps even inevitable. So one science fictional thought experiment for you to consider while you finish your dessert: how would you approach cybersecurity policy if the things we were securing were our brains, and the data we were securing were our thoughts? How would you feel about not being able to encrypt the private thoughts that you share with your family? How would you feel about the possibility of a government secretly implanting malware in your brain? Sure, you can say that the whole idea is just science fiction nonsense, but like all good science fiction, it’s a metaphor that speaks to us about how we live today. Regardless of whether you think our brains will ever literally merge with computers, in a way they already have. As the unanimous Supreme Court recently recognized in Riley v. California, our mobile devices now house absolutely enormous amounts of private thoughts, images, and information, amounts of data we couldn’t even conceive of a generation ago. Every day, these devices are looking more and more like our outboard brains. At what point does that change how we treat them, as a policy matter?

That’s closing thought number one. The second and final closing thought was inspired by Charlie Stross’ Accelerando, perhaps my favorite sci-fi novel of the 20-aughts, and certainly the one with the most crazy new ideas per square inch. In that book, several characters have an entertaining debate — one that we could also have ourselves right now — about whether or not the Singularity has already happened. If you’re not familiar with the Singularity, a term bandied about by AI researchers and sci-fi nerds alike, it’s the name for the hockey stick part of the curve in our technological growth, where the exponential progress of Moore’s law hits a point of accelerating acceleration and the capacity of our computing power begins to exceed our own grasp and starts improving on itself, and beyond that point we are supplanted by thinking machines that we can’t even comprehend, and we are unable to predict anything anymore because what’s going to happen next is so far beyond our ken.

Some people jokingly call this “The Rapture of the Nerds.”

No, seriously, this is a thing. A thing that some of the architects of our digital lives, including some people who make big decisions at Google, take very seriously. And just as you need to understand cyberpunk to understand the origin of American digital rights groups, you kind of need to understand the Singularity if you truly want to grok Silicon Valley’s weirdo strain of technolibertarian utopianism.

But anyway, back to Accelerando. These people in the book are wondering, seriously arguing at length, about whether they’re post-Singularity. And the joke is this: they are having this argument on a laser-propelled spaceship the size of a can of soup on its way to the next star system, and the reason they are able to have that argument on that spaceship is that they aren’t people at all — they are the digitally uploaded brains of people.

Kind of a crazy idea, right? But there’s a real truth inside the crazy, and it’s this: you can’t tell that you’re in it when you’re in it. We’ve learned to take so much of our technology for granted and so quickly that if a time traveler from even just ten years ago visited us, they’d be totally disoriented by it, and if we went back even ten years, we’d be kind of helpless without it. Stross’ joke is the sci-fi version of an older joke that was central to a great commencement speech by not-quite-science-fiction author David Foster Wallace, entitled “This is Water.” (You should Google that too, it’s great.) The joke is: two young fish are swimming one day when an older fish passes by and says “how’s the water?” After the old fish passes, one fish turns to the other and asks him: “What the hell is water?”

We have told ourselves so many times that we’re in the midst of an epochal shift in the nature of human society and its relationship to technology that we’ve forgotten how true it is. This is water. We are in it right now. And it’s very easy to look at these seismic shifts and feel powerless in the face of them. But it’s actually the opposite. We are only at the beginning of this new era, still in the early days of what we currently call the Internet, and complex systems are very sensitive to initial conditions. That’s what people mean when they talk about the “butterfly effect.” The decisions we make now matter more than we know. We are the butterflies.

Or let me put it another way. The NSA has this acronym: NOBUS. It stands for “Nobody But Us,” and it refers to software vulnerabilities that the NSA thinks only the NSA will ever discover and exploit. I think it’s a profoundly flawed concept, as demonstrated at the 2013 Chaos Communication Congress when Jake Appelbaum did a keynote on the Snowden docs laying out the NSA’s catalog of exploits, and the vulnerability that one of those exploits relied on had been the subject of a research presentation just the day before. Someone other than “us” had found it.

But we don’t need to argue about NOBUS right now. Right now, I’m using the phrase Nobody But Us in a very different way. Right now, what I’m here to say is that Nobody But Us is going to set the initial conditions for the future of the Internet. Nobody But Us — those of us in this room, and our communities outside of this room, and our opponents who want to abuse the Internet to control us or steal from us or attack us — is going to determine which science fictional future we’re going to live in, whether it looks like the Maelstrom, or Big Brother, or both, or neither. Nobody But Us is going to create the future that we want. We are the butterflies. This is water. It’s up to us, all of us, chaos muppets and order muppets alike, to fight dystopia. So join me. Join together. Fight the Maelstrom. Fight 1984. Fight for your off-board brain. And read more science fiction. You’ll be glad you did.

Kevin Bankston (@kevinbankston) is the director of the Open Technology Institute at New America, which works in the public interest to ensure that all communities have access to an Internet that is both open and secure. He also convenes a book club for fellow tech policy professionals who enjoy science fiction, the book selections for which can be found at @techpoliscifi.
