Software, It’s a Thing

This is the text of a speech I gave as the opening address to the Library of Congress’s Digital Preservation 2014 conference on July 22 in Washington, DC. The audience was composed of professional archivists, technologists, and others who work in museums, libraries, universities, and institutions charged with what we generally term “cultural memory.” Where does software fit into that? What does it mean to think of software as a made thing, a wrought human artifact to be preserved, and not just as an intangible, ephemeral trellis of code? Should there be a software canon? Those are the questions I wanted to pose. I’ve retained the verbal character of the original text, which was also accompanied by some 50 images, only a few of which are reproduced here.

This “Earthrise” was photographed by Lunar Orbiter 1 on August 24, 1966. It predates the familiar and iconic Earthrise image by two years. The restored first photo was released to the public on April 22, 2014, as part of an effort aimed at recovering imagery from the robotic probe, the first American spacecraft to orbit the moon. To date, some 2000 of the Orbiter’s shots have been rescued from nearly 50 years of dormant tape storage by a team working out of a former McDonald’s near NASA Ames in Silicon Valley.

Two days after the publication of the Lunar Orbiter image, on April 24, 2014, came another press release, also concerning the recovery of images from obsolescent media, but of a rather different sort. A group based at Carnegie Mellon had identified and retrieved computer-generated graphics created by Andy Warhol on an Amiga 1000 personal computer in 1985. Acting on a hunch from new media artist and Warhol authority Cory Arcangel, the group forensically imaged floppy diskettes at the Andy Warhol Museum. After some elaborate intermediary steps, including reverse engineering the proprietary format in which the files were originally created and stored, the previously unseen images were released to the public. As befits a find of this magnitude—a dozen new Warhols!—press coverage was extensive, in all the major media outlets.

Two days after that came yet another bonanza for digital preservation. April 26, 2014. This was the day of the already legendary Atari excavation in the New Mexico desert. For those of you who don’t know the story, for decades rumor had it that the Atari Corporation, facing financial ruin in the wake of the disastrous release of the notoriously awful E.T. game, had dumped thousands, or tens of thousands, or maybe hundreds of thousands of game cartridges in a landfill outside Alamogordo as a means of disposing of unsalable product. Earlier this year a group of documentary filmmakers obtained the necessary permissions and permits, hired the heavy equipment, and started to dig. Within hours they had found what they were looking for. The photographs that quickly blanketed the Web are striking, what media theorist Steven Jackson would perhaps call “broken world” imagery, resonant for the familiar shapes and artwork distorted, eroded, and corroded by soil and environmental agents. No lost prototypes or other rarities were found—in fact, the games themselves are all widely available and have been playable for years thanks to the retro community. In contrast to both the Lunar Orbiter project and the Warhol images, this was not a discovery about content. It was about aura and allure, sifting the literal grit of what we now widely acknowledge as “digital materiality.”

I begin with this miraculous five-day stretch in part simply to celebrate these pathbreaking achievements and the widespread public visibility that accrued from them. Digital preservation makes headlines now, seemingly routinely. And the work performed by the community gathered here is the bedrock underlying such high-profile endeavors. But I also want to set the stage for one more news cycle that followed several weeks later, one which garnered similar levels of publicity but speaks to a rather different dynamic: not the discovery and release of dramatic new content, and not the aura of actual artifacts excavated from the desert sands, but rather . . . something else, something quirkier, and arguably more intimate.

On May 13, in conversation with Conan O’Brien, George R.R. Martin, author, of course, of the Game of Thrones novels, revealed that he did all of his writing on a DOS-based machine disconnected from the Internet and lovingly maintained solely to run . . . WordStar. Martin dubbed this his “secret weapon” and suggested the lack of distraction (and isolation from the threat of computer viruses, which he apparently regards as more rapacious than any dragon’s fire) accounts for his long-running productivity.

https://www.youtube.com/watch?v=X5REM-3nWHg

And thus, as they say, “It is known.” The Conan O’Brien clip went viral, on Gawker, Boing Boing, Twitter, and Facebook. Many commenters immediately if indulgently branded him a “Luddite,” while others opined it was no wonder it was taking him so long to finish the whole Song of Ice and Fire saga (or less charitably, no wonder that it all seemed so interminable). But WordStar is no toy or half-baked bit of code: on the contrary, it was a triumph of both software engineering and what we would nowadays call user-centered design. The brainchild of programmer Rob Barnaby and MicroPro’s Seymour Rubinstein, WordStar dominated the home computer market for the first half of the 1980s, before losing out to WordPerfect, itself to be eclipsed by Microsoft Word. Originally a CP/M application that was later ported to DOS, WordStar was the software of choice for owners of the early “luggables” like the Kaypro computer and the Osborne 1. Writers who cut their teeth on it include names as diverse as Michael Chabon, Ralph Ellison, William F. Buckley, and Anne Rice (who also equipped her vampire Lestat with the software when it came time for him to write his own eldritch memoirs). WordStar was justifiably advertised as early as 1978 as a What You See Is What You Get word processor, a marketing claim that would be echoed by Microsoft when Word was launched in 1983. WordStar’s real virtues, though, are not captured by its feature list alone. As Ralph Ellison scholar Adam Bradley observes in his work on Ellison’s use of the program, “WordStar’s interface is modelled on the longhand method of composition rather than on the typewriter.” A power user like Ellison or George R.R. Martin who has internalized the keyboard commands would navigate and edit a document as seamlessly as picking up a pencil to mark any part of the page.

WordStar runs no less efficiently and behaves no differently in 2014 than it did in 1983. But if you’re running it today you must be a Luddite, or at the very least a curmudgeonly author of high fantasy whose success allows you to indulge your eccentricities! This is what was so fascinating (to me) about the public reaction to this seemingly recondite detail about Martin’s writing process: a specific piece of antiquarian software, WordStar 4.0 to be exact, is taken as a clue or a cue to the personality and persona of its user. The software, in other words, becomes an indexical measure of the famous author, the old-school command-line intricacy of its interface somehow in keeping with Martin’s quirky public image, part paternalistic grandfather and part Dr. Who character. We know (that is, most of us of a certain age remember) just enough about WordStar to make Martin’s mention of it compelling and captivating. But what is WordStar? It is not content per se, nor is it any actual thing. (Or is it?) WordStar is software, and software, as Yale computer scientist David Gelernter has stated, is “stuff unlike any other.”

WordStar running on a Kaypro 4 at the Maryland Institute for Technology in the Humanities, earlier in 2014. Photo by author.

What is software then, really? Just as early filmmakers couldn’t have predicted the level of ongoing interest in their work over a hundred years later, who can say what future generations will find important to know and preserve about the early history of software? Lev Manovich, who is generally credited with having inaugurated the academic field of software studies, recently published a book about the early history of multimedia. That project, like my own on the literary history of word processing, presents considerable difficulties at the level of locating primary sources. Manovich observes: “While I was doing this research, I was shocked to realize how little visual documentation of the key systems and software (Sketchpad, Xerox Parc’s Alto, first paint programs from late 1960s and 1970s) exists. We have original articles published about these systems with small black-and-white illustrations, and just a few low resolution film clips. And nothing else. None of the historically important systems exist in emulation, so you can’t get a feeling of what it was like to use them.”

The emerging challenges of software preservation were explored just over a year ago at a two-day meeting at the Library of Congress called “Preserving.exe.” As Henry Lowood, himself a participant in that meeting (as was I), has subsequently noted, this was hardly the first attempt to convene an organized gathering around the subject. Notable earlier efforts included the 1990 “Arden House” summit, which boasted representation from Apple, Microsoft, and HP alongside the Smithsonian and other cultural heritage stakeholders for the purpose of launching a “National Software Archives.” Nonetheless, more than two decades later, a roomful of computer historians, technical experts, archivists, academics, and industry representatives once again met to discuss what role the nation’s premier cultural heritage institutions, from the Library and the Smithsonian to the Internet Archive, ought to play in gathering and maintaining collections of games and other software for posterity. You can read various accounts of and responses to that meeting in the Preserving.exe report that was published on NDIIPP’s Web site. But more than a year out from the event itself, the report’s impact and uptake seem modest, and as recently as 2014 the preservation of executable content was not included in the NDSA’s annual agenda for action. Thus, even as libraries, archives, and museums now routinely confront the challenges of massive quantities of content in digital format, actual software—not George R.R. Martin’s document files, not the character data, but WordStar itself—remains a narrow, niche, or lesser priority.

Matthew Fuller, editor of Software Studies: A Lexicon, puts it this way in his introduction to that 2008 MIT Press volume: “While applied computer science and related disciplines … have now accreted half a century of work on this domain, software is often a blind spot in the wider, broadly cultural theorization and study of computational and networked digital media…. Software is seen as a tool, something you do something with. It is neutral, grey, or optimistically blue.” Software studies, as a sub-field of digital media studies, thus offers a framework for historicizing software and dislodging it from its purely instrumental sphere. Besides Manovich and Fuller, key names in software studies include Wendy Chun, Noah Wardrip-Fruin, Lori Emerson, Shannon Mattern, and yes, German media theorist Friedrich Kittler, who memorably proclaimed “there is no software.” I would now like to take a few moments to offer my own elaboration of a software studies framework by sketching some different reference points, vectors if you will, not so much for defining software, but for demonstrating the range of ways one might seek to circumscribe it as an object of preservation.

Software as asset. The legal perspective. In 1969, the US Justice Department opened an antitrust suit against IBM, the result of which was that IBM “unbundled” its programs—software—from its hardware, ending the practice of providing them to clients for free. Instead, IBM introduced the distinction between System Control Programs and Program Products; the latter became a salable commodity. IBM’s unbundling decision is routinely cited as a catalyst for the emergence of software as a distinct area of activity within computer science and engineering at large. The point I would make here is that the object we call “software” is a legal and commercial construct as much as it is a technological one.

Software as package. The engineer’s perspective. Computer historian Thomas Haigh has argued that the key moment for conceptualizing software came when its originators began to think about “packaging” their code so as to share it with others. Haigh makes the analogy to envelopes for letters and shipping containers. In practice, “packaging” the software meant conceiving of the software object not just in terms of code, but also systems requirements, documentation, support, and even the tacit knowledge required to run it. “What turned programs into software,” Haigh concludes, “was the work of packaging needed to transport them effectively from one group to another.” Software becomes software, in other words, when it is portable.

Software as shrinkwrap. The consumer’s perspective. As Lowood has suggested, this is the model that has dominated institutional collecting to date, notably at places like Stanford with its Stephen Cabrinety collection and the Library of Congress’s own efforts at the Culpeper facility. The obvious appeal here is that most shrinkwrapped software is about the same size as a Hollinger box. Haha, no, I’m kidding of course. But the appeal is clearly that it is easy to visualize shrinkwrapped software as an artifact, and thus integrate it into collecting practices already in place for artifacts of other sorts. Nor is this a spurious consideration: on the contrary, the artwork, inserts, documentation, and so-called “feelies” that were part of what came in the box are vital to a history of software.

Software as a kind of notation, or score. Here we are talking about actual source code, and the musical analogy is more than casual. Thomas Scoville gives us a striking account of a programmer who conceives of his coding as akin to conducting a jazz ensemble: “Steve had started by thumping down the cursor in the editor and riffing. It was all jazz to him. The first day he built a rudimentary device-driver; it was kind of like a back-beat. Day two he built some routines for data storage and abstraction; that was the bass line. Today he had rounded out the rhythm section with a compact, packet-based communication protocol handler. Now he was finishing up with the cool, wailing harmonies of the top-level control loops.” Yet scores are also always of course artifacts, themselves materially embodied and instantiated.

The materiality of notation. “Liberty,” by John Dickinson (1768).

Software as object. Here I deliberately use the word “object” in multiple valences, both to connote the binary executable as well as its resonance with object-oriented programming (itself a paradigm about modularity and reuse) and perhaps even the emerging philosophical discourse around so-called object-oriented ontologies, or “triple-O,” associated with figures like Graham Harman. (Triple-O advocates for the virtues of a non-correlationist worldview, one in which human actors are not the sole agents or foci or loci of experience.) As described on its Web site, the work of the National Software Reference Library at NIST is to “obtain digital signatures (also called hashes) that uniquely identify the files in the software packages.” To date, the NSRL contains some 20,000,000 unique hash values, thus delineating a non-human ecology of 20 million distinct digital objects. Yet the NSRL is not a public-facing collection, and it has no provisions for the circulation of software, nor does it facilitate the execution of legacy code.
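For the technically curious, the core operation behind those 20 million hash values is simple enough to sketch in a few lines. What follows is a minimal illustration in Python, not the NSRL’s actual pipeline; the choice of SHA-1 (among the digests the NSRL publishes) and the sample directory name are my own assumptions.

    import hashlib
    from pathlib import Path

    def file_digest(path, algorithm="sha1"):
        """Return a hex digest that uniquely identifies the file's exact bits."""
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):  # stream; don't slurp
                h.update(chunk)
        return h.hexdigest()

    # Any two bit-identical copies of a program yield the same digest, so a
    # registry of known hashes can recognize a software object wherever it
    # turns up, without ever running it.
    for p in sorted(Path("software_packages").rglob("*")):
        if p.is_file():
            print(file_digest(p), p)

The point to take away is that such a registry identifies software without preserving the experience of it: a digest names the object but cannot run it.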

Software as craft. The artisan’s perspective. Here I have in mind accounts of software development which deliberately position themselves in opposition to enterprise-level software engineering. Not Microsoft Word, but Scrivener (or for that matter, Medium). Mark Bernstein of Eastgate Systems puts it this way: “Your writing doesn’t come from a factory. Neither does artisanal software. These are programs with attitude, with fresh ideas and exciting new approaches. Small teams work every day to polish and improve them. If you have a question or need something unusual, you can talk directly to the people who handcraft the software.”

Software as epigraphy. Yes, I mean written in stone. Trevor Owens has noted the uncannily compelling feature of the tombstones in the classic game Oregon Trail: “What made the tombstones fascinating was their persistence. What you wrote out for your characters on their tombstones persisted into the games of future players of your particular copy of the game. . . . These graves enacted the passage of time in the game. Each play apparently happening after the previous as the deaths, and often absurd messages, piled up along the trail.” Likewise, consider the Easter Egg. One of the most famous is to be found in Warren Robinett’s graphical adaptation of Adventure for the Atari 2600 in 1979. As others have argued, this was an important and innovative game, establishing many of the conventions we use to depict virtual space today (such as exiting a “room” through one side of the screen, and emerging in a logically adjoining room on a new screen). At the time Atari’s programmers were not credited in the documentation for any of the games they worked on, so Robinett created an Easter egg that allowed a player to display his name on the screen by finding and transporting an all but invisible one-pixel object. This seemingly slight gesture in fact speaks volumes about shifting attitudes towards software as a cultural artifact. Does “code” have authors? Is software “written” the way we write a book? Robinett’s game will surely outlast him: is this a tombstone or title page?

Software as clickwrap. This is perhaps the dominant model today, combining the familiar online storefront with advanced DRM and cloud-based content distribution.

Software as hardware. Born of a doubtlessly chimerical conviction and commitment to authenticity, it is nonetheless a preservation model taking hold in some of our more liminal spaces, such as the Media Archaeology Lab at the University of Colorado and the Maryland Institute for Technology in the Humanities, a digital humanities center. Here [above], for example, is WordStar running on a working Kaypro we maintain at MITH. Similarly, Jim Boulton has done phenomenal work preserving the browser software of the early Web.

Software as social media. The GitHub phenomenon. You may think of GitHub as a place to stash your code. That’s not how they think of it, however. GitHub believes that the future of creativity, commerce, and culture is executable. At the Preserving.exe meeting, a representative from GitHub made the connection to such high-minded ideals fully explicit, declaring the software culture on the Web a new cultural canon and invoking the likes of Emerson, the Beowulf poet, and Adam Smith.

Software as background. New media artist Jeff Thompson has collected some 11,000 screenshots documenting every computer appearing (usually in the background) in every episode of the TV series Law & Order. We can learn much from incidental popular representations of software. Compare, for example, the extremely realistic depictions of the applications the characters use in the movie adaptations of Stieg Larsson’s Millennium series to the fanciful and extravagant interfaces of many Hollywood blockbusters. What can such shifts and contrasts tell us about popular attitudes toward software?

Michael Douglas checking email (1 new message!) in Disclosure (1994).

Software as paper trail. The documentary perspective. In 2012 I spent a week in the archives at Microsoft in Redmond, Washington looking at the early history of Word. I spent almost all of my time looking at paper: specs, requirements, design documents, memos and correspondence, marketing research, advertising and promotional materials, press clippings, swag, memorabilia, and ephemera. Besides corporate entities such as Microsoft, documentary software archives are available at such institutions as the Charles Babbage Institute, the University of Texas, and the Strong Museum of Play, as well as, again, Stanford.

Software as service. Facebook and YouTube, but also future iterations of such formerly shrinkwrapped products as the Microsoft Office suite. What’s notable about the service model is that it’s also supplying some of the most promising models for preservation. The Olive project at Carnegie Mellon is exploring solutions for streaming virtual machines as a preservation strategy. Likewise, Jason Scott (you knew I would get there eventually, didn’t you?) has been doing some simply astounding work with the JSMESS team at the Internet Archive, turning emulated software into Web content akin to embedded video and other standard browser features. Is this legal, you ask? Talk to the guy in the funny hat.

Finally, software as big data. Jason Scott again. Having ingested thousands of disk images and ROMs into the Internet Archive’s Historical Software Collection, Jason is now algorithmically analyzing them. A dedicated machine “plays” the games and runs the software 24/7, taking screenshots at intervals and storing these for posterity on the Internet Archive’s servers. Software, in other words, preserving software; machines preserving machines. Have you seen this movie before?

So that’s what I’ve got, over a dozen different approaches in all. Doubtless there are others. I haven’t talked about software as apps or software as abandonware or as bits to be curated or as virtual worlds, for example. But underlying all of these different approaches, or “frameworks” as I have called them, is the more fundamental one of what it means to think of software as a human artifact, a made thing, tangible and present for all of its supposed virtual ineffability. Scott Rosenberg, whose book Dreaming in Code furnishes a masterful firsthand account of an ultimately failed software development project, says this: “Bridges are, with skyscrapers and dams and similar monumental structures, the visual representation of our technical mastery over the physical universe. In the past half century software has emerged as an invisible yet pervasive counterpart to such world-shaping human artifacts.”

Prehistoric software: control cards for the Jacquard Loom. Science Museum, London. Photo by author.

We tend to conceptualize software and communicate about it using very tangible metaphors. “Let’s fork that build.” “Do you have the patch?” “What’s the code base?” Software may be stuff unlike any other, it may even be intangible, but it is still a thing, indisputably there as a logical, spatial, and imaginative artifact, subject to craft and technique, to error and human foible. Writing software is not an abstract logical exercise; it is art and design, intuition and discipline, tradition and individual talent, and over time the program takes shape as a wrought object, a made thing that represents one single realization of concepts and ideas that could have been expressed and instantiated in any number of other renderings. Software is thus best understood as a dynamic artifact: not some abstract ephemeral essence, not even just as lines of written instructions or code, but as something that builds up layers of tangible history through the years, something that contains stories and sub-plots and dramatis personae. Programmers even have a name for the way in which software tends to accrete as layers of sedimentary history, fossils and relics of past versions and developmental dead-ends: cruft, a word every bit as textured as crust or dust and other words for physical rinds and remainders.

Knowledge of the human past turns up in all kinds of unexpected places. Scholars of the analog world have long known this (writing, after all, began as a form of accounting—would the Sumerian scribes who incised cuneiform into wet clay have thought their angular scratchings would have been of interest to a future age?). Software is no less expressive of the environment around it than any object of material culture, no different in this way from the shards collected and celebrated by anthropologist James Deetz in his seminal study of the materiality of everyday life, In Small Things Forgotten. In the end one preserves software not because its value to the future is obvious, but because its value cannot be known. Nor are the myriad challenges and technical (or legalistic) barriers it presents, or the fear of loss, reasons to hesitate. As Kari Kraus has noted in a paper published out of the NDIIPP and IMLS Preserving Virtual Worlds project, “preservation sometimes courts moderate (and even extreme) loss as a hedge against total extinctive loss. It also indirectly shows us the importance of distributing the burden of preservation among the past, present, and future, with posterity enlisted as a preservation partner. However weak the information signal we send down the conductor of history, it may one day be amplified by a later age.”

Long ago, before even the first dot-com bubble, Sumner Redstone opined, “Content is king.” We all cherish content, like a lost lunar Earthrise image or a newly discovered Warhol. Indeed, I would submit that this community has gotten pretty good at preserving at least certain kinds of digital content. But is content enough? What counts as context for all our digital content? Is it the crushed Atari box retrieved from a landfill? Or is it software, actual executables, the processes and procedures, the interfaces and environments that create and sustain content? Perhaps it is indeed time to revive discussion about something like a National Software Registry or Repository; not necessarily as a centralized facility like Culpeper, dug into the side of a mountain, but as a network of allied efforts and stakeholders. In software, we sometimes call such networks of aggregate reusable resources libraries. Software, folks: it’s a thing. Thank you.

This essay draws from some of my earlier writing on the topic, including this on George R.R. Martin and WordStar and this in Slate, as well as my contribution to the aforementioned Preserving.exe report.

Software, it’s a thing. Photo by Lauren Lancaster.

Am I a Digital Humanist? Confessions of a Neoliberal Tool

5.15.16: Since this is now attracting readers beyond its original context: it is occasioned by, but is not primarily intended as a response to, a recent essay in the Los Angeles Review of Books (LARB) entitled “Neoliberal Tools (and Archives): A Political History of Digital Humanities,” authored by Daniel Allington, Sarah Brouillette, and David Golumbia.

On the face of it the question is absurd. Of course I’m a digital humanist.

I am heavily, publicly, and consistently identified with the digital humanities. I’ve written a trio of widely circulated essays on various permutations of the question “what is digital humanities.” I had a chapter in the formative 2004 Blackwell’s Companion volume, and I’ve contributed chapters and essays to many of the most important collections and journals since, from Debates to the differences “Shadows of” issue. I’ve publicly advocated that humanists should learn to program (though what this means turns out to be nuanced), and even suggested, within limits, that programming languages could be allowed to sub in for the doctorate’s foreign language requirement. I’ve worked on externally sponsored research at several different institutions, typically in the service of building “tools and archives.” I have a footprint in the library and archives world, alongside literary studies. And, of course, I was trained in the University of Virginia’s English department, where I also worked first in UVA’s Electronic Text Center and then its Institute for Advanced Technology in the Humanities (IATH).

My mentors at UVA were John Unsworth and Jerome McGann. (Johanna Drucker arrived as I was leaving, but she too was an early and important professional friend.) And for the last decade I’ve served as an Associate Director of the Maryland Institute for Technology in the Humanities (MITH), a self-identified digital humanities center that was founded very much on the IATH model. So yeah, I guess I’m a digital humanist. I’ve also advised more than a few doctoral students who have gone on to careers as self-identifying digital humanists themselves. But there are lots of ways in which I don’t fit the mold. I can program, in some limited sense, but I don’t identify as a programmer or developer or even really as a builder or maker. My coding chops are okay for some parlor tricks or personal messing around, but I couldn’t implement software for real. My limited sojourn in big data research, meanwhile, was easily the weakest work of my career. I’ve never reviewed for the NEH’s ODH, and I’ve received exactly one NEH ODH grant myself, which may still hold the record for the smallest amount ever awarded by that office. My first book had “new media” in its subtitle, not digital humanities, and the most frequently asked question about my new book, on the literary history of word processing, is why it doesn’t contain any data mining or text analysis.

My aim in what follows isn’t (heaven forbid) to justify or defend or adjudicate or historicize the digital humanities one more time. It’s also (spoiler alert) not the recantation or mea culpa that apparently some have been salivating for, and so anyone anticipating that should just continue on in hate-read mode, if at all. Instead I want to talk about the trajectory of a single career — my own. Not because my work and career are exceptional or so worthy of notice, but rather because I believe they are typical — notwithstanding certain key advantages and privileges, both structural and accidental, that I have been the beneficiary of. (You’ll hear more about those.) But “typical” in the sense that, at the end of the day, a career is just a career and the work is just the work. We all operate and identify on a variety of local and global levels. That’s what’s typical. What we do, what we choose to work on, and who we choose to work with emerge out of a complex skein of personal history, personally held values, circumstances, encounters, and all the other agents of chance, privilege, and socialization.

Which is why I’ve advocated an STS approach — focusing on microhistories of individuals, grants, centers, projects — as the most promising valence for a robust critique of digital humanities research. It’s the tack I took in my own documentation of what happened at UVA in “Digital Humanities as/is a Tactical Term,” not cited in the recent account that offers up an institutional history of DH as though it were a revelation. Here, though, I want to go deeper, as someone who was there and lived some of that history. (Others were there too and likewise lived it, and so this is not meant to invalidate or replace those experiences, but it is meant to co-exist with them; aspects of that history have also been the focus of at least one book written by someone from someplace else, so you might look there for an external perspective.)

This gets personal at times, and it gets a little messy. It’s not always self-consistent or safe. But it’s what I’ve got.


Midway through my undergraduate years at SUNY Albany (the first of four public state institutions in which I have spent my academic career) I decided I wanted to go to graduate school in English. Yes, I had read that piece in the New York Times, the one heralding a massive shortfall in humanities professors. (In fact I read it, as some of you may have, taped to one of my undergraduate mentors’ doors.)

I got into UVA, and they offered me money. When I first arrived on Grounds, as they say there, I knew nothing of textual studies or humanities computing. My experience instead was quickly and totally defined by the oppressive culture of Permission that held sway among the first-years. No one at the time was admitted directly to the doctoral program. Permission (which was how we mentally pronounced it) thus meant permission to proceed, a decision that would be made for you based on your first two or three semesters of coursework and the advocacy of a rabbi figure on the faculty. (You found out from a slip placed in your mailbox.) The first paper I wrote at UVA was a deconstruction of Frederick Douglass’s Narrative, revolving around scenes of public reading. It got an A-. I was thrilled: room for improvement to be sure, but not bad for my first piece of graduate-level scholarship, right? Then I learned that there were really only two grades, A and A-, and no one could hope to receive Permission without straight As.

I got Permission. Gradually, however, I came to the understanding that there was no one on the faculty with whom the dissertation I had thought I wanted to write on early American literature would really work. I also came to the realization that I didn’t want to write such a dissertation. So I took seminars with Deborah McDowell and Eric Lott. I also thought more about the person who had been my most influential undergraduate teacher, Don Byrd. Don was one of those magic trickster figures that you could still find in English departments at the time. He worked on alternative American poetry, was a student of Charles Olson and Robert Duncan. He also had some contacts with John Perry Barlow and the West Coast virtual reality crowd. He had us reading Neuromancer around 1990 or so, and we logged on to the nascent internet for collaborative writing jams in what I now know was a MOO (not MOOC). We read Lipstick Traces and Derrida. I was twenty years old and it was intoxicating. When I had told him I was going to UVA, Don had left me with the advice to take a class with someone named Jerome McGann.

My third year in the program, the opportunity came up. I was done with coursework, but got McGann’s approval to audit a seminar on the Pre-Raphaelites, about whom I knew little. McGann was working on something he called “hyper-editing,” and it was to be a central feature of the course. I had seen him demonstrate what would become the Rossetti Archive in the Bowers Library (yes) in old Wilson Hall, on a Unix-based terminal running Mosaic, the first widely adopted graphical Web browser. The Blessed Damozel leaned out from the gunmetal grey navbar and dribbled down the screen in 24-bit color splendor. I was transfixed. The Lingua Franca article on McGann and IATH (written by a recent Columbia grad student named Steven Johnson) came out at about the same time. This was what I wanted to do, even though I didn’t really understand what “this” was or how to go about doing it. I was responding to the technology, yes, but also to textuality, manifest as palpably in that moment in front of the Unix terminal as any encounter with a special collections manuscript.

The night before the seminar McGann emailed (we had all been told to sign up for something called “e-mail”) that there would be no auditors admitted. Crushed, I fired a missive back, pleading my case. He relented. The seminar was transformative.


The people most drawn to the early humanities computing centers at UVA were the book nerds. Far from seeing computers as an abandonment or repudiation of books, archives, and the material remains of culture and society, they understood the new technologies as extensions of those preoccupations. This may have had something to do with temperament: the same person who was willing to sweat over collation formulae was probably also suited to tagging texts and parsing lines of code. But it also had a lot to do with the ways in which computing surfaces the barely subterranean machinery of scholarship. Both of UVA’s original humanities computing centers were physically embedded in the library, as was the subsequent NINES, as is the Scholar’s Lab today, and as MITH is at the University of Maryland.

Academics are well acquainted with libraries of course, but that acquaintance often stops at the stacks or perhaps the special collections reading room. UVA’s humanities computing centers took me into the world on the other side of the wall. As one commentator put it recently, “Digital humanists tend as part of their scholarly practice to foreground self-reflexively the material underpinnings of scholarship that many conventional humanists take for granted. . . . If anything, DH is guilty of making all too visible the dirty gears that drive the scholarly machine, along with the mechanic’s maintenance bill.”

So sometime in 1995 I went behind the wall. I learned text encoding and some Perl scripting. I learned my way around the UNIX command line and Photoshop. Two things stand out to me about the acquisition of those “skills”: first, I never considered myself a “programmer.” Programming (or “coding”) is often reified in discussions of digital humanities, as in do you have to learn to code? Code and coding are not one single homogeneous thing. It takes different forms depending on what you want to do. That I could write some batch scripts to facilitate processing a bunch of texts in a Unix directory in no way meant I was a software engineer. Or a systems administrator. Or an interface designer. I was just learning what I needed to do to do the things I wanted to do. Before long I initiated and completed my first digital project.
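To give a concrete sense of scale, the batch scripts in question were on the order of a dozen lines. Here is a sketch of the kind of thing I mean, written in Python rather than the Perl of the period; the directory names and the minimal tagging are invented for illustration.

    # A toy batch job of the sort described above: walk a directory of
    # plain-text transcriptions and wrap each one in minimal markup.
    # Directory names and the <text> wrapper are illustrative only.
    import os

    SRC, DST = "transcriptions", "tagged"
    os.makedirs(DST, exist_ok=True)

    for name in os.listdir(SRC):
        if not name.endswith(".txt"):
            continue
        with open(os.path.join(SRC, name), encoding="utf-8") as f:
            body = f.read()
        with open(os.path.join(DST, name[:-4] + ".xml"), "w", encoding="utf-8") as f:
            f.write("<text>\n" + body + "\n</text>\n")

Useful, even essential, to the work at hand; but no one would mistake it for software engineering, which is exactly the point.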

The other thing that stands out to me now is how those skills related to my scholarship. I thought what I was doing was important, but I didn’t think of it as scholarship. (I think brushing my teeth is important too, but it’s not my scholarship.) My scholarship, such as it was, was what I was doing in the dissertation I had begun to write, a Web-based hypertext (all flat HTML pages and hard-coded links) about what we nowadays call electronic literature and the aesthetics of information. Yes, I “used” Perl for one of the two foreign language reading requirements, an ad hoc request granted by Eric Lott, then DGS. The motive was pure practicality: I knew Perl and had the requirement to fulfill. I was able to make a reasonable argument that functionally it afforded me a kind of self-reliance that accorded with the rationale behind the language exam. For his part, I doubt Lott spent much time agonizing over the metaphysical stakes of the decision: it must have seemed to make sense; it was probably even something he saw as a small kindness.

By the mid-1990s, it was clear that the professor jobs promised by the New York Times weren’t going to materialize. Some of my peers had already left, often with more rudimentary “skills” than I had, to take what seemed at the time like lucrative and glamorous positions as “Webmasters.” Meanwhile the atmosphere in Wilson Hall (the English department’s home at the time) was increasingly toxic, a major flashpoint being the meagerness of graduate student stipends. And it wasn’t just UVA: 1996 was the year dining hall workers went on strike at Yale, where they were joined in solidarity by graduate teaching assistants, many of whom faced repercussions from faculty mentors. A Yale dean lambasted them for their actions, proclaiming them “the blessed of the earth.” One of those Yale faculty who had garnered a reputation for punitive behavior was coming to speak at UVA, at the personal invite of the department chair. The graduate students turned the occasion into a forum on labor inequities in the academy, with one of the Yale doctoral students flown down and secreted in the audience to act as respondent.

Our own graduate student organization, led, as it happens, by a longtime member of the Etext Center staff, threatened similar action, but local grievances were redressed (more or less) short of organized action. A department summit meeting was held. The chair did not seek another term. For my part, I began working with members of the MLA’s Graduate Student Caucus, organizing national pushback against what at the time was widely perceived as the organization’s disinterest in its graduate student and adjunct members. These actions eventually earned me a letter of “concern” from the then-Executive Director, which was copied to my department chair.

Life at UVA was complex. Many faculty, including the ones who became my mentors, were wonderful and supportive. Others distanced themselves from their graduate students, a stance made easy to adopt by the lingering culture of Permission. Grad students themselves, as was natural, dispersed into their own circles and networks. Over everything hung anxiety about the job market: the golden age had ended long before I and my cohort arrived in Wilson Hall. I did my work by day, acquiring some “skills” I thought prudent; I wrote my dissertation in the evenings, trying my best to do something I thought might matter. That I was more fortunate than many in the one kind of work informing the other was not lost on me.

Through all this, the English department was not in lockstep with what was happening in the library’s digital centers. Far from it. Out of a faculty of fifty-something, there were perhaps 4–5 professors who were interested in humanities computing. Out of the hundreds of grad students — because UVA was still admitting grotesquely bloated cohorts — the interest was more widespread but also more diffuse, with a core community of between one and two dozen. Out of that community came many people you probably know: myself, Steve Ramsay, and Bethany Nowviskie, yes, but also Amanda French, Lisa Spiro, Mike Furlough, David Gants, Matt Gold (who ended up doing his PhD at CUNY), and Andy Stauffer. (Still others, like Ryan Cordell and Wesley Raabe, were a little behind us.) There was an almost equally robust cohort in History there at the time, largely as a function of Ed Ayers and the Valley of the Shadow project. Not all of us were necessarily close friends — it wasn’t some Borg entity. (I barely knew Matt Gold during his time there, for example.) I also met cool folks from elsewhere who were networked into IATH via their own early projects, or those of their mentors, including George Williams, Lara Vetter, Lisa Antonille Rhody, Jason Rhody, Rita Raley, Carl Stahmer, and Kari Kraus. There were also a lot of people whose names you won’t know, because despite being smart and talented they eventually decided — or were forced by circumstance — to do something else instead.

I hope I’m not embarrassing anyone. It’s hard to avoid naming names in these paragraphs since the individuals were so much a part of what was happening. And in addition to graduate students, IATH and Etext were both spaces heavily trafficked by visitors from elsewhere. Sitting in front of your keyboard, doing your work, you could eavesdrop on conversations with Famous People. I met Martha Nell Smith that way, and Alan Liu, and others. And sometimes you could do more than eavesdrop: that was the real appeal of the work for many of us. Not that it transcended what used to be parochially referred to as the apprenticeship model, but because it offered the rare vector for fulfilling it. You could talk to faculty, even mirabile dictu go out for lunch, because you knew things that they didn’t. Not just instrumental things (skills), but things about books and texts that you had learned because no one else had had occasion to scrutinize them at the level tagging, scanning, and coding demanded. We could be grownups, or if not exactly grownups we could at least have the kind of experience we had naively imagined graduate school to be. That all of this was happening at Virginia, with its baked-in culture of Permission, seemed all the more remarkable. (The Blake Archive was the primary project I was associated with at IATH, but I think what I say speaks to a certain level of generalized experience.)

In this climate, IATH and Etext functioned as the proverbial third place for many of us, just as Rare Book School and the Writing Center did for others: not home (where the dissertation was), not Wilson Hall (where the Real World was, in the form of anxiety, rivalries, and structural insecurities), but a communal and collaborative place to exist. What some brand as “neoliberalism” was my personal affect, lived every day, for a period of years, at my most vulnerable professional moment.


I briefly considered leaving for what we would nowadays call an alt-ac job. John Unsworth convinced me to stay, and finish, which I did, just as Johanna Drucker was arriving in 1999. My first tenure-track job was at the University of Kentucky in response to an ad for “humanities computing.” Kentucky was a pretty interesting place to be at the time: Dana Nelson was there, and Susan Bordo, and Gordon Hutner and Dale Bauer had just arrived. Mike Uebel was there too, someone I had not exactly been friends with in Charlottesville but who was welcoming and kind to me in every way. Other great people too. But it was obvious from the get-go I had been brought in as Kevin Kiernan’s hire. Kevin was a medievalist who was doing far-out stuff imaging the Beowulf manuscript, something which had attracted its share of NSF dollars. The idea was to bring in a junior colleague who could collaborate with computer scientists. (I learned later on that I almost didn’t get the job because my job talk had been all “theory” and Kevin had balked at that.) I went, still under-ripe, the dissertation hastily concluded and defended, and without any clear sense of what I was really getting into, professionally speaking.

Lexington turned out to be a long way from Charlottesville. I did some writing there, but did nothing as far as turning the dissertation into a book, something it would have been unsuited to anyway. The project on which I spent the most time, the Virtual Lightbox, was a classic tool-building enterprise; it was half-baked and had but modest uptake. I left Kentucky in 2001 after two years, largely because I was unhappy living there.

There were two tenure-track jobs I was really interested in: Digital Studies at the University of Maryland, and Digital Humanities at UCSB, both based in their respective English departments. I ended up at Maryland and Rita Raley went to UCSB. (Ironically, Rita is surely identified as a “digital studies” person these days, whereas I read out as a “digital humanist.” I suspect we’d say we both embrace elements of both, a symptom of how arbitrary these labels can be. Additional fun fact: For a time I even gave out my title as “Assistant Professor of English and Digital Studies,” until my department chair told me to stop. This too is how identities are created and managed.) What brought me to Maryland was Martha Nell Smith and Neil Fraistat, as well as MITH, which had gotten underway just when I had first gotten to Kentucky.

The expectations at Maryland were made clear from the outset: a book for tenure. So I got cracking on what became Mechanisms, which was not the dissertation except for some of the chapter on Joyce’s Afternoon and a couple of other bits and pieces. I wrote the book happily, which is not to say easily or without stress and duress: but happily in that it was work I genuinely wanted to be doing. I got the contract for it in my second or third year at UMD (can’t remember), and the book was in press by the time I came up for tenure. The tenure process itself was uncontested and unremarkable, at least in so far as I know.

While writing Mechanisms, I identified professionally with two areas: humanities computing and new media. Digital studies seemed vague, amorphous; as a term, digital humanities was still just taking hold at the time, though of course I had started to use it. I had no doubt that I wanted to publish the book with the MIT Press, which was then the go-to press for work in new media, also publishing, at about the same time, Nick Montfort, Wendy Chun, Lisa Gitelman, Noah Wardrip-Fruin, and many others. (Minnesota’s Electronic Mediations series would also have been a desirable placement, but it was only just beginning to take off.) If asked whether it was humanities computing or new media, I would have branded Mechanisms “new media,” the term that appears in its subtitle; but it was also unmistakably informed by the kind of praxis I had absorbed in my work at the Etext Center and IATH. I did software studies, platform studies, and critical code studies in the book, all avant la lettre, and all as a consequence of my training (or if you prefer, my skills, though I note how unselfconsciously we say “training” in relation to our own professionalization). Mechanisms’ forensic readings of individual software objects were deeply informed by my exposure to archival and bibliographical practices at UVA, a connection I sought to make explicit in its pages.

I did not rush to begin a second book as many newly-minted Associate Professors do, or at least attempt to do. Instead I spent several years on a series of funded research projects in conjunction with MITH, where I was also now an Associate Director. These projects resulted in a lot of writing, a couple of hundred thousand words of collaboratively written prose in the form of white papers and reports, several of which have become standard citations in the literature. Almost all of this work revolved around the preservation and dissemination of born-digital materials, including literary manuscripts, interactive fiction and electronic literature, computer games, and virtual reality environments like Second Life. Once again I understood what I was doing to be very much in line with the ethos informing textual studies, bibliography, and the Rare Book School: unflattening (to borrow a term) the material and historical forms of transmission across media, platforms, and environments.

In the course of working and writing with archivists, I learned that they don’t in fact need English professors — or digital humanists — to tell them about the political stakes of their jobs. I’ve had this conversation with them: “Yes, I love it when someone comes to me with a certain gleam in their eye, telling me they’re going to problematize what I do,” said one, the winner, as it happens, of a major award from the SAA not long ago. Archivists don’t need us to tell them that archives are incomplete and arbitrary, that there are gaps and “archival silences.” They deal with this brute reality every day, as a condition of their working lives. Archivists (and archives) are what stand between remembering and oblivion. The notion that archival work might somehow be considered ideologically suspect or a betrayal of some more authentic version of the humanities strikes me as one of the sorriest features of the recent spate of attacks on DH, and one wonders what markers of privilege are embedded therein.

By 2011 I had begun work on the book that would become Track Changes, deeply informed by the contacts I had made in the professional archives community. And I was also working on the BitCurator project, a tool-building initiative undertaken with colleagues in the School of Information and Library Science at UNC Chapel Hill. Whereas the Virtual Lightbox had had only some very modest uptake, and whereas my contributions to two early distant reading projects, nora and MONK, are best forgotten — at least by me — I remain proud of what we accomplished with BitCurator. So let me tell you what building a tool means to me, and you can decide if I’m, well, a tool. (Urban Dictionary: tool: “One who lacks the mental capacity to know he is being used. A fool. A cretin. Characterized by low intelligence and/or self-esteem.”)

BitCurator is a tool — really a set of tools, open source and many originally built by other people — that allows archivists to process what we call born-digital materials: floppy disks and hard drives and the stuff that has helped make up our collective cultural heritage since personal computers became a part of life in the early 1980s. That writer or singer or politician you’re interested in? Chances are they have used a computer at some point in their career, and if their born-digital materials are included with their personal papers eventually collected by some institution or repository you are going to want to be able to read them along with everything else.
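The core of that workflow is easy to picture. The following is a schematic sketch in Python of the imaging step only, under stated assumptions: the device path is hypothetical, a hardware write blocker would sit between the drive and the workstation in real archival practice, and BitCurator’s bundled tools produce forensic formats with embedded metadata rather than the bare raw image shown here.

    import hashlib

    SECTOR = 512            # classic sector size for floppy media
    CHUNK = SECTOR * 64     # read in sector-aligned chunks

    sha256 = hashlib.sha256()

    # /dev/sdb is a hypothetical device node for the attached drive.
    with open("/dev/sdb", "rb") as device, open("floppy.img", "wb") as image:
        while True:
            block = device.read(CHUNK)
            if not block:
                break
            image.write(block)     # the image file is a bit-for-bit surrogate
            sha256.update(block)   # the digest lets anyone verify it later

    print("sha256:", sha256.hexdigest())

Once the disk image exists, everything else (indexing the filesystem, flagging sensitive data, exporting files) can happen against the surrogate, leaving the fragile original medium untouched.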

I can’t remember when or where he said it, but Jerome McGann once characterized scholarship as a commitment to the arts of memory. He elaborates on the premise in a book called The Scholar’s Art: “Not only are Sappho and Shakespeare primary, irreducible concerns for the scholar, so is any least part of our cultural inheritance that might call for attention,” asserts McGann. “And to the scholarly mind, every smallest datum of that inheritance has a right to make its call. When the call is heard, the scholar is obliged to answer it accurately, meticulously, candidly, thoroughly.” BitCurator helps you answer that call, or more precisely it helps the archivist who is processing the collection create the conditions that allow you to do so.

“To the scholarly mind, every smallest datum of that inheritance has a right to make its call.” In this McGann seems to follow Edward Said, who in Humanism and Democratic Criticism glosses Giambattista Vico’s famous verum factum principle — Vico is a touchstone for McGann too — as something like “We can really only know what we make” and “To know is to know how a thing is made, to see it from the point of view of its human maker.” This ethos — and the commitment to the arts of memory — has become, as UVA’s Richard Rorty was wont to say, a part of my final vocabulary. I believe it is true, in so far as anything in this world is true, whatever epistemic frames might be thrown around it notwithstanding. It’s not that either the scholar’s art or the verum factum are sufficient on their own — there are all sorts of considerations and contexts we want and need to bring to bear — but this is where I start from. If it is found suspect by an ideological litmus test, if the space it opens for resistance or critique is insufficiently pure, then that’s it; I’ve got nothing left; I’m out of “moves.” Better scholars and better players of the game than I have left it all on the field at that point. I’m off to go take a shower.

Or else I could just tell you a story.

On a rainy morning a couple of years back a colleague and I slipped into the Houghton Library at Harvard. I had been there before to do research for Track Changes, but this time I was going behind the wall. We were in a small basement room packed with curators, archivists (they’re not the same thing, you know), and other Houghton staff, there to introduce them to BitCurator. I sat and watched as the small red LED lamp on an external 3.5-inch floppy drive flickered, the decades-old bits from what Jason Scott once called “poor black squares” passing through the sensors in its read/write head, interpolated by firmware, sifted through a WriteBlocker (to ensure no cross-contamination between the two systems), and reconstituted in the form of a disk image, a perfect virtual surrogate.

(Wolfgang Ernst writes of a similar flickering lamp, the magnetic recording light that pulsed as Milman Parry recorded the folk songs of the Serbian guslari singers: Archaeography, or the archive writing itself, was how Ernst glossed the moment. The lamp flickered on and off. Something, nothing, memory, oblivion.)

We found no smoking gun, no “LostNovel.doc.” But the canon of collective memory swelled just a little bit more. It was a good morning’s work, done with a good tool.


Am I a digital humanist? The question feels less and less relevant, to be honest. There are other names I think of as more descriptive of what I actually do, including digital studies (the moniker under which I was hired), book history, and media archaeology. My favorite formulation is one from Jessica Pressman and Kate Hayles: comparative textual media. That one seems to get at most everything I’d want to associate myself with now. But am I a digital humanist? Of course I am. Not because my career profile matches a prescribed template, not because I can (or can’t) code, but because of the socialization of academia. In other words, I am a digital humanist because of the people I came up with and the people I run with. In that sense, as one of its three original authors acknowledged in an importantly candid moment deep within one of the innumerable Facebook threads it spawned, the LARB piece was indeed, and specifically, aimed as an attack upon “a group of tightly networked scholars on the East coast, many of whom are referred to by name.”

What would motivate such an attack? It’s not about technology per se or “the digital.” The feuds and acrimonies within and around digital humanities have their roots in all the material minutiae of institutions: jobs, promotions, resources, money, as well as less tangible but even more visible economies of prestige, prizes, and academic celebrity. It’s no speculation or commentary on anyone’s “bad faith” (a phrase that was circulating) to suggest that all of these are a symptom of the ways in which digital humanities is controlled by, rather than complicit in, the “neoliberal” forces we’ve been hearing so much about.

Neoliberal tools and forces . . . those are dark and ominous words. Let me try to make them into something humanly legible.

In my English department I currently serve on our coordinating and personnel committees, both of which have been busy this cold, wet spring. These committees do all the little things that are the levers by which “DH” would presumably seek to orchestrate its neoliberal takeover. We propose language for position descriptions. We edit and revise (heaven help us) the department’s mission statement. We provide input to the dean when the College does the same. We make recommendations to address the department’s enrollment shortfall (yes, my department has an enrollment shortfall in its major; does yours?). If ever there was a fox-in-the-henhouse scenario, this is it. And if I were a Silicon Sith Lord (or even just the tool of one) I would seek to use these occasions to revise our mission statement to something more to my cryptofascist taste; I would seek to ensure we hired only those whom I could coerce into adherence with my corporatist agenda; my single-minded solution to enrollment woes would be: coding über alles.

You’d have to ask them, I guess, but there’s really no aura of hostile takeover as I and my colleagues go about our business. Disagreements and discussion, sure, but not the Manichean dualism that the LARB authors are so concerned to insist upon. I sit at the same table, purchased from the same institutional catalog (the furnishings obtained from prison shops), as the person doing poco next to me and the medievalist across the way. DH is not about killing the humanities and it’s not about saving the humanities; it’s about colleagues sitting around a prefab table doing business, and working for what everyone genuinely believes, more or less, to be the greater good: keeping the major healthy, sustaining the size and strength of the faculty, and generally making the department a place that people want to inhabit to pursue the things that matter to them.

I read the LARB article when it came out — rife with the barely concealed classism, weirdly retrograde technological determinism, and moral absolutism others have remarked upon — and I spent a day or two turning it over in my mind, reflecting on the various backstories and backchannel histories acknowledged, if only just, by a pair of dates lodged in one of the authors’ bios. People wrote to me asking me for my reaction. I demurred, but I was clearly moved enough to draft this piece, and for that I guess I owe it something. This is my backchannel.

I owe it something else besides: After reading the LARB essay and scanning the fallout on social media I came to the realization that I’d much rather use my days to work on the things I love than the things I hate. And that, let me confess, has nothing to do with being a digital humanist. It has everything to do with being me.


On Mechanisms’ Materialism: A Reply to Ramón Reichert and Annika Richterich

While I was, naturally enough, pleased to see the coverage accorded Mechanisms: New Media and the Forensic Imagination (2008) in the editors’ introduction to Digital Culture and Society’s inaugural issue on “Digital Material/ism,” I was also (naturally enough) displeased with Ramón Reichert and Annika Richterich’s final assessment of it and its place in the theory canon. Though they acknowledge the contributions of the book, notably the dual concepts of forensic and formal materiality, they also point to a “certain theoretical vulnerability” (11) which, the reader is told, will be explained in further detail.

There is a backstory here that first ought to be related. In January of this year Reichert contacted me asking if I might be interested in contributing an article to the issue. The CFP he sent included extended (and positive) reference to Mechanisms’ influence. I was flattered and told him so, but declined, as my writing schedule would not allow a full-length article; I suggested that we might instead explore the possibility of an interview or some other short format. Reichert replied immediately, expressing enthusiasm, and indicated interview questions would follow. I waited for them but they never came, and we had no further contact until six months later.

On June 28 Reichert, without preamble or any communication in the interim, indeed sent me a list of questions that would form the basis of the interview and asked for my reply within the next three weeks. Under most any other circumstance I would have been delighted by the unexpected renewal of a correspondence I had long since written off (so to speak), but I was then in the home stretch of finishing a project; I asked if an extension might be possible or else if I could instead contribute to a future issue. Reichert took a week to reply that neither of these was possible, and indeed, with that message, truncated the deadline to July 8! (Perhaps a typo, but if so a very unfortunate one.) The questions, for their part, were altogether benign, raising none of the concerns subsequently articulated by the editors in their introduction. To be honest, the questions did not seem especially interesting, nor did they seem to reflect a very careful reading of my work (one asked me to state how novel the New Materialism was, another asked me to comment on the “Internet of Things”). I replied that I would have to pass on the opportunity given the abrupt and non-negotiable deadline.

So now we come to the commentary appearing in the introduction to the Digital Materialism issue (PDF). I quote the most hostile passage in full:

. . . but on the other hand, the term of media forensics emphatically used by Kirschenbaum adopts criminological discourses of searching for the truth (as applied by Bertillon and the National Security Agency alike) in an unreflecting and ahistorical manner, as he regards material-based forensics as an incorruptible method of making digital media culture de facto readable. (12)

This is the entirety of the “further detail” the reader had been promised earlier. There seem to be two primary concerns. The first is the charge of an “ahistorical” adoption of practices rooted in criminalistics and the security state. My book, of course, deals centrally with the technics of digital forensics or computer forensics, as that field is known. Mechanisms’ first chapter explicitly situates these practices in the rise of trace evidence as a category of criminalistics, notably the career of Edmond Locard (in fact Bertillon’s student, whose pseudo-scientific “anthropometry” Locard’s Exchange Principle displaced). More to the point, however, Reichert and Richterich seem to miss entirely (or at least are silent on) the much more extended positioning of the book’s project in relation to the venerable philological traditions of textual studies and bibliography, which early on I describe as “among the most sophisticated branches of media studies we have evolved” (16). Absent a more extended presentation of what they might mean by “ahistorical” I cannot read their language here as more than a reflexive invocation of a specter (the NSA!), in response to the book’s overt discourse on forensics.

The second charge is that of my belief in forensics as “an incorruptible method of making digital media culture de facto readable.” Here I will say only that I cannot imagine how any even moderately careful reader of Mechanisms would come away from its pages — filled with detailed (some might say ponderous) case studies detailing gaps and silences in the media archive — with such a claim. I can do no better by way of reply than to quote what I say in the closing passage of the book:

The mechanism, as articulated in Gibson’s poem, is the agent of irrevocable difference, the shutter of the camera “Forever/Dividing that from this.” This is the very singularity whose mute evidence is incarnate on Leyton’s deserted subway platform. That moment of division, like a dropout on a tape — the point at which the magnetic signal ceases to register above the tolerances of the read/write head — is a synecdoche for dropouts and gaps all over the present pasts of new media. To Alan Turing, peeping at the lights in his Williams tubes, and to the suburban software cracker listening to the clacking of the drive head as it skips over protected tracks, I would add whatever did or didn’t happen as Kroupa and Templar hit the keys to upload “Agrippa” to MindVox, or whatever “papers” of Michael Joyce’s are not and never will be collected in the Michael Joyce Papers. Now that new media is being actively stored in archives and museums, as well as on the network — deliberately, as in the case of The Agrippa Files; socially, through abandonware sites; or automatically, through Google caches or the crawlers of the Wayback Machine — such absences will become more palpable. We will feel the loss of what is already missing more keenly.

Mechanisms will soon be ten years old (!) and, like any human artifact, is a product of its time and circumstance. The field has moved since then, and Mechanisms is among the many books that helped move it. Mechanisms is also, of course, not above critique, and has been the subject of other critiques (Jean-François Blanchette’s in “A Material History of Bits” being amongst the best I have seen).

So too, however, is Digital Culture and Society a product of time and circumstances, as perhaps some of the particulars rehearsed herein may underscore. Either the editors’ critique, however inchoate, was not yet formulated in late June when Reichert first sent me his interview questions, or, if formulated, was not presented to me for a response that I could have given at that time instead of here and now.

I wish Reichert and Richterich well with the new journal and I look forward to reading many of the contributions in their inaugural issue. All the more reason, then, that I regret that this episode raises, at least for me, questions of confidence both with regard to some particulars of editorial procedure and to the ethos of care required for critical accuracy and truly generative disagreement amid our ineluctably material networks of exchange.

Matthew Kirschenbaum
November 9, 2015


Bel Pesce and Stage Entrepreneurship: Why the Menina do Vale Isn’t Worth All That Much

When I made my video about the hilarious fiasco of the crowdfunding campaign for the “gourmet burger joint” Zebeleo (yes, I still hold an almost irrational grudge against that needlessly gourmetized term), I was harshly criticized for reducing this Bel Pesce to “one of those rich kids” with what many judged to be an air of dismissal.

The video of the trio’s little stunt was taken down, but the internet never deprives us of these things. Here is the reupload:

https://www.youtube.com/watch?v=_MSb5y6tDwI&ab_channel=MatheusB.

I had never heard of the woman, to the surprise of many, and my assumption was that she was just another face in that crowd of trendy, cosmopolitan hipster-millennials who orbit the Brazilian marketing world while repeating advertising jargon. You know, the folks who are always innovating the mindset 2.0 of the paradigm with brand synergies to add value to the engagement of the upcycling of a job, and so on.

I could not have been more wrong. I was informed that the girl is, rather, a Brazilian wunderkind without parallel. A graduate of the celebrated MIT, she passed through the most hallowed institutions of the technology world — Microsoft and Google — and even dipped a toe into the banking system. Not stopping there, she also founded several companies (one of which, following the Silicon Valley success playbook, was later sold for millions).

And after all this success, which hardly pales next to the biographies of tech’s great luminaries like Bill Gates, Steve Jobs, or Elon Musk, Bel Pesce returned to Brazil to inject a much-needed dose of entrepreneurship into our battered economy.

In other words: I am an envious, imbecilic loser, and the girl is a promising prodigy who brought recognition and the entrepreneurial spirit back to Brazil.

Just as my disrespect toward the girl’s respectable laurels bothered many, there was another kind of complaint from the opposite direction — some subscribers were outraged at the opportunity I had missed to expose a woman who, according to them, is a charlatan of the emerging (and lucrative) world of so-called “stage entrepreneurship.” The woman is a fraud, some insisted, and when they heard her name coming out of my mouth, they expected the focus of my video to be dismantling the facade of success the girl had built on empty self-help lectures sprinkled with reheated clichés like “believe in your dream” and “every defeat is a lesson learned.”

These detractors made the woman sound like a Robert Kiyosaki in a skirt, that is: a supposed entrepreneur who is cited and revered exclusively by people deluded by promises of wealth and glory through bogus schemes. Just as Kiyosaki is a prophet to the financial-pyramid crowd, Pesce would be one to the crowd itching to “entrepreneur.”

Unsure which of these versions of Bel Pesce was closer to reality (and already anticipating that the truth would lie somewhere in the intersection of the two, as is usually the case), I did what I was taught to do two thousand years ago in my Scientific Methodology classes at UFMA — I observed systematically, checked the veracity of the claims on offer, and formulated a hypothesis capable of withstanding peer review.

And the hypothesis I arrived at, grounded in the facts I will discuss in this text, is the third step of that process. Be my peers, then, and tell me what you think.

So. To better understand the woman’s biography, I did what I do whenever the solution to a puzzle in a video game escapes me: I turned to Google.

I was led to her English-language site, where it is declared that:

She studied at the Massachusetts Institute of Technology (MIT), where she got Majors in Electrical Engineering & Computer Science and Management Science, and got Minors in Economics and Mathematics.

I frowned. It is a particularly curious way of stating your academic credentials to say you have “Majors in X” and “Minors in Y,” and to understand why, I need to explain how American higher education works.

In the US and Canada, the structure of a degree lets your elective courses (that is, the ones not directly required for your diploma) accumulate in such a way that you can call yourself a mini-specialist in some subject outside your main area that also interests you. For example: I have a friend with a degree in Biology (that is his “major”; he is a biologist, that is the focus of his academic career and his title) and a “minor” in Psychology. He is not a psychologist and cannot go around diagnosing anyone; he merely has a passing knowledge of the fundamentals of psychology.

To be clear: the point of a minor is purely to satisfy a casual interest in a subject. Academically speaking, it is barely a step above reading Wikipedia articles on the topic. It is not an advantage worth counting.

Moreover, within North American culture, the phrasing “I have a major in X” is typical of someone who studied something, never finished, but still wants to use the fact to cloak themselves in academic authority on a given subject, leading the listener to conclude they are talking to a trained specialist in that area.

It would be like me leaning on the fact that “I studied Physics!” to sound erudite and authoritative on a scientific subject, while omitting that I never graduated and that it was so long ago I barely remember anything from the program.

This may be due, naturally, to a certain unfamiliarity on the girl’s part with the culture and the language (or maybe not, since she lived there for seven years), but it left me with more than a few nagging suspicions. The more usual phrasing would be something like “I have a degree in X”; listing major and minor is unnecessary.

…except, of course, if you want to paint yourself as a super-specialist who commands countless different fields. Over the course of my “investigation,” I found that the entrepreneur seems to have a recurring habit of exaggerating her feats through vague wording.

The impression I ended up with of Bel Pesce is that, perhaps more than the “Electrical Engineering & Computer Science and Management Science, Economics and Mathematics” her site enumerates, the area in which she is truly an expert is inflating her apparent social capital by embellishing her accomplishments with surgically specific language that, while stopping short of outright lying, is clearly designed to mislead the listener about what she has actually achieved.

You know the guy who describes his cashier job at McDonald’s as “analyst responsible for the operational capital flow of a large multinational company”? That is the territory we are in, and I think I can prove it beyond dispute.

That is why the woman seemed to hold Schrödinger’s diplomas — the number of degrees always varied between 4 and 6 depending on who was writing the story in Portuguese, a telling symptom of the Brazilian difficulty in understanding what the hell “majors” and “minors” are. “Just put down that she has six ‘degrees,’ damn it,” I can mentally hear the lazy editor ordering someone to simplify things.

And if it bothered Bel Pesce that outlets erroneously published that she was a multi-professional specialist in everything and then some, she made no great effort to clear it up.

This “major/minors” detail (along with what looks like a deliberate avoidance of identifying herself as a graduate) was precisely the proverbial “where there’s smoke, there’s fire” that set off my interest in verifying the woman’s supposed achievements. If she had said from the start “I have a degree in X and Y, period,” I would not need ten paragraphs to explain all this, because nobody would be under the impression that the woman holds a surreal number of degrees and be using it as an argument that she cannot be wrong. As I said, making people believe she is a professional with multiple areas of expertise was no accident — it was by design.

Look at even goddamn UNICAMP saying the woman “graduated simultaneously from five programs: electrical engineering, computer science, management, mathematics, and economics.”

On her Portuguese-language site, she says in so many words that she graduated in five disciplines. She also omits — though it is obvious — that “Electrical Engineering and Computer Science” is a single program at MIT, not two, as she evidently tries to make it seem.

Incidentally, through that OpenCourseWare link you can literally watch every lecture, follow every exercise in the course, take the exams, the works. Astonishing!

Back to Bel’s tall tales. This strangely inflated way of describing her education, combined with foreign sites explicitly saying she “dropped out of MIT,” makes me wonder whether she even graduated at all. I am not saying she didn’t — I am saying she uses language typical of someone who didn’t, and that this is… strange. An MIT graduate should not need that kind of verbiage to pad her résumé.

What is she trying to hide…?

Next, I turned my attention to Lemon, a (now defunct) financial-planning company that the Brazilian media reported Pesce had founded. UOL’s Economy page explicitly says the Brazilian founded Lemon, adding the poetic flourish that the company “was born from her ideas.” IstoÉ confirms Lemon’s authorship as Pesce’s, saying the woman “started her own company.” In this other story, UOL credits Pesce as the company’s founder (besides hammering once more on her supposed 5 degrees, a practical example of the maxim about a lie told a thousand times becoming the truth).

The source for all this, evidently, is Pesce’s own statements — since nothing in the company’s historical record confirms it. According to Wikipedia, the company’s founder is an entrepreneur named Wences Casares.

Incidentally, in 2012 Casares gave The Next Web this interview about adding Bel Pesce to the team. Why on earth someone else would be introducing the supposed founder of the thing as “an addition to the team,” I don’t know. She is not cited as a co-creator or anything of the sort.

Literally every article written about Lemon that mentions a founder (other than the Brazilian ones, which use Bel herself as the source) identifies Casares as such. Here are a few:

http://www.bizjournals.com/phoenix/blog/techflash/2015/08/lifelock-lemon-founder-locked-in-dueling-lawsuits.html

http://www.coindesk.com/lemon-wallet-acquired-lifelock-42-6m/

http://mashable.com/2013/12/12/lifelock-acquires-lemon/#YLeyy1Qj4mqf

http://www.recode.net/2014/3/13/11624538/lemon-digital-wallet-founder-wences-casares-gets-20-million-in

https://aerolab.co/lemon

http://latino.foxnews.com/latino/money/2013/12/20/son-sheep-ranchers-lemon-wallet-co-founder-wences-casares-is-serial/

http://www.forbes.com/sites/brucerogers/2012/08/23/will-wences-casaress-lemon-com-replace-your-wallet/#697a181d43cc

It is clear and undeniable — the only person claiming that Bel Pesce founded Lemon is Bel Pesce. Curiously, she never corrected the reporters who attributed the company to her (where do you think the version in which she created the thing came from, after all…?).

She did work at the company, yes, but she exaggerated the details of her role there, much as she exaggerated the four or five or six diplomas.

We will see that this is a pattern in the “entrepreneur’s” résumé.

Before Lemon, Bel was already known as a success story for “having worked at Google, Microsoft, and Deutsche Bank.”

Except she did not “work at Google, Microsoft, and Deutsche Bank” in the sense that comes to mind when you read that résumé, and the illusion is, once again, intentional. On her LinkedIn she is atypically frank — in reality she did only short internships facilitated by an MIT program that places students at big companies. The truth is that there is nothing very glamorous about these internships — the students generally perform trivial chores around the office and participate in “read only” mode (that is, just observing, without much input or autonomy) in a few of the companies’ side projects. Basically, a taste of what working in Silicon Valley is like.

Incidentally, MIT ships students off in droves to be intern grunts at technology companies. It is not particularly exceptional or prestigious. Several of these internships are not even paid.

Adding up all the time she spent at those three companies comes to little more than a year — 4 months at Google, 4 at Deutsche Bank, and another 8 at Microsoft (although in this video she says she spent only 3 months there…?). And, once again, the historical record does not confirm her claims of having participated in the companies’ projects.

For example. On LinkedIn, Pesce says of her stint at Microsoft:

[Bel Pesce] was part of a project to develop software that uses a webcam to track users’ actions. The main goal was to create a Multi-Touch interface that would let people interact with computers by only using a webcam and colored objects. The project also included a Software Developer Kit (SDK) that would allow other users to create their own Multi-Touch applications. Bel was part of the day-to-day of the project, documented the SDK, produced a demo to show the power of the SDK, recorded walkthrough videos to teach how to use the SDK.

There is just one little problem. Here is the list of the project’s coordinators and developers. And here is a page where the group behind Touchless thanks the community members who also helped them. Note the distinct absence of the Menina do Vale’s name from both.

And this is the video of the Touchless SDK presentation:

https://www.youtube.com/watch?v=hJuJJOK7MMc&ab_channel=MikeWasserman

The Brazilian entrepreneur does not appear in that presentation either. What is overwhelmingly likely is that in her short, short passage through Microsoft she did nothing beyond helping the group with trivial office tasks — in other words: intern stuff.

That did not stop her from characterizing herself, at the 24-minute mark of this video, as the project’s leader/organizer. Michael Wasserman, the actual creator of Touchless, might not be pleased to learn that a Brazilian self-help author is taking credit for his invention.

Speaking of her two months at Google, Bel says she…

Developed a tool that help find bottlenecks in the machine translation code. The tool puts together CPU, RAM and disk usage information, along with periodic code profiles.

But what “tool” was this? Where is the tool’s name? Why omit it…? And the documentation? A reference anywhere at all? External confirmation of her involvement with this tool?

It does not exist.

For her other stint at Microsoft, she credits herself with…

Development of software for Smartphones
Fully experienced Program Manager, Developer and Tester roles during the project:
Program Manager: organize the project as a whole — write specifications, negotiate features, drive meetings, research technologies, design project website
Developer: Write clean and efficient code, making use of the newest technologies to improve coding solutions
Tester: Create smart test cases and debug the software

What smartphone software did she develop? An intern program manager? How does that work? It is curious, for that matter, that this prolific programmer and “fully experienced program manager” has no GitHub page, nor a single line of code attributed to her. How someone attended one of the world’s top technology schools and earned a Computer Science degree without literally ONE LINE OF PUBLISHED CODE borders on the fantastic.

Then there is Ooyala, an online video platform nobody has ever heard of in their life, where she supposedly “led three teams of engineers.” The claim, in fact, is perniciously recurrent:

I challenge you, here and now, to find ANY mention of the woman working at Ooyala, any documentation of her leading those “three teams of engineers,” that is not a text citing it among her speaker credentials. Go ahead.

She simply says she did it all, and the media believed her without blinking. Besides inflating her contribution to projects, this is Bel Pesce’s other trademark — the strange credulity of the Brazilian media toward her easily refutable claims.

Beyond those well-known companies where Bel Pesce had such a brilliant run [citation needed], the innovator also started countless companies of her own. When I say “countless” it is literally because I cannot count them; the more I researched, the more companies supposedly created by Bel Pesce turned up. The girl is a Russian nesting doll of entrepreneurship: open one company and there is another inside.

For example. In this article, apparently written by some kind of fanboy of hers, there appears a mention of Talenj, a company co-founded and run by Bel. The site describes Talenj as “a company that makes and designs websites.” On Twitter, she says Talenj’s purpose was “connecting consumers to brands through competitions.” UNICAMP described Talenj as a company that “promotes learning through online challenges.”

It is almost as if nobody knows what the hell this Talenj actually is, right?

Today I will do something nobody in the media has done: I will show you what Talenj really is.

That is what the girl is CEO of. Or not even that, since according to the “company’s” privacy policy, the person responsible for the site is some guy named “Alex.”

Back on her LinkedIn, we see that she was responsible for “business development” at something called Krowder.com. The page is defunct, and even the Wayback Machine struggles to scrape its pieces together. Why would she claim a glamorous-sounding role at a dead “company” nobody has ever heard of, supposedly based in a state where Bel Pesce never lived?

I think we can guess.

She is also the CEO and founder of WhatIf, a site with a design I would expect from a teenager in 1999, not from an MIT computer science graduate. Again — a broken, defunct page, with no reference to her as founder, and one that quite evidently never earned a single cent.

Between 2007 and 2008, Bel also claims to have been CEO and co-founder of “WaterAfrica,” engaged in the “Development of a solar-powered piping system that enables better water distribution in Africa.”

I found two WaterAfricas on the entire internet. One was founded in 2006 by someone named Bill Savage, and the other has existed since 2001. Remember that the next time one of the entrepreneur’s fanboys tells you the girl “spent so much of her time on charitable NGOs,” as happened in the comments on my video. Perhaps he believes she ACTUALLY founded such organizations, when the reality is that they were the scattered daydreams of an imaginative girl.

I stop and think that this text would be much shorter and easier to write if Bel had not invented SO MANY stories.

Here is my hypothesis. Bel’s real accomplishment amounts to being accepted to, and (perhaps?) graduating from, MIT. There she tried to break into the technology industry, apparently without much success, because all she managed were short internships and half-finished websites without much purpose or even users. The company she supposedly founded was sold for US$42 million and the girl did not receive a single cent; apparently she held no equity in the company, nothing whatsoever.

With her student visa expiring and no concrete job prospects that would let her extend her stay abroad (in a video I can no longer find, she lets this detail slip, even joking that she considered marrying an American to remain in the US), the only way out was back to Brazil. That is when she decided to reinvent the “Bel Pesce who graduated from one of the world’s most prestigious technological institutions and could not turn that diploma into ANYTHING profitable, or even remain in the US” as the “Bel Pesce the prodigy, with five degrees, forty successful startups, prestigious positions at Google and Microsoft, author of countless products and services.”

It does not matter how absurd your tall tale is — someone will fall for it. There are people who believe in Inri Cristo, after all. What I did not expect was that our goddamn national press (however thirsty for stories of Brazilian winners) would drop the ball so lamentably, parroting the woman’s supposed success, unquestioningly lending credibility to “companies” like Talenj, never exercising a minimum of responsible skepticism, and thereby becoming an accomplice in her process of finally opening a real company:

A company that teaches others to found their own companies — with courses taught by someone who never founded a real company of her own.

An ouroboros of entrepreneurship. A recursive loop of “innovation.” And since the world never runs short of suckers, a perpetual motion machine of money.

If the story seems unbelievable, if despite all the evidence you can verify for yourself you still think the woman MUST be everything she claims to be “because she was on TV, she was in IstoÉ…,” then I have to inform you that you are either very young or have a short memory. This is not the first time a supposed intellectual with more university titles than most people have baseball caps has gone on TV to recount his fabulous feats, sprinkling nonsense dressed up as wisdom. Remember Omar Khayyám?

Incidentally, this stage-entrepreneurship business strongly resembles the esotericism of religious ritual. The cult of personality around “leaders” of whom no ill may be spoken, the legends passed from mouth to mouth about their magnanimous feats, that hysteria of FOLLOW YOUR DREAM, FULFILL YOUR POTENTIAL… now there is even a cheesy semi-gospel music video proclaiming the virtues of the entrepreneurial lifestyle:

https://www.youtube.com/watch?v=mtu5jiGOAzA&ab_channel=Gera%C3%A7%C3%A3odeValor

What tacky-ass garbage. Swap out a word or two and you could screen that video at a Herbalife meeting or an evangelical service.

Have we forgotten Luiz Almeida Marins Filho, another star of the motivational-speaking circuit, with stints leading foreign companies and countless degrees (he was even a DOCTOR!) — until the day someone looked closely and discovered that a good part of that résumé was padded? Have we already forgotten Bernard Madoff, one of the greatest charlatans the world has ever seen, who abused his influence in the financial world to defraud investors of more than 18 BILLION dollars?

Five years or so ago, anyone trying to warn a Madoff-admiring friend would surely have heard “pfff dude, he’s a billionaire, he’s on Wall Street and everything, he’s been in a thousand investment stories, you think you know better than him?” Today Madoff, who answers to “Prisoner #61727-054,” awaits his release date from the slammer: November 14, 2139 (seriously, he got 150 years. The Americans do not go easy on con men).

Some people gain recognition (deserved or not) and use it to sell illusions. That appears to be exactly the case with Bel Pesce — she went to the US, attended a prestigious institution, passed (briefly) through several companies, and appeared in a few stories abroad, which conferred on her a veneer of legitimacy, and presto: without ever having built a business in her life, she poses and speaks as a specialist.

Worse, she sells herself as a specialist. She does not talk much about it, perhaps because she is still testing the validity of the business model, but apparently Bel plans to soon launch franchises of FazINOVA, her little entrepreneurship/self-help school, with investment tiers projected at upwards of a hundred thousand reais.

Bel Pesce has literally no substance. That is the inconvenient truth. She is basically a female equivalent of Tai “Here in my garage in Beverly Hills” Lopez: he has money, is supposedly a famous entrepreneur, has spoken at TED too… but everyone knows the guy is a damn charlatan, and he is openly mocked for it.

She tried to make it in Silicon Valley, and not even the MIT education helped. Having failed, she returned to Brazil extolling her own feats in the Tech Mecca like the Brave Little Tailor announcing “I killed seven!” while omitting that the seven were flies — and, like the fable’s protagonist, once the rabble believed in the sevenfold homicide, keeping up the reputation was just a matter of hustle.

Her books are full of anecdotes that, judging by her characteristic lack of commitment to the truth, have the historical value of the little tales of Sítio do Picapau Amarelo. The “entrepreneurship” advice does not even rise to the elaborateness of the other self-help authors venerated by pyramid schemers and other intellectual amoebas. I will tell you to “believe in your dreams” and “keep persevering” for free.

Pesce’s current “companies,” the ones cited in her bios, are the aforementioned FazINOVA, the self-help school she aspires to turn into a franchise; Enkla, a publishing house that publishes only her books; “Figurinhas,” an advertising agency that does not even have a website; and BeDream, whose site is so vague and pyramid-like that I CHALLENGE you to explain to me what it is about.

The woman has not done half of what is attributed to her, and her “ventures” are transparently a vehicle for reasserting her credentials as an entrepreneur. Entrepreneurship arising from nothing and serving only to feed more entrepreneurship: there is something almost thermodynamically wrong with that equation.

And I did not even need to go to MIT to see it.

(I posted this text on this Medium thing because my own site has been crashing miserably ever since I published it there. That is what you get for not having 5 MIT degrees and not knowing how to configure a server. My site is www.hbdia.com, and I am always around on Twitter as @izzynobre.)


How I Used & Abused My Tesla — What a Tesla looks like after 100,000 Miles, a 48 State Road trip, 500 Uber Rides, 20 Rentals & 2 AirBnB sleepovers.

Most $100,000 cars are babied by their owners. Never taken out except on a warm Sunday. Garaged and kept with extremely low mileage. Only driven by the owner, not even allowed to be driven by a spouse, much less a stranger.

Not my poor Tesla.

I’ve worked that thing like a rented freaking mule.

So, you ask, how did the Tesla hold up? What’s it actually look like now? What are the exact operating costs, repair numbers and dollars spent & earned on this car over the 2 years of ownership?

Read on to find out all the gory details…and the photos to prove it.

It all started on August 27th, 2014 when I purchased my Blue Tesla Model S P85. I bought it used with 35,000 miles from a local Phoenix owner for $79,000. It originally sold for well over $100K when new.

Here’s the car when I bought it with the original 21" Turbine wheels:

In just under 2 years, on August 16th 2016, I reached dual milestones: 100,000 Miles and 500 Uber Rides.

100,000 Miles & 500 Uber Rides happened within the same hour on August 16, 2016

As this was the first really expensive car I’ve owned, I needed to find a way to help pay for the car. Naturally, Uber came to mind so I signed up and actually gave the first official Uber ride in Flagstaff AZ when they opened the market on September 17th, 2014. As it turned out, this would be just one of many firsts for this particular Tesla. Here’s the tweet from the Uber rep in Flagstaff:

I ended up getting commercial insurance as I wanted to do UberBlack, the high end service. However, I didn’t actually get activated on Black for another 5 months as there was a waiting list in Phoenix. My first UberBlack ride was worth the wait: It was during the SuperBowl in Phoenix, and it was a ride that cost $305 of which I made $225.

My First UberBlack ride during SuperBowl 49 in Phoenix

During the same SuperBowl week, something crazy happened. My Tesla was getting worldwide press.

Why?

Oh, just this little story about how I rented out my Tesla as “The World’s Fastest Hotel” on AirBnb. The story went completely viral as it was on CNN, CBS, ABC World News Tonight, and more blogs than I could count.

And yes, while I turned down several potential renters I did have 2 automotive reporters pay $85 & $385 (after I upped the price hoping to discourage more guests) to sleep in my Tesla as it was parked in my garage.

Awkward? Oh hell yes.

Funny? Certainly.

A real business idea? Ummm, that would be a big fat NO.

That media frenzy is what inspired my next Tesla adventure, the admittedly poorly named “Million Dollar Tesla Trip”. It was a 4.5 month, 27,615 mile journey across all 48 States plus Canada where I video interviewed interesting & inspiring people in the Tesla as we drove across the country. Interviewees ranged from founders of incredible charities, to the former Driver for Martin Luther King, several authors, lots of fellow Tesla owners, and another cross country road tripper who was volunteering with 50 youth organizations in all 50 States. It became the longest continuous road trip in an electric vehicle (unofficially) and I was the first Tesla owner to visit 200 SuperChargers. Read about my Top 11 Tips for Road Tripping in a Tesla.

After completing the massive road trip, I started renting my Tesla out on Turo.com, the “AirBnB for Cars” in October of 2015. Since my job is renting out Vacation Rentals, it wasn’t much of a stretch for me to rent out my Tesla. Turo provides the match-making service as well as insurance, so it’s worth their 25% cut.

Since I ditched my commercial insurance before the trip and wasn’t too excited about the low UberX rates, I didn’t restart driving for Uber till July of this year. I’m able to do UberX, the cheapest service, along with Select which is reserved for nicer cars and is about 2X the price although only about 1 in 15 rides is Select. Once I started though, it’s become somewhat addicting, but the beauty is I can quit or slow down any time.

Uber’s prices are so low, it really doesn’t pay to drive for Uber in an expensive vehicle especially if earning an income is your only goal. Personally, I wouldn’t Uber in any car besides a Tesla. I do it for several reasons: a great excuse to drive more, sharing the Tesla experience, and it’s fun meeting the mostly cool passengers. If you use it smartly, it can be a lot of fun, and slightly profitable.

There is no better way an individual owner can help Tesla achieve its mission “To Accelerate the Advent of Sustainable Transport” than to drive for Uber or Lyft.

One of the ways to Uber with very little time investment is to use Uber’s commute option where it only offers you riders going your same direction. This way you are paid for going where you were going already. Make someone smile while making some lunch money. Not too bad.

Total Cost of Ownership:

Cost of Tesla: $79,000 used with 35,000 miles

Regular Maintenance Cost over 65,000 miles in 24 months:

  • “Annual Service”: $600 (Yes, I’ve only done this once at 49,000 miles. Probably not a bad idea to do another soon)
  • 2 sets of tires: $1700
  • Oil Changes: Hahahahaha
  • Brakes? Nope. The regenerative braking does 95% of the work and recharges my battery at the same time.

Total Maintenance = $2300

Out of Pocket Repairs from 50,000 to 100,000 miles:

  • 12v Battery $400
  • Door Handle Repair $1000
  • Wheel well fasteners $80

Total Repairs = $1500

Total Maintenance + Repairs = $3800. Keep in mind, 65,000 miles is 5 years of “normal” driving at 13,000 a year.

I’d love to hear about any other $100K car going that far (with 50,000 miles out of warranty) for less than $4000 ALL IN. Oh, and I’ve probably spent less than $1,000 on electricity as well.
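If you want to check my math, the whole cost side fits in a few lines of Python (the $1500 and $3800 above are my rounded versions of these figures):

    # Maintenance and repairs over 65,000 miles / 24 months, from the lists above
    maintenance = 600 + 1700        # annual service + two sets of tires
    repairs = 400 + 1000 + 80       # 12v battery, door handle, wheel well fasteners
    print(maintenance)              # 2300
    print(repairs)                  # 1480 -> the ~$1500 quoted above
    print(maintenance + repairs)    # 3780 -> the ~$3800 quoted above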

Earnings:

Uber — 500 Rides totaling $6,142.47 in 9 active months = $682 average per month. Less than 1 month was on UberBlack. Most of it was on UberX & Select.

Other Rides: $360

Turo — 20 Rentals totaling $6652.25 in 11 months = $604 average per mo.

AirBnB — 2 Rentals totaling $470

Total Tesla Income = $13,624.72 / 24 months = $567.69 a month average
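Same deal for the income side, if you want to verify it:

    # Two years of side income from the car, from the figures above
    uber, other, turo, airbnb = 6142.47, 360.00, 6652.25, 470.00
    total = uber + other + turo + airbnb
    print(round(total, 2))       # 13624.72
    print(round(total / 24, 2))  # ~567.70 a month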

Tesla Road Trip Savings: My 27,615 mile (the circumference of the Earth is 24,901 miles) 48 State plus Canada road trip cost $8.37. I had to pay for electricity 2 times, the rest was FREE thanks to the Tesla SuperCharger network. There were about 180 SuperChargers when I started the trip. There are now almost 300 in the USA. Gas savings assuming a 25 MPG car using a national average of $2.75 a gallon = $3037.

I also used the “Tesla Hotel” about 20 times out of the 132 nights on the road since the Tesla allows you to run the A/C or heat all night with no issues. With an average hotel cost of $75, this saved me $1500.

Total Road Trip Savings of just over $4500.
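And the road trip savings, computed the same way:

    # Road trip savings: gas avoided plus hotel nights slept in the car
    gas_saved = 27615 / 25 * 2.75    # 27,615 miles, 25 MPG, $2.75/gallon -> ~3037.65
    hotel_saved = 20 * 75            # 20 "Tesla Hotel" nights at $75 average
    print(round(gas_saved + hotel_saved - 8.37, 2))  # ~4529, net of the $8.37 in electricity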

Should I have purchased the Extended Warranty?

As 50,000 miles approached, I had to decide whether or not to purchase the Tesla Extended Warranty for $4000. This would extend the regular warranty to 100,000 miles. My choice? I was confident in the Tesla so I rolled the dice. No warranty for me.

As I hit 100,000 miles, I finally found out if I had made the right decision.

As noted above, I spent $1500 out of pocket versus $4000 on the warranty, so I came out ahead by $2500.

Tesla also has an 8 year, unlimited mileage warranty for the Drive Train & Battery. This was great, as I did have the drive train replaced at about 65,000 miles and the battery replaced at about 76,000 miles. Tesla service was beyond fantastic in dealing with both issues and I was on my way with zero out of pocket cost.

The moral of the story? The Tesla isn’t a typical prissy $100,000 car. It’s meant to be driven, and driven hard. It’s not just a daily driver, it’s a high performance yet practical and extremely safe car. It’s better than a traditional car in so many categories it’s fall down funny.

So, you want to see the 100,000 mile photos??

Tesla with 100,000 miles and 19" Cyclone wheels — not as sexy as the 21s but more economical
A few bits of road wear. The Xpel protectant has helped avoid rock chips
Some Road Rash courtesy of the concrete jungle: Manhattan, NYC

In my opinion, the Tesla has held up very well. Most of my Uber riders are very surprised when I tell them the car is almost 4 years old. Yes, there are a few more minor blemishes on the paint, but nothing out of the ordinary for 100,000 miles. I really don’t think you could tell any difference between my car and any other with similar mileage even though I’ve given 500 Uber rides and rented the car out 20 times to complete strangers on Turo.

I implore any Tesla owner to throw out any notions of keeping your Tesla to yourself because you are worried you will ruin the car.

Share the hell out of it!

Sign up for Uber or Lyft and give people rides. Trust me, their reactions alone are worth it when they hop in your Tesla. Let others get a taste and they will soon realize what we already know. Let’s help spread the word about these world changing cars. My experience should prove that your car can take all the abuse you can dish out and then some.

Bonus Prediction:

I think even Tesla fans and industry analysts are massively underestimating what Tesla will do in the next few years with the cheaper Model 3 that should be fully autonomous shortly after it’s released. I think Tesla could sell 1 to 2 million units a year by 2020.

Tesla Model 3 starting at $35,000

To clarify, I believe the demand for that volume will be there, but the hard part is being able to ramp up production that fast. Odds say that will be tough to pull off.

However, once people realize they can pay $35,000 for a killer car that can earn them $30,000 in a year by simply pressing a button and telling their car to go pick up passengers while they work or sleep — it’s game over.

Wait, a car that makes me money?

Wait, a car that can drive me across the state for free, while I sleep or get work done? It can autopilot me through stop and go traffic, but I can drive it like Mario Andretti on the weekends?

Yes, please.

Not only will this affect car sales, but airlines will see more people shifting to driving vs flying and it will even make not owning a car more practical. This, along with many other ripple effects we are not even thinking about yet.

Bring on the disruption. It’s coming and coming fast. Just like a Tesla.
