I was never a marathon runner; even we cross-country and track runners were never THAT hardcore. Props to you if you did it.

The Infantilizing Nature of Technophobia: A Matter of Will

It is no secret to many in the health and fitness communities that individual willpower is weak. People will say “I’m going to lose __ weight” or “I’m going to go on __ diet,” but their ability to follow through is limited at best. Serious gymgoers brace for the flood of people who hit the gym for their New Year’s Resolutions during the early months of every year, only to abandon them once other aspects of life intrude. Fitness in particular is often achieved through external prodding of some sort, the most familiar of which for most Americans occurs in high school in the form of physical education or team sports.

As a onetime track and cross-country runner, I understood this all too well. At my peak I briefly placed at the top of the team; the rest of the time I was merely satisfied to try to keep up with the top runners. Our coach exploited the rivalry within the team — during races we were urged to keep up with our peers. It was never entirely about outrunning the other team. We were ideally motivated by the simple desire not to let our teammates down; to keep up with the boy or girl in the same uniform next to us. Another interesting thing I observed was that there was always a contingent of people from the basketball team forced to run on our team. Their coach did not trust them to exercise off-season, so he or she made them run despite their loathing of the sport and the somewhat embarrassing uniforms we had to wear. Sadly, our own team had no such enforcement — training during the summer and other periods when we were not directly forced was out of the question, while our rivals trained constantly. So we never amounted to much as a team competitively.

In sum, people have poor willpower. Successful fitness brands like CrossFit have to optimize retention and minimize attrition, and do so by creating a sense of mystique and group belonging within a common community. More mundanely, small-scale clubs and meetups help people keep each other committed where they might fail alone. And let us be honest — this is a form of pressure and coercion. Unfortunately, research suggests that willpower may not be something we have a choice over. Low socioeconomic status produces chronic stress, and simply getting through the day (much less exercising willpower) can be an unbearable burden. And whether we scrape by or are born with a silver spoon is not something any of us controls. Rich or poor, policymakers increasingly try to design interventions around insights from behavioral psychology about “nudging” individuals in a desired behavioral direction. Andrew Gelman and others have made some good points about the sometimes illiberal implications of this approach to policymaking, but my point is simply that “nudge” is (good or bad) an establishment cliché.

I bring this up in the context of a ridiculous New York Times article on the so-called “infantilized self.” Their beef? Exercise and health tracking devices now make behavioral suggestions in an effort to prod their users toward better behavior — with those oh-so-terrifying “algorithms” that *gasp* use data:

This new category of nudging technology, she says, includes “hydration reminder” apps like Waterlogged that exhort people to increase their water consumption; the HAPIfork, a utensil that vibrates and turns on a light indicator when people eat too quickly; and Thync, “neurosignaling” headgear that delivers electrical pulses intended to energize or relax people.
“There is this dumbing-down, which assumes people do not want the data, they just want the devices to help them,” Ms. Schüll observes. “It is not really about self-knowledge anymore. It’s the nurselike application of technology.”
In the move to the mass market, it seems, the quantified self has become the infantilized self. … Lately, however, devices are asking consumers to cede their free will to machine algorithms.

“Cede their free will.” Oooh, sounds scary. “Infantilized self.” Man, what a zinger! And of course the “nurselike application of technology” and “dumbing-down” bring to mind images of patients in nursing homes who can barely manage basic everyday tasks without the aid of special personnel. The story is written to suggest that these technologies are just yet another example of those “creepy” algorithms taking more and more power away from humans and making decisions for them. But there is, of course, another way to look at it.


I began this post with a discussion of willpower and its sometimes costly nature because this story (like all “algorithms” pieces) speaks from the absurd perspective that technology augmenting human willpower is somehow evil, creepy, or a sign of human infantilization. What if technology might actually offload internal processes that are costly for us to deal with? What if the demonstrated inconsistency of human decision-making and willpower might be helped via a computational mechanism? After all, as long as we are accepting the somewhat Puritan-esque notion that humans ought to stick to rigid health regimens, we might as well go all the way. A popular textbook in the psychology of judgment and choice, after mercilessly outlining all of the reasons why we should not trust ourselves to do the right thing, argues we should copy the legendary warrior Odysseus and tie ourselves to the mast to resist the siren call of our desires.

Perhaps you might find this logic objectionable, if not outright offensive. But the idea of offloading costly computation is neither objectionable nor offensive in and of itself. The paradigm of embodied cognitive science holds that our cognition is not limited solely to the brain:

Embodied cognitive science appeals to the idea that cognition deeply depends on aspects of the agent’s body other than the brain. Without the involvement of the body in both sensing and acting, thoughts would be empty, and mental affairs would not exhibit the characteristics and properties they do. Work on embedded cognition, by contrast, draws on the view that cognition deeply depends on the natural and social environment. By focusing on the strategies organisms use to off-load cognitive processing onto the environment, this work places particular emphasis on the ways in which cognitive activity is distributed across the agent and her physical, social, and cultural environment (Suchman 1987, Hutchins 1995). The thesis of extended cognition is the claim that cognitive systems themselves extend beyond the boundary of the individual organism. On this view, features of an agent’s physical, social, and cultural environment can do more than distribute cognitive processing: they may well partially constitute that agent’s cognitive system. (Clark and Chalmers 1998, R. Wilson 2004; A. Clark 2008, Menary 2010).

Philosopher and cognitive scientist Andy Clark famously made the case in his book Natural-Born Cyborgs that humans have been uniquely successful due to the ability to offload cognition to external objects and thus augment our cognitive capacities. Intuitively, one can see an example of this in the novel Fahrenheit 451. When all books have been destroyed, the remaining vessels of human knowledge are groups of wandering intellectuals — each of whom has memorized the entire contents of a foundational book. One can, of course, quibble with this thesis in several respects. Not all external devices are equal in their benefits, as the distinction between pen-and-paper and computer note-taking in school illustrates. And the Internet in particular may lend a false sense of intellectual superiority, as recent cognitive science research notes.

Regardless, Clark makes a point echoed somewhat by evolutionary psychologist Robin Dunbar (of the famous “Dunbar’s number”) about the evolutionary role of language. Dunbar points out that highly demanding and intensive methods of social interaction (and of preserving social bonds) did not scale as human group size increased — hence the rise of gossip and language as information-transmission mechanisms. The idea of the evolution of language and other forms of sociality as tools for offloading computation is echoed in a 1998 simulation of evolving language in neural networks and in Joshua Epstein’s 1999 paper with the awesome title “Learning to be thoughtless: social norms and individual computation.” A summary:

This paper extends the literature on the evolution of norms with an agent-based model capturing a phenomenon that has been essentially ignored, namely that individual thought — or computing — is often inversely related to the strength of a social norm. In this model, agents learn how to behave (what norm to adopt), but — under a strategy I term Best Reply to Adaptive Sample Evidence — they also learn how much to think about how to behave.
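Epstein’s mechanism can be loosely sketched in code. The toy model below is my own simplification, not Epstein’s actual specification — the ring topology detail is his, but the agent count, tie-breaking rule, and radius bounds are illustrative assumptions. Binary-norm agents sit on a ring; each agent polls neighbors within a personal sampling radius, adopts the local majority norm, then shrinks its radius (thinks less) when the sample is unanimous and widens it when opinions conflict:

```python
import random

def step(norms, radii, n, r_max, rng):
    """Asynchronously update one randomly chosen agent on a ring of n agents."""
    i = rng.randrange(n)
    r = radii[i]
    # Poll the norms of neighbors within the agent's current sampling radius.
    sample = [norms[(i + d) % n] for d in range(-r, r + 1) if d != 0]
    # Adopt the local majority norm (ties go to norm 1).
    norms[i] = 1 if 2 * sum(sample) >= len(sample) else 0
    # "Learn how much to think": a unanimous sample lets the agent shrink
    # its search radius; a conflicted sample forces it to widen the search.
    if all(s == sample[0] for s in sample):
        radii[i] = max(1, r - 1)
    else:
        radii[i] = min(r_max, r + 1)

def run(n=50, r_max=5, steps=5000, seed=0):
    """Run the toy model and return the final norms and sampling radii."""
    rng = random.Random(seed)
    norms = [rng.randint(0, 1) for _ in range(n)]
    radii = [rng.randint(1, r_max) for _ in range(n)]
    for _ in range(steps):
        step(norms, radii, n, r_max, rng)
    return norms, radii
```

In runs of this sketch, agents in settled neighborhoods tend to drift toward the minimum radius — conformity as saved computation — which is exactly the inverse relationship between norm strength and individual thought that Epstein describes.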

Philosophy, evolution, and simulation models aside, one should also observe that the goal of offloading cognition motivated one of computing’s brightest pioneers, J.C.R. Licklider — the man who first envisioned the Internet and modern personal computing. Writing in 1960, Licklider envisioned something that would give today’s “algorithms” fearmongers nightmares — man/machine symbiosis. I first read his paper on the subject as a master’s student taking a course on technology policy, and it was amazing and tremendously inspiring. How could someone, in 1960, imagine such wonders? One need only read Licklider’s own words to get a sense of the possibilities he dreamed of:

Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. It will involve very close coupling between the human and the electronic members of the partnership. The main aims are 1) to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems, and 2) to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them. Prerequisites for the achievement of the effective, cooperative association include developments in computer time sharing, in memory components, in memory organization, in programming languages, and in input and output equipment.
… It is to bring computing machines effectively into processes of thinking that must go on in “real time,” time that moves too fast to permit using computers in conventional ways. Imagine trying, for example, to direct a battle with the aid of a computer on such a schedule as this. You formulate your problem today. Tomorrow you spend with a programmer. Next week the computer devotes 5 minutes to assembling your program and 47 seconds to calculating the answer to your problem. You get a sheet of paper 20 feet long, full of numbers that, instead of providing a final solution, only suggest a tactic that should be explored by simulation. Obviously, the battle would be over before the second step in its planning was begun. To think in interaction with a computer in the same way that you think with a colleague whose competence supplements your own will require much tighter coupling between man and machine than is suggested by the example and than is possible today.

One can justifiably quibble with Licklider’s Don Draper-era language (ought women not benefit from this computing paradigm as well?) or laugh at the idea of waiting a week to get an answer to a computing question. But the nobility of Licklider’s vision is far more difficult to dismiss. It harkens back to the notion of the so-called “machine in the garden” that animated 19th-century optimism about technology. (Wo)man would be free to exercise higher intellectual capacities, set the goals, and determine the nature of the operation. And the machine, humming away without complaint, would solve the problem.

To some extent, Licklider’s dream has been realized. I wrote a small script, for example, that acted as a mini operating system for organizing my schoolwork, technical notes, and other desiderata. When I first saw it work, it felt like magic. Complex operations in the command-line terminal or the Mac graphical interface were reduced (via scripting) to one or two keystrokes. I can’t say it solved all of the problems in my life, but it made me better able to focus on what was important to me. And everything from the personal assistant x.ai to software that helps those who must contend with cognitive disabilities is among the most marvelous aspects of our time. The rise of domestic robots may also help even out the domestic division of labor, freeing up both partners to focus on their children, their careers, and of course their love for each other.
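For flavor, here is a toy sketch of the kind of keystroke-saving dispatcher I am describing — every command name and path below is a hypothetical illustration, not my actual script:

```python
#!/usr/bin/env python3
"""Toy keystroke-saving dispatcher; command names and paths are hypothetical."""
import datetime
import pathlib
import sys

def new_note(topic: str,
             base: pathlib.Path = pathlib.Path.home() / "notes") -> pathlib.Path:
    """Create (if needed) and return a dated Markdown note file for a topic."""
    base.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()  # ISO date, e.g. "2024-05-01"
    path = base / f"{stamp}-{topic}.md"
    path.touch(exist_ok=True)
    return path

# Map one-letter commands to routine chores.
COMMANDS = {"n": new_note}

if __name__ == "__main__":
    cmd, *args = sys.argv[1:]
    print(COMMANDS[cmd](*args))
```

Wired to a shell alias, something like `n stats` then creates and prints the path of today’s statistics notes in a couple of keystrokes; an editor invocation could be bolted on in the same spirit.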

But if we’re to be honest, Licklider’s utopian vision never fully came to pass. Members of my family who did statistical computations for homework in the era of punch cards cursed their fate. Lacking personal computers, they had to wait for all of the jobs on a university machine to complete, which meant staying up until midnight to submit analytical programs (in FORTRAN and COBOL, no less) and then waiting all night for the results of their computation (and to learn whether there was a bug in the code). People of my generation — while not faced with such “I walked 10 miles in the snow”-esque problems — still expend an enormous amount of tedious labor in the era of R and Python (and still need to visit often-irritable help forums to solve problems).

More broadly, I remain resolutely skeptical that technology will ever banish problems that fundamentally arise out of human social conditions. If anything, the trouble with technology is that it doesn’t obviate the worst human problems — it only reproduces them in another form, often forcing us to scramble to “solve” them all over again. Learning about technology and writing programs for school has, if anything, intensified the suspicion of techno-utopianism I had as a student of history and politics and a budding (non-computational) social scientist. I don’t think technology will save us. Every new era of technology is fundamentally also about the social contexts that technology is embedded within; the same telecommunications that revolutionized the early 20th century also enabled its most destructive wars by facilitating modern combined-arms combat.

The “technics” beneath our civilization are complex, and not always positive in nature. The notion of so-called “normal accidents” is a function of modernity and our dependence on complex engineered systems and the experts capable of divining their mysteries. And “opting out” is difficult; only the most privileged among us can afford to fully “unplug” from the various networks our lives take place within. The disturbing thing about technological progress always lies in its double-sided nature. The same domestic robots I mentioned earlier may alleviate some problems (fights over who does the laundry or cooks), but if they scale they will also obliterate an enormous number of low-skilled jobs and thus take food off the tables of countless laborers.

So what about apps that nudge you to drink more water or go to the gym? Are they a good or a bad thing? Frankly, I am circumspect myself. They could be a good thing for those who otherwise lack the willpower to accomplish health goals, and there is no shame in admitting that. A friend, worried about the late hours I keep as a PhD student and about my sleep, recommended a handy app called f.lux that modulates your computer screen’s hue based on the time of day. Other people I know use more invasive systems that tell them when they are not meeting fitness, study, work, or organizational goals and nudge them. It’s ultimately an empirical question whether these apps can help people achieve the lives they want, and the problems I mentioned above regarding computer note-taking and Google-fu suggest that answering it will require actual study (as opposed to claims from marketers).

But this question isn’t really the reason I wrote this post. As a PhD student, I am rather aware that many of my habits (especially concerning sleep) are not healthy, so I am not exactly well placed to talk about health from a position of authority. Rather, I felt that the term “infantilized self” sums up the myopic and ridiculous nature of modern technophobia. The article is a story about people who have supposedly surrendered their free will to creepy “algorithms,” indulged in weakness, and become little more than the wards of “nurselike” machines. Not only is this stupid, it’s also frankly offensive.


I frankly don’t care enough about whether I hit the treadmill to buy a gadget that records what I do and prods me when I don’t live up to my own expectations. But as a PhD student, I have empathy for those who set an ambitious goal for themselves and fight an uphill battle to achieve it. Every day a voice in my head asks me whether what I am doing is meaningful, as I read stories about the omnipresent PhD glut or watch my friends hit life milestones (career promotions, marriage, and sometimes even children) while I slave away through a dark and winding tunnel. My willpower isn’t consistent. Sometimes I feel like the king of the world; other times it flags and I give in to gloom. But whatever you do, don’t feel sorry for me. My story isn’t unique. Every PhD student has a story like this, and many of us simply don’t make it to the finish line. PhD attrition is enough of a topic that it has, well, PhDs paid to study it! Nonetheless I’m committed, and unless something dramatic changes in my life I’m going to keep going until I have the “Dr.” attached in front of my name.

I don’t care what your goals are. Want six-pack abs, to eat more vegetables, or to run a marathon? Knock yourself out. Have a dream to climb Mt. Everest? Go ahead. Want to be a Nobel Prize-winning physicist? Well, I wouldn’t advise it, but you do you, homie. Whatever your ambitious goal is (as long as it’s not, say, violently establishing a caliphate in Iraq), you have my unconditional support, respect, and admiration. I will cheer you on and support whatever means — technological or otherwise — you need to get the job done, as long as you are serious about sticking with it. We’re both, after all, in the same boat. And I don’t care if you need a computer to remind you to do those squats or crunches. Whatever it takes to do the job is fine, as long as you own what it means and it isn’t illegal or unethical.

A former undergraduate classmate whom I consider an honorary doctorate holder in the art of the hustle, Quantasy’s Julian Mitchell, wrote in a recent Forbes column that you need to understand the difference between working smarter and working harder. In other words, hustle, don’t grind:

Yet, there is an even more common mantra that sets a solid line of differentiation between two types of people — work smarter, not harder. That invaluable phrase is the foundation for understanding the difference between grind and hustle. To achieve lasting success, and continue conquering multiple missions, it’s important to know which category you currently fit into. …Hustlers make the right moves and master positioning, while someone who is a grinder always searches for moves, seeking to master where they’ve forcefully been positioned. Someone who is a grinder can work tirelessly and see no return. Their sense of fulfillment is found in the chaos of moving at a fast pace, juggling multiple tasks, or simply being busy. However, someone who is a hustler makes sure every effort reaps a valuable return on investment.

Do you want to be a hustler, or just some guy or gal who grinds along? Make your choice. After all, my former classmate Mitchell, who hustled successfully enough to win the approval of one Sean Combs, knows what he’s talking about.


So, in contrast to the title of that New York Times piece, people who use machines to help them meet their goals in life and bolster their willpower aren’t the ones who are “infantilized.” I did not consider myself a weakling or an infant because I used the pressure to match my teammates to do better, or because I exploited my coach’s endless hectoring as an impetus to run that 5K faster. I was, on the contrary, a competitive track and cross-country runner once capable of keeping up with (and sometimes beating) the top people on my team. I was also once capable of routinely running long distances that should embarrass my present self (who is merely happy to do a bit of jogging around the neighborhood). Part of that was grit and determination. But it was also the fact that I ran 800-meter practice repeats while a belligerent coach screamed in my ear to go harder and faster and had plenty of choice words for me when I did not. If you need a computer to do that for you, I’m not going to judge you. How we get to the finish line isn’t as important as getting across it in the first place.

The only people who are “infantilized,” in contrast, are technophobes unwilling to account for the problematic nature of human willpower and the fact that sometimes we could use a (computational) helping hand to get where we want to be in the world. Instead of bravely admitting that, yes, we might need some help from our robot friends, they demonize those who use computational aids as losers who have surrendered their free will to creepy “algorithms.” They refuse to grapple with the problem that everyone — from the PhD student to the John Doe trying to make that New Year’s Resolution stick — faces in holding on to a goal over time, and they blame the need for help on computers. One wonders if they would apply the same strict standards to themselves when they load the dishwasher instead of washing every dish by hand. They probably didn’t do all of the math in their SAT prep by hand either. And they would never dare tell, say, Stephen Hawking that he is weak or controlled by computers for using specialized voice systems to communicate brilliant scientific insights to the world. Not all of us face challenges as severe as Hawking’s, but his determination and example are a powerful case study in the power of technology to help us become who we want to be in the world.

No technology, no matter how useful, removes the human element of achieving a complex and difficult task. But people like Hawking, who face adversity and nonetheless use technology to triumph over it, are the everyday equivalents of gym-going he-men like Charles Atlas. Those who spread fear of technology are, in contrast, the 97-pound weaklings getting sand kicked in their faces. It’s hard to be the man or woman you want to be in the world. A saying popular in the old days was that behind every great man is a great woman. Subtract the gendered language and it’s still fundamentally right. Maybe the 21st-century variant is that behind every great man or woman is a great computer, and there ought to be no shame in admitting that.