Rise of the Brain Machines

How Computers Would Extend Human Ability According to Scientific American, 1949–1955

A woman operating the input/output console of a UNIVAC/ERA 1103. Public domain image.

Following World War II, science and technology assumed an elevated position in American society. Technical innovations were expected to bring unprecedented benefits to humankind. Jonas Salk’s vaccine promised to conquer polio, television and radio brought news and entertainment directly into the home, and many believed atomic energy would soon power the world.

But a dark cloud hovered over this age of techno-enthusiasm. Just as fear of technological displacement had permeated the industrial era, public concerns in the postwar years included worries about mass unemployment caused by the new electronic computers. Would humans lose their jobs to faster, more efficient machines?


​Enter the Scientific American, a popular science magazine whose writers ignited enthusiasm for scientific and technological innovation while assuaging readers’ worries about what the future might hold. Science writing had been a staple of American journalism ever since the American Association for the Advancement of Science introduced the genre in the 1920s. Science journalists addressed society’s concerns in ways meant to sustain public support for continued technical development.

When Dennis Flanagan and Gerard Piel took over the Scientific American in 1948, they sought to “increase the immediacy, timeliness, and authority of the magazine” by publishing expository writing: articles written primarily by subject-matter experts. Their efforts paid off. The magazine’s readership, which was and continues to be an educated one, grew steadily from 40,000 in 1947.

​Expository articles on computers published in the Scientific American between 1949 and 1955 reveal that readers found computers elusive and intimidating. The fears were not hyperbolic, but rather thoughtful reservations about how computers might impact society. These hesitant concerns reflected the magazine’s readership demographics: “scientists, engineers, and others looking for detailed articles about specialties outside their own.” In addressing these worries, writers characterized computers as analogs of human physiology and extensions of human capability.

During this period, Scientific American authors tried to make computers more approachable by establishing a narrative about how computers could improve society. Of course, writers’ individual experiences and historical circumstances such as the Cold War influenced articles about computing. Set those themes aside for now, though, and focus instead on how these writers understood cybernetics, computerized assistance, and the melding of man and machine.

Writers employed two main analogies to describe computers: the human brain and nervous system, and the “inorganic organism.” Through these analogies writers made predictions about computing’s future. They imaginatively and critically addressed a cybernetic possibility in which computers would replace humans in certain menial and intellectual tasks while remaining subordinate to humans in general. Whatever capabilities or advantages machines might hold over the human brain, they were expected to remain safely under the control of human programmers.


Giant brains and mechano-biological organisms

Freight trains, scratch pads, and books — these were just a few of the comparisons Scientific American writers drew between computers and other objects from 1949 to 1955. But of all the analogies offered to help readers understand the computer, that of the human body or living being was the most common. Computers, both analog and digital, enormous and compact, were explained as mechanical analogs of human physiology and biological processes. Within this analogy, computers were either electronic, mechanical brains or inorganic organisms.

Most frequently, computers were described as similar to human brains. This comparison reflected a preoccupation with whether computers could think. According to professional science writer Harry M. Davis, “the use of ‘memory’ as a technical term of the computer trade has bolstered their anthropomorphic analogy to ‘brains.’” Davis considered the replacement of human intellectual and communications labor by computers to be nothing short of revolutionary. “The first phase of the Industrial Revolution meant the mechanization, then the electrification, of brawn. The new revolution means the mechanization and electrification of the brains,” he wrote in 1949.

​The brain analogy was more than an easy etymological transition from memory. According to Dartmouth mathematics professor John G. Kemeny, the famous mathematician John von Neumann made a “detailed comparison of human and mechanical brain in a series of lectures at Princeton University.” The “mechanical brains,” as Edmund C. Berkeley, author of Giant Brains, or Machines that Think, called them, were “endowed with the spark of automatic activity…the capacity to pay attention to and respond to a series of stimuli.” Just as a human brain gathered and processed information to make sense of the world, the computer brain received input in the form of a programmer’s commands and processed the information, either digitally or by analog means, to perform calculations.

Kemeny extended the brain metaphor to include the rest of the nervous system. One early computer, the Turing machine, “in fact resemble[d] a model of the human nervous system, which can be thought of as having a dial with many various positions and combining many simple acts to accomplish the enormous number of tasks a human being is capable of.” The dial, representing the brain, and the turning of the dial, representing the transmission of signals through neurological pathways, demonstrated the logical simplicity of Turing’s computer. Upon receiving and interpreting external stimulation, the brain sent tiny signals dictating the appropriate response through the nervous system. Similarly, the dial in Turing’s computer, upon receiving a command, spun to issue a series of responding signals to produce output.
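
To make Kemeny’s image of a dial with many positions concrete: a Turing machine is nothing more than a finite set of states (the dial positions) plus a table of simple rules that read a symbol, write a symbol, move along a tape, and switch states. The toy simulator below is a minimal sketch in modern Python, not anything from the 1950s articles; its three rules, which add one to a binary number, are invented purely for illustration.

```python
# A tiny Turing machine: states ("dial positions") and a rule table.
# Each rule: (state, symbol) -> (symbol_to_write, head_move, next_state)
# This machine adds 1 to a binary number, starting at the rightmost digit.

RULES = {
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry keeps moving left
    ("carry", "0"): ("1",  0, "halt"),    # 0 + carry = 1, done
    ("carry", "_"): ("1",  0, "halt"),    # ran off the left edge: new leading digit
}

def run(tape: str, state: str = "carry") -> str:
    cells = list("_" + tape)          # pad with one blank cell on the left
    head = len(cells) - 1             # head starts on the rightmost digit
    while state != "halt":
        write, move, state = RULES[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).lstrip("_")

print(run("1011"))  # -> 1100
```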

Kemeny continued the analogy by describing digital computing in relation to the nervous system, in which the brain interprets the binary firing or dormancy of a neuron as a signal requiring some type of action. Digital computers, which relied on the equivalent binary of on-or-off electrical signals, represented the “most complex logical machine” mimicking human neurology. Von Neumann’s conceptual “universal machine” imagined a computer possessing a virtually complete electronic and mechanical nervous system. When a human programmer entered a program into the universal machine, the “neurons and transmission cells” would be “either quiescent or they [would] send out an impulse if properly stimulated.”

John von Neumann hanging out with his high-speed computer, 1952. Photo by Alan W. Richards, Institute for Advanced Study, Princeton.

Only one article contested the computer-brain analogy. In “Computer Memories,” Louis N. Ridenour, who in 1950 had become the first Chief Scientist of the United States Air Force, declared computer memory to be less like the human brain and more like the tools used by a “human computer” preparing a tax return. The computer’s inner memory was like a “scratch pad” used to store the data and instructions currently in use by the machine. The intermediate level of memory was an “analogue to a human computer’s notebooks and files of documents,” while the third category corresponded “to books and similar large repositories of the knowledge of mankind.” Ridenour’s criticism of the brain-computer analogy stood alone; both the magazine writers and the authors of two letters responding to articles about computing seemed to prefer imagining the computer as an active, automatic, perhaps even “thinking” piece of equipment.

​The computer as organism was a less common analogy than that of the computer-brain or computer-nervous system, but its use shows that Scientific American writers considered computers to be not merely brains, but organisms. Lawrence P. Lessing, an editor of the Scientific American from 1953 to 1955, described computers as “impressive monsters” that “have proved harder to tame and put to work than we first thought.” Even if computers did not literally resemble humans or other organisms, writers still employed biological terms to describe computers’ components, processes, and potential for development.

More than one journalist envisioned an evolutionary model of computer development, in which machines descended from less technically advanced ancestors toward future states of progressive improvement. Lessing adopted the taxonomic term “genus” to describe different generations of computers. “Big machines” were a “different genus from the smaller electronic computers or data processing machines that evolved out of them in some profusion after 1945.” Machines of different types shared the same kinds of “organs,” he suggested, pushing the biological analogy further. Lessing’s 1954 article did not describe how the organs were connected to create an organism, but it set the stage for Kemeny’s 1955 article, which reaffirmed and enriched the analogy.

Kemeny made a more extensive comparison between computers and organisms when he reviewed von Neumann’s “universal machine” concept. Such a computing machine would be “remarkably human,” capable of learning, evolving, and even reproducing on its own. Kemeny sought to redefine the concept of living in order to show the possibilities for computing; the theoretical universal machine was “not alive,” but was still able to “create a new organism like itself out of simple parts contained in the environment.” If being alive simply meant the ability to reproduce, rather than the possession of organic material subject to change, growth, and decay through the ingestion of environmental factors, then computers could be considered living organisms.

Von Neumann’s machine, comparable to a “higher order animal,” would possess three kinds of parts to merge the “brains” and “brawn” necessary for reproduction. Neurons, “similar to those… in the central nervous system,” provided the “logical control.” “Transmission cells” carried the messages from the brain to the “muscles,” whose “primary use is, of course, the changing of an inert cell into a machine part.” Not only did the universal machine share neurons, cells, and organs with its human designers, but it also might imitate their behavior. Machines could “get into conflict with each other — imitating even in this their human designers” and, if left to reproduce in an environment of limited inert components, might even resort to killing each other. The universal machines, if given the command by their operators, might even undergo evolutionary processes. “If one might design the tails [of coded instruction] in such a way that in every cycle a small number of random changes occurred” without compromising the reproduction instruction, the machine would pass the “mutation” on.
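
Von Neumann’s scheme is easier to picture as a loop: a machine carries a coded description (the “tail”), uses it to assemble a copy of itself from inert parts, and passes the description on, occasionally with a few random changes that do not touch the copying rule itself. The snippet below is a loose modern sketch of that idea in Python, not von Neumann’s actual construction; the class, the build step, and the mutation rate are all invented for illustration.

```python
import random

class UniversalMachine:
    """A toy self-reproducer: a 'body' assembled from a description it carries."""

    def __init__(self, description: str):
        self.description = description          # the coded "tail" it will copy
        self.body = self.build(description)     # the "muscles" assemble the body

    @staticmethod
    def build(description: str) -> str:
        # Stand-in for assembling machine parts out of inert components.
        return f"machine[{description}]"

    def reproduce(self, mutation_rate: float = 0.05) -> "UniversalMachine":
        # Copy the description, letting a few random changes ("mutations")
        # slip into the tail while the copying rule itself stays intact.
        tail = list(self.description)
        for i in range(len(tail)):
            if random.random() < mutation_rate:
                tail[i] = random.choice("ABCD")
        return UniversalMachine("".join(tail))

parent = UniversalMachine("AABBCCDD")
child = parent.reproduce()
grandchild = child.reproduce()
print(parent.description, child.description, grandchild.description)
```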

​Writers understood the computers of the 1950s as “brain machines,” complete with neurons, and as organisms possessing brains, cells, organs, and even the potential for ears. Though they recognized that computers were not perfectly analogous to humans, the comparison enabled them to consider whether computers could “think” in the same way as their human users. The concept of the machine as an evolving, perhaps even reproducing, entity allowed for the possibility of a world in which humans and machines collaborated in the realms of labor, engineering, intellectual work, and even home management. Though Scientific American writers did not use the term “cybernetic” when describing the prospects of the human-computer relationship, they suspected that computers would become invaluable companions to humans.

Computer beings

Scientific American writers celebrated the rapid advances in computing made since World War II. They predicted that computers would assume control of many menial and tedious tasks that still required humans at the time. Some even hoped that advances in computer speed and memory would allow computers to become intellectual companions to their human designers. While predicting the future benefits of advances in computing, writers also acknowledged the dark side of technology, including the concern that computers might come to dominate their human controllers. To alleviate these fears, writers stressed that computers remained subordinate to the human programmers who ultimately controlled them.

​Magazine writers celebrated the real and potential advantages of computers, particularly the speed and efficiency of digital machines. Digital computers were free from the “play” and “noise” that plagued even the most advanced analog computers. Ridenour suggested in 1952 that the Radio Corporation of America’s Typhoon missile simulator was likely “the most complicated analogue device ever built, and very possibly the most complicated that it will ever be rewarding to build.” The nature of digital computer programming enabled digital “brain machines” to operate logically and quickly.

Although the human brain was more efficient electronically and chemically, it was also the “slowest” of computing machines; electronic machines possessed the “advantage of high-speed operation, freedom from errors, and freedom from laziness.” If programmed correctly, future digital calculators and computers would “make all sorts of quick decisions that now require an alert and hard pressed human being.” Digital machines were “beginning to operate at levels of speed, temperature, atomic radiation and complexity that make automatic control [of tasks then requiring human operators] imperative… and the results are certain to be dramatic.”

Scientific American writers also predicted that computers would supplement, if not replace, human labor in both physical and intellectual occupations. Computers were already applied to tasks such as managing seat reservations for major airlines, collating flight schedules for the Civil Aeronautics Authority, managing inventory for the mail-order house John Plain & Co., and controlling human traffic flow at the Pennsylvania Railroad’s New York terminal. “Audrey,” the Automatic Digit Recognizer developed by Bell Telephone, showcased the possibility of computers capable of hearing and listening.

Although Audrey could only “hear” ten numbers and sixteen English sounds spoken by one of her designers, Edward E. David celebrated Audrey as bringing a “talking, and listening, robot” closer to reality. Writers considered the new computers to be profitable, efficient alternatives to the expensive and error-prone human workers they replaced. Kemeny described the “electric eyes” of the New York terminal as “vastly faster than any doorman,” while the tally clerks John Plain employed to record orders during the holiday season could not keep up with their records without making errors.

​The writers did not mention potential job loss created by computers in these circumstances. Labor issues, particularly for jobs considered tedious or menial, were evidently not a concern for the writers. Beyond simple jobs, digital computers could also handle “elaborate engineering computations.” The computer could “take the place of several human brains” and had already replaced human calculators in aircraft manufacturing and missile piloting. Digital computers’ reliability compared favorably to that of the human brain in many writers’ eyes, meaning “problems that would not be practical for the human brain may be submitted to the machines.” In factory settings, computers could be used to calculate the “best way to distribute available manpower, funds, equipment, and so forth, to maximize a particular effort or to minimize cost.”
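
That last application is essentially what is now called linear programming. Purely as a modern illustration (nothing the 1950s writers could have run), the sketch below uses SciPy to decide how many labor hours to allocate to two hypothetical production lines so that an output target is met at minimum cost; all of the costs, rates, and limits are invented.

```python
from scipy.optimize import linprog

# Hypothetical allocation problem: hours on two production lines.
# Line 1: $30/hour, 10 units/hour, at most 60 hours available.
# Line 2: $45/hour, 18 units/hour, at most 50 hours available.
# Goal: produce at least 900 units at minimum total cost.

cost = [30, 45]                       # minimize cost @ hours
A_ub = [[-10, -18]]                   # -(units produced) <= -900, i.e. units >= 900
b_ub = [-900]
bounds = [(0, 60), (0, 50)]           # available hours per line

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
hours_line1, hours_line2 = result.x
print(f"Line 1: {hours_line1:.1f} h, Line 2: {hours_line2:.1f} h, cost ${result.fun:.0f}")
```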

The computer presented an opportunity to transfer control from human operators to steady, reliable machinery. Rather than requiring a human to operate “simple specialized control mechanisms,” the application of computers to automatic control in manufacturing, transportation, and even commercial industries would enable machine supervision of a whole job. Compact computers, perhaps successors to the small “Simple Simon” machine, might someday be found in average homes. These “little robots” would help their owners manage their household accounts and assist children with their homework.

“Simple Simon” featured on the front cover of the Scientific American’s November 1950 issue. Unlike its contemporary ENIAC, which could perform 5000 addition calculations per second, Simon performed just 1.5 per second.

Ultimately, computers might even become the intellectual partners of their makers. According to Turing, whom one optimistic letter writer quoted, “We may hope that machines will eventually compete with man in all purely intellectual fields.” Computers already faced off against human opponents in chess and could perform simple, word-by-word translations. These early accomplishments and hopes for the future reflect the vision of a digital utopia: computers would be programmed to handle tedious or complicated jobs for humans, assist them in managing their homes, and even provide opportunities for recreation and mental stimulation.

Despite the apparent advantages of computers over humans in performing certain tasks, human operators held ultimate control over the machines. Interestingly, some of the faults assigned to computers were peculiarly anthropomorphic in nature. Computers, although faster and more reliable than human brains, were susceptible to “temperaments and moods” which led to incorrect answers; they were also “inconsistent in their mistakes.” The chess-playing computers occasionally made bad moves based on flawed reasoning. “This, of course, is precisely what human players do: no one plays a perfect game,” Shannon recognized.

Claude Shannon and Edward Lasker, a chess champion, playing with Shannon’s relay-based chess machine in 1950. Photo from the Computer History Museum.

Other shortcomings of computers emanated from the actual mechanics and programming of the machine. Davis quoted one expert: “the more I deal with these machines… the more impressed I am with how dumb they are. They do nothing creative. They can only follow instructions, which must be reduced to the simplest terms. If the instructions are wrong, the machines go wrong.” Computers could do only what they had been programmed to do; their logic worked by trial and error, “but the trials are trials that the program designer ordered the machine to make… the machine makes decisions, but these decisions were envisaged and provided for at the time of design.”

No matter how automatic or thoughtful machines appeared to be, human operators still controlled a computer’s functions. As advances occurred, machines might be given more routine responsibilities in factories, businesses, and, someday, homes, but the “human supervisor [would] still be vital to proper operation” and the “provision for human veto” of machine action would continue to be built in. “Though they replace other kinds of human mental effort,” predicted Davis, “the mathematical machines will never replace the mathematician.”

​Could a computer even be designed that did not require a human operator beyond the initial programming stage? Central to this question was the possibility of machine thought. “Could computers really think?” asked Davis in 1949. Davis was far from alone in asking this question; both magazine writers and letter writers posed the question and attempted to answer it through chess-playing scenarios, the conceptual possibilities advanced by von Neumann and Turing, and explanations of the actual electronics and mechanics of computing machines. At the heart of the argument against machine autonomy was the idea that the creative aspect of human thought gave humans the superior position.

​Berkeley and Shannon considered chess to be the best test of machine intellect. Both writers granted that machine brains were capable of logical reasoning. The chess-playing computer could be built to calculate the best move in any given situation. If thinking were regarded as a “property of external actions,” such as logical moves on a chessboard, then the machine could be considered a thinking entity. Adopting a psychological definition, Shannon defined thought as “essentially characterized by the following steps: various possible solutions of a problem are tried out…without being carried out physically; the best solution is selected by a mental evaluation of the results of these trials; and the solution found in this way is then acted upon.” Substituting the word “machine” for “mental” in this description, Shannon argued, rendered an exact definition of how computers operated.
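
Shannon’s three steps (trying candidate solutions internally, evaluating the results of the trials, and acting on the best one) map directly onto how a chess program selects its move. The fragment below is a generic one-ply sketch in Python, not Shannon’s own scheme; the material-balance “position” and the candidate moves are invented for illustration.

```python
# A one-ply rendering of Shannon's "trial, evaluation, selection" loop.
# A "position" here is just a material balance (our points minus the
# opponent's) and a "move" is the change it produces -- a deliberately
# crude stand-in for a real board model and evaluation function.

def evaluate(position: int) -> int:
    """Mental evaluation of a trial position (here: the material balance)."""
    return position

def choose_move(position: int, moves: dict[str, int]) -> str:
    best_move, best_score = None, float("-inf")
    for name, delta in moves.items():        # step 1: try each move "mentally"
        trial = position + delta             # ...without touching a real piece
        score = evaluate(trial)              # step 2: evaluate the result
        if score > best_score:
            best_move, best_score = name, score
    return best_move                         # step 3: this is the move to act on

# Three candidate moves: a quiet move, winning a pawn, winning a queen.
print(choose_move(0, {"Nf3": 0, "Bxb7 (pawn)": 1, "Qxd8 (queen)": 9}))
```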

Only humans, however, employed truly flexible chess strategies marked by imagination. If a game of chess were a painting, only a human chess master could sketch the outline of the strategy he intended and then follow it while making minor alterations here and there. Machines would “always make the same move in the same position” and could not learn from their mistakes; either capacity would require intervention from the programmer. The computer, at least as it existed between 1950 and 1955, lacked the creative capability to alter its own instructions, learn from its mistakes, or act without an initial human command.

​As early as the 1950s, computers were believed to extend human capability even though mechanical and programming limits rendered them subordinate to their human designers. Computers could be efficiently used to automate and manage industrial, transportation, and business tasks in addition to solving complicated equations for mathematicians and engineers. Some writers hoped that computers might eventually be applied to household duties, although such talk assumed computer designers could adequately miniaturize the machines.

Despite the economic advantages of enlisting computers rather than humans for certain jobs, the machines still relied on human control. Computers could accomplish only what their coded instructions allowed, and they were not yet capable of learning from mistakes and revising their programming to improve operations. Yet many authors expected that advances in computing would endow computers with these creative capabilities. Kemeny remarked that there was “no conclusive evidence for an essential [and permanent] gap between man and machine” and that “for every human activity we can conceive of a mechanical counterpart.”

Turing, quoted by letter writer Samuel Ross, predicted that by the end of the twentieth century, “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” If computers could not think for themselves in 1955, it was only a matter of time before advances in computer science endowed machines with real thought.


As contributors to the Scientific American explored the human-machine divide, they determined that computers mimicked their human designers both physically and operationally. Computers were brains composed of neurons or mechanical organisms made up of brains, cells, and organs. Despite being limited by their lack of creativity, computers could compete with humans in a variety of industrial, business, and engineering jobs and face off against human opponents in chess. A few writers expressed hope that in the future, miniature computers would find a place in the average American household.

​Even though Scientific American writers did not use the term “cybernetic” when describing the prospects of the human-computer relationship, their bodily analogies and descriptions of how computers could be applied to human tasks indicated the potential for computers to extend human capabilities.

The hopes and concerns of writers were, in some cases, tentative and would be resolved or forgotten within a few years. Expository writing about computers tapered off over the next five-year period as developments in nuclear science and space rocketry drew the public gaze elsewhere. In 1960, cybernetics pioneer Oliver Selfridge and cognitive psychologist Ulric Neisser published an article in the magazine declaring the question of whether machines could think to be an “old chestnut,” the answer to which was a definitive yes. New questions awaited computer scientists, for there was “not much doubt that [computers] can think, but they still cannot perceive.”


Interested in reading more? Articles from the Scientific American archives are available for a small fee.

Edward E. David, “Ears for Computers,” Scientific American 192 (1955): 92–98.

Harry M. Davis, “Mathematical Machines,” Scientific American 180 (1949): 29–39.

Charles A. Du Pont, “Letter,” Scientific American 182 (1950): 2.

John G. Kemeny, “Man Viewed as a Machine,” Scientific American 192 (1955): 58–67.

Lawrence P. Lessing, “Computers in Business,” Scientific American 190 (1954): 21–25.

Louis N. Ridenour, “Computer Memories,” Scientific American 192 (1955): 92–100 and “The Role of the Computer,” Scientific American 187 (1952): 116–130.

Samuel A. Ross, “Letter,” Scientific American 193 (1955): 2–6.

Oliver G. Selfridge and Ulric Neisser, “Pattern Recognition by Machine,” Scientific American 203 (1960): 60–68.

Claude E. Shannon, “A Chess-Playing Machine,” Scientific American 182 (1950): 48–50.