FROM THE ARCHIVES OF PRAGPUB MAGAZINE, FEBRUARY 2019

The Post-PC Era — A Moment in the History of the Computer: Everywhere and Nowhere

by Michael Swaine

PragPub
The Pragmatic Programmers

--

In this latest installment in his computer history series, Mike considers what comes after the personal computer, including the direct brain-computer link.

This series on the history of the personal computer takes a light approach, including fictional scenes meant to put you in the moment. Fictionalized might be a more accurate description: in some cases, the scene is consistent with what might have happened, and in other cases, the scene represents an actual conversation that took place between the individuals, but with dialogue that I improvised based on what I know.

A girl is biking from place to place, stopping here and there to pull out her tablet and communicate with friends, write, draw, take a picture, pay for a meal, or look things up, all while sitting on a stoop or in a tree or lying on a lawn. Her mother asks what she’s doing on her computer and she responds, “What’s a computer?”

It’s a 2018 commercial from Apple Computer — sorry, from Apple: they dropped “Computer” from their name in 2007.

What’s a “Personal” Computer?

Image by David Stanley from Nanaimo, Canada, CC BY 2.0, via Wikimedia Commons

The second half of the 20th century saw the relationship between people and computers change drastically. What was once seen as massive, frightening, and depersonalizing had evolved into something truly personal. While once the phrase “personal computer” would have seemed like an oxymoron, now it was taken for granted.

Actually, in the first decade of the 21st century, forces were massing that would change the relationship between people and computers again, and more fundamentally than the developments that had led to this acceptance of computers as personal devices. But that part of the story is yet to come.

By the end of the 20th century, the personal computer revolution — the push to give the power of computers to ordinary people — was arguably over. The “computer power to the people” folks had won. Everyone now had a personal computer — at least if you were sufficiently casual in your use of the word “everyone.”

But it’s not really everyone. So here’s a — perhaps naive — question: if we now have these devices that we are calling “personal computers,” doesn’t that imply that everyone should be able to have their own computer? After all, anyone can say, “I’m a person, so where’s my computer?” And if you live in the developing world or below the poverty line, the answer is most likely, “Sorry, you don’t get one.”

So maybe the revolution is not over.

A Revolutionary Idea

Since the earliest inklings of the possibility of individuals owning computers, the term “personal computer” has evoked this democratizing aspiration, this notion that the power of the computer should be made available to all the people.

This idea was present in the 1960s, when college students were dealing with computers by punching decks of cards and handing them to a white-coated member of the computer priesthood, those special souls who were allowed direct access to the campus mainframe. Or later, when they were learning to program by signing up for time on the mainframe and logging in on a terminal chained to the wall in a common workroom. That feeling of being shut out rankled, and the longing to have a computer of your own grew. There was power in those machines, and the students could feel the difference between being granted temporary, limited access to that power and owning it themselves.

It was present in the early 1970s, when electronics hobbyists could feel how close the possibility was of actually owning their own computer — but still out of reach. They were impatient. And their desire connected with the political movements of the time. Surely everyone should have access to the power of the computer. In his literally revolutionary book Computer Lib, Ted Nelson denounced the computer priesthood that guarded the mainframes and doled out access in little pieces. He told his readers, “you can and must understand computers now.” He railed against “cybercrud,” the lies the computer elite were telling ordinary people to keep them from realizing that they could, in fact, understand computers now. He explained how computers worked, what you could do with a computer, and why you should conspire to gain the power they represented. Early personal computer pioneers like Lee Felsenstein took up the message. And not just as a slogan. Nelson’s populist message was not a fringe phenomenon or a fad. The very people who were building and selling these new personal computers often believed strongly in this “computer power to the people” message.

By the end of the 1980s, computer power to the people was, for many, an accomplished fact. It was no longer a pipedream to imagine having your own computer on your desk at work or in the family room or home office at home. It was now the expected thing. It was the new reality.

But, as we have observed, not for everyone. You had a computer on your desk if your business made that investment. You had a computer in your home if your family could afford it. That left out the poor and most of the developing world. Computer power to the people had only reached some of the people.

The Programmable Turtle

We now drop the curtain, shuffle the scenery, and raise the curtain on a different scene. That computer power to the people message, it turned out, resonated extremely well with what seemed like a completely unrelated body of thought: the educational theories of Jean Piaget.

Seymour Papert had worked with Piaget at the University of Geneva and had absorbed the famous educator’s ideas about learning, adapting them into a theory of his own that he called constructionism. By 1963, Papert was at MIT, working as a research associate in applied math. Within a few years, he was a professor of applied math, an expert in the growing field of artificial intelligence, and a highly regarded researcher studying how children learn.

Papert explained his constructionism in terms of two propositions regarding learning. First, that learning is a matter of building on what you already know, of discovery, rather than having knowledge transmitted to you by another person. Second, that “learning can happen most effectively when people are active in making tangible objects in the real world.”

“Tangible objects,” it turned out, didn’t necessarily mean physical objects. Papert and two colleagues developed a programming language specifically for children, called Logo. He soon had children using Logo to draw. A child gave commands to an on-screen cursor called a “turtle.” The turtle drew lines on the screen as it moved under the child’s direction, creating drawings. If you wanted to execute the same command or sequence of commands several times in a row, there was a command for that, and you learned to use that repeat command, because it met a need in this world you were simultaneously exploring and building. The child was creating objects, making discoveries, and painlessly learning programming concepts.
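
To give a feel for what those turtle sessions were like, here is a minimal sketch in Python, whose standard turtle module descends directly from Logo’s turtle graphics. The square-drawing commands are my own illustration, not a transcript of an actual Logo lesson.

    import turtle  # Python's standard turtle module, a descendant of Logo's turtle graphics

    t = turtle.Turtle()

    # Draw a square by repeating "go forward, turn right" four times --
    # the same idea as Logo's REPEAT command.
    for _ in range(4):
        t.forward(100)  # move forward 100 units, drawing a line
        t.right(90)     # turn 90 degrees to the right

    turtle.done()       # keep the drawing window open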

Papert described this learning environment as Mathland. To learn French, live in France, he said. To learn math, live in Mathland. Of course, all these students had computers.

In 1985, Papert was among the founding faculty of the MIT Media Lab, created under founding director Nicholas Negroponte to do interdisciplinary research drawing on technology, media, science, art, and design. Alan Kay had visited Papert’s lab at MIT in the late 60s, and seeing his work with children and Logo inspired Kay to conceive of his Dynabook, the prototype for modern tablet computers and itself an inspiration to personal computing pioneers.

One Laptop Per Child

But in the late 1990s, Papert moved to Maine, where he set up a program to teach kids struggling with drug, alcohol, anger, or psychological problems. He got to know the governor, Angus King, and pushed for a program that made Maine the first state in the U.S. to treat ownership of a computer as a student’s right. In 2001, Governor King launched a plan that put a computer in the hands of every Maine seventh-grader. Their own personal computer, to keep.

Soon, Papert and his Media Lab colleague Nicholas Negroponte were pushing for a much bolder plan. The plan had a name: One Laptop Per Child, or OLPC. OLPC would do for the rest of the world what had been done in Maine. This was computer power to the people argued from an educational position. Papert said that computers in a computer lab were like books in a library chained to the walls. Negroponte said that shared computers were like shared pencils. A computer for every child. One Laptop Per Child. They were campaigning as fervently as though they were running for office.

Negroponte addressed world leaders at the World Economic Forum in Davos, Switzerland, in 2005. Here he pitched the OLPC idea hard, but also made it clear that OLPC would only work if you could reduce the cost of a personal computer from the fifteen hundred dollars or so that the State of Maine could afford, to something like one hundred dollars. You need to figure out how to build a hundred-dollar computer. How do you manage that? He had an idea.

Enter Mary Lou Jepsen. Jepsen had been a student at the Media Lab years earlier. She was a memorable student. She came to the Media Lab with an academic background in studio art and electrical engineering. She had had an almost religious reaction when she created her first hologram, and most of her subsequent work involved light. At this point in her career, she had developed the world’s first holographic video system, done pioneering work in wearable displays and laser displays, mounted massive-scale holographic art installations, started and run her own company manufacturing LCD chips for high-definition TV displays, and been headhunted by Intel to run their display division. Now she was back at the Media Lab, and Negroponte knew she was the person to make OLPC happen.

In 2005, she launched OLPC as a company. All that first year she was its only employee. She did it all.

Take the screen. One key to making a hundred-dollar computer was to create a new screen; existing screens were prohibitively expensive. Jepsen’s screen was remarkably cost-effective, and it had other advantages as well. It drew so little power that the computer could easily run on solar power. It also treated monochrome and colored light differently, a trick that meant the screen could be read in full sunlight. These were big advantages for a computer that might be used in remote areas, off the grid and in classrooms without electric light.

Jepsen brought that same design philosophy to the whole computer. She designed and brought into existence a computer that could be sold for a hundred dollars. And then she sold it. She enlisted industry leaders and international organizations to help get the computers to kids throughout the world. With the help of the United Nations Development Programme and others, OLPC targeted schools in the least developed countries. Over 2.5 million OLPC computers were shipped to children around the world during Jepsen’s brief two-year tenure.

“Bill Gates, Steve Jobs, Michael Dell, Craig Barrett the then-CEO of Intel, all said it would never work,” Jepsen said. “I hand-soldered a prototype and Kofi Annan, the head of the United Nations, wanted to unveil it at this world forum and then every head of state wanted access to the laptops.”

Jepsen’s little startup ultimately had revenues of a billion dollars. It set records for the rapidity of its growth. And it transformed the lives of children in the developing world.

“And,” Jepsen points out, “[it] is still the lowest power laptop ever made.”

The hundred-dollar personal computer that she invented was such a revolutionary development and work of art that it earned a place in the Museum of Modern Art in New York.

The OLPC project scored many successes. In the end, though, the results fell short of Jepsen’s ambitious goals. Even in Uruguay, where conditions were ideal and they were able to get computers into the hands of nearly all students, the computers ended up being used mostly for play. They didn’t succeed in getting a computer in the hands of every child worldwide. They didn’t even come close. And it wasn’t the educational breakthrough Papert had expected.

The problem was that the desktop — or even portable — computer was no longer the digital tool of choice, and this was especially true in the third world.

The Third-World Leapfrog

Lower the curtain. Shuffle the furniture. Raise the curtain.

It turned out kids in developing countries didn’t want or need computers. Nor did their parents. Not if they could get their hands on a smartphone.

Now a phone is not a computer. But around 1999, phones began to take on the capabilities of computers, beginning with internet access.

In 1999, a Japanese company called NTT DoCoMo launched the world’s first mobile phone that could access internet services. The name DoCoMo stood for “DO COmmunications over the MObile network.” This was the birth of the smartphone, which would turn out to be more or less a computer in the form of a phone. And also, of course, a phone.

Smartphones were rare outside of Japan for the next three years, and then competitors began to appear, including the Danger Hiptop and phones based on software from Microsoft. One of the most successful of these early smartphones was Research In Motion’s Blackberry, a product so addictive it came to be called the “Crackberry.” Blackberry devices were ubiquitous among government workers in the late 2000s. The Blackberry was arguably a computer by von Neumann criteria: it had a CPU and an operating system, memory, and input and output capabilities, and it could act as a calculator, a calendar, an address book, and many other things.

But these devices were not presented as computers. Even when Apple entered the market with its iPhone in 2007, Steve Jobs, who was never inclined to undersell a product, introduced it as “a widescreen iPod with touch controls, a revolutionary mobile phone and a breakthrough internet communications device.” He thought he was making a bold claim, but he could have called it a personal computer that you can slip into your pocket. A year later, the first smartphone using Google’s Android operating system appeared, and by 2012, Android dominated the market. By this time, the ability to install apps (that is, programs) on these phones made it more obvious that they were indeed computers.

These smartphones represented a huge and exploding market. Let’s quantify that. By 2017, there were more mobile phone subscriptions in the world than there were people.

And where they were found was interesting. Smartphones were spreading throughout the world, leapfrogging conventional computers. Seventy percent of people in the least-developed countries were cellphone users. Internet access lagged a bit behind, but still reached nearly fifty percent of the world’s population. And while having a cellphone didn’t always mean having a smartphone, increasingly it did.

A railway porter in India might make eight dollars a day. He can’t buy a computer with that, but in parts of India it is enough to buy a cheap smartphone. The railway provides free Wi-Fi. With a rock-bottom cheap data plan, he’s on the internet. With the right apps, he can track train arrivals to make sure he’s always at the gate at the right moment. Another app lets him call an Uber for the traveler he’s just assisted with her bags.

In Nigeria, nearly everyone has a smartphone, and uses it for banking, and maybe for education and healthcare information. Much of what Jepsen hoped to achieve with OLPC is happening via smartphones.

The developing world is simply bypassing the landlines and desktop computers that the developed world has invested in so heavily and going straight to mobile devices for phone and internet connectivity and other computer uses.

Steve Jobs spoke of these small, internet-connected devices as heralding the “post-PC era.” But smartphones meet John von Neumann’s criteria for a computer and are even more personal than what we have been calling personal computers. So maybe these smartphones are not post-PC devices. Maybe they are the next generation of personal computers — the more-personal computers.

But they were built around a different central capability: not computation but connection.

The Network Is the Computer

Note that the first computer capability these smartphones were given, and the one most touted in advertising, was access to the internet. Technically, some of these phones were “feature phones” rather than smartphones, but given their ability to reach the internet and online services, the distinction was blurring. Internet access opened a new world, and phones, being communication devices, felt like natural on-ramps to this other platform for communication.

Xerox PARC had emphasized communication as a component of their computer systems. Robert Metcalfe and David Boggs had invented a standard for connecting computers over short distances. The technology, called Ethernet, let PARC create powerful local networks.

But local networks were just local. For connections that went beyond the office, there was an established technology: the internet. The word “internet” literally means a network of networks, and in the early days of the internet, academic and government labs were making use of this network of networks. Other folks were pretty much locked out — until consumer information services came on the scene.

One of the first of these was CompuServe. CompuServe allowed anyone who signed up and paid a subscription fee to log in via a phone line and a modem and gain access to a variety of communication services. You could download software, engage in discussions with other users, and send e-mail. Newspapers began offering content on CompuServe, and you could play online games, including multi-player games with other subscribers. By the 1990s, CompuServe had hundreds of thousands of subscribers.

CompuServe was not alone in this market. One large competitor was The Source. But each of these services was a walled garden. If you were a CompuServe subscriber, that didn’t help you connect with subscribers to The Source or access its services.

These competing information services became obsolete when Tim Berners-Lee invented the World Wide Web. Now there was just one service, and it was universal, accessible by everyone, and (almost) free.

The Web changed a lot of things, and one of them was the general understanding of what computers were for. Connection with the Web was now an essential component of a personal computer. The new consensus definition of personal computer demanded this kind of connectivity.

A curious aspect of the Web was its indifference to the details. When you connected, when you visited a website or sent an e-mail, the internet promised to get the data from here to there — somehow. How it happened was not defined. You didn’t know where the network nodes were or what path your message took to traverse the Web. This was all by design: it was inherent in the structure of the internet. And one effect was to free the user from thinking about distance. Everything was just there.

Eventually, people started talking about something called the cloud. That’s where the data and the processing and all of it were: in the cloud. The term meant something specific to the programmers implementing cloud storage, but to the user it was brilliantly vague. Cloudy, as it were. The cloud wasn’t a thing or a place but a way of thinking about computing. It was a natural extension of this sense of placelessness of cyberspace. The Web had erased many clear lines between here and there, and the metaphor of the cloud took it even farther. You didn’t know where the program was that you were interacting with on your phone. You didn’t know where your data really lived. It was somewhere in the cloud, wherever or whatever that was. Or maybe it was on your computer or your phone or all of the above simultaneously or by turns. It didn’t matter.

Years earlier, a company called Sun Microsystems had come up with a slogan: “The network is the computer.” With the arrival of the cloud approach to computing, that was arguably true.

But when the first graphical Web browsers arrived, introducing the world to the World Wide Web, the pundits and columnists missed an important dimension of personalization that the Web allowed. Article after article described this new place called cyberspace, a place you could travel through virtually, with a potentially limitless number of destinations to visit. You could go to the British Museum. You could enjoy virtual tours. You could travel online. All those metaphors were more or less the same, portraying the Web as a world of places to visit.

What the articles didn’t pick up on at first was that it was not only the museums and other places that could have virtual representations online. So could people. So could you. You were being told about all the places you could go online, but it soon became clear that you could put yourself online. You could set up your own website to promote and talk about your business or hobby or neuroses. You could join an online discussion group, where people would interact with — well, with you, but not quite you. They were interacting with the online you. And you could shape that version of you. You could use a false name in discussions; you could post a picture of someone else on those dating sites. Clearly, there was the possibility of abuse here, but unquestionably there was something interesting happening regarding personhood. You could put you out there online, and it was a virtual you, crafted as you chose. This was a new way to disassemble the concept of a personal computer. The personal part, the you part, was now online, and malleable.

Everything Is a Computer

Meanwhile, actual physical computers in the von Neumann sense, with a CPU and memory and all, were appearing in the oddest places. And some of these raised the question of whether the idea of the personal computer had any meaning anymore.

Take the car. The automobile has become a computer with wheels. But the car doesn’t get its input from the user. You don’t type onto a keyboard on your dash. It derives its input from sensors and cameras. It is monitoring its own state, and its immediate environment. It knows what its tire pressures are, and the temperature of its exhaust gases. It can recognize objects in its path and tell how close its rear fender is to the curb. Its output can come in the form of warnings and alerts to you, the driver slash user. Or its output can be in the form of actions: actually taking control of the steering and keeping itself in its lane when you start to drift, for example. We are on the road to self-driving cars, but there are many way-points along this road. Assisted driving is the long on-ramp to totally autonomous cars.

Apart from the potential benefits of these developments — fewer collisions and injuries and the resultant lower costs and reduced insurance rates, greater mobility for children and the elderly, the disabled and the poor, more efficient use of vehicles and roads and parking through sharing, and reduction in traffic congestion — they do seem to be steps on the path to eliminating you as a component in the system. Not what you’d call personal. But another trend in automotive computerization has been increased information gathering and management, and this is user-accessible. Your car’s computer system has a diagnostic port. It was designed so the techs at the garage could tap into your onboard computer and read error and status codes. But it soon became possible for you to plug into that port a cheap device that can read those codes, wirelessly connect with your phone, and trigger an app that looks up those codes on the internet and turns them into plain-language diagnostic information.
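
As a concrete illustration of what such an app does behind the scenes, here is a minimal Python sketch. It assumes a hypothetical ELM327-style adapter plugged into the diagnostic port and exposed as a serial device; the port name is invented, and a real app would add error handling and decode the reply into codes like “P0301” before looking them up.

    # A minimal sketch using the pyserial library and an ELM327-style OBD-II adapter.
    # The serial port name below is hypothetical.
    import serial

    def send(conn, command):
        """Send one AT or OBD-II command and return the adapter's raw reply."""
        conn.write((command + "\r").encode("ascii"))
        return conn.read_until(b">").decode("ascii", errors="ignore")

    with serial.Serial("/dev/ttyUSB0", baudrate=38400, timeout=2) as conn:
        send(conn, "ATZ")         # reset the adapter
        send(conn, "ATE0")        # turn off command echo
        reply = send(conn, "03")  # OBD-II mode 03: request stored trouble codes
        print("Raw trouble-code response:", reply.strip())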

Also, computers in cars do more now than report on the car’s health and, to an increasing degree, drive the car. There’s an operating system in there, probably the QNX operating system, brought to you by those same Blackberry folks who had that popular early smartphone. And the QNX operating system also increasingly supports the kind of information and entertainment services you expect from your phone or laptop computer.

Of course, without a keyboard — even the awkward minimalistic keyboard of a smartphone — it has to get creative about input from you, the user. Increasingly, you can expect it to respond to voice input. Like your phone. And — well, everything.

There’s a computer at work here. But it’s not necessarily entirely in your car. Some of the capability may live on the manufacturer’s servers. Tesla can download updates to a car’s software via the internet, fixing bugs, adding features, altering the behavior of the software. With that happening, is the computer really in your car, or is it in the cloud? The very ambiguity, and the unimportance of answering that question, suggests that the computer is sort of all around you, ambient.

And when you add voice input, the fact that you can talk to the computer makes it feel like it’s sort of just there, in the air. We are entering the era of ambient computing.

What Does It All Mean?

So what does that say about the state of the personal computer? Or of personal computing? Is personal computing too important a concept to be tied to the fate of a particular device of certain dimensions and capabilities?

Certainly, the idea of a box on a desk that represents the capabilities of computers is getting less and less meaningful.

In 1991, Mark Weiser wrote about what he called “ubiquitous computing” in a paper titled “The Computer for the 21st Century.” Computers in the 21st century are not just on desks or in laptops, pockets, or cars. They’re everywhere. They are ubiquitous. Or perhaps, since these computers are all connected via the internet, we should say it is ubiquitous. We are surrounded by an Internet of Things, a phrase Kevin Ashton of Procter and Gamble coined in 1999. It may be significant that he prefers the variation “Internet for Things.” Of the things, for the things, and eventually, one supposes, by the things.

And the Internet of Things is here. For the first time, around 2008 or 2009, there were more things connected to the internet than people. And the things in this Internet of Things talk to each other. That has to make one feel that this is getting a little impersonal. We no longer have computers as giant machines clanking away in temperature-controlled basements, tended by a white-coated priesthood, but instead, we are moving toward a giant network of little devices, all talking to each other, largely ignoring us, and tended by nobody. If we find our old fear of computers creeping back and feel as though we lost some control that we had during the era of personal computers, who’s to say we’re wrong to feel that way?

On the other hand, some of these connected things might be parts of you. Your heart, your body mass, your movements, your blood pressure, your exercise schedule. Those are very personal items, and the fact that you or your doctor can have real-time access to such data is personally empowering.

You could even argue that the ubiquitousness we have been describing is exactly where things get really personal. Voice input is the key technology here. When you can just say what you want done, put your words out in the air, and it happens? That’s sort of magical. It gives you a sense of power. And the sense is accurate: this is power humans have never had. This is a superpower—or magic, if you prefer. We are reducing the distance between your intentions and the computer’s actions.

Making computers respond to the human voice and parse the sound stream into recognizable words in a human language requires some advanced technology. But that’s only half the problem. Computers don’t natively understand human languages, so you then have to translate that human talk into something the computer can understand. That requires artificial intelligence.
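
As a sketch of the first half of that problem, the snippet below uses the third-party Python SpeechRecognition library to capture audio from a microphone and hand it to a cloud recognition service, which returns plain text. Turning that text into an action the computer can carry out is the second, harder half.

    import speech_recognition as sr  # third-party SpeechRecognition package

    recognizer = sr.Recognizer()

    # Capture a short utterance from the default microphone.
    with sr.Microphone() as source:
        print("Say something...")
        audio = recognizer.listen(source)

    try:
        # Hand the audio to a cloud service that turns sound into words.
        text = recognizer.recognize_google(audio)
        print("You said:", text)
    except sr.UnknownValueError:
        print("Sorry, I couldn't make out any words.")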

The front line in this movement to use artificial intelligence to make computer capability more personal is the home. More and more, we are installing devices like Amazon’s Alexa in our homes, devices whose sole purpose is to listen to us and do what we ask.

Alexa and similar devices depend on the Internet of Things. They are our interface to it. They let us control lighting, heating and air conditioning, security systems, and all manner of entertainment and news media. The scope of things we can tell Alexa to do grows daily as the artificial intelligence gets more subtle or gathers more relevant data.

And yet all this artificial intelligence is still in its infancy. We can dimly see the possibilities, beyond those that are real today. We are more and more living in a science fiction present.

Could computing get more personal than this? Well, I suppose we could look at cyborgs. The idea of integrating technology into our bodies to enhance our natural abilities sounds like sheer science fiction, or maybe comic-book fare, and yet it is totally real.

Phew.

OK, that’s the limit, right? The computer is you. That’s surely the limit of personalizing the computer. When you have the computer physically connected to and integrated into your body, it can’t get any more personal. Right?

Well … Maybe it can.

When the Computer Reads Your Mind

Remember Mary Lou Jepsen? Who developed the first holographic video system in the world? Who invented the OLPC computer, and launched a billion-dollar business supplying hundred-dollar computers to children in developing countries? Whose OLPC computer is still the most energy-efficient computer ever built?

At age twenty-nine, Mary Lou Jepsen thought she was going to die. She was in a wheelchair, unable to walk or to think clearly — “I couldn’t subtract,” she has said, capturing the degree of her impairment with a typically nerdy example — and she was sleeping twenty hours a day.

Her doctor ordered an MRI, and it turned up a tumor. The tumor was benign, but it was nonetheless causing havoc, pressing on her pituitary gland. Surgery was successful, her symptoms went away, and she got back to pursuing her groundbreaking research. All fine.

Well, not exactly. As a result of the operation, her body was now incapable of producing hormones, and since that time she has kept herself alive with a rigid schedule of hormone supplements.

This health data is interesting, and it makes her accomplishments all the more impressive, but the point of recounting it here is that the experience led to her latest project, which may turn out to be the most important work she has ever done.

While Jepsen was undergoing the MRI, while she was inside that claustrophobic giant magnet, she entertained herself by analyzing what was going on. And because that’s how her mind works, she thought about how to improve the technology. It’s really kind of crude, she thought. I wonder how we could increase the resolution — and while we’re at it, make it smaller. A lot smaller.

Her experience with holograms gave her an appreciation for what can be accomplished by narrowing your focus. Focus in on exactly what you want to see or to do, and you can reduce the energy requirements and the overall cost to achieve your actual purpose. MRI involves giant magnets; Jepsen was convinced that giant magnets were not necessary. She started thinking about wearable MRI. A stylish knit cap, say, that could do low-power MRI constantly.

The research she did was promising, and she started a company, Openwater, to pursue her idea. Its focus is precise, but its stated mission sounds wildly ambitious: “a new era of fluid and affordable brain-to-computer communications.” Sorry, what? Brain-to-computer communications? Sounds like mind reading, right? Ha ha.

No, seriously. Can MRIs really read people’s minds?

Yes, Jepsen says. MRIs can, in principle, predict what words you are thinking of and what images are in your mind.

She bases her conclusion on existing public research. For example, researchers at the University of California at Berkeley examined functional MRI data collected while subjects listened to stories for hours at a time. By modeling the three-dimensional structure of the data collected, they were able to explore how semantic categories map to this structure. They found that the semantic system is organized into intricate patterns that appear to be consistent across individuals. Consistency is research gold. Consistency in scientific research results is a sign telling the researchers, “Look here.” Consistency often means predictability. And indeed, the Berkeley results imply that patterns observed in MRI data are predictable from the semantic information presented to the individual and vice versa, and that this mapping, in some sense, is a language. Jepsen thinks we will one day be able to read that language.
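
To give a sense of what “predictable” means here, the toy sketch below fits a regularized linear model from made-up semantic features to made-up voxel responses and scores its predictions on held-out data. The data and dimensions are synthetic and invented for illustration; this shows only the general shape of such an encoding-model analysis, not the Berkeley team’s actual pipeline.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: 500 time points, 10 semantic features, 50 "voxels".
    n_samples, n_features, n_voxels = 500, 10, 50
    features = rng.normal(size=(n_samples, n_features))
    true_weights = rng.normal(size=(n_features, n_voxels))
    responses = features @ true_weights + 0.5 * rng.normal(size=(n_samples, n_voxels))

    X_train, X_test, y_train, y_test = train_test_split(
        features, responses, test_size=0.2, random_state=0)

    # Fit one regularized linear model per voxel (Ridge handles multiple outputs).
    model = Ridge(alpha=1.0).fit(X_train, y_train)

    # If the mapping is consistent, held-out responses are predictable.
    print("Held-out R^2:", round(model.score(X_test, y_test), 3))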

Her Openwater work is directed at developing tools for the medical field. One day in the not-too-distant future, you may be wearing a stylish cap that monitors your brain for abnormal activity, processing the raw data with a tiny microprocessor to turn it into a medical diagnosis or a reassuring text message, protecting your health in real time.

Or something like that. Jepsen hasn’t released details of just how the devices would work. But that’s the goal: medical applications.

But imagine if, as Jepsen suggests, that cap or some similar device could also convey your thoughts to a computer. And those thoughts could direct the computer to do things. That would be nothing less than direct mind control of a computer. And then you would have the most personal computer user interface imaginable.

About the Author

Michael Swaine served as editor of PragPub magazine and was editor-in-chief of the legendary Dr. Dobb’s Journal. He is co-author of the seminal computer history book Fire in the Valley and an editor at Pragmatic Bookshelf.

--

The Pragmatic Programmers bring you archives from PragPub, a magazine on web and mobile development (by editor Michael Swaine, of Dr. Dobb’s Journal fame).