Inputs, Processes and Outputs

An introduction to this series

Karl Beecher
Great Moments in Computing History
8 min read · Nov 12, 2015


You can learn a lot about a field by taking a tour of the key moments in its history. All those momentous occasions, like when the field was first born, how the earliest protagonists blazed a trail, or those rare times when a single discovery turned the field on its head, make great stories.

Computer science has got lots of great stories to tell.

In this series, I’m going to tell you the stories of several key moments in the history of computer science. On this journey you’ll see how breakthroughs were made and new theories were born, meet some of the characters responsible, and learn about the genesis of numerous ideas that had a direct impact on all the computers we use today.

But before we begin our tour, it will be helpful to quickly get up to speed with a couple of the basics. We need to make sure we understand what computer science is and what computer scientists are actually trying to do.

The definition of computer science

Computer scientists study the science of computation. Yes, I admit it seems embarrassingly obvious to say that; after all, it’s right there in the name. Nevertheless, I’m not being flippant; it’s a useful thing to say, but it needs some explanation. Ask yourself: what does it mean to compute?

A model of computing

In its most general form, computation is as simple a concept as that shown in the image above. It involves inputting some data, processing it in some way, and generating some output. Simple as that. It’s like a conveyor belt that carries raw materials into a machine, whereupon the machine thrashes around doing its magic and eventually pushes the finished product out the other end. As a model of computation, it’s widely applicable. From the smallest operation to the biggest computer task imaginable, computing always involves taking some input, doing some work with it and returning some output.
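To make that conveyor-belt picture concrete, here is a minimal sketch in Python. The function name and the doubling operation are purely illustrative stand-ins for whatever work the machine actually does.

```python
def compute(value):
    # Input: 'value' is the raw material carried in on the conveyor belt.
    # Process: the machine does its work; here it simply doubles the number.
    result = value * 2
    # Output: the finished product is pushed out the other end.
    return result

print(compute(21))  # input 21 goes in, output 42 comes out
```

Every computation discussed in this article, from the tiniest operation to the biggest task imaginable, has this same three-part shape.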

Computation describes all the things you do when you use your computer, including the simplest things like moving the mouse pointer across the screen. During this action, your hand movement is fed via the mouse into your computer as input. The computer must then process it, before outputting the corresponding movement of the mouse pointer on screen. It looks simple and you do it all the time without giving it a thought. But for such a simple action, the computer actually has to do an awful lot of stuff to animate that pointer.

First, let’s talk about the input. When you move the mouse, the distance it’s moved is fed into the computer. In this case, there’s actually more than one piece of information input. Because the computer records the mouse pointer’s position as a pair of coordinates on the screen, the distance is broken down into its horizontal and vertical components. Modern mice sense movement optically, but back in the days when mice had balls — if you’ll pardon the expression — that ball would turn two internal wheels when the mouse was moved: one wheel measured horizontal movement and the other vertical movement. There were therefore two pieces of input to this computation, or — to give them their posh names — two input parameters: distance moved along the horizontal axis and distance moved along the vertical axis.

Next comes the process. In this case, the mouse alerts the computer to a change in its position and passes the parameters along.

“Hey!” says the mouse. “Someone just moved me five millimetres to the right and two millimetres up.”

“OK,” the computer acknowledges, “I’ll get right on it.”

Computations are almost always riddled with hidden traps which can cause errors.

The computer then takes those physical movements made by the mouse and turns them into on-screen movements via some quick computations. The current position of the mouse pointer on the screen is kept by the computer and continuously updated. Now let’s say that each millimetre of movement corresponds to two pixels distance on screen. In this case, the computer would change the value of the mouse pointer’s screen position, increasing it ten pixels further to the right and four pixels further toward the top.

Sounds simple enough, but there are a few hidden subtleties in any computer process. If, for example, the user moves the mouse left but the mouse pointer is already at the extreme left of the screen, the computer must not move the pointer any further left. Why, in this case, would the computer essentially ignore the user? Because if the computer didn’t make this check, the x-coordinate would keep decreasing past 0 into negative numbers and cause the mouse pointer to disappear off the left-hand side of the screen! Computations are almost always riddled with hidden traps like these which can cause errors. Sometimes they’re little ones which cause weird side effects, sometimes they’re whoppers which crash a whole system.

The movement of a mouse pointer

After the process has finished comes the output. The updated coordinates are passed to the computer screen, which redraws the whole image showing the new position of the mouse pointer (along with any other parts of the screen that may have also changed). To maintain a smooth user experience, the computer repeats this whole computation about fifty or sixty times every second; it is this rapid repetition that makes the motion appear smooth to the user. The example in the image above shows a mouse pointer on a screen 1024 pixels wide and 768 pixels high. It has moved from coordinates 200 by 100 to 800 by 400, leaving the pointer 600 pixels further to the right and 300 pixels higher than its starting point. During all this, your computer is also working on dozens of other computations simultaneously, most of which are much more complicated than processing your mouse movements. It’s fortunate that today’s computers are extremely fast.
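If you’re curious what that process might look like as actual instructions, here is a rough sketch in Python. The screen size, the scale factor and the function name are assumptions made purely for illustration; no real operating system handles its pointer exactly like this, but the shape of the computation (input, process, clamp, output) is the same.

```python
# Illustrative sketch of the mouse-pointer computation described above.
SCREEN_WIDTH, SCREEN_HEIGHT = 1024, 768   # assumed screen size, in pixels
PIXELS_PER_MM = 2                         # assumed scale: 1 mm of mouse travel = 2 pixels

# The current pointer position, kept by the computer and continuously updated.
pointer_x, pointer_y = 200, 100

def move_pointer(dx_mm, dy_mm):
    """Take the two input parameters (horizontal and vertical movement in
    millimetres, with 'up' counted as positive) and output the pointer's
    new on-screen coordinates."""
    global pointer_x, pointer_y
    # Process: turn physical movement into on-screen movement.
    new_x = pointer_x + dx_mm * PIXELS_PER_MM
    new_y = pointer_y + dy_mm * PIXELS_PER_MM
    # The hidden trap: clamp the coordinates so the pointer can never
    # wander off the edge of the screen into negative territory.
    pointer_x = max(0, min(SCREEN_WIDTH - 1, new_x))
    pointer_y = max(0, min(SCREEN_HEIGHT - 1, new_y))
    # Output: the updated position, ready to be drawn on screen.
    return pointer_x, pointer_y

print(move_pointer(5, 2))      # "five millimetres right, two millimetres up" -> (210, 104)
print(move_pointer(-9999, 0))  # a huge move left: the pointer stops at x = 0, it does not vanish
```

Run fifty or sixty times a second, a function like this is what gives the pointer its apparently smooth glide across the screen.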

This input-process-output model describes how computers execute programs. However, it’s just as applicable when people write programs. When coming up with a new program, a computer scientist frames it as a series of instructions that accept input, carry out some processing and return output. This model of computation occurs everywhere in computer science. Every computer scientist is involved in an effort to process information according to this basic formula. They are each thinking hard, trying to come up with a series of steps which start with one state and end with another. Each person may be trying to achieve different things, but they all share the same goal of taking input, processing it and generating output.

The ultimate goal is to make a computer perform a task rather than a human being. The study of how best to achieve this is what computer science is all about.

By using the input-process-output model, a computer scientist is basically working out how to solve a problem. The ultimate goal is to make a computer perform a task rather than a human being. This means developing a solution to a problem and developing a computer program to implement that solution. The study of how best to achieve this is what computer science is all about. This work may involve a lot of mathematics, but computer science diverges from its mathematical parent in many ways. Mathematicians seek to understand fundamental things like quantities, structures and change, the aim being to create new proofs and theories. Computer scientists take established mathematical concepts and try to understand how they can be used to solve problems.

A simple example involves calculating square roots. In case you’ve forgotten, squaring a number means multiplying it by itself. Therefore, “three squared” is nine. Reversing this process is called finding the square root. Therefore, “the square root of nine” (√9) is three. In this example, our input is nine, the process is the square root operation, and the output is three. The image below illustrates this. The computation takes in just one input parameter, calculates the square root of it, and spits the result out the other end.

Input, process and output of taking a square root

A computer scientist’s interest in square roots would lead to the development of a program to compute the square root of any number. Mathematics has already provided a wonderful range of methods for humans to perform this particular calculation. The computer scientist’s job would be to prepare one of them as a program for execution by a computer. This gives the scientist all sorts of new worries. Working out a square root is a laborious process that can potentially take a long time — that’s why this computer scientist chose to automate it, I suppose. The usual method requires the repetition of the same series of steps, iteratively building up the result until finally the full number is found. But just like the mouse example when the possibility of a disappearing mouse pointer cropped up, our computer scientist has to worry about things going wrong when a computer tries to follow instructions.

Computers — and I want you to remember this — are dumb. They are exceedingly literal-minded things that will do exactly as you tell them, even if what you told them to do is stupid. For example, if we humans begin to work out the square root of two, we will notice after a while as we construct the result (1.4142135623…) that the numbers never end. That’s because the result is an irrational number and literally goes on forever. Eventually a human would get bored and stop, but computers never tire. If the computer scientist failed to take this eventuality into account, she would end up developing a program that causes a computer to repeat the same steps endlessly when given 2 as a parameter. It would continue until the power was cut off, its circuits rotted away or the universe ended, whichever came first.

To prevent irrational numbers from playing such havoc, our imaginary computer scientist faces a choice. How should the possibility of a never-ending program be dealt with? Should she just impose a maximum size on results, like ten decimal places, and so force the computer to stop calculating upon reaching this limit? This wouldn’t give a strictly accurate answer, and it leaves open the question of how many decimal places are enough. Or should she instead analyse the parameter first to see if it would yield an irrational answer and deal with it differently? Is that preferable? Is it even possible? She also faces a lot of other choices, such as how to deal with bad input. What should happen if the parameters are negative numbers? What if they’re not numbers at all?
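To give a flavour of how those decisions play out in code, here is one possible sketch in Python. It approximates the square root using Newton’s method, a standard technique though not necessarily the one our imaginary computer scientist would choose, and it resolves the choices above in one particular way: it stops once the answer is precise enough (or after a fixed number of attempts) instead of looping forever, and it rejects negative numbers and non-numbers outright. The function name, tolerance and iteration cap are assumptions made for illustration.

```python
def square_root(n, tolerance=1e-10, max_iterations=100):
    """Approximate the square root of n using Newton's method.

    The tolerance and the iteration cap are this sketch's answer to the
    never-ending-program problem: even for an irrational result like the
    square root of 2, the loop is forced to stop.
    """
    # Deal with bad input: reject non-numbers and negative numbers.
    if not isinstance(n, (int, float)):
        raise TypeError("input must be a number")
    if n < 0:
        raise ValueError("cannot take the square root of a negative number")
    if n == 0:
        return 0.0

    guess = (n + 1) / 2.0  # any positive starting guess will do
    for _ in range(max_iterations):
        better = (guess + n / guess) / 2.0  # Newton's improvement step
        if abs(better - guess) < tolerance:
            return better                   # close enough: stop here
        guess = better
    return guess                            # hit the cap: good enough

print(square_root(9))   # 3.0
print(square_root(2))   # roughly 1.4142135623, cut short rather than endless
```

Whether these are the right choices is exactly the kind of thing a computer scientist has to weigh up: a hard limit of ten decimal places, a different starting guess or a gentler way of handling bad input would all be defensible alternatives.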

Questions like these, particularly whether a program will actually finish or not, are fundamental concerns of computer science. The issues raised here are but a tiny sample of those that computer science deals with. Many of these concerns are now well-developed and understood, so that other fields in computer science can build on them. But there was a time when there was no foundational knowledge; a time before computer science when no-one could even conceive of computers, let alone deal with the issues they raise.

The first story on our tour will take us back to such a place.
