What is Computer Science?

Brit Cruise
Jan 20, 2016 · 4 min read

Below is a transcription of the above Art of the Problem video with more links and some images.


Around 100 years ago something really exciting was happening.

An age-old question, “what is the limit of knowledge?”, collided with a modern mathematical one, “can knowledge be mechanized?”. This gave birth to a new field: Computer Science.

The science of computation.

Image by: Mark Phillips & Cameron Murray

The dream of mechanizing human knowledge was inspired by developments over 300 years ago during the early stages of the industrial revolution. Factories were being developed as industrialists began to explore mass production and the mechanization of labour.

http://digitalcollections.nypl.org/

Gradually, machines were designed to replicate human action. But replicating human mental processes remained the stuff of dreams. This began to change with the development of machines that could add, multiply…and eventually make decisions…

At first even the most advanced machines were limited to one specific task. If you needed something else done, you had to build a new machine. However, around 200 years ago a great industrialist thinker, Charles Babbage, was dreaming of a general symbolic machine. One that could answer complex questions by breaking them down into smaller questions of logic and arithmetic, and braiding them together.

Babbage’s Difference Engine: https://en.wikipedia.org/wiki/Difference_engine / Image by: Brit Cruise

His dream was never realized, but his driving question remained: Was it possible to build a general machine that could answer any question?

By the 1900s mathematicians and philosophers were posing this question in different ways. Mathematicians asked: What are mechanical machines capable of? How powerful could they be? Philosophers asked: What are the limitations of mechanical machines? What will machines never be able to do?

Image by: Brit Cruise

In 1936 Alan Turing bridged this divide with a paper which revolutionized our understanding of what machines could do. Turing outlined blueprints for what he called a universal machine. A machine that could answer anything answerable.

from this paper: https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf / Image by: Brit Cruise

Part of his great insight was that the power of the universal machine would always reside in the instructions it followed (later known as software), not the physical design (or hardware). Using a simple language, his machine could run any imaginable instructions.

http://digitalcollections.nypl.org/

The year after Turing’s paper, a young Claude Shannon completed a master’s thesis describing a clever insight he had about telephone relays. He realized he could arrange electrical switches in various ways to perform the fundamental operations of logic, automatically, using electricity.
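Shannon’s insight can be sketched in a few lines of code (a minimal model, not his actual circuits): treat each switch as a boolean, where `True` means the switch is closed and current flows. Switches in series give AND, switches in parallel give OR, and a normally-closed relay contact gives NOT.

```python
# Modelling relay switches as booleans: True = switch closed, current flows.

def series(a, b):
    # Two switches in series: current flows only if BOTH are closed -> AND
    return a and b

def parallel(a, b):
    # Two switches in parallel: current flows if EITHER is closed -> OR
    return a or b

def relay_not(a):
    # A normally-closed relay contact: energizing the coil opens it -> NOT
    return not a

# From these three operations any logic function can be wired up,
# for example exclusive-or (XOR):
def xor(a, b):
    return parallel(series(a, relay_not(b)), series(relay_not(a), b))

print(xor(True, False))  # -> True
print(xor(True, True))   # -> False
```

The function names here are illustrative; the point is that physical switch arrangements and logical operations obey the same algebra, which is what made automatic electrical computation possible.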

From this paper: http://www.cs.virginia.edu/~robins/Shannon_MS_Thesis.pdf

Suddenly it was theoretically possible to build a universal computer, powered by electrical clocks that buzzed away [at near the speed of light] and followed any instructions you provided.

However, key questions remained: what can these machines do, and what can they never do?

In the decades to follow, computing machines grew in speed of operation and memory capacity. Suddenly many questions that were hard for humans became easy (and practical) for computers to answer very quickly.

https://en.wikipedia.org/wiki/Manchester_Small-Scale_Experimental_Machine

But deeper problems emerged…

There seemed to be a growing set of seemingly easy problems (such as, is a given number prime?) that were computable on our machines but took too long when the inputs were large (such as, is 10⁹⁸ + 1 prime?). It could take thousands or even millions of years for the computer to give you an answer. Or never halt at all.
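A quick sketch of why the primality question gets out of hand (a hedged illustration, using naive trial division rather than the faster methods known today): the number of divisions grows roughly with the square root of n, which doubles for every two extra digits.

```python
import math

def is_prime_trial_division(n):
    """Check primality by trying every candidate divisor up to sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime_trial_division(97))  # a small input: answered instantly

# For a 99-digit number like 10**98 + 1, sqrt(n) is about 10**49 candidate
# divisors -- far beyond any machine, even at billions of divisions per second.
```

The same algorithm that is trivial for small inputs becomes practically impossible for large ones; the problem is computable, just not in any useful amount of time.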

Image by: Brit Cruise

These problems were practically impossible to solve. Think of these as hard problems.

And people considered drawing a line in the sand between problems that were easy (practical to solve) and problems that were hard (practically impossible to solve). The attempt to precisely define the division of easy and hard (practical and impractical problems) leads to the most important unsolved question in computer science today.

Image by: Art of the Problem

What makes hard problems, hard?

Is it a result of some underlying mathematical pattern?

OR

Is the perception of hardness merely an illusion?

Will new insights make existing hard problems, easy?

This question is not just an intellectual curiosity; the backbone of the internet depends on a set of problems being “out of reach” of our machines.

This entire series and more can be found on Khan Academy.

But to begin we must step way back and revisit the mythology of ancient oracles….

What is knowledge? And what does it mean to think….or, “to compute?”

Image by: Brit Cruise

Part 2/10 coming soon...

Subscribe to the rest of EPISODE 3 here


(Brit Cruise, Cameron Murray, Mark Phillips & various friends)
