Definition and Historical Development of the Computer
A computer is an electronic device that processes the data (information) it receives using logical and arithmetic operations according to pre-loaded programs, produces results from that information, and can store this data in suitable media and retrieve it when needed. These four basic stages of the computer's working principle (input, processing, output and storage) have not changed throughout its historical development.
How Does a Computer Process Data?
Computer chips (e.g. the microprocessor, or CPU) work with electricity, performing very simple operations on its negative and positive states. The charge values and results used in these operations are two-state values: negative and positive, or in logical terms, on and off, or true and false. When these two states are expressed as the digits 0 and 1, one or more of them together form a number in the binary number system. We can therefore say that computers operate in the binary number system. Computers perform operations such as addition and subtraction, the most basic operations of the processor, by working on charges that correspond to the number's binary representation (for example, 110101 = ++-+-+).
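As a minimal, illustrative sketch of this idea (the function names and the +/- notation below are my own, mirroring the example above), the following Python snippet shows a number's binary digits, their charge-style representation, and how addition can be carried out digit by digit with a carry:

# Illustrative sketch: representing numbers in binary and adding them digit by digit.
# The "+"/"-" notation follows the example in the text (1 -> "+", 0 -> "-").

def to_binary(n: int, width: int = 8) -> str:
    """Return the binary representation of a non-negative integer as a string of 0s and 1s."""
    return format(n, f"0{width}b")

def to_charges(bits: str) -> str:
    """Map binary digits to the +/- charge notation used in the text (1 -> '+', 0 -> '-')."""
    return "".join("+" if b == "1" else "-" for b in bits)

def add_binary(a: str, b: str) -> str:
    """Add two equal-length binary strings the way a processor does: bit by bit, with a carry."""
    result, carry = [], 0
    for x, y in zip(reversed(a), reversed(b)):
        total = int(x) + int(y) + carry
        result.append(str(total % 2))
        carry = total // 2
    if carry:
        result.append("1")
    return "".join(reversed(result))

if __name__ == "__main__":
    n = 0b110101                            # 53 in decimal
    print(to_binary(n, 6))                  # 110101
    print(to_charges("110101"))             # ++-+-+
    print(add_binary("110101", "000001"))   # 110110 (53 + 1 = 54)

Running this prints 110101, ++-+-+ and 110110, matching the binary example in the text.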
How Does a Computer Save Data?
The electrical charge values (and thus the 0 and 1 values) produced as described above are written into units called bits, whose physical structure varies with the technology of the storage device (hard disk, DVD, etc.). Bits are the elements of the storage device that maintain their state indefinitely unless changed. Depending on their type, bits can hold an electric charge, or change direction or shape according to the charge applied. The data to be recorded, consisting of 0s and 1s, is written to the bits in sequence (for example, if the bit can hold an electric charge, it is charged for a 1 and left uncharged for a 0). Reading the data is done in the reverse of the recording process.

The recording process can be illustrated with an example. Imagine an electrical switch panel with two buttons, and let each button be one bit. Each button can be in only two states and, like a bit, retains its state after you flip it: a light switch is either off or on. If we take the on state to represent the digit 1 and the off state to represent the digit 0, then to store the value 11 we turn both buttons on; to store 01 we turn the first off and the second on; to store 10 we turn the first on and the second off; and to store 00 we turn both off. With 2 buttons (2 bits) we have thus recorded 4 different numbers: 00 (0 in decimal), 01 (1 in decimal), 10 (2 in decimal) and 11 (3 in decimal). To store a larger value, we need more buttons (bits).
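As a minimal, illustrative sketch (the ButtonRegister class and its method names are my own invention for this example), the following Python snippet models a row of such on/off buttons and shows how two of them are enough to store the values 0 through 3:

# Illustrative sketch: a row of on/off "buttons" used as bits.
# Each button holds its state (True = on = 1, False = off = 0) until changed,
# just like the light-switch example in the text.

class ButtonRegister:
    def __init__(self, num_buttons: int):
        self.buttons = [False] * num_buttons   # all buttons start in the off (0) position

    def store(self, value: int) -> None:
        """Store a non-negative integer by setting each button on (1) or off (0)."""
        if value >= 2 ** len(self.buttons):
            raise ValueError("Value too large: add more buttons (bits).")
        for i in range(len(self.buttons)):
            # The rightmost button holds the least significant bit.
            self.buttons[len(self.buttons) - 1 - i] = bool((value >> i) & 1)

    def read(self) -> int:
        """Read the stored value back by reversing the recording process."""
        value = 0
        for state in self.buttons:
            value = (value << 1) | int(state)
        return value

if __name__ == "__main__":
    reg = ButtonRegister(2)            # two buttons = two bits
    for v in range(4):                 # 00, 01, 10, 11
        reg.store(v)
        print(v, ["on" if b else "off" for b in reg.buttons], "->", reg.read())

Trying to store a value larger than 3 in this two-button register raises an error, which corresponds to the remark above: to record a larger value you need more buttons (bits).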
Historical Development of the Computer
The history of the computer begins with the abacus, which is considered the first computer and is also described as the first simple calculator. Dating back to around 1000 BC, the abacus was first used by the Chinese.
In 1642, the mathematician Blaise Pascal built a calculator called the Pascaline, which could only perform addition. In 1694, the philosopher and mathematician Gottfried Wilhelm Leibniz improved on this design, and in 1820 Charles Xavier Thomas de Colmar built a calculator that performed addition, subtraction, multiplication and division; it became the first calculator used in commerce.
The real history of the computer begins with Charles Babbage, a professor of mathematics. Since his calculations, studies and drawings formed the basis of today’s calculators and computers, he was called the ancestor of the computer.
The Difference Engine, which may be considered the first computer, was designed by Babbage, but it was never completed. In 1854, George Boole developed Boolean algebra, the two-valued (1 and 0) logic that later became the basis for how computers process data. The Mark I, the first electro-mechanical computer, was developed from 1937 onward by Howard Aiken and Browne. Unlike earlier machines that worked with a punched-card system, the Mark I could also compute logarithmic and trigonometric functions. Although it was slow, its ability to operate fully automatically was among its advantages.
Computers are generally examined in five generations:
1st Generation (1945–1956): Vacuum Tube Computers
2nd Generation (1956–1963): Transistor Computers
3rd Generation (1964–1971): Integrated Circuit Computers
4th Generation (1971–present): Microprocessor Computers
5th Generation: Computers with Artificial Intelligence
First Generation Computers (1945–1956)
With the start of the Second World War, governments increased computer research because of computers' potential strategic importance. In 1941, the German engineer Konrad Zuse developed a computer called the Z3 to aid the design of aircraft and rockets. The Allied forces began working towards more powerful computers of their own. In 1944, the British built a computer called Colossus, which was able to crack secret codes and decipher the Germans' messages. Working with IBM, Howard H. Aiken (1900–1973) completed an electro-mechanical calculator in 1944. The machine, known as the Mark I, was built from electromagnetic relays: electromagnetic signals were used to move mechanical parts. The machine was slow, since a single calculation took 3 to 5 seconds, and inflexible, since the sequence of calculations could not be changed once started, but it could handle more complex equations.

Another computer developed because of the war was ENIAC (Electronic Numerical Integrator and Computer), created in a partnership between the American government and the University of Pennsylvania. It had 18,000 vacuum tubes, 70,000 resistors and 5,000,000 solder joints. The machine consumed 160 kilowatts of electrical power, enough to dim the lights in Philadelphia, and was 1,000 times faster than the Mark I.

In 1945, EDVAC (Electronic Discrete Variable Automatic Computer) was designed. In this computer, the program was kept in memory along with the data. This stored-program approach made it possible to stop the computer at a certain point and resume later, and it greatly increased the versatility of computer programming. The disadvantages of first generation computers were that they worked with vacuum tubes and that data was stored on drum-shaped magnetic media (magnetic drums).
Second Generation Computers (1956–1963)
With the invention of the transistor in 1948, the development of computers accelerated significantly. Transistors replaced the large, bulky vacuum tubes in televisions, radios and computers. With the use of transistors, second generation computers emerged as models that were smaller, faster, more reliable and less power-hungry than their predecessors. In second generation computers, machine language was replaced by assembly language, so that long and difficult binary codes gave way to short, abbreviated programming codes. In the early 1960s, second generation computers began to be used in workplaces and universities, and printers, tape units, disk units, memory, operating systems and programs were added to them. The IBM 1401 is an important example of a second generation computer. More advanced high-level languages such as COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator) also began to be used. In these languages, cryptic binary machine code was replaced by words, sentences and mathematical formulas, making it much easier to program a computer. With the emergence of second generation computers, new professions (programmers, analysts, computer systems experts) and the software industry were born.
Third Generation Computers (1964–1971)
Although transistors had advantages over vacuum tubes, they still gave off large amounts of heat, which could damage the computer's sensitive internal parts. Quartz solved this problem. In 1958, Jack Kilby, an engineer at Texas Instruments, developed the integrated circuit (IC), which combined three electronic components on a small silicon disc made from quartz. Scientists later managed to fit ever more components onto a single chip, called a semiconductor. As a result, computers became smaller as more components were squeezed onto smaller chips. In third generation computers, an operating system made it possible to run many programs at once, with the computer's memory shared among these programs.
Fourth Generation Computers (1971–present)
After the integrated circuit, sizes continued to shrink. Hundreds of components could be mounted on a single chip (Large Scale Integration, LSI). By the 1980s, hundreds of thousands of components were compressed onto a chip (Very Large Scale Integration, VLSI), and when the number rose into the millions, Ultra Large Scale Integration (ULSI) came into use. The size and price of computers have decreased and continue to decrease, while their power, efficiency and reliability continue to increase. When Intel produced the 4004 chip in 1971, all the components of the computer (the central processing unit (CPU), memory, and input/output control) were gathered on a single very small chip.

In 1981, IBM introduced the personal computer (PC) for use at home, at work and in schools. The number of PCs, which was 2 million in 1981, reached 5.5 million in 1982; ten years later, 65 million PCs were in use. As computers continued to shrink, laptop computers (small enough to fit into a bag) and palmtop computers (small enough to fit into a shirt pocket) were designed. In 1984, the rivalry between the IBM PC and the Apple Macintosh began. The Macintosh stood out for its user-friendly design: its operating system let users move icons on the screen with a pointer instead of typing written commands.

To make better use of nearby computers, they began to be connected to one another, and computer networks were established. Each computer on a network could share the memory, programs and information of the others. Networks of such interconnected computers (Local Area Networks, LANs) were then connected to other networks, until computers all over the world were linked together, creating the Internet, the network of networks.
Fifth Generation Computers
Defining fifth generation computers is somewhat difficult because they are still in their infancy. One of the most famous fictional examples is HAL 9000 in Arthur C. Clarke's novel 2001: A Space Odyssey. HAL is a computer with enough reasoning ability to converse with its human operators, use visual input, and learn from its own experience. Unfortunately, HAL also suffers a psychological malfunction, takes control of the spaceship and kills many people. The robots in Isaac Asimov's science fiction, such as I, Robot and his Three Laws of Robotics, offer good examples of how thin the line between humans and robots can be. The learning machines in Hollywood's Terminator 2 are another striking example, and The Matrix, in which a war between humans and programs is fought across a real world where programs hold people captive and a virtual reality, is yet another Hollywood production that takes a different approach to the concept of the computer. Although all of this may seem like fantasy, computers that translate from one language to another are already available, and programs have been written to guide doctors step by step through disease diagnosis. News of one artificial intelligence program or another is heard every day.