Introduction to Computer Science

Punitsonke
8 min read · Sep 19, 2020

--

1. Introduction to Computers

The word “computer” literally means something that computes or calculates, but in modern life computers are used for much more than calculation. A formal definition of a computer reads:

‘A computer is an electronic device that receives input, stores or processes that input as per the instructions, and provides output in the desired format.’

Basic definition of Computer

In other words,

A computer is a machine that can be programmed to accept data (input), process it into useful information (output), and store it away (in a secondary storage device) for safekeeping or later reuse. The processing of input to output is directed by the software but performed by the hardware.

· Computer input is called data.

· Raw facts & figures that can be processed through algorithmic or logical operations to obtain information are called data.

· Output obtained after processing data, based on the user’s instructions, is called information.

· Processes that can be applied to data are of the following two types:

1. Algorithmic operations

2. Logical operations
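For instance (a minimal Python sketch with made-up values, not from the original article): adding two numbers is an algorithmic operation, while comparing the result against a limit is a logical operation.

price = 40
tax = 8
total = price + tax            # 48  (an algorithmic/arithmetic operation)
is_affordable = total <= 50    # True (a logical operation)
print(total, is_affordable)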

The basic parts of a computer are:

1. Input unit: devices that are used to enter data & instructions.

2. Control unit: all devices or parts of the computer interact through the control unit.

3. Arithmetic logic unit: the unit where all arithmetic & logical operations take place.

4. Output unit: devices used to provide information to the user in the defined or desired format.

5. Memory: all input data, instructions & data intermediate to the process are stored in memory.

• Primary memory

• Secondary memory

· The control unit, arithmetic logic unit & memory are together called the central processing unit, i.e. the CPU.

· Computer devices that we can see & touch are the hardware components of a computer.

· The sets of instructions or programs that make the computer function using these hardware parts are called software.

What is Computer Science?

Computer science is the study of computation and information. It deals with the theory of computation, algorithms, computational problems, and the design of computer system hardware, software and applications.

2. Computational thinking

Computers can be used to help us solve problems. However, before a problem can be tackled, the problem itself and the ways in which it could be solved need to be understood. Computational thinking allows us to do this.

Computational thinking allows us to take a complex problem, understand what the problem is and develop possible solutions. We can then present these solutions in a way that a computer, a human, or both, can understand.

Computational thinking is the process of breaking down a complex problem into easy-to-understand parts. Essentially, computational thinking helps you break a problem into bite-sized pieces that a computer could understand and ultimately help solve.

Computational thinking is not programming. Programming tells a computer what to do and how to do it, whereas computational thinking is the process of figuring out what to tell the computer to do. It is, in short, the process of thinking like a computer scientist.

The four cornerstones of computational thinking

There are four key techniques (cornerstones) to computational thinking:


1. Decomposition — breaking down a problem into smaller parts

2. Pattern Recognition — looking for similarities within a problem

3. Abstraction — ignoring unimportant information and only focusing on important information

4. Algorithms — developing the step-by-step rules to follow in order to solve the problem

Each cornerstone is as important as the others. They are like legs on a table — if one leg is missing, the table will probably collapse. Correctly applying all four techniques will help when programming a computer.
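As a hedged illustration (not from the original article), here is how the four cornerstones might map onto a small problem, finding the highest test score in a class, written in Python:

# Problem: report the highest test score in a class.
# Decomposition: read the scores, find the largest one, report it.
# Pattern recognition: "find the largest" is the same comparison repeated for every score.
# Abstraction: only the numeric scores matter, not student names or dates.
# Algorithm: step through the scores, remembering the biggest seen so far.

scores = [67, 82, 54, 91, 78]   # example data, assumed for illustration

highest = scores[0]
for score in scores[1:]:
    if score > highest:          # the repeated comparison (the recognised pattern)
        highest = score

print("Highest score:", highest)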

3. BINARY & ASCII

Binary:

In mathematics and digital electronics, a binary number is a number expressed in the base-2 numeral system, or binary numeral system, which uses only two symbols: typically “0” (zero) and “1” (one). It is used to write data such as the computer processor instructions used every day.

How does binary work?

The 0s and 1s in binary represent OFF and ON, respectively. In a transistor, a “0” represents no flow of electricity, and a “1” represents electricity being allowed to flow. In this way, numbers are represented physically inside the computing device, permitting calculation. This concept is explained further below, in the part on how to read binary numbers.

Why do computers use binary?

Binary is still the primary language for computers for the following reasons:

· It is a simple and elegant design.

· Binary’s 0 and 1 method is quick to detect an electrical signal’s off or on state.

· The positive and negative poles of magnetic media are quickly translated into binary.

· Binary is the most efficient way to control logic circuits.

How to read binary numbers?

Consider the binary number 01101001. Each bit position represents the number two raised to an exponent, with the exponent’s value increasing by one as you move from right to left through each of the eight positions. To get the total, read the number from right to left and add up the place values of the bits that are set: 1 + 8 + 32 + 64 = 105. Bits that are 0 are not counted because they are “turned off.”
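The same calculation can be checked with a short Python snippet (an illustrative sketch, not part of the original article):

# The binary number from the example above
bits = "01101001"

# Sum each '1' bit's place value: 2**0 + 2**3 + 2**5 + 2**6 = 1 + 8 + 32 + 64
total = sum(2 ** i for i, bit in enumerate(reversed(bits)) if bit == "1")
print(total)            # 105

# Python's built-in conversion agrees
print(int(bits, 2))     # 105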

ASCII:

Short for American Standard Code for Information Interchange, ASCII is a standard that assigns letters, numbers, and other characters to the 256 slots available in an 8-bit code. The ASCII decimal (Dec) number is derived from binary, the language of all computers. For example, the lowercase “h” character (Char) has a decimal value of 104, which is “01101000” in binary.
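This mapping can be verified with Python’s built-in ord(), chr(), and format() functions (a small illustrative sketch, not part of the original article):

# Character -> ASCII decimal code -> binary
print(ord("h"))                  # 104
print(format(ord("h"), "08b"))   # 01101000

# And back: decimal code -> character
print(chr(104))                  # h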

ASCII was first developed and published in 1963 by the X3 committee, a part of the ASA (American Standards Association). The ASCII standard was first published as ASA X3.4–1963, with ten revisions of the standard being published between 1967 and 1986.

ASCII sections

The ASCII table is divided into three different sections.

1. Non-printable, system codes between 0 and 31.

2. Lower ASCII, between 32 and 127. This table originates from the older, American systems, which worked on 7-bit character tables.

3. Higher ASCII, between 128 and 255. This portion is programmable; characters are based on the language of your operating system or program you are using. Foreign letters are also placed in this section.

Tables: non-printable & lower ASCII; higher ASCII.

4. Algorithms & their Complexity

Algorithms:

In computer science, programming, and math, an algorithm is a sequence of instructions whose main goal is to solve a specific problem, perform a certain action, or carry out a computation. In a sense, an algorithm is a very clear specification for processing data, doing calculations, and many other tasks.

What is an algorithm in computer science?

As mentioned before, an algorithm (in computer science too) is how you tell your computer not only what to do, but also how to do it. The main goal is to get the job done, and an algorithm is the basic technique used to make sure this happens.

Algorithm complexity:

It is a measure that evaluates the order of the count of operations performed by a given algorithm, as a function of the size of the input data. To put it more simply, complexity is a rough approximation of the number of steps necessary to execute an algorithm.

Algorithm complexity is a rough approximation of the number of steps that will be executed, depending on the size of the input data. Complexity gives the order of the step count, not the exact count.
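As a rough illustration (a Python sketch, not from the original article), both functions below add up numbers, but the count of basic operations grows linearly with the input size n in the first and quadratically in the second. Complexity captures only that order of growth, not the exact counts.

def sum_linear(numbers):
    # One addition per element: about n steps for n numbers -> O(n)
    total = 0
    for x in numbers:
        total += x
    return total

def sum_of_pairs(numbers):
    # Two nested loops: about n * n additions -> O(n^2)
    total = 0
    for x in numbers:
        for y in numbers:
            total += x + y
    return total

data = list(range(1000))
print(sum_linear(data))      # roughly 1,000 basic operations
print(sum_of_pairs(data))    # roughly 1,000,000 basic operations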

5. Pseudocode

Pseudocode is a term often used in programming and algorithm-based fields. It is a way for the programmer to represent the implementation of an algorithm; simply put, it is a plain-language sketch of an algorithm. Algorithms are often represented with the help of pseudocode because it can be interpreted by programmers regardless of their programming background or knowledge. Pseudocode, as the name suggests, is “false” code: a representation of code that can be understood even by a layperson with some school-level programming knowledge.

Algorithm: an organized, logical sequence of actions, or an approach towards a particular problem. A programmer implements an algorithm to solve a problem. Algorithms are expressed in natural language with somewhat technical annotations.

Pseudocode: simply an implementation of an algorithm in the form of annotations and informative text written in plain English. It has no syntax like a programming language, and thus it can’t be compiled or interpreted by a computer.

Example:

If student’s grade is greater than or equal to 60

Print “passed”

else

Print “failed”.
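For comparison, the same logic translated into an actual programming language might look like this (Python is used purely as an illustration, and the grade value is an assumed example):

grade = 72   # assumed example input

if grade >= 60:
    print("passed")
else:
    print("failed")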

6. Data storing

Computer data are streams of pulses, known as bits. A bit is either the absence or the presence of a pulse, which can represent a one or a zero, a high or a low-level signal, a positive or a zero voltage level: two distinct, contrasting conditions.

The streams of data bits are then converted into very tiny magnetic dots to be stored on magnetic media. Streams of pulses representing the binary form of data are grouped together (8 bits make a byte, and so on) to form binary-coded information in the form of very tiny magnetic poles. The magnetic storage capacity is correspondingly measured in numbers of bytes, such as kilobytes, megabytes, terabytes, etc., of magnetic media.
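As a rough numeric illustration (a Python sketch using the conventional binary, powers-of-two sizes; decimal definitions of these units differ slightly):

BITS_PER_BYTE = 8

kilobyte = 1024            # bytes in a kilobyte (2**10)
megabyte = 1024 ** 2       # bytes in a megabyte (2**20)
gigabyte = 1024 ** 3       # bytes in a gigabyte (2**30)
print(kilobyte, megabyte, gigabyte)

# A ~5 MB file corresponds to roughly this many bits:
file_bytes = 5 * megabyte
print(file_bytes * BITS_PER_BYTE)   # 41,943,040 bits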

7. Data types to store data

In computer science and computer programming, a data type or simply type is an attribute of data which tells the compiler or interpreter how the programmer intends to use the data.

Most programming languages support the following basic data types:

1. Integer numbers (of varying sizes),

2. Floating-point numbers (which approximate real numbers),

3. Characters, and

4. Booleans.

A data type constrains the values that an expression, such as a variable or a function, might take. It defines the operations that can be done on the data, the meaning of the data, and the way values of that type can be stored. In other words, a data type provides the set of values from which an expression (a variable, function, etc.) may take its values.
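A minimal Python sketch of these basic types (note that Python represents a character as a one-character string rather than a separate char type):

count = 42                 # integer
temperature = 36.6         # floating-point number
initial = "h"              # character (a one-character string in Python)
is_valid = True            # Boolean

# The type constrains which operations make sense
print(count + 1)           # arithmetic on integers
print(temperature * 2)     # arithmetic on floats
print(initial.upper())     # string/character operation
print(is_valid and False)  # logical operation on Booleans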
