Full Series: The Rise of Personal Computers & Graphics

By Shreyas K., Rising 8th Grader

Shreyas Kambhampati
12 min read · Jun 6, 2020

Introduction:

In the past sections, we talked about how computers work, but now it's time for the rise of personal computers. Personal computers are some of the most-bought electronic devices in the world, and it's pretty easy to see why. At this point in time, you basically need a computer for everything. You need it for work, school, and obviously for reading this… maybe? Anyway, PCs are really important, and so are the graphics on the PC. The graphics are vital to many people who play video games on their PC. In this section, we'll be talking about all of that stuff. Enjoy!

Part One: The Cold War and Consumerism

PC Gaming

The rise of the PC era started against the backdrop of the Cold War, and as you may know, the world was pretty tense. Almost as soon as World War II concluded in 1945, tension was heating up between the world's two new superpowers: the U.S. and the USSR. This was mainly due to the U.S. and most of the world disagreeing with the USSR's communist economic system. The Cold War had begun, and with it came massive government spending on science and engineering.

That massive spending fueled rapid advances that simply weren't possible in the commercial sector alone, where projects were generally expected to recoup development costs through sales.

Computing was unlike the machines of the past, which generally amplified human physical abilities. At that point in time, computing had really just begun to shape the way America functioned. Now let's get in-depth about PCs.

A personal computer (PC) is a multi-purpose computer whose size, capabilities, and price make it feasible for individual use. Personal computers are intended to be operated directly by an end user rather than by a computer expert or technician. Unlike large, costly minicomputers and mainframes, personal computers are not shared by many people at the same time through time-sharing.

The personal computer was made possible by major advances in semiconductor technology. In 1959, the silicon integrated circuit (IC) chip was developed by Robert Noyce at Fairchild Semiconductor, and the metal-oxide-semiconductor (MOS) transistor was developed by Mohamed Atalla and Dawon Kahng at Bell Labs. The MOS integrated circuit was commercialized by RCA in 1964, and then the silicon-gate MOS integrated circuit was developed by Federico Faggin at Fairchild in 1968.

Faggin later used silicon-gate MOS technology to develop the first single-chip microprocessor, the Intel 4004, in 1971. The first microcomputers, based on microprocessors, were developed during the early 1970s. Widespread commercial availability of microprocessors, from the mid-1970s onwards, made computers cheap enough for small businesses and individuals to own.

Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and then with Microsoft Windows. Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry. These include Apple's macOS and free and open-source Unix-like operating systems such as Linux.

PCs have definitely come a long way. For example, if you search up PC on Google, you get bombarded with PC games and ads.

PC gaming is… exactly what it sounds like! Gaming on a PC. PCs have come a long way and can easily run huge games that are nearly impossible to play on dedicated gaming consoles. For example, Microsoft Flight Simulator (2020) needs 150 GB of storage to be playable on PC! That's insane.

Gameplay footage of Microsoft Flight Simulator

Part Two: The Personal Computer Revolution

Dell XPS 13 Laptop

In the previous part, we discussed the history behind the PC revolution. In this part, we're going to be discussing what made the PC so appealing to the general public and why it was such a massive success in America.

The history of the personal computer as a mass-market consumer electronic device effectively began in 1977 with the introduction of microcomputers, although some mainframes and minicomputers had been applied as single-user systems much earlier. A personal computer is one intended for interactive individual use, as opposed to a mainframe computer, where the end user's requests are filtered through operating staff, or a time-sharing system, in which one large processor is shared by many individuals. After the development of the microprocessor, individual personal computers were low enough in cost that they eventually became affordable consumer goods. Early personal computers, generally called microcomputers, were often sold in electronic kit form and in limited numbers, and were of interest mostly to hobbyists and technicians.

Computer terminals were used for time-sharing access to central computers. Before the introduction of the microprocessor in the early 1970s, computers were generally large, costly systems owned by large corporations, universities, government agencies, and similar-sized institutions. End-users generally did not directly interact with the machine but instead would prepare tasks for the computer on off-line equipment, such as card punches. A number of assignments for the computer would be gathered up and processed in batch mode. After the job had completed, users could collect the results. In some cases, it could take hours or days between submitting a job to the computing center and receiving the output.

A more interactive form of computer use developed commercially by the mid-1960s. In a time-sharing system, multiple computer terminals let many people share the use of one mainframe computer processor. This was common in business applications and in science and engineering. A different model of computer use was foreshadowed by the way in which early, pre-commercial, experimental computers were used, where one user had exclusive use of a processor.

In places such as Carnegie Mellon University and MIT, students with access to some of the first computers experimented with applications that would today be typical of a personal computer; for example, computer-aided drafting was foreshadowed by T-square, a program written in 1961, and an ancestor of today’s computer games was found in Spacewar! in 1962.

Some of the first computers that might be called “personal” were early minicomputers such as the LINC and PDP-8, and later on the VAX and larger minicomputers from Digital Equipment Corporation (DEC), Data General, Prime Computer, and others. By today's standards, they were very large (about the size of a refrigerator) and cost-prohibitive (typically tens of thousands of US dollars).

However, they were much smaller, less expensive, and generally simpler to operate than many of the mainframe computers of the time. Therefore, they were accessible to individual laboratories and research projects. Minicomputers largely freed these organizations from the batch processing and bureaucracy of a commercial or university computing center.

In addition, minicomputers were relatively interactive and soon had their own operating systems. The Xerox Alto, developed at Xerox PARC in 1973, was a landmark step in the development of personal computers because of its graphical user interface, bit-mapped high-resolution screen, large internal and external memory storage, mouse, and special software.

In 1983, Apple Computer introduced the first mass-marketed microcomputer with a graphical user interface, the Lisa. The Lisa ran on a Motorola 68000 microprocessor and came equipped with 1 megabyte of RAM, a 12-inch (300 mm) black-and-white monitor, dual 5¼-inch floppy disk drives, and a 5-megabyte ProFile hard drive. The Lisa's slow operating speed and high price (US$10,000), however, led to its commercial failure. Drawing upon its experience with the Lisa, Apple launched the Macintosh in 1984, with an advertisement during the Super Bowl.

The Macintosh was the first successful mass-market mouse-driven computer with a graphical user interface, or 'WIMP' (Windows, Icons, Menus, and Pointers). Based on the Motorola 68000 microprocessor, the Macintosh included many of the Lisa's features at a price of US$2,495. The Macintosh was introduced with 128 KB of RAM, and later that year a 512 KB RAM model became available. To reduce costs compared to the Lisa, the year-younger Macintosh had a simplified motherboard design, no internal hard drive, and a single 3.5" floppy drive. Applications that came with the Macintosh included MacPaint, a bit-mapped graphics program, and MacWrite, which demonstrated WYSIWYG word processing.

While not an immediate success upon its release, the Macintosh was a successful personal computer for years to come. This is particularly due to the introduction of desktop publishing in 1985 through Apple's partnership with Adobe. This partnership introduced the LaserWriter printer and Aldus PageMaker (now Adobe PageMaker) to users of the personal computer. During Steve Jobs' hiatus from Apple, a number of different models of Macintosh, including the Macintosh Plus and Macintosh II, were released to a great degree of success. The entire Macintosh line of computers was IBM's major competition up until the early 1990s.

That was 37 years ago… and PCs have evolved so much more now!

Part Three: Graphical User Interfaces

My MacBook Air (2017) Interface

In today's part, we're talking about G.U.I.s! You know what, we're just calling it GUI from now on. GUI stands for Graphical User Interface.

A graphical user interface is a form of user interface that allows users to interact with electronic devices through graphical icons and audio indicators such as primary notation, instead of text-based user interfaces, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces, which require commands to be typed on a computer keyboard.

The actions in a GUI are usually performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, and smartphones, and in smaller household, office, and industrial controls. The term GUI tends not to be applied to other, lower-resolution types of interfaces, such as video games (where the head-up display is preferred), or to displays that are not flat screens, like volumetric displays, because the term is restricted to two-dimensional display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.

Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human-computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks. The visible graphical interface features of an application are sometimes referred to as chrome or GUI (pronounced gooey).

Typically, users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller architecture allows flexible structures in which the interface is independent from, and indirectly linked to, application functions, so the GUI can be customized easily. This allows users to select or design a different skin at will and eases the designer's work to change the interface as user needs evolve.
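
To make that a bit more concrete, here's a minimal sketch of the model–view–controller idea in Python. The class and method names (CounterModel, PlainView, and so on) are made up for illustration and aren't from any particular GUI toolkit; the point is just that the view (the "skin") can be swapped without touching the underlying data or logic.

```python
# A minimal model-view-controller sketch (hypothetical names, not from any
# specific GUI toolkit). The "view" can be swapped out, like choosing a
# different skin, without changing the model or the controller.

class CounterModel:
    """Holds the application data, with no knowledge of how it is shown."""
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1


class PlainView:
    """One possible 'skin': renders the model as plain text."""
    def render(self, model):
        print(f"Count: {model.count}")


class FancyView:
    """A different 'skin' for the same model and controller."""
    def render(self, model):
        print(f"*** The count is now {model.count} ***")


class CounterController:
    """Turns a user action into a model update, then asks the view to redraw."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def on_button_click(self):
        self.model.increment()
        self.view.render(self.model)


# Swapping PlainView for FancyView changes the interface, not the logic.
controller = CounterController(CounterModel(), FancyView())
controller.on_button_click()   # prints: *** The count is now 1 ***
```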

Good user interface design relates more to the user and less to the system architecture. Large widgets, such as windows, usually provide a frame or container for the main presentation content, such as a web page, email message, or drawing. Smaller ones usually act as user-input tools. A GUI may be designed for the requirements of a vertical market as an application-specific graphical user interface. Examples include automated teller machines (ATMs), point-of-sale (POS) touchscreens at restaurants, self-service checkouts used in retail stores, airline self-ticketing and check-in, information kiosks in public spaces like train stations or museums, and monitors or control screens in embedded industrial applications that employ a real-time operating system (RTOS).
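
For a more hands-on picture of widgets, here's a tiny sketch using Python's built-in tkinter toolkit: the window acts as the large container widget, the label presents content, and the button accepts user input. The text and layout choices are just illustrative.

```python
# A tiny GUI sketch with Python's built-in tkinter toolkit: the window is the
# large "container" widget, while the label and button are smaller widgets
# that present content and accept user input.
import tkinter as tk

root = tk.Tk()                      # the top-level window (container widget)
root.title("Widget demo")

message = tk.Label(root, text="Hello, GUI!")   # presentation widget
message.pack(padx=20, pady=10)

def on_click():
    message.config(text="Button clicked!")     # update the presentation

button = tk.Button(root, text="Click me", command=on_click)  # input widget
button.pack(pady=10)

root.mainloop()                     # hand control to the GUI event loop
```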

Cell phones and handheld game systems also came to employ application-specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation-multimedia center combinations.

From the 2000s to now, GUIs have advanced a whole lot more, to the point where we arguably rely on them in many ways, such as navigating through our devices and much more.

Part Four: 3D Graphics

Welcome to the last part of this section. Today we’re talking about 3D graphics.

3D computer graphics, or three-dimensional computer graphics (in contrast to 2D computer graphics), are graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering 2D images. The resulting images may be stored for viewing later or displayed in real-time.

3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, 2D applications may use 3D techniques to achieve effects such as lighting, and, similarly, 3D may use some 2D rendering techniques. The objects in 3D computer graphics are often referred to as 3D models.

Unlike the rendered image, a model’s data is contained within a graphical data file. A 3D model is a mathematical representation of any three-dimensional object; a model is not technically a graphic until it is displayed. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or it can be used in non-graphical computer simulations and calculations. With 3D printing, models are rendered into an actual 3D physical representation of themselves, with some limitations as to how accurately the physical model can match the virtual model.

William Fetter was credited with coining the term computer graphics in 1961 to describe his work at Boeing. One of the first displays of computer animation was Futureworld (1976), which included an animation of a human face and a hand that had originally appeared in the 1971 experimental short A Computer Animated Hand, created by University of Utah students Edwin Catmull and Fred Parke. 3D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3D computer graphics effects written by Kazumasa Miyazawa and released in June 1978 for the Apple II.

3D computer graphics creation falls into three basic phases:

  1. 3D modeling — the process of forming a computer model of an object’s shape
  2. Layout and animation — the placement and movement of objects within a scene
  3. 3D rendering — the computer calculations that, based on light placement, surface types, and other qualities, generate the image

Modeling:

Modeling describes the process of forming the shape of an object. The two most common sources of 3D models are those that an artist or engineer originates on the computer with some kind of 3D modeling tool, and models scanned into a computer from real-world objects. Models can also be produced procedurally or via physical simulation. Basically, a 3D model is formed from points called vertices that define its shape and form polygons. A polygon is an area formed from at least three vertices (a triangle). A polygon of n points is an n-gon. The overall integrity of the model and its suitability for use in animation depend on the structure of the polygons.
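
As a rough illustration, here's a small Python sketch of how a model's shape data might be stored: a list of vertices plus a list of polygons given as indices into that vertex list. The tetrahedron is a made-up example, not any standard 3D file format.

```python
# A minimal sketch of 3D model data: vertices are (x, y, z) points, and each
# polygon (face) is a tuple of indices into the vertex list. This tetrahedron
# uses only triangles, i.e., 3-gons.
vertices = [
    (0.0, 0.0, 0.0),   # vertex 0
    (1.0, 0.0, 0.0),   # vertex 1
    (0.0, 1.0, 0.0),   # vertex 2
    (0.0, 0.0, 1.0),   # vertex 3
]

faces = [
    (0, 1, 2),
    (0, 1, 3),
    (0, 2, 3),
    (1, 2, 3),
]

for face in faces:
    corners = [vertices[i] for i in face]
    print(f"{len(face)}-gon with corners {corners}")
```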

Layout & Animation:

Before rendering into an image, objects must be laid out in a scene. This defines spatial relationships between objects, including location and size. Animation refers to the temporal description of an object (i.e., how it moves and deforms over time). Popular methods include keyframing, inverse kinematics, and motion capture. These techniques are often used in combination. As with animation, physical simulation also specifies motion.
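
Here's a tiny Python sketch of the keyframing idea mentioned above: the animator specifies an object's position only at a few keyframes, and the in-between frames are filled in by interpolation. The frame numbers and positions are invented for illustration, and real animation tools typically use smoother curves than the linear blend shown here.

```python
# A minimal keyframing sketch: the x position is given at a few keyframes,
# and every frame in between is filled in by linear interpolation.
keyframes = {0: 0.0, 30: 10.0, 60: 4.0}   # frame number -> x position

def x_position(frame):
    """Linearly interpolate the x position for any frame."""
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    # Find the two keyframes that bracket this frame and blend between them.
    for start, end in zip(frames, frames[1:]):
        if start <= frame <= end:
            t = (frame - start) / (end - start)     # 0.0 .. 1.0
            return keyframes[start] * (1 - t) + keyframes[end] * t

for f in (0, 15, 30, 45, 60):
    print(f"frame {f:2d}: x = {x_position(f):.2f}")
```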

Rendering:

Rendering converts a model into an image either by simulating light transport to get photo-realistic images or by applying an art style as in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light gets from one place to another) and scattering (how surfaces interact with light). This step is usually performed using 3D computer graphics software or a 3D graphics API. Altering the scene into a suitable form for rendering also involves 3D projection, which displays a three-dimensional image in two dimensions. Although 3D modeling and CAD software may perform 3D rendering as well (e.g. Autodesk 3ds Max or Blender), exclusive 3D rendering software also exists.
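
To give a feel for the 3D projection step, here's a minimal Python sketch of a pinhole-style perspective projection that maps a 3D point onto a 2D image plane. The focal-length value is an arbitrary illustrative choice, not something taken from a real renderer.

```python
# A minimal 3D projection sketch: points farther from the viewer (larger z)
# are scaled down, which is what creates the sense of perspective on a
# two-dimensional screen.
def project(point, focal_length=2.0):
    """Project a 3D point (x, y, z) onto 2D screen coordinates."""
    x, y, z = point
    scale = focal_length / (focal_length + z)   # farther points shrink
    return (x * scale, y * scale)

# Two points with the same x and y but different depths land in different
# places on the screen.
print(project((1.0, 1.0, 0.0)))   # (1.0, 1.0)  - at the image plane
print(project((1.0, 1.0, 2.0)))   # (0.5, 0.5)  - farther away, so smaller
```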
