Building high-performance computational systems is one of the most complex scientific and technological tasks. Work on this problem has developed in several directions: raising the efficiency and performance of regular computer systems, creating specialized supercomputers, forming multi-computer (cluster) systems, applying GRID technologies, and developing methods of concurrent programming.
Long ago, a supercomputer was half-jokingly defined as a device that reduces a calculation problem to an input/output problem. In other words, a supercomputer should perform calculations so fast that the calculation time becomes insignificant compared with the time required to input the data and output the results.
A more formal definition is as follows: a supercomputer is a specialized computing machine that drastically exceeds most existing computers in its technical parameters and computation speed. Any particular performance threshold separating supercomputers from ordinary high-performance computers becomes outdated very quickly, which is why the definition above does not rely on absolute performance values.
Supercomputer development is thus a matter of creating ever more powerful machines.
Supercomputers are used in every field where a problem calls for computational modeling, a vast amount of complex calculations, real-time processing of large volumes of data, or where the problem can be solved by simple enumeration of a range of initial parameters.
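The last case, enumerating a range of initial parameters, is "embarrassingly parallel": each parameter combination can be evaluated independently, so the work splits naturally across many processors. A minimal sketch in Python, where the cost function and the parameter grid are illustrative assumptions:

```python
from itertools import product
from multiprocessing import Pool

def simulate(params):
    """Hypothetical model: score one (a, b) parameter pair."""
    a, b = params
    return (a, b), a * a + b * b   # placeholder cost function

def sweep(grid):
    """Evaluate every parameter combination in parallel."""
    with Pool() as pool:           # one worker per CPU core by default
        results = pool.map(simulate, grid)
    # Pick the parameter pair with the lowest cost.
    return min(results, key=lambda r: r[1])

if __name__ == "__main__":
    grid = list(product(range(10), range(10)))
    print(sweep(grid))             # best parameters and their cost
```

Each grid point runs with no communication between workers, which is exactly why such problems scale well on clusters and GRID systems alike.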
The development of computational modeling methods went hand in hand with the improvement of computing machines: the more complex the tasks, the higher the requirements for the machines; the faster the machines, the more complex the problems they could solve. At first, supercomputers were used almost exclusively for defense purposes: calculations for nuclear and thermonuclear weapons and fission reactors. Later, as both the mathematical tools of computational modeling and knowledge in other fields of science developed, supercomputers found use in “peaceful” calculations, giving rise to new scientific disciplines such as numerical weather forecasting, computational biology, medicine, chemistry, fluid dynamics, and linguistics, in which the achievements of computer science merged with those of the applied sciences.
However, it should be noted that these unique solutions with record-breaking characteristics are usually quite expensive, so they cannot be produced in large quantities or used broadly in business. Progress in network technologies did its part, producing cheap but efficient solutions based on communication technologies. This is what predetermined the creation of cluster computing systems, which are in fact one branch of the development of massively parallel processing (MPP) computers.
A cluster is a group of computers connected by high-speed communication channels that, from the user's point of view, represents a single hardware resource. A cluster is a loosely coupled combination of several computational systems that cooperate to execute common applications and are seen by the user as a single system. Gregory Pfister, one of the first architects of cluster technologies, defined a cluster as follows: “A cluster is a type of parallel or distributed system which:
- Consists of several interconnected computers;
- Is used as a single, unified computing resource”.
A computational cluster is a set of computers interconnected by a network to solve a large computational problem. Affordable uniprocessor computers and dual- or quad-processor SMP (Symmetric Multi-Processor) servers are usually used as nodes. Each node runs its own copy of an operating system, usually one of the standard systems: Linux, Windows NT, Solaris, etc. At the two extremes, a cluster can be anything from a couple of PCs connected by a 10 Mbit/s Ethernet LAN to a vast computational system created as part of a large project. Such a project may unite thousands of workstations based on Alpha processors, connected by a high-speed Myrinet network that supports the parallel applications, alongside Gigabit Ethernet and Fast Ethernet networks used for administrative and business purposes.
The structure and computational capacity of the nodes can vary even within a single cluster, making it possible to build vast heterogeneous systems with a prescribed computational capacity. The choice of a specific communication medium is determined by many factors: the class of problems to be solved, financing, the need for future expansion of the cluster, etc. Specialized computers (such as a file server) may be included in the configuration, and, as a rule, remote access to the cluster is provided via the Internet.
Generally, server clusters run on separate computers. This raises performance by distributing the workload across hardware resources and provides failover protection at the hardware level.
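The combination of load distribution and failover can be sketched with a toy round-robin dispatcher; the node names and the failure-marking interface below are illustrative assumptions, not a real cluster API:

```python
import itertools

class ClusterDispatcher:
    """Toy dispatcher: hands requests to nodes in round-robin
    order and skips nodes that have been marked as failed."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._cycle = itertools.cycle(self.nodes)
        self.down = set()              # nodes currently considered failed

    def mark_down(self, node):
        self.down.add(node)

    def dispatch(self, request):
        # Try each node at most once per request; a healthy node
        # takes over the work of any failed one (failover).
        for _ in range(len(self.nodes)):
            node = next(self._cycle)
            if node not in self.down:
                return f"{node} handled {request}"
        raise RuntimeError("no healthy nodes")
```

Real cluster managers add health probes, session state replication, and rebalancing, but the core idea is the same: the user sees one service while requests land on whichever machine is alive.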
While a regular supercomputer contains many processors connected to a local high-speed bus, distributed (or GRID) computing is, in general, a type of parallel computing based on ordinary computers (with standard processors, storage devices, power supplies, etc.) connected to a local or global network by ordinary protocols, for instance Ethernet.
The main advantage of distributed computing is that each individual cell of the computational system can be bought as an ordinary, undedicated computer. In this way, one can obtain nearly the same computational capabilities as with a regular supercomputer, but at a much lower price.
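In the simplest form, a master ships tasks to ordinary machines over standard TCP. A minimal single-machine sketch in Python, where the squaring task and the plain-text wire format are illustrative assumptions (a real node would be a separate computer on the network):

```python
import socket
import threading

def serve_one(srv):
    """Toy 'node': accept one connection, square the integer it receives."""
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024).decode()
        conn.sendall(str(int(data) ** 2).encode())

def submit(port, n):
    """Toy 'master': ship one task to a node over plain TCP."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(str(n).encode())
        return int(c.recv(1024).decode())

def run_demo():
    # Bind and listen before spawning the node thread, so the
    # master cannot connect before the node is ready.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))         # 0 = pick a free ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    t = threading.Thread(target=serve_one, args=(srv,))
    t.start()
    result = submit(port, 7)
    t.join()
    srv.close()
    return result

if __name__ == "__main__":
    print(run_demo())                  # prints 49
```

Everything here is commodity: stock TCP sockets, stock machines. Scaling out means adding more such nodes and a scheduler, which is precisely the economics that makes distributed computing attractive.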
Currently, there are three main types of GRID systems:
- scientific GRID: well-vectorizable applications are programmed in a special way (for instance, using the Globus Toolkit);
- enterprise GRID, based on allocating computational resources on demand: ordinary enterprise applications run on a virtual computer, which in turn consists of several physical computers interconnected with GRID technologies;
- volunteer GRID: a GRID based on voluntarily provided spare resources of personal computers.
The Elige.re project is based on the third type of GRID system. We will discuss volunteer distributed computing in our next article.