Computer clustering involves the use of multiple computers, typically
personal computers (PCs) or UNIX workstations, together with storage
devices and redundant interconnections, to form what appears to users as a
single integrated system (Cluster computing). Clustering has been available since
the 1980s when it was used in Digital Equipment Corp.'s VMS system. Today,
most leading hardware and software companies support clustering because of
the demand for parallel processing, batch processing, load balancing, and
high availability.

Parallel processing is the processing of program instructions by
dividing them among multiple processors with the objective of running a
program in less time. Parallel processing is normally applied to
rendering and other computationally intensive applications. Rather than using
expensive specialized supercomputers for parallel processing, implementers
have begun using a large cluster of small commodity servers. Each server
runs its own operating system, takes on a number of jobs, processes them, and
sends the output back to the primary system (Grama, 2003). Clusters provide
the ability to handle one large task in small pieces, or many small tasks
spread across an entire cluster, making the entire system more affordable and
scalable.

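The divide-and-collect pattern described above can be sketched in a few lines. This is only an illustration of the idea using a single machine's worker processes in place of cluster nodes; the function and parameter names are invented for this example, not taken from any specific cluster framework.

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # Each worker handles one small piece of the larger job,
    # much as each server in a cluster processes its assigned jobs.
    return sum(x * x for x in chunk)

def run_job(data, n_workers=4, chunk_size=250):
    # The "primary" splits the large task into small bits...
    chunks = [data[i:i + chunk_size]
              for i in range(0, len(data), chunk_size)]
    with Pool(n_workers) as pool:
        # ...fans them out to the workers...
        partial_results = pool.map(process_chunk, chunks)
    # ...and collects the outputs back at the primary.
    return sum(partial_results)

if __name__ == "__main__":
    # Computes the sum of squares of 0..999, split across workers.
    print(run_job(list(range(1000))))
```

The result is identical to running the computation serially; the point is that each chunk can be processed independently, which is what lets a cluster of commodity servers stand in for one expensive machine.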
The first PC cluster to be described in scientific literature was
named Beowulf and was developed in 1994 at the NASA Goddard Space Flight
Center (Beowulf clusters compared to Base One's batch job servers).
Beowulf initially consisted of sixteen PCs, standard Ethernet, and Linux
with modifications and achieved seventy million floating point operations
per second. For only $40,000 in hardware, Beowulf delivered the processing
power of a small supercomputer costing about $400,000 at the time. By
1996, researchers had achieved one billion floating point operations per
second at a cost of less than $50,000. Later, in 1999, the University of
...