I worked as a systems engineer at a supercomputer company in the US for a while. The company was small, but it built several supercomputers that made the Top 500 list at the time. Nowadays, almost all supercomputers (with the exception of IBM's) are built from standard, off-the-shelf computers.
For example, a supercomputer might consist of 5000 3 GHz dual-core computers connected by a high-speed (10 Gb/s) interconnect such as InfiniBand or Myrinet. Assume each core's FPU (Floating Point Unit) can execute two floating-point operations per cycle. The peak Tflops (tera floating-point operations per second) is:
5000 computers * (3 GHz * 2 cores) * 2 FLOPs/cycle = 60 Tflops
If the efficiency is 75%, the Max (as measured by Linpack) result is 60 * 0.75 = 45 Tflops.
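The arithmetic above can be written out as a short script. The node count, clock speed, and 75% Linpack efficiency are the example numbers from the text; the two-FLOPs-per-cycle factor is the assumption stated above.

```python
nodes = 5000
clock_hz = 3e9          # 3 GHz
cores_per_node = 2      # dual-core
flops_per_cycle = 2     # assumed: each core's FPU retires 2 FLOPs per cycle

# Theoretical peak, then the Linpack-measured result at 75% efficiency
peak = nodes * clock_hz * cores_per_node * flops_per_cycle
rmax = peak * 0.75

print(f"Peak = {peak / 1e12:.0f} Tflops")  # Peak = 60 Tflops
print(f"Max  = {rmax / 1e12:.0f} Tflops")  # Max  = 45 Tflops
```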
The concept of a supercomputer (also called HPC - high performance computing) is very simple. What you need is:
* standard computers
* network switches/cables
* an operating system (Linux is a good choice)
* cooling systems
* disk storage
* enough electricity
How to make an HPC system more efficient is another issue.
For HPC, reliability is not the top priority; on average, 1 out of every 250 computers will go down each day.
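At that failure rate, node failures are a daily routine rather than an emergency. A quick calculation using the 1-in-250-per-day figure from the text:

```python
nodes = 5000
daily_failure_rate = 1 / 250  # one node in 250 fails per day (from the text)

# Expected number of nodes that go down on any given day
expected_failures = nodes * daily_failure_rate
print(expected_failures)  # 20.0
```

So on a 5000-node machine you should expect roughly 20 dead nodes every day, which is why the job scheduler and management software have to tolerate failures rather than prevent them.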
The most important part is the management software. For example, good management software can install Linux on 5000 computers in 90 minutes.
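One way such a fast rollout is plausible is a fan-out (tree) install: once a node is imaged, it serves the image to other nodes, so the installed count grows exponentially instead of linearly. This is a hypothetical sketch, not the actual software the text refers to, and the 7-minute per-node install time is an assumption chosen for illustration.

```python
import math

nodes = 5000
per_node_install_min = 7  # assumed: minutes to image one node from a peer

# Binary fan-out: each installed node images two new nodes per round,
# so after r rounds roughly 2**r nodes are installed.
rounds = math.ceil(math.log2(nodes + 1))
total_min = rounds * per_node_install_min

print(rounds, total_min)  # 13 rounds, 91 minutes
```

With these assumed numbers, 13 doubling rounds cover all 5000 nodes in about an hour and a half, which is in the same ballpark as the 90-minute figure; a serial install at the same per-node time would take weeks.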