
HPCx Hardware


The HPCx system is located at STFC's Daresbury Laboratory in the UK and is operated by the HPCx Consortium.

The HPCx system uses IBM eServer 575 nodes both for compute and for login and disk I/O. Each eServer node contains 16 processors. At present there are two service nodes. The main HPCx service provides 160 nodes for users' compute jobs, giving a total of 2560 processors. A separate partition of 12 nodes is reserved for certain projects. The peak computational power of the HPCx system is 15.3 Tflops. The complete new platform achieved an Rmax of 12,940 Gflops on the Linpack benchmark. The service can thus provide 12,940 AUs per hour, or 310,560 AUs per day.
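The headline figures above can be reproduced from the per-processor numbers given later on this page. The sketch below assumes that one AU corresponds to one Gflop-hour of Linpack Rmax, which matches the quoted rates but is not stated explicitly here:

```python
# Sketch (not official HPCx tooling): deriving the headline system figures.
NODES = 160                 # compute nodes in the main service
PROCS_PER_NODE = 16         # POWER5 processors per eServer 575 node
PEAK_PER_PROC_GFLOPS = 6.0  # theoretical peak per processor (see below)

procs = NODES * PROCS_PER_NODE
peak_gflops = procs * PEAK_PER_PROC_GFLOPS

RMAX_GFLOPS = 12940         # Linpack Rmax quoted for the full platform
aus_per_hour = RMAX_GFLOPS  # assumption: 1 AU = 1 Gflop-hour of Rmax
aus_per_day = aus_per_hour * 24

print(procs)                # 2560
print(peak_gflops / 1000)   # 15.36 Tflops (quoted as 15.3)
print(aus_per_day)          # 310560
```

Note that the exact peak is 15.36 Tflops; the page rounds this down to 15.3.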

Each eServer frame contains 16 1.5 GHz POWER5 processors. In the POWER5 architecture, a chip contains two processors together with the Level 1 (L1) and Level 2 (L2) caches. Each processor has its own L1 instruction cache of 32 kB and L1 data cache of 64 kB integrated on the chip. Also on the chip is the L2 cache (instructions and data) of 1.9 MB, which is shared between the two processors. Four chips (8 processors) are integrated into a multi-chip module (MCM), and two MCMs (16 processors) make up one frame. Each MCM is configured with 128 MB of L3 cache and 16 GB of main memory, so the total main memory of 32 GB per frame is shared between the frame's 16 processors.
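The per-frame totals follow directly from the chip/MCM packaging described above; a minimal arithmetic sketch:

```python
# Sketch: per-frame totals implied by the POWER5 packaging figures above.
MCMS_PER_FRAME = 2
CHIPS_PER_MCM = 4
PROCS_PER_CHIP = 2

L2_PER_CHIP_MB = 1.9   # shared by the two processors on a chip
L3_PER_MCM_MB = 128
MEM_PER_MCM_GB = 16

procs_per_frame = MCMS_PER_FRAME * CHIPS_PER_MCM * PROCS_PER_CHIP  # 16
l2_per_frame = MCMS_PER_FRAME * CHIPS_PER_MCM * L2_PER_CHIP_MB     # 15.2 MB
l3_per_frame = MCMS_PER_FRAME * L3_PER_MCM_MB                      # 256 MB
mem_per_frame = MCMS_PER_FRAME * MEM_PER_MCM_GB                    # 32 GB
```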

The frames in the HPCx system are connected via IBM's High Performance Switch (HPS). Each logical partition (LPAR) runs its own copy of the AIX operating system, and each frame is one 16-way LPAR. The terms LPAR and frame are therefore synonyms for compute node on HPCx Phase 3.

POWER5 Architecture Overview

The eServer compute nodes use IBM POWER5 processors. The POWER5 is a 64-bit RISC processor implementing the PowerPC instruction set architecture. It has a 1.5 GHz clock rate and an 8-way super-scalar architecture with a 20-cycle pipeline. There are two floating-point multiply-add units, each of which can deliver one result per clock cycle, giving a theoretical peak performance of 6.0 Gflop/s per processor. There is one divide unit and one square root unit, neither of which is pipelined.
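The 6.0 Gflop/s figure follows from counting a fused multiply-add as two floating-point operations:

```python
# Sketch: per-processor peak from the two fused multiply-add (FMA) units.
CLOCK_HZ = 1.5e9   # 1.5 GHz clock
FMA_UNITS = 2      # floating-point multiply-add units
FLOPS_PER_FMA = 2  # one multiply + one add per delivered result

peak_flops = CLOCK_HZ * FMA_UNITS * FLOPS_PER_FMA
print(peak_flops)  # 6.0e9, i.e. 6.0 Gflop/s
```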

The processor has 120 integer and 120 floating-point registers. There is extensive hardware support for branch prediction, and both out-of-order and speculative execution of instructions. There is a hardware prefetch facility: loads to successive cache lines trigger prefetching into the level 1 cache. Up to 8 prefetch streams can be active concurrently.
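Unit-stride access patterns benefit twice from this design: each 128-byte line holds 16 consecutive doubles, and the resulting walk through successive lines is exactly the pattern the hardware prefetcher detects. A small illustrative sketch (ordinary Python, not POWER5-specific code):

```python
# Sketch: distinct 128-byte cache lines touched by a strided loop over
# 8-byte doubles. Unit stride reuses each line; large strides do not.
LINE_BYTES = 128
ELEM_BYTES = 8  # double precision

def distinct_lines(n_accesses, stride_elems=1):
    """Count distinct cache-line indices hit by a[0], a[stride], ..."""
    return len({(i * stride_elems * ELEM_BYTES) // LINE_BYTES
                for i in range(n_accesses)})

# 64 unit-stride accesses touch only 4 lines (16 doubles per line), and
# walk them in the sequential order the prefetcher recognises.
print(distinct_lines(64))       # 4
# 64 accesses at a 16-double stride touch 64 lines: no spatial reuse,
# though the line sequence is still prefetchable.
print(distinct_lines(64, 16))   # 64
```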

The level 1 data cache has 128-byte lines and is 2-way set associative and write-through. The level 1 instruction cache is 4-way set associative.

The level 2 cache is a 1.9 MB combined data and instruction cache with 128-byte lines; it is 10-way set associative and write-back.

The level 3 cache has 256-byte lines, and is 12-way set associative and write-back.
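The size, line length, and associativity figures above determine how the caches are indexed. As a minimal arithmetic check, the number of sets is size / (line size x ways); for the 64 kB L1 data cache:

```python
# Sketch: number of sets in the L1 data cache from the figures above.
L1D_BYTES = 64 * 1024  # 64 kB L1 data cache
LINE_BYTES = 128       # 128-byte lines
WAYS = 2               # 2-way set associative

sets = L1D_BYTES // (LINE_BYTES * WAYS)
print(sets)  # 256
```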

Inter-node communication is provided by IBM's High Performance Switch (HPS). Each eServer frame has two network adapters, with two links per adapter, making a total of four links between each frame and the switch network.

http://www.hpcx.ac.uk/services/hardware/ - contact: www@hpcx.ac.uk - © UoE HPCX Ltd