Cluster       OEM/Vendor           Interconnect                   CPU cores / Nodes              Memory    Description
C6-Cluster    SuperMicro/Netweb    1-gigabit Ethernet             4 cores per node / 55 nodes    16 GB     Grid Mathematica jobs
C8-Cluster    HP/Technet           1-gigabit Ethernet             12 cores per node / 49 nodes   48 GB     Sequential / distributed-memory jobs
C9-Cluster    IBM/Wipro            QDR InfiniBand + 1-gigabit     16 cores per node / 49 nodes   128 GB    Shared-memory / MPI jobs
C10-Cluster   IBM/Wipro            1-gigabit Ethernet             20 cores per node / 14 nodes   64 GB     Sequential / distributed-memory jobs
C11-Cluster   Fujitsu/Locuz        QDR InfiniBand + 1-gigabit     24 cores per node / 40 nodes   96 GB     Shared-memory / MPI jobs
C12-Cluster   Fujitsu/Micropoint   QDR InfiniBand + 1-gigabit     24 cores per node / 44 nodes   96 GB     Shared-memory / MPI jobs
C13-Cluster   Fujitsu/Locuz        100-gigabit OPA (Omni-Path)    32 cores per node / 32 nodes   192 GB    Shared-memory / MPI jobs
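
Several of the clusters above (C9, C11, C12 and C13) are intended for shared-memory and MPI jobs over their high-speed fabrics. As an illustration only, a minimal distributed-memory MPI program in C is sketched below; the file name, compiler wrapper and launcher are assumptions, and actual jobs on these clusters would normally be submitted through the local batch scheduler.

/* mpi_hello.c -- a minimal sketch of an MPI job (illustrative only;
 * file name and launch commands are assumptions, not site specifics). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime            */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process             */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of MPI processes    */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime        */
    return 0;
}

Such a program would typically be built and launched along the lines of "mpicc mpi_hello.c -o mpi_hello" followed by "mpirun -np 16 ./mpi_hello", though the exact compiler wrapper, launcher and process count depend on the MPI stack installed on each cluster.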

This facility has been funded through five-year plan grants received in response to proposals from faculty members at HRI, starting with the X-plan (2002-2007), continuing with the XI-plan (2008-2013) and the ongoing XII-plan (2013-2018).

The first cluster was set up in August 2000 using twelve desktop machines as compute nodes. Each node was a Pentium-3 computer (CPU speed: 550 MHz, memory: 256 MB), and the nodes were connected to each other via an Ethernet switch. This cluster was used for learning parallel programming and cluster administration more than anything else. It was retired in April 2002, and the machines were used as desktops for another three years.

The second cluster we set up used sixteen Pentium-4 computers (CPU speed: 1.6 GHz, memory: 1 GB), and we continued to use Ethernet as the interconnect. The peak performance of each node was 2.2 Gflops. This cluster was used very heavily, as each node was more powerful than any other machine available on the HRI network at the time. It was retired in late 2005.

The third cluster was Kabir, a 42-node cluster of dual-processor servers, each with two 2.4 GHz Intel Xeon processors and 2 GB of RAM. And the journey continues...