System Hardware Specifications
The system is built around Intel Xeon Platinum 8268 and Intel Xeon Gold 6248 processors, with a total peak performance of 1.6 PFLOPS. The cluster consists of compute nodes connected by a BullSequana XH2000 HDR 100 InfiniBand interconnect and uses the Lustre parallel file system.
Total number of nodes: 332 (20 service + 312 compute)
Service nodes: 20 (master, login, service, and management nodes)
CPU-only nodes: 150
GPU-ready nodes: 64
GPU nodes: 20
High-memory nodes: 78
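The node counts above can be cross-checked with a few lines of arithmetic (the dictionary keys below are just labels chosen for this sketch, not system names):

```python
# Node-count breakdown from the specification above.
compute_breakdown = {
    "cpu_only": 150,
    "gpu_ready": 64,
    "gpu": 20,
    "high_memory": 78,
}
service_nodes = 20

compute_nodes = sum(compute_breakdown.values())
total_nodes = service_nodes + compute_nodes
print(compute_nodes, total_nodes)  # 312 332
```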
GPU Compute Nodes
GPU compute nodes are nodes that have CPU cores along with accelerator cards. For some applications, GPUs deliver markedly higher performance. To exploit them, one has to use special libraries that offload computations to the graphics processing units (typically via CUDA or OpenCL).
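As an illustrative sketch (not code provided by this system), a minimal CUDA program that offloads a vector addition to one of the node's GPUs might look like this:

```cuda
#include <cstdio>

// Each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory keeps the sketch short; explicit
    // cudaMalloc/cudaMemcpy transfers are the more common pattern.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Such a program is compiled with NVIDIA's `nvcc` compiler (e.g. `nvcc vecadd.cu -o vecadd`) and must be run on one of the GPU compute nodes.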
GPU Compute Nodes: 20
2 × Intel Xeon Gold 6248 per node
Cores = 40 per node, 2.5 GHz; total cores = 800
Memory = 192 GB DDR4 2933 MHz per node; total memory = 3840 GB
SSD = 480 GB (local scratch) per node
2 × NVIDIA V100 per node
GPU cores per node = 2 × 5120 = 10240
GPU memory = 16 GB HBM2 per NVIDIA V100
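The quoted totals follow directly from the per-node figures; a quick sanity check:

```python
# Aggregate figures for the 20 GPU compute nodes.
gpu_nodes = 20
cpu_cores_per_node = 2 * 20           # 2 x Intel Xeon Gold 6248, 20 cores each
memory_per_node_gb = 192
gpus_per_node = 2
cuda_cores_per_gpu = 5120             # NVIDIA V100

total_cpu_cores = gpu_nodes * cpu_cores_per_node         # 800
total_memory_gb = gpu_nodes * memory_per_node_gb         # 3840
gpu_cores_per_node = gpus_per_node * cuda_cores_per_gpu  # 10240
print(total_cpu_cores, total_memory_gb, gpu_cores_per_node)  # 800 3840 10240
```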