High Energy Physics

The Bielefeld GPU Cluster

Some technical details:

Complete system:

  • total number of nodes: 28
  • number of GPUs: 224
  • number of CPUs: 56
  • number of CPU cores: 560
  • total amount of GPU memory: 7.2 TB
  • total amount of CPU memory: 10.8 TB
  • peak GPU performance (single precision): 3.52 PFlops
  • peak GPU performance (double precision): 1.75 PFlops
  • peak CPU performance: 19.25 TFlops
  • 14x 19" racks incl. cold aisle containment

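These totals follow directly from the per-node specifications listed below (28 GPU nodes with 8 Tesla V100 GPUs and two 10-core Xeon CPUs each). A minimal Python sketch of the arithmetic, using only the numbers quoted on this page:

# Aggregate figures derived from the per-node specifications on this page.
NODES         = 28      # GPU nodes
GPUS_PER_NODE = 8       # Tesla V100 per node
CPUS_PER_NODE = 2       # dual-socket Xeon
CORES_PER_CPU = 10
GPU_MEM_GB    = 32      # per V100
NODE_MEM_GB   = 384     # CPU memory per node
SP_TFLOPS     = 15.7    # V100 peak single precision
DP_TFLOPS     = 7.8     # V100 peak double precision

gpus  = NODES * GPUS_PER_NODE      # 224
cpus  = NODES * CPUS_PER_NODE      # 56
cores = cpus * CORES_PER_CPU       # 560
print(f"GPUs: {gpus}, CPUs: {cpus}, CPU cores: {cores}")
print(f"GPU memory: {gpus * GPU_MEM_GB / 1000:.1f} TB")    # ~7.2 TB
print(f"CPU memory: {NODES * NODE_MEM_GB / 1000:.1f} TB")  # ~10.8 TB
print(f"peak single precision: {gpus * SP_TFLOPS / 1000:.2f} PFlops")  # ~3.52
print(f"peak double precision: {gpus * DP_TFLOPS / 1000:.2f} PFlops")  # ~1.75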

28x GPU Nodes:

  • 8x NVIDIA Tesla V100 (32 GB ECC)
    • Peak double-precision performance: 7.8 TFlops
    • Peak single-precision performance: 15.7 TFlops
    • Memory bandwidth: 900 GB/s
    • NVLink interconnect bandwidth: 300 GB/s
  • Dual 10-core Intel Xeon CPUs
  • 384 GB Memory
  • total number of GPUs: 224

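To verify this configuration on a node interactively, one option is to query nvidia-smi, which ships with the NVIDIA driver; a small Python sketch (assuming nvidia-smi is available in the PATH):

# List the GPUs visible on the current node via nvidia-smi
# (sketch; assumes nvidia-smi is in the PATH).
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout

# On a GPU node of this cluster this should print eight lines,
# each reporting a Tesla V100 with roughly 32 GB of memory.
for line in out.strip().splitlines():
    print(line)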

5x Head Nodes:

  • Dual 10-core Intel Xeon CPUs
  • 384 GB Memory

Storage System:

  • 4 storage servers and 4 JBODs
  • 2 PB parallel filesystem
    • BeeGFS distributed across 4 servers
    • InfiniBand connection to the cluster nodes
    • 4x 10 TB metadata on SSD

Backup System:

  • 1 backup server and 4 JBODs
  • 2 PB filesystem

Network:

  • High-speed EDR InfiniBand network
  • Modular Gigabit administration network
  • IPMI remote management



Software:

  • Operating system: CentOS
  • Batch queueing system: SLURM
  • BeeGFS high-performance parallel file system
  • NVIDIA CUDA parallel programming platform
  • High-availability cluster tools
  • Performance and system monitoring tools
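
Compute jobs are submitted through the SLURM batch system listed above. As an illustration only, the following Python sketch submits a single-GPU job via sbatch; the wall-time limit and the executable are placeholders, and the actual queue names and limits should be taken from the cluster documentation:

# Illustration only: submit a single-GPU batch job through SLURM's sbatch.
# The wall-time limit and the executable are placeholders, not the
# cluster's actual configuration.
import subprocess

cmd = [
    "sbatch",
    "--job-name=example",
    "--nodes=1",
    "--gres=gpu:1",               # request one of the node's GPUs
    "--time=01:00:00",            # placeholder wall-time limit
    "--wrap=./my_cuda_program",   # placeholder executable to run
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout.strip())      # e.g. "Submitted batch job <jobid>"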

Support:

For support and further information concerning access to the cluster and high-performance computing in general, please write to gpucluster@physik.uni-bielefeld.de

We are also part of the Competence Network HPC.NRW and can help you gain access to computing resources of Tier-2 and Tier-3 centers across the state of NRW.


