High Energy Physics

Complete system:

  • total number of nodes: 152
  • number of GPUs: 400
  • number of CPUs: 304
  • number of CPU cores: 1216
  • total amount of CPU-memory: 7296 GB
  • total amount of GPU-memory: 1824 GB
  • peak performance CPUs: 11.7 TFlops
  • peak performance GPUs single precision: 518 TFlops
  • peak performance GPUs double precision: 145 TFlops
  • 14x 19" racks incl. cold aisle containment
  • 1x 19" storage server rack


104x Tesla-Nodes:

  • Dual quad-core Intel Xeon CPUs
  • 48 GB memory
  • 2x NVIDIA Tesla M2075 GPUs (6 GB ECC)
    • 515 GFlops Peak double precision
    • 1030 GFlops Peak single precision
    • Memory Bandwidth: 150 GB/s
  • total number of GPUs: 208


48x GTX580 Nodes:

  • Dual quad-core Intel Xeon CPUs
  • 48 GB memory
  • 4x NVIDIA GeForce GTX 580 (3 GB)
    • 198 GFlops Peak double precision
    • 1581 GFlops Peak single precision
    • Memory Bandwidth: 192 GB/s
  • total number of GPUs: 192
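
As a rough illustration of how the per-GPU figures above show up in software, a minimal CUDA device query could be compiled with nvcc and run on a Tesla or GTX 580 node. It uses only standard CUDA runtime calls; the file name and output format are only a sketch, not part of the cluster software:

  // devquery.cu -- hypothetical example, not part of the cluster software stack.
  // Lists each GPU with its name, global memory, ECC state and SM count.
  #include <cstdio>
  #include <cuda_runtime.h>

  int main() {
      int count = 0;
      if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
          fprintf(stderr, "no CUDA devices found\n");
          return 1;
      }
      for (int i = 0; i < count; ++i) {
          cudaDeviceProp prop;
          cudaGetDeviceProperties(&prop, i);
          printf("GPU %d: %s, %.1f GB memory, ECC %s, %d SMs\n",
                 i, prop.name,
                 prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                 prop.ECCEnabled ? "on" : "off",
                 prop.multiProcessorCount);
      }
      return 0;
  }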


2x Head Nodes:

  • Dual quad-core Intel Xeon CPUs
  • 48 GB memory
  • High-availability (HA) cluster

7x Storage Servers:

  • Dual quad-core Intel Xeon CPUs
  • 20 TB /home on a 2-server HA cluster
  • 230 TB /work parallel filesystem
    • FhGFS distributed across 5 servers
    • InfiniBand connection to the cluster nodes
    • 3 TB of metadata on SSD


 

Network:

  • High-speed QDR InfiniBand network
  • Modular Gigabit administration network
  • IPMI remote management



Software:

  • CentOS operating system
  • SLURM batch queueing system
  • FraunhoferFS (FhGFS) high-performance parallel file system
  • NVIDIA CUDA parallel programming platform
  • High-availability cluster tools
  • Performance and system monitoring tools
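
As a sketch of how the CUDA platform listed above is typically used on the GPU nodes (compiled with nvcc and submitted through SLURM as a batch job), a minimal single-precision vector addition might look as follows; all file and variable names are hypothetical:

  // vadd.cu -- hypothetical minimal CUDA example; compile with: nvcc vadd.cu
  #include <cstdio>
  #include <cstdlib>
  #include <cuda_runtime.h>

  // Element-wise addition of two float vectors, one thread per element.
  __global__ void vadd(const float* a, const float* b, float* c, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) c[i] = a[i] + b[i];
  }

  int main() {
      const int n = 1 << 20;
      const size_t bytes = n * sizeof(float);

      // Host buffers with simple test data.
      float* ha = (float*)malloc(bytes);
      float* hb = (float*)malloc(bytes);
      float* hc = (float*)malloc(bytes);
      for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

      // Device buffers and host-to-device copies.
      float *da, *db, *dc;
      cudaMalloc(&da, bytes);
      cudaMalloc(&db, bytes);
      cudaMalloc(&dc, bytes);
      cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
      cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

      // Launch with 256 threads per block and copy the result back.
      vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
      cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
      printf("c[0] = %f (expected 3.0)\n", hc[0]);

      cudaFree(da); cudaFree(db); cudaFree(dc);
      free(ha); free(hb); free(hc);
      return 0;
  }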

