
About

The bwForCluster NEMO is a high-performance compute resource with a high-speed interconnect. It is intended for compute activities of researchers from the fields of Neuroscience, Elementary Particle Physics and Microsystems Engineering (NEMO).

Figure: The bwForCluster NEMO for Elementary Particle Physics, Neuroscience and Microsystems Engineering

Figure: bwForCluster NEMO Schematic

The NEMO documentation is available in the central wiki.

Hardware and Architecture

Software and Operating System

Operating System: CentOS Linux 7 (similar to RHEL 7)
Queuing System: MOAB / Torque (see Batch Jobs for help and the example job script below)
(Scientific) Libraries and Software: Environment Modules
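
The queuing system and the module system are typically used together in a batch script. The following is a minimal sketch, assuming Torque-style #PBS directives (which MOAB's msub also understands); the resource values, the module name and the application are placeholders, not NEMO defaults, so check the Batch Jobs and Environment Modules pages in the central wiki for the exact settings:

    #!/bin/bash
    #PBS -N example_job            # job name
    #PBS -l nodes=1:ppn=20         # one node with 20 cores (placeholder values)
    #PBS -l walltime=02:00:00      # requested wall-clock time
    #PBS -l pmem=6gb               # memory per process (placeholder)

    # Load the required software via Environment Modules
    module load devel/python       # hypothetical module name; check 'module avail'

    cd "$PBS_O_WORKDIR"            # directory from which the job was submitted
    python my_script.py            # hypothetical application

Such a script would then be submitted to the queuing system with msub.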

Compute Nodes

For researchers from the scientific fields of Neuroscience, Elementary Particle Physics and Microsystems Engineering, the bwForCluster NEMO offers 748 compute nodes plus several special-purpose nodes for login, interactive jobs, etc.

Special Purpose Nodes

Besides the classical compute nodes, several nodes serve as login and preprocessing nodes, nodes for interactive jobs, and nodes for hosting virtual machines that provide a virtual service environment.

Storage Architecture

The bwForCluster NEMO provides two separate storage systems: one for the users' home directories ($HOME) and one serving as work space. The home directory is limited in capacity and parallel access, but offers snapshots of your files and backups. The work space is a parallel file system based on BeeGFS; it offers fast, parallel file access from many nodes and a larger capacity than the home directory. Additionally, each compute node provides high-speed temporary storage on a node-local solid state disk (SSD), accessible via the $TMPDIR environment variable.
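
A typical workflow combines these storage layers: input data lives in a work space on the parallel file system, while the job itself computes on the node-local SSD under $TMPDIR. The sketch below assumes the workspace command-line tools (ws_allocate, ws_find) commonly provided on bwForCluster systems; the work space name, duration and program are placeholders:

    # Allocate a work space named "mydata" for 30 days (name and duration are examples)
    ws_allocate mydata 30
    WORKSPACE=$(ws_find mydata)            # resolve the work space path

    # Inside a batch job: stage input to the node-local SSD, compute there, copy results back
    cp "$WORKSPACE/input.dat" "$TMPDIR/"
    cd "$TMPDIR"
    ./my_program input.dat > result.out    # hypothetical application
    cp result.out "$WORKSPACE/"

Since $TMPDIR is temporary, node-local storage, results must be copied back to the work space or home directory before the job ends.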

High Performance Network

All compute nodes are interconnected through the high-performance Omni-Path network, which offers very low latency and 100 Gbit/s throughput per node. The parallel work space storage is also attached to all cluster nodes via Omni-Path. For non-blocking communication, 17 islands with 44 nodes and 880 cores each are available. The islands are connected with a blocking factor of 1:11 (i.e. 400 Gbit/s uplink for 44 nodes).
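
The island figures are consistent with the node count above (17 islands x 44 nodes = 748 compute nodes). One way to read the blocking factor, assuming 100 Gbit/s per node link, is:

    44 \times 100\,\text{Gbit/s} = 4400\,\text{Gbit/s injection bandwidth per island}

    \frac{4400\,\text{Gbit/s}}{400\,\text{Gbit/s island uplink}} = 11 \;\Rightarrow\; \text{blocking factor } 1\!:\!11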