HPC Newsletter 02/15

Dear colleagues,

Welcome to the 2nd HPC newsletter of 2015. Our cluster “NEMO”, which was initially meant just as a training and exploration ground, has stabilized much faster than anticipated. We already have two groups from the physics field using the cluster as a productive environment.

We warmly invite other users and groups to make use of the NEMO cluster. Even the legacy hardware is still a significant HPC resource, and best of all, it comes with no strings attached. Please let us know if a software package you need is missing, so that we can take care of it.

With best regards,

Your HPC Team, Rechenzentrum, Universität Freiburg
http://www.hpc.uni-freiburg.de 

Table of Contents

NEMO: Cluster ready for production

Virtualization Success Story: AG Schumacher

Xeon Phi acquisition

bwGRiD Lustre file server hardware recycling

Domain Specific Consultation with Microsystems Engineering Community

NEMO and bwUniCluster workshop in Freiburg

Application Benchmarks for Procurement

bwVisu Remote Visualization Project

Upcoming Events

05.03.2015 OpenStack/Docker seminar in Tübingen – possibility to meet the Freiburg HPC team

07.03.2015 Decommissioning of bwGRiD Lustre file server – to be reused as BeeGFS testbed

25.03.2015 Training course for NEMO and bwUniCluster

NEMO: Cluster ready for production

NEMO is the codename for our preliminary bwForCluster ENM. Technically, this is the legacy HPC hardware of the former bwGRiD running in a new infrastructure model.

Originally, we planned NEMO to be a training and exploration ground for our new HPC setup. However, it matured much faster than we anticipated, so we now deem it ready to run production-quality jobs. Even the legacy hardware still constitutes a significant HPC resource. You might also have noticed that the bwUniCluster in Karlsruhe can be quite crowded at times; NEMO may be a viable alternative in such cases.

Working on NEMO right now is an ideal way to ensure that your jobs and applications will be ready to run from the day the new hardware for the bwForCluster ENM is in operation.

If you are a returning bwGRiD user, you already meet all the requirements to make use of NEMO. Just log in via ssh to login.bwfor.uni-freiburg.de. Please note that you will start with an empty home directory.
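
For example, a returning user would connect with the command below; the user name is only a placeholder, so substitute your own bwGRiD account name:

    ssh <your_bwgrid_username>@login.bwfor.uni-freiburg.de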

New users will have to go through a simple registration process.

For further details, please see http://www.hpc.uni-freiburg.de/nemo/access

Virtualization Success Story: AG Schumacher

The software and infrastructure setup of the Black Forest Grid (BFG) cluster is impressive and complex. It took years to mature, and it would therefore be an almost herculean effort to reproduce this environment on our forthcoming bwForCluster ENM. Still, some work groups in our physics community need this environment for their work, and without it the forthcoming bwForCluster ENM would be of no use to them.

We are therefore very glad to report that we have found a solution to this problem, using the virtualization infrastructure of our NEMO cluster. First, we managed to get the BFG environment to boot in a virtual machine in our OpenStack infrastructure. After sorting out various details involving firewalls and access paths to the BFG infrastructure servers, the environment became genuinely usable. Since then we have been running intensive tests. As an intermediate result, we can conclude that the performance loss due to virtualization for this specific environment is typically less than 5%, and the virtual machines of the NEMO cluster run as reliably as the bare-metal hardware of the BFG.

Currently, the partitioning of the cluster has to be adapted manually. In the future, this task will have to be taken over by the scheduler. We are working closely with Adaptive Computing (the company behind Moab/Torque) to make this possible.

Xeon Phi acquisition

We had to return the Xeon Phi compute server that was given to us for evaluation and testing purposes. The initial experience was quite encouraging, but Xeon Phi cards are expensive. However, just before Christmas, we were made aware of a special promotion by Intel. In the end, we got an excellent price for two compute servers with two Xeon Phi cards each.

We will soon make these four Xeon Phi processors available in the NEMO cluster environment for further testing. The compute nodes hosting the Xeon Phi processor cards are currently installed individually, i.e. not using our provisioned software environment.

At the time of this writing, no further investments in the Xeon Phi architecture are planned for the forthcoming bwForCluster ENM procurement procedure, since the next Xeon Phi generation (“Knights Landing”) will not be available within the estimated delivery and setup period. However, along with the initial procurement we are already planning for possible future extensions, and Xeon Phi is an option there. We would therefore like to encourage our communities to evaluate the Xeon Phi processor architecture. If you need help, please let us know. We also have the option to form a Tiger Team of interested people: “A Tiger Team is associated with the corresponding Competence Center (CC) and is briefed by the CC on concrete support and optimization tasks. In contrast to the Competence Center, the composition of a Tiger Team is flexible in time and oriented towards the specific support requirements.”
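
As a first impression of the offload programming model, here is a minimal sketch in C, assuming the Intel compiler with offload support is available on the nodes hosting the cards; the array, its size, and the scaling factor are arbitrary placeholders:

    #include <stdio.h>

    #define N 1024

    int main(void)
    {
        static double v[N];

        for (int i = 0; i < N; i++)
            v[i] = 1.0;

        /* Copy v to the coprocessor, run the loop there, copy v back. */
        #pragma offload target(mic) inout(v)
        {
            for (int i = 0; i < N; i++)
                v[i] *= 2.0;
        }

        printf("v[0] = %f (expected 2.000000)\n", v[0]);
        return 0;
    }

The Intel compiler generates both a host and a coprocessor version of the marked region; OpenMP 4.0 target directives are an alternative offload model that may also be worth evaluating.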

bwGRiD Lustre file server hardware recycling

The official shutdown date for the bwGRiD Lustre file server in Freiburg was December 18th, 2014. However, due to some extra tests we wanted to perform, retrieving files has remained possible until now. This grace period will definitely end on Friday, March 6th 2015. We will then put the hardware out of commission and reuse it to test and evaluate the BeeGFS parallel file system, which is an interesting alternative to the Lustre file system.

You will not be able to retrieve files from the former bwGRiD Lustre file system after Friday, March 6th 2015.

Domain Specific Consultation with Microsystems Engineering Community

On February 23rd, we met with scientists from our Microsystems Engineering community. The meeting was kindly hosted by Dr. Andreas Greiner at IMTEK in Freiburg. We discussed the ongoing preparations for the procurement procedure and the specific needs of the Microsystems Engineering community with respect to software (free and commercial) and the hardware configuration. We verified that our current plan for the cluster configuration largely matches what the colleagues from Microsystems Engineering expect. We also received some additional input, which we will feed back into the procurement documents.

We are planning to have another round of domain-specific consultations with our three communities before the actual acquisition process is launched.

NEMO and bwUniCluster workshop in Freiburg

On Wednesday, March 25th 2015, the HPC Competence Center in Freiburg will offer an introductory course / workshop on using the bwUniCluster in Karlsruhe and on using NEMO, the preliminary bwForCluster ENM in Freiburg.

The exact location and schedule are yet to be determined, but there will be a morning session for HPC newbies and an afternoon session for more experienced users. If you would like a specific topic to be covered in either session, please give us advance notice and we will try to include it.

Further details will follow; please see www.hpc.uni-freiburg.de/training for further announcements.

Application Benchmarks for Procurement

We need application benchmarks from our scientific communities to tailor the hardware configuration of the forthcoming cluster to their needs. We have already received application benchmarks or feedback from the following people and groups:

  • AG Prof. Markus Schumacher (Freiburg): Physics, HEPSpec

  • AG Prof. Günther Quast (Karlsruhe): Physics, HEPSpec

  • AG Prof. Gerhard Stock (Freiburg): Physics, benchmark provided by Florian Still

  • AG Prof. Carsten Mehring (Freiburg): Neuroscience, Matlab benchmark provided by Dr. Tobias Pistohl

  • AG Prof. Stefan Rotter (Freiburg): Neuroscience, NEST benchmark

  • AG Dr. Andreas Greiner (Freiburg): Microsystems Engineering, Lattice Boltzmann method

Our thanks go to the aforementioned people for their feedback and continuing support.

If you think you can turn a typical use case in your work into a relevant application benchmark, please contact us as soon as possible.

bwVisu Remote Visualization Project

bwVisu (http://www.urz.uni-heidelberg.de/forschung/bwvisu.html) is a state-sponsored project to provide established visualization tools and powerful hardware with an interface to HPC clusters. The visualization tools are designed for interactive use with low latencies, notwithstanding the trade-offs imposed by the devices available on the user's side.

One central goal of the project is the development of a remote-visualization technology that scales with the capabilities of the locally used hardware and the available data-communication capacity. There are two classic approaches to remote visualization, each defining one extreme of the possible solution spectrum: remote visualization through image streaming on the one hand, and raw data transmission with local post-processing and rendering on the other.

If you need remote visualization solutions in your scientific workflows, please contact us. We are interested in identifying use cases and providing feedback to the bwVisu project. Our HPC cluster NEMO can be used to provide the necessary HPC compute power.


HPC-Team Rechenzentrum, Universität Freiburg
http://www.hpc.uni-freiburg.de

bwHPC-C5 Project
http://www.bwhpc-c5.de

To subscribe to our mailing list, please send an e-mail to hpc-news-subscribe@hpc.uni-freiburg.de
If you would like to unsubscribe, please send an e-mail to hpc-news-unsubscribe@hpc.uni-freiburg.de

For questions and support, please use our support address enm-support@hpc.uni-freiburg.de
