HPC Newsletter 05/16
Welcome to our final edition of the Freiburg HPC newsletter in 2016.
It has been an eventful year, marked by the delivery of the bwForCluster NEMO in early summer. Looking back, we would like to thank our users for their continued feedback and support. In our view, the procurement and commissioning procedure was a "by-the-book" community effort in supercomputer building. It is worth noting that, as of November 2016, NEMO still ranks among the top 300 supercomputers in the world.
We wish you a Merry Christmas and a Happy New Year.
Motto for 2017: Let's rock this baby :-)
Best wishes, your HPC team
Upcoming events and important dates
14.12.2016: NEMO advisory board meeting
The NEMO advisory board ("Cluster-Beirat") will hold its first meeting on Wednesday 14.12.2016. For the initial meeting, the board is composed of the shareholders and those members of the communities who were deeply involved in the NEMO grant application and the procurement phase. In the future, the board members should be selected by the communities.
The first point on the agenda will be to gather feedback from the three scientific communities. Therefore, if you can find the time, please contact your representative on the advisory board. Any feedback is welcome.
The NEMO advisory board is composed of the following members (starred members are shareholders):
- Neuroscience: Stefan Rotter (*), Carsten Mehring (*), Ad Aertsen
- Elementary Particle Physics: Stefan Dittmaier (*), Markus Schumacher, Günter Quast
- Microsystems Engineering: Andreas Greiner, David Kauzlaric
- Shareholder: Carsten Dormann (*)
- Rechenzentrum: Gerhard Schneider, Dirk von Suchodoletz, Bernd Wiebelt, Michael Janczyk
NEMO cluster load
The load on NEMO has been steadily increasing. There are still times with less than 50% load, but we have also had periods of complete utilization. If you or your work group have submitted few jobs or none at all in the past, you will profit from a good fair-share value, meaning that your jobs will have priority. We would also ask all veteran users of the Black Forest Grid who do not belong to the physics community to migrate to NEMO, the bwUniCluster or another corresponding bwForCluster.
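To see how the fair-share mechanism affects your jobs, you can query the batch system directly. The following is a sketch assuming the Moab scheduler used on the bwForClusters; command availability and output format may differ on your cluster:

```shell
# Sketch, assuming the Moab batch system on NEMO (an assumption;
# consult the bwHPC wiki for the tools actually installed).

# Show fair-share targets and the current usage of your group:
mdiag -f

# List idle (eligible) jobs ordered by priority, to see where
# your jobs rank in the queue:
showq -i

# Break down the priority components (fair-share, queue time,
# and so on) of pending jobs:
mdiag -p
```

A low recent usage shows up as a positive fair-share component in the priority breakdown, which is why groups that have submitted little so far see their jobs start sooner.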
NEMO extensions
At the end of this year, NEMO will be extended by two new shareholders. The first extension comes from the bioinformatics de.NBI community, which is currently building the de.NBI cloud. In Freiburg, the de.NBI cloud is a separate part of NEMO, employing the experience in providing virtual research environments that we have built up in various projects (bwHPC-C5, bwCloud and bwViCE). The second extension comes from the LHC/ATLAS community, whose nodes will become part of the LHC-Grid. In principle, LHC-Grid jobs could also run in virtual machines. Apart from an acceptable loss in performance, the overall benefits would make this a worthwhile endeavor. Investigating the feasibility of this approach is therefore a sub-project within bwViCE.
In both cases, the hardware is almost identical to the standard NEMO cluster nodes, thus simplifying operation and maintenance. Additionally, for mutual benefit, both extensions could in principle opportunistically use NEMO cluster resources or be used as opportunistic resources for NEMO cluster jobs.
Xeon Phi Knights Landing test nodes
NEMO offers four Xeon Phi "Knights Landing" compute nodes for evaluating the platform. We encourage you to run experiments, tests and benchmarks on these machines, so that we have enough information later on to decide whether the Xeon Phi platform is worth investing in.
The Xeon Phi architecture is x86-64 compatible, so your code should run as is. We would be interested in how well your non-optimized code performs on this architecture. The first optimization step would then be to recompile the source code with the Intel compiler (available on NEMO). Some software packages (commercial and free) already ship with Xeon Phi optimizations.
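The recompilation step mentioned above could look roughly like this. The module name is an assumption; check `module avail` on NEMO for the exact compiler version installed:

```shell
# Sketch: recompiling for Knights Landing with the Intel compiler.
# "compiler/intel" is an assumed module name, not necessarily the
# one installed on NEMO.
module load compiler/intel

# -xMIC-AVX512 generates AVX-512 code specific to Knights Landing.
# Without it the binary still runs (the chip is x86-64 compatible),
# but unvectorized loops leave most of its throughput unused.
icc -O3 -xMIC-AVX512 -qopenmp -o myapp_knl myapp.c
```

Comparing the runtimes of the plain and the `-xMIC-AVX512` builds on the same test node is already a useful data point for the evaluation.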
To further evaluate the Knights Landing platform, the bwHPC-C5 competence center has established a Tiger-Team together with members of the Microsystems Engineering community. The LAMMPS application code is already partially optimized for Knights Landing, although, as we found out, not yet for the specific model needed. Furthermore, the Tiger-Team is investigating the performance of Lattice-Boltzmann codes on Knights Landing.
We would be glad if other work groups participated in the evaluation of the Xeon Phi platform. Please contact us for further information if you are interested.
In brief
- The proceedings of the ZKI conference held in Freiburg in September last year have been published as a book.
- bwGRiD liquidation: The old hardware has been scrapped. For historic reference, please consult concluded projects.
- In preparation for "bwHPC-2", there will be a questionnaire early next year for bwHPC-users.
- NEMO/bwUniCluster: Reactivation of TurboVNC for remote visualization
- The bwUniCluster operating system has been upgraded to RedHat 7.2 along with updates to the application software modules where needed. Please consult the documentation wiki page to see whether your own code needs recompilation.
- Bernd Wiebelt gave an invited talk ("When HPC meets Cloud") at the Journées SUCCES conference in Paris.
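Regarding the TurboVNC reactivation for remote visualization mentioned above, a typical session looks roughly as follows. The installation path, display number and hostname are placeholders, not NEMO-specific values; please consult the bwHPC wiki for the cluster-specific setup:

```shell
# Sketch of a TurboVNC remote-visualization session.
# Paths, ports and the hostname below are assumptions.

# 1. On the cluster, start a VNC server; it prints a display
#    number such as :1.
/opt/TurboVNC/bin/vncserver -geometry 1600x900

# 2. On your workstation, tunnel the VNC port through SSH
#    (display :1 corresponds to TCP port 5901) and connect
#    with the TurboVNC viewer:
ssh -L 5901:localhost:5901 user@nemo.example
vncviewer localhost:5901
```

Tunneling through SSH keeps the VNC traffic encrypted; the VNC ports themselves are usually not reachable from outside the cluster.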
HPC Team, Rechenzentrum, Universität Freiburg
bwHPC initiative and bwHPC-C5 project
Previous newsletters: http://www.hpc.uni-freiburg.de/news/newsletters
For questions and support, please use our support address email@example.com