Wait a minute? In Dallas? Didn’t you say the SC17 conference is in Denver? Yes, that is correct. Just like at last year’s August meeting in 2016, when my team met in Denver even though the SC16 conference was in Salt Lake City, this year we met in Dallas on August 8 and 9. Logically thinking people can now deduce where SC18 will be located 😉

As we get closer to the actual conference in November (only 13 weeks left!), the topics discussed at the meeting are less about big strategic issues and more about the lots and lots of nitty-gritty details that need to be decided to ensure a smoothly running conference. One afternoon of the meeting is the so-called “logistics fair”: service providers (catering, student volunteers, audio/visual, electrical, housing, networking, etc.) are stationed at tables around the room, while the groups responsible for organizing specific technical program or students program events move from table to table, discussing, deciding, and documenting what their events need from each service.

SC17 August Logistics Fair in Dallas — Picture by Bernd Mohr

Looking for the perfect location for an event in the Colorado Convention Center in Denver — Picture by Bernd Mohr

Overall, the preparations for the conference are in good shape. The bulk of the technical program has been selected (workshops, tutorials, technical papers, and panels) and you can browse it in the online program. Submissions for the rest of the technical program (posters, BoFs, doctoral showcase, scientific visualization showcase, exhibitor forum, HPC impact showcase and much more) are currently being peer-reviewed and, once selected (early September), will be included in the online program as well. Early registration numbers look very promising compared to past years.

I am also very proud to be able to announce that we found the perfect speakers and topic for the SC17 keynote, namely Prof. Diamond and Dr. Bolton from the Square Kilometre Array (SKA) project. It is a prime example of our “#HPC connects” conference theme, connecting brilliant minds, diverse systems, and science areas truly all around the globe (and not just the northern hemisphere!). Read the full story here.

On the way back from dinner, I came across this German restaurant (yes, it is a restaurant despite the name, and there is no garden anywhere), but trying it out will have to wait until 2018 😉

German restaurant near Dallas Convention Center — Picture by Bernd Mohr

Besides the SC17 conference logo, tag line, and preview video, which were introduced at last year’s conference, we (that means my communication team 😉 ) are also producing a series of short videos around the “#HPC connects” conference tag line. They will showcase five large science projects which are “connecting people, systems, and science”. Once produced, the videos will be published on the SC YouTube channel and of course will also be shown at the conference in November.

Yesterday, a film team visited Forschungszentrum Jülich to shoot material for one of the five videos, which will feature the Human Brain Project. This is a prime example of “#HPC connects”: the international project brings together scientists from 117 institutions all over Europe. Scientists from neuroscience, computer science, medicine, robotics, mathematics, ethics, and many more fields are creating and operating a European scientific research infrastructure for brain research, cognitive neuroscience, and other brain-inspired sciences; simulating the brain; and building multi-scale scaffold theory and models of the brain. This requires connecting large-scale computers with large storage, data analytics, and visualization systems.

I took the opportunity and followed the team all day long, taking some pictures for you along the way. Besides interviewing key scientists of the project for the video, the film team also took some shots of our supercomputers and of some laboratory assistants in white coats — I guess there is an unwritten rule somewhere which says that this is the way to depict science 😉 However, I was quite impressed by the effort necessary just to get a few seconds of nice video.

The shooting started in our supercomputer machine hall:

To create the right atmosphere and lighting, the machine hall was flooded with blue light — Picture by Bernd Mohr

View down an aisle of the JURECA cluster. The final video will show a tracking shot, i.e. the camera moves through the aisle — Picture by Bernd Mohr

Next was the neuro-science laboratory:

The film crew in a typical(?) laboratory environment — when you see the scene like this, it is hard to imagine that this will look nice or interesting in the video — Picture by Bernd Mohr

BUT this is how it will look in the final video (captured from the producer’s control monitor) — much nicer, isn’t it? — Picture by Bernd Mohr

Shooting the interview with Prof. Katrin Amunts in the hallway – couldn’t they find a better location? — Picture by Bernd Mohr

But again, looking through the camera it does not look like a hallway at all 😉 — Picture by Bernd Mohr

It was very interesting to see the difference between how and where a scene was shot and the video sequence that resulted from it. As you can see from the two examples above, the actual locations were quite ordinary and boring, but with the right lighting and the right choice of viewpoint and framing, the results looked amazing. The film crew certainly knew what they were doing. By the way, the lady in the red dress is Jennifer Boyd, whom we hired as producer and director of the “#HPC connects” video series. People reading this blog regularly will remember that she already produced our amazing SC17 preview video.

The day ended with the production of the “hero shot” of Prof. Dirk Pleiter. The hero shot is used in the video to introduce a person. Watching it was actually quite funny, but I guess it was kind of stressful for the crew: it starts with Dirk standing and looking sideways. While the camera moves towards him, Dirk has to turn to the camera, fold his arms, and look straight into it. At the same time, a scientific animation is shown on the display wall in the background. What made the shot so tricky was synchronizing these four movements: the camera moving, Dirk turning, Dirk folding his arms, and the scientific animation running in the background.

Making of the “hero shot” with Prof. Dirk Pleiter — Picture by Bernd Mohr

I think the film crew got some funny “outtakes” in the process, but in fairness to Dirk we will not show them here. You will have to wait for the publication of the final video to see the result. For my part, I can’t wait to see the final clip!

[Note: This is an article I originally wrote for TOP500 Blog. It is reproduced with permission here.]

While there is always a lot of buzz about the latest HPC hardware architecture developments or exascale programming methods and tools, everyone agrees that in the end the only things that count are the results and the societal impact produced by the technology. Results and impact come from the scientific and industrial applications running on HPC systems. The application space is diverse, ranging from astrophysics (A) to zymology (Z). So the question arises of how to effectively fund the development and optimization of HPC applications to make them suitable for current petascale and future exascale systems.

The answer was provided in the European Union (EU) Horizon 2020 (H2020) e-Infrastructures call, Centres of Excellence for computing applications, which was initiated in September 2014. The work would establish a limited number of Centres of Excellence (CoE) necessary to ensure EU competitiveness in the application of HPC for addressing scientific, industrial or societal challenges. The Centres were conceived to be user-focused, develop a culture of excellence, both scientific and industrial, and place computational science and the harnessing of “big data” at the center of scientific discovery and industrial competitiveness. Centres could be thematic, addressing specific application domains such as medicine, life science or energy; transversal, focused on computational science (e.g., algorithms, analytics, and numerical methods); challenge-driven, addressing societal or industrial challenges (e.g., aging, climate change, and clean transport); or a combination of these approaches.

Eight Centres of Excellence for computing applications were subsequently selected for funding and established before the end of 2015. They cover important areas like renewable energy, materials modeling and design, molecular and atomic modeling, climate change, global system science, and bio-molecular research, as well as tools to improve HPC application performance. Now, nine months later, these Centres are up and running, and it is worth taking a closer look at each of them:

  • CoeGSS – CoE for Global Systems Science will address the emerging scientific domain of Global Systems Science (GSS); understanding global systems and related policies is a vital challenge for modern societies. The field will use high performance computing as a critical tool to help overcome extremely complex societal and scientific obstacles. Due to the nature of the problems addressed in typical GSS applications, the relevant data sets are usually very large, highly heterogeneous in nature, and expected to grow tremendously over time. Bridging HPC with high performance data analysis is thus the key to the success of GSS in the next decade.
  • EoCoE – Energy Oriented CoE is helping the EU transition to a reliable and low-carbon energy supply using HPC. The Centre is focusing on applications in (a) meteorology as a means to predict the variability of solar and wind energy production; (b) materials employed in photovoltaic cells, batteries, and supercapacitors for energy storage; (c) water as a vector for thermal or kinetic energy, focusing on geothermal and hydropower; and (d) fusion for electricity plants as a long-term alternative energy source. These four areas will be anchored within a strong transversal multidisciplinary basis providing expertise in advanced mathematics, linear algebra, algorithms, and HPC tools.
  • E-CAM – Supporting HPC Simulation in Industry and Academia is an e-infrastructure for software, training and consultancy in simulation and modeling. It will identify the needs of its 12 industrial partners and build appropriate consultancy services. E-CAM plans to create over 150 new, robust software modules, directed at industrial and academic users, in the areas of electronic structure calculations, classical molecular dynamics, quantum dynamics, and mesoscale and multi-scale modeling.
  • MaX – Materials design at the eXascale CoE is supporting developers and end users in materials simulations, design, and discovery. It is enabling the best use of HPC technologies by creating an ecosystem of codes, data workflows, analysis, and services in materials science to sustain this effort. At the same time, it will enable the exascale transition in the materials domain by developing advanced programming models, novel algorithms, domain-specific libraries, in-memory data management, software/hardware co-design, and technology-transfer actions.
  • NOMAD – The Novel Materials Discovery Laboratory is developing a materials encyclopedia and big data analytics toolset for materials science and engineering. The Centre will integrate the leading codes and make their results comparable by converting (and compressing) existing inputs and outputs into a common format, thus making this valuable data accessible (as the NOMAD Repository) to academia and industry. It currently contains over three million entries.
  • BioExcel – CoE for Biomolecular Research works to advance and support the HPC software ecosystem in the life sciences domain. Research and expertise cover structural and functional studies of the main building blocks of living organisms (proteins, DNA, membranes, etc.) and techniques for modeling their interactions, ranging from quantum to coarse-grained models, up to the level of a single cell. The Centre will improve the performance, efficiency, and scalability of key codes in biomolecular science, make ICT technologies and workflows easier to use, promote best practices, and train end users.
  • POP — Performance Optimisation and Productivity CoE gathers Europe’s leading experts in performance tools and analysis and in programming models. It is the only transversal CoE. The Centre offers services to the academic and industrial communities to help them better understand the behavior of their applications, suggests the most productive directions for optimizing the performance of the codes, and helps implement those transformations in the most productive way. The consortium includes academic and supercomputing centers with a long track record of world-class research, as well as service companies and associations with leading expertise in high performance support services and promotion.

Teams from the Jülich Supercomputing Centre are involved in four of the CoEs: EoCoE, E-CAM, MaX, and POP (where my team is participating).

What do you do if you have two meetings to attend in the U.S. within two weeks (one in Salt Lake City, the other in Denver) with just a weekend in between, and flying separately to the two meetings across the Atlantic is about four times more expensive than a single trip combining both? You spend a nice extended weekend driving through the Rocky Mountains!

However, someone should have told me in advance that March is the month with the most snow in Colorado 😉

Rest area on Interstate Highway 70 in the Rocky Mountains — Picture by Bernd Mohr

Visiting the Dinosaur Quarry at the Dinosaur National Monument near Vernal, Utah. The bones from over 500 dinosaurs have been found there. — Picture by Bernd Mohr

Black Canyon of the Gunnison National Park near Montrose, Colorado. — Picture by Bernd Mohr

Great Sand Dunes National Park near Alamosa, Colorado. — Picture by Bernd Mohr

Highway 285 in Central Colorado. — Picture by Bernd Mohr

I love working on the Forschungszentrum Jülich campus, which was built in the middle of a forest, especially in the fall. Today is a beautiful day and it is hard to stay inside and work 😉

View of Jülich Lake Casino across the lake – Picture by Bernd Mohr

View from Terrace of the Lake Casino – Picture by Bernd Mohr

Parking Lot behind JSC – Picture by Bernd Mohr

Parking Lot behind JSC – Picture by Bernd Mohr

View from my Office Window – Picture by Bernd Mohr

Wow! My own “official” blog! I never thought I would ever do this, but here we go.

Me in front of our JUGENE supercomputer (2009-2011) – Picture by Ralf-Uwe Limbach

For those who do not know me so well (yet): I am a scientist at the Jülich Supercomputing Centre (JSC), working on supercomputing, High-Performance Computing, and especially performance tools for parallel computing. Besides being a researcher, I am also deputy head of the JSC division “Application support”.

I plan (for now) to blog about my research activities and projects, my visits to workshops, conferences, and colleagues all over the world, and my quest to organize SC17.

I have been working on performance tools for High-Performance Computing (HPC) for almost 30 years now, since a time when the term “HPC” had not even been invented yet. I have been involved in the development of many open-source performance tools, among them TAU, Vampir, and KOJAK, and currently Score-P and Scalasca. Supercomputers, the largest and most powerful computer systems used to solve the world’s toughest problems, are a fascinating research area. HPC computer hardware architectures, system software, and programming models develop so quickly that my work never becomes boring, and many exciting research challenges are still ahead of me. If you are interested in learning more about this and have some time, you can listen to my podcast about HPC – be aware that it is 2.5 hours long and, unfortunately, in German 🙁
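To give a flavor of what such performance tools look like in practice, here is a minimal sketch. With Score-P, one typically prefixes the compile command with the instrumenter (e.g. “scorep mpicc -O2 -o ring ring.c”) and runs the binary as usual; the run then produces measurement data that tools like Scalasca and Cube can analyze. The little MPI program below is a generic, hypothetical example of the kind of code one might instrument this way; it is not taken from any of the projects or tools mentioned above.

```c
/* ring.c: a tiny MPI "ring" program, the kind of toy code one might
 * instrument with Score-P to see communication behavior in a profile.
 * Hypothetical build: scorep mpicc -O2 -o ring ring.c
 * Hypothetical run:   mpirun -np 4 ./ring
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank sends its rank number to the right neighbor and
     * receives from the left neighbor, a simple communication pattern
     * that shows up clearly in performance tools. */
    int token = rank, received = -1;
    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;
    MPI_Sendrecv(&token, 1, MPI_INT, next, 0,
                 &received, 1, MPI_INT, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("Rank %d received token %d from rank %d\n", rank, received, prev);

    MPI_Finalize();
    return 0;
}
```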

At the end of 2014, I was also elected to be the General Chair of SC17, the world’s largest international conference on high performance computing, networking, storage and analysis, attended by over 10,000 people every year. As I am the first non-American to organize this conference in its 28-year history, this created quite some buzz; for example, I made it onto the 2015 “People to Watch” list of the online magazine HPCwire. As you can imagine, it is quite an effort to organize a 10,000-attendee, multimillion-U.S.-dollar conference with the help of about 600 volunteers. Over the next three years, I will write about this effort in a series of blog articles tentatively called “Things you never wanted to learn about SC, but I tell you anyhow!” 😉 If you are interested in this topic, check out the SC15 blog article “10 Questions with SC17 General Chair Bernd Mohr”.

P.S. In case you wonder why the blog is called “Do you know Bernd Mohr?”: One of our lab directors (name known to the author ;-)) once told me that when he visits new places or meets new people and tells them that he is from Jülich, they often ask him: “So you know Bernd Mohr?”.