The 2019 European Pangeo Meeting
The first European event bringing together the local Pangeo community was held at CNES headquarters and Inria in Paris on May 23–24. The goal of this event was for Pangeo developers, users and other curious people to meet and share experience, knowledge, needs and ideas about Pangeo. It brought together people from the Met Office, ECMWF, Ifremer, CNES, Geomar and other institutions and private companies. The detailed agenda can be found here; we try to sum up several outcomes of the meeting below.
European users are more dependent on HPC systems
This is especially true for France, and we should probably exclude the UK from this statement. It is probably because the major cloud providers are US based and we don't want them to take our data, but also because we don't have organizations big enough at the national level (like NASA, for instance) to become real partners, and we lack the administrative and financial means.
Further efforts are required to smooth the use of the Pangeo ecosystem on these platforms. A first one is to reach out to admins of big HPC systems to evangelize Pangeo and get them to install and use it. Discussions have been initiated with CINES for OCCIGEN, one of the top-tier French HPC systems. See also #614 for a good opportunity. The results obtained by CNES and Ifremer researchers and engineers should help show how cool and helpful Pangeo is, be it for the analysis of the big simulations run on these platforms, or to manipulate the massive amount of scientific data produced on a daily basis.
Other actions to help the HPC community to adopt Pangeo:
- Improve the user experience of dask-jobqueue: both the library and its documentation. For example, scientists sometimes have difficulties configuring it on their system (see the sketch after this list); we also have some important lingering issues: how to handle workers exceeding walltimes, especially with as_completed; how to tune adaptive clusters correctly…
- Deploy, as proposed by @lesteve, a dask-jobqueue tutorial that uses Binder to launch an HPC-like setup on the cloud?
- Develop Pangeo HPC deployment guidelines: how to best tune it? What about Zarr chunking and worker spilling? How to debug distributed computations?
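On the configuration point, here is a minimal sketch of what a dask-jobqueue setup looks like, assuming a PBS-based cluster; the queue name and resource values are placeholders to adapt to your own system.

```python
from dask.distributed import Client
from dask_jobqueue import PBSCluster

# Placeholder queue and resources: adapt them to your own HPC system.
cluster = PBSCluster(
    queue="qdev",               # hypothetical queue name
    cores=24,                   # cores per batch job
    memory="120GB",             # memory per batch job
    walltime="01:00:00",
    local_directory="$TMPDIR",  # node-local scratch, used e.g. when workers spill
)
cluster.scale(jobs=2)           # submit 2 jobs to the batch scheduler
client = Client(cluster)
```

The same values can also live in a jobqueue YAML configuration file, which is precisely the part users often find hard to get right and where better documentation and deployment guidelines would help.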
Ifremer and CLS are gathering experience on HPC platforms
This experience needs to be shared with the community (again, to give best practices, but also some results and lessons learned). Here are several elements:
Three potentially important aspects to consider for Pangeo on HPC platforms
- Filesystem constraints:
  - What file system (FS) am I on (GPFS, Lustre, NFS)? This is important to best tune your processing.
  - Chunk size (dask.config.get('array.chunk-size') = 128MiB): what is the optimal file size for this filesystem? You probably need to ask admins about this, and maybe do some benchmarking.
  - Use of user-defined FS options to improve performance. For example, Lustre striping should be 1 (lfs setstripe -c 1 zarrfile) for Zarr directories.
- CPU time and communication time loss and optimization:
  - Optimal chunk size (easy to modify),
  - Optimal chunk order when rechunking (less easy to modify, depends on the calculation and potentially on internal Dask mechanisms),
  - Optimal number of threads per Dask process to maximise the usage of each HPC node.
- Memory load (see the sketch after this list):
  - Depends on chunk size,
  - Existence and size of a local temporary storage (enable or disable spill in the Dask config),
  - Will I be able to do everything in memory, or should I manually create intermediate temporary Zarr files?
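To make the chunk-size and spill points concrete, here is a minimal configuration sketch; the values are illustrative assumptions, not recommendations, and should be adapted to the filesystem and node memory at hand.

```python
import dask

# Illustrative values only: tune them to your filesystem and node memory.
dask.config.set({
    "array.chunk-size": "256MiB",                # default chunk size for dask.array
    "temporary-directory": "/tmp/dask-spill",    # node-local scratch, if one exists
    "distributed.worker.memory.spill": 0.85,     # start spilling at 85% worker memory
    # "distributed.worker.memory.spill": False,  # ...or disable spilling entirely
})

print(dask.config.get("array.chunk-size"))       # check the value actually in effect
```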
Zarr vs NetCDF matters on HPC systems too
We've touched on the question of how to best configure netCDF4 (chunking, deflation, record dims, classic vs. full nc4, shuffling, …) for performance. @willirath's takeaway on this: there's a lot you can do wrong, and we do not yet know how to do it right.
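For reference, most of these knobs can be set per variable through xarray's encoding when writing a file; a minimal sketch, with purely illustrative values:

```python
import numpy as np
import xarray as xr

# Purely illustrative values: chunking, deflation and shuffling are the
# knobs discussed above, set per variable via the `encoding` argument.
ds = xr.Dataset({"sst": (("time", "lat", "lon"), np.random.rand(365, 180, 360))})
encoding = {
    "sst": {
        "chunksizes": (30, 180, 360),  # on-disk chunk shape
        "zlib": True,                  # enable deflate compression
        "complevel": 4,                # deflation level (1-9)
        "shuffle": True,               # byte shuffling before compression
    }
}
ds.to_netcdf("sst_nc4.nc", format="NETCDF4", encoding=encoding)
```

Which combination is actually fastest on a given HPC filesystem depends on the access pattern, which is exactly why benchmarking is needed before settling on defaults.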
In my experience, NetCDF is often a burden: scientists tend to use it as a kind of file database, meaning that people computing something along a satellite swath will issue a file open, lookup and read at each step on some NetCDF file with values along a grid. This leads to tons of I/O operations on our file system, which is bad for any distributed file system as far as I know.
One of the principal lessons learned from the experiments performed on the CNES cluster is that what makes Zarr performant on an object store also makes it really performant on an HPC system: an order of magnitude faster than NetCDF. In the simple tests we ran, Zarr is more than ten times faster for reading data! We need to complete this work with a consolidated Zarr vs NetCDF benchmark on HPC.
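As a starting point for such a benchmark, here is a rough sketch of the kind of read timing we have in mind; data.nc and data.zarr are placeholder paths for the same dataset stored in both formats.

```python
import time
import xarray as xr

def time_read(open_fn, path):
    """Time a full read of the dataset at `path` using `open_fn`."""
    start = time.perf_counter()
    ds = open_fn(path)
    ds.load()                      # force the actual read from disk
    return time.perf_counter() - start

# Placeholder paths: the same dataset written once as NetCDF, once as Zarr.
t_nc = time_read(xr.open_dataset, "data.nc")
t_zarr = time_read(xr.open_zarr, "data.zarr")
print(f"NetCDF: {t_nc:.1f} s, Zarr: {t_zarr:.1f} s")
```

A consolidated version would repeat this over several chunkings, variable sizes and filesystems.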
Reach out to the Copernicus Data and Information Access Service (DIAS)
Even if we're still relying a lot on HPC systems, the European Union and the European Space Agency are trying to provide cloud environments to analyze open or private data coming from satellites. Showing the power of Pangeo and the Pangeo stack in these environments could bring a huge boost for both Pangeo and scientists.
We've already started to talk (thanks to ECMWF) with WEkEO, the "institutional" DIAS. They are eager to work with us, so we will need to find some bandwidth to deploy a platform and develop use cases there. CNES is also in close contact with the Onda, Sobloo and Mundi DIAS platforms.
Share and define optimal ways of doing distributed stuff
We've talked a lot about the fact that historically grown ways of doing science are often not ideal for parallel computing. Some examples:
- Explicitly creating spectra via FFT is not necessary when all you’re interested in is an integral diagnostic like kinetic energy broken down to only a few frequency bands.
- The usual explicit interpolations are not necessary when binning is what is eventually sought.
- How to chunk data without splitting the computation/logic part: make data-chunk processing independent, with as little communication as possible (see the sketch after this list).
- Interpolation is still a pain point: there is no library that can interpolate geo-referenced data in Python simply and very quickly; see @fbriol's issue on this point.
- How to efficiently interface with C or Fortran code.
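To illustrate the chunk-independence point, here is a minimal sketch, assuming a diagnostic that only needs data within one chunk (a band-limited energy via FFT, echoing the spectra example above); the array shape and band indices are arbitrary.

```python
import numpy as np
import dask.array as da

def band_energy(block):
    """Energy in a few low-frequency bands of each row: a purely per-chunk diagnostic."""
    if block.shape[-1] == 0:
        # Dask may probe the function with an empty block to infer output metadata.
        return np.zeros(block.shape[:-1] + (1,))
    spectrum = np.fft.rfft(block, axis=-1)
    return (np.abs(spectrum[..., 1:4]) ** 2).sum(axis=-1, keepdims=True)

# Chunk only along the first axis, so every block holds full rows and the FFT
# never needs data from another chunk: blocks are processed independently,
# with no inter-worker communication.
x = da.random.random((1000, 4096), chunks=(100, 4096))
energy = x.map_blocks(band_energy, chunks=(100, 1))
result = energy.compute()
```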
We'll need the real scientific use cases presented at the meeting to work out good ways of doing things. Another question that was asked: can we work on domain libraries? Which ones?
The Met Office's Informatics Lab talk was awesome!
They're just doing extremely cool things with the cloud and Pangeo, and we've got a lot to learn from them! 👏 @jacobtomlinson and @DPeterK! Their operational implementation of the Pangeo ecosystem can be taken as a demonstration of Pangeo's maturity and of the trust we can put in it for the future.
Contribute, contribute, contribute!
Beware of re-inventing the same thing as your neighbor:
- If you have a problem, just share it.
- If you have a solution, just share it!
- We need people to be doing presentations and tutorials. Or even a tutorial on how to do a tutorial, as proposed by @DPeterK! A lot of this could follow the approach of The Carpentries (https://carpentries.org).
So let’s meet on the official Pangeo tracker to discuss all this.