7 HPC user libraries #
Many HPC clusters need to accommodate multiple compute applications, each of which has its own very specific library dependencies. Multiple instances of the same libraries might exist, differing in version, build configuration, compiler, and MPI implementation. To manage these dependencies, you can use an environment module system. Most HPC libraries provided with SUSE Linux Enterprise High Performance Computing are built with support for environment modules. This chapter describes the environment module system Lmod, and a set of HPC compute libraries shipped with SLE HPC.
7.1 Lmod — Lua-based environment modules #
Lmod is an advanced environment module system that allows the installation of multiple versions of a program or shared library, and helps configure the system environment for the use of a specific version. It supports hierarchical library dependencies and makes sure that the correct versions of dependent libraries are selected. Environment module-enabled library packages supplied with the HPC module support parallel installation of different versions and flavors of the same library or binary, and are supplied with appropriate Lmod module files.
7.1.1 Installation and basic usage #
To install Lmod, run zypper in lua-lmod.
Before you can use Lmod, you must source an init file into the initialization file of your interactive shell. The following init files are available for various common shells:
/usr/share/lmod/lmod/init/bash
/usr/share/lmod/lmod/init/ksh
/usr/share/lmod/lmod/init/tcsh
/usr/share/lmod/lmod/init/zsh
/usr/share/lmod/lmod/init/sh
Pick the appropriate file for your shell, then add the following line to your shell's init file:
source /usr/share/lmod/lmod/init/INIT-FILE
The init script adds the module command.
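For example, for Bash, sourcing the init file can be made persistent with the following command (a minimal sketch, assuming the default installation path shown above):
> echo 'source /usr/share/lmod/lmod/init/bash' >> ~/.bashrc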
7.1.2 Listing available modules #
To list all the available modules, run module spider. To show all modules which can be loaded with the currently loaded modules, run module avail. A module name consists of a name and a version string, separated by a / character. If more than one version is available for a certain module name, the default version is marked by a * character. If there is no default, the module with the highest version number is loaded. To reference a specific module version, you can use the full string NAME/VERSION.
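For example, a short session (a sketch, assuming the gnu module from Section 7.2 is installed; module names on your system may differ):
> module spider        # list all modules, loadable or not
> module avail         # list modules loadable right now
> module load gnu/7    # load a specific version by NAME/VERSION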
7.1.3 Listing loaded modules #
module list shows all currently loaded modules. Refer to module help for short help on the module command, and to module help MODULE-NAME for help on a particular module. The module command is only available when you log in after installing lua-lmod.
7.1.4 Gathering information about a module #
To get information about a particular module, run module whatis MODULE-NAME. To load a module, run module load MODULE-NAME. This modifies your environment: the directories provided by the module are prepended to PATH, LD_LIBRARY_PATH, and other environment variables, so that the binaries and libraries supplied by the module are found. To run a program compiled against such a library, the appropriate module load commands must be issued beforehand.
7.1.5 Loading modules #
The module load MODULE command must be run in the shell from which the module is to be used.
Some modules require a compiler toolchain or MPI flavor module to be loaded
before they are available for loading.
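For example, for an MPI-dependent library such as PETSc, the loading order is as follows (a sketch, using module names from Section 7.3.13 and Section 7.5):
> module load gnu          # compiler toolchain first
> module load openmpi/4    # then the MPI flavor
> module load petsc        # now the MPI-dependent library can be loaded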
7.1.6 Environment variables #
If the respective development packages are installed, build-time environment variables such as LIBRARY_PATH, CPATH, C_INCLUDE_PATH, and CPLUS_INCLUDE_PATH are set up to include the directories containing the appropriate header and library files. However, some compiler and linker commands might not honor these. In this case, use the -I and -L options together with the environment variables PACKAGE_NAME_INC and PACKAGE_NAME_LIB to add the include and library paths to the command lines of the compiler and linker.
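For example, a sketch for FFTW (the variable names FFTW3_INC and FFTW3_LIB are illustrative; run module show fftw3 to see which variables your module actually sets):
> module load gnu fftw3
> gcc -I$FFTW3_INC -L$FFTW3_LIB -o myfft myfft.c -lfftw3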
7.1.7 For more information #
For more information on Lmod, see https://lmod.readthedocs.org.
7.2 GNU Compiler Toolchain Collection for HPC #
In SUSE Linux Enterprise High Performance Computing, the GNU compiler collection version 7 is provided as the base compiler toolchain. The gnu-compilers-hpc package provides the environment module for the base version of the GNU compiler suite. This package must be installed when using any of the HPC libraries enabled for environment modules.
7.2.1 Environment module #
This package requires lua-lmod to supply environment module support.
To install gnu-compilers-hpc, run the following command:
> sudo zypper in gnu-compilers-hpc
To make libraries built with the base compilers available, you must set up the environment appropriately and select the GNU toolchain. To do so, run the following command:
> module load gnu
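To verify that the toolchain is active, you can check which compiler version is now found, for example:
> gcc --version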
7.2.2 Building High Performance Computing software with GNU Compiler Suite #
To use the GNU compiler collection to build your own libraries and applications, gnu-compilers-hpc-devel must be installed. It ensures that all compiler components required for HPC (that is, C, C++, and Fortran compilers) are installed.
The environment variables CC, CXX, FC, and F77 are set correctly, and the path is adjusted so that the correct compiler version is found.
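A minimal build sketch using these variables (hello.c is an assumed C source file):
> module load gnu
> $CC -O2 -o hello hello.c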
7.2.3 Later versions #
The Development Tools Module might provide later versions of the GNU compiler suite. To determine the available compiler suites, run the following command:
> zypper search '*-compilers-hpc'
If you have more than one version of the compiler suite installed, Lmod picks the latest one by default. If you require an older version, or the base version, append the version number:
> module load gnu/7
For more information, see Section 7.1, “Lmod — Lua-based environment modules”.
7.3 High Performance Computing libraries #
Library packages that support environment modules follow a distinctive naming scheme. All packages have the compiler suite and, if built with MPI support, the MPI flavor included in their name: *-[MPI_FLAVOR-]COMPILER-hpc*.
To allow the parallel installation of multiple versions of a library, the package name contains the version number (with dots . replaced by underscores _). Master packages are supplied to ensure that the latest version of a package is installed. When these master packages are updated, the latest version of the respective packages is installed, while previous versions remain installed. Library packages are split between runtime and compile-time packages. The compile-time packages typically supply include files and .so files for shared libraries. Compile-time package names end with -devel. For some libraries, static (.a) libraries are supplied as well. Package names for these end with -devel-static.
As an example, these are the package names of the ScaLAPACK library version 2.1.0, built with GCC for Open MPI v2:
library package: libscalapack2_2_1_0-gnu-openmpi2-hpc
library master package: libscalapack2-gnu-openmpi2-hpc
development package: libscalapack2_2_1_0-gnu-openmpi2-hpc-devel
development master package: libscalapack2-gnu-openmpi2-hpc-devel
static library package: libscalapack2_2_1_0-gnu-openmpi2-hpc-devel-static
The digit 2 appended to the library name denotes the .so version of the library.
To install a library package, run zypper in LIBRARY-MASTER-PACKAGE. To install a development package, run zypper in LIBRARY-DEVEL-MASTER-PACKAGE.
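For example, to install the ScaLAPACK development master package from the example above:
> sudo zypper in libscalapack2-gnu-openmpi2-hpc-devel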
The GNU compiler collection version 7 as provided with SLE HPC and the MPI flavors Open MPI v.3, Open MPI v.4, MPICH, and MVAPICH2 are currently supported.
The Development Tools Module might provide later versions of the GNU compiler suite. To view available compilers, run the following command:
> zypper search '*-compilers-hpc'
7.3.1 boost — Boost C++ Libraries #
Boost is a set of portable C++ libraries that provide a reference implementation of "existing practices". See the full release notes for Boost 1.71 at https://www.boost.org/users/history/version_1_71_0.html.
To load the highest available serial version of this module, run the following command:
> module load TOOLCHAIN boost
To use MPI-specific boost libraries, run the following command:
> module load TOOLCHAIN MPI_FLAVOR boost
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information about available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
boost-gnu-hpc
boost-gnu-hpc-devel
Most Boost libraries do not depend on MPI flavors. However, Boost contains a set of libraries to abstract interaction with MPI. These libraries depend on the MPI flavor used.
List of master packages:
boost-gnu-MPI_FLAVOR-hpc
boost-gnu-MPI_FLAVOR-hpc-devel
boost-gnu-MPI_FLAVOR-hpc-python3
MPI_FLAVOR must be one of the supported MPI flavors described in Section 7.5, “MPI libraries”.
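A minimal link sketch for a serial Boost library (a sketch; -lboost_regex is the conventional link flag for the Boost.Regex library, and app.cpp is an assumed source file):
> module load gnu boost
> g++ -o app app.cpp -lboost_regex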
7.3.2 FFTW HPC library — discrete Fourier transforms #
FFTW is a C subroutine library for computing the Discrete Fourier Transform (DFT) in one or more dimensions, of both real and complex data, and of arbitrary input size.
This library is available as both a serial and an MPI-enabled variant. This module requires a compiler toolchain module to be loaded. To select an MPI variant, the respective MPI module must be loaded beforehand. To load this module, run the following command:
> module load TOOLCHAIN fftw3
To load the MPI-enabled variant, run the following command:
> module load TOOLCHAIN MPI_FLAVOR fftw3
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information about available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
libfftw3-gnu-hpc
fftw3-gnu-hpc-devel
libfftw3-gnu-MPI_FLAVOR-hpc
fftw3-gnu-MPI_FLAVOR-hpc-devel
MPI_FLAVOR must be one of the supported MPI flavors described in Section 7.5, “MPI libraries”.
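A minimal build sketch for the serial variant (a sketch; -lfftw3 is FFTW's standard link flag, and transform.c is an assumed source file):
> module load gnu fftw3
> gcc -o transform transform.c -lfftw3 -lm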
7.3.3 NumPy Python library #
NumPy is a general-purpose array-processing package designed to efficiently manipulate large multi-dimensional arrays of arbitrary records without sacrificing too much speed for small multi-dimensional arrays.
NumPy is built on the Numeric code base and adds features introduced by the discontinued NumArray project, as well as an extended C API, and the ability to create arrays of arbitrary type, which also makes NumPy suitable for interfacing with general-purpose database applications.
There are also basic facilities for discrete Fourier transform, basic linear algebra, and random number generation.
This package is available for both Python 2 and 3. The specific compiler toolchain module must be loaded for this library. The correct library module for the Python version used must be specified when loading this module. To load this module, run the following command:
> module load TOOLCHAIN pythonVERSION-numpy
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”.
List of master packages:
pythonVERSION-numpy-gnu-hpc
pythonVERSION-numpy-gnu-hpc-devel
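A quick check that the module is active (a sketch, assuming Python 3):
> module load gnu python3-numpy
> python3 -c 'import numpy; print(numpy.__version__)'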
7.3.4 SciPy Python Library #
SciPy is a collection of mathematical algorithms and convenience functions built on the NumPy extension of Python. It provides high-level commands and classes for manipulating and visualizing data. With SciPy, an interactive Python session becomes a data-processing and system-prototyping environment.
This package is available for both Python 2 (up to version 1.2.0 only) and Python 3. The specific compiler toolchain modules must be loaded for this library. The correct library module for the Python version used must be specified when loading this module. To load this module, run the following command:
> module load TOOLCHAIN pythonVERSION-scipy
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”.
List of master packages:
pythonPYTHON_VERSION-scipy-gnu-hpc
pythonPYTHON_VERSION-scipy-gnu-hpc-devel
7.3.5 HYPRE — scalable linear solvers and multigrid methods #
HYPRE is a library of linear solvers that are designed to solve large and detailed simulations faster than traditional methods at large scales.
The library offers a comprehensive suite of scalable solvers for large-scale scientific simulation, featuring parallel multigrid methods for both structured and unstructured grid problems. HYPRE is highly portable and supports several languages. It is developed at Lawrence Livermore National Laboratory.
For this library, a compiler toolchain and an MPI flavor need to be loaded beforehand. To load this module, run the following command:
> module load TOOLCHAIN MPI_FLAVOR hypre
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information on available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
hypre-gnu-MPI_FLAVOR-hpc-devel
libHYPRE-gnu-MPI_FLAVOR-hpc
7.3.6 METIS — serial graph partitioning and fill-reducing matrix ordering library #
METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill-reducing orderings for sparse matrices. The algorithms implemented in METIS are based on the multilevel recursive-bisection, multilevel k-way, and multi-constraint partitioning schemes.
For this library, a compiler toolchain must be loaded beforehand. To load METIS, run the following command:
> module load TOOLCHAIN metis
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”.
List of master packages:
metis-gnu-hpc
metis-gnu-hpc-devel
metis-gnu-hpc-doc
metis-gnu-hpc-examples
libmetis-gnu-hpc
7.3.7 GSL — GNU Scientific Library #
The GNU Scientific Library (GSL) is a numerical library for C and C++ programmers.
The library provides a wide range of mathematical routines such as random number generators, special functions, and least-squares fitting. There are over 1000 functions in total with an extensive test suite.
It is free software under the GNU General Public License.
For this library, a compiler toolchain must be loaded beforehand. To load GSL, run the following command:
> module load TOOLCHAIN gsl
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”.
List of master packages:
gsl-gnu-hpc
gsl-gnu-hpc-devel
gsl-gnu-hpc-doc
libgsl-gnu-hpc
libgslcblas-gnu-hpc
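A minimal link sketch (a sketch; -lgsl -lgslcblas is GSL's documented standard link line, and bessel.c is an assumed source file):
> module load gnu gsl
> gcc -o bessel bessel.c -lgsl -lgslcblas -lm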
7.3.8 OCR — Open Community Runtime for shared memory #
The Open Community Runtime project is an application-building framework that explores methods for high-core-count programming with focus on HPC applications.
This first reference implementation is a functionally-complete implementation of the OCR 1.0.0 specification, with extensive tools and demonstration examples, running on both single nodes and clusters.
This library is available both with and without MPI support. For this library, a compiler toolchain and, if applicable, an MPI flavor must be loaded beforehand. To load ocr, run the following command:
> module load TOOLCHAIN ocr
To load ocr with MPI support, run the following command:
> module load TOOLCHAIN MPI_FLAVOR ocr
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information on available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
ocr-gnu-hpc
ocr-gnu-hpc-devel
ocr-gnu-hpc-doc
ocr-gnu-hpc-examples
ocr-gnu-MPI_FLAVOR-hpc
ocr-gnu-MPI_FLAVOR-hpc-devel
ocr-gnu-MPI_FLAVOR-hpc-doc
ocr-gnu-MPI_FLAVOR-hpc-examples
7.3.9 memkind — heap manager for heterogeneous memory platforms and mixed memory policies #
The memkind library is a user-extensible heap manager built on top of jemalloc. It enables control over memory characteristics and a partitioning of the heap between kinds of memory. The kinds of memory are defined by operating system memory policies that have been applied to virtual address ranges. Memory characteristics supported by memkind without user extension include control of NUMA and page size features.
For more information, see the man pages memkind and hbwmalloc.
This tool is only available for AMD64/Intel 64.
7.3.10 MUMPS — MUltifrontal Massively Parallel sparse direct Solver #
MUMPS (not to be confused with the database programming language with the same acronym) solves a sparse system of linear equations (A x = b) using Gaussian elimination.
This library requires a compiler toolchain and an MPI flavor to be loaded beforehand. To load this module, run the following command:
> module load TOOLCHAIN MPI_FLAVOR mumps
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information on available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
libmumps-gnu-MPI_FLAVOR-hpc
mumps-gnu-MPI_FLAVOR-hpc-devel
mumps-gnu-MPI_FLAVOR-hpc-doc
mumps-gnu-MPI_FLAVOR-hpc-examples
7.3.11 Support for PMIx in Slurm and MPI libraries #
PMIx abstracts the internals of MPI implementations for workload managers and unifies the way MPI jobs are started by the workload manager. With PMIx, there is no need to use the individual MPI launchers on Slurm, because srun takes care of this. In addition, the workload manager can determine the topology of the cluster, so you do not need to specify topologies manually.
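For example, a sketch of launching an MPI program directly with srun (assuming Slurm was built with PMIx support; the available values can be listed with srun --mpi=list):
> srun --mpi=pmix -N 2 -n 8 ./mpi_app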
7.3.12 OpenBLAS library — optimized BLAS library #
OpenBLAS is an optimized BLAS (Basic Linear Algebra Subprograms) library based on GotoBLAS2 1.3, BSD version. It provides the BLAS API. It is shipped as a package enabled for environment modules, so it requires using Lmod to select a version. There are two variants of this library: an OpenMP-enabled variant, and a pthreads variant.
OpenMP-Enabled Variant#
The OpenMP variant covers the following use cases:
Programs using OpenMP. These require the OpenMP-enabled library version to function correctly.
Programs using pthreads. These require an OpenBLAS library without pthread support, which can be achieved with the OpenMP variant. We recommend limiting the number of threads used to 1 by setting the environment variable OMP_NUM_THREADS=1.
Programs without pthreads and without OpenMP. Such programs can still take advantage of the OpenMP optimization in the library by linking against the OpenMP variant of the library.
When linking statically, ensure that libgomp.a is included by adding the linker flag -lgomp.
pthreads Variant#
The pthreads variant of the OpenBLAS library can improve the performance of single-threaded programs. The number of threads used can be controlled with the environment variable OPENBLAS_NUM_THREADS.
Installation and Usage#
This module requires loading a compiler toolchain beforehand. To select the latest version of this module provided, run one of the following commands:
Standard (OpenMP) version:
> module load TOOLCHAIN openblas
pthreads version:
> module load TOOLCHAIN openblas-pthreads
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”.
List of master packages:
libopenblas-gnu-hpc
libopenblas-gnu-hpc-devel
libopenblas-pthreads-gnu-hpc
libopenblas-pthreads-gnu-hpc-devel
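For example, a sketch of running a pthreads-based program against the OpenMP variant with threading limited to 1, as recommended above (my_program is an assumed binary):
> module load gnu openblas
> OMP_NUM_THREADS=1 ./my_program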
7.3.13 PETSc HPC library — solver for partial differential equations #
PETSc is a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.
This module requires loading a compiler toolchain and an MPI library flavor beforehand. To load this module, run the following command:
> module load TOOLCHAIN MPI_FLAVOR petsc
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information about available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
libpetsc-gnu-MPI_FLAVOR-hpc
petsc-gnu-MPI_FLAVOR-hpc-devel
MPI_FLAVOR must be one of the supported MPI flavors described in Section 7.5, “MPI libraries”.
7.3.14 ScaLAPACK HPC library — LAPACK routines #
The library ScaLAPACK (short for Scalable LAPACK) includes a subset of LAPACK routines designed for distributed memory MIMD-parallel computers.
This library requires loading both a compiler toolchain and an MPI library flavor beforehand. To load this module, run the following command:
> module load TOOLCHAIN MPI_FLAVOR scalapack
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information about available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
libscalapack2-gnu-MPI_FLAVOR-hpc
libscalapack2-gnu-MPI_FLAVOR-hpc-devel
MPI_FLAVOR must be one of the supported MPI flavors described in Section 7.5, “MPI libraries”.
7.3.15 SCOTCH — static mapping and sparse matrix reordering algorithms #
SCOTCH is a set of programs and libraries that implement the static mapping and sparse matrix reordering algorithms developed in the SCOTCH project.
SCOTCH applies graph theory to scientific computing problems such as graph and mesh partitioning, static mapping, and sparse matrix ordering, in application domains ranging from structural mechanics to operating systems or biochemistry.
For this library, a compiler toolchain and an MPI flavor must be loaded beforehand. To load this module, run the following command:
> module load TOOLCHAIN MPI_FLAVOR scotch
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information about available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
libptscotch-gnu-MPI_FLAVOR-hpc
ptscotch-gnu-MPI_FLAVOR-hpc
ptscotch-gnu-MPI_FLAVOR-hpc-devel
7.3.16 SuperLU — supernodal LU decomposition of sparse matrices #
SuperLU is a general-purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations. The library is written in C and can be called from C and Fortran programs.
This library requires a compiler toolchain to be loaded beforehand. To load this module, run the following command:
> module load TOOLCHAIN superlu
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”.
List of master packages:
libsuperlu-gnu-hpc
superlu-gnu-hpc-devel
superlu-gnu-hpc-doc
superlu-gnu-hpc-examples
7.3.17 Trilinos — object-oriented software framework #
The Trilinos Project is an effort to develop algorithms and enabling technologies within an object-oriented software framework for the solution of large-scale, complex multi-physics engineering and scientific problems. A unique design feature of Trilinos is its focus on packages.
This library requires a compiler toolchain and an MPI flavor to be loaded beforehand. To load this module, run the following command:
> module load TOOLCHAIN MPI_FLAVOR trilinos
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information on available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
libtrilinos-gnu-MPI_FLAVOR-hpc
trilinos-gnu-MPI_FLAVOR-hpc-devel
7.4 File format libraries #
7.4.1 Adaptable IO System (ADIOS) #
The Adaptable IO System (ADIOS) provides a flexible way for scientists to describe the data in their code that might need to be written, read, or processed outside of the running simulation. For more information, see https://www.olcf.ornl.gov/center-projects/adios/.
To load this module, run the following command:
> module load TOOLCHAIN MPI_FLAVOR adios
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information about available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
adios-gnu-MPI_FLAVOR-hpc
adios-gnu-MPI_FLAVOR-hpc-devel
Replace MPI_FLAVOR with one of the supported MPI flavors described in Section 7.5, “MPI libraries”.
7.4.2 HDF5 HPC library — model, library, and file format for storing and managing data #
HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of data types, and is designed for flexible and efficient I/O and for high-volume and complex data. HDF5 is portable and extensible, allowing applications to evolve in their use of HDF5.
There are serial and MPI variants of this library available. All flavors require loading a compiler toolchain module beforehand. The MPI variants also require loading the correct MPI flavor module.
To load the highest available serial version of this module, run the following command:
> module load TOOLCHAIN hdf5
When an MPI flavor is loaded, you can load the MPI version of this module by running the following command:
> module load TOOLCHAIN MPI_FLAVOR phdf5
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information about available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
hdf5-hpc-examples
hdf5-gnu-hpc-devel
libhdf5-gnu-hpc
libhdf5_cpp-gnu-hpc
libhdf5_fortran-gnu-hpc
libhdf5_hl_cpp-gnu-hpc
libhdf5_hl_fortran-gnu-hpc
hdf5-gnu-MPI_FLAVOR-hpc-devel
libhdf5-gnu-MPI_FLAVOR-hpc
libhdf5_fortran-gnu-MPI_FLAVOR-hpc
libhdf5_hl_fortran-gnu-MPI_FLAVOR-hpc
MPI_FLAVOR must be one of the supported MPI flavors described in Section 7.5, “MPI libraries”.
For general information about Lmod and modules, see Section 7.1, “Lmod — Lua-based environment modules”.
7.4.3 NetCDF HPC library — implementation of self-describing data formats #
The NetCDF software libraries for C, C++, Fortran, and Perl are sets of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
netcdf Packages#
The packages with names starting with netcdf provide C bindings for the NetCDF API. These are available with and without MPI support.
There are serial and MPI variants of this library available. Both variants require loading a compiler toolchain module beforehand; the MPI variant also requires loading the correct MPI flavor module. To load the highest version of the non-MPI netcdf module, run the following command:
> module load TOOLCHAIN netcdf
To load the MPI variant, run the following command:
> module load TOOLCHAIN MPI_FLAVOR netcdf
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information on available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
netcdf-gnu-hpc
netcdf-gnu-hpc-devel
netcdf-gnu-MPI_FLAVOR-hpc
netcdf-gnu-MPI_FLAVOR-hpc-devel
MPI_FLAVOR must be one of the supported MPI flavors described in Section 7.5, “MPI libraries”.
netcdf-cxx Packages#
netcdf-cxx4 provides a C++ binding for the NetCDF API.
This module requires loading a compiler toolchain module beforehand. To load this module, run the following command:
> module load TOOLCHAIN netcdf-cxx4
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”.
List of master packages:
libnetcdf-cxx4-gnu-hpc
libnetcdf-cxx4-gnu-hpc-devel
netcdf-cxx4-gnu-hpc-tools
netcdf-fortran Packages#
The netcdf-fortran packages provide Fortran bindings for the NetCDF API, with and without MPI support.
There are serial and MPI variants of this library available. Both variants require loading a compiler toolchain module beforehand; the MPI variant also requires loading the correct MPI flavor module. To load the highest version of the non-MPI netcdf-fortran module, run the following command:
> module load TOOLCHAIN netcdf-fortran
To load the MPI variant, run the following command:
> module load TOOLCHAIN MPI_FLAVOR netcdf-fortran
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information on available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
libnetcdf-fortran-gnu-hpc
libnetcdf-fortran-gnu-hpc-devel
libnetcdf-fortran-gnu-MPI_FLAVOR-hpc
libnetcdf-fortran-gnu-MPI_FLAVOR-hpc-devel
7.4.4 HPC flavor of pnetcdf #
NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.
Parallel NetCDF (PnetCDF) is a library providing high-performance I/O while still maintaining file-format compatibility with NetCDF by Unidata.
The package is available for the MPI flavors Open MPI 2 and 3, MVAPICH2, and MPICH.
To load the highest available version of this module, run the following command:
> module load TOOLCHAIN MPI_FLAVOR pnetcdf
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information on available MPI flavors, see Section 7.5, “MPI libraries”.
List of MPI master packages:
libpnetcdf-gnu-MPI_FLAVOR-hpc
pnetcdf-gnu-MPI_FLAVOR-hpc
pnetcdf-gnu-MPI_FLAVOR-hpc-devel
MPI_FLAVOR must be one of the supported MPI flavors described in Section 7.5, “MPI libraries”.
7.5 MPI libraries #
Three different implementations of the Message Passing Interface (MPI) standard are provided with the HPC module:
Open MPI (version 3 and version 4)
MVAPICH2
MPICH
These packages have been built with full environment module support (Lmod).
The following packages are available:
For Open MPI:
user programs: openmpi3-gnu-hpc and openmpi4-gnu-hpc
shared libraries: libopenmpi3-gnu-hpc and libopenmpi4-gnu-hpc
development libraries, headers and tools required for building: openmpi3-gnu-hpc-devel and openmpi4-gnu-hpc-devel
documentation: openmpi3-gnu-hpc-docs and openmpi4-gnu-hpc-docs
For MVAPICH2:
user programs and libraries: mvapich2-gnu-hpc
development libraries, headers and tools for building: mvapich2-gnu-hpc-devel
documentation: mvapich2-gnu-hpc-doc
For MPICH:
user programs and libraries: mpich-gnu-hpc
development libraries, headers and tools for building: mpich-gnu-hpc-devel
The different MPI implementations and versions are independent of each other, and can be installed in parallel.
Use environment modules to pick the version to use:
For Open MPI v.3:
> module load TOOLCHAIN openmpi/3
For Open MPI v.4:
> module load TOOLCHAIN openmpi/4
For MVAPICH2:
> module load TOOLCHAIN mvapich2
For MPICH:
> module load TOOLCHAIN mpich
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”.
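A minimal build-and-run sketch (a sketch; mpicc and mpirun are the standard MPI compiler wrapper and launcher, and hello.c is an assumed MPI source file):
> module load gnu openmpi/4
> mpicc -o hello hello.c
> mpirun -np 4 ./hello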
7.6 Profiling and benchmarking libraries and tools #
SUSE Linux Enterprise High Performance Computing provides tools for profiling MPI applications and benchmarking MPI performance.
7.6.1 IMB — Intel* MPI benchmarks #
The Intel* MPI Benchmarks package provides a set of elementary benchmarks that conform to the MPI-1, MPI-2, and MPI-3 standards. You can run all of the supported benchmarks, or a subset specified in the command line, using a single executable file. Use command line parameters to specify various settings, such as time measurement, message lengths, and selection of communicators. For details, see the Intel* MPI Benchmarks User's Guide: https://software.intel.com/en-us/imb-user-guide.
For the IMB binaries to be found, a compiler toolchain and an MPI flavor must be loaded beforehand. To load this module, run the following command:
> module load TOOLCHAIN MPI_FLAVOR imb
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information on available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
imb-gnu-MPI_FLAVOR-hpc
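For example, a sketch of running the PingPong benchmark between two ranks (IMB-MPI1 and PingPong are standard IMB binary and benchmark names):
> module load gnu openmpi/4 imb
> mpirun -np 2 IMB-MPI1 PingPong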
7.6.2 PAPI HPC library — consistent interface for hardware performance counters #
PAPI provides a tool with a consistent interface and methodology for the performance counter hardware found in most major microprocessors.
This package works with all compiler toolchains and does not require a compiler toolchain to be selected. Load the latest version provided by running the following command:
> module load papi
List of master packages:
papi-hpc
papi-hpc-devel
For general information about Lmod and modules, see Section 7.1, “Lmod — Lua-based environment modules”.
7.6.3 mpiP — lightweight MPI profiling library #
mpiP is a lightweight profiling library for MPI applications. Because it only collects statistical information about MPI functions, mpiP generates considerably less overhead and much less data than tracing tools. All the information captured by mpiP is task-local. It only uses communication during report generation, typically at the end of the experiment, to merge results from all of the tasks into one output file.
For this library, a compiler toolchain and an MPI flavor must be loaded beforehand. To load this module, run the following command:
> module load TOOLCHAIN MPI_FLAVOR mpip
For information about the toolchain to load, see Section 7.2, “GNU Compiler Toolchain Collection for HPC”. For information on available MPI flavors, see Section 7.5, “MPI libraries”.
List of master packages:
mpiP-gnu-MPI_FLAVOR-hpc
mpiP-gnu-MPI_FLAVOR-hpc-devel
mpiP-gnu-MPI_FLAVOR-hpc-doc
MPI_FLAVOR must be one of the supported MPI flavors described in Section 7.5, “MPI libraries”.
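A hedged usage sketch: mpiP is typically enabled either by linking the application against the mpiP library or by preloading it at run time (the preload approach follows mpiP's documentation; the library file name and path may differ on your system):
> module load gnu openmpi/4 mpip
> LD_PRELOAD=libmpiP.so mpirun -np 4 ./mpi_app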
7.7 Creating environment containers with Singularity #
You can deploy environments with preconfigured environment variables by using environment containers. Environment containers include only the components that are part of the environment, plus any required user applications. To create a container from the current HPC environment, use the container platform Singularity. Singularity is available from SUSE Package Hub. You can also use Spack to configure the environment to use with Singularity.
For more information, see the following documentation:
Enabling the SUSE Package Hub extension: https://packagehub.suse.com/how-to-use/.
Using Spack to configure the environment: https://spack.readthedocs.io/en/latest/containers.html#.
Singularity documentation: https://apptainer.org/docs-legacy.