Applies to: ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets

The Message Passing Interface (MPI) is an open library and de-facto standard for distributed memory parallelization. It is commonly used across many HPC workloads. HPC workloads on the RDMA capable HB-series and N-series VMs can use MPI to communicate over the low latency, high bandwidth InfiniBand network.

The SR-IOV enabled VM sizes on Azure allow almost any flavor of MPI to be used with Mellanox OFED. On non-SR-IOV enabled VMs, supported MPI implementations use the Microsoft Network Direct (ND) interface to communicate between VMs. Hence, only Microsoft MPI (MS-MPI) 2012 R2 or later and Intel MPI 5.x versions are supported. Later versions (2017, 2018) of the Intel MPI runtime library may or may not be compatible with the Azure RDMA drivers.

For SR-IOV enabled RDMA capable VMs, CentOS-HPC VM images version 7.6 and later are suitable. These VM images come optimized and pre-loaded with the OFED drivers for RDMA and various commonly used MPI libraries and scientific computing packages, and are the easiest way to get started. Though the examples here are for RHEL/CentOS, the steps are general and can be used for any compatible Linux operating system such as Ubuntu (16.04, 18.04, 19.04, 20.04) and SLES (12 SP4 and 15). More examples for setting up other MPI implementations on other distros are in the azhpc-images repo. We recommend using the latest stable versions of the packages, or referring to the azhpc-images repo.

If an HPC application recommends a particular MPI library, try that version first. If you have flexibility regarding which MPI you can choose, and you want the best performance, try HPC-X. Overall, the HPC-X MPI performs the best by using the UCX framework for the InfiniBand interface, and takes advantage of all the Mellanox InfiniBand hardware and software capabilities. Additionally, HPC-X and OpenMPI are ABI compatible, so you can dynamically run an HPC application with HPC-X that was built with OpenMPI. Similarly, Intel MPI, MVAPICH, and MPICH are ABI compatible.

The following figure illustrates the architecture for the popular MPI libraries. Unified Communication X (UCX) is a framework of communication APIs for HPC. It is optimized for MPI communication over InfiniBand and works with many MPI implementations such as OpenMPI and MPICH.
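As a quick sanity check that whichever MPI library you installed is working, a minimal program like the sketch below can be built and launched with any of the implementations named above. This is not part of the original article; the compiler wrapper and launcher names (mpicc, mpirun) and the example host names are assumptions that depend on the MPI you loaded.

```c
/* hello_mpi.c - minimal sketch to verify an MPI installation.
 * Assumed build/run (names depend on your MPI module):
 *   mpicc hello_mpi.c -o hello_mpi
 *   mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

    char name[MPI_MAX_PROCESSOR_NAME];
    int len;
    MPI_Get_processor_name(name, &len);     /* VM host name, useful across nodes */

    printf("Hello from rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

If each rank reports a different host name when launched across several VMs, the processes are being placed and communicating as expected.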
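To see the low latency of the InfiniBand network in practice, a simple two-rank ping-pong measurement such as the hedged sketch below can be run with one rank on each of two VMs. This is an illustrative example, not the article's benchmark; the message size, iteration count, and the `mpirun -np 2 --host <vm1>,<vm2>` launch line (host names are placeholders) are assumptions.

```c
/* pingpong.c - rough round-trip latency sketch between rank 0 and rank 1.
 * Assumed run (placeholders): mpirun -np 2 --host <vm1>,<vm2> ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

#define ITERATIONS 1000
#define MSG_SIZE 8   /* bytes per message; arbitrary small size */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    char buf[MSG_SIZE] = {0};
    MPI_Barrier(MPI_COMM_WORLD);            /* start both ranks together */
    double start = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {                     /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {              /* rank 1 echoes each message back */
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("Average round-trip time: %.2f us\n", elapsed / ITERATIONS * 1e6);

    MPI_Finalize();
    return 0;
}
```

A run of this kind also gives a quick way to compare the MPI libraries discussed above (for example, HPC-X with UCX versus another implementation) on the same pair of VMs.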