Message Passing Interface (MPI)
The Message Passing Interface (MPI) is an Application Programming Interface (API) that defines a model of parallel computing in which each parallel process has its own local memory, and data must be explicitly shared by passing messages between processes.
Using MPI allows programs to scale beyond the processors and shared memory of a single compute server to the distributed memory and processors of multiple compute servers working together.
An MPI parallel code requires some changes from serial code: MPI function calls are added to communicate data, and the data must be divided across processes.
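As a minimal sketch of that starting point (an illustrative example, not code from the MPI specification), the C program below initializes MPI, queries the process's rank and the total number of processes, and prints them; rank and size are the values a program typically uses to decide which share of the data each process owns.

```c
/* Minimal MPI program: every process learns its rank and the
 * total process count, the usual basis for dividing up data. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime */
    return 0;
}
```

Launched with, for example, `mpirun -np 4 ./hello`, each of the four processes prints its own line.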
Definition and Purpose: MPI, which stands for Message Passing Interface, is not a library but a specification that dictates how message-passing libraries should be constructed. Its objective is to standardize the way data is transferred between different processes for parallel programming, aiming for practicality, portability, efficiency, and flexibility.
Programming Model: MPI was designed in the early 1990s for the distributed memory architectures that were then prevalent, and it has since adapted to serve shared and hybrid memory systems as well. Despite these architectural changes, the programming model retains its focus on distributed memory, and all parallelism must be explicitly coded by the programmer.
Language Support: Interface specifications have been defined primarily for C and Fortran. C++ bindings, present in earlier versions, were removed in MPI-3, which also added support for modern Fortran features.
Functionality and Portability: MPI has become a standard in high-performance computing, offering over 430 routines as of MPI-3. The specification's portability allows for easy migration of code across platforms that support MPI, and vendors can optimize implementations for native hardware features.
Historical Context and Evolution: Originating from the need for a standard in parallel computing in the early 1990s, MPI has undergone several iterations. It started with MPI-1, evolved into MPI-2, and at the time of writing stood at MPI-3.1, with MPI-4 under development.
MPI point-to-point communication is the most commonly used communication method in MPI. It involves the transfer of a message from one process to a specific process in the same communicator. MPI provides blocking (synchronous) and non-blocking (asynchronous) point-to-point communication. With blocking communication, an MPI process sends a message to another MPI process and waits until the receiving process has completely and correctly received the message before continuing its work. With non-blocking communication, the sending process sends the message and continues its work without waiting to confirm that the message has been correctly received.
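As an illustrative sketch (the ranks, the tag value, and the integer payload are arbitrary choices, not taken from the text above), the C program below pairs a blocking MPI_Send on rank 0 with a non-blocking MPI_Irecv on rank 1; it assumes the job is launched with at least two processes.

```c
/* Blocking vs. non-blocking point-to-point transfer between
 * rank 0 and rank 1 (assumes at least two processes). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Blocking send: does not return until the buffer
         * can safely be reused. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        /* Non-blocking receive: returns immediately... */
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* ...so other work could be done here... */
        MPI_Wait(&req, MPI_STATUS_IGNORE); /* ...before waiting. */
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```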
With the MPI broadcast communication method, a process broadcasts a message to all processes in the same communicator, including itself.
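A minimal broadcast sketch in C, assuming rank 0 as the root and a single integer payload (both illustrative choices); note that every process, root included, makes the same MPI_Bcast call:

```c
/* Broadcast: rank 0's value is copied to every process in
 * MPI_COMM_WORLD, including rank 0 itself. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 100; /* only the root holds the data initially */

    /* After this collective call returns, all ranks hold 100. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d now has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}
```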
With the MPI one-sided communication method, a process can directly access the memory space of another process without the active involvement of that process.
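The sketch below illustrates this with MPI_Get: rank 0 reads an integer from a memory window exposed by rank 1, which never posts a matching receive for that transfer. The fence synchronization and the integer payload are illustrative assumptions, and at least two processes are assumed.

```c
/* One-sided communication: rank 0 fetches rank 1's value via
 * MPI_Get; rank 1 only participates in the collective window
 * setup and fences, not in the transfer itself. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, local = 0, remote = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1)
        local = 7; /* the value rank 0 will fetch */

    /* Expose each process's 'local' variable as an RMA window. */
    MPI_Win_create(&local, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win); /* open an access epoch */
    if (rank == 0)
        /* Read one int from displacement 0 of rank 1's window. */
        MPI_Get(&remote, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win); /* close the epoch; transfer is complete */

    if (rank == 0)
        printf("Rank 0 read %d from rank 1\n", remote);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```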