Introduction to MPI and OpenMP

Part 1: MPI (Message Passing Interface) - A Hands-On Guide

Introduction

Parallel programming is a powerful technique for solving complex problems by breaking them into smaller tasks that can be executed simultaneously. One widely used standard for parallel programming is MPI (Message Passing Interface). In this hands-on guide, we’ll explore the basics of MPI and learn how to write a simple MPI program.

What is MPI?

MPI is a standardized and portable message-passing system designed for parallel computing. It allows processes to communicate with each other in a parallel application. MPI is commonly used in high-performance computing (HPC) environments to harness the power of multiple processors or nodes.

Installing MPI

Before we begin, let’s set up an MPI environment. The following steps are for a Debian/Ubuntu-based Linux environment:

  1. Install MPI:

    sudo apt-get update
    sudo apt-get install mpich
    
  2. Verify Installation:

    mpicc --version
    

    You should see the version information if the installation was successful.

Hello, MPI World!

Now, let’s create a simple MPI program to print “Hello, MPI World!” using multiple processes.

// hello_mpi.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello, MPI World! I am process %d of %d.\n", rank, size);

    MPI_Finalize();
    return 0;
}

Save this code in a file named hello_mpi.c.

Compiling and Running MPI Programs

  1. Compile the program:

    mpicc -o hello_mpi hello_mpi.c
    
  2. Run the MPI program:

    mpiexec -n 4 ./hello_mpi
    

    This command runs the MPI program with 4 processes. Adjust the -n parameter based on the number of processes you want.

You should see output similar to the following (the lines may appear in any order, since the processes run concurrently):

Hello, MPI World! I am process 0 of 4.
Hello, MPI World! I am process 1 of 4.
Hello, MPI World! I am process 2 of 4.
Hello, MPI World! I am process 3 of 4.

MPI Concepts

  • MPI_Init(): Initializes the MPI environment; it must be called before any other MPI routine.
  • MPI_Comm_rank(): Obtains the rank (ID) of the calling process within a communicator.
  • MPI_Comm_size(): Obtains the total number of processes in the communicator.
  • MPI_Finalize(): Finalizes the MPI environment; no MPI calls are allowed afterwards.

MPI Communication

MPI supports various communication operations, such as point-to-point communication and collective communication. To delve deeper, explore topics like MPI_Send, MPI_Recv, and collective operations like MPI_Bcast and MPI_Reduce.
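
As a quick illustration, here is a minimal sketch of both styles: process 0 sends an integer to process 1 with MPI_Send/MPI_Recv, and then every process contributes its rank to a sum on process 0 with MPI_Reduce. The file name and values are made up for this example.

// mpi_comm_demo.c (illustrative sketch, not part of the original guide)
#include <stdio.h>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Point-to-point: process 0 sends one integer to process 1.
    if (size >= 2) {
        if (rank == 0) {
            int value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Process 1 received %d from process 0.\n", value);
        }
    }

    // Collective: sum all ranks onto process 0.
    int sum = 0;
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        printf("Sum of all ranks: %d\n", sum);
    }

    MPI_Finalize();
    return 0;
}

Compile and run it the same way as before, for example mpicc -o mpi_comm_demo mpi_comm_demo.c followed by mpiexec -n 4 ./mpi_comm_demo.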


Part 2: OpenMP - A Hands-On Guide

Introduction

OpenMP is a set of compiler directives and library routines that enables parallel programming on shared-memory architectures. It simplifies the development of parallel applications by letting you add parallelism to existing code incrementally. In this section, we’ll explore the basics of OpenMP and demonstrate its usage in a simple program.

What is OpenMP?

OpenMP (Open Multiprocessing) is an API that supports multi-platform shared-memory multiprocessing programming. It allows developers to write parallel code that can take advantage of multiple processors.

Hello, OpenMP World!

Let’s create a simple OpenMP program to print “Hello, OpenMP World!” using multiple threads.

// hello_openmp.c
#include <stdio.h>
#include <omp.h>

int main() {
    #pragma omp parallel
    {
        int thread_id = omp_get_thread_num();
        printf("Hello, OpenMP World! I am thread %d.\n", thread_id);
    }

    return 0;
}

Save this code in a file named hello_openmp.c.

Compiling and Running OpenMP Programs

  1. Compile the program:

    gcc -fopenmp -o hello_openmp hello_openmp.c
    

    The -fopenmp flag enables OpenMP support.

  2. Run the OpenMP program:

    ./hello_openmp
    

    You should see one line per thread, in no particular order (OpenMP typically creates one thread per available core by default; set the OMP_NUM_THREADS environment variable to control the count), for example:

    Hello, OpenMP World! I am thread 0.
    Hello, OpenMP World! I am thread 1.
    Hello, OpenMP World! I am thread 2.
    

OpenMP Concepts

  • #pragma omp parallel: Starts a parallel region; the enclosed block is executed by a team of threads.
  • omp_get_thread_num(): Returns the ID of the calling thread within the team.
  • omp_get_num_threads(): Returns the number of threads in the current team.
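
To see omp_get_num_threads() and a common work-sharing pattern in action, here is a small sketch that reports the team size and sums the numbers 0 through 999 with a parallel for loop and a reduction clause. The file name and loop bounds are arbitrary choices for this example.

// openmp_sum.c (illustrative sketch, not part of the original guide)
#include <stdio.h>
#include <omp.h>

int main() {
    #pragma omp parallel
    {
        // Let a single thread report the team size to avoid duplicate output.
        #pragma omp single
        printf("Running with %d threads.\n", omp_get_num_threads());
    }

    // Sum 0..999 in parallel; reduction(+:sum) combines each thread's partial sum.
    long sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 1000; i++) {
        sum += i;
    }
    printf("Sum = %ld (expected 499500)\n", sum);

    return 0;
}

Compile it with gcc -fopenmp -o openmp_sum openmp_sum.c and run ./openmp_sum as before.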

Conclusion

This hands-on guide has introduced the basics of MPI and OpenMP separately. By combining the strengths of MPI and OpenMP, you can create high-performance parallel applications that leverage both distributed and shared-memory parallelism.
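
As a taste of the hybrid approach, here is a minimal sketch in which each MPI process spawns a team of OpenMP threads. MPI_Init_thread is used instead of MPI_Init to request thread support; the file name and thread-support level are choices made for this example.

// hybrid_hello.c (illustrative sketch, not part of the original guide)
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char** argv) {
    // Request MPI_THREAD_FUNNELED: only the main thread of each process makes MPI calls.
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        printf("Hello from thread %d of process %d.\n",
               omp_get_thread_num(), rank);
    }

    MPI_Finalize();
    return 0;
}

Compile it with mpicc -fopenmp -o hybrid_hello hybrid_hello.c and run with, for example, mpiexec -n 2 ./hybrid_hello.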

As you delve deeper into parallel programming, consider exploring more advanced features of MPI and OpenMP, such as communication patterns, task parallelism, and optimization techniques.

Happy parallel coding!

Mustafa Arif
HPC | Cloud | DevOps | AI