HAVE FUN WITH MPI (in C): a new interactive book available on Tech.io


An interactive tutorial playground available on Tech.io.

This work is shared under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


Hi! Whether you are a Computer Science student or a brave programmer who wants to start learning the basics of parallel programming on shared and/or distributed memory systems, this (play)book will hopefully light your way.

First, why a playground on Tech.io? A playground is a useful tool to explain both general concepts and more specific topics. What makes it special, however, is the possibility to add runnable code samples that every reader can hack on: just change a few lines and see what happens the next time you run that snippet.

This playground is focused on programming with MPI. Starting from what the (M)essage (P)assing (I)nterface is, we will approach and work with OpenMPI, an open-source MPI implementation. You will find a bunch of runnable snippets for each newly introduced concept, along with end-of-chapter questions. Nothing is mandatory, but you are strongly encouraged to try things out.

The examples are written in the C language, so familiarity with Ritchie's language is required (we hope you already know it if you have reached this book). All we need to do now is take off. Sit back, relax, and code.

Alessia Antelmi, PhD Student. Department of Computer Science, Università degli Studi di Salerno

Book outline

  1. Introduction. A brief introduction to distributed computing using distributed memory paradigm and MPI.
    • Let’s start to have fun with MPI
    • Take the first steps, Hello world
    • The OpenMPI Architecture
    • MPI Programming
    • Chapter Questions
  2. Point-to-Point communication. This chapter introduces synchronous and asynchronous communications of the MPI standard.
    • MPI Memory model
    • Blocking Communication
    • Communication Modes
    • Non-Blocking Communication
    • Chapter Questions
  3. Datatypes. This chapter introduces Datatypes of the MPI standard.
    • Communicate noncontiguous data
    • Derived Datatypes
    • Chapter Questions
  4. Collective communications. This chapter introduces collective communications of the MPI standard.
    • Collective communications Overview
    • MPI Groups
    • MPI Communicators
    • Collective Communications Routines
    • Chapter Questions
  5. Communication Topologies. A brief introduction to MPI topologies.
    • MPI Process Topologies
    • Chapter Questions
  6. HPC Environment for all. This chapter introduces how to create an MPI cluster machine on Amazon AWS.
    • MPI Amazon AWS Cluster
    • Docker MPI Environment
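
As a small preview of the "Take the first steps, Hello world" section of the outline above, here is a minimal sketch of an MPI hello-world program in C. The file name and structure are illustrative, not the book's exact example:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    // Initialize the MPI execution environment.
    MPI_Init(&argc, &argv);

    int rank, size;
    // Rank of this process and total number of processes
    // in the default communicator MPI_COMM_WORLD.
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello world from process %d of %d\n", rank, size);

    // Shut down the MPI environment before exiting.
    MPI_Finalize();
    return 0;
}
```

Each MPI process runs this same program; the rank distinguishes them, so launching it with 4 processes prints four different greeting lines.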

Book features and recommendations

Book Execution Environment

This book uses a Docker container that enables executing MPI programs in the browser. The container is available in a public repository on GitHub. The execution environment provides an Ubuntu 18.04 Linux machine with several software packages installed, including the latest version of OpenMPI, the MPI implementation used in this book.

You can also run the Docker container on your local machine to experiment while varying the number of MPI processes. Pull the image from the official Docker registry with docker pull spagnuolocarmine/docker-mpi:latest, or build the image yourself:

git clone https://github.com/spagnuolocarmine/docker-mpi.git
cd docker-mpi
docker build --no-cache -t dockermpi .
docker run -it dockermpi:latest
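
Once inside the container (or on any machine with OpenMPI installed), an MPI program can be compiled and launched as follows. The file name hello.c and the process count are illustrative:

```shell
# Compile an MPI C source file with the OpenMPI compiler wrapper.
mpicc hello.c -o hello

# Launch 4 MPI processes; add --oversubscribe if the host has
# fewer cores than the requested number of processes.
mpirun -np 4 ./hello
```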


Suggested readings

  1. Peter Pacheco. 2011. An Introduction to Parallel Programming (1st ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
  2. Kai Hwang, Jack Dongarra, and Geoffrey C. Fox. 2011. Distributed and Cloud Computing: From Parallel Processing to the Internet of Things (1st ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
  3. Zbigniew Czech. 2017. Introduction to Parallel Computing. Cambridge University Press, Cambridge.
  4. Blaise Barney, Lawrence Livermore National Laboratory. Message Passing Interface (MPI) – https://computing.llnl.gov/tutorials/mpi/
  5. MPI: A Message-Passing Interface Standard, Version 2.2 – https://www.mpi-forum.org/docs/mpi-2.2/mpi22-report.pdf
  6. MPI: A Message-Passing Interface Standard, Version 3.1 – https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf
  7. Greg Wilson and Kristian Hermansen. 2011. The Architecture of Open Source Applications, Volume II.
  8. RookieHPC MPI documentation – https://www.rookiehpc.com/mpi/docs/index.php
  9. Beginning MPI (An Introduction in C)
  10. Cornell Virtual Workshop – https://cvw.cac.cornell.edu/MPIP2P
  11. MPI Tutorial: Broadcast and Collective Communication – https://mpitutorial.com/tutorials/mpi-broadcast-and-collective-communication/
  12. DeinoMPI – https://mpi.deino.net

I wish to express my gratitude to Alessia Antelmi for reviewing this manuscript and helping to improve its quality by providing ideas and active support throughout the writing.