Have Fun with MPI (in C): a new interactive book available on Tech.io
An interactive tutorial playground on Tech.io.
This work is shared under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Preface
Hi there! Whether you are a Computer Science student or just a brave programmer who wants to start learning the basics of parallel programming on shared and/or distributed memory systems, this (play)book will (hopefully!) light your way.

First of all, why a playground on Tech.io? Among other things, a playground is a useful tool for explaining both general concepts and more specific topics. What makes it amazing, however, is the possibility of adding runnable code samples that every reader can hack on. Basically, you can play with snippets of code: change a few lines and see what happens the next time you run that piece.

This playground is focused on programming with MPI. Starting from what the (M)essage (P)assing (I)nterface is, we will then approach and work with OpenMPI, an open-source MPI implementation. You will find a bunch of runnable snippets for each newly introduced concept, along with end-of-chapter questions. Nothing is mandatory, but you are strongly encouraged to try things out.

The examples are written in the C language, so knowing Ritchie's language is a prerequisite (we hope you already do, if you have reached this book).

All we need to do now is take off. Sit back, relax, and code.

Alessia Antelmi, PhD Student. Department of Computer Science, Università degli Studi di Salerno
Book outline
- Introduction. A brief introduction to distributed computing using distributed memory paradigm and MPI.
- Let’s start to have fun with MPI
- Take the first steps, Hello world
- The OpenMPI Architecture
- MPI Programming
- Chapter Questions
- Point-to-Point communication. This chapter introduces synchronous and asynchronous communications of the MPI standard.
- MPI Memory model
- Blocking Communication
- Communication Modes
- Non-Blocking Communication
- Chapter Questions
- Datatypes. This chapter introduces Datatypes of the MPI standard.
- Communicate noncontiguous data
- Derived Datatypes
- Chapter Questions
- Collective communications. This chapter introduces collective communications of the MPI standard.
- Collective communications Overview
- MPI Groups
- MPI Communicators
- Collective Communications Routines
- Chapter Questions
- Communication Topologies. A brief introduction to MPI topologies.
- MPI Process Topologies
- Chapter Questions
- HPC Environment for all. This chapter introduces how to create an MPI cluster machine on Amazon AWS.
- MPI Amazon AWS Cluster
- Docker MPI Environment
Book features and recommendations
- All topics are discussed and experimented with during the reading, using simple examples in C.
- This book lets you learn in a more dynamic way.
- You can change the examples and integrate them with your own code to experiment directly with the topic you have just studied.
- Each example runs with a fixed number of processes; keep this in mind if you change the code.
- Do (and re-do) the chapter questions.
Book Execution Environment
This book uses a Docker container that lets you execute MPI programs directly in the browser. The container is available in a public repository on GitHub. The execution environment provides an Ubuntu 18.04 Linux machine with several software packages installed, including the latest version of OpenMPI, the MPI implementation used in this book.
You can also run the environment on your local machine, varying the number of MPI processes, by pulling the image from the official Docker registry:
docker pull spagnuolocarmine/docker-mpi:latest
Alternatively, you can build the Docker image yourself:
git clone https://github.com/spagnuolocarmine/docker-mpi.git
cd docker-mpi
docker build --no-cache -t dockermpi .
docker run -it dockermpi:latest
References
- Peter Pacheco. 2011. An Introduction to Parallel Programming (1st ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
- Kai Hwang, Jack Dongarra, and Geoffrey C. Fox. 2011. Distributed and Cloud Computing: From Parallel Processing to the Internet of Things (1st ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
- Czech, Z. (2017). Introduction to Parallel Computing. Cambridge: Cambridge University Press.
- Blaise Barney, Lawrence Livermore National Laboratory, Message Passing Interface (MPI) – https://computing.llnl.gov/tutorials/mpi/#What
- MPI: A Message-Passing Interface Standard, Version 2.2 – https://www.mpi-forum.org/docs/mpi-2.2/mpi22-report.pdf
- MPI: A Message-Passing Interface Standard, Version 3.1 – https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf
- Wilson Greg, Kristian Hermansen. 2011. The Architecture of Open Source Applications, Volume II.
- https://www.rookiehpc.com/mpi/docs/index.php
- Beginning MPI (An Introduction in C)
- Cornell Virtual Workshop, MPI Point-to-Point Communication – https://cvw.cac.cornell.edu/MPIP2P
- MPI by Blaise Barney, Lawrence Livermore National Laboratory – https://computing.llnl.gov/tutorials/mpi/
- https://mpitutorial.com/tutorials/mpi-broadcast-and-collective-communication/
- https://mpi.deino.net
Suggested readings
- Peter Pacheco. 2011. An Introduction to Parallel Programming (1st ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
- Czech, Z. (2017). Introduction to Parallel Computing. Cambridge: Cambridge University Press.
- Maurice Herlihy and Nir Shavit. 2008. The Art of Multiprocessor Programming. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
- Andy Oram, Greg Wilson, 2008, Beautiful Code, Leading Programmers Explain How They Think, O’Reilly Media.
- Adam Petersen. Idiomatic Expressions in C.
- Mendel Cooper. 2010. Advanced Bash-Scripting Guide.
Acknowledgement
I wish to show my gratitude to Alessia Antelmi for reviewing this manuscript and helping to improve its quality by providing ideas and active support during the writing.