General information
Course type | AMUPIE |
Module title | Parallel Processing In Distributed Systems |
Language | English |
Module lecturer | prof. UAM dr hab. Grzegorz Musiał |
Lecturer's email | gmusial@amu.edu.pl |
Lecturer position | professor |
Faculty | Faculty of Physics |
Semester | 2023/2024 (summer) |
Duration | 30 hours |
ECTS | 5 |
USOS code | 04-W-PPDS-45 |
Timetable
lecture 15h + computer lab 15h = 30h
Tuesday, from 5:15 pm until 6:45 pm, room 42 (Collegium Physicum, Linux Lab)
The date and time of the classes may be adjusted to the students' availability if they report a conflict via the lecturer's e-mail address given above.
Module aim (aims)
• describe basic schemes of modeling in science; explain the advantages and disadvantages, and the good or bad conditioning, of the basic methods used in computing programs
• write simple programs in the C/C++ programming language with parallel processing based on message passing, using the MPI library (see the minimal sketch after this list); parallelize the processing in the simplest applications; explain and properly arrange synchronization within a program with parallel processing based on the MPI library
• set up a standard computing cluster, including one of the Beowulf type; characterize the basic cluster parameters: throughput, load balancing, system scalability
• identify the parallelization capabilities within a given algorithm; determine the speedup and efficiency of this parallelization; describe the basic architectures of parallel systems, latencies, the importance of bandwidth, and Flynn's taxonomy
• introduce parallel processing into simple programs written in the C/C++ programming language using the MPI library; apply appropriate communication to parallelize the processing within these programs; use different modes of synchronization within a program with parallel processing based on the MPI library
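For orientation, here is a minimal sketch of such a program, assuming a standard MPI installation with the usual mpicc/mpirun tools; the payload value and message tag are arbitrary illustrative choices, not part of the course material.

/* Minimal MPI point-to-point sketch: rank 0 sends one integer to rank 1.
   Compile: mpicc p2p_demo.c -o p2p_demo
   Run:     mpirun -np 2 ./p2p_demo                                      */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, value = 42;           /* arbitrary payload           */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's number       */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes   */

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "run with at least 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        /* blocking send to rank 1, message tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocking receive of one integer from rank 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}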
Pre-requisites in terms of knowledge, skills and social competences (where relevant)
Basic knowledge of, and practical experience with, UNIX/Linux operating systems and C/C++ or Fortran programming
Syllabus
Week 1: Modeling, building of computing programs, sequential versus parallel processing
Week 2: From many sequential operating systems to one parallel system
Week 3: Parallelization of processing, some practice in C/C++ or Fortran, MPI library
Week 4: Clustering and parallel systems, HPC, throughput, load balancing, system scalability
Week 5: Parallelization of processing and its speedup, concurrency, latencies, grid computing
Week 6: Setting up an MPICH2 cluster under the Ubuntu operating system
Week 7: The message-passing model, basics of MPI message passing
Week 8: Parallel processing based on the MPI library - point-to-point communication
Week 9: Parallel processing based on the MPI library - collective communication (see the first sketch after this syllabus)
Week 10: Modes of synchronization of parallel processes, their latencies, timing of computations (see the timing sketch after this syllabus)
Week 11: Speedup, efficiency and scalability, some programs for a heterogeneous environment
Week 12: A more complex program for a homogeneous environment, advantages of modularity
Week 13: Completing and explaining students' own parallelized programs; verification of acquired knowledge and skills
Week 14: Elements of programming with threads in the OpenMP and Pthreads environments in computer systems with shared memory (see the OpenMP sketch after this syllabus)
Week 15: Assignment of grades, supplements on the most interesting parts of the laboratory
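A minimal sketch of the collective communication of week 9, assuming the standard MPI_Reduce call; summing the rank numbers is an arbitrary stand-in for a real partial result.

/* Collective communication sketch: every process contributes its rank
   number and MPI_Reduce sums the contributions on the root, rank 0.
   Compile: mpicc reduce_demo.c -o reduce_demo
   Run:     mpirun -np 4 ./reduce_demo                                   */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* all processes call MPI_Reduce; the summed result lands on rank 0 */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of all ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}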
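Weeks 10 and 11 quantify parallel performance: with T(1) the run time on one process and T(p) the run time on p processes, the speedup is S(p) = T(1)/T(p) and the efficiency is E(p) = S(p)/p. The timing sketch below assumes the standard MPI_Barrier and MPI_Wtime calls; the loop is a dummy workload standing in for a real computation.

/* Timing sketch for speedup measurements: synchronize all ranks with
   MPI_Barrier, then time a dummy workload with MPI_Wtime.
   Speedup S(p) = T(1)/T(p); efficiency E(p) = S(p)/p.                   */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    double t0, t1;
    volatile double x = 0.0;              /* dummy workload accumulator  */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Barrier(MPI_COMM_WORLD);          /* start all ranks together    */
    t0 = MPI_Wtime();

    /* stand-in for a real computation, split evenly over the ranks */
    for (long i = rank; i < 100000000L; i += size)
        x += 1.0 / (double)(i + 1);

    MPI_Barrier(MPI_COMM_WORLD);          /* wait for the slowest rank   */
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("p = %d, wall time T(p) = %.3f s\n", size, t1 - t0);

    MPI_Finalize();
    return 0;
}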
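Week 14 moves to shared memory; below is a minimal OpenMP counterpart of the MPI examples, assuming a compiler with OpenMP support (e.g. gcc with -fopenmp); the partial-sum loop is again an arbitrary illustration.

/* Shared-memory sketch for week 14: an OpenMP parallel for with a
   reduction, the thread-based counterpart of the MPI examples above.
   Compile: gcc -fopenmp omp_demo.c -o omp_demo                          */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const long n = 100000000L;
    double sum = 0.0;

    /* iterations are divided among the threads; each thread keeps a
       private partial sum that OpenMP combines at the end of the loop   */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < n; i++)
        sum += 1.0 / (double)(i + 1);

    printf("threads = %d, sum = %.6f\n", omp_get_max_threads(), sum);
    return 0;
}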
Reading list
- W. Gropp, E. Lusk, A. Skjellum, Using MPI: Portable Parallel Programming with the Message-Passing Interface, 2nd ed., MIT Press, Cambridge 1999
- M.J. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill, New York 2004
- D. Tansley, Linux & Unix Shell Programming, Addison-Wesley 2000