Course syllabus
Course-PM
DAT400 / DIT431 High-performance parallel programming lp1 HT23 (7.5 hp)
The course is offered by the Department of Computer Science and Engineering
Course meetings
All lectures, problem sessions, workshops and labs will be held on campus. Check TimeEdit for the rooms.
Staff contact details
- Miquel Pericàs <miquelp@chalmers.se> (Examiner + Lecturer).
- Sonia Rani Gupta <soniar@chalmers.se> (Teaching Assistant)
- Hari Abram <hariv@chalmers.se> (Teaching Assistant)
Student representatives contact details
Workshop links
Instructions for connecting remotely to lab machines: https://chalmers.topdesk.net/tas/public/ssp/content/detail/knowledgeitem?unid=304967f9ad004d3293b986a976e39833
Course purpose
This course covers parallel programming models, efficient programming methodologies, and performance tools, with the objective of developing highly efficient parallel programs.
Course Schedule
The schedule below is under construction. Check back on Aug 28th for a more complete schedule. The first lecture will be on Tuesday, Aug 29th, at 13:15.
Note that not all slots in TimeEdit are used.
Date & time | Topic | Staff
---|---|---
Aug 29th, 13:15-16h | | Miquel
Aug 31st, 13:15-16h | | Miquel
Sept 1st & Sept 5th, 8-12h | | Friday: Hari, Tuesday: Sonia
Sept 5th, 13:15-16h | | Miquel
Sept 7th, 13:15-16h | | Miquel
Sept 8th & Sept 12th, 8-12h | | Friday: Sonia, Tuesday: Hari
Sept 12th, 13:15-15h | | Hari
Sept 14th, 13:15-15h | | Sonia
Sept 15th & Sept 19th, 8-12h | | Friday: Sonia, Tuesday: Hari
Sept 19th, 13:15-16h | | Miquel
Sept 21st, 13:15-16h | | Miquel
Sept 22nd & Sept 26th, 8-12h | | Friday: Sonia, Tuesday: Hari
Sept 26th, 13:15-16h | | Miquel
Sept 28th, 13:15-16h | | Miquel
Sept 29th & Oct 3rd, 8-12h | | Friday: Sonia, Tuesday: Hari
Oct 3rd, 13:15-16h | | Miquel
Oct 5th, 13:15-16h | | Miquel
Oct 10th & Oct 13th, 8-12h | | Tuesday: Hari, Friday: Sonia
Oct 10th | | Miquel
Oct 12th | High-performance computing using Python | Nikela
Oct 17th & Oct 20th, 8-12h | | Tuesday: Hari, Friday: Sonia
Oct 17th, 13:15-15h | | Miquel
Oct 19th, 13:15-15h | | Miquel
Oct 27th, 14-18h | |
Course literature
The theory part (part #1) of the course loosely follows the book "Parallel Programming for Multicore and Cluster Systems" by Thomas Rauber and Gudula Rünger (2nd edition, 2013). The book can be accessed through the Chalmers Library: link to the coursebook.
The practical part (part #2), which covers various programming models and libraries, is based on several online resources that will be published at a later point.
Course design
The course consists of a set of lectures and laboratory sessions. The lectures start with an overview of parallel computer architectures and parallel programming models and paradigms. An important part of the discussion is mechanisms for synchronization and data exchange. Next, performance analysis of parallel programs is covered. The course proceeds with a discussion of tools and techniques for developing parallel programs in shared address spaces. This section covers popular multithreaded programming environments such as OpenMP. Next, the course discusses the development of parallel programs for distributed address space. The focus in this part is on the Message Passing Interface (MPI). Finally, we discuss programming approaches for executing applications on accelerators such as GPUs. This part introduces the CUDA (Compute Unified Device Architecture) programming environment.
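To give a first, purely illustrative taste of the shared-memory part, the sketch below parallelizes a serial summation loop with an OpenMP reduction. This is a minimal, hedged example and not actual lecture or lab material; the problem size and names are made up for illustration.

```cpp
#include <cstdio>
#include <vector>
#include <omp.h>

int main() {
    const int n = 1'000'000;
    std::vector<double> a(n, 1.0);

    double sum = 0.0;
    // Split the loop iterations over the available threads and
    // combine the per-thread partial sums with a reduction.
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i) {
        sum += a[i];
    }

    std::printf("sum = %.1f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}
```

Compile with, for example, `g++ -fopenmp sum_example.cpp`, and control the number of threads via the `OMP_NUM_THREADS` environment variable.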
The lectures are complemented with a set of laboratory sessions in which participants explore the topics introduced in the lectures. During the lab sessions, participants parallelize sample programs over a variety of parallel architectures and use performance analysis tools to detect and remove bottlenecks in the parallel implementations of these programs.
Changes made since the last occasion
- TBA
Learning objectives and syllabus
Learning objectives:
Knowledge and Understanding
- List the different types of parallel computer architectures, programming models and paradigms, as well as different schemes for synchronization and communication.
- List the typical steps to parallelize a sequential algorithm.
- List different methodologies for analyzing the performance of parallel programs.
Competence and skills
- Apply performance analysis methodologies to determine the bottlenecks in the execution of a parallel program
- Predict the upper limit to the performance of a parallel program
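
The last objective above can be made concrete with a standard textbook bound (a hedged illustration, not necessarily the formulation used in the course): Amdahl's law states that if a fraction $f$ of a program's work parallelizes perfectly over $p$ processors, the achievable speedup is

$$ S(p) = \frac{1}{(1 - f) + f/p} \le \frac{1}{1 - f}, $$

so, for example, with $f = 0.9$ the speedup can never exceed 10, regardless of how many processors are used.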
Judgment and approach
- Given a particular software, specify what performance bottlenecks are limiting the efficiency of parallel code and select appropriate strategies to overcome these bottlenecks
- Design resource-aware parallelization strategies based on a specific algorithm's structure and computing system organization
- Argue which performance analysis methods are important given a specific context
Link to the syllabus on Studieportalen.
Assessment
The exam (4.5c)
The final exam is in written form and accounts for 4.5 credits. More information about the final exam will be released later.
The labs (3.0c)
Successful completion of the labs accounts for 3.0 credits.
The final course grade is the same as the exam grade. To pass the course as a whole, both components must be passed.