Short Course: Introduction to Parallel Computing

Short Course Description

Parallel computing is a foundation of modern high-performance computing, enabling complex computations to be executed by dividing work across multiple processors. This two-day course provides a comprehensive introduction to parallel computing, covering both the shared-memory and distributed-memory paradigms. Participants will explore the fundamental principles of parallelism, learn programming models such as OpenMP and MPI, and gain hands-on experience in designing parallel applications. Whether you’re a beginner or looking to broaden your understanding of parallel computing, this course will equip you with the tools and techniques to harness the power of multi-core and distributed systems effectively.

Course Content Overview

Day 1: Foundations of Parallel Computing

  • Session 1: Introduction to Parallel Computing 
    • Overview of parallel computing concepts
    • Types of parallelism: task, data, and pipeline
    • Shared vs. distributed memory architectures
  • Session 2: Programming Models for Parallel Computing 
    • Overview of OpenMP (shared memory) and MPI (distributed memory) 
    • Key differences and use cases for each model 
  • Session 3: Basics of Shared Memory Programming with OpenMP 
    • Introduction to OpenMP directives and syntax 
    • Parallel regions, work sharing, and synchronization constructs 
  • Session 4: Basics of Distributed Memory Programming with MPI 
    • Introduction to MPI communication model 
    • Point-to-point communication (send/receive) basics 

Day 2: Advanced Topics in Parallel Computing

  • Session 1: Advanced OpenMP Techniques 
    • Tasking model, nested parallelism, and thread management 
    • Performance tuning with scheduling policies 
  • Session 2: Advanced MPI Techniques 
    • Collective communication operations (broadcast, scatter, gather) 
    • Non-blocking communication and process topologies 
  • Session 3: Best Practices in Parallel Computing
    • Designing scalable, efficient, and maintainable parallel programs
  • Session 4: Case Studies
    • Real-world applications of parallel computing in various domains   

Learning Outcomes

By the end of this course, participants will be able to:

  • Understand the fundamental concepts of parallel computing and its importance in high-performance systems.
  • Differentiate between shared memory (OpenMP) and distributed memory (MPI) programming models.
  • Write basic parallel applications using both OpenMP for shared memory systems and MPI for distributed memory systems.
  • Design scalable parallel solutions for real-world problems using best practices in parallel programming.