Course: Deep Learning with High-Performance Computing (HPC)

Course Description

The NCC team at UDG offers a comprehensive training program focused on Deep Learning (DL) techniques and their integration with High-Performance Computing (HPC). Developed through consultations with industry leaders and academic professionals, this course addresses the computational challenges and opportunities in deploying scalable AI solutions. The program emphasizes practical implementations and real-world applications, preparing participants to leverage AI advancements in research and industry.

Introduction to Deep Learning and HPC

Deep Learning, combined with HPC, represents a transformative approach in artificial intelligence, enabling systems to learn from large datasets and make accurate predictions. Training and deploying DL models, however, require significant computational power, particularly for tasks such as computer vision, natural language processing, and reinforcement learning. This course bridges the gap between theoretical concepts and practical implementation by leveraging HPC systems, enabling participants to design, train, and deploy complex models at scale. Whether preparing for careers in AI research, industry innovation, or enterprise-level AI deployment, participants gain essential knowledge and hands-on experience.

Real-world applications and industry relevance: as AI technologies reshape industries, this course prepares participants to tackle complex challenges in healthcare, agriculture, education, autonomous systems, finance, and the natural sciences. By combining theoretical knowledge with hands-on experimentation and project development, students gain the expertise needed to lead AI-driven digital transformation initiatives.

Course Content Overview (12 Modules)

  1. Introduction to AI, ML, and DL – Historical context, evolution, and fundamentals of artificial intelligence.
  2. Mathematical Foundations – Linear algebra, calculus, probability, and optimization techniques that provide the mathematical background for ML and DL learning algorithms.
  3. Introduction to HPC for DL – Fundamentals of high-performance computing and its role in scaling AI applications.
  4. Computer Vision and Convolutional Neural Networks (CNNs) – Principles of image processing and CNNs, with hands-on use of the Roboflow platform and YOLO models.
  5. Natural Language Processing (NLP) – Basics of NLP, transformer architectures (GPT, BERT), and advanced language models.
  6. Introduction to Graph Neural Networks (GNNs) – Fundamentals and applications in graph-based data analysis.
  7. Reinforcement Learning and Deep Reinforcement Learning – Concepts, algorithms, and real-world use cases.
  8. Generative AI – Generative AI models, fine-tuning, and transfer learning approaches.
  9. MLOps and Software Requirements – Building scalable ML pipelines, deployment strategies, and model monitoring.
  10. Model Optimization and Parallelization – Techniques for improving efficiency and scalability on HPC, including the move from Google Colab prototypes to HPC clusters (see the sketch after this list).
  11. Ethics and Responsible AI – Addressing bias, transparency, and accountability in AI systems.
  12. Capstone Project Development – End-to-end project covering design, implementation, and evaluation.
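
As a taste of Module 10, the sketch below shows one common way to move a training loop from a single Colab GPU to multi-GPU data parallelism with PyTorch's DistributedDataParallel. It is a minimal sketch rather than course material: the model, dataset, hyperparameters, and the file name train_ddp.py are illustrative placeholders, and it assumes a node with multiple NVIDIA GPUs and the torchrun launcher available.

```python
# Minimal data-parallel training sketch (Module 10): single GPU -> multi-GPU.
# Launch with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun sets RANK, LOCAL_RANK, WORLD_SIZE, and the rendezvous variables.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy dataset and model stand in for a real workload.
    data = TensorDataset(torch.randn(4096, 128), torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(data)          # shards the data across ranks
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    model = model.cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])  # synchronizes gradients across GPUs
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)                 # reshuffle shards every epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The same pattern scales from one multi-GPU node to several HPC nodes: each process owns one GPU and one shard of the data, and torchrun (or the cluster's job scheduler) handles process placement.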

Learning Outcomes

  • Develop and implement deep learning models using frameworks such as TensorFlow, PyTorch, and Ultralytics (see the sketch after this list).
  • Understand and leverage HPC infrastructure for large-scale AI applications.
  • Design and optimize neural networks for computer vision, NLP, and graph-based tasks.
  • Apply reinforcement learning techniques to dynamic and interactive problems.
  • Build and manage ML pipelines, addressing deployment challenges through MLOps practices.
  • Evaluate AI models for performance, scalability, and ethical considerations.
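
For a concrete, minimal illustration of the first outcome, the hedged sketch below loads a pretrained object detector with the Ultralytics package and runs inference on a single image. The checkpoint name yolov8n.pt and the image path sample.jpg are placeholders; any locally available image would do.

```python
# Minimal inference sketch with a pretrained Ultralytics YOLO detector.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")             # small pretrained YOLOv8 checkpoint
results = model.predict("sample.jpg")  # one Results object per input image

for result in results:
    for box in result.boxes:
        # Each detected box carries a class id, a confidence score,
        # and bounding-box coordinates in xyxy format.
        print(int(box.cls), float(box.conf), box.xyxy.tolist())
```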