
Fundamentals of Accelerated Computing with CUDA Python

Lessons: 3
Duration: 8h
Language: English

NVIDIA digatus Preferred Partner
NVIDIA Deep Learning Institute

Prerequisites:

  • Basic Python competency, including familiarity with variable types, loops, conditional statements, functions, and array manipulations
  • NumPy competency, including the use of ndarrays and ufuncs
  • No previous knowledge of CUDA programming is required

Technologies: Numba, NumPy

Certificate: Upon successful completion of the assessment, participants will receive an NVIDIA DLI certificate to recognize their subject matter competency and support professional career growth.

Hardware Requirements: Desktop or laptop computer capable of running the latest version of Chrome or Firefox. Each participant will be provided with dedicated access to a fully configured, GPU-accelerated server in the cloud.

This workshop teaches you the fundamental tools and techniques for running GPU-accelerated Python applications using CUDA® GPUs and the Numba compiler.

Learning Objectives

At the conclusion of the workshop, you’ll understand the fundamental tools and techniques for GPU-accelerating Python applications with CUDA and Numba, and you’ll be able to:
  • GPU-accelerate NumPy ufuncs with a few lines of code (a minimal sketch follows this list).
  • Configure code parallelization using the CUDA thread hierarchy.
  • Write custom CUDA device kernels for maximum performance and flexibility.
  • Use memory coalescing and on-device shared memory to increase CUDA kernel bandwidth.
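
As a flavor of the first objective, the following is a minimal sketch, not taken from the course material, of GPU-accelerating a NumPy-style ufunc with Numba’s @vectorize decorator. It assumes a CUDA-capable GPU and a working Numba/CUDA installation; the function add_scaled and its inputs are illustrative only.

    import numpy as np
    from numba import vectorize

    # Element-wise function compiled for the GPU; it is called exactly like a NumPy ufunc.
    @vectorize(['float32(float32, float32)'], target='cuda')
    def add_scaled(x, y):
        return x + 2.0 * y

    a = np.arange(1_000_000, dtype=np.float32)
    b = np.ones_like(a)
    out = add_scaled(a, b)   # inputs are copied to the GPU, computed, and copied back
    print(out[:5])           # [2. 3. 4. 5. 6.]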

Learning Path

  • Meet the instructor.
  • Create an account at courses.nvidia.com/join

15 mins

  • Begin working with the Numba compiler and CUDA programming in Python.
  • Use Numba decorators to GPU-accelerate numerical Python functions.
  • Optimize host-to-device and device-to-host memory transfers (a transfer sketch follows this block).

120 mins

60 mins
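
To illustrate the memory-transfer bullet above, here is a small sketch, again assumed rather than drawn from the workshop, of how explicit numba.cuda.to_device and copy_to_host calls keep intermediate results on the GPU instead of bouncing them back to the host between calls. The square ufunc is a made-up example.

    import numpy as np
    from numba import vectorize, cuda

    @vectorize(['float32(float32)'], target='cuda')
    def square(x):
        return x * x

    x_host = np.arange(10, dtype=np.float32)

    x_dev = cuda.to_device(x_host)    # one explicit host-to-device copy
    y_dev = square(x_dev)             # device array in, device array out: data stays on the GPU
    z_dev = square(y_dev)             # a second pass with no host round-trip in between
    result = z_dev.copy_to_host()     # one device-to-host copy at the end
    print(result[:3])                 # [ 0.  1. 16.]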

  • Learn CUDA’s parallel thread hierarchy and how to extend parallel program possibilities.
  • Launch massively parallel custom CUDA kernels on the GPU.
  • Utilize CUDA atomic operations to avoid race conditions during parallel execution (a kernel sketch follows below).

15 mins
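
The sketch below, an illustration rather than a workshop exercise, shows what a custom CUDA device kernel with a one-dimensional thread hierarchy and an atomic update can look like in Numba; the name count_over_threshold and its parameters are hypothetical.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def count_over_threshold(data, threshold, counter):
        i = cuda.grid(1)                       # global index from block and thread position
        if i < data.shape[0] and data[i] > threshold:
            cuda.atomic.add(counter, 0, 1)     # race-free update shared by all threads

    data = np.random.rand(1_000_000).astype(np.float32)
    counter = np.zeros(1, dtype=np.int32)

    threads_per_block = 128
    blocks_per_grid = (data.shape[0] + threads_per_block - 1) // threads_per_block
    count_over_threshold[blocks_per_grid, threads_per_block](data, np.float32(0.5), counter)
    print(counter[0])                          # roughly 500,000

The bracketed launch configuration is the execution configuration Numba exposes in Python: enough 128-thread blocks to cover every element exactly once.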

  • Learn multidimensional grid creation and how to work in parallel on 2D matrices.
  • Leverage on-device shared memory to promote memory coalescing while reshaping 2D matrices (a tiled-transpose sketch follows this list).
  • Review key learnings and wrap up questions.
  • Complete the assessment to earn a certificate.
  • Take the workshop survey.
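
As a rough illustration of the shared-memory topic, and only a sketch under the assumption of 32 x 32 tiles and float32 data rather than the workshop’s actual exercise, the kernel below stages tiles of a matrix in on-device shared memory so that both the reads from the input and the writes to the transposed output stay coalesced.

    import numpy as np
    from numba import cuda, float32

    TILE = 32   # tile edge; must match the block shape used at launch

    @cuda.jit
    def transpose_tiled(a, out):
        tile = cuda.shared.array((TILE, TILE), dtype=float32)

        x, y = cuda.grid(2)                                       # position in the input matrix
        if x < a.shape[1] and y < a.shape[0]:
            tile[cuda.threadIdx.y, cuda.threadIdx.x] = a[y, x]    # coalesced read

        cuda.syncthreads()                                        # wait for the whole tile

        # Swap block coordinates so the write to the transposed matrix is also coalesced.
        tx = cuda.blockIdx.y * TILE + cuda.threadIdx.x
        ty = cuda.blockIdx.x * TILE + cuda.threadIdx.y
        if tx < out.shape[1] and ty < out.shape[0]:
            out[ty, tx] = tile[cuda.threadIdx.x, cuda.threadIdx.y]

    a = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
    out = np.zeros((64, 64), dtype=np.float32)
    transpose_tiled[(2, 2), (TILE, TILE)](a, out)
    print(np.allclose(out, a.T))   # True

In practice the shared tile is often padded to (TILE, TILE + 1) to avoid shared-memory bank conflicts; that refinement is left out here to keep the sketch short.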
Request further information and pricing

More Courses

You might also be interested in these courses

Course 2

Fundamentals of Deep Learning

In this workshop, you’ll learn how deep learning works through hands-on exercises in computer vision and natural language processing. You’ll train deep learning models from scratch, learning tools and tricks to achieve highly accurate results.

3 lessons - 8 hours
View Course

Course 3

Accelerating Data Engineering Pipelines

In this workshop, we’ll explore how GPUs can improve data pipelines and how using advanced data engineering tools and techniques can result in significant performance acceleration.

3 lessons - 8 hours
View Course