Graphics processing units (GPUs), once specialized for graphics applications, have evolved into an integral part of high-performance computing (HPC). GPUs can accelerate suitable workloads by up to two orders of magnitude over general-purpose central processing units (CPUs). CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on GPUs. With CUDA, developers can harness the power of GPUs to speed up applications across a wide range of domains, from image processing to deep learning and from numerical analysis to molecular modeling. This workshop introduces the essential concepts of CUDA through examples, serving as an entry point for researchers who would like to benefit from GPU acceleration. Basic knowledge of C/C++ is preferred, as most of the code examples will be in C.
Things you’ll learn in this workshop
- Differences between GPU and CPU architectures
- Key CUDA concepts, including memory allocation, data transfer between CPU and GPU, and launching kernels on the GPU
- Writing basic CUDA programs
- Optimizing CUDA programs through efficient use of the GPU memory hierarchy
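To preview the key concepts above, here is a minimal vector-addition sketch in the style the workshop examples will follow. It shows GPU memory allocation (`cudaMalloc`), data transfer between CPU and GPU (`cudaMemcpy`), and a kernel launch; the kernel name `vecAdd` and the launch configuration are illustrative choices, and it assumes an NVIDIA GPU with the CUDA toolkit installed (compile with `nvcc`).

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against extra threads
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;              // one million elements
    size_t bytes = n * sizeof(float);

    // Host (CPU) buffers
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Memory allocation on the GPU
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);

    // Copy inputs from CPU to GPU
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch the kernel: enough 256-thread blocks to cover n elements
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    // Copy the result back to the CPU
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Don't worry if parts of this look unfamiliar; the workshop walks through each step (allocation, transfer, launch) in detail.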
What you'll need for this workshop
- Knowledge of C/C++, with the ability to read and modify code examples in C
- Familiarity with the concept of parallel processing
- A laptop with a text editor for writing code and an SSH client