Computational Physics with GPUs
Applications that require substantial computational resources today cannot avoid the use of heavily parallel machines. Embracing the opportunities of parallel computing, and in particular the possibilities provided by a new generation of massively parallel accelerator devices such as GPUs, Intel's Xeon Phi, or even FPGAs, enables applications and studies that are inaccessible to serial programs. In this lecture series, I will give an introduction to parallel programming with a focus on GPUs and related architectures. I will specifically discuss the use of CUDA (and, to a lesser degree, other approaches such as OpenCL and OpenACC) for programming GPUs, with applications in computational science in mind. The goal of this short course is to equip students with a general understanding of the design principles behind code that performs well on GPUs and related devices. Many of the examples will be drawn from the field of Monte Carlo simulations in statistical physics, with a focus on the simulation of systems exhibiting phase transitions and critical phenomena. While the examples discussed concern simulations of spin systems, many of the methods are more general, and moderate modifications allow them to be applied to other lattice and off-lattice problems, including polymers and particle systems. I will also discuss important algorithmic requirements for such highly parallel simulations, notably the challenge of generating many independent streams of random numbers, and outline a number of general design principles that allow parallel Monte Carlo codes to perform well.
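To give a flavor of the design principles the course addresses, the following sketch (my own minimal NumPy illustration, not course material) shows a checkerboard decomposition of the 2D Ising model: sites of the same sublattice color do not interact with each other, so each color can be updated simultaneously with one independent random number per site — exactly the structure a GPU kernel exploits with one thread per lattice site.

```python
import numpy as np

def checkerboard_metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model using a checkerboard
    (red-black) decomposition. Same-color sites have no mutual couplings,
    so each color's updates are independent and could run in parallel."""
    L = spins.shape[0]
    ii, jj = np.indices((L, L))
    parity = (ii + jj) % 2  # 0/1 marks the two sublattices
    for color in (0, 1):
        # sum of the four nearest neighbors (periodic boundaries)
        nbrs = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
                + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        # energy change for flipping each spin (J = 1)
        dE = 2.0 * spins * nbrs
        # one independent random number per site; Metropolis acceptance
        accept = rng.random((L, L)) < np.exp(-beta * dE)
        mask = (parity == color) & accept
        spins[mask] *= -1
    return spins

# usage: quench a small lattice into the ordered phase
rng = np.random.default_rng(42)
L, beta = 32, 1.0  # beta well above the critical coupling ~0.4407
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(200):
    checkerboard_metropolis_sweep(spins, beta, rng)
# at low temperature the energy per site drops well below zero,
# though domain walls may persist after a finite number of sweeps
```

In a CUDA implementation the same decomposition maps one thread to each site of the active color, with a per-thread random-number-generator state (e.g. via cuRAND) replacing the single host-side generator used here.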