Porting Lattice QCD simulation software to GPUs


Prof. Dr. Tilo Wettig
High Energy Physics
Department of Physics
University of Regensburg

Project Overview

The proposed project focuses on Lattice Quantum Chromodynamics (QCD). QCD is the theory of the strong interaction and one of the cornerstones of the standard model of particle physics. It describes the interaction of quarks and gluons and the formation of bound states such as the proton or the neutron. From a mathematical point of view, QCD is a rather complicated theory, and many of its phenomenologically relevant features, such as confinement and spontaneous chiral symmetry breaking, are nonperturbative phenomena. The lattice regularization of QCD is the only known systematic approach to computing such nonperturbative features and allows for the control of discretization and finite-volume errors. In the almost fifty years since the invention of Lattice QCD, new and improved simulation algorithms, as well as advances in hardware, have led to enormous progress. We can now compute many important quantities, such as the masses of the ground-state hadrons, decay constants, hadronic matrix elements, form factors, structure functions, and many more, with unprecedented precision. Lattice QCD input is indispensable for the interpretation of experiments at CERN, Fermilab, and other accelerator facilities. Moreover, in the search for new physics beyond the standard model, the errors are often dominated by hadronic contributions that can only be obtained from Lattice QCD.

Lattice QCD has been a very early adopter of high-performance computing (HPC) and has contributed significantly to the development of algorithms and of specialized hardware over the past five decades. It has traditionally been one of the major users of supercomputing facilities all over the world and has pushed the envelope of sustained performance on many different architectures. On the software side, there are a number of Lattice QCD frameworks that have been developed by the various collaborations in the US, the UK, Germany, Italy, France, Japan, China, and elsewhere. One of the major challenges for these frameworks is to keep up with hardware developments. Nowadays, many supercomputers at Tier-1 and Tier-2 centers include GPUs, which makes it mandatory to port the Lattice QCD simulation frameworks to GPUs.

In the proposed project we will focus on the Grid framework, which is widely used in the Lattice QCD community and under active development. Grid isolates the details of the hardware in a manageable number of files at the lowest layer of the software stack and thus reduces the burden of porting and optimizing Lattice QCD code for a new hardware architecture. Porting simulation code that was originally optimized for CPU architectures to GPUs is nontrivial for a number of reasons. Perhaps the most important is the need to carefully control the data movement between CPU(s) and GPU(s), which can otherwise become a performance bottleneck.
The main author of Grid, Peter Boyle, with whom we are in close contact, has already done much of the work to port Grid to Nvidia GPUs (using CUDA), Intel GPUs (using SYCL) and AMD GPUs (using HIP). However, he focuses his efforts on fermion formulations and algorithms that are particularly important for his line of research, whereas our group and the larger Lattice QCD community use other fermion formulations and algorithms for which much of the porting and optimization work for GPUs still needs to be done. This is the focus of the proposed project.