Dr. Martin Schreiber
Chair of Computer Architecture and Parallel Systems
Technische Universität München
The advancement of weather and climate simulations faces new challenges due to the stagnation in processor speed and the memory wall. Here, the so-called dynamical cores represent the numerical implementation of atmospheric models and simulate the behavior of the atmosphere over about one week for weather simulations, or up to thousands of years and beyond for climate simulations. The development of such dynamical cores is an interdisciplinary task with the following challenges: On the one hand, a certain quality of the forecast is required (a low error, in the most trivial case). Improving the quality of such forecasts typically involves an increase in resolution every couple of years; indeed, higher resolution is one of the main driving factors for the improved accuracy of weather forecasts.
This also leads to the requirement of spreading the additional workload across additional compute nodes, similar to weak scaling. On the other hand, the forecasting requirement itself poses hard wallclock restrictions: the simulation has to finish within a particular time frame, otherwise its results are less valuable or even obsolete. However, an increase in resolution automatically leads to a mathematically constrained reduction of the time step size, which in turn requires the computation of more time steps. Hence, the increase in the number of time steps also requires halving the workload per node, similar to strong scaling. Altogether, an increase in the resolution of dynamical cores results in a mixed scalability model, located between strong and weak scaling. Although not suffering from typical strong-scaling issues, the scalability limitation is clearly visible in studies of existing dynamical cores. The mixed scalability model already points out limitations; however, there is a stricter limitation that we face when taking the trends of computing architectures into account as well: over the last 50 years, dynamical cores could always rely on an increase in performance due to Moore's law, which was directly related to computational performance.
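The mathematically constrained reduction of the time step size mentioned above stems from the CFL stability condition for explicit time stepping. A minimal sketch, with $\Delta x$ the grid spacing, $u$ a characteristic advection velocity, and $C$ a scheme-dependent Courant number (all symbols introduced here for illustration, not taken from the proposal):

```latex
\Delta t \;\le\; C \, \frac{\Delta x}{|u|}
```

Halving $\Delta x$ thus halves the admissible $\Delta t$ and doubles the number of time steps, while simultaneously multiplying the number of grid cells; this combination is what places the problem between strong and weak scaling.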
Indeed, 20 years ago, the main bottleneck of computer architectures was the computation itself rather than the data movement. Due to hardware changes, data movement has become the main limiting factor, and this trend continues. Therefore, dynamical cores for weather forecasts can no longer benefit from increases in compute performance: limited by data movement, they hit the memory wall. The increase in performance is thus no longer given by the increase in computing performance, but is limited by the slowly increasing memory bandwidth.
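The memory wall described above is commonly quantified with the roofline model: attainable performance is the minimum of peak compute throughput and memory bandwidth times arithmetic intensity. The following is a minimal sketch with illustrative, assumed hardware numbers (not measurements of any specific system):

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Roofline model: performance is capped either by the compute peak
    or by memory traffic (bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Assumed, illustrative node characteristics:
peak = 3000.0  # GFLOP/s peak compute throughput
bw = 200.0     # GB/s memory bandwidth

# Stencil-like dynamical-core kernels typically have low arithmetic
# intensity (order 0.5 FLOP/byte), so they sit on the bandwidth roof:
print(attainable_gflops(peak, bw, 0.5))   # memory-bound: 100.0 GFLOP/s
print(attainable_gflops(peak, bw, 50.0))  # compute-bound: 3000.0 GFLOP/s
```

With these assumed numbers, the memory-bound kernel reaches only about 3% of the compute peak, which is why faster processors alone no longer speed up such codes.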
This proposal takes up these challenges from the perspectives of high-performance computing and applied mathematics. It researches ways to mitigate the expected lack of per-year increases in processor performance by generating additional degrees of freedom in the time dimension through the development of new ways of time integration, and it assesses these novel approaches in detail on the current LRZ supercomputer.