Friday 11:15 a.m.–11:30 a.m.

Massively parallel implementation in Python of a pseudo-spectral DNS code for turbulent flows

Mikael Mortensen

Audience level:
Intermediate

Description

Direct Numerical Simulation (DNS) of the Navier-Stokes equations is a valuable research tool in fluid dynamics, but very few codes are publicly available and, due to the heavy number crunching, they are usually written in low-level languages. In this talk I will describe a pure Python DNS code that nearly matches the performance of pure C on thousands of processors and with billions of unknowns.

Abstract

Direct Numerical Simulation (DNS) is a term reserved for computer simulations of turbulent flows that are fully resolved in both time and space. DNS is usually conducted with numerical methods of such high quality that numerical dispersion and diffusion errors are negligible compared to their actual physical counterparts. To this end, DNS has historically been carried out with extremely accurate and efficient spectral methods, and in the fluid dynamics community DNS today enjoys the same status as carefully conducted experiments. DNS can provide detailed and highly reliable data that cannot be extracted from experiments, and in recent years such data have driven a number of discoveries regarding the very nature of turbulence.

Because of the extremely heavy number crunching implied by DNS, researchers aim for highly optimized implementations running on massively parallel computing platforms. The largest known DNS performed today use hundreds of billions of degrees of freedom. This normally demands tailored, hand-tuned codes in low-level languages like Fortran, C or C++, and spectral-method implementations have a reputation for being difficult. As a result, few DNS codes are openly available and easily accessible to the common fluid mechanics researcher.

In this talk I will describe a ~100 line pseudo-spectral DNS solver developed from scratch in Python, using nothing more than numpy and mpi4py, optionally accelerated with pyfftw and Cython. It is important to stress that the entire solver is written in Python; this is not simply a wrapper around a low-level number cruncher. The mesh is created and decomposed in Python, and the MPI communications are implemented using mpi4py. I will describe the two popular strategies, slab and pencil, for parallelizing the three-dimensional Fast Fourier Transform (FFT) required by the pseudo-spectral method. In fact, I will show the entire 5 lines of Python code required to perform a 3D FFT on a massively parallel computer.
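To give a flavour of those few lines, here is a minimal sketch of a slab-decomposed forward 3D FFT built on nothing but numpy and mpi4py. The array names, the mesh size N and the preallocated work buffers are illustrative assumptions for this abstract, not necessarily the exact code shown in the talk:

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    num_processes = comm.Get_size()

    N = 64                   # global mesh is N**3; N assumed divisible by num_processes
    Np = N // num_processes  # number of planes (one slab) owned by this rank
    Nf = N // 2 + 1          # length of the last axis after a real-to-complex FFT

    # Preallocated work arrays, reused on every call
    Uc_hatT = np.zeros((Np, N, Nf), dtype=complex)
    U_mpi = np.zeros((num_processes, Np, Np, Nf), dtype=complex)

    def fftn_mpi(u, fu):
        """Forward 3D FFT of the local slab u (Np, N, N) into fu (N, Np, Nf)."""
        Uc_hatT[:] = np.fft.rfft2(u, axes=(1, 2))   # 2D real FFT over the two local axes
        U_mpi[:] = np.rollaxis(Uc_hatT.reshape(Np, num_processes, Np, Nf), 1)
        comm.Alltoall([U_mpi, MPI.DOUBLE_COMPLEX],  # redistribute so axis 0 becomes local
                      [fu, MPI.DOUBLE_COMPLEX])
        fu[:] = np.fft.fft(fu, axis=0)              # transform the remaining axis
        return fu

    # Example: transform a random slab (run under, e.g., mpiexec -n 4)
    u = np.random.random((Np, N, N))
    fu = np.zeros((N, Np, Nf), dtype=complex)
    fftn_mpi(u, fu)

Each rank first transforms the two directions it holds locally; a single Alltoall then repartitions the data so the remaining direction becomes local and can be transformed in turn.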

Performance tests of the pseudo-spectral DNS solver have been conducted on a BlueGene/P supercomputer at the KAUST supercomputing laboratory, and weak scaling results are shown in the figure below. The two MPI decompositions, slab and pencil, are shown in blue and red, respectively, and a pure C++ solver using the FFTW library with slab decomposition is shown in green. Ideal scaling is shown as a black line. Good scaling is observed all the way up to the largest problem in this test, a substantial computational box of size 1024^3.

The pseudo-spectral solver is openly available on GitHub, where solvers are also available for magnetohydrodynamics (MHD), the Boussinesq equations, and MHD with Boussinesq.

[Figure: weak scaling on the BlueGene/P; slab (blue), pencil (red), C++/FFTW slab (green), ideal (black)]
