Implementing a neural network interatomic model with performance portability for emerging exascale architectures
S Desai and ST Reeve and JF Belak, COMPUTER PHYSICS COMMUNICATIONS, 270, 108156 (2022).
DOI: 10.1016/j.cpc.2021.108156
The two main thrusts of computational science are increasingly accurate predictions and faster calculations; to this end, the zeitgeist in molecular dynamics (MD) simulations is the pursuit of machine-learned, data-driven interatomic models, e.g. neural network potentials, and novel hardware architectures, e.g. GPUs. Current implementations of neural network potentials are orders of magnitude slower than traditional interatomic models, and while looming exascale computing offers the ability to run large, accurate simulations with these models, achieving portable performance for MD on new and varied exascale hardware requires rethinking traditional algorithms, using novel data structures, and adopting library solutions. We re-implement a neural network interatomic model in CabanaMD, an MD proxy application built on libraries developed for performance portability. Our implementation shows significantly improved thread scaling for this complex kernel compared to a current LAMMPS implementation, across both strong and weak scaling. Our single-source solution enables simulations of up to 20 million atoms on a single CPU node and 4 million atoms, with improved performance, on a single GPU. We also explore parallelism and data-layout choices (using flexible data structures called AoSoAs) and their effect on performance, seeing up to ~50% and ~5% improvements in performance on a GPU by choosing the right level of parallelism and data layout, respectively.
Program summary
Program title: CabanaMD-NNP
CPC Library link to program files: https://doi.org/10.17632/x948kyy7jh.1
Developer's repository link: https://github.com/ECP-CoPA/CabanaMD, https://github.com/CompPhysVienna/n2p2
Licensing provisions: BSD-3-Clause, GPL-3.0
Programming language: C++
Nature of problem: Developing a performance-portable implementation of a neural network potential for exascale architectures.
Solution method: CabanaMD-NNP uses algorithms and data structures from the Kokkos [1] and Cabana [2] libraries to re-implement the computations in Behler-Parrinello neural network potentials [3,4] for performance portability across hardware. All molecular dynamics data is stored in performance-portable data structures, with atomic properties in arrays-of-structs-of-arrays (Cabana::AoSoA) and auxiliary values, including potential parameters, in arrays (Kokkos::View). All computation is likewise done in a performance-portable way: neural network propagation uses Kokkos parallel kernels (Kokkos::parallel_for), while calculations performed for each atom and neighbor, i.e. the evaluation of descriptors (symmetry functions) and forces, use Cabana extensions of Kokkos constructs (Cabana::neighbor_parallel_for). These choices give our implementation significant speedups on both CPUs and GPUs for large systems, while additionally allowing flexibility in parallelism and data layout for further optimization.
Additional comments including restrictions and unusual features: The previously developed n2p2 package [4] contains an interface to LAMMPS [5], to which we compare throughout the paper. We primarily extend the n2p2 library directly (https://github.com/CompPhysVienna/n2p2) and also add an interface to that extension within CabanaMD (https://github.com/ECP-CoPA/CabanaMD), to obtain the main results with an identical LAMMPS input file.