Exploring Complex Brain-Simulation Workloads on Multi-GPU Deployments
MA van der Vlag and G Smaragdos and Z Al-Ars and C Strydis, ACM TRANSACTIONS ON ARCHITECTURE AND CODE OPTIMIZATION, 16, 53 (2019).
DOI: 10.1145/3371235
In-silico brain simulations are the de facto tools computational neuroscientists use to understand large-scale and complex brain-function dynamics. Current brain simulators do not scale efficiently to large problem sizes (e.g., >100,000 neurons) when simulating biophysically complex neuron models. The goal of this work is to explore the use of true multi-GPU acceleration through NVIDIA's GPUDirect technology on computationally challenging brain models and to assess their scalability. The brain model used is a state-of-the-art, extended Hodgkin-Huxley, biophysically meaningful, three-compartmental model of the inferior-olivary nucleus. The Hodgkin-Huxley model is the most widely adopted conductance-based neuron representation, and thus the results from simulating this representative workload are relevant to many other brain experiments. Not only the actual network-simulation times but also the network-setup times were taken into account when designing and benchmarking the multi-GPU version, an aspect often ignored in similar previous work. Networks ranging from 65K to 2M cells, with 10 and 1,000 synapses per neuron, were simulated on 8, 16, 24, and 32 GPUs. Without loss of generality, simulations were run for 100 ms of biological time. Findings indicate that communication overheads do not dominate overall execution and that scaling up the network size remains computationally tractable. This scalable design demonstrates that large-network simulations of complex neural models are possible on multiple GPUs using GPUDirect.
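For context on the communication mechanism the abstract refers to, the following is a minimal CUDA sketch (not taken from the paper) of how GPUDirect peer-to-peer access between two GPUs can be enabled and used to exchange a neuron-state buffer directly between devices. The device indices, buffer names, and network size are illustrative assumptions only.

#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    /* Illustrative sketch only: two GPUs assumed present (devices 0 and 1). */
    const int src = 0, dst = 1;
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, src, dst);   /* can device 0 reach device 1 directly? */
    if (!canAccess) {
        printf("GPUDirect P2P not available between devices %d and %d\n", src, dst);
        return 1;
    }

    const size_t nNeurons = 65536;                   /* e.g., the smallest network size in the study */
    float *stateSrc = NULL, *stateDst = NULL;

    cudaSetDevice(src);
    cudaDeviceEnablePeerAccess(dst, 0);              /* device 0 may access device 1 memory */
    cudaMalloc(&stateSrc, nNeurons * sizeof(float));

    cudaSetDevice(dst);
    cudaDeviceEnablePeerAccess(src, 0);              /* device 1 may access device 0 memory */
    cudaMalloc(&stateDst, nNeurons * sizeof(float));

    /* Direct device-to-device copy of neuron state, without staging in host memory. */
    cudaMemcpyPeer(stateDst, dst, stateSrc, src, nNeurons * sizeof(float));
    cudaDeviceSynchronize();

    cudaFree(stateDst);
    cudaSetDevice(src);
    cudaFree(stateSrc);
    return 0;
}

Direct device-to-device transfers of this kind avoid staging through host memory, which is the property a GPUDirect-based multi-GPU design relies on to keep communication overheads from dominating the overall simulation time.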