Parallel computing experiences with CUDA
M. Garland, S. Le Grand, J. Nickolls, J. Anderson, J. Hardwick, S. Morton, E. Phillips, Y. Zhang, and V. Volkov, IEEE Micro, 28, 13-27 (2008).
DOI: 10.1109/MM.2008.57
The CUDA programming model provides a straightforward means of describing inherently parallel computations, and NVIDIA's Tesla GPU architecture delivers high computational throughput on massively parallel problems. This article surveys experiences gained in applying CUDA to a diverse set of problems, and the parallel speedups attained, by executing key computations on the GPU, over sequential codes running on traditional CPU architectures.
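As a point of reference only (this sketch is not taken from the article), the CUDA programming model's style of describing parallel work can be illustrated with the classic SAXPY operation, where each GPU thread updates one element of y = a*x + y; the kernel name, array sizes, and launch configuration below are illustrative choices, not values from the paper.

#include <cstdio>
#include <cuda_runtime.h>

// Each thread computes one element of y = a*x + y.
// Its global index is derived from block and thread coordinates.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard against the final partial block
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;          // 1M elements (illustrative size)
    size_t bytes = n * sizeof(float);

    // Allocate and initialize host data.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Allocate device data and copy inputs to the GPU.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element, 256 threads per block.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

    // Copy the result back and spot-check one element.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);   // expect 3*1 + 2 = 5

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

The one-thread-per-element decomposition shown here is the basic pattern the CUDA model encourages: the kernel expresses the per-element work, and the runtime maps the grid of threads onto the GPU's parallel hardware.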