Journal of Computer Applications ›› 2011, Vol. 31 ›› Issue (03): 851-855. DOI: 10.3724/SP.J.1087.2011.00851

• Typical Applications •

Implementation of LU decomposition and Laplace algorithms on GPU

CHEN Ying1, LIN Jin-xian2, LV Tun3

  1. College of Mathematics and Computer Science, Fuzhou University, Fuzhou Fujian 350108, China
    2. College of Mathematics and Computer Science, Fuzhou University, Fuzhou Fujian 350108, China; Fujian Supercomputing Center, Fuzhou University, Fuzhou Fujian 350108, China
    3. Fujian Supercomputing Center, Fuzhou University, Fuzhou Fujian 350108, China; College of Biological Science and Technology, Fuzhou University, Fuzhou Fujian 350108, China
  • Received: 2010-09-06 Revised: 2010-10-27 Online: 2011-03-03 Published: 2011-03-01
  • Corresponding author: CHEN Ying
  • About the authors: CHEN Ying (1983-), male, born in Ningde, Fujian, M.S. candidate; research interest: parallel algorithms for molecular dynamics. LIN Jin-xian (1957-), male, born in Fuzhou, Fujian, associate professor; research interest: high-performance computing. LV Tun (1973-), male, born in Xiamen, Fujian, research fellow; research interest: computational biology.
  • Supported by:
    Key Special Research Project for Universities of Fujian Province (JK2009002); Youth Talent Fund of the Fujian Provincial Department of Science and Technology (2008F306010107)

Implementation of LU decomposition and Laplace algorithms on GPU

CHEN Ying1, LIN Jin-xian2, LV Tun3

  1. College of Mathematics and Computer Science, Fuzhou University, Fuzhou Fujian 350108, China
    2. College of Mathematics and Computer Science, Fuzhou University, Fuzhou Fujian 350108, China; Fujian Supercomputing Center, Fuzhou University, Fuzhou Fujian 350108, China
    3. Fujian Supercomputing Center, Fuzhou University, Fuzhou Fujian 350108, China; College of Biological Science and Technology, Fuzhou University, Fuzhou Fujian 350108, China
  • Received: 2010-09-06 Revised: 2010-10-27 Online: 2011-03-03 Published: 2011-03-01
  • Contact: CHEN Ying

Abstract (Chinese): With the substantial performance gains of the Graphics Processing Unit (GPU) and the advances in its programmability, many algorithms have been successfully ported to the GPU. LU decomposition and the Laplace algorithm are core routines in scientific computing, but their computational cost is often very large; therefore, a method of accelerating them on the GPU was proposed. Both algorithms were implemented with Nvidia's Compute Unified Device Architecture (CUDA) programming model. The computation was accelerated by partitioning the work between the CPU and the GPU, using the GPU's shared memory to speed up data access, eliminating branches in the GPU program, and processing the matrix in segments. The experimental results show that as the matrix size grows, the GPU-based algorithms achieve good speedup over their CPU-based counterparts.

Keywords (Chinese): Graphics Processing Unit (GPU), LU decomposition, Laplace algorithm, Compute Unified Device Architecture (CUDA), shared memory

Abstract: With the substantial performance improvement of the Graphics Processing Unit (GPU) and the development of its programmability, many algorithms have been successfully ported to the GPU. LU decomposition and Laplace algorithms are core routines in scientific computation, but their computational cost is usually very large; therefore, a GPU-accelerated method was proposed. The two algorithms were implemented with the Compute Unified Device Architecture (CUDA) programming model on Nvidia GPUs. The computation was accelerated by dividing the tasks between CPU and GPU, using shared memory on the GPU to increase the speed of data access, eliminating branches in the GPU program, and partitioning the matrix into segments. The experimental results show that as the matrix size increases, the GPU-based algorithms achieve good speedup over the CPU-based ones.
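For reference, the factorization being accelerated can be sketched as a minimal serial LU decomposition without pivoting (Doolittle form). This is not the authors' CUDA implementation, only an illustration of the triple loop whose two inner loops a GPU version would distribute across threads, typically staging the pivot row in shared memory to speed up data access as the abstract describes.

```c
#include <math.h>

/* In-place LU decomposition without pivoting (Doolittle form).
 * On return, the strict lower triangle of a holds the multipliers of L
 * (unit diagonal implied) and the upper triangle holds U.
 * a is an n-by-n matrix stored in row-major order. */
void lu_decompose(double *a, int n) {
    for (int k = 0; k < n; k++) {            /* pivot column */
        for (int i = k + 1; i < n; i++) {
            a[i * n + k] /= a[k * n + k];    /* multiplier l_ik */
            for (int j = k + 1; j < n; j++)  /* update trailing submatrix */
                a[i * n + j] -= a[i * n + k] * a[k * n + j];
        }
    }
}
```

In a CUDA port of this sketch, the updates for different rows i (and the column updates within each row) are independent once the pivot row k is fixed, which is what makes the inner loops natural candidates for per-thread parallelism.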

Key words: Graphics Processing Unit (GPU), LU decomposition, Laplace algorithm, Compute Unified Device Architecture (CUDA), shared memory

CLC Number: