Journal of Computer Applications ›› 2016, Vol. 36 ›› Issue (7): 1793-1796. DOI: 10.11772/j.issn.1001-9081.2016.07.1793

• Advanced Computing •

Parallel algorithm for massive point cloud simplification based on the slicing principle

GUAN Yaqin, ZHAO Xuesheng, WANG Pengfei, LI Dapeng

  1. College of Geoscience and Surveying Engineering, China University of Mining and Technology (Beijing), Beijing 100083, China
  • Received: 2015-12-10 Revised: 2016-04-10 Online: 2016-07-10 Published: 2016-07-14
  • Corresponding author: ZHAO Xuesheng
  • About the authors: GUAN Yaqin (1990-), male, born in Chibi, Hubei, M.S. candidate; his research interests include parallel computing and the rendering and simplification of massive 3D models. ZHAO Xuesheng (1967-), male, born in Heze, Shandong, Ph.D., professor; his research interests include 3D geographic information systems and digital Earth spatial modeling. WANG Pengfei (1991-), male, born in Dezhou, Shandong, M.S. candidate; his research interests include 3D modeling and the visualization of massive 3D models. LI Dapeng (1991-), male, born in Nanyang, Henan, M.S. candidate; his research interests include parallel computing and the rendering and simplification of massive 3D models.
  • Supported by:
    Specialized Research Fund for the Doctoral Program of Higher Education of China (20130023110001).

Parallel algorithm for massive point cloud simplification based on slicing principle

GUAN Yaqin, ZHAO Xuesheng, WANG Pengfei, LI Dapeng   

  1. College of Geoscience and Surveying Engineering, China University of Mining and Technology (Beijing), Beijing 100083, China
  • Received: 2015-12-10 Revised: 2016-04-10 Online: 2016-07-10 Published: 2016-07-14
  • Supported by:
    This work is partially supported by the Specialized Research Fund for the Doctoral Program of Higher Education of China (20130023110001).

Abstract: To address the drawbacks of low efficiency and the limited number of points that traditional point cloud simplification algorithms can handle, and drawing on the feature-preserving, low-complexity slicing principle from the field of rapid prototyping, a parallel slicing simplification algorithm suited to massive Light Detection And Ranging (LiDAR) point clouds at the scale of tens of millions of points was designed and implemented. The algorithm layers the point cloud model according to the slicing principle and sorts each layer by angle; exploiting the highly parallel performance of NVIDIA's Compute Unified Device Architecture (CUDA) and the programmable Graphics Processing Unit (GPU), it uses GPU multi-threading to simplify the point cloud of each slice efficiently in parallel, which improves the efficiency of the algorithm. Finally, comparative simplification experiments were conducted on three groups of point cloud models of different orders of magnitude. The experimental results show that, while preserving the model features and keeping the compression ratio unchanged, the proposed algorithm is one to two orders of magnitude more efficient than the traditional CPU-based serial slicing algorithm.
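
The layering and angle-sorting stage described in the abstract can be illustrated with a short host-side sketch. This is a minimal illustration only, written against assumptions not stated in the abstract: slices are taken along the z axis with a fixed thickness, and each slice is sorted by the polar angle of its points around the slice centroid; the names Point and sliceAndSort are purely illustrative.

    // Host-side sketch: partition the cloud into z slices of fixed thickness and
    // sort every slice by polar angle around the slice centroid (assumed layout).
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Point { float x, y, z; };

    std::vector<std::vector<Point>> sliceAndSort(const std::vector<Point>& cloud,
                                                 float zMin, float zMax,
                                                 float thickness) {
        int nSlices = std::max(1, (int)std::ceil((zMax - zMin) / thickness));
        std::vector<std::vector<Point>> slices(nSlices);
        for (const Point& p : cloud) {
            int k = std::min((int)((p.z - zMin) / thickness), nSlices - 1);
            if (k < 0) k = 0;
            slices[k].push_back(p);                      // assign the point to its layer
        }
        for (auto& slice : slices) {
            if (slice.empty()) continue;
            float cx = 0.0f, cy = 0.0f;                  // centroid of the slice in x-y
            for (const Point& p : slice) { cx += p.x; cy += p.y; }
            cx /= slice.size(); cy /= slice.size();
            std::sort(slice.begin(), slice.end(),        // order the ring by polar angle
                      [cx, cy](const Point& a, const Point& b) {
                          return std::atan2(a.y - cy, a.x - cx)
                               < std::atan2(b.y - cy, b.x - cx);
                      });
        }
        return slices;
    }

As described in the abstract, the angle-sorted slices are then handed to the GPU stage; a kernel sketch of that stage follows the English abstract below.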

Keywords: massive point cloud, simplification, slicing method, Compute Unified Device Architecture (CUDA), Graphics Processing Unit (GPU), parallel computing

Abstract: Concerning the problems of low efficiency and the limited number of points handled by traditional point cloud simplification algorithms, and drawing on the feature-preserving, computationally inexpensive slicing principle used in rapid prototyping, a parallel slicing algorithm was designed and implemented for Light Detection And Ranging (LiDAR) point clouds of more than ten million points. The point cloud model was layered according to the slicing principle and every layer was sorted by angle. By incorporating the parallel computing framework of the Compute Unified Device Architecture (CUDA) proposed by NVIDIA and exploiting the highly parallel performance of the programmable Graphics Processing Unit (GPU), the simplification of each slice was executed in parallel by GPU multi-threads, which improved the efficiency of the algorithm. Finally, comparative experiments were conducted with three groups of point cloud data of different orders of magnitude. The experimental results show that the efficiency of the proposed algorithm is one to two orders of magnitude higher than that of the traditional CPU-based serial slicing algorithm while keeping the model characteristics and the compression ratio unchanged.
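
The per-slice simplification that the abstract assigns to GPU multi-threads can be sketched as a CUDA kernel with one thread per point. The data layout (a flat angle-sorted array with per-point slice indices and CSR-style slice offsets) and the retention rule (the perpendicular distance from a point to the chord joining its two angular neighbours, compared against a tolerance) are assumptions for illustration; the paper's exact criterion and kernel configuration are not given in the abstract, and the names simplifySlices, sliceId, sliceStart and keep are hypothetical.

    // CUDA sketch: one thread per point; each thread tests its point against the
    // chord formed by its angular neighbours inside the same (pre-sorted) slice.
    #include <cuda_runtime.h>
    #include <math.h>

    __global__ void simplifySlices(const float2* xy,        // angle-sorted x-y coordinates
                                   const int* sliceId,      // slice index of every point
                                   const int* sliceStart,   // slice offsets, length nSlices+1
                                   int nPoints,
                                   float tol,               // chord-deviation tolerance
                                   unsigned char* keep)     // output flag, 1 = keep the point
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nPoints) return;

        int s     = sliceId[i];
        int begin = sliceStart[s];
        int n     = sliceStart[s + 1] - begin;
        if (n < 3) { keep[i] = 1; return; }          // nothing to thin in tiny slices

        // Previous and next neighbours in angular order, wrapping around the ring.
        int prev = begin + (i - begin + n - 1) % n;
        int next = begin + (i - begin + 1) % n;

        // Perpendicular distance from point i to the chord prev -> next.
        float2 a = xy[prev], b = xy[next], p = xy[i];
        float ex = b.x - a.x, ey = b.y - a.y;
        float px = p.x - a.x, py = p.y - a.y;
        float dist = fabsf(ex * py - ey * px) / (sqrtf(ex * ex + ey * ey) + 1e-12f);

        // A point that barely deviates from the local chord is marked redundant.
        keep[i] = (dist > tol) ? 1 : 0;
    }

A launch such as simplifySlices<<<(nPoints + 255) / 256, 256>>>(d_xy, d_sliceId, d_sliceStart, nPoints, tol, d_keep) assigns one thread per point, so all slices are thinned concurrently; the flagged points can then be compacted on the host or with a parallel stream compaction such as thrust::copy_if.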

Key words: massive point cloud, simplification, slicing algorithm, Compute Unified Device Architecture (CUDA), Graphics Processing Unit (GPU), parallel computing

CLC Number: