Journal of Computer Applications ›› 2017, Vol. 37 ›› Issue (12): 3586-3591. DOI: 10.11772/j.issn.1001-9081.2017.12.3586

• Computer Software Technology •

Learning-based performance monitoring and analysis for Spark in container environments

PI Aidi1,2, YU Jian1,2, ZHOU Xiaobo1,2

  1. Department of Computer Science and Technology, Tongji University, Shanghai 201804, China;
    2. Key Laboratory of Embedded System and Service Computing, Ministry of Education (Tongji University), Shanghai 201804, China
  • Received: 2017-05-16 Revised: 2017-07-14 Online: 2017-12-10 Published: 2017-12-18
  • Corresponding author: ZHOU Xiaobo
  • About the authors: PI Aidi (1993-), male, native of Shanghai, M. S. candidate; research interests: big data processing, cloud computing. YU Jian (1975-), male, native of Yiwu, Zhejiang, Ph. D., lecturer; research interests: Internet of Things, big data processing. ZHOU Xiaobo (1973-), male, native of Taizhou, Zhejiang, Ph. D., professor, doctoral supervisor; research interests: cloud computing, parallel processing of big data, distributed systems, data centers.

Abstract: The Spark computing framework has been adopted by a growing number of enterprises as a framework for big data analysis. However, because Spark is typically deployed in distributed and cloud environments, the overall system is complex, and monitoring the performance of the Spark framework and locating the jobs that cause performance degradation have long been difficult problems. To address this problem, a real-time monitoring and analysis method for Spark performance in distributed container environments was proposed and implemented. Firstly, the resource consumption information of jobs at runtime was acquired and integrated by instrumenting Spark with monitoring code and by monitoring the Application Programming Interface (API) files of Docker containers. Then, a Gaussian Mixture Model (GMM) was trained on the job history information of Spark. Finally, the trained model was used to classify the runtime resource consumption information of Spark jobs and to identify the jobs that caused performance degradation. The experimental results show that the proposed method detects 90.2% of abnormal jobs while introducing only 4.7% degradation to the performance of Spark jobs. The method reduces the effort of error checking and helps users find abnormal Spark jobs more quickly.

Key words: Spark, container, distributed monitoring system, Gaussian Mixture Model (GMM), machine learning
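The first step of the method collects per-container resource consumption by reading stat files that Docker exposes for each container. The abstract does not specify which files are read; the sketch below is a minimal illustration under the assumption that cgroup v1 accounting pseudo-files (`cpuacct.usage`, `memory.usage_in_bytes`) are the source, which is how Docker publishes runtime metrics on many Linux hosts. The paths and the `sample_container` helper are illustrative, not the paper's implementation.

```python
import os

# Assumed cgroup v1 locations of a Docker container's CPU-time and memory
# counters; actual paths vary by host configuration and cgroup version.
CPU_USAGE = "/sys/fs/cgroup/cpuacct/docker/{cid}/cpuacct.usage"
MEM_USAGE = "/sys/fs/cgroup/memory/docker/{cid}/memory.usage_in_bytes"

def read_counter(path):
    """Read a single integer counter from a cgroup stat file."""
    with open(path) as f:
        return int(f.read().strip())

def sample_container(cid):
    """Return (cumulative CPU time in ns, memory in bytes) for one
    container, or None if the container has already exited."""
    try:
        return (read_counter(CPU_USAGE.format(cid=cid)),
                read_counter(MEM_USAGE.format(cid=cid)))
    except FileNotFoundError:
        return None
```

A monitor would call `sample_container` periodically for every container hosting a Spark executor and attribute the deltas to the jobs running in that interval.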

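The classification step fits a GMM to historical resource-consumption vectors of normal jobs and flags runtime jobs whose likelihood under the model is unusually low. The sketch below is a minimal diagonal-covariance GMM fitted by EM in plain NumPy; the feature choice (e.g. CPU and memory per job) and the percentile threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fit_gmm(X, k=2, iters=50, seed=0):
    """Fit a k-component diagonal-covariance GMM to rows of X via EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)].astype(float)   # init means
    var = np.tile(X.var(axis=0) + 1e-6, (k, 1))             # init variances
    w = np.full(k, 1.0 / k)                                 # mixture weights
    for _ in range(iters):
        # E-step: log density of every point under every component
        log_p = (-0.5 * ((X[:, None, :] - mu) ** 2 / var
                         + np.log(2 * np.pi * var)).sum(axis=2)
                 + np.log(w))
        log_norm = np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        r = np.exp(log_p - log_norm)                        # responsibilities
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0) + 1e-12
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = np.maximum((r.T @ X ** 2) / nk[:, None] - mu ** 2, 1e-6)
    return w, mu, var

def log_likelihood(X, w, mu, var):
    """Per-sample log likelihood of X under the fitted mixture."""
    log_p = (-0.5 * ((X[:, None, :] - mu) ** 2 / var
                     + np.log(2 * np.pi * var)).sum(axis=2)
             + np.log(w))
    return np.logaddexp.reduce(log_p, axis=1)
```

Usage would follow the paper's outline: fit on historical metrics of normal jobs, pick a threshold such as the 1st percentile of the training log-likelihoods, and report any runtime job scoring below it as abnormal.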