Journal of Computer Applications ›› 2013, Vol. 33 ›› Issue (03): 730-733.DOI: 10.3724/SP.J.1087.2013.00730

• Information security •

HDFS optimization scheme based on GE coding

ZHU Yuanyuan*, WANG Xiaojing   

  1. Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu Sichuan 610041, China
  • Received:2012-09-17 Revised:2012-10-26 Online:2013-03-01 Published:2013-03-01
  • Contact: ZHU Yuanyuan

HDFS optimization scheme based on GE coding

ZHU Yuanyuan*, WANG Xiaojing

  1. Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu Sichuan 610041, China
  • Corresponding author: ZHU Yuanyuan
  • About the authors: ZHU Yuanyuan (1987-), female, born in Shangqiu, Henan; M.S. candidate; research interests: network coding, distributed storage. WANG Xiaojing (1953-), male, born in Chengdu, Sichuan; research fellow, Ph.D. supervisor, CCF member; research interests: coding and information security, symbolic computation, automated reasoning.
  • Supported by:

    National High Technology Research and Development Program (863 Program) of China (2008AAO1Z402).

Abstract: To address the data disaster recovery efficiency and small-file problems of the Hadoop Distributed File System (HDFS), this paper presented an improved solution based on erasure coding, which introduced a GE erasure coding module into HDFS. Instead of the multiple-replication strategy adopted by the original system, the module encoded HDFS files into a large number of slices and spread them evenly across the clusters of the storage system. The solution introduced the new concept of the slice: slices were classified and merged for storage in blocks, and a secondary index over slices was established to solve the small-file problem. In case of cluster failure, the original data could be recovered by decoding any 70% of the slices. The solution also introduced a dynamic replication strategy that created and deleted replicas on demand to keep the whole cluster well load-balanced and to settle hotspot issues. The experiments on simulated storage clusters show the feasibility and advantages of the proposed solution.
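The recovery property described above, rebuilding the original data from roughly any 70% of the slices, is characteristic of an (n, k) erasure code with k ≈ 0.7n. The following minimal Python sketch illustrates only that property; it is not the paper's GE code or an HDFS interface, and the names encode_slices, decode_slices and lagrange_eval are illustrative assumptions. Each k-byte group of the input is treated as the values of a polynomial of degree below k over GF(257), and every slice stores one extra evaluation of that polynomial, so any k slices reconstruct the group.

    P = 257  # prime modulus; every byte value 0..255 is an element of GF(257)

    def lagrange_eval(points, x):
        # Evaluate at x the unique polynomial of degree < len(points)
        # passing through the given (xi, yi) points, mod P.
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
        return total

    def encode_slices(data, k, n):
        # Each k-byte group defines a polynomial through (0, b0) .. (k-1, b_{k-1});
        # slice i stores the polynomial's value at x = k + i for every group.
        groups = [data[i:i + k].ljust(k, b'\0') for i in range(0, len(data), k)]
        slices = {i: [] for i in range(n)}
        for g in groups:
            pts = list(enumerate(g))
            for i in range(n):
                slices[i].append(lagrange_eval(pts, k + i))
        return slices, len(data)

    def decode_slices(available, k, length):
        # Rebuild the original bytes from any k surviving slices.
        chosen = sorted(available)[:k]
        out = bytearray()
        for gi in range(len(available[chosen[0]])):
            pts = [(k + i, available[i][gi]) for i in chosen]
            out.extend(lagrange_eval(pts, j) for j in range(k))
        return bytes(out[:length])

    if __name__ == "__main__":
        slices, size = encode_slices(b"HDFS small-file payload", k=7, n=10)  # 7 of 10 = 70%
        survivors = {i: s for i, s in slices.items() if i not in (1, 4, 8)}  # lose 3 slices
        assert decode_slices(survivors, k=7, length=size) == b"HDFS small-file payload"

In this toy form the storage overhead is n/k ≈ 1.43 times the original data, compared with 3 times for triple replication, which is the storage-cost advantage the abstract refers to.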

Key words: Hadoop Distributed File System (HDFS), erasure code, data disaster recovery, secondary index

Abstract: To address the data disaster recovery efficiency and small-file problems of the Hadoop Distributed File System (HDFS), a solution based on erasure codes was proposed. The solution introduced the encoding and decoding modules of a new erasure code (the GE code) to encode and slice the files in HDFS, generating a large number of slices that were distributed randomly and evenly across the cluster for storage, in place of the original multi-replica disaster recovery strategy of HDFS. The method introduced the new concept of the slice: slices were classified and merged for storage in blocks, and a secondary index over slices was then built to solve the small-file problem. The three-replica mechanism was abandoned; when nodes in the cluster failed, the original data were recovered by collecting roughly any 70% of the slices related to the failed files and decoding them. The results of the related cluster experiments show that the method greatly optimizes HDFS in terms of disaster recovery efficiency, the small-file problem, storage cost and security.
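The slice packing and two-level index mentioned above can be pictured with the following rough Python sketch. It is only an assumed illustration of the idea, not the paper's data structures or the HDFS block format; the names PackedBlock, SliceIndex and BLOCK_TARGET are invented for the example. Level 1 maps a file to the packed blocks that hold its slices; level 2 maps each slice to its offset and length inside a block, so many small files share one large block instead of each occupying its own.

    from dataclasses import dataclass, field
    from typing import Dict, List, Set, Tuple

    BLOCK_TARGET = 64 * 1024 * 1024  # keep packing slices until a block approaches this size

    @dataclass
    class PackedBlock:
        data: bytearray = field(default_factory=bytearray)
        # level-2 index: (file name, slice id) -> (offset, length) inside this block
        entries: Dict[Tuple[str, int], Tuple[int, int]] = field(default_factory=dict)

        def append(self, file: str, slice_id: int, payload: bytes) -> None:
            self.entries[(file, slice_id)] = (len(self.data), len(payload))
            self.data.extend(payload)

    class SliceIndex:
        # level-1 index: file name -> ids of the packed blocks holding its slices
        def __init__(self) -> None:
            self.blocks: List[PackedBlock] = [PackedBlock()]
            self.file_to_blocks: Dict[str, Set[int]] = {}

        def put(self, file: str, slice_id: int, payload: bytes) -> None:
            if len(self.blocks[-1].data) + len(payload) > BLOCK_TARGET:
                self.blocks.append(PackedBlock())          # start a new packed block
            self.blocks[-1].append(file, slice_id, payload)
            self.file_to_blocks.setdefault(file, set()).add(len(self.blocks) - 1)

        def get(self, file: str, slice_id: int) -> bytes:
            for bid in self.file_to_blocks.get(file, ()):  # level-1 lookup
                entry = self.blocks[bid].entries.get((file, slice_id))
                if entry is not None:                      # level-2 lookup
                    offset, length = entry
                    return bytes(self.blocks[bid].data[offset:offset + length])
            raise KeyError((file, slice_id))

    if __name__ == "__main__":
        index = SliceIndex()
        index.put("small_a.txt", 0, b"slice 0 of file a")
        index.put("small_b.txt", 0, b"slice 0 of file b")
        assert index.get("small_b.txt", 0) == b"slice 0 of file b"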

Key words: Hadoop Distributed File System (HDFS), erasure code, data disaster recovery, secondary index
