[1] CAO M,BHATTACHARYA S,TSO T. Ext4:the next generation of Ext2/3 filesystem[C/OL]//Proceedings of the 2007 Linux Storage & Filesystem Workshop. Berkeley:USENIX,2007[2020-01-10]. https://www.usenix.org/legacy/event/lsf07/tech/cao_m.pdf.
[2] BOYER E B,BROOMFIELD M C,PERROTTI T A. GlusterFS one storage server to rule them all[R]. Los Alamos,NM:Los Alamos National Lab,2012.
[3] GHEMAWAT S,GOBIOFF H,LEUNG S T. The Google file system[C]//Proceedings of the 19th ACM Symposium on Operating Systems Principles. New York:ACM,2003:29-43.
[4] HALCROW M A. eCryptfs:an enterprise-class encrypted filesystem for Linux[C]//Proceedings of the 2005 Linux Symposium. Ottawa:[s. n.],2005:201-218.
[5] KHASHAN O A,ZIN A M,SUNDARARAJAN E A. ImgFS:a transparent cryptography for stored images using a filesystem in userspace[J]. Frontiers of Information Technology and Electronic Engineering,2015,16(1):28-42.
[6] OUYANG X,RAJACHANDRASEKAR R,BESSERON X,et al. CRFS:a lightweight user-level filesystem for generic checkpoint/restart[C]//Proceedings of the 2011 International Conference on Parallel Processing. Piscataway:IEEE,2011:375-384.
[7] SZEREDI M. FUSE:filesystem in userspace[EB/OL].[2020-01-25]. http://fuse.sourceforge.net.
[8] REN K,GIBSON G. TABLEFS:enhancing metadata efficiency in the local file system[C]//Proceedings of the 2013 USENIX Annual Technical Conference. Berkeley:USENIX,2013:145-156.
[9] HOLUPIREK A,GRÜN C,SCHOLL M H. BaseX & DeepFS:joint storage for filesystem and database[C]//Proceedings of the 12th International Conference on Extending Database Technology:Advances in Database Technology. New York:ACM,2009:1108-1111.
[10] 薛矛,薛巍,舒继武,等. 一种云存储环境下的安全存储系统[J]. 计算机学报,2015,38(5):987-998. (XUE M,XUE W,SHU J W,et al. A secure storage system over cloud storage environment[J]. Chinese Journal of Computers,2015,38(5):987-998.)
[11] LEE G,SHIN S,SONG W,et al. Asynchronous I/O stack:a low-latency kernel I/O stack for ultra-low latency SSDs[C]//Proceedings of the 2019 USENIX Annual Technical Conference. Berkeley:USENIX,2019:603-616.
[12] YANG Z,HARRIS J R,WALKER B,et al. SPDK:a development kit to build high performance storage applications[C]//Proceedings of the 2017 IEEE International Conference on Cloud Computing Technology and Science. Piscataway:IEEE,2017:154-161.
[13] YANG Z,LIU C,ZHOU Y,et al. SPDK vhost-NVMe:accelerating I/Os in virtual machines on NVMe SSDs via user space vhost target[C]//Proceedings of the IEEE 8th International Symposium on Cloud and Service Computing. Piscataway:IEEE,2018:67-76.
[14] LIU J,ARPACI-DUSSEAU A C,ARPACI-DUSSEAU R H,et al. File systems as processes[C/OL]//Proceedings of the 11th USENIX Workshop on Hot Topics in Storage and File Systems. Berkeley:USENIX,2019[2020-01-11]. https://www.usenix.org/system/files/hotstorage19-paper-liu_0.pdf.
[15] ZHU Y,YU W,JIAO B,et al. Efficient user-level storage disaggregation for deep learning[C]//Proceedings of the 2019 IEEE International Conference on Cluster Computing. Piscataway:IEEE,2019:1-12.
[16] KALIA A,KAMINSKY M,ANDERSEN D G. Using RDMA efficiently for key-value services[C]//Proceedings of the 2014 ACM Conference on SIGCOMM. New York:ACM,2014:295-306.
[17] KAUR G,BALA M. RDMA over converged Ethernet:a review[J]. International Journal of Advances in Engineering and Technology,2013,6(4):1890-1894.
[18] CAULFIELD A M,CHUNG E S,PUTNAM A,et al. A cloud-scale acceleration architecture[C]//Proceedings of the 49th Annual IEEE/ACM International Symposium on Microarchitecture. Piscataway:IEEE,2016:1-13.
[19] DRAGOJEVIĆ A,NARAYANAN D,NIGHTINGALE E B,et al. No compromises:distributed transactions with consistency,availability,and performance[C]//Proceedings of the 25th Symposium on Operating Systems Principles. New York:ACM,2015:54-70.
[20] WEI X,SHI J,CHEN Y,et al. Fast in-memory transaction processing using RDMA and HTM[C]//Proceedings of the 25th Symposium on Operating Systems Principles. New York:ACM,2015:87-104.
[21] ZAMANIAN E,BINNIG C,HARRIS T,et al. The end of a myth:distributed transactions can scale[J]. Proceedings of the VLDB Endowment,2017,10(6):685-696.
[22] 吴昊,陈康,武永卫,等. 基于RDMA和NVM的大数据系统一致性协议研究[J]. 大数据,2019,5(4):89-99. (WU H,CHEN K,WU Y W,et al. Research on the consensus of big data systems based on RDMA and NVM[J]. Big Data Research,2019,5(4):89-99.)
[23] 陈游旻,陆游游,罗圣美,等. 基于RDMA的分布式存储系统研究综述[J]. 计算机研究与发展,2019,56(2):227-239. (CHEN Y M,LU Y Y,LUO S M,et al. Survey on RDMA-based distributed storage systems[J]. Journal of Computer Research and Development,2019,56(2):227-239.)
[24] 董勇,周恩强,卢宇彤,等. 基于天河2高速互连网络实现混合层次文件系统H2FS高速通信[J]. 计算机学报,2017,40(9):1961-1979. (DONG Y,ZHOU E Q,LU Y T,et al. The implementation of communicating operation in hybrid hierarchy file system H2FS with TH-Express 2[J]. Chinese Journal of Computers,2017,40(9):1961-1979.)
[25] DRAGOJEVIĆ A,NARAYANAN D,CASTRO M. RDMA reads:to use or not to use?[J]. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering,2017,40(1):3-14.
[26] O'NEIL P,CHENG E,GAWLICK D,et al. The log-structured merge-tree (LSM-tree)[J]. Acta Informatica,1996,33(4):351-385.
[27] GHEMAWAT S,DEAN J. LevelDB[EB/OL].[2020-01-25]. https://github.com/google/leveldb.
[28] REN K,ZHENG Q,PATIL S,et al. IndexFS:scaling file system metadata performance with stateless caching and bulk insertion[C]//Proceedings of the 2014 International Conference for High Performance Computing,Networking,Storage and Analysis. Piscataway:IEEE,2014:237-248.
[29] XIAO L,REN K,ZHENG Q,et al. ShardFS vs. IndexFS:replication vs. caching strategies for distributed metadata management in cloud storage systems[C]//Proceedings of the 6th ACM Symposium on Cloud Computing. New York:ACM,2015:236-249.
[30] ZHENG Q,REN K,GIBSON G,et al. DeltaFS:exascale file systems scale better without dedicated servers[C]//Proceedings of the 10th Parallel Data Storage Workshop. New York:ACM,2015:1-6.
[31] LI S,LU Y,SHU J,et al. LocoFS:a loosely-coupled metadata service for distributed file systems[C]//Proceedings of the 2017 International Conference for High Performance Computing,Networking,Storage and Analysis. New York:ACM,2017:No. 4.
[32] RPC:remote procedure call protocol specification version 2:RFC 1831[S]. Fremont,CA:Internet Engineering Task Force,1995.
[33] SOUMAGNE J,KIMPE D,ZOUNMEVO J,et al. Mercury:enabling remote procedure call for high-performance computing[C]//Proceedings of the 2013 IEEE International Conference on Cluster Computing. Piscataway:IEEE,2013:1-8.
[34] OpenFabrics Interfaces Working Group. libfabric:Open Fabric Interfaces (OFI)[EB/OL].[2020-01-25]. https://github.com/ofiwg/libfabric.
[35] BREITENFELD M S,FORTNER N,HENDERSON J,et al. DAOS for extreme-scale systems in scientific applications[EB/OL].[2020-01-25]. https://arxiv.org/pdf/1712.00423.pdf.
[36] LOFSTEAD J,JIMENEZ I,MALTZAHN C,et al. DAOS and friends:a proposal for an exascale storage system[C]//Proceedings of the 2016 International Conference for High Performance Computing,Networking,Storage and Analysis. Piscataway:IEEE,2016:585-596.
[37] BORTHAKUR D. RocksDB:a persistent key-value store for flash and RAM storage:a library that provides an embeddable,persistent key-value store for fast storage[EB/OL].[2020-01-25]. https://github.com/facebook/rocksdb.
[38] Storage Performance Development Kit. RocksDB:SPDK RocksDB mirror[EB/OL].[2020-01-25]. https://github.com/spdk/rocksdb.
[39] MORRONE C,LOEWE B,MCLARTY T. mdtest HPC Benchmark[EB/OL].[2020-01-25]. http://sourceforge.net/projects/mdtest.