Revised cloud storage structure for light-weight data archiving in LHD
Affiliation:1. National Institute for Fusion Science, 322-6 Oroshi-cho, Toki, Gifu 509-5292, Japan;2. Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193, Japan
Abstract:The LHD data archiving system has newly adopted the GlusterFS distributed filesystem as a replacement for the present cloud storage software, “IznaStor/dSS”. Although the previous software offered many favorable features, such as hot plug-and-play node insertion, internal auto-replication of data files, and symmetric load balancing among all member nodes, it proved weak at recovering from the accidental malfunction of a storage node. Once a failure occurred, recovery usually took at least several days, sometimes more than a week, under heavy CPU load. In some cases the nodes fell into the so-called “split-brain” or “amnesia” condition and never recovered. Since the recovery time depends strongly on the capacity of the failed node, managing individual HDDs is preferable to managing large HDD arrays. In addition, the dynamic mutual awareness of data-location information can be eliminated if a static data-distribution method is applied instead. In this study, the two candidate middleware packages, “OpenStack/Swift” and “GlusterFS”, were tested with the real mass of LHD data for more than half a year, and GlusterFS was finally selected to replace IznaStor. It implements only a limited subset of cloud-storage functionality, but its simplified RAID10-like structure provides lighter-weight read/write capability. Since the LABCOM data system is implemented independently of the underlying storage structure, it is easy to unplug IznaStor and plug in the new GlusterFS. The effective I/O speed was also confirmed to be on the same level as the value estimated from the raw performance of the disk hardware. This result may inform the implementation of the ITER CODAC and remote archiving systems.
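The static data-distribution idea the abstract alludes to, whereby each client computes a file's location from its name alone so that no dynamic location metadata must be exchanged between nodes, can be sketched as follows. The node names, file name, and hashing scheme here are illustrative assumptions for a RAID10-like layout (distribution over two-way replica pairs), not the actual GlusterFS elastic-hash algorithm:

```python
import hashlib

def place(file_name: str, replica_pairs: list[list[str]]) -> list[str]:
    """Statically map a data file to one replica pair by hashing its name.

    Because the placement is a pure function of the file name, any client
    can recompute it independently; no central metadata service or mutual
    node awareness is required (hypothetical sketch, not GlusterFS's DHT).
    """
    digest = hashlib.md5(file_name.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(replica_pairs)
    return replica_pairs[index]

# Hypothetical layout: four storage nodes grouped into two replica pairs,
# i.e. two-way replication distributed over pairs (RAID10-like).
pairs = [["node1", "node2"], ["node3", "node4"]]
print(place("LHD_shot_120000.dat", pairs))
```

A failed node in this scheme affects only the files hashed to its own pair, which is consistent with the abstract's preference for per-HDD management over large arrays.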
Keywords:LHD  Data archiving  GlusterFS  OpenStack Swift  IznaStor  LABCOM system
This article is indexed in ScienceDirect and other databases.