Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
MapReduce is regarded as an adequate programming model for large-scale data-intensive applications. The Hadoop framework is a well-known MapReduce implementation that runs MapReduce tasks on a cluster system. G-Hadoop is an extension of the Hadoop MapReduce framework that allows MapReduce tasks to run on multiple clusters. However, G-Hadoop simply reuses the user authentication and job submission mechanism of Hadoop, which is designed for a single cluster. This work proposes a new security model for G-Hadoop. The security model builds on several security solutions, such as public key cryptography and the SSL protocol, and is designed specifically for distributed environments. This security framework simplifies the user authentication and job submission process of the current G-Hadoop implementation with a single-sign-on approach. In addition, the designed security framework provides a number of different security mechanisms to protect the G-Hadoop system from traditional attacks.
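As a minimal illustration of the public-key building block that such a single-sign-on design rests on, the following sketch signs a job submission once with the user's private key so that any participating cluster can verify it against the corresponding public key. It uses Python's `cryptography` package; the message fields and flow are illustrative assumptions, not G-Hadoop's actual protocol.

```python
# Minimal sketch: sign a job submission once; any cluster verifies it.
# Requires the `cryptography` package; field names are illustrative.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# User side: generate a key pair once (the public key is distributed
# to all participating clusters out of band, e.g. via a CA).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

job_submission = b"user=alice;job=wordcount;clusters=c1,c2;ts=1700000000"

# Sign once at submission time (the "single sign-on" step).
signature = private_key.sign(
    job_submission,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Any cluster verifies without a per-cluster login; verify() raises
# InvalidSignature if the submission was tampered with.
public_key.verify(
    signature,
    job_submission,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("job submission verified")
```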

2.
Executing heterogeneous workloads with different priorities, resource demands and performance objectives is one of the key operations for today's data centers seeking to increase resource as well as energy efficiency. To meet the performance objectives of diverse workloads, schedulers rely on evictions, even though evictions waste resources through the lost executions of evicted tasks. It is not straightforward to design priority schedulers that capture the key aspects of workloads and systems while striking a balance between resource (in)efficiency and application performance. To explore the large design space of such schedulers, we propose a trace-driven cluster management framework that models a comprehensive set of system configurations and general priority-based scheduling policies. In particular, we focus on the impact of task evictions on resource inefficiency and on the task response times of multiple priority classes, driven by a Google production cluster trace. Moreover, we propose a system design as a use case that exploits workload heterogeneity and introduces workload-awareness into the system configuration and task assignment.
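To make the eviction tradeoff concrete, here is a minimal sketch, not the paper's framework, of a priority scheduler in which an arriving high-priority task may evict a running low-priority one and throw away its progress, which is exactly the wasted work the study quantifies. All class and field names are illustrative.

```python
# Minimal sketch of priority scheduling with eviction: a new task may
# evict a lower-priority running task, whose progress is lost (wasted work).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int          # higher number = more important
    demand: float          # resource units required
    done_work: float = 0.0 # progress lost if the task is evicted

class Machine:
    def __init__(self, capacity):
        self.capacity = capacity
        self.running = []          # list[Task]
        self.wasted_work = 0.0     # accumulated lost progress

    def free(self):
        return self.capacity - sum(t.demand for t in self.running)

    def assign(self, task):
        # Evict lowest-priority tasks until the new task fits.
        victims = sorted(self.running, key=lambda t: t.priority)
        while self.free() < task.demand and victims:
            v = victims.pop(0)
            if v.priority >= task.priority:
                return False       # cannot evict equal/higher priority
            self.running.remove(v)
            self.wasted_work += v.done_work  # lost execution
        if self.free() >= task.demand:
            self.running.append(task)
            return True
        return False

m = Machine(capacity=10)
m.assign(Task("batch", priority=0, demand=8))
m.running[0].done_work = 5.0
m.assign(Task("latency-sensitive", priority=2, demand=6))
print(m.wasted_work)  # 5.0 units of work thrown away by the eviction
```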

3.
荀亚玲, 张继福, 秦啸. 《软件学报》, 2015, 26(8): 2056-2073
MapReduce is an effective programming model for large-scale data-intensive applications. It is simple to program, easy to scale, and fault-tolerant, and has been widely and successfully applied in parallel and distributed computing. Because MapReduce scales computation out to large clusters of machines, the sensible placement of the data being processed becomes one of the key factors affecting the performance of a MapReduce cluster, including its energy consumption, resource utilization, communication and I/O costs, response time, reliability, and throughput. This survey first analyzes the default data placement strategy of Hadoop, the canonical implementation of the MapReduce programming model, and then discusses the key issues in designing data placement strategies for the MapReduce framework and the criteria for evaluating them. It then reviews and analyzes current research on optimizing data placement strategies in MapReduce clusters. Finally, it summarizes directions for future work on data placement strategies in MapReduce cluster environments.
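For reference, the Hadoop default that such surveys take as a baseline can be sketched as follows: HDFS places the first replica on the writer's node, the second on a different rack, and the third on another node of that second rack. The code is a simplified sketch of this well-known policy, not Hadoop's actual implementation.

```python
# Simplified sketch of HDFS's default 3-replica placement policy:
# replica 1 on the writer's node, replica 2 on a remote rack,
# replica 3 on a different node of that same remote rack.
import random

def default_placement(writer, nodes_by_rack):
    """nodes_by_rack: dict rack -> list of node names."""
    local_rack = next(r for r, ns in nodes_by_rack.items() if writer in ns)
    remote_racks = [r for r in nodes_by_rack if r != local_rack]
    rack2 = random.choice(remote_racks)
    second, third = random.sample(nodes_by_rack[rack2], 2)
    return [writer, second, third]

cluster = {"rack1": ["n1", "n2", "n3"], "rack2": ["n4", "n5", "n6"]}
print(default_placement("n1", cluster))  # e.g. ['n1', 'n5', 'n4']
```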

4.
MapReduce is a popular framework for large-scale data analysis. As data access is critical for MapReduce's performance, some recent work has applied different storage models, such as column-store or PAX-store, to MapReduce platforms. However, the data access patterns of different queries vary widely, and no single storage model can achieve optimal performance alone. In this paper, we study how MapReduce can benefit from the presence of two different column-oriented models, pure column-store and PAX-store. We propose a hybrid storage system called hybrid column-store (HC-store). Based on the characteristics of the incoming MapReduce tasks, our storage model can determine whether to access the underlying pure column-store or PAX-store. We studied the properties of the different storage models and created a cost model to decide the data access strategy at runtime. We have implemented HC-store on top of Hadoop. Our experimental results show that HC-store outperforms both PAX-store and pure column-store, especially when confronted with diverse workloads.
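A runtime, cost-based choice between two layouts can be sketched as below: estimate the bytes and seeks each store would need for a query and pick the cheaper one. The formulas and constants are illustrative stand-ins, not HC-store's actual cost model.

```python
# Illustrative sketch of runtime storage selection: estimate the I/O cost
# of serving a query from each layout and pick the cheaper one.
# Constants and formulas are stand-ins, not HC-store's actual cost model.

def cost_pure_column(n_rows, n_cols_read, col_width=8,
                     rows_per_page=1000, seek_cost=50.0):
    # Reads only the needed columns, but pays one seek per page per column.
    n_pages = -(-n_rows // rows_per_page)  # ceiling division
    return n_rows * n_cols_read * col_width + seek_cost * n_pages * n_cols_read

def cost_pax(n_rows, n_cols_total, col_width=8,
             rows_per_page=1000, seek_cost=50.0):
    # Reads whole pages containing all columns, one seek per page.
    n_pages = -(-n_rows // rows_per_page)
    return n_pages * rows_per_page * n_cols_total * col_width + seek_cost * n_pages

def choose_store(n_rows, n_cols_total, n_cols_read):
    cc = cost_pure_column(n_rows, n_cols_read)
    cp = cost_pax(n_rows, n_cols_total)
    return "pure column-store" if cc < cp else "PAX-store"

print(choose_store(1_000_000, 50, 1))   # narrow scan -> pure column-store
print(choose_store(1_000_000, 50, 50))  # full-row scan -> PAX-store
```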

5.
MapReduce has emerged as a popular programming model in the field of data-intensive computing, owing to its simple design, which provides ease of use for programmers, and to framework implementations such as Hadoop, which have been adopted by large business and technology companies. In this paper we improve the Hadoop MapReduce framework by introducing algorithms suitable for heterogeneous environments, with the goal of performing data-intensive computing efficiently in such environments. The need for these adaptations derives from the fact that, following the framework design proposed by Google, Hadoop is optimized to run on large homogeneous clusters. Hence we propose MRA++, a new MapReduce framework design that considers the heterogeneity of nodes during data distribution, task scheduling and job control. MRA++ establishes a training task to gather information prior to data distribution. We show that the delay this introduces in the setup phase is offset by the effectiveness of the mechanisms and algorithms, which achieve performance gains of more than 70% in 10 Mbps networks.
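The core idea, measuring each node first and then distributing data in proportion to measured capacity, can be sketched as follows; the training-task measurement is represented by a dict of observed throughputs, and all names are illustrative rather than MRA++'s API.

```python
# Sketch of heterogeneity-aware data distribution: a small training task
# measures each node's throughput, and input splits are then sized in
# proportion to capacity. Names are illustrative, not MRA++'s actual API.

def distribute(total_blocks, measured_throughput):
    """measured_throughput: node -> MB/s observed by the training task."""
    total = sum(measured_throughput.values())
    shares = {node: int(round(total_blocks * t / total))
              for node, t in measured_throughput.items()}
    # Fix rounding drift so every block is assigned exactly once.
    drift = total_blocks - sum(shares.values())
    fastest = max(measured_throughput, key=measured_throughput.get)
    shares[fastest] += drift
    return shares

# A homogeneous split would give each node 250 blocks; with measured
# heterogeneity the fast node gets proportionally more work.
print(distribute(1000, {"n1": 120.0, "n2": 60.0, "n3": 60.0, "n4": 30.0}))
# {'n1': 445, 'n2': 222, 'n3': 222, 'n4': 111}
```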

6.
Hadoop has emerged as a successful framework for large-scale data-intensive computing applications. However, there is no research on performance models for the Hadoop Distributed File System (HDFS). Owing to the complexity of HDFS and the difficulty of modeling the many factors that affect its performance, establishing HDFS performance models directly from these impact factors is very complicated. In this paper, the relationship between file size and HDFS Write/Read (W/R for short) throughput, i.e., the average flow rate of an HDFS W/R operation, is studied in order to build HDFS performance models from a systemic view. Based on measured data from specially designed experiments (in which HDFS W/R operations can be viewed as single-input single-output systems), a system identification-based approach is applied to construct performance models for HDFS W/R operations under different conditions. Furthermore, dynamic characteristics metrics for HDFS performance are defined, and based on the identified performance models and these metrics, the dynamic characteristics of HDFS W/R operations, such as steady state and overshoot, are studied, and the relationships between impact factors and dynamic characteristics are analyzed. These results provide effective guidance for the design and configuration of HDFS and Hadoop-based applications.
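Viewing a write as a single-input single-output system amounts to fitting a model from the input (file size) to the output (observed throughput). As a much-simplified stand-in for the paper's system-identification procedure, the sketch below fits a low-order model to measured (size, throughput) pairs with NumPy; the sample data is invented.

```python
# Much-simplified stand-in for system identification: fit a low-order
# model mapping file size to HDFS write throughput from measured samples.
# The sample data below is invented for illustration.
import numpy as np

size_mb = np.array([1, 8, 64, 256, 512, 1024, 2048], dtype=float)
write_mbps = np.array([4.1, 21.0, 55.3, 71.2, 74.8, 76.0, 76.4])

# Fit throughput as a quadratic in log(file size): small files are
# dominated by per-operation overhead, large files approach a plateau.
coeffs = np.polyfit(np.log(size_mb), write_mbps, deg=2)
model = np.poly1d(coeffs)

predicted = model(np.log(128.0))
print(f"predicted write throughput for a 128 MB file: {predicted:.1f} MB/s")
```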

7.
With the development of cloud computing, many MapReduce runtime systems have been developed, such as Hadoop, Phoenix, and Twister. Intuitively, Hadoop offers strong scalability and stability and suits large-scale offline applications; Phoenix runs fast and suits data-intensive tasks; and Twister is a lightweight iterative system well suited to iterative applications. Different applications perform differently on different MapReduce runtime systems. By benchmarking various applications on these runtime systems, this paper presents an experimental comparison and performance analysis, providing a basis for choosing an appropriate parallel programming model for big data processing.

8.
The core business of many companies depends on the timely analysis of large quantities of new data. MapReduce clusters that routinely process petabytes of data represent a new entity in the evolving landscape of clouds and data centers. During the lifetime of a data center, old hardware eventually needs to be replaced by new hardware, and the hardware selection process should be driven by the performance objectives of the existing production workloads. In this work, we present a general framework, called Ariel, that automates system administrators' efforts to evaluate different hardware choices and predict the completion times of MapReduce applications upon their migration to a Hadoop cluster based on the new hardware. The proposed framework consists of two key components: (i) a set of microbenchmarks to profile the MapReduce processing pipeline on a given platform, and (ii) a regression-based model that establishes a performance relationship between the source and target platforms. Benchmarking and model derivation can be done using a small test cluster based on the new hardware, yet the resulting model can be used to predict job completion times on a large Hadoop cluster and to size that cluster to achieve desirable service level objectives (SLOs). We validate the effectiveness of the proposed approach using a set of twelve realistic MapReduce applications and three different hardware platforms. The evaluation study justifies our design choices and shows that the derived model accurately predicts the performance of the test applications: the predicted completion times of eleven of the twelve applications are within 10% of the measured completion times on the target platforms.
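The regression component can be sketched as follows: run the same microbenchmarks on the source and target hardware, fit a linear relationship, and scale a job's measured profile through it. This is a minimal NumPy sketch; the phase names and all numbers are invented.

```python
# Minimal sketch of regression-based cross-platform prediction: fit a
# linear map from microbenchmark durations on the source platform to
# those on the target, then scale a production job's profile through it.
# Phase names and all numbers are invented for illustration.
import numpy as np

# Microbenchmark durations (seconds) for the same runs on both platforms.
source = np.array([12.0, 30.5, 55.0, 80.2, 121.0])
target = np.array([8.1, 19.8, 35.9, 51.5, 78.0])

# Least-squares fit: target ~= a * source + b
a, b = np.polyfit(source, target, deg=1)

# Predict per-phase times of a production job on the new hardware.
job_profile = {"map": 640.0, "shuffle": 210.0, "reduce": 330.0}
predicted = {phase: a * t + b for phase, t in job_profile.items()}
print(predicted, "total:", round(sum(predicted.values()), 1))
```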

9.
The rapid and extensive spread of information through the web has fueled the diffusion of a huge amount of unstructured natural language textual resources, and great interest has arisen in the last decade in discovering, accessing and sharing this vast source of knowledge. For this reason, processing very large data volumes in a reasonable time frame is becoming a major challenge and a crucial requirement in many commercial and research fields. Distributed systems, computer clusters and parallel computing paradigms have been increasingly applied in recent years, since they bring significant improvements in computing performance in data-intensive contexts such as Big Data mining and analysis. Natural Language Processing, and particularly the tasks of text annotation and key feature extraction, is an application area with high computational requirements; these tasks can therefore benefit significantly from parallel architectures. This paper presents a distributed framework for crawling web documents and running Natural Language Processing tasks in parallel. The system is based on the Apache Hadoop ecosystem and its parallel programming paradigm, MapReduce. Specifically, we implemented a MapReduce adaptation of a GATE application (GATE is a widely used open-source tool for text engineering and NLP). We validate the solution by using it to extract keywords and keyphrases from web documents in a multi-node Hadoop cluster, and we evaluate performance scalability against a real corpus of web pages and documents.
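The MapReduce shape of such a text task can be shown with a self-contained simulation: map emits (term, 1) pairs per document, a shuffle groups pairs by key, and reduce sums the counts, which is the same structure the GATE adaptation distributes over Hadoop. The toy keyword filter below is an illustrative stand-in, not the paper's NLP pipeline.

```python
# Self-contained simulation of the MapReduce structure behind keyword
# extraction: map emits (term, 1) per document, shuffle groups by key,
# reduce sums counts. Illustrative only, not the paper's GATE pipeline.
from collections import defaultdict
import re

def map_phase(doc_id, text):
    for term in re.findall(r"[a-z]+", text.lower()):
        if len(term) > 3:                 # toy keyword filter
            yield term, 1

def shuffle(mapped_pairs):
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(term, counts):
    return term, sum(counts)

corpus = {1: "Hadoop runs MapReduce jobs.",
          2: "MapReduce jobs scale on Hadoop clusters."}
pairs = [kv for doc_id, text in corpus.items() for kv in map_phase(doc_id, text)]
results = dict(reduce_phase(t, c) for t, c in shuffle(pairs).items())
print(results)  # {'hadoop': 2, 'runs': 1, 'mapreduce': 2, 'jobs': 2, ...}
```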

10.
MapReduce-based programs are increasingly used in large-scale data analysis applications, and Apache Hadoop is one of the most commonly used open-source MapReduce implementations. Shortening execution time is critical for MapReduce programs, as for all data processing applications, and accurately estimating the execution time of a MapReduce program is an important step in optimizing it. This paper defines a performance model that accurately estimates the execution time of MapReduce job workloads on Hadoop 2.x. The model comprises a precedence tree model and a queuing network model, which respectively capture the dependencies among the tasks of a MapReduce job and the synchronization constraints within the job. Experiments demonstrate the applicability of the model.
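The precedence-tree half of such a model can be illustrated with a generic sketch: represent task dependencies as a DAG and take the longest path through it as a lower bound on job execution time. This is a textbook critical-path computation with invented durations, not the paper's full model, which couples the tree with a queuing network.

```python
# Generic sketch of the precedence-graph half of such a model: task
# dependencies form a DAG, and the longest (critical) path gives a
# lower bound on job execution time. Durations here are invented.
import functools

durations = {"m1": 20, "m2": 25, "m3": 22, "shuffle": 10, "r1": 30}
preds = {"m1": [], "m2": [], "m3": [],
         "shuffle": ["m1", "m2", "m3"],   # shuffle waits for all maps
         "r1": ["shuffle"]}               # reduce waits for shuffle

@functools.lru_cache(maxsize=None)
def finish_time(task):
    start = max((finish_time(p) for p in preds[task]), default=0)
    return start + durations[task]

print(max(finish_time(t) for t in durations))  # 65 = 25 + 10 + 30
```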

11.
Due to the explosive growth in the size of scientific data sets, data-intensive computing and analysis are an emerging trend in computational science. In these applications, data pre-processing is widely adopted because it can optimize the data layout or format beforehand to facilitate future data access. On the other hand, current research shows the increasing popularity of the MapReduce framework for large-scale data processing. However, the data access patterns generally applied to scientific data sets are not directly supported by the current MapReduce framework. This gap motivates us to support these scientific data access patterns within MapReduce. In this work, we study the data access patterns of matrix files and propose a new concentric data layout to facilitate matrix data access and analysis in the MapReduce framework. Concentric data layout is a layout that maintains the dimensional property at the chunk level: contrary to the contiguous data layout adopted in the current Hadoop framework, the concentric data layout stores data from the same sub-matrix in one chunk. This layout can guarantee that the average performance of data access is optimal regardless of the access pattern, at the cost of reorganizing the data before it is analyzed or processed. Our experiments on a real-world halo-finding application indicate that the concentric data layout improves overall performance by up to 38%.
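The contrast with a row-contiguous layout can be sketched briefly: below, a matrix is split into square sub-matrix chunks so that row-wise, column-wise, and block accesses all touch a bounded number of chunks. The NumPy sketch is illustrative only, not the paper's implementation.

```python
# Illustrative sketch of a sub-matrix ("concentric"-style) chunking:
# instead of storing rows contiguously, each square sub-matrix becomes
# one chunk, so row-, column-, and block-oriented reads all touch a
# bounded number of chunks. Not the paper's actual implementation.
import numpy as np

def to_submatrix_chunks(matrix, chunk):
    rows, cols = matrix.shape
    chunks = {}
    for i in range(0, rows, chunk):
        for j in range(0, cols, chunk):
            chunks[(i // chunk, j // chunk)] = matrix[i:i+chunk, j:j+chunk].copy()
    return chunks

m = np.arange(64).reshape(8, 8)
chunks = to_submatrix_chunks(m, chunk=4)

# Reading one full row touches only the chunks in that chunk-row
# (2 of 4 here); a row-major file would be optimal for rows but
# pathological for column reads, while sub-matrix chunks balance both.
target_row = 1
row_chunks = [key for key in chunks if key[0] == target_row // 4]
print(sorted(chunks.keys()), "row 1 lives in", row_chunks)
```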

12.
This paper analyzes the existing MapReduce schedulers on the Hadoop platform. To address the shortcomings of the LATE scheduling algorithm when assigning backup (speculative) copies of straggler tasks to nodes, and taking into account the heterogeneity of Hadoop clusters and the particularities of their workloads, an improved LATE scheduling algorithm is proposed on the basis of LATE. Experiments and performance analysis show that the improved algorithm yields large gains in completion time and load balancing.
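LATE's underlying heuristic, estimating each task's remaining time from its progress rate and backing up the task expected to finish last, can be sketched as follows; the numbers and the straggler threshold are illustrative, and the sketch does not include this paper's improvements.

```python
# Sketch of LATE's core heuristic: estimate each running task's time
# left from its progress rate and speculate the one expected to finish
# last. Numbers are invented; the improved algorithm is not shown.

def time_left(progress, elapsed):
    rate = progress / elapsed            # progress per second
    return (1.0 - progress) / rate if rate > 0 else float("inf")

def pick_backup(tasks, slowest_fraction=0.25):
    """tasks: list of (name, progress 0..1, elapsed seconds)."""
    rates = sorted(p / e for _, p, e in tasks)
    cutoff = rates[int(len(rates) * slowest_fraction)]  # coarse quartile
    stragglers = [(n, p, e) for n, p, e in tasks if p / e <= cutoff]
    # Back up the straggler with the largest estimated time to completion.
    if not stragglers:
        return None
    return max(stragglers, key=lambda t: time_left(t[1], t[2]))[0]

tasks = [("t1", 0.90, 100), ("t2", 0.85, 95), ("t3", 0.30, 110), ("t4", 0.80, 90)]
print(pick_backup(tasks))  # 't3': lowest progress rate, longest time left
```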

13.
MapReduce is a currently popular programming model supporting parallel computation on large datasets. Among the several existing MapReduce implementations, Hadoop has attracted a lot of attention from both industry and research. In a Hadoop job, map and reduce tasks coordinate to produce a solution to the input problem, exhibiting precedence constraints and synchronization delays that are characteristic of pipeline communication between maps (producers) and reduces (consumers). We address the challenge of designing analytical models to estimate the performance of MapReduce workloads, notably Hadoop workloads, focusing particularly on the intra-job pipeline parallelism between map and reduce tasks belonging to the same job. We propose a hierarchical model that combines a precedence graph model with a queuing network model to capture the intra-job synchronization constraints. We first show how to build a precedence graph representing the dependencies among the multiple tasks of the same job, and then apply it jointly with an approximate Mean Value Analysis (aMVA) solution to predict mean job response time, throughput and resource utilization. We validate our solution against a queuing network simulator and against a real setup in various scenarios, finding very close agreement in both cases. In particular, our model produces estimates of average job response time that deviate from measurements on a real setup by less than 15%.
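The aMVA ingredient can be illustrated with the standard exact Mean Value Analysis recursion for a closed queuing network: for N concurrent tasks circulating over K service centers, per-center response times and queue lengths are built up iteratively. The sketch below is textbook MVA with invented service demands, not the paper's full hierarchical model.

```python
# Textbook exact Mean Value Analysis (MVA) for a closed queuing network,
# the building block the hierarchical model couples with a precedence
# graph. Service demands below are invented for illustration.

def mva(service_demands, n_customers):
    """service_demands: per-center service demand (seconds per visit)."""
    k = len(service_demands)
    queue = [0.0] * k                       # mean queue length per center
    for n in range(1, n_customers + 1):
        # Response time at each center: own service plus queued work.
        resp = [service_demands[i] * (1.0 + queue[i]) for i in range(k)]
        total = sum(resp)
        throughput = n / total
        queue = [throughput * r for r in resp]  # Little's law
    return total, throughput, queue

# Two centers, e.g. CPU and disk, shared by 8 concurrent map tasks.
resp_time, thr, q = mva([0.05, 0.08], n_customers=8)
print(f"mean response time {resp_time:.3f}s, throughput {thr:.2f} tasks/s")
```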

14.
The fair scheduler of the Hadoop MapReduce framework manages the number of compute slots on each node through a single static configuration file, which neither guarantees load balance across the cluster nor satisfies the resource demands of different users. To address this shortcoming of the fair scheduler's configuration scheme, a dynamic-feedback scheduling algorithm is proposed. Building on the fair scheduler's pre-allocation behavior, the algorithm adjusts the compute slots on each node dynamically. Experimental results show that the improved, feedback-based algorithm effectively raises the cluster's execution efficiency.
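The feedback idea can be sketched simply: instead of one fixed slots-per-node value from a configuration file, periodically read each node's utilization and nudge its slot count up or down. The thresholds and step size below are invented, not the paper's parameters.

```python
# Sketch of dynamic-feedback slot management: instead of one static
# slots-per-node value, each heartbeat nudges a node's slot count
# based on its observed utilization. Thresholds/steps are invented.

def adjust_slots(current_slots, cpu_util, min_slots=2, max_slots=16,
                 low=0.40, high=0.85):
    if cpu_util > high:                       # node overloaded: shed work
        return max(min_slots, current_slots - 1)
    if cpu_util < low:                        # node underused: take more
        return min(max_slots, current_slots + 1)
    return current_slots                      # inside the comfort band

# One feedback round over a heterogeneous cluster: (slots, utilization).
cluster = {"n1": (8, 0.95), "n2": (8, 0.20), "n3": (8, 0.60)}
new_slots = {node: adjust_slots(slots, util)
             for node, (slots, util) in cluster.items()}
print(new_slots)  # {'n1': 7, 'n2': 9, 'n3': 8}
```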

15.
MapReduce is emerging as a prominent tool for big data processing. Data locality is a key feature of MapReduce that is extensively leveraged in data-intensive cloud systems: it avoids network saturation when processing large amounts of data by co-allocating computation and data storage, particularly for the map phase. However, our studies with Hadoop, a widely used MapReduce implementation, demonstrate that the presence of partitioning skew (variation in the frequencies of the intermediate keys, in their distribution across data nodes, or both) causes a huge amount of data transfer during the shuffle phase and leads to significant unfairness in the reduce input across data nodes. As a result, applications suffer severe performance degradation due to the long data transfer during the shuffle phase along with the computation skew, particularly in the reduce phase. In this paper, we develop a novel algorithm named LEEN for locality-aware and fairness-aware key partitioning in MapReduce. LEEN embraces an asynchronous map and reduce scheme: all buffered intermediate keys are partitioned according to their frequencies and the fairness of the expected data distribution after the shuffle phase. We have integrated LEEN into Hadoop. Our experiments demonstrate that LEEN efficiently achieves higher locality, reduces the amount of shuffled data, and, more importantly, guarantees fair distribution of the reduce inputs. As a result, LEEN achieves a performance improvement of up to 45% on different workloads.
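The flavor of frequency-aware, fairness-aware key partitioning can be shown with a greedy sketch: assign keys to reducers in decreasing order of total frequency, always to the least-loaded reducer, instead of hashing blindly. This generic balanced-partitioning heuristic is in the spirit of LEEN but omits its locality term and is not its actual algorithm.

```python
# Greedy sketch in the spirit of fairness-aware key partitioning:
# assign keys to reducers by decreasing total frequency, always to the
# least-loaded reducer, instead of hash(key) % n. This generic heuristic
# omits LEEN's locality term and is not its actual algorithm.
import heapq

def partition_keys(key_freqs, n_reducers):
    """key_freqs: dict key -> total intermediate records for that key."""
    heap = [(0, r) for r in range(n_reducers)]   # (load, reducer id)
    heapq.heapify(heap)
    assignment = {}
    for key in sorted(key_freqs, key=key_freqs.get, reverse=True):
        load, r = heapq.heappop(heap)
        assignment[key] = r
        heapq.heappush(heap, (load + key_freqs[key], r))
    return assignment

freqs = {"k1": 900, "k2": 500, "k3": 450, "k4": 70, "k5": 60}
print(partition_keys(freqs, n_reducers=2))
# Reducer 0 gets k1+k4 (970 records), reducer 1 gets k2+k3+k5 (1010):
# far more balanced than a hash that might co-locate k1 and k2.
```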

16.
Distributed applications are popular for heavy workloads where the resources of a single machine are not sufficient. These distributed applications come with many parameters to tune so that cluster resources can be utilized effectively, and any misconfiguration of the available parameters may result in suboptimal performance on one or more machines in the cluster. Such events may go unnoticed or may result in crashes. The problem of misconfigured parameters has no straightforward solution due to the variety of parameters and the vastly different workloads being processed. In this article, we propose a methodology for machine learning-based detection of misconfigurations. We collect data mined from system resource utilization, Hadoop logs, and job-level metrics to train models using decision trees and support vector machines. The models are used to identify whether a set of configuration parameters could result in a crash or a slowdown for a specific workload. The approach explained in this article can be extended to other distributed big data applications, such as Spark, Hive, and Pig.
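The modeling step can be sketched with scikit-learn: feature vectors combining configuration values with mined resource metrics, labeled by run outcome, train a decision tree classifier. The feature names and data values are invented stand-ins, not the paper's dataset.

```python
# Sketch of the modeling step with scikit-learn: configuration values
# plus mined resource metrics as features, labeled runs as targets.
# All feature names and data values are invented stand-ins.
from sklearn.tree import DecisionTreeClassifier

# Features per run: [map_heap_mb, reduce_tasks, io_sort_mb, peak_mem_frac]
X = [
    [512, 4, 100, 0.62],
    [256, 16, 200, 0.97],   # undersized heap + high memory pressure
    [1024, 8, 150, 0.70],
    [256, 32, 300, 0.99],
    [2048, 8, 100, 0.55],
    [512, 24, 250, 0.93],
]
y = ["normal", "crash", "normal", "crash", "normal", "slowdown"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Screen a candidate configuration for a similar workload before launch.
candidate = [[256, 28, 300, 0.96]]
print(clf.predict(candidate))  # e.g. ['crash'] -> flag for review
```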

17.
Data-intensive applications process large volumes of data using parallel processing methods. MapReduce is a programming model designed for data-intensive applications on massive data sets and an execution framework for large-scale data processing on clusters of commodity servers. Fault tolerance, an easy programming structure, and high scalability are considered strong points of MapReduce; however, its configuration parameters must be fine-tuned to the specific deployment, which makes configuration and performance tuning complex. This paper explains how to tune the Hadoop configuration parameters that directly affect the performance of a MapReduce job workflow under various conditions, in order to achieve maximum performance. On the basis of the empirical data we collected, it became apparent that three main methodologies can affect the execution time of MapReduce running on cluster systems. We therefore present a model that consists of three main modules: (1) extending a data redistribution technique in order to find the high-performance nodes, (2) utilizing the number of map/reduce slots in order to reduce execution time, and (3) developing a new hybrid routing schedule for the shuffle phase in order to define the scheduler task while reducing the memory management overhead.

18.
MapReduce is a popular programming model for distributed processing of large data sets, and Apache Hadoop is one of the most common open-source implementations of this paradigm. Performance analysis of concurrent job executions is recognized as a challenging problem; at the same time, an analytical model may provide reasonably accurate job response time estimates at significantly lower cost than experimental evaluation of real setups. In this paper, we tackle the challenge of defining a MapReduce performance model for Hadoop 2.x. While there are several efficient approaches for modeling the performance of MapReduce workloads in Hadoop 1.x, they cannot be applied to Hadoop 2.x due to its fundamental architectural changes and dynamic resource allocation. The proposed solution is therefore based on an existing performance model for Hadoop 1.x, but takes the architectural changes into consideration and captures the execution flow of a MapReduce job using a queuing network model. This way, the cost model reflects the intra-job synchronization constraints that arise from contention at shared resources. The accuracy of our solution is validated by comparing our model estimates against measurements on a real Hadoop 2.x setup.

19.
To address the unsatisfactory load balancing of the traditional MapReduce structure when processing big data, a hierarchical MapReduce model with a load balancing mechanism is designed. The model improves the map operation of MapReduce by using a hypercube topology: a dedicated algorithm links eight structured data centers into a peer-to-peer cloud environment, and a balanced partitioning method based on combined odd-even histogram sampling equalizes the per-node workload index under user requests. Simulation experiments on the Hadoop framework show that, compared with the original MapReduce structure, the proposed structure achieves higher resource utilization for parallel computation as well as better fault tolerance and load balancing, effectively improving overall performance.
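The hypercube link structure over eight data centers is easy to express: label the centers 0 through 7 and connect each pair of labels differing in exactly one bit; a simple diffusion step then averages load with neighbors. The sketch below shows the topology and repeated balancing rounds; it is illustrative only and substitutes plain diffusion for the paper's odd-even histogram sampling method.

```python
# Sketch of the hypercube idea: eight data centers labeled 0..7 form a
# 3-dimensional hypercube (neighbors differ in exactly one bit), and a
# diffusion round moves load toward less-loaded neighbors. Illustrative
# only; the paper's partitioning uses odd-even histogram sampling instead.

DIM = 3  # 2**3 = 8 data centers

def neighbors(node):
    return [node ^ (1 << bit) for bit in range(DIM)]

def diffusion_round(load, alpha=0.25):
    """One balancing step: exchange a fraction of the imbalance
    with each hypercube neighbor (load is conserved overall)."""
    new = load[:]
    for u in range(len(load)):
        for v in neighbors(u):
            new[u] -= alpha * (load[u] - load[v]) / DIM
    return new

load = [90.0, 10.0, 20.0, 15.0, 30.0, 25.0, 5.0, 45.0]
for _ in range(20):
    load = diffusion_round(load)
print([round(x, 1) for x in load])  # converges toward the mean (30.0)
```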

20.
Given the poor retrieval performance of single-node database audit systems, this paper explores rebuilding the retrieval and storage architecture of a database audit system using Hadoop's pseudo-distributed mode and the HBase column-oriented storage model, focusing on the integration of the HDFS storage mechanism, the MapReduce computation framework, and the HBase data model, in order to improve the system's real-time retrieval and comprehensive analysis performance. The rebuilt scheme effectively improves retrieval performance; given the high reliability requirements and large volume of the data, the paper concludes by proposing to deploy fully distributed Hadoop and HBase clusters in line with production conditions.
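One concrete design point when moving audit retrieval onto HBase is the row key: prefixing it with the queried dimensions turns common audit lookups into contiguous range scans. The sketch below shows such a key layout in plain Python; the field choices are illustrative assumptions, not the paper's schema.

```python
# Illustrative sketch of an HBase row-key layout for audit records:
# leading with user and a zero-padded timestamp turns "events of user X
# in a time range" into a single contiguous range scan. The fields are
# invented assumptions; the paper does not publish its exact schema.

def audit_row_key(user, ts_epoch_ms, seq):
    # Fixed-width, lexicographically ordered components.
    return f"{user:<16}|{ts_epoch_ms:013d}|{seq:06d}"

def scan_bounds(user, ts_from_ms, ts_to_ms):
    start = f"{user:<16}|{ts_from_ms:013d}|"
    stop = f"{user:<16}|{ts_to_ms:013d}|~"   # '~' sorts after digits
    return start, stop

print(audit_row_key("alice", 1700000000000, 42))
print(scan_bounds("alice", 1699990000000, 1700000000000))
```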
