Similar Articles
20 similar articles found.
1.
The MapReduce programming model is a parallel computing framework widely used to process massive data in cloud computing environments. However, for data-intensive computation under this framework, data transfers between cluster nodes are strongly interdependent, which overloads the nodes with message-processing work. This paper proposes an improved MapReduce model based on a message-broker mechanism to optimize the data flow. Experimental results show that the message-broker-based MapReduce framework improves load balancing for data-intensive applications.
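The abstract gives no implementation details, so the following is only a minimal Python sketch of the general idea (the broker class, queue sizes, and least-loaded key assignment are all assumptions): a broker decouples mappers from reducers and spreads intermediate keys across reducer queues by current load.

```python
# Minimal sketch (not the paper's implementation): a message broker decouples
# mappers from reducers and balances intermediate records across reducer queues.
import queue
from collections import defaultdict

class Broker:
    def __init__(self, num_reducers):
        self.queues = [queue.Queue() for _ in range(num_reducers)]
        self.assignment = {}  # key -> reducer index (keys must stay consistent)

    def publish(self, key, value):
        # Assign each new key to the currently least-loaded reducer queue.
        if key not in self.assignment:
            self.assignment[key] = min(range(len(self.queues)),
                                       key=lambda i: self.queues[i].qsize())
        self.queues[self.assignment[key]].put((key, value))

def mapper(broker, text):
    for word in text.split():
        broker.publish(word, 1)

def reducer(q):
    counts = defaultdict(int)
    while not q.empty():
        key, value = q.get()
        counts[key] += value
    return dict(counts)

broker = Broker(num_reducers=2)
mapper(broker, "a b a c b a")
print([reducer(q) for q in broker.queues])  # e.g. [{'a': 3}, {'b': 2, 'c': 1}]
```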

2.
MapReduce is a programming model proposed to simplify large-scale data processing. In contrast, the message passing interface (MPI) standard is extensively used for algorithmic parallelization, as it accommodates an efficient communication infrastructure. In the original implementation of MapReduce, the reduce function can only start processing following termination of the map function. If the map function is slow for any reason, this will affect the whole running time. In this paper, we propose MapReduce overlapping using MPI, which is an adapted structure of the MapReduce programming model for fast intensive data processing. Our implementation is based on running the map and the reduce functions concurrently in parallel by exchanging partial intermediate data between them in a pipeline fashion using MPI. At the same time, we maintain the usability and the simplicity of MapReduce. Experimental results based on three different applications (WordCount, Distributed Inverted Indexing and Distributed Approximate Similarity Search) show a good speedup compared to the earlier versions of MapReduce such as Hadoop and the available MPI-MapReduce implementations.
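As an illustration of the overlap idea only (the paper exchanges partial data over MPI; here a thread-safe queue stands in for the MPI pipeline), a minimal Python sketch:

```python
# The reducer starts consuming partial intermediate data while the mapper is
# still producing, instead of waiting for the map phase to terminate.
import threading, queue
from collections import defaultdict

pipe = queue.Queue()
SENTINEL = None

def mapper(lines):
    for line in lines:
        for word in line.split():
            pipe.put((word, 1))      # stream partial results immediately
    pipe.put(SENTINEL)               # signal end of map output

def reducer(result):
    while True:
        item = pipe.get()
        if item is SENTINEL:
            break
        key, value = item
        result[key] += value         # fold each pair as it arrives

counts = defaultdict(int)
t_map = threading.Thread(target=mapper, args=(["a b a", "c b a"],))
t_red = threading.Thread(target=reducer, args=(counts,))
t_map.start(); t_red.start()
t_map.join(); t_red.join()
print(dict(counts))                  # {'a': 3, 'b': 2, 'c': 1}
```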

3.
Data analysis and processing are important tasks in large-scale distributed data processing applications. Thanks to its simplicity and flexibility, the MapReduce programming model has gradually become the core model of large-scale distributed data processing systems such as Hadoop. Because the data being processed may not be uniformly distributed, MapReduce suffers from data skew when executing join operations, and data skew severely degrades the efficiency of MapReduce joins. Targeting the data skew problem of joins in MapReduce, this paper analyzes the causes of the join performance bottleneck, builds a load-balancing cost model, and proposes a range-partitioning strategy that controls data skew during joins to achieve load balancing. Experimental results show that the proposed method significantly improves join efficiency.
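The paper's cost model and partitioning details are not given in the abstract; the following Python sketch only illustrates why sample-driven range boundaries balance reducer loads better than fixed, distribution-blind boundaries (the data distribution and boundary rule are assumptions):

```python
# Compare equal-width range boundaries with quantile boundaries derived from
# a sample: on skewed keys, the latter give far more even reducer loads.
import bisect, random

def quantile_boundaries(sample, num_parts):
    s = sorted(sample)
    return [s[len(s) * i // num_parts] for i in range(1, num_parts)]

def partition(key, boundaries):
    return bisect.bisect_right(boundaries, key)

random.seed(0)
keys = [random.expovariate(1.0) for _ in range(10000)]  # skewed toward 0

naive = [max(keys) * i / 4 for i in range(1, 4)]        # equal-width ranges
smart = quantile_boundaries(random.sample(keys, 500), 4)

for name, bounds in [("equal-width", naive), ("quantile", smart)]:
    loads = [0] * 4
    for k in keys:
        loads[partition(k, bounds)] += 1
    print(name, loads)
```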

4.
Web-scale digital assets comprise millions or billions of documents. Due to this growth, sequential algorithms cannot cope with the data, and parallel and distributed computing becomes the solution of choice. MapReduce is a programming model proposed by Google for scalable data processing. MapReduce is mainly applicable to data-intensive algorithms. In contrast, the message passing interface (MPI) is suitable for high-performance algorithms. This paper proposes an adapted structure of the MapReduce programming model using MPI for multimedia indexing. Experiments are conducted on various multimedia applications to validate our model. They indicate that our proposed model achieves good speedup compared to the original sequential versions, Hadoop, and the earlier versions of MapReduce using MPI.

5.
A data warehouse can store very large amounts of data that should be processed in parallel in order to achieve reasonable query execution times. The MapReduce programming model is a very convenient way to process large amounts of data in parallel on commodity hardware clusters. A very popular query used in data warehouses is the star-join. In this paper, we present a fast and efficient star-join query execution algorithm built on top of a MapReduce framework called Hadoop. By using dynamic filters against the dimension tables, the algorithm needs only a single scan of the fact table, which means a significant reduction of input/output operations and computational complexity. Also, the algorithm requires only two MapReduce iterations in total: one to build the filters against the dimension tables and one to scan the fact table. Our experiments show that the proposed algorithm performs much better than existing solutions in terms of execution time and input/output.
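As a minimal sketch of the two-iteration scheme described above (the table contents and predicates are invented, and the MapReduce wiring is omitted):

```python
# "Iteration 1" builds per-dimension filters from the qualifying dimension
# rows; "iteration 2" makes a single pass over the fact table, keeping rows
# whose foreign keys hit every filter, with no full dimension-row join.
dim_date = {1: "2014-01", 2: "2014-02"}
dim_store = {10: "Berlin", 11: "Paris"}
fact = [  # (date_key, store_key, revenue)
    (1, 10, 100), (2, 10, 250), (1, 11, 80), (3, 10, 40),
]

# Iteration 1: filters are sets of dimension keys satisfying the predicates.
date_filter = {k for k, month in dim_date.items() if month == "2014-01"}
store_filter = {k for k, city in dim_store.items() if city == "Berlin"}

# Iteration 2: one scan of the fact table.
result = sum(rev for d, s, rev in fact
             if d in date_filter and s in store_filter)
print(result)  # 100
```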

6.
荀亚玲, 张继福, 秦啸. 《软件学报》 (Journal of Software), 2015, 26(8): 2056-2073
MapReduce is an effective programming model for large-scale data-intensive applications. It is simple to program, easy to scale, and fault tolerant, and it has been widely and successfully applied in parallel and distributed computing. Because MapReduce spreads computation over large machine clusters, sensible placement of the data to be processed becomes one of the key factors affecting the performance of a MapReduce cluster system, including energy consumption, resource utilization, communication and I/O costs, response time, reliability, and throughput. This paper first analyzes the default data placement policy of Hadoop, the canonical implementation of the MapReduce programming model, and further discusses the key issues to consider when designing a data placement policy under the MapReduce framework and the criteria for evaluating such policies; second, it surveys and analyzes current research and progress on optimizing data placement policies in MapReduce cluster environments; finally, it summarizes directions for future work on data placement policies in MapReduce clusters.

7.
MapReduce frameworks allow programmers to write distributed, data‐parallel programs that operate on multisets. These frameworks offer considerable flexibility to support various kinds of programs and data. To understand the essence of the programming model better and to provide a rigorous foundation for optimizations, we present an abstract, functional model of MapReduce along with a number of customization options. We demonstrate that the MapReduce programming model can also represent programs that operate on lists, which differ from multisets in that the order of elements matters. Along with the functional model, we offer a cost model that allows programmers to estimate and compare the performance of MapReduce programs. Based on the cost model, we introduce two transformation rules aiming at performance optimization of MapReduce programs, which also demonstrates the usefulness of our model. In an exploratory study, we assess the impact of applying these rules to two applications. The functional model and the cost model provide insights at a proper level of abstraction into why the optimization works.
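A minimal functional rendering of the model in Python (a sketch in the spirit of the abstract model, not the paper's formalization):

```python
# MapReduce as a higher-order function over a multiset of inputs:
# map each input to (key, value) pairs, group by key, then fold each group.
from collections import defaultdict
from itertools import chain

def map_reduce(mapper, reducer, inputs):
    # Map phase: each input yields a multiset of (key, value) pairs.
    pairs = chain.from_iterable(mapper(x) for x in inputs)
    # Shuffle phase: group values by key (order within a group is unspecified
    # for multisets; list-processing variants must preserve it).
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    # Reduce phase: fold each group independently.
    return {k: reducer(k, vs) for k, vs in groups.items()}

counts = map_reduce(lambda line: [(w, 1) for w in line.split()],
                    lambda k, vs: sum(vs),
                    ["a b a", "b c"])
print(counts)  # {'a': 2, 'b': 2, 'c': 1}
```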

8.
MapReduce-based programs are increasingly applied in large-scale data analysis. Apache Hadoop is one of the most commonly used open-source MapReduce implementations. Shortening a program's running time is critically important for MapReduce programs, as for all data processing applications, and the ability to accurately estimate a MapReduce program's execution time is an important step in optimizing it. This paper defines a Hadoop 2.x...

9.
MapReduce: A New Programming Model for Distributed Parallel Computing
MapReduce is a programming model for distributed parallel computing proposed by Google for the parallel processing of large-scale data. Inspired by functional programming languages, the MapReduce model splits a large-scale data processing job into a number of independently runnable Map tasks, distributes them to different machines for execution to produce intermediate files in a certain format, and then has a number of Reduce tasks merge these intermediate files into the final output. When using MapReduce for large-scale data processing, users can focus their effort on writing the Map and Reduce functions, while the other hard problems of parallel computing, such as the distributed file system, job scheduling, fault tolerance, and inter-machine communication, are left to the MapReduce system, which greatly lowers the overall programming difficulty. MapReduce is increasingly becoming the mainstream programming model for cloud computing platforms. The open-source MapReduce system provided by the Apache Hadoop project still awaits further refinement.
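To illustrate the point that users only write the Map and Reduce functions, here is a minimal sketch in the style of Hadoop Streaming (choosing Streaming is an assumption for illustration; the abstract does not name it):

```python
# Minimal sketch in Hadoop Streaming style: the user writes only a map
# function and a reduce function; the framework handles splitting, shuffling,
# sorting, fault tolerance, and scheduling.
import sys
from itertools import groupby

def map_stdin():
    # mapper: emit one "word<TAB>1" line per word on stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reduce_stdin():
    # reducer: input arrives sorted by key; sum the counts of each group.
    pairs = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(v) for _, v in group)}")

if __name__ == "__main__":
    # In practice each function would live in its own script, passed to the
    # framework via options such as -mapper mapper.py -reducer reducer.py.
    map_stdin() if sys.argv[1] == "map" else reduce_stdin()
```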

10.
Traditional High-Performance Computing (HPC) based big-data applications are usually constrained by having to move large amounts of data to compute facilities for real-time processing. Modern HPC systems, represented by High-Throughput Computing (HTC) and Many-Task Computing (MTC) platforms, on the other hand, intend to achieve the long-held dream of moving compute to data instead. This kind of data-aware scheduling, typically represented by Hadoop MapReduce, has been successfully implemented in its Map Phase, whereby each Map Task is sent to the compute node where the corresponding input data chunk is located. However, Hadoop MapReduce limits itself to a one-map-to-one-reduce framework, leading to difficulties in handling complex logic such as pipelines or workflows. Meanwhile, it lacks built-in support and optimization when the input datasets are shared among multiple applications and/or jobs. Performance can be improved significantly when knowledge of the shared and frequently accessed data is taken into account in scheduling decisions.

To enhance the capability of managing workflows in modern HPC systems, this paper presents CloudFlow, a Hadoop MapReduce based programming model for cloud workflow applications. CloudFlow is built on top of MapReduce and is proposed to be not only data aware but also shared-data aware. It identifies the most frequently shared data, at both the task level and the job level, and replicates them to each compute node for data locality. It also supports user-defined multiple Map and Reduce functions, allowing users to orchestrate the required data-flow logic. Mathematically, we prove the correctness of the whole scheduling framework through theoretical analysis. Furthermore, experimental evaluation shows that the execution runtime speedup exceeds 4X compared to the traditional MapReduce implementation, with a manageable time overhead.
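A minimal sketch of the shared-data identification step (the job table and the replication rule below are assumptions; CloudFlow's actual task-level and job-level analysis is more involved):

```python
# Count how often each input block is referenced across queued jobs, and mark
# the most frequently shared blocks for replication to every compute node.
from collections import Counter

jobs = {  # job -> input blocks it reads (hypothetical workload)
    "job1": ["blk1", "blk2", "blk3"],
    "job2": ["blk2", "blk3"],
    "job3": ["blk3", "blk4"],
}

usage = Counter(blk for blocks in jobs.values() for blk in blocks)
hot = [blk for blk, n in usage.most_common() if n > 1]
print(hot)  # ['blk3', 'blk2'] -> candidates to replicate for data locality
```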

11.
MapReduce is a popular programming model for distributed processing of large data sets. Apache Hadoop is one of the most common open-source implementations of this paradigm. Performance analysis of concurrent job executions has been recognized as a challenging problem; at the same time, a good model may provide reasonably accurate job response time estimates at significantly lower cost than experimental evaluation of real setups. In this paper, we tackle the challenge of defining a MapReduce performance model for Hadoop 2.x. While there are several efficient approaches for modeling the performance of MapReduce workloads in Hadoop 1.x, they cannot be applied to Hadoop 2.x due to fundamental architectural changes and dynamic resource allocation in Hadoop 2.x. Thus, the proposed solution is based on an existing performance model for Hadoop 1.x, but takes the architectural changes into consideration and captures the execution flow of a MapReduce job using a queuing network model. This way, the cost model reflects the intra-job synchronization constraints that occur due to contention for shared resources. The accuracy of our solution is validated by comparing our model estimates against measurements in a real Hadoop 2.x setup.
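As a deliberately simplified illustration of what such a model estimates (the paper uses a queuing network model; the closed-form wave count below is not it, and all parameter values are invented):

```python
# Estimate a MapReduce job's makespan from task counts, average task times,
# and the number of containers the scheduler can allocate: tasks run in waves.
import math

def phase_time(num_tasks, avg_task_s, containers):
    waves = math.ceil(num_tasks / containers)
    return waves * avg_task_s

def job_time(maps, avg_map_s, reduces, avg_red_s, containers):
    # The reduce phase cannot finish before the last map wave completes.
    return (phase_time(maps, avg_map_s, containers) +
            phase_time(reduces, avg_red_s, containers))

print(job_time(maps=120, avg_map_s=30, reduces=8, avg_red_s=90, containers=40))
# 180 seconds: 3 map waves of 30 s plus 1 reduce wave of 90 s
```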

12.
The MapReduce paradigm provides a scalable model for large-scale data-intensive computing and the associated fault tolerance. Data volumes generated and processed by scientific applications are growing rapidly. Several MapReduce implementations, with varying degrees of conformance to the key tenets of the model, are available today, each optimized for specific features. To make the right decisions, HPC application and middleware developers must therefore understand the complex dependencies between MapReduce features and their applications. We present a set of benchmarks for quantifying, comparing, and contrasting the performance of MapReduce implementations under a wide range of representative use cases. To demonstrate the utility of the benchmarks and to provide a snapshot of the current implementation landscape, we report the performance of three different MapReduce implementations and draw conclusions about their current performance characteristics. The three implementations we chose for evaluation are the widely used Hadoop implementation; Twister, which has been widely discussed in the literature in the context of scientific applications; and LEMO-MR, our own implementation.

13.
As a parallel programming framework, MapReduce can process scalable, parallel applications with large-scale datasets. The executions of Mappers and Reducers are independent of each other: there is no communication among Mappers, nor among Reducers. When the amount of final results is much smaller than the original data, processing the unpromising intermediate data is a waste of time. We observe that this waste can be significantly reduced by simple communication mechanisms that enhance the performance of MapReduce. In this paper, we propose ComMapReduce, an efficient framework that extends and improves MapReduce for big data applications in the cloud. ComMapReduce can effectively obtain certain shared information with efficient, lightweight communication mechanisms. Three basic communication strategies, Lazy, Eager and Hybrid, and two optimization communication strategies, Prepositive and Postpositive, are proposed to obtain the shared information and effectively process big data applications. We also illustrate the implementations of three typical applications with large-scale datasets on ComMapReduce. Our extensive experiments demonstrate that ComMapReduce outperforms MapReduce in all metrics without affecting the existing characteristics of MapReduce.
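A minimal sketch of how shared information can prune unpromising intermediate data, using a top-k query as the example (the coordinator API and the "eager" update shown here are illustrative assumptions, not ComMapReduce's interfaces):

```python
# Mappers publish their local k-th best score to a lightweight coordinator;
# every mapper then prunes records that cannot beat the best threshold so far,
# shrinking the intermediate data shipped to the reducer.
import heapq

class Coordinator:
    def __init__(self):
        self.threshold = float("-inf")
    def update(self, local_threshold):       # eager style: share immediately
        self.threshold = max(self.threshold, local_threshold)

def mapper_topk(records, k, coord):
    local = heapq.nlargest(k, records)
    coord.update(local[-1])                  # local k-th best score
    # Emit only records that can still matter globally.
    return [r for r in local if r >= coord.threshold]

coord = Coordinator()
splits = [[5, 1, 9, 3], [8, 8, 2, 7], [4, 6, 10, 0]]
intermediate = [r for s in splits for r in mapper_topk(s, k=2, coord=coord)]
print(sorted(intermediate, reverse=True)[:2])  # global top-2: [10, 9]
```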

14.
As a widely-used parallel computing framework for big data processing today, the Hadoop MapReduce framework puts more emphasis on high throughput of data than on low latency of job execution. However, more and more big data applications developed with MapReduce require quick response times. As a result, improving the performance of MapReduce jobs, especially short jobs, is of great practical significance and has attracted more and more attention from both academia and industry. Many efforts have been made to improve the performance of Hadoop at the job scheduling or job parameter optimization level. In this paper, we explore an approach to improve the performance of the Hadoop MapReduce framework by optimizing the job and task execution mechanism. First of all, by analyzing the job and task execution mechanism in the MapReduce framework, we reveal two critical limitations on job execution performance. Then we propose two major optimizations to the MapReduce job and task execution mechanisms: first, we optimize the setup and cleanup tasks of a MapReduce job to reduce the time cost during the initialization and termination stages of the job; second, instead of adopting the loose heartbeat-based communication mechanism to transmit all messages between the JobTracker and TaskTrackers, we introduce an instant messaging communication mechanism for accelerating performance-sensitive task scheduling and execution. Finally, we implement SHadoop, an optimized and fully compatible version of Hadoop that aims at shortening the execution time of MapReduce jobs, especially short jobs. Experimental results show that, compared to standard Hadoop, SHadoop achieves a stable performance improvement of around 25% on average on comprehensive benchmarks without losing scalability and speedup. Our optimization work has passed a production-level test at Intel and has been integrated into the Intel Distributed Hadoop (IDH). To the best of our knowledge, this work is the first effort to optimize the execution mechanism inside the map/reduce tasks of a job. The advantage is that it can complement job scheduling optimizations to further improve job execution performance.

15.
Skyline Computation under the MapReduce Framework
Because Skyline queries are widely used in multi-objective decision making, data visualization, and other fields, they have become a research hotspot in the database community in recent years. Targeting cloud computing environments, this paper designs and implements Skyline algorithms under the MapReduce framework. MapReduce is a parallel computing framework for processing massive data on large clusters, whose main idea is task decomposition followed by result aggregation. Based on different data-partitioning ideas, three parallel Skyline algorithms are implemented: MapReduce-based block-nested-loops (MR-BNL), MapReduce-based sort-filter-skyline (MR-SFS), and MapReduce-based bitmap (MR-Bitmap). Systematic experimental comparisons of the three algorithms show how factors such as data distribution, dimensionality, and caching affect their performance.
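A minimal sketch of the dominance test and the local-skyline/merge pattern underlying these algorithms (the MapReduce wiring and all MR-SFS/MR-Bitmap specifics are omitted):

```python
# A point is in the skyline if no other point dominates it, i.e. is at least
# as good in every dimension and strictly better in one ("better" = smaller).
def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def local_skyline(points):
    # Block-nested-loops style: compare each point against the others.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# In the MapReduce versions, each mapper computes a local skyline of its
# partition and a reducer merges the local skylines the same way.
parts = [[(1, 9), (4, 4), (7, 7)], [(2, 6), (6, 2), (9, 9)]]
locals_ = [p for part in parts for p in local_skyline(part)]
print(local_skyline(locals_))  # global skyline
```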

16.
The MapReduce programming paradigm has been widely applied to solve large-scale data-intensive problems, and MapReduce scheduling has been studied intensively to improve system performance. Delay scheduling is a common way to achieve high data locality and system performance. However, inappropriate delays can lead to low system throughput and potentially break the original job priority constraints. This paper proposes a deadline-enabled delay (DLD) scheduling algorithm that optimizes job delay decisions according to real-time resource availability and resource competition, while still meeting job deadline constraints. Experimental results illustrate that the resource availability estimation method of DLD is accurate (92%). Compared with other approaches, DLD reduces job turnaround time by 22% on average while keeping a high locality rate (88%).
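A minimal sketch of a deadline-aware delay decision (an illustration of the stated idea, not the paper's DLD algorithm; all estimates are assumed inputs):

```python
# Delay a task for data locality only if the expected wait for a local slot
# still leaves enough slack to finish before the job's deadline.
def should_delay(now, deadline, est_local_slot_wait, est_task_runtime):
    finish_if_delayed = now + est_local_slot_wait + est_task_runtime
    return finish_if_delayed <= deadline    # otherwise run remotely now

print(should_delay(now=0, deadline=100, est_local_slot_wait=20, est_task_runtime=60))  # True
print(should_delay(now=0, deadline=70,  est_local_slot_wait=20, est_task_runtime=60))  # False
```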

17.
Big data has drawn substantial attention from the computer science research community, business executives, and decision makers. As data volumes grow, performance-oriented, data-intensive processing frameworks such as MapReduce, which can scale computation across large commodity clusters, become necessary. Hadoop MapReduce processes data in the Hadoop Distributed File System as jobs scheduled according to the YARN fair scheduler and capacity scheduler. However, with advancements and dynamic changes in hardware and operating environments, cluster performance is greatly affected. Various efforts in the literature have been made to address the issues of heterogeneity (i.e., clusters consisting of virtual machines and machines with different hardware), network communication, data locality, resource utilization, and run-time scheduling. In this paper, we present a survey of the research efforts made so far to improve Hadoop MapReduce scheduling. We classify the scheduling algorithms and techniques proposed in the literature based on the areas they address and present a taxonomy. Furthermore, we discuss open issues and challenges in MapReduce scheduling for improving its performance.

18.
Most image processing algorithms can be accelerated on GPUs for better execution performance, but the scheduling of data transfers versus kernel executions remains the main bottleneck that prevents further speedup. To address this, GPU task streams are commonly used to overlap kernel execution with data transfers, hiding part of the transfer and kernel time. However, owing to the characteristics of the CUDA programming model and the limits of GPU hardware resources, in some cases tasks within each stream still execute serially even when many streams are created for overlapping, so the speedup cannot improve further. This paper therefore considers using CSS to merge the images to be processed, thereby merging the operator kernels and data transfer operations within a single stream and reducing the fixed costs and invocation gaps of data transfers and kernel launches. Experimental results show that the proposed CSS structure not only improves the execution performance of GPU image processing algorithms with a single stream, but also further improves speedup with multiple streams. It is practical and scalable, and is well suited to workloads with many operator invocations or batches of small images. In addition, the proposed method offers a new line of research on GPU acceleration of image processing algorithms.
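A back-of-the-envelope sketch of why merging helps (all numbers are invented for illustration): each transfer or kernel launch pays a fixed per-call cost, so one call over a merged batch amortizes it.

```python
# Model each transfer/kernel call as fixed_overhead + per-image work:
# n calls on single images vs. one call on a merged batch of n images.
def total_time(n_calls, fixed_overhead_us, per_image_us, images_per_call):
    return n_calls * (fixed_overhead_us + per_image_us * images_per_call)

n_images, overhead, per_image = 1000, 10.0, 2.0
separate = total_time(n_images, overhead, per_image, 1)    # 1000 launches
merged   = total_time(1, overhead, per_image, n_images)    # 1 launch, one batch
print(separate, merged)  # 12000.0 vs 2010.0 microseconds
```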

19.
To address the inefficiency of the MapReduce computing model on the Hadoop cloud platform when processing graph data, this paper proposes MyBSP, a graph data processing framework similar to Google's Pregel. First, the execution mechanism and shortcomings of MapReduce are analyzed; second, the structure, workflow, and main interfaces of the MyBSP framework are described; finally, based on an analysis of the PageRank graph algorithm, a PageRank algorithm on the MyBSP framework is designed and implemented. Experimental results show that, compared with the MapReduce-based algorithm, the MyBSP-based graph processing algorithm improves iterative processing performance by a factor of 1.9 to 3 and reduces execution time by 67%, making efficient graph data processing practical.
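A minimal sketch of BSP-style PageRank with Pregel-like supersteps (plain Python; MyBSP's actual interfaces are not given in the abstract):

```python
# In each superstep every vertex consumes incoming rank messages, updates its
# rank, and sends rank/out_degree to its neighbors; a barrier separates steps.
from collections import defaultdict

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
rank = {v: 1.0 / len(graph) for v in graph}
d = 0.85  # damping factor

for superstep in range(30):
    inbox = defaultdict(float)
    for v, neighbors in graph.items():            # "send" phase
        share = rank[v] / len(neighbors)
        for u in neighbors:
            inbox[u] += share
    # Barrier between supersteps, then the "compute" phase.
    rank = {v: (1 - d) / len(graph) + d * inbox[v] for v in graph}

print({v: round(r, 3) for v, r in rank.items()})
```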

20.
In the era of Big Data, huge amounts of structured and unstructured data are produced daily by a myriad of ubiquitous sources. Big Data is difficult to work with and requires massively parallel software running on a large number of computers. MapReduce is a recent programming model that simplifies writing distributed applications that handle Big Data. For MapReduce to work, it has to divide the workload among the computers in a network. Consequently, the performance of MapReduce strongly depends on how evenly it distributes this workload. This can be a challenge, especially in the presence of data skew. In MapReduce, workload distribution depends on the algorithm that partitions the data. One way to avoid the problems inherent in data skew is to use data sampling. How evenly the partitioner distributes the data depends on how large and representative the sample is, and on how well the samples are analyzed by the partitioning mechanism. This paper proposes a partitioning algorithm that improves load balancing and reduces memory consumption, via an improved sampling algorithm and partitioner. To evaluate the proposed algorithm, its performance was compared against the state-of-the-art partitioning mechanism employed by TeraSort. Experiments show that the proposed algorithm is faster, more memory efficient, and more accurate than the current implementation.
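A minimal sketch of sample-driven partitioning (the paper's improved sampler and partitioner differ; reservoir sampling is shown here only because it bounds the sampler's memory, matching the memory-efficiency goal):

```python
# Keep a fixed-size uniform sample of the key stream with reservoir sampling,
# then derive splitter keys from the sample's quantiles for the partitioner.
import random

def reservoir_sample(stream, k):
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)
        else:
            j = random.randint(0, i)   # replace with probability k/(i+1)
            if j < k:
                sample[j] = x
    return sample

def splitters_from_sample(sample, num_reducers):
    s = sorted(sample)
    return [s[len(s) * i // num_reducers] for i in range(1, num_reducers)]

random.seed(7)
stream = (random.gauss(0, 1) for _ in range(100000))  # keys arrive as a stream
sample = reservoir_sample(stream, k=1000)             # memory stays O(k)
print(splitters_from_sample(sample, num_reducers=4))
```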
