Similar Documents
Found 17 similar documents.
1.
Although Hadoop, built around MapReduce and the Hadoop Distributed File System (HDFS), has been applied successfully in large-scale, data-intensive commercial settings, it performs poorly when a working dataset must be reused across multiple parallel operations. As a complement, this paper introduces Spark. It first reviews the basic concepts and design ideas behind Hadoop's MapReduce and HDFS, then presents the fundamentals of Spark with particular emphasis on the resilient distributed dataset (RDD), and finally compares Hadoop and Spark through experiments and analysis.
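A minimal PySpark sketch (illustrative, not from the paper) of the RDD reuse the abstract highlights: the dataset is cached in memory once and then shared by several parallel operations, exactly the pattern where a chain of MapReduce jobs would have to re-read HDFS. The file path and filter terms are hypothetical.

```python
from pyspark import SparkContext

sc = SparkContext(appName="rdd-reuse-sketch")

# Load once from HDFS and keep the records in cluster memory.
# (Hypothetical path; .cache() marks the RDD for in-memory reuse.)
logs = sc.textFile("hdfs:///data/app.log").cache()

# Both actions below reuse the same in-memory RDD instead of re-scanning
# HDFS, which is what a sequence of MapReduce jobs would do.
errors = logs.filter(lambda line: "ERROR" in line).count()
warnings = logs.filter(lambda line: "WARN" in line).count()
print(errors, warnings)

sc.stop()
```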

2.
Hadoop brings together subprojects such as MapReduce, HDFS, HBase, Avro, and Pig; the parallel programming model (MapReduce) and the distributed file system (HDFS) are its core technologies. By combining the MapReduce programming model with Hadoop, users can build distributed programs that mine hidden, novel, and decision-relevant relationships and models from massive data, constructing data mining systems on the Hadoop platform.
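As an illustration of this development style, here is a hedged Hadoop Streaming sketch in Python; the canonical word count stands in for the paper's mining logic, and the jar path in the comment is indicative rather than exact.

```python
#!/usr/bin/env python3
# wordcount_streaming.py -- a minimal Hadoop Streaming sketch. Typical launch
# (jar location varies by distribution):
#   hadoop jar hadoop-streaming.jar \
#     -input /data/in -output /data/out \
#     -mapper "wordcount_streaming.py map" \
#     -reducer "wordcount_streaming.py reduce" \
#     -file wordcount_streaming.py
import sys

def mapper():
    # Emit (word, 1); Hadoop shuffles and sorts the pairs by key.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Keys arrive sorted, so all counts for one word are contiguous.
    current, total = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current and current is not None:
            print(f"{current}\t{total}")
            total = 0
        current = word
        total += int(n)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```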

3.
Hadoop is an open-source distributed parallel programming framework that implements the MapReduce computing model; with Hadoop, programmers can easily write distributed parallel programs and run them on computer clusters to process massive data. This paper describes the design and implementation of a Hadoop-based system, covering how to improve the execution efficiency and speed of the Hadoop Distributed File System (HDFS) and MapReduce, as well as how to install, deploy, and run Hadoop.

4.
This article introduces the Hadoop distributed computing architecture and how its core technologies, HDFS (Hadoop Distributed File System) and MapReduce, process big data, and analyzes the advantages that make the technology well suited to analyzing massive volumes of network security events. It proposes a Hadoop-based method for network security event analysis and validates its feasibility through a case study.

5.
Although the extreme learning machine (ELM) trains quickly, it involves heavy matrix computation and therefore remains slow on large datasets. Building on a study of Spark's parallel computation over distributed datasets, this work designs a parallel scheme for the core step, matrix multiplication, and implements a parallelized ELM algorithm on Spark. For comparison, a Hadoop MapReduce version was also implemented. Experiments show that the Spark-based algorithm runs markedly faster than the Hadoop MapReduce version, and that Spark's efficiency advantage grows with the size of the data.
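The paper's own partitioning scheme is not given in the abstract, but its core step, distributed matrix multiplication on Spark, can be sketched with MLlib's BlockMatrix; the matrix contents and block sizes below are toy values, and H stands for the ELM's hidden-layer output matrix.

```python
from pyspark import SparkContext
from pyspark.mllib.linalg.distributed import IndexedRow, IndexedRowMatrix

sc = SparkContext(appName="elm-matmul-sketch")

# Toy 6x4 matrix standing in for the ELM hidden-layer output H.
rows = sc.parallelize([IndexedRow(i, [float(i + j) for j in range(4)])
                       for i in range(6)])
H = IndexedRowMatrix(rows).toBlockMatrix(rowsPerBlock=2, colsPerBlock=2)

# H^T * H is the Gram matrix needed when solving for ELM output weights;
# each block-level product runs as a parallel Spark task.
gram = H.transpose().multiply(H)
print(gram.toLocalMatrix())

sc.stop()
```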

6.
The rise of Spark has posed a strong challenge to Hadoop and its ecosystem, currently the most popular solution for big data problems; some have even suggested that Spark is on track to replace Hadoop. However, because Hadoop and Spark have distinct characteristics, they suit different application scenarios, and Spark cannot fully replace Hadoop. We therefore analyze the application scenarios of each: the paper first introduces the relevant technologies and ecosystems of Hadoop and Spark, then analyzes their characteristics in detail, and finally, based on those characteristics, describes the application scenarios to which each is suited.

7.
Addressing the low efficiency and data-storage limitations of existing graph processing and graph management frameworks, this paper proposes a mechanism suited to large-scale graph data processing. It first analyzes the strengths and weaknesses of current graph processing models and graph storage frameworks. Then, based on an analysis of the characteristics of distributed computing, it designs a graph processing framework around three elements: a partitioning algorithm suited to large graphs, optimized data extraction with caching, and a mechanism coupling the computation layer with the persistence layer. Finally, experiments with the PageRank and single-source shortest path (SSSP) algorithms compare the framework against MapReduce and against Spark with HDFS as the persistence layer. The results show the proposed framework to be 90 times faster than MapReduce and twice as fast as Spark over HDFS, meeting the needs of efficient graph data processing.
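PageRank, one of the paper's two benchmark algorithms, is compact enough to sketch in PySpark. This is an illustrative baseline, not the paper's framework; the edge list and iteration count are hypothetical.

```python
from pyspark import SparkContext

sc = SparkContext(appName="pagerank-sketch")

edges = sc.parallelize([("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")])
links = edges.groupByKey().cache()      # adjacency lists, reused every round
ranks = links.mapValues(lambda _: 1.0)  # initial rank per vertex

for _ in range(10):  # fixed iteration count for simplicity
    # Each vertex spreads its rank evenly over its out-neighbours.
    contribs = links.join(ranks).flatMap(
        lambda kv: [(dst, kv[1][1] / len(kv[1][0])) for dst in kv[1][0]])
    ranks = contribs.reduceByKey(lambda a, b: a + b) \
                    .mapValues(lambda r: 0.15 + 0.85 * r)

print(ranks.collect())
sc.stop()
```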

8.
As e-commerce grows, traditional Hadoop can no longer fully meet demands on resource utilization and computing speed, so this work adopts Spark, a low-latency, in-memory engine, as the computation layer. It builds an e-commerce behavior analysis system that uses Spark Core and Spark SQL for offline analysis, Spark Streaming for real-time analysis, the Hadoop Distributed File System (HDFS) for distributed storage, and YARN for resource management and job scheduling; data are collected and staged with Flume and Kafka and processed with Spark. Testing shows the system performs well and has solid practical value.
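The real-time path of such a system can be sketched with Spark Structured Streaming reading from Kafka. The broker address, topic name, and message layout below are assumptions, and the spark-sql-kafka connector must be on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("behavior-stream-sketch").getOrCreate()

# Hypothetical broker and topic; Flume/Kafka feed events into "user-behavior".
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "user-behavior")
          .load())

# Kafka delivers raw bytes; assume each value is a "userId,action" CSV pair.
parsed = (events.selectExpr("CAST(value AS STRING) AS line", "timestamp")
          .selectExpr("split(line, ',')[1] AS action", "timestamp"))

# Count actions per one-minute window, printed to the console for the sketch.
counts = parsed.groupBy(window(col("timestamp"), "1 minute"),
                        col("action")).count()
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```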

9.
Application of the Hadoop Platform in Cloud Computing (cited by 4: 0 self-citations, 4 by others)
Wang Hongyu. Software, 2011, 32(4): 36-38, 50
Cloud computing is one of the most closely watched emerging technologies in the industry. Hadoop, an open-source software platform for large-scale distributed computing, is widely used in the cloud computing field. Based on an in-depth analysis of Hadoop's main components, the Hadoop Distributed File System (HDFS) and the MapReduce computing model, this paper builds a Hadoop-based cloud computing model and shows experimentally that the model can carry out distributed data processing tasks effectively.

10.
With internet users and content growing exponentially, similarity computation at large scale places ever higher demands on algorithmic efficiency. To improve execution efficiency, this work analyzes the algorithm's execution shortcomings under the MapReduce architecture and, exploiting Spark's suitability for iterative and interactive workloads, ports it from MapReduce to Spark using a two-dimensional partitioning scheme; parameter tuning and memory optimization improve performance further. Experiments with two datasets on three clusters of different sizes show that, compared with MapReduce, the algorithm on Spark runs 4.715 times faster on average and consumes on average only 24.86% of Hadoop's energy, an energy-efficiency gain of roughly 4 times.
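As an off-the-shelf illustration (not the paper's two-dimensional partitioning code), MLlib's columnSimilarities computes cosine similarity between every pair of columns of a distributed matrix in parallel:

```python
from pyspark import SparkContext
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.linalg.distributed import RowMatrix

sc = SparkContext(appName="similarity-sketch")

# Toy user-item matrix: each row a user, each column an item.
mat = RowMatrix(sc.parallelize([
    Vectors.dense([1.0, 0.0, 3.0]),
    Vectors.dense([4.0, 5.0, 0.0]),
    Vectors.dense([0.0, 2.0, 6.0]),
]))

# Exact cosine similarity between all column pairs; passing a threshold
# argument would switch to the approximate DIMSUM sampling scheme instead.
sims = mat.columnSimilarities()
for entry in sims.entries.collect():
    print(entry.i, entry.j, entry.value)

sc.stop()
```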

11.
Given the poor retrieval performance of single-node database audit systems, this work explores rebuilding the retrieval and storage architecture of a database audit system with Hadoop in pseudo-distributed mode and HBase's column-oriented storage model, focusing on integrating the HDFS storage mechanism, the MapReduce computing framework, and the HBase data model to improve real-time retrieval and comprehensive analysis. The redesign effectively improves retrieval performance; given the reliability requirements and sheer volume of the data, the paper closes with the prospect of deploying fully distributed Hadoop and HBase clusters in production.
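The HBase side of such a redesign might look like the happybase sketch below (our choice of client; the abstract names no library). The table name, column family, and row-key layout are hypothetical, and HBase's Thrift gateway must be running.

```python
import happybase

conn = happybase.Connection("localhost")  # Thrift gateway of the HBase node
table = conn.table("audit_log")

# A "user|timestamp" row key keeps one user's audit records adjacent on
# disk, so a prefix scan retrieves them without touching other regions.
table.put(b"alice|20240101120000",
          {b"d:sql": b"SELECT * FROM accounts", b"d:result": b"ok"})

for key, data in table.scan(row_prefix=b"alice|"):
    print(key, data)

conn.close()
```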

12.
The rapid development of big data and cloud computing provides technical support for mining the rich scientific and economic value of meteorological data, and has promoted wide use of Hadoop, its file storage system (HDFS, Hadoop Distributed File System), and its distributed computing model in meteorological data processing. Because meteorological data exhibit the 4V characteristics of big data, new processing algorithms are needed to improve efficiency. Drawing on the principles of decision tree algorithms, this work builds a random forest model on a Hadoop cloud platform, opening a new possibility for running data mining algorithms in the cloud. The meteorological big-data cloud platform is designed around the CART (Classification And Regression Trees) mining algorithm, adopts the Hadoop architecture and the MapReduce workflow, and is deployed as a cluster. Its overall architecture comprises an infrastructure layer, a data management and processing layer, and an application layer; the design shortens decision tree construction time and supports efficient processing, mining, and analysis of meteorological data.
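The forest-building idea can be sketched with Spark MLlib in place of the paper's own Hadoop implementation, a deliberate substitution; the HDFS path and column names are hypothetical, and the label column is assumed to be numeric.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier

spark = SparkSession.builder.appName("weather-rf-sketch").getOrCreate()

# Hypothetical observation table with a 0/1 "rain" label column.
df = spark.read.csv("hdfs:///weather/obs.csv", header=True, inferSchema=True)
features = VectorAssembler(
    inputCols=["temperature", "humidity", "pressure"],
    outputCol="features").transform(df)

# Each of the 50 CART-style trees trains on a bootstrap sample in parallel.
model = RandomForestClassifier(
    labelCol="rain", featuresCol="features", numTrees=50).fit(features)
print(model.featureImportances)

spark.stop()
```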

13.
Starting from the three service layers of cloud computing, this paper proposes a distributed, parallel data collection and analysis system built on a cloud platform. First, the distributed file system provided by the Hadoop cloud platform improves data access speed and strengthens fault tolerance. On this basis, the MapReduce programming model parallelizes a data-stream ensemble classification algorithm, improving classification and mining efficiency. Finally, Web Service technology is used to build an SOA architecture that integrates the platform. Test results show that the detection system runs efficiently with high detection accuracy, and has practical value worth promoting.

14.
Hadoop MapReduce has evolved into an important industry standard for massively parallel data processing and has become widely adopted for a variety of use cases. Recent works have shown that indexes can improve the performance of selective MapReduce jobs dramatically. However, one major weakness of existing approaches is high index creation cost. We present HAIL (Hadoop Aggressive Indexing Library), a novel indexing approach for HDFS and Hadoop MapReduce. HAIL creates different clustered indexes over terabytes of data with minimal, often invisible costs, and it dramatically improves runtimes of several classes of MapReduce jobs. HAIL features two different indexing pipelines: static indexing and adaptive indexing. HAIL static indexing efficiently indexes datasets while uploading them to HDFS. Thereby, HAIL leverages the default replication of Hadoop and enhances it with logical replication. This allows HAIL to create multiple clustered indexes for a dataset, e.g., one for each physical replica. Still, in terms of upload time, HAIL matches or even improves over the performance of standard HDFS. Additionally, HAIL adaptive indexing allows for automatic, incremental indexing at job runtime with minimal runtime overhead. For example, HAIL adaptive indexing can completely index a dataset as a byproduct of only four MapReduce jobs while incurring an overhead as low as 11% for the very first of those jobs only. In our experiments, we show that HAIL improves job runtimes by up to 68× over Hadoop. This article is an extended version of the VLDB 2012 paper (Dittrich et al., PVLDB 5(11):1591-1602, 2012).

15.
Large-scale data-intensive cloud computing with the MapReduce framework is becoming pervasive for the core business of many academic, government, and industrial organizations. Hadoop, a state-of-the-art open source project, is by far the most successful realization of the MapReduce framework. While MapReduce is easy to use, efficient, and reliable for data-intensive computations, the excessive configuration parameters in Hadoop impose unexpected challenges on running various workloads with a Hadoop cluster effectively. Consequently, developers who have less experience with the Hadoop configuration system may devote significant effort to writing an application with poor performance, either because they have no idea how these configurations would influence performance, or because they are not even aware that these configurations exist. There is a pressing need for comprehensive analysis and performance modeling to ease MapReduce application development and guide performance optimization under different Hadoop configurations. In this paper, we propose a statistical analysis approach to identify the relationships among workload characteristics, Hadoop configurations, and workload performance. We apply principal component analysis and cluster analysis to 45 different metrics, deriving relationships between workload characteristics and the corresponding performance under different Hadoop configurations. Regression models are also constructed that attempt to predict the performance of various workloads under different Hadoop configurations. Several non-intuitive relationships between workload characteristics and performance are revealed through our analysis, and the experimental results demonstrate that our regression models accurately predict the performance of MapReduce workloads under different Hadoop configurations.
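The methodology reads as PCA over workload metrics followed by regression on the components; a compact sketch with synthetic stand-ins for the 45 measured metrics might look like this:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
metrics = rng.random((200, 45))          # 200 synthetic runs x 45 metrics
runtime = metrics @ rng.random(45) + rng.normal(0, 0.1, 200)  # fake target

# Reduce the correlated metrics to a few components, then regress runtime
# on those components, as the paper's modeling step does on real data.
components = PCA(n_components=5).fit_transform(metrics)
model = LinearRegression().fit(components, runtime)
print("R^2 on principal components:", model.score(components, runtime))
```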

16.
Nowadays, many organizations analyze their data with the MapReduce paradigm, most of them using the popular Apache Hadoop framework. As the data size managed by MapReduce applications steadily increases, the need to improve Hadoop's performance also grows. Existing modifications of Hadoop (e.g., the Mellanox Unstructured Data Accelerator) attempt to improve performance by changing some of its underlying subsystems. However, they are not always able to cope with all of its performance bottlenecks, or they hinder its portability. Furthermore, new frameworks like Apache Spark or DataMPI can achieve good performance improvements, but they do not keep compatibility with existing MapReduce applications. This paper proposes Flame-MR, a new event-driven MapReduce architecture that increases Hadoop performance by avoiding memory copies and pipelining data movements, without modifying the source code of the applications. The performance evaluation on two representative systems (an HPC cluster and a public cloud platform) has shown experimental evidence of significant performance increases, reducing execution time by up to 54% on the Amazon EC2 cloud.

17.
With new-media video services growing rapidly, traditional single-machine video transcoding has become a bottleneck. Building on research into the Hadoop cloud computing platform and FFmpeg, the current mainstream audio/video processing tool, this paper proposes a new video transcoding scheme that performs distributed transcoding using Hadoop's two cores, HDFS (Hadoop Distributed File System) and the MapReduce programming model, and designs the distributed transcoding workflow in detail. Experiments show that the scheme improves transcoding efficiency considerably. Segment size also affects transcoding time: as the segment size increases, the transcoding time for the same video first falls and then rises, and among the sizes tested a segment size of 32 MB gives the best transcoding time.
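The segment-then-transcode pipeline the paper distributes can be sketched on a single node with FFmpeg; paths, codec, and bitrate are illustrative, with the 30-second cut standing in for the paper's size-based 32 MB segments.

```python
import subprocess

# 1. Split without re-encoding; in the paper each segment becomes the input
#    of one map task stored on HDFS.
subprocess.run(["ffmpeg", "-i", "input.mp4", "-c", "copy", "-f", "segment",
                "-segment_time", "30", "seg_%03d.mp4"], check=True)

# 2. Transcode one segment (the work a mapper would perform in parallel).
subprocess.run(["ffmpeg", "-i", "seg_000.mp4", "-c:v", "libx264",
                "-b:v", "1M", "out_000.mp4"], check=True)
```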
