Similar Documents
20 similar documents found.
1.
Computer, 2008, 41(4): c1-c1

2.
In recent years, as data volumes have kept growing, data-intensive computing tasks have become increasingly heavy. How to compute quickly and efficiently over large-scale data sets has become a major research direction in data-intensive computing. In the past few years, researchers have used novel hardware processors to accelerate data-intensive computing and have designed different acceleration algorithms tailored to the characteristics of each new processor. This paper surveys research on data-intensive computing with novel hardware processors: it first summarizes the characteristics of these processors, then analyzes the performance of FPGAs, GPUs, and other hardware and the effect of each processor type on data-intensive computing, and finally proposes directions for further research.

3.
Research Progress on Programming Models for Data-Intensive Computing
As an emerging computing paradigm, cloud computing has attracted wide attention from both academia and industry. Cloud computing centers on Internet services and applications, and service providers must store and analyze massive amounts of data. To process Web-scale data at low cost and high efficiency, the major Internet companies have developed distributed programming systems on large-scale clusters built from commodity servers. A programming model lowers the difficulty of programming on large-scale clusters and lets programs make full use of cluster resources, but designing such a model poses great challenges. This paper first describes the characteristics of data-intensive computing and identifies the basic problems a programming model must solve; it then surveys representative programming models worldwide and compares and analyzes their features; finally, it summarizes current problems and future trends.
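To make the programming-model idea above concrete, here is a minimal sketch, in Python, of the word-count computation that systems of this kind typically express; the functions `map_phase`, `shuffle`, and `reduce_phase` are illustrative names, not the API of any system surveyed in the paper.

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle step: group values by key, as a cluster runtime would do across nodes."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

if __name__ == "__main__":
    docs = ["data intensive computing", "cloud computing and data"]
    print(reduce_phase(shuffle(map_phase(docs))))  # e.g. {'data': 2, 'computing': 2, ...}
```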

4.
As high-performance computers are increasingly applied to large-scale data processing, the storage system is becoming the main bottleneck limiting data-processing efficiency. After analyzing several key factors that affect I/O performance in data-intensive computing, this paper proposes building a cooperative non-volatile cache from the local storage of compute nodes, using a distributed storage architecture to accelerate a centralized one. The method coordinates the distributed local storage resources at the application layer and uses non-volatile storage media to form a large cache space that holds the intermediate results of large-scale data analysis, achieving a high cache hit rate. It also applies concurrency-constraint control to avoid I/O contention, fully exploiting the performance advantages of local storage to guarantee the cache's acceleration effect and thereby effectively improving the I/O efficiency of large-scale data processing. Tests on multiple platforms and I/O patterns confirm the effectiveness of the method: aggregate I/O bandwidth scales well, and the overall performance of typical data-intensive applications improves by up to a factor of six.
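A rough sketch of the two mechanisms described above, caching intermediate results on node-local storage and bounding I/O concurrency, is given below; the class `LocalCache` and its parameters are assumptions for illustration and do not come from the paper.

```python
import os
import threading

class LocalCache:
    """Toy node-local cache: spill intermediate results to a local directory,
    and cap the number of concurrent writes to avoid I/O contention."""

    def __init__(self, cache_dir="/tmp/node_cache", max_concurrent_io=4):
        os.makedirs(cache_dir, exist_ok=True)
        self.cache_dir = cache_dir
        self.io_slots = threading.Semaphore(max_concurrent_io)  # concurrency constraint

    def _path(self, key):
        return os.path.join(self.cache_dir, key)

    def put(self, key, data: bytes):
        with self.io_slots:                      # block if too many writers are active
            with open(self._path(key), "wb") as f:
                f.write(data)

    def get(self, key):
        path = self._path(key)
        if os.path.exists(path):                 # cache hit: serve from local storage
            with open(path, "rb") as f:
                return f.read()
        return None                              # cache miss: caller reads from the backend

cache = LocalCache()
cache.put("partition-0", b"intermediate result")
print(cache.get("partition-0"))
```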

5.
The Changing Paradigm of Data-Intensive Computing
Computer, 2009, 42(1): 26-34
Through the development of new classes of software, algorithms, and hardware, data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements.

6.
We review fast networking technologies for both wide-area and high performance cluster computer systems. We describe our experiences in constructing asynchronous transfer mode (ATM)-based local- and wide-area clusters and the tools and technologies this experience led us to develop. We discuss our experiences using Internet Protocol on such systems as well as native ATM protocols and the problems facing wide-area integration of cluster systems. We are presently constructing Beowulf-class computer clusters using a mix of Fast Ethernet and Gigabit Ethernet technology and we anticipate how such systems will integrate into a new local-area Gigabit Ethernet network and what technologies will be used for connecting shared HPC resources across wide-areas. High latencies on wide-area cluster systems led us to develop a metacomputing problem-solving environment known as distributed information systems control world (DISCWorld). We summarize our main developments in this project as well as the key features and research directions for software to exploit computational services running on fast networked cluster systems.

7.
The deluge of data that future applications must process—in domains ranging from science to business informatics—creates a compelling argument for substantially increased R&D targeted at discovering scalable hardware and software solutions for data-intensive problems.

8.
9.
Hardware Architectures for Reconfigurable Computing
This paper first discusses the basic meaning and characteristics of reconfigurable computing, pointing out that its essence is to break through the limitation that a general-purpose microprocessor is programmable only in the time dimension and an ASIC only in the space dimension, achieving programmability in both time and space. It then systematically surveys the basic techniques of FPGA-based reconfigurable computing hardware, focusing on the granularity of logic cells and the routing of inter-cell interconnect. Finally, it presents several typical architectural frameworks based on reconfigurable computing.

10.
Genes in an organism's DNA (genome) have embedded in them information about proteins, which are the molecules that do most of a cell's work. A typical bacterial genome contains on the order of 5,000 genes. Mammalian genomes can contain tens of thousands of genes. For each genome sequenced, the challenge is to identify protein components (proteome) being actively used for a given set of conditions. Fundamentally, sequence alignment is a sequence matching problem focused on unlocking protein information embedded in the genetic code, making it possible to assemble a "tree of life" by comparing new sequences against all sequences from known organisms. But the memory footprint of sequence data is growing more rapidly than per-node core memory. Despite years of research and development, high-performance sequence alignment applications either do not scale well, cannot accommodate very large databases in core, or require special hardware. We have developed a high-performance sequence alignment application, ScalaBLAST, which accommodates very large databases and which scales linearly to as many as thousands of processors on both distributed memory and shared memory architectures, representing a substantial improvement over the current state-of-the-art in high-performance sequence alignment with scaling and portability. ScalaBLAST relies on a collection of techniques—distributing the target database over available memory, multilevel parallelism to exploit concurrency, parallel I/O, and latency hiding through data prefetching—to achieve high performance and scalability. This demonstrated approach of database sharing combined with effective task scheduling should have broad-ranging applications to other informatics-driven sciences.
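As a loose illustration of two of the listed techniques, partitioning the target database across workers and hiding latency by loading the next partition while the current one is processed, the sketch below uses plain Python threads; it is not ScalaBLAST code, and every name in it is invented for the example.

```python
import threading

def partition(database, num_parts):
    """Split the target database into num_parts roughly equal slices."""
    size = (len(database) + num_parts - 1) // num_parts
    return [database[i:i + size] for i in range(0, len(database), size)]

def load(slice_):
    """Stand-in for reading and decompressing a database slice from storage."""
    return list(slice_)

def process(slices, align):
    """Align against one slice while the next one is loaded in the background."""
    current = load(slices[0])
    for i, _ in enumerate(slices):
        prefetched = {}
        loader = None
        if i + 1 < len(slices):
            loader = threading.Thread(
                target=lambda s=slices[i + 1]: prefetched.update(data=load(s)))
            loader.start()              # latency hiding: overlap the load with compute
        for record in current:
            align(record)               # stand-in for the sequence-alignment kernel
        if loader:
            loader.join()
            current = prefetched["data"]

# Toy run: process the slices of a tiny database, prefetching each next slice.
process(partition(["acgt", "ttga", "ccgg", "atat"], 2), align=print)
```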

11.
Today, campus grids provide users with easy access to thousands of CPUs. However, it is not always easy for nonexpert users to harness these systems effectively. A large workload composed in what seems to be the obvious way by a naive user may accidentally abuse shared resources and achieve very poor performance. To address this problem, we argue that campus grids should provide end users with high-level abstractions that allow for the easy expression and efficient execution of data-intensive workloads. We present one example of an abstraction—All-Pairs—that fits the needs of several applications in biometrics, bioinformatics, and data mining. We demonstrate that an optimized All-Pairs abstraction is easier to use than the underlying system, achieves performance orders of magnitude better than the obvious but naive approach, and is both faster and more efficient than a tuned conventional approach. This abstraction has been in production use for one year on a 500-CPU campus grid at the University of Notre Dame and has been used to carry out a groundbreaking analysis of biometric data.
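The All-Pairs pattern itself is easy to state. Below is a minimal sketch, not the Notre Dame implementation: `all_pairs` evaluates a user-supplied comparison function over the cross product of two sets with a local process pool, whereas a campus-grid version would dispatch chunks of the pair matrix to remote workers.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def all_pairs(set_a, set_b, compare, max_workers=4):
    """Evaluate compare(a, b) for every pair in set_a x set_b and
    return the results as a matrix indexed by [i][j]."""
    pairs = list(product(set_a, set_b))
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        flat = list(pool.map(compare, [a for a, _ in pairs], [b for _, b in pairs]))
    n_b = len(set_b)
    return [flat[i * n_b:(i + 1) * n_b] for i in range(len(set_a))]

def similarity(a, b):
    """Toy comparison; a biometrics workload would compare iris images here."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

if __name__ == "__main__":
    templates = ["aabb", "abab", "bbbb"]
    print(all_pairs(templates, templates, similarity))
```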

12.
13.
Ensemble forecasting needs large amounts of high-performance computing resources to analyze and process massive data in real time, so efficient resource management and data sharing can effectively improve the efficiency and timeliness of forecasts. Based on an analysis of the characteristics and requirements of ensemble forecasting, this paper designs a massive-data management scheme based on metadata extraction and a high-performance computing resource management scheme based on virtual organizations, and uses grid technology to manage and share these resources effectively. This provides meteorological scientists distributed across regions and organizations with an efficiently shared collaborative development platform, improving the timeliness of forecast results and advancing mesoscale weather forecasting.

14.
As the scale of parallel systems continues to grow, fault management of these systems is becoming a critical challenge. While existing research mainly focuses on developing or improving fault tolerance techniques, a number of key issues remain open. In this paper, we propose runtime strategies for spare node allocation and job rescheduling in response to failure prediction. These strategies, together with failure predictor and fault tolerance techniques, construct a runtime system called FARS (Fault-Aware Runtime System). In particular, we propose a 0-1 knapsack model and demonstrate its flexibility and effectiveness for reallocating running jobs to avoid failures. Experiments, by means of synthetic data and real traces from production systems, show that FARS has the potential to significantly improve system productivity (i.e., performance and reliability).
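To give a flavor of the 0-1 knapsack formulation mentioned above (the objective and constraint here are assumptions for illustration, not taken from FARS): treat spare capacity as the knapsack, each job on a node predicted to fail as an item whose weight is the nodes it needs and whose value is the work saved by moving it, and choose the subset of jobs to migrate that maximizes saved work.

```python
def select_jobs_to_migrate(jobs, spare_nodes):
    """0-1 knapsack by dynamic programming.
    jobs: list of (nodes_needed, work_saved); spare_nodes: capacity.
    Returns (max work saved, indices of jobs to migrate)."""
    best = [[0] * (spare_nodes + 1) for _ in range(len(jobs) + 1)]
    for i, (need, value) in enumerate(jobs, start=1):
        for cap in range(spare_nodes + 1):
            best[i][cap] = best[i - 1][cap]                     # skip job i
            if need <= cap:                                     # or migrate it
                best[i][cap] = max(best[i][cap], best[i - 1][cap - need] + value)
    # Trace back which jobs were chosen.
    chosen, cap = [], spare_nodes
    for i in range(len(jobs), 0, -1):
        if best[i][cap] != best[i - 1][cap]:
            chosen.append(i - 1)
            cap -= jobs[i - 1][0]
    return best[len(jobs)][spare_nodes], sorted(chosen)

# Example: 5 spare nodes; (nodes_needed, work_saved) per at-risk job.
print(select_jobs_to_migrate([(2, 40), (3, 50), (4, 65)], 5))   # -> (90, [0, 1])
```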

15.
16.
A Portlet-Based High-Performance Computing Portal
This paper presents HPCP, a Portlet-based high-performance computing portal that provides a simple, general, secure, and customizable Web job-management system supporting job submission, query, and termination, directory listing and real-time viewing of file contents, and unbuffered streaming download of large files. Theoretical analysis and practical tests show that HPCP offers the user-friendliness and interactivity of a desktop application, together with good extensibility and security.

17.
Storage backends of parallel compute clusters are still based mostly on magnetic disks, while newer and faster storage technologies such as flash-based SSDs or non-volatile random access memory (NVRAM) are deployed within compute nodes. Including these new storage technologies into scientific workflows is unfortunately today a mostly manual task, and most scientists therefore do not take advantage of the faster storage media. One approach to systematically include node-local SSDs or NVRAMs into scientific workflows is to deploy ad hoc file systems over a set of compute nodes, which serve as temporary storage systems for single applications or longer-running campaigns. This paper presents results from the Dagstuhl Seminar 17202 "Challenges and Opportunities of User-Level File Systems for HPC" and discusses application scenarios as well as design strategies for ad hoc file systems using node-local storage media. The discussion includes open research questions, such as how to couple ad hoc file systems with the batch scheduling environment and how to schedule stage-in and stage-out processes of data between the storage backend and the ad hoc file systems. Also presented are strategies to build ad hoc file systems by using reusable components for networking and how to improve storage device compatibility. Various interfaces and semantics are presented, for example those used by the three ad hoc file systems BeeOND, GekkoFS, and BurstFS. Their presentation covers a range from file systems running in production to cutting-edge research focusing on reaching the performance limits of the underlying devices.
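A rough, assumed illustration of the stage-in/stage-out idea discussed above (not code from BeeOND, GekkoFS, or BurstFS): a job wrapper copies its inputs from the parallel file system to node-local scratch before the application runs and copies declared outputs back afterwards. All paths and the command below are hypothetical.

```python
import shutil
import subprocess
from pathlib import Path

def run_with_staging(cmd, inputs, outputs, backend="/pfs/project", scratch="/local/scratch"):
    """Stage inputs from the storage backend to node-local scratch, run the job
    there, then stage the declared outputs back to the backend."""
    scratch_dir = Path(scratch)
    scratch_dir.mkdir(parents=True, exist_ok=True)
    for name in inputs:                                   # stage-in
        shutil.copy(Path(backend) / name, scratch_dir / name)
    subprocess.run(cmd, cwd=scratch_dir, check=True)      # compute against fast local storage
    for name in outputs:                                  # stage-out
        shutil.copy(scratch_dir / name, Path(backend) / name)

# Hypothetical usage: the solver reads mesh.dat locally and writes result.dat.
# run_with_staging(["./solver", "mesh.dat"], inputs=["mesh.dat"], outputs=["result.dat"])
```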

18.
19.
With the deepening application of cloud computing, providing users with high-quality remote high-performance cloud services has become a current research focus. This paper proposes a flexible high-performance computing service platform oriented to cloud services. Using virtualization, graphical, and object-oriented technologies, the platform designs and implements a new high-performance computing environment featuring a high degree of tool integration and a friendly user interface, providing the necessary support for remote cloud services.

20.
李文君, 杭德全, 张果. 《计算机工程》, 2010, 36(22): 283-285
This paper proposes RDMS, a scheduling algorithm that reduces the volume of data transferred. By jointly considering the data dependencies among tasks, the resource utilization of hardware tasks, and the communication volume between tasks during scheduling, and by using dynamic programming, the algorithm reduces both the communication between the microprocessor and the FPGA reconfigurable coprocessor and the consumption of FPGA reconfigurable resources. Experimental results show that RDMS improves the overall performance of hardware tasks mapped onto the FPGA device and effectively lowers communication and reconfiguration overheads.
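RDMS itself uses dynamic programming, but the cost model it balances can be illustrated with a small brute-force sketch (all task data and the scoring below are invented for the example): choose which tasks to place on the FPGA so that the cycles saved by hardware execution outweigh the data transferred across the CPU-FPGA boundary, subject to the reconfigurable area.

```python
from itertools import combinations

def best_partition(tasks, edges, fpga_area, comm_weight=0.1):
    """Brute-force sketch of a HW/SW partitioning cost model.
    tasks: {name: (area, cycles_saved)}; edges: {(a, b): bytes exchanged}.
    Pick the FPGA task set that maximizes cycles saved minus a penalty for
    data crossing the CPU-FPGA boundary, within the reconfigurable area."""
    names = list(tasks)
    best_score, best_set = float("-inf"), set()
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            on_fpga = set(subset)
            if sum(tasks[t][0] for t in on_fpga) > fpga_area:
                continue                                        # does not fit in the fabric
            saved = sum(tasks[t][1] for t in on_fpga)
            traffic = sum(v for (a, b), v in edges.items()
                          if (a in on_fpga) != (b in on_fpga))  # crosses the boundary
            score = saved - comm_weight * traffic
            if score > best_score:
                best_score, best_set = score, on_fpga
    return best_score, best_set

tasks = {"filter": (30, 100), "fft": (50, 400), "encode": (40, 150)}  # (area, cycles saved)
edges = {("filter", "fft"): 800, ("fft", "encode"): 200}              # bytes between tasks
print(best_partition(tasks, edges, fpga_area=80))  # places filter and fft on the FPGA
```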

