Similar Literature
20 similar documents found (search time: 31 ms)
1.
罗立辉  张耀南 《软件学报》2013,24(S2):80-88
To evaluate land surface characteristics accurately from multiple perspectives, improve the simulation performance of land surface process models, and provide researchers with a complete land surface modeling system spanning data processing through simulation analysis, we built a land surface modeling system using multiple scripting languages and model-data fusion methods. The system integrates observational data, land surface process models, high-performance computing, data processing and analysis methods, and visualization techniques. Application demonstrations with two different land surface process models within this system confirm the potential of modeling systems built with different scripting languages in current high-performance computing environments, as well as the roles different visualization schemes play in land surface modeling.

2.
The Land Information System software (LIS; http://lis.gsfc.nasa.gov/, 2006) has been developed to support high-performance land surface modeling and data assimilation. LIS integrates parallel and distributed computing technologies with modern land surface modeling capabilities, and establishes a framework for easy interchange of subcomponents, such as land surface physics, input/output conventions, and data assimilation routines. The software includes multiple land surface models that can be run as a multi-model ensemble on global or regional domains with horizontal resolutions ranging from 2.5° to 1 km. The software may execute serially or in parallel on various high-performance computing platforms. In addition, the software has well-defined, standard-conforming interfaces and data structures to interface and interoperate with other Earth system models. Developed with the support of an Earth Science Technology Office (ESTO) Computational Technologies Project Round 3 cooperative agreement, LIS has helped advance NASA's Earth–Sun Division's software engineering principles and practices, while promoting portability, interoperability, and scalability for Earth system modeling. LIS was selected as a co-winner of NASA's 2005 Software of the Year Award.

3.
This paper introduces the development and characteristics of high-performance clusters and the job management systems currently popular on them, analyzes the composition of the TORQUE job management system and its job submission process, and demonstrates the system by running a concrete MPI parallel program for nanomaterial analysis on a high-performance cluster.
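
The MPI program itself is not shown in the abstract. As a minimal, hypothetical sketch of the single-program-multiple-data pattern such a program follows (using mpi4py here, an assumption since the paper does not state the implementation language), each rank computes a partial result and rank 0 combines them:

```python
# Minimal SPMD sketch, assuming mpi4py; run with: mpiexec -n 4 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank processes a disjoint, strided slice of the workload
# (a stand-in for one slice of the nanomaterial analysis).
local_sum = sum(x * x for x in range(rank, 1_000_000, size))

# Combine the partial results on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"total across {size} ranks: {total}")
```

Under TORQUE, such a program is normally submitted through a qsub job script that requests the required number of nodes and cores.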

4.
High-performance computing (HPC) is essential for both traditional and emerging scientific fields, enabling scientific activities to make progress. With the development of high-performance computing, it is foreseeable that exascale computing will be put into practice around 2020. As Moore's law approaches its limit, high-performance computing will face severe challenges when moving from exascale to zettascale, making the decade after 2020 a vital period for developing key HPC techniques. In this study, we discuss the challenges of enabling zettascale computing with respect to both hardware and software. We then present a perspective on future HPC technology evolution and revolution, leading to our main recommendations in support of zettascale computing in the coming years.

5.
Adaptive mesh refinement (AMR) techniques allow for representing a simulation's computation domain in an adaptive fashion. Although these techniques have found widespread adoption in high-performance computing simulations, visualizing their data output interactively and without cracks or artifacts remains challenging. In this paper, we present an efficient solution for direct volume rendering and hybrid implicit isosurface ray tracing of tree-based AMR (TB-AMR) data. We propose a novel reconstruction strategy, Generalized Trilinear Interpolation (GTI), to interpolate across AMR level boundaries without cracks or discontinuities in the surface normal. We employ a general sparse octree structure supporting a wide range of AMR data, and use it to accelerate volume rendering, hybrid implicit isosurface rendering and value queries. We demonstrate that our approach achieves artifact-free isosurface and volume rendering and provides higher quality output images compared to existing methods at interactive rendering rates.
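
The paper's GTI reconstruction is not reproduced in the abstract; for reference, a minimal sketch of the standard trilinear interpolation that GTI generalizes to cells adjoining coarser or finer AMR levels:

```python
import numpy as np

def trilinear(c, x, y, z):
    """Standard trilinear interpolation within one cell.
    c: 2x2x2 array of corner values; x, y, z in [0, 1] are the
    sample point's local coordinates inside the cell."""
    c00 = c[0, 0, 0] * (1 - x) + c[1, 0, 0] * x
    c10 = c[0, 1, 0] * (1 - x) + c[1, 1, 0] * x
    c01 = c[0, 0, 1] * (1 - x) + c[1, 0, 1] * x
    c11 = c[0, 1, 1] * (1 - x) + c[1, 1, 1] * x
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    return c0 * (1 - z) + c1 * z

corners = np.arange(8, dtype=float).reshape(2, 2, 2)
print(trilinear(corners, 0.5, 0.5, 0.5))  # 3.5, the cell average
```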

6.
The solution of complex global challenges in the land system, such as food and energy security, requires information on the management of agricultural systems at a high spatial and temporal resolution over continental or global extents. However, computing capacity remains a barrier to large-scale, high-resolution agricultural modeling. To model wheat production, soil carbon, and nitrogen dynamics in Australia's cropping regions at a high resolution, we developed a hybrid computing approach combining parallel processing and grid computing. The hybrid approach distributes tasks across a heterogeneous grid computing pool and fully utilizes all the resources of computers within the pool. We simulated 325 management scenarios (nitrogen application rates and stubble management) at a daily time step over 122 years, for 12,707 climate–soil zones using the Windows-based Agricultural Production Systems SIMulator (APSIM). These simulations would have taken over 30 years on a single computer. Our hybrid high performance computing (HPC) approach completed the modeling within 10.5 days—a speed-up of over 1000 times—with most jobs finishing within the first few days. The approach utilizes existing idle organization-wide computing resources and eliminates the need to translate Windows-based models to other operating systems for implementation on computing clusters. There are, however, numerous computing challenges that need to be addressed for the effective use of these techniques, and there remain several potential areas for further performance improvement. The results demonstrate the effectiveness of the approach in making high-resolution modeling of agricultural systems possible over continental and global scales.
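
The scheduler itself is not given in the abstract; the sketch below only illustrates the embarrassingly parallel structure being exploited, with run_apsim as a hypothetical stand-in for launching one Windows-based APSIM simulation:

```python
# Minimal sketch: each (scenario, zone) pair is an independent job that can
# be farmed out to any idle worker; run_apsim is a hypothetical placeholder.
from concurrent.futures import ProcessPoolExecutor
from itertools import islice, product

def run_apsim(job):
    scenario, zone = job
    return scenario, zone, sum(range(10_000))  # placeholder for one simulation

scenarios = range(325)     # management scenarios (from the abstract)
zones = range(12_707)      # climate-soil zones (from the abstract)

if __name__ == "__main__":
    jobs = islice(product(scenarios, zones), 100)  # demo on a small slice
    with ProcessPoolExecutor() as pool:
        for scenario, zone, result in pool.map(run_apsim, jobs):
            pass  # aggregate wheat, carbon, and nitrogen outputs here
```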

7.
The rapid development of space and computer technologies has made it possible to store a large amount of remotely sensed image data collected from heterogeneous sources. In particular, NASA is continuously gathering imagery data with hyperspectral Earth observing sensors such as the Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) or the Hyperion imager aboard the Earth Observing-1 (EO-1) spacecraft. The development of fast techniques for transforming the massive amount of collected data into scientific understanding is critical for space-based Earth science and planetary exploration. This paper describes commodity cluster-based parallel data analysis strategies for hyperspectral imagery, a new class of image data that comprises hundreds of spectral bands at different wavelength channels for the same area on the surface of the Earth. An unsupervised technique that integrates the spatial and spectral information in the image data using multi-channel morphological transformations is parallelized and compared to other available parallel algorithms. The code's portability, reusability and scalability are illustrated by using two high-performance parallel computing architectures: a distributed memory, multiple instruction multiple data (MIMD)-style multicomputer at the European Center for Parallelism of Barcelona, and a Beowulf cluster at NASA's Goddard Space Flight Center. Experimental results suggest that Beowulf clusters are a source of computational power that is both accessible and applicable to obtaining results in valid response times in information-extraction applications from hyperspectral imagery.
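
A common pattern in such cluster codes is spatial-domain decomposition of the hyperspectral cube. A minimal sketch of that pattern (process_block is a hypothetical stand-in for the multi-channel morphological transformations the paper parallelizes):

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_block(block):
    # Stand-in for per-block spatial/spectral processing.
    return block.mean(axis=2)

if __name__ == "__main__":
    # lines x samples x bands, sized like a small AVIRIS scene (224 bands).
    cube = np.random.rand(256, 256, 224)
    blocks = np.array_split(cube, 8, axis=0)   # row-wise spatial partition
    with ProcessPoolExecutor() as pool:
        out = np.vstack(list(pool.map(process_block, blocks)))
    print(out.shape)  # (256, 256): one value per pixel
```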

8.
Reinforcement learning (RL) is a powerful solution to adaptive control when no explicit model exists for the system being controlled. To handle uncertainty along with the lack of an explicit model for the Cloud's resource management systems, this paper uses continuous RL to provide an intelligent control scheme for dynamic resource provisioning in the spot market for the Cloud's computational resources. This spot market is a real-time environment in which, from the RL point of view, the control task of dynamic resource provisioning requires continuous domains for (state, action) pairs. Function approximation is commonly used in RL controllers to avoid storing every (state, action) pair and to provide estimates for unseen states. However, due to the computational complexity of approximation techniques such as neural networks, RL is almost impractical for real-time applications. Thus, in this paper, the Ink Drop Spread (IDS) modeling method, which models systems without heavy computational complexity, is used as the basis for an adaptive controller for dynamic resource provisioning in the Cloud's virtualized environment. The performance of the proposed control mechanism is evaluated by measuring the job rejection rate and capacity waste. The results show that by the end of the training episodes, over 90 days, the controller learns to reduce the job rejection rate to 0% while capacity waste is reduced to 11.9%.
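
Neither continuous RL nor IDS modeling is reproduced here. As a deliberately simplified stand-in that only shows the shape of the control loop (discretized states and actions, a toy demand model; everything below is an assumption, not the paper's controller):

```python
import random
from collections import defaultdict

alpha, gamma, eps = 0.1, 0.9, 0.1
actions = [-1, 0, 1]            # release / keep / acquire one spot instance
Q = defaultdict(float)

def step(capacity, action):
    """Toy environment: reward penalizes both rejections and waste."""
    demand = random.randint(0, 10)
    capacity = max(0, capacity + action)
    return capacity, -abs(capacity - demand)

state = 5
for _ in range(10_000):
    a = random.choice(actions) if random.random() < eps else \
        max(actions, key=lambda x: Q[(state, x)])
    nxt, r = step(state, a)
    best_next = max(Q[(nxt, x)] for x in actions)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = nxt
```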

9.
10.
As the demand for high-performance computing from scientific research and commercial applications grows, the performance and scale of high-performance computing systems have developed rapidly. However, sharply increasing power consumption severely constrains the design and use of high-performance computing systems, making low-power techniques a key technology in the HPC field. As the core component of the whole system, the job scheduling system allocates user-submitted applications to limited system resources, and its energy efficiency plays a crucial role in controlling and regulating the energy consumption of the entire HPC system. This paper first introduces the main energy-efficiency techniques and common job scheduling strategies, then analyzes the energy efficiency of current HPC job scheduling, and discusses the challenges it faces and future research directions.

11.
Job scheduling on high-performance clusters is usually handled by a job scheduling system, and accurate estimates of job runtimes can substantially improve scheduling efficiency. Existing studies typically use machine-learning-based prediction, which still leaves room for improvement in accuracy and practicality. To further improve the accuracy of cluster job runtime prediction, we first cluster the cluster's job logs and add the resulting job category information to the job features, then model and predict the job log data with an attention-based NR-Transformer network. For data processing, seven features are selected from the historical log dataset according to their correlation with the prediction target, feature completeness, and data validity, and the jobs are partitioned into multiple job sets by runtime length, each of which is trained and predicted separately. Experimental results show that, compared with traditional machine learning and BP neural networks, temporal neural network architectures achieve better prediction performance, with NR-Transformer performing well on all job sets.
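
The NR-Transformer itself is not described in enough detail to reproduce; the preprocessing idea, however, is simple: cluster the historical job records and append each job's cluster label to its 7-dimensional feature vector before training the predictor. A minimal sketch with scikit-learn on synthetic data (feature semantics are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((1000, 7))      # 7 features selected from the job logs

# Cluster the jobs, then append the category label as an extra feature.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, labels])

print(X_aug.shape)  # (1000, 8): original features plus the cluster label
```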

12.
Elucidating uncertainty and sensitivity structures in environmental models can be a difficult task, even for low-order, single-medium constructs driven by a unique set of site-specific data. Quantitative assessment of integrated, multimedia models that simulate hundreds of sites, spanning multiple geographical and ecological regions, will ultimately require a comparative approach using several techniques, coupled with sufficient computational power. The Framework for Risk Analysis in Multimedia Environmental Systems – Multimedia, Multipathway, and Multireceptor Risk Assessment (FRAMES-3MRA) is an important software model being developed by the United States Environmental Protection Agency for use in risk assessment of hazardous waste management facilities. The 3MRA modeling system includes a set of 17 science modules that collectively simulate release, fate and transport, exposure, and risk associated with hazardous contaminants disposed of in land-based waste management units (WMU). The 3MRA model encompasses 966 multi-dimensional input variables, over 185 of which are explicitly stochastic. The design of SuperMUSE, a 215 GHz, PC-based, Windows-based supercomputer for Model Uncertainty and Sensitivity Evaluation, is described. Developed for 3MRA and extendable to other computer models, an accompanying platform-independent, Java-based parallel processing software toolset is also discussed. For 3MRA, comparison of stand-alone PC versus SuperMUSE simulation executions showed a parallel computing overhead of only 0.57 seconds per simulation, a relative cost increase of 0.7% over average model runtime. Parallel computing software tools represent a critical aspect of exploiting the capabilities of such modeling systems. The Java toolset developed here readily handled machine and job management tasks over the Windows cluster, and is currently capable of completing over 3 million 3MRA model simulations per month on SuperMUSE. Preliminary work is reported for an example uncertainty analysis of benzene disposal that describes the relative importance of various exposure pathways in driving risk levels for ecological receptors and human health. Incorporating landfills, waste piles, aerated tanks, surface impoundments, and land application units, the site-based data used in the analysis included 201 facilities across the United States representing 419 site-WMU combinations.

13.
With the scaling up of high-performance computing systems in recent years, their reliability has been declining continuously. Therefore, system resilience is regarded as one of the critical challenges for large-scale HPC systems. Various techniques and systems have been proposed to ensure the correct execution and completion of parallel programs. This paper provides a comprehensive survey of existing software resilience approaches. First, a classification of software resilience approaches is presented; we then introduce the major approaches and techniques, including checkpointing, replication, soft-error resilience, algorithm-based fault tolerance, and fault detection and prediction. In addition, the challenges posed by system scale and heterogeneous architectures are also discussed.
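
Of the surveyed techniques, application-level checkpointing is the easiest to illustrate: persist program state periodically so a failed run resumes from the last checkpoint instead of restarting. A minimal sketch (file name and interval are illustrative):

```python
import os
import pickle

CKPT = "state.pkl"

# Resume from the last checkpoint if one exists.
state = {"step": 0, "acc": 0.0}
if os.path.exists(CKPT):
    with open(CKPT, "rb") as f:
        state = pickle.load(f)

for step in range(state["step"], 1000):
    state["acc"] += step          # stand-in for real computation
    state["step"] = step + 1
    if step % 100 == 0:           # checkpoint interval
        with open(CKPT, "wb") as f:
            pickle.dump(state, f)
```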

14.
Recent advances in space and computer technologies are revolutionizing the way remotely sensed data is collected, managed and interpreted. In particular, NASA is continuously gathering very high-dimensional imagery data from the surface of the Earth with hyperspectral sensors such as the Jet Propulsion Laboratory's airborne visible-infrared imaging spectrometer (AVIRIS) or the Hyperion imager aboard the Earth Observing-1 (EO-1) satellite platform. The development of efficient techniques for extracting scientific understanding from the massive amount of collected data is critical for space-based Earth science and planetary exploration. In particular, many hyperspectral imaging applications demand real-time or near real-time performance. Examples include homeland security/defense, environmental modeling and assessment, wild-land fire tracking, biological threat detection, and monitoring of oil spills and other types of chemical contamination. Only a few parallel processing strategies for hyperspectral imagery are currently available, and most of them assume homogeneity in the underlying computing platform. Meanwhile, heterogeneous networks of workstations (NOWs) have rapidly become a very promising computing solution expected to play a major role in the design of high-performance systems for many ongoing and planned remote sensing missions. To address the need for cost-effective parallel solutions in this fast-growing and emerging research area, this paper develops several highly innovative parallel algorithms for unsupervised information extraction and mining from hyperspectral image data sets, which have been specifically designed to run on heterogeneous NOWs. The considered approaches fall into three highly representative categories: clustering, classification and spectral mixture analysis. Analytical and experimental results are presented in the context of realistic applications (based on hyperspectral data sets from the AVIRIS data repository) using several homogeneous and heterogeneous parallel computing facilities available at NASA's Goddard Space Flight Center and the University of Maryland.

15.
This paper describes a speaker-adaptive HMM-based speech synthesis system. The new system, called "HTS-2007," employs speaker adaptation (CSMAPLR+MAP), feature-space adaptive training, mixed-gender modeling, and full-covariance modeling using CSMAPLR transforms, in addition to several other techniques that have proved effective in our previous systems. Subjective evaluation results show that the new system generates significantly better quality synthetic speech than speaker-dependent approaches with realistic amounts of speech data, and that it bears comparison with speaker-dependent approaches even when large amounts of speech data are available. In addition, a comparison study with several speech synthesis techniques shows the new system is very robust: It is able to build voices from less-than-ideal speech data and synthesize good-quality speech even for out-of-domain sentences.

16.
Recent advancements in mobile sensing and wearable technologies create new opportunities to improve our understanding of how people experience their environment. This understanding can inform urban design decisions. Currently, an important urban design issue is the adaptation of infrastructure to increasing cycle and e-bike use. Using data collected from 12 cyclists on a cycle highway between two municipalities in The Netherlands, we coupled location and wearable emotion data at a high spatiotemporal resolution to model and examine relationships between cyclists' emotional arousal (operationalized as skin conductance responses) and visual stimuli from the environment (operationalized as extent of visible land cover type). We specifically took a within-participants multilevel modeling approach to determine relationships between different types of viewable land cover area and emotional arousal, while controlling for speed, direction, distance to roads, and directional change. Surprisingly, our model suggests ride segments with views of larger natural, recreational, agricultural, and forested areas were more emotionally arousing for participants. Conversely, segments with views of larger developed areas were less arousing. The presented methodological framework, spatial-emotional analyses, and findings from multilevel modeling provide new opportunities for spatial, data-driven approaches to portable sensing and urban planning research. Furthermore, our findings have implications for design of infrastructure to optimize cycling experiences.
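
The within-participants multilevel model described maps naturally onto a random-intercept regression. A minimal sketch using statsmodels on synthetic data (column names and effect sizes are illustrative, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "participant": rng.integers(0, 12, n),   # 12 cyclists, as in the study
    "natural_area": rng.random(n),
    "developed_area": rng.random(n),
    "speed": rng.random(n),
})
df["arousal"] = (0.5 * df["natural_area"] - 0.3 * df["developed_area"]
                 + rng.normal(0, 0.1, n))

# Random intercept per participant; fixed effects for viewable land cover.
model = smf.mixedlm("arousal ~ natural_area + developed_area + speed",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```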

17.
The ocean is a key arena for high-quality development. The growth of marine scientific big data brings opportunities for understanding and managing the ocean, but also introduces new challenges. Marine scientific big data is characterized by an exceptionally wide range of modalities, yet no multimodal intelligent computing theory and technical framework tailored to the ocean domain has been established. This paper therefore provides, for the first time from the perspective of multimodal data technology, a systematic review of interdisciplinary research progress on intelligent perception, cognition, and prediction of ocean phenomena and processes. First, by tracing the stages of the full life cycle of marine scientific big data, we identify the research objects, scientific problems, and typical application scenarios of marine multimodal intelligent computing. Second, we systematically review existing work in three typical application scenarios: content analysis of marine multimodal big data, inference and prediction, and high-performance computing. Finally, addressing the differences in ocean data distributions and computing paradigms, we identify the challenges within four key scientific problems—representation modeling of marine multimodal big data, cross-modal association, inference and prediction, and high-performance computing—and offer an outlook on future work.

18.

This paper uses the well-known PVM (Parallel Virtual Machine) software with several personal computers, adopting the widespread Microsoft Windows '98 operating system as the platform, to construct a heterogeneous PC cluster. Building on related research on PC cluster systems and cluster computing theory, we apply our heterogeneous PC cluster computing system to generate more secure parameters for public-key cryptosystems such as RSA. Because of the restrictions imposed by the mathematical theory behind each parameter, enormous computational power is needed to generate these parameters with acceptable performance. In this paper, we combine heterogeneous PCs with the PVM software to produce cryptosystem parameters that conform to today's safety specifications and requirements. We generate these data in practice to show that a computer cluster can effectively accumulate enormous computational power, and then demonstrate this cluster computing application by finding the strong primes needed by some public-key cryptosystems.
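
The computational kernel each cluster node runs is, at heart, a primality test over disjoint candidate ranges. A minimal single-machine sketch of that kernel using Miller-Rabin (the strong-prime criteria used for RSA impose extra conditions not shown here):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

candidate = random.getrandbits(512) | 1   # random odd 512-bit candidate
while not is_probable_prime(candidate):
    candidate += 2
print(candidate)
```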

19.
As large-scale MapReduce clusters are widely used for big data processing, especially when multiple jobs share a single Hadoop cluster, a key problem is how to minimize the cluster's total working time and improve the service efficiency of MapReduce jobs. Multiple MapReduce jobs can be modeled as a single scheduling task, and we observe that the total completion time of a batch of jobs is closely related to their execution order. Our goal is to design an analytical model for the job scheduling system that minimizes the total completion time of a batch of MapReduce jobs. We propose a better scheduling strategy and implementation that make the scheduling system satisfy the conditions of the classical Johnson's algorithm, so that Johnson's algorithm can obtain the optimal total completion time in linear time. For the problem of balancing across two or more resource pools, we also propose a linear-time solution that outperforms the known approximate approaches. The model can be applied to improving system responsiveness, saving energy, and load balancing, as confirmed by corresponding application examples.
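
Johnson's rule for two-machine flow shops, which the paper maps onto the map and reduce stages, is compact enough to sketch (a minimal illustration, not the paper's full scheduling system):

```python
def johnson_order(jobs):
    """jobs: list of (name, map_time, reduce_time) tuples.
    Jobs with map_time <= reduce_time go first in increasing map_time;
    the rest go last in decreasing reduce_time."""
    first = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    last = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return first + last

print(johnson_order([("A", 3, 6), ("B", 5, 2), ("C", 1, 2)]))
# [('C', 1, 2), ('A', 3, 6), ('B', 5, 2)]
```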

20.
This paper presents a short-contact multitouch vocabulary for interacting with scatterplot matrices (SPLOMs) on wall-sized displays. Fling-based gestures overcome central interaction challenges of such large displays by avoiding long swipes on the typically blunt surfaces, frequent physical navigation by walking for accessing screen areas beyond arm's reach in the horizontal direction and uncomfortable postures for accessing screen areas in the vertical direction. Furthermore, we make use of the display's high resolution and large size by supporting the efficient specification of two-tiered focus + context regions which are consistently propagated across the SPLOM. These techniques are complemented by axis-centered and lasso-based selection techniques for specifying subsets of the data. An expert review as well as a user study confirmed the potential and general usability of our seamlessly integrated multitouch interaction techniques for SPLOMs on large vertical displays.
