Similar Documents
Found 20 similar documents (search time: 125 ms).
1.
The computing power provided by high-performance, low-cost PC-based clusters combined with Grid platforms is attractive, equaling or exceeding that of supercomputers and mainframes. In this paper, we present the implementation and design rationale of the Visuel toolkit for measuring and analyzing the performance of MPI parallel programs in cluster and grid environments. Most performance visualization tools available today for high-performance platforms show only system performance data (e.g., CPU load, memory usage, network bandwidth, server average load) and are therefore suited to visualizing computing-system activity. The Visuel toolkit ("visuel" is French for "visual") provides a web-based interface designed to show the performance activity of all computing nodes of a distributed environment involved in the execution of an MPI parallel program, such as the CPU load level and memory usage of each computing node. In addition, the toolkit can display comparative performance charts of the MPI parallel applications and multiple executions under investigation. Experience with the toolkit shows that it noticeably eases the process of investigating parallel applications.
Hsun-Chang Chang
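To make the kind of data such a toolkit gathers concrete, here is a minimal per-node sampling sketch. It is not the Visuel implementation; it simply reads CPU load and memory usage with the third-party psutil package and prints JSON records that a web front end could collect.

```python
# Minimal per-node resource sampler, loosely inspired by the kind of data
# the Visuel toolkit is described as collecting (CPU load, memory usage).
# This is an illustrative sketch, not the toolkit itself.
import json
import socket
import time

import psutil  # third-party: pip install psutil


def sample_node(interval_s: float = 1.0) -> dict:
    """Return one CPU/memory sample for the local node."""
    return {
        "node": socket.gethostname(),
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=interval_s),
        "mem_percent": psutil.virtual_memory().percent,
    }


if __name__ == "__main__":
    # In a real deployment each compute node would push these samples to a
    # central web front end; here we just print a few of them.
    for _ in range(3):
        print(json.dumps(sample_node()))
```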

2.
A Processor Allocation Algorithm for Homogeneous Cluster Systems (total citations: 5; self-citations: 0; citations by others: 5)
The distributed computing environment of cluster systems raises new research and application problems for parallel processing and has become a hot topic in parallel computing. How parallel tasks are partitioned onto the nodes of a cluster system, reasonably and effectively, directly affects the execution performance of the system. This paper analyzes the execution-overhead factors that affect system efficiency and proposes a heuristic processor allocation algorithm.
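The abstract does not spell out the heuristic, so the sketch below shows a generic greedy allocation that the description is compatible with: each task goes to the node with the smallest projected completion time, with a fixed per-task overhead standing in for the execution-overhead factors mentioned above. All numbers are invented; this is not the paper's algorithm.

```python
# A generic greedy processor-allocation heuristic for a homogeneous cluster:
# each task goes to the node with the smallest projected completion time,
# where a fixed per-task overhead models communication/startup costs.
# Illustrative only; the paper's own heuristic is not reproduced here.
import heapq


def allocate(task_costs, num_nodes, per_task_overhead=0.1):
    """Return node assignments and the resulting makespan."""
    # (current_load, node_id) min-heap; homogeneous nodes start empty.
    heap = [(0.0, n) for n in range(num_nodes)]
    heapq.heapify(heap)
    assignment = {}
    # Longest-processing-time-first ordering is a common greedy refinement.
    for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        load, node = heapq.heappop(heap)
        load += cost + per_task_overhead
        assignment[task] = node
        heapq.heappush(heap, (load, node))
    return assignment, max(load for load, _ in heap)


if __name__ == "__main__":
    tasks = [5.0, 3.0, 8.0, 2.0, 4.0, 6.0]
    mapping, makespan = allocate(tasks, num_nodes=3)
    print(mapping, makespan)
```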

3.
We envision future work and play environments in which the user's computing interface is more closely integrated with the physical surroundings than today's conventional computer display screens and keyboards. We are working toward realizable versions of such environments, in which multiple video projectors and digital cameras enable every visible surface to be both measured in 3D and used for display. If the 3D surface positions are transmitted to a distant location, they may also enable distant collaborations to become more like working in adjacent offices connected by large windows. With collaborators at the University of Pennsylvania, Brown University, Advanced Network and Services, and the Pittsburgh Supercomputing Center, we at Chapel Hill have been working to bring these ideas to reality. In one system, depth maps are calculated from streams of video images and the resulting 3D surface points are displayed to the user in head-tracked stereo. Among the applications we are pursuing for this tele-presence technology is advanced training for trauma surgeons by immersive replay of recorded procedures. Other applications display onto physical objects to allow more natural interaction with them: "painting" a dollhouse, for example. More generally, we hope to demonstrate that the principal interface of a future computing environment need not be limited to a screen the size of one or two sheets of paper. Just as a useful physical environment is all around us, so too can the increasingly ubiquitous computing environment be all around us, integrated seamlessly with our physical surroundings.
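One concrete step in such a system is turning a per-pixel depth map into 3D surface points with a pinhole camera model. The sketch below shows that back-projection; the camera intrinsics and the depth values are placeholders, not calibration data from the described system.

```python
# Back-projecting a depth map into 3D surface points with a pinhole camera
# model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy. The intrinsics below
# are made-up placeholders; real systems calibrate cameras and projectors.
import numpy as np


def depth_to_points(depth, fx, fy, cx, cy):
    """depth: (H, W) array of Z values in metres -> (H*W, 3) points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)


if __name__ == "__main__":
    fake_depth = np.full((4, 4), 2.0)          # a flat surface 2 m away
    pts = depth_to_points(fake_depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
    print(pts.shape, pts[0])
```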

4.
Understanding the behavior of large-scale distributed systems is generally extremely difficult, as it requires observing a very large number of components over long periods of time. Most analysis tools for distributed systems gather basic information such as individual processor or network utilization. Although scalable thanks to the data reduction techniques applied before the analysis, these tools are often insufficient to detect or fully understand anomalies in the dynamic behavior of resource utilization and their influence on application performance. In this paper, we propose a methodology for detecting resource usage anomalies in large-scale distributed systems. The methodology relies on four functionalities: characterized trace collection, multi-scale data aggregation, specifically tailored user interaction techniques, and visualization techniques. We show the efficiency of this approach through the analysis of simulations of the volunteer-computing Berkeley Open Infrastructure for Network Computing (BOINC) architecture. Three scenarios are analyzed in this paper: analysis of the resource sharing mechanism, resource usage considering response time instead of throughput, and the evaluation of input file size on the BOINC architecture. The results show that our methodology makes it easy to identify resource usage anomalies such as unfair resource sharing, contention, moving network bottlenecks, and harmful short-term resource sharing. Copyright © 2011 John Wiley & Sons, Ltd.
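As a rough illustration of multi-scale aggregation on a resource-usage trace, the sketch below averages a utilization signal over several window sizes and flags points that deviate strongly from the mean. The trace, window sizes, and threshold are invented and much simpler than the paper's methodology.

```python
# Multi-scale temporal aggregation of a per-host utilization trace plus a
# naive anomaly flag (samples far from the mean). A toy stand-in for the
# paper's aggregation/visualization pipeline; thresholds are arbitrary.
import numpy as np


def aggregate(trace, window):
    """Average consecutive samples in non-overlapping windows."""
    n = len(trace) // window * window
    return trace[:n].reshape(-1, window).mean(axis=1)


def flag_anomalies(trace, num_std=3.0):
    """Return indices whose value deviates from the mean by > num_std sigma."""
    mu, sigma = trace.mean(), trace.std()
    return np.where(np.abs(trace - mu) > num_std * sigma)[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    utilization = rng.normal(0.6, 0.05, size=3600)   # one hour at 1 Hz
    utilization[1800:1810] = 1.0                      # injected contention burst
    for w in (1, 10, 60):                             # three time scales
        agg = aggregate(utilization, w)
        print(f"window={w:3d}s  anomalous points: {len(flag_anomalies(agg))}")
```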

5.
Multistage Interconnection Networks (MINs) have been widely used for building large-scale shared-memory multiprocessor systems. Complex interactions between many processors and memory modules through the MIN (such as interprocessor communication, process scheduling and synchronization, and remote-memory access) result in a very large space of possible performance behaviors and potential performance bottlenecks. To provide insight into dynamic system performance, we have developed an integrated data collection, analysis, and visualization environment for a MIN-based multiprocessor system, called MIN-Graph. MIN-Graph is a graphical instrumentation monitor that aids users in investigating performance problems and in determining an effective way to exploit the high-performance capabilities of interconnection-network multiprocessor systems. Interconnection network contention is a major bottleneck of parallel computing on MIN-based multiprocessors. This paper focuses on evaluating contention behavior through performance monitoring and visualization. Four sets of system and scientific application programs with different programming and scheduling models and different memory access patterns are monitored and tested to observe the various network contention behaviors. MIN-Graph is implemented on the BBN GP1000 and the BBN TC2000.
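The contention being monitored can be illustrated with a textbook omega-network model: after stage i, a message from source s to destination d occupies the link ((s << i) | (d >> (n - i))) & (N - 1), and two messages that need the same link at the same stage conflict. The sketch below counts such conflicts; it is an idealized model for illustration, not the BBN GP1000/TC2000 hardware monitored by MIN-Graph.

```python
# Idealized contention check for an N-input omega-style multistage network:
# after stage i (1-based), a message from source s to destination d occupies
# the link ((s << i) | (d >> (n - i))) & (N - 1). Two messages that need the
# same link at the same stage conflict. Textbook model only.
from collections import Counter


def count_conflicts(requests, n_bits):
    """requests: list of (source, destination) pairs issued in one cycle."""
    size = 1 << n_bits
    conflicts = 0
    for stage in range(1, n_bits + 1):
        links = Counter(
            ((s << stage) | (d >> (n_bits - stage))) & (size - 1)
            for s, d in requests
        )
        # Each extra message on an already-used link is one blocked message.
        conflicts += sum(c - 1 for c in links.values() if c > 1)
    return conflicts


if __name__ == "__main__":
    # 8x8 network (n_bits = 3): everyone reading from memory module 0
    # produces heavy contention, while this cyclic permutation is conflict-free.
    hot_spot = [(p, 0) for p in range(8)]
    permutation = [(p, (p + 1) % 8) for p in range(8)]
    print(count_conflicts(hot_spot, 3), count_conflicts(permutation, 3))
```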

6.
Cloud computing is an emerging technology in which information technology resources are virtualized and offered to users as a set of computing resources on a pay-per-use basis. It is seen as an effective infrastructure for high-performance applications. Divisible-load applications occur in many scientific and engineering domains. However, dividing an application and deploying it in a cloud computing environment faces challenges in obtaining optimal performance, due to the overheads introduced by cloud virtualization and the supporting cloud middleware. We therefore report the results of a series of extensive experiments on scheduling a divisible-load application in a cloud environment to decrease the overall application execution time, taking into account the cloud networking and computing capacities presented to the application's user. We experiment with real applications within the Amazon cloud computing environment. Our experiments analyze the reasons for the discrepancies between a theoretical model and reality and propose adequate solutions. These discrepancies are due to three factors: the network behavior, the application behavior, and the cloud computing virtualization. Our results show that applying the algorithm yields a maximum ratio of 1.41 between the measured normalized makespan and the ideal makespan for applications in which the communication-to-computation ratio is large. They also show that the algorithm is effective for those applications in a heterogeneous setting, reaching a ratio of 1.28 for large data sets. For applications following the ensemble clustering model, in which the computation-to-communication ratio is large and variable, we obtained a maximum ratio of 4.7 for large data sets and a ratio of 2.11 for small data sets. Applying the algorithm also results in a substantial speedup. These results are revealing for the types of applications we consider in our experiments. The experiments also reveal the impact of the choice of platforms provided by Amazon on the performance of the applications under study. Given the emergence of cloud computing for high-performance applications, the results in this paper can be widely adopted by cloud computing developers. Copyright © 2014 John Wiley & Sons, Ltd.
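A minimal divisible-load calculation helps make the makespan ratios above concrete: split the load in proportion to each VM's measured speed, charge each VM its transfer plus compute time, and compare against the ideal overhead-free makespan. The speeds, bandwidths, and proportional split below are assumptions for illustration, not the paper's scheduling algorithm.

```python
# Minimal divisible-load sketch: split a workload across VMs in proportion
# to their measured computing speeds, then compare the resulting makespan
# with the ideal (perfectly parallel, overhead-free) makespan. The numbers
# and the proportional split rule are assumptions for illustration.

def split_load(total_units, speeds):
    """Fractions of the load proportional to each worker's speed."""
    s = sum(speeds)
    return [total_units * v / s for v in speeds]


def makespan(chunks, speeds, bandwidths):
    """Each worker first receives its chunk, then computes it."""
    return max(c / bw + c / sp for c, sp, bw in zip(chunks, speeds, bandwidths))


if __name__ == "__main__":
    total = 10_000                       # work units
    speeds = [120.0, 100.0, 80.0]        # units/s per VM (heterogeneous)
    bandwidths = [500.0, 500.0, 250.0]   # units/s transfer rate per VM
    chunks = split_load(total, speeds)
    measured = makespan(chunks, speeds, bandwidths)
    ideal = total / sum(speeds)          # no communication, perfect split
    print(f"normalized makespan ratio: {measured / ideal:.2f}")
```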

7.
8.
Network-based concurrent computing and interactive data visualization are two important components of industrial applications of high-performance computing and communication. We propose an execution framework for building interactive remote visualization systems for real-world applications on heterogeneous parallel and distributed computers. Using the dataflow model of the commercial visualization software AVS in three case studies, we demonstrate a simple, effective, and modular approach to coupling parallel simulation modules into an interactive remote visualization environment. The applications described in this paper are drawn from our industrial projects in financial modeling, computational electromagnetics, and computational chemistry. Copyright © 1999 John Wiley & Sons, Ltd.
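The dataflow idea can be sketched with plain Python generators standing in for AVS-style modules: a simulation stage streams data into a filter stage and then into a display stage as results become available. All module names below are invented; this is only an illustration of the coupling pattern, not the framework described above.

```python
# A toy dataflow pipeline in the spirit of coupling a simulation module to
# downstream visualization modules. Generators model AVS-style modules that
# pass data along as it becomes available; all names here are invented.
import math


def simulate(steps):
    """Pretend parallel simulation producing one field per time step."""
    for t in range(steps):
        yield [math.sin(0.1 * t + 0.5 * i) for i in range(8)]


def extract_isovalue(frames, threshold=0.0):
    """Filter module: keep only samples above a threshold."""
    for frame in frames:
        yield [v for v in frame if v > threshold]


def render(frames):
    """Stand-in for the remote display module."""
    for t, frame in enumerate(frames):
        print(f"step {t}: {len(frame)} samples above threshold")


if __name__ == "__main__":
    render(extract_isovalue(simulate(steps=5)))
```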

9.
10.
With the development of edge computing, the computing scale at edge nodes keeps growing, yet existing edge devices struggle to host deep neural network models, placing great pressure on network communication and on cloud servers. To address this problem, we improve the Roofline model and use the new model to dynamically evaluate the performance of edge devices and the network environment. Based on the evaluation metrics, the neural network model is split: part of the computation is assigned to edge nodes, while the cloud server completes the remaining tasks using the data returned by the nodes. Because the method allocates tasks dynamically according to each node's own performance and the network environment, it offers a degree of compatibility and robustness. Experimental results show that this edge-node-based task allocation method for deep neural networks can exploit the idle capacity of devices in different environments and greatly reduce the computational load on the central server.
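The estimate behind such a split can be illustrated with the classic Roofline bound, attainable performance = min(peak compute, memory bandwidth x operational intensity), applied to the edge device and the cloud server. The device numbers and the simple offload rule below are assumptions for illustration, not the paper's improved model.

```python
# Classic Roofline estimate: attainable performance is bounded by either the
# device's peak compute rate or its memory bandwidth times the kernel's
# operational intensity. The device numbers and the simple "offload if the
# edge estimate is too slow" rule below are assumptions for illustration.

def roofline_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)


def split_layer(layer_gflop, intensity, edge, cloud, link_gbs, data_gb):
    """Estimate edge vs. cloud latency for one layer and pick the faster."""
    t_edge = layer_gflop / roofline_gflops(*edge, intensity)
    t_cloud = (data_gb / link_gbs            # ship activations to the cloud
               + layer_gflop / roofline_gflops(*cloud, intensity))
    return ("edge", t_edge) if t_edge <= t_cloud else ("cloud", t_cloud)


if __name__ == "__main__":
    edge_device = (0.5, 10.0)     # (peak GFLOP/s, memory GB/s), assumed
    cloud_server = (100.0, 500.0)
    place, t = split_layer(layer_gflop=2.0, intensity=5.0,
                           edge=edge_device, cloud=cloud_server,
                           link_gbs=0.01, data_gb=0.05)
    print(place, round(t, 3), "s")
```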

11.
The considerable interest in the high performance computing (HPC) community in analyzing and visualizing data without first writing it to disk, i.e., in situ processing, is due to several factors. First is an I/O cost saving: data is analyzed and visualized while being generated, without first being stored to a filesystem. Second is the potential for increased accuracy, where fine temporal sampling of a transient analysis might expose complex behavior missed by coarse temporal sampling. Third is the ability to use all available resources, CPUs and accelerators, in the computation of analysis products. This STAR paper brings together researchers, developers, and practitioners using in situ methods in extreme-scale HPC with the goal of presenting existing methods, infrastructures, and a range of computational science and engineering applications that use in situ analysis and visualization.
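A minimal sketch of the in situ pattern: the analysis runs on the live field inside the time-stepping loop, and only small summaries leave the simulation, instead of the full field being dumped to disk. The "simulation" below is a toy smoothing update, not a real solver or any of the in situ frameworks the paper surveys.

```python
# Sketch of the in situ idea: compute analysis products (here, simple
# per-timestep statistics) inside the simulation loop instead of writing the
# full raw field to disk. The "simulation" is a stand-in smoothing update;
# real in situ frameworks hook analysis in via library callbacks.
import numpy as np


def step(field):
    """One fake explicit time step (nearest-neighbour smoothing)."""
    return 0.25 * (np.roll(field, 1) + np.roll(field, -1)) + 0.5 * field


def in_situ_analysis(field, t):
    """Analysis runs on the live data; only tiny summaries leave the node."""
    return {"t": t, "min": float(field.min()),
            "max": float(field.max()), "mean": float(field.mean())}


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    field = rng.random(1_000_000)        # would be far too big to dump often
    for t in range(5):
        field = step(field)
        print(in_situ_analysis(field, t))
```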

12.
13.
In order to provide "intimate" and "dynamic" adaptations under Weiser's vision of ubiquitous computing environments, we propose the use of context history together with user modeling and machine learning techniques. Our approach supports proactive adaptations by inducing patterns of user behavior. In addition, we support the requirement that the user receive an explicit and understandable explanation when a proactive adaptation occurs, in order to encourage a trust relationship between the user and the context-aware system. In this article, we describe an experiment examining the feasibility of our approach for supporting proactive adaptations in the domain of an intelligent office environment. The initial results of our experiment are promising and demonstrate how our system could gradually learn the user's preferences for controlling his office environment by making inductions from the context history. Based on these initial findings, we believe that context history has a concrete role to play in supporting proactive adaptation in a ubiquitous computing environment.
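A very small induction over a context history might look like the sketch below: count which action the user takes in each observed context and propose the majority action once it has enough support. The contexts, actions, and support threshold are invented and far simpler than the user modeling and machine learning techniques the article refers to.

```python
# A toy induction over a context history: for each observed context (time of
# day, occupancy), remember the action the user most often takes, and propose
# it proactively next time the context recurs. The history and actions are
# invented; the article's own learning techniques are not reproduced here.
from collections import Counter, defaultdict

history = [  # (hour_bucket, occupancy, user_action)
    ("morning", "alone", "lights_dim"),
    ("morning", "alone", "lights_dim"),
    ("morning", "meeting", "lights_full"),
    ("evening", "alone", "lights_off"),
    ("morning", "alone", "lights_dim"),
]

counts = defaultdict(Counter)
for hour, occupancy, action in history:
    counts[(hour, occupancy)][action] += 1


def propose(hour, occupancy, min_support=2):
    """Suggest an action only when the pattern has enough support."""
    observed = counts[(hour, occupancy)]
    if not observed:
        return None
    action, n = observed.most_common(1)[0]
    return action if n >= min_support else None


if __name__ == "__main__":
    print(propose("morning", "alone"))    # expected: lights_dim
    print(propose("evening", "meeting"))  # unseen context -> None
```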

14.
In this paper, we present a Collaborative Object-oriented Visualization Environment (COVE), which provides a flexible and extensible framework for collaborative visualization. COVE integrates collaborative and parallel computing environments based on a distributed object model. It is built as a collection of concurrent objects: collaborative and application objects that interact with one another to construct collaborative parallel computing environments. The former enable COVE to execute various collaborative functions, while the latter allow it to execute fast parallel visualization in various modes. Flexibility and extensibility are also provided by plugging the proper application objects into COVE at run-time and making them interact with one another through collaboration objects. For our experiment, three visualization modes for volume rendering are designed and implemented to support the fast and flexible analysis of volume data in a collaborative environment. This work has been supported by KIPA-Information Technology Research Center, the University research program of the Ministry of Information & Communication, and Brain Korea 21 projects in 2005.

15.
To isolate resources and system environments, a variety of virtualization tools have emerged in recent years, containers among them. Problems encountered when running on supercomputing resources are often caused by software configuration, and one role of containers is to package dependencies into a lightweight, portable environment, which improves the deployment efficiency of supercomputing applications. To understand the performance characteristics of container virtualization on an InfiniBand-based heterogeneous CPU-GPU supercomputing platform, we carried out a comprehensive performance evaluation of Docker containers with standard benchmarking tools. The method evaluates the performance overhead containers introduce when virtualizing the host, including file system access performance, parallel communication performance, and GPU computing performance. The results show that containers achieve near-native performance: file system I/O overhead and GPU computing overhead differ little from the native host, while the parallel communication overhead of containers grows as the network load increases. Based on the evaluation results, we propose an approach for realizing container performance on supercomputing platforms, providing a basis for users to configure their systems and design their applications appropriately.
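A bare-bones version of such an evaluation is to time the same benchmark natively and inside `docker run` and report the relative overhead, as sketched below. The image name and benchmark command are placeholders; a real study would use standard I/O, MPI, and GPU benchmark suites and many repetitions.

```python
# Rough host-vs-container timing harness in the spirit of the evaluation
# described above: run the same benchmark command natively and inside
# `docker run`, then report the relative overhead. The image name and the
# benchmark command are placeholders, and the container timing deliberately
# includes container startup cost.
import shlex
import subprocess
import time

BENCH_CMD = "python3 -c \"sum(i * i for i in range(10**6))\""  # placeholder
IMAGE = "python:3.11-slim"                                     # placeholder


def timed(cmd):
    start = time.perf_counter()
    subprocess.run(shlex.split(cmd), check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start


if __name__ == "__main__":
    native = timed(BENCH_CMD)
    containerized = timed(f"docker run --rm {IMAGE} {BENCH_CMD}")
    print(f"native: {native:.2f}s  container: {containerized:.2f}s  "
          f"overhead: {100 * (containerized / native - 1):.1f}%")
```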

16.
Neural networks (NNs) are well known as powerful computing tools for solving optimization problems. Thanks to the massive number of computing units (neurons) and the parallel mechanism of the neural network approach, large-scale problems can be solved efficiently and an optimal solution can be obtained. In this paper, we introduce an improvement of the two-phase approach for solving fuzzy multiobjective linear programming problems with both fuzzy objectives and fuzzy constraints, and we propose a new neural network technique for solving fuzzy multiobjective linear programming problems. The procedure and efficiency of this approach are shown with numerical simulations.
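For reference, phase one of the conventional two-phase approach can be written as an ordinary LP: maximize the smallest membership degree lambda subject to linear membership functions and the crisp constraints. The sketch below solves an assumed toy instance with scipy.optimize.linprog, which should give lambda = 0.5 at x = (2, 2); it is not the neural network technique proposed in the paper.

```python
# Phase one of a conventional two-phase fuzzy multiobjective LP approach:
# maximize the smallest membership degree lambda, with linear memberships
# mu_k = (f_k(x) - f_k_min) / (f_k_max - f_k_min). This conventional LP
# sketch (not the paper's neural-network technique) uses an assumed toy
# problem:  max f1 = x1 + 2*x2,  max f2 = 3*x1 + x2,  s.t. x1 + x2 <= 4.
from scipy.optimize import linprog

# Aspiration intervals, here taken from the individual optima of f1 and f2
# over the feasible set: f1 in [4, 8], f2 in [4, 12].
f1_min, f1_max = 4.0, 8.0
f2_min, f2_max = 4.0, 12.0

# Decision vector z = [x1, x2, lam]; linprog minimizes, so minimize -lam.
c = [0.0, 0.0, -1.0]
A_ub = [
    [-1.0, -2.0, f1_max - f1_min],   # lam*(f1_max-f1_min) - f1(x) <= -f1_min
    [-3.0, -1.0, f2_max - f2_min],   # lam*(f2_max-f2_min) - f2(x) <= -f2_min
    [1.0, 1.0, 0.0],                 # x1 + x2 <= 4
]
b_ub = [-f1_min, -f2_min, 4.0]
bounds = [(0, None), (0, None), (0, 1)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x1, x2, lam = res.x
print(f"x = ({x1:.2f}, {x2:.2f}), max-min membership lambda = {lam:.2f}")
# Phase two would fix lambda and further improve the individual memberships.
```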

17.
There has been increasing research interest in extending the use of Java to demanding, high-performance applications such as scalable Web servers, distributed multimedia applications, and large-scale scientific applications. However, extending Java to a multicomputer environment and improving the low performance of current Java implementations pose great challenges to both the systems developer and the application designer. In this survey, we describe and classify 14 relevant proposals and environments that tackle Java's performance bottlenecks in order to make the language an effective option for high-performance network-based computing. We further survey significant performance issues while exposing the potential benefits and limitations of current solutions, in such a way that a framework for future research efforts can be established. Most of the proposed solutions can be classified according to some combination of three basic parameters: the model adopted for inter-process communication, language extensions, and the implementation strategy. In addition, where appropriate to each individual proposal, we examine other relevant issues, such as interoperability, portability, and garbage collection. Copyright © 2002 John Wiley & Sons, Ltd.

18.
19.
谭良 (Tan Liang), 陈菊 (Chen Ju). Journal of Software (《软件学报》), 2012, 23(8): 2084-2103
The chain-of-trust measurement mechanism of trusted computing does not easily extend to every application on a terminal, so it remains difficult for a trusted terminal to guarantee that its dynamic runtime environment is always trustworthy. To provide objective, authentic, and comprehensive trust evidence about the dynamic runtime environment of a trusted terminal, we design and implement a trust-evidence collection agent based on the Trusted Platform Module (TPM). The agent's main function is to collect state and operation information about key objects of the trusted terminal, such as memory, processes, disk files, network ports, and policy data. First, the static trustworthiness of the agent is guaranteed by extending the TPM trust-transfer process and its measurement function, and its dynamic trustworthiness is guaranteed by the isolation technique provided by a trusted virtual machine monitor (TVMM). Second, the TPM's encryption and signing functions are used to ensure that the origin and transmission of the collected evidence are trustworthy. Finally, a prototype trust-evidence collection agent is implemented on the Windows platform, and an open local area network is used as the experimental environment to analyze both the trust evidence about the terminal's dynamic runtime environment obtained by the agent and the agent's performance overhead in this application scenario. The case study verifies the feasibility of the scheme.
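The evidence-collection step can be sketched as follows: gather process, port, and memory information and seal the record with a keyed hash before transmission. The HMAC with a software key is only a stand-in for the TPM-backed signing described above, psutil is a third-party package, and none of this reproduces the agent itself.

```python
# Sketch of collecting runtime-environment evidence (processes, network
# ports, memory) and sealing it with a keyed hash before transmission. The
# HMAC with a software key is only a stand-in for TPM-backed signing;
# psutil is third-party (pip install psutil) and may need extra privileges
# to list network connections on some platforms.
import hashlib
import hmac
import json
import time

import psutil

STANDIN_KEY = b"replace-with-tpm-backed-key"   # placeholder, not a TPM key


def collect_evidence():
    return {
        "timestamp": time.time(),
        "mem_percent": psutil.virtual_memory().percent,
        "processes": sorted(p.info["name"] or ""
                            for p in psutil.process_iter(["name"]))[:20],
        "listening_ports": sorted({c.laddr.port for c in psutil.net_connections()
                                   if c.status == psutil.CONN_LISTEN}),
    }


def seal(evidence):
    blob = json.dumps(evidence, sort_keys=True).encode()
    tag = hmac.new(STANDIN_KEY, blob, hashlib.sha256).hexdigest()
    return {"evidence": evidence, "hmac_sha256": tag}


if __name__ == "__main__":
    print(json.dumps(seal(collect_evidence()), indent=2)[:500])
```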

20.
Cloud computing allows the deployment and delivery of application services for users worldwide. Software as a Service providers with a limited upfront budget can take advantage of Cloud computing and lease the required capacity on a pay-as-you-go basis, which also enables flexible and dynamic resource allocation according to service demand. One key challenge potential Cloud customers face before renting resources is knowing how their services will behave on a given set of resources and what costs are involved when growing and shrinking their resource pool. Most studies in this area rely on simulation-based experiments, which consider simplified models of applications and of the computing environment. In order to better predict a service's behavior on Cloud platforms, we developed an integrated architecture that is based on both simulation and emulation. The proposed architecture, named EMUSIM, automatically extracts information about application behavior via emulation and then uses this information to generate the corresponding simulation model. We performed experiments using an image processing application as a case study and found that EMUSIM was able to accurately model the application via emulation and use the model to supply information about its potential performance at a Cloud provider. We also discuss our experience using EMUSIM for deploying applications with a real public Cloud provider. EMUSIM is based on an open source software stack and can therefore be extended to analyze the behavior of several other applications. Copyright © 2012 John Wiley & Sons, Ltd.
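The emulation-feeds-simulation idea can be sketched in a few lines: measure per-request service times from a small emulated run, then reuse that empirical distribution in a simple simulation to estimate completion times for different VM counts. This is only an illustration of the idea under invented numbers, not the EMUSIM architecture.

```python
# Toy version of the emulation-feeds-simulation idea: measure per-request
# service times from a small "emulated" run, then reuse that empirical
# distribution in a larger simulation to estimate completion time for
# different VM counts. Not the EMUSIM architecture, only a sketch.
import random
import statistics


def emulate_service_times(samples=50):
    """Stand-in for a small emulated deployment of the real application."""
    return [random.lognormvariate(0.0, 0.3) for _ in range(samples)]


def simulate_makespan(service_times, num_requests, num_vms):
    """Assign each simulated request to the least-loaded VM."""
    loads = [0.0] * num_vms
    for _ in range(num_requests):
        vm = loads.index(min(loads))
        loads[vm] += random.choice(service_times)
    return max(loads)


if __name__ == "__main__":
    random.seed(42)
    measured = emulate_service_times()
    print("mean emulated service time:", round(statistics.mean(measured), 2))
    for vms in (2, 4, 8):
        print(f"{vms} VMs -> estimated makespan "
              f"{simulate_makespan(measured, 1000, vms):.1f}s")
```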
