Similar Documents
10 similar documents found.
1.
With the rapid advance of computing technologies, it has become increasingly common to construct high-performance computing environments from heterogeneous commodity computers. Previous loop scheduling schemes were not designed for this kind of environment, so better schemes are needed to further increase the performance of emerging heterogeneous PC cluster environments. In this paper, we propose a new heuristic for the performance-based approach that partitions loop iterations according to the performance weighting of cluster/grid nodes. In particular, a new parameter is proposed that incorporates HPCC benchmark results into the performance estimation. A heterogeneous cluster and grid were built to verify the proposed approach, and three kinds of application programs were implemented for execution on the cluster testbed. Experimental results show that the proposed approach outperforms previous schemes in heterogeneous computing environments.
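The abstract does not give the paper's exact heuristic; the following is a minimal sketch of the general idea of performance-weighted loop partitioning, with hypothetical node names and weights (e.g., normalized from HPCC benchmark scores):

```python
# Sketch only, not the paper's heuristic: split an iteration space into
# contiguous chunks proportional to each node's performance weight.

def partition_iterations(n_iterations, node_weights):
    """Split [0, n_iterations) into chunks proportional to node weight."""
    total = sum(node_weights.values())
    chunks, start = {}, 0
    nodes = list(node_weights)
    for i, node in enumerate(nodes):
        if i == len(nodes) - 1:
            size = n_iterations - start   # last node absorbs rounding remainder
        else:
            size = round(n_iterations * node_weights[node] / total)
        chunks[node] = range(start, start + size)
        start += size
    return chunks

# Hypothetical weights, e.g. derived from HPL scores in an HPCC run.
weights = {"node-a": 3.2, "node-b": 1.1, "node-c": 0.7}
print(partition_iterations(1000, weights))
```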

2.
As the demands for faster data processing and enterprise computing increase, the traditional client/server architecture has gradually been replaced by Grid computing or the peer-to-peer (P2P) model, which can share content or resources over the network. In this paper, a new computing architecture, computing power services (CPS), is applied, using web services and the business process execution language to overcome issues of flexibility, compatibility, and workflow management. CPS is a lightweight, web-services-based computing power-sharing architecture suitable for enterprise computing tasks that can be executed as batch processes within a trusted network. However, a distributed-computing architecture like CPS needs a real-time load-balancing and dispatching mechanism in order to handle computing resources efficiently and properly. Therefore, a fuzzy group decision-making based adaptive collaboration design for CPS is proposed in this paper to provide real-time computation coordination and quality of service. In this study, the approach is applied to analyze the robustness of digital watermarks under filter bank selection, and performance is improved in terms of speedup, stability, and processing time. The scheme increases overall computing performance and remains stable in a dynamic environment.
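The paper's fuzzy group decision-making procedure is not detailed in the abstract; as a loose illustration of the flavor of fuzzy-scored dispatching, here is a sketch in which workers are scored by fuzzy membership functions over two hypothetical criteria (CPU load and latency) and tasks go to the highest-scoring worker:

```python
# Illustrative stand-in only, not the paper's algorithm.

def low(x, lo, hi):
    """Fuzzy membership for 'low': 1 below lo, 0 above hi, linear between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def dispatch(workers):
    """workers: {name: (cpu_load_percent, latency_ms)} -> best worker name."""
    def score(w):
        cpu, lat = workers[w]
        # Aggregate the two fuzzy criteria as a weighted average.
        return 0.6 * low(cpu, 20, 90) + 0.4 * low(lat, 5, 200)
    return max(workers, key=score)

# Hypothetical worker metrics.
print(dispatch({"w1": (85, 12), "w2": (30, 40), "w3": (55, 180)}))
```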

3.
Research on the Application of the HPCC Benchmark for High-Performance Computer Performance Evaluation
With the rapid development of high-performance computers, performance evaluation is becoming increasingly important. The HPCC benchmark suite integrates the evaluation of computation, memory access, and network transfer performance, and is used for the comprehensive evaluation of high-performance computers. Building on research into performance evaluation techniques for high-performance computers, this paper explores the application of the HPCC benchmark in depth.

4.
Finding replacement candidates to accommodate a new object is an important research issue in web caching. Owing to new factors arising in transcoding proxies and the aggregate effect of caching multiple versions of the same multimedia object, this problem has become more important and complex as audio and video applications have proliferated over the Internet, especially in mobile computing environments. This paper addresses coordinated cache replacement in transcoding proxies. First, we propose an original model that determines cache replacement candidates on all candidate nodes in a coordinated fashion, with the objective of minimizing the total cost loss, for a linear topology. We formulate this as an optimization problem and present a low-cost optimal solution for deciding cache replacement candidates. Second, we extend the model to solve the same problem for tree networks. Finally, we conduct extensive simulations to evaluate the performance of our solutions against existing models.
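The paper derives an optimal coordinated solution; purely to illustrate the objective of minimizing cost loss when freeing space, here is a simple greedy sketch with hypothetical sizes and per-object eviction costs:

```python
# Greedy sketch of cost-based replacement (not the paper's optimal model).
# Each cached version has a size and a 'cost loss' if evicted (e.g. expected
# re-fetch plus transcoding cost); evict the versions with the smallest cost
# loss per byte freed until enough space is available.

def choose_victims(cache, needed_bytes):
    """cache: {obj_id: (size_bytes, cost_loss)} -> list of ids to evict."""
    by_density = sorted(cache, key=lambda o: cache[o][1] / cache[o][0])
    victims, freed = [], 0
    for obj in by_density:
        if freed >= needed_bytes:
            break
        victims.append(obj)
        freed += cache[obj][0]
    return victims

cache = {"v1": (400, 2.0), "v2": (150, 9.0), "v3": (800, 3.5)}
print(choose_victims(cache, 500))   # hypothetical sizes and costs
```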

5.
As the number of electronic commerce (e-commerce) users has increased explosively, the resulting heavy network traffic delays the service of e-commerce systems. This paper focuses on the operational efficiency and response speed of e-commerce systems. An e-commerce system with a hierarchical structure based on a local server is designed, and we propose a web object replacement algorithm that accounts for the heterogeneity of web objects. The algorithm divides the cache into scopes according to object-size reference characteristics, reducing the size heterogeneity of cached web objects. The performance of the system and the algorithm is analyzed experimentally. The results show a 10–20% improvement in object-hit ratio over previous replacement algorithms, and a 15–30% improvement in response speed with the proposed system.
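A minimal sketch, under the assumption that the abstract's "divided scope" means partitioning the cache by object-size class and running LRU within each class (the paper's exact scheme may differ); the size boundaries and capacities here are hypothetical:

```python
from collections import OrderedDict

class ScopedLRUCache:
    """LRU within size-based scopes, so large and small objects don't compete."""

    def __init__(self, scope_capacity, boundaries=(10_000, 1_000_000)):
        self.boundaries = boundaries                 # bytes: small/medium/large
        self.scopes = [OrderedDict() for _ in range(len(boundaries) + 1)]
        self.capacity = scope_capacity               # max objects per scope

    def _scope(self, size):
        return sum(size > b for b in self.boundaries)

    def put(self, key, size):
        scope = self.scopes[self._scope(size)]
        scope[key] = size
        scope.move_to_end(key)
        if len(scope) > self.capacity:               # evict LRU in this scope
            scope.popitem(last=False)

cache = ScopedLRUCache(scope_capacity=2)
for key, size in [("a", 500), ("b", 2_000_000), ("c", 700), ("d", 900)]:
    cache.put(key, size)
```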

6.
Linux-based mobile computing systems such as robots, electronic control devices, and smartphones are among the most important types of P2P cloud systems today. To improve the overall performance of networked systems, each mobile computing system requires real-time characteristics. Developers therefore want to know how well real-time responsiveness is supported, and several real-time measurement tools have been proposed. However, each of these tools uses its own measurement scheme, and we argue that their results do not show how responsive the systems actually are. In this paper, we propose ELRM, a new real-time measurement method with clearly defined measurement intervals and an accurate method for measuring real-time responsiveness. We evaluate ELRM on various mobile computing systems and compare it with existing models. As a result, our method obtains more accurate and intuitive real-time responsiveness measurements.
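ELRM's measurement intervals are defined in the paper itself; as background, here is a cyclictest-style sketch of the kind of quantity such tools measure on Linux: sleep for a fixed period and record how late the wakeup actually occurs.

```python
# Not ELRM: a generic wakeup-latency measurement sketch.
import time

def measure_wakeup_latency(period_s=0.001, samples=1000):
    latencies = []
    for _ in range(samples):
        deadline = time.monotonic() + period_s
        time.sleep(period_s)
        # Positive value = the scheduler woke us up this many seconds late.
        latencies.append(time.monotonic() - deadline)
    return max(latencies), sum(latencies) / len(latencies)

worst, mean = measure_wakeup_latency()
print(f"worst={worst*1e6:.0f}us mean={mean*1e6:.0f}us")
```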

7.
The development of knowledge-based systems involves managing a diversity of knowledge sources, computing resources, and system users, often distributed geographically. The knowledge acquisition, modelling, and representation communities have developed a wide range of tools relevant to the development and management of large-scale knowledge-based systems, but the majority of these tools run on individual workstations and use specialist data formats, making system integration and knowledge interchange very problematic. However, widespread access to the Internet has led to a new era of distributed client–server computing. In particular, the introduction of support for forms on the World Wide Web in late 1993 provided an easily programmable, cross-platform graphical user interface that has become widely used in innovative interactive systems. This article reports on the development of open-architecture knowledge management tools operating through the web to support knowledge acquisition, representation, and inference through semantic networks and repertory grids.

8.
Simulation of physical phenomena on computers has joined engineering mechanics theory and laboratory experimentation as the third method of engineering analysis and design. It is in fact the only feasible method for analyzing many critically important phenomena, e.g., quenching, heat treating, and full-scale phenomena. With the rapid maturation of inexpensive parallel computing technology and high-performance communications, a totally new, highly interactive computing/simulation environment supporting engineering design optimization will shortly emerge. This environment will exist on the Internet, employing the interoperable software/hardware infrastructures emerging today. A key element will be the development of computational software that can exploit all HPCC advances. This paper introduces the concept and details the development of a prototype user-adapted computational simulation software platform on the Internet.

9.
Support vector machines (SVM) and other machine-learning (ML) methods have been explored as ligand-based virtual screening (VS) tools for facilitating lead discovery. While exhibiting good hit-selection performance when screening large compound libraries, these methods tend to produce lower hit-rates than the best-performing VS tools, partly because their training sets contain a limited spectrum of inactive compounds. We tested whether the performance of SVM can be improved by using training sets of diverse inactive compounds. In retrospective database screening of active compounds with a single mechanism (HIV protease inhibitors, DHFR inhibitors, dopamine antagonists) and multiple mechanisms (CNS-active agents) from large libraries of 2.986 million compounds, the yields, hit-rates, and enrichment factors of our SVM models are 52.4–78.0%, 4.7–73.8%, and 214–10,543, respectively, compared with 62–95%, 0.65–35%, and 20–1200 for structure-based VS and 55–81%, 0.2–0.7%, and 110–795 for other ligand-based VS tools when screening libraries of ≥1 million compounds. The hit-rates are comparable, and the enrichment factors are substantially better than the best results of other VS tools. 24.3–87.6% of the predicted hits lie outside the known hit families. SVM thus appears potentially useful for facilitating lead discovery in VS of large compound libraries.
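A sketch of the general setup only (an SVM trained with many diverse inactives, then used to screen a library), using scikit-learn and random vectors as stand-ins for real molecular descriptors; the descriptors, kernel, and data here are not the authors':

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical descriptor vectors: a few actives, many diverse inactives.
X_active = rng.normal(1.0, 1.0, size=(50, 64))
X_inactive = rng.normal(0.0, 1.0, size=(5000, 64))
X = np.vstack([X_active, X_inactive])
y = np.array([1] * len(X_active) + [0] * len(X_inactive))

# Class weighting compensates for the active/inactive imbalance.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)

library = rng.normal(0.5, 1.0, size=(10_000, 64))   # compounds to screen
hits = library[clf.predict(library) == 1]
print(f"{len(hits)} predicted hits out of {len(library)}")
```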

10.
Next-generation scientific applications feature complex workflows composed of many computing modules with intricate inter-module dependencies. Supporting such scientific workflows in wide-area networks, especially Grids, and optimizing their performance are crucial to the success of collaborative scientific discovery. We develop a Scientific Workflow Automation and Management Platform (SWAMP), which enables scientists to conveniently assemble, execute, monitor, control, and steer computing workflows in distributed environments via a unified web-based user interface. The SWAMP architecture is built entirely on a seamless composition of web services: its own functionalities are provided, and its interactions with other tools or systems are enabled, through web services, giving easy access over standard Internet protocols while remaining independent of platform and programming language. SWAMP also incorporates a class of efficient workflow mapping schemes to achieve optimal end-to-end performance, based on rigorous performance modeling and algorithm design. The performance superiority of SWAMP over existing workflow mapping schemes is demonstrated by extensive simulations, and the system's efficacy is illustrated by large-scale experiments on real-life scientific workflows for climate modeling through effective system implementation, deployment, and testing on the Open Science Grid.
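SWAMP's actual mapping schemes are not specified in the abstract; as an illustration of workflow-to-node mapping in general, here is a greedy, HEFT-flavoured sketch that processes modules in topological order and places each on the node with the earliest estimated finish time (all names and costs are hypothetical):

```python
from graphlib import TopologicalSorter

def map_workflow(deps, cost, nodes):
    """deps: {module: set(predecessors)}; cost: {(module, node): seconds}."""
    ready_at = {n: 0.0 for n in nodes}       # when each node becomes free
    finish, placement = {}, {}
    for m in TopologicalSorter(deps).static_order():
        pred_done = max((finish[p] for p in deps.get(m, ())), default=0.0)
        best = min(nodes,
                   key=lambda n: max(ready_at[n], pred_done) + cost[m, n])
        start = max(ready_at[best], pred_done)
        finish[m] = ready_at[best] = start + cost[m, best]
        placement[m] = best
    return placement

deps = {"prep": set(), "sim": {"prep"}, "viz": {"sim"}}
cost = {(m, n): c for m in deps for n, c in [("n1", 2.0), ("n2", 3.0)]}
print(map_workflow(deps, cost, ["n1", "n2"]))
```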
