20 similar documents found, search time 0 ms
1.
2.
3.
CROWN: A service grid middleware with trust management mechanism (cited 7 times in total: 0 self-citations, 7 by others)
Based on a proposed Web service-based grid architecture, this paper designs a service grid middleware system called CROWN. The two kernel components of the middleware are discussed: an overlay-based distributed grid resource management mechanism, and a policy-based distributed access control mechanism capable of automatically negotiating access control policies and of managing and negotiating trust. Experience with CROWN testbed deployment and application development shows that the middleware can support typical scenarios such as computing-intensive applications, data-intensive applications, and mass information processing applications.
4.
A complicated task running on a grid system is usually made up of many services, each of which typically offers better service quality at a higher cost. Mapping service level agreements (SLAs) optimally means finding the most appropriate quality level for each service such that the overall SLA of the task is achieved at the minimum cost. This paper considers mapping the execution-time SLA in the case of a discrete cost function, which is an NP-hard problem. Because mapping SLAs is computationally expensive, we propose a precomputation scheme that selects each individual service level in advance for every possible SLA requirement, which greatly reduces request response time. We use a (1+ε)-approximation method whose solution for any time bound costs at most (1+ε) times the optimal cost. Simulation results demonstrate the superiority of our method compared with others.
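The abstract only states that the selection problem is NP-hard and solved approximately; as a rough illustration of the underlying selection problem (not the paper's algorithm), the following Python sketch picks one quality level per service so that the summed execution time stays within a bound at minimum total cost, using a small exact dynamic program over integer time units. All service data and names are invented.

```python
from math import inf

# Hypothetical input: for each service, the available (execution_time, cost) levels.
# The numbers are invented for illustration and do not come from the paper.
services = [
    [(4, 10), (2, 25), (1, 60)],   # service A: slower and cheaper ... faster and dearer
    [(5, 8), (3, 20)],             # service B
    [(6, 5), (2, 40)],             # service C
]

def min_cost_mapping(services, time_bound):
    """Exact DP over integer time budgets: pick one level per service so the summed
    execution time stays within time_bound at minimum total cost.

    Returns (min_cost, chosen_level_indices), or (inf, None) if no mapping fits.
    Only practical for small instances; the paper instead uses a (1+eps)-approximation
    together with precomputation for every possible SLA requirement.
    """
    best = [0] * (time_bound + 1)               # cost with no services assigned yet
    picks = [[] for _ in range(time_bound + 1)]
    for levels in services:
        new_best = [inf] * (time_bound + 1)
        new_picks = [None] * (time_bound + 1)
        for t in range(time_bound + 1):
            for i, (exec_time, cost) in enumerate(levels):
                if exec_time <= t and best[t - exec_time] != inf:
                    candidate = best[t - exec_time] + cost
                    if candidate < new_best[t]:
                        new_best[t] = candidate
                        new_picks[t] = picks[t - exec_time] + [i]
        best, picks = new_best, new_picks
    return best[time_bound], picks[time_bound]

if __name__ == "__main__":
    cost, levels = min_cost_mapping(services, time_bound=10)
    print("minimum cost:", cost, "chosen level per service:", levels)
```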
5.
Distribution of data and computation allows for solving larger problems and executing applications that are distributed in nature. The grid is a distributed computing infrastructure that enables coordinated resource sharing within dynamic organizations consisting of individuals, institutions, and resources. The grid extends the distributed and parallel computing paradigms, allowing for resource negotiation and dynamic allocation, heterogeneity, open protocols, and services. Grid environments can be used both for compute-intensive tasks and for data-intensive applications by exploiting their resources, services, and data access mechanisms. Data mining algorithms and knowledge discovery processes are both compute and data intensive; therefore, the grid can offer a computing and data management infrastructure for supporting decentralized and parallel data analysis. This paper discusses how grid computing can be used to support distributed data mining. Research activities in grid-based data mining and some challenges in this area are presented, along with some promising future directions for developing grid-based distributed data mining.
6.
Design and implementation of Web service-based DAI middleware in a grid environment (cited 1 time in total: 0 self-citations, 1 by others)
This paper analyzes the requirements for data access and integration (DAI) in grid environments and, drawing on the characteristics of middleware technology, proposes an implementation strategy for DAI middleware with an N-tier application architecture: a middle-layer software component, a "virtual database system", is built between the various kinds of data resources and the grid applications. Authorized and authenticated users can dynamically access and integrate the distributed, heterogeneous data resources of the grid environment through the standard access interfaces and unified data format provided by the "virtual database system".
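The abstract describes the virtual database system only at an architectural level; the following is a minimal, purely illustrative Python sketch of such a middle layer (all class names, methods, and data are invented here, not taken from the middleware): one query entry point that authenticates the caller, fans the request out to adapters wrapping heterogeneous back-end resources, and returns results in a unified record format.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Record:
    source: str          # which back-end resource produced the row
    fields: dict         # unified field-name -> value representation

class ResourceAdapter:
    """Wraps one heterogeneous data resource behind a common interface."""
    def __init__(self, name: str, rows: Iterable[dict]):
        self.name = name
        self._rows = list(rows)

    def query(self, predicate) -> list[Record]:
        return [Record(self.name, row) for row in self._rows if predicate(row)]

class VirtualDatabase:
    """Hypothetical middle layer: one entry point over many adapters."""
    def __init__(self, adapters: list[ResourceAdapter], authorized_users: set[str]):
        self._adapters = adapters
        self._authorized = authorized_users

    def query(self, user: str, predicate) -> list[Record]:
        if user not in self._authorized:          # stand-in for real authentication
            raise PermissionError(f"user {user!r} is not authorized")
        results: list[Record] = []
        for adapter in self._adapters:            # fan the query out to every resource
            results.extend(adapter.query(predicate))
        return results

if __name__ == "__main__":
    vdb = VirtualDatabase(
        adapters=[
            ResourceAdapter("site_a_rdbms", [{"id": 1, "value": 42}]),
            ResourceAdapter("site_b_files", [{"id": 2, "value": 7}]),
        ],
        authorized_users={"alice"},
    )
    print(vdb.query("alice", lambda row: row["value"] > 10))
```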
7.
The development of grid computing technology and the emergence of Web Services have made it possible to integrate diverse computing resources to solve highly challenging scientific and engineering computing problems. Building on a study of Netsolve, an existing grid computing system for high-performance computing, this paper proposes a grid system for high-performance computing that incorporates Web Services technology and, based on a preliminary implementation, discusses the strengths and weaknesses of the system.
8.
Lutz Schubert, Alexander Kipp 《Advanced Engineering Informatics》2008,22(4):431-437
Collaborative Engineering tasks are difficult to manage and involve a high amount of risk; as such, they generally involve only well-known, pre-established relationships. Such collaborations are generally quite static and do not allow for dynamic reactions to changes in the environment. Furthermore, not all optimal resource providers can be utilised for the respective tasks, as they are potentially unknown. The TrustCoM project elaborated the means to create and manage Virtual Organisations in a trusted and secure manner, integrating different providers on demand. However, TrustCoM focused more on the VO than on the participant, whereas the BREIN project is now enhancing the intelligence of such VO systems to support even providers with little business expertise and provide them with capabilities to optimise their performance. This paper analyses the capabilities of current VO frameworks, using TrustCoM as an example, and identifies the gaps from the participant’s perspective. It then shows how BREIN addresses these gaps.
9.
10.
Grid service management is a core problem in grid computing. After analyzing and comparing three existing models of grid service management architecture, this paper examines the functional requirements of a grid service management system based on the Open Grid Services Architecture (OGSA) and designs a hierarchical grid service management model, HGSM, describing its workflow. Grid service management is divided into three layers: task decomposition, static scheduling, and dynamic scheduling. The functional modules of each layer of HGSM are discussed, and algorithms are proposed for task decomposition and service scheduling based on directed acyclic graphs and high-level stochastic Petri nets, respectively. The enabling predicates, random switches, firing rates, and other elements of these algorithms can be implemented directly when programming an SPN solver, providing a practical and effective way to construct a hierarchical grid service management model.
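The abstract mentions a DAG-based task decomposition algorithm without reproducing it; purely as a hypothetical illustration of one common way to decompose a task DAG (the example graph and function names are invented), tasks can be grouped into stages by repeatedly peeling off those whose dependencies are already satisfied, i.e. a level-by-level topological sort:

```python
def decompose_into_stages(dependencies):
    """Group tasks of a DAG into stages; every task's predecessors lie in earlier stages.

    dependencies: dict mapping task -> set of tasks it depends on (an invented format,
    not the paper's notation).
    """
    remaining = {task: set(deps) for task, deps in dependencies.items()}
    stages = []
    while remaining:
        ready = [t for t, deps in remaining.items() if not deps]   # no unmet dependency
        if not ready:
            raise ValueError("dependency cycle detected; not a DAG")
        stages.append(ready)
        for t in ready:
            del remaining[t]
        for deps in remaining.values():
            deps.difference_update(ready)
    return stages

if __name__ == "__main__":
    # Hypothetical workflow: B needs A, C and D need B, E needs C and D.
    dag = {"A": set(), "B": {"A"}, "C": {"B"}, "D": {"B"}, "E": {"C", "D"}}
    print(decompose_into_stages(dag))   # [['A'], ['B'], ['C', 'D'], ['E']]
```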
11.
Nowadays organisations are willing to outsource their business processes as services and make them accessible via the Web. In doing so, they can dynamically combine individual services into their service applications. However, unless the data on the Web can be meaningfully shared and interpreted, this objective cannot be realised. In this paper, a new agent-based approach for managing ontology evolution in a Web services environment is presented. The proposed approach has several key characteristics, such as flexibility and extensibility, that differentiate this research from others. The refinement mechanisms which cope with an evolving ontology are carefully examined. The novelty of our work is that inter-processes between different ontologies are studied from the agent’s perspective. Based on this perspective, an agent negotiation model is applied to reach an agreement regarding ontology discrepancy in an application. The efficiency and effectiveness of reaching an agreement over an ontology dispute is leveraged by the private negotiation strategy applied in the argumentation approach. An extended negotiation strategy is discussed to enable sufficient information in decision making at each negotiation round. A case study is presented to demonstrate ontology refinement in a Web services environment.
12.
In this paper, we consider designing pro-active failure handling strategies for grid environments. These strategies estimate the availability of resources in the grid and preemptively calculate the grid's expected long-term capacity. Using these strategies, we create modified versions of the backfill and replication algorithms that incorporate all three pro-active strategies, in order to ascertain the effectiveness of each in preventing job failures during execution. We also extend our earlier work on a coordinate-based allocation strategy; the extended algorithm likewise shows continual improvement when operating in the same execution environment. In our experiments, we compare these enhanced algorithms to their original forms and show that pro-active failure handling is able, in some cases, to avoid all job failures during execution. We also show that, among the algorithms we have considered, NSA provides the best balance between enhanced throughput and job failures during execution.
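The abstract does not state how the availability estimates feed into the modified backfill and replication algorithms; as a loose, hypothetical sketch only (the exponential failure model and all names are assumptions, not the paper's estimator), a failure-aware scheduler might prefer resources whose estimated probability of staying up for the whole job is highest and skip those below a threshold:

```python
import math

def survival_probability(mean_time_between_failures_h, job_runtime_h):
    """Crude exponential-failure model: P(resource stays up for the whole job).

    This model is an assumption made for illustration, not the paper's estimator.
    """
    return math.exp(-job_runtime_h / mean_time_between_failures_h)

def pick_resource(resources, job_runtime_h, min_probability=0.8):
    """Choose the free resource most likely to survive the job; None if all are too risky."""
    candidates = [
        (survival_probability(r["mtbf_h"], job_runtime_h), r["name"])
        for r in resources
        if r["free"]
    ]
    candidates = [(p, name) for p, name in candidates if p >= min_probability]
    return max(candidates)[1] if candidates else None

if __name__ == "__main__":
    # Invented resource descriptions.
    grid = [
        {"name": "node-a", "mtbf_h": 200.0, "free": True},
        {"name": "node-b", "mtbf_h": 20.0, "free": True},
        {"name": "node-c", "mtbf_h": 500.0, "free": False},
    ]
    print(pick_resource(grid, job_runtime_h=10.0))   # node-a: highest survival chance
```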
13.
14.
With the proliferation of Web services, scientific applications are more and more often designed as temporal compositions of services, commonly referred to as workflows. To address this paradigm shift, different workflow management systems have been proposed. While their efficiency has been established over centralized static systems, it is questionable over decentralized failure-prone platforms. Scientific applications have recently started to be deployed over large distributed computing platforms, leading to new issues such as elasticity, i.e., the possibility to dynamically refine, at runtime, the amount of resources dedicated to an application. This has again raised the demand for new programming models able to express autonomic self-coordination of services in a dynamic platform. Nature-inspired, rule-based computing models have recently gained a lot of attention in this context. They are able to naturally express parallelism, distribution, and autonomic adaptation. While their high expressiveness and adequacy for this context have been established, such models severely suffer from a lack of proofs of concept. In this paper, we concretely show how to leverage such models in this context. We focus on the design, implementation, and experimental validation of a chemistry-inspired scientific workflow management system.
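The abstract does not describe the chemical model in detail; as a toy, hypothetical illustration of the general idea behind chemistry-inspired, rule-based (multiset-rewriting) coordination (the rules and molecule encoding are invented), the sketch below repeatedly applies reaction rules to a multiset of data "molecules" until no rule can fire, which is roughly how such models express order-independent, self-coordinated workflow steps:

```python
from collections import Counter

def run_chemical_program(molecules, rules):
    """Apply reaction rules to a multiset of molecules until a stable state is reached.

    molecules: Counter of tokens.
    rules: list of (reactants, reaction) pairs, where reactants is a Counter of tokens
           the rule consumes and reaction maps the consumed tokens to produced tokens.
    """
    solution = Counter(molecules)
    changed = True
    while changed:
        changed = False
        for reactants, react in rules:
            if all(solution[tok] >= n for tok, n in reactants.items()):
                solution -= reactants                  # consume the reactants
                solution += Counter(react(reactants))  # add the products
                changed = True
    return solution

if __name__ == "__main__":
    # Toy workflow: two "raw" data molecules react into one "processed" molecule,
    # and a "processed" molecule plus a "publish" token react into a "result".
    rules = [
        (Counter({"raw": 2}), lambda _: {"processed": 1}),
        (Counter({"processed": 1, "publish": 1}), lambda _: {"result": 1}),
    ]
    initial = Counter({"raw": 4, "publish": 2})
    print(run_chemical_program(initial, rules))   # Counter({'result': 2})
```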
15.
Robert L. Yunhong David Michal Joe Alex Ani Kazumi Oie Minsun Yoonjoo Woojin 《Future Generation Computer Systems》2006,22(8):940-948
In this paper, we describe two distributed, data-intensive applications that were demonstrated at iGrid 2005 (iGrid Demonstration US109 and iGrid Demonstration US121). One involves transporting astronomical data from the Sloan Digital Sky Survey (SDSS), and the other involves computing histograms from multiple high-volume data streams. Both rely on newly developed data transport and data mining middleware. Specifically, we describe a new version of the UDT network protocol called Composible-UDT, a file transfer utility based upon UDT called UDT-Gateway, and an application for building histograms on high-volume data flows called BESH (Best Effort Streaming Histogram). For both demonstrations, we include a summary of the experimental studies performed at iGrid 2005.
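No details of BESH's histogram construction are given in the abstract; the following is a generic, hypothetical sketch of one-pass fixed-width histogramming over a high-volume stream (the binning scheme and names are invented and are not BESH's actual algorithm):

```python
def stream_histogram(values, lo, hi, n_bins):
    """One-pass fixed-width histogram over a stream of numeric values.

    Values outside [lo, hi) are counted in an overflow bucket rather than dropped,
    so the totals still reflect everything that went past on the wire.
    """
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    overflow = 0
    for v in values:                      # 'values' can be any iterator, e.g. a socket reader
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
        else:
            overflow += 1
    return counts, overflow

if __name__ == "__main__":
    import random
    stream = (random.gauss(50, 15) for _ in range(100_000))   # stand-in for a data flow
    counts, overflow = stream_histogram(stream, lo=0, hi=100, n_bins=10)
    print(counts, "out of range:", overflow)
```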
16.
17.
Hai Jin, Yaqin Luo, Jie Dai 《Journal of Systems and Software》2010,83(10):1983-1994
When the scale of a computational system grows from a single machine to a grid with potentially thousands of heterogeneous nodes, the interdependencies among resources and software components make management and maintenance activities much more complicated. One of the most important challenges to overcome is how to balance maintenance of the system against global system availability. In this paper, a novel mechanism, the Cobweb Guardian, is proposed; it provides solutions not only to reduce the effects of maintenance but also to remove the effects that deployment dependencies, invocation dependencies, and environment dependencies have on system availability. By using the Cobweb Guardian, grid administrators can execute maintenance tasks safely at runtime while ensuring high system availability. The results of our evaluations show that the proposed dependency-aware maintenance mechanism can significantly increase the throughput and the availability of the whole system at runtime.
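The abstract names the three dependency types but not the mechanism; purely as a hypothetical sketch of what dependency-aware maintenance could look like in code (the dependency graph and helper names are invented), one can compute, before taking a component offline, the set of components that transitively depend on it and defer the maintenance if critical services would be affected:

```python
def affected_by_maintenance(dependents, component):
    """Return every component that directly or transitively depends on `component`.

    dependents: dict mapping a component to the set of components that depend on it
    (covering deployment, invocation, and environment dependencies alike in this toy model).
    """
    affected, stack = set(), [component]
    while stack:
        current = stack.pop()
        for dep in dependents.get(current, set()):
            if dep not in affected:
                affected.add(dep)
                stack.append(dep)
    return affected

if __name__ == "__main__":
    # Invented dependency graph: the scheduler runs on node-7, the portal and the
    # monitor invoke the scheduler.
    dependents = {
        "node-7": {"scheduler"},
        "scheduler": {"portal", "monitor"},
    }
    critical = {"portal"}
    impact = affected_by_maintenance(dependents, "node-7")
    print("impact:", impact)
    print("safe to maintain now:", impact.isdisjoint(critical))
```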
18.
Marcelo S. Sousa, Alba C.M.A. Melo, Azzedine Boukerche 《Journal of Parallel and Distributed Computing》2010
In the last decade, we have observed an unprecedented development in molecular biology. An extremely high number of organisms have been sequenced in genome projects and included in genomic databases for further analysis. These databases present an exponential growth rate and are intensively accessed daily, all over the world. Once a sequence is obtained, its function and/or structure must be determined. Direct experimentation is considered to be the most reliable way to do that; however, the experiments that must be conducted are very complex and time consuming. For this reason, it is far more productive to use computational methods to infer biological information from a sequence. This is usually done by comparing the new sequence with sequences whose characteristics have already been determined. BLAST is the most widely used heuristic tool for sequence comparison, and thousands of BLAST searches are made daily all over the world. In order to further reduce BLAST execution time, cluster and grid environments can be used effectively. This paper proposes and evaluates an adaptive task allocation framework to perform BLAST searches in a grid environment. The framework, called PackageBLAST, provides an infrastructure that executes distributed BLAST genomic database comparisons. In addition, it is flexible, since the user can choose or incorporate new task allocation strategies. Furthermore, we propose a mechanism to compute grid nodes’ execution weights, adapting the chosen allocation policy to the observed computational power and local load of the nodes. Our results present very good speedups: for instance, in a 16-machine heterogeneous grid testbed, a speedup of 14.59 was achieved, reducing the BLAST execution time from 30.88 min to 2.11 min. We also show that the adaptive task allocation strategy was able to handle the complexity of a grid environment successfully.
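The abstract does not give the execution-weight formula; as an illustrative, hypothetical sketch (the formula and all names are invented, not PackageBLAST's actual mechanism), a node's weight could be its nominal computational power discounted by its observed load, with work packages then handed out in proportion to those weights:

```python
def execution_weight(power, load):
    """Invented weight: nominal power discounted by current load (load in [0, 1])."""
    return power * (1.0 - load)

def allocate_packages(nodes, total_packages):
    """Split `total_packages` work packages across nodes in proportion to their weights."""
    weights = {name: execution_weight(p, l) for name, (p, l) in nodes.items()}
    total_weight = sum(weights.values())
    shares = {name: int(total_packages * w / total_weight) for name, w in weights.items()}
    # Hand any packages lost to rounding to the node with the largest weight.
    shares[max(weights, key=weights.get)] += total_packages - sum(shares.values())
    return shares

if __name__ == "__main__":
    # Invented node descriptions: (relative CPU power, current load fraction).
    nodes = {"fast-idle": (4.0, 0.1), "fast-busy": (4.0, 0.8), "slow-idle": (1.0, 0.0)}
    print(allocate_packages(nodes, total_packages=100))
```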
19.
Bartholomäus Kellerer, Manfred Reitenspiess 《International Journal on Software Tools for Technology Transfer (STTT)》2005,7(4):376-387
Telecommunications technologies are undergoing a major paradigm shift: standards-based, off-the-shelf components and the Internet are gaining wide acceptance. The success of this move is strongly dependent upon the quality and availability of these technologies. Practical quality assurance in this environment can take advantage of the tools and methods developed when carrier-grade systems for the telecommunications market were being deployed. Besides standard test methods, availability-related methods for redundant hardware and software components are applied. Statistics are available that prove the success of this approach; the statistical data are derived from the deployment of the commercial product RTP4 Continuous Services, a standards-based high-availability middleware. Additional momentum has been gained in the Service Availability Forum (www.saforum.org), where the interface standards are validated and certified in independent test processes.