Similar Literature
20 similar documents found (search time: 15 ms)
1.
Alex Dvinsky, Roy Friedman. Software, 2015, 45(10): 1429–1455
This paper reports on our experience designing and developing Chameleon, a highly portable and adaptable group communication framework for smartphones. Chameleon owes its level of portability to several design choices, including the following: (i) a layered architecture, where the headers of each layer have a standard XML‐based format, enabling automatic, error‐resistant generation of efficient serialization code on any platform; (ii) reliance only on the J2ME library, which serves as the least common denominator for Java dialects and facilitates automatic translation to .NET; (iii) flexible membership models; and (iv) support for multiple concurrent protocol stacks. Through a single codebase, Chameleon is currently available as an open‐source project for J2ME, J2SE, Android, .NET CF, and .NET. Chameleon is easily extendable and is bundled with tools, configurations, and third‐party code tuned to lift some of the burden normally associated with multiplatform development for smartphones. Both the header generation from XML and the automatic translation to .NET are readily available to any application based on Chameleon. Chameleon's threading model separates the execution of internal layers from the application's code, thereby protecting each from the other. As we describe in the paper, this simplifies layer development and allows the protocol stack to block application calls when required by internal algorithms. Additionally, this model simplifies testing, and an extensive testing framework, also usable for testing application‐specific layers, is supplied with Chameleon. Copyright © 2014 John Wiley & Sons, Ltd.
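The threading model described above (internal layers running separately from application code, with the stack able to block application calls) can be illustrated with a minimal, hypothetical sketch; the class and method names below are illustrative assumptions, not Chameleon's actual API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch: protocol-stack layers run on their own single-threaded
// executor, so application threads never execute layer code directly and
// can be blocked while internal algorithms (e.g., a view change) are busy.
public class StackDispatcher {
    private final ExecutorService stackThread = Executors.newSingleThreadExecutor();
    private volatile boolean blocked = false;          // set by internal layers

    // Called by internal layers when the stack must quiesce the application.
    void blockApplication()   { blocked = true; }
    void unblockApplication() { synchronized (this) { blocked = false; notifyAll(); } }

    // Called from application threads; waits while the stack is blocked,
    // then hands the message to the layer thread.
    public Future<?> send(byte[] message) throws InterruptedException {
        synchronized (this) {
            while (blocked) wait();
        }
        return stackThread.submit(() -> deliverDown(message));
    }

    private void deliverDown(byte[] message) {
        // ... pass the message down through the layers (illustrative placeholder)
    }
}
```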

2.
Project Bayanihan is developing the idea of volunteer computing, which seeks to enable people to form very large parallel computing networks very quickly by using ubiquitous and easy-to-use technologies such as web browsers and Java. By utilizing Java’s object-oriented features, we have built a flexible software framework that makes it easy for programmers to write different volunteer computing applications, while allowing researchers to study and develop the underlying mechanisms behind them. In this paper, we show how we have used this framework to write master-worker style applications, and to develop approaches to the problems of programming interface, adaptive parallelism, fault-tolerance, computational security, scalability, and user interface design.
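A minimal, self-contained sketch of the master-worker style mentioned above, with plain Java threads standing in for browser-based volunteer workers; all names are illustrative assumptions, not Bayanihan's API.

```java
import java.util.concurrent.*;

// Master-worker sketch: the master queues independent work units and
// collects results; each "volunteer" worker repeatedly takes a unit,
// computes it, and reports the result back.
public class MasterWorkerDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> work = new LinkedBlockingQueue<>();
        BlockingQueue<int[]> results = new LinkedBlockingQueue<>();
        for (int i = 1; i <= 20; i++) work.add(i);           // work units

        int volunteers = 4;
        ExecutorService pool = Executors.newFixedThreadPool(volunteers);
        for (int v = 0; v < volunteers; v++) {
            pool.submit(() -> {
                Integer unit;
                while ((unit = work.poll()) != null) {        // pull until the queue is empty
                    results.add(new int[] { unit, unit * unit }); // stand-in "computation"
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        int[] r;
        while ((r = results.poll()) != null)
            System.out.println("unit " + r[0] + " -> " + r[1]);
    }
}
```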

3.
The goal of the Network Weather Service is to provide accurate forecasts of dynamically changing performance characteristics from a distributed set of metacomputing resources. Providing a ubiquitous service that can both track dynamic performance changes and remain stable in spite of them requires adaptive programming techniques, an architectural design that supports extensibility, and internal abstractions that can be implemented efficiently and portably. In this paper, we describe the current implementation of the NWS for Unix and TCP/IP sockets and provide examples of its performance monitoring and forecasting capabilities.
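A hedged illustration of the kind of forecasting such a service performs: the sketch below keeps a sliding window of measurements and compares two simple predictors, using whichever has had the lower recent error. This is a simplification of NWS's actual forecaster suite, and all names are assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sliding-window forecaster sketch: keeps recent measurements, forecasts
// with both a window mean and a last-value predictor, and answers with
// whichever predictor currently has the smaller cumulative absolute error.
public class SlidingForecaster {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int capacity;
    private double meanErr = 0, lastErr = 0;
    private Double prevMeanForecast, prevLastForecast;

    public SlidingForecaster(int capacity) { this.capacity = capacity; }

    public void addMeasurement(double value) {
        // Score the forecasts that were made before this measurement arrived.
        if (prevMeanForecast != null) meanErr += Math.abs(value - prevMeanForecast);
        if (prevLastForecast != null) lastErr += Math.abs(value - prevLastForecast);

        window.addLast(value);
        if (window.size() > capacity) window.removeFirst();

        prevMeanForecast = window.stream().mapToDouble(Double::doubleValue).average().orElse(value);
        prevLastForecast = value;
    }

    /** Forecast for the next measurement: use the predictor with lower error so far. */
    public double forecast() {
        if (prevMeanForecast == null) throw new IllegalStateException("no data yet");
        return meanErr <= lastErr ? prevMeanForecast : prevLastForecast;
    }
}
```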

4.
The Globus project: a status report   (total citations: 8; self-citations: 0; citations by others: 8)
The Globus project is a multi-institutional research effort that seeks to enable the construction of computational grids providing pervasive, dependable, and consistent access to high-performance computational resources, despite geographical distribution of both resources and users. Computational grid technology is being viewed as a critical element of future high-performance computing environments that will enable entirely new classes of computation-oriented applications, much as the World Wide Web fostered the development of new classes of information-oriented applications. In this paper, we report on the status of the Globus project as of early 1998. We describe the progress that has been achieved to date in the development of the Globus toolkit, a set of core services for constructing grid tools and applications. We also discuss the Globus Ubiquitous Supercomputing Testbed Organization (GUSTO) that we have constructed to enable large-scale evaluation of Globus technologies, and we review early experiences with the development of large-scale grid applications on the GUSTO testbed.

5.
The huge amount of computing resources in the Internet makes it possible to build metacomputers for solving large-scale problems. Despite the great availability of software infrastructures for managing such systems, metacomputer programming is often based on models that do not appear to be suitable to run applications on wide-area, unreliable, highly-variable networks of computers. In this paper, we present a customisable, Java-based middleware which provides programmers with a portable and flexible framework to run applications over a hierarchical, virtual network architecture. The middleware is designed according to a component-based approach that enables the execution behaviour of each computing node to be customised in order to satisfy application needs. The paper shows some examples of programming model customisation and demonstrates that flexibility can be achieved without significantly compromising performance.

6.
7.
Low bandwidth hinders the development of mobile computing. Besides providing relatively higher bandwidth at the communication layer, building adaptable upper-layer applications is important. In this paper, a framework of auto-adapting distributed objects is proposed, and methods for evaluating object performance are given as well. Distributed objects can adjust their behaviour automatically within the framework and maintain relatively good performance when serving requests from remote applications. This is an efficient way to implement performance transparency for mobile clients.
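A minimal sketch of the auto-adaptation idea: a server-side object that switches between a full and a reduced reply depending on a measured bandwidth estimate. All names and thresholds are hypothetical, not the paper's framework.

```java
// Auto-adapting distributed object sketch: the object tracks an estimate
// of the link bandwidth and degrades its reply (omitting bulky detail)
// when the link is slow, keeping response times acceptable for mobile clients.
public class AdaptiveCatalogService {
    private volatile double bandwidthKbps = 1000;        // updated by a monitor

    public void updateBandwidthEstimate(double kbps) { bandwidthKbps = kbps; }

    public String describeItem(String itemId) {
        if (bandwidthKbps < 64) {
            return "item:" + itemId;                      // compact form for slow links
        }
        return "item:" + itemId + " (full description, images, reviews...)";
    }
}
```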

8.
The service‐oriented architecture paradigm can be exploited for the implementation of data and knowledge‐based applications in distributed environments. The Web services resource framework (WSRF) has recently emerged as the standard for the implementation of Grid services and applications. WSRF can be exploited for developing high‐level services for distributed data mining applications. This paper describes Weka4WS, a framework that extends the widely used open source Weka toolkit to support distributed data mining on WSRF‐enabled Grids. Weka4WS adopts the WSRF technology for running remote data mining algorithms and managing distributed computations. The Weka4WS user interface supports the execution of both local and remote data mining tasks. On every computing node, a WSRF‐compliant Web service is used to expose all the data mining algorithms provided by the Weka library. The paper describes the design and implementation of Weka4WS using the WSRF libraries and services provided by Globus Toolkit 4. A performance analysis of Weka4WS for executing distributed data mining tasks in different network scenarios is presented. Copyright © 2008 John Wiley & Sons, Ltd.
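As context for what such a node-level service wraps, the snippet below shows an ordinary local Weka call (training a J48 decision tree); in Weka4WS the equivalent computation would be invoked remotely through the WSRF service rather than in-process. The dataset file name is an assumption.

```java
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

// Local Weka usage sketch: load a dataset, mark the class attribute,
// and build a decision tree. Weka4WS exposes this kind of computation
// as a WSRF-compliant Web service on each computing node.
public class LocalWekaExample {
    public static void main(String[] args) throws Exception {
        DataSource source = new DataSource("iris.arff");    // hypothetical dataset path
        Instances data = source.getDataSet();
        data.setClassIndex(data.numAttributes() - 1);        // last attribute is the class

        J48 tree = new J48();                                // C4.5 decision tree learner
        tree.buildClassifier(data);
        System.out.println(tree);
    }
}
```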

9.
We describe our DISCWorld system for wide-area, high-performance metacomputing in which we adopt a high-level, service-based approach. Users’ client programs request combinations of services from a set of server nodes which communicate at a peer-based level. DISCWorld is a constrained metacomputing system, running only the service operations its participating resource administrators have chosen to provide and advertise, and provides a common integration environment for clients to access these services and developers to make them available. We discuss our software architecture and experiences building DISCWorld using Java and CORBA components, and the associated research issues for metacomputing that we are addressing.

10.
Building on an analysis of the Hadoop framework and the TF-IDF algorithm, this paper presents a concrete implementation of TF-IDF on the Hadoop distributed framework. Experiments show that, when processing large volumes of data, the new approach is more efficient than the traditional method.
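A hedged sketch of the first MapReduce stage such an implementation typically needs: counting term frequency per document (the IDF factor would be computed by a follow-up job that counts how many documents contain each term). The classes and the input format are illustrative, not the paper's code.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Term-frequency stage of a MapReduce TF-IDF pipeline:
// map emits ((term@docId), 1); reduce sums the counts per (term, document).
public class TermFrequency {
    public static class TFMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            // Assume each input line is "docId<TAB>document text".
            String[] parts = value.toString().split("\t", 2);
            if (parts.length < 2) return;
            for (String term : parts[1].toLowerCase().split("\\W+")) {
                if (!term.isEmpty()) ctx.write(new Text(term + "@" + parts[0]), ONE);
            }
        }
    }

    public static class TFReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> counts, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable c : counts) sum += c.get();
            ctx.write(key, new IntWritable(sum));   // raw term count per document
        }
    }
}
```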

11.
Clusters of computers have emerged as mainstream parallel and distributed platforms for high‐performance, high‐throughput and high‐availability computing. To enable effective resource management on clusters, numerous cluster management systems and schedulers have been designed. However, their focus has essentially been on maximizing CPU performance, not on improving the value of utility delivered to users or the quality of service. This paper presents a new computational‐economy‐driven scheduling system called Libra, which has been designed to support the allocation of resources based on users' quality‐of‐service requirements. It is intended to work as an add‐on to an existing queuing and resource management system. The first version has been implemented as a plugin scheduler for the Portable Batch System. The scheduler offers a market‐based, economy‐driven service for managing batch jobs on clusters by scheduling CPU time according to user‐perceived value (utility), determined by budget and deadline rather than system performance considerations. The Libra scheduler has been simulated using the GridSim toolkit to carry out a detailed performance analysis. Results show that the deadline‐ and budget‐based proportional resource allocation strategy improves the utility of the system and user satisfaction compared with system‐centric scheduling strategies. Copyright © 2004 John Wiley & Sons, Ltd.
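A hedged sketch of the kind of deadline-driven proportional allocation described above: each admitted job requests the CPU share it needs to finish by its deadline, and a node admits a job only if the committed shares still fit. The formula and names illustrate the idea, not Libra's exact policy.

```java
import java.util.ArrayList;
import java.util.List;

// Deadline-based proportional share sketch: a job that needs E seconds of
// CPU and must finish within D seconds requires a share of roughly E/D of
// one CPU. A node admits the job only if the total committed share stays <= 1.
public class ProportionalShareNode {
    static class Job {
        final String id; final double cpuSeconds; final double deadlineSeconds;
        Job(String id, double cpuSeconds, double deadlineSeconds) {
            this.id = id; this.cpuSeconds = cpuSeconds; this.deadlineSeconds = deadlineSeconds;
        }
        double requiredShare() { return cpuSeconds / deadlineSeconds; }
    }

    private final List<Job> admitted = new ArrayList<>();

    private double committedShare() {
        return admitted.stream().mapToDouble(Job::requiredShare).sum();
    }

    /** Admit the job if its required CPU share still fits on this node. */
    public boolean tryAdmit(Job job) {
        if (committedShare() + job.requiredShare() <= 1.0) {
            admitted.add(job);
            return true;
        }
        return false;   // reject: the deadline cannot be guaranteed here
    }

    public static void main(String[] args) {
        ProportionalShareNode node = new ProportionalShareNode();
        System.out.println(node.tryAdmit(new Job("a", 60, 120)));  // needs 0.5 -> true
        System.out.println(node.tryAdmit(new Job("b", 90, 300)));  // needs 0.3 -> true
        System.out.println(node.tryAdmit(new Job("c", 80, 100)));  // needs 0.8 -> false
    }
}
```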

12.
Mobile agents (MAs) have shown promise as a powerful means to complement and enhance existing technology in various application areas. In particular, existing work has demonstrated that MAs can simplify the development and improve the performance of certain classes of distributed applications, especially those running in wide-area, heterogeneous, and dynamic networking environments such as the Internet. In our previous work, we extended the application of MAs to the design of distributed control functions, which require the maintenance of logical relationships among, and/or coordination of, processing entities in a distributed system. A novel framework is presented for structuring and building distributed systems, which uses cooperating mobile agents to carry out coordination and cooperation tasks. The framework has been used in our previous work for designing various distributed control functions such as load balancing and mutual exclusion. In this paper, we use the framework to propose a novel approach to detecting deadlocks in distributed systems using mobile agents, which demonstrates the adaptiveness and flexibility of mobile agents. We first describe the MAEDD (Mobile Agent Enabled Deadlock Detection) scheme, in which mobile agents are dispatched to collect and analyze deadlock information distributed across the network sites and, based on the analysis, to detect and resolve deadlocks. Then the design of an adaptive hybrid algorithm derived from the framework is presented. The algorithm can dynamically adapt itself to changes in system state by using different deadlock detection strategies. The performance of the proposed algorithm has been evaluated using simulations. The results show that the algorithm can outperform existing algorithms that use a fixed deadlock detection strategy.
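Deadlock detection in such schemes ultimately reduces to finding cycles in a wait-for graph assembled from the information the agents collect. Below is a minimal, self-contained sketch of that core check only, not the MAEDD algorithm itself.

```java
import java.util.*;

// Wait-for-graph cycle detection sketch: nodes are processes, an edge
// p -> q means "p is waiting for a resource held by q". A cycle in this
// graph indicates a deadlock among the processes on the cycle.
public class WaitForGraph {
    private final Map<String, Set<String>> waitsFor = new HashMap<>();

    public void addWait(String waiter, String holder) {
        waitsFor.computeIfAbsent(waiter, k -> new HashSet<>()).add(holder);
    }

    /** Returns true if any cycle (deadlock) exists, using depth-first search. */
    public boolean hasDeadlock() {
        Set<String> done = new HashSet<>(), onStack = new HashSet<>();
        for (String node : waitsFor.keySet()) {
            if (dfs(node, done, onStack)) return true;
        }
        return false;
    }

    private boolean dfs(String node, Set<String> done, Set<String> onStack) {
        if (onStack.contains(node)) return true;   // back edge -> cycle found
        if (done.contains(node)) return false;     // already fully explored
        onStack.add(node);
        for (String next : waitsFor.getOrDefault(node, Collections.emptySet())) {
            if (dfs(next, done, onStack)) return true;
        }
        onStack.remove(node);
        done.add(node);
        return false;
    }
}
```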

13.
Service-oriented computing (SOC) suggests that the Internet will be an open repository of many modular capabilities realized as web services. Organizations may be able to leverage this SOC paradigm if their employees are able to ubiquitously incorporate such capabilities and their resulting information into their daily practices. It is impractical to assume that human users will be able to manually search vast distributed repositories in real time. This paper presents an architecture, Software Agent-Based Groupware using E-services (SAGE), that incorporates the use of intelligent agents to integrate human users with web services. SAGE provides background search and discovery approaches, thus enabling human users to exploit service-based capabilities that were previously too time-consuming to locate and integrate. We present a multi-agent system where each agent learns the rule-based preferences of a human user with regard to their current operational “context” and manages the incorporation of relevant web services. Recommended by: Djamal Benslimane and Zakaria Maamar

14.
Current environments for metacomputing generally have tools for managing the resources of a metacomputer but often lack adequate tools for designing, writing, and executing programs. Building an application for a metacomputer typically involves writing source codes on a local node, transferring and compiling codes on every node, and starting their execution. Without such tools, the application development phases can come up against considerable difficulties. In order to alleviate these problems, some graphical user interfaces (GUIs) based on PVM, such as XPVM, Parallel Application Development Environment (PADE) and Wide Area Metacomputing Manager (WAMM) have been implemented. These GUIs integrate a programming environment which facilitates the user in performing the application development phases and the application execution.

This paper outlines the general requirements for designing GUIs for metacomputing management, and compares WAMM, a graphical user interface, with related work.


15.
Current distributed architectures offering on-line services involve ever more computing resources. From a provider's point of view, managing such a platform raises challenging resource-allocation problems involving several quality-of-service parameters that must be controlled to keep the platform reliable and efficient. MFHS is a modular, generic framework that can be adapted to any distributed computing environment. Its Resources Discovery module identifies the available computing resources in terms of computing performance, network throughput, and disk I/O speed, and the framework predicts how an experiment should behave (the Pi value). Because setting up real experiments is often complex, MFHS supports three environments: theoretical experimentation (based on models), any kind of distributed emulator, or deployment on real experimental platforms. In this article, these three environments are used to assess the reliability of MFHS (a measured Pi of 90% against a predicted Pi of 94%). Deployment and scheduling studies were also carried out on an experimental OpenStack-based cloud, with the Emulab test-bed used as emulator. During the experiments, the Resources Monitoring module tracks four QoS parameters: energy consumption, cost, resource utilization, and makespan. The studies also include a new heuristic, MMin, based on the Max-Min and Min-Min algorithms. The experimentation section provides a detailed comparative analysis of these algorithms in terms of QoS results and shows the strength of the proposed MMin heuristic with respect to the makespan metric.
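For context, the Min-Min heuristic referred to above schedules, at each step, the task whose earliest achievable completion time across all resources is smallest (Max-Min instead picks the task whose best completion time is largest). A minimal sketch of Min-Min under these assumptions:

```java
import java.util.*;

// Min-Min scheduling sketch: given estimated execution times exec[t][m] of
// task t on machine m, repeatedly pick the unscheduled task with the
// smallest achievable completion time and assign it to that machine.
public class MinMinScheduler {
    public static int[] schedule(double[][] exec) {
        int tasks = exec.length, machines = exec[0].length;
        double[] ready = new double[machines];        // when each machine becomes free
        int[] assignment = new int[tasks];
        boolean[] scheduled = new boolean[tasks];

        for (int step = 0; step < tasks; step++) {
            int bestTask = -1, bestMachine = -1;
            double bestCompletion = Double.MAX_VALUE;
            for (int t = 0; t < tasks; t++) {
                if (scheduled[t]) continue;
                for (int m = 0; m < machines; m++) {
                    double completion = ready[m] + exec[t][m];
                    if (completion < bestCompletion) {
                        bestCompletion = completion; bestTask = t; bestMachine = m;
                    }
                }
            }
            scheduled[bestTask] = true;
            assignment[bestTask] = bestMachine;
            ready[bestMachine] = bestCompletion;
        }
        return assignment;   // makespan = max over ready[]
    }

    public static void main(String[] args) {
        double[][] exec = { {3, 5}, {4, 1}, {6, 7} };   // 3 tasks, 2 machines
        System.out.println(Arrays.toString(schedule(exec)));
    }
}
```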

16.
This paper presents Arcademis, a Java‐based framework for communication middleware development. Arcademis consists of a set of abstract classes, interfaces and concrete components that define the general architecture of middleware systems. Its main objective is to support the implementation of non‐monolithic and easily configurable middleware platforms. Arcademis can be used by middleware developers to deploy systems that meet the requirements of a particular network or technology. Instances of Arcademis can also be customized by distributed systems engineers to meet the requirements of a particular application. For example, new transport protocols, connection management policies, authentication algorithms or invocation semantics can be easily configured in middleware platforms derived from Arcademis. In order to illustrate the use of the framework, the paper describes the RME system, a middleware derived from Arcademis that adds a remote method invocation service to the CLDC configuration of Java 2 Micro Edition (J2ME). Copyright © 2006 John Wiley & Sons, Ltd.
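A hedged, simplified sketch of the configuration style such a framework enables: the middleware codes against abstract roles (here just a transport), and a configuration object decides which concrete component fills each role. The interfaces are illustrative, not Arcademis's actual classes.

```java
// Pluggable-middleware sketch: stubs depend only on abstract roles, so a
// deployment can swap in the transport that suits its network or device.
interface Transport {
    byte[] invoke(String endpoint, byte[] request) throws Exception;
}

class TcpTransport implements Transport {
    public byte[] invoke(String endpoint, byte[] request) { /* open socket, send, receive */ return new byte[0]; }
}

class HttpTransport implements Transport {
    public byte[] invoke(String endpoint, byte[] request) { /* POST the request over HTTP */ return new byte[0]; }
}

// A middleware instance is assembled from whichever components suit the
// target environment; applications only ever see the abstract roles.
class MiddlewareConfig {
    private final Transport transport;
    MiddlewareConfig(Transport transport) { this.transport = transport; }
    Transport transport() { return transport; }
}

class Stub {
    private final MiddlewareConfig config;
    Stub(MiddlewareConfig config) { this.config = config; }
    byte[] call(String endpoint, byte[] args) throws Exception {
        return config.transport().invoke(endpoint, args);   // invocation semantics could wrap this call
    }
}
```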

17.
This paper analyzes the characteristics of distributed virtual environment simulation and proposes a grid-based framework for managing the massive data involved in such simulations. The framework adopts a layered structure consisting, from the bottom up, of grid nodes, a high-performance communication system, a data storage and processing system, and a computation system. A prototype system based on this architecture is presented. Simulation results for the prototype show that the massive data management architecture meets the real-time, stability, and high-reliability requirements of virtual environment simulation.

18.
Over the last few decades, distributed systems have architecturally evolved. One recent evolutionary step is SOA. The SOA model is most prominently embodied in Web services, which provide software platforms for building applications as services. Web services utilize supportive capabilities such as security, reliability, and monitoring. These capabilities are typically provisioned as handlers, which incrementally add new features. Even though handlers are very important, how they are utilized is crucial for obtaining their potential benefits. Every attempt to support a service with an additional handler increases the chance of an overwhelmingly crowded handler chain. Moreover, a handler may become a bottleneck because of its comparably higher processing time. In this paper, we present the Distributed Handler Architecture to provide an efficient, scalable, and modular architecture. The performance and scalability benchmarks show that distributed and parallel handler execution is very promising for suitable handler configurations. The paper concludes with remarks on the fundamentals of a promising computing environment for Web service handlers. Copyright © 2012 John Wiley & Sons, Ltd.
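A minimal sketch of the handler-chain idea discussed above: each handler incrementally processes a message before the service logic runs, so a long or slow chain directly adds to per-request latency. All names are illustrative, not the paper's architecture.

```java
import java.util.List;

// Handler-chain sketch: cross-cutting capabilities (logging, security,
// reliability checks) are applied to a message in sequence before the
// actual service logic executes. A slow handler delays every request.
interface Handler {
    Message handle(Message message);
}

class Message {
    final String payload;
    Message(String payload) { this.payload = payload; }
}

class LoggingHandler implements Handler {
    public Message handle(Message m) {
        System.out.println("received: " + m.payload);
        return m;
    }
}

class AuthHandler implements Handler {
    public Message handle(Message m) {
        if (!m.payload.startsWith("token:")) throw new SecurityException("unauthenticated");
        return m;
    }
}

class HandlerChain {
    private final List<Handler> handlers;
    HandlerChain(List<Handler> handlers) { this.handlers = handlers; }

    Message process(Message m) {
        for (Handler h : handlers) m = h.handle(m);   // sequential execution
        return m;                                     // then dispatch to the service
    }
}
```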

19.
The inherent complex nature of current distributed computing architectures hinders the widespread adoption of these systems for mainstream use. In general, users have access to a highly heterogeneous set of compute resources, which may include clusters, grids, desktop grids, clouds, and other compute platforms. This heterogeneity is especially problematic when running parallel and distributed applications. Software is needed which easily combines as many resources as possible into one coherent computing platform. In this paper, we introduce Zorilla: peer‐to‐peer (P2P) middleware that creates a single distributed environment from any available set of compute resources. Zorilla imposes minimal requirements on the resource used, is platform independent, and does not rely on central components. In addition to providing functionality on bare resources, Zorilla can exploit locally available middleware. Zorilla explicitly supports distributed and parallel applications, and allows resources from multiple sites to cooperate in a single computation. Zorilla makes extensive use of both virtualization and P2P techniques. We will demonstrate how virtualization and P2P combine into a simple design, while enhancing functionality and ease of use. Together, these techniques bring our goal a step closer: transparent, easy use of resources, even on very heterogeneous distributed systems. Copyright © 2011 John Wiley & Sons, Ltd.

20.
In the maintenance of software applications, database evolution is a common difficulty. In object‐oriented databases, this process comprises schema evolution and instance adaptation. Both tasks usually require significant effort from programmers and database administrators. In this paper, we propose orthogonal persistence and aspect‐oriented programming to support semi‐transparent database evolution. A default mechanism for instance evolution is defined, but the user may provide modularized solutions using the aspect‐oriented paradigm. We present our framework, AOF4OOP, to test the feasibility of the proposed approach. This prototype allows programs to transparently access data stored under other versions of the database schema. We evaluate our framework against related approaches using two real applications, measuring the improvement in programmer productivity. Copyright © 2016 John Wiley & Sons, Ltd.
