20 similar documents found; search took 15 ms.
1.
Giuseppe Romanazzi, Peter K. Jimack, Christopher E. Goodyer 《Advances in Engineering Software》2011,42(5):247-258
We propose a model for describing and predicting the parallel performance of a broad class of parallel numerical software on distributed memory architectures. The purpose of this model is to allow reliable predictions to be made for the performance of the software on large numbers of processors of a given parallel system, by only benchmarking the code on small numbers of processors. Having described the methods used, and emphasized the simplicity of their implementation, the approach is tested on a range of engineering software applications that are built upon the use of multigrid algorithms. Despite their simplicity, the models are demonstrated to provide both accurate and robust predictions across a range of different parallel architectures, partitioning strategies and multigrid codes. In particular, the effectiveness of the predictive methodology is shown for a practical engineering software implementation of an elastohydrodynamic lubrication solver.
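The abstract does not give the model's functional form; the general approach can be sketched as fitting a hypothetical compute-plus-communication cost model to benchmark timings on small processor counts, then extrapolating to large ones. The data and the model form below are illustrative assumptions, not the paper's own:

```python
import numpy as np

# Hypothetical benchmark timings on small processor counts
procs = np.array([2, 4, 8, 16])
times = np.array([52.0, 27.5, 15.2, 9.1])  # seconds

# Assumed model: T(p) = a/p + b*log2(p) + c, i.e. compute work that
# scales as 1/p plus communication that grows logarithmically with p
A = np.column_stack([1.0 / procs, np.log2(procs), np.ones(len(procs))])
coef, *_ = np.linalg.lstsq(A, times, rcond=None)

def predict(p):
    """Predicted runtime on p processors from the fitted model."""
    return coef[0] / p + coef[1] * np.log2(p) + coef[2]
```

With the coefficients fitted from the four small-p runs, `predict(256)` extrapolates to a processor count never benchmarked, which is the essence of the methodology described above.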
2.
《Computers in Industry》2014,65(6):913-923
Knowledge sharing and reuse are important factors affecting the performance of supply chains. These factors can be amplified in information systems by supply chain management (SCM) ontologies. The literature provides various SCM ontologies for a range of industries and tasks. Although many studies claim benefits of SCM ontology, it is unclear to what degree the development of these ontologies is informed by research outcomes from the ontology engineering field. This field has produced a set of specific engineering techniques which are supposed to help in developing quality ontologies. This article reports a study that assesses the adoption of ontology engineering techniques in 16 SCM ontologies. Based on these findings, several implications for research as well as for SCM ontology adoption are articulated.
3.
Control of spatially distributed systems is a challenging problem because of their complex nature, nonlinearity, and generally high order. The lack of accurate and computationally efficient model-based techniques for large, spatially distributed systems makes such systems difficult to control. Agent-based control structures provide a powerful tool for managing distributed systems by utilizing (organizing) local and global information obtained from the system. A hierarchical, agent-based system with local and global controller agents is developed to control networks of interconnected chemical reactors (CSTRs). The global controller agent dynamically updates the local controller agents’ objectives as the reactor network conditions change. One challenge posed is control of the spatial distribution of autocatalytic species in a network of reactors hosting multiple species. The multi-agent control system is able to intelligently manipulate the network flow rates such that the desired spatial distribution of species is achieved. Furthermore, the robustness and flexibility of the agent-based control system are illustrated through examples of disturbance rejection and scalability with respect to the size of the network.
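The paper's controller design is not reproduced here; as a toy illustration of the hierarchy only, a hypothetical global agent reassigns local setpoints to enforce a desired spatial distribution while simple proportional local agents track them:

```python
class LocalAgent:
    """Hypothetical local controller agent: proportional setpoint tracking."""
    def __init__(self, conc=0.0, gain=0.5):
        self.conc, self.gain, self.setpoint = conc, gain, 0.0

    def step(self):
        # Move the local concentration toward the assigned objective
        self.conc += self.gain * (self.setpoint - self.conc)

class GlobalAgent:
    """Hypothetical global agent: splits a network-wide target across
    local agents according to the desired spatial distribution."""
    def __init__(self, agents, distribution):
        self.agents, self.distribution = agents, distribution

    def update(self, total):
        # Dynamically reassign local objectives as conditions change
        for agent, frac in zip(self.agents, self.distribution):
            agent.setpoint = frac * total

reactors = [LocalAgent() for _ in range(3)]
coordinator = GlobalAgent(reactors, distribution=[0.5, 0.3, 0.2])
coordinator.update(total=10.0)
for _ in range(20):          # local agents converge to their objectives
    for r in reactors:
        r.step()
```

After a few iterations the three reactor concentrations settle near 5.0, 3.0 and 2.0, i.e. the commanded spatial distribution of the total.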
4.
Jorge Calmon de Almeida Biolchini, Ana Candida Cruz Natali, Guilherme Horta Travassos 《Advanced Engineering Informatics》2007,21(2):133-151
The term systematic review is used to refer to a specific methodology of research, developed in order to gather and evaluate the available evidence pertaining to a focused topic. It represents a secondary study that depends on primary study results to be accomplished. Many primary studies have been conducted in the field of Software Engineering in recent years, bringing a steady improvement in methodology. However, in most cases software is built with technologies and processes for which developers have insufficient evidence to confirm their suitability, limits, qualities, costs, and inherent risks. Conducting systematic reviews in Software Engineering constitutes a major methodological tool for scientifically improving the validity of assertions that can be made in the field and, as a consequence, the reliability of the methods that are employed for developing software technologies and supporting software processes. This paper discusses the significance of experimental studies, particularly systematic reviews, and their use in supporting software processes. A template designed to support systematic reviews in Software Engineering is presented, and the development of ontologies to describe knowledge regarding such experimental studies is also introduced.
5.
Ontologies are the backbone of the Semantic Web, a semantic-aware version of the World Wide Web. The availability of large-scale, high-quality domain ontologies depends on effective and usable methodologies aimed at supporting the crucial process of ontology building. Ontology building exhibits a structural and logical complexity that is comparable to the production of software artefacts. This paper proposes an ontology building methodology that capitalizes on the extensive experience embodied in a widely used standard in software engineering: the Unified Software Development Process, or Unified Process (UP). In particular, we propose UP for ONtology (UPON) building, a methodology for ontology building derived from the UP. UPON is presented with the support of a practical example in the eBusiness domain. A comparative evaluation with other methodologies and the results of its adoption in the context of the Athena EU Integrated Project are also discussed.
6.
Victor Korotkikh, Galina Korotkikh 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2008,12(2):201-206
Engineering of distributed computing systems requires understanding of principles of complex systems, which have not yet been identified. To address this situation we use the concept of structural complexity and present results of computational experiments suggesting the possibility of a general optimality condition for complex systems. The optimality condition introduces the structural complexity of a system as the key to its optimization.
7.
Distributed Mutual Exclusion algorithms have mainly been compared using the number of messages exchanged per critical section execution. In such algorithms, no attention has been paid to the serialization order of the requests; indeed, they adopt the FCFS discipline. Conversely, the insertion of priority serialization disciplines, such as Short-Job-First, Head-Of-Line, Shortest-Remaining-Job-First etc., can be useful in many applications to optimize some performance indices. However, such priority disciplines are prone to starvation. The goal of this paper is to investigate and evaluate the impact of the insertion of a priority discipline in Maekawa-type algorithms. Priority serialization disciplines will be inserted by means of a gated batch mechanism which avoids starvation. In a distributed algorithm, such a mechanism needs synchronization among the processes. In order to highlight the usefulness of the priority-based serialization discipline, we show how it can be used to improve the average response time compared to the FCFS discipline. The gated batch approach exhibits other advantages: algorithms are inherently deadlock-free and messages do not need to piggyback timestamps. We also show that, under heavy demand, algorithms using gated batches exchange fewer messages than Maekawa-type algorithms per critical section execution.
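A minimal single-process sketch of the gated batch idea (not the distributed Maekawa-type protocol itself): requests that arrive while a batch is being served wait for the next gate, so within each batch the priority discipline applies, yet a stream of later high-priority arrivals cannot starve an already-admitted request. Names and priorities are hypothetical:

```python
import heapq
from collections import deque

class GatedBatchScheduler:
    """Serve critical-section requests in priority order within a batch;
    requests arriving after the gate closes wait for the next batch."""
    def __init__(self):
        self.waiting = deque()   # arrivals since the gate last closed
        self.batch = []          # current batch, kept as a priority heap

    def request(self, process, priority):
        self.waiting.append((priority, process))

    def close_gate(self):
        # Admit all currently waiting requests into the batch at once
        while self.waiting:
            heapq.heappush(self.batch, self.waiting.popleft())

    def next_cs(self):
        """Grant the critical section to the next process (lower number
        = higher priority); open a new batch when the current one empties."""
        if not self.batch:
            self.close_gate()
        if self.batch:
            return heapq.heappop(self.batch)[1]
        return None

s = GatedBatchScheduler()
s.request("A", 3)
s.request("B", 1)
order = [s.next_cs()]        # gate closes: batch = {A, B}; B served first
s.request("C", 0)            # arrives mid-batch: must wait for next gate
order += [s.next_cs(), s.next_cs()]
```

Here A and B both finish before the later, higher-priority C, which is how the gated batch avoids the starvation that a pure priority discipline would allow.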
Roberto Baldoni was born in Rome on February 1, 1965. He received the Laurea degree in electronic engineering in 1990 from the University of Rome La Sapienza and the Ph.D. degree in Computer Science from the University of Rome La Sapienza in 1994. Currently, he is a researcher in computer science at IRISA, Rennes (France). His research interests include operating systems, distributed algorithms, network protocols and real-time multimedia applications.
Bruno Ciciani received the Laurea degree in electronic engineering in 1980 from the University of Rome La Sapienza. From 1983 to 1991 he was a researcher at the University of Rome Tor Vergata. He is currently full professor in Computer Science at the University of Rome La Sapienza. His research activities include distributed computer systems, fault-tolerant computing, languages for parallel processing, and computer system performance and reliability evaluation. He has published in IEEE Trans. on Computers, IEEE Trans. on Knowledge and Data Engineering, IEEE Trans. on Software Engineering and IEEE Trans. on Reliability. He is the author of a book titled Manufacturing Yield Evaluation of VLSI/WSI Systems, to be published by IEEE Computer Society Press. This research was supported in part by the Consiglio Nazionale delle Ricerche under grant 93.02294.CT12. This author is also supported by a grant of the Human Capital and Mobility project of the European Community under contract No. 3702 CABERNET.
8.
9.
Zafeirios C. Papazachos, Helen D. Karatza 《Journal of Systems and Software》2010,83(8):1346-1354
Distributed systems deliver a cost-effective and scalable solution to increasingly performance-intensive applications by utilizing several shared resources. Gang scheduling is considered to be an efficient time-space sharing scheduling algorithm for parallel and distributed systems. In this paper we examine the performance of scheduling strategies for jobs that are bags of independent gangs in a heterogeneous system. A simulation model is used to evaluate the performance of bag-of-gangs scheduling in the presence of high-priority jobs when migrations are implemented. The simulation results reveal the significant role of the implemented migration scheme as a load balancing factor in a heterogeneous environment. Another significant aspect of implementing migrations presented in this paper is the reduction of the fragmentation caused in the schedule by gang-scheduled jobs and the alleviation of the performance impact of the high-priority jobs.
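A gang needs all of its processors in the same time slot, which is why gang scheduling fragments the schedule. A minimal first-fit sketch (gang sizes hypothetical; migration and priorities not modeled) makes the slot-packing visible:

```python
def gang_schedule(gangs, total_procs=8):
    """First-fit packing of gangs into time slots; each gang must get
    all of its processors simultaneously within one slot."""
    slots = []        # free processors remaining in each time slot
    placement = {}    # gang name -> assigned time slot
    for name, size in gangs:
        for i, free in enumerate(slots):
            if free >= size:
                slots[i] -= size
                placement[name] = i
                break
        else:
            slots.append(total_procs - size)  # open a new time slot
            placement[name] = len(slots) - 1
    return placement, slots

placement, slots = gang_schedule([("A", 5), ("B", 4), ("C", 3), ("D", 4)])
```

In this arrival order the two slots end up fully packed (A+C and B+D); a less fortunate order leaves idle processors in every slot, and reclaiming that idle capacity is exactly what a migration scheme like the one evaluated above can do.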
10.
Marinho P. Barcellos, Rodolfo S. Antunes, Hisham H. Muhammad, Ruthiano S. Munaretti 《Journal of Network and Computer Applications》2012,35(1):328-339
Simulation has been of paramount importance to the development of novel Internet protocols. Such an approach typically focuses on one of three domains: wireless and other link-layer technologies, routing protocols, and transport-layer mechanisms and protocols. Existing techniques can tackle simulation at layers 2, 3 and 4 of the TCP/IP architecture well, but are not flexible enough to deal appropriately with application-layer protocols. These require simulators that support the modeling of networks and components at different levels of abstraction. Simmcast is an object-oriented framework that provides the flexibility necessary for application-layer protocol research. A simulation can be developed by simply extending building blocks that closely resemble components of a real network, such as hosts, links and routers. The internal complexity of these components, however, is hidden from the user, who can thus focus on implementing the desired protocol characteristics. This paper describes the flexible simulation architecture proposed and instantiated through Simmcast, and draws lessons from our experience in designing, implementing and deploying it. We also present framework instances used to evaluate application-layer protocols, exemplifying how different kinds of simulations can be developed with Simmcast.
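This is not Simmcast's actual API; a tiny sketch of the building-block style it describes, in which the user subclasses a host and overrides only the protocol logic while the discrete-event engine stays hidden:

```python
import heapq
import itertools

class Simulator:
    """Minimal discrete-event engine, hidden from protocol authors."""
    def __init__(self):
        self.now, self.events = 0.0, []
        self._seq = itertools.count()  # tie-breaker for equal timestamps

    def schedule(self, delay, fn, *args):
        heapq.heappush(self.events, (self.now + delay, next(self._seq), fn, args))

    def run(self):
        while self.events:
            self.now, _, fn, args = heapq.heappop(self.events)
            fn(*args)

class Host:
    """Building block: delivery over a link is just a delayed event."""
    def __init__(self, sim, name):
        self.sim, self.name = sim, name

    def send(self, dest, msg, link_delay):
        self.sim.schedule(link_delay, dest.receive, self, msg)

    def receive(self, src, msg):
        pass  # protocol behaviour goes in subclasses

class PingHost(Host):
    """User-defined protocol: answer every ping with a pong."""
    def __init__(self, sim, name):
        super().__init__(sim, name)
        self.log = []

    def receive(self, src, msg):
        self.log.append((self.sim.now, msg))
        if msg == "ping":
            self.send(src, "pong", link_delay=0.1)

sim = Simulator()
a, b = PingHost(sim, "a"), PingHost(sim, "b")
a.send(b, "ping", link_delay=0.1)
sim.run()
```

Only `PingHost.receive` is protocol-specific; the event queue, clocks and link delays stay inside the framework, which is the division of labour the abstract attributes to Simmcast.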
11.
The fast-emerging and continuously evolving areas of the Semantic Web and Knowledge Management make the incorporation of ontology engineering tasks in knowledge-empowered organizations and in the World Wide Web more necessary than ever. In such environments, the development and evolution of ontologies must be seen as a dynamic process that has to be supported through the entire ontology life cycle, resulting in living ontologies. The aim of this paper is to present the Human-Centered Ontology Engineering Methodology (HCOME) for the development and evaluation of living ontologies in the context of communities of knowledge workers. The methodology aims to empower knowledge workers to continuously manage their formal conceptualizations in their day-to-day activities and to shape their information space by being actively involved in the ontology life cycle. The paper also demonstrates the Human-Centered ONtology Engineering Environment, HCONE, which can effectively support this methodology.
George VOUROS (B.Sc., Ph.D.) holds a B.Sc. in Mathematics and a Ph.D. in Artificial Intelligence, both from the University of Athens, Greece. Currently he is a Professor and Head of the Department of Information and Communication Systems Engineering, University of the Aegean, Greece, Director of the AI Lab, and head of the Intelligent and Cooperative Systems Group (InCoSys). He has done research in the areas of Expert Systems, Knowledge Management, Collaborative Systems, Ontologies, and Agent-based Systems. His published scientific work includes more than 80 book chapters, journal papers, and national and international conference papers on the above-mentioned themes. He has served as program chair, and as chair and member of organizing committees, of national and international conferences on related topics.
Konstantinos KOTIS (B.Sc., Ph.D.) holds a B.Sc. in Computation from the University of Manchester, UK (1995), and a Ph.D. in Information Management from the University of the Aegean, Greece (May 2005). Currently, he is a member of the Intelligent and Cooperative Systems Group (InCoSys) and director of the Information Technology Department of the Prefecture of Samos, Greece. His research and published work concern Knowledge Management, Ontology Engineering, and the Semantic Web. He has lectured in several IT seminars and has served as a member of program committees of international workshops.
12.
Bridging metamodels and ontologies in software engineering (cited 3 times: 0 self-citations, 3 by others)
B. Henderson-Sellers 《Journal of Systems and Software》2011,84(2):301-313
13.
Sheng Wan 《Automatica》2002,38(1):33-46
The proper measure of closed-loop performance variation in the presence of model-plant mismatch is discussed in this paper. A generalized closed-loop error transfer function, which is a special representation of the dual Youla parameter and has a close relationship with the pointwise ν-gap metric, is proposed as a suitable means of representing closed-loop performance variation in the case of plant perturbation, and the closed-loop performance variation measure is accordingly defined as its maximum singular value, frequency by frequency. It is shown that this measure is essential and informative in characterizing closed-loop performance variation. This measure is also shown to be readily applicable to on-line closed-loop performance assessment or monitoring, even without an explicit model of the plant. Its variant, defined as the η-function, which characterizes the relative performance variation as well as the generalized stability margin variation with respect to the nominal plant, is also discussed.
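Given frequency-response samples of the generalized closed-loop error transfer function, the measure itself is straightforward to evaluate. A sketch with a toy first-order example; the function E(jω) = 1/(1 + jω) is hypothetical, chosen only because its maximum singular value is known in closed form:

```python
import numpy as np

def performance_variation(E_samples):
    """Maximum singular value of the error frequency response,
    evaluated frequency by frequency."""
    return np.array([np.linalg.svd(E, compute_uv=False)[0] for E in E_samples])

# Toy 1x1 example: E(jw) = 1/(1 + jw), so sigma_max = 1/sqrt(1 + w^2)
freqs = np.array([0.0, 1.0, 10.0])
E_samples = [np.array([[1.0 / (1.0 + 1j * w)]]) for w in freqs]
sigma = performance_variation(E_samples)
```

For a MIMO plant each sample is an ny-by-nu matrix and the same SVD call applies; plotting `sigma` against `freqs` gives the frequency-by-frequency variation profile the abstract describes.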
14.
Internet-based distributed systems enable globally scattered resources to be collectively pooled and used in a cooperative manner to achieve unprecedented petascale supercomputing capabilities. Numerous resource discovery approaches have been proposed to help achieve this goal. To report or discover a multi-attribute resource, most approaches use multiple messages, one message per attribute, leading to high overhead in memory consumption, node communication, and the subsequent merging operation. Another approach can report and discover a multi-attribute resource using one query by reducing the multiple attributes to a single index, but it is not practically effective in an environment with a large number of different resource attributes. Furthermore, few approaches are able to locate resources geographically close to the requesters, which is critical to system performance. This paper presents a P2P-based intelligent resource discovery (PIRD) mechanism that weaves all attributes into a set of indices using locality sensitive hashing, and then maps the indices to a structured P2P overlay. PIRD can discover resources geographically close to requesters by relying on a hierarchical P2P structure. It significantly reduces overhead and improves search efficiency and effectiveness in resource discovery. It further incorporates the Lempel–Ziv–Welch algorithm to compress attribute information for higher efficiency. Theoretical analysis and simulation results demonstrate the efficiency of PIRD in comparison with other approaches: it dramatically reduces overhead and yields significant improvements in the efficiency of resource discovery.
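PIRD's exact index construction is not detailed in this abstract; the underlying idea of collapsing a multi-attribute description into a single index with locality sensitive hashing can be sketched with sign-random-projection LSH. The attribute set and parameters below are hypothetical:

```python
import random

random.seed(0)
DIM, BITS = 4, 8  # e.g. CPU GHz, memory GB, disk GB, bandwidth Mbps
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def lsh_index(attrs):
    """Map one multi-attribute vector to one integer index.  Similar
    vectors fall on the same side of most random hyperplanes, so they
    collide (or land on nearby indices) with high probability."""
    index = 0
    for i, plane in enumerate(planes):
        dot = sum(p * a for p, a in zip(plane, attrs))
        if dot >= 0:
            index |= 1 << i
    return index

idx = lsh_index([3.2, 16, 500, 1000])
```

One such index replaces the per-attribute messages criticized above; in PIRD the indices are then mapped onto a structured P2P overlay for lookup, which this sketch does not model.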
15.
16.
W. Eric Wong, T.H. Tse, Victor R. Basili 《Journal of Systems and Software》2009,82(8):1370-1373
This paper summarizes a survey of publications in the field of systems and software engineering from 2002 to 2006. The survey is an ongoing, annual event that identifies the top 15 scholars and institutions over a 5-year period. The rankings are calculated based on the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea, and the top-ranked scholar is Magne Jørgensen of Simula Research Laboratory, Norway.
17.
Rakesh Kushwaha 《Performance Evaluation》1993,18(3):189-204
This paper describes an accurate and efficient method to model and predict the performance of distributed/parallel systems. Various performance measures, such as the expected user response time, the system throughput and the average server utilization, can be easily estimated using this method. The methodology is based on known product form queueing network methods, with some additional approximations. The method is illustrated by evaluating performance of a multi-client multi-server distributed system. A system model is constructed and mapped to a probabilistic queueing network model which is used to predict its behavior. The effects of user think time and various design parameters on the performance of the system are investigated by both the analytical method and computer simulation. The accuracy of the former is verified. The methodology is applied to identify the bottleneck server and to establish proper balance between clients and servers in distributed/parallel systems.
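The paper's specific approximations are not reproduced here; the flavour of product-form queueing analysis for a multi-client multi-server system can be sketched with exact Mean Value Analysis of a closed network (think time and service demands hypothetical):

```python
def mva(n_clients, think_time, service_demands):
    """Exact Mean Value Analysis for a closed product-form network:
    n_clients cycle between thinking and visiting each server."""
    q = [0.0] * len(service_demands)        # mean queue length per server
    R = X = 0.0
    for n in range(1, n_clients + 1):
        r = [d * (1 + qk) for d, qk in zip(service_demands, q)]
        R = sum(r)                           # expected response time
        X = n / (R + think_time)             # system throughput
        q = [X * rk for rk in r]             # Little's law at each server
    return R, X

# Hypothetical system: 20 clients, 5 s think time, two servers with
# per-visit service demands of 0.05 s and 0.04 s
R, X = mva(n_clients=20, think_time=5.0, service_demands=[0.05, 0.04])
```

The per-server utilizations `X * demand` identify the bottleneck server directly (here the 0.05 s server), which is the kind of client/server balancing analysis the abstract describes.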
18.
Interoperation among agent-based information systems through a communication acts ontology (cited 1 time: 0 self-citations, 1 by others)
Jesús Bermúdez, Alfredo Goñi, Arantza Illarramendi, Miren I. Bagüés 《Information Systems》2007,32(8):1121-1144
Information technology is evolving from focusing on local systems to encompassing a more global interaction among multiple systems in enterprises and communities. On the one hand, new advances in the area of network communications have facilitated to some degree the intercommunication among heterogeneous information systems located at different places. However, what is still missing is the possibility of a real and efficient interoperation among those systems in an open environment such as the one favoured by the Internet. On the other hand, agent technology provides platforms where cooperative work of information systems is conceivable, because software agents work on behalf of these information systems. But nowadays that cooperation is in general restricted and requires laborious a priori preparation. In this paper we present the features of a formal ontology that can play a relevant role in the development of a new kind of information systems interoperation framework. The ontology includes classes and properties for describing communication acts among agents. We claim that the communication acts ontology provides interoperability support due to the recognition of communication acts from one Agent Communication Language (acl) as instances of communication acts in another acl. Sometimes the comprehension will not be complete, but partial comprehension of the communication may be useful and preferable to the “not understood” answer given nowadays. Terms of the ontology are described as classes or properties using the Web Ontology Language owl.
19.
P. Morillo, A. Bierbaum, P. Hartling, M. Fernández, C. Cruz-Neira 《Journal of Parallel and Distributed Computing》2008
Cluster computing has become an essential issue for designing immersive visualization systems. This paradigm employs scalable clusters of commodity computers with much lower costs than would be possible with the high-end, shared memory computers that have been traditionally used for virtual reality purposes. This change in the design of virtual reality systems has caused some development environments oriented toward shared memory computing to require modifications to their internal architectures in order to support cluster computing. This is the case of VR Juggler, which is considered one of the most important virtual reality application development frameworks based on open source code.
20.
For the past decades, computer engineers have focused on building high-performance, large-scale computer systems at low cost. One example is a distributed-memory computer system such as a cluster, where fast processing nodes using commodity processors are connected through a high-speed network. But it is not easy to develop applications on such a system, because the programmer must consider all data and control dependences between processes and program them explicitly. To alleviate this problem the distributed virtual shared-memory (DVSM) system has been proposed. It is well known that the performance of a DVSM system depends heavily on the network's performance and programming semantics, and currently its performance is very limited on a conventional network. Recently many advanced hardware-based interconnection technologies have been introduced; one of them is the InfiniBand Architecture (IBA), which supports shared-memory programming semantics by means of remote direct-memory access (RDMA) and atomic operations. In this paper, we present the implementation of our InfiniBand-based DVSM system and analyze the performance of the SPEC OMP benchmarks in detail by comparing with a DVSM based on the traditional network architecture and with a hardware shared-memory multiprocessor (SMP) system. As the experimental results show, our DVSM system using the full features of the IBA can improve performance significantly over the IPoIB-based traditional system on the IBA, and furthermore the performance of one application on the IBA-based DVSM system is better than on the hardware SMP system.