Similar Documents
20 similar documents found.
1.
We present a novel framework for automatic inference of efficient synchronization in concurrent programs, a task known to be difficult and error-prone when done manually. Our framework is based on abstract interpretation and can infer synchronization for infinite state programs. Given a program, a specification, and an abstraction, we infer synchronization that avoids all (abstract) interleavings that may violate the specification, but permits as many valid interleavings as possible. Combined with abstraction refinement, our framework can be viewed as a new approach for verification where both the program and the abstraction can be modified on-the-fly during the verification process. The ability to modify the program, and not only the abstraction, allows us to remove program interleavings not only when they are known to be invalid, but also when they cannot be verified using the given abstraction. We implemented a prototype of our approach using numerical abstractions and applied it to verify several example programs.

2.
The Transactional Memory (TM) paradigm aims at simplifying the development of concurrent applications by means of the familiar abstraction of the atomic transaction. After a decade of intense research, hardware implementations of TM have recently entered the domain of mainstream computing thanks to Intel's decision to integrate TM support, codenamed RTM (Restricted Transactional Memory), in its latest generation of processors. In this work we shed light on an issue with great impact on the performance of Intel's RTM: the correct tuning of the logic that regulates how to cope with failed hardware transactions. We show that the optimal tuning of this policy is strongly workload dependent, and that the relative difference in performance among the various possible configurations can be remarkable (up to 10x slow-downs). We address this issue by introducing a simple and effective approach that identifies the optimal RTM configuration at run-time via lightweight reinforcement learning techniques. The proposed technique requires no off-line sampling of the application, and can be applied whether a single global lock or a software TM implementation is used as the fall-back synchronization mechanism. We propose and evaluate different designs for the proposed self-tuning mechanisms, which we integrated with GCC in order to achieve full transparency for programmers. Our experimental study, based on standard TM benchmarks, demonstrates average gains of 60% over any static approach while remaining within 5% of the performance of manually identified optimal configurations.
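
As an informal illustration of the run-time self-tuning idea (not the authors' actual mechanism or its GCC integration), a lightweight learner can pick the retry budget that maximizes observed throughput; the candidate budgets, reward signal and epsilon value below are hypothetical.

import random

class RetryPolicyTuner:
    """Epsilon-greedy selection of an RTM retry budget (a minimal sketch;
    the paper's actual mechanism and its GCC integration differ)."""

    def __init__(self, candidate_budgets=(1, 2, 4, 8, 16), epsilon=0.1):
        self.budgets = list(candidate_budgets)
        self.epsilon = epsilon
        # Running average throughput observed for each candidate budget.
        self.value = {b: 0.0 for b in self.budgets}
        self.count = {b: 0 for b in self.budgets}

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.budgets)                   # explore
        return max(self.budgets, key=lambda b: self.value[b])    # exploit

    def feedback(self, budget, commits_per_interval):
        # Incremental mean update with the reward observed at run time.
        self.count[budget] += 1
        n = self.count[budget]
        self.value[budget] += (commits_per_interval - self.value[budget]) / n


if __name__ == "__main__":
    tuner = RetryPolicyTuner()
    for _ in range(200):
        budget = tuner.choose()
        # Placeholder workload: pretend larger budgets help up to a point.
        reward = min(budget, 6) + random.random()
        tuner.feedback(budget, reward)
    print("selected retry budget:", tuner.choose())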

3.
Network invariants for real-time systems
We extend the approach of model checking parameterized networks of processes by means of network invariants to the setting of real-time systems. We introduce timed transition structures (which are similar in spirit to timed automata) and define a notion of abstraction that is safe with respect to linear temporal properties. We strengthen the notion of abstraction to allow a finite system, then called a network invariant, to be an abstraction of networks of real-time systems. In general, the problem of checking abstraction of real-time systems is undecidable. Hence, we provide sufficient criteria, which can be checked automatically, to conclude that one system is an abstraction of a concrete one. Our method is based on timed superposition and discretization of timed systems. We exemplify our approach by proving mutual exclusion of a simple protocol inspired by Fischer's protocol, using the model checker TLV.

4.
Runtime software architectures (RSA) are architecture-level, dynamic representations of running software systems, which help monitor and adapt the systems at a high abstraction level. The key issue in supporting RSA is to maintain the causal connection between the architecture and the system, ensuring that the architecture represents the current system and that modifications on the architecture cause the proper system changes. The main challenge here is the abstraction gap between the architecture and the system. In this paper, we investigate the synchronization mechanism between architecture configurations and system states for maintaining the causal connection. We identify four required properties for such synchronization, and provide a generic solution satisfying these properties. Specifically, we utilize bidirectional transformation to bridge the abstraction gap between architecture and system, and design an algorithm based on it which addresses issues such as conflicts between architecture and system changes and exceptions during system manipulation. We provide a generative tool-set that helps developers implement this approach on a wide class of systems. We have successfully applied our approach to the JOnAS JEE system to support it with a C2-styled runtime software architecture, as well as to several other cases involving practical systems and typical architecture models.

5.
This paper presents an ontology-based approach for the design of a collaborative business process model (CBP). This CBP is considered as a specification of needs in order to build a collaboration information system (CIS) for a network of organizations. The study is a part of a model-driven engineering approach of the CIS in a specific enterprise interoperability framework that will be summarised. An adaptation of the Business Process Modelling Notation (BPMN) is used to represent the CBP model. We develop a knowledge-based system (KbS) which is composed of three main parts: knowledge gathering, knowledge representation and reasoning, and collaborative business process modelling. The first part starts from a high abstraction level where knowledge from business partners is captured. A collaboration ontology is defined in order to provide a structure to store and use the knowledge captured. In parallel, we try to reuse generic existing knowledge about business processes from the MIT Process Handbook repository. This results in a collaboration process ontology that is also described. A set of rules is defined in order to extract knowledge about fragments of the CBP model from the two previous ontologies. These fragments are finally assembled in the third part of the KbS. A prototype of the KbS has been developed in order to implement and support this approach. The prototype is a computer-aided design tool of the CBP. In this paper, we will present the theoretical aspects of each part of this KbS as well as the tools that we developed and used in order to support its functionalities.

6.
We present a theoretical framework and a case study for reusing the same conceptual and computational methodology for both temporal abstraction and linear (unidimensional) space abstraction, in a domain (evaluation of traffic-control actions) significantly different from the one (clinical medicine) in which the method was originally used. The method, known as knowledge-based temporal abstraction, abstracts high-level concepts and patterns from time-stamped raw data using a formal theory of domain-specific temporal-abstraction knowledge. We applied this method, originally used to interpret time-oriented clinical data, to the domain of traffic control, in which the monitoring task requires linear pattern matching along both space and time. First, we reused the method for the creation of unidimensional spatial abstractions over highways, given sensor measurements along each highway measured at the same time point. Second, we reused the method to create temporal abstractions of the traffic behaviour, for the same space segments, but during consecutive time points. We defined the corresponding temporal-abstraction and spatial-abstraction domain-specific knowledge. Our results suggest that (1) the knowledge-based temporal-abstraction method is reusable over time and unidimensional space as well as over significantly different domains; (2) the method can be generalised into a knowledge-based linear-abstraction method, which solves tasks requiring abstraction of data along any linear distance measure; and (3) a spatiotemporal-abstraction method can be assembled from two copies of the generalised method and a spatial-decomposition mechanism, and is applicable to tasks requiring abstraction of time-oriented data into meaningful spatiotemporal patterns over a linear, decomposable space, such as traffic over a set of highways.
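
As a minimal sketch of the state-abstraction step only (the full knowledge-based temporal-abstraction method also covers interpolation, temporal patterns and domain ontologies), the snippet below merges time-stamped readings that map to the same abstract label into intervals; the traffic thresholds and readings are hypothetical.

from dataclasses import dataclass

@dataclass
class Interval:
    start: float
    end: float
    label: str

def state_abstraction(samples, classify):
    """Turn time-stamped raw values into maximal intervals that share one
    abstract label (a simplified state-abstraction step only)."""
    intervals = []
    for t, value in sorted(samples):
        label = classify(value)
        if intervals and intervals[-1].label == label:
            intervals[-1].end = t            # extend the current interval
        else:
            intervals.append(Interval(t, t, label))
    return intervals

# Hypothetical traffic-density readings (time in minutes, vehicles/km).
readings = [(0, 12), (5, 14), (10, 55), (15, 60), (20, 18)]
density = lambda v: "CONGESTED" if v > 40 else "FREE_FLOW"
for iv in state_abstraction(readings, density):
    print(f"[{iv.start:>2}-{iv.end:>2} min] {iv.label}")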

7.
Multiprocessor embedded systems integrate diverse dedicated processing units to handle high-performance applications such as multimedia and network processing. However, lock-based synchronization limits the efficiency of such heterogeneous concurrent systems. Hardware Transactional Memory (HTM) is a promising approach to creating an abstraction layer for multi-threaded programming. However, HTM performance is application-specific and determined by version and conflict management configurations. Most previous HTM implementations for embedded systems in the literature were built on fixed version management, which results in significant performance loss when transaction behaviour changes. In this paper, we propose an HTM targeted at embedded applications which is able to adapt its version management based on application behaviour at runtime. It is prototyped and analysed on an Altera Cyclone IV platform. Random requests at different contention levels and different transaction sizes are used to verify the performance of the proposed HTM. Based on our experiments, lazy version management obtains up to 12.82% speed-up compared to eager version management at a high contention level. Meanwhile, eager version management obtains up to 37.84% speed-up compared to lazy version management at low contention. The adaptive mechanism is able to switch configuration at runtime based on application behaviour for maximum performance.
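
A software-level sketch of the adaptive idea (the paper's mechanism is implemented in hardware, and the threshold and window below are hypothetical): monitor the abort ratio over a window of transactions and pick lazy version management under high contention, eager otherwise.

def choose_version_management(commits, aborts, high_contention=0.3):
    """Pick a version-management mode from the observed abort ratio.

    A minimal software sketch of the adaptive idea only; the paper's
    hardware policy and thresholds are not reproduced here.
    """
    total = commits + aborts
    abort_ratio = aborts / total if total else 0.0
    # Lazy VM buffers updates and tends to win under high contention;
    # eager VM writes in place and is cheaper when conflicts are rare.
    return "lazy" if abort_ratio >= high_contention else "eager"

window = {"commits": 180, "aborts": 95}
print(choose_version_management(window["commits"], window["aborts"]))  # -> lazy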

8.
We consider the verification problem of a class of infinite-state systems called wPAD. These systems can be used to model programs with (possibly recursive) procedure calls and dynamic creation of parallel processes. They correspond to PAD models extended with an acyclic finite-state control unit, where PAD models can be seen as combinations of prefix rewrite systems (pushdown systems) with context-free multiset rewrite systems (synchronization-free Petri nets). Recently, we have presented symbolic reachability techniques for the class of PAD based on the use of a class of unranked tree automata. In this paper, we generalize our previous work to the class wPAD, which is strictly larger than PAD. This generalization brings a positive answer to an open question on the decidability of the model-checking problem for wPAD against EF logic. Moreover, we show how symbolic reachability analysis of wPAD can be used in (under-)approximate analysis of Synchronized PAD, a (Turing) powerful model for multithreaded programs (with unrestricted synchronization between parallel processes). This leads to a pragmatic approach for detecting the presence of erroneous behaviors in these models based on the bounded-reachability paradigm, where the bound considered here is the number of synchronization actions.

9.
Collaborative knowledge base (KB) authoring environments are critical for the construction of high-performance KBs. Such environments must support rapid construction of KBs by a collaborative effort of teams of knowledge engineers through reuse of existing knowledge and software components. They should support the manipulation of knowledge by diverse problem-solving engines even if that knowledge is encoded in different languages and by different researchers. They should support large KBs and provide a scalable and interoperable development infrastructure. In this paper, we present an environment that satisfies many of these goals. We present an architecture for scalable frame representation systems (FRSs). The Generic Frame Protocol (GFP) provides infrastructure for reuse of software components. It is a procedural interface to frame representation systems that provides a common means of accessing and modifying frame KBs. The Generic KB Editor (GKB-EDITOR) provides graphical KB browsing, editing, and comprehension services for large KBs. Scalability of loading and saving time is provided by a storage system (PERK) which submerges a database management system in an FRS. Multi-user access is controlled through a collaboration subsystem that uses a novel optimistic concurrency control algorithm. All the results have been implemented and tested in the development of several real KBs.

10.
Human experts tend to introduce intermediate terms when giving their explanations. The expert's explanation of such terms is operational for the context that triggered the explanation; however, term definitions often remain incomplete. Further, the expert's (re)use of these terms is hierarchical (similar to natural language). In this paper, we argue that a hierarchical incremental knowledge acquisition (KA) process that captures the expert's terms and operationalizes them while they are still incompletely defined makes the KA task more effective. Towards this, we present our knowledge representation formalism Nested Ripple Down Rules (NRDR), which is a substantial extension of the (Multiple Classification) Ripple Down Rules (RDR) KA framework. The incremental KA process with NRDR as the underlying knowledge representation has confirmation holistic features. This allows simultaneous incremental modelling and KA and eases the knowledge base (KB) development process. Our NRDR formalism preserves the strength of incremental refinement methods, that is, the ease of maintenance of the KB. It also addresses some of their shortcomings: repetition, lack of explicit modelling, and readability. KBs developed with NRDR describe an explicit model of the domain. This greatly enhances the reusability of the acquired knowledge. This paper also presents a theoretical framework for analysing the structure of RDR in general and NRDR in particular. Using this framework, we analyse the conditions under which RDR converges towards the target KB. We discuss the maintenance problems of NRDR as a function of this convergence. Further, we analyse the conditions under which NRDR offers an effective approach for domain modelling. We show that the maintenance of NRDR requires similar effort to maintaining RDR for most of the KB development cycle. We show that when an NRDR KB shows an increase in maintenance requirements in comparison with RDR during its development, this added requirement can be handled automatically using stored, previously seen cases.
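
For orientation, a bare-bones single-classification Ripple Down Rules node looks roughly like the sketch below (the paper's Nested RDR additionally lets conclusions be expert-defined intermediate concepts that are themselves RDR structures); the medical conditions and thresholds are made up for illustration.

class RDRRule:
    """Single-classification Ripple Down Rules node (a minimal sketch only,
    not the paper's NRDR formalism)."""

    def __init__(self, condition, conclusion, cornerstone=None):
        self.condition = condition      # predicate over a case (a dict)
        self.conclusion = conclusion
        self.cornerstone = cornerstone  # case that justified adding this rule
        self.except_child = None        # tried when this rule fires
        self.else_child = None          # tried when this rule does not fire

    def classify(self, case):
        if self.condition(case):
            refined = self.except_child.classify(case) if self.except_child else None
            return refined if refined is not None else self.conclusion
        return self.else_child.classify(case) if self.else_child else None


# Default rule, then an exception added incrementally for a new case.
root = RDRRule(lambda c: True, "normal")
root.except_child = RDRRule(lambda c: c["temp"] > 38.0, "fever",
                            cornerstone={"temp": 39.2})

print(root.classify({"temp": 36.8}))  # -> normal
print(root.classify({"temp": 39.5}))  # -> fever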

11.
Data synchronization, as a means of keeping the database data of every node in a distributed environment consistent, is a key technology in distributed environments. Targeting Oracle multi-database application environments, this paper proposes an enterprise-level database synchronization technique modelled on the "facade pattern". It explains the basic working principles of distributed database transaction synchronization in terms of three elements: database links, synonyms, and synchronized transaction commit. Its use in an actual development project demonstrates the feasibility and good functional characteristics of facade-pattern-based distributed database transactions.
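
A highly simplified sketch of the facade idea in application code (the original work relies on Oracle-side features such as database links and synonyms; this non-two-phase version is illustrative only, and the table and data are hypothetical):

class SyncFacade:
    """Facade that applies one logical update to every node database and
    rolls all nodes back if any of them fails (a simplified sketch; the
    paper's approach uses Oracle database links, synonyms and distributed
    transaction commit on the server side)."""

    def __init__(self, connections):
        self.connections = connections  # any DB-API 2.0 connections

    def execute_everywhere(self, sql, params=()):
        try:
            for conn in self.connections:
                conn.cursor().execute(sql, params)
            for conn in self.connections:
                conn.commit()           # commit on every node
        except Exception:
            for conn in self.connections:
                conn.rollback()         # keep nodes consistent on failure
            raise


if __name__ == "__main__":
    import sqlite3  # stand-in for Oracle connections in this sketch
    nodes = [sqlite3.connect(":memory:") for _ in range(3)]
    for n in nodes:
        n.execute("CREATE TABLE orders (id INTEGER, qty INTEGER)")
    SyncFacade(nodes).execute_everywhere(
        "INSERT INTO orders VALUES (?, ?)", (1, 10))
    print([n.execute("SELECT COUNT(*) FROM orders").fetchone()[0] for n in nodes])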

12.
In this paper we present an approach for supporting the semi-automated architectural abstraction of architectural models throughout the software life-cycle. It addresses the problem that the design and implementation of a software system often drift apart as software systems evolve, leading to architectural knowledge evaporation. Our approach provides concepts and tool support for the semi-automatic abstraction of architecture component and connector views from implemented systems and for keeping the abstracted architecture models up-to-date during software evolution. In particular, we propose architecture abstraction concepts that are supported through a domain-specific language (DSL). Our main focus is on providing architectural abstraction specifications in the DSL that only need to be changed if the architecture changes, but can tolerate non-architectural changes in the underlying source code. Once the software architect has defined an architectural abstraction in the DSL, we can automatically generate architectural component views from the source code using model-driven development (MDD) techniques and check whether architectural design constraints are fulfilled by these models. Our approach supports the automatic generation of traceability links between source code elements and architectural abstractions using MDD techniques to enable software architects to easily link between components and the source code elements that realize them. It enables software architects to compare different versions of the generated architectural component view with each other. We evaluate our research results by studying the evolution of architectural abstractions in consecutive versions of five open-source systems and by analyzing the performance of our approach in these cases.
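
As a toy illustration of the mapping such an abstraction specification performs (the paper uses a dedicated DSL and model-driven tooling, not Python dictionaries; the component names and path patterns below are hypothetical):

import re

# Hypothetical abstraction specification: each architectural component is
# described by path patterns over the source tree.
COMPONENT_SPEC = {
    "Web":         [r"^src/web/.*\.java$"],
    "Billing":     [r"^src/billing/.*\.java$"],
    "Persistence": [r"^src/(dao|repository)/.*\.java$"],
}

def abstract_view(source_files, spec=COMPONENT_SPEC):
    """Derive a component view plus traceability links from file paths."""
    view = {name: [] for name in spec}
    for path in source_files:
        for component, patterns in spec.items():
            if any(re.match(p, path) for p in patterns):
                view[component].append(path)   # traceability link
                break
    return view

files = ["src/web/LoginServlet.java", "src/billing/Invoice.java",
         "src/dao/InvoiceDao.java"]
for component, members in abstract_view(files).items():
    print(component, "->", members)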

13.
Hierarchical control system design using approximate simulation
In this paper, we present a new approach for hierarchical control based on the recent notions of approximate simulation and simulation functions, a quantitative version of the simulation relations. Given a complex system that needs to be controlled and a simpler abstraction, we show how the knowledge of a simulation function allows us to synthesize hierarchical control laws by first controlling the abstraction and then lifting the abstract control law to the complex system using an interface. For the class of linear control systems, we give an effective characterization of the simulation functions and of the associated interfaces. This characterization allows us to use algorithmic procedures for their computation. We show how to choose an abstraction for a linear control system such that our hierarchical control approach can be used. Finally, we show the effectiveness of our approach on an example.
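
For orientation, a common way to define such a simulation function in the approximate-simulation literature (the paper's exact conditions and constants may differ) is as a function V that bounds the output mismatch and can be kept from increasing by a suitable choice of the concrete input:

V(x_1, x_2) \ge \| h_1(x_1) - h_2(x_2) \| ,

and, for every abstract input u_1, there exists a concrete input u_2 (supplied by the interface) such that

\frac{\partial V}{\partial x_1} f_1(x_1, u_1) + \frac{\partial V}{\partial x_2} f_2(x_2, u_2) \le 0 ,

so that under the lifted control law the output of the complex system tracks the output of the abstraction within the bound given by V.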

14.
Information Sciences, 2006, 176(18): 2642-2672
In this paper, we propose and formalize a rule-based knowledge transaction model for mobile environments. Our model integrates the features of both mobile environments and intelligent agents. We use logic programming as a mathematical tool and formal specification method to study knowledge transactions in mobile environments. Our knowledge transaction model has the following major advantages: (1) it can be used for knowledge transaction representation, formalization and knowledge reasoning in mobile environments; (2) it is knowledge-oriented and has a declarative semantics inherited from logic programming; (3) it is a formalization that can be applied to general problem domains. We show that our model can be used for knowledge transaction representation, formalization and knowledge reasoning in mobile environments.

15.
There are many datastore systems to choose from that differ in many ways including public versus private cloud support, data management interfaces, programming languages, supported feature sets, fault tolerance, consistency guarantees, configuration, and their deployment processes. In this paper, we focus on technologies for structured data access (database/datastore systems) in cloud systems. Our goal is to simplify the use of datastore systems through automation and to facilitate their empirical evaluation using real world applications. To enable this, we provide a cloud platform abstraction layer that decouples a data access API from its implementation. Applications that use this API can use any datastore that “plugs into” our abstraction layer, thus enabling application portability. We use this layer to extend the functionality of multiple datastores without modifying the datastores directly. Specifically, we provide support for ACID transaction semantics for popular key-value stores (none of which provide this feature). We integrate this layer into the AppScale cloud platform—an open-source cloud platform that executes cloud applications written in Python, Java, and Go, over virtualized cluster resources and infrastructures-as-a-service (Eucalyptus, OpenStack, and Amazon EC2). We use this system to investigate the impact of extending disparate datastores via the application portability layer with distributed transaction support.
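
To make the "plugs into" idea concrete, a minimal sketch of such a portability layer might look like the following (a hypothetical interface for illustration, not AppScale's or the paper's actual API; real ACID support over distributed key-value stores needs locking or journalling rather than the toy staging shown here):

from abc import ABC, abstractmethod

class Datastore(ABC):
    """Hypothetical portability interface; the real API differs."""

    @abstractmethod
    def get(self, key): ...
    @abstractmethod
    def put(self, key, value): ...
    @abstractmethod
    def delete(self, key): ...


class InMemoryStore(Datastore):
    """One plug-in backend; another class could wrap Cassandra, HBase, etc."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value

    def delete(self, key):
        self._data.pop(key, None)


def run_in_transaction(store, fn):
    """Tiny all-or-nothing helper layered on top of any backend."""
    staged = {}
    def put(key, value):
        staged[key] = value
    fn(put)                          # the callback stages its writes
    for key, value in staged.items():
        store.put(key, value)        # apply only after the callback succeeded


store = InMemoryStore()
def create_user(put):
    put("user:1", {"name": "Ada"})
    put("acct:1", {"owner": "user:1"})
run_in_transaction(store, create_user)
print(store.get("user:1"))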

16.
A reasoning method for a ship design expert system
The ship design process is highly data-oriented, dynamic, iterative and multi-stage. It utilizes multiple abstraction levels and concurrent engineering techniques. Specialized techniques for knowledge acquisition, knowledge representation and reasoning must be developed to address these problems in a ship design expert system, and consequently very few attempts have been made to model the ship design process using an expert system approach. The current work investigates a knowledge representation and reasoning technique for this purpose. A knowledge-based conceptual design was developed by utilizing a prototype approach and hierarchical decomposition. An expert system program called ALDES (accommodation layout design expert system) was developed using the CLIPS expert system shell and an object-oriented user interface. The reasoning and knowledge representation methods of ALDES are explained in the paper. An application of the method is given for the general arrangement design of a containership.

17.
In recent years, generalization-based data mining techniques have become an interesting topic for many data scientists. Generalized itemset mining is an exploration technique that focuses on extracting high-level abstractions and correlations in a database. However, the problem that domain experts must always deal with is how to manage and interpret the large number of patterns extracted from a massive database of transactions. In generalized pattern mining, taxonomies that contain abstraction information for each dataset are defined, so the number of frequent patterns can grow enormously. Therefore, exploiting the knowledge becomes a difficult and costly process. In this article, we introduce an approach that uses cardinality-based constraints together with transaction ids and numeric encoding to mine generalized patterns. We apply transaction ids to support the computation of each frequent itemset, and encode taxonomies into a numeric form using two simple rules. We also combine cardinality constraints with closed or maximal patterns. Experiments show that our optimizations significantly improve the performance of the original method, and that the comprehensive information carried by closed and maximal patterns is worth considering in generalized frequent pattern mining.
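
A small illustration of the tid-set side of this idea (the paper's numeric encoding, constraint handling and pruning are not reproduced; the taxonomy and transactions below are made up): the support of a generalized itemset is the size of the intersection of its members' transaction-id sets, where each transaction is expanded with the taxonomy ancestors of its items.

# Toy taxonomy: child item -> generalized parent item.
TAXONOMY = {"cola": "drink", "beer": "drink", "bread": "food"}

transactions = {
    1: {"cola", "bread"},
    2: {"beer"},
    3: {"cola", "beer", "bread"},
}

def tidset(item):
    """Transaction-id set of an item, including its generalized ancestors."""
    ids = set()
    for tid, items in transactions.items():
        expanded = set(items) | {TAXONOMY[i] for i in items if i in TAXONOMY}
        if item in expanded:
            ids.add(tid)
    return ids

def support(itemset):
    """Support = size of the intersection of the members' tid-sets."""
    tids = set(transactions)
    for item in itemset:
        tids &= tidset(item)
    return len(tids)

print(support({"drink", "bread"}))   # generalized itemset -> 2
print(support({"cola", "beer"}))     # -> 1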

18.
Model management research investigates the formulation, analysis and interpretation of models. This paper focuses on the formulation aspects of modeling so that the task can be supported by decision support system (DSS) environments. Given the knowledge-intensive nature of the formulation process, the development of a modeling tool requires explicating the knowledge pertaining to modeling. This involves comprehending not only the static knowledge about model components (e.g. decision variables, coefficients, associated indices and constants), but also the process knowledge required to construct models from model pieces. The proposed top-down approach configures equations by exploiting the structural modeling knowledge inherent in equation components. The possible representation of equations at various abstraction levels is introduced, the aim being to uncover the structural model components together with the process knowledge required for their appropriate configuration. As part of developing this conceptual model, the role of semantic and syntactic information in model building is investigated. The paper proposes an approach where the formulation semantics are captured as a simple 'action-resource' view, which composes models by identifying and piecing together the equation components. The process of equation construction is illustrated with examples from the linear programming (LP) modeling domain. The proposed top-down approach is contrasted with a bottom-up method.

19.
Market-based principles can be used to manage the risk of distributed peer-to-peer transactions. This is demonstrated by Ptrim, a system that builds a transaction default market on top of a main transaction processing system, within which peers offer to underwrite the transaction risk for a slight increase in the transaction cost. The insurance cost, determined through market-based mechanisms, is a way of identifying untrustworthy peers and perilous transactions. The risk of the transactions is contained, and at the same time members of the peer-to-peer network capitalise on their market knowledge by profiting as transaction insurers. We evaluated the approach through trials with the deployed Ptrim prototype, as well as composite experiments involving real online transaction data and real subjects participating in the transaction default market. We examine the efficacy of our approach both from a theoretical and an experimental perspective. Our findings suggest that the Ptrim market layer functions in an efficient manner, and is able to support the transaction processing system through the insurance offers it produces, thus acting as an effective means of reducing the risk of peer-to-peer transactions. In our conclusions we discuss how a system like Ptrim assimilates properties of real world markets, and its potential exposure and possible countermeasures to events such as those witnessed in the recent global financial turmoil.

20.
A goal of this study is to develop a Composite Knowledge Manipulation Tool (CKMT). Some traditional medical activities rely heavily on the oral transfer of knowledge, with the risk of losing important knowledge. Moreover, the activities differ according to region, tradition, experts' experience, etc. Therefore, it is necessary to develop an integrated and consistent knowledge manipulation tool. With such a tool, it becomes possible to extract tacit knowledge consistently, transform different types of knowledge into a composite knowledge base (KB), integrate dispersed and complex knowledge, and compensate for missing knowledge. For these reasons, I have developed the CKMT, called K-Expert, which has four advanced functionalities. Firstly, it can extract and import logical rules from data mining (DM) with minimal effort; I expect this function to complement the oral transfer of traditional knowledge. Secondly, it transforms the various types of logical rules into database (DB) tables after syntax checking and/or transformation, so that knowledge managers can refine, evaluate, and manage the large composite KB consistently with the support of a database management system (DBMS). Thirdly, it visualizes the transformed knowledge in the shape of a decision tree (DT), with which knowledge workers can evaluate the completeness of the KB and fill gaps in the knowledge. Finally, it provides an SQL-based backward-chaining function to knowledge users. This can reduce inference time considerably, since it is based on SQL querying and searching rather than the sentence-by-sentence interpretation used in traditional inference systems, and it gives researchers in the fields of knowledge management (KM) and expert systems (ES) more opportunities to follow up and validate their knowledge. Overall, I expect the approach to mitigate knowledge loss and reduce the burden of knowledge transformation and complementation.
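
To illustrate what SQL-based backward chaining can mean in the simplest case (a toy sketch only; K-Expert's actual schema, rule format and query strategy are assumptions here, and the medical rules are invented):

import sqlite3

# Minimal sketch of SQL-backed backward chaining.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE facts (fact TEXT PRIMARY KEY);
    CREATE TABLE rules (conclusion TEXT, antecedent TEXT);
    INSERT INTO facts VALUES ('cough'), ('fever');
    -- 'flu' holds if all of its antecedent rows hold.
    INSERT INTO rules VALUES ('flu', 'cough'), ('flu', 'fever'),
                             ('rest_advised', 'flu');
""")

def prove(goal):
    """Prove a goal by SQL lookups: either a stored fact, or a rule whose
    antecedents can all be proved recursively (assumes acyclic rules)."""
    if db.execute("SELECT 1 FROM facts WHERE fact = ?", (goal,)).fetchone():
        return True
    antecedents = [row[0] for row in db.execute(
        "SELECT antecedent FROM rules WHERE conclusion = ?", (goal,))]
    return bool(antecedents) and all(prove(a) for a in antecedents)

print(prove("rest_advised"))  # -> True (flu follows from cough + fever)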
