Similar Documents
1.
In automated synthesis, given a specification, we automatically create a system that is guaranteed to satisfy the specification. Classical temporal synthesis algorithms usually create a "flat" system "from scratch". However, real-life software and hardware systems are usually built from preexisting libraries of reusable components, and they are not "flat", since repeated sub-systems are described only once. In this work we describe an algorithm for the synthesis of a hierarchical system from a library of hierarchical components, following the "bottom-up" approach to system design. Our algorithm synthesizes in rounds: at each round the system designer provides the specification of the currently desired module, which is then automatically synthesized using the initial library and the previously constructed modules. To ensure that the synthesized module actually takes advantage of the available high-level modules, we guide the algorithm by enforcing certain modularity criteria. We show that the synthesis of a hierarchical system from a library of hierarchical components is EXPTIME-complete for the μ-calculus and 2EXPTIME-complete for LTL, in the cases of both complete and incomplete information. Thus, in all cases, it is no harder than the classical synthesis problem (synthesizing flat systems "from scratch"), even though the synthesized hierarchical system may be exponentially smaller than a flat one.
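To make the round structure concrete, here is a minimal Python sketch of the bottom-up loop, under stated assumptions: `synthesize_module` is a hypothetical stand-in for a real temporal-synthesis procedure, and specs are modeled as sets of atomic goals; none of these names come from the paper.

```python
# Minimal sketch of the round-based, bottom-up synthesis loop described
# above. All names are hypothetical; synthesize_module stands in for a
# real LTL/mu-calculus synthesis procedure and here just records which
# library components the new module is composed from.

def synthesize_module(spec, library):
    """Hypothetical stand-in: pick the library components whose
    guarantees overlap the spec (a set of atomic goals)."""
    used = {name for name, guarantees in library.items() if guarantees & spec}
    return {"spec": spec, "uses": used}

def bottom_up_synthesis(initial_library, round_specs):
    library = dict(initial_library)  # name -> set of guaranteed goals
    system = {}
    for name, spec in round_specs:   # one designer-provided spec per round
        module = synthesize_module(spec, library)
        system[name] = module
        # The new module joins the library and may be reused in later
        # rounds, which is what keeps the synthesized system hierarchical.
        library[name] = spec
    return system

if __name__ == "__main__":
    lib = {"mutex": {"excl"}, "timer": {"timeout"}}
    rounds = [("arbiter", {"excl", "timeout"}),
              ("top", {"excl", "timeout", "fair"})]
    print(bottom_up_synthesis(lib, rounds))
```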

2.
3.
One of the biggest obstacles to software reuse is the cost involved in evaluating the suitability of candidate reusable components. In recent years, code search engines have made significant progress in establishing the semantic suitability of components for new usage scenarios, but the problem of ranking components according to their non-functional suitability has largely been neglected. The main difficulty is that a component's non-functional suitability for a specific reuse scenario is usually influenced by multiple, "soft" criteria, and the relative weighting of metrics for these criteria is rarely known quantitatively. What is required, therefore, is an effective and reliable strategy for ranking software components based on their non-functional properties without requiring users to provide quantitative weighting information. In this paper we present a novel approach for achieving this, based on the non-dominated sorting of components driven by a specification of the relative importance of non-functional properties as a partial ordering. After describing the ranking algorithm and its implementation in a component search engine, we provide an exploratory study of its properties on a sample set of components harvested from Maven Central.
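As a rough illustration of the core mechanism, the sketch below layers components into Pareto fronts by non-dominated sorting. The metric names and values are invented, and the paper's additional ingredient, weighting metrics by a user-given partial order of importance, is omitted here.

```python
# Hedged sketch of ranking by non-dominated sorting: components are
# compared on their non-functional metrics and layered into Pareto
# fronts, so no quantitative weights are needed.

def dominates(a, b, metrics):
    """a dominates b if it is at least as good on every metric and
    strictly better on at least one (here: larger is better)."""
    return (all(a[m] >= b[m] for m in metrics)
            and any(a[m] > b[m] for m in metrics))

def non_dominated_fronts(components, metrics):
    remaining = list(components)
    fronts = []
    while remaining:
        # Front = components not dominated by any other remaining one.
        front = [c for c in remaining
                 if not any(dominates(o, c, metrics)
                            for o in remaining if o is not c)]
        fronts.append(front)
        remaining = [c for c in remaining if c not in front]
    return fronts

comps = [
    {"name": "A", "reliability": 0.9, "performance": 0.4},
    {"name": "B", "reliability": 0.7, "performance": 0.8},
    {"name": "C", "reliability": 0.6, "performance": 0.3},  # dominated by B
]
for rank, front in enumerate(non_dominated_fronts(comps, ["reliability", "performance"]), 1):
    print(rank, [c["name"] for c in front])
```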

4.
Problems with the portability of applications across various Linux distributions are one of the major sore spots for independent software vendors (ISVs) wishing to support the Linux platform in their products. The source of the problem is that different distributions ship different sets of system libraries that vary in the interfaces (APIs) they provide. Critical questions then arise for ISVs, such as "which distributions will my application run on?" and "what can I specifically do to make my application run on a greater number of distributions?". This article describes an industry-wide approach to mitigating the problem of Linux platform fragmentation through the standardization of common interfaces: the Linux Standard Base (LSB), the leading effort toward a "single Linux specification". The article shows how extending this approach with a knowledge base about the composition of real-world Linux distributions enables automatic portability analysis for Linux applications, even if they use interfaces outside the scope of the standard. The knowledge-base-powered Linux Application Checker tool is described, which can help answer the above questions by automatically analyzing the target application and comparing the collected data about its external dependencies with what various distributions provide. Additionally, Linux Application Checker is an official tool approved by the Linux Foundation for certifying applications for compliance with the LSB standard.
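The analysis boils down to a set comparison between required and provided interfaces, as in the following sketch. All symbol and distribution names are made up; the real tool extracts an application's dependencies from its binaries automatically.

```python
# Illustrative sketch of the core idea: confront an application's
# external dependencies with a knowledge base of what each distribution
# provides. The data here is invented for illustration.

app_requires = {"libc.so.6:fopen", "libz.so.1:inflate", "libfoo.so.0:foo_init"}

distro_provides = {
    "DistroA 1.0": {"libc.so.6:fopen", "libz.so.1:inflate"},
    "DistroB 2.0": {"libc.so.6:fopen", "libz.so.1:inflate", "libfoo.so.0:foo_init"},
}

for distro, provided in distro_provides.items():
    missing = app_requires - provided
    verdict = "would run" if not missing else f"missing: {sorted(missing)}"
    print(f"{distro}: {verdict}")
```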

5.
Several authors have identified that the only feasible way to increase productivity in software construction is to reuse existing software. To achieve this, component-based software development is one of the more promising approaches. However, traditional research in component-oriented programming often assumes that components are reused "as-is". Practitioners have found that "as-is" reuse seldom occurs and that reusable components generally need to be adapted to match the system requirements. Existing component object models provide only limited support for component adaptation: white-box techniques such as copy–paste and inheritance, and black-box approaches such as aggregation and wrapping. These techniques suffer from problems related to reusability, efficiency, implementation overhead, or the self problem. To address these problems, this article proposes superimposition, a novel black-box adaptation technique that allows one to impose predefined, but configurable, types of functionality on a reusable component. Three categories of typical adaptation types are discussed, related to the component interface, component composition, and component monitoring. Superimposition and these adaptation types are illustrated with several examples.
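The sketch below is only a loose Python analogy for one of the adaptation types named above, component monitoring: a predefined but configurable behavior is imposed on an existing component without touching its code. The article defines superimposition for component object models and distinguishes it from plain wrapping, so this should be read as an illustration of the intent, not of the technique itself. All names are invented.

```python
# Rough analogy: impose configurable monitoring on a component "from
# the outside", leaving the reusable component itself untouched.

import functools
import time

def impose_monitoring(component, logger=print):
    """Return a proxy that imposes call timing on every public method
    of `component`; the behavior is configurable via `logger`."""
    class Monitored:
        def __getattr__(self, name):
            attr = getattr(component, name)
            if not callable(attr):
                return attr
            @functools.wraps(attr)
            def monitored(*args, **kwargs):
                start = time.perf_counter()
                result = attr(*args, **kwargs)
                logger(f"{name} took {time.perf_counter() - start:.6f}s")
                return result
            return monitored
    return Monitored()

class Stack:  # the reusable component, used as-is
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

s = impose_monitoring(Stack())
s.push(42)
print(s.pop())
```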

6.
In object-oriented composition, classes and class inheritance are used both to realize type relationships and to provide reusable building blocks. Unfortunately, these two goals can be contradictory in many situations, leading to classes and inheritance hierarchies that are hard to reuse. Several approaches exist to remedy this problem, such as mixins, aspects, roles, and meta-objects. However, in all these approaches, situations where the mixins, aspects, roles, or meta-objects have complex interdependencies among each other are not yet well supported. In this paper, we propose transitive mixins as an extension of the mixin concept. This approach provides a simple and reusable way to define "mixins of mixins". Moreover, because mixins can easily be realized on top of aspects, roles, and meta-objects, the same solution can also be applied to those other approaches.
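Python's multiple inheritance gives a convenient, if loose, way to illustrate the "mixins of mixins" idea: a mixin can itself be composed of other mixins it depends on. The classes below are invented for illustration; the paper's transitive mixins are a language-level concept, not this Python encoding.

```python
# A mixin composed of other mixins, with behavior that relies on both.

class ComparableMixin:
    # Derives extra comparisons from a client-supplied __lt__.
    def __gt__(self, other):
        return other < self
    def __le__(self, other):
        return not (other < self)

class PrintableMixin:
    def describe(self):
        return f"{type(self).__name__}({self.value})"

# A "mixin of mixins": it bundles the two mixins above and adds
# behavior that depends on both of them.
class OrderedPrintableMixin(ComparableMixin, PrintableMixin):
    def describe_order(self, other):
        rel = "<" if self < other else ">="
        return f"{self.describe()} {rel} {other.describe()}"

class Money(OrderedPrintableMixin):
    def __init__(self, value):
        self.value = value
    def __lt__(self, other):
        return self.value < other.value

print(Money(3).describe_order(Money(5)))  # Money(3) < Money(5)
```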

7.
An Experiment in Program Composition and Proof
This paper explores a compositional approach to program specification, development, and proof. We apply a theory of composition to a problem in distributed computing with the goal of understanding the strengths and weaknesses of the compositional approach. First, we briefly describe the theory. Then we give a specification of a desired system. Next, we propose a design of the desired system as a composition of components and prove its correctness. Finally, we show how the proof can be reused for a slightly different compositional structure by using the concept of observation.

8.
We humans usually think in words; to represent our opinion about, e.g., the size of an object, it is sufficient to pick one of the few (say, five) words used to describe size ("tiny," "small," "medium," etc.). Indicating which of 5 words we have chosen takes 3 bits. However, in modern computer representations of uncertainty, real numbers are used to represent this "fuzziness." A real number takes 10 times more memory to store, and therefore processing a real number takes 10 times longer than it should. Therefore, for computers to reach the ability of a human brain, Zadeh proposed to represent and process uncertainty in the computer by storing and processing the very words that humans use, without translating them into real numbers (he called this idea granularity). If we try to define operations with words, we run into the following problem: e.g., if we define "tiny" + "tiny" as "tiny," then we must draw the counter-intuitive conclusion that the sum of any number of tiny objects is also tiny. If we define "tiny" + "tiny" as "small," we may be overestimating the size. To overcome this problem, we suggest using nondeterministic (probabilistic) operations with words. For example, in the above case, "tiny" + "tiny" is, with some probability, equal to "tiny," and, with some other probability, equal to "small." We also analyze the advantages and disadvantages of this approach: the main advantage is that we now have granularity and can thus speed up the processing of uncertainty. The main disadvantage is that in some cases, when defining symmetric associative operations for the set of words, we must give up either symmetry or associativity. Luckily, this is not always the case: in some situations, we can define symmetric associative operations. © 1997 John Wiley & Sons, Inc.
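A small sketch of the nondeterministic word addition proposed above: "tiny" + "tiny" yields "tiny" with some probability and "small" otherwise. The probability values and rule table are made up for illustration.

```python
# Nondeterministic (probabilistic) addition of size words.

import random

# (word_a, word_b) -> list of (result, probability); rules are invented.
ADD_RULES = {
    ("tiny", "tiny"): [("tiny", 0.7), ("small", 0.3)],
    ("tiny", "small"): [("small", 0.8), ("medium", 0.2)],
    ("tiny", "medium"): [("medium", 1.0)],
}

def add_words(a, b, rng=random):
    # Addition is symmetric, so look the pair up in either order.
    outcomes = ADD_RULES.get((a, b)) or ADD_RULES.get((b, a))
    results, probs = zip(*outcomes)
    return rng.choices(results, weights=probs)[0]

random.seed(0)
# Summing many tiny objects now escapes "tiny" with high probability,
# avoiding the counter-intuitive "sum of tiny things is tiny":
total = "tiny"
for _ in range(10):
    total = add_words(total, "tiny")
print(total)
```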

9.
Mass customization has become a key strategy for a service provider to differentiate itself from its competitors in a highly segmented global service market. This paper proposes an interactive service customization model to support individual service offerings for customers. In this model, not only is the content of an activity customizable, but the process model can also be constructed dynamically according to the customer's requirements. Based on a goal ontology, on-demand customer requirements are transformed into a high-level service process model. Process components, which are building blocks for reusable standardized service processes, are designed to support on-demand process composition. The customer can incrementally define the customized service process through a series of operations, including activation of goal decomposition, reusable component selection, and process composition. In this paper, we first discuss the key requirements of the service customization problem. We then present in detail a knowledge-based customizable service process model and the accompanying customization method. Finally, we demonstrate the feasibility of our approach through a case study of the well-known travel planning problem and present a prototype system that enables users to interactively organize a satisfactory travel plan.
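The sketch below mirrors the customization loop at a toy scale: a high-level goal is decomposed via a goal ontology, and the customer selects one reusable process component per leaf goal. The ontology, component names, and travel example are invented to echo the paper's case study, not taken from it.

```python
# Toy goal decomposition and process composition for service customization.

goal_ontology = {  # goal -> subgoals (leaves map to [])
    "plan_trip": ["book_transport", "book_hotel"],
    "book_transport": [],
    "book_hotel": [],
}

component_library = {  # leaf goal -> candidate process components
    "book_transport": ["TrainBooking", "FlightBooking"],
    "book_hotel": ["HotelBooking"],
}

def decompose(goal):
    """Recursively expand a goal into its leaf goals."""
    subs = goal_ontology.get(goal, [])
    if not subs:
        return [goal]
    return [leaf for s in subs for leaf in decompose(s)]

def compose_process(goal, choose):
    """Build a (sequential) process by letting the customer `choose`
    one component per leaf goal."""
    return [choose(leaf, component_library[leaf]) for leaf in decompose(goal)]

# The customer's interactive selection, simulated by taking the first option:
print(compose_process("plan_trip", lambda goal, options: options[0]))
```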

10.
Seamless integration of information systems plays a key role in the development and maintenance of products. In today's dynamic market environments, change is the rule rather than the exception. Consequently, the ability to change products is an effective way to adapt to a changing information technology landscape and an important competitive advantage for a successful company. The vision of service-oriented computing is to capture the business-relevant functionality of existing software systems as services and to use service composition to form composite applications. Unfortunately, this vision has yet to be achieved. We present here a high-level overview of a semantic service discovery, composition, and enactment system that realizes this vision. Rather than a fully fledged, industrial-strength system, we present a research prototype that realizes this vision in a narrow application domain to show the general feasibility of automatic semantic discovery, composition, and flexible enactment of services. The lessons we learned from implementing this prototype are: (a) the requirements on the capabilities of logical reasoners for implementing "real" scenarios are high; (b) a formal and exact specification of the semantics of "real world" services is a laborious task; and (c) it is hard to find adequate scenarios, because people do not trust this technology and are reluctant to hand control of business processes to a machine, since questions like "Who is responsible?" arise. Furthermore, the application of automated and flexible service discovery and composition at run-time is only cost-effective if changes and volatilities in the service landscape are frequent.

11.
We consider a formalized problem setting for designing possible configurations of a technical system with redundant components, together with its analytic solution. As the quality criterion for a specific configuration, we use the constancy of a given set of functions fulfilled by the system. We formulate the redundancy problem as finding the possible values of the "integration" matrix of an equipment platform, which relates input and output interfaces, that ensure the constancy of a collection of the system's transition matrices used to evaluate its quality. We obtain the full set of solutions, as well as a particular solution of the formulated problem that is easier to implement. We give an illustrative example from a real-life application.

12.
In component-based software development, the gluing of two software components is usually achieved by defining an interface specification and creating wrappers on the components to support the interface. We believe that the interface specification provides useful information for specializing components. An interface may define constraints on a component's inputs as well as on its outputs. In this paper, we propose a new approach to program specialization with respect to output constraints. We describe the form an efficient specialized program should take after such specialization, and consider a variant of partial evaluation to achieve it. In the process, we translate an output constraint into a characterization function for a component's input, and define a specializer that uses this characterization to guide the specialization process. We believe this work will broaden the scope of program specialization and provide a framework for building more generic and versatile program adaptation techniques.
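A toy illustration of the idea follows: an output constraint on a component is translated into a characterization of its inputs, under which a defensive output check becomes dead code and can be removed. The component and constraint are invented, and the specialization is done by hand here; a real specializer would be a partial evaluator.

```python
# Specializing a component with respect to an output constraint.

def component(x):
    y = 2 * x + 1
    if y < 0:            # defensive check on the output
        raise ValueError("negative output")
    return y

# Interface (output) constraint: the context only accepts y >= 0.
# Translated input characterization: y = 2x + 1 >= 0 iff x >= 0.
def input_ok(x):
    return x >= 0

def component_specialized(x):
    # Under the guarantee input_ok(x), the output check is dead code,
    # so the specializer removes it.
    return 2 * x + 1

for x in (0, 3, 7):
    assert input_ok(x) and component(x) == component_specialized(x)
print("specialized component agrees on all characterized inputs tested")
```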

13.
Large-scale, complex real-time systems can benefit significantly from component-based software development, that is, from constructing real-time systems out of existing, verified reusable components; automating this integration process would markedly improve the efficiency of real-time system development. Based on an analysis of the characteristics of real-time tasks, and building on formal tools such as Timed CSP, this paper proposes RTCS, a real-time component description mechanism with precise semantics, and explores a method for automatically generating components with RTCS within a real-time CORBA architecture.

14.
We investigate the space of singular curves associated with a distribution of k-planes, or, what is the same thing, a nonlinear deterministic control system that is linear in the controls. A singular curve is one for which the associated linearized system is not controllable. If a quadratic positive-definite cost function is introduced, then the corresponding optimal control problem is known as the sub-Riemannian geodesic problem. The original motivation for our work was the question "Are all sub-Riemannian minimizers smooth?", which is equivalent to the question "Are singular minimizers necessarily smooth?". Our main result concerns the singular curves of a class of homogeneous systems whose state spaces are compact Lie groups. We prove that for this class each singular curve lies in a lower-dimensional subgroup within which it is regular, and we use this result to prove that all sub-Riemannian minimizers are smooth. A central ingredient of our proof is a symplectic-geometric characterization of singular curves formulated by Hsu, which we extend to nonsmooth singular curves. We find that the symplectic point of view clarifies the situation and simplifies the calculations.
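For context, the correspondence mentioned in the first sentence can be stated as follows; this is standard textbook material, not taken from the paper itself.

```latex
% A rank-k distribution spanned by vector fields X_1, ..., X_k on a
% manifold M corresponds to the control system that is linear in the
% controls:
\[
  \dot q(t) \;=\; \sum_{i=1}^{k} u_i(t)\, X_i\bigl(q(t)\bigr),
  \qquad q(t) \in M,\quad u(t) \in \mathbb{R}^k .
\]
% The sub-Riemannian geodesic problem asks to minimize the quadratic
% positive-definite cost
\[
  J(u) \;=\; \tfrac{1}{2} \int_0^T \sum_{i=1}^{k} u_i(t)^2 \, dt
\]
% over trajectories joining two fixed points. A trajectory is singular
% when the linearization of the system along it is not controllable,
% i.e., when it is a critical point of the endpoint map.
```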

15.
This paper presents the problem of Molecular Beam Epitaxy and Reflection High-Energy Electron Diffraction with the help of a unified, modern MDA approach. Model-Driven Architecture (MDA) is a modern and unusually efficient method of improving the software development process. It was created at the beginning of the twenty-first century by the Object Management Group as an element of Model-Driven Development, a highly promoted trend in software engineering. In MDA, a viewpoint on a system is a technique for abstraction that uses a selected set of architectural concepts and structuring rules in order to focus on particular concerns within the system. In MDA, system design begins with defining the problem domain. Next, at a highly abstract level, independent of the system and programming platform, a Platform-Independent Model (PIM) is constructed together with a general system specification. This specification is created with the help of the Unified Modeling Language. The actual implementation of the system is obtained by transforming the PIM into a Platform-Specific Model (PSM). The essence of Model-Driven Architecture is the replacement of the twentieth-century programming motto "everything is an object" with the modern "everything is a model".

16.
This paper introduces the necessity of developing reconfigurable instrument technology and summarizes the design philosophy of reconfigurable instruments, focusing on the key design aspects. On the hardware side, it surveys the state of the art of FPGAs, which are the key to reconfigurable processing. On the software side, it introduces the concept of framework reuse, which neatly solves the problem of assembling reusable components into an application. An example of the "application framework + reusable components" development method is given.
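A minimal sketch of the "application framework + reusable components" method mentioned above: the framework fixes the control flow, and an instrument is assembled by plugging reusable components into its slots. All names and the acquire/process/display structure are invented for illustration.

```python
# The framework owns the control flow; reusable components fill the slots.

class InstrumentFramework:
    def __init__(self, acquire, process, display):
        self.acquire, self.process, self.display = acquire, process, display

    def run(self):
        # Fixed pipeline: acquire -> process -> display.
        self.display(self.process(self.acquire()))

# Reusable components, written once and shared across instruments:
def simulated_sampler():
    return [0.0, 0.5, 1.0, 0.5, 0.0]

def peak_detector(samples):
    return max(samples)

def console_display(value):
    print(f"peak = {value}")

# Assembling one concrete (virtual) instrument from the parts:
InstrumentFramework(simulated_sampler, peak_detector, console_display).run()
```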

17.
Graphical Models, 2005, 67(4):260-284
This paper deals with the topological analysis of sets of tetrahedra ("tetrahedral meshes" of three-dimensional objects). We introduce a definition of simple elements for any normal tetrahedral representation. We then prove a local characterization of simple tetrahedra in the case of a scene composed of one object and its background, based on homology groups and on relative homology. This allows us to define homotopic deformations of a tetrahedral representation. Using this characterization, we illustrate the problem of generating three-dimensional finite element meshes from medical voxel datasets.

18.
The Automated Prototyping Tool-kit (APT) is an integrated set of software tools that generate source programs directly from real-time requirements. The APT system uses a fifth-generation prototyping language to model the communication structure, timing constraints, I/O control, and data buffering that comprise the requirements for an embedded software system. The language supports the specification of hard real-time systems with reusable components from domain-specific component libraries. APT has been used successfully as a research tool in prototyping large war-fighter control systems (e.g., a command-and-control station, a cruise missile flight control system, and the Patriot missile defense system) and has demonstrated its capability to support the development of large, complex embedded software.

19.
20.
In this paper, we propose a method to predict the presence or absence of the correct class in classification problems with many classes, where the output of the classifier is provided in the form of a ranking list. This problem differs from the "traditional" classification tasks encountered in pattern recognition. While the original problem of forming a ranking of the most likely classes can be solved by running one of several classification methods, the analysis presented here goes one step further. The main objective is to analyse (classify) the provided rankings (ordered lists of fixed length) and decide whether the "true" class is present on the list. To this end, a two-class classification problem is formulated, in which the underlying feature space is built through a characterization of the ranking lists. Experimental results obtained for synthetic data as well as real-world face identification data are presented.
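The sketch below illustrates the two-class formulation: each ranking list is turned into a small feature vector, and a binary decision predicts whether the correct class appears on the list. The features and the threshold rule are invented stand-ins; the paper builds a proper feature space and trains a real classifier.

```python
# Meta-classification of ranking lists: is the true class on the list?

def ranking_features(scores):
    """Characterize a ranking list (scores sorted descending)."""
    top1 = scores[0]
    margin = scores[0] - scores[1]   # gap to the runner-up
    spread = scores[0] - scores[-1]  # how peaked the list is
    return [top1, margin, spread]

def correct_class_on_list(scores, threshold=0.15):
    """Toy decision rule standing in for a trained binary classifier:
    a confident, peaked ranking suggests the true class is present."""
    top1, margin, spread = ranking_features(scores)
    return margin > threshold and spread > 2 * threshold

confident = [0.62, 0.21, 0.09, 0.05, 0.03]
flat = [0.24, 0.22, 0.20, 0.18, 0.16]
print(correct_class_on_list(confident))  # True: large margin and spread
print(correct_class_on_list(flat))       # False: nearly uniform ranking
```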
