20 related references found.
1.
The Object Constraint Language (OCL) plays an important role in the elaboration of precise models. Although OCL was designed to be both formal and simple, OCL specifications may be difficult to understand and evolve, particularly those containing complex or duplicated expressions. In this paper, we discuss how refactoring techniques can be applied to improve the understandability and maintainability of OCL specifications. In particular, we present several potentially bad constructions often found in OCL specifications and a collection of refactorings that can be applied to replace such constructions with better ones. We also briefly discuss how refactorings can be automated and how model regression testing can be used to increase our confidence that the semantics of an OCL specification has been preserved after manually performed refactorings.
2.
《IEEE transactions on pattern analysis and machine intelligence》1995,21(1):32-49
We present timing constraint Petri nets (TCPNs for short) and describe how to use them to model a real-time system specification and determine whether the specification is schedulable with respect to imposed timing constraints. The strength of TCPNs over other time-related Petri nets lies in the modeling and analysis of conflict structures. Schedulability analysis is conducted in three steps: specification modeling, reachability simulation, and timing analysis. First, we model a real-time system by transforming its system specification, along with its imposed timing constraints, into a TCPN; we call this net Ns. Then we simulate the reachability of Ns to verify whether a marking, Mn, is reachable from an initial marking, Mo. It is important to note that a marking reachable in a Petri net is not necessarily reachable in a TCPN because of the imposed timing constraints. Therefore, in the timing analysis step, a reachable marking Mn found in the reachability simulation step is analyzed to verify whether Mn is reachable under the timing constraints. Mn is said to be reachable in the TCPN if and only if we can find at least one firing sequence σ such that all transitions in σ are strongly schedulable with respect to Mo under the timing constraints. If such an Mn can be found, then we can assert that the specification is schedulable under the imposed timing constraints; otherwise, the system specification needs to be modified or the timing constraints need to be relaxed. We also present a synthesis method for determining the best approximation of the earliest fire beginning time (EFBT) and the latest fire ending time (LFET) of each strongly schedulable transition.
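To make the flavour of the timing analysis concrete, the sketch below models a plain place/transition net in which every transition carries a firing-time window, and checks whether a candidate firing sequence is both structurally fireable and consistent with a global deadline. This is only a simplified stand-in for TCPN semantics; the net, the windows, and the deadline are invented for illustration and the code is not the paper's schedulability algorithm.

```python
# Minimal sketch (not the paper's TCPN semantics): a place/transition net in
# which each transition carries a firing-time window, plus a check that a
# candidate firing sequence is fireable and meets a global deadline.
from dataclasses import dataclass

@dataclass
class Transition:
    name: str
    pre: dict      # place -> tokens consumed
    post: dict     # place -> tokens produced
    window: tuple  # (earliest, latest) firing delay after becoming enabled

def enabled(marking, t):
    return all(marking.get(p, 0) >= n for p, n in t.pre.items())

def fire(marking, t):
    m = dict(marking)
    for p, n in t.pre.items():
        m[p] -= n
    for p, n in t.post.items():
        m[p] = m.get(p, 0) + n
    return m

def check_sequence(marking, sequence, deadline):
    """True if `sequence` fires from `marking`, each transition firing as early
    as its window allows, and the whole run finishes by `deadline`."""
    clock = 0.0
    for t in sequence:
        if not enabled(marking, t):
            return False                 # structurally unreachable
        earliest, latest = t.window
        if earliest > latest:
            return False                 # inconsistent window
        clock += earliest                # fire as early as allowed
        if clock > deadline:
            return False                 # timing constraint violated
        marking = fire(marking, t)
    return True

# Toy net, invented for illustration.
t1 = Transition("t1", {"p0": 1}, {"p1": 1}, (2, 5))
t2 = Transition("t2", {"p1": 1}, {"p2": 1}, (1, 3))
print(check_sequence({"p0": 1}, [t1, t2], deadline=10))  # True
print(check_sequence({"p0": 1}, [t2, t1], deadline=10))  # False: t2 not enabled yet
```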
3.
4.
Electronic Business using eXtensible Markup Language (ebXML) Business Process Specification Schema (BPSS) supports the specification of the set of elements required to configure a runtime system to execute a set of ebXML business transactions. The BPSS is available in two stand-alone representations: a UML version and an XML version. Due to the limitations of UML notation and XML syntax, however, the current ebXML BPSS specification is insufficient to formally specify the semantic constraints of modeling elements. In this study, we propose a classification scheme for BPSS semantic constraints and describe how to represent those constraints formally using the Object Constraint Language. As a way to verify a particular Business Process Specification (BPS) against the formally modeled semantic constraints, we suggest a rule-based approach for representing the constraints and describe a detailed mechanism for applying the rule-based constraints to the BPS in a prototype implementation.
5.
Takuto Sakuma, Kazuya Nishi, Kaoru Kishimoto, Kazuya Nakagawa, Masayuki Karasuyama, Yuta Umezu 《Advanced Robotics》2019,33(3-4):134-152
Recent advances in robotics and measurement technologies have enabled biologists to record the trajectories created by animal movements. In this paper, we convert time series of animal trajectories into sequences of finite symbols, and then propose a machine learning method for gaining biological insight from the trajectory data in the form of symbol sequences. The proposed method is used to train a classifier that differentiates between the trajectories of two groups of animals, such as males and females. The classifier is represented as a sparse linear combination of subsequence patterns, and we call it an S3P-classifier. The trained S3P-classifier is easy to interpret because each coefficient represents the specificity of a subsequence pattern to one of the two classes of animal trajectories. However, fitting an S3P-classifier is computationally challenging because the number of subsequence patterns is extremely large. The main technical contribution of this paper is the development of a novel algorithm that overcomes this computational difficulty by combining a sequential mining technique with a recently developed convex optimization technique called safe screening. We demonstrate the effectiveness of the proposed method by applying it to three animal trajectory data analysis tasks.
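As a rough illustration of the classifier described above, the sketch below builds binary indicator features for short symbol patterns and fits a sparse (L1-regularized) linear model. It deliberately simplifies the paper's method: contiguous substrings up to a fixed length stand in for general subsequence patterns, scikit-learn's liblinear solver replaces the sequential-mining-plus-safe-screening algorithm, and the toy sequences and labels are made up.

```python
# Rough S3P-style sketch: indicator features for short symbol patterns and a
# sparse (L1-regularized) linear classifier on top of them. Contiguous
# substrings stand in for general subsequence patterns, and scikit-learn's
# solver replaces the paper's sequential mining + safe screening.
from itertools import chain
import numpy as np
from sklearn.linear_model import LogisticRegression

def substrings(seq, k):
    return {seq[i:i + j] for j in range(1, k + 1) for i in range(len(seq) - j + 1)}

def featurize(sequences, k=3):
    vocab = sorted(set(chain.from_iterable(substrings(s, k) for s in sequences)))
    index = {pat: i for i, pat in enumerate(vocab)}
    X = np.zeros((len(sequences), len(vocab)))
    for row, s in enumerate(sequences):
        for pat in substrings(s, k):
            X[row, index[pat]] = 1.0
    return X, vocab

# Hypothetical trajectories already discretized into symbol strings.
seqs = ["aabba", "ababa", "bbaab", "babba", "aabab", "bbbaa"]
labels = [0, 0, 0, 1, 1, 1]          # e.g. two groups of animals

X, vocab = featurize(seqs)
clf = LogisticRegression(penalty="l1", C=1.0, solver="liblinear").fit(X, labels)

# The nonzero coefficients point at the patterns that discriminate the groups.
for pat, w in zip(vocab, clf.coef_[0]):
    if abs(w) > 1e-6:
        print(pat, round(float(w), 3))
```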
6.
7.
This paper studies a family of optimization problems where a set of items, each requiring a possibly different amount of resource, must be assigned to different slots for which the price of the resource can vary. The objective is then to assign items such that the overall resource cost is minimized. Such problems arise commonly in domains such as production scheduling in the presence of fluctuating renewable energy costs, or variants of the Travelling Salesman Problem. In Constraint Programming, this can be naturally modeled in two ways: (a) with a sum of element constraints; (b) with a MinimumAssignment constraint. Unfortunately, the sum of element constraints achieves only weak filtering, and the MinimumAssignment constraint does not scale well on large instances. This work proposes a third approach by introducing the ResourceCostAllDifferent constraint and an associated incremental and scalable filtering algorithm running in $\mathcal{O}(n \cdot m)$, where n is the number of unbound variables and m is the maximum domain size of unbound variables. Its goal is to compute the total cost in a scalable manner while accounting for the fact that all assignments must be different. We first evaluate the efficiency of the new filtering on a real industrial problem and then on the Product Matrix Travelling Salesman Problem, a special case of the Asymmetric Travelling Salesman Problem. The study shows experimentally that our approach generally outperforms the decomposition and MinimumAssignment approaches on the problems we considered.
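The gap between the two classical models can be seen with a tiny numeric example: the decomposition into element constraints lets every item independently take its cheapest slot, while a minimum-cost assignment respects the all-different requirement. The sketch below (with a made-up cost matrix) contrasts the two lower bounds; it illustrates the motivation only and is not the paper's incremental filtering algorithm.

```python
# Compares two lower bounds on the total resource cost of assigning items to
# distinct slots: (a) the decomposition bound, where each item independently
# picks its cheapest slot and the all-different requirement is ignored, and
# (b) the bound from a minimum-cost assignment, which respects it.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: item i's resource demand times the price of slot j (hypothetical).
cost = np.array([
    [2, 9, 9],
    [3, 9, 9],
    [4, 9, 9],
])

decomposition_bound = cost.min(axis=1).sum()     # 2 + 3 + 4 = 9
rows, cols = linear_sum_assignment(cost)
assignment_bound = cost[rows, cols].sum()        # 2 + 9 + 9 = 20

print(decomposition_bound, assignment_bound)
# The assignment bound is much tighter because all three items cannot share
# the single cheap slot; exactly the information the decomposition loses.
```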
8.
Let $F_1,\ldots,F_s \in \mathbb{R}[X_1,\ldots,X_n]$ be polynomials of degree at most $d$, and suppose that $F_1,\ldots,F_s$ are represented by a division-free arithmetic circuit of non-scalar complexity $L$. Let $\mathcal{A}$ be the arrangement of $\mathbb{R}^n$ defined by $F_1,\ldots,F_s$. For any point $x \in \mathbb{R}^n$, we consider the task of determining the signs of the values $F_1(x),\ldots,F_s(x)$ (sign condition query) and the task of determining the connected component of $\mathcal{A}$ to which $x$ belongs (point location query). By an extremely simple reduction to the well-known case where the polynomials $F_1,\ldots,F_s$ are affine linear (i.e., polynomials of degree one), we first show that there exists a database of (possibly enormous) size $s^{O(L+n)}$ which allows the evaluation of the sign condition query using only $(Ln)^{O(1)}\log(s)$ arithmetic operations. The key point of this paper is the proof that this upper bound is almost optimal. In addition, we show that the point location query can be evaluated using $d^{O(n)}\log(s)$ arithmetic operations. Based on a different argument, analogous complexity upper bounds are exhibited with respect to the bit model in case $F_1,\ldots,F_s$ belong to $\mathbb{Z}[X_1,\ldots,X_n]$ and satisfy a certain natural genericity condition. Mutatis mutandis, our upper-bound results may be applied to the sparse and dense representations of $F_1,\ldots,F_s$.
9.
F. Modugno, N. G. Leveson, J. D. Reese, K. Partridge, S. D. Sandys 《Requirements Engineering》1997,2(2):65-78
This paper describes an integrated approach to safety analysis of software requirements and demonstrates the feasibility and utility of applying the individual techniques and the integrated approach on the requirements specification of a guidance system for a high-speed civil transport being developed at NASA Ames. Each analysis found different types of errors in the specification; thus together the techniques provided a more comprehensive safety analysis than any individual technique. We also discovered that the more the analysts knew about the application and the model, the more successful they were in finding errors. Our findings imply that the most effective safety-analysis tools will assist rather than replace the analyst.
A shorter version of this paper appeared in the Proceedings of the 3rd International Symposium on Requirements Engineering, Annapolis, Maryland, January 1997. The research described has been partly funded by NASA/Langley Grant NAG-1-1495, NSF Grant CCR-9396181, and the California PATH Program of the University of California.
10.
Integrating software components to produce large-scale software systems is an effective way to reuse experience and reduce cost. However, unexpected interactions among components when integrated into software systems are often the cause of failures. Discovering these composition errors early in the development process could lower the cost and effort of fixing them. This paper introduces a rigorous analysis approach to software design composition based on automated verification techniques. We show how to represent, instantiate and integrate design components, and how to find design composition errors using model checking techniques. We illustrate our approach with a Web-based hypermedia case study.
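As a toy illustration of finding a composition error by exhaustive state exploration, the sketch below composes two hypothetical component state machines by interleaving their transitions and searches for a reachable global state that violates a safety condition. The components, the error condition, and the interleaving semantics (no synchronization on shared actions) are all simplifications and not the paper's actual technique or tooling.

```python
# Toy illustration of catching a composition error: two components are
# composed by interleaving their transitions, and a breadth-first search
# looks for a reachable global state that violates a safety condition.
from collections import deque

# Each component: {state: [(action, next_state), ...]}  (hypothetical designs)
browser = {"idle": [("request", "waiting")],
           "waiting": [("response", "idle"), ("timeout", "error")]}
server = {"ready": [("request", "busy")],
          "busy": [("response", "ready")]}

def bad(global_state):
    # Composition error: the browser times out while the server is still busy.
    return global_state == ("error", "busy")

def reachable_bad(init=("idle", "ready")):
    seen, frontier = {init}, deque([init])
    while frontier:
        b, s = frontier.popleft()
        if bad((b, s)):
            return True
        successors = [(nb, s) for _, nb in browser.get(b, [])] + \
                     [(b, ns) for _, ns in server.get(s, [])]
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

print(reachable_bad())   # True: the error combination is reachable
```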
11.
Detecting pattern-based outliers
Outlier detection targets exceptional data that deviate from the general pattern. Besides high-density clustering, there is another pattern, low-density regularity. Accordingly, there are two types of outliers with respect to these patterns. We propose two techniques: one to identify the two patterns and the other to detect the corresponding outliers.
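The abstract is terse, so the sketch below shows one common, generic way of flagging points that deviate from dense regions: scoring each point by its average distance to its k nearest neighbours. It is a baseline illustration of density-based outlier detection, not the two techniques proposed in the paper; the data are synthetic.

```python
# Generic density-based outlier scoring: points whose average distance to
# their k nearest neighbours is large sit in low-density regions and are
# flagged as outliers. A common baseline, not the paper's method.
import numpy as np

def knn_outlier_scores(points, k=3):
    diffs = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(-1))       # pairwise distances
    np.fill_diagonal(dist, np.inf)             # ignore self-distance
    nearest = np.sort(dist, axis=1)[:, :k]     # k nearest neighbours
    return nearest.mean(axis=1)                # higher score = more isolated

rng = np.random.default_rng(0)
cluster = rng.normal(0, 0.5, size=(50, 2))     # dense synthetic cluster
outlier = np.array([[5.0, 5.0]])               # far-away point
data = np.vstack([cluster, outlier])

scores = knn_outlier_scores(data)
print(scores.argmax())                         # 50 -> the injected outlier
```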
12.
Nabil Kamel, Ping Wu, Stanley Y. W. Su 《The VLDB Journal: The International Journal on Very Large Data Bases》1994,3(1):53-76
Several object-oriented database management systems have been implemented without an accompanying theoretical foundation for constraint and query specification and processing. The pattern-based object calculus presented in this article provides such a theoretical foundation for describing and processing object-oriented databases. We view an object-oriented database as a network of interrelated classes (i.e., the intension) and a collection of time-varying object association patterns (i.e., the extension). The object calculus is based on first-order logic. It provides the formalism for interpreting precisely and uniformly the semantics of queries and integrity constraints in object-oriented databases. The power of the object calculus is shown in four aspects. First, associations among objects are expressed explicitly in an object-oriented database. Second, the non-association operator is included in the object calculus. Third, set-oriented operations can be performed on both homogeneous and heterogeneous object association patterns. Fourth, our approach does not assume a specific form of database schema. The proposed formalism is also applied to the design of high-level object-oriented query and constraint languages.
13.
Minxue Pan, Xuandong Li 《International Journal on Software Tools for Technology Transfer (STTT)》2012,14(6):639-651
Message Sequence Chart (MSC) is a graphical and textual language for describing the interactions between system components. MSC specifications (MSSs) combine a set of basic MSCs (bMSCs) with a High-level MSC that describes potentially iterating and branching system behavior by specifying compositions of the basic MSCs, offering an intuitive and visual way of specifying design requirements. With their concurrent, timing, and asynchronous properties, MSSs are prone to errors, and their analysis is both important and difficult. This paper deals with timing analysis of MSC specifications with asynchronous concatenation. For an MSC specification, we require that, for any loop, its first node be flexible in execution time and any associated external timing constraint be enforced on the entire loop. Such an MSC specification is called a flexible loop-closed MSC specification (FLMSS). We show that for FLMSSs, the reachability analysis and bounded delay analysis problems can be solved efficiently by linear programming. The solutions have been implemented in our tool TASS and evaluated by experiments.
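To show the linear-programming flavour of such timing analysis, the sketch below treats event timestamps as LP variables and pairwise delay bounds as linear constraints, then checks feasibility with SciPy. The events and constraints are hypothetical and the encoding is not the one used by the authors' TASS tool.

```python
# Timing feasibility of one event sequence as a linear program: variables are
# event timestamps, constraints bound the delay between pairs of events.
# Illustrative only; not the encoding used in the paper or in TASS.
import numpy as np
from scipy.optimize import linprog

events = ["send_a", "recv_a", "send_b", "recv_b"]
idx = {e: i for i, e in enumerate(events)}

# (earlier event, later event, min delay, max delay) -- hypothetical constraints
constraints = [
    ("send_a", "recv_a", 1, 4),
    ("send_b", "recv_b", 1, 4),
    ("send_a", "recv_b", 0, 6),   # end-to-end bound
]

A_ub, b_ub = [], []
for a, b, lo, hi in constraints:
    row = np.zeros(len(events))
    row[idx[b]], row[idx[a]] = 1, -1
    A_ub.append(-row)     # t_b - t_a >= lo  becomes  -(t_b - t_a) <= -lo
    b_ub.append(-lo)
    A_ub.append(row)      # t_b - t_a <= hi
    b_ub.append(hi)

res = linprog(c=np.zeros(len(events)), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, None)] * len(events))
print("feasible" if res.success else "timing constraints are inconsistent")
```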
14.
In this paper, we propose a dense stereo algorithm based on the census transform and improved dynamic programming (DP). Traditional scanline-based DP algorithms are the most efficient among global algorithms, but are well known to suffer from the streak effect. To solve this problem, we improve the traditional three-state DP algorithm by taking advantage of an extended version of the sequential vertical consistency constraint. Using this method, we greatly increase the accuracy of the disparity map. Optimizations have been made so that the computational cost is increased by only about 20%, and the additional memory needed for the improvement is negligible. Experimental results show that our algorithm outperforms many state-of-the-art algorithms of similar efficiency on the Middlebury College stereo website. Moreover, the algorithm is robust for image pairs with utterly different contrasts, thanks to the use of the census transform as the basic matching metric.
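For readers unfamiliar with the census transform, the sketch below computes a 3x3 census code per pixel, uses the Hamming distance between codes as the matching cost, and picks disparities with a naive winner-take-all search in place of the paper's improved three-state DP. Border handling (wrap-around) and the synthetic test image are simplifications.

```python
# Minimal census-transform stereo sketch: 3x3 census codes, Hamming-distance
# matching cost, and a naive winner-take-all disparity pick standing in for
# the paper's improved three-state dynamic programming.
import numpy as np

def census_3x3(img):
    h, w = img.shape
    codes = np.zeros((h, w), dtype=np.uint16)
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    for bit, (dy, dx) in enumerate(offsets):
        # Crude border handling: neighbourhoods wrap around the image edges.
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        codes |= ((shifted > img).astype(np.uint16) << bit)
    return codes

def hamming(a, b):
    # Only 8 bits are used per code, so casting to uint8 is lossless here.
    return np.unpackbits((a ^ b).astype(np.uint8)).reshape(*a.shape, 8).sum(-1)

def wta_disparity(left, right, max_disp=8):
    cl, cr = census_3x3(left), census_3x3(right)
    h, w = left.shape
    cost = np.full((h, w, max_disp + 1), np.inf)
    for d in range(max_disp + 1):
        cost[:, d:, d] = hamming(cl[:, d:], cr[:, :w - d])
    return cost.argmin(axis=2)     # disparity with the lowest matching cost

# Synthetic test: the right image is the left image shifted by two pixels.
left = np.random.default_rng(1).integers(0, 255, (32, 32))
right = np.roll(left, -2, axis=1)
print(np.median(wta_disparity(left, right)))   # ~2
```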
15.
F. Hooshmand 《Optimization methods & software》2016,31(2):359-376
Multistage stochastic programming with endogenous uncertainty is a relatively new topic in which the timing of uncertainty realization is decision-dependent. In this case, the number of nonanticipativity constraints (NACs) grows very quickly with the number of scenarios, making the problem computationally intractable. Fortunately, a large number of NACs are typically redundant, and their elimination leads to a considerable reduction in the problem size. Identifying redundant NACs has been addressed in the literature only in the special case where the scenario set equals the Cartesian product of all possible outcomes for the endogenous parameters; however, this condition rarely holds in practice. In this paper, we consider the general case where the scenario set is an arbitrary set, and we propose two approaches able to identify all redundant NACs. The first approach is a mixed integer programming formulation and the second is an exact polynomial-time algorithm. Proving that the proposed algorithm achieves the greatest possible reduction in the number of NACs is another novelty of this paper. Computational results evaluate the efficiency of the proposed approaches.
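To convey why so many NACs are redundant, the sketch below handles only the simple, decision-independent case: scenarios that agree on the parameters observed so far must share decisions, so linking them in a chain is enough and the remaining pairwise constraints are redundant. The scenario data are invented, and this is emphatically not the paper's algorithm, whose contribution is the harder endogenous case over an arbitrary scenario set.

```python
# Flavour of NAC reduction in the decision-independent case: scenarios with
# identical observed outcomes must obey identical decisions, so a chain of
# constraints per group replaces the full set of pairwise constraints.
from itertools import combinations

# Hypothetical scenarios: outcomes of three uncertain parameters per scenario.
scenarios = {"s1": (0, 0, 1), "s2": (0, 0, 0), "s3": (0, 1, 1), "s4": (0, 1, 0)}

def group(observed):
    """Group scenarios that are indistinguishable on the observed parameters."""
    groups = {}
    for name, outcome in scenarios.items():
        key = tuple(outcome[i] for i in observed)
        groups.setdefault(key, []).append(name)
    return [sorted(g) for g in groups.values()]

def nac_pairs(observed):
    """Every pairwise NAC within each indistinguishability group."""
    return [frozenset(p) for g in group(observed) for p in combinations(g, 2)]

def chain_pairs(observed):
    """A spanning chain of NACs per group; the other pairs are redundant."""
    return [frozenset((g[i], g[i + 1])) for g in group(observed)
            for i in range(len(g) - 1)]

print(len(nac_pairs([0])), "pairwise NACs vs", len(chain_pairs([0])), "in a chain")
# 6 pairwise NACs vs 3 in a chain
```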
16.
17.
Roberto Bagnara, Roberta Gori, Patricia M. Hill, Enea Zaffanella 《Information and Computation》2004,193(2):84-116
Logic languages based on the theory of rational, possibly infinite, trees have much appeal in that rational trees allow for faster unification (due to the safe omission of the occurs-check) and increased expressivity (cyclic terms can provide very efficient representations of grammars and other useful objects). Unfortunately, the use of infinite rational trees has problems. For instance, many of the built-in and library predicates are ill-defined for such trees and need to be supplemented by run-time checks whose cost may be significant. Moreover, some widely used program analysis and manipulation techniques are correct only for those parts of programs working over finite trees. It is thus important to obtain, automatically, knowledge of the program variables (the finite variables) that, at the program points of interest, will always be bound to finite terms. For these reasons, we propose a new data-flow analysis, based on abstract interpretation, that captures such information. We present a parametric domain where a simple component for recording finite variables is coupled, in the style of the open product construction of Cortesi et al., with a generic domain (the parameter of the construction) providing sharing information. The sharing domain is abstractly specified so as to guarantee the correctness of the combined domain and the generality of the approach. This finite-tree analysis domain is further enhanced by coupling it with a domain of Boolean functions, called finite-tree dependencies, that precisely captures how the finiteness of some variables influences the finiteness of others. We also summarize our experimental results, showing that finite-tree analysis, enhanced with finite-tree dependencies, is a practical means of obtaining precise finiteness information.
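The property the analysis tracks can be illustrated on a single concrete substitution: a variable is "finite" exactly when the term it is bound to contains no cycle through variable bindings. The sketch below checks this dynamically on a toy term representation; the paper, by contrast, infers the property statically by abstract interpretation.

```python
# Concrete notion behind the analysis: is a variable bound to a finite term,
# i.e. a term with no cycle through variable bindings? The term representation
# here is invented: variables are strings, compound terms are (functor, args).
def is_finite(var, bindings, in_progress=None):
    in_progress = in_progress or set()
    if var in in_progress:
        return False                  # came back to ourselves: a cycle
    if var not in bindings:
        return True                   # unbound variable: trivially finite

    def finite_term(t, seen):
        if isinstance(t, str):        # a variable occurrence
            return is_finite(t, bindings, seen)
        functor, args = t             # a compound term (atoms have no args)
        return all(finite_term(a, seen) for a in args)

    return finite_term(bindings[var], in_progress | {var})

bindings = {
    "X": ("f", ["Y"]),          # X = f(Y)
    "Y": ("g", ["X"]),          # Y = g(X)  -> X denotes a rational (infinite) tree
    "Z": ("f", [("a", [])]),    # Z = f(a)  -> finite
}
print(is_finite("X", bindings))   # False
print(is_finite("Z", bindings))   # True
```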
18.
A general-purpose scene-analysis system is described which uses constraint-filtering techniques to apply domain knowledge in the interpretation of the regions extracted from a segmented image. An example is given of the configuration of the system for a particular domain, FLIR (Forward Looking InfraRed) images, as well as results of the system's performance on some typical images from this domain. A number of improvements to these techniques are proposed.
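Constraint filtering over region interpretations can be illustrated with a small AC-3-style example: domain knowledge lists which label pairs may occur on adjacent regions, and filtering removes candidate labels that have no support in a neighbouring region. The regions, labels and adjacency rules below are invented, and the code shows the generic technique rather than the system described in the abstract.

```python
# AC-3-style constraint filtering over candidate region labels: adjacency
# rules encode domain knowledge, and filtering prunes labels with no
# consistent support in a neighbouring region. All data here are made up.
from collections import deque

domains = {                      # candidate interpretations per image region
    "r1": {"sky", "road"},
    "r2": {"water", "road"},
    "r3": {"vehicle", "water"},
}
adjacent = [("r1", "r2"), ("r2", "r3")]
allowed = {("sky", "water"), ("water", "vehicle"), ("road", "road")}

def compatible(a, b):
    return (a, b) in allowed or (b, a) in allowed

def revise(x, y):
    removed = {a for a in domains[x] if not any(compatible(a, b) for b in domains[y])}
    domains[x] -= removed
    return bool(removed)

def ac3():
    queue = deque(arc for pair in adjacent for arc in (pair, pair[::-1]))
    while queue:
        x, y = queue.popleft()
        if revise(x, y):
            for a, b in adjacent:            # re-examine the other neighbours of x
                for z in (a, b):
                    if x in (a, b) and z not in (x, y):
                        queue.append((z, x))
    return domains

print(ac3())   # {'r1': {'sky'}, 'r2': {'water'}, 'r3': {'vehicle'}}
```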
19.
R. Joan-Arinyo, A. Soto-Riera, J. Vilaplana-Pastó 《Computer aided design》2004,36(2):123-140
Geometric problems defined by constraints can be represented by geometric constraint graphs whose nodes are geometric elements and whose arcs represent geometric constraints. Reduction and decomposition are techniques commonly used to analyze geometric constraint graphs in geometric constraint solving. In this paper we first introduce the concept of the deficit of a constraint graph. Then we give a new formalization of the decomposition algorithm due to Owen. This new formalization is based on preserving the deficit rather than on computing triconnected components of the graph, and is simpler. Finally, we apply tree decompositions to prove that the class of problems solved by the formalizations studied here and other formalizations reported in the literature is the same.
20.
Context: Patterns are used in different disciplines as a way to record expert knowledge for problem solving in specific areas. Their systematic use in Software Engineering promotes quality, standardization, reusability and maintainability of software artefacts. The full realisation of their power is however hindered by the lack of a standard formalization of the notion of pattern.
Objective: Our goal is to provide a language-independent formalization of the notion of pattern, so that it allows its application to different modelling languages and tools, as well as generic methods to enable pattern discovery, instantiation, composition, and conflict analysis.
Method: For this purpose, we present a new visual and formal, language-independent approach to the specification of patterns. The approach is formulated in a general way, based on graphs and category theory, and allows the specification of patterns in terms of (nested) variable submodels, constraints on their allowed variance, and inter-pattern synchronization across several diagrams (e.g. class and sequence diagrams for UML design patterns).
Results: We provide a formal notion of pattern satisfaction by models and propose mechanisms to suggest model transformations so that models become consistent with the patterns. We define methods for pattern composition and conflict analysis. We illustrate our proposal on UML design patterns, and discuss its generality and applicability to different types of patterns, e.g. workflow patterns, enterprise integration patterns and interaction patterns.
Conclusion: The approach has proven to be powerful enough to formalize patterns from different domains, providing methods to analyse conflicts and dependencies that usually are expressed only in textual form. Its language independence makes it suitable for integration in meta-modelling tools and for use in Model-Driven Engineering.