Similar Documents
20 similar documents found (search time: 670 ms)
1.
Defining operational semantics for a process algebra is often based either on labeled transition systems, which account for interaction with a context, or on so-called reduction semantics: we assume we have a representation of the whole system and compute unlabeled reduction transitions (leading to a distribution over states in the probabilistic case). In this paper we consider mixed models, with states where the system is still open (towards interaction with a context) and states where the system is already closed. The idea is that (open) parts of a system “P” can be closed via an operator “PG” that turns already synchronized actions whose “handle” is specified inside “G” into prioritized reduction transitions (and, therefore, states performing them into closed states). We show that the operator “PG” can be used to express multi-level priorities and external probabilistic choices (by assigning weights to handles inside G), and that, by considering reduction transitions as the only unobservable τ transitions, the proposed technique is compatible, for process algebras with general recursion, with both standard (probabilistic) observational congruence and a notion of equivalence which aggregates reduction transitions in a (much coarser) trace-based manner. We also observe that the trace-based aggregated transition system can be obtained directly in operational semantics, and we present this “aggregating” semantics. Finally, we discuss how the open/closed approach can also be used to express discrete and continuous (exponential probabilistic) time, and we show that, in such timed contexts, the trace-based equivalence aggregates more than traditional lumping-based equivalences over Markov chains.

2.
An essential prerequisite for constructing a manifold trihedral polyhedron from a given natural (or partial-view) sketch is solving the “wireframe sketch from a single natural sketch (WSS)” problem, which is the subject of this paper. Published solutions view WSS as an “image-processing”/“computer-vision” problem, placing the emphasis on analyzing the given input (the natural sketch) using various heuristics. This paper proposes a new WSS method based on robust tools from graph theory, solid modeling, and Euclidean geometry. The focus is on producing a minimal wireframe sketch that corresponds to a topologically correct polyhedron.

3.
A given polynomial of degree less than or equal to n naturally “blossoms” into a function of n variables called its blossom. Considered as a polynomial of degree less than or equal to n+1, it “blossoms” into a “new” blossom, which is now a function of n+1 variables. A classical formula expresses any value of this new blossom as a strictly convex combination of n+1 values of the initial one. We establish a similar formula for Chebyshevian blossoms.
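For reference, the classical polynomial case the abstract alludes to can be written down explicitly (standard blossoming notation, not quoted from the paper): if f is the blossom of a polynomial p of degree at most n, and F is its blossom when p is regarded as a polynomial of degree at most n+1, then each value of F is the equal-weight (hence strictly convex) combination of the n+1 values of f obtained by dropping one argument:

```latex
F(u_1,\dots,u_{n+1}) \;=\; \frac{1}{n+1}\sum_{i=1}^{n+1} f(u_1,\dots,u_{i-1},u_{i+1},\dots,u_{n+1}).
```

For example, with p(x) = x and n = 1 we have f(u) = u, and the formula gives F(u_1, u_2) = (u_1 + u_2)/2, which is indeed the degree-2 blossom of x.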

4.
Some computationally hard problems, such as deduction in logical knowledge bases, are such that part of an instance is known well before the rest of it, and remains the same for several subsequent instances of the problem. In these cases, it is useful to preprocess this known part off-line so as to simplify the remaining on-line problem. In this paper we investigate such a technique in the context of intractable, i.e., NP-hard, problems. Recent results in the literature show that not all NP-hard problems behave in the same way: for some of them, preprocessing yields polynomial-time on-line simplified problems (we call them compilable), while for others, compilability implies consequences that are considered unlikely. Our primary goal is to provide a sound methodology that can be used to either prove or disprove that a problem is compilable. To this end, we define new models of computation, complexity classes, and reductions. We find complete problems for such classes, “completeness” meaning they are “the least likely to be compilable.” We also investigate preprocessing that does not yield polynomial-time on-line algorithms but generically “decreases” complexity. This leads us to define “hierarchies of compilability” that are the analog of the polynomial hierarchy. A detailed comparison of our framework to the idea of “parameterized tractability” shows the differences between the two approaches.
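To make the off-line/on-line split concrete, here is a minimal illustrative sketch (a toy example, not the paper's construction): the fixed part of an instance, a directed graph of implications, is compiled off-line into its transitive closure, after which every on-line query is a constant-time lookup.

```python
from itertools import product

def compile_offline(nodes, edges):
    """Off-line phase: precompute reachability over the fixed part of
    the instance (Floyd-Warshall transitive closure, O(n^3), paid once)."""
    reach = {(u, v): False for u, v in product(nodes, repeat=2)}
    for u in nodes:
        reach[(u, u)] = True
    for u, v in edges:
        reach[(u, v)] = True
    for k, i, j in product(nodes, repeat=3):
        if reach[(i, k)] and reach[(k, j)]:
            reach[(i, j)] = True
    return reach

def query_online(reach, u, v):
    """On-line phase: each query against the compiled table is O(1)."""
    return reach[(u, v)]

table = compile_offline({"a", "b", "c"}, {("a", "b"), ("b", "c")})
assert query_online(table, "a", "c")   # follows by transitivity
```

Whether a polynomial-size, query-efficient compilation of this kind exists at all is exactly the question the paper's framework is designed to answer for NP-hard problems.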

5.
The current paper details results from the Girls and ICT survey phase of a three-year study investigating factors associated with low participation rates by females in education pathways leading to professional-level information and communications technology (ICT) professions. The study is funded through the Australian Research Council's (ARC) Linkage Grants Scheme. It involves a research partnership between Education Queensland (EQ), industry partner Technology One, and academic researchers at (affiliation removed for review purposes). Respondents to the survey were 1453 senior high school girls. Comparisons were drawn between Takers (n = 131) and Non-Takers (n = 1322) of advanced-level computing subjects. Significant differences between the groups were found on four questions: “The subjects are interesting”; “I am very interested in computers”; “The subject will be helpful to me in my chosen career path after school”; and “It suited my timetable”. The research has demonstrated that senior high school girls tend to perceive advanced computing subjects as boring and to express a strong aversion to computers.

6.
On the controllability of linear juggling mechanical systems (total citations: 1; self-citations: 0; by others: 1)
This paper deals with the controllability of a class of nonsmooth complementarity mechanical systems. Due to their particular structure, they can be decomposed into an “object” and a “robot”; consequently, they are named juggling systems. It is shown that the accessibility of the “object” can be characterized by nonlinear constrained equations, or generalized equations. Examples are presented, including a simple model of backlash. The main focus of the work is on linear jugglers.

7.
An integrated multi-unit chemical plant presents a challenging control design problem due to the existence of recycle streams. In this paper, we develop a framework for analyzing the effects of recycle dynamics on closed-loop performance, from which a systematic design of a decentralized control system for a recycled, multi-unit plant is established. In the proposed approach, the recycle streams are treated as unmodelled dynamics of the “unit” model, so that their effects on closed-loop stability and performance can be analyzed using robust control theory. As a result, two measures are produced: (1) the ν-gap metric, which quantifies the strength of the recycling effects, and (2) the maximum stability margin of the “unit” controller, which represents the ability of the “unit” controller to compensate for such effects. A simple rule for the “unit” control design is then established by combining the two measures, in order to guarantee good overall closed-loop performance. As illustrated by several design examples, the controllability of a recycled, multi-unit process under a decentralized “unit” controller can be determined without any detailed design of the “unit” controller, because the simple rule is calculated from open-loop information only.

8.
We present a novel “dynamic learning” approach for an intelligent image database system that automatically improves object segmentation and labeling, without user intervention, as new examples become available, for object-based indexing. The proposed approach is an extension of our earlier work on “learning by example,” which addressed labeling of similar objects in a set of database images based on a single example. The proposed dynamic learning procedure utilizes multiple example object templates to improve the accuracy of existing object segmentations and labels. Multiple example templates may be images of the same object from different viewing angles, or images of related objects. This paper also introduces a new shape similarity metric called the normalized area of symmetric differences (NASD), which has properties desirable for the proposed “dynamic learning” scheme and is more robust against the boundary noise that results from automatic image segmentation. The performance of the dynamic learning procedures has been demonstrated by experimental results.
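The abstract does not spell out the NASD formula; the sketch below assumes one plausible normalization (symmetric-difference area divided by the summed areas of the two shapes), which should be treated as a guess rather than the paper's definition.

```python
import numpy as np

def nasd(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Normalized area of symmetric differences between two aligned
    binary shape masks. Normalizing by the summed areas is an
    assumption; the paper may normalize differently (e.g., by the union)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    sym_diff = np.logical_xor(a, b).sum()   # pixels in exactly one shape
    total = a.sum() + b.sum()
    return float(sym_diff) / total if total else 0.0

# Identical shapes score 0; disjoint shapes of equal area score 1.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.roll(a, 4, axis=1)                   # shifted, disjoint copy
print(nasd(a, a), nasd(a, b))               # -> 0.0 1.0
```

Under this normalization the metric lies in [0, 1], which makes it easy to threshold when deciding whether a new template should refine an existing label.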

9.
10.
A simple method for specifying the shape and orientation of a convex polygon is described. The method utilizes the eigenvalues and eigenvectors of a “moment of inertia” or mass matrix computed from the nodal coordinates of the polygon. The “shape” is then characterized by a parameter computed from the two eigenvalues ξ1 and ξ2, and the orientation by the axial direction of the first eigenvector. FORTRAN subroutines are provided for this algorithm.
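The paper ships FORTRAN subroutines; the Python sketch below illustrates the same idea under stated assumptions (an unweighted second-moment matrix of the centered vertex coordinates, whereas the paper's mass matrix may weight vertices differently):

```python
import numpy as np

def shape_and_orientation(vertices: np.ndarray):
    """Eigen-decomposition of a 'moment of inertia'-style 2x2 matrix
    built from polygon nodal coordinates. Returns the eigenvalue pair
    (xi1 <= xi2) characterizing shape, and the orientation angle of
    the first eigenvector. Unweighted vertex moments are an assumption."""
    centered = vertices - vertices.mean(axis=0)
    m = centered.T @ centered               # 2x2 second-moment matrix
    eigvals, eigvecs = np.linalg.eigh(m)    # eigenvalues in ascending order
    xi1, xi2 = eigvals
    angle = float(np.arctan2(eigvecs[1, 0], eigvecs[0, 0]))
    return (xi1, xi2), angle

# An axis-aligned 4:1 rectangle: the eigenvalue ratio reflects the
# elongation, and the angle picks out one of its symmetry axes.
rect = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 1.0], [0.0, 1.0]])
print(shape_and_orientation(rect))
```

A shape parameter built from the ratio of ξ1 to ξ2 is invariant to scaling and rotation, which is presumably why the eigenvalues, rather than the raw moments, characterize shape.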

11.
The bisimulation “up-to” technique provides an effective way to reduce the amount of work in proving bisimilarity of two processes. This paper develops a fresh and direct approach to generalizing this set-theoretic “up-to” principle to the setting of coalgebra theory. The notion of a consistent function is introduced as a generalization of Sangiorgi's sound function. Then, in order to prove that a relation contains only bisimilar pairs, it is sufficient to find a morphism from it to the “lifting” of its image under some consistent function. An example is given showing that every self-bisimulation in normed BPA is just such a relation. Moreover, we investigate the connection between span-bisimulation and ref-bisimulation. As a result, λ-bisimulation turns out to be covered by our new principle.

12.
We show that a certain simple call-by-name continuation semantics of Parigot's λμ-calculus is complete. More precisely, for every λμ-theory we construct a cartesian closed category such that the ensuing continuation-style interpretation of λμ, which maps terms to functions sending abstract continuations to responses, is full and faithful. Thus, any λμ-category in the sense of L. Ong (1996, in “Proceedings of LICS '96,” IEEE Press, New York) is isomorphic to a continuation model (Y. Lafont, B. Reus, and T. Streicher, “Continuation Semantics or Expressing Implication by Negation,” Technical Report 93-21, University of Munich) derived from a cartesian closed category of continuations. We also extend this result to a later call-by-value version of λμ developed by C.-H. L. Ong and C. A. Stewart (1997, in “Proceedings of ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Paris, January 1997,” Assoc. Comput. Mach. Press, New York).

13.
It is shown that the steady-state solutions to the Navier–Stokes equations in d-dimensional domains Ω, d = 2, 3, with Dirichlet and slip curl boundary conditions are exponentially stabilizable by proportional controllers with support in open subsets ω ⊂ Ω such that Ω∖ω is sufficiently “small”.

14.
This paper describes a comparative study of a multidimensional visualisation technique and multivariate statistical process control (MSPC) for process historical data analysis. The visualisation technique uses parallel coordinates, which visualise multidimensional data in a two-dimensional presentation and allow identification of clusters and outliers, and can therefore be used to detect abnormal events. The study is based on a database covering 527 days of operation of an industrial wastewater treatment plant. It was found that both the visualisation technique and MSPC based on the T2 chart captured the same 17 days as “clearly abnormal” and another eight days as “likely abnormal”. Pattern recognition using K-means clustering has also been applied to the same data in the literature and was found to identify 14 of the 17 “clearly abnormal” days.
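For reference, the T2 chart in this comparison is the standard Hotelling statistic of MSPC; the following is a generic sketch (common practice, not the paper's exact implementation, which in industrial use is often computed on PCA scores rather than raw variables):

```python
import numpy as np
from scipy import stats

def t2_statistics(X: np.ndarray) -> np.ndarray:
    """Hotelling's T^2 of each observation against the sample mean and
    covariance of the reference data X (rows = days, cols = variables)."""
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

def t2_limit(n_samples: int, n_vars: int, alpha: float = 0.01) -> float:
    """A commonly used F-distribution upper control limit."""
    f = stats.f.ppf(1 - alpha, n_vars, n_samples - n_vars)
    return (n_vars * (n_samples - 1) * (n_samples + 1)
            / (n_samples * (n_samples - n_vars))) * f

rng = np.random.default_rng(0)
X = rng.normal(size=(527, 5))   # stand-in for 527 days x 5 process variables
flags = t2_statistics(X) > t2_limit(*X.shape)
print(int(flags.sum()), "days flagged as abnormal")
```

Days whose T^2 value exceeds the control limit are the candidates for the “clearly abnormal”/“likely abnormal” classification discussed in the abstract.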

15.
This paper gives a fresh look at my previous work on “epistemic actions” and information updates in distributed systems, from a coalgebraic perspective. I show that the “relational” semantics of epistemic programs, given in [BMS2] in terms of epistemic updates, can be understood in terms of functors on the category of coalgebras and the natural transformations associated to them. Then I introduce a new, alternative, more refined semantics for epistemic programs: programs as “epistemic coalgebras”. I argue for the advantages of this second semantics from a semantic, heuristic, syntactic, and proof-theoretic point of view. Finally, as a step towards a generalization, I show that these concepts make sense for other functors, and that apparently unrelated concepts, such as Bayesian belief updates and process transformations, can be seen to arise in the same way as our “epistemic actions”.

16.
We prove that a regular language defined by a boolean combination of generalized Σ1-sentences built using modular counting quantifiers can be defined by a boolean combination of Σ1-sentences in which only regular numerical predicates appear. The same statement, with “Σ1” replaced by “first-order,” is equivalent to the conjecture that the nonuniform circuit complexity class ACC is strictly contained in NC1. The argument introduces some new techniques, based on a combination of semigroup theory and Ramsey theory, which may shed some light on the general case.

17.
The quantitative μ-calculus qMμ extends the applicability of Kozen's standard μ-calculus [D. Kozen, Results on the propositional μ-calculus, Theoretical Computer Science 27 (1983) 333–354] to probabilistic systems. Subsequent to its introduction [C. Morgan and A. McIver, A probabilistic temporal calculus based on expectations, in: L. Groves and S. Reeves, editors, Proc. Formal Methods Pacific '97 (1997), available at [PSG, Probabilistic Systems Group: Collected reports, http://web.comlab.ox.ac.uk/oucl/research/areas/probs/bibliography.html]; also appears at [A. McIver and C. Morgan, “Abstraction, Refinement and Proof for Probabilistic Systems,” Technical Monographs in Computer Science, Springer, New York, 2005, Chap. 9]; M. Huth and M. Kwiatkowska, Quantitative analysis and model checking, in: Proceedings of 12th Annual IEEE Symposium on Logic in Computer Science, 1997], it has been developed by us [A. McIver and C. Morgan, Games, probability and the quantitative μ-calculus qMu, in: Proc. LPAR, LNAI 2514 (2002), pp. 292–310, revised and expanded at [A. McIver and C. Morgan, Results on the quantitative μ-calculus qMμ (2005), to appear in ACM TOCL]; also appears at [A. McIver and C. Morgan, “Abstraction, Refinement and Proof for Probabilistic Systems,” Technical Monographs in Computer Science, Springer, New York, 2005, Chap. 11]; A. McIver and C. Morgan, “Abstraction, Refinement and Proof for Probabilistic Systems,” Technical Monographs in Computer Science, Springer, New York, 2005; A. McIver and C. Morgan, Results on the quantitative μ-calculus qMμ (2005), to appear in ACM TOCL] and by others [L. de Alfaro and R. Majumdar, Quantitative solution of omega-regular games, Journal of Computer and System Sciences 68 (2004) 374–397]. Beyond its natural application to define probabilistic temporal logic [C. Morgan and A. McIver, An expectation-based model for probabilistic temporal logic, Logic Journal of the IGPL 7 (1999), pp. 779–804; also appears at [A. McIver and C. Morgan, “Abstraction, Refinement and Proof for Probabilistic Systems,” Technical Monographs in Computer Science, Springer, New York, 2005, Chap. 10]], there are a number of other areas that benefit from its use.

One application is stochastic two-player games, and the contribution of this paper is to depart from the usual notion of “absolute winning conditions” and to introduce a novel game in which players can “draw”. The extension is motivated by examples based on economic games: we propose an extension to qMμ so that they can be specified; we show that the extension can be expressed via a reduction to the original logic; and, via that reduction, we prove that the players can play optimally in the extended game using memoryless strategies.

18.
In this paper, we address a fundamental problem related to the induction of Boolean logic: given a set of data, represented as a set of binary “true n-vectors” (or “positive examples”) and a set of “false n-vectors” (or “negative examples”), we establish a Boolean function (or an extension) f, so that f is true (resp., false) on every given true (resp., false) vector. We further require that such an extension belong to a certain specified class of functions, e.g., the class of positive functions, the class of Horn functions, and so on. The class of functions represents our a priori knowledge or hypothesis about the extension f, which may be obtained from experience or from the analysis of mechanisms that may or may not cause the phenomena under consideration. Real-world data may contain errors; e.g., measurement and classification errors might come in when obtaining data, or there may be other influential factors not represented as variables in the vectors. In such situations, we have to give up the goal of establishing an extension that is perfectly consistent with the given data, and we are satisfied with an extension f having the minimum number of misclassifications. Both problems, i.e., the problem of finding an extension within a specified class of Boolean functions and the problem of finding a minimum-error extension in that class, are extensively studied in this paper. For certain classes we provide polynomial algorithms, and for other cases we prove NP-hardness.
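As a concrete instance of the first problem, consider the class of positive (monotone) functions: an extension in that class exists iff no negative example componentwise dominates a positive one, since monotonicity would force the dominating vector to be classified true. The check below is this standard observation only, a small sketch rather than any of the paper's algorithms:

```python
from itertools import product

def positive_extension_exists(true_vecs, false_vecs):
    """A positive (monotone) Boolean extension consistent with the data
    exists iff no false vector is componentwise >= some true vector."""
    return not any(
        all(fv >= tv for fv, tv in zip(f, t))
        for t, f in product(true_vecs, false_vecs)
    )

print(positive_extension_exists([(1, 0, 0)], [(0, 1, 1)]))  # True: consistent
print(positive_extension_exists([(1, 0, 0)], [(1, 1, 0)]))  # False: (1,1,0) >= (1,0,0)
```

The minimum-error variant, finding an extension with the fewest misclassifications when no consistent one exists, is where the paper's NP-hardness results come into play.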

19.
Bisimulation for Higher-Order Process Calculi (total citations: 3; self-citations: 0; by others: 3)
A higher-order process calculus is a calculus for communicating systems which contains higher-order constructs like communication of terms. We analyse the notion of bisimulation in these calculi. We argue that both the standard definition of bisimulation (i.e., the one for CCS and related calculi), as well as higher-order bisimulation [E. Astesiano, A. Giovini, and G. Reggio, in “STACS '88,” Lecture Notes in Computer Science, Vol. 294, pp. 207–226, Springer-Verlag, Berlin/New York, 1988; G. Boudol, in “TAPSOFT '89,” Lecture Notes in Computer Science, Vol. 351, pp. 149–161, Springer-Verlag, Berlin/New York, 1989; B. Thomsen, Ph.D. thesis, Dept. of Computing, Imperial College, 1990], are in general unsatisfactory because of their over-discrimination. We propose and study a new form of bisimulation for such calculi, called context bisimulation, which yields a more satisfactory discriminating power. A drawback of context bisimulation is the heavy use of universal quantification in its definition, which is hard to handle in practice. To resolve this difficulty we introduce triggered bisimulation and normal bisimulation, and we prove that they both coincide with context bisimulation. In the proof we exploit the factorisation theorem: when comparing the behaviour of two processes, it allows us to “isolate” subcomponents which might give differences, so that the analysis can be concentrated on them.
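For contrast, the “standard definition” that the authors find over-discriminating in the higher-order setting is ordinary strong bisimulation on a labelled transition system; here is a minimal first-order partition-refinement check (nothing higher-order, serving only as the baseline the abstract argues against):

```python
def bisimilar_classes(states, trans):
    """Coarsest strong bisimulation on a finite LTS.
    trans maps each state to a set of (label, successor) pairs.
    Naive partition refinement: split blocks until stable."""
    block = {s: 0 for s in states}          # start with a single block
    while True:
        # signature: the set of (label, target block) moves of a state
        sig = {s: frozenset((l, block[t]) for l, t in trans[s]) for s in states}
        ids = {}
        for s in states:
            ids.setdefault((block[s], sig[s]), len(ids))
        new_block = {s: ids[(block[s], sig[s])] for s in states}
        if new_block == block:
            return block
        block = new_block

# p and q both loop on 'a' and are bisimilar; r is deadlocked and is not.
trans = {"p": {("a", "p")}, "q": {("a", "q")}, "r": set()}
print(bisimilar_classes(["p", "q", "r"], trans))  # p, q share a block; r alone
```

In a higher-order calculus the transmitted values are themselves processes, which is precisely where matching transitions up to syntactic identity of the emitted term, as the standard definition does, becomes too discriminating.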

20.
A multidatabase system (MDBS) is a software system for the integration of preexisting and independent local database management systems (DBMSs). The transaction management problem in MDBSs consists of designing appropriate software, on top of the local DBMSs, such that users can execute transactions that span multiple local DBMSs without jeopardizing database consistency. The difficulty in transaction management in MDBSs arises from the heterogeneity of the transaction management algorithms used by the local DBMSs, and from the desire to preserve their local autonomy. In this paper, we develop a framework for designing fault-tolerant transaction management algorithms for MDBS environments that effectively overcomes the heterogeneity- and autonomy-induced problems. The developed framework builds on our previous work. It uses the approach described in S. Mehrotra et al. (1992, in “Proceedings of ACM–SIGMOD 1992 International Conference on Management of Data, San Diego, CA”) to overcome the problems in ensuring serializability that arise due to heterogeneity of the local concurrency control protocols. Furthermore, it uses a redo approach to recovery for ensuring transaction atomicity (Y. Breitbart et al., 1990, in “Proceedings of ACM–SIGMOD 1990 International Conference on Management of Data, Atlantic City, NJ”; Mehrotra et al., 1992, in “Proceedings of the Eleventh ACM SIGACT–SIGMOD–SIGART Symposium on Principles of Database Systems, San Diego, CA”; and A. Wolski and J. Veijalainen, 1990, in “Proceedings of the International Conference on Databases, Parallel Architectures and Their Applications,” pp. 321–330) that strives to ensure atomicity of transactions without the use of the 2PC protocol. We reduce the task of ensuring serializability in MDBSs in the presence of failures to solving three independent subproblems, solutions to which together constitute a complete strategy for failure-resilient transaction management in MDBS environments. We develop mechanisms with which each of the three subproblems can be solved without requiring any changes to the preexisting software of the local DBMSs and without compromising their autonomy.
