Similar Documents
20 similar documents found.
1.
A term rewriting system is called growing if each variable occurring on both the left-hand side and the right-hand side of a rewrite rule occurs at depth zero or one in the left-hand side. Jacquemard showed that the reachability and the sequentiality of linear (i.e., left- and right-linear) growing term rewriting systems are decidable. In this paper we show that Jacquemard's result can be extended to left-linear growing rewriting systems that may have right-nonlinear rewrite rules. This implies that the reachability and the joinability of a class of right-linear term rewriting systems are decidable, which improves the results for right-ground term rewriting systems by Oyamaguchi. Our result extends the class of left-linear term rewriting systems having a decidable call-by-need normalizing strategy. Moreover, we prove that the termination property is decidable for almost orthogonal growing term rewriting systems.
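As a concrete illustration of the definition above, here is a minimal Python sketch (our own, not from the paper) that checks the growing condition for a rule set, with terms modeled as nested tuples and variables as plain strings:

```python
# A minimal sketch of checking the "growing" condition: every variable shared
# between the left- and right-hand side of a rule must occur at depth 0 or 1
# in the left-hand side. Terms are ("f", arg1, ...) tuples; variables are
# plain strings. This encoding is our assumption, not the paper's notation.

def variables(term):
    """Collect all variables occurring in a term."""
    if isinstance(term, str):               # a variable
        return {term}
    vs = set()
    for arg in term[1:]:
        vs |= variables(arg)
    return vs

def var_depths(term, depth=0):
    """Yield (variable, depth) pairs for every variable occurrence."""
    if isinstance(term, str):
        yield term, depth
    else:
        for arg in term[1:]:
            yield from var_depths(arg, depth + 1)

def is_growing(rules):
    """A TRS is growing if each variable shared by the lhs and rhs of a rule
    occurs only at depth <= 1 in that rule's lhs."""
    for lhs, rhs in rules:
        shared = variables(lhs) & variables(rhs)
        for var, depth in var_depths(lhs):
            if var in shared and depth > 1:
                return False
    return True

# f(x, g(y)) -> h(x) is growing (x sits at depth 1 in the lhs);
# f(g(x)) -> h(x) is not (x sits at depth 2).
print(is_growing([(("f", "x", ("g", "y")), ("h", "x"))]))   # True
print(is_growing([(("f", ("g", "x")), ("h", "x"))]))        # False
```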

2.
The region calculus of Tofte and Talpin is a polymorphically typed lambda calculus with annotations that make memory allocation and deallocation explicit. It is intended as an intermediate language for implementing Hindley-Milner typed functional languages such as ML without traditional trace-based garbage collection. Static region and effect inference can be used to annotate a statically typed ML program with memory management primitives. Soundness of the calculus with respect to the region and effect system is crucial to guarantee safe deallocation of regions, i.e., deallocation should only take place for objects which are provably dead. The original soundness proof by Tofte and Talpin requires a complex co-inductive safety relation. In this paper, we present two small-step operational semantics for the region calculus and prove their type soundness with respect to the region and effect system. Following the standard syntactic approach of Wright, Felleisen, and Harper, we obtain simple inductive proofs. The first semantics is store-less. It is simple and elegant and gives rise to perspicuous proofs. The second semantics provides a store-based model for the region calculus. Albeit slightly more complicated, its additional expressiveness allows us to model operations on references with destructive update. A pure fragment of both small-step semantics is then proven equivalent to the original big-step operational approach of Tofte and Talpin. This leads to an alternative soundness proof for their evaluation-style formulation.
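To make the region discipline concrete, the following hypothetical Python sketch mimics letregion-style allocation: values are placed in an explicit region, and the whole region is deallocated when its lexical scope ends. It only illustrates the stack discipline that the soundness result protects; it is not the Tofte-Talpin calculus itself.

```python
# A small, hypothetical sketch of the stack discipline behind "letregion":
# values are allocated into an explicit region, and the entire region is
# deallocated when its scope ends. Illustration only, not the actual calculus.

from contextlib import contextmanager

class Region:
    def __init__(self, name):
        self.name, self.cells, self.live = name, [], True
    def alloc(self, value):
        assert self.live, f"allocation into deallocated region {self.name}"
        self.cells.append(value)
        return len(self.cells) - 1          # "address" within the region
    def read(self, addr):
        assert self.live, f"use after deallocation of region {self.name}"
        return self.cells[addr]

@contextmanager
def letregion(name):
    r = Region(name)
    try:
        yield r
    finally:
        r.live = False                      # whole-region deallocation
        r.cells.clear()

with letregion("rho1") as r1:
    a = r1.alloc((3, 4))                    # pair allocated "at rho1"
    with letregion("rho2") as r2:
        b = r2.alloc(sum(r1.read(a)))       # intermediate result in rho2
        print(r2.read(b))                   # 7
    # rho2 is gone here; rho1 is still live, so this read is safe
    print(r1.read(a))                       # (3, 4)
```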

3.
This paper studies a system of m robots operating in a set of n work locations connected by aisles in a √n × √n grid, where m ≤ n. From time to time the robots need to move along the aisles in order to visit disjoint sets of locations. The movement of the robots must comply with the following constraints: (1) no two robots can collide at a grid node or traverse a grid edge at the same time; (2) a robot's sensory capability is limited to detecting the presence of another robot at a neighboring node. We present a deterministic protocol that, for any small constant ε > 0, allows m ≤ (1−ε)n robots to visit their target locations in O(d√n) time, where each robot visits no more than d ≤ n targets and no target is visited by more than one robot. We also prove a lower bound showing that our protocol is optimal. Prior to this paper, no optimal protocols were known for d > 1. For d = 1, optimal protocols were known only for m ≤ √n, while for general m ≤ n only a suboptimal randomized protocol was known.

4.
By reduction from the halting problem for Minsky's two-register machines we prove that there is no algorithm capable of deciding the ∃∀-theory of one step rewriting of an arbitrary finite linear confluent finitely terminating term rewriting system (weak undecidability). We also present a fixed such system with undecidable ∃∀*-theory of one step rewriting (strong undecidability). This improves over all previously known results of the same kind.

5.
Obliq is a lexically scoped, distributed, object-based programming language. In Obliq, the migration of an object is proposed as creating a clone of the object at the target site, whereafter the original object is turned into an alias for the clone. Obliq has only an informal semantics, so there is no proof that this style of migration is safe, i.e., transparent to object clients. In previous work, we introduced Ø, an abstraction of Obliq, where, by lexical scoping, sites have been abstracted away. We used Ø in order to exhibit how the semantics behind Obliq's implementation renders migration unsafe. We also suggested a modified semantics that we conjectured instead to be safe. In this paper, we rewrite our modified semantics of Ø in terms of the π-calculus, and we use it to formally prove the correctness of object surrogation, the abstraction of object migration in Ø.

6.
The distance transform (DT) is an image computation tool which can be used to extract information about the shape and the position of the foreground pixels relative to each other. It converts a binary image into a grey-level image, where each pixel has a value corresponding to the distance to the nearest foreground pixel. The time complexity of computing the distance transform depends strongly on the distance metric: the more exact the distance transform, the worse the execution time. Nowadays, thousands of images often have to be processed in a limited time, and such a computation can hardly be done in real time on a sequential computer. To provide efficient distance transform computation, it is therefore highly desirable to develop a parallel algorithm for this operation. In this paper, based on the diagonal propagation approach, we first provide an O(N²)-time sequential algorithm to compute the chessboard distance transform (CDT) of an N×N image, which is a DT using the chessboard distance metric. Based on the proposed sequential algorithm, the CDT of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N²/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N²/log log N) processors, and in O(log N) time on the hypercube computer using O(N²/log N) processors. Following the mapping proposed by Lee and Horng, an algorithm for the medial axis transform is also efficiently derived: the medial axis transform of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N²/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N²/log log N) processors, and in O(log N) time on the hypercube computer using O(N²/log N) processors. The proposed parallel algorithms are composed of a set of prefix operations; in each prefix operation phase, only the increment (add-one) operation and the minimum operation are employed, so the algorithms are especially efficient in practical applications.
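For reference, a standard two-pass raster-scan sketch of the chessboard distance transform is shown below; it realizes the O(N²) sequential bound, though it is the classic formulation rather than the paper's diagonal-propagation algorithm.

```python
# A standard two-pass O(N^2) sequential sketch of the chessboard distance
# transform (CDT): each pixel receives the max(|dx|,|dy|)-distance to the
# nearest foreground pixel. Classic raster-scan method, not the paper's
# diagonal-propagation or parallel formulations.

def chessboard_dt(img):
    """img: 2D list of 0/1; 1 = foreground. Returns the CDT as a 2D list."""
    n, m = len(img), len(img[0])
    INF = n + m                                     # larger than any distance
    d = [[0 if img[i][j] else INF for j in range(m)] for i in range(n)]
    # Forward pass: propagate from the N, W, NW, and NE neighbors.
    for i in range(n):
        for j in range(m):
            for di, dj in ((-1, 0), (0, -1), (-1, -1), (-1, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < m:
                    d[i][j] = min(d[i][j], d[ii][jj] + 1)
    # Backward pass: propagate from the S, E, SE, and SW neighbors.
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            for di, dj in ((1, 0), (0, 1), (1, 1), (1, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < m:
                    d[i][j] = min(d[i][j], d[ii][jj] + 1)
    return d

img = [[0, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 1]]
for row in chessboard_dt(img):
    print(row)
```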

7.
8.
Voxelization is the transformation of geometric surfaces into voxels. To date, this process has essentially been done using incremental algorithms. Incremental algorithms have the reputation of being efficient, but they lack an important property: robustness. The voxelized representation should envelop its continuous model; without robust methods, however, this cannot be guaranteed. This article describes novel techniques for robust voxelization and visualization of implicit surfaces. First, our recursive subdivision voxelization algorithm is reviewed. This algorithm was initially inspired by Duff's image space subdivision method. Then, we explain the algorithm to voxelize implicit surfaces defined in spherical or cylindrical coordinates. Next, we show a new technique to produce infinite replications of implicit objects and their voxelization method. Afterward, we comment on the parallelization of our voxelization procedure. Finally, we present our voxel visualization algorithm based on point display. Our voxelization algorithms can be used with any data structure, thanks to the fact that a voxel is only stored once the last subdivision level is reached. We emphasize the use of the octree, though, because it is a convenient way to store the discrete model hierarchically. In a hierarchy the discrete model refinement is simple and possible from any previously voxelized scene, thanks to the fact that the voxelization algorithms are robust.
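The following Python sketch illustrates the recursive-subdivision idea on one implicit surface of our choosing (a sphere); the conservative Lipschitz-bound test is an assumption we add to make the robustness point concrete: a cell is discarded only when it provably misses the surface, so the voxel set envelops the continuous model.

```python
# A minimal sketch of robust voxelization by recursive spatial subdivision,
# in the spirit of the article (the surface and the test are our choices).
# The implicit surface is a sphere f(p) = |p| - r, whose Lipschitz constant
# is 1, so |f(center)| <= half_diagonal is a conservative inclusion test:
# a cell is kept whenever it *might* intersect the surface.

import math

def voxelize(center, half, depth, max_depth, f, lipschitz, out):
    """Recursively subdivide a cube; emit leaf cells that may cut f = 0."""
    half_diag = half * math.sqrt(3)
    if abs(f(center)) > lipschitz * half_diag:
        return                                  # provably misses the surface
    if depth == max_depth:
        out.append(center)                      # voxel at the last level
        return
    h = half / 2.0
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                voxelize((center[0] + dx, center[1] + dy, center[2] + dz),
                         h, depth + 1, max_depth, f, lipschitz, out)

r = 0.8
sphere = lambda p: math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - r
voxels = []
voxelize((0.0, 0.0, 0.0), 1.0, 0, 5, sphere, 1.0, voxels)
print(len(voxels), "voxels envelop the sphere of radius", r)
```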

9.
We study trace and may-testing equivalences in the asynchronous versions of CCS and the π-calculus. We start from the operational definition of the may-testing preorder and provide finitary and fully abstract trace-based characterizations for it, along with a complete inequational proof system. We also touch upon two variants of this theory by first considering a more demanding equivalence notion (must-testing) and then a richer version of asynchronous CCS. The results throw light on the difference between synchronous and asynchronous communication and on the weaker testing power of asynchronous observations.

10.
11.
We work with an extension of Resolution, called Res(2), that allows clauses with conjunctions of two literals. In this system there are rules to introduce and eliminate such conjunctions. We prove that the weak pigeonhole principle PHP_n^{cn} and random unsatisfiable CNF formulas require exponential-size proofs in this system. This is the strongest system beyond Resolution for which such lower bounds are known. As a consequence of the result about the weak pigeonhole principle, Res(log) is exponentially more powerful than Res(2). We also prove that Resolution cannot polynomially simulate Res(2) and that Res(2) does not have feasible monotone interpolation, solving an open problem posed by Krajíček.

12.
The exponential output size problem is to determine whether the size of output trees of a tree transducer grows exponentially in the size of input trees. In this paper the complexity of this problem is studied. It is shown to be NL-complete for total top-down tree transducers, DEXPTIME-complete for general top-down tree transducers, and P-complete for bottom-up tree transducers.

13.
The study of the computational power of randomized computations is one of the central tasks of complexity theory. The main goal of this paper is to compare the power of Las Vegas computation with that of deterministic and nondeterministic computation. We investigate the power of Las Vegas computation for the complexity measures of one-way communication, ordered binary decision diagrams, and finite automata. (i) For the one-way communication complexity of two-party protocols we show that Las Vegas communication can save at most one half of the deterministic one-way communication complexity, and we present a language for which this gap is tight. (ii) The result (i) is applied to show an at most polynomial gap between determinism and Las Vegas for ordered binary decision diagrams. (iii) For the size (i.e., the number of states) of finite automata we show that the size of Las Vegas finite automata recognizing a language L is at least the square root of the size of the minimal deterministic finite automaton recognizing L; using a specific language we verify the optimality of this lower bound.

14.
This paper describes the mathematical basis and application of a probabilistic model for recovering the direction of camera translation (heading) from optical flow. According to the theorem that the heading cannot lie between two converging points in a stationary environment, one can compute the posterior probability distribution of heading across the image and choose the heading with the maximum a posteriori probability (MAP). The model requires very simple computation, provides a confidence level for its judgments, applies to both linear and curved trajectories, functions in the presence of camera rotations, and exhibited high accuracy of up to 0.1°–0.2° in random-dot simulations.
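A toy, translation-only Python sketch of the MAP idea follows; the grid of candidate headings, the Gaussian angular likelihood, and the synthetic flow field are our assumptions, not the paper's model.

```python
# A toy, translation-only sketch (not the paper's model): under pure camera
# translation, each flow vector at image point x points away from the focus
# of expansion (FOE), which marks the heading. We score candidate FOEs on a
# grid by the angular agreement of every flow vector with the radial
# direction, treat the scores as log-likelihoods, and pick the MAP candidate.

import math

def map_heading(points, flows, grid, sigma=0.3):
    """points/flows: lists of (x, y) pairs; grid: candidate FOE positions."""
    best, best_logp = None, -math.inf
    for foe in grid:
        logp = 0.0                               # flat prior over the grid
        for (px, py), (fx, fy) in zip(points, flows):
            rx, ry = px - foe[0], py - foe[1]    # radial direction from FOE
            nr = math.hypot(rx, ry) or 1e-9
            nf = math.hypot(fx, fy) or 1e-9
            cos_err = (rx * fx + ry * fy) / (nr * nf)
            ang = math.acos(max(-1.0, min(1.0, cos_err)))
            logp += -(ang / sigma) ** 2 / 2.0    # Gaussian angular likelihood
        if logp > best_logp:
            best, best_logp = foe, logp
    return best

# Synthetic flow field expanding from the true heading (0.2, -0.1):
true_foe = (0.2, -0.1)
points = [(x / 4.0, y / 4.0) for x in range(-4, 5) for y in range(-4, 5)]
flows = [((px - true_foe[0]) * 0.05, (py - true_foe[1]) * 0.05)
         for px, py in points]
grid = [(x / 10.0, y / 10.0) for x in range(-10, 11) for y in range(-10, 11)]
print(map_heading(points, flows, grid))          # -> close to (0.2, -0.1)
```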

15.
16.
Simon's result, as extended by Brassard and Høyer, shows that there are tasks on which polynomial-time quantum machines are exponentially faster than every classical machine infinitely often. The present paper shows that there are tasks on which polynomial-time quantum machines are exponentially faster than every classical machine almost everywhere.

17.
The role of perceptual organization in motion analysis has heretofore been minimal. In this work we present a simple but powerful computational model and associated algorithms based on the use of perceptual organizational principles, such as temporal coherence (or common fate) and spatial proximity, for motion segmentation. The computational model does not use traditional frame-by-frame motion analysis; rather, it treats an image sequence as a single 3D spatio-temporal volume. It endeavors to find organizations in this volume of data over three levels: signal, primitive, and structural. The signal level is concerned with detecting individual image pixels that are probably part of a moving object. The primitive level groups these individual pixels into planar patches, which we call temporal envelopes. Compositions of these temporal envelopes describe the spatio-temporal surfaces that result from object motion. At the structural level, we detect these compositions of temporal envelopes by utilizing the structure and organization among them. The algorithms employed to realize the computational model include 3D edge detection, Hough transformation, and graph-based methods to group the temporal envelopes based on Gestalt principles. The significance of the Gestalt relationships between any two temporal envelopes is expressed in probabilistic terms. One of the attractive features of the adopted algorithm is that it does not require the detection of special 2D features or the tracking of these features across frames. We demonstrate that even with simple grouping strategies, we can easily handle drastic illumination changes, occlusion events, and multiple moving objects, without the use of training or specific object or illumination models. We present results on a large variety of motion sequences to demonstrate this robustness.

18.
This paper presents a general information-theoretic approach for obtaining lower bounds on the number of examples required for Probably Approximately Correct (PAC) learning in the presence of noise. This approach deals directly with the fundamental information quantities, avoiding a Bayesian analysis. The technique is applied to several different models, illustrating its generality and power. The resulting bounds add logarithmic factors to (or improve the constants in) previously known lower bounds.

19.
While deterministic finite automata seem to be well understood, surprisingly many important problems concerning nondeterministic finite automata (nfa's) remain open. One such problem area is the study of different measures of nondeterminism in finite automata and the estimation of the sizes of minimal nondeterministic finite automata. In this paper the concept of communication complexity is applied in order to achieve progress in this problem area. The main results are as follows: 1. Deterministic communication complexity provides lower bounds on the size of nfa's with bounded unambiguity. Applying this fact, the proofs of several results about nfa's with limited ambiguity can be simplified and presented in a uniform way. 2. There is a family of languages KON_{k²} with an exponential size gap between nfa's with polynomial leaf number/ambiguity and nfa's with ambiguity k. This partially answers the open problem posed by B. Ravikumar and O. Ibarra (1989, SIAM J. Comput. 18, 1263–1282) and H. Leung (1998, SIAM J. Comput. 27, 1073–1082).

20.
This paper describes the theory and algorithms of the distance transform for fuzzy subsets, called the fuzzy distance transform (FDT). The notion of fuzzy distance is formulated by first defining the length of a path on a fuzzy subset and then finding the infimum of the lengths of all paths between two points. The length of a path π in a fuzzy subset of the n-dimensional continuous space ℝⁿ is defined as the integral of fuzzy membership values along π. Generally, there are infinitely many paths between any two points in a fuzzy subset, and it is shown that the shortest one may not exist. The fuzzy distance between two points is defined as the infimum of the lengths of all paths between them. It is demonstrated that, unlike in hard convex sets, the shortest path (when it exists) between two points in a fuzzy convex subset is not necessarily a straight line segment. For any positive number θ ≤ 1, the θ-support of a fuzzy subset is the set of all points in ℝⁿ with membership values greater than or equal to θ. It is shown that, for any fuzzy subset and any nonzero θ ≤ 1, fuzzy distance is a metric for the interior of its θ-support. It is also shown that, for any smooth fuzzy subset, fuzzy distance is a metric for the interior of its 0-support (referred to as the support). FDT is defined as a process on a fuzzy subset that assigns to a point its fuzzy distance from the complement of the support. The theoretical framework of FDT in continuous space is extended to digital cubic spaces, and it is shown that for any fuzzy digital object, fuzzy distance is a metric for the support of the object. A dynamic programming-based algorithm is presented for computing the FDT of a fuzzy digital object. It is shown that the algorithm terminates in a finite number of steps and, when it does so, correctly computes the FDT. Several potential applications of the fuzzy distance transform in medical imaging are presented, among them the quantification of blood vessels and trabecular bone thickness in the regime of limited spatial resolution where these objects become fuzzy.
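To illustrate the propagation underlying the FDT, here is a Dijkstra-style Python sketch on a 2D grid; the paper's algorithm is dynamic-programming based, and the step-cost convention (Euclidean step length times the average of the two membership values) is our simplification of the path-length integral.

```python
# A Dijkstra-style sketch of the FDT propagation idea on a 2D grid (the paper
# gives a dynamic-programming algorithm; this computes the same fixed point
# with a priority queue). A step between neighboring pixels costs the
# Euclidean step length times the average of the two membership values, and
# a pixel's FDT is the cheapest such path cost from outside the support.

import heapq, math

def fuzzy_dt(mu):
    """mu: 2D list of membership values in [0, 1]. Returns the FDT grid."""
    n, m = len(mu), len(mu[0])
    INF = float("inf")
    dist = [[INF] * m for _ in range(n)]
    heap = []
    # Seed: pixels outside the support (mu == 0) are at distance 0.
    for i in range(n):
        for j in range(m):
            if mu[i][j] == 0:
                dist[i][j] = 0.0
                heapq.heappush(heap, (0.0, i, j))
    steps = [(di, dj, math.hypot(di, dj))
             for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    while heap:
        d, i, j = heapq.heappop(heap)
        if d > dist[i][j]:
            continue                         # stale queue entry
        for di, dj, length in steps:
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < m:
                cost = length * (mu[i][j] + mu[ii][jj]) / 2.0
                if d + cost < dist[ii][jj]:
                    dist[ii][jj] = d + cost
                    heapq.heappush(heap, (d + cost, ii, jj))
    return dist

mu = [[0.0, 0.0, 0.0, 0.0],
      [0.0, 0.5, 1.0, 0.0],
      [0.0, 1.0, 1.0, 0.0],
      [0.0, 0.0, 0.0, 0.0]]
for row in fuzzy_dt(mu):
    print([round(v, 2) for v in row])
```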

