Similar Documents
20 similar documents found (search time: 31 ms)
1.
We work with an extension of Resolution, called Res(2), that allows clauses with conjunctions of two literals. In this system there are rules to introduce and eliminate such conjunctions. We prove that the weak pigeonhole principle PHP^{cn}_n and random unsatisfiable CNF formulas require exponential-size proofs in this system. This is the strongest system beyond Resolution for which such lower bounds are known. As a consequence of the result about the weak pigeonhole principle, Res(log) is exponentially more powerful than Res(2). We also prove that Resolution cannot polynomially simulate Res(2) and that Res(2) does not have feasible monotone interpolation, solving an open problem posed by Krajíček.
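For concreteness, the conjunction rules of Res(2) can be written as follows; this is the standard presentation of the Res(k) rules for k = 2 and may differ cosmetically from the paper's exact formulation:

```latex
\[
\frac{C \lor \ell_1 \qquad D \lor \ell_2}{C \lor D \lor (\ell_1 \land \ell_2)}
\ (\land\text{-introduction})
\qquad
\frac{C \lor (\ell_1 \land \ell_2)}{C \lor \ell_i},\ i \in \{1,2\}
\ (\land\text{-elimination})
\]
\[
\frac{C \lor (\ell_1 \land \ell_2) \qquad D \lor \bar{\ell}_1 \lor \bar{\ell}_2}{C \lor D}
\ (\text{cut})
\]
```

Ordinary Resolution is the special case in which every clause contains only single literals and the cut rule is restricted accordingly.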

2.
In this paper, we investigate several extensions of the linear time hierarchy (denoted by LTH). We first prove that it is not necessary to erase the oracle tape between two successive oracle calls, thereby lifting a common restriction on LTH machines. We also define a natural counting extension of LTH and show that it corresponds to a robust notion of counting bounded arithmetic predicates. Finally, we show that the computational power of the majority operator is equivalent to that of the exact counting operator in both contexts.

3.
A generic integrated line detection algorithm (GILDA) is presented and demonstrated. GILDA is based on the generic graphics recognition approach, which abstracts graphics recognition as a stepwise recovery of the multiple components of graphic objects and is specified by the object–process methodology. We define 12 classes of lines that appear in engineering drawings and use them to construct a class inheritance hierarchy, which abstracts the line features relevant to the line detection process. Based on the “Hypothesis and Test” paradigm, lines are detected by stepwise extension to both ends of a selected first key component, as sketched below. In each extension cycle, the new component that best meets the current line's shape and style constraints is appended to the line. Different line classes are detected by controlling the line attribute values. As we show in the experiments, the algorithm performs well on clear synthetic drawings as well as on noisy, complex, real-world drawings.
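A minimal sketch of the “Hypothesis and Test” extension loop; the component representation, the candidate search, and the constraint test are hypothetical stand-ins for GILDA's actual classes:

```python
def detect_line(key, find_candidates, fits_constraints):
    """Grow a line from a selected first key component ("Hypothesis"),
    extending stepwise at both ends ("Test").

    key                        -- hypothesized first component of the line
    find_candidates(line, end) -- nearby components at the given open end
    fits_constraints(line, c)  -- shape/style test for appending c
    All three arguments are hypothetical stand-ins for GILDA's types.
    """
    line = [key]
    for end in ("head", "tail"):              # extend toward both ends
        while True:
            ok = [c for c in find_candidates(line, end)
                  if fits_constraints(line, c)]
            if not ok:
                break                         # no acceptable extension: stop
            # append the candidate that best fits the current line
            best = min(ok, key=lambda c: c.fit_error(line))
            line.insert(0, best) if end == "head" else line.append(best)
    return line
```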

4.
We present an approach to attention in active computer vision. The notion of attention plays an important role in biological vision. In recent years, and especially with the emerging interest in active vision, computer vision researchers have become increasingly concerned with attentional mechanisms as well. The basic principles behind these efforts are greatly influenced by psychophysical research. That is also the case in the work presented here, which adapts the model of Treisman (1985, Comput. Vision Graphics Image Process. Image Understanding 31, 156–177), with an early parallel stage of preattentive cues followed by a later serial stage where the cues are integrated. The contributions of our approach are (i) the incorporation of depth information from stereopsis, (ii) the simple implementation of low-level modules such as disparity and flow by local phase, and (iii) cue integration along pursuit and saccade modes that allows proper target selection based on nearness and motion. We demonstrate the technique in experiments in which a moving observer selectively masks out different moving objects in real scenes.
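Contribution (ii), disparity from local phase, can be illustrated in one dimension: a complex Gabor filter supplies local phase, and disparity is approximated by the left/right phase difference divided by the filter's center frequency. This is the generic phase-based scheme, not necessarily the authors' exact implementation:

```python
import numpy as np

def phase_disparity(left, right, wavelength=8.0, sigma=4.0):
    """Estimate per-sample disparity of 1-D signals from local phase."""
    w0 = 2 * np.pi / wavelength                   # filter center frequency
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    gabor = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * w0 * x)

    rl = np.convolve(left, gabor, mode="same")    # complex (analytic) responses
    rr = np.convolve(right, gabor, mode="same")
    dphi = np.angle(rl * np.conj(rr))             # phase difference in (-pi, pi]
    return dphi / w0                              # disparity estimate (samples)

# toy usage: a signal shifted right by 2 samples gives disparity ~ +2
sig = np.sin(2 * np.pi * np.arange(128) / 8.0)
print(np.median(phase_disparity(sig, np.roll(sig, 2))))
```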

5.
We propose a sculpture metaphor based on a multiresolution volumetric representation. It allows the user to model both precise and coarse features while maintaining interactive update and display rates. The modelled surface is an iso-surface of a scalar field, which is sampled on an adaptive hierarchical grid that dynamically subdivides or undivides itself. Field modifications are transparent to the user, who feels as if he were directly interacting with the surface via a tool that either adds or removes “material.” Meanwhile, the tool modifies the scalar field around the surface, its size and shape automatically guiding the underlying grid subdivision. In order to give interactive feedback whatever the tool's size, tools are applied in an adaptive way, the grid always being updated from coarse to fine levels, as sketched below. This maintains interactive rates even for large tools, and it enables the user to apply a tool continuously, with immediate coarse-scale feedback of the multiple actions being provided. A dynamic level-of-detail (LOD) mechanism ensures that the iso-surface is displayed at interactive rates regardless of the zoom value; surface elements, generated and stored at each level of resolution, are displayed depending on their size on the screen. The system may switch to a coarser surface display during user actions, thus always ensuring interactive visual feedback. Two applications illustrate the use of this system: first, complex shapes with both coarse and fine features can be sculpted from scratch; second, we show that the system can be used to edit models that have been converted from a mesh representation.
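The coarse-to-fine tool application can be sketched as a recursive traversal of the adaptive grid; every name here (cell, tool, and their methods) is a hypothetical stand-in for the system's actual data structures:

```python
def apply_tool(cell, tool, max_depth):
    """Apply a sculpting tool to an adaptive hierarchical grid, coarse to fine.

    The tool adds (or removes) scalar-field contributions cell by cell;
    a cell is subdivided only while the tool still varies inside it, so
    a coarse-scale result appears immediately and detail is refined later.
    All cell/tool members are hypothetical stand-ins.
    """
    if not tool.overlaps(cell.bounds):
        return                                    # tool does not touch this cell
    for corner in cell.corners():
        cell.field[corner] += tool.contribution(corner)   # edit scalar field
    if cell.depth < max_depth and tool.needs_finer_sampling(cell.bounds):
        cell.subdivide()                          # refine only where needed
        for child in cell.children:
            apply_tool(child, tool, max_depth)
    elif cell.is_subdivided() and not tool.needs_finer_sampling(cell.bounds):
        cell.undivide()                           # coarsen where detail is gone
```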

6.
We consider the problem of simulation preorder/equivalence between infinite-state processes and finite-state ones. First, we describe a general method for utilizing the decidability of bisimulation problems to solve (certain instances of) the corresponding simulation problems. For certain process classes, the method allows us to design effective reductions of simulation problems to their bisimulation counterparts, and some new decidability results for simulation have already been obtained in this way. Then we establish the decidability border for the problem of simulation preorder/equivalence between infinite-state processes and finite-state ones w.r.t. the hierarchy of process rewrite systems. In particular, we show that simulation preorder (in both directions) and simulation equivalence are decidable in EXPTIME between pushdown processes and finite-state ones. On the other hand, simulation preorder is undecidable between PA and finite-state processes in both directions. These results also hold for those PA and finite-state processes which are deterministic and normed, and thus immediately extend to trace preorder. Regularity (finiteness) w.r.t. simulation and trace equivalence is also shown to be undecidable for PA. Finally, we prove that simulation preorder (in both directions) and simulation equivalence are intractable between all classes of infinite-state systems (in the hierarchy of process rewrite systems) and finite-state ones. This result is obtained by showing that the problem whether a BPA (or BPP) process simulates a finite-state one is PSPACE-hard and the other direction is co-NP-hard; consequently, simulation equivalence between BPA (or BPP) and finite-state processes is also co-NP-hard.

7.
While deterministic finite automata seem to be well understood, surprisingly many important problems concerning nondeterministic finite automata (nfa's) remain open. One such problem area is the study of different measures of nondeterminism in finite automata and the estimation of the sizes of minimal nondeterministic finite automata. In this paper the concept of communication complexity is applied in order to achieve progress in this problem area. The main results are as follows: 1. Deterministic communication complexity provides lower bounds on the size of nfa's with bounded unambiguity; applying this fact, the proofs of several results about nfa's with limited ambiguity can be simplified and presented in a uniform way. 2. There is a family of languages KON_{k^2} with an exponential size gap between nfa's with polynomial leaf number/ambiguity and nfa's with ambiguity k. This partially answers the open problem posed by B. Ravikumar and O. Ibarra (1989, SIAM J. Comput. 18, 1263–1282) and H. Leung (1998, SIAM J. Comput. 27, 1073–1082).

8.
We study trace and may-testing equivalences in the asynchronous versions of CCS and the π-calculus. We start from the operational definition of the may-testing preorder and provide finitary and fully abstract trace-based characterizations for it, along with a complete inequational proof system. We also touch upon two variants of this theory, first considering a more demanding equivalence notion (must-testing) and then a richer version of asynchronous CCS. The results shed light on the difference between synchronous and asynchronous communication and on the weaker testing power of asynchronous observations.

9.
The study of the computational power of randomized computations is one of the central tasks of complexity theory. The main goal of this paper is the comparison of the power of Las Vegas computation with that of deterministic and nondeterministic computation. We investigate the power of Las Vegas computation for the complexity measures of one-way communication, ordered binary decision diagrams, and finite automata. (i) For the one-way communication complexity of two-party protocols we show that Las Vegas communication can save at most one half of the deterministic one-way communication complexity; we also present a language for which this gap is tight. (ii) Result (i) is applied to show an at most polynomial gap between determinism and Las Vegas for ordered binary decision diagrams. (iii) For the size (i.e., the number of states) of finite automata we show that the size of Las Vegas finite automata recognizing a language L is at least the square root of the size of the minimal deterministic finite automaton recognizing L; using a specific language we verify the optimality of this lower bound. These bounds are summarized below.
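In symbols, writing cc_D and cc_LV for deterministic and Las Vegas one-way communication complexity, and s_D and s_LV for the sizes of the minimal deterministic and Las Vegas finite automata (notation introduced here only for the summary), results (i) and (iii) state:

```latex
\[
cc_{LV}(L) \;\ge\; \tfrac{1}{2}\, cc_{D}(L),
\qquad
s_{LV}(L) \;\ge\; \sqrt{s_{D}(L)},
\]
```

and both inequalities are tight for specific witness languages.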

10.
We present a method for automatically estimating the motion of an articulated object filmed by two or more fixed cameras. We focus on the case where the quality of the images is poor and where only an approximation of a geometric model of the tracked object is available. Our technique uses physical forces applied to each rigid part of a kinematic 3D model of the object we are tracking. These forces guide the minimization of the differences between the pose of the 3D model and the pose of the real object in the video images. We use a fast recursive algorithm to solve the dynamical equations of motion of any 3D articulated model. We explain the key parts of our algorithms: how relevant information is extracted from the images, how the forces are created, and how the dynamical equations of motion are solved. A study of what kind of information should be extracted from the images, and of when our algorithms fail, is also presented. Finally, we present results on tracking a person, and we show the application of our method to tracking a hand in image sequences, showing that the kind of information to extract from the images depends on their quality and on the configuration of the cameras.

11.
For decades, there has been an intensive research effort in the computer vision community to deal with video sequences. In this paper, we present a new method for recovering a maximum of information on displacement and projection parameters in monocular video sequences without calibration. This work follows previous studies on particular cases of displacement, scene geometry, and camera analysis, and focuses on the particular forms of homographic matrices. It is already known that the number of particular cases involved in a complete study precludes an exhaustive test. To lower the algorithmic complexity, some authors propose to decompose all possible cases into a hierarchical tree data structure, but these works are still in development (T. Viéville and D. Lingrand, Internat. J. Comput. Vision 31, 1999, 5–29). In this paper, we propose a new way to deal with the huge number of particular cases: (i) we use simple rules to eliminate redundant and physically impossible cases, and (ii) we divide the cases into subsets corresponding to particular forms determined by simple rules, leading to a computationally efficient discrimination method. Finally, experiments were performed on image sequences acquired either with a robotic system or manually, demonstrating that when several models are valid, the model with the fewest parameters gives the best estimation of the free parameters of the problem. The experiments presented in this paper show that even if the selected case is an approximation of reality, the method is still robust.
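The model-selection principle used in the experiments (when several motion models are valid, prefer the one with the fewest parameters) can be sketched as a penalized-residual rule; this is a generic AIC-style criterion, not the authors' exact test, and the fit interface is hypothetical:

```python
def select_motion_model(models, data, sigma2):
    """Pick the preferred motion model among candidate parameterizations.

    models -- list of (name, n_params, fit) tuples, where fit(data)
              returns the sum of squared residuals (hypothetical API)
    sigma2 -- assumed measurement noise variance
    The score is residual plus a penalty per free parameter, so among
    comparably valid models the simplest one wins.
    """
    def score(model):
        name, n_params, fit = model
        return fit(data) / sigma2 + 2 * n_params   # AIC-like penalty
    return min(models, key=score)
```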

12.
Computation of approximate polynomial greatest common divisors (GCDs) is important both theoretically and due to its applications to the control of linear systems, network theory, and computer-aided design. We study two approaches to the solution so far overlooked by researchers, despite intensive recent work in this area. Correlation to numerical Padé approximation enabled us to improve computations for both problems (GCDs and Padé). Reduction to the approximation of polynomial zeros enabled us to obtain new insight into the GCD problem and to devise effective solution algorithms. In particular, unlike the known algorithms, we estimate the degree of approximate GCDs at a low computational cost, which enables us to obtain a certified correct solution for a large class of input polynomials. We also restate the problem in terms of the norm of the perturbation of the zeros (rather than the coefficients) of the input polynomials, which leads us to a fast certified solution for any pair of input polynomials via the computation of their roots and of the maximum matchings or connected components in the associated bipartite graph.
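A sketch of the zero-based approach: compute the roots of both polynomials, connect roots that are closer than a tolerance in a bipartite graph, and take the size of a maximum matching as the estimated approximate-GCD degree. The fixed tolerance is a simplification of the paper's certified perturbation analysis:

```python
import numpy as np

def approx_gcd_degree(p, q, tol=1e-3):
    """Estimate deg(approximate GCD) of polynomials p, q (coefficient lists).

    Roots of p and q within tol of each other count as a shared zero;
    a maximum bipartite matching (Kuhn's algorithm) counts how many
    zeros can be paired off.
    """
    rp, rq = np.roots(p), np.roots(q)
    adj = [[j for j, z in enumerate(rq) if abs(w - z) < tol] for w in rp]
    match = [-1] * len(rq)                 # match[j] = index of rp paired to rq[j]

    def augment(i, seen):                  # augmenting-path step
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    return sum(augment(i, set()) for i in range(len(rp)))

# toy usage: both polynomials nearly share the zeros 1 and 2
p = np.poly([1.0, 2.0, 3.0])
q = np.poly([1.0001, 2.0001, -5.0])
print(approx_gcd_degree(p, q))             # -> 2
```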

13.
This paper studies a system of m robots operating in a set of n work locations connected by aisles in a √n × √n grid, where m ≤ n. From time to time the robots need to move along the aisles, in order to visit disjoint sets of locations. The movement of the robots must comply with the following constraints: (1) no two robots can collide at a grid node or traverse a grid edge at the same time; (2) a robot's sensory capability is limited to detecting the presence of another robot at a neighboring node. We present a deterministic protocol that, for any small constant ε>0, allows m ≤ (1−ε)n robots to visit their target locations in O(d√n) time, where each robot visits no more than d ≤ n targets and no target is visited by more than one robot. We also prove a lower bound showing that our protocol is optimal. Prior to this paper, no optimal protocols were known for d>1. For d=1, optimal protocols were known only for m ≤ √n, while for general m ≤ n only a suboptimal randomized protocol was known.

14.
We consider finite hypergraphs with hyperedges defined as sets of vertices of unbounded cardinality. Each such hypergraph has a unique modular decomposition, which is a tree, the nodes of which correspond to certain subhypergraphs (induced by certain sets of vertices called strong modules) of the considered hypergraph. One can define this decomposition by monadic second-order (MS) logical formulas. Such a hypergraph is convex if the vertices are linearly ordered in such a way that the hyperedges form intervals. Our main result says that the unique linear order witnessing the convexity of a prime hypergraph (i.e., of one, the modular decomposition of which is trivial) can be defined in MS logic. As a consequence, we obtain that if a set of bipartite graphs that correspond (in the usual way) to convex hypergraphs has a decidable monadic second-order theory (which means that one can decide whether a given MS formula is satisfied in some graph of the set) then it has bounded clique-width. This yields a new case of validity of a conjecture which is still open.

15.
This paper describes the theory and algorithms of the distance transform for fuzzy subsets, called the fuzzy distance transform (FDT). The notion of fuzzy distance is formulated by first defining the length of a path on a fuzzy subset and then finding the infimum of the lengths of all paths between two points. The length of a path π in a fuzzy subset of the n-dimensional continuous space ℝ^n is defined as the integral of fuzzy membership values along π. Generally, there are infinitely many paths between any two points in a fuzzy subset, and it is shown that the shortest one may not exist. The fuzzy distance between two points is defined as the infimum of the lengths of all paths between them. It is demonstrated that, unlike in hard convex sets, the shortest path (when it exists) between two points in a fuzzy convex subset is not necessarily a straight line segment. For any positive number θ ≤ 1, the θ-support of a fuzzy subset is the set of all points in ℝ^n with membership values greater than or equal to θ. It is shown that, for any fuzzy subset and any nonzero θ ≤ 1, fuzzy distance is a metric for the interior of its θ-support. It is also shown that, for any smooth fuzzy subset, fuzzy distance is a metric for the interior of its 0-support (referred to as support). FDT is defined as a process on a fuzzy subset that assigns to a point its fuzzy distance from the complement of the support. The theoretical framework of FDT in continuous space is extended to digital cubic spaces, and it is shown that for any fuzzy digital object, fuzzy distance is a metric for the support of the object. A dynamic programming-based algorithm is presented for computing the FDT of a fuzzy digital object; a sketch is given below. It is shown that the algorithm terminates in a finite number of steps and, when it does so, correctly computes the FDT. Several potential applications of the fuzzy distance transform in medical imaging are presented, among them the quantification of blood vessels and trabecular bone thickness in the regime of limited spatial resolution, where these objects become fuzzy.
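A minimal sketch of the dynamic-programming FDT on a 2D fuzzy digital object, assuming the discrete path length charges each unit step the average membership of its two endpoints; raster passes repeat until no distance changes (a chamfer-style simplification of the paper's algorithm):

```python
import numpy as np

def fuzzy_distance_transform(mu):
    """FDT of a 2-D fuzzy object mu (membership values in [0, 1]).

    Link cost between 8-neighbors p, q is (mu[p] + mu[q]) / 2 * |p - q|;
    distances start at 0 on the complement of the support and relax
    until stable.
    """
    d = np.where(mu > 0, np.inf, 0.0)            # complement of support = 0
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    changed = True
    while changed:                               # forward/backward raster passes
        changed = False
        for rows in (range(mu.shape[0]), reversed(range(mu.shape[0]))):
            for i in rows:
                for j in range(mu.shape[1]):
                    if mu[i, j] == 0:
                        continue                 # background stays at 0
                    for di, dj in nbrs:
                        ni, nj = i + di, j + dj
                        if 0 <= ni < mu.shape[0] and 0 <= nj < mu.shape[1]:
                            step = (mu[i, j] + mu[ni, nj]) / 2 * np.hypot(di, dj)
                            if d[ni, nj] + step < d[i, j] - 1e-12:
                                d[i, j] = d[ni, nj] + step
                                changed = True
    return d
```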

16.
Convexity Rule for Shape Decomposition Based on Discrete Contour Evolution
We concentrate here on the decomposition of 2D objects into meaningful parts of visual form, or visual parts. It is a simple observation that convex parts of objects determine visual parts. However, the problem is that many significant visual parts are not convex, since a visual part may have concavities. We solve this problem by identifying convex parts at different stages of a proposed contour evolution method, in which significant visual parts become convex object parts at higher stages of the evolution. We obtain a novel rule for the decomposition of 2D objects into visual parts, called the hierarchical convexity rule, which states that visual parts are enclosed by maximal convex (with respect to the object) boundary arcs at different stages of the contour evolution. This rule determines not only parts of boundary curves but directly the visual parts of objects. Moreover, the stages of the evolution hierarchy induce a hierarchical structure of the visual parts: the more advanced the stage of contour evolution, the more significant the shape contribution of the obtained visual parts.
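The rule keys on maximal convex (with respect to the object) boundary arcs. The sketch below reads such arcs off a polygonal boundary at a single evolution stage, assuming counterclockwise orientation; it is an illustration, not the authors' implementation:

```python
import numpy as np

def maximal_convex_arcs(poly):
    """Split a counterclockwise polygon (n x 2 array) into maximal
    convex boundary arcs.

    A vertex is convex w.r.t. the object when the turn (cross product
    of incoming and outgoing edges) is non-negative; maximal runs of
    convex vertices bound the candidate visual parts.
    """
    n = len(poly)
    convex = []
    for i in range(n):
        v1 = poly[i] - poly[i - 1]
        v2 = poly[(i + 1) % n] - poly[i]
        convex.append(v1[0] * v2[1] - v1[1] * v2[0] >= 0)  # 2-D cross product
    arcs, arc = [], []
    for i in range(n):
        if convex[i]:
            arc.append(i)
        elif arc:
            arcs.append(arc)                    # a concavity ends the arc
            arc = []
    if arc:                                     # wrap around the closed boundary
        if arcs and convex[0] and arcs[0][0] == 0:
            arcs[0] = arc + arcs[0]
        else:
            arcs.append(arc)
    return arcs
```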

17.
This paper presents an original method for analyzing, in an unsupervised way, images supplied by high-resolution sonar. We aim at segmenting the sonar image into three kinds of regions: echo areas (due to the reflection of the acoustic wave on the object), shadow areas (corresponding to a lack of acoustic reverberation behind an object lying on the sea-bed), and sea-bottom reverberation areas. This unsupervised method estimates the parameters of the noise distributions, modeled by a Weibull probability density function (PDF), and the label field parameters, modeled by a Markov random field (MRF). For the estimation step, we adopt a maximum likelihood technique for the noise model parameters and a least-squares method for the MRF prior model. Then, in order to obtain an accurate segmentation map, we have designed a two-step process that finds the shadow and the echo regions separately, using the previously estimated parameters. First, we introduce a scale-causal and spatial model called SCM (scale causal multigrid), based on a multigrid energy minimization strategy, to find the shadow class. Second, we propose a monoscale MRF model using a priori information (at different levels of knowledge) based on the physical properties of each region, which allows us to distinguish echo areas from sea-bottom reverberation. This technique has been successfully applied to real sonar images and is compatible with automatic processing of massive amounts of data.
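The noise-estimation step can be sketched with an off-the-shelf maximum-likelihood Weibull fit from SciPy; the class names and the two-parameter choice (location pinned at zero) are illustrative simplifications of the paper's estimation scheme:

```python
from scipy.stats import weibull_min

def fit_sonar_classes(samples_by_class):
    """ML-fit a Weibull PDF to grey-level samples of each region class.

    samples_by_class -- dict like {"echo": array, "shadow": array,
                        "reverberation": array} (hypothetical labels)
    Returns (shape, scale) per class; the location parameter is fixed at 0.
    """
    params = {}
    for name, x in samples_by_class.items():
        c, loc, scale = weibull_min.fit(x, floc=0)   # maximum likelihood
        params[name] = (c, scale)
    return params

def class_log_likelihood(pixel, c, scale):
    """Log-likelihood of one grey level under a fitted class model."""
    return weibull_min.logpdf(pixel, c, loc=0, scale=scale)
```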

18.
The role of perceptual organization in motion analysis has heretofore been minimal. In this work we present a simple but powerful computational model, and associated algorithms, based on the use of perceptual organizational principles such as temporal coherence (or common fate) and spatial proximity for motion segmentation. The computational model does not use traditional frame-by-frame motion analysis; rather, it treats an image sequence as a single 3D spatio-temporal volume. It endeavors to find organizations in this volume of data over three levels: signal, primitive, and structural. The signal level is concerned with detecting individual image pixels that are probably part of a moving object. The primitive level groups these individual pixels into planar patches, which we call temporal envelopes. Compositions of these temporal envelopes describe the spatio-temporal surfaces that result from object motion. At the structural level, we detect these compositions of temporal envelopes by utilizing the structure and organization among them. The algorithms employed to realize the computational model include 3D edge detection, the Hough transform (sketched below), and graph-based methods that group the temporal envelopes according to Gestalt principles. The significance of the Gestalt relationships between any two temporal envelopes is expressed in probabilistic terms. One of the attractive features of the adopted algorithm is that it does not require the detection of special 2D features or the tracking of these features across frames. We demonstrate that even with simple grouping strategies we can easily handle drastic illumination changes, occlusion events, and multiple moving objects, without the use of training or specific object or illumination models. We present results on a large variety of motion sequences to demonstrate this robustness.
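At the primitive level, grouping motion pixels into planar temporal envelopes can be sketched as a Hough transform over planes in the (x, y, t) volume; the parameterization and quantization below are illustrative choices, not the paper's:

```python
import numpy as np
from itertools import product

def hough_planes(points, n_theta=18, n_phi=18, n_rho=64, rho_max=200.0):
    """Vote for planes  n . p = rho  through 3-D spatio-temporal points.

    points -- (N, 3) array of (x, y, t) coordinates of motion pixels
    Returns the accumulator and the bin of the strongest plane.
    """
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0, np.pi, n_phi, endpoint=False)
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=int)
    for (i, th), (j, ph) in product(enumerate(thetas), enumerate(phis)):
        normal = np.array([np.sin(th) * np.cos(ph),
                           np.sin(th) * np.sin(ph),
                           np.cos(th)])          # unit plane normal
        rho = points @ normal                    # signed distances to origin
        k = np.clip(((rho + rho_max) / (2 * rho_max) * n_rho).astype(int),
                    0, n_rho - 1)
        np.add.at(acc[i, j], k, 1)               # cast one vote per point
    best = np.unravel_index(acc.argmax(), acc.shape)
    return acc, best
```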

19.
A major problem in using iterative number generators of the form x_i = f(x_{i-1}) is that they can enter unexpectedly short cycles. This is hard to analyze when the generator is designed, hard to detect in real time when the generator is used, and can have devastating cryptanalytic implications. In this paper we define a measure of security, called sequence diversity, which generalizes the notion of cycle length for noniterative generators. We then introduce the class of counter-assisted generators and show how to turn any iterative generator (even a bad one designed or seeded by an adversary) into a counter-assisted generator with a provably high diversity, without reducing the quality of generators which are already cryptographically strong.
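A toy sketch of the counter-assisted construction; XOR is used as the combining operation purely by way of example, since the paper's definition is more general:

```python
def counter_assisted(f, seed, n_bits=64):
    """Turn an iterative generator x_i = f(x_{i-1}) into a
    counter-assisted one: state = f(state) XOR counter.

    Even a degenerate f (e.g. a constant function) cannot trap the
    sequence in a short cycle, because the counter does not repeat
    within 2**n_bits steps.
    """
    mask = (1 << n_bits) - 1
    state, counter = seed & mask, 0
    while True:
        state = (f(state) ^ counter) & mask    # counter breaks short cycles
        counter = (counter + 1) & mask
        yield state

# toy usage: f is a terrible generator (constant), yet the outputs differ
gen = counter_assisted(lambda x: 12345, seed=0)
print([next(gen) for _ in range(4)])
```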

20.
By reduction from the halting problem for Minsky's two-register machines, we prove that there is no algorithm capable of deciding the ∃*∀*-theory of one-step rewriting of an arbitrary finite, linear, confluent, finitely terminating term rewriting system (weak undecidability). We also present a fixed such system with an undecidable ∃*∀*-theory of one-step rewriting (strong undecidability). This improves over all previously known results of the same kind.
