Similar Literature
20 similar articles found (search time: 0 ms)
1.
Vector-City Vector Distance Transform   (Total citations: 1; self-citations: 0; other citations: 1)
This paper examines current chamfer and vector distance transforms for encoding objects as distance fields, and introduces a new vector distance transform that uses the city-block chamfer distance transform as its basis. A detailed error analysis using real CT data demonstrates the improved accuracy of the new approach over existing methods. The production of a subvoxel-accurate distance field is also demonstrated by employing an improved classification. Distance fields are shown for skull and chess-piece datasets.
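The paper's method is not reproduced here, but the city-block chamfer transform it takes as its basis is standard and small enough to sketch. A minimal pure-Python two-pass version (function name and the 0/1 image convention are our assumptions, not the paper's):

```python
def city_block_dt(grid):
    """Two-pass city-block (Manhattan) chamfer distance transform.

    grid: 2-D list of 0/1 values, 1 marking foreground (object) pixels.
    Returns a same-sized grid of distances to the nearest foreground pixel.
    """
    INF = float("inf")
    h, w = len(grid), len(grid[0])
    d = [[0 if grid[y][x] else INF for x in range(w)] for y in range(h)]
    # Forward raster pass: propagate distances from the top and left.
    for y in range(h):
        for x in range(w):
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 1)
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    # Backward raster pass: propagate from the bottom and right.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d
```

The vector variants discussed in the paper store, per voxel, a vector to the nearest surface point instead of a scalar distance, which is what makes subvoxel accuracy attainable.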

2.
The distance transform (DT) is an image-computation tool that extracts information about the shape of the foreground and the position of foreground pixels relative to each other. It converts a binary image into a grey-level image in which each pixel's value is its distance to the nearest foreground pixel. The time complexity of computing a distance transform depends strongly on the chosen distance metric: the more exact the transform, the worse the execution time. Since thousands of images must often be processed within a limited time, a sequential computer can hardly compute distance transforms in real time, so an efficient parallel algorithm for this operation is highly desirable. In this paper, based on the diagonal propagation approach, we first give an O(N^2)-time sequential algorithm to compute the chessboard distance transform (CDT) of an N×N image, i.e., a DT under the chessboard distance metric. Based on this sequential algorithm, the CDT of a 2D binary image of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N^2/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N^2/log log N) processors, and in O(log N) time on the hypercube computer using O(N^2/log N) processors. Following the mapping proposed by Lee and Horng, an algorithm for the medial axis transform is also derived efficiently, with the same bounds: O(log N) time on the EREW PRAM using O(N^2/log N) processors, O(log log N) time on the CRCW PRAM using O(N^2/log log N) processors, and O(log N) time on the hypercube using O(N^2/log N) processors. The proposed parallel algorithms are composed of prefix operations; each prefix-operation phase employs only increment (add-one) and minimum operations, which makes the algorithms especially efficient in practice.
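As a rough illustration of the sequential baseline (not the paper's parallel prefix formulation), the chessboard DT can be computed with the classic two-pass chamfer scan over the 8-neighbourhood; names and conventions below are ours:

```python
def chessboard_dt(img):
    """Two-pass chessboard distance transform of a binary image.

    img: 2-D list of 0/1 with 1 marking foreground pixels.
    Chessboard distance between pixels: max(|dx|, |dy|).
    """
    INF = float("inf")
    h, w = len(img), len(img[0])
    d = [[0 if img[y][x] else INF for x in range(w)] for y in range(h)]
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]  # neighbours scanned before
    bwd = [(1, -1), (1, 0), (1, 1), (0, 1)]      # neighbours scanned after
    for y in range(h):
        for x in range(w):
            for dy, dx in fwd:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + 1)
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dy, dx in bwd:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + 1)
    return d
```

Each pass is a prefix-like sweep using only add-one and minimum operations, which is exactly the pair of primitives the parallel formulation relies on.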

3.
We present a simple algorithm for the Euclidean distance transform of a binary image that runs more efficiently than other algorithms in the literature. We show that our algorithm runs in optimal time on many architectures and has optimal cost on the RAM and the EREW PRAM.
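For reference only (this is not the paper's algorithm), an exact Euclidean DT can be written as a brute-force O(N^4) search, which is useful as a correctness oracle when testing faster implementations:

```python
import math


def euclidean_dt(img):
    """Exact Euclidean distance transform by exhaustive search.

    O(N^4) reference implementation for a binary image with at least one
    foreground pixel; efficient algorithms compute the same result.
    """
    h, w = len(img), len(img[0])
    fg = [(y, x) for y in range(h) for x in range(w) if img[y][x]]
    return [[min(math.hypot(y - fy, x - fx) for fy, fx in fg)
             for x in range(w)] for y in range(h)]
```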

4.
This paper introduces formative processes, composed of transitive partitions. Given a family of sets, a formative process ending in its Venn partition Σ is shown to exist. Sufficient criteria are also singled out for a transitive partition to model (via a function from set variables to unions of sets in the partition) all set-literals modeled by Σ. On the basis of these criteria, a procedure is designed that mimics a given formative process by another in which the sets have finite rank bounded by C(|Σ|), for a specific computable function C. As a by-product, one of the core decidability results in computable set theory is rediscovered, namely the satisfiability of unquantified set-theoretic formulae involving Boolean operators, the singleton-former, and the powerset operator. The method described (which can exhibit a set solution when the answer is affirmative) can be extended to the satisfiability problem for broader fragments of set theory.

5.
We introduce new algorithms for deciding the satisfiability of constraints for the full recursive path ordering with status (RPO), and hence also for other path orderings such as LPO, MPO, KNS, and RDO, and for all possible total precedences and signatures. The techniques are based on a new notion of solved form in which fundamental properties of orderings, such as transitivity and monotonicity, are taken into account. Apart from their theoretical simplicity and elegance, the main contribution of these algorithms is their efficiency in practice. Since guessing is minimized (in particular, no linear orderings between the subterms are guessed), a practical performance improvement of several orders of magnitude over previous algorithms is obtained, as our experiments show.
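The full RPO-with-status constraint solver is beyond a short sketch, but the flavor of a path ordering can be seen in a plain LPO comparison restricted to ground terms (a simplification of the relation whose constraints the paper's algorithms decide; term encoding and names are ours):

```python
def lpo_greater(s, t, prec):
    """s >_lpo t for ground terms encoded as tuples ("f", arg1, ...).

    prec maps function symbols to integers; higher means larger precedence.
    """
    if s == t:
        return False
    f, s_args = s[0], s[1:]
    g, t_args = t[0], t[1:]
    # Case 1: some immediate subterm of s is >= t.
    if any(si == t or lpo_greater(si, t, prec) for si in s_args):
        return True
    # Case 2: s must dominate every immediate subterm of t ...
    if not all(lpo_greater(s, tj, prec) for tj in t_args):
        return False
    # ... and then win either by precedence or lexicographically.
    if prec[f] != prec[g]:
        return prec[f] > prec[g]
    for si, ti in zip(s_args, t_args):
        if si != ti:
            return lpo_greater(si, ti, prec)
    return len(s_args) > len(t_args)
```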

6.
In this paper, a wavelet-based off-line handwritten signature verification system is proposed. The system automatically identifies useful and common features that consistently exist within different signatures of the same person and, based on these features, verifies whether a signature is a forgery. It starts with a closed-contour tracing algorithm; the curvature data of the traced closed contours are decomposed into multiresolution signals using wavelet transforms, and the zero-crossings of the curvature data are extracted as features for matching. Moreover, a statistical measurement is devised to decide systematically which closed contours, and which of their associated frequency data, are most stable and discriminating for a given writer. Based on these data, the optimal threshold value controlling the accuracy of the feature extraction process is calculated. The approach can be applied to both on-line and off-line signature verification. Experimental results show average success rates of 92.57% for English signatures and 93.68% for Chinese signatures.
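The zero-crossing features mentioned above are simple to extract once the multiresolution curvature signals are available; a minimal helper (the wavelet decomposition itself is omitted, and the name is ours):

```python
def zero_crossings(sig):
    """Indices i where sig[i] and sig[i+1] have strictly opposite signs."""
    return [i for i in range(len(sig) - 1) if sig[i] * sig[i + 1] < 0]
```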

7.
The role of perceptual organization in motion analysis has heretofore been minimal. In this work we present a simple but powerful computational model, and associated algorithms, based on perceptual organizational principles such as temporal coherence (or common fate) and spatial proximity, for motion segmentation. The model does not use traditional frame-by-frame motion analysis; rather, it treats an image sequence as a single 3D spatio-temporal volume and seeks organizations in this volume of data over three levels: signal, primitive, and structural. The signal level detects individual image pixels that are probably part of a moving object. The primitive level groups these pixels into planar patches, which we call temporal envelopes; compositions of temporal envelopes describe the spatio-temporal surfaces that result from object motion. At the structural level, we detect these compositions of temporal envelopes by exploiting the structure and organization among them. The algorithms realizing the model include 3D edge detection, Hough transformation, and graph-based methods that group the temporal envelopes according to Gestalt principles, with the significance of the Gestalt relationship between any two temporal envelopes expressed in probabilistic terms. An attractive feature of the adopted algorithm is that it requires neither the detection of special 2D features nor the tracking of such features across frames. We demonstrate that even with simple grouping strategies we can handle drastic illumination changes, occlusion events, and multiple moving objects, without training or specific object or illumination models, and we present results on a large variety of motion sequences to demonstrate this robustness.

8.
An atomic representation of a Herbrand model (ARM) is a finite set of (not necessarily ground) atoms over a given Herbrand universe. Each ARM represents a possibly infinite Herbrand interpretation. The concept has emerged independently in different branches of computer science as a natural and useful generalization of the finite Herbrand interpretation, and several recursively decidable problems on finite Herbrand models (or interpretations) were shown to remain decidable on ARMs. The following problems are essential when working with ARMs: deciding the equivalence of two ARMs, deciding subsumption between ARMs, and evaluating clauses over ARMs. These problems were known to be decidable, but their computational complexity had so far remained open; the previously published decision algorithms require exponential space. In this paper, we prove that all of these problems are coNP-complete.
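A sketch of the most basic ARM operation, evaluating a ground atom: the atom holds in the represented Herbrand model iff it is an instance of some atom in the ARM, which is decidable by one-way matching. The term encoding (nested tuples, variables as capitalized strings) and all names below are our assumptions:

```python
def match(pattern, term, subst):
    """One-way matching: bind variables in pattern (capitalized strings)
    so that pattern instantiates to the ground term.  Returns the extended
    substitution dict, or None if no match exists."""
    if isinstance(pattern, str) and pattern[0].isupper():  # variable
        if pattern in subst:
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(pattern, str) or isinstance(term, str):  # constants
        return subst if pattern == term else None
    if pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst


def arm_models(arm, ground_atom):
    """True iff the ground atom is an instance of some atom in the ARM."""
    return any(match(a, ground_atom, {}) is not None for a in arm)
```

For instance, the ARM {p(X, X), q(a)} makes p(t, t) true for every ground term t, illustrating how a finite atom set denotes an infinite interpretation.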

9.
A Continuous Probabilistic Framework for Image Matching   (Total citations: 1; self-citations: 0; other citations: 1)
In this paper we describe a probabilistic image matching scheme in which the image representation is continuous and the similarity measure and distance computation are also defined in the continuous domain. Each image is first represented as a Gaussian mixture distribution and images are compared and matched via a probabilistic measure of similarity between distributions. A common probabilistic and continuous framework is applied to the representation as well as the matching process, ensuring an overall system that is theoretically appealing. Matching results are investigated and the application to an image retrieval system is demonstrated.
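Probabilistic similarity between two Gaussians has a closed form in the single-component, one-dimensional case, the building block of such mixture comparisons (for full mixtures, divergences must be approximated). A sketch under that simplification, with names of our choosing:

```python
import math


def kl_gauss(mu1, s1, mu2, s2):
    """Closed-form KL divergence KL(N(mu1, s1^2) || N(mu2, s2^2))."""
    return (math.log(s2 / s1)
            + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2)
            - 0.5)
```

Note that KL divergence is asymmetric, which is why matching schemes often symmetrize it or use a different probabilistic similarity measure.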

10.
We present an approach to attention in active computer vision. The notion of attention plays an important role in biological vision, and in recent years, especially with the emerging interest in active vision, computer vision researchers have been increasingly concerned with attentional mechanisms as well. The basic principles behind these efforts are greatly influenced by psychophysical research. That is also the case in the work presented here, which adopts the model of Treisman (1985, Comput. Vision Graphics Image Process. Image Understanding 31, 156–177): an early parallel stage with preattentive cues followed by a later serial stage in which the cues are integrated. The contributions of our approach are (i) the incorporation of depth information from stereopsis, (ii) the simple implementation of low-level modules such as disparity and flow by local phase, and (iii) cue integration along pursuit and saccade modes, which allows proper target selection based on nearness and motion. We demonstrate the technique by experiments in which a moving observer selectively masks out different moving objects in real scenes.

11.
The image template matching problem is one of the fundamental problems of image processing, pattern recognition, and computer vision, with many practical applications: it is a useful operation for filtering, edge detection, image registration, and object detection [13]. In this paper, we first design two parallel image template matching algorithms on a 3-D processor array with a reconfigurable bus system, running in O[(M^2/p^2) log log M] and O[(M^2/p^2) + (M/p) log log p] time, using p^2 N^2 processors with O(1) and O(M/p) restricted memory per processor, respectively, for 1 ≤ p ≤ M ≤ N, given an N×N digital image and an M×M template. By increasing the number of processors, the two algorithms can be sped up to run in O(M^2/p^2) time using p^2 M^(1/c) N^2 and p^(2+1/c) N^2 processors, respectively, where c ≥ 1 is a constant. Furthermore, O(1) time can be obtained from both algorithms by using M^(2+1/c) N^2 processors. These results improve the best known bounds and achieve both optimality and optimal speed-up in their time and processor complexities.
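A sequential baseline makes concrete the problem the parallel algorithms accelerate (a sketch with our own naming, using the sum of absolute differences as the match score; the paper does not prescribe this particular score):

```python
def best_match(image, template):
    """Slide an M x M template over an N x N image and return the (row, col)
    offset minimizing the sum of absolute differences (SAD).
    Sequential O(M^2 N^2) baseline."""
    n, m = len(image), len(template)
    best, best_pos = float("inf"), None
    for r in range(n - m + 1):
        for c in range(n - m + 1):
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(m) for j in range(m))
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos
```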

12.
The study of the computational power of randomized computations is one of the central tasks of complexity theory. The main goal of this paper is to compare the power of Las Vegas computation with that of deterministic and nondeterministic computation, for the complexity measures of one-way communication, ordered binary decision diagrams, and finite automata. (i) For the one-way communication complexity of two-party protocols, we show that Las Vegas communication can save at most one half of the deterministic one-way communication complexity, and we present a language for which this gap is tight. (ii) Result (i) is applied to show an at most polynomial gap between determinism and Las Vegas for ordered binary decision diagrams. (iii) For the size (i.e., the number of states) of finite automata, we show that the size of a Las Vegas finite automaton recognizing a language L is at least the square root of the size of the minimal deterministic finite automaton recognizing L, and we verify the optimality of this lower bound using a specific language.

13.
There have been two thrusts in the development of optical flow algorithms: one has emphasized higher accuracy, the other faster implementation. These two thrusts, however, have been pursued independently, without addressing the accuracy vs. efficiency trade-off. Although the accuracy-efficiency characteristic is algorithm dependent, an understanding of the general pattern is crucial in evaluating an algorithm for real-world tasks, which often pose varied performance requirements. This paper addresses many implementation issues that have often been neglected in previous research, including temporal filtering of the output stream, the algorithms' flexibility, and robustness to noise and subsampling, emphasizing their impact on accuracy and/or efficiency. We survey different approaches toward higher performance and present experimental studies of accuracy vs. efficiency trade-offs. A detailed analysis of how this trade-off affects algorithm design is given in a case study involving two state-of-the-art optical flow algorithms: a gradient-based and a correlation-based method. The goal of this paper is to bridge the gap between the accuracy- and the efficiency-oriented approaches.

14.
The exponential output size problem is to determine whether the size of output trees of a tree transducer grows exponentially in the size of input trees. In this paper the complexity of this problem is studied. It is shown to be NL-complete for total top-down tree transducers, DEXPTIME-complete for general top-down tree transducers, and P-complete for bottom-up tree transducers.
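The phenomenon itself is easy to reproduce: a top-down transducer with a copying rule produces output exponentially larger than its input. A toy sketch over a monadic input alphabet (encoding and names are ours):

```python
def transduce(t):
    """Top-down transducer with the rules
         q(f(x)) -> g(q(x), q(x))   (copying rule)
         q(a)    -> a
    The copying rule makes output size exponential in input size."""
    if t == "a":
        return "a"
    sub = transduce(t[1])        # t has the form ("f", subtree)
    return ("g", sub, sub)


def size(t):
    """Number of nodes in a tuple-encoded tree."""
    return 1 if t == "a" else 1 + sum(size(c) for c in t[1:])
```

An input chain f^n(a) of size n + 1 yields an output of size 2^(n+1) - 1, so deciding whether a given transducer exhibits this blow-up is exactly the problem the paper classifies.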

15.
In this paper, we investigate several extensions of the linear time hierarchy (denoted by LTH). We first prove that it is not necessary to erase the oracle tape between two successive oracle calls, thereby lifting a common restriction on LTH machines. We also define a natural counting extension of LTH and show that it corresponds to a robust notion of counting bounded arithmetic predicates. Finally, we show that the computational power of the majority operator is equivalent to that of the exact counting operator in both contexts.

16.
Considering a checkpoint and communication pattern, the rollback-dependency trackability (RDT) property stipulates that there are no hidden dependencies between local checkpoints: if there is a dependency between two checkpoints due to a noncausal sequence of messages (a Z-path), then there exists a causal sequence of messages (a C-path) that doubles the noncausal one and establishes the same dependency. This paper introduces the notion of RDT-compliance. A property defined on Z-paths is RDT-compliant if the causal doubling of Z-paths having this property is sufficient to ensure RDT. Based on this notion, the paper provides examples of such properties. Moreover, these properties are visible, i.e., they can be tested on the fly, and one of them is shown to be minimal with respect to visible, RDT-compliant properties; in other words, it defines a minimal visible set of Z-paths that have to be doubled for the RDT property to be satisfied. Then, a family of communication-induced checkpointing protocols that ensure RDT properties on the fly is considered. Assuming processes take local checkpoints independently (basic checkpoints), protocols of this family direct them to take additional local checkpoints on the fly (forced checkpoints) so that the resulting checkpoint and communication pattern satisfies the RDT property. The second contribution of this paper is a new communication-induced checkpointing protocol which, based on a condition derived from the previous characterization, tracks a minimal set of Z-paths and breaks those not perceived as being doubled. Finally, a set of communication-induced checkpointing protocols is derived from it, each derivation considering a particular weakening of the general condition the protocol uses. It is interesting to note that some of these derivations produce communication-induced checkpointing protocols that have already been proposed in the literature.

17.
Aiming at the use of hand gestures for human–computer interaction, this paper presents a real-time approach to the spotting, representation, and recognition of hand gestures from a video stream. The approach exploits multiple cues including skin color, hand motion, and shape. Skin color analysis and coarse image motion detection are joined to perform reliable hand gesture spotting. At a higher level, a compact spatiotemporal representation is proposed for modeling appearance changes in image sequences containing hand gestures. The representation is extracted by combining robust parameterized image motion regression and shape features of a segmented hand. For efficient recognition of gestures made at varying rates, a linear resampling technique for eliminating the temporal variation (time normalization) while maintaining the essential information of the original gesture representations is developed. The gesture is then classified according to a training set of gestures. In experiments with a library of 12 gestures, the recognition rate was over 90%. Through the development of a prototype gesture-controlled panoramic map browser, we demonstrate that a vocabulary of predefined hand gestures can be used to interact successfully with applications running on an off-the-shelf personal computer equipped with a home video camera.
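The linear time-normalization step can be sketched independently of the rest of the pipeline: resample a feature sequence to a fixed length with linear interpolation, preserving endpoints. The function name and the 1-D setting are our simplifications; the paper resamples vector-valued gesture representations:

```python
def resample(seq, n):
    """Linearly resample a 1-D feature sequence to length n,
    preserving the first and last samples (time normalization)."""
    if n == 1:
        return [seq[0]]
    m = len(seq)
    out = []
    for i in range(n):
        pos = i * (m - 1) / (n - 1)   # fractional index into seq
        lo = int(pos)
        hi = min(lo + 1, m - 1)
        frac = pos - lo
        out.append(seq[lo] * (1 - frac) + seq[hi] * frac)
    return out
```

Normalizing every gesture to the same length lets a fixed-size classifier compare gestures performed at different speeds.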

18.
A contribution to the automatic 3-D reconstruction of complex urban scenes from aerial stereo pairs is proposed. It consists of segmenting the scene into two kinds of components, the ground and the above-ground objects, with the above-ground objects classified either as buildings or as vegetation. The idea is to define appropriate regions of interest in order to achieve a relevant 3-D reconstruction. For that purpose, a digital elevation model of the scene is first computed and segmented into above-ground regions using a Markov random field model. A radiometric analysis then classifies above-ground regions as building or vegetation, leading to the determination of the final above-ground objects. The originality of the method is its ability to cope with extended above-ground areas, even in the case of a sloping ground surface, a capability that is necessary in an urban environment. Results are very robust to image and scene variability, and they enable the use of appropriate local 3-D reconstruction algorithms.

19.
Let R be a commutative ring with 1, and let R⟨X1,…,Xn⟩/I be the polynomial algebra in the n ≥ 4 noncommuting variables X1,…,Xn over R, modulo the set of commutator relations I = {(X1+···+Xn)*Xi = Xi*(X1+···+Xn) | 1 ≤ i ≤ n}. Furthermore, let G be an arbitrary group of permutations of the indeterminates X1,…,Xn, and let (R⟨X1,…,Xn⟩/I)^G be the R-algebra of G-invariant polynomials in R⟨X1,…,Xn⟩/I. The first part of this paper presents an algorithm that computes a representation for any f ∈ (R⟨X1,…,Xn⟩/I)^G as a polynomial in multilinear G-invariant polynomials, i.e., the maximal variable degree of the generators of (R⟨X1,…,Xn⟩/I)^G is at most 1. The algorithm works for any ring R and any permutation group G. In addition, we present a bound on the number of generators needed to represent all G-invariant polynomials in (R⟨X1,…,Xn⟩/I)^G of total degree at most d. The second part contains a first but promising analysis of G-invariant polynomials of solvable polynomial rings.

20.
This paper presents a general information-theoretic approach for obtaining lower bounds on the number of examples required for Probably Approximately Correct (PAC) learning in the presence of noise. This approach deals directly with the fundamental information quantities, avoiding a Bayesian analysis. The technique is applied to several different models, illustrating its generality and power. The resulting bounds add logarithmic factors to (or improve the constants in) previously known lower bounds.
