Similar Documents
20 similar documents were found.
1.
This paper presents an original method for analyzing, in an unsupervised way, images supplied by high-resolution sonar. We aim at segmenting the sonar image into three kinds of regions: echo areas (due to the reflection of the acoustic wave on the object), shadow areas (corresponding to a lack of acoustic reverberation behind an object lying on the sea-bed), and sea-bottom reverberation areas. This unsupervised method estimates the parameters of the noise distributions, modeled by a Weibull probability density function (PDF), and the label field parameters, modeled by a Markov random field (MRF). For the estimation step, we adopt a maximum likelihood technique for the noise model parameters and a least-squares method to estimate the MRF prior model. Then, in order to obtain an accurate segmentation map, we have designed a two-step process that finds the shadow and the echo regions separately, using the previously estimated parameters. First, we introduce a scale-causal and spatial model called SCM (scale causal multigrid), based on a multigrid energy minimization strategy, to find the shadow class. Second, we propose a monoscale MRF model using a priori information (at different levels of knowledge) based on the physical properties of each region, which allows us to distinguish echo areas from sea-bottom reverberation. This technique has been successfully applied to real sonar images and is compatible with automatic processing of massive amounts of data.
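As a small illustration of one ingredient of such a pipeline, the sketch below fits a Weibull noise model to synthetic grey-level samples by maximum likelihood. It is a hedged stand-in for the estimation step described above (SciPy's generic fitter, synthetic data, no MRF or segmentation part), not the authors' estimator.

```python
# Maximum-likelihood fit of a Weibull noise model to (synthetic) region samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical grey-level samples drawn from one region of a sonar image.
samples = rng.weibull(a=1.8, size=5000) * 40.0

# ML estimate of the Weibull shape and scale (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(samples, floc=0)
print(f"estimated shape={shape:.2f}, scale={scale:.2f}")
```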

2.
We consider the problem of simulation preorder/equivalence between infinite-state processes and finite-state ones. First, we describe a general method for utilizing the decidability of bisimulation problems to solve (certain instances of) the corresponding simulation problems. For certain process classes, the method allows us to design effective reductions of simulation problems to their bisimulation counterparts, and some new decidability results for simulation have already been obtained in this way. Then we establish the decidability border for the problem of simulation preorder/equivalence between infinite-state processes and finite-state ones w.r.t. the hierarchy of process rewrite systems. In particular, we show that simulation preorder (in both directions) and simulation equivalence are decidable in EXPTIME between pushdown processes and finite-state ones. On the other hand, simulation preorder is undecidable between PA and finite-state processes in both directions. These results also hold for those PA and finite-state processes which are deterministic and normed, and thus immediately extend to trace preorder. Regularity (finiteness) w.r.t. simulation and trace equivalence is also shown to be undecidable for PA. Finally, we prove that simulation preorder (in both directions) and simulation equivalence are intractable between all classes of infinite-state systems (in the hierarchy of process rewrite systems) and finite-state ones. This result is obtained by showing that the problem whether a BPA (or BPP) process simulates a finite-state one is PSPACE-hard and the other direction is co-NP-hard; consequently, simulation equivalence between BPA (or BPP) and finite-state processes is also co-NP-hard.
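For intuition only, the sketch below computes the largest simulation relation between two finite labelled transition systems by greatest-fixpoint refinement; the transition encoding and the tiny example systems are illustrative assumptions, and the infinite-state constructions of the paper are of course not captured.

```python
# Largest simulation relation between two finite LTSs (greatest fixpoint).
def largest_simulation(trans_p, trans_q, states_p, states_q):
    """trans_x maps (state, action) -> set of successor states."""
    rel = {(p, q) for p in states_p for q in states_q}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            for (s, a), succs in trans_p.items():
                if s != p:
                    continue
                for p2 in succs:
                    # q must answer action a with some q2 still related to p2.
                    if not any((p2, q2) in rel
                               for q2 in trans_q.get((q, a), set())):
                        rel.discard((p, q))
                        changed = True
                        break
                if (p, q) not in rel:
                    break
    return rel

# p0 -a-> p0 ; q0 -a-> q0 and q0 -b-> q0 : q0 simulates p0.
tp = {("p0", "a"): {"p0"}}
tq = {("q0", "a"): {"q0"}, ("q0", "b"): {"q0"}}
print(("p0", "q0") in largest_simulation(tp, tq, {"p0"}, {"q0"}))  # True
```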

3.
Voxelization is the transformation of geometric surfaces into voxels. To date, this process has essentially been done using incremental algorithms. Incremental algorithms have the reputation of being efficient, but they lack an important property: robustness. The voxelized representation should envelop its continuous model; however, without robust methods this cannot be guaranteed. This article describes novel techniques for robust voxelization and visualization of implicit surfaces. First of all, our recursive subdivision voxelization algorithm is reviewed. This algorithm was initially inspired by Duff's image-space subdivision method. Then, we explain the algorithm to voxelize implicit surfaces defined in spherical or cylindrical coordinates. Next, we show a new technique to produce infinite replications of implicit objects and their voxelization method. Afterward, we comment on the parallelization of our voxelization procedure. Finally, we present our voxel visualization algorithm based on point display. Our voxelization algorithms can be used with any data structure, since a voxel is only stored once the last subdivision level is reached. We emphasize the use of the octree, though, because it is a convenient way to store the discrete model hierarchically. In a hierarchy, refinement of the discrete model is simple and can start from any previously voxelized scene, thanks to the robustness of the voxelization algorithms.
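As a hedged illustration of the recursive-subdivision idea with a conservative inclusion test, the sketch below voxelizes a sphere given implicitly by x^2 + y^2 + z^2 - r^2 = 0 using simple interval arithmetic. It is not the authors' general algorithm: the implicit function is hard-coded and a plain list stands in for the octree.

```python
# Conservative recursive-subdivision voxelization of an implicit sphere.
def sq_interval(lo, hi):
    """Range of t^2 for t in [lo, hi]."""
    lo2, hi2 = lo * lo, hi * hi
    return (0.0 if lo <= 0.0 <= hi else min(lo2, hi2)), max(lo2, hi2)

def voxelize_sphere(box, radius, depth, voxels):
    (x0, x1), (y0, y1), (z0, z1) = box
    fx, fy, fz = sq_interval(x0, x1), sq_interval(y0, y1), sq_interval(z0, z1)
    f_lo = fx[0] + fy[0] + fz[0] - radius ** 2
    f_hi = fx[1] + fy[1] + fz[1] - radius ** 2
    if f_lo > 0.0 or f_hi < 0.0:     # box provably misses the surface
        return
    if depth == 0:                    # leaf: keep the voxel (it envelops the surface)
        voxels.append(box)
        return
    xm, ym, zm = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    for xs in ((x0, xm), (xm, x1)):
        for ys in ((y0, ym), (ym, y1)):
            for zs in ((z0, zm), (zm, z1)):
                voxelize_sphere((xs, ys, zs), radius, depth - 1, voxels)

voxels = []
voxelize_sphere(((-1, 1), (-1, 1), (-1, 1)), 0.8, 4, voxels)
print(len(voxels), "surface voxels at the finest level")
```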

4.
In this paper, we investigate several extensions of the linear time hierarchy (denoted by LTH). We first prove that it is not necessary to erase the oracle tape between two successive oracle calls, thereby lifting a common restriction on LTH machines. We also define a natural counting extension of LTH and show that it corresponds to a robust notion of counting bounded arithmetic predicates. Finally, we show that the computational power of the majority operator is equivalent to that of the exact counting operator in both contexts.

5.
A major problem in using iterative number generators of the form x_i = f(x_{i-1}) is that they can enter unexpectedly short cycles. This is hard to analyze when the generator is designed, hard to detect in real time when the generator is used, and can have devastating cryptanalytic implications. In this paper we define a measure of security, called sequence diversity, which generalizes the notion of cycle length for noniterative generators. We then introduce the class of counter-assisted generators and show how to turn any iterative generator (even a bad one designed or seeded by an adversary) into a counter-assisted generator with a provably high diversity, without reducing the quality of generators which are already cryptographically strong.
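A toy sketch of the counter-assisted idea follows: a step counter is mixed into the state of an arbitrary iterative generator so that short cycles of f alone no longer force short output cycles. The XOR combination and the deliberately weak f below are illustrative assumptions, not the paper's exact construction or its diversity proof.

```python
# Wrapping an arbitrary (possibly adversarial) iterative generator with a counter.
MASK = (1 << 32) - 1

def bad_f(x):
    return x & 0xFFFF          # an adversarial f with very short cycles

def counter_assisted(f, seed, n):
    state, counter = seed & MASK, 0
    out = []
    for _ in range(n):
        counter = (counter + 1) & MASK
        state = (f(state) ^ counter) & MASK   # counter-assisted state update
        out.append(state)
    return out

print(counter_assisted(bad_f, seed=12345, n=10))
```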

6.
In wide-area distributed systems it is now common for higher-order code to be transferred from one domain to another; the receiving host may initialise parameters and then execute the code in its local environment. In this paper we propose a fine-grained typing system for a higher-order π-calculus which can be used to control the effect of such migrating code on local environments. Processes may be assigned different types depending on their intended use. This is in contrast to most previous work on typing processes, where all processes are typed by a unique constant type, indicating essentially that they are well typed relative to a particular environment. Our fine-grained typing facilitates the management of access rights and provides host protection from potentially malicious behaviour. A process type takes the form of an interface limiting the resources to which the process has access and the types at which they may be used. Allowing resource names to appear both in process types and process terms, as interaction ports, complicates the typing system considerably. For the development of a coherent typing system, we use a kinding technique, similar to that used in the subtyping of System F, and order-theoretic properties of our subtyping relation. Various examples in this paper illustrate the usage of our fine-grained process types in distributed systems.

7.
In recent years there has been an increased interest in the modeling and recognition of human activities involving highly structured and semantically rich behavior such as dance, aerobics, and sign language. A novel approach is presented for automatically acquiring stochastic models of the high-level structure of an activity without the assumption of any prior knowledge. The process involves temporal segmentation into plausible atomic behavior components and the use of variable-length Markov models for the efficient representation of behaviors. Experimental results are presented that demonstrate the synthesis of realistic sample behaviors and the performance of the models for long-term temporal prediction.
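As a rough illustration of prediction with variable-length Markov models, the sketch below counts contexts up to a maximum order over a discretised symbol sequence and predicts the next symbol from the longest matching context. It is a simplified stand-in under these assumptions; the temporal segmentation and model-acquisition procedure described above are not reproduced.

```python
# Simplified variable-length Markov model: context counting plus back-off prediction.
from collections import defaultdict, Counter

def train_vlmm(sequence, max_order=3):
    counts = defaultdict(Counter)
    for i in range(len(sequence)):
        for k in range(0, max_order + 1):
            if i - k < 0:
                break
            context = tuple(sequence[i - k:i])
            counts[context][sequence[i]] += 1
    return counts

def predict(counts, history, max_order=3):
    # Back off from the longest available context to the empty one.
    for k in range(min(max_order, len(history)), -1, -1):
        context = tuple(history[len(history) - k:])
        if counts[context]:
            return counts[context].most_common(1)[0][0]
    return None

seq = list("abcabcabd")          # hypothetical sequence of atomic behavior symbols
model = train_vlmm(seq)
print(predict(model, list("abcab")))   # most frequent continuation of the longest seen context
```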

8.
We introduce new algorithms for deciding the satisfiability of constraints for the full recursive path ordering with status (RPO), and hence also for other path orderings such as LPO, MPO, KNS, and RDO, and for all possible total precedences and signatures. The techniques are based on a new notion of solved form, where fundamental properties of orderings like transitivity and monotonicity are taken into account. Apart from their simplicity and elegance from the theoretical point of view, the main contribution of these algorithms is their efficiency in practice. Since guessing is minimized and, in particular, no linear orderings between the subterms are guessed, a practical improvement in performance of several orders of magnitude over previous algorithms is obtained, as shown by our experiments.
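For orientation, the sketch below implements a much simpler relative of the problem: comparing two ground terms under the lexicographic path ordering (LPO) with a total precedence. The paper's algorithms solve the harder constraint-satisfiability question for full RPO with status; the term encoding here is an illustrative assumption.

```python
# Lexicographic path ordering on ground terms: s >_lpo t under a total precedence.
def lpo_gt(s, t, prec):
    """Terms are (symbol, [subterms]); prec maps symbol -> int (higher = greater)."""
    f, ss = s
    g, ts = t
    # (1) some argument of s is >= t
    if any(si == t or lpo_gt(si, t, prec) for si in ss):
        return True
    # (2) f > g in the precedence and s > every argument of t
    if prec[f] > prec[g]:
        return all(lpo_gt(s, ti, prec) for ti in ts)
    # (3) f = g: compare arguments lexicographically, and s > every argument of t
    if f == g:
        for si, ti in zip(ss, ts):
            if si == ti:
                continue
            return lpo_gt(si, ti, prec) and all(lpo_gt(s, tj, prec) for tj in ts)
        return False
    return False

zero = ("0", [])
one = ("s", [zero])
prec = {"0": 0, "s": 1, "add": 2, "mul": 3}
# With precedence mul > add > s > 0: mul(s(0), s(0)) >_lpo add(s(0), 0).
print(lpo_gt(("mul", [one, one]), ("add", [one, zero]), prec))  # True
```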

9.
A Continuous Probabilistic Framework for Image Matching
In this paper we describe a probabilistic image matching scheme in which the image representation is continuous and the similarity measure and distance computation are also defined in the continuous domain. Each image is first represented as a Gaussian mixture distribution and images are compared and matched via a probabilistic measure of similarity between distributions. A common probabilistic and continuous framework is applied to the representation as well as the matching process, ensuring an overall system that is theoretically appealing. Matching results are investigated and the application to an image retrieval system is demonstrated.
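A minimal sketch of the general idea, assuming scikit-learn: each image is summarised by a Gaussian mixture over per-pixel feature vectors and two images are compared through average log-likelihoods under each other's mixture. This symmetrised score and the synthetic feature vectors are illustrative stand-ins, not the paper's similarity measure.

```python
# Gaussian-mixture image representations compared by a symmetrised log-likelihood score.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(features, n_components=3):
    return GaussianMixture(n_components=n_components, random_state=0).fit(features)

def symmetric_similarity(feat_a, feat_b, gmm_a, gmm_b):
    # .score returns the average log-likelihood per sample; symmetrise across models.
    return 0.5 * (gmm_b.score(feat_a) + gmm_a.score(feat_b))

rng = np.random.default_rng(0)
img_a = rng.normal(loc=0.0, scale=1.0, size=(500, 3))   # hypothetical (x, y, grey) features
img_b = rng.normal(loc=0.2, scale=1.0, size=(500, 3))
gmm_a, gmm_b = fit_gmm(img_a), fit_gmm(img_b)
print(symmetric_similarity(img_a, img_b, gmm_a, gmm_b))
```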

10.
A term rewriting system is called growing if each variable occurring on both the left-hand side and the right-hand side of a rewrite rule occurs at depth zero or one in the left-hand side. Jacquemard showed that the reachability and the sequentiality of linear (i.e., left- and right-linear) growing term rewriting systems are decidable. In this paper we show that Jacquemard's result can be extended to left-linear growing term rewriting systems that may have right-nonlinear rewrite rules. This implies that the reachability and the joinability of some class of right-linear term rewriting systems are decidable, which improves the results for right-ground term rewriting systems by Oyamaguchi. Our result extends the class of left-linear term rewriting systems having a decidable call-by-need normalizing strategy. Moreover, we prove that the termination property is decidable for almost orthogonal growing term rewriting systems.
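The defining "growing" condition itself is easy to state in code. The sketch below checks it for a single rewrite rule under an illustrative term encoding (variables as ("var", name) leaves); it only tests the syntactic condition, not the decision procedures of the paper.

```python
# Check whether a single rule l -> r satisfies the "growing" condition:
# every variable shared by l and r occurs at depth 0 or 1 in l.
def variables(term, depth=0):
    if term[0] == "var":
        return {(term[1], depth)}
    occs = set()
    for sub in term[1]:
        occs |= variables(sub, depth + 1)
    return occs

def is_growing_rule(lhs, rhs):
    lhs_vars = variables(lhs)
    rhs_names = {name for name, _ in variables(rhs)}
    shared = {name for name, _ in lhs_vars if name in rhs_names}
    return all(depth <= 1 for name, depth in lhs_vars if name in shared)

x = ("var", "x")
# f(x) -> g(f(x)): x occurs at depth 1 in the left-hand side, so the rule is growing.
print(is_growing_rule(("f", [x]), ("g", [("f", [x])])))          # True
# f(h(x)) -> g(x): x occurs at depth 2 in the left-hand side, so it is not.
print(is_growing_rule(("f", [("h", [x])]), ("g", [x])))          # False
```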

11.
In this paper, a wavelet-based off-line handwritten signature verification system is proposed. The proposed system can automatically identify useful and common features which consistently exist within different signatures of the same person and, based on these features, verify whether a signature is a forgery or not. The system starts with a closed-contour tracing algorithm. The curvature data of the traced closed contours are decomposed into multiresolution signals using wavelet transforms. Then the zero-crossings corresponding to the curvature data are extracted as features for matching. Moreover, a statistical measure is devised to decide systematically which closed contours of a writer, and their associated frequency data, are most stable and discriminating. Based on these data, the optimal threshold value which controls the accuracy of the feature extraction process is calculated. The proposed approach can be applied to both on-line and off-line signature verification systems. Experimental results show that the average success rates for English signatures and Chinese signatures are 92.57% and 93.68%, respectively.
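As a hedged illustration of the feature-extraction step only, the sketch below computes the curvature of a toy closed contour, decomposes it with PyWavelets, and collects zero-crossings of the detail coefficients. Contour tracing from a scanned signature, the stability measure, and the threshold selection are omitted, and the wavelet choice is an assumption.

```python
# Curvature of a closed contour and zero-crossings of its wavelet detail bands.
import numpy as np
import pywt

def curvature(x, y):
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.power(dx * dx + dy * dy, 1.5)

def wavelet_zero_crossings(signal, wavelet="db4", level=3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    features = []
    for detail in coeffs[1:]:                              # skip the approximation band
        signs = np.sign(detail)
        features.append(np.where(np.diff(signs) != 0)[0])  # zero-crossing positions
    return features

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
x, y = np.cos(t) + 0.3 * np.cos(5 * t), np.sin(t) + 0.3 * np.sin(5 * t)  # toy closed contour
print([len(z) for z in wavelet_zero_crossings(curvature(x, y))])
```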

12.
We present an approach to attention in active computer vision. The notion of attention plays an important role in biological vision. In recent years, and especially with the emerging interest in active vision, computer vision researchers have been increasingly concerned with attentional mechanisms as well. The basic principles behind these efforts are greatly influenced by psychophysical research. That is also the case in the work presented here, which adapts the model of Treisman (1985, Comput. Vision Graphics Image Process. Image Understanding 31, 156–177), with an early parallel stage with preattentive cues followed by a later serial stage where the cues are integrated. The contributions of our approach are (i) the incorporation of depth information from stereopsis, (ii) the simple implementation of low-level modules such as disparity and flow by local phase, and (iii) the integration of cues across pursuit and saccade modes, which allows proper target selection based on nearness and motion. We demonstrate the technique by experiments in which a moving observer selectively masks out different moving objects in real scenes.

13.
Computation of approximate polynomial greatest common divisors (GCDs) is important both theoretically and for its applications to the control of linear systems, network theory, and computer-aided design. We study two approaches to the solution that have so far been overlooked by researchers, despite intensive recent work in this area. The correlation with numerical Padé approximation enabled us to improve the computations for both problems (GCDs and Padé). The reduction to the approximation of polynomial zeros enabled us to obtain new insight into the GCD problem and to devise effective solution algorithms. In particular, unlike the known algorithms, we estimate the degree of approximate GCDs at a low computational cost, and this enables us to obtain a certified correct solution for a large class of input polynomials. We also restate the problem in terms of the norm of the perturbation of the zeros (rather than the coefficients) of the input polynomials, which leads us to a fast certified solution for any pair of input polynomials via the computation of their roots and of maximum matchings or connected components in the associated bipartite graph.
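The root-matching viewpoint can be sketched as follows, assuming NumPy and NetworkX: the roots of both polynomials are computed, root pairs within a tolerance are connected in a bipartite graph, and the size of a maximum matching estimates the approximate-GCD degree. The tolerance and example polynomials are illustrative, and the certified bounds and Padé-based techniques of the paper are not reproduced.

```python
# Estimating the approximate-GCD degree from matched roots of the two polynomials.
import numpy as np
import networkx as nx

def approx_gcd_degree(p, q, tol=1e-3):
    roots_p, roots_q = np.roots(p), np.roots(q)
    g = nx.Graph()
    g.add_nodes_from(("p", i) for i in range(len(roots_p)))
    g.add_nodes_from(("q", j) for j in range(len(roots_q)))
    for i, rp in enumerate(roots_p):
        for j, rq in enumerate(roots_q):
            if abs(rp - rq) <= tol:          # roots close enough to be identified
                g.add_edge(("p", i), ("q", j))
    matching = nx.max_weight_matching(g, maxcardinality=True)
    return len(matching)

# (x-1)(x-2)(x-3) and a slightly perturbed (x-1)(x-2)(x-4): approximate GCD degree 2.
p = np.poly([1.0, 2.0, 3.0])
q = np.poly([1.0 + 1e-5, 2.0 - 1e-5, 4.0])
print(approx_gcd_degree(p, q))   # 2
```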

14.
Face Detection: A Survey
In this paper we present a comprehensive and critical survey of face detection algorithms. Face detection is a necessary first step in face recognition systems, with the purpose of localizing and extracting the face region from the background. It also has several applications in areas such as content-based image retrieval, video coding, video conferencing, crowd surveillance, and intelligent human–computer interfaces. However, it was not until recently that the face detection problem received considerable attention among researchers. The human face is a dynamic object and has a high degree of variability in its appearance, which makes face detection a difficult problem in computer vision. A wide variety of techniques have been proposed, ranging from simple edge-based algorithms to composite high-level approaches utilizing advanced pattern recognition methods. The algorithms presented in this paper are classified as either feature-based or image-based and are discussed in terms of their technical approach and performance. Due to the lack of standardized tests, we do not provide a comprehensive comparative evaluation, but in cases where results are reported on common datasets, comparisons are presented. We also give a presentation of some proposed applications and possible application areas.

15.
We study trace and may-testing equivalences in the asynchronous versions of CCS and the π-calculus. We start from the operational definition of the may-testing preorder and provide finitary and fully abstract trace-based characterizations for it, along with a complete inequational proof system. We also touch upon two variants of this theory by first considering a more demanding equivalence notion (must-testing) and then a richer version of asynchronous CCS. The results shed light on the difference between synchronous and asynchronous communication and on the weaker testing power of asynchronous observations.

16.
We present a method for automatically estimating the motion of an articulated object filmed by two or more fixed cameras. We focus our work on the case where the quality of the images is poor and where only an approximation of a geometric model of the tracked object is available. Our technique uses physical forces applied to each rigid part of a kinematic 3D model of the object we are tracking. These forces guide the minimization of the differences between the pose of the 3D model and the pose of the real object in the video images. We use a fast recursive algorithm to solve the dynamical equations of motion of any 3D articulated model. We explain the key parts of our algorithms: how relevant information is extracted from the images, how the forces are created, and how the dynamical equations of motion are solved. A study of what kind of information should be extracted from the images, and of when our algorithms fail, is also presented. Finally, we present some results on the tracking of a person. We also show the application of our method to the tracking of a hand in image sequences, showing that the kind of information to extract from the images depends on their quality and on the configuration of the cameras.

17.
18.
19.
The role of perceptual organization in motion analysis has heretofore been minimal. In this work we present a simple but powerful computational model and associated algorithms based on the use of perceptual organizational principles, such as temporal coherence (or common fate) and spatial proximity, for motion segmentation. The computational model does not use traditional frame-by-frame motion analysis; rather, it treats an image sequence as a single 3D spatio-temporal volume. It endeavors to find organizations in this volume of data over three levels: signal, primitive, and structural. The signal level is concerned with detecting individual image pixels that are probably part of a moving object. The primitive level groups these individual pixels into planar patches, which we call temporal envelopes. Compositions of these temporal envelopes describe the spatio-temporal surfaces that result from object motion. At the structural level, we detect these compositions of temporal envelopes by utilizing the structure and organization among them. The algorithms employed to realize the computational model include 3D edge detection, Hough transformation, and graph-based methods to group the temporal envelopes based on Gestalt principles. The significance of the Gestalt relationships between any two temporal envelopes is expressed in probabilistic terms. One of the attractive features of the adopted algorithm is that it does not require the detection of special 2D features or the tracking of these features across frames. We demonstrate that even with simple grouping strategies, we can easily handle drastic illumination changes, occlusion events, and multiple moving objects, without the use of training or of specific object or illumination models. We present results on a large variety of motion sequences to demonstrate this robustness.
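For the signal level only, the sketch below stacks a synthetic image sequence into a single 3D spatio-temporal volume and flags voxels with a strong temporal gradient as probably belonging to a moving object. The temporal envelopes, Hough transform, and Gestalt-based graph grouping of the later levels are not shown, and the threshold is an assumption.

```python
# Signal-level detection in a spatio-temporal volume: flag voxels with strong temporal change.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(0.0, 0.05, size=(20, 64, 64))           # 20 noisy frames
for t in range(20):
    frames[t, 20:30, 2 * t:2 * t + 10] += 1.0                # a horizontally moving bright patch

volume = frames                                               # (t, y, x) spatio-temporal volume
dt = np.gradient(volume, axis=0)                              # temporal derivative
moving = np.abs(dt) > 0.5                                     # signal-level detections
print(moving.sum(), "voxel detections in the volume")
```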

20.
Considering a checkpoint and communication pattern, the rollback-dependency trackability (RDT) property stipulates that there is no hidden dependency between local checkpoints. In other words, if there is a dependency between two checkpoints due to a noncausal sequence of messages (Z-path), then there exists a causal sequence of messages (C-path) that doubles the noncausal one and that establishes the same dependency. This paper introduces the notion of RDT-compliance. A property defined on Z-paths is RDT-compliant if the causal doubling of Z-paths having this property is sufficient to ensure RDT. Based on this notion, the paper provides examples of such properties. Moreover, these properties are visible, i.e., they can be tested on the fly. One of these properties is shown to be minimal with respect to visible and RDT-compliant properties. In other words, this property defines a minimal visible set of Z-paths that have to be doubled for the RDT property to be satisfied. Then, a family of communication-induced checkpointing protocols that ensure on-the-fly RDT properties is considered. Assuming processes take local checkpoints independently (called basic checkpoints), protocols of this family direct them to take additional local checkpoints on the fly (called forced checkpoints) so that the resulting checkpoint and communication pattern satisfies the RDT property. The second contribution of this paper is a new communication-induced checkpointing protocol. This protocol, based on a condition derived from the previous characterization, tracks a minimal set of Z-paths and breaks those not perceived as being doubled. Finally, a set of communication-induced checkpointing protocols is derived from the new protocol. Each of these derivations considers a particular weakening of the general condition it uses. It is interesting to note that some of these derivations produce communication-induced checkpointing protocols that have already been proposed in the literature.
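As a toy sketch of the communication-induced style (not the specific protocol of the paper), the code below piggybacks a checkpoint index on every message and forces a checkpoint when a message arrives carrying an index higher than the receiver's, a simple index-based rule in this protocol family.

```python
# A simple index-based communication-induced checkpointing rule (illustrative only).
class Process:
    def __init__(self, pid):
        self.pid = pid
        self.ckpt_index = 0        # index of the current checkpoint interval

    def take_checkpoint(self, forced=False):
        self.ckpt_index += 1
        kind = "forced" if forced else "basic"
        print(f"P{self.pid}: {kind} checkpoint #{self.ckpt_index}")

    def send(self):
        return self.ckpt_index      # piggyback the checkpoint index on the message

    def receive(self, piggybacked_index):
        if piggybacked_index > self.ckpt_index:
            self.take_checkpoint(forced=True)   # break the potentially hidden dependency
        # ...then deliver the message to the application

p0, p1 = Process(0), Process(1)
p0.take_checkpoint()            # P0 takes a basic checkpoint
p1.receive(p0.send())           # P1 is forced to checkpoint before delivery
```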
