Similar Documents
20 similar documents found, search time 627 ms.
1.
In this paper, we discuss an appearance-matching approach to the difficult problem of interpreting color scenes containing occluded objects. We have explored the use of an iterative, coarse-to-fine sum-squared-error method that uses information from hypothesized occlusion events to perform run-time modification of scene-to-template similarity measures. These adjustments are performed by using a binary mask to adaptively exclude regions of the template image from the squared-error computation. At each iteration, higher resolution scene data as well as information derived from the occluding interactions between multiple object hypotheses are used to adjust these masks. We present results which demonstrate that such a technique is reasonably robust over a large database of color test scenes containing objects at a variety of scales, and tolerates minor 3D object rotations and global illumination variations. Received: 21 November 1996 / Accepted: 14 October 1997
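The core of the masked squared-error idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the images are tiny nested lists and all names are hypothetical.

```python
def masked_ssd(scene, template, mask):
    """Sum-squared error between a scene window and a template,
    counting only pixels where the binary mask is 1 (i.e. pixels
    not believed to be covered by an occluder)."""
    total = 0.0
    for s_row, t_row, m_row in zip(scene, template, mask):
        for s, t, m in zip(s_row, t_row, m_row):
            if m:
                total += (s - t) ** 2
    return total

# A hypothesized occluder covering the right column of the template
# is excluded from the match score by zeroing those mask entries.
template = [[10, 10], [10, 10]]
scene    = [[10, 99], [10, 99]]   # right column occluded in the scene
full     = [[1, 1], [1, 1]]       # no occlusion hypothesized
occl     = [[1, 0], [1, 0]]       # right column masked out
```

With the full mask the occluder dominates the error; with the adapted mask the visible pixels match perfectly, which is exactly why the adjustment helps under occlusion.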

2.
We present a system for classifying the color aspect of textured surfaces having a nearly constant hue (such as wooden boards, textiles, wallpaper, etc.). The system is designed to compensate for small fluctuations (over time) of the light source and for inhomogeneous illumination conditions (shading correction). This is an important feature because even in industrial environments where the lighting conditions are controlled, a constant and homogeneous illumination cannot be guaranteed. Together with an appropriate camera calibration (which includes a periodic update), our approach offers a robust system which is able to “distinguish” (i.e., classify correctly) between surface classes which exhibit visually barely perceptible color variations. In particular, our approach is based on relative (not absolute) color measurements. In this paper, we outline the classification algorithm while focusing in detail on the camera calibration and a method for compensating for fluctuations of the light source. Received: 1 September 1998 / Accepted: 16 March 2000
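One common form of shading correction consistent with the "relative, not absolute" measurements mentioned above is flat-field division: each raw channel value is divided by the response to a uniform reference surface. A minimal sketch (the specific flat-field approach is an assumption, not taken from the paper):

```python
def shading_corrected(channel, flatfield):
    """Relative color measurement: divide the raw channel by a
    flat-field image of a uniform reference surface, cancelling a
    multiplicative shading / illumination gradient."""
    return [[p / f for p, f in zip(p_row, f_row)]
            for p_row, f_row in zip(channel, flatfield)]

# The same surface under an illumination gradient (brighter on the
# right) yields identical corrected values everywhere.
raw       = [[40.0, 60.0], [40.0, 60.0]]
flatfield = [[80.0, 120.0], [80.0, 120.0]]
corrected = shading_corrected(raw, flatfield)
```

After correction, spatial illumination variation cancels out, so classes that differ only by barely perceptible color shifts can still be separated.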

3.
This paper describes a method for recognizing partially occluded objects under different levels of illumination brightness by using eigenspace analysis. In our previous work, we developed the “eigenwindow” method to recognize partially occluded objects in an assembly task, and demonstrated that, for multiple objects with specularity under constant illumination, it performs well enough for industrial use. In this paper, we modify the eigenwindow method, using additional color information, to recognize objects under the varying illumination conditions common in manufacturing environments. In the proposed method, a measured color in the RGB color space is transformed into the HSV color space. The hue of the measured color, which is invariant to changes in illumination brightness and direction, is then used for recognizing multiple objects under different illumination conditions. The proposed method was applied to real images of multiple objects under various illumination conditions, and the objects were recognized and localized successfully.
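The brightness invariance of hue that this method relies on is easy to verify with the standard RGB-to-HSV conversion: scaling all three channels by the same factor (dimming the light) leaves the hue unchanged. A quick check using Python's standard library:

```python
import colorsys

def hue_of(r, g, b):
    """Hue component of an RGB colour, channel values in [0, 1]."""
    h, _s, _v = colorsys.rgb_to_hsv(r, g, b)
    return h

# Dimming the illumination scales R, G and B together;
# the hue is unchanged, while the value (brightness) halves.
bright = hue_of(0.8, 0.4, 0.2)
dim    = hue_of(0.4, 0.2, 0.1)
```

This is why matching on hue, rather than on raw RGB values, tolerates changes in illumination brightness.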

4.
This paper describes a complete stereovision system, which was originally developed for planetary applications but can be used for other applications such as object modeling. A new, effective on-site calibration technique has been developed, which can make use of information from the surrounding environment as well as from the calibration apparatus. A correlation-based stereo algorithm is used, which produces sufficiently dense range maps and has an algorithmic structure suited to fast implementations. A technique based on iterative closest-point matching has been developed for registering successive depth maps and computing the displacements between successive positions. A statistical method based on the distance distribution is integrated into this registration technique, which allows us to deal with such important problems as outliers, occlusion, appearance, and disappearance. Finally, the registered maps are expressed in the same coordinate system and fused, erroneous data are eliminated through consistency checking, and a global digital elevation map is built incrementally.
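The iterative closest-point registration step can be illustrated with a toy 2-D, translation-only variant: repeatedly match each source point to its nearest destination point, then shift by the mean residual. This is a sketch of the general ICP idea only; the paper's version also handles rotation and uses a statistical outlier model.

```python
def icp_translation(src, dst, iters=10):
    """Toy ICP for 2-D point sets, estimating a pure translation:
    match each (shifted) source point to its nearest destination
    point, then update the translation by the mean residual."""
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        dx = dy = 0.0
        for (x, y) in src:
            # nearest neighbour of the currently shifted point
            nx, ny = min(dst, key=lambda p: (p[0] - x - tx) ** 2
                                          + (p[1] - y - ty) ** 2)
            dx += nx - (x + tx)
            dy += ny - (y + ty)
        tx += dx / len(src)
        ty += dy / len(src)
    return tx, ty

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
dst = [(3.0, 1.0), (4.0, 1.0), (3.0, 3.0)]   # src shifted by (3, 1)
```

Even with initially wrong correspondences, the estimate converges to the true displacement within a few iterations, which is what makes ICP usable for chaining successive depth maps.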

5.
In this paper, we present an approach to global transaction management in workflow environments. The transaction mechanism is based on the well-known notion of compensation, but extended to deal with both arbitrary process structures to allow cycles in processes and safepoints to allow partial compensation of processes. We present a formal specification of the transaction model and transaction management algorithms in set and graph theory, providing clear, unambiguous transaction semantics. The specification is straightforwardly mapped to a modular architecture, the implementation of which is first applied in a testing environment, then in the prototype of a commercial workflow management system. The modular nature of the resulting system allows easy distribution using middleware technology. The path from abstract semantics specification to concrete, real-world implementation of a workflow transaction mechanism is thus covered in a complete and coherent fashion. As such, this paper provides a complete framework for the application of well-founded transactional workflows. Received: 16 November 1999 / Accepted: 29 August 2001 / Published online: 6 November 2001
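The interaction of compensation and safepoints can be sketched very compactly: compensating steps run in reverse order of execution, and a safepoint bounds how far back the rollback goes. The step names below are purely illustrative, not from the paper.

```python
def compensate(executed, compensators, safepoint=None):
    """Roll a process back by running compensating steps in reverse
    order of execution, stopping at the given safepoint so already
    validated work is preserved (partial compensation)."""
    undone = []
    for step in reversed(executed):
        if step == safepoint:
            break
        undone.append(compensators[step])
    return undone

executed = ["book_flight", "book_hotel", "charge_card"]
compensators = {"book_flight": "cancel_flight",
                "book_hotel": "cancel_hotel",
                "charge_card": "refund_card"}
```

Without a safepoint the whole process is undone; with `safepoint="book_flight"` the flight booking survives, which is the "partial compensation" the abstract refers to.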

6.
A new adaptive thresholding algorithm for extracting targets from the background in a given image sequence is proposed. Conventional histogram-based or fixed-value thresholding is often inadequate for detecting targets because of poor contrast between targets and background, or because of changes in illumination. The proposed algorithm addresses these problems by learning the characteristics of the background from the given images and determining proper thresholds based on this information. Experiments show that the proposed algorithm is superior to the optimal layering algorithm in target detection and tracking. Received: 28 December 1999 / Accepted: 8 August 2000
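A simple instance of "learning the background to set the threshold" is the classic mean-plus-k-sigma rule: estimate the background intensity statistics from target-free frames and flag anything well outside them. This is a generic sketch of the idea, not the paper's specific algorithm.

```python
def learn_threshold(background_samples, k=3.0):
    """Learn a detection threshold from background-only observations:
    mean + k standard deviations of the observed intensity."""
    n = len(background_samples)
    mean = sum(background_samples) / n
    var = sum((x - mean) ** 2 for x in background_samples) / n
    return mean + k * var ** 0.5

def detect(pixel, threshold):
    """A pixel brighter than the learned threshold is a target candidate."""
    return pixel > threshold

bg = [10, 12, 11, 9, 10, 12, 11, 9]   # background intensities over time
thr = learn_threshold(bg)
```

Because the threshold adapts to the observed background statistics rather than being fixed, it tracks illumination changes that defeat a constant cutoff.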

7.
We present inverse subdivision algorithms, with linear time and space complexity, to detect and reconstruct uniform Loop, Catmull–Clark, and Doo–Sabin subdivision structure in irregular triangular, quadrilateral, and polygonal meshes. We consider two main applications for these algorithms. The first is to enable interactive modeling systems that support uniform subdivision surfaces to use popular interchange file formats which do not preserve the subdivision structure, such as VRML, without loss of information. The second is to improve the compression efficiency of existing lossless connectivity compression schemes by optimally compressing meshes with Loop subdivision connectivity. Our Loop inverse subdivision algorithm is based on global connectivity properties of the covering mesh, a concept motivated by the covering surface from algebraic topology. Although the same approach can be used for other subdivision schemes, such as Catmull–Clark, we present a Catmull–Clark inverse subdivision algorithm based on a much simpler graph-coloring algorithm, and a Doo–Sabin inverse subdivision algorithm based on properties of the dual mesh. Straightforward extensions of these approaches to other popular uniform subdivision schemes are also discussed. Published online: 3 July 2002

8.
We present a method that makes photon-tracing methods feasible for complex scenes when a totally accurate solution is not essential. This is accomplished by using orientation lightmaps, which average the illumination of complex objects depending on the surface normal. Through this averaging, they considerably reduce the variance of the stochastic solution. No changes have to be made to the basic photon-tracing algorithm to use these specialised lightmaps, which consume comparatively small amounts of memory, and they can be freely mixed with normal lightmaps. This gives the user good control over the amount of inaccuracy introduced by their application. The area computations necessary for their insertion are performed using a stochastic sampling method that performs well for highly complex objects.

9.
Conformance testing is still the main industrial validation technique for telecommunication protocols. In practice, the automatic construction of test cases based on finite-state models is hindered by the state explosion problem. We try to reduce its magnitude by using static analysis techniques in order to obtain smaller but equivalent models. Published online: 24 January 2003

10.
11.
We present a novel approach to the robust classification of arbitrary object classes in complex, natural scenes. Starting from a re-appraisal of Marr's ‘primal sketch’, we develop an algorithm that (1) employs local orientations as the fundamental picture primitives, rather than the more usual edge locations, (2) retains and exploits the local spatial arrangement of features of different complexity in an image and (3) is hierarchically arranged so that the level of feature abstraction increases at each processing stage. The resulting, simple technique is based on the accumulation of evidence in binary channels, followed by a weighted, non-linear sum of the evidence accumulators. The steps involved in designing a template for recognizing a simple object are explained. The practical application of the algorithm is illustrated with examples taken from a broad range of object classification problems. We discuss the performance of the algorithm and describe a hardware implementation. First successful attempts to train the algorithm automatically are presented. Finally, we compare our algorithm with other object classification algorithms described in the literature.
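The "accumulation of evidence in binary channels, followed by a weighted, non-linear sum" can be sketched as follows. The saturating clip used as the non-linearity here is an assumption for illustration; the paper does not specify this particular form.

```python
def evidence_score(channels, weights, cap=5):
    """Accumulate votes in binary evidence channels, clip each
    accumulator (a simple saturating non-linearity), then take a
    weighted sum of the clipped accumulators."""
    return sum(w * min(sum(ch), cap)
               for ch, w in zip(channels, weights))

# Two binary feature channels voting for one object class, with the
# first channel (e.g. a strong orientation feature) weighted higher.
channels = [[1, 1, 0, 1], [1, 0, 0, 0]]
weights = [2.0, 1.0]
score = evidence_score(channels, weights)
```

The object is declared present when the combined score exceeds a template-specific threshold; the clipping keeps any single channel from dominating the decision.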

12.
Fast techniques for the optimal smoothing of stored video
Work-ahead smoothing is a technique whereby a server, transmitting stored compressed video to a client, utilizes client buffer space to reduce the rate variability of the transmitted stream. The technique requires the server to compute a schedule of transfer under the constraints that the client buffer neither overflows nor underflows. Recent work established an optimal off-line algorithm (which minimizes peak, variance and rate variability of the transmitted stream) under the assumptions of fixed client buffer size, known worst case network jitter, and strict playback of the client video. In this paper, we examine the practical considerations of heterogeneous and dynamically variable client buffer sizes, variable worst case network jitter estimates, and client interactivity. These conditions require on-line computation of the optimal transfer schedule. We focus on techniques for reducing on-line computation time. Specifically, (i) we present an algorithm for precomputing and storing the optimal schedules for all possible client buffer sizes in a compact manner; (ii) we show that it is theoretically possible to precompute and store compactly the optimal schedules for all possible estimates of worst case network jitter; (iii) in the context of playback resumption after client interactivity, we show convergence of the recomputed schedule with the original schedule, implying greatly reduced on-line computation time; and (iv) we propose and empirically evaluate an “approximation scheme” that produces a schedule close to optimal but takes much less computation time.
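The two constraints that every work-ahead schedule must satisfy, no underflow and no overflow of the client buffer, are easy to state as a feasibility check over cumulative amounts. A minimal sketch (per-slot units are illustrative):

```python
def feasible(schedule, demand, buffer_size):
    """Check the work-ahead constraints: cumulative transmission must
    never fall below cumulative consumption (buffer underflow) nor
    exceed consumption plus the client buffer (buffer overflow)."""
    sent = played = 0
    for s, d in zip(schedule, demand):
        sent += s
        played += d
        if sent < played or sent > played + buffer_size:
            return False
    return True

# A bursty demand [2, 8, 2, 8] can be served at a perfectly smooth
# constant rate of 5 per slot given a 6-unit client buffer, whereas a
# front-loaded schedule overflows that buffer in the first slot.
demand = [2, 8, 2, 8]
```

Smoothing is then the problem of choosing, among all feasible schedules, the one minimizing peak and variability, which is what the cited off-line algorithm computes.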

13.
One important step in the analysis of digitized land-use map images is separating the information into layers. In this paper we present a technique called the Selective Attention Filter, which extracts or enhances image features corresponding to conceptual layers in the map, using information derived from clustering local regions of the map. Different parameters can be used to extract or enhance different information in the image. Details of the algorithm, examples of applying the filter, and results are also presented. Received: October 1, 1997 / Revised: June 16, 1998

14.
This paper introduces a new method for coordinating human motion based on planning and AI techniques. Motions are considered as black boxes that are activated according to preconditions and produce postconditions in a hybrid, continuous and discrete world. Each part of the body is an autonomous entity that cooperates with the others as determined by global criteria, such as occupation rate and distance to a goal (common to all the entities). With this technique, we can easily specify and solve the motion coordination problem of a juggler juggling a dynamically varying number of balls in real time.

15.
In this paper, we present a placement algorithm that interleaves multi-resolution video streams on a disk array and enables a video server to efficiently support playback of these streams at different resolution levels. We then combine this placement algorithm with a scalable compression technique to efficiently support interactive scan operations (i.e., fast-forward and rewind). We present an analytical model for evaluating the impact of the scan operations on the performance of disk-array-based servers. Our experiments demonstrate that: (1) employing our placement algorithm substantially reduces seek and rotational latency overhead during playback, and (2) exploiting the characteristics of video streams and human perceptual tolerances enables a server to support interactive scan operations without any additional overhead.

16.
17.
Performance evaluation is crucial for improving the performance of OCR systems; however, it is tedious and intricate work to do by hand. We have therefore developed an automatic performance evaluation system for a printed Chinese character recognition (PCCR) system. Our system is characterized by using real-world data as test data and automatically obtaining the performance of the PCCR system by comparing the correct text with the recognition result for the document image. In addition, our performance evaluation system provides evaluations of the segmentation module, the classification module, and the post-processing module of the PCCR system. For this purpose, a segmentation error-tolerant character-string matching algorithm is proposed to obtain the correspondence between the correct text and the recognition result. Experiments show that our performance evaluation system is an accurate and powerful tool for studying deficiencies in the PCCR system. Although our approach is aimed at the PCCR system, the idea can also be applied to other OCR systems.
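A standard way to align ground truth with OCR output while tolerating the insertions and deletions that segmentation errors cause is edit-distance alignment. The sketch below scores character accuracy this way; it illustrates the general technique, not the paper's specific matching algorithm.

```python
def char_accuracy(truth, recognized):
    """Edit-distance-based character accuracy: align the correct text
    with the recognition result, tolerating insertions/deletions
    (e.g. from segmentation errors), and score 1 - errors/len(truth)."""
    m, n = len(truth), len(recognized)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # deleting i characters
    for j in range(n + 1):
        d[0][j] = j                      # inserting j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if truth[i - 1] == recognized[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return 1 - d[m][n] / m
```

Because the alignment absorbs a merged or split character as one edit rather than shifting every subsequent comparison, errors can also be attributed back to the segmentation versus classification stages.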

18.
This paper describes an unsupervised algorithm for estimating the 3D profile of potholes in the highway surface, using structured illumination. Structured light is used to accelerate computation and to simplify the estimation of range. A low-resolution edge map is generated so that further processing may be focused on relevant regions of interest. Edge points in each region of interest are used to initialise open, active contour models, which are propagated and refined, via a pyramid, to a higher resolution. At each resolution, internal and external constraints are applied to a snake; the internal constraint is a smoothness function and the external one is a maximum-likelihood estimate of the grey-level response at the edge of each light stripe. Results of a provisional evaluation study indicate that this automated procedure provides estimates of pothole dimension suitable for use in a first, screening, assessment of highway condition. Received: 9 October 1998 / Accepted: 22 February 2000

19.
Approximate query mapping: Accounting for translation closeness
In this paper we present a mechanism for approximately translating Boolean query constraints across heterogeneous information sources. Achieving the best translation is challenging because sources support different constraints for formulating queries, and often these constraints cannot be precisely translated. For instance, a query [score>8] might be “perfectly” translated as [rating>0.8] at some site, but can only be approximated as [grade=A] at another. Unlike other work, our general framework adopts a customizable “closeness” metric for the translation that combines both precision and recall. Our results show that for query translation we need to handle interdependencies among both query conjuncts and disjuncts. As the basis, we identify the essential requirements of a rule system for users to encode the mappings for atomic semantic units. Our algorithm then translates complex queries by rewriting them in terms of the semantic units. We show that, under practical assumptions, our algorithm generates the best approximate translations with respect to the closeness metric of choice. We also present a case study to show how our technique may be applied in practice. Received: 15 October 2000 / Accepted: 15 April 2001 / Published online: 28 June 2001
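A closeness metric combining precision and recall can be sketched by evaluating both the original and the translated predicate over a sample of items. The F1 combination and the sample data below are illustrative choices; the paper's framework lets the user customize the metric.

```python
def closeness(original_pred, translated_pred, items):
    """Precision/recall of a translated predicate against the original
    one over a sample of items, combined here via F1 as one possible
    'closeness' metric."""
    orig = {x for x in items if original_pred(x)}
    trans = {x for x in items if translated_pred(x)}
    if not orig or not trans:
        return 0.0
    overlap = len(orig & trans)
    if overlap == 0:
        return 0.0
    p = overlap / len(trans)   # precision of the translation
    r = overlap / len(orig)    # recall of the translation
    return 2 * p * r / (p + r)

# Items are (score, grade) pairs; [score>8] is approximated as
# [grade=A] at a site that only supports letter grades.
items = [(9, "A"), (8.5, "B"), (7, "A"), (6, "C")]
f = closeness(lambda x: x[0] > 8, lambda x: x[1] == "A", items)
```

Here the translation both misses a qualifying item and admits a non-qualifying one, so precision and recall are each 0.5, quantifying how "close" [grade=A] is to [score>8] on this sample.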

20.
We present part of an industrial project where mechanized theorem proving is used for the validation of a translator which generates safety critical software. In this project, the mechanized proof is decomposed in two parts: one is done “online”, at each run of the translator, by a custom prover which checks automatically that the result of each translation meets some verification conditions; the other is done “offline”, once and for all, interactively with a general purpose prover; the offline proof shows that the verification conditions checked by the online prover are sufficient to guarantee the correctness of each translation. The provably correct verification conditions can thus be seen as specifications for the online prover. This approach is called mechanized result verification. This paper describes the project requirements and explains the motivations for formal validation by mechanized result verification, provides an overview of the formalization of the specifications for the online prover, and discusses in detail some issues we have addressed in the mechanized offline proof.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号