Similar Documents
20 similar documents found (search time: 15 ms)
1.
Lambertian reflectance and linear subspaces (total citations: 23; self-citations: 0, others: 23)
We prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image.
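The 9D subspace in the abstract above is spanned by the first nine real spherical harmonics evaluated at the surface normals. As a minimal illustrative sketch (not the authors' code), the basis images can be computed directly from a normal map; the constant factors below follow the standard real spherical-harmonic convention:

```python
import numpy as np

def sh9_basis(normals):
    """First nine real spherical harmonics evaluated at unit surface
    normals (n_points x 3) -- one column per harmonic, so any image of
    a convex Lambertian object is approximately B @ coeffs for some
    9-vector of lighting coefficients."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    c0 = 1.0 / np.sqrt(4 * np.pi)
    c1 = np.sqrt(3.0 / (4 * np.pi))
    c2 = np.sqrt(15.0 / (4 * np.pi))
    c3 = np.sqrt(5.0 / (16 * np.pi))
    c4 = np.sqrt(15.0 / (16 * np.pi))
    return np.stack([
        c0 * np.ones_like(x),          # l = 0
        c1 * y, c1 * z, c1 * x,        # l = 1
        c2 * x * y,                    # l = 2
        c2 * y * z,
        c3 * (3 * z**2 - 1),
        c2 * x * z,
        c4 * (x**2 - y**2),
    ], axis=1)

normals = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
B = sh9_basis(normals)
print(B.shape)  # (2, 9)
```

Fitting the 9 lighting coefficients to a query image by least squares then gives the linear recognition method the abstract describes.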

2.
3.
《Real》2004,10(1):23-30
A color correction method for balancing the color appearances among a group of images of a specified object or scene, such as panoramic images and object movies, is developed and tested in this paper. To increase the running speed of color correction and reduce the number of out-of-gamut pixels, we introduce the selection of principal regions. The average color values of the principal regions are used to construct low-degree (up to degree two) polynomial mapping functions from the source images to the corrected images. The functions operate in decorrelated color spaces. Our method is tested on real and synthetic images, and the results show that it outperforms existing methods.

4.
Research on methods for constructing solid 3D objects by computer (total citations: 3; self-citations: 0, others: 3)
In computer graphics and 3D image display, for example the rendering and display of confocal laser-scanning microscope images, CT images, and MRI images, the construction of 3D objects is indispensable. At present, 3D object construction is built on top of 2D object construction, which detects 2D connected regions by connectivity analysis, so constructing 3D objects is relatively slow. This paper therefore proposes a new method for constructing solid 3D objects in 3D images: contour-integral region labeling combined with linked-list 3D slice stacking. The basic principle is that, in a 3D binary image, contour-integral region labeling first builds 2D object slices, and linked-list 3D stacking then assembles those 2D slices into 3D solids. The main advantage of this method is its relatively fast construction speed.
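The slice-stacking idea can be sketched independently of the labeling step. Below is a minimal, illustrative version (not the paper's algorithm): given per-slice 2D region labels, labels in adjacent slices that overlap are merged with a union-find structure playing the role of the linking table, yielding one id per 3D solid:

```python
import numpy as np

def find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def stack_slices(slices):
    """slices: list of 2D int arrays; 0 = background, >0 = region label
    (labels assumed unique across slices). Returns a dict mapping each
    2D label to the id of the 3D component it belongs to."""
    labels = sorted({int(v) for s in slices for v in np.unique(s) if v != 0})
    parent = {l: l for l in labels}
    for a, b in zip(slices, slices[1:]):
        overlap = (a > 0) & (b > 0)          # voxels stacked vertically
        for la, lb in set(zip(a[overlap].tolist(), b[overlap].tolist())):
            ra, rb = find(parent, la), find(parent, lb)
            if ra != rb:
                parent[rb] = ra              # link the two 2D regions
    return {l: find(parent, l) for l in labels}

s0 = np.array([[1, 0], [0, 2]])
s1 = np.array([[3, 0], [0, 4]])
comp = stack_slices([s0, s1])
print(comp[1] == comp[3], comp[2] == comp[4], comp[1] != comp[2])
# True True True
```

Regions 1/3 and 2/4 overlap across the two slices, so they fuse into two separate 3D solids.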

5.
6.
7.
Contour extraction of moving objects in complex outdoor scenes (total citations: 29; self-citations: 1, others: 29)
This paper presents a new approach to the extraction of the contour of a moving object. The method is based on the fusion of a motion segmentation technique using image subtraction and a color segmentation technique based on the split-and-merge paradigm and edge information obtained from using the Canny edge detector. The advantages of this method are the following: it can detect large moving objects, the background can be arbitrarily complicated and contain many nonmoving objects, and it requires only three image frames that need not be consecutive, provided that the moving object is entirely contained in the three frames. It is assumed that there is only one moving object in the image and the objects are not blurred by their motion, so that the edges in the image are sharp. The method was applied to road images containing a moving vehicle, and the results show that the contour was correctly extracted in 18 of the 20 cases. We show that this contour extraction method gives good results for other types of moving objects as well. We also describe how the extracted contour can be used to classify a given vehicle into five generic categories. In this study, 19 out of the 20 vehicles were correctly classified. These results demonstrate that integration of multiple cues obtained from relatively simple image analysis techniques leads to a robust extraction of the object of interest in complex outdoor scenes.

Research supported by a grant from the U.S. Department of Transportation through the Great Lakes Center for Truck Transportation Research and by a grant from the National Science Foundation (CDA-8806599).

8.
9.
Visual context provides cues about an object’s presence, position and size within the observed scene, which should be used to increase the performance of object detection techniques. However, in computer vision, object detectors typically ignore this information. We therefore present a framework for visual-context-aware object detection. Methods for extracting visual contextual information from still images are proposed, which are then used to calculate a prior for object detection. The concept is based on a sparse coding of contextual features, which are based on geometry and texture. In addition, bottom-up saliency and object co-occurrences are exploited to define auxiliary visual context. To integrate the individual contextual cues with a local appearance-based object detector, a fully probabilistic framework is established. In contrast to other methods, our integration is based on modeling the underlying conditional probabilities between the different cues, which is done via kernel density estimation. This integration is a crucial part of the framework, which is demonstrated within the detailed evaluation. Our method is evaluated on a novel, demanding image data set and compared to a state-of-the-art method for context-aware object detection. An in-depth analysis is given discussing the contributions of the individual contextual cues and the limitations of visual context for object detection.
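The kernel-density-estimation step of such an integration can be sketched very simply. This is a hedged, illustrative example (the cue, data, and bandwidth are made up, not taken from the paper): the density of a contextual cue value, estimated with a Gaussian kernel over training samples, serves as a detection prior:

```python
import numpy as np

def kde(samples, x, bandwidth=0.1):
    """Gaussian kernel density estimate of 1D samples, evaluated at x."""
    d = (x[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * d**2) / np.sqrt(2 * np.pi)
    return k.sum(axis=1) / (len(samples) * bandwidth)

# Illustrative contextual cue: normalized vertical position at which
# the object class was observed in training images.
samples = np.array([0.48, 0.50, 0.52, 0.55, 0.47])
x = np.array([0.5, 0.9])          # two candidate detection positions
prior = kde(samples, x)
print(prior[0] > prior[1])        # True: context favors positions near 0.5
```

In a full system this prior would be combined multiplicatively with the local appearance detector's score, which is the role the conditional-probability modeling plays in the framework.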

10.
A major problem in object recognition is that a novel image of a given object can be different from all previously seen images. Images can vary considerably due to changes in viewing conditions such as viewing position and illumination. In this paper we distinguish between three types of recognition schemes by the level at which generalization to novel images takes place: universal, class, and model-based. The first is applicable equally to all objects, the second to a class of objects, and the third uses known properties of individual objects. We derive theoretical limitations on each of the three generalization levels. For the universal level, previous results have shown that no invariance can be obtained. Here we show that this limitation holds even when the assumptions made on the objects and the recognition functions are relaxed. We also extend the results to changes of illumination direction. For the class level, previous studies presented specific examples of classes of objects for which functions invariant to viewpoint exist. Here, we distinguish between classes that admit such invariance and classes that do not. We demonstrate that there is a tradeoff between the set of objects that can be discriminated by a given recognition function and the set of images from which the recognition function can recognize these objects. Furthermore, we demonstrate that although functions that are invariant to illumination direction do not exist at the universal level, when the objects are restricted to belong to a given class, an invariant function to illumination direction can be defined. A general conclusion of this study is that class-based processing, which has not been used extensively in the past, is often advantageous for dealing with variations due to viewpoint and illuminant changes.

11.
In order for a robot to operate autonomously in its environment, it must be able to perceive its environment and take actions based on these perceptions. Recognizing the functionalities of objects is an important component of this ability. In this paper, we look into a new area of functionality recognition: determining the function of an object from its motion. Given a sequence of images of a known object performing some function, we attempt to determine what that function is. We show that the motion of an object, when combined with information about the object and its normal uses, provides strong constraints on the possible functions that the object might be performing.

12.
We constructed a universal system for object recognition, which uses preliminary training based on sample images of “objects” and “non-objects.” The images are represented as separate points in a multi-dimensional feature space. A recognition system is described that uses the feature space obtained on the basis of lateral-inhibition-type functions. The article is published in its original language.

13.
We consider the two-frame problem of estimating the motion of a rigid object in 3D from its noisy 2D perspective images. We assume that the object has straight edges meeting at right angles. The typical object is a rectangle, perhaps partly occluded, or a corner of a box. The accuracy of the obtained motion parameters is compared with that obtained from an object defined as a set of points.

14.
Image-based and model-based methods are two representative rendering methods for generating virtual images of objects from their real images. However, both methods still have several drawbacks when we attempt to apply them to mixed reality, where we integrate virtual images with real background images. To overcome these difficulties, we propose a new method, which we refer to as the Eigen-Texture method. The proposed method samples appearances of a real object under various illumination and viewing conditions, and compresses them in the 2D coordinate system defined on the 3D model surface generated from a sequence of range images. The Eigen-Texture method is an example of a view-dependent texturing approach which combines the advantages of image-based and model-based approaches. No reflectance analysis of the object surface is needed, while an accurate 3D geometric model facilitates integration with other scenes. The paper describes the method and reports on its implementation.

15.
Implicit Surface-Based Geometric Fusion (total citations: 1; self-citations: 0, others: 1)
This paper introduces a general-purpose algorithm for reliable integration of sets of surface measurements into a single 3D model. The new algorithm constructs a single continuous implicit surface representation which is the zero-set of a scalar field function. An explicit object model is obtained using any implicit surface polygonization algorithm. Object models are reconstructed from both multiple-view conventional 2.5D range images and hand-held sensor range data. To our knowledge, this is the first geometric fusion algorithm capable of reconstructing 3D object models from noisy hand-held sensor range data.

This approach has several important advantages over existing techniques. The implicit surface representation allows reconstruction of unknown objects of arbitrary topology and geometry. A continuous implicit surface representation enables reliable reconstruction of complex geometry. Correct integration of overlapping surface measurements in the presence of noise is achieved using geometric constraints based on measurement uncertainty. The use of measurement uncertainty ensures that the algorithm is robust to significant levels of measurement noise. Previous implicit surface-based approaches use discrete representations, resulting in unreliable reconstruction for regions of high curvature or thin surface sections. Direct representation of the implicit surface boundary ensures correct reconstruction of arbitrary topology object surfaces. Fusion of overlapping measurements is performed using operations in 3D space only. This avoids the local 2D projection required by many previous methods, which results in limitations on the object surface geometry that can be reliably reconstructed. All previous geometric fusion algorithms developed for conventional range sensor data are based on the 2.5D image structure, preventing their use for hand-held sensor data.

Performance evaluation of the new integration algorithm against existing techniques demonstrates improved reconstruction of complex geometry.
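The core fusion idea, combining overlapping surface measurements into one scalar field whose zero-set is the surface, while weighting each measurement by its uncertainty, can be sketched in 1D. This is a hedged toy example, not the paper's algorithm; the inverse-variance weighting stands in for its uncertainty-based geometric constraints:

```python
import numpy as np

def fuse_signed_distances(fields, variances):
    """fields: (k, n) signed-distance samples from k range scans on a
    common grid; variances: (k,) per-scan measurement variances.
    Returns the inverse-variance-weighted fused scalar field."""
    w = 1.0 / np.asarray(variances)
    return (w[:, None] * fields).sum(axis=0) / w.sum()

grid = np.linspace(0.0, 2.0, 201)
# Two noisy scans of a surface actually located at x = 1.0:
f1 = grid - 1.02                 # scan 1 places it at 1.02
f2 = grid - 0.96                 # scan 2 places it at 0.96
fused = fuse_signed_distances(np.stack([f1, f2]), [0.01, 0.02])
zero = grid[np.argmin(np.abs(fused))]   # approximate zero-crossing
print(round(zero, 2))  # 1.0
```

In 3D the same weighting is applied per voxel of the scalar field, and a polygonization algorithm (e.g. marching cubes) then extracts the explicit zero-set surface.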

16.
17.
We study the problem of detecting objects in still, gray-scale images. Our primary focus is the development of a learning-based approach to the problem that makes use of a sparse, part-based representation. A vocabulary of distinctive object parts is automatically constructed from a set of sample images of the object class of interest; images are then represented using parts from this vocabulary, together with spatial relations observed among the parts. Based on this representation, a learning algorithm is used to automatically learn to detect instances of the object class in new images. The approach can be applied to any object with distinguishable parts in a relatively fixed spatial configuration; it is evaluated here on difficult sets of real-world images containing side views of cars, and is seen to successfully detect objects in varying conditions amidst background clutter and mild occlusion. In evaluating object detection approaches, several important methodological issues arise that have not been satisfactorily addressed in previous work. A secondary focus of this paper is to highlight these issues and to develop rigorous evaluation standards for the object detection problem. A critical evaluation of our approach under the proposed standards is presented.

18.
The appearance of an object greatly changes under different lighting conditions. Even so, previous studies have demonstrated that the appearance of an object under varying illumination conditions can be represented by a linear subspace. A set of basis images spanning such a linear subspace can be obtained by applying the principal component analysis (PCA) for a large number of images taken under different lighting conditions. Since little is known about how to sample the appearance of an object in order to correctly obtain its basis images, it was a common practice to use as many input images as possible. In this study, we present a novel method for analytically obtaining a set of basis images of an object for varying illumination from input images of the object taken properly under a set of light sources, such as point light sources or extended light sources. Our proposed method incorporates the sampling theorem of spherical harmonics for determining a set of lighting directions to efficiently sample the appearance of an object. We further consider the issue of aliasing caused by insufficient sampling of the object's appearance. In particular, we investigate the effectiveness of using extended light sources for modeling the appearance of an object under varying illumination without suffering the aliasing caused by insufficient sampling of its appearance.
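For reference, the empirical PCA route that this paper improves on is easy to sketch. The example below is illustrative only (random synthetic "images"; the paper's contribution, choosing lighting directions analytically via the spherical-harmonic sampling theorem, is not reproduced here):

```python
import numpy as np

def pca_basis_images(images, k):
    """images: (n_images, n_pixels) stack of images of one object under
    different lighting. Returns the top-k PCA basis images, (k, n_pixels),
    computed via SVD of the mean-centered stack."""
    mean = images.mean(axis=0)
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return vt[:k]

rng = np.random.default_rng(0)
lights = rng.random((50, 3))               # 50 random light directions
normals = rng.random((3, 400))             # fake per-pixel surface normals
imgs = np.clip(lights @ normals, 0, None)  # crude Lambertian-style images
basis = pca_basis_images(imgs, 3)
print(basis.shape)  # (3, 400)
```

The rows of `basis` are orthonormal basis images; the paper's point is that with properly chosen lighting directions, far fewer input images suffice to obtain such a basis without aliasing.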

19.
20.
In this study, we propose a novel object-oriented change detection method for remote-sensing images. First, the Gabor texture and Markov random field texture are extracted from the remote-sensing images, and an initial pixel-level change detection result is produced. Second, in order to reduce the influence of feature uncertainty on the change detection results, the weights of the different features are calculated by the Relief algorithm based on the initial pixel-level change detection result, and several difference images are fused to obtain a single comprehensive difference image. Third, different pixel-level change detection results are obtained using diverse change detection methods. The bi-temporal images are then stacked and segmented, and, to ensure separability between change detection methods, the weighted object change probability is obtained by fusing five different object change probabilities, which are calculated from the pixel-level change detection results. Finally, each object is labelled as the class with the higher weighted object change probability. Our experimental results showed that the accuracy of change detection results obtained using the weighted object change probability was higher than that of the change detection results produced using the independent object change probabilities.
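The final fusion-and-labelling step can be sketched compactly. This is a hedged illustration with made-up numbers: the weights here are arbitrary, whereas the paper derives them with the Relief algorithm, and the per-method probabilities come from its pixel-level detectors:

```python
import numpy as np

def fuse_change_probs(probs, weights):
    """probs: (n_methods, n_objects) per-object change probabilities in
    [0, 1] from different detection methods; weights: (n_methods,).
    Returns boolean change labels: change wins if its weighted
    probability exceeds the no-change probability."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize method weights
    p_change = w @ probs                 # weighted object change probability
    return p_change > (1.0 - p_change)

probs = np.array([[0.9, 0.2, 0.6],      # method 1
                  [0.8, 0.1, 0.4],      # method 2
                  [0.7, 0.3, 0.55]])    # method 3
labels = fuse_change_probs(probs, [0.5, 0.3, 0.2])
print(labels.tolist())  # [True, False, True]
```

Objects 1 and 3 end up with weighted change probabilities above 0.5 and are labelled as changed; object 2 is labelled unchanged.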


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号