Similar Documents
20 similar documents were retrieved for this query.
1.
The colours of chromatically homogeneous object surfaces measured by a sensor vary with the colour of the illuminant used to light the objects. In contrast, colour constancy enables humans to identify the true colours of surfaces under varying illumination. This paper proposes an adaptive colour constancy algorithm that estimates the illuminant colour from the wavelet coefficients at each scale of a discrete wavelet transform decomposition of the input image. The angular error between the illuminant colours estimated at consecutive scales is used to determine the optimum scale for the best estimate of the true illuminant colour. The estimated illuminant colour is then used to modify the approximation subbands of the image so as to generate the illuminant-colour-corrected image via the inverse discrete wavelet transform. Experiments show that the colour constancy results produced by the proposed algorithm are comparable to, or better than, those of state-of-the-art colour constancy algorithms that use low-level image features.
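A minimal sketch of the scale-wise estimation idea, assuming a grey-world-style average over the approximation subband at each level and a simple angular-error stopping rule; the function names, the `tol_deg` threshold, and the choice of estimator are illustrative and not the paper's exact procedure (uses NumPy and PyWavelets).

```python
import numpy as np
import pywt

def estimate_illuminant_per_scale(image, wavelet="haar", levels=4):
    """Estimate an illuminant colour at each wavelet scale (hypothetical sketch).

    `image` is an HxWx3 float array. At each scale a grey-world-style average is
    taken over the approximation subband of each channel.
    """
    estimates = []
    for level in range(1, levels + 1):
        approx = []
        for c in range(3):
            coeffs = pywt.wavedec2(image[:, :, c], wavelet, level=level)
            approx.append(coeffs[0].mean())          # mean of the approximation subband
        e = np.array(approx)
        estimates.append(e / np.linalg.norm(e))      # normalise to a colour direction
    return estimates

def angular_error(e1, e2):
    """Angular error (degrees) between two normalised illuminant estimates."""
    return np.degrees(np.arccos(np.clip(np.dot(e1, e2), -1.0, 1.0)))

def pick_scale(estimates, tol_deg=1.0):
    """Pick the first scale whose estimate changes little from the previous one."""
    for k in range(1, len(estimates)):
        if angular_error(estimates[k - 1], estimates[k]) < tol_deg:
            return k, estimates[k]
    return len(estimates) - 1, estimates[-1]
```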

2.
Accurate illuminant estimation from digital image data is a fundamental step of practically every image colour correction. Combinational illuminant estimation schemes have been shown to improve estimation accuracy significantly compared to other colour constancy algorithms. These schemes combine the individual estimates of simpler colour constancy algorithms in some ‘intelligent’ manner into a joint and, usually, more accurate illuminant estimate. Among them, a combinational method based on Support Vector Regression (SVR) was proposed recently and demonstrated more accurate illuminant estimation (Li et al., IEEE Trans. Image Process. 23(3), 1194–1209, 2014). We extended this method with our previously introduced convolutional framework, in which the illuminant was estimated by a set of image-specific filters generated using linear analysis. In this work, the convolutional framework was reformulated so that each image-specific filter obtained by principal component analysis (PCA) produced one illuminant estimate. All these individual estimates were then combined into a joint illuminant estimate using SVR. Each illuminant estimate obtained with a single image-specific PCA filter within the convolutional framework thus acted as one base algorithm for the SVR-based combinational method. The proposed method was validated on the well-known Gehler image dataset, as reprocessed and prepared by Shi, as well as on the NUS multi-camera dataset. The median and trimean angular errors were (non-significantly) lower for the proposed method than for the original SVR-based combinational method, while our method used only 6 image-specific PCA filters, whereas the original combinational method required 12 base algorithms for similar results. Moreover, the proposed method formally unifies the grey-edge framework, PCA, linear filtering theory, and SVR regression for combinational illuminant estimation.
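As a rough illustration of the combinational step, the sketch below trains an SVR-based combiner over stacked per-filter illuminant estimates with scikit-learn; the feature layout, kernel, and hyperparameters are assumptions and are not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

def train_svr_combiner(base_estimates, ground_truth_illuminants):
    """Combine per-filter illuminant estimates with SVR (hypothetical sketch).

    `base_estimates` is a list of (n_images, 3) arrays, one per base estimator
    (here, one per PCA filter); `ground_truth_illuminants` is (n_images, 3)."""
    X = np.hstack(base_estimates)               # shape: (n_images, 3 * n_filters)
    y = np.asarray(ground_truth_illuminants)    # shape: (n_images, 3)
    combiner = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=0.01))
    combiner.fit(X, y)
    return combiner

def combine(combiner, per_filter_estimates):
    """Predict a joint illuminant estimate for one image from its per-filter estimates."""
    x = np.hstack(per_filter_estimates).reshape(1, -1)
    e = combiner.predict(x)[0]
    return e / np.linalg.norm(e)                # return a unit-norm illuminant colour
```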

3.
Statistics-based colour constancy algorithms work well as long as there are many colours in a scene; they fail, however, when the scenes encountered comprise only a few surfaces. In contrast, physics-based algorithms, built on an understanding of physical processes such as highlights and interreflections, are theoretically able to solve for colour constancy even when there are as few as two surfaces in a scene. Unfortunately, physics-based theories rarely work outside the lab. In this paper we show that a combination of physical and statistical knowledge leads to a surprisingly simple and powerful colour constancy algorithm, one that also works well for images of natural scenes. From a physical standpoint, we observe that, given the dichromatic model of image formation, the colour signals coming from a single uniformly coloured surface map to a line in chromaticity space. One component of the line is defined by the colour of the illuminant (i.e. specular highlights) and the other is due to its matte, or Lambertian, reflectance. We then make the statistical observation that the chromaticities of common light sources all follow closely the Planckian locus of black-body radiators. It follows that by intersecting the dichromatic line with the Planckian locus we can estimate the chromaticity of the illumination, and can thus solve for colour constancy even when there is a single surface in the scene. When there are many surfaces in a scene, the individual estimates from each surface are averaged together to improve accuracy. In a set of experiments on real images we show that our approach delivers very good colour constancy. Moreover, its performance is significantly better than that of previous dichromatic algorithms.
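A small sketch of the geometric core, assuming (r, g) chromaticity inputs: fit a dichromatic line to the pixels of one surface and intersect it with a piecewise-linear sampling of the Planckian locus supplied by the caller; the locus data and the fitting choices are placeholders, not the paper's implementation.

```python
import numpy as np

def fit_dichromatic_line(chromaticities):
    """Fit a 2D line (point + unit direction) to the (r, g) chromaticities of one
    surface via SVD; the direction is the dominant axis of the dichromatic cluster."""
    pts = np.asarray(chromaticities, dtype=float)
    mean = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - mean)
    return mean, vt[0]

def intersect_with_locus(point, direction, locus):
    """Intersect the dichromatic line with a piecewise-linear Planckian locus.

    `locus` is an (N, 2) array of chromaticities sampled along the locus (values
    would come from black-body colorimetry; here they are an input). Returns the
    intersection closest to the cluster mean, or None if there is no intersection."""
    hits = []
    for a, b in zip(locus[:-1], locus[1:]):
        # Solve point + t*direction == a + s*(b - a) for t and s.
        A = np.column_stack([direction, a - b])
        if abs(np.linalg.det(A)) < 1e-12:
            continue                            # parallel segment, no intersection
        t, s = np.linalg.solve(A, a - point)
        if 0.0 <= s <= 1.0:
            hits.append(point + t * direction)
    return min(hits, key=lambda h: np.linalg.norm(h - point)) if hits else None
```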

4.
In this paper, we describe a probabilistic voxel mapping algorithm that uses an adaptive confidence measure of stereo matching. Most 3D mapping algorithms based on stereo matching generate a map formed by a point cloud, and such maps contain many reconstruction errors. These errors are caused by factors such as calibration errors, stereo matching errors, and triangulation errors. A point cloud map with reconstruction errors cannot accurately represent the structure of an environment and requires a large amount of memory. To solve these problems, we focus on the confidence of stereo matching and on probabilistic representation. For evaluating stereo matching, we propose an adaptive confidence measure that is suitable for outdoor environments; the confidence of stereo matching is reflected in the probability of restoring structures. For probabilistic representation, we propose a probabilistic voxel mapping algorithm. The proposed probabilistic voxel map is a more reliable representation of environments than the commonly used voxel map, which contains only occupancy information. We test the proposed confidence measure and probabilistic voxel mapping algorithm in outdoor environments.
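The sketch below illustrates one plausible way to weight occupancy updates by a matching-confidence value using a log-odds voxel grid; the class, parameters, and update rule are assumptions and do not reproduce the paper's formulation.

```python
import numpy as np
from collections import defaultdict

class ProbabilisticVoxelMap:
    """Hypothetical confidence-weighted occupancy grid (sketch only)."""

    def __init__(self, voxel_size=0.2):
        self.voxel_size = voxel_size
        self.log_odds = defaultdict(float)      # voxel index -> log-odds of occupancy

    def _index(self, point):
        return tuple(np.floor(np.asarray(point) / self.voxel_size).astype(int))

    def update(self, point, confidence, l_occ=0.85):
        """Fuse one reconstructed 3D point. `confidence` in [0, 1] scales how much
        this observation shifts the voxel's log-odds toward 'occupied'."""
        self.log_odds[self._index(point)] += confidence * l_occ

    def occupancy(self, point):
        """Return the occupancy probability of the voxel containing `point`."""
        l = self.log_odds[self._index(point)]
        return 1.0 / (1.0 + np.exp(-l))
```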

5.
6.
Stereo using monocular cues within the tensor voting framework
We address the fundamental problem of matching in two static images. The remaining challenges are related to occlusion and lack of texture. Our approach addresses these difficulties within a perceptual organization framework, considering both binocular and monocular cues. Initially, matching candidates for all pixels are generated by a combination of matching techniques. The matching candidates are then embedded in disparity space, where perceptual organization takes place in 3D neighborhoods and, thus, does not suffer from problems associated with scanline or image neighborhoods. The assumption is that correct matches produce salient, coherent surfaces, while wrong ones do not. Matching candidates that are consistent with the surfaces are kept and grouped into smooth layers. Thus, we achieve surface segmentation based on geometric and not photometric properties. Surface overextensions, which are due to occlusion, can be corrected by removing matches whose projections are not consistent in color with their neighbors of the same surface in both images. Finally, the projections of the refined surfaces on both images are used to obtain disparity hypotheses for unmatched pixels. The final disparities are selected after a second tensor voting stage, during which information is propagated from more reliable pixels to less reliable ones. We present results on widely used benchmark stereo pairs.

7.
Hierarchical stereo and motion correspondence using feature groupings
Hierarchical feature-based stereo matching and motion correspondence algorithms are presented. The hierarchy consists of lines, vertices, edges and surfaces. Matching starts at the highest level of the hierarchy (surfaces) and proceeds to the lowest (lines). Higher-level features are easier to match, because they are fewer in number and more distinct in form. These matches then constrain the matches at lower levels. Perceptual and structural relations are used to group matches into islands of certainty. A Truth Maintenance System (TMS) is used to enforce grouping constraints and eliminate inconsistent match groupings. The TMS is also used to carry out belief revisions necessitated by additions, deletions and confirmations of feature and match hypotheses. The support of the Defense Advanced Research Projects Agency (ARPA Order No. 8979) and the U.S. Army Engineer Topographic Laboratories under contract DACA 76-92-C-0024 is gratefully acknowledged.

8.
A standard method for handling Bayesian models is to use Markov chain Monte Carlo methods to draw samples from the posterior. We demonstrate this method on two core problems in computer vision: structure from motion and colour constancy. These examples illustrate samplers producing useful representations for very large problems. We demonstrate that the sampled representations are trustworthy, using consistency checks in the experimental design. The sampling solution to structure from motion is strictly better than the factorisation approach, because it reports uncertainty on structure and position measurements in a direct way; it can identify tracking errors; and its estimates of covariance in marginal point position are reliable. Our colour constancy solution is strictly better than competing approaches, because it reports uncertainty on surface colour and illuminant measurements in a direct way; it incorporates all available constraints on surface reflectance and on illumination in a direct way; and it integrates a spatial model of reflectance and illumination distribution with a rendering model in a natural way. One advantage of a sampled representation is that it can be resampled to take other information into account. We demonstrate the effect of knowing that, in our colour constancy example, a surface viewed in two different images is in fact the same object. We conclude with a general discussion of the strengths and weaknesses of the sampling paradigm as a tool for computer vision.
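For readers unfamiliar with the sampling machinery, here is a minimal random-walk Metropolis sampler; it is a generic sketch of drawing samples from a posterior, not the problem-specific samplers the paper builds for structure from motion or colour constancy.

```python
import numpy as np

def metropolis_hastings(log_posterior, x0, n_samples=10000, step=0.1, rng=None):
    """Minimal random-walk Metropolis sampler (generic sketch).

    `log_posterior` maps a parameter vector to its unnormalised log posterior."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    logp = log_posterior(x)
    samples = []
    for _ in range(n_samples):
        proposal = x + step * rng.standard_normal(x.shape)
        logp_prop = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = proposal, logp_prop
        samples.append(x.copy())
    return np.array(samples)
```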

9.
《自动化学报》1999,25(6):1
A robust algorithm for recovering shape from shading is presented. The algorithm can effectively estimate the illuminant direction, the diffuse reflection coefficient, and the illuminant zenith angle, and can recover shape from shading along image contours. Taking the effect of self-shadowing into account, a new method estimates the illuminant elevation angle and the surface reflection coefficient from the statistical properties of the image, and the smoothness constraint is enforced by making the reconstructed intensity gradient approach the gradient of the input image. The method is data-driven, stable and reliable; it updates the surface slope and the height map simultaneously and greatly reduces the residual errors in the radiance and integrability terms. Finally, a hierarchical implementation of the SFS (Shape from Shading) algorithm is given.

10.
Learning and feature selection in stereo matching
We present a novel stereo matching algorithm which integrates learning, feature selection, and surface reconstruction. First, a new instance-based learning (IBL) algorithm is used to generate an approximation to the optimal feature set for matching; in addition, the importance of two separate kinds of knowledge, image-dependent knowledge and image-independent knowledge, is discussed. Second, we develop an adaptive method for refining the feature set. This adaptive method analyzes the feature error to locate areas of the image that would lead to false matches. These areas are then used to guide the search through feature space towards maximizing the class-separation distance between the correct match and the false matches. Third, we introduce a self-diagnostic method for determining when a priori knowledge is necessary for finding the correct match; if a priori knowledge is necessary, we use a surface reconstruction model to discriminate between match possibilities. Our algorithm is comprehensively tested against fixed-feature-set algorithms and against a traditional pyramid algorithm. Finally, we present and discuss extensive empirical results of our algorithm on a large set of real images.

11.
A hierarchical SFS algorithm based on illumination parameters and reflection coefficients
杨敬安 《自动化学报》1999,25(6):735-742
A robust algorithm for recovering shape from shading is presented. The algorithm can effectively estimate the illuminant direction, the diffuse reflection coefficient, and the illuminant zenith angle, and can recover shape from shading along image contours. Taking the effect of self-shadowing into account, a new method estimates the illuminant elevation angle and the surface reflection coefficient from the statistical properties of the image, and the smoothness constraint is enforced by making the reconstructed intensity gradient approach the gradient of the input image. The method is data-driven, stable and reliable; it updates the surface slope and the height map simultaneously and greatly reduces the residual errors in the radiance and integrability terms. Finally, a hierarchical implementation of the SFS (Shape from Shading) algorithm is given.
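A toy sketch of estimating the light-source azimuth (tilt) from image statistics, in the spirit of classical SFS illuminant estimators; the averaging rule here is a simplification and is not the estimator proposed in the paper.

```python
import numpy as np

def estimate_illuminant_tilt(image):
    """Estimate the illuminant tilt (azimuth) from image statistics (simplified sketch).

    The average direction of local intensity gradients is a classical cue for the
    light-source azimuth; `image` is a 2D greyscale array."""
    gy, gx = np.gradient(image.astype(float))
    # Sum the gradient components; the resulting direction approximates the tilt.
    tilt = np.arctan2(gy.sum(), gx.sum())
    return tilt  # radians, measured from the image x-axis
```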

12.
The problem of automatically segmenting magnetic resonance (MR) images of the human brain into anatomical structures is considered. Currently, the most popular segmentation algorithms are based on registration (matching) of the input image to an atlas, an image for which an expert labeling is known. Segmentation based on registration with multiple atlases allows anatomical variability to be taken into account more fully and thereby compensates, to some extent, for the errors of matching to each individual atlas. In this work, a more efficient (in both speed and memory) implementation of one of the best multi-atlas label fusion algorithms is proposed for obtaining a labeling of the input image. The algorithm is applied to the problem of segmenting brain MR images into 43 anatomical regions using the publicly available IBSR database, in contrast to the original work, where the authors provide test results only for the extraction of a single anatomical structure, the hippocampus.
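As a baseline illustration of multi-atlas label fusion, the sketch below performs plain majority voting over atlas labelings already registered to the target image; the paper's actual (weighted) fusion rule is more sophisticated and is not reproduced here.

```python
import numpy as np

def majority_vote_fusion(warped_atlas_labels):
    """Fuse labels from several atlases already registered (warped) to the target image.

    `warped_atlas_labels`: list of integer label volumes with identical shape.
    Returns a label volume where each voxel takes the most frequent atlas label."""
    stack = np.stack(warped_atlas_labels, axis=0)          # (n_atlases, ...) array
    labels = np.unique(stack)
    # Count votes per label at every voxel, then pick the label with the most votes.
    votes = np.stack([(stack == lab).sum(axis=0) for lab in labels], axis=0)
    return labels[np.argmax(votes, axis=0)]
```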

13.
A method is presented which, given the RGB values of a colour stimulus displayed on a given cathode ray tube (to which a CIE XYZ triple corresponds), makes it possible to find a spectral reflectance function characterizing an object colour. Such an object, when “illuminated” by a given illuminant, produces a metameric spectral power distribution, that is, one with the same XYZ tristimulus values. The method is useful in the colorimetric matching of colours on different media or supports.
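For context, the forward colorimetric computation that such a method inverts looks roughly like the sketch below, which integrates a reflectance against an illuminant SPD and colour-matching functions; the sampling grid and normalisation convention are assumptions, and the paper's inverse procedure (solving for a metameric reflectance) is not shown.

```python
import numpy as np

def reflectance_to_xyz(reflectance, illuminant_spd, cmf_xyz):
    """Compute CIE XYZ tristimulus values of a reflecting object under an illuminant.

    All inputs are sampled on the same wavelength grid: `reflectance` (N,),
    `illuminant_spd` (N,), and `cmf_xyz` (N, 3) holding the x̄, ȳ, z̄ colour-matching
    functions in its columns."""
    stimulus = reflectance * illuminant_spd                  # reflected spectral power
    xyz = stimulus @ cmf_xyz                                 # integrate against the CMFs
    k = 100.0 / (illuminant_spd @ cmf_xyz[:, 1])             # normalise so Y(white) = 100
    return k * xyz
```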

14.
The rank transform is a nonparametric technique which has recently been proposed for the stereo matching problem. The motivation behind its application to this problem is its invariance to certain types of image distortion and noise, as well as its amenability to real-time implementation. This paper derives a constraint which must be satisfied for a correct match, termed the rank constraint. Experimental work has shown that this constraint is capable of resolving ambiguous matches, thereby improving matching reliability. A novel matching algorithm incorporating the rank constraint is also proposed. This modified algorithm consistently resulted in an increased percentage of correct matches for all test imagery used. Furthermore, the rank constraint has been used to devise a method of identifying regions of an image where the rank transform, and hence the matching outcome, is more susceptible to noise. Experimental results show that the errors predicted using this technique are consistent with the actual errors which result when images are corrupted with noise. Such a method could be used to identify matches which are likely to be incorrect and/or to provide a measure of confidence in a match.
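A compact sketch of the rank transform itself, on which the matching and the rank constraint are built; the window size and border handling are illustrative choices.

```python
import numpy as np

def rank_transform(image, window=5):
    """Rank transform: replace each pixel by the number of pixels in its window whose
    intensity is lower than the centre pixel (nonparametric, robust to monotonic
    intensity distortions). Simple sketch; border pixels are left at zero."""
    image = image.astype(float)
    h, w = image.shape
    r = window // 2
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = image[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.count_nonzero(patch < image[y, x])
    return out

# Matching would then compare rank-transformed windows, e.g. with a sum of absolute
# differences over a correlation window, rather than comparing raw intensities.
```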

15.
The authors show how large efficiency gains can be achieved in model-based 3-D vision by combining the notions of discrete relaxation and bipartite matching. The computational approach presented is capable of pruning large segments of the search space, an indispensable step when the number of objects in the model library is large and when recognition of complex objects with a large number of surfaces is called for. Bipartite matching is used for quick wholesale rejection of inapplicable models and for determining the compatibility of a scene surface with a potential model surface, taking relational considerations into account. The time-complexity function associated with those aspects of the procedure that are implemented via bipartite matching is provided. The algorithms take no more than a couple of iterations, even for objects with more than 30 surfaces.
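The wholesale-rejection idea can be sketched with a standard augmenting-path bipartite matching over a scene-to-model compatibility table; the compatibility test itself (the relational considerations) is abstracted away here, so this is only an illustration of the pruning mechanism.

```python
def maximum_bipartite_matching(compat):
    """`compat[i][j]` is True if scene surface i is compatible with model surface j.
    Returns the size of a maximum matching (simple augmenting-path algorithm)."""
    n_scene = len(compat)
    n_model = len(compat[0]) if compat else 0
    match_of_model = [-1] * n_model            # model surface j -> matched scene surface

    def try_assign(i, seen):
        for j in range(n_model):
            if compat[i][j] and not seen[j]:
                seen[j] = True
                if match_of_model[j] == -1 or try_assign(match_of_model[j], seen):
                    match_of_model[j] = i
                    return True
        return False

    return sum(try_assign(i, [False] * n_model) for i in range(n_scene))

def model_is_plausible(compat):
    """Wholesale rejection test: every scene surface must match some model surface."""
    return maximum_bipartite_matching(compat) == len(compat)
```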

16.
The Internet has led to the development of online markets, and computer scientists have designed various auction algorithms, as well as automated exchanges for standardized commodities; however, they have done little work on exchanges for complex non-standard goods. We present an exchange system for trading complex goods, such as used cars or non-standard financial securities. The system allows traders to represent their buy and sell orders by multiple attributes; for example, a car buyer can specify a model, options, colour, and other desirable features. Traders can also provide complex price constraints, along with preferences among acceptable trades; for instance, a car buyer can specify the dependency of an acceptable price on the model, year of production, and mileage. We describe the representation and indexing of orders, and give algorithms for fast identification of matches between buy and sell orders. The system identifies the most preferable matches, which maximize trader satisfaction, and it allows control over the trade-off between speed and optimality of matching. It supports markets with up to 300,000 orders, and processes hundreds of new orders per second.

17.
In this paper, we propose a simple, flexible, and efficient hybrid spell-checking methodology based upon phonetic matching, supervised learning, and associative matching in the AURA neural system. We integrate Hamming-distance and n-gram algorithms, which have high recall for typing errors, with a phonetic spell-checking algorithm in a single novel architecture. Our approach is suitable for any spell-checking application, though it is aimed at isolated-word error correction, particularly spell checking user queries in a search engine. We use a novel scoring scheme to integrate the words retrieved by each spelling approach and calculate an overall score for each matched word; from the overall scores, we can rank the possible matches. We evaluate our approach against several benchmark spell-checking algorithms for recall accuracy. Our proposed hybrid methodology has the highest recall rate of the techniques evaluated, while keeping the computational cost low.
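A toy sketch of integrating two of the cues (character n-gram overlap and a Hamming-style positional score) into a single ranking; the weights are arbitrary, and the AURA associative-memory and phonetic components of the paper are not reproduced.

```python
def ngrams(word, n=2):
    """Set of character n-grams of a word, padded with boundary markers."""
    padded = f"#{word}#"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def ngram_score(query, candidate, n=2):
    """Dice overlap of character n-grams; robust to insertions and deletions."""
    a, b = ngrams(query, n), ngrams(candidate, n)
    return 2 * len(a & b) / (len(a) + len(b))

def hamming_score(query, candidate):
    """Fraction of agreeing positions; only meaningful for equal-length words."""
    if len(query) != len(candidate):
        return 0.0
    return sum(q == c for q, c in zip(query, candidate)) / len(query)

def rank_candidates(query, lexicon, w_ngram=0.6, w_hamming=0.4):
    """Rank lexicon words by a weighted combination of the two scores."""
    scored = [(w_ngram * ngram_score(query, w) + w_hamming * hamming_score(query, w), w)
              for w in lexicon]
    return [w for _, w in sorted(scored, reverse=True)]
```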

18.
Timothy Bell  David Kulp 《Software》1993,23(7):757-771
Ziv-Lempel coding is currently one of the more practical data compression schemes. It operates by replacing a substring of the text with a pointer to its longest previous occurrence in the input at each coding step. Decoding a compressed file is very fast, but encoding involves searching at each coding step to find the longest match for the next few characters. This paper presents eight data structures that can be used to accelerate this searching, including adaptations of four methods normally used for exact-match searching. The algorithms are evaluated analytically and empirically, indicating the trade-offs available between compression speed and memory consumption. Two of the algorithms are well-known methods of finding the longest match: the time-consuming linear search and the storage-intensive trie (digital search tree). The trie is adapted along the lines of a PATRICIA tree to operate economically. Hashing, binary search trees, splay trees and the Boyer-Moore searching algorithm are traditionally used to search for exact matches, but we show how these can be adapted to find longest matches. In addition, two data structures specifically designed for the application are presented.
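One of the data-structure families discussed, hash chains keyed on a short prefix, can be sketched as follows for the longest-match search; the window size and minimum match length are illustrative parameters, not those used in the paper.

```python
from collections import defaultdict

MIN_MATCH = 3      # length of the hash key (illustrative)
WINDOW = 4096      # how far back matches may reach (illustrative)

# `chains` is a defaultdict(list) mapping each MIN_MATCH-character key to the
# positions where it has occurred, in ascending order.

def longest_match(data, pos, chains):
    """Return (offset, length) of the longest previous match for data[pos:]."""
    key = data[pos:pos + MIN_MATCH]
    best_len, best_off = 0, 0
    for start in reversed(chains.get(key, [])):
        if pos - start > WINDOW:
            break                                   # older candidates are also outside
        length = 0
        while pos + length < len(data) and data[start + length] == data[pos + length]:
            length += 1
        if length > best_len:
            best_len, best_off = length, pos - start
    return best_off, best_len

def index_position(data, pos, chains):
    """Record `pos` in the hash chain for its MIN_MATCH-character key."""
    chains[data[pos:pos + MIN_MATCH]].append(pos)
```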

19.
Conventional feature-point matching algorithms usually impose strict thresholds to reject false matches, but this also removes too many correct matches. To address this problem, a feature-point matching method using dual constraints is proposed. First, the number of matched feature points is counted locally, and a grid-correspondence scheme is used to filter out part of the false matches; then, globally, the fundamental matrix is computed with RANSAC, and the matches are screened once more through the epipolar constraint. Experiments show that, compared with conventional matching algorithms, the proposed algorithm obtains a larger and higher-quality set of matches without increasing the running time.
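A sketch of the global (epipolar) stage using OpenCV's RANSAC fundamental-matrix estimator; the local grid-statistics stage is assumed to have been applied beforehand, and the threshold values are placeholders.

```python
import numpy as np
import cv2

def epipolar_filter(pts1, pts2, ransac_thresh=1.0):
    """Estimate the fundamental matrix with RANSAC and keep only the matches that
    satisfy the epipolar constraint (global filtering sketch).

    `pts1`, `pts2`: (N, 2) float32 arrays of corresponding image points."""
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, ransac_thresh, 0.99)
    if F is None:
        return np.empty((0, 2)), np.empty((0, 2))   # not enough points or degenerate
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]
```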

20.
Many traditional two-view stereo algorithms explicitly or implicitly use the frontal parallel plane assumption when exploiting contextual information since, e.g., the smoothness prior biases toward constant disparity (depth) over a neighborhood. This introduces systematic errors to the matching process for slanted or curved surfaces. These errors are nonnegligible for detailed geometric modeling of natural objects such as a human face. We show how to use contextual information geometrically to avoid such errors. A differential geometric study of smooth surfaces allows contextual information to be encoded in Cartan's moving frame model over local quadratic approximations, providing a framework of geometric consistency for both depth and surface normals; the accuracy of our reconstructions argues for the sufficiency of the approximation. In effect, Cartan's model provides the additional constraint necessary to move beyond the frontal parallel plane assumption in stereo reconstruction. It also suggests how geometry can extend surfaces to account for unmatched points due to partial occlusion.
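To make the contrast with the frontal parallel plane assumption concrete, the sketch below fits a local quadratic disparity model by least squares and reads off its gradient; this conveys only the flavour of local quadratic approximations and is not the paper's moving-frame formulation.

```python
import numpy as np

def fit_local_quadric(us, vs, disparities):
    """Fit d(u, v) ≈ a*u^2 + b*u*v + c*v^2 + d*u + e*v + f over a neighbourhood.

    `us`, `vs`, `disparities` are 1D NumPy arrays of equal length (the pixels in a
    local window and their disparities). Returns the six coefficients."""
    A = np.column_stack([us**2, us * vs, vs**2, us, vs, np.ones_like(us)])
    coeffs, *_ = np.linalg.lstsq(A, disparities, rcond=None)
    return coeffs

def disparity_gradient(coeffs, u, v):
    """Gradient (d_u, d_v) of the fitted model at (u, v); a non-zero gradient encodes a
    slanted surface, the case a constant-disparity prior cannot represent."""
    a, b, c, d, e, _ = coeffs
    return np.array([2 * a * u + b * v + d, b * u + 2 * c * v + e])
```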
