Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
The appearance manifold [WTL*06] is an efficient approach for modeling and editing the time‐variant appearance of materials from BRDF data captured at a single time instance. However, this method is difficult to apply to images in which weathering and shading variations are combined. In this paper, we present a technique for modeling and editing the weathering effects of an object in a single image with appearance manifolds. In our approach, we formulate the input image as the product of reflectance and illuminance. An iterative method is then developed to construct the appearance manifold in color space (i.e., Lab space) for modeling the reflectance variations caused by weathering. Based on the appearance manifold, we propose a statistical method to robustly decompose reflectance and illuminance for each pixel. For editing, we introduce a “pixel‐walking” scheme to modify the pixel reflectance according to its position on the manifold, by which the detailed reflectance variations are well preserved. We illustrate our technique in various applications, including weathering transfer between two images, which is enabled for the first time by our technique. Results show that our technique produces much better results than existing methods, especially for objects with complex geometry and shading effects.

2.
Cédric, Nicolas, Michel 《Neurocomputing》2008, 71(7-9): 1274-1282
Mixtures of probabilistic principal component analyzers model high-dimensional nonlinear data by combining local linear models. Each mixture component is specifically designed to extract the local principal orientations in the data. An important issue with this generative model is its sensitivity to data lying off the low-dimensional manifold. In order to address this problem, the mixtures of robust probabilistic principal component analyzers are introduced. They take care of atypical points by means of a long-tailed distribution, the Student-t. It is shown that the resulting mixture model is an extension of the mixture of Gaussians, suitable for both robust clustering and dimensionality reduction. Finally, we briefly discuss how to construct a robust version of the closely related mixture of factor analyzers.
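The robustness comes from the latent scale (precision) variables of the Student-t, which down-weight atypical points during EM. Below is a minimal sketch of that mechanism for a single multivariate Student-t with fixed degrees of freedom, not the authors' full mixture of robust PPCAs; all names and parameters are illustrative.

    import numpy as np

    def fit_student_t(X, nu=3.0, n_iter=50):
        """EM for one multivariate Student-t; u are the latent precision weights."""
        n, d = X.shape
        mu, Sigma = X.mean(axis=0), np.cov(X, rowvar=False)
        u = np.ones(n)
        for _ in range(n_iter):
            # E-step: weights shrink towards 0 for points far from the bulk
            diff = X - mu
            maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
            u = (nu + d) / (nu + maha)
            # M-step: weighted mean and covariance (outliers barely contribute)
            mu = (u[:, None] * X).sum(axis=0) / u.sum()
            diff = X - mu
            Sigma = (u[:, None, None] * np.einsum('ij,ik->ijk', diff, diff)).sum(axis=0) / n
        return mu, Sigma, u

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(8, 0.5, (5, 2))])
    mu, Sigma, u = fit_student_t(X)
    print(np.round(mu, 2), np.round(u[-5:], 3))   # mean stays near 0; outlier weights are small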

3.
This paper deals with the super-resolution (SR) problem based on a single low-resolution (LR) image. Inspired by the local tangent space alignment algorithm in [16] for nonlinear dimensionality reduction of manifolds, we propose a novel patch-learning method using locally affine patch mapping (LAPM) to solve the SR problem. This approach maps the patch manifold of the low-resolution image to the patch manifold of the corresponding high-resolution (HR) image. The patch mapping is learned from a training set of LR/HR image pairs, utilizing the affine equivalence between the local low-dimensional coordinates of the two manifolds. The latent HR image is estimated from the HR patches generated by applying the proposed patch mapping to the LR patches of the input. We also give a simple analysis of the reconstruction errors of the LAPM algorithm. Furthermore, we propose a global refinement technique to improve the estimated HR image. Numerical results show the efficiency of the proposed methods in comparison with other existing algorithms.
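A minimal sketch of the patch mapping under a neighbour-embedding reading: each LR patch is written as an affine (sum-to-one, least-squares) combination of its k nearest LR training patches, and the same weights are applied to the paired HR patches. This follows the spirit of LAPM's local affine equivalence but is not the paper's exact algorithm; names and sizes are illustrative.

    import numpy as np

    def affine_weights(x, neighbors):
        """Least-squares weights w with sum(w) = 1 such that x ~ w @ neighbors."""
        k = neighbors.shape[0]
        G = neighbors - x                     # local coordinates around x
        C = G @ G.T + 1e-6 * np.eye(k)        # regularised local Gram matrix
        w = np.linalg.solve(C, np.ones(k))
        return w / w.sum()

    def map_patch(lr_patch, lr_train, hr_train, k=5):
        d = np.linalg.norm(lr_train - lr_patch, axis=1)
        idx = np.argsort(d)[:k]               # k nearest LR training patches
        w = affine_weights(lr_patch, lr_train[idx])
        return w @ hr_train[idx]              # synthesised HR patch

    rng = np.random.default_rng(1)
    lr_train, hr_train = rng.random((100, 9)), rng.random((100, 36))  # paired 3x3 / 6x6 patches
    print(map_patch(rng.random(9), lr_train, hr_train).shape)         # (36,)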

4.
A probability-diffusion-based classification model for multispectral remote sensing images
To improve the classification accuracy of remote sensing images, an automatic classification technique for multispectral remote sensing images based on a probability diffusion model is proposed. The method first determines the optimal number of classes automatically by comparing validity functions of the fuzzy C-means (FCM) classifier, then uses a morphology-based anisotropic probability diffusion model to adjust the probability of each pixel's class membership, and finally classifies each pixel by maximum a posteriori (MAP) estimation on the diffused membership-probability maps. Because anisotropic diffusion smooths while preserving edges, the probability diffusion model not only effectively suppresses "speckle" inside homogeneous regions but also preserves the important edge features of the image. Experimental results show that the algorithm avoids speckle noise in the classified image and reaches an overall accuracy of 77.76% and a Kappa coefficient of 0.7198, both better than MAP classification without probability diffusion, so the method is of practical value.
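A minimal sketch of the diffuse-then-classify idea: every class-membership probability map is smoothed with edge-preserving diffusion before taking the per-pixel argmax (the MAP label). A generic Perona-Malik-style scheme stands in here for the paper's morphology-based anisotropic diffusion; parameters are illustrative.

    import numpy as np

    def anisotropic_diffuse(p, n_iter=20, kappa=0.1, gamma=0.2):
        p = p.copy()
        for _ in range(n_iter):
            dn, ds = np.roll(p, -1, 0) - p, np.roll(p, 1, 0) - p
            de, dw = np.roll(p, -1, 1) - p, np.roll(p, 1, 1) - p
            # conduction coefficients are small across strong edges
            cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
            ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
            p = p + gamma * (cn * dn + cs * ds + ce * de + cw * dw)
        return p

    def map_classify(prob_maps):
        """prob_maps: (n_classes, H, W) FCM membership maps -> label image."""
        diffused = np.stack([anisotropic_diffuse(p) for p in prob_maps])
        diffused /= diffused.sum(axis=0, keepdims=True)    # renormalise per pixel
        return diffused.argmax(axis=0)                     # MAP label per pixel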

5.
Building extraction from remote sensing imagery is easily affected by mixed pixels, which limits extraction accuracy. Sub-pixel mapping can recover land-cover distribution information at the sub-pixel scale and thus reduce the influence of mixed pixels on the extraction result. Traditional sub-pixel mapping models describe the spatial correlation of land cover with an isotropic neighborhood and do not take the characteristic shapes of objects into account, so they are ill-suited to building extraction. Taking the spectral characteristics of buildings into account, anisotropic neighborhoods parallel and perpendicular to the dominant orientation of the target building are constructed, and a sub-pixel mapping model based on an anisotropic Markov random field is used to extract buildings at the sub-pixel scale. Experimental results on QuickBird multispectral data and AVIRIS hyperspectral data show that the buildings extracted by this model not only have higher spatial resolution but also better preserve the shapes of building edges and corners; the model is therefore an effective method for sub-pixel-scale building extraction.

6.
A novel encryption scheme for quantum images based on restricted geometric and color transformations is proposed. The new strategy combines efficient permutation and diffusion properties for quantum image encryption. The core idea of the permutation stage is to scramble the codes of the pixel positions through restricted geometric transformations. A new quantum diffusion operation is then applied to the permuted quantum image based on restricted color transformations. The encryption keys of the two stages are generated by two sensitive chaotic maps, which ensures the security of the scheme. The final step, measurement, is modeled probabilistically. Statistical analyses demonstrate significant improvements in favor of the proposed approach.
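A minimal classical sketch of the two chaos-keyed stages (not the quantum-circuit construction): one logistic map generates the position permutation and a second generates an XOR diffusion key stream; the key values (x0, r) are illustrative.

    import numpy as np

    def logistic_sequence(x0, r, n):
        xs, x = np.empty(n), x0
        for i in range(n):
            x = r * x * (1.0 - x)
            xs[i] = x
        return xs

    def encrypt(img, key_perm=(0.3741, 3.99), key_diff=(0.6923, 3.97)):
        flat, n = img.ravel(), img.size
        perm = np.argsort(logistic_sequence(*key_perm, n))             # permutation stage
        keystream = (logistic_sequence(*key_diff, n) * 256).astype(np.uint8)
        return (flat[perm] ^ keystream).reshape(img.shape)             # diffusion stage

    def decrypt(cipher, key_perm=(0.3741, 3.99), key_diff=(0.6923, 3.97)):
        n = cipher.size
        perm = np.argsort(logistic_sequence(*key_perm, n))
        keystream = (logistic_sequence(*key_diff, n) * 256).astype(np.uint8)
        flat = cipher.ravel() ^ keystream
        out = np.empty_like(flat)
        out[perm] = flat                                               # undo the permutation
        return out.reshape(cipher.shape)

    img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
    assert np.array_equal(decrypt(encrypt(img)), img)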

7.
In this paper, we present a probabilistic generative approach for constructing topographic maps of tree-structured data. Our model defines a low-dimensional manifold of local noise models, namely (hidden) Markov tree models, induced by a smooth mapping from a low-dimensional latent space. We contrast our approach with that of topographic map formation using recursive neural-based techniques, namely the self-organizing map for structured data (SOMSD) (Hagenbuchner, 2003). The probabilistic nature of our model brings a number of benefits: 1) a naturally defined cost function that drives the model optimization; 2) principled model comparison and testing for overfitting; 3) a potential for transparent interpretation of the map by inspecting the underlying local noise models; 4) natural accommodation of alternative local noise models implicitly expressing different notions of structured data similarity. Furthermore, in contrast with the recursive neural-based approaches, the smooth nature of the mapping from the latent space to the local model space allows for calculation of magnification factors, a useful tool for the detection of data clusters. We demonstrate our approach on three data sets: a toy data set, an artificially generated data set, and a data set of images represented as quadtrees.

8.
Visual secret sharing (VSS) is an encryption technique that exploits the human visual system to recover the secret image and does not require any cryptographic computation. Pixel expansion has been a major issue of VSS schemes. A number of probabilistic VSS schemes with minimum pixel expansion have been proposed for binary secret images. This paper presents a general probabilistic (k, n)-VSS scheme for grey-scale images and another scheme for color images. With our schemes, the pixel expansion can be set to a user-defined value; when this value is 1, there is no pixel expansion at all. The quality of the reconstructed secret images, measured by average contrast (or average relative difference), is equivalent to the contrast of existing deterministic VSS schemes. Previous probabilistic VSS schemes for black-and-white images can be viewed as special cases of the schemes proposed here.
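A minimal sketch of a probabilistic (2, 2) scheme with no pixel expansion for a binary image, of the kind the paper generalises: a white secret pixel gives both shares the same random bit, a black pixel gives complementary bits, so OR-stacking reconstructs black pixels always and white pixels only about half the time (the average contrast).

    import numpy as np

    def share_binary(secret, rng=None):
        """secret: 2-D array, 0 = white, 1 = black. Returns two same-size shares."""
        rng = np.random.default_rng() if rng is None else rng
        s1 = rng.integers(0, 2, secret.shape, dtype=np.uint8)
        s2 = np.where(secret == 1, 1 - s1, s1)      # complementary only where black
        return s1, s2.astype(np.uint8)

    secret = (np.random.default_rng(0).random((32, 32)) > 0.5).astype(np.uint8)
    s1, s2 = share_binary(secret)
    rec = s1 | s2                                    # OR models stacking transparencies
    print((rec[secret == 1] == 1).all())             # True: black always recovered
    print(round(rec[secret == 0].mean(), 2))         # ~0.5: white areas appear lighter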

9.
Numerical and computer-graphic methods for conformal image mapping between two simply connected regions are described. The immediate motivation for this application is that the visual field is represented in the brain by mappings which are, at least approximately, conformal. Thus, to simulate the imaging properties of the human visual system (and perhaps other sensory systems), conformal image mapping is a necessary technique. For generating the conformal map, a method for analytic mappings and an implementation of the Symm algorithm for numerical conformal mapping are shown. The first method evaluates the inverse mapping function at each pixel of the range, with antialiasing by multiresolution texture prefiltering and bilinear interpolation. The second method is based on constructing a piecewise affine approximation of the mapping in the form of a joint triangulation, or triangulation map, in which only the nodes of the triangulation are conformally mapped. The texture is then mapped by a local affine transformation on each pixel of the range triangulation, with the same antialiasing used in the first method. The algorithms are illustrated with examples of conformal mappings constructed analytically from elementary mappings, such as the linear fractional map and the complex logarithm. Applications of numerically generated maps between highly irregular regions and an example of the visual-field mapping that motivates this work are also shown.
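A minimal sketch of the first (analytic) method for one elementary map, the complex logarithm w = log z that underlies the log-polar visual-field representation: the inverse map z = exp(w) is evaluated at every pixel of the range and the texture is resampled with bilinear interpolation (no multiresolution prefiltering here); sizes are illustrative.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def logpolar_warp(img, out_shape=(256, 256)):
        h, w = img.shape
        log_r = np.linspace(0.0, np.log(min(h, w) / 2.0), out_shape[1])
        theta = np.linspace(-np.pi, np.pi, out_shape[0])
        T, R = np.meshgrid(theta, log_r, indexing='ij')
        z = np.exp(R + 1j * T)                 # inverse of w = log z
        rows = z.imag + h / 2.0                # back to pixel coordinates,
        cols = z.real + w / 2.0                # centred on the image middle
        return map_coordinates(img, [rows, cols], order=1, mode='constant')

    img = np.random.default_rng(0).random((256, 256))
    print(logpolar_warp(img).shape)            # (256, 256)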

10.
Salient object detection by fusing color attributes and spatial information
Objective: This paper proposes a salient-object detection algorithm based on color attributes and spatial information and applies it to traffic sign detection. Method: First, color attributes are trained to obtain a color-to-pixel-value distribution, according to which the image is partitioned into color clusters; the saliency of each cluster depends on its spatial compactness. Second, each cluster is divided into several regions, each region is represented by a color-attribute descriptor, and the global contrast of each region is computed. Finally, the region contrast and the spatial compactness of the corresponding cluster are combined to obtain the final saliency map. On this basis, prior knowledge about traffic signs is converted into a top-down saliency map, forming a task-driven saliency model for traffic sign detection. Results: Tests on public datasets show that the algorithm achieves a precision of up to 92%, outperforming other popular saliency algorithms; detection on a traffic sign dataset reaches an accuracy of 90.7%. Conclusion: A new saliency detection algorithm is proposed that considers both the color and the spatial information of regions; it achieves high precision and recall on public datasets and also performs well in traffic sign detection.
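A minimal sketch of the cluster-then-score-by-compactness step: pixels are clustered in Lab space (plain k-means stands in for the learned color attributes) and each cluster's saliency is the inverse of the spatial variance of its pixels; the region contrast term and the top-down traffic-sign prior are omitted, and all parameters are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans
    from skimage.color import rgb2lab

    def compactness_saliency(rgb, n_clusters=8):
        h, w, _ = rgb.shape
        lab = rgb2lab(rgb).reshape(-1, 3)
        labels = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit_predict(lab)
        ys, xs = np.mgrid[0:h, 0:w]
        pos = np.stack([ys.ravel() / h, xs.ravel() / w], axis=1)   # normalised positions
        sal = np.zeros(h * w)
        for k in range(n_clusters):
            mask = labels == k
            spread = pos[mask].var(axis=0).sum()    # spatial variance of the cluster
            sal[mask] = 1.0 / (spread + 1e-6)       # compact clusters are salient
        sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
        return sal.reshape(h, w)

    img = np.random.default_rng(0).random((64, 64, 3))
    print(compactness_saliency(img).shape)          # (64, 64)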

11.
Textile fiber image fusion based on regional sharpness
For multi-focus textile fiber images, an image fusion method based on regional sharpness is proposed. The sharpness of a pixel is measured by the modulus of its gray-level gradient. First, by searching for the maximum modulus of each pixel across the multi-focus image stack, the index of the layer in which each pixel is sharpest (i.e., has the largest gradient modulus) is determined and stored in a layer-index matrix. Then, to handle noise in the images, a regional threshold derived from the maximum modulus within a local region is used for denoising, and the layer-index matrix is corrected accordingly. The fused multi-focus image is then composed by taking each pixel's gray value from the layer indicated by the layer-index matrix. Finally, improvements to the fusion method are proposed to further increase processing speed. Experiments show that the proposed multi-focus image fusion method is effective.
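A minimal sketch of the core fusion rule: for every pixel the focal plane with the largest local gradient modulus (sharpness) is selected into a layer-index matrix, and the fused image copies each pixel's gray value from that layer; the threshold-based denoising and the speed refinements described above are omitted.

    import numpy as np
    from scipy.ndimage import sobel, uniform_filter

    def fuse_stack(stack):
        """stack: (n_layers, H, W) gray images of the same scene at different focus."""
        sharpness = []
        for layer in stack:
            gx = sobel(layer.astype(float), axis=1)
            gy = sobel(layer.astype(float), axis=0)
            sharpness.append(uniform_filter(np.hypot(gx, gy), size=5))  # local gradient modulus
        layer_index = np.argmax(np.stack(sharpness), axis=0)   # layer-index matrix
        rows, cols = np.indices(layer_index.shape)
        return stack[layer_index, rows, cols], layer_index

    stack = np.random.default_rng(0).random((3, 64, 64))
    fused, idx = fuse_stack(stack)
    print(fused.shape, idx.min(), idx.max())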

12.
Based on the Riemannian geometry of the multinomial manifold, a new method is proposed for measuring the information difference between color co-occurrence matrices within a matrix-manifold framework and applying it to object recognition. For a given color quantization level and a local neighborhood around each pixel, the method models the colors co-occurring in any two color channels of a color image as a probabilistic realization of an underlying multinomial distribution. Through a compactification-based embedding of the co-occurrence frequencies, each image can be identified with a point on a product matrix manifold, where each factor manifold is endowed with the Fisher information distance induced from the corresponding multinomial manifold. For a recognition task, a test sample is matched to the training samples by first predicting a label on each factor manifold with a nearest-neighbor classifier and then taking a majority vote on the product manifold. Excellent recognition results on the Georgia Tech (GT) color face database and the COIL-100 object database demonstrate the effectiveness of the method.
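A minimal sketch of the key metric: the Fisher information (geodesic) distance between two multinomial distributions p and q is d(p, q) = 2 arccos(Σ_i √(p_i q_i)), used here with a plain nearest-neighbour rule on normalised co-occurrence histograms; the per-channel factor manifolds and the majority vote on the product manifold are omitted.

    import numpy as np

    def fisher_distance(p, q, eps=1e-12):
        p = p / (p.sum() + eps)
        q = q / (q.sum() + eps)
        bc = np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0)   # Bhattacharyya coefficient
        return 2.0 * np.arccos(bc)                        # geodesic distance on the simplex

    def nearest_neighbour(test_hist, train_hists, train_labels):
        d = [fisher_distance(test_hist, h) for h in train_hists]
        return train_labels[int(np.argmin(d))]

    rng = np.random.default_rng(0)
    train, labels = rng.random((10, 64)), np.arange(10)   # stand-ins for co-occurrence histograms
    print(nearest_neighbour(train[3] + 0.01 * rng.random(64), train, labels))   # -> 3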

13.
The gamut mapping algorithm is one of the most promising methods to achieve computational color constancy. However, so far, gamut mapping algorithms have been restricted to using pixel values to estimate the illuminant. In this paper, gamut mapping is therefore extended to incorporate the statistical nature of images. It is analytically shown that the proposed gamut mapping framework is able to include any linear filter output. The main focus is on the local n-jet describing the derivative structure of an image. It is shown that derivatives have the advantage over pixel values of being invariant to disturbing effects (i.e. deviations from the diagonal model) such as saturated colors and diffuse light. Further, as the n-jet based gamut mapping is able to use more information than pixel values alone, the combination of these algorithms is more stable than the regular gamut mapping algorithm. Different methods of combining are proposed. Based on theoretical and experimental results conducted on large-scale data sets of hyperspectral, laboratory and real-world scenes, it can be derived that (1) in case of deviations from the diagonal model, the derivative-based approach outperforms the pixel-based gamut mapping, (2) state-of-the-art algorithms are outperformed by the n-jet based gamut mapping, (3) the combination of the different n-jet based gamut mappings provides more stable solutions, and (4) the fusion strategy based on the intersection of feasible sets provides better color constancy results than the union of the feasible sets.

14.
There has been growing interest in subspace data modeling over the past few years. Methods such as principal component analysis, factor analysis, and independent component analysis have gained in popularity and have found many applications in image modeling, signal processing, and data compression, to name just a few. As applications and computing power grow, more and more sophisticated analyses and meaningful representations are sought. Mixture modeling methods have been proposed for principal and factor analyzers that exploit local gaussian features in the subspace manifolds. Meaningful representations may be lost, however, if these local features are nongaussian or discontinuous. In this article, we propose extending the gaussian analyzers mixture model to an independent component analyzers mixture model. We employ recent developments in variational Bayesian inference and structure determination to construct a novel approach for modeling nongaussian, discontinuous manifolds. We automatically determine the local dimensionality of each manifold and use variational inference to calculate the optimum number of ICA components needed in our mixture model. We demonstrate our framework on complex synthetic data and illustrate its application to real data by decomposing functional magnetic resonance images into meaningful, and medically useful, features.

15.
We present a probabilistic framework, namely multiscale generative models known as dynamic trees (DTs), for unsupervised image segmentation and subsequent matching of segmented regions in a given set of images. Beyond these novel applications of DTs, we propose important additions to this modeling paradigm. First, we introduce a novel DT architecture, where multilayered observable data are incorporated at all scales of the model. Second, we derive a novel probabilistic inference algorithm for DTs, structured variational approximation (SVA), which explicitly accounts for the statistical dependence of node positions and model structure in the approximate posterior distribution, thereby relaxing poorly justified independence assumptions in previous work. Finally, we propose a similarity measure for matching dynamic-tree models, representing segmented image regions, across images. Our results for several data sets show that DTs are capable of capturing important component-subcomponent relationships among objects and their parts, and that DTs perform well in segmenting images into plausible pixel clusters. We demonstrate the significantly improved properties of the SVA algorithm, both in terms of substantially faster convergence rates and larger approximate posteriors for the inferred models, when compared with competing inference algorithms. Furthermore, results on unsupervised object recognition demonstrate the viability of the proposed similarity measure for matching dynamic-structure statistical models.

16.
In this paper, we introduce a novel framework for low-level image processing and analysis. First, we process images with very simple, difference-based filter functions. Second, we fit the 2-parameter Weibull distribution to the filtered output. This maps each image to the 2D Weibull manifold. Third, we exploit the information geometry of this manifold and solve low-level image processing tasks as minimisation problems on point sets. As a proof-of-concept example, we examine the image autofocusing task. We propose appropriate cost functions together with a simple implicitly-constrained manifold optimisation algorithm and show that our framework compares very favourably against common autofocus methods from the literature. In particular, our approach exhibits the best overall performance in terms of combined speed and accuracy.
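A minimal sketch of the first two steps: filter the image with a simple difference (Sobel) filter and fit a 2-parameter Weibull distribution to the response magnitudes, giving the image's point on the Weibull manifold. The manifold optimisation for autofocus is omitted, and all parameters are illustrative.

    import numpy as np
    from scipy.stats import weibull_min
    from scipy.ndimage import sobel

    def weibull_point(img):
        gx = sobel(img.astype(float), axis=1)
        gy = sobel(img.astype(float), axis=0)
        mag = np.hypot(gx, gy).ravel()
        mag = mag[mag > 1e-8]                             # Weibull support is x > 0
        shape, _, scale = weibull_min.fit(mag, floc=0)    # 2-parameter fit, location fixed at 0
        return shape, scale

    rng = np.random.default_rng(0)
    sharp = rng.random((64, 64))
    blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3.0
    print(weibull_point(sharp), weibull_point(blurred))   # blur shifts the fitted parameters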

17.
In this paper, we propose a scheme for texture classification and segmentation. The methodology involves extracting texture features using the wavelet packet frame decomposition. This is followed by a Gaussian-mixture-based classifier which assigns each pixel to a texture class. Each subnet of the classifier is modeled by a Gaussian mixture model, and each texture image is assigned to the class to which most of its pixels belong. This scheme shows high recognition accuracy in the classification of Brodatz texture images. It can also be extended to unsupervised texture segmentation using a Kullback-Leibler divergence between two Gaussian mixtures. The proposed method was successfully applied to Brodatz mosaic image segmentation and fabric defect detection.
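A minimal sketch of the classifier stage: one Gaussian mixture is fitted per texture class on feature vectors (random stand-ins replace the wavelet-packet-frame features here), and a sample is assigned to the class whose mixture gives it the highest log-likelihood.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def train_per_class_gmms(features_by_class, n_components=3):
        return {c: GaussianMixture(n_components, covariance_type='full', random_state=0).fit(X)
                for c, X in features_by_class.items()}

    def classify(gmms, X):
        classes = list(gmms)
        loglik = np.column_stack([gmms[c].score_samples(X) for c in classes])
        return np.asarray(classes)[loglik.argmax(axis=1)]   # highest-likelihood class per sample

    rng = np.random.default_rng(0)
    train = {'grass': rng.normal(0, 1, (300, 8)), 'brick': rng.normal(3, 1, (300, 8))}
    gmms = train_per_class_gmms(train)
    print(classify(gmms, rng.normal(3, 1, (5, 8))))          # mostly 'brick'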

18.
Binarization plays an important role in document image processing, especially for degraded documents. For degraded document images, adaptive binarization methods often incorporate local information to determine the binarization threshold for each individual pixel. We propose a two-stage, parameter-free, window-based method to binarize degraded document images. In the first stage, an incremental scheme is used to determine a proper window size beyond which no substantial increase in the local variation of pixel intensities is observed. In the second stage, based on the determined window size, a noise-suppressing scheme delivers the final binarized image by contrasting two binarized images produced by two adaptive thresholding schemes that incorporate the local mean gray and gradient values. Empirical results demonstrate that the proposed method is competitive with existing adaptive binarization methods and achieves better performance in precision, accuracy, and F-measure.
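A minimal sketch of window-based adaptive thresholding driven by local statistics (a Niblack-style mean-plus-k-times-std rule): it illustrates how a per-pixel threshold is derived from a surrounding window, but it is not the paper's two-stage parameter-free method, and the window size and k are illustrative.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_threshold_binarize(img, window=25, k=-0.2):
        img = img.astype(float)
        mean = uniform_filter(img, size=window)
        mean_sq = uniform_filter(img ** 2, size=window)
        std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
        threshold = mean + k * std                      # local threshold per pixel
        return (img > threshold).astype(np.uint8)       # 1 = background, 0 = dark text strokes

    page = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(np.uint8)
    print(local_threshold_binarize(page).mean())        # fraction of background pixels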

19.
A sensitive chaotic fragile watermarking technique for text image authentication
A fragile digital watermarking algorithm is proposed for authenticating binary text images, proving their integrity, and demonstrating tampering with their content. The concept of the least significant pixel block (LSPB) of a region is defined. The pixel values of the points in a region where no watermark is embedded are mapped to an initial value of a chaotic map, the watermark information is generated by chaotic iteration, and the watermark bits are then embedded into the center pixels of the LSPBs. Experiments show that the watermarked binary image retains good visual quality and that the algorithm can accurately detect and localize tampering with the watermarked image. It is a sensitive, completely blind watermarking scheme.
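A minimal sketch of the watermark-generation step: the pixel values of a region's non-embedding points are packed into an initial value for a logistic map, the map is iterated, and the result is thresholded to the watermark bit that would be embedded into the LSPB's center pixel; the map parameter, iteration count, and block size are illustrative.

    import numpy as np

    def region_watermark_bit(region_pixels, r=3.99, n_iter=100):
        """region_pixels: 1-D array of 0/1 values from the non-embedding points."""
        weights = 2.0 ** -(np.arange(region_pixels.size) + 1)
        x = float(np.clip((region_pixels * weights).sum(), 1e-6, 1 - 1e-6))  # chaotic initial value
        for _ in range(n_iter):                        # chaotic iteration
            x = r * x * (1.0 - x)
        return 1 if x >= 0.5 else 0                    # watermark bit for this region

    region = np.random.default_rng(0).integers(0, 2, 63)
    print(region_watermark_bit(region))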

20.
In content-based image retrieval there is a large semantic gap between the low-level visual features of an image and high-level semantic concepts. Using machine learning to learn image features and automatically build models of image classes has become an effective way to bridge it. This paper proposes a method for automatic semantic categorization of natural images with support vector machines (SVMs): feature vectors obtained by clustering block-partitioned image features are used as SVM training samples to build the semantic classifiers. Because the clustering involves the features of all blocks of the images in a class, the extracted features better reflect the characteristics of that class. Experiments show that the method is effective.
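A minimal sketch of one plausible reading of the pipeline: images are split into blocks, per-block color features are clustered with k-means, and the sorted cluster centres form a fixed-length vector used to train an SVM. Note that the paper clusters the blocks of all images in a class, whereas this simplified sketch clusters per image; feature choices and sizes are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def block_features(img, block=16):
        h, w, _ = img.shape
        return np.array([img[y:y + block, x:x + block].reshape(-1, 3).mean(axis=0)
                         for y in range(0, h - block + 1, block)
                         for x in range(0, w - block + 1, block)])   # mean color per block

    def image_vector(img, k=8):
        centres = KMeans(n_clusters=k, n_init=4, random_state=0).fit(block_features(img)).cluster_centers_
        return centres[np.lexsort(centres.T)].ravel()    # canonical order, then flatten

    rng = np.random.default_rng(0)
    X = np.array([image_vector(rng.random((64, 64, 3))) for _ in range(20)])
    y = np.repeat([0, 1], 10)
    clf = SVC(kernel='rbf').fit(X, y)
    print(clf.predict(X[:3]))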
