Similar Literature
20 similar documents found (search time: 24 ms)
1.
Doubly Sparse Single-Image Super-Resolution in the Wavelet Domain
Objective: Single-image super-resolution based on sparse representation has been studied extensively in recent years; this paper proposes a doubly sparse image super-resolution method in the wavelet domain. Method: Exploiting the sparsity of the high-frequency image in the wavelet domain and the sparsity of high-frequency patch representation coefficients under a spatially redundant dictionary, a doubly sparse super-resolution model is built to recover the detail coefficients of the high-resolution image. Using the multi-scale property of wavelets and the assumption that the low-resolution image approximates the low-frequency coefficients of the high-resolution image, the super-resolved image is then reconstructed by a two-level inverse wavelet transform from the wavelet decomposition of the low-resolution image and the estimated high-frequency coefficients. Results: Extensive experiments show that the doubly sparse method not only restores local texture and edges well but also performs well when super-resolving noisy images. Conclusion: Compared with popular sparse-representation super-resolution methods, the doubly sparse method handles noisy images better and has lower computational complexity.
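A minimal sketch of the two steps this abstract outlines: sparse coding of a low-resolution patch over a redundant dictionary to estimate high-frequency detail, and reconstruction by an inverse wavelet transform with the low-resolution image as the approximation band. The OMP solver, the db2 wavelet, the paired-dictionary setup, and all names are assumptions, and only one wavelet level is shown (the paper uses two).

```python
import numpy as np
import pywt
from sklearn.linear_model import orthogonal_mp

def recover_hf_patch(lr_patch, D_lr, D_hf, k=5):
    # Double sparsity in miniature: code the LR patch sparsely over D_lr,
    # then reuse the sparse coefficients with the high-frequency dictionary.
    alpha = orthogonal_mp(D_lr, lr_patch, n_nonzero_coefs=k)
    return D_hf @ alpha

def reconstruct(lr_image, cH, cV, cD, wavelet="db2"):
    # Treat the LR image as the approximation band of the HR image and invert
    # one wavelet level with the estimated detail coefficients.
    return pywt.idwt2((lr_image, (cH, cV, cD)), wavelet)
```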

2.
It is generally agreed that learning, either supervised or unsupervised, can provide the best possible specification of known classes and offer inference for outlier detection by a dissimilarity threshold from the nominal feature space. Novel percept detection takes a step further by investigating whether these outliers form new dense clusters in both the feature space and the image space. Defining a novel percept as a pattern group that has not been seen before in the feature space and the image space, this paper proposes a non-conventional approach to the multiple-novel-percept detection problem in robotic applications. Based on a computer vision system inspired loosely by neurobiological evidence, our approach can work in near real time on highly sparse high-dimensional feature vectors extracted from image patches while maintaining robustness to image transformations. Experiments conducted in an indoor environment and an outdoor environment demonstrate the efficacy of our method.

3.
In this paper, we propose a novel method for fast face recognition called L1/2-regularized sparse representation using hierarchical feature selection. By employing hierarchical feature selection, we can compress the scale and dimension of the global dictionary, which directly reduces the computational cost of the sparse representation that our approach is rooted in. The hierarchy consists of Gabor wavelets and an extreme learning machine auto-encoder (ELM-AE). For the Gabor wavelet part, local features are extracted at multiple scales and orientations to form a Gabor-feature-based image, which in turn improves the recognition rate. Moreover, in the presence of occluded face images, the scale of the Gabor-feature-based global dictionary can be compressed accordingly, because redundancies exist in the Gabor-feature-based occlusion dictionary. For the ELM-AE part, the dimension of the Gabor-feature-based global dictionary can be compressed because high-dimensional face images can be rapidly represented by low-dimensional features. By introducing L1/2 regularization, our approach produces a sparser and more robust representation than L1-regularized sparse representation-based classification (SRC), which also reduces the computational cost of the sparse representation. In comparison with related work such as SRC and Gabor-feature-based SRC, experimental results on a variety of face databases demonstrate the great advantage of our method in computational cost. Moreover, we achieve a comparable or even better recognition rate.
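The abstract does not spell out its L1/2 solver; as an illustration, below is the half-thresholding operator (from Xu et al.'s thresholding theory for L1/2 regularization) that iterative L1/2 schemes typically apply in place of the soft thresholding used for L1. The formula and parameter names are assumptions, not the paper's code.

```python
import numpy as np

def half_threshold(z, lam):
    # Half-thresholding operator for the L1/2 penalty: entries below the
    # threshold are zeroed; the rest are shrunk through a cosine formula.
    out = np.zeros_like(z, dtype=float)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    mask = np.abs(z) > thresh
    phi = np.arccos((lam / 8.0) * (np.abs(z[mask]) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * z[mask] * (1.0 + np.cos(2.0 * (np.pi - phi) / 3.0))
    return out
```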

4.
A number of edge-aware filters can efficiently boost the appearance of an image by detail decomposition and enhancement. However, they often fail to produce a photographic enhanced appearance owing to visible artifacts, especially noise, halos, and unnatural contrast. The essential reason is that the guidance and the constraint of high-quality appearance are not sufficient in the process of enhancement. Our idea is therefore to train a detail dictionary from many high-quality patches in order to constrain and control the entire appearance enhancement. In this paper, we propose a novel learning-based enhancement method for photographic appearance, which comprises two main stages: dictionary training and sparse reconstruction. In the training stage, we construct a training set of detail patches extracted from high-quality photos, and then train an overcomplete detail dictionary by iteratively minimizing an ℓ1-norm energy function. In the reconstruction stage, we employ the trained dictionary to reconstruct the boosted detail layer, and further formalize a gradient-guided optimization function to improve the local coherence between patches. Moreover, we propose two evaluation metrics to measure the performance of appearance enhancement. Experimental results demonstrate the effectiveness of our learning-based enhancement method.
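A hedged sketch of the training stage described above: fitting an overcomplete dictionary to detail patches by ℓ1-penalized coding. Random data stands in for patches extracted from high-quality photos, and the component count and penalty weight are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.standard_normal((10000, 64))   # stand-in for 8x8 detail patches
dico = MiniBatchDictionaryLearning(n_components=256, alpha=1.0, random_state=0)
D = dico.fit(patches).components_            # overcomplete detail dictionary (256 x 64)
```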

5.
We present an adaptive tessellation scheme for surfaces consisting of parametric patches. The resulting tessellations are topologically uniform, yet consistent and watertight across boundaries of patches with different tessellation levels. Our scheme is simple to implement, requires little memory, and is well suited for instancing, a feature available on current Graphics Processing Units that allows a substantial performance increase. We describe how the scheme can be implemented efficiently and give performance benchmarks comparing it to some other approaches.

6.
A Survey on Sparse Subspace Clustering
Sparse subspace clustering (SSC) is a data clustering framework based on spectral clustering. High-dimensional data typically lie on a union of several low-dimensional subspaces, so their representations under a suitable dictionary are sparse. SSC uses the sparse representation coefficients of high-dimensional data to construct a similarity matrix and then applies spectral clustering to obtain the subspace clustering result. Its core problem is designing representation models that reveal the true subspace structure of high-dimensional data, so that the resulting coefficients, and the similarity matrix built from them, support accurate subspace clustering. SSC has been widely studied and applied in machine learning, computer vision, image processing, and pattern recognition, yet much room for development remains. This paper reviews the models, algorithms, and applications of existing SSC methods in detail, analyzes their shortcomings, and points out directions for further research.
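For orientation, a bare-bones version of the SSC pipeline this survey covers: sparse self-representation of each point over the remaining points, a symmetrized affinity matrix, then spectral clustering. The Lasso penalty and all parameters are illustrative, not any specific paper's solver.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, n_clusters, alpha=0.01):
    # X: (dim, n_points). Code each point sparsely over the other points.
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        lasso = Lasso(alpha=alpha, fit_intercept=False)
        lasso.fit(X[:, idx], X[:, i])    # sparse self-representation: x_i ~ X_{-i} c
        C[idx, i] = lasso.coef_
    W = np.abs(C) + np.abs(C).T          # symmetric affinity from sparse coefficients
    return SpectralClustering(n_clusters, affinity="precomputed").fit_predict(W)
```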

7.
We propose a new dynamic index structure called the GC-tree (or the grid cell tree) for efficient similarity search in image databases. The GC-tree is based on a special subspace partitioning strategy which is optimized for a clustered high-dimensional image dataset. The basic ideas are threefold: 1) we adaptively partition the data space based on a density function that identifies dense and sparse regions in a data space; 2) we concentrate the partition on the dense regions, and the objects in the sparse regions of a certain partition level are treated as if they lie within a single region; and 3) we dynamically construct an index structure that corresponds to the space partition hierarchy. The resultant index structure adapts well to the strongly clustered distribution of high-dimensional image datasets. To demonstrate the practical effectiveness of the GC-tree, we experimentally compared the GC-tree with the IQ-tree, LPC-file, VA-file, and linear scan. The result of our experiments shows that the GC-tree outperforms all other methods.
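A toy sketch of the density-guided splitting idea, mirroring points 1) and 2) above: dense cells are subdivided recursively while sparse regions are kept whole. This is illustrative, not the GC-tree itself; thresholds and names are assumptions.

```python
import numpy as np

def partition(points, lo, hi, depth=0, dense=50, max_depth=8):
    # points: (n, d) array inside the box [lo, hi]. Dense cells are split along
    # their widest dimension; sparse cells are kept as a single region.
    if depth >= max_depth or len(points) < dense:
        return [(lo, hi, points)]
    axis = int(np.argmax(hi - lo))
    mid = 0.5 * (lo[axis] + hi[axis])
    left, right = points[points[:, axis] <= mid], points[points[:, axis] > mid]
    hi_l, lo_r = hi.copy(), lo.copy()
    hi_l[axis] = lo_r[axis] = mid
    return (partition(left, lo, hi_l, depth + 1, dense, max_depth)
            + partition(right, lo_r, hi, depth + 1, dense, max_depth))
```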

8.
尚敬文, 王朝坤, 辛欣, 应翔. 软件学报, 2017, 28(3): 648-662
Community structure is an important feature of complex networks, and community detection is of great value for studying network structure. Classical clustering algorithms such as k-means form a basic class of methods for community detection, but when applied to a network's high-dimensional matrix they often produce inaccurate communities. This paper proposes CoDDA, a community detection algorithm based on deep sparse autoencoders, to improve the accuracy of these classical methods when detecting communities from high-dimensional adjacency matrices. First, a hop-count-based transformation optimizes the sparse adjacency matrix; the resulting similarity matrix reflects not only the similarity between connected nodes in the network topology but also the similarity between unconnected ones. Next, an unsupervised deep sparse autoencoder extracts features from the similarity matrix, yielding a low-dimensional feature matrix with stronger representational power for the network topology than the adjacency matrix. Finally, k-means clustering of the low-dimensional feature matrix gives the community structure. Experiments show that CoDDA finds more accurate communities than six typical community detection algorithms, and parameter experiments show that the communities it finds are more accurate than those from plain k-means applied directly to the high-dimensional adjacency matrix.
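A compact sketch of the pipeline this abstract describes: similarity matrix in, sparse autoencoder features out, then k-means. The single-layer autoencoder, the L1 activation penalty, and all hyperparameters are illustrative assumptions standing in for the paper's deep sparse autoencoder.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def codda_sketch(S, n_communities, hidden=128, epochs=200, sparsity_weight=1e-3):
    # S: (n, n) hop-based similarity matrix; one row per node.
    S = torch.as_tensor(S, dtype=torch.float32)
    n = S.shape[1]
    enc, dec = nn.Linear(n, hidden), nn.Linear(hidden, n)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(epochs):
        h = torch.sigmoid(enc(S))
        # Reconstruction loss plus a sparsity penalty on the activations.
        loss = ((dec(h) - S) ** 2).mean() + sparsity_weight * h.abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    feats = torch.sigmoid(enc(S)).detach().numpy()     # low-dimensional features
    return KMeans(n_clusters=n_communities, n_init=10).fit_predict(feats)
```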

9.
A Similarity Search Algorithm Based on the Generalized Hypersurface Tree
张兆功, 李建中. 软件学报, 2002, 13(10): 1969-1976
Similarity search, one of the main areas of data mining, retrieves similar data from a database and discovers similarities among data, with applications to image databases, spatial databases, and time-series analysis. In Euclidean space (a special metric space), R-tree-based similarity search is efficient at low dimensionality, but as the dimensionality increases it degenerates into a linear scan. This phenomenon, called the dimensionality curse, is mainly caused by data duplication. When the data volume is large and the dimensionality high, distance computation and I/O become very expensive. This paper proposes a new space partitioning method and index structure for metric spaces, the rgh-tree, which partitions and distributes data using the distances between database objects and a small number of fixed reference objects, producing a balanced tree with no data duplication across nodes. A corresponding similarity search algorithm on the rgh-tree achieves low I/O cost and few distance computations, with an average complexity of approximately O(n^0.58), and resolves several problems of existing algorithms.
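To make the reference-object idea concrete, here is a generic sketch of pivot-based pruning in metric-space search: precomputed distances to a few fixed reference points give triangle-inequality lower bounds that skip most exact distance computations. This illustrates the general principle behind such indexes, not the rgh-tree's actual structure.

```python
import numpy as np

def range_search(query, data, pivots, pivot_dists, dist, r):
    # pivot_dists[i, j] = dist(data[i], pivots[j]), precomputed at build time.
    q_to_p = np.array([dist(query, p) for p in pivots])
    results = []
    for i, x in enumerate(data):
        # Triangle inequality: |d(q,p) - d(x,p)| <= d(q,x); prune x whenever
        # the best lower bound already exceeds the search radius.
        if np.max(np.abs(q_to_p - pivot_dists[i])) > r:
            continue
        if dist(query, x) <= r:
            results.append(i)
    return results
```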

10.
Web Image Annotation Based on Feature Selection with Enhanced Sparsity
史彩娟, 阮秋琦. 软件学报, 2015, 26(7): 1800-1811
Facing the explosive growth of web images, web image annotation has become a hot research topic in recent years, and sparse feature selection plays an important role in improving the efficiency and performance of annotation. This paper proposes a feature selection algorithm with enhanced sparsity, SFSLS (semi-supervised sparse feature selection based on the l2,1/2-matrix norm with shared subspace learning), for web image annotation. SFSLS applies the l2,1/2 matrix norm to select the sparsest and most discriminative features, and accounts for correlations among different features through shared subspace learning. In addition, graph-Laplacian-based semi-supervised learning allows SFSLS to exploit both labeled and unlabeled data. An effective iterative algorithm is designed to optimize the objective function. Comparisons with other sparse feature selection algorithms on two large-scale web image databases show that SFSLS is better suited to annotating large-scale web images.
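SFSLS's full objective is not reproduced in the abstract; the fragment below only illustrates the reweighting step that iterative solvers commonly use for an l2,p row-sparsity penalty (p = 1/2 here) and a final ranking of features by row norms. Everything shown is an assumption for illustration.

```python
import numpy as np

def l2p_reweight(W, p=0.5, eps=1e-8):
    # Diagonal matrix used by iterative solvers for min ||W||_{2,p}: rows with
    # small l2 norm receive large weights and are pushed toward zero.
    row_norms = np.linalg.norm(W, axis=1) + eps
    return np.diag(p / (2.0 * row_norms ** (2.0 - p)))

def select_features(W, k):
    # Keep the k features whose rows of W carry the largest l2 norm.
    return np.argsort(np.linalg.norm(W, axis=1))[::-1][:k]
```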

11.
Traditional methods for detecting high-dimensional sparse data suffer from perturbations of high-dimensional features and large data errors, so a combined recommendation algorithm for high-dimensional sparse data based on deep learning is proposed. Phase-space reconstruction is used to reconstruct the features of high-dimensional sparse data; based on the reconstruction, nonlinear statistical sequence analysis performs regression analysis and point-cloud restructuring, from which the combined feature quantities of the data are extracted. From these, the average mutual information features are extracted, and association rule mining performs principal component analysis to mine the similarity attribute categories of the data. Finally, deep learning performs adaptive optimization in the combined recommendation process, realizing combined recommendation of high-dimensional sparse data. Simulation results show that the algorithm achieves good attribute classification discrimination and strong feature resolution, improving the detection and recognition of the data.

12.
Zhang Leyuan, Li Yangding, Zhang Jilian, Li Pengqing, Li Jiaye. Multimedia Tools and Applications, 2019, 78(23): 33319-33337

High-dimensional data often exhibit non-linearity, low rank, and feature redundancy, which cause great trouble for further research. Therefore, a low-rank unsupervised feature selection algorithm based on a kernel function is proposed. First, each feature is projected into a high-dimensional kernel space by the kernel function to solve the problem of linear inseparability in the low-dimensional space. At the same time, a self-expression form is introduced into the deviation term, and the coefficient matrix is constrained to be low-rank and sparse. Finally, a sparse regularization factor on the coefficient vector of the kernel matrix implements the feature selection. In this algorithm, the kernel matrix handles linear inseparability, the low-rank constraint captures the global information of the data, and the self-representation form determines the importance of features. Experiments show that, compared with other algorithms, classification after feature selection with this algorithm achieves good results.
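A loose sketch of the kernelized self-expression idea: map samples through a kernel, fit X ≈ KC with a sparse coefficient matrix, and score features by their coefficient norms. The RBF kernel, the Lasso stand-in for the paper's low-rank-plus-sparse solver, and all parameters are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.linear_model import Lasso

def feature_scores(X, gamma=0.1, alpha=0.01):
    # X: (n_samples, n_features). Self-expression: reconstruct X from its
    # own kernel matrix with sparse coefficients.
    K = rbf_kernel(X, gamma=gamma)                   # (n, n) kernel matrix
    lasso = Lasso(alpha=alpha, fit_intercept=False)
    lasso.fit(K, X)                                  # X ~ K C, C sparse
    C = lasso.coef_.T                                # (n, n_features)
    return np.linalg.norm(C, axis=0)                 # per-feature importance
```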


13.
We present a novel framework for efficiently computing the indirect illumination in diffuse and moderately glossy scenes using density estimation techniques. Many existing global illumination approaches either quickly compute an overly approximate solution or perform an orders of magnitude slower computation to obtain high-quality results for the indirect illumination. The proposed method improves photon density estimation and leads to significantly better visual quality in particular for complex geometry, while only slightly increasing the computation time. We perform direct splatting of photon rays, which allows us to use simpler search data structures. Since our density estimation is carried out in ray space rather than on surfaces, as in the commonly used photon mapping algorithm, the results are more robust against geometrically incurred sources of bias. This holds also in combination with final gathering where photon mapping often overestimates the illumination near concave geometric features. In addition, we show that our photon splatting technique can be extended to handle moderately glossy surfaces and can be combined with traditional irradiance caching for sparse sampling and filtering in image space.

14.
谷瑞军, 须文波. 计算机应用, 2006, 26(9): 2063-2064
Color image quantization refers to representing an image that contains N colors with K (K < N) colors…

15.
In complex environments with strong interference, effective feature selection is crucial for the interpretability of target tracking models. To address this problem, this paper builds a generative two-stage sparse representation (TSSR) model on the feature space based on reproducing kernel Hilbert space (RKHS) theory, which describes the nonlinear relationship between image samples and the dictionary and avoids introducing large numbers of trivial templates into the dictionary. In the first stage, the relationship between image samples and the dictionary is established in the original low-dimensional space; a batch least-squares algorithm yields initial values for the sparse representation coefficients, and the distribution of the initial tracking position is determined from the observation model. In the second stage, the kernel method maps the original low-dimensional space to a high-dimensional feature space, and a kernel-based accelerated proximal gradient (KAPG) algorithm is proposed to obtain the kernel sparse representation of the dictionary element coefficients, finally determining the tracking target. Experimental results demonstrate the effectiveness of the proposed TSSR method under viewpoint changes and partial occlusion.

16.
Qiao Chen, Yang Lan, Shi Yan, Fang Hanfeng, Kang Yanmei. Applied Intelligence, 2022, 52(1): 237-253

Sparsity is crucial for deep neural networks: it can improve their learning ability, especially for application to high-dimensional data with small sample sizes. Commonly used regularization terms for keeping deep neural networks sparse are based on the L1-norm or L2-norm; however, these are not the most reasonable substitutes for the L0-norm. In this paper, based on the fact that minimizing a log-sum function is one effective approximation to minimizing the L0-norm, a sparse penalty term on the connection weights using the log-sum function is introduced. By embedding the corresponding iteratively reweighted-L1 minimization algorithm with k-step contrastive divergence, the connections of deep belief networks can be updated in a sparsely self-adaptive way. Experiments on two kinds of biomedical datasets, both typical small-sample-size datasets with a large number of variables, namely brain functional magnetic resonance imaging data and single nucleotide polymorphism data, show that the proposed deep belief networks with self-adaptive sparsity can learn layer-wise sparse features effectively, and the results demonstrate better performance, including identification accuracy and sparsity capability, than several typical learning machines.
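As a concrete illustration of the log-sum surrogate mentioned above, assumed here to follow the standard reweighted-L1 scheme (the paper's exact update is not reproduced):

```python
import numpy as np

def logsum_penalty(w, eps=1e-3):
    # sum log(1 + |w|/eps): a tighter surrogate for ||w||_0 than ||w||_1.
    return np.log1p(np.abs(w) / eps).sum()

def reweighted_l1_weights(w, eps=1e-3):
    # Per-coordinate weights for the next reweighted-L1 step: small entries
    # get large weights, pushing them toward exact zero.
    return 1.0 / (np.abs(w) + eps)
```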


17.
Automatic decomposition of intrinsic images, especially for complex real-world images, is a challenging under-constrained problem. Thus, we propose a new algorithm that generates and combines multi-scale properties of chromaticity differences and intensity contrast. The key observation is that the estimation of image reflectance, which is neither a pixel-based nor a region-based property, can be improved by using multi-scale measurements of image content. The new algorithm iteratively coarsens a graph reflecting the reflectance similarity between neighbouring pixels. Then multi-scale reflectance properties are aggregated so that the graph reflects the reflectance property at different scales. This is followed by an L0 sparse regularization on the whole reflectance image, which enforces the variation in reflectance images to be high-frequency and sparse. We formulate this problem through energy minimization, which can be solved efficiently within a few iterations. The effectiveness of the new algorithm is tested with the Massachusetts Institute of Technology (MIT) dataset, the Intrinsic Images in the Wild (IIW) dataset, and various natural images.

18.
Kernel functions are typically viewed as providing an implicit mapping of points into a high-dimensional space, with the ability to gain much of the power of that space without incurring a high cost if the result is linearly separable by a large margin γ. However, the Johnson-Lindenstrauss lemma suggests that in the presence of a large margin, a kernel function can also be viewed as a mapping to a low-dimensional space, one of dimension only $\tilde{O}(1/\gamma^2)$. In this paper, we explore the question of whether one can efficiently produce such low-dimensional mappings, using only black-box access to a kernel function. That is, given just a program that computes K(x,y) on inputs x,y of our choosing, can we efficiently construct an explicit (small) set of features that effectively capture the power of the implicit high-dimensional space? We answer this question in the affirmative if our method is also allowed black-box access to the underlying data distribution (i.e., unlabeled examples). We also give a lower bound, showing that if we do not have access to the distribution, then this is not possible for an arbitrary black-box kernel function; we leave as an open problem, however, whether this can be done for standard kernel functions such as the polynomial kernel. Our positive result can be viewed as saying that designing a good kernel function is much like designing a good feature space. Given a kernel, by running it in a black-box manner on random unlabeled examples, we can efficiently generate an explicit set of features, such that if the data was linearly separable with margin γ under the kernel, then it is approximately separable in this new feature space. Editor: Philip Long. A preliminary version of this paper appeared in Proceedings of the 15th International Conference on Algorithmic Learning Theory, Springer LNAI 3244, pp. 194-205, 2004.
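A sketch of the construction as the abstract describes it: sample unlabeled examples as landmarks and use black-box kernel evaluations against them as explicit features. The RBF kernel merely stands in for the black box, and all names are illustrative.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    # Stand-in for the black-box kernel program K(x, y).
    return np.exp(-gamma * np.sum((x - y) ** 2))

def empirical_kernel_features(X_unlabeled, rng, n_features, kernel=rbf):
    # Draw landmark points from the (unlabeled) data distribution.
    idx = rng.choice(len(X_unlabeled), size=n_features, replace=False)
    landmarks = X_unlabeled[idx]
    def phi(x):
        # Explicit feature vector: kernel similarities to the landmarks.
        return np.array([kernel(x, z) for z in landmarks])
    return phi
```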

19.
Feature point extraction based on SURF (Speeded-Up Robust Features) is currently a popular approach to image registration. Building on SURF, this paper proposes an improved block-based strategy: first, watershed segmentation determines the number of blocks and the image is partitioned accordingly; a fixed number of feature points is then extracted from each block, so that feature points are extracted uniformly; matching feature point pairs are found with a sparse feature tree; finally, the RANSAC algorithm removes mismatched pairs while the transformation between the reference image and the image to be registered is computed. Experiments show that the method solves the automatic registration of remote sensing images efficiently and quickly.
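A minimal registration sketch in the spirit of this abstract, without the block strategy or the sparse feature tree: SURF keypoints, brute-force matching, and RANSAC to reject mismatches while estimating the transform. Note that cv2.xfeatures2d.SURF_create requires opencv-contrib-python built with the nonfree modules.

```python
import cv2
import numpy as np

def register(ref, moving):
    # Detect SURF keypoints and descriptors in both grayscale images.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    k1, d1 = surf.detectAndCompute(ref, None)
    k2, d2 = surf.detectAndCompute(moving, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards mismatched pairs while estimating the transform.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```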

20.
Objective: Objective evaluation, an important research area in image fusion, is a powerful tool for assessing the performance of fusion algorithms. Dozens of metrics of different types already exist, yet application domains, including visible-infrared image fusion, still lack unified criteria for choosing among them. To ease the comparison of fusion algorithms, a general analysis method for objective evaluation metrics is proposed and applied to visible-infrared image fusion. Method: The objective metrics in a visible-infrared benchmark dataset are divided into two classes: metrics based on the fused image alone, and metrics based on the source images and the fused image. Kendall correlation coefficients measure the correlation between metrics, which are then clustered into groups; Borda counting aggregates the algorithms into an overall ranking, and the correlation between single-metric rankings and the overall ranking yields a set of highly consistent metrics; the coefficient of variation measures how strongly each metric's mean fluctuates across algorithms, selecting metrics that fully reflect the differences between algorithms; combining the correlation, consistency, and dispersion analyses gives a representative set of recommended metrics. Results: On two groups of sources, 13 color visible-infrared image pairs and 8 grayscale visible-infrared image pairs, objective evaluation data of different fusion algorithms were analyzed, yielding a recommended metric set for visible-infrared image fusion (standard deviation and edge preservation) as an important reference for evaluating fusion algorithms. Compared with existing approaches, the experiments cover 20 fusion algorithms and 13 objective metrics and do not depend on subjective evaluation. Conclusion…
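A hedged sketch of the two statistical steps named in the method: pairwise Kendall correlation between metrics, and a Borda count over per-metric rankings. The data layout (metrics by algorithms, higher scores better) is an assumption.

```python
import numpy as np
from scipy.stats import kendalltau

def metric_correlations(scores):
    # scores: (n_metrics, n_algorithms); pairwise Kendall tau between metrics.
    m = scores.shape[0]
    tau = np.eye(m)
    for i in range(m):
        for j in range(i + 1, m):
            t, _ = kendalltau(scores[i], scores[j])
            tau[i, j] = tau[j, i] = t
    return tau

def borda_ranking(scores):
    # Each metric ranks the algorithms; an algorithm earns points equal to the
    # number of competitors it beats, summed over all metrics.
    points = np.argsort(np.argsort(scores, axis=1), axis=1).sum(axis=0)
    return np.argsort(points)[::-1]      # best algorithm first
```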
