Similar Documents
1.
Low-rank matrix approximation is used in many applications of computer vision, and is frequently implemented by singular value decomposition in the L2-norm sense. To resist outliers and handle matrices with missing entries, a few methods have been proposed for low-rank matrix approximation in the L1 norm; however, these methods suffer from poor computational efficiency or limited optimization capability. In this paper we therefore propose a solution that uses a dynamic system to perform low-rank approximation in the L1-norm sense. Two low-rank matrices are distilled from the state vector of the system, and their product approximates the given measurement matrix with missing entries in the L1 norm. As the system evolves, the approximation accuracy improves step by step. The system involves a parameter whose influence on the computational time and on the final optimized low-rank matrices is theoretically studied and experimentally evaluated. The efficiency and approximation accuracy of the proposed algorithm are demonstrated by a large number of numerical tests on synthetic data and by two real datasets. Compared with state-of-the-art algorithms, the newly proposed one is competitive.
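For concreteness, the following is a minimal baseline for the task this paper addresses (L1-norm low-rank factorization with missing entries), implemented with iteratively reweighted least squares (IRLS) rather than the paper's dynamic system; the function name and the IRLS choice are our own illustration.

```python
import numpy as np

def l1_lowrank(M, W, r, iters=100, eps=1e-8):
    """L1-norm rank-r factorization of M restricted to the observed
    entries (W is a 0/1 mask), via IRLS: the L1 residual is approximated
    by weighted L2 with weights ~ 1/|residual|. Illustrative baseline,
    not the dynamic-system method of the paper.
    """
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    for _ in range(iters):
        R = W * (M - U @ V)
        S = W / np.maximum(np.abs(R), eps)      # IRLS weights, 0 where unobserved
        for i in range(m):                      # weighted LS update of each row of U
            w = S[i]
            A = (V * w) @ V.T + eps * np.eye(r)
            U[i] = np.linalg.solve(A, (V * w) @ M[i])
        for j in range(n):                      # weighted LS update of each column of V
            w = S[:, j]
            A = (U.T * w) @ U + eps * np.eye(r)
            V[:, j] = np.linalg.solve(A, (U.T * w) @ M[:, j])
    return U, V
```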

2.
This paper considers the problem of factorizing a matrix with missing components into a product of two smaller matrices, also known as principal component analysis with missing data (PCAMD). The Wiberg algorithm is a numerical algorithm developed for this problem in the applied-mathematics community. We argue that the algorithm has not been correctly understood in the computer vision community: although there are many studies in our community, almost all of which cite the Wiberg study, as far as we know there is no literature that investigates the performance of the Wiberg algorithm or presents its details. In this paper, we derive the algorithm, point out an implementation issue that needs to be carefully considered, and then examine its performance. The experimental results demonstrate that the Wiberg algorithm performs considerably well, contradicting the conventional view in our community that minimization-based algorithms tend to fail to converge to a global minimum relatively frequently. Even starting with random initial values, the Wiberg algorithm converges to a correct solution in most cases, even when the matrix has many missing components and the data are contaminated with very strong noise. We conclude that the Wiberg algorithm can also serve as a standard algorithm for computer vision problems.
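As a reference point for what Wiberg-style methods improve on, here is a plain alternating-least-squares baseline for PCAMD; this is our own illustrative sketch, whereas the Wiberg algorithm instead eliminates one factor in closed form and applies Gauss-Newton to the remaining one.

```python
import numpy as np

def als_pcamd(M, W, r, iters=200, reg=1e-9):
    """Alternating least squares for matrix factorization with missing
    data: W is a 0/1 observation mask, and M ~ U @ V on observed entries.
    Illustrative baseline, not the Wiberg algorithm itself.
    """
    m, n = M.shape
    U = np.random.default_rng(1).standard_normal((m, r))
    V = np.empty((r, n))
    for _ in range(iters):
        for j in range(n):                      # closed-form V given U
            w = W[:, j]
            A = (U.T * w) @ U + reg * np.eye(r)
            V[:, j] = np.linalg.solve(A, (U.T * w) @ M[:, j])
        for i in range(m):                      # closed-form U given V
            w = W[i]
            A = (V * w) @ V.T + reg * np.eye(r)
            U[i] = np.linalg.solve(A, (V * w) @ M[i])
    return U, V
```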

3.
Association rule mining algorithms operate on a data matrix (e.g., customers × products) to derive association rules [AIS93b, SA96]. We propose a new paradigm, Ratio Rules, which are quantifiable in that we can measure the “goodness” of a set of discovered rules. We propose the “guessing error” as this measure of goodness: the root-mean-square error of the reconstructed values of the cells of the given matrix, when we pretend that they are unknown. Another contribution is a novel method to guess missing/hidden values from the derived Ratio Rules. For example, if somebody bought $10 of milk and $3 of bread, our rules can “guess” the amount spent on butter. Thus, unlike association rules, Ratio Rules can perform a variety of important tasks such as forecasting, answering “what-if” scenarios, detecting outliers, and visualizing the data. Moreover, we show that Ratio Rules can be computed in a single pass over the data set with small memory requirements (a few small matrices), in contrast to association rule mining methods, which require multiple passes and/or large memory. Experiments on several real data sets (e.g., basketball and baseball statistics, biological data) demonstrate that the proposed method (a) leads to rules that make sense, (b) can find large itemsets in binary matrices even in the presence of noise, and (c) consistently achieves a “guessing error” up to 5 times lower than straightforward column averages. Received: March 15, 1999 / Accepted: November 1, 1999
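The milk/bread/butter example can be made concrete: treating the top right singular vectors of the (centered) data matrix as the Ratio Rules, a missing entry is guessed by least-squares projection of the known entries onto those rules. A toy sketch under that reading; all names are hypothetical.

```python
import numpy as np

def ratio_rule_guess(X, x_partial, known, r=1):
    """Guess the missing entries of a partial record from Ratio-Rule-style
    principal directions. X is a complete (rows x cols) training matrix,
    known is a boolean mask over the columns of x_partial. Illustrative
    toy, not the paper's single-pass implementation.
    """
    mu = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    V = Vt[:r].T                                   # (cols, r) "ratio rules"
    # least-squares projection using only the known coordinates
    c, *_ = np.linalg.lstsq(V[known], x_partial[known] - mu[known], rcond=None)
    guess = mu + V @ c
    return np.where(known, x_partial, guess)

# e.g. milk = $10 and bread = $3 known; the butter column is guessed
```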

4.
In this paper, we present a novel algorithm that combines the expressive power of Geometric Algebra with the robustness of Tensor Voting to find the correspondences between two sets of 2D points related by an underlying rigid transformation. Unlike other popular point-registration algorithms (such as Iterated Closest Points), our algorithm requires no initialization, works equally well with small and large transformations between the data sets, performs even in the presence of large amounts of outliers (90% and more), and has less chance of being trapped in “local minima”. Furthermore, we show how this algorithm can be easily extended to account for multiple overlapping motions and certain non-rigid transformations.

5.
M. Bebendorf, Y. Chen. Computing, 2007, 81(4): 239-257
Summary: The numerical solution of nonlinear problems is usually tied to Newton's method. Due to its computational cost, variants (so-called inexact and quasi-Newton methods) have been developed in which the inverse of the Jacobian is replaced by an approximation. In this article we present a new approach based on Broyden updates, which does not require storing the update history since the updates are added explicitly to the matrix. In addition to updating the inverse, we introduce a method that constructs updates of the LU decomposition. To this end, we present an algorithm for the efficient multiplication of hierarchical and semi-separable matrices. Since an approximate LU decomposition of finite element stiffness matrices can be computed efficiently in the set of hierarchical matrices, the complexity of the proposed method scales almost linearly. Numerical examples demonstrate the effectiveness of this new approach. This work was supported by the DFG priority program SPP 1146 “Modellierung inkrementeller Umformverfahren”.
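A minimal sketch of the core idea, Broyden's rank-one update applied directly to the approximate inverse Jacobian so that no update history needs to be stored, without the article's hierarchical-matrix or LU machinery; the function names are ours.

```python
import numpy as np

def broyden_inverse_solve(F, x0, H0, tol=1e-10, max_iter=50):
    """Quasi-Newton iteration for F(x) = 0 with Broyden's "good" update
    in Sherman-Morrison form: H approximates the inverse Jacobian and is
    corrected in place by a rank-one term, so no update history is kept.
    """
    x, H = x0.copy(), H0.copy()
    fx = F(x)
    for _ in range(max_iter):
        dx = -H @ fx                     # quasi-Newton step
        x_new = x + dx
        f_new = F(x_new)
        df = f_new - fx
        Hdf = H @ df
        # rank-one correction enforcing the secant condition H df = dx
        H += np.outer(dx - Hdf, dx @ H) / (dx @ Hdf)
        x, fx = x_new, f_new
        if np.linalg.norm(fx) < tol:
            break
    return x
```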

6.
Large collections of images can be indexed by their projections on a few “primary” images. The optimal primary images are the eigenvectors of a large covariance matrix. We address the problem of computing primary images when access to the images is expensive, as is the case when the images cannot be kept locally but must be accessed through slow communication such as the Internet, or are stored in compressed form. We propose a distributed algorithm that computes optimal approximations to the eigenvectors (known as Ritz vectors) in one pass through the image set; when iterated, the algorithm can recover the exact eigenvectors. The widely used SVD technique for computing the primary images of a small image set is a special case of the proposed algorithm. In applications to image libraries and learning, it is necessary to compute different primary images for several sub-categories of the image set. The proposed algorithm can compute these additional primary images “offline”, without the image data; similar computation by other algorithms is impractical even when access to the images is inexpensive.
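The one-pass computation can be sketched as follows: given a trial orthonormal basis Q, a single pass through the images accumulates the projected second-moment matrix, from which Ritz approximations of the covariance eigenvectors are obtained. This is a simplified single-node illustration, not the paper's distributed algorithm.

```python
import numpy as np

def ritz_primary_images(image_stream, Q):
    """One-pass Ritz approximation of the covariance eigenvectors
    ("primary images") within the span of Q (d x k, orthonormal columns).
    Each image (a d-vector) is seen exactly once; only k-dimensional
    projections are stored.
    """
    k = Q.shape[1]
    G = np.zeros((k, k))
    mean_proj = np.zeros(k)
    n = 0
    for x in image_stream:                  # single pass over the images
        y = Q.T @ x                         # cheap k-dim projection
        G += np.outer(y, y)
        mean_proj += y
        n += 1
    mean_proj /= n
    G = G / n - np.outer(mean_proj, mean_proj)   # projected covariance Q^T C Q
    evals, W = np.linalg.eigh(G)
    order = np.argsort(evals)[::-1]
    return Q @ W[:, order], evals[order]         # Ritz vectors and values
```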

7.
We address the problem of finding the correspondences between two point sets in 3D undergoing a rigid transformation; from these correspondences, the motion between the two sets can be computed to perform registration. Our approach is based on an analysis of the rigid-motion equations as expressed in the Geometric Algebra framework. This analysis shows that the problem can be cast as finding a certain 3D plane in a different space that satisfies certain geometric constraints. To find this plane robustly, we use the Tensor Voting methodology. Unlike other common point-registration algorithms (such as Iterated Closest Points), ours requires no initialization, works equally well with small and large transformations, cannot be trapped in “local minima”, and works even in the presence of large amounts of outliers. We also show that this algorithm is easily extended to account for multiple motions and certain non-rigid or elastic transformations.

8.
The saturation algorithm for symbolic state-space exploration
We present various algorithms for generating the state space of an asynchronous system, based on multiway decision diagrams to encode sets and on Kronecker operators over boolean matrices to encode the next-state function. The Kronecker encoding allows us to recognize and exploit the “locality of effect” that events may have on state variables. In turn, locality information suggests better iteration strategies aimed at minimizing peak memory consumption. In particular, we focus on the saturation strategy, which is completely different from traditional breadth-first symbolic approaches, and extend its applicability to models where the possible values of the state variables are not known a priori. The resulting algorithm merges “on-the-fly” explicit state-space generation of each submodel with symbolic state-space generation of the overall model. Each algorithm we present is implemented in our tool SmArT, which allows us to run fair and detailed comparisons between them on a suite of representative models. Saturation, in particular, is shown to be many orders of magnitude more efficient in memory and time than traditional methods.

9.
Objective: Low-rank matrix recovery can extract aligned and linearly correlated low-rank images from data matrices contaminated by sparse noise. Exploiting this property, we propose a new multi-exposure high-dynamic-range (HDR) image fusion method based on low-rank matrix recovery theory, to improve the noise resistance and de-ghosting performance of HDR fusion. Method: Taking the partial sum of singular values (PSSV) as the objective function, a general low-rank model is constructed for fusing a multi-exposure low-dynamic-range (LDR) image sequence into an HDR image. The low-rank matrix of the input LDR sequence is solved with the exact augmented Lagrange multiplier method, with the solver optimized via the alternating direction method of multipliers; adaptive penalty factors are set for different singular values so that the optimal solution concentrates on the space of the largest singular values, yielding the aligned, noise-free full illumination information of the scene, i.e., the HDR image. Results: The proposed solver converges well, outperforms robust principal component analysis (RPCA) and PSSV in noise resistance, and remains applicable when few multi-exposure LDR images are available. Fusion experiments on the classic Memorial Church and Arch multi-exposure LDR sequences show that the method suppresses noise and ghosting markedly and preserves rich image detail; its peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) under perceptually uniform (PU) mapping exceed those of the compared methods. On the noise-free Memorial Church sequence, RPCA attains a PSNR/SSIM of 28.117 dB/0.935, PSSV 30.557 dB/0.959, and the proposed method 32.550 dB/0.968; with uniform noise added to the sequence, RPCA gives 28.115 dB/0.935, PSSV 30.579 dB/0.959, and the proposed method 32.562 dB/0.967. Conclusion: By combining multi-exposure HDR fusion with low-rank optimization, the method obtains HDR images with low reconstruction error from little data, effectively removes ghosting and noise in dynamic scenes, improves fusion quality, and is more robust, making it suitable for applications that must record the true illumination variation of a scene.
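The PSSV objective leads to a proximal step in which only the singular values beyond the N-th are soft-thresholded. A minimal numpy sketch of that inner step, with N the preserved rank and tau the threshold derived from the ALM penalty; the notation is ours, and the full fusion pipeline is omitted.

```python
import numpy as np

def partial_svt(X, tau, N):
    """Proximal step for the partial sum of singular values (PSSV):
    the largest N singular values are kept intact and the remaining
    ones are soft-thresholded by tau.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.concatenate([s[:N], np.maximum(s[N:] - tau, 0.0)])
    return (U * s_thr) @ Vt
```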

10.
王海鹏, 降爱莲, 李鹏翔. Journal of Computer Applications, 2020, 40(11): 3133-3138
To reduce the time complexity of algorithms for the robust principal component analysis (RPCA) problem, a Newton-Soft Threshold Iteration (NSTI) algorithm is proposed. First, the NSTI model is built from the sum of the Frobenius norm of the low-rank matrix and the l1-norm of the sparse matrix. Second, two different optimization methods are applied to the two parts of the model: Newton's method quickly computes the low-rank matrix, while soft-threshold iteration quickly computes the sparse matrix; alternating the two yields the decomposition of the original data into low-rank and sparse parts. Finally, the low-rank features of the original data are obtained. With a data size of 5 000×5 000 and a low-rank matrix of rank 20, NSTI improves time efficiency by 24.6% and 45.5% over the gradient descent (GD) and low-rank matrix fitting (LMaFit) algorithms, respectively. For foreground-background separation of a 180-frame video, NSTI takes 3.63 s, which is 78.7% and 82.1% faster than GD and LMaFit. In image denoising experiments, NSTI takes 0.244 s with a residual of 0.381 3 between the denoised and original images, improving time efficiency and accuracy by 64.3% and 45.3% over GD and LMaFit. The experimental results show that NSTI effectively solves the RPCA problem and improves its time efficiency.
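A minimal sketch of the alternation described above, with a soft-threshold step for the sparse part and a truncated SVD standing in for NSTI's Newton step on the low-rank part; this simplification is ours, not the paper's algorithm.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise soft-thresholding, the proximal map of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_alternating(D, rank, lam, iters=100):
    """Alternate a low-rank step for L and a soft-threshold step for S
    so that D ~ L + S. A truncated SVD stands in for the Newton step
    of NSTI; illustrative only.
    """
    S = np.zeros_like(D)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # low-rank update
        S = soft_threshold(D - L, lam)               # sparse update
    return L, S
```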

11.
The current work implements a robust multimedia application for watermarking digital images, based on an innovative spread-spectrum algorithm for watermark embedding and on a content-based image retrieval technique for watermark detection. Existing highly robust watermarking algorithms apply “detectable watermarks”, for which a detection mechanism checks whether the watermark exists or not (a Boolean decision) based on a watermarking key. The problem is that detecting a watermark in a digital image library containing thousands of images requires the detection algorithm to try every key on every image, which is inefficient for very large image databases. “Readable” watermarks, on the other hand, may prove weaker but are easier to detect, as only the detection mechanism is required. The proposed watermarking algorithm combines the advantages of both “detectable” and “readable” watermarks. The result is a fast and robust multimedia application that can cast readable multibit watermarks into digital images. The watermarking application is capable of hiding 2^14 different keys in digital images and casting multiple zero-bit watermarks onto the same coefficient area while maintaining a sufficient level of robustness.
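For illustration, a textbook additive spread-spectrum embed/detect pair on transform coefficients: a key-seeded pseudo-random sequence is added, and detection correlates the coefficients against the regenerated sequence. This generic sketch is not the paper's multibit, retrieval-based scheme.

```python
import numpy as np

def embed_watermark(coeffs, key, alpha=0.5):
    """Additive spread-spectrum embedding: add a key-seeded
    pseudo-random +/-1 sequence, scaled by alpha, to the coefficients."""
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=coeffs.shape)
    return coeffs + alpha * pn

def detect_watermark(coeffs, key, alpha=0.5):
    """Linear-correlation detector: mean(coeffs * pn) is close to alpha
    when the watermark is present and close to 0 otherwise."""
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=coeffs.shape)
    corr = float(np.mean(coeffs * pn))
    return corr > alpha / 2, corr
```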

12.
We present a new probabilistic algorithm to compute the Smith normal form of a sparse integer matrix A. The algorithm treats A as a “black box”: A is used only to compute matrix-vector products, and we do not access individual entries of A directly. The algorithm requires about [formula omitted] black-box evaluations for word-sized primes p and [formula omitted], plus additional bit operations. For sparse matrices this represents a substantial improvement over previously known algorithms. The new algorithm suffers from no “fill-in” or intermediate value explosion, and uses very little additional space. We also present an asymptotically fast algorithm for dense matrices which requires about [formula omitted] bit operations, where O(MM(m)) operations suffice to multiply two m × m matrices over a field. Both algorithms are probabilistic of the Monte Carlo type: on any input they return the correct answer with a controllable, exponentially small probability of error. Received: March 9, 2000.

13.
Multi-view data are widely used in the real world, since different viewpoints and sensors help represent data better. However, data from different views differ considerably, and when the multi-view data are incomplete, training may work poorly or even fail. To address this problem, this paper proposes an incomplete multi-view subspace learning algorithm based on dual low-rank decomposition. The algorithm handles incompleteness in two ways: within a dual low-rank decomposition subspace framework, latent factors are introduced to mine the information missing from the multi-view data; and better robustness is obtained from pre-learned low-dimensional features of the multi-view data, which guide the dual low-rank decomposition in a supervised manner. Experimental results show that the proposed algorithm has clear advantages over previous multi-view subspace learning algorithms and achieves good classification performance even on incomplete multi-view data.

14.
Fast joins using join indices
Two new algorithms, “Jive join” and “Slam join”, are proposed for computing the join of two relations using a join index. The algorithms are duals: Jive join range-partitions input-relation tuple ids and then processes each partition, while Slam join forms ordered runs of input-relation tuple ids and then merges the results. Both algorithms make a single sequential pass through each input relation, in addition to one pass through the join index and two passes through a temporary file whose size is half that of the join index. Both require only that the number of blocks in main memory be of the order of the square root of the number of blocks in the smaller relation. By storing intermediate and final join results in a vertically partitioned fashion, our algorithms manipulate less data in memory at a given time than other algorithms. The algorithms are resistant to data skew, adaptive to memory fluctuations, and can incorporate selection conditions. Using a detailed cost model, the algorithms are analyzed and compared with competing algorithms. For large input relations, our algorithms perform significantly better than Valduriez's algorithm, the TID join algorithm, and hash join algorithms. An experimental study is also conducted to validate the analytical results and to demonstrate the performance characteristics of each algorithm in practice. Received July 21, 1997 / Accepted June 8, 1998
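A toy in-memory illustration of the partitioning idea behind Jive join: the join index is range-partitioned on the inner relation's tuple ids, so each partition touches a contiguous id range of S. Blocking, temporary files, and vertical partitioning are all omitted, and the names are ours.

```python
from collections import defaultdict

def jive_join(R, S, join_index, num_partitions):
    """Toy Jive join. R and S are lists indexed by tuple id; join_index
    is a list of (r_tid, s_tid) pairs, assumed sorted by r_tid.
    """
    width = -(-len(S) // num_partitions)        # ceil(len(S) / partitions)
    parts = defaultdict(list)
    for r_tid, s_tid in join_index:             # one pass over the join index
        parts[s_tid // width].append((r_tid, s_tid))
    out = []
    for p in sorted(parts):                     # each partition reads a
        for r_tid, s_tid in sorted(parts[p], key=lambda t: t[1]):
            out.append((R[r_tid], S[s_tid]))    # contiguous range of S
    return out
```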

15.
The success of bilinear subspace learning heavily depends on reducing correlations among features along the rows and columns of the data matrices. In this work, we study the problem of rearranging elements within a matrix so as to maximize these correlations, so that information redundancy in matrix data can be more extensively removed by existing bilinear subspace learning algorithms. An efficient iterative algorithm is proposed to tackle this essentially integer programming problem: in each step, the matrix structure is refined with a constrained Earth Mover's Distance procedure that incrementally rearranges matrices to become more similar to their low-rank approximations, which have high correlation among features along rows and columns. In addition, we present two extensions of the algorithm for conducting supervised bilinear subspace learning. Experiments in both unsupervised and supervised bilinear subspace learning demonstrate the effectiveness of our proposed algorithms in improving data compression performance and classification accuracy.

16.
To recover the motion and shape matrices from a matrix of tracked feature points on a rigid object under orthography, we can perform low-rank approximation of the tracking matrix after subtracting the row mean vector from each of its columns. To obtain the row mean vector, a rank-4 matrix approximation is usually used to recover the missing entries, and a rank-3 approximation is then used to recover the shape and motion; this two-stage procedure is clearly inconvenient. In this paper, we build a cost function that computes the shape matrix, the motion matrix, and the row mean vector at the same time. The function is in the L1 norm and is not smooth everywhere. To optimize it, a new continuous-time dynamic system is proposed: as time evolves, the product of the shape and rotation matrices comes closer and closer, in the L1-norm sense, to the tracking matrix with the mean vector subtracted from each column. A parameter is implanted into the system to improve computational efficiency, and its influence on approximation accuracy and computational efficiency is theoretically studied and experimentally confirmed. Experimental results on a large number of synthetic data and on a real structure-from-motion application demonstrate the effectiveness and efficiency of the proposed method. The proposed system is also applicable to general low-rank matrix approximation in the L1 norm, as is also demonstrated experimentally.
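The cost function described here can be written down directly. A sketch in our notation, with W a 0/1 observation mask, R the 2F x 3 motion matrix, S the 3 x P shape matrix, and t the row mean/translation vector:

```python
import numpy as np

def sfm_l1_cost(track, W, R, S, t):
    """L1 cost coupling motion R (2F x 3), shape S (3 x P) and the
    mean/translation vector t (2F,): t is subtracted from every column
    of the tracking matrix before the rank-3 fit, and only observed
    entries (W == 1) contribute. Sketch for clarity only.
    """
    residual = W * (track - t[:, None] - R @ S)
    return float(np.abs(residual).sum())
```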

17.
We propose an integrated registration and clustering algorithm, called “consistency clustering”, that automatically constructs a probabilistic white-matter atlas from a set of multi-subject diffusion-weighted MR images. We formulate atlas creation as a maximum-likelihood problem, which the proposed method solves within a generalized Expectation-Maximization (EM) framework. Additionally, the algorithm employs an outlier rejection and denoising strategy to produce sharp probabilistic maps of certain bundles of interest. We test this algorithm on synthetic and real data, evaluate its stability against initialization, demonstrate labeling of a novel subject using the resulting spatial atlas, and evaluate the accuracy of this labeling. Consistency clustering is a viable tool for completely automatic white-matter atlas construction for sub-populations, and the resulting atlas is potentially useful for making diffusion measurements in a common coordinate system to identify pathology-related changes or developmental trends.

18.
We propose a model-based learning algorithm, Adaptive-resolution Reinforcement Learning (ARL), that aims to solve the online, continuous-state-space reinforcement learning problem in a deterministic domain. Our goal is to combine adaptive-resolution approximation schemes with efficient exploration in order to obtain polynomial learning rates. The proposed algorithm adaptively approximates the optimal value function using kernel-based averaging, going from a coarse to a fine kernel-based representation of the state space, which enables finer resolution in the “important” areas of the state space and coarser resolution elsewhere. We take an online learning approach in which these important areas are discovered online, using an uncertainty-interval exploration technique. In addition, we introduce an incremental variant of ARL (IARL), a more practical version of the original algorithm with reduced computational complexity at each stage. Polynomial learning rates in terms of mistake bound (in a PAC framework) are established for these algorithms under appropriate continuity assumptions.

19.
A new dynamic clustering approach (DCPSO), based on particle swarm optimization, is proposed and applied to image segmentation. The approach automatically determines the “optimum” number of clusters and simultaneously clusters the data set with minimal user interference. The algorithm starts by partitioning the data set into a relatively large number of clusters to reduce the effect of initial conditions. Binary particle swarm optimization then selects the “best” number of clusters, and the centers of the chosen clusters are refined via the K-means clustering algorithm. The proposed approach was applied to both synthetic and natural images, and the experiments show that it generally finds the “optimum” number of clusters on the tested images. Genetic-algorithm and random-search versions of dynamic clustering are also presented and compared with the particle swarm version.
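A heavily simplified stand-in for the DCPSO loop: a binary mask over candidate centers is searched stochastically (plain random mutation here, instead of a full binary particle swarm), and the surviving centers are refined with k-means via SciPy. The fitness criterion and all names are our own.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def dcpso_like(data, max_clusters=10, trials=500, seed=0):
    """Select a cluster count and centers for 2D float data (n x d),
    assuming len(data) >= max_clusters. Illustrative only."""
    rng = np.random.default_rng(seed)
    cand = data[rng.choice(len(data), max_clusters, replace=False)]

    def fitness(mask):
        if mask.sum() < 2:
            return np.inf
        d = ((data[:, None, :] - cand[mask][None, :, :]) ** 2).sum(-1)
        return d.min(axis=1).mean() * np.sqrt(mask.sum())  # penalize many clusters

    best = rng.random(max_clusters) < 0.5
    for _ in range(trials):                     # crude stochastic mask search
        trial = best ^ (rng.random(max_clusters) < 0.1)
        if fitness(trial) < fitness(best):
            best = trial
    centers, labels = kmeans2(data, cand[best], minit='matrix')  # k-means refinement
    return int(best.sum()), centers, labels
```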

