Similar Literature
20 similar records found (search time: 15 ms)
1.
Linear subspace methods are extensively used in many areas such as pattern recognition and machine learning. Among them, block subspace methods are efficient in terms of computational complexity. In this paper, we perform a thorough analysis of block subspace methods and give a theoretical framework for understanding them, which reveals the relationship between block subspace methods and classical subspace methods. We show theoretically that blockwise PCA has larger reconstruction errors than classical PCA, and that classical LDA has stronger discriminant power than blockwise LDA, for the same number of reduced features. In addition, based on the Fisher criterion, we give a strategy for selecting an appropriate block size for classification problems. Comprehensive experiments on face images and gene expression data evaluate these results, together with a comparative analysis of the various methods. The experimental results demonstrate that naively combining the subspaces of block subspace methods, without considering the subspace distance, may yield undesirable performance on undersampled problems.
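The central comparison in this abstract (blockwise versus classical PCA at an equal number of reduced features) can be checked numerically. A minimal sketch with synthetic correlated data standing in for face images; all sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# correlated synthetic data in place of vectorized face images
X = rng.standard_normal((200, 16)) @ rng.standard_normal((16, 16))
X -= X.mean(axis=0)

def pca_reconstruction_error(X, k):
    """Squared error of the best rank-k PCA reconstruction of centered X."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = X @ Vt[:k].T @ Vt[:k]
    return float(np.sum((X - Xk) ** 2))

# classical PCA with k = 4 components on all 16 features
err_full = pca_reconstruction_error(X, 4)

# blockwise PCA: 4 blocks of 4 features, 1 component per block
# (same total number of reduced features)
err_block = sum(pca_reconstruction_error(X[:, b], 1)
                for b in np.split(np.arange(16), 4))
```

Classical PCA computes the optimal rank-k approximation over all features, while the blockwise reconstruction is constrained to a block-aligned subspace of the same total dimension, so `err_full <= err_block` always holds.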

2.
A reliable system for visual learning and recognition should enable selective treatment of individual parts of the input data and should deal successfully with noise and occlusions. These requirements are not satisfactorily met when visual learning is approached by appearance-based modeling of objects and scenes using the traditional PCA approach. In this paper we extend the standard PCA approach to overcome these shortcomings. We first present a weighted version of PCA which, unlike the standard approach, considers individual pixels and images selectively, depending on the corresponding weights. We then propose a robust PCA method for obtaining a consistent subspace representation in the presence of outlying pixels in the training images. The method is based on the EM algorithm for estimating principal subspaces in the presence of missing data. We demonstrate the efficiency of the proposed methods in a number of experiments.
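A minimal sketch of a weighted PCA in the spirit described here (sample-level weights only; the paper also weights individual pixels, and its robust variant uses EM): samples contribute to the mean and covariance in proportion to their weights, so corrupted images can be down-weighted rather than discarded.

```python
import numpy as np

def weighted_pca(X, w, k):
    """Weighted PCA: rows of X are samples (e.g. flattened images),
    w holds one non-negative weight per sample."""
    w = w / w.sum()
    mu = w @ X                            # weighted mean
    Xc = X - mu
    C = (Xc * w[:, None]).T @ Xc          # weighted covariance
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:k]    # top-k principal directions
    return mu, vecs[:, order]

# data along the direction (3, 1), plus one gross outlier
rng = np.random.default_rng(1)
t = rng.standard_normal(100)
X = np.outer(t, [3.0, 1.0]) + 0.01 * rng.standard_normal((100, 2))
X = np.vstack([X, [[100.0, -100.0]]])
w = np.ones(101)
w[-1] = 1e-6                              # nearly ignore the outlier
mu, V = weighted_pca(X, w, 1)
```

With the outlier down-weighted, the leading principal direction recovers the true data direction despite the corruption.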

3.
In content-based image retrieval, shape is among the most important image features. The Fourier descriptor (FD) is widely used because of its efficiency, stability, low computational complexity, and small number of parameters. Most existing Fourier descriptors use the Fourier coefficients directly. In this paper we apply Principal Component Analysis (PCA) to the Fourier descriptor, yielding a PCA-based Fourier descriptor (PCA-FD) method for shape description and retrieval. Retrieval performance is compared in experiments on the MPEG-7 standard image database; the results show that the PCA-FD method achieves better retrieval precision in less time.
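As a concrete illustration of the descriptor itself (the method then applies PCA across a database of such descriptors), here is one common FD variant, the centroid-distance Fourier descriptor; the shapes and sizes are illustrative:

```python
import numpy as np

def centroid_distance_fd(contour, n_coeffs=8):
    """FFT magnitudes of the centroid-distance signature, normalized by
    the DC term, giving a rotation- and scale-invariant shape code."""
    r = np.linalg.norm(contour - contour.mean(axis=0), axis=1)
    F = np.abs(np.fft.fft(r))
    return F[1:n_coeffs + 1] / F[0]

theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
# a square traced via the max-norm: its centroid distance varies
square = circle / np.maximum(np.abs(circle[:, 0]),
                             np.abs(circle[:, 1]))[:, None]

fd_circle = centroid_distance_fd(circle)   # constant radius -> near zero
fd_square = centroid_distance_fd(square)
```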

4.
Investigates the use of linear and nonlinear principal manifolds for learning low-dimensional representations for visual recognition. Several leading techniques, namely principal component analysis (PCA), independent component analysis (ICA), and nonlinear kernel PCA (KPCA), are examined and tested in a visual recognition experiment using more than 1,800 facial images from the FERET (FacE REcognition Technology) database. We compare the recognition performance of nearest-neighbor matching in each principal manifold representation with that of a maximum a posteriori (MAP) matching rule using a Bayesian similarity measure derived from dual probabilistic subspaces. The experimental results demonstrate the simplicity, computational economy, and performance superiority of the Bayesian subspace method over principal manifold techniques for visual matching.

5.
Within the block compressed sensing (BCS) framework, reconstruction algorithms based on smoothed projected Landweber iteration achieve good rate-distortion performance at low computational complexity, especially when principal component analysis (PCA) is used for adaptive hard-threshold shrinkage. However, the PCA learning stage ignores the stationarity of local image structure, which limits the reconstruction performance of the Landweber iterations. To address this, we adopt granular computing (GrC) theory: the image is decomposed into granules according to the structural characteristics of its blocks, PCA then learns a sparse representation basis for each granule, and hard-threshold shrinkage denoises the blocks within each granule. Because the blocks within a granule share stationary structure, hard-threshold shrinkage becomes more effective. Experimental results show that, compared with conventional algorithms, the proposed algorithm achieves better overall objective quality of the reconstructed image, better preserves important details such as edges and textures, yields good subjective visual quality, and maintains low reconstruction complexity.
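The PCA hard-threshold shrinkage step at the heart of this approach can be sketched as follows (a simplified, per-group version; the granular decomposition and the Landweber iterations themselves are omitted):

```python
import numpy as np

def pca_hard_threshold(blocks, tau):
    """Denoise a group of vectorized image blocks by hard-thresholding
    their coefficients in the group's own PCA basis."""
    mu = blocks.mean(axis=0)
    Xc = blocks - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA basis
    coef = Xc @ Vt.T
    coef[np.abs(coef) < tau] = 0.0                     # hard threshold
    return coef @ Vt + mu
```

Because blocks grouped by structural similarity share a compact PCA basis, small coefficients are mostly noise, which is why grouping before learning the basis (the granular step above) improves the shrinkage.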

6.
Image coding using principal component analysis (PCA), a type of image compression technique, projects image blocks onto a subspace that preserves most of the original information. However, blocks in an image exhibit various inhomogeneous properties, such as smooth regions, texture, and edges, which create difficulties for PCA image coding. This paper proposes a repartition clustering method that partitions the data into groups such that individuals within a group are homogeneous while individuals in different groups are not, and PCA is then applied separately to each group. In the clustering method, a genetic algorithm acts as a framework consisting of three phases, including the proposed repartition clustering. Based on this mechanism, the proposed method effectively increases image quality and provides an enhanced visual effect.

7.
We give a general overview of the state of the art in subspace system identification methods, restricting ourselves to the most important ideas and developments since the methods appeared in the late eighties. First, the basics of linear subspace identification are summarized. The different algorithms found in the literature (such as N4SID, IV-4SID, MOESP, and CVA) are discussed and put into a unifying framework. Further, subspace identification and prediction error methods are compared in terms of computational complexity and precision by applying both to 10 industrial data sets.

8.
We develop a new biologically motivated algorithm for representing natural images using successive projections onto complementary subspaces. An image is first projected into an edge subspace spanned by an ICA basis adapted to natural images, which captures sharp features such as edges and curves. The residual image obtained after extraction of the sharp features is approximated using a mixture of probabilistic principal component analyzers (MPPCA) model. The model is consistent with cellular, functional, information-theoretic, and learning paradigms in visual pathway modeling. We demonstrate the efficiency of our model for representing different attributes of natural images, such as color and luminance. We compare the quality of representation of our model against commonly used bases, including the discrete cosine transform (DCT), independent component analysis (ICA), and principal component analysis (PCA), on the basis of their entropies: chrominance and luminance components are represented using codes of lower entropy than DCT, ICA, or PCA at similar visual quality. The model attains considerable simplification for learning from images by using a sparse independent code to represent edges and by explicitly evaluating probabilities in the residual subspace.

9.
Outlier detection algorithms are often computationally intensive because they need to score each point in the data; even simple distance-based algorithms have quadratic complexity. High-dimensional outlier detection algorithms such as subspace methods are often even more computationally intensive because of their need to explore different subspaces of the data. In this paper, we propose an exceedingly simple subspace outlier detection algorithm that can be implemented in a few lines of code, whose complexity is linear in the size of the data set, and whose space requirement is constant. We show that this outlier detection algorithm is much faster than both conventional and high-dimensional algorithms while also providing more accurate results. The approach uses randomized hashing to score data points and has a neat subspace interpretation; we provide a visual representation of this interpretability in terms of outlier sensitivity histograms. Furthermore, the approach generalizes easily to data streams, where it provides an efficient way to discover outliers in real time. We present experimental results showing the effectiveness of the approach over other state-of-the-art methods.
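A toy scorer in the same spirit (random-subspace grid hashing; this is an illustrative sketch, not the authors' exact algorithm): points that fall in sparsely populated bins across many random subspaces receive low scores and are flagged as outliers.

```python
import numpy as np

def subspace_hash_scores(X, n_rounds=50, seed=0):
    """Average log-population of each point's hash bin over many rounds,
    each round using a random subspace and a randomly shifted grid."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros(n)
    for _ in range(n_rounds):
        dims = rng.choice(d, size=max(1, d // 2), replace=False)
        width = rng.uniform(0.5, 2.0)
        shift = rng.uniform(0.0, width, size=len(dims))
        keys = np.floor((X[:, dims] + shift) / width).astype(int)
        _, inv, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
        scores += np.log(counts[inv])
    return scores / n_rounds

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.5, (100, 4)),   # dense cluster
               np.full((1, 4), 10.0)])           # isolated outlier
scores = subspace_hash_scores(X)
```

Each round costs O(n), so the total work is linear in the data size, matching the complexity the abstract claims.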

10.
Efficient and compact representation of images is a fundamental problem in computer vision. In this paper, we propose methods that use Haar-like binary box functions to represent a single image or a set of images. A desirable property of these box functions is that their inner product with an image can be computed very efficiently. We propose two closely related novel subspace methods to model images: the non-orthogonal binary subspace (NBS) method and the binary principal component analysis (B-PCA) algorithm. NBS is spanned directly by binary box functions and can be used for image representation, fast template matching, and many other vision applications. B-PCA is a structured subspace that inherits the merits of both NBS (fast computation) and PCA (modeling of data structure information). B-PCA base vectors are obtained by a novel PCA-guided NBS method. We also show that B-PCA base vectors are nearly orthogonal to each other; as a result, in the non-orthogonal vector decomposition process, the computationally intensive pseudo-inverse projection operator can be approximated by the direct dot product without causing significant distance distortion. Experiments on real image datasets show promising performance in image matching, reconstruction, and recognition tasks with significant speed improvement.
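The "very efficient inner product" property of box functions comes from the classic integral-image (summed-area table) trick, sketched here:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: after O(N) precomputation, any axis-aligned
    box sum costs O(1), which is what makes Haar-like box bases cheap."""
    return np.cumsum(np.cumsum(img.astype(float), axis=0), axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] (exclusive upper bounds) via 4 lookups."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

The inner product of an image with any binary box function then reduces to a handful of such lookups, independent of the box size.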

11.
An approach that unifies subspace feature selection and optimal classification is presented. Independent component analysis (ICA) and principal component analysis (PCA) provide a maximally variant or statistically independent basis for pattern recognition. A support vector classifier (SVC) provides information about the significance of each feature vector. The feature vectors and the principal and independent component bases are modified to obtain classification results which provide lower classification error and better generalization than can be obtained by the SVC on the raw data and its PCA or ICA subspace representation. The performance of the approach is demonstrated with artificial data sets and an example of face recognition from an image database.

12.
In this paper, we present a novel method for the direct volume rendering of large smoothed‐particle hydrodynamics (SPH) simulation data without transforming the unstructured data to an intermediate representation. By directly visualizing the unstructured particle data, we avoid long preprocessing times and large storage requirements. This enables the visualization of large, time‐dependent, and multivariate data both as a post‐process and in situ. To address the computational complexity, we introduce stochastic volume rendering that considers only a subset of particles at each step during ray marching. The sample probabilities for selecting this subset at each step are thereby determined both in a view‐dependent manner and based on the spatial complexity of the data. Our stochastic volume rendering enables us to scale continuously from a fast, interactive preview to a more accurate volume rendering at higher cost. Lastly, we discuss the visualization of free‐surface and multi‐phase flows by including a multi‐material model with volumetric and surface shading into the stochastic volume rendering.

13.
蔺宏伟, 王国瑾 (Lin Hongwei, Wang Guojin). 《计算机学报》 (Chinese Journal of Computers), 2003, 26(12): 1645-1651
The distance transform is a long-studied topic in image processing. This paper extends the two-dimensional signed Euclidean distance transform to three dimensions, optimizes it, analyzes its computational complexity, and applies it to two important problems in computer graphics. First, converting the triangle-mesh representation of a graphical object into its distance-field representation: the triangle mesh is discretized into a voxel representation, and the 3D signed distance transform turns the global search for the shortest distance from a point in space to the object into a local search for the distance from that point to the portion of the object contained in its nearest feature voxel. Second, using a similar idea, the shortest distance between two surfaces in space is computed.
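For reference, the field being computed can be sketched by brute force (shown in 2D for brevity; the contribution here is an optimized 3D propagation scheme that avoids exactly this global search):

```python
import numpy as np

def signed_edt_bruteforce(shape, feature_points, inside_mask):
    """Signed Euclidean distance transform by exhaustive search:
    the distance from every grid cell to its nearest feature (boundary)
    point, negated inside the object."""
    grid = np.stack(list(np.indices(shape)), axis=-1).reshape(-1, 1, 2)
    d = np.linalg.norm(grid.astype(float) - feature_points[None, :, :],
                       axis=-1).min(axis=1).reshape(shape)
    return np.where(inside_mask, -d, d)

fp = np.array([[3.0, 3.0]])               # a single feature cell
inside = np.zeros((7, 7), dtype=bool)
inside[3, 3] = True
sdf = signed_edt_bruteforce((7, 7), fp, inside)
```

The brute-force version costs O(grid size × feature count); the propagation approach replaces the inner global minimum with a local search around precomputed nearest-feature voxels.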

14.
In this paper, we focus on incrementally learning a robust multi-view subspace representation for visual object tracking. During tracking, dynamic background variation and changes in target appearance make it challenging to learn an informative feature representation of the tracked object that is distinguished from the dynamic background. To this end, we propose a novel online multi-view subspace learning algorithm (OMEL) via group structure analysis, which consistently learns a low-dimensional representation shared across views as time changes. In particular, both group sparsity and group interval constraints are incorporated to preserve the group structure in the low-dimensional subspace, and the subspace learning model is incrementally updated to prevent repeated computation over previous data. We extensively evaluate OMEL on multiple benchmark video tracking sequences against six related tracking algorithms. Experimental results show that OMEL is robust and effective at learning dynamic subspace representations for online object tracking. Moreover, several additional evaluation tests validate the efficacy of the group structure assumption.

15.
In many scientific simulations, the temporal variation and analysis of features are important. Visualization and visual analysis of time series data is still a significant challenge because of the large volume of data. Irregular and scattered time series data sets are even more problematic to visualize interactively. Previous work proposed functional representation using basis functions as one solution for interactively visualizing scattered data by harnessing the power of modern PC graphics boards. In this paper, we use the functional representation approach for time-varying data sets and develop an efficient encoding technique utilizing temporal similarity between time steps. Our system utilizes a graduated approach of three methods with increasing time complexity based on the lack of similarity of the evolving data sets. Using this system, we are able to enhance the encoding performance for the time-varying data sets, reduce the data storage by saving only changed or additional basis functions over time, and interactively visualize the time-varying encoding results. Moreover, we present efficient rendering of the functional representations using binary space partitioning tree textures to increase the rendering performance.

16.
Linear subspace analysis methods have been successfully applied to extract features for face recognition. But they are inadequate to represent the complex and nonlinear variations of real face images, such as illumination, facial expression and pose variations, because of their linear properties. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition, which combines the nonlinear kernel trick with the linear subspace analysis method Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick is used to project the input data into an implicit feature space; then FLDA is performed in this feature space. Thus nonlinear discriminant features of the input data are yielded. In addition, in order to reduce the computational complexity, a geometry-based feature vector selection scheme is adopted. Another similar nonlinear subspace analysis is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, and KNDA is compared with KPCA and FLDA. Extensive experimental results show that KNDA can give a higher recognition rate than KPCA and FLDA.
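The kernel trick shared by KNDA and KPCA can be illustrated with a minimal kernel PCA using the polynomial kernel mentioned in the experiments (KNDA itself additionally brings in the Fisher criterion, which this sketch omits):

```python
import numpy as np

def kernel_pca(X, k, degree=2):
    """Minimal kernel PCA: data are implicitly mapped through the
    polynomial kernel K(x, y) = (x.y + 1)^degree, and PCA is performed
    on the double-centered kernel matrix."""
    K = (X @ X.T + 1.0) ** degree
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                              # center in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:k]
    alphas = vecs[:, order] / np.sqrt(np.maximum(vals[order], 1e-12))
    return Kc @ alphas                          # projected training points

rng = np.random.default_rng(5)
Z = kernel_pca(rng.standard_normal((30, 3)), k=2)
```

The projected coordinates are mutually uncorrelated with variances equal to the kernel-matrix eigenvalues, exactly as in linear PCA but in the implicit feature space.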

17.
Data reduction can improve the storage, transfer time, and processing requirements of very large data sets. One of the challenges of designing effective data reduction techniques is to be able to preserve the ability to use the reduced format directly for a wide range of database and data mining applications. We propose the novel idea of hierarchical subspace sampling in order to create a reduced representation of the data. The method is naturally able to estimate the local implicit dimensionalities of each point very effectively and, thereby, create a variable dimensionality reduced representation of the data. Such a technique is very adaptive about adjusting its representation depending upon the behavior of the immediate locality of a data point. An important property of the subspace sampling technique is that the overall efficiency of compression improves with increasing database size. Because of its sampling approach, the procedure is extremely fast and scales linearly both with data set size and dimensionality. We propose new and effective solutions to problems such as selectivity estimation and approximate nearest-neighbor search. These are achieved by utilizing the locality specific subspace characteristics of the data which are revealed by the subspace sampling technique.

18.
In this paper, we present a low-complexity algorithm for real-time joint user scheduling and receive antenna selection (JUSRAS) in multiuser MIMO systems. The computational complexity of an exhaustive search for the JUSRAS problem grows exponentially with the number of users and receive antennas. We apply binary particle swarm optimization (BPSO) to the joint user scheduling and receive antenna selection problem. In addition to applying conventional BPSO to JUSRAS, we also present a specific improvement to this population-based heuristic algorithm: we feed it a cyclically shifted initial population, so that the number of iterations needed to reach an acceptable solution is reduced. The proposed BPSO for the JUSRAS problem has low computational complexity, and its effectiveness is verified through simulation results.
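A generic binary PSO of the kind this work builds on can be sketched as follows, shown on a toy bit-maximization objective rather than the actual sum-rate objective of the JUSRAS problem; the cyclic-shift initialization improvement is omitted:

```python
import numpy as np

def bpso(fitness, n_bits, n_particles=20, n_iters=60, seed=0):
    """Binary particle swarm optimization: a sigmoid of each velocity
    gives the probability of the corresponding bit being 1."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, size=(n_particles, n_bits))
    vel = np.zeros((n_particles, n_bits))
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, n_bits))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        vel = np.clip(vel, -4.0, 4.0)            # keep some exploration
        prob = 1.0 / (1.0 + np.exp(-vel))        # sigmoid -> bit probability
        pos = (rng.random((n_particles, n_bits)) < prob).astype(int)
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest, int(pbest_fit.max())

best_bits, best_fit = bpso(lambda b: int(b.sum()), n_bits=8)
```

In the scheduling setting, each bit would encode whether a given user or receive antenna is selected, and the fitness would be the resulting system throughput.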

19.
To address fault detection in multimode batch processes, this paper proposes a fault detection strategy based on locality preserving projections and a weighted k-nearest-neighbor rule (LPP-WkNN). First, locality preserving projections (LPP) maps the original data into a low-dimensional principal subspace. Next, in that subspace, the local neighbor set of each sample's k-th nearest neighbor determines the sample's weights, from which a weighted statistic Dw is computed. Finally, kernel density estimation sets the control limit on Dw for fault detection. Applying LPP for dimensionality reduction both lessens the influence of outliers during training and lowers the computational complexity of online fault detection. Meanwhile, the weighting rule of WkNN gives the fault-detection statistic a unimodal distribution; compared with the conventional kNN statistic, the proposed weighted statistic achieves better detection performance. Numerical examples and simulations of a semiconductor etching process, with comparisons against PCA, kNN, WkNN, and LPP-kNN, verify the effectiveness of the method.
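A sketch of a weighted kNN detection statistic in this spirit (the weighting shown here, the inverse of each neighbor's own k-th-neighbor distance, is one plausible choice and not necessarily the exact rule of this method; the LPP projection and the KDE control limit are omitted):

```python
import numpy as np

def wknn_statistic(train, x, k=5):
    """Weighted distance statistic Dw for a query x: squared distances
    to its k nearest training samples, weighted by the inverse local
    scale of each neighbor."""
    d = np.linalg.norm(train - x, axis=1)
    nn = np.argsort(d)[:k]
    # each training sample's local scale: distance to its own k-th neighbor
    pair = np.linalg.norm(train[:, None, :] - train[None, :, :], axis=2)
    scale = np.sort(pair, axis=1)[:, k]      # column 0 is the self-distance
    w = 1.0 / (scale[nn] + 1e-12)
    w /= w.sum()
    return float(w @ d[nn] ** 2)

rng = np.random.default_rng(6)
train = rng.normal(0.0, 1.0, (60, 2))        # normal operating data
d_normal = wknn_statistic(train, np.zeros(2))
d_fault = wknn_statistic(train, np.array([8.0, 8.0]))
```

A faulty sample far from the training manifold yields a much larger statistic than a normal one, and the control limit on Dw separates the two.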

20.

Cloud computing delivers resources such as software, data, storage and servers over the Internet; its adaptable infrastructure facilitates on-demand access to computational resources. There are many benefits of cloud computing, such as being scalable, paying only for consumption, improving accessibility, limiting investment costs and being environmentally friendly. Thus, many organizations have already started applying this technology to improve organizational efficiency. In this study, we developed a cloud-based book recommendation service that uses a principal component analysis–scale-invariant feature transform (PCA-SIFT) feature detector algorithm to recommend book(s) based on a user-uploaded image of a book or collection of books. The high dimensionality of the image is reduced with the help of a principal component analysis (PCA) pre-processing technique. When the mobile application user takes a picture of a book or a collection of books, the system recognizes the image(s) and recommends similar books. The computational task is performed via the cloud infrastructure. Experimental results show the PCA-SIFT-based cloud recommendation service is promising; additionally, the application responds faster when the pre-processing technique is integrated. The proposed generic cloud-based recommendation system is flexible and highly adaptable to new environments.
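The PCA side of the PCA-SIFT step can be sketched as a learned projection over training descriptors (the SIFT extraction itself is assumed to have happened upstream; the descriptor sizes here are illustrative):

```python
import numpy as np

def learn_pca_projection(descriptors, k):
    """Learn a top-k principal projection from training descriptors
    (e.g. 128-D SIFT-like vectors) for compact, faster matching."""
    mu = descriptors.mean(axis=0)
    _, _, Vt = np.linalg.svd(descriptors - mu, full_matrices=False)
    return mu, Vt[:k]

def project(descriptors, mu, P):
    """Map descriptors to their reduced principal coordinates."""
    return (descriptors - mu) @ P.T

rng = np.random.default_rng(7)
D = rng.standard_normal((50, 16))           # stand-in descriptor set
mu, P = learn_pca_projection(D, k=4)
reduced = project(D, mu, P)
```

Matching in the reduced space is what makes the cloud-side recognition respond faster once the pre-processing is integrated.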

