Similar Documents
20 similar documents were retrieved.
1.
Informative experiments are identification experiments that contain sufficient information for an identification algorithm to discriminate between different models in an intended model set. In this paper, a particular class of identification algorithms, namely subspace-based identification, is considered. Criteria for experiments to be informative with these methods are presented for the deterministic setup and for the combined deterministic-stochastic setup. It is pointed out that if these criteria are not satisfied, the deterministic and stochastic components can cancel perfectly in a subspace projection. It is further shown that such cancellations can indeed be avoided under mild conditions.

2.

This paper proposes a new subspace clustering method based on sparse sample self-representation (SSR). The proposed method uses SSR to address the problem that the affinity matrix does not strictly follow the subspace structure, and it employs a sparsity constraint to ensure robustness to noise and outliers in subspace clustering. Specifically, we first construct a self-representation matrix for all samples, combining an ℓ1-norm regularizer with an ℓ2,1-norm regularizer so that each sample is represented as a sparse linear combination of its related samples. We then use the resulting matrix to build an affinity matrix. Finally, we apply spectral clustering to the affinity matrix. To validate the effectiveness of the proposed method, we conducted experiments on UCI datasets; the results show that the proposed method achieves the lowest clustering error, outperforming state-of-the-art methods.
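As an illustration only (not the authors' code), here is a minimal sketch of this style of pipeline: an ℓ1-only self-representation solved with sklearn's Lasso (the ℓ2,1 term and the paper's tuning are omitted), followed by spectral clustering on the symmetrized affinity matrix.

```python
# Hedged sketch: sparse self-representation clustering using only the l1 term;
# the l2,1 regularizer from the paper is omitted for brevity.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssr_cluster(X, n_clusters, alpha=0.05):
    """X: (n_samples, n_features). Returns cluster labels."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Represent sample i as a sparse combination of the other samples.
        mask = np.arange(n) != i
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[mask].T, X[i])          # columns are the other samples
        C[i, mask] = lasso.coef_
    A = np.abs(C) + np.abs(C).T             # symmetric affinity matrix
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity='precomputed').fit_predict(A)
    return labels
```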


3.
Stress-induced speech variation arises when the speaker is subjected to changes in gravitational acceleration; compared with normal speech, the spectral energy of the varied speech is distributed more diffusely across the frequency band. The whole band is divided into 8 subbands, and the ratios of subband spectral energies are used as features in a proposed subspace-based method for classifying normal versus varied speech. The method uses CLAFIC to design the initial vector subspaces, trains the two class subspaces with different rotation schemes via the LSM algorithm, and adjusts the classifier parameters according to preliminary classification results to improve performance. Experimental results show that the method discriminates well between stress-induced varied speech and normal speech, with an average classification accuracy of 95.9%.
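For illustration, a minimal sketch of the subband feature computation, assuming equal-width subbands of the power spectrum and normalization by total frame energy; the CLAFIC/LSM subspace training itself is not shown.

```python
# Hedged sketch: 8-subband spectral-energy-ratio features for one speech frame,
# assuming equal-width subbands and ratio-to-total-energy normalization.
import numpy as np

def subband_energy_ratios(frame, n_subbands=8):
    """frame: 1-D array of speech samples. Returns n_subbands ratios summing to 1."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2        # power spectrum
    bands = np.array_split(spectrum, n_subbands)      # equal-width subbands
    energies = np.array([b.sum() for b in bands])
    return energies / energies.sum()
```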

4.
NNSRM is an implementation of the structural risk minimization (SRM) principle using the nearest neighbor (NN) rule, and linear discriminant analysis (LDA) is a dimension-reducing method often used in classification. This paper combines the two methods for face recognition. We first project the face images into a PCA subspace, then project the results into a much lower-dimensional LDA subspace, and finally use an NNSRM classifier to recognize them in the LDA subspace. Experimental results demonstrate that, with suitable distance measures, the combined method achieves better performance than NN and performance comparable to SVM at a lower computational cost.
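A minimal sketch of the PCA-then-LDA pipeline follows, with a plain 1-NN classifier standing in for NNSRM; the subspace dimensions are assumptions (the LDA dimension must be smaller than the number of classes).

```python
# Hedged sketch: PCA -> LDA -> nearest-neighbor pipeline; 1-NN stands in for NNSRM.
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def build_face_classifier(n_pca=100, n_lda=30):
    # n_lda must be smaller than the number of classes; both values are assumptions.
    return make_pipeline(
        PCA(n_components=n_pca),                         # first PCA subspace
        LinearDiscriminantAnalysis(n_components=n_lda),  # lower-dimensional LDA subspace
        KNeighborsClassifier(n_neighbors=1),             # classify in the LDA subspace
    )

# Usage: clf = build_face_classifier(); clf.fit(X_train, y_train); clf.predict(X_test)
```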

Danian Zheng received his Bachelor degree in Computer Science and Technology in 2002 from Tsinghua University, Beijing, China. He received his Master degree and Doctoral degree in Computer Science and Technology in 2006 from Tsinghua University. He is currently a researcher at Fujitsu R&D Center Co. Ltd, Beijing, China. His research interests are mainly in the areas of support vector machines, kernel methods and their applications.

Meng Na received her Bachelor degree in Computer Science and Technology in 2003 from Northeastern, China. Since 2003 she has been pursuing the Master degree and the Doctoral degree at the Department of Computer Science and Technology at Tsinghua University. Her research interests are in the areas of image processing, pattern recognition, and virtual humans.

Jiaxin Wang received his Bachelor degree in Automatic Control in 1965 from Beijing University of Aeronautics and Astronautics, his Master degree in Computer Science and Technology in 1981 from Tsinghua University, Beijing, China, and his Doctoral degree in 1996 from the Engineering Faculty of Vrije Universiteit Brussel, Belgium. He is currently a professor in the Department of Computer Science and Technology, Tsinghua University. His research interests are in the areas of artificial intelligence, intelligent control and robotics, machine learning, pattern recognition, image processing and virtual reality.

5.
In this paper, we formulate the problem of summarizing a data set of transactions with categorical attributes as an optimization problem involving two objective functions – compaction gain and information loss. We propose metrics to characterize the output of any summarization algorithm. We investigate two approaches to this problem: the first is an adaptation of clustering, and the second makes use of frequent itemsets from the association analysis domain. We illustrate one application of summarization in the field of network data, showing how our technique can be used to summarize network traffic into a compact but meaningful representation. Specifically, we evaluate our proposed algorithms on the 1998 DARPA Off-Line Intrusion Detection Evaluation data and on network data generated by SKAION Corp for the ARDA information assurance program.

Vipin Kumar is currently William Norris Professor and Head of the Computer Science and Engineering Department at the University of Minnesota. His research interests include high-performance computing and data mining. He has authored over 200 research articles and has coedited or coauthored nine books, including the widely used textbooks Introduction to Parallel Computing and Introduction to Data Mining, both published by Addison Wesley. He has served as chair/co-chair for many conferences/workshops in the area of data mining and parallel computing, including the IEEE International Conference on Data Mining (2002) and the 15th International Parallel and Distributed Processing Symposium (2001). He serves as the chair of the steering committee of the SIAM International Conference on Data Mining, and is a member of the steering committee of the IEEE International Conference on Data Mining. Dr. Kumar serves or has served on the editorial boards of several journals, including Knowledge and Information Systems, Journal of Parallel and Distributed Computing and IEEE Transactions on Knowledge and Data Engineering (1993–1997). He is a Fellow of the ACM and IEEE, and a member of SIAM.

Varun Chandola received his BTech degree in Computer Science from the Indian Institute of Technology, Madras, India, in 2002. He is currently a PhD student in the Computer Science and Engineering Department at the University of Minnesota. His research interests include data mining, cyber-security and machine learning.

6.
We investigate an algorithm for finding the point of an affine subspace in the positive orthant that is closest to the original point with respect to the Kullback–Leibler distance. This problem is solved by means of the classical Darroch–Ratcliff algorithm (see [1]), while we use ideas of the information geometry founded by Chentsov (see [2]) and Csiszar (see [3]). The main theorem of the present work proves the convergence of that algorithm (the method of proof is different from previous ones). The proposed algorithm can be applied, e.g., to find the maximum likelihood estimates in an exponential family (see the last section of the paper).
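As a concrete (and much simpler) instance of this kind of iteration, a sketch of classical iterative proportional fitting, which projects a positive matrix onto the affine set of matrices with prescribed row and column sums while minimizing KL divergence; the stopping rule and iteration count are assumptions, and this is not the paper's general algorithm.

```python
# Hedged sketch: iterative proportional fitting, a classical special case of the
# Darroch-Ratcliff iteration for KL projection onto an affine set in the positive orthant.
import numpy as np

def ipf(P, row_sums, col_sums, n_iter=200, tol=1e-10):
    """P: positive start matrix; row_sums/col_sums: target marginals with equal totals."""
    Q = P.astype(float).copy()
    for _ in range(n_iter):
        Q *= (row_sums / Q.sum(axis=1))[:, None]   # match row marginals
        Q *= (col_sums / Q.sum(axis=0))[None, :]   # match column marginals
        if np.allclose(Q.sum(axis=1), row_sums, atol=tol):
            break
    return Q
```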

7.
The performance of clustering in document space can be influenced by the high dimension of the vectors, because high-dimensional vectors carry a great deal of redundant information, which may make the similarity between vectors inaccurate. Hence, it is worthwhile to derive a low-dimensional subspace that contains less redundant information, so that document vectors can be grouped more reasonably. In general, learning a subspace and clustering vectors are treated as two independent steps; in that case, we cannot assess whether the subspace is appropriate for the clustering method or vice versa. To overcome this drawback, this paper combines subspace learning and clustering into an iterative procedure named adaptive subspace learning (ASL). First, the intracluster similarity and the intercluster separability of the vectors are increased via the initial cluster indicators in the subspace learning step; affinity propagation is then adopted to partition the vectors into a specified number of clusters, so as to update the cluster indicators and repeat subspace learning. In ASL, the obtained subspace becomes more suitable for the clustering as the iterative optimization proceeds. The proposed method is evaluated on the NG20, Classic3 and K1b datasets, and the results are shown to be superior to conventional document clustering methods.
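A rough sketch of the alternate-and-iterate idea only: a discriminative projection is refit from the current cluster labels, and the documents are re-clustered in the learned subspace. Here KMeans stands in for affinity propagation and LDA stands in for the paper's subspace-learning step; dimensions and the iteration count are assumptions.

```python
# Hedged sketch: alternate subspace learning (from current labels) and clustering.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def adaptive_subspace_clustering(X, n_clusters, n_iter=10, random_state=0):
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(X)   # initial indicators
    for _ in range(n_iter):
        lda = LinearDiscriminantAnalysis(
            n_components=min(n_clusters - 1, X.shape[1] - 1))
        Z = lda.fit_transform(X, labels)                        # subspace from current labels
        new_labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=random_state).fit_predict(Z)
        if np.array_equal(new_labels, labels):                  # stop when labels stabilize
            break
        labels = new_labels
    return labels
```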

8.
The subspace methods of classification are decision-theoretic pattern recognition methods in which each class is represented in terms of a linear subspace of the Euclidean pattern or feature space. In most reported subspace methods, a priori criteria have been applied to improve either the class representation or the discriminatory power of the subspaces. Recently, construction of the class subspaces by learning has been suggested by Kohonen, resulting in an improved classification accuracy. A variant of the original learning rule is analyzed and results are given on its application to the classification of phonemes in automatic speech recognition.
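For orientation, a minimal sketch of a CLAFIC-style subspace classifier: each class is represented by the span of its leading principal directions and a sample is assigned to the class whose subspace captures the largest squared projection. The subspace dimension is an assumption, and the learning (rotation) step of the LSM is omitted.

```python
# Hedged sketch of a plain subspace (CLAFIC-style) classifier, without learning rotations.
import numpy as np

class SubspaceClassifier:
    def __init__(self, dim=5):
        self.dim = dim
        self.bases = {}

    def fit(self, X, y):
        for c in np.unique(y):
            Xc = X[y == c]
            # Leading right singular vectors span the class subspace.
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            self.bases[c] = Vt[:self.dim]
        return self

    def predict(self, X):
        classes = list(self.bases)
        # Squared projection length of each sample onto each class subspace.
        scores = np.stack(
            [np.sum((X @ self.bases[c].T) ** 2, axis=1) for c in classes], axis=1)
        return np.array(classes)[np.argmax(scores, axis=1)]
```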

9.
10.
Comparing subspace clusterings
We present the first framework for comparing subspace clusterings. We propose several distance measures for subspace clusterings, including generalizations of well-known distance measures for ordinary clusterings. We describe a set of important properties for any measure for comparing subspace clusterings and give a systematic comparison of our proposed measures in terms of these properties. We validate the usefulness of our subspace clustering distance measures by comparing clusterings produced by the algorithms FastDOC, HARP, PROCLUS, ORCLUS, and SSPC. We show that our distance measures can also be used to compare partial clusterings, overlapping clusterings, and patterns in binary data matrices.

11.
In this paper, we present an orthonormal version of generalized signal subspace tracking. It is based on an interpretation of the generalized signal subspace as the solution of a constrained minimization task. The algorithm, referred to as the CGST algorithm, guarantees the Cx-orthonormality of the estimated generalized signal subspace basis at each iteration, where Cx denotes the correlation matrix of the sequence x(t). An efficient implementation of the proposed algorithm enhances its applicability in real-time applications.

12.
Dear editor, Most existing ontology matching methods utilize literal information to discover alignments [1,2]. However, some literal information in ontologies ma...

13.

In this paper, we propose a novel method, called the random subspace method (RSM) based on tensors (Tensor-RS), for face recognition. Unlike the traditional RSM, which treats each pixel (or feature) of the face image as a sampling unit and thus ignores the spatial information within the face image, the proposed Tensor-RS regards each small image region as a sampling unit and captures the spatial information within small image regions by reshaping the image and applying a tensor-based feature extraction method. More specifically, an original whole face image is first partitioned into sub-images to improve the robustness to facial variations, and each sub-image is then reshaped into a new matrix whose rows correspond to vectorized small sub-image regions. After that, based on these rearranged newly formed matrices, incomplete random sampling by row vectors, rather than by features (or feature projections), is applied. Finally, a tensor subspace method, which can effectively extract the spatial information within the same row (or column) vector, is used to extract useful features. Extensive experiments on four standard face databases (AR, Yale, Extended Yale B and CMU PIE) demonstrate that the proposed Tensor-RS method significantly outperforms state-of-the-art methods.
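A sketch of the data rearrangement described above (not the full Tensor-RS method): partition a face image into sub-images, reshape each sub-image so that every row is a vectorized small region, then randomly sample rows of the rearranged matrix. Block and region sizes are assumptions; the tensor subspace feature-extraction step is omitted.

```python
# Hedged sketch of the sub-image partition, row-wise rearrangement, and random
# row sampling behind Tensor-RS; image dimensions must divide evenly.
import numpy as np

def rearrange_and_sample(img, sub=(2, 2), region=(4, 4), keep_ratio=0.5, rng=None):
    """img: 2-D array whose shape is divisible by the sub-image and region sizes."""
    rng = np.random.default_rng(rng)
    H, W = img.shape
    sh, sw = H // sub[0], W // sub[1]                 # sub-image size
    sampled = []
    for i in range(sub[0]):
        for j in range(sub[1]):
            block = img[i*sh:(i+1)*sh, j*sw:(j+1)*sw]
            # Each row of M is one vectorized region of size region[0] x region[1].
            M = (block.reshape(sh // region[0], region[0], sw // region[1], region[1])
                      .transpose(0, 2, 1, 3)
                      .reshape(-1, region[0] * region[1]))
            rows = rng.choice(M.shape[0],
                              size=max(1, int(keep_ratio * M.shape[0])),
                              replace=False)           # incomplete sampling by rows
            sampled.append(M[rows])
    return sampled                                     # list of row-sampled matrices
```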


14.
Experimental data are subject to uncertainty, since every measurement apparatus is inaccurate at some level. However, the design of most computer vision and pattern recognition techniques (e.g., the Hough transform) overlooks this fact and treats intensities, locations and directions as precise values. In order to take imprecision into account, entries are often resampled to create input datasets in which the uncertainty of each original entry is characterized by as many exact elements as necessary. Clear disadvantages of the sampling-based approach are the processing penalty imposed by a larger dataset and the difficulty of estimating the minimum number of required samples. We present an improved voting scheme for the General Framework for Subspace Detection (and hence for its particular case, the Hough transform) that allows processing both exact and uncertain data. Our approach is based on an analytical derivation of the propagation of Gaussian uncertainty from the input data into the distribution of votes in an auxiliary parameter space. In this parameter space, the uncertainty is also described by Gaussian distributions. In turn, the votes are mapped to the actual parameter space as non-Gaussian distributions. Our results show that the resulting accumulators have smoother distributions of votes and are in accordance with those obtained using the conventional sampling process, thus safely replacing them with significant performance gains.

15.
The validity of applying Krylov subspace techniques in adaptive filtering and detection is investigated. A new verification of the equivalence of two well-known methods in the Krylov subspace, namely the multistage Wiener filter (MWF) and auxiliary-vector filtering (AVF), is given in this paper. The MWF and AVF are incorporated into two well-known detectors, namely the adaptive matched filter (AMF) and Kelly's generalized likelihood ratio test (GLRT), including their diagonally loaded versions, to form new detectors. Compared to the conventional AMF, GLRT, and their diagonally loaded versions, as well as the reduced-rank AMF and GLRT, the probabilities of detection (PDs) of the new detectors are improved, especially when the sample support is low. More importantly, the new detectors are robust to the rank selection of the clutter subspace compared to the reduced-rank AMF and GLRT. These new detectors all possess the asymptotic constant false alarm rate (CFAR) property.
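For reference, a minimal sketch of the conventional AMF test statistic |s^H R^-1 x|^2 / (s^H R^-1 s), with the covariance estimated from secondary (target-free) data and optional diagonal loading; the threshold setting and loading level are assumptions, and the MWF/AVF reduced-rank versions are not shown.

```python
# Hedged sketch of the adaptive matched filter (AMF) statistic.
import numpy as np

def amf_statistic(x, s, secondary, diagonal_load=0.0):
    """x: test snapshot, s: steering vector, secondary: (K, N) target-free snapshots."""
    N = s.shape[0]
    R = secondary.conj().T @ secondary / secondary.shape[0]   # sample covariance
    R = R + diagonal_load * np.eye(N)                         # optional diagonal loading
    Rinv_x = np.linalg.solve(R, x)
    Rinv_s = np.linalg.solve(R, s)
    return np.abs(s.conj() @ Rinv_x) ** 2 / np.real(s.conj() @ Rinv_s)

# Declare a detection when amf_statistic(...) exceeds a CFAR threshold.
```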

16.
This paper proposes a probabilistic framework for sensor-based grasping and describes how information about object attributes, such as position and orientation, can be updated using on-line sensor information gained during grasping. This allows learning about the target object even with a failed grasp, leading to replanning with improved performance at each successive attempt. Two grasp planning approaches utilizing the framework are proposed. Firstly, an approach maximizing the expected posterior stability of a grasp is suggested. Secondly, the approach is extended to use an entropy-based explorative procedure, which allows gathering more information when the current belief about the grasp stability does not allow robust grasping. In the framework, both object and grasp attributes as well as the stability of the grasp and on-line sensor information are represented by probabilistic models. Experiments show that the probabilistic treatment of grasping allows improving the probability of success in a series of grasping attempts. Moreover, experimental results on a real platform using the basic stability maximizing approach not only validate the proposed probabilistic framework but also show that under large initial uncertainties, explorative actions help to achieve successful grasps faster.

17.
A generalized information model (GIM) of an informative educational telecommunication environment (IETE) is considered. Such a GIM is a first-order autonomous dynamic system. The results of a qualitative analysis of the model are presented that underlie the formulation of key characteristics of the IETE. These characteristics can be the basis for classification of IETEs. The safety margins for the parameters of the GIM are determined. Translated from Kibernetika i Sistemnyi Analiz, No. 3, pp. 177–187, May–June 2006.

18.
In kernel-based nonlinear subspace (KNS) methods, the subspace dimensions have a strong influence on the performance of the subspace classifier. In order to obtain high classification accuracy, a large dimension is generally required. However, if the chosen subspace dimension is too large, performance suffers because the resulting subspaces overlap, and if it is too small, the classification error increases because of the poor resulting approximation. The most common approach is ad hoc: it selects the dimensions based on the so-called cumulative proportion computed from the kernel matrix for each class. We propose a new method for systematically and efficiently selecting optimal or near-optimal subspace dimensions for KNS classifiers using a search strategy and a heuristic function termed the overlapping criterion. The rationale for this function is motivated in the body of the paper. The task of selecting optimal subspace dimensions is thus reduced to finding the best ones in a given problem-domain solution space using this criterion as a heuristic function, so the search space can be pruned to find the best solution very efficiently. Our experimental results demonstrate that the proposed mechanism selects the dimensions efficiently without sacrificing classification accuracy.
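To make the baseline concrete, here is a sketch of the cumulative-proportion rule mentioned in the abstract: the class subspace dimension is the smallest number of kernel principal components whose eigenvalues account for a chosen fraction of the total. The RBF kernel, its width, and the 0.9 threshold are assumptions; the paper's overlapping-criterion search is not shown.

```python
# Hedged sketch of the cumulative-proportion baseline for choosing a KNS dimension.
import numpy as np

def cumulative_proportion_dim(X_class, gamma=0.1, proportion=0.9):
    """X_class: (n, d) samples of one class. Returns a subspace dimension."""
    sq = np.sum((X_class[:, None, :] - X_class[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)                         # RBF kernel matrix (assumption)
    eigvals = np.linalg.eigvalsh(K)[::-1]           # descending eigenvalues
    eigvals = np.clip(eigvals, 0.0, None)
    cum = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cum, proportion) + 1)
```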

19.
Performing data mining tasks on streaming data is considered a challenging research direction, due to the continuous evolution of the data. In this work, we focus on the problem of clustering streaming time series based on the sliding window paradigm. More specifically, we use the concept of subspace α-clusters. A subspace α-cluster consists of a set of streams whose value differences are less than α in a consecutive number of time instances (dimensions). The clusters can be continuously and incrementally updated as the streaming time series evolve with time. The proposed technique is based on a careful examination of pair-wise stream similarities for a subset of dimensions and is then generalized to more streams per cluster. Additionally, we extend our technique to find maximal pClusters in consecutive dimensions, as used in previously proposed clustering methods. Performance evaluation results, based on real-life and synthetic data sets, show that the proposed method is more efficient than existing techniques. Moreover, it is shown that the proposed pruning criteria are very important for search space reduction, and that incremental cluster monitoring is more computationally efficient than re-clustering.
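A toy sketch of the pairwise test suggested by the description: two streams go together if their values differ by less than α over at least a given number of consecutive time instances inside the current sliding window. The exact cluster definition in the paper may differ, and the minimum run length is an assumption.

```python
# Hedged sketch of a pairwise alpha-closeness test over a sliding window.
import numpy as np

def alpha_pair(window_x, window_y, alpha, min_len):
    """window_x, window_y: 1-D arrays of equal length (current sliding window)."""
    close = np.abs(np.asarray(window_x) - np.asarray(window_y)) < alpha
    run = best = 0
    for c in close:
        run = run + 1 if c else 0        # length of the current run of close values
        best = max(best, run)
    return best >= min_len
```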

20.
A fast subspace spectral peak search method
The orthogonality between the signal subspace and the noise subspace enables super-resolution direction-of-arrival (DOA) estimation. In practice, however, the spectral peak search is computationally very expensive, which degrades the real-time performance of the system. In the new algorithm, a threshold is first set; a coarse (large-step) search then locates the neighborhoods of the peaks that exceed the threshold, and a fine (small-step) search within each neighborhood finds the exact peaks. The principle and steps of the new algorithm are described and its performance is analyzed. Computer simulations show that, at the same resolution, the new algorithm reduces the computational load by 40%-50% compared with the conventional spectral peak search.
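A minimal sketch of the coarse-to-fine idea on a MUSIC pseudo-spectrum: a large-step scan finds angles whose spectrum exceeds a threshold, and a small-step scan around each of them refines the peak. The uniform-linear-array steering model, step sizes, and threshold are assumptions, not the paper's exact procedure.

```python
# Hedged sketch: threshold-gated coarse search followed by a fine search on a
# MUSIC pseudo-spectrum for a uniform linear array.
import numpy as np

def music_spectrum(theta_deg, En, d=0.5):
    """En: (N, N-K) noise-subspace basis; d: element spacing in wavelengths."""
    n = np.arange(En.shape[0])
    a = np.exp(-2j * np.pi * d * n[:, None] * np.sin(np.deg2rad(theta_deg))[None, :])
    proj = En.conj().T @ a                       # projection onto the noise subspace
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

def coarse_to_fine_peaks(En, threshold, coarse=2.0, fine=0.05):
    coarse_grid = np.arange(-90.0, 90.0, coarse)
    P = music_spectrum(coarse_grid, En)
    peaks = []
    for theta in coarse_grid[P > threshold]:     # neighborhoods above the threshold
        fine_grid = np.arange(theta - coarse, theta + coarse, fine)
        Pf = music_spectrum(fine_grid, En)
        # Adjacent coarse points may map to the same peak; deduplication is omitted.
        peaks.append(fine_grid[np.argmax(Pf)])
    return peaks
```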

