Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Supersaturated designs (SSDs) are widely studied because they can greatly reduce the number of experiments. However, analyzing data from SSDs is difficult because their run size is too small to estimate all the main effects. This paper introduces the contrast-orthogonality cluster and the anticontrast-orthogonality cluster to reflect the inner structure of SSDs, which helps experimenters assign factors to the columns of an SSD. A new strategy for screening active factors, named the contrast-orthogonality cluster analysis (COCA) method, is proposed. Simulation studies demonstrate that the method performs well compared with most existing methods. Furthermore, the COCA method has lower type II error rates and is easy to understand and implement.

2.
Image coding using principal component analysis (PCA), a type of image compression technique, projects image blocks onto a subspace that preserves most of the original information. However, blocks in an image exhibit inhomogeneous properties, such as smooth regions, texture, and edges, which complicate PCA image coding. This paper proposes a repartition clustering method that partitions the data into groups such that members of the same group are homogeneous while different groups are heterogeneous. PCA is then applied separately to each group. In the clustering method, a genetic algorithm acts as a framework consisting of three phases, including the proposed repartition clustering. Based on this mechanism, the proposed method effectively increases image quality and provides an enhanced visual effect.
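The per-group coding step above can be sketched in a few lines. This is a minimal illustration of plain block-PCA compression only (the grouping itself, via the genetic repartition clustering, is omitted); the function name and array shapes are illustrative, not the paper's API:

```python
import numpy as np

def pca_block_codec(blocks, k):
    """Compress flattened image blocks (n, d) by projecting onto the
    top-k principal components; returns the reconstruction and basis."""
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    # Eigen-decomposition of the covariance matrix of the blocks
    cov = centered.T @ centered / len(blocks)
    vals, vecs = np.linalg.eigh(cov)
    basis = vecs[:, np.argsort(vals)[::-1][:k]]   # top-k eigenvectors
    codes = centered @ basis                      # compressed coefficients
    recon = codes @ basis.T + mean                # decoded blocks
    return recon, basis
```

With k well below the block dimension, `codes` plus `basis` and `mean` form the compressed representation; each group of blocks would get its own basis.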

3.
In this paper a new method of mode separation is proposed. The method maps data points from the N-dimensional space onto a sequence so that the majority of points from each mode become successive elements of the sequence. The intervals of the sequence belonging to the respective modes of the p.d.f. are then determined from a function generated on this sequence. The nuclei of the modes formed by the elements of these intervals are then used to obtain separating surfaces between the modes, and thus to partition a data set with a multimodal probability density function into unimodal subsets.

4.
Clustering is one of the widely used knowledge discovery techniques for revealing structures in a dataset that can be extremely useful to the analyst. In iterative clustering algorithms, the procedure for choosing initial cluster centers is extremely important, as it directly affects the formation of the final clusters. Since clusters are separated groups in a feature space, it is desirable to select initial centers that are well separated. In this paper, we propose an algorithm to compute initial cluster centers for the k-means algorithm. The algorithm is applied to several datasets of various dimensions for illustration, and it is observed to perform well in obtaining initial cluster centers for k-means.
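As a hedged sketch of the well-separated-centers idea: the paper's exact selection rule is not reproduced here; the following is a standard maximin-style heuristic under that assumption, with an illustrative function name:

```python
import numpy as np

def separated_centers(X, k):
    """Pick k initial centers that are mutually well separated:
    start from the point closest to the data mean, then repeatedly
    add the point farthest from all centers chosen so far."""
    centers = [X[np.argmin(((X - X.mean(axis=0)) ** 2).sum(axis=1))]]
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)
```

The returned centers would then seed an ordinary k-means run.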

5.
Cluster analysis is a common class of methods in data mining. When mining numerical attributes, so-called outliers often appear after clustering. Some of these outliers, however, are not truly isolated: they may still belong to one of the established clusters. This paper proposes and investigates a clustering method based on similarity relations between attributes.

6.
A new subspace identification approach based on principal component analysis
Principal component analysis (PCA) has been widely used for monitoring complex industrial processes with multiple variables and for diagnosing process and sensor faults. The objective of this paper is to develop a new subspace identification algorithm that gives consistent model estimates under the errors-in-variables (EIV) situation. We propose a new subspace identification approach using PCA, which naturally falls into the EIV formulation: it resembles total least squares and allows for errors in both process input and output. PCA is used to determine the system observability subspace, the A, B, C, and D matrices, and the system order for an EIV formulation. Standard PCA is modified with instrumental variables in order to achieve consistent estimates of the system matrices. The proposed subspace identification method is demonstrated on a simulated process and a real industrial process for model identification and order determination. For comparison, the MOESP and N4SID algorithms are used as benchmarks to demonstrate the advantages of the proposed PCA-based subspace model identification (SMI) algorithm.

7.
A novel initialization method for fuzzy C-means clustering
刘笛, 朱学峰, 苏彩红. 《计算机仿真》 (Computer Simulation), 2004, 21(11): 148-151
Fuzzy C-means (FCM) clustering is a widely used dynamic clustering method whose results are often sensitive to the initial cluster centers. Inspired by the mechanism by which the adaptive immune system forms immune memory of invading antigens, a new method for generating initial cluster centers is proposed. In the algorithm, the data to be analyzed are treated as invading antigens, and the memory cells they induce serve as the initial centers for cluster analysis. Clonal selection is used to generate the population of antigen memory cells, while immune network theory suppresses the rapid growth of that population. Experimental results show that the immune memory mechanism is feasible for selecting FCM initial centers: it not only speeds up the convergence of the FCM algorithm, but also allows the number of clusters to be determined automatically by adjusting a threshold.
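For reference, a minimal numpy sketch of the standard FCM updates that such initial centers would feed into. The immune-memory initialization itself is not reproduced here; the `fcm` function and its parameters are illustrative, not the paper's implementation:

```python
import numpy as np

def fcm(X, centers, m=2.0, iters=50):
    """Standard fuzzy C-means updates from given initial centers
    (the paper derives those centers from immune memory cells;
    here they are simply passed in)."""
    for _ in range(iters):
        # Distances to each center, floored to avoid division by zero
        d = np.maximum(np.linalg.norm(X[:, None] - centers[None], axis=2), 1e-12)
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)             # fuzzy memberships
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]  # weighted center update
    return u, centers
```

Better initial `centers` mean fewer iterations to converge, which is exactly what the proposed initialization targets.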

8.
This paper describes a new methodology to detect small anomalies in high-resolution hyperspectral imagery, which involves successively: (1) a multivariate statistical analysis (principal component analysis, PCA) of all spectral bands; (2) a geostatistical filtering of noise and regional background in the first principal components using factorial kriging; and finally (3) the computation of a local indicator of spatial autocorrelation to detect local clusters of high or low reflectance values and anomalies. The approach is illustrated using 1 m resolution data collected in and near northeastern Yellowstone National Park. Ground validation data for tarps and for disturbed soils on mine tailings demonstrate the ability of the filtering procedure to reduce the proportion of false alarms (i.e., pixels wrongly classified as target), and its robustness under low signal-to-noise ratios. In almost all scenarios, the proposed approach outperforms traditional anomaly detectors (i.e., the RX detector, which computes the Mahalanobis distance between the vector of spectral values and the vector of global means), and fewer false alarms are obtained when using a novel statistic S2 (average absolute deviation of p-values from 0.5 across all spectral bands) to summarize information across bands. Image degradation through addition of noise or reduction of spectral resolution tends to blur the detection of anomalies, increasing false alarms, in particular for the identification of the least pure pixels. Results from a mine tailings site demonstrate that the approach performs reasonably well for a highly complex landscape with multiple targets of various sizes and shapes. By leveraging both spectral and spatial information, the technique requires little or no user input, and hence can be readily automated.
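The RX baseline mentioned above is simple to state concretely. A minimal global-RX sketch, assuming a `(height, width, bands)` data cube; the function name is illustrative:

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector: Mahalanobis distance of each pixel's
    spectrum from the scene-wide mean (the baseline the paper compares
    against)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    # Per-pixel Mahalanobis distance: diff_i^T cov_inv diff_i
    scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return scores.reshape(h, w)
```

Pixels with high scores are flagged as anomalies; the paper's contribution is the spatial filtering applied before this kind of detection.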

9.
This paper presents a unified theory of a class of learning neural nets for principal component analysis (PCA) and minor component analysis (MCA). First, some fundamental properties are addressed which all neural nets in the class have in common. Second, a subclass called the generalized asymmetric learning algorithm is investigated, and the kind of asymmetric structure which is required in general to obtain the individual eigenvectors of the correlation matrix of a data sequence is clarified. Third, focusing on a single-neuron model, a systematic way of deriving both PCA and MCA learning algorithms is shown, through which a relation between the normalization in PCA algorithms and that in MCA algorithms is revealed. This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19-21, 1998.

10.
11.
The EM algorithm is used to track moving objects as clusters of pixels significantly different from the corresponding pixels in a reference image. The underlying cluster model is Gaussian in image space, but not in grey-level difference distribution. The generative model is used to derive criteria for the elimination and merging of clusters, while simple heuristics are used for the initialisation and splitting of clusters. The system is competitive with other tracking algorithms based on image differencing.
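A minimal sketch of the E- and M-steps for such spatial pixel clusters, under simplifying assumptions (two components, equal weights, unit variance) that a real tracker would relax; the function and its signature are illustrative:

```python
import numpy as np

def em_pixel_clusters(P, mu, iters=30):
    """EM for a two-component spherical Gaussian mixture over pixel
    coordinates P (n, 2) -- a toy version of modelling moving-object
    pixels as spatial Gaussian clusters."""
    for _ in range(iters):
        # E-step: responsibilities from squared distances to each mean
        d = ((P[:, None] - mu[None]) ** 2).sum(axis=2)
        r = np.exp(-0.5 * d)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: means are responsibility-weighted pixel averages
        mu = (r.T @ P) / r.sum(axis=0)[:, None]
    return mu, r
```

The elimination and merging criteria described in the abstract would act on the fitted components (e.g., dropping clusters with negligible total responsibility).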

12.
Human brain MR image segmentation based on an improved FCM
Traditional fuzzy C-means, a classical fuzzy clustering method, is widely used in image segmentation. However, the FCM algorithm selects its initial values randomly and depends strongly on that choice, so its result easily falls into a local minimum; moreover, FCM ignores the spatial information of the image and is therefore highly sensitive to noise. An improved FCM method is proposed: a new scheme determines the initial values; spatial information is then taken into account by using the properties of Gibbs random fields to introduce prior neighborhood constraints, recomputing the fuzzy membership values of pixels; and the distance matrix is further adjusted. Experiments show that the improved method yields good segmentation results and strong robustness to noise.

13.
Clustering is an important concept formation process within AI. It detects sets of objects with similar characteristics, and these aggregated objects represent interesting concepts and categories. As clustering matures, post-clustering activities that reason about clusters deserve greater attention. Numerical quantitative information about clusters is not as intuitive for human analysis as qualitative information, and there is a great demand for intelligent qualitative cluster reasoning techniques in data-rich environments. This article introduces a qualitative framework for reasoning about clusters. Experimental results demonstrate that the proposed qualitative cluster reasoning reveals interesting cluster structures and rich cluster relations.

14.
李雄杰, 周东华. 《计算机科学》 (Computer Science), 2016, 43(Z11): 320-323
The affine projection algorithm (APA) reuses data and can thus speed up convergence. To address the slow convergence of existing blind source separation methods, a nonlinear APA-PCA criterion for blind source separation is proposed, building on nonlinear principal component analysis (PCA) for blind source separation and combining it with the affine projection algorithm; new APA-Kalman, APA-RLS, and APA-LMS algorithms for blind source separation are then designed. In these algorithms, the pre-whitened observation vectors are reused and vector data are converted into matrix form, which accelerates the convergence of blind source separation. Simulation results show that the nonlinear APA-PCA criterion is effective.

15.
First, a coal mine safety evaluation index system is established covering five aspects: human behavior, safety management, equipment and facilities, natural conditions, and safety technology and supervision mechanisms. A coal mine safety evaluation model is then built using principal component analysis and cluster analysis: principal component analysis selects composite indicators, reducing the number of evaluation indices, while cluster analysis classifies and evaluates the safety status of each coal mining enterprise and analyzes their similarities and differences. Finally, the application of the model is illustrated by evaluating the safety status of 40 coal mining enterprises in a province. The results show that the evaluation produced by this model reflects a mine's safety status simply and intuitively.

16.
Vertices Principal Component Analysis (V-PCA) and Centers Principal Component Analysis (C-PCA) generalize Principal Component Analysis (PCA) to summarize interval-valued data, while Neural Network Principal Component Analysis (NN-PCA) extends PCA to fuzzy interval data. The first two methods can also be used to analyze fuzzy interval data, but they then ignore the spread information. In the literature, V-PCA is usually considered computationally cumbersome because it requires transforming the interval-valued data matrix into a single-valued data matrix whose number of rows depends exponentially on the number of variables and linearly on the number of observation units. However, it has been shown that this problem can be overcome by considering the cross-products matrix, which is easy to compute. A review of C-PCA, V-PCA (including the computational short-cut), and NN-PCA is provided. Furthermore, the three methods are compared by means of a simulation study and an application to an empirical data set. In the simulation study, fuzzy interval data are generated according to various models, and the conditions under which each method performs best are reported.

17.
Fault detection and diagnosis is a critical approach to ensuring safe and efficient operation of manufacturing and chemical processing plants. Although multivariate statistical process monitoring has received considerable attention, investigation into diagnosing the source or cause of a detected process fault has been relatively limited. This is partially due to the difficulty of isolating multiple variables that jointly contribute to the occurrence of a fault through conventional contribution analysis. In this work, a method based on probabilistic principal component analysis is proposed for fault isolation. Furthermore, a branch-and-bound method is developed to handle the combinatorial nature of the problem of finding the contributing variables most likely to be responsible for the fault. The efficiency of the proposed method is shown through benchmark examples, such as the Tennessee Eastman process, and randomly generated cases.

18.
A new clustering algorithm based on the ant colony algorithm is proposed. For N samples to be classified into p classes, N+1 layers of cities are constructed; every layer except the first contains p cities. Each time an ant travels from the first layer to the last, it completes a classification of all samples. The choice of which city to visit is governed jointly by path pheromone and sample-class pheromone: the path pheromone is updated after each move between layers, and both the path pheromone and the sample-class pheromone are updated after each complete tour. Analysis of worked examples shows that the algorithm obtains satisfactory results.

19.
This paper addresses the problem of face recognition using independent component analysis (ICA). More specifically, we address two issues of face representation using ICA. First, as the independent components (ICs) are independent but not orthogonal, images outside the training set cannot be projected onto these basis functions directly. In this paper, we propose a least-squares solution method using the Householder transformation to find the new representation. Second, we demonstrate that not all ICs are useful for recognition, and accordingly design and develop an IC selection algorithm to find a subset of ICs for recognition. Three publicly available databases, namely those of the MIT AI Laboratory, Yale University, and the Olivetti Research Laboratory, are selected to evaluate the performance, and the results are encouraging.
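The first issue — projecting a new image onto non-orthogonal ICs — reduces to a least-squares problem. A sketch using numpy's built-in solver in place of an explicit Householder/QR implementation (the function name is illustrative):

```python
import numpy as np

def ica_coefficients(basis, image):
    """Represent a new image in a non-orthogonal IC basis by solving
    the least-squares problem min_c || basis @ c - image ||_2.
    (The paper solves this via Householder transformations, i.e. a QR
    factorization; np.linalg.lstsq is used here as an equivalent.)"""
    c, *_ = np.linalg.lstsq(basis, image, rcond=None)
    return c
```

Because the ICs are not orthogonal, simple inner products `basis.T @ image` would give the wrong coefficients; the least-squares solve is the correct projection.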

20.
The objective of this paper is to present an efficient hardware architecture for the generalized Hebbian algorithm (GHA). In the architecture, the principal component computation and the weight vector updating of the GHA operate in parallel, so that the throughput of the circuit is significantly enhanced. In addition, the weight vector updating process is separated into a number of stages to lower area costs and increase computational speed. To show the effectiveness of the circuit, a texture classification system based on the proposed architecture is designed and embedded in a system-on-programmable-chip (SOPC) platform for physical performance measurement. Experimental results show that the proposed architecture is an efficient design attaining both high-speed performance and low area costs.
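For context, one GHA (Sanger's rule) update — the computation the architecture parallelizes in hardware — can be written as follows; this single-step software function is illustrative, not the hardware design:

```python
import numpy as np

def gha_step(W, x, lr):
    """One generalized Hebbian algorithm (Sanger's rule) update:
    W += lr * (y x^T - lower_triangular(y y^T) W), with y = W x.
    The rows of W converge to the leading principal components."""
    y = W @ x
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```

Each row's update depends only on rows above it (the lower-triangular term), which is what makes staged, pipelined hardware updating natural.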


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号