Similar Documents
20 similar documents found (search time: 156 ms)
1.
Analysis and Simulation of an Improved Neural-Network-Based HSIC Algorithm
To improve control-system stability and control performance, an improved algorithm for the characteristic model of human-simulated intelligent control (HSIC) is proposed on the basis of an analysis of the e-ė phase plane. The system's dynamic process is divided into a number of operating modes, which are indirectly tied to the system's performance indices. The control structure combines open- and closed-loop action (open-loop-dominated closed-loop control), and the control modes are simple. In addition, a neural-network inverse model is combined with HSIC to give a new intelligent controller, a neural-network composite HSIC controller: the parameter self-learning and adaptive properties of the neural network reduce the inverse model's dependence on an exact mathematical model of the plant. Simulation results confirm the effectiveness of the design.
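A minimal sketch of this phase-plane mode logic, with illustrative gains `kp`, `kd` and a held control value `u_hold` (all hypothetical; the paper's improved algorithm and its exact mode partition are not reproduced here):

```python
def hsic_control(e, de, u_hold, kp=2.0, kd=0.5):
    """Human-simulated intelligent control (HSIC) mode switch on the
    phase plane (e, de). Gains are illustrative, not the paper's.
    - e*de > 0: error is growing -> apply proportional-derivative action
    - e*de < 0: error is shrinking on its own -> hold the last control
    - otherwise (extremum or zero error): refresh the held value
    """
    if e * de > 0:        # error diverging: closed-loop correction
        return u_hold + kp * e + kd * de
    elif e * de < 0:      # error converging: open-loop hold
        return u_hold
    else:                 # e == 0 or de == 0: re-anchor the hold
        return u_hold + kp * e
```

The open-loop hold in the converging branch is what makes the scheme "open-loop-dominated": the controller intervenes only while the error is growing.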

2.
On the basis of an analysis of the e-ė phase plane, a new algorithm for the characteristic model of human-simulated intelligent control (HSIC) is proposed. The system's dynamic process is divided into a number of operating modes, establishing an indirect link between the system's response performance indices and the partitions of the dynamic process. The control structure combines open- and closed-loop action, i.e. open-loop-dominated closed-loop control, and the control modes are simple. In addition, fuzzy control is combined with HSIC to give a new intelligent controller, the fuzzy HSIC controller. The universes of the fuzzy controller's input and output linguistic variables correspond directly to the phase-plane partitions of the new HSIC characteristic-model algorithm, so the fuzzy rule table can be built directly from the algorithm's logical control rules. Simulation results show that the new algorithm makes fuzzy controllers easy to construct, and that the resulting fuzzy controller has good robustness and disturbance rejection.

3.
A Feature Selection Method for Text Classification Based on Independence Theory
The degree to which a feature is independent of each document class in a corpus reflects how representative the feature is, and feature selection for text classification is the process of selecting highly representative features that improve classification performance. Based on this principle, two new feature selection methods for text classification, DHChi2 and EIBA, are proposed, and the two are then combined in a reasonable way. Experimental results show that applying independence theory to feature selection for text classification helps improve classification performance.
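DHChi2 and EIBA themselves are not publicly specified; as a baseline, the classic chi-square statistic they build on can be computed from a term-class contingency table:

```python
def chi2_term_class(A, B, C, D):
    """Classic chi-square relevance of a term t to a class c.
    A: docs in c containing t      B: docs outside c containing t
    C: docs in c without t         D: docs outside c without t
    Returns N * (AD - BC)^2 / ((A+C)(B+D)(A+B)(C+D));
    higher means stronger term-class dependence."""
    N = A + B + C + D
    denom = (A + C) * (B + D) * (A + B) * (C + D)
    if denom == 0:
        return 0.0
    return N * (A * D - B * C) ** 2 / denom
```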

4.
The vector space model (VSM) is a widely used text representation. It is built on a feature-independence assumption: a text is treated as a collection of isolated, unrelated words, which discards important word-association information in the text. The bigram-association feature selection model extends the VSM by also taking the association information between adjacent words as text features, and can therefore express the content of a text more fully. Experiments show that this is a more effective method of text feature selection.
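The adjacent-word association features described above can be sketched as follows (the `_` joiner is an arbitrary choice):

```python
def unigram_bigram_features(tokens):
    """Represent a document by its unigrams plus adjacent-word bigrams,
    keeping the local word-association information that a plain
    bag-of-words (VSM) representation discards."""
    unigrams = list(tokens)
    bigrams = [f"{a}_{b}" for a, b in zip(tokens, tokens[1:])]
    return unigrams + bigrams
```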

5.
Research and Implementation of Distributed Workflow Based on Web Services
To accommodate the distribution, autonomy, and heterogeneity of modern enterprise information, building a fully distributed workflow management system is a pressing problem. Web Services, as a service-oriented architecture, offers true platform and language independence. On this basis, a fully distributed workflow model is proposed, and solutions to the main problems arising in its implementation are given.

6.
张逸石  陈传波 《计算机科学》2011,38(12):200-205
An optimal feature selection algorithm based on minimal joint mutual-information loss is proposed. The algorithm first uses an incremental search strategy to find an indifferent feature subset of the full feature set, then filters out redundant features under a minimal conditional mutual information principle, ensuring that the loss in joint mutual information at each step is minimal, and thereby obtains an approximately optimal feature subset. To address the efficiency bottleneck that existing conditional-mutual-information-based conditional independence tests face in high-dimensional feature spaces, a fast method for estimating conditional mutual information is given and used in the implementation. Classification experiments show that the proposed algorithm outperforms classic feature selection algorithms, and efficiency experiments show that the fast conditional-mutual-information estimator has a significant advantage in running time.
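The paper's fast estimator is not public, but the generic greedy criterion it accelerates, scoring features by conditional mutual information on discrete data, can be sketched with plug-in estimates (a CMIM-style rule; the function names here are illustrative, not the paper's):

```python
from collections import Counter
from math import log2

def mi(xs, ys):
    """Plug-in mutual information I(X;Y) for discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def cmi(xs, ys, zs):
    """Conditional mutual information I(X;Y|Z) = I(X,Z;Y) - I(Z;Y)."""
    return mi(list(zip(xs, zs)), ys) - mi(zs, ys)

def greedy_cmi_select(features, labels, k):
    """Greedily pick the feature with the largest worst-case conditional
    MI with the labels given any already-selected feature."""
    selected, rest = [], list(range(len(features)))
    while rest and len(selected) < k:
        def score(j):
            if not selected:
                return mi(features[j], labels)
            return min(cmi(features[j], labels, features[s]) for s in selected)
        best = max(rest, key=score)
        selected.append(best)
        rest.remove(best)
    return selected
```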

7.
In data mining and pattern recognition tasks, where unlabeled high-dimensional data are ubiquitous, unsupervised feature selection is an indispensable preprocessing step. Most existing feature selection methods, however, ignore the correlations among features and end up selecting highly redundant features with low discriminative power. This paper proposes joint uncorrelated regression and nonnegative spectral analysis for unsupervised feature selection, which selects uncorrelated, discriminative features while adaptively and dynamically determining the similarity relationships among data points, thereby obtaining more accurate structure and label information. Moreover, a generalized uncorrelatedness constraint in the model avoids trivial solutions, so the method combines the advantages of uncorrelated regression and nonnegative spectral clustering. An efficient algorithm for solving the model is also designed, and extensive experiments and analyses on multiple data sets verify the model's superiority.

8.
Feature selection methods fall mainly into filter and wrapper approaches. To combine the computational simplicity of filters with the high accuracy of wrappers, a new feature selection method is proposed that chains the two. It first uses a mutual-information-based filter to obtain a subset meeting a given accuracy requirement, then applies a wrapper to find the final optimized feature subset; because genetic algorithms have been applied successfully to combinatorial optimization, a genetic algorithm is used for the subset search. In numerical simulations and in bearing-fault feature selection, the new method maintains diagnostic accuracy while saving substantial selection time. The combined method searches feature subsets effectively and is both efficient and accurate.

9.
Common correlation coefficients reflect the degree of linear or nonlinear association between variables. Based on the biased estimate (HSIC0) of the Hilbert-Schmidt Independence Criterion (HSIC), a method is proposed for measuring the nonlinear association between the classes induced by the class labels. Experiments on six real data sets of different types, using linear, polynomial, RBF, and Sigmoid kernel functions, show the method to be feasible.
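Note that HSIC here is the Hilbert-Schmidt Independence Criterion, unrelated to the HSIC controller of items 1-2. The biased estimate HSIC0 = (n-1)^(-2) tr(KHLH) from Gretton et al. can be computed directly; the paper's class-to-class measure built on it is not reproduced here:

```python
def linear_kernel(xs):
    """Gram matrix of a 1-D sample under the linear kernel k(a,b) = ab."""
    return [[a * b for b in xs] for a in xs]

def hsic0(K, L):
    """Biased HSIC estimate: (n-1)^(-2) * tr(K H L H), where
    H = I - (1/n) 11^T centers the kernel matrices.
    Returns ~0 when the two variables are independent."""
    n = len(K)
    H = [[(1.0 if i == j else 0.0) - 1.0 / n for j in range(n)]
         for i in range(n)]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]
    M = matmul(matmul(K, H), matmul(L, H))
    return sum(M[i][i] for i in range(n)) / (n - 1) ** 2
```

With a constant second variable the centered kernel vanishes, so HSIC0 is zero, as expected for independence.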

10.
An Object-Oriented System Decomposition Method Based on Iteration over Weighted Directed Graphs
罗景  赵伟  秦涛  姜人宽  张路  孙家驌 《软件学报》2004,15(9):1292-1300
To address the problem of extracting components from existing systems, an object-oriented system decomposition method based on iterative analysis of a weighted directed graph is proposed. The object-oriented system is abstracted as a weighted directed graph; an iterative algorithm examines the independence of subgraphs at different granularities and selects the highly independent ones as candidate components. Experimental results show that the method decomposes systems effectively and is more accurate than existing decomposition methods.

11.
Dimensionality reduction is an important and challenging task in machine learning and data mining. Feature selection and feature extraction are two commonly used techniques for decreasing the dimensionality of the data and increasing the efficiency of learning algorithms. Specifically, feature selection realized in the absence of class labels, namely unsupervised feature selection, is challenging and interesting. In this paper, we propose a new unsupervised feature selection criterion developed from the viewpoint of subspace learning, which is treated as a matrix factorization problem. The advantages of this work are four-fold. First, dwelling on the technique of matrix factorization, a unified framework is established for feature selection, feature extraction and clustering. Second, an iterative update algorithm is provided via matrix factorization, which is an efficient technique to deal with high-dimensional data. Third, an effective method for feature selection with numeric data is put forward, instead of drawing support from the discretization process. Fourth, this new criterion provides a sound foundation for embedding kernel tricks into feature selection. In this regard, an algorithm based on kernel methods is also proposed. The algorithms are compared with four state-of-the-art feature selection methods using six publicly available datasets. Experimental results demonstrate that, in terms of clustering results, the two proposed algorithms outperform the others on almost all of the datasets used here.

12.
Multi-label learning deals with data associated with a set of labels simultaneously. Dimensionality reduction is an important but challenging task in multi-label learning. Feature selection is an efficient technique for dimensionality reduction, searching for an optimal feature subset that preserves the most relevant information. In this paper, we propose an effective feature evaluation criterion for multi-label feature selection, called the neighborhood relationship preserving score. This criterion is inspired by similarity preservation, which is widely used in single-label feature selection. It evaluates each feature subset by measuring its capability in preserving the neighborhood relationship among samples. Unlike similarity preservation, we focus on the order of sample similarities, which expresses the neighborhood relationship among samples better than pairwise similarity values alone. With this criterion, we also design one ranking algorithm and one greedy algorithm for the feature selection problem. The proposed algorithms are validated on six publicly available data sets from a machine learning repository. Experimental results demonstrate their superiority over state-of-the-art methods.
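The order-preserving idea can be illustrated with a Kendall-style concordance score; this is one plausible reading of the criterion, not the paper's exact formula:

```python
def neighbor_order_score(X, subset):
    """Score a feature subset by how well distances computed on the
    subset preserve each sample's neighbor *ordering* in the full
    space: the fraction of neighbor pairs whose relative order of
    squared distances agrees between the two spaces."""
    def dist(a, b, dims):
        return sum((a[d] - b[d]) ** 2 for d in dims)
    n, full = len(X), range(len(X[0]))
    concordant = total = 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for a in range(len(others)):
            for b in range(a + 1, len(others)):
                j, k = others[a], others[b]
                d_full = dist(X[i], X[j], full) - dist(X[i], X[k], full)
                d_sub = dist(X[i], X[j], subset) - dist(X[i], X[k], subset)
                total += 1
                if d_full * d_sub > 0:   # same neighbor ordering
                    concordant += 1
    return concordant / total if total else 1.0
```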

13.
When the F-score is used as a feature evaluation criterion, it does not account for the effect of a feature's measurement scale on its importance. A new criterion, D-score, is therefore proposed; it measures a feature's ability to discriminate between two or more classes and is unaffected by the feature's measurement scale. With D-score as the feature-importance criterion, three feature search strategies (sequential forward search, sequential forward floating search, and backward floating search) are combined with support vector machine classification accuracy as the evaluation of a subset's performance, yielding three hybrid feature selection methods that combine the respective strengths of filter and wrapper approaches. Experiments on nine standard data sets from the UCI machine learning repository, and comparisons with a hybrid method based on an improved F-score and support vector machines, show that D-score is an effective measure of feature importance, i.e. of discriminative ability; the hybrid methods based on it achieve effective feature selection, compressing dimensionality while leaving the data sets' discriminative power unchanged.
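D-score itself is not publicly specified; the classic F-score it modifies, for one feature over two classes, can be sketched as follows (binary case only):

```python
def mean(xs):
    return sum(xs) / len(xs)

def f_score(pos, neg):
    """Classic F-score of a single feature for a two-class problem:
    between-class scatter of the class means over the sum of the
    within-class sample variances. Larger is more discriminative.
    This is the baseline criterion the paper's D-score modifies;
    D-score itself is not reproduced here."""
    m, mp, mn = mean(pos + neg), mean(pos), mean(neg)
    def var(xs, c):
        return sum((x - c) ** 2 for x in xs) / (len(xs) - 1)
    return ((mp - m) ** 2 + (mn - m) ** 2) / (var(pos, mp) + var(neg, mn))
```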

14.
As a data preprocessing step, feature selection plays an important role in data mining, pattern recognition, and machine learning. It can reduce problem complexity and improve the prediction accuracy, robustness, and interpretability of learning algorithms. This paper surveys the feature selection framework, focusing on the two processes of subset generation and evaluation criteria; it classifies feature selection algorithms by how they are combined with learning algorithms and analyzes the strengths and weaknesses of each approach; finally, it discusses the open problems of existing algorithms and points out research challenges and directions.

15.
A genetic algorithm-based method for feature subset selection
As a commonly used technique in data preprocessing, feature selection selects a subset of informative attributes or variables to build models describing data. By removing redundant, irrelevant, or noisy features, feature selection can improve the predictive accuracy and the comprehensibility of the predictors or classifiers. Many feature selection algorithms with different selection criteria have been introduced by researchers. However, no single criterion has proved best for all applications. In this paper, we propose a framework based on a genetic algorithm (GA) for feature subset selection that combines various existing feature selection methods. The advantages of this approach include the ability to accommodate multiple feature selection criteria and to find small subsets of features that perform well for a particular inductive learning algorithm of interest to build the classifier. We conducted experiments using three data sets and three existing feature selection methods. The experimental results demonstrate that our approach is a robust and effective way to find subsets of features with higher classification accuracy and/or smaller size than each individual feature selection algorithm yields.
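A compact GA of the kind such a framework relies on, with a toy fitness standing in for the wrapper's classifier-accuracy criterion (all parameter values and names are illustrative):

```python
import random

def ga_select(n_features, fitness, pop=20, gens=30, pmut=0.1, seed=0):
    """Generic GA for feature-subset search: bitmask individuals,
    tournament selection, one-point crossover, bit-flip mutation,
    and elitism. `fitness` stands in for the wrapper criterion
    (e.g. cross-validated classifier accuracy)."""
    rng = random.Random(seed)
    population = [tuple(rng.randint(0, 1) for _ in range(n_features))
                  for _ in range(pop)]
    best = max(population, key=fitness)
    for _ in range(gens):
        nxt = [best]                                    # elitism
        while len(nxt) < pop:
            a, b = (max(rng.sample(population, 3), key=fitness)
                    for _ in range(2))                  # tournament x2
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]                   # one-point crossover
            child = tuple(bit ^ (rng.random() < pmut)   # bit-flip mutation
                          for bit in child)
            nxt.append(child)
        population = nxt
        best = max(population + [best], key=fitness)
    return best
```

A toy run: with a fitness that rewards two "informative" features and penalizes every extra one, the GA should keep the informative bits set.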

16.
黄琴    钱文彬    王映龙  吴兵龙 《智能系统学报》2019,14(5):929-938
In multi-label learning, feature selection is an effective means of improving classification performance. Because multi-label feature selection algorithms are computationally expensive and ignore the fact that acquiring data in real applications often incurs a cost, this paper proposes a multi-label feature selection algorithm for cost-sensitive data. The algorithm uses information entropy to analyze the correlation between features and labels, redefines a feature-importance criterion based on test cost, and, using the standard deviations of the normally distributed feature importances and feature costs, gives a reasonable threshold selection method; the threshold then removes redundant and irrelevant features, yielding a feature subset with low total cost. Comparative experiments and analyses on multi-label data demonstrate the method's effectiveness and feasibility.

17.
Unsupervised feature selection is fundamental in statistical pattern recognition, and has drawn persistent attention in the past several decades. Recently, much work has shown that feature selection can be formulated as nonlinear dimensionality reduction with discrete constraints. This line of research emphasizes utilizing the manifold learning techniques, where feature selection and learning can be studied based on the manifold assumption in data distribution. Many existing feature selection methods such as the Laplacian score, SPEC (spectrum decomposition of the graph Laplacian), the TR (trace ratio) criterion, MSFS (multi-cluster feature selection) and EVSC (eigenvalue sensitive criterion) apply the basic properties of the graph Laplacian, and select the optimal feature subsets which best preserve the manifold structure defined on the graph Laplacian. In this paper, we propose a new feature selection perspective from locally linear embedding (LLE), which is another popular manifold learning method. The main difficulty of using LLE for feature selection is that its optimization involves quadratic programming and eigenvalue decomposition, both of which are continuous procedures and different from discrete feature selection. We prove that the LLE objective can be decomposed with respect to data dimensionalities in the subset selection problem, which also facilitates constructing better coordinates from data using the principal component analysis (PCA) technique. Based on these results, we propose a novel unsupervised feature selection algorithm, called locally linear selection (LLS), to select a feature subset representing the underlying data manifold. The local relationship among samples is computed from the LLE formulation, which is then used to estimate the contribution of each individual feature to the underlying manifold structure. These contributions, represented as LLS scores, are ranked and selected as the candidate solution to feature selection.
We further develop a locally linear rotation-selection (LLRS) algorithm which extends LLS to identify the optimal coordinate subset from a new space. Experimental results on real-world datasets show that our method can be more effective than Laplacian-eigenmap-based feature selection methods.

18.
Reducing the dimensionality of the data has been a challenging task in data mining and machine learning applications. In these applications, the existence of irrelevant and redundant features negatively affects the efficiency and effectiveness of different learning algorithms. Feature selection is one of the dimension reduction techniques, which has been used to allow a better understanding of data and improve the performance of other learning tasks. Although the selection of relevant features has been extensively studied in supervised learning, feature selection in the absence of class labels is still a challenging task. This paper proposes a novel method for unsupervised feature selection, which efficiently selects features in a greedy manner. The paper first defines an effective criterion for unsupervised feature selection that measures the reconstruction error of the data matrix based on the selected subset of features. The paper then presents a novel algorithm for greedily minimizing the reconstruction error based on the features selected so far. The greedy algorithm is based on an efficient recursive formula for calculating the reconstruction error. Experiments on real data sets demonstrate the effectiveness of the proposed algorithm in comparison with the state-of-the-art methods for unsupervised feature selection.
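The greedy criterion can be sketched with Gram-Schmidt deflation in place of the paper's recursive formula (the function name and tolerances are illustrative):

```python
def greedy_reconstruction_select(X, k):
    """Greedy unsupervised selection: repeatedly add the feature whose
    column explains the most remaining energy when reconstructing the
    data matrix X (rows = samples) from the selected columns. Uses
    Gram-Schmidt deflation of the residual columns; a sketch of the
    criterion, not the paper's efficient recursive formula."""
    n, d = len(X), len(X[0])
    R = [[X[i][j] for i in range(n)] for j in range(d)]  # residual columns
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    selected = []
    for _ in range(k):
        def gain(j):
            nrm = dot(R[j], R[j])
            if nrm < 1e-12:
                return 0.0
            # energy of all residual columns captured by direction R[j]
            return sum(dot(R[j], R[m]) ** 2 for m in range(d)) / nrm
        j = max((c for c in range(d) if c not in selected), key=gain)
        selected.append(j)
        nrm = dot(R[j], R[j]) ** 0.5
        if nrm > 1e-12:
            q = [v / nrm for v in R[j]]
            for m in range(d):   # deflate: remove q-direction from residuals
                c = dot(q, R[m])
                R[m] = [R[m][i] - c * q[i] for i in range(n)]
    return selected
```

On a matrix whose third column is the sum of the first two, the sum column reconstructs the other two best and is picked first.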

19.
We present a methodology for learning a taxonomy from a set of text documents that each describes one concept. The taxonomy is obtained by clustering the concept definition documents with a hierarchical approach to the Self-Organizing Map. In this study, we compare three different feature extraction approaches with varying degree of language independence. The feature extraction schemes include fuzzy logic-based feature weighting and selection, statistical keyphrase extraction, and the traditional tf-idf weighting scheme. The experiments are conducted for English, Finnish, and Spanish. The results show that while the rule-based fuzzy logic systems have an advantage in automatic taxonomy learning, taxonomies can also be constructed with tolerable results using statistical methods without domain- or style-specific knowledge.

20.
Simultaneous feature selection and clustering using mixture models
Clustering is a common unsupervised learning technique used to discover group structure in a set of data. While there exist many algorithms for clustering, the important issue of feature selection, that is, what attributes of the data should be used by the clustering algorithms, is rarely touched upon. Feature selection for clustering is difficult because, unlike in supervised learning, there are no class labels for the data and, thus, no obvious criteria to guide the search. Another important problem in clustering is the determination of the number of clusters, which clearly impacts and is influenced by the feature selection issue. In this paper, we propose the concept of feature saliency and introduce an expectation-maximization (EM) algorithm to estimate it, in the context of mixture-based clustering. Due to the introduction of a minimum message length model selection criterion, the saliency of irrelevant features is driven toward zero, which corresponds to performing feature selection. The criterion and algorithm are then extended to simultaneously estimate the feature saliencies and the number of clusters.
