Similar Literature
Found 20 similar documents (search time: 31 ms)
1.
In the literature, there exist statistical tests to compare supervised learning algorithms on multiple data sets in terms of accuracy, but they do not always generate an ordering. We propose Multi2Test, a generalization of our previous work, for ordering multiple learning algorithms on multiple data sets from "best" to "worst", where our goodness measure is composed of a prior cost term in addition to generalization error. Our simulations show that Multi2Test generates orderings using pairwise tests on error and different types of cost based on the time and space complexity of the learning algorithms.
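As a toy illustration of turning pairwise comparisons into an ordering (not the paper's Multi2Test procedure, which uses proper statistical tests and cost terms), one could count per-dataset pairwise wins and sort by win count; the function `pairwise_order` and its inputs are our own illustrative choices:

```python
from itertools import combinations

def pairwise_order(errors):
    """Order algorithms from 'best' to 'worst' by counting pairwise wins
    across data sets. `errors` maps algorithm name -> per-dataset error
    rates; a real system would replace the raw win counts with a
    statistical test on the paired errors."""
    wins = {name: 0 for name in errors}
    for a, b in combinations(errors, 2):
        a_wins = sum(ea < eb for ea, eb in zip(errors[a], errors[b]))
        b_wins = sum(eb < ea for ea, eb in zip(errors[a], errors[b]))
        if a_wins > b_wins:
            wins[a] += 1
        elif b_wins > a_wins:
            wins[b] += 1
    # Sort algorithms by number of pairwise contests won (descending).
    return sorted(wins, key=lambda name: -wins[name])
```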

2.
In this paper, we propose a method to predict the presence or absence of correct classification results in classification problems with many classes, where the output of the classifier is provided in the form of a ranking list. This problem differs from the "traditional" classification tasks encountered in pattern recognition. While the original problem of forming a ranking of the most likely classes can be solved by running several classification methods, the analysis presented here goes one step further. The main objective is to analyse (classify) the provided rankings (an ordered list of rankings of a fixed length) and decide whether the "true" class is present on this list. In this regard, a two-class classification problem is formulated where the underlying feature space is built through a characterization of the ranking lists. Experimental results obtained for synthetic data as well as real-world face identification data are presented.
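The abstract does not specify how the ranking lists are characterized; one plausible sketch is to summarize each score list with simple meta-features (top score, top-two margin, score entropy) that a downstream two-class classifier could consume. The feature choices below are illustrative assumptions, not the paper's:

```python
import math

def ranking_features(scores):
    """Summarize a descending score list from a multi-class classifier
    with meta-features for the 'is the true class on this list?' task.
    Returns [top score, top-two margin, entropy of normalized scores]."""
    top = scores[0]
    margin = scores[0] - scores[1]
    total = sum(scores)
    probs = [s / total for s in scores]
    # High entropy = flat, uncertain ranking; low entropy = confident one.
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return [top, margin, entropy]
```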

3.
In multi-label learning, handling high-dimensional features remains a research challenge, and feature extraction algorithms can effectively mitigate the degradation in classification performance caused by high-dimensional data. However, existing multi-label feature extraction algorithms rarely make full use of feature information or fully extract both the independent and the fused "feature-label" information. This paper therefore proposes a multi-label feature extraction method based on a feature-label-dependent autoencoder. A kernel extreme learning machine autoencoder fuses the original label space with the original feature space and produces a reconstructed new feature space. On one hand, the Hilbert-Schmidt norm is maximized to fully exploit label information; on the other, principal component analysis is used to reduce the information loss of the extraction process; combining the two, "feature-feature" and "feature-label" information are extracted separately. Comparative experiments on several high-dimensional multi-label Yahoo data sets show that the algorithm outperforms five current mainstream multi-label feature extraction methods, validating its effectiveness.

4.
This paper introduces the problem of searching for social network accounts, e.g., Twitter accounts, with the rich information available on the Web, e.g., people's names, attributes, and relationships to other people. For this purpose, we need to map Twitter accounts to Web entities. However, existing solutions built upon naive textual matching inevitably suffer low precision due to false positives (e.g., fake impersonator accounts) and false negatives (e.g., accounts using nicknames). To overcome these limitations, we leverage "relational" evidence extracted from the Web corpus. We consider two types of evidence resources: first, web-scale entity relationship graphs, extracted from name co-occurrences crawled from the Web, where the co-occurrence relationship can be interpreted as an "implicit" counterpart of Twitter follower relationships; second, web-scale relational repositories, such as Freebase, with complementary strength. Using both textual and relational features obtained from these resources, we learn a ranking function aggregating these features for the accurate ordering of candidate matches. Another key contribution of this paper is to formulate confidence scoring as a problem separate from relevance ranking. A baseline approach is to use the relevance of the top match itself as the confidence score. In contrast, we train a separate classifier, using not only the top relevance score but also various statistical features extracted from the relevance scores of all candidates, and empirically validate that our approach outperforms the baseline. We evaluate the proposed system using real-life internet-scale entity-relationship and social network graphs.
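A minimal sketch of combining textual and relational evidence for one candidate match, under our own assumptions (the paper learns the aggregation; here the weights, the Jaccard-overlap relational signal, and the function `match_score` are all illustrative):

```python
def match_score(account_names, entity_cooccur_names, name_sim,
                w_text=0.6, w_rel=0.4):
    """Toy feature aggregation for matching a Twitter account to a Web
    entity: a textual name-similarity score plus a 'relational' signal,
    here the Jaccard overlap between names connected to the account
    (e.g., followers) and names co-occurring with the entity on the Web.
    The weights would normally be learned, not fixed."""
    inter = len(account_names & entity_cooccur_names)
    union = len(account_names | entity_cooccur_names)
    relational = inter / union if union else 0.0
    return w_text * name_sim + w_rel * relational
```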

5.
Many financial organizations such as banks and retailers rely heavily on computational credit risk analysis (CRA) tools due to recent financial crises and stricter regulations. This strategy enables them to manage their financial and operational risks within the pool of financial institutions. Machine learning algorithms, especially binary classifiers, are very popular for that purpose. In real-life applications such as CRA, feature selection algorithms are used to decrease data acquisition cost and to increase the interpretability of the decision process. Using feature selection methods directly on CRA data sets may not help due to categorical variables such as marital status. Such features are usually converted into binary features using 1-of-k encoding, and eliminating only a subset of features from such a group does not help in terms of data collection cost or interpretability. In this study, we propose to use the probit classifier with a proper prior structure and multiple kernel learning with a proper kernel construction procedure to perform group-wise feature selection (i.e., eliminating a group of features together if they are not helpful). Experiments on two standard CRA data sets show the validity and effectiveness of the proposed binary classification algorithm variants.
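The group structure that group-wise selection operates on can be recovered mechanically from 1-of-k encoded columns. A small sketch, assuming a `var=value` column-naming convention (our assumption, not the paper's notation):

```python
from collections import defaultdict

def group_features(columns):
    """Map 1-of-k encoded binary columns back to their source
    categorical variable by splitting on 'var=value' names, so that a
    group-wise selector can keep or drop each variable as a whole."""
    groups = defaultdict(list)
    for idx, col in enumerate(columns):
        var = col.split("=", 1)[0]  # 'marital=single' -> 'marital'
        groups[var].append(idx)
    return dict(groups)
```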

6.
In classification problems, a large number of features are typically used to describe the problem's instances. However, not all of these features are useful for classification. Feature selection is usually an important pre-processing step to overcome the "curse of dimensionality". Feature selection aims to choose a small number of features that achieve similar or better classification performance than using all features. This paper presents a particle swarm optimization (PSO)-based multi-objective feature selection approach to evolving a set of non-dominated feature subsets which achieve high classification performance. The proposed algorithm uses local search techniques to improve the Pareto front and is compared with a pure multi-objective PSO algorithm, three well-known evolutionary multi-objective algorithms and a current state-of-the-art PSO-based multi-objective feature selection approach. Their performance is examined on 12 benchmark data sets. The experimental results show that in most cases the proposed multi-objective algorithm generates better Pareto fronts than all other methods.
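The core of any such multi-objective formulation is the non-domination filter over candidate subsets. A minimal sketch with the two usual objectives, both minimized (classification error, subset size); the representation as `(error, n_features)` tuples is our simplification:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Return the non-dominated (error, n_features) solutions,
    preserving input order."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions)]
```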

7.
Ranking items is an essential problem in recommendation systems. Since comparing two items is the simplest type of query for measuring the relevance of items, the problem of aggregating pairwise comparisons to obtain a global ranking has been widely studied. Furthermore, ranking with pairwise comparisons has recently received a lot of attention in crowdsourcing systems, where binary comparative queries can be used effectively to make assessments faster for precise rankings. In order to learn a ranking based on a training set of queries and their labels obtained from annotators, machine learning algorithms are generally used to find the ranking model which describes the data set best. In this paper, we propose a probabilistic model for learning multiple latent rankings from pairwise comparisons. Our novel model can capture multiple hidden rankings underlying the pairwise comparisons. Based on the model, we develop an efficient inference algorithm to learn multiple latent rankings, as well as an effective inference algorithm for active learning that updates the model parameters in crowdsourcing systems whenever new pairwise comparisons are supplied. The performance study with synthetic and real-life data sets confirms the effectiveness of our model and inference algorithms.
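The classical single-ranking building block for aggregating pairwise comparisons is the Bradley-Terry model; the sketch below fits it with the standard minorization-maximization (Zermelo) updates. This is a stand-in for intuition only, as the paper's model handles multiple latent rankings:

```python
from collections import Counter

def bradley_terry(n_items, comparisons, iters=200):
    """Fit Bradley-Terry skill parameters from a list of (winner, loser)
    pairs via MM updates: p_i <- W_i / sum over i's matches of
    1 / (p_i + p_j). Higher p means more preferred."""
    wins = Counter(winner for winner, _ in comparisons)
    p = [1.0] * n_items
    for _ in range(iters):
        new = []
        for i in range(n_items):
            denom = 0.0
            for a, b in comparisons:
                if i in (a, b):
                    j = b if i == a else a
                    denom += 1.0 / (p[i] + p[j])
            new.append(wins[i] / denom if denom else p[i])
        total = sum(new)
        p = [x / total * n_items for x in new]  # normalize for stability
    return p
```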

8.
What is the simplest thing you can do to solve a problem? In the context of semi-supervised feature selection, we tackle exactly this question: how much can we gain from two simple classifier-independent strategies? If we have some binary labelled data and some unlabelled, we could assume the unlabelled data are all positives, or assume they are all negatives. These minimalist, seemingly naive, approaches have not previously been studied in depth. However, with theoretical and empirical studies, we show that they provide powerful results for feature selection, via hypothesis testing and feature ranking. Combining them with some "soft" prior knowledge of the domain, we derive two novel algorithms (Semi-JMI, Semi-IAMB) that outperform significantly more complex competing methods, showing particularly good performance when the labels are missing-not-at-random. We conclude that simple approaches to this problem can work surprisingly well, and in many situations we can provably recover the exact feature selection dynamics, as if we had labelled the entire data set.
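One of the two minimalist strategies, sketched concretely: pad the labelled targets by treating every unlabelled point as a negative, then rank features by plug-in mutual information with the padded labels. The function names and the MI-based ranking are illustrative, not the exact Semi-JMI/Semi-IAMB procedures:

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Plug-in mutual information between two discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def rank_assume_negative(X, y_labelled, n_unlabelled):
    """Rank feature indices by MI with labels, treating all unlabelled
    rows (the last n_unlabelled rows of X) as class 0."""
    y = y_labelled + [0] * n_unlabelled
    scores = [mutual_info([row[j] for row in X], y)
              for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: -scores[j])
```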

9.
10.
This paper proposes a new quantum-inspired evolutionary algorithm for solving ordering problems. Quantum-inspired evolutionary algorithms based on binary and real representations have previously been developed to solve combinatorial and numerical optimization problems, providing better results than classical genetic algorithms with less computational effort. However, for ordering problems, order-based genetic algorithms are more suitable than those with binary and real representations, because their specialized crossover and mutation operators always generate feasible solutions. Therefore, this work proposes a new quantum-inspired evolutionary algorithm especially devised for ordering problems (QIEA-O). Two versions of the algorithm are proposed: the so-called pure version generates solutions using the proposed procedure alone, while the hybrid approach combines the pure version with a traditional order-based genetic algorithm. The proposed quantum-inspired order-based evolutionary algorithms have been evaluated on two well-known benchmark applications, the traveling salesman problem (TSP) and the vehicle routing problem (VRP), as well as on a real line-scheduling problem. Numerical results were obtained for ten cases (7 VRP and 3 TSP) with sizes ranging from 33 to 101 stops and 1 to 10 vehicles, where the proposed algorithm outperformed a traditional order-based genetic algorithm in most experiments.
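The feasibility property the abstract relies on comes from order-based operators. A standard example is order crossover (OX), which by construction always yields a valid permutation; this sketch shows the classical operator, not the paper's quantum-inspired procedure:

```python
import random

def order_crossover(parent1, parent2, rng=random):
    """Order crossover (OX): copy a random slice from parent1, then fill
    the remaining positions with parent2's genes in their original
    order. The child is always a feasible permutation."""
    n = len(parent1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent1[i:j + 1]
    # Genes from parent2 not yet placed, in parent2's order.
    fill = [g for g in parent2 if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child
```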

11.
Since classification data often contains redundant, useless or misleading features, feature selection is an important pre-processing step for solving classification problems. This problem is often solved by applying evolutionary algorithms to decrease the number of features involved. Removing irrelevant features from the feature space and correctly identifying relevant features is the primary objective, as it can increase classification accuracy. In this paper, a novel QBGSA-K-NN hybrid system is proposed, which hybridizes the quantum-inspired binary gravitational search algorithm (QBGSA) with the K-nearest neighbor (K-NN) method using leave-one-out cross-validation (LOOCV). The main aim of this system is to improve classification accuracy with an appropriate feature subset in binary problems. We evaluate the proposed hybrid system on several UCI machine learning benchmark examples. The experimental results show that the proposed method is able to select the discriminating input features correctly and achieves high classification accuracy, comparable to or better than well-known similar classifier systems.

12.
13.
Most feature selection algorithms based on information-theoretic learning (ITL) adopt a ranking process or greedy search as their search strategy. The former selects features individually and therefore ignores feature interaction and dependencies. The latter relies heavily on the search path, as only one path is explored with no possibility of backtracking. In addition, both strategies typically lead to heuristic algorithms. To cope with these problems, this article proposes a novel feature selection framework based on correntropy in ITL, namely correntropy-based feature selection using binary projection (BPFS). Our framework selects features by projecting the original high-dimensional data to a low-dimensional space through a special binary projection matrix. The formulated objective function aims at maximizing the correntropy between the selected features and the class labels, and it can be efficiently optimized via standard mathematical tools. We apply the half-quadratic method to optimize the objective function in an iterative manner, where each iteration reduces to an assignment subproblem that can be solved very efficiently with off-the-shelf toolboxes. Comparative experiments on six real-world data sets indicate that our framework is effective and efficient.

14.
Work in inductive learning has mostly concentrated on classification. However, there are many applications in which it is desirable to order rather than to classify instances. For modelling ordering problems, we generalize the notion of information tables to ordered information tables by adding order relations on attribute values. We then propose a data analysis model that analyzes the dependency of attributes to describe the properties of ordered information tables. The problem of mining ordering rules is formulated as finding associations between orderings of attribute values and the overall ordering of objects. An ordering rule may state that "if the value of an object x on an attribute a is ordered ahead of the value of another object y on the same attribute, then x is ordered ahead of y". For mining ordering rules, we first transform an ordered information table into a binary information table, and then apply any standard machine learning and data mining algorithm. As an illustration, we analyze in detail Maclean's universities ranking for the year 2000.
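The transformation step described above can be sketched directly: each row of the binary table is an ordered pair of objects, and each column records whether the first object's value is ordered ahead of the second's on that attribute. Interpreting "ordered ahead of" as `<` is our simplifying assumption:

```python
from itertools import permutations

def to_binary_table(table, attrs):
    """Transform an ordered information table into a binary information
    table: rows are object pairs (x, y); column a is 1 iff x's value on
    attribute a is ordered ahead of (here: less than) y's value."""
    rows = {}
    for x, y in permutations(table, 2):
        rows[(x, y)] = [int(table[x][a] < table[y][a]) for a in attrs]
    return rows
```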

15.
Feature Selection Based on Information-Theoretic Criteria in Classificatory Analysis    (Cited by: 3, of which 1 self-citation and 2 by others)
Feature selection aims to reduce the dimensionality of patterns for classificatory analysis by selecting the most informative rather than irrelevant and/or redundant features. In this study, two novel information-theoretic measures for feature ranking are presented: one is an improved formula to estimate the conditional mutual information between the candidate feature fi and the target class C given the subset of selected features S, i.e., I(C;fi|S), under the assumption that the information of features is distributed uniformly; the other is a mutual information (MI) based constructive criterion that is able to capture both irrelevant and redundant input features under arbitrary distributions of feature information. With these two measures, two new feature selection algorithms are proposed, called the quadratic MI-based feature selection (QMIFS) approach and the MI-based constructive criterion (MICC) approach, in which no parameters such as β in Battiti's MIFS and Kwak and Choi's MIFS-U methods need to be preset. Thus, the intractable problem of choosing an appropriate value of β to trade off relevance to the target classes against redundancy with the already-selected features is avoided completely. Experimental results demonstrate the good performance of QMIFS and MICC on both synthetic and benchmark data sets.
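For intuition, here is the classical greedy forward-selection skeleton that QMIFS and MICC improve upon, using an average-redundancy score in place of Battiti's β-weighted penalty. This is a simplified stand-in, not the paper's parameter-free criteria; `relevance[i]` plays the role of I(C; f_i) and `redundancy[i][j]` of I(f_i; f_j):

```python
def greedy_mi_selection(relevance, redundancy, k):
    """Greedy forward feature selection: at each step pick the feature
    maximizing relevance minus average redundancy with the features
    already selected (a parameter-free MIFS-style score)."""
    selected = []
    remaining = set(range(len(relevance)))
    while remaining and len(selected) < k:
        def score(i):
            if not selected:
                return relevance[i]
            red = sum(redundancy[i][j] for j in selected) / len(selected)
            return relevance[i] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```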

16.
This paper studies supervised clustering in the context of label ranking data. The goal is to partition the feature space into K clusters, such that they are compact in both the feature and label ranking spaces. This type of clustering has many potential applications. For example, in target marketing we might want to come up with K different offers or marketing strategies for our target audience. Thus, we aim at clustering the customers' feature space into K clusters by leveraging the revealed or stated, potentially incomplete, customer preferences over products, such that the preferences of customers within one cluster are more similar to each other than to those of customers in other clusters. We establish several baseline algorithms and propose two principled algorithms for supervised clustering. In the first baseline, the clusters are created in an unsupervised manner, followed by assigning a representative label ranking to each cluster. In the second baseline, the label ranking space is clustered first, followed by partitioning the feature space based on the central rankings. In the third baseline, clustering is applied on a new feature space consisting of both features and label rankings, followed by mapping back to the original feature and ranking spaces. The RankTree principled approach is based on a Ranking Tree algorithm previously proposed for label ranking prediction. Our modification starts with K random label rankings and iteratively splits the feature space to minimize the ranking loss, followed by re-calculation of the K rankings based on cluster assignments. The MM-PL approach is a multi-prototype supervised clustering algorithm based on the Plackett-Luce (PL) probabilistic ranking model. It represents each cluster with a union of Voronoi cells defined by a set of prototypes, and assigns each cluster a set of PL label scores that determine the cluster's central ranking. Cluster membership and ranking prediction for a new instance are determined by the cluster membership of its nearest prototype. The unknown cluster PL parameters and prototype positions are learned by minimizing the ranking loss, based on two variants of the expectation-maximization algorithm. Evaluation of the proposed algorithms was conducted on synthetic and real-life label ranking data by considering several measures of cluster goodness: (1) cluster compactness in feature space, (2) cluster compactness in label ranking space and (3) label ranking prediction loss. Experimental results demonstrate that the proposed MM-PL and RankTree models are superior to the baseline models. Further, MM-PL has been shown to handle situations with a significant fraction of missing label preferences much better than the other algorithms.
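The Plackett-Luce model at the core of MM-PL assigns a likelihood to a full ranking by choosing items first-to-last with probability proportional to their scores among the items not yet placed. A minimal sketch of that likelihood (the learning of scores and prototypes is the paper's contribution and is not shown):

```python
import math

def pl_log_likelihood(ranking, scores):
    """Log-likelihood of a full ranking (best first) under the
    Plackett-Luce model with positive per-item scores."""
    ll = 0.0
    remaining = list(ranking)
    for item in ranking:
        # Probability of picking `item` next among the remaining items.
        ll += math.log(scores[item] / sum(scores[j] for j in remaining))
        remaining.remove(item)
    return ll
```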

17.
The significance of detecting and classifying power quality (PQ) events that disturb the voltage and/or current waveforms in electrical power distribution networks is well known. Consequently, in spite of a large number of research reports in this area, the problem of PQ event classification remains an important engineering problem. Several feature construction, pattern recognition, analysis, and classification methods have been proposed for this purpose. In spite of the extensive number of such alternatives, research comparing how useful these features are with respect to each other under specific classifiers has been missing. In this work, a thorough analysis is carried out regarding the classification strengths of an ensemble of celebrated features. The feature items were selected from well-known tools such as spectral information, wavelet extrema across several decomposition levels, and local statistical variations of the waveform. The tests are repeated for the classification of several types of real-life data acquired during line-to-ground arcing faults and voltage sags due to induction motor starting under different load conditions. In order to avoid specificity in determining classifier strength, eight different approaches are applied, including the computationally costly "exhaustive search" together with the leave-one-out technique. To further avoid specificity of a feature to a given classifier, two classifiers (Bayes and SVM) are tested. As a result of these analyses, the more useful set among a wider set of features is obtained for each classifier. It is observed that classification accuracy improves for both classifiers when relatively useless feature items are eliminated. Furthermore, the feature selection results change somewhat according to the classifier used. This observation shows that when a new analysis tool or feature is developed and claimed to perform "better" than another, one should always indicate the matching classifier for the feature, because that feature may prove comparatively inefficient with other classifiers.

18.
An effective algorithm is proposed for extracting two features useful for analyzing word collocation habits from text documents: "Frequency Rank Ratio" (FRR) and "Intimacy". FRR is derived from a word's ranking index according to its word frequency. Intimacy, computed by a compact language model called the Influence Language Model (ILM), measures how close a word is to others within the same sentence. Using the proposed features, a visualization framework is developed for word collocation analysis. To evaluate the proposed framework, two corpora are designed and collected from real-life data covering diverse topics and genres. Extensive simulations are conducted to illustrate the feasibility and effectiveness of the visualization framework. The results demonstrate that the proposed features and algorithm enable reliable and efficient text analysis.
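The abstract does not give the FRR formula, so the sketch below is an assumption: rank words by frequency and express each word's rank relative to the vocabulary size, so frequent words get small FRR values. The function `frequency_rank_ratio` is hypothetical:

```python
from collections import Counter

def frequency_rank_ratio(tokens):
    """Illustrative FRR: a word's frequency rank (1 = most frequent)
    divided by vocabulary size, mapping words to (0, 1] with common
    words near 0 and rare words near 1."""
    freq = Counter(tokens)
    ordered = [w for w, _ in freq.most_common()]
    vocab = len(ordered)
    return {w: (rank + 1) / vocab for rank, w in enumerate(ordered)}
```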

19.
Detecting informative features to support explainable analysis of high-dimensional data, especially data with very few samples, is a significant and challenging task. Feature selection, especially unsupervised feature selection, is the right way to meet this challenge. Therefore, two unsupervised spectral feature selection algorithms are proposed in this paper. They group features using an advanced self-tuning spectral clustering algorithm based on local standard deviation, so as to detect globally optimal feature clusters as far as possible. Two feature ranking techniques, cosine-similarity-based and entropy-based, are then proposed, so that a representative feature of each cluster can be selected to form the feature subset on which the explainable classification system is built. The effectiveness of the proposed algorithms is tested on high-dimensional benchmark omics data sets and compared with peer methods, and statistical tests are conducted to determine whether the proposed spectral feature selection algorithms differ significantly from the peer methods. The extensive experiments demonstrate that the proposed algorithms outperform the peer methods, especially the variant based on cosine-similarity feature ranking, while the statistical test results show that the entropy-ranking-based spectral feature selection algorithm performs best. The detected features demonstrate strong discriminative capability in downstream classifiers for omics data, so that an AI system built on them would be reliable and explainable, which is especially significant for building transparent and trustworthy medical diagnostic systems from an interpretable-AI perspective.

20.
The sequential information bottleneck algorithm for categorical data (CD-sIB) assumes that every attribute contributes uniformly to the binarization transformation, which degrades the quality of the transformation. This paper proposes a weighted binarization transformation method to reflect the characteristics of non-co-occurrence data. The method highlights the representative attributes of non-co-occurrence data while suppressing non-representative (redundant) attributes, thereby obtaining the best co-occurrence representation. Two weighting principles for non-co-occurrence data are proposed: applicability to randomly distributed data and unsupervised computation. Based on the concept of weighting granularity, a weighted binarization transformation algorithm is constructed. Experimental results show that the proposed algorithm achieves better clustering accuracy than the alternatives.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号