Similar Literature
 20 similar articles found (search time: 31 ms)
1.
2.
With the wide adoption of e-banking in recent years, and the increased opportunities this creates for fraudsters, billions of euros are lost to credit card fraud worldwide every year. Credit card fraud detection has therefore become a critical necessity for financial institutions. Several studies have applied machine learning techniques to this problem, but most do not take into account the sequential nature of transactional data. In this paper, we propose a novel credit card fraud detection model that uses sequence labelling based on both deep neural networks and probabilistic graphical models (PGMs). Using two real-world datasets, we compare our model with a baseline model and examine how modelling hidden sequential dependencies, both among transactions and among predicted labels, improves the results. Moreover, we introduce a novel undersampling algorithm that preserves the sequential patterns of the data during random undersampling. Our experiments demonstrate that this algorithm achieves promising results compared to state-of-the-art oversampling and undersampling methods.
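The abstract does not spell out the undersampling algorithm. As an illustration of the general idea of undersampling whole transaction sequences rather than individual rows, so each card's within-sequence ordering stays intact for a sequence labeller, here is a minimal sketch (all function and variable names are hypothetical, not the paper's):

```python
import random

def sequence_undersample(sequences, labels, ratio=1.0, seed=0):
    """Undersample whole transaction sequences rather than single rows.

    sequences: dict card_id -> list of transactions (in time order)
    labels:    dict card_id -> 1 if the sequence contains fraud, else 0
    ratio:     desired (#legit sequences) / (#fraud sequences)

    Keeping or dropping entire sequences preserves the within-sequence
    ordering that sequence labellers (e.g. HMMs, LSTMs) rely on.
    """
    fraud = [c for c, y in labels.items() if y == 1]
    legit = [c for c, y in labels.items() if y == 0]
    rng = random.Random(seed)
    keep_legit = rng.sample(legit, min(len(legit), int(len(fraud) * ratio)))
    kept = fraud + keep_legit
    return {c: sequences[c] for c in kept}
```

All fraud sequences are kept; only legitimate sequences are randomly dropped, and each surviving sequence keeps its full transaction history.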

3.
Feature selection is an important data preprocessing step in constructing an effective bankruptcy prediction model. Prediction performance can be affected by the feature selection and classification techniques employed. However, very few studies of bankruptcy prediction have identified the best combination of feature selection and classification techniques. In this study, two types of feature selection methods, filter-based and wrapper-based, are considered, and two types of classification techniques, statistical and machine learning, are employed in developing the prediction methods. In addition, bagging and boosting ensemble classifiers are constructed for comparison. Experimental results on three related datasets containing different numbers of input features show that the genetic algorithm, as the wrapper-based feature selection method, performs better than the filter-based method using information gain. The lowest prediction error rates on the three datasets are obtained by combining the genetic algorithm with the naïve Bayes and support vector machine classifiers, without bagging or boosting.
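The study's wrapper is a genetic algorithm; as a lighter illustration of the wrapper principle itself (scoring candidate feature subsets with the target classifier rather than with a filter statistic), here is a greedy forward-selection sketch. The `score` callback stands in for any classifier-based evaluation such as cross-validated accuracy; this is a simplification, not the paper's method:

```python
def forward_select(features, score, max_features=None):
    """Greedy wrapper-style feature selection.

    `score(subset)` evaluates a candidate feature subset with the target
    classifier (e.g. cross-validated accuracy). The wrapper repeatedly
    adds the feature that improves the score most, stopping when no
    remaining feature helps.
    """
    selected, best = [], float("-inf")
    remaining = list(features)
    while remaining and (max_features is None or len(selected) < max_features):
        gains = [(score(selected + [f]), f) for f in remaining]
        top_score, top_f = max(gains)
        if top_score <= best:          # no feature improves the score
            break
        selected.append(top_f)
        remaining.remove(top_f)
        best = top_score
    return selected, best
```

A genetic algorithm explores subsets by crossover and mutation instead of greedy addition, but the evaluation step, scoring subsets with the classifier, is the same.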

4.
Constructing support vector machine ensemble — cited by 30 (0 self-citations, 30 by others)
Hyun-Chul, Shaoning, Hong-Mo, Daijin, Sung. Pattern Recognition, 2003, 36(12): 2757–2767
Although the support vector machine (SVM) has been proposed to provide good generalization performance, the classification result of a practically implemented SVM is often far from the theoretically expected level, because implementations rely on approximate algorithms due to the high time and space complexity. To improve the limited classification performance of a single SVM, we propose SVM ensembles with bagging (bootstrap aggregating) or boosting. In bagging, each individual SVM is trained independently on training samples chosen randomly via a bootstrap technique. In boosting, each individual SVM is trained on samples chosen according to a probability distribution that is updated in proportion to each sample's error. In both bagging and boosting, the trained individual SVMs are aggregated into a collective decision in several ways, such as majority voting, least-squares-estimation-based weighting, and double-layer hierarchical combining. Simulation results on IRIS data classification, hand-written digit recognition, and fraud detection show that the proposed SVM ensembles with bagging or boosting greatly outperform a single SVM in classification accuracy.
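A minimal sketch of the bagging half of this recipe, bootstrap resampling plus majority voting, using 1-D threshold stumps in place of SVMs so the example needs no external libraries. The aggregation step is the same regardless of base learner; all names are illustrative:

```python
import random

def train_stump(sample):
    """Fit a 1-D threshold classifier: predict 1 when x >= threshold.
    Picks the threshold minimising training error (a stand-in for an SVM)."""
    xs = sorted({x for x, _ in sample})
    best_t, best_err = xs[0], float("inf")
    for t in xs:
        err = sum((1 if x >= t else 0) != y for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(data, x, n_models=11, seed=0):
    """Bagging: train each model on a bootstrap resample of `data`,
    then combine the individual predictions by simple majority vote."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_models):
        boot = [rng.choice(data) for _ in data]   # bootstrap resample
        t = train_stump(boot)
        votes += 1 if x >= t else 0
    return 1 if votes > n_models // 2 else 0
```

Boosting differs only in how `boot` is drawn: instead of uniform resampling, samples are drawn from a distribution that is reweighted toward previously misclassified instances.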

5.
This paper presents an exploratory study of using metaheuristic optimization techniques to select important parameters (features and members) in the design of ensembles of classifiers. To this end, an empirical investigation applies 10 different optimization techniques to 23 classification problems. We analyze the performance of both mono- and multi-objective versions of these techniques, using all combinations of three objectives: classification error and two diversity measures important to ensembles, good diversity and bad diversity. The optimization techniques also select members for heterogeneous ensembles, using k-NN, decision tree, and naïve Bayes as individual classifiers, all combined by majority vote. The main aim of this study is to determine which optimization techniques obtain the best results in the mono- and multi-objective settings, and to compare them with classical ensemble techniques such as bagging, boosting, and random forest. Our findings indicate that three optimization techniques, Memetic, SA, and PSO, performed better than the other optimization techniques as well as the traditional ensemble generators (bagging, boosting, and random forest).

6.
Mobile context inference has recently become an important issue. The Bayesian probabilistic model is one of the most popular probabilistic approaches to context inference: it efficiently represents and exploits the conditional independence of propositions. However, probabilistic context inference on mobile devices faces limitations, since such devices have relatively little memory. In this paper, we present a novel method for efficient Bayesian inference on a mobile phone. To overcome the constraints of the mobile environment, the method uses two-layered Bayesian networks with a tree structure. In contrast to conventional techniques, it uses probabilistic models with fixed tree structures and intermediate nodes, which reduces inference time by eliminating junction-tree creation. To evaluate the method, an experiment was conducted with data collected over a month. The results show the efficiency and effectiveness of the proposed method.
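A sketch of why a fixed tree structure avoids junction-tree construction: when every observed node depends only on a single context node, the posterior is just a product of table lookups. This collapses the paper's two-layered network (which has intermediate nodes) to a single layer for brevity, so it is only a schematic reading, with hypothetical names:

```python
def tree_posterior(prior, cpts, evidence):
    """Exact inference in a fixed one-layer tree: one root (the context),
    each observed leaf depending only on the root. Because the structure
    is a tree known in advance, the posterior is a product of conditional
    probability table lookups -- no junction tree is built at query time.

    prior:    {context: P(context)}
    cpts:     {leaf: {(context, value): P(value | context)}}
    evidence: {leaf: observed value}
    """
    scores = {}
    for c, p in prior.items():
        s = p
        for leaf, value in evidence.items():
            s *= cpts[leaf][(c, value)]     # table lookup per leaf
        scores[c] = s
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}
```

With intermediate nodes the products extend one level deeper, but the fixed structure still lets every query run in time linear in the number of nodes.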

7.
An important challenge in mobility analysis is the development of techniques that can associate users' identities across multiple datasets. Such techniques can support hybrid sensing and tracking mechanisms across large urban spaces and allow context to be inferred by combining multiple datasets, but at the same time they have important implications for privacy. In this paper we present a scheme to associate the different identities of a person across two movement databases. Our two key contributions are the reformulation of this problem as two-class classification and the development of efficient techniques for pruning the search space. We evaluate the scheme on synthetic and real data from two co-located city-wide WiFi and Bluetooth networks, and show that pruning has a remarkable effect on its performance in identifying individuals across two distinct mobility datasets. Finally, we discuss the privacy implications of the scheme in the light of our findings.

8.
Sentiment analysis has long been a hot topic in understanding users' statements online. Many machine learning approaches to sentiment analysis have been proposed, from simple feature-oriented SVMs to more complicated probabilistic models. Although they have demonstrated capability in polarity detection, they face the curse of dimensionality arising from the high-dimensional nature of text documents. In this research, inspired by the dimensionality reduction and feature extraction capability of auto-encoders, an auto-encoder-based bagging prediction architecture (AEBPA) is proposed. An experimental study on commonly used datasets shows its potential. This method can offer researchers in the community further insight into bagging-oriented solutions for sentiment analysis.

9.
Credit scoring modelling is one of the leading formal tools for supporting the granting of credit. Its core objective is to generate a score by which potential clients can be ranked in order of their probability of default. A critical factor is whether a credit scoring model is accurate enough to correctly classify a client as a good or bad payer. In this context the concept of bootstrap aggregating (bagging) arises: generate multiple classifiers by fitting models to several replicated datasets, then combine their predictions into a single classification to improve accuracy. In this paper we propose a new bagging-type variant procedure, which we call poly-bagging, consisting of combining predictors over a succession of resamplings. The study is set in the context of credit scoring modelling. The proposed poly-bagging procedure was applied to several artificial datasets and to a real credit-granting dataset, with up to three successions of resamplings. We observed better classification accuracy for the two-bagged and three-bagged models in all considered setups. These results strongly indicate that the poly-bagging approach can improve modelling performance measures while keeping a flexible and straightforward bagging-type structure that is easy to implement.
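The paper's exact resampling procedure is not given in this abstract. One schematic reading of "combining predictors over a succession of resamplings" is bagging applied recursively, so that each member of a bag is itself a bagged ensemble, with the number of successions as the recursion depth. The following sketch is that reading under stated assumptions, not the authors' algorithm; `train_fn` and all names are hypothetical:

```python
import random

def majority(votes):
    """Simple majority vote over binary predictions (ties go to 0)."""
    return 1 if sum(votes) * 2 > len(votes) else 0

def poly_bag(train_fn, data, levels=2, width=5, rng=None):
    """Schematic poly-bagging: apply bagging recursively, so each member
    of a bag is itself a bagged ensemble. `levels` is the number of
    successive resamplings; `train_fn(sample)` returns a predict(x)
    callable for a base model. Returns a predict(x) callable."""
    rng = rng or random.Random(0)
    if levels == 0:
        return train_fn(data)
    members = []
    for _ in range(width):
        boot = [rng.choice(data) for _ in data]   # one resampling
        members.append(poly_bag(train_fn, boot, levels - 1, width, rng))
    return lambda x: majority([m(x) for m in members])
```

With `levels=2` this corresponds to a "two-bagged" model in the abstract's terminology under this reading, and `levels=1` reduces to classical bagging.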

10.
Bayesian networks are important knowledge representation tools for handling uncertain pieces of information, and the success of these models is strongly related to their capacity to represent and handle dependence relations. Some forms of Bayesian networks have been successfully applied to many classification tasks; in particular, naïve Bayes classifiers have been used for intrusion detection and alert correlation. This paper analyses the advantage of adding expert knowledge to probabilistic classifiers in the context of intrusion detection and alert correlation. As examples of probabilistic classifiers, we consider the well-known naïve Bayes, tree-augmented naïve Bayes (TAN), hidden naïve Bayes (HNB), and decision tree classifiers. Our approach can be applied to any classifier whose outcome is a probability distribution over a set of classes (or decisions). In particular, we study how additional expert knowledge, such as "it is expected that 80% of traffic will be normal", can be integrated into classification tasks. Our aim is to revise the outputs of probabilistic classifiers so that they fit the expert knowledge. Experimental results show that our approach improves existing results on different benchmarks from the intrusion detection and alert correlation areas.
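One simple way to revise probabilistic outputs toward an expert prior such as "80% of traffic is normal" is to reweight each posterior by the ratio of the expert's target class mass to the classifier's observed average class mass, then renormalise. The paper's actual revision rule may differ; this is only an illustrative sketch with hypothetical names:

```python
def revise_with_prior(posteriors, expert_prior):
    """Revise classifier outputs so the aggregate class mass matches an
    expert-supplied prior (e.g. {"normal": 0.8, "attack": 0.2}).

    Each posterior p(c|x) is reweighted by target(c) / average(c) and
    renormalised -- a simple way to bias a probabilistic classifier's
    outputs toward the expert knowledge.
    """
    classes = expert_prior.keys()
    n = len(posteriors)
    avg = {c: sum(p[c] for p in posteriors) / n for c in classes}
    revised = []
    for p in posteriors:
        w = {c: p[c] * expert_prior[c] / avg[c] for c in classes}
        z = sum(w.values())
        revised.append({c: w[c] / z for c in classes})
    return revised
```

An undecided posterior (50/50) is pushed to the expert's 80/20 split, while confident posteriors move less in relative terms, which is the qualitative behaviour one wants from such a revision.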

11.
Statistical topic models for multi-label document classification — cited by 2 (0 self-citations, 2 by others)
Machine learning approaches to multi-label document classification have to date largely relied on discriminative modeling techniques such as support vector machines. A drawback of these approaches is that performance rapidly drops off as the total number of labels and the number of labels per document increase. This problem is amplified when the label frequencies exhibit the type of highly skewed distributions that are often observed in real-world datasets. In this paper we investigate a class of generative statistical topic models for multi-label documents that associate individual word tokens with different labels. We investigate the advantages of this approach relative to discriminative models, particularly with respect to classification problems involving large numbers of relatively rare labels. We compare the performance of generative and discriminative approaches on document labeling tasks ranging from datasets with several thousand labels to datasets with tens of labels. The experimental results indicate that probabilistic generative models can achieve competitive multi-label classification performance compared to discriminative methods, and have advantages for datasets with many labels and skewed label frequencies.

12.
Bayesian networks for imputation in classification problems — cited by 1 (0 self-citations, 1 by others)
Missing values are an important problem in data mining. To tackle this problem in classification tasks, we propose two imputation methods based on Bayesian networks, evaluated in the context of both prediction and classification tasks. We compare the results with those of classical imputation methods (Expectation–Maximization, Data Augmentation, Decision Trees, and Mean/Mode). Our simulations use four datasets (Congressional Voting Records, Mushroom, Wisconsin Breast Cancer, and Adult), which are benchmarks for data mining methods. Missing values were simulated in these datasets by eliminating some known values, making it possible to assess the prediction capability of an imputation method by comparing the original values with the imputed ones. In addition, we propose a methodology to estimate the bias introduced by imputation methods in classification tasks, using four classifiers (One Rule, Naïve Bayes, J4.8 decision tree, and PART) to evaluate the imputation methods in classification scenarios. Computing times consumed to perform imputations are also reported. Simulation results in terms of prediction, classification, and computing times allow us to perform several analyses, leading to interesting conclusions: Bayesian networks are competitive with classical imputation methods.

13.
The aim of bankruptcy prediction in data mining and machine learning is to develop an effective model that provides high prediction accuracy. In the prior literature, various classification techniques have been developed and studied, among which classifier ensembles, which combine multiple classifiers, have outperformed many single classifiers. However, three critical issues can affect the performance of classifier ensembles: the classification technique adopted, the method used to combine the multiple classifiers, and the number of classifiers combined. Since few relevant studies have examined these issues, this paper conducts a comprehensive comparison of classifier ensembles built from three widely used classification techniques, multilayer perceptron (MLP) neural networks, support vector machines (SVM), and decision trees (DT), using two well-known combination methods, bagging and boosting, and different numbers of combined classifiers. Experimental results on three public datasets show that DT ensembles composed of 80–100 classifiers using boosting perform best. The Wilcoxon signed-rank test also shows that DT ensembles with boosting perform significantly differently from the other classifier ensembles. Moreover, a further study on a real-world Taiwan bankruptcy dataset also demonstrates the superiority of DT ensembles with boosting over the others.

14.
Many learning problems require handling high-dimensional datasets with relatively few instances. Learning algorithms are thus confronted with the curse of dimensionality and need to address it in order to be effective. Examples of such data include the bag-of-words representation in text classification and gene expression data for tumor detection/classification. Among the many features characterizing the instances, many may be irrelevant (or even detrimental) to the learning task. There is thus a clear need for adequate techniques for feature representation, reduction, and selection that improve both classification accuracy and memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques suitable for medium- and high-dimensional datasets. Experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
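As a small illustration of unsupervised discretization (no class labels involved), equal-frequency binning chooses cut points so that each bin holds roughly the same number of observations. This is a generic sketch of the idea, not necessarily the discretization technique the paper proposes:

```python
def equal_frequency_bins(values, n_bins):
    """Unsupervised discretization: pick cut points so each bin holds
    roughly the same number of observations (no class labels needed)."""
    s = sorted(values)
    cuts = [s[(i * len(s)) // n_bins] for i in range(1, n_bins)]
    def discretize(x):
        return sum(x >= c for c in cuts)   # bin index in 0..n_bins-1
    return cuts, discretize
```

Unlike equal-width binning, equal-frequency cuts adapt to skewed feature distributions, which matters for sparse text features where most values cluster near zero.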

15.
The ability to automatically detect fraudulent escrow websites is important for alleviating online auction fraud. Despite research on related topics such as web spam and spoof site detection, fake escrow website categorization has received little attention. The authentic appearance of fake escrow websites makes it difficult for Internet users to distinguish legitimate sites from phonies, making systems for detecting such websites an important endeavor. In this study we evaluated the effectiveness of various features and techniques for detecting fake escrow websites. Our analysis included a rich set of fraud cues extracted from web page text, image, and link information. We also compared several machine learning algorithms, including support vector machines, neural networks, decision trees, naïve Bayes, and principal component analysis. Experiments assessed the proposed fraud cues and techniques on a test bed of nearly 90,000 web pages derived from 410 legitimate and fake escrow websites. The combination of an extended feature set and a support vector machine ensemble classifier enabled accuracies of over 90% and 96% for page-level and site-level classification, respectively, when differentiating fake pages from real ones. Deeper analysis revealed that an extended set of fraud cues is necessary due to the broad spectrum of tactics employed by fraudsters. The study confirms the feasibility of automated methods for detecting fake escrow websites. The results may also be useful for informing existing online escrow fraud resources and communities of practice about the plethora of fraud cues pervasive in fake websites.

16.
Imperfect information inevitably appears in real situations for a variety of reasons. Although efforts have been made to incorporate imperfect data into classification techniques, there are still many limitations on the types of data, uncertainty, and imprecision that can be handled. In this paper, we present a Fuzzy Random Forest ensemble for classification and show its ability to handle imperfect data in both the learning and classification phases. We then describe the types of imperfect data it supports, and devise an augmented ensemble that can operate with other types of imperfect data: crisp, missing, probabilistic uncertainty, and imprecise (fuzzy and crisp) values. Additionally, we perform experiments with imperfect datasets created for this purpose and with datasets used in other papers, to show the advantage of being able to express the true nature of imperfect information.

17.
Learning from noisy data is a challenging task in data mining research. In this paper, we argue that for noisy data both the global bagging strategy and the local bagging strategy suffer from their own inherent disadvantages and thus cannot form accurate prediction models. Consequently, we present a Global and Local Bagging (Glocal Bagging, GB) approach to tackle this problem. GB assigns weight values to the base classifiers under two considerations: (1) for each test instance Ix, GB prefers bags close to Ix, which is the nature of the local learning strategy; (2) GB assigns larger weights to base classifiers with higher accuracy on the out-of-bag samples, which is the nature of the global learning strategy. Combining (1) and (2), GB assigns large weights to classifiers that are close to the current test instance Ix and have high out-of-bag accuracy. Diversity/accuracy analysis on synthetic datasets shows that GB improves the ensemble's performance by increasing the accuracy of its base classifiers, and bias/variance analysis shows that GB's accuracy improvement comes mainly from a reduction in bias error. Experimental results on 25 UCI benchmark datasets show that, when the datasets are noisy, GB is superior to previously proposed bagging methods such as classical bagging, bragging, nice bagging, trimmed bagging, and lazy bagging.
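The two ingredients the abstract describes, proximity of a classifier's bag to the test instance (local) and out-of-bag accuracy (global), can be sketched as a simple multiplicative weighting. This is a 1-D toy version with hypothetical names, not the paper's exact formula:

```python
def glocal_weights(bag_centroids, oob_accuracy, x):
    """Weight each bagged classifier by (closeness of its training bag to
    the test instance x) * (its out-of-bag accuracy) -- the local and
    global ingredients of Glocal Bagging, combined multiplicatively.

    bag_centroids: one representative value per bag (1-D toy summary)
    oob_accuracy:  accuracy of each classifier on its out-of-bag samples
    """
    weights = []
    for centroid, acc in zip(bag_centroids, oob_accuracy):
        closeness = 1.0 / (1.0 + abs(x - centroid))   # 1-D proximity
        weights.append(closeness * acc)
    z = sum(weights)
    return [w / z for w in weights]
```

A classifier whose bag sits far from `x`, or whose out-of-bag accuracy is low, contributes little to the weighted vote; a classifier strong on both counts dominates it.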

18.
Text classification is one of the important areas where deep learning methods are now widely applied. This paper designs a hybrid model based on recurrent neural networks and capsule networks: a capsule network is used to overcome the spatial insensitivity of convolutional neural networks and to learn relational features between local parts of a text and the whole, while a GRU recurrent network followed by a max-pooling layer learns salient contextual features; combining the two optimizes feature extraction and improves text classification performance. In addition, a hybrid word-vector method based on missing-word completion is proposed for the embedding layer, using two strategies to reduce over-matching of missing words and to lower the probability of noisy data appearing in the word vectors, yielding high-quality word vectors that are semantically rich and low in noise. Experiments on classic text classification datasets, compared against the best of the baseline models, show that the proposed model and method effectively improve text classification accuracy.

19.
Pawalai, Chun Che. Neurocomputing, 2009, 72(13–15): 2845
This paper presents an ensemble neural network and interval neutrosophic sets approach to binary classification. A bagging technique is applied to an ensemble of pairs of neural networks created to predict the degrees of truth membership, indeterminacy membership, and false membership in the interval neutrosophic sets. In our approach, error and vagueness are also quantified in the classification process. A number of aggregation techniques are proposed. We applied our techniques to classic benchmark problems, including ionosphere, Pima Indians diabetes, and liver disorders from the UCI machine learning repository. Our approaches improve classification performance compared with existing techniques, which use only the truth membership values. Furthermore, the proposed ensemble techniques also provide better results than those obtained from a single pair of neural networks.

20.
In current research on aspect-level sentiment classification, graph convolutional networks are applied over syntactic dependency trees to model the dependencies between aspect words and context words. However, because dependency trees are unstable and sentences are complex and often irregularly expressed, this improvement is limited. To address these problems, a hybrid graph neural network model is proposed. In this model, a multi-layer graph convolutional network over the syntactic dependency tree is designed to extract deep dependencies between aspect words and context words. Meanwhile, to extract word-level dependency features, a graph attention network with residual connections (Res-GAT) is designed; its main idea is to use word-level dependency features as a complement, combined with syntactic dependencies, for aspect-level sentiment classification. Experiments on five classic datasets demonstrate that the model has better classification ability than the baseline models.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号