55 query results found (search time: 27 ms)
1.
In business applications such as direct marketing, decision-makers must choose the action that maximizes a utility function. Cost-sensitive learning methods can help them achieve this goal. In this paper, we introduce Pessimistic Active Learning (PAL). PAL employs a novel pessimistic measure, which relies on confidence intervals and is used to balance the exploration/exploitation trade-off. To acquire an initial sample of labeled data, PAL applies orthogonal arrays of fractional factorial design. PAL was tested on ten datasets using a decision tree inducer. A comparison of these results to those of other methods indicates PAL’s superiority.
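A pessimistic measure of this kind can be sketched as a lower confidence bound on the expected utility of an action. This is a hypothetical illustration using a normal-approximation bound; the paper's exact formula is not reproduced here.

```python
import math

def pessimistic_utility(successes, trials, benefit, cost, z=1.96):
    """Lower confidence bound on expected utility (illustrative sketch).

    A pessimistic estimate of the response probability is multiplied by the
    benefit of a positive outcome, minus the fixed cost of acting.
    """
    p_hat = successes / trials
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / trials)
    return max(0.0, p_hat - half_width) * benefit - cost
```

With few observations the bound is wide, so the pessimistic utility drops and exploration of other actions becomes more attractive, which is the intuition behind balancing exploration and exploitation.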
2.
We propose three methods for extending the Boosting family of classifiers, motivated by real-life problems we have encountered. First, we propose a semisupervised learning method for exploiting unlabeled data in Boosting. We then present a novel classification model adaptation method; the goal of adaptation is optimizing an existing model for a new target application, which is similar to the previous one but may have different classes or class distributions. Finally, we present an efficient and effective cost-sensitive classification method that extends Boosting to allow for weighted classes. We evaluated these methods for call classification in the AT&T VoiceTone® spoken language understanding system. Our results indicate that it is possible to obtain the same classification performance using 30% less labeled data when the unlabeled data is utilized through semisupervised learning. Using model adaptation, we can achieve the same classification accuracy using less than half of the labeled data from the new application. Finally, we show significant improvements in the “important” (i.e., higher-weighted) classes without a significant loss in overall performance using the proposed cost-sensitive classification method.
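One simple way to let Boosting favor weighted classes, as described above, is to initialize the sample-weight distribution in proportion to class costs. This is a generic sketch, not the paper's exact formulation.

```python
def init_sample_weights(labels, class_costs):
    """Initialize Boosting sample weights in proportion to class costs,
    so higher-weighted ("important") classes dominate the training loss.
    `class_costs` maps each class label to its cost (hypothetical values)."""
    raw = [class_costs[y] for y in labels]
    total = sum(raw)
    return [w / total for w in raw]
```

Subsequent boosting rounds then reweight from this cost-aware starting distribution instead of the usual uniform one.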
3.
With developments in information technology, fraud is spreading all over the world, resulting in huge financial losses. Though fraud-prevention mechanisms such as CHIP&PIN have been developed for credit card systems, they do not prevent the most common fraud types, such as fraudulent credit card use over virtual POS (Point of Sale) terminals or mail orders, so-called online credit card fraud. As a result, fraud detection becomes an essential tool and probably the best way to stop such fraud. In this study, a new cost-sensitive decision tree approach is developed that minimizes the sum of misclassification costs while selecting the splitting attribute at each non-terminal node, and its performance is compared with well-known traditional classification models on a real-world credit card data set. In this approach, misclassification costs are taken as varying rather than fixed. The results show that this cost-sensitive decision tree algorithm outperforms the existing well-known methods on the given problem set, not only with respect to standard performance metrics such as accuracy and true positive rate, but also with respect to a newly defined cost-sensitive metric specific to the credit card fraud detection domain. Accordingly, financial losses due to fraudulent transactions can be further decreased by implementing this approach in fraud detection systems.
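Selecting a split by the sum of misclassification costs can be sketched as follows: each candidate leaf predicts the class that minimizes its total cost, and the split with the lowest summed cost over its children wins. The cost-matrix layout and values are illustrative, not taken from the paper.

```python
def leaf_cost(class_counts, cost):
    """Cost of a leaf when it predicts the class minimizing total
    misclassification cost; cost[pred][actual] is the cost of predicting
    `pred` for a record whose true class is `actual`."""
    return min(sum(cost[pred][actual] * n for actual, n in class_counts.items())
               for pred in cost)

def split_cost(partitions, cost):
    """Sum of misclassification costs over the child nodes of a candidate
    split; the splitting attribute with the lowest value would be chosen."""
    return sum(leaf_cost(counts, cost) for counts in partitions)
```

With a high cost for missing fraud, a mixed leaf will predict "fraud" even when legitimate records are the majority, which is exactly the behavior a cost-sensitive splitting criterion is meant to produce.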
4.
阮晓宏, 黄小猛, 袁鼎荣, 段巧灵. Computer Science (《计算机科学》), 2013, 40(Z11): 140-142, 146
Cost-sensitive learning methods often assume that different types of cost can be converted into a single cost expressed in the same unit, so constructing an appropriate cost-sensitive attribute-selection factor is clearly a challenge. We design a new heterogeneous cost-sensitive decision tree classifier algorithm that fully accounts for the roles of different costs in splitting-attribute selection, builds a splitting-attribute selection model based on heterogeneous costs, and defines a cost-sensitive pruning criterion. Experimental results show that this method handles the heterogeneity of cost mechanisms and attribute information more effectively than existing approaches.
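A heterogeneous-cost selection factor of the kind described could, for instance, discount an attribute's information gain by a weighted blend of its test cost and expected misclassification cost. The formula below is a hypothetical stand-in; the paper's actual model is not reproduced here.

```python
def split_score(info_gain, test_cost, misclass_cost, w=0.5):
    """Hypothetical heterogeneous-cost attribute-selection factor:
    information gain discounted by a weighted combination of two cost
    types that need not share a unit. `w` trades off the two costs."""
    return info_gain / (1.0 + w * test_cost + (1.0 - w) * misclass_cost)
```

The attribute with the highest score is selected, so an expensive test must buy proportionally more information gain to be chosen.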
5.
In many real-world regression and forecasting problems, over-prediction and under-prediction errors have different consequences and incur asymmetric costs. Such problems entail the use of cost-sensitive learning, which attempts to minimize the expected misprediction cost, rather than minimize a simple measure such as mean squared error. A method has been proposed recently for tuning a regular regression model post hoc so as to minimize the average misprediction cost under an asymmetric cost structure. In this paper, we build upon that method and propose an extended tuning method for cost-sensitive regression. The previous method becomes a special case of the method we propose. We apply the proposed method to loan charge-off forecasting, a cost-sensitive regression problem that has had a bearing on bank failures over the last few years. Empirical evaluation in the loan charge-off forecasting domain demonstrates that the method we have proposed can further lower the misprediction cost significantly.
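Post-hoc tuning of a regular regression model can be sketched as a grid search for a constant offset added to the model's predictions so that the average asymmetric cost is minimized. This is a simplification for illustration; the tuning methods discussed above are more general.

```python
def avg_cost(errors, c_over, c_under):
    """Average asymmetric misprediction cost for signed errors (pred - actual):
    over-predictions cost c_over per unit, under-predictions c_under per unit."""
    return sum(c_over * e if e > 0 else c_under * -e for e in errors) / len(errors)

def tune_offset(preds, actuals, c_over, c_under, candidates):
    """Post-hoc tuning sketch: pick the constant offset (from a candidate
    grid) that minimizes the average asymmetric cost on held-out data."""
    return min(candidates,
               key=lambda d: avg_cost([p + d - a for p, a in zip(preds, actuals)],
                                      c_over, c_under))
```

When under-prediction is much costlier than over-prediction, the chosen offset shifts forecasts upward, trading cheap over-prediction errors against expensive under-prediction errors.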
6.
7.
Costs are often an important part of the classification process, and cost factors have been taken into consideration in many previous studies of decision tree models. In this study, we also consider a cost-sensitive decision tree construction problem. We assume that test costs must be paid to obtain the values of the decision attributes and that a record must be classified without exceeding a spending-cost threshold. Unlike previous studies, in which records were classified using only a single condition attribute, in this study records are classified using multiple condition attributes simultaneously. An algorithm is developed to build such a cost-constrained decision tree. The experimental results show that our algorithm satisfactorily handles data with multiple condition attributes under different cost constraints.
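Classification under a spending-cost threshold can be sketched as a tree traversal that pays each node's test cost from a budget and falls back to the node's majority label when the next test would exceed what remains. The node layout here is hypothetical.

```python
def classify(node, record, budget):
    """Classify a record under a spending-cost threshold (sketch): each
    internal node charges a test cost for reading its attribute; if the
    remaining budget cannot cover the next test, fall back to the node's
    majority label instead of testing further."""
    while 'attr' in node:                      # internal node
        if node['test_cost'] > budget:
            return node['majority']            # budget exhausted: stop early
        budget -= node['test_cost']
        node = node['children'][record[node['attr']]]
    return node['label']                       # leaf reached within budget
```

The same record can thus receive different labels under different budgets, which is the behavior the cost constraints in the experiments exercise.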
8.
In real-world classification problems, different types of misclassification errors often have asymmetric costs, demanding cost-sensitive learning methods that attempt to minimize average misclassification cost rather than plain error rate. Instance weighting and post hoc threshold adjusting are the two major approaches to cost-sensitive classifier learning. This paper compares the effects of these two approaches on several standard, off-the-shelf classification methods. The comparison indicates that the two approaches lead to similar results for some classification methods, such as Naïve Bayes, logistic regression, and backpropagation neural networks, but very different results for others, such as decision tree, decision table, and decision rule learners. The findings from this research have important implications for the selection of a cost-sensitive classifier learning approach, as well as for the interpretation of a recently published finding about the relative performance of Naïve Bayes and decision trees.
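The threshold-adjusting approach compared above can be illustrated with the standard decision rule for binary cost-sensitive classification: predict positive whenever the estimated positive probability exceeds c_fp / (c_fp + c_fn).

```python
def cost_threshold(c_fp, c_fn):
    """Post hoc decision threshold for binary cost-sensitive classification:
    with false-positive cost c_fp and false-negative cost c_fn, predict
    positive when P(positive) >= c_fp / (c_fp + c_fn)."""
    return c_fp / (c_fp + c_fn)

def predict(probs, c_fp, c_fn):
    """Apply the adjusted threshold to a probabilistic classifier's outputs."""
    t = cost_threshold(c_fp, c_fn)
    return [1 if p >= t else 0 for p in probs]
```

Instance weighting, by contrast, changes the training distribution itself, which is why the two approaches can diverge for learners whose probability estimates are poorly calibrated, such as decision trees and rule learners.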
9.
Classification of data with an imbalanced class distribution significantly limits the performance attainable by most standard classifier learning algorithms, which assume a relatively balanced class distribution and equal misclassification costs. The significant difficulty and frequent occurrence of the class imbalance problem indicate the need for extra research effort. The objective of this paper is to investigate meta-techniques applicable to most classifier learning algorithms, with the aim of advancing the classification of imbalanced data. The AdaBoost algorithm is reported to be a successful meta-technique for improving classification accuracy. The insight gained from a comprehensive analysis of AdaBoost's advantages and shortcomings in tackling the class imbalance problem leads to the exploration of three cost-sensitive boosting algorithms, developed by introducing cost items into the learning framework of AdaBoost. Further analysis shows that one of the proposed algorithms tallies with stagewise additive modelling in statistics to minimize the cost-exponential loss. These boosting algorithms are also studied with respect to their weighting strategies towards different types of samples, and their effectiveness in identifying rare cases, through experiments on several real-world medical data sets where the class imbalance problem prevails.
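Introducing cost items into AdaBoost's learning framework can be sketched as a modified reweighting step in which misclassified samples are up-weighted in proportion to their cost item. The three algorithms in the paper introduce cost items in different places; this shows only one illustrative variant.

```python
import math

def reweight(weights, costs, correct, alpha):
    """One reweighting step of a cost-sensitive AdaBoost-style round
    (illustrative): correctly classified samples are down-weighted as in
    plain AdaBoost, while misclassified samples are up-weighted by their
    cost item `c`, then the distribution is renormalized."""
    new = [w * math.exp(-alpha if ok else alpha * c)
           for w, c, ok in zip(weights, costs, correct)]
    z = sum(new)
    return [w / z for w in new]
```

Samples from the rare, costly class thus accumulate weight faster when misclassified, pushing subsequent weak learners to focus on them.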
10.
Current research on cost-sensitive neural networks (CNN) for decision making considers only a single cost, which may not be feasible for real cost-sensitive decisions that involve multiple costs. We propose to modify the existing model, the traditional back-propagation neural network (TNN), by extending the back-propagation error equation to multiple-cost decisions. In this multiple-cost extension, all costs are normalized into the same interval (i.e., between 0 and 1) as the error estimate generated by the TNN. A comparative analysis of accuracy was performed for three pairings under constant costs: (1) TNN vs. CNN with one constant cost (CNN-1C), (2) TNN vs. CNN with two constant costs (CNN-2C), and (3) CNN-1C vs. CNN-2C. A similar accuracy analysis was made for non-constant costs: (1) TNN vs. CNN with one non-constant cost (CNN-1NC), (2) TNN vs. CNN with two non-constant costs (CNN-2NC), and (3) CNN-1NC vs. CNN-2NC. Furthermore, we compared the misclassification cost of the CNNs for both constant and non-constant costs (CNN-1C vs. CNN-2C and CNN-1NC vs. CNN-2NC). Our findings demonstrate a trade-off between accuracy and misclassification cost in the proposed CNN model. To obtain higher accuracy and lower misclassification cost, our results suggest merging all constant cost matrices into one constant cost matrix for decision making. For multiple non-constant cost matrices, our results suggest maintaining separate matrices to enhance accuracy and reduce misclassification cost.
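The normalization step described above can be sketched as scaling each cost matrix into [0, 1] and then inflating the base back-propagation error by the normalized costs incurred by the current prediction. The combination rule is a hypothetical illustration, not the paper's exact error equation.

```python
def normalize(cost_matrix):
    """Min-max scale a cost matrix into [0, 1], matching the interval of
    the TNN error estimate as the multiple-cost extension requires."""
    flat = [c for row in cost_matrix for c in row]
    lo, hi = min(flat), max(flat)
    return [[(c - lo) / (hi - lo) for c in row] for row in cost_matrix]

def cost_weighted_error(error, cost_matrices, predicted, actual):
    """Hypothetical multi-cost error term: the base back-propagation error
    is inflated by each normalized cost m[predicted][actual]."""
    factor = 1.0
    for m in cost_matrices:
        factor *= 1.0 + m[predicted][actual]
    return error * factor
```

With multiple matrices, each cost multiplies the error independently; merging matrices beforehand, as the constant-cost results suggest, collapses this product into a single factor.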
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号