20 similar documents retrieved (search time: 15 ms)
1.
Multimedia Tools and Applications - A rough-set-based multimodal histogram thresholding technique is effective for medical image segmentation. However, it is difficult to obtain the significant peaks...
2.
Kimiaki Shirahama Yuta Matsuoka Kuniaki Uehara 《Multimedia Tools and Applications》2012,57(1):145-173
This paper develops a query-by-example method for retrieving shots of an event (event shots) using example shots provided by a user. The following three problems are mainly addressed. Firstly, event shots cannot be retrieved using a single model because they contain significantly different features due to varied camera techniques, settings and so forth. This is overcome by using rough set theory to extract multiple classification rules, with each rule specialized to retrieve a portion of the event shots. Secondly, since a user can only provide a small number of example shots, the number of event shots retrieved by the extracted rules is inevitably limited. We thus incorporate bagging and the random subspace method, so that classifiers characterize significantly different event shots depending on the example shots and feature dimensions used. However, this can result in the retrieval of many unnecessary shots. Rough set theory is used to combine classifiers into rules which provide greater retrieval accuracy. Lastly, counter-example shots, which are a necessity for rough set theory, are not provided by the user. Hence, a partially supervised learning method is used to collect these from shots other than the example shots. Counter-example shots that are as similar to the example shots as possible are collected, because they are useful for characterizing the boundary between event shots and the remaining shots. The proposed method is tested on TRECVID 2009 video data.
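A minimal sketch of the bagging plus random-subspace idea mentioned in this abstract, using scikit-learn's BaggingClassifier. The rough-set rule extraction and the partially supervised collection of counter-example shots are not reproduced, and the data are synthetic stand-ins for shot features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

# y = 1 plays the role of an "event shot"; features are synthetic stand-ins
X, y = make_classification(n_samples=600, n_features=50, weights=[0.9, 0.1],
                           random_state=0)

model = BaggingClassifier(
    n_estimators=30,      # many classifiers, each trained on a different view of the data
    max_samples=0.7,      # bagging: random subsample of the (few) example shots
    max_features=0.3,     # random subspace: each classifier sees a random feature subset
    random_state=0,
)
print(cross_val_score(model, X, y, cv=5, scoring="f1").mean())
```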
3.
Lin Feng 《Journal of Experimental & Theoretical Artificial Intelligence》2013,25(2):223-231
The study of intrusion detection techniques has been one of the hot topics in the field of network security in recent years. For high-dimensional intrusion detection data sets, and given a single classifier's weak classification ability on data sets with many classes, a novel intrusion detection approach, termed intrusion detection based on the integration of multiple rough classifiers, is proposed. First, several training data sets are generated from the intrusion detection data by random sampling. By combining rough sets and a quantum genetic algorithm, a subset of attributes is selected. Then a classifier is trained on each reduced data set, which establishes a group of rough classifiers. Finally, the intrusion data classification result is obtained according to the absolute majority voting strategy. The experimental results illustrate the effectiveness of our method.
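A hedged sketch of the multiple-classifier integration scheme described in this abstract. The rough-set / quantum-genetic-algorithm attribute reduction is not reproduced; a random feature subset stands in for it, decision trees stand in for the rough classifiers, and absolute majority voting combines their predictions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=40, n_classes=3,
                           n_informative=10, random_state=0)

ensemble = []
for _ in range(15):
    rows = rng.choice(len(X), size=len(X) // 2, replace=False)   # random sampling of training data
    cols = rng.choice(X.shape[1], size=12, replace=False)        # stand-in for attribute reduction
    clf = DecisionTreeClassifier(max_depth=6).fit(X[rows][:, cols], y[rows])
    ensemble.append((clf, cols))

def predict(X_new):
    votes = np.stack([clf.predict(X_new[:, cols]) for clf, cols in ensemble])
    # absolute majority voting: the label chosen by most base classifiers wins
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print((predict(X) == y).mean())
```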
4.
Application of rough set classifiers for determining hemodialysis adequacy in ESRD patients (cited 1 time: 0 self-citations, 1 by others)
The incidence and the prevalence of end-stage renal disease (ESRD) in Taiwan are the highest in the world. Therefore, hemodialysis (HD) therapy is a major concern and an important challenge due to the shortage of donated organs for transplantation. Previous researchers developed various forecasting models based on statistical methods and artificial intelligence techniques to address the real-world problems of HD therapy that are faced by ESRD patients and their doctors in healthcare services. Because the performance of these forecasting models is highly dependent on the context and the data used, it would be valuable to develop more suitable methods for applications in this field. This study presents an integrated procedure based on rough set classifiers that aims to provide an alternative method for predicting the urea reduction ratio for assessing HD adequacy for ESRD patients and their doctors. The proposed procedure is illustrated in practice by examining a dataset from a specific medical center in Taiwan. The experimental results reveal that the proposed procedure achieves better accuracy, with a lower standard deviation, than the listed methods. The output created by the rough set LEM2 algorithm is a comprehensible decision rule set that can be applied in knowledge-based healthcare services as desired. The analytical results provide useful information for both academics and practitioners.
5.
To address the long computation time and low model-identification accuracy of traditional regression analysis methods for multi-model data sets, a new heuristic robust regression analysis method is proposed. Imitating the clustering-learning principle of the immune system, the method uses a B-cell network to classify and store the data set, classifying data points by how well they fit each model, which improves classification accuracy. The model-set extraction process is decomposed into repeated "clustering", "regression" and "re-clustering" attempts, and a parallel heuristic search is used to approach the solution of the model set. Simulation results show that the regression-analysis time of the proposed method is significantly shorter than that of traditional algorithms, and its model-identification accuracy is significantly higher. On an 8-model data set, the best traditional algorithm was sequential extraction based on RANSAC, with an average model-identification accuracy of 90.37% and a run time of 53.3947 s; traditional algorithms whose computation time was below 0.5 s achieved less than 1% accuracy; the proposed algorithm required only 0.5094 s and reached 98.25% accuracy.
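The abstract's immune B-cell network and parallel heuristic search are not reproduced here; the following sketch only illustrates the underlying "cluster, regress, re-cluster" loop on a synthetic two-line data set, with points re-assigned to whichever current model fits them best.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic multi-model data: two noisy lines mixed together
x = rng.uniform(0, 10, 400)
y = np.where(rng.random(400) < 0.5, 2.0 * x + 1.0, -1.5 * x + 8.0) + rng.normal(0, 0.3, 400)

n_models = 2
coeffs = [np.array([rng.normal(), rng.normal()]) for _ in range(n_models)]  # [slope, intercept]

for _ in range(20):
    # "clustering": assign each point to the model with the smallest residual
    residuals = np.stack([np.abs(y - (a * x + b)) for a, b in coeffs])
    labels = residuals.argmin(axis=0)
    # "regression": refit each model on its assigned points
    for k in range(n_models):
        mask = labels == k
        if mask.sum() >= 2:
            coeffs[k] = np.polyfit(x[mask], y[mask], 1)

print([np.round(c, 2) for c in coeffs])  # should approach [2, 1] and [-1.5, 8]
```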
6.
Credit scoring analysis is an important activity, especially now that a huge number of defaults has been one of the main causes of the financial crisis. Among the many different tools used to model credit risk, the recent development of rough set models has proved effective. The original development of rough set theory has been widely generalized and combined with other approaches to uncertain reasoning, especially probability and fuzzy set theories. Since coherent conditional probability assessments cope well with the problem of unifying these different approaches, a merging of fuzzy rough set theory with this subjectivist approach is proposed. Specifically, expert partial probabilistic evaluations are encompassed inside a gradual decision rule structure, with coherence of the conclusion as a guideline. In line with Bayesian rough set models, credibility degrees of multiple premises are introduced through conditional probability assessments. Nonetheless, discernibility with this method remains too fine. Therefore, the basic partition is coarsened by equivalence classes based on the arity of positively, negatively and neutrally related criteria. A membership function, which grades the likelihood of default, is introduced by a particular choice of t-norms and t-conorms. To build and test the model, real data related to a sample of firms are used.
7.
8.
Autonomous clustering using rough set theory (cited 1 time: 0 self-citations, 1 by others)
Charlotte Bean Chandra Kambhampati 《International Journal of Automation and Computing》2008,5(1):90-102
This paper proposes a clustering technique that minimizes the need for subjective human intervention and is based on elements of rough set theory (RST). The proposed algorithm is unified in its approach to clustering and makes use of both local and global data properties to obtain clustering solutions. It handles single-type and mixed attribute data sets with ease. The results from three data sets of single and mixed attribute types are used to illustrate the technique and establish its efficiency.
9.
The development of credit scoring models has become a very important issue, as the credit industry is highly competitive. Therefore, numerous credit scoring models have been widely studied in the area of statistics to improve the accuracy of credit scoring during the past few years. This study constructs a hybrid SVM-based credit scoring model to evaluate an applicant's credit score from the applicant's input features: (1) using a neighborhood rough set to select input features; (2) using grid search to optimize RBF kernel parameters; (3) using the optimal input features and model parameters to solve the credit scoring problem with 10-fold cross-validation; (4) comparing the accuracy of the proposed method with other methods. Experimental results demonstrate that the neighborhood rough set and SVM based hybrid classifier has the best credit scoring capability compared with other hybrid classifiers. It also outperforms linear discriminant analysis, logistic regression and neural networks.
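A rough sketch of steps (1)-(3) above, with a generic univariate filter (SelectKBest) standing in for the neighborhood-rough-set feature selection and synthetic data standing in for applicant records; the grid search over the RBF kernel parameters and the 10-fold cross-validation follow the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)  # stand-in credit data

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),   # placeholder for neighborhood rough set reduction
    ("scale", StandardScaler()),
    ("svm", SVC(kernel="rbf")),
])
grid = GridSearchCV(pipe,
                    {"svm__C": [0.1, 1, 10, 100], "svm__gamma": [0.001, 0.01, 0.1, 1]},
                    cv=10)                      # steps (2)-(3): parameter search with 10-fold CV
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```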
10.
Hepatitis is a disease that is seen at all ages. Hepatitis alone is not lethal, but its early diagnosis and treatment are crucial because it can trigger other diseases. In this study, a new hybrid medical decision support system based on rough sets (RS) and the extreme learning machine (ELM) is proposed for the diagnosis of hepatitis. RS-ELM consists of two stages. In the first, redundant features are removed from the data set through the RS approach. In the second, classification is performed by the ELM using the remaining features. The hepatitis data set, taken from the UCI machine learning repository, was used to test the proposed hybrid model. A major part of the data set (48.3%) contains missing values. As removing records with missing values would lead to data loss, feature selection was done in the first stage without deleting them. In the second stage, classification was performed by the ELM after the removal of missing values from the reduced, sub-featured data sets of different dimensions. The results showed that a classification accuracy as high as 100.00% was achieved with RS-ELM, and the RS-ELM model was considerably more successful than the other methods in the literature. Furthermore, this study determines the most significant features for the diagnosis of hepatitis. The proposed method is considered likely to be useful in similar medical applications.
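A minimal extreme learning machine sketch for the second (classification) stage described above. The rough-set feature selection is omitted and a public dataset stands in for the hepatitis data; hidden-layer weights are random and only the output weights are solved for, in closed form.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer   # stand-in for the hepatitis data
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_hidden = 200
W = rng.normal(size=(X.shape[1], n_hidden))        # random input-to-hidden weights (never trained)
b = rng.normal(size=n_hidden)                      # random hidden biases

def hidden(X):
    return np.tanh(X @ W + b)                      # hidden-layer activations

T = np.eye(2)[y_tr]                                # one-hot targets
beta, *_ = np.linalg.lstsq(hidden(X_tr), T, rcond=None)   # output weights via least squares

y_pred = hidden(X_te) @ beta
print((y_pred.argmax(axis=1) == y_te).mean())
```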
11.
Classification and rule induction using rough set theory (cited 3 times: 1 self-citation, 2 by others)
Rough set theory (RST) offers an interesting and novel approach both to the generation of rules for use in expert systems and to the traditional statistical task of classification. The method is based on a novel classification metric, implemented as upper and lower approximations of a set and, more generally, in terms of positive, negative and boundary regions. Classification accuracy, which may be set by the decision maker, is measured in terms of conditional probabilities for equivalence classes, and the method involves a search for subsets of attributes (called 'reducts') which do not entail a loss of classification quality. To illustrate the technique, RST is employed within a state-level comparison of education expenditure in the USA.
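An illustrative toy example of the rough-set notions this abstract relies on: equivalence classes induced by a condition attribute, and the lower and upper approximations (and boundary region) of a target decision class. The decision table below is invented purely for illustration.

```python
from collections import defaultdict

# toy decision table: condition attribute with values high/med/low, binary decision yes/no
table = {
    "o1": ("high", "yes"), "o2": ("high", "yes"), "o3": ("high", "no"),
    "o4": ("low",  "no"),  "o5": ("low",  "no"),  "o6": ("med",  "yes"),
}
target = {o for o, (_, d) in table.items() if d == "yes"}   # the decision class of interest

# equivalence (indiscernibility) classes induced by the condition attribute
classes = defaultdict(set)
for o, (a, _) in table.items():
    classes[a].add(o)

lower = {o for c in classes.values() if c <= target for o in c}   # certainly in the class
upper = {o for c in classes.values() if c & target for o in c}    # possibly in the class
print("lower:", sorted(lower))
print("upper:", sorted(upper))
print("boundary:", sorted(upper - lower))
```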
12.
Combining rough sets and support vector regression to predict reservoir petrophysical parameters (cited 1 time: 0 self-citations, 1 by others)
To predict three important reservoir petrophysical parameters, porosity, permeability and saturation, more accurately, a method combining rough-set attribute reduction and support vector regression is proposed. First, rough set theory is used to reduce the attributes of well-logging data samples, selecting the decision attributes and forming a new sample set. Then support vector regression is applied to train on the data samples, build a regression model and make predictions for the test samples. Experimental results show that the method achieves a good fit, reduces the computational complexity of training the support vector machine, and improves the accuracy of petrophysical parameter prediction. The method can provide a basis for decision making in reservoir development.
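A hedged sketch of the reduction-plus-regression idea above, with a simple univariate filter standing in for the rough-set attribute reduction and synthetic data standing in for well-logging samples; the target plays the role of one petrophysical parameter such as porosity.

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = make_regression(n_samples=500, n_features=20, n_informative=6,
                       noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(
    SelectKBest(f_regression, k=6),   # placeholder for rough-set attribute reduction
    StandardScaler(),
    SVR(kernel="rbf", C=10.0),
)
model.fit(X_tr, y_tr)
print("R^2 on test samples:", model.score(X_te, y_te))
```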
13.
14.
Formal concept analysis (FCA) was originally proposed by Wille (1982) and is an important theory for data analysis and knowledge discovery. AFS (axiomatic fuzzy set) algebra was proposed by Liu [X. Liu, The fuzzy theory based on AFS algebras and AFS structure, Journal of Mathematical Analysis and Applications 217 (1998) 459–478; X. Liu, The topology on AFS algebra and AFS structure, Journal of Mathematical Analysis and Applications 217 (1998) 479–489] and is a semantic methodology related to fuzzy theory. Combining the above two theories, we propose the AFS formal concept, which can be viewed as a generalization and development of the monotone concept proposed by Deogun and Saquer (2003). Moreover, we show that the set of all AFS formal concepts forms a complete lattice. AFS formal concepts can be applied to represent the logic operations of queries in information retrieval. Furthermore, we give an approach to finding the AFS formal concepts whose intents (extents) approximate any element of the AFS algebra by virtue of rough set theory.
15.
Several methods for the optimization of multiple response problems using planned experimental data have been proposed in the literature. Among them, an integrated approach of multiple-regression-based optimization using an overall performance criterion has become quite popular. In this article, we examine the effectiveness of five performance metrics that are used for the optimization of multiple response problems. The usefulness of these performance metrics is compared with respect to a utility measure, namely the expected total non-conformance (NC), for three experimental datasets taken from the literature. It is observed that the multiple-regression-based weighted signal-to-noise ratio is the most effective performance metric for finding an optimal solution to multiple response problems.
16.
Luis V. Santana-Quintero Alfredo G. Hernández-Díaz Julián Molina Carlos A. Coello Coello Rafael Caballero 《Computers & Operations Research》2010,37(3):470-480
The aim of this paper is to show how the hybridization of a multi-objective evolutionary algorithm (MOEA) and a local search method based on the use of rough set theory is a viable alternative to obtain a robust algorithm able to solve difficult constrained multi-objective optimization problems at a moderate computational cost. This paper extends a previously published MOEA [Hernández-Díaz AG, Santana-Quintero LV, Coello Coello C, Caballero R, Molina J. A new proposal for multi-objective optimization using differential evolution and rough set theory. In: 2006 genetic and evolutionary computation conference (GECCO’2006). Seattle, Washington, USA: ACM Press; July 2006], which was limited to unconstrained multi-objective optimization problems. Here, the main idea is to use this sort of hybrid approach to approximate the Pareto front of a constrained multi-objective optimization problem while performing a relatively low number of fitness function evaluations. Since in real-world problems the cost of evaluating the objective functions is the most significant, our underlying assumption is that, by aiming to minimize the number of such evaluations, our MOEA can be considered efficient. As in its previous version, our hybrid approach operates in two stages: in the first one, a multi-objective version of differential evolution is used to generate an initial approximation of the Pareto front. Then, in the second stage, rough set theory is used to improve the spread and quality of this initial approximation. To assess the performance of our proposed approach, we adopt, on the one hand, a set of standard bi-objective constrained test problems and, on the other hand, a large real-world problem with eight objective functions and 160 decision variables. The first set of problems are solved performing 10,000 fitness function evaluations, which is a competitive value compared to the number of evaluations previously reported in the specialized literature for such problems. The real-world problem is solved performing 250,000 fitness function evaluations, mainly because of its high dimensionality. Our results are compared with respect to those generated by NSGA-II, which is a MOEA representative of the state-of-the-art in the area.
17.
Assessing classifiers from two independent data sets using ROC analysis: a nonparametric approach (cited 2 times: 0 self-citations, 2 by others)
Yousef WA Wagner RF Loew MH 《IEEE transactions on pattern analysis and machine intelligence》2006,28(11):1809-1817
This paper considers binary classification. We assess a classifier in terms of the area under the ROC curve (AUC). We estimate three important parameters: the conditional AUC (conditional on a particular training set) and the mean and variance of this AUC. We also derive a closed-form expression for the variance of the estimator of the AUC. This expression exhibits several components of variance that facilitate an understanding of the sources of uncertainty in that estimate. In addition, we estimate this variance, i.e., the variance of the conditional AUC estimator. Our approach is nonparametric and based on general methods from U-statistics; it addresses the case where the data distribution is neither known nor modeled and where there are only two available data sets, the training and testing sets. Finally, we illustrate some simulation results for these estimators.
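A small illustration of the nonparametric, U-statistic view of the AUC underlying this paper: the empirical AUC is the fraction of (positive, negative) score pairs that the classifier ranks correctly. The scores below are synthetic, and the sketch does not reproduce the paper's variance decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
scores_pos = rng.normal(1.0, 1.0, 300)   # classifier scores for positive cases
scores_neg = rng.normal(0.0, 1.0, 300)   # classifier scores for negative cases

# pairwise comparison kernel: 1 if the positive is ranked above the negative, 0.5 for ties
diff = scores_pos[:, None] - scores_neg[None, :]
auc_hat = ((diff > 0) + 0.5 * (diff == 0)).mean()
print("empirical AUC (Mann-Whitney estimate):", round(auc_hat, 4))
```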
18.
In this paper, rough set theory is used to extract roughly-correct inference rules from information systems. Based on this idea, the learning algorithm ERCR is presented. In order to refine the learned roughly-correct inference rules, a knowledge-based neural network is used. The method presented here effectively combines the advantages of rough set theory and neural networks.
19.
Context has been identified as an important factor in recommender systems, and much research has been done on context-aware recommendation. However, in current approaches the weights of contextual information are all the same, which limits the accuracy of the results. This paper proposes a context-aware recommender system that extracts, measures and incorporates significant contextual information in recommendation. The approach is based on rough set theory and collaborative filtering and involves a three-step process. First, significant attributes representing contextual information are extracted and measured, using rough set theory, to identify candidate items for recommendation. Then user similarity is measured with the target context taken into consideration. Finally, collaborative filtering is adopted to recommend appropriate items. Evaluation experiments show that the proposed approach helps to improve recommendation quality.
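A hedged sketch of the final collaborative-filtering step, with a crude stand-in for context: only ratings given in the target context are used when measuring user similarity. The rough-set extraction and weighting of contextual attributes are not reproduced; the ratings and contexts below are invented for illustration.

```python
import numpy as np

# ratings[u, i] (0 = unrated) and the context in which each rating was given
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [1, 0, 5, 4],
                    [0, 1, 4, 5]], dtype=float)
context = np.array([["wd", "we", "",   "wd"],
                    ["wd", "wd", "we", ""],
                    ["we", "",   "wd", "wd"],
                    ["",   "we", "wd", "wd"]])

def recommend(user, target_ctx="wd", k=2):
    # keep only ratings made in the target context when comparing users
    masked = np.where(context == target_ctx, ratings, 0.0)
    norms = np.linalg.norm(masked, axis=1) + 1e-9
    sims = masked @ masked[user] / (norms * norms[user])        # cosine similarity
    sims[user] = -1.0                                           # exclude the user themself
    neighbours = np.argsort(sims)[-k:]                          # k most similar users
    scores = sims[neighbours] @ ratings[neighbours]             # similarity-weighted scores
    scores[ratings[user] > 0] = -np.inf                         # skip already-rated items
    return int(np.argmax(scores))

print("recommended item for user 0:", recommend(0))
```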
20.
Rawat Jyoti Singh Annapurna Bhadauria H. S. Virmani Jitendra Devgun J. S. 《Multimedia Tools and Applications》2017,76(18):19057-19085
Multimedia Tools and Applications - In the current context of haematology, the blood cancer acute lymphoblastic leukemia is very frequently encountered in medical practice; it is characterized by...