Similar Articles
 20 similar articles found (search time: 109 ms)
1.
A comprehensive evaluation algorithm based on time-stratified decision matrices   Cited by 1 in total (0 self-citations, 1 by others)
The comprehensive evaluation system is represented by the four-tuple (A, Z, X^(k), D), where the decision matrix X^(k) is a three-dimensional matrix with an object dimension i, an indicator dimension j, and a time dimension k. Partitioning X^(k) along the time dimension yields m two-dimensional matrices, each representing, for one year, the attribute set of n evaluation objects against p indicators. The paper discusses the multi-indicator comprehensive evaluation problem for time-stratified decision matrices, describes a solution algorithm for such matrices, and implements it in the Visual FoxPro database language.
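The time-slicing idea above can be sketched as follows. This is a minimal illustration with hypothetical data and assumed equal indicator weights, not the paper's Visual FoxPro implementation: the 3-D matrix X[i, j, k] is cut along the time axis into one n × p decision matrix per year, each of which is evaluated separately.

```python
import numpy as np

# Hypothetical 3-D decision matrix: n objects, p indicators, m years.
rng = np.random.default_rng(0)
n, p, m = 4, 3, 2
X = rng.random((n, p, m))            # X[i, j, k]
w = np.full(p, 1.0 / p)              # assumed equal indicator weights

scores_by_year = []
for k in range(m):
    Xk = X[:, :, k]                  # the k-th yearly n x p decision matrix
    norm = Xk / Xk.max(axis=0)       # simple per-indicator max-normalisation
    scores_by_year.append(norm @ w)  # one composite score per object

scores = np.array(scores_by_year)    # shape (m, n): one score row per year
```

Each row of `scores` then ranks the n objects within one year; the normalisation and aggregation steps stand in for whatever scheme the paper actually uses.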

2.
Algorithm and implementation of the TOPSIS (approaching-ideal-solution) method for object-stratified decision matrices   Cited by 13 in total (1 self-citation, 12 by others)
The TOPSIS method is applied to the multi-indicator comprehensive evaluation of object-stratified decision matrices, and the algorithm is implemented in the Visual FoxPro database language.
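As a rough sketch of the standard TOPSIS procedure (the decision matrix and weights below are hypothetical, all criteria are assumed benefit-type, and NumPy stands in for the paper's Visual FoxPro implementation):

```python
import numpy as np

# Hypothetical 4-alternative, 3-criterion decision matrix.
X = np.array([[7., 9., 8.],
              [8., 7., 6.],
              [9., 6., 9.],
              [6., 8., 7.]])
w = np.full(3, 1.0 / 3)                       # assumed equal weights

R = X / np.linalg.norm(X, axis=0)             # vector normalisation
V = R * w                                     # weighted normalised matrix
v_pos, v_neg = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal solutions
d_pos = np.linalg.norm(V - v_pos, axis=1)     # distance to the ideal
d_neg = np.linalg.norm(V - v_neg, axis=1)     # distance to the anti-ideal
closeness = d_neg / (d_pos + d_neg)           # relative closeness in [0, 1]
ranking = np.argsort(-closeness)              # best alternative first
```

Alternatives are ranked by relative closeness: the closer to the ideal and the farther from the anti-ideal, the better.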

3.
Building on reference [1], this paper studies a MATLAB implementation of the TOPSIS method for object-stratified decision matrices. Thanks to MATLAB's powerful matrix operations, applying it to multi-indicator comprehensive evaluation is feasible and offers convenience and efficiency, among other advantages.

4.
This paper discusses a comprehensive evaluation algorithm for time-stratified decision matrices based on an optimal investment decision model. The algorithm uses the model to obtain the optimal T. Cover investment decision and is implemented in a relational database language.

5.
This paper introduces the decision matrix model in DLCES, a computer-based decision support system for hierarchical comprehensive evaluation, and discusses the definition of models in DLCES, the description of the model dictionary, the principles of model construction, and the functions of the model-base management system.

6.
A hybrid multi-attribute decision-making method for interval-probability risk problems with unknown attribute weights   Cited by 2 in total (0 self-citations, 2 by others)
For hybrid multi-attribute decision-making problems under interval-probability risk with unknown indicator weights, a decision method based on entropy weights and projection theory is proposed. First, transformation relations from linguistic and uncertain linguistic variables to trapezoidal fuzzy numbers are established, converting the hybrid data into uniform trapezoidal fuzzy numbers. Then, expected values transform the risk-type decision matrix into a deterministic one; the entropy weight method determines the indicator weights, the weighted decision matrix is computed, and the alternatives are ranked by their relative closeness based on projections onto the positive and negative ideal alternatives. Finally, an application case demonstrates the effectiveness of the method.
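The entropy weight step mentioned above can be sketched as follows (hypothetical crisp decision matrix; the paper applies this after converting trapezoidal fuzzy numbers to expected values): indicators whose values vary more across alternatives carry more information and receive larger weights.

```python
import numpy as np

# Hypothetical 4 x 3 decision matrix (benefit-type indicators).
X = np.array([[0.2, 0.8, 0.5],
              [0.4, 0.7, 0.5],
              [0.9, 0.6, 0.5],
              [0.6, 0.9, 0.5]])
n = X.shape[0]

P = X / X.sum(axis=0)                                # column-wise proportions
eps = 1e-12                                          # avoid log(0)
E = -(P * np.log(P + eps)).sum(axis=0) / np.log(n)   # entropy per indicator
w = (1 - E) / (1 - E).sum()                          # redundancy -> weights
```

Note that the third indicator is constant across alternatives, so its entropy is maximal and its weight is driven to (nearly) zero, as the method intends.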

7.
Under fuzzy linguistic information, a group decision-making method with a mentality index for trapezoidal fuzzy numbers is proposed. With incomplete information on both attribute weights and decision-maker weights, the mentality index is introduced to transform the trapezoidal-fuzzy-number decision matrices of the fuzzy linguistic assessments into decision matrices with mentality indices. Fuzzy linear programs built from the incomplete decision information are solved for the attribute weights and the decision-maker weights, respectively. Aggregating the decision makers' mentality indices yields the group risk attitude, and fuzzily aggregating the group risk attitude with the alternatives' group evaluation values produces a ranking of the whole alternative set. A worked example illustrates the effectiveness, feasibility, and operability of the method.

8.
A mentality-index method for interval multi-attribute decision making   Cited by 6 in total (1 self-citation, 5 by others)
For multi-attribute decision-making problems in which both the decision maker's preference information and the attribute values are interval numbers, a new decision method is proposed. It transforms the interval decision matrix into a decision matrix with mentality indices and determines the attribute weights objectively by solving a bi-objective program that minimizes the total absolute deviation between subjective and objective preferences while maximizing the differences among the alternatives' comprehensive attribute values, thereby yielding a ranking of the alternatives. Decision makers in different states of mind can decide by adjusting their mentality index, which better matches practice. An application example shows the method's effectiveness and practicality.

9.
An attribute reduction and rule extraction algorithm based on decision matrices   Cited by 16 in total (1 self-citation, 16 by others)
This paper studies attribute reduction and value reduction in rough set theory, extends the definition of the decision matrix, and proposes a complete attribute reduction algorithm based on decision matrices. The algorithm partitions the universe into equivalence classes by the decision attribute and then computes the attribute reduction from the decision matrix of each equivalence class. Compared with the discernibility matrix, the decision matrix effectively reduces storage and improves the efficiency of the reduction algorithm. Value reduction via decision matrices also yields a new rule extraction algorithm that produces more concise decision rules. Experimental results show that the proposed attribute and value reduction algorithms are correct, effective, and feasible.

10.
The relationships among several groups of possibility degree formulas are studied, and a possibility-degree-matrix-based method for interval multi-attribute decision making (MADM) is proposed. The interval attribute values under each indicator are pairwise compared to build one possibility degree matrix per indicator; the ranking vectors of these matrices convert the interval-valued decision matrix into one measured by exact numbers, turning the uncertain problem of determining indicator weights in interval MADM into a deterministic one, after which the possibility-degree method for ranking interval numbers yields the optimal alternative. Experimental results demonstrate the feasibility and effectiveness of the proposed method. Finally, the strategy of converting uncertainty to determinacy in multi-attribute decision making, and the problems it may introduce, are discussed.
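The pairwise comparison step can be sketched with one common possibility degree formula and the standard ranking vector of the resulting matrix (the interval data are hypothetical, and the paper compares several formula variants rather than committing to this one):

```python
def possibility(a, b):
    """Possibility degree P(a >= b) for intervals a = (aL, aU), b = (bL, bU)."""
    (aL, aU), (bL, bU) = a, b
    denom = (aU - aL) + (bU - bL)
    if denom == 0:                       # both intervals degenerate to points
        return 0.5 if aL == bL else float(aL > bL)
    return min(max((aU - bL) / denom, 0.0), 1.0)

# Hypothetical interval values of three alternatives under one indicator.
intervals = [(3.0, 5.0), (4.0, 6.0), (2.0, 4.5)]
n = len(intervals)
P = [[possibility(intervals[i], intervals[j]) for j in range(n)]
     for i in range(n)]
# Standard ranking vector of a fuzzy complementary (possibility) matrix:
v = [(sum(row) + n / 2 - 1) / (n * (n - 1)) for row in P]
best = v.index(max(v))                   # index of the best interval
```

The ranking vector `v` is the exact-number column that replaces the interval column in the converted decision matrix.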

11.
Many approaches attempt to improve naive Bayes and have been broadly divided into five main categories: (1) structure extension; (2) attribute weighting; (3) attribute selection; (4) instance weighting; (5) instance selection, also called local learning. In this paper, we work on structure extension and single out a random Bayes model by augmenting the structure of naive Bayes. We call it random one-dependence estimators, or simply RODE. In RODE, each attribute has at most one parent among the other attributes, and this parent is randomly selected from the log2(m) attributes (where m is the number of attributes) with maximal conditional mutual information. Our work introduces randomness into Bayesian network classifiers. Experimental results on a large number of UCI data sets validate its effectiveness in terms of classification, class probability estimation, and ranking.

12.

In the fields of pattern recognition and machine learning, the use of data preprocessing algorithms has been increasing in recent years to achieve high classification performance. In particular, data preprocessing prior to classification has become indispensable for medical datasets with nonlinear and imbalanced data distributions. In this study, a new data preprocessing method is proposed for classifying the Parkinson, hepatitis, Pima Indians, single proton emission computed tomography (SPECT) heart, and thoracic surgery medical datasets, all of which have nonlinear and imbalanced data distributions and were taken from the UCI machine learning repository. The proposed method consists of three steps. In the first step, the cluster centers of each attribute are calculated using the k-means, fuzzy c-means, and mean shift clustering algorithms. In the second step, the absolute differences between the data in each attribute and the cluster centers are calculated, and the average of these differences is computed for each attribute. In the final step, weighting coefficients are calculated by dividing the mean difference into the cluster centers, and weighting is performed by multiplying the obtained coefficients by the attribute values in the dataset. Three attribute weighting methods are thus proposed: (1) similarity-based attribute weighting in k-means clustering, (2) similarity-based attribute weighting in fuzzy c-means clustering, and (3) similarity-based attribute weighting in mean shift clustering. The aim is to gather the data in each class together and reduce the within-class variance; by reducing the variance within each class, the data in each class are drawn together while the discrimination between classes is further increased. For comparison with other methods in the literature, random subsampling is used to handle imbalanced dataset classification. After the attribute weighting process, four classification algorithms, linear discriminant analysis, the k-nearest neighbor classifier, the support vector machine, and the random forest classifier, are used to classify the imbalanced medical datasets. To evaluate the proposed models, classification accuracy, precision, recall, area under the ROC curve, the κ value, and the F-measure are used. In training and testing the classifier models, three schemes are used: the 50-50% train-test holdout, the 60-40% train-test holdout, and tenfold cross-validation. The experimental results show that the proposed attribute weighting methods achieve higher classification performance than the random subsampling method in classifying imbalanced medical datasets.
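The three-step weighting scheme above can be sketched for the k-means variant. The dataset, the number of clusters, and the exact weight formula (here, the mean cluster-center magnitude divided by the mean point-to-nearest-center difference) are illustrative assumptions; the paper's precise formula and its fuzzy c-means and mean shift variants are not reproduced.

```python
import numpy as np

def kmeans_1d(x, k=2, iters=20, seed=0):
    """Plain 1-D Lloyd's k-means, enough for per-attribute cluster centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean()
    return centers

def attribute_weights(X, k=2):
    """Step 1-3 sketch: centers, mean |value - nearest center|, ratio weight."""
    weights = []
    for j in range(X.shape[1]):
        col = X[:, j].astype(float)
        centers = kmeans_1d(col, k)
        nearest = centers[np.argmin(np.abs(col[:, None] - centers[None, :]),
                                    axis=1)]
        mean_diff = np.abs(col - nearest).mean()
        weights.append(np.abs(centers).mean() / (mean_diff + 1e-12))
    return np.array(weights)

# Hypothetical data: attribute 0 clusters tightly, attribute 1 is spread out.
X = np.array([[1.0, 10.0], [1.1, 20.0], [5.0, 30.0], [5.2, 40.0]])
w = attribute_weights(X)
X_weighted = X * w       # weighting multiplies the attribute values
```

Tightly clustered attributes get larger weights, pulling same-class data together, which is the stated goal of reducing within-class variance.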

13.
徐政  邓安生  曲衍鹏 《计算机应用研究》2021,38(5):1355-1359,1364
To address the fact that the traditional K-nearest-neighbor algorithm treats every attribute as equally important when computing similarity between samples, a method based on the earth mover's distance is proposed to compute a weight for each condition attribute. First, two distributions to be compared for consistency are constructed from the nearest-neighbor relation. Next, an inconsistency evaluation function based on the earth mover's distance is designed to measure, for each attribute, the degree of inconsistency between each sample's nearest-neighbor set and the equivalence partition of that set refined by the decision attribute. Finally, the neighborhood inconsistency is converted into the importance of the corresponding attribute, producing an attribute-weighted K-nearest-neighbor classifier. Experiments on multiple datasets show that the method has low sensitivity to parameters, significantly improves the classification accuracy of K-nearest neighbors across a range of parameters, and outperforms several existing classification methods on multiple metrics. The results indicate that the method selects more accurate neighbor samples through attribute weighting and can be widely applied in neighbor-based machine learning methods.
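The inconsistency measure above is built on the earth mover's distance between two distributions. As a sketch, here is the 1-D earth mover's distance between two equal-size empirical samples, which reduces to the mean gap between the sorted samples; how the paper derives the two distributions from the neighborhoods and converts inconsistency into weights is not reproduced here.

```python
import numpy as np

def emd_1d(x, y):
    """Earth mover's distance between two equal-size 1-D empirical samples."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    assert x.shape == y.shape, "equal-size samples assumed in this sketch"
    return float(np.abs(x - y).mean())

d = emd_1d([0.0, 1.0, 2.0], [0.5, 1.5, 2.5])   # 0.5
```

An attribute whose neighborhoods disagree more with the decision-induced partition yields a larger distance, and hence (after the paper's conversion) a different importance.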

14.
Building on the traditional K-means clustering algorithm, this paper proposes Syn-K-means, an algorithm with combined weighting by entropy and the mean-square-deviation method. The combined weights raise the within-cluster similarity of the clustering result and thereby improve clustering accuracy. The feature weights are computed from the mean square deviation, a basic descriptor of numerical characteristics in probability theory, and from entropy, the basic measure of information in information theory; the combined weighting coefficient is determined by subjective assignment. Experimental results show that Syn-K-means outperforms standard K-means in clustering accuracy.
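The combined weighting idea can be sketched as follows: a dispersion-based weight vector and an entropy-based weight vector are each normalised and then blended with a subjectively chosen coefficient before the features are scaled and fed to ordinary k-means. The data, the blending factor, and the exact normalisations are illustrative assumptions.

```python
import numpy as np

# Hypothetical 4 x 2 feature matrix.
X = np.array([[1.0, 100.0], [2.0, 110.0], [3.0, 90.0], [4.0, 105.0]])

std = X.std(axis=0)
w_std = std / std.sum()                       # dispersion-based weights

P = X / X.sum(axis=0)
E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(X.shape[0])
w_ent = (1 - E) / (1 - E).sum()               # entropy-based weights

alpha = 0.5                                   # subjectively set coefficient
w = alpha * w_ent + (1 - alpha) * w_std       # combined feature weights
X_weighted = X * w                            # input to standard k-means
```

Since both component vectors sum to one, the blend does too, so the weighting only rescales the relative importance of the features.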

15.
In multi-attribute group decision making, for problems in which every decision maker has a multiplicative preference relation over the alternatives under each attribute, a group-consensus deviation-entropy method based on multiplicative preference relations is proposed. The method allows decision makers to carry different weights under different attributes and reaches the group consensus level through iterative computation, yielding the final decision-maker weights under each attribute; meanwhile, a deviation-entropy model is used to solve for the attribute weights. These two sets of weights finally produce a group-consensus multiplicative preference relation that synthesizes the opinions of all parties. A numerical example verifies the effectiveness of the proposed method.

16.
Due to its simplicity, efficiency and efficacy, naive Bayes (NB) continues to be one of the top 10 data mining algorithms. Many improved approaches to NB have been proposed to weaken its conditional independence assumption. However, there has been little work, up to the present, on instance weighting filter approaches to NB. In this paper, we propose a simple, efficient, and effective instance weighting filter approach to NB. We call it attribute (feature) value frequency-based instance weighting and denote the resulting improved model as attribute value frequency weighted naive Bayes (AVFWNB). In AVFWNB, the weight of each training instance is defined as the inner product of its attribute value frequency vector and the attribute value number vector. The experimental results on 36 widely used classification problems show that AVFWNB significantly outperforms NB, yet at the same time maintains the computational simplicity that characterizes NB.
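The instance weight defined above can be sketched directly: for training instance i, take the frequency of each of its attribute values in the training set (the attribute value frequency vector) and the number of distinct values of each attribute (the attribute value number vector), and form their inner product. Frequencies are expressed as proportions here, and any normalisation the paper applies is omitted, so the numbers are illustrative.

```python
import numpy as np

def avf_instance_weights(X):
    """Sketch of AVFWNB-style instance weights for a categorical data matrix."""
    X = np.asarray(X)
    n, m = X.shape
    # Attribute value number vector: distinct values per attribute.
    n_values = np.array([len(set(X[:, j])) for j in range(m)])
    weights = np.zeros(n)
    for i in range(n):
        # Attribute value frequency vector of instance i (as proportions).
        freq = np.array([np.mean(X[:, j] == X[i, j]) for j in range(m)])
        weights[i] = freq @ n_values          # inner product
    return weights

# Hypothetical categorical training set: instances with common values
# (rows 0 and 3) receive larger weights than instances with rare values.
X = np.array([["a", "x"], ["a", "y"], ["b", "x"], ["a", "x"]])
w = avf_instance_weights(X)
```

These weights would then scale each instance's contribution to the NB frequency counts when estimating the prior and conditional probabilities.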

17.
The attribute importance characterized by rough sets is analyzed in depth. To address the shortcomings of existing rough-set-based attribute weighting methods, a weighting method for incomplete information systems is proposed that combines the overall importance of the condition-attribute set with the individual importance of each condition attribute in the system, and its rationality is analyzed. A worked example shows that the new weighting method for incomplete information systems resolves the problems of the original rough-set weighting methods.

18.
To classify electrical accidents quickly, accurately, and dynamically, a naive Bayes classification method for electrical accidents that organically combines instance and attribute weighting (AIWNB) is proposed. The prior and conditional probabilities of naive Bayes are improved with two instance-weighting schemes: active instance weights depend on statistics of the frequencies of the attribute values, while passive instance weights are determined by computing, instance by instance, the correlation between each training instance and the test instance. Attribute weights are defined, based on mutual information, as the residual between attribute-attribute correlation and attribute-class correlation. The proposed AIWNB method combines attribute weighting and instance weighting within the unified naive Bayes framework. Validation on measured electrical data from high- and low-voltage users shows that the weighted method is more competitive than plain naive Bayes, improving accuracy and the F1 score by 3.09% and 9.39%, respectively, which demonstrates the practicality and effectiveness of AIWNB for electrical-accident classification; the method can also be generalized to other classification settings.

19.
The attribute importance characterized by rough sets is studied in depth. To address the shortcomings of existing rough-set attribute weighting methods, an improved rough-set weighting method is proposed that combines the importance of each condition attribute in the attribute set with the number of values taken by the condition attributes it determines, and its rationality is analyzed. An example shows that the improved rough-set weighting method resolves the problems of the original rough-set weighting methods.

20.
In this article, a filter feature weighting technique for attribute selection in classification problems is proposed (LIA). It has two main characteristics. First, unlike most feature weighting methods, it is able to consider attribute interactions in the weighting process, rather than only evaluating single features. Attribute subsets are evaluated by projecting instances into a grid defined by the attributes in the subset. Then, the joint relevance of the subset is computed by measuring the information present in the cells of the grid. The final weight for each attribute is computed by taking into account its performance in each of the grids in which it participates. Second, many real problems have low signal-to-noise ratios, due, for instance, to high noise levels, class overlap, class imbalance, or small training samples. LIA computes reliable local information for each of the cells by estimating the number of target class instances not due to chance, given a confidence value. In order to study its properties, LIA has been evaluated on a collection of 18 real datasets and compared to two feature weighting methods (Chi-Squared and ReliefF) and a subset feature selection algorithm (CFS). Results show that the method is significantly better in many cases, and never significantly worse. LIA has also been tested with different grid dimensions (1, 2, and 3). The method works best when evaluating attribute subsets larger than 1, showing the usefulness of considering attribute interactions.
