Similar Documents
20 similar documents found (search time: 281 ms)
1.
Granger causality is an important basis for measuring the dynamic relationships among system variables. Traditional two-variable Granger causality analysis is prone to spurious causal relations and cannot capture instantaneous causality between variables. This paper uses graphical models to study Granger causality among time series variables, builds time series Granger causality graphs, and proposes a conditional-mutual-information method for identifying Granger causality graphs, in which the conditional mutual information is estimated with the correlation integral from chaos theory and the significance of the statistic is determined by a permutation test. Simulation results confirm the effectiveness of the method, which is then used to study Granger causal relations among air pollution indicators and among Chinese stock markets.
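A minimal sketch of the identification step described above: a one-lag Granger-style test of y → x based on conditional mutual information, with significance from a permutation test. For simplicity the CMI is estimated here with histograms rather than the correlation-integral estimator used in the paper; all constants and names are illustrative.

```python
import numpy as np

def cmi_binned(x, y, z, bins=6):
    """Plug-in estimate of I(X;Y|Z) after equal-width discretization."""
    def disc(v):
        edges = np.histogram_bin_edges(v, bins)
        return np.clip(np.digitize(v, edges), 1, bins) - 1
    xi, yi, zi = disc(x), disc(y), disc(z)
    cmi = 0.0
    for zv in np.unique(zi):
        m = zi == zv
        joint = np.zeros((bins, bins))
        np.add.at(joint, (xi[m], yi[m]), 1.0)   # joint counts given Z = zv
        joint /= joint.sum()
        px = joint.sum(axis=1, keepdims=True)
        py = joint.sum(axis=0, keepdims=True)
        nz = joint > 0
        cmi += m.mean() * np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz]))
    return cmi

def granger_cmi(x, y, n_perm=200, seed=0):
    """One-lag test of y -> x via I(X_t ; Y_{t-1} | X_{t-1});
    the null distribution is built by permuting the lagged driver."""
    rng = np.random.default_rng(seed)
    xt, xlag, ylag = x[1:], x[:-1], y[:-1]
    stat = cmi_binned(xt, ylag, xlag)
    null = np.array([cmi_binned(xt, rng.permutation(ylag), xlag)
                     for _ in range(n_perm)])
    return stat, (1 + np.sum(null >= stat)) / (1 + n_perm)

# toy check: x is driven by lagged y, so the test should reject
rng = np.random.default_rng(1)
y = rng.normal(size=600)
x = np.zeros(600)
for t in range(1, 600):
    x[t] = 0.4 * x[t - 1] + 0.8 * y[t - 1] + 0.3 * rng.normal()
print(granger_cmi(x, y))   # large statistic, small p-value expected
```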

2.
In a set-valued information system, an object may take several values on one attribute, which allows a more complete description of complex information. In a traditional set-valued information system each attribute has only one scale, yet in practice data often need to be processed and analyzed at different scales. To this end, the granularity transformation function of multi-scale information systems is introduced into set-valued information systems, a theoretical framework for multi-scale set-valued information systems is established, and the relationships between the information granules and rough sets at different scales of such a system are discussed. On this basis, ...
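To make the granularity transformation concrete, a small hedged sketch: a scale map sends each fine-grained value to a coarser label, and a set-valued entry is mapped elementwise, so objects distinguishable at a fine scale may merge at a coarse one. The attribute values are invented for illustration.

```python
# a scale map g: fine values -> coarse labels (values are invented)
fine_to_coarse = {"Beijing": "North", "Tianjin": "North",
                  "Shanghai": "East", "Hangzhou": "East"}

def coarsen(value_set, g):
    """Granularity transformation applied elementwise to one set value."""
    return {g[v] for v in value_set}

x1, x2 = {"Beijing", "Shanghai"}, {"Tianjin", "Hangzhou"}
print(coarsen(x1, fine_to_coarse) == coarsen(x2, fine_to_coarse))  # True:
# the two objects differ at the fine scale but coincide at the coarse one
```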

3.
刘超  王磊  杨文  钟强强  黎敏 《计算机应用》2022,42(2):463-468
To address the problem that static attribute reduction methods cannot efficiently update a reduct when the number of attributes in a set-valued decision information system changes dynamically, an incremental attribute reduction method that uses knowledge granularity as heuristic information is proposed. First, the basic concepts of set-valued decision information systems are introduced; then the definition of knowledge granularity is given and its matrix representation is extended to such systems; next, the update mechanism of incremental reduction is analyzed and an incremental attribute reduction method based on knowledge granularity is designed; finally, ...
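As background for the heuristic, a hedged sketch of knowledge granularity and a simple forward-selection reduct built on it. The incremental update machinery and the matrix formulation of the paper are omitted; the toy table is invented, with set-valued entries as frozensets, and only condition attributes are reduced here.

```python
def partition(table, attrs):
    """Blocks of the indiscernibility relation induced by attrs;
    set-valued entries should be hashable (e.g. frozenset)."""
    blocks = {}
    for i, row in enumerate(table):
        blocks.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(blocks.values())

def knowledge_granularity(table, attrs):
    """GK(A) = sum of |block|^2 / |U|^2 (1 = coarsest, 1/|U| = finest)."""
    n = len(table)
    return sum(len(b) ** 2 for b in partition(table, attrs)) / n ** 2

def greedy_reduct(table, attrs):
    """Forward selection with knowledge granularity as the heuristic."""
    red, rest, cur = [], list(attrs), 1.0
    while rest:
        best = min(rest, key=lambda a: knowledge_granularity(table, red + [a]))
        g = knowledge_granularity(table, red + [best])
        if g >= cur:          # no further refinement: stop
            break
        red.append(best); rest.remove(best); cur = g
    return red

table = [(frozenset({"en"}), 1), (frozenset({"en", "fr"}), 1),
         (frozenset({"fr"}), 0), (frozenset({"fr"}), 1)]
print(greedy_reduct(table, [0, 1]))   # [0, 1]
```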

4.
Prediction is an important component and a major application of causality research. Many existing prediction methods aim at finding an optimal prediction equation or a minimal set of feature variables so as to simplify computation. This paper proposes ICIC_Prediction, a new causal prediction method that can handle policy interventions: rather than being restricted to feature sets such as Markov blankets, it starts from the structure of the causal network and uses the dynamic global properties of the causal system and its sampled data to predict the value of the target variable in the current sample. Comparative experiments on the four data sets of different types released at the NIPS 2008 "Causality and Prediction" evaluation workshop analyze and demonstrate the advantages and characteristics of ICIC_Prediction.

5.
The hierarchical reduction algorithm of rough set theory classifies attributes by how they are acquired, their acquisition cost, real-time requirements, and so on, turning the single-level, single-granularity knowledge representation of all attributes in an information or decision system into a representation of subsets of attributes at multiple levels and multiple granularities, so that the information system can be reduced level by level. The effectiveness of the hierarchical reduction algorithm is confirmed by its application to control decision acquisition for a cement kiln.

6.
A decision multi-scale information system is a special kind of data set in which every object can take labelled values at multiple scales on both the condition attributes and the decision attribute, with a granularity transformation from fine-grained labels to coarse-grained ones. This paper studies knowledge acquisition in generalized multi-scale ordered decision information systems. First, the notion of scale selection is introduced, each scale selection corresponding to a single-scale ordered decision system; dominance relations are brought into generalized decision multi-scale information systems, and the dominance classes of objects and the lower and upper approximations of sets under different scale selections are defined and their properties established. Then, five notions of optimal scale selection are defined for consistent generalized multi-scale ordered decision systems, and it is proved that only two of them are genuinely distinct: optimal scale selections, lower-approximation optimal scale selections, and belief optimal scale selections coincide, while upper-approximation optimal scale selections and plausibility optimal scale selections also coincide. Finally, a discernibility-matrix reduction method for consistent generalized multi-scale ordered decision systems is given, and the ordered decision rules implicit in such systems are derived on the basis of optimal scale selections.
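A hedged sketch of the dominance-rough-set machinery that a fixed scale selection reduces to: the dominance class of an object and the lower approximation of an upward union of decision classes. The data and the target union are invented.

```python
import numpy as np

def dominating_set(data, x):
    """D^+(x): objects at least as good as x on every criterion."""
    return {y for y in range(len(data)) if np.all(data[y] >= data[x])}

def lower_approx(data, target):
    """Objects certainly in the upward union: D^+(x) contained in target."""
    return {x for x in range(len(data)) if dominating_set(data, x) <= target}

data = np.array([[2, 3], [1, 1], [3, 3], [2, 2]])  # toy criteria values
upward = {0, 2}                                     # e.g. decision >= "good"
print(lower_approx(data, upward))                   # {0, 2}
```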

7.
Starting from attribute reduction methods on general random information systems, this paper discusses attribute reduction on composed random information systems and its connection with reduction on the original systems, and examines the relationship between their upper and lower approximation operators, finding that a consistent set of a composed random information system can be constructed from consistent sets of the two original systems. Attribute reduction is also discussed from the viewpoint of inclusion degree, verifying whether the inclusion degree on a composed random information system is equivalent to the inclusion degrees on the original systems. Finally, the conclusions are verified through examples.
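For reference, a sketch of the simplest set-based form of inclusion degree that this line of discussion builds on, D(B/A) = |A ∩ B| / |A|; definitions vary across the literature, so this is illustrative only.

```python
def inclusion_degree(a: set, b: set) -> float:
    """D(B/A): the degree to which A is included in B."""
    return len(a & b) / len(a) if a else 1.0

print(inclusion_degree({1, 2, 3}, {2, 3, 4}))  # 0.666...
```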

8.
Learning the causal relations among multiple variables from observed multivariate time series data is a fundamental problem in many fields. Existing causal discovery methods for multivariate time series usually learn individual causal relations from each individual's observations, ignoring that some individuals may share the same causal relations, and therefore make insufficient use of the samples. This paper proposes a group causal discovery algorithm for multivariate time series that proceeds in two stages: the first stage measures the similarity between individuals on the basis of their causal relations and partitions the individuals into groups, without requiring the number of groups to be specified; the second stage uses variational inference to exploit all the data within each group and thus learn group-level causal relations. Experiments show that the algorithm performs well on simulated data generated with different parameter settings, improving the AUC score by 5%-20% over the baselines. On real data sets it distinguishes groups with different causal relations and learns the distinct causal relations of the different groups, showing that the algorithm is capable not only of causal discovery but also of clustering multivariate time series.
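A hedged sketch of the first stage only: each individual's series is summarized by a lag-1 VAR coefficient matrix as a crude stand-in for its causal structure, and individuals are grouped by hierarchical clustering with a distance threshold instead of a preset number of groups. The paper's causal similarity measure and variational second stage are not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def var1_coeffs(series):
    """Least-squares lag-1 VAR fit; series has shape (T, d)."""
    past, cur = series[:-1], series[1:]
    coef, *_ = np.linalg.lstsq(past, cur, rcond=None)
    return coef.ravel()

def group_individuals(individuals, threshold=0.5):
    feats = np.array([var1_coeffs(s) for s in individuals])
    return fcluster(linkage(feats, method="average"),
                    t=threshold, criterion="distance")

# toy data: two groups with different coupling structures
rng = np.random.default_rng(0)
def simulate(A, T=300):
    s = np.zeros((T, 2))
    for t in range(1, T):
        s[t] = s[t - 1] @ A + 0.3 * rng.normal(size=2)
    return s
A1 = np.array([[0.5, 0.0], [0.8, 0.5]])   # past x2 drives current x1
A2 = np.array([[0.5, 0.8], [0.0, 0.5]])   # past x1 drives current x2
inds = [simulate(A1) for _ in range(5)] + [simulate(A2) for _ in range(5)]
print(group_individuals(inds))             # two clusters expected
```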

9.
With the rapid development of network and communication technology, society has entered the era of big data, and finding attribute reducts quickly in massive data has become a hot research topic. Traditional attribute reduction methods consume enormous computation time on big data and cannot efficiently handle the ever-growing reduction problems. To improve the efficiency of traditional attribute reduction algorithms for updating reducts of large decision information systems, a matrix-based attribute reduction algorithm built on the multi-granulation rough set model is proposed. The performance of the proposed multi-granulation matrix attribute reduction algorithm is tested on two UCI data sets, and the results verify that it is reasonable and effective.
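A hedged sketch of the optimistic multi-granulation lower approximation that such matrix algorithms build on: an object belongs to the lower approximation if its equivalence class under at least one granulation is contained in the target set. The matrix formulation itself is omitted and the toy table is invented.

```python
def eq_class(table, attrs, x):
    key = tuple(table[x][a] for a in attrs)
    return {y for y in range(len(table))
            if tuple(table[y][a] for a in attrs) == key}

def optimistic_lower(table, granulations, target):
    """x is accepted if AT LEAST ONE granulation's class fits the target."""
    return {x for x in range(len(table))
            if any(eq_class(table, g, x) <= target for g in granulations)}

table = [("a", 1), ("a", 2), ("b", 1), ("b", 2)]   # toy data
print(optimistic_lower(table, [[0], [1]], {0, 1, 2}))  # {0, 1, 2}
```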

10.
For several centuries scientific thinking has rested on the assumption that things are causal and deterministic, and has concentrated on the study of causal relations. This has yielded many practical results, but it has also proved powerless on many problems, which has given rise to new disciplines such as systems science and complexity science. The book 《大数据时代》 (Big Data), published in 2013, proposed a great transformation of life, work, and thinking, and its emphasis on shifting from causality toward correlation research is an important idea; in the author's view, it states explicitly one major element of the ongoing "scientific revolution" or "revolution in thinking". At the same time, it should also be noted that even with a small amount of data one can obtain research results better than those of causality studies.

11.
王泽平  秦拯 《计算机科学》2008,35(6):280-282
To address the high false alarm rate of a company's intrusion detection system product, a causal alert correlation method is integrated into the original system to perform correlation analysis on the alerts. The new system is validated experimentally on LLDOS1.0, an intrusion detection scenario data set from DARPA 2000. The results show that the new system effectively reduces the false alarm rate and can display the causal correlations among alerts in graphical form, vividly revealing the attacker's attack process and strategy.
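A hedged sketch of the prerequisite/consequence style of causal alert correlation: a later alert B is linked to an earlier alert A when A's consequences intersect B's prerequisites. The alert types and predicates below are illustrative, loosely echoing the LLDOS1.0 scenario.

```python
# alert type -> (prerequisite predicates, consequence predicates)
KNOWLEDGE = {
    "SadmindPing": (set(),             {"host_probed"}),
    "SadmindBOF":  ({"host_probed"},   {"root_access"}),
    "InstallDDoS": ({"root_access"},   {"ddos_ready"}),
}

def correlate(alerts):
    """Causal edges (i, j) over a time-ordered sequence of alert types."""
    edges = []
    for i, a in enumerate(alerts):
        for j in range(i + 1, len(alerts)):
            if KNOWLEDGE[a][1] & KNOWLEDGE[alerts[j]][0]:
                edges.append((i, j))
    return edges

print(correlate(["SadmindPing", "SadmindBOF", "InstallDDoS"]))
# [(0, 1), (1, 2)] -- a chain that reveals the attack steps
```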

12.
Research on mining causal association rules based on constraint networks (total citations: 1; self-citations: 0; citations by others: 1)
崔阳  刘长红 《计算机科学》2016,43(Z11):466-468
Causal association rules are a special and important type of knowledge in knowledge bases; compared with ordinary association rules, their advantage is the ability to reveal deeper knowledge. This paper first briefly reviews the characteristics of causal relations and methods for mining causal association rules. To address the problem of restricting, at the beginning of mining, the set of cause variables that may lead to an effect, the constraint network principle is used to build a causal structure over the variables of a real system. From this structure the set of cause variables and the type of each variable can easily be derived, which reduces the complexity of mining and helps improve the accuracy of the results. The introduction of constraint networks optimizes the mining of causal association rules and makes the process more complete.
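A hedged sketch of how a network structure can bound the candidate cause set before mining: only the ancestors of the target variable in the network are kept as potential causes. The variables and edges are invented.

```python
# directed edges of an invented constraint network over system variables
EDGES = {"humidity": ["mould"], "temperature": ["mould", "energy_use"],
         "mould": ["repair_cost"]}

def ancestors(graph, target):
    """All variables with a directed path to the target."""
    rev = {}
    for u, vs in graph.items():
        for v in vs:
            rev.setdefault(v, []).append(u)
    seen, stack = set(), list(rev.get(target, []))
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(rev.get(u, []))
    return seen

print(ancestors(EDGES, "repair_cost"))
# {'mould', 'humidity', 'temperature'} -- the reduced candidate cause set
```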

13.
Current constraint-based approaches to the discovery of causal structure in statistical data are unable to discriminate between causal models which entail identical sets of marginal dependencies. Often, marginal dependencies between observed variables are the result of complex causal connections involving observed and latent variables. This paper shows that, in such cases, the latent causal structure in a model often entails properties which can be tested against empirical evidence, and thus used to discriminate between equivalent alternative models of an empirical phenomenon under study.
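One classic example of such testable properties entailed by latent structure is the vanishing tetrad constraint: if a single latent variable is the common cause of four observed variables, then r13*r24 - r14*r23 = 0, which can separate models sharing the same set of marginal dependencies. A hedged numpy illustration on simulated one-factor data:

```python
import numpy as np

def tetrad_diff(data, i, j, k, l):
    """Tetrad difference r_ik * r_jl - r_il * r_jk from sample correlations."""
    r = np.corrcoef(data, rowvar=False)
    return r[i, k] * r[j, l] - r[i, l] * r[j, k]

# toy one-factor data: four noisy indicators of a single latent variable
rng = np.random.default_rng(0)
latent = rng.normal(size=(5000, 1))
obs = latent @ np.array([[0.9, 0.8, 0.7, 0.6]]) + 0.5 * rng.normal(size=(5000, 4))
print(round(tetrad_diff(obs, 0, 1, 2, 3), 3))  # ~0 under the one-factor model
```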

14.
缪峰  王萍  李太勇 《计算机科学》2022,49(3):276-280
Extracting causal relations between events can serve automatic question answering, knowledge extraction, commonsense reasoning, and similar applications. Implicit causal relations are extremely difficult to extract because they lack obvious lexical cues and Chinese syntax is complex, and they have become a difficult point of current research. By comparison, explicit causal relations can be extracted more easily and accurately, and the logical relation between cause and effect events is stable. This paper therefore proposes an original method that first takes the extracted explicit causal event pairs and ...

15.
Social media, especially Twitter, is now one of the most popular platforms where people can freely express their opinion. However, it is difficult to extract important summary information from the many millions of tweets sent every hour. In this work we propose a new concept, sentimental causal rules, and techniques for extracting sentimental causal rules from textual data sources such as Twitter, which combine sentiment analysis and causal rule discovery. Sentiment analysis refers to the task of extracting public sentiment from textual data. The value of sentiment analysis lies in its ability to reflect popularly voiced perceptions that are stated in natural language. Causal rules, on the other hand, indicate associations between different concepts in a context where one (or several) concepts cause the other(s). We believe that sentimental causal rules are an effective summarization mechanism that combines causal relations among different aspects extracted from textual data with the sentiment embedded in these causal relationships. To show the effectiveness of sentimental causal rules, we have conducted experiments on Twitter data collected on the Kurdish political issue in Turkey, which has been an ongoing heated public debate for many years. Our experiments on Twitter data show that sentimental causal rule discovery is an effective method to summarize information about important aspects of an issue on Twitter, which may further be used by politicians for better policy making.
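A hedged sketch of the summarization structure this proposes: aggregate a sentiment score over each mined causal pair of aspects. Both the causal-pair miner and the sentiment scorer are replaced by trivial stand-ins here, and the data are invented.

```python
from collections import defaultdict

# (cause aspect, effect aspect, sentiment in [-1, 1]) per post; in a real
# system the pairs would come from causal rule discovery and the scores
# from a trained sentiment model, both stubbed out here
mined = [("curfew", "protest", -0.7),
         ("curfew", "protest", -0.4),
         ("talks", "ceasefire", 0.8)]

rules = defaultdict(list)
for cause, effect, s in mined:
    rules[(cause, effect)].append(s)

for (cause, effect), ss in rules.items():
    print(f"{cause} -> {effect}: support={len(ss)}, "
          f"sentiment={sum(ss) / len(ss):+.2f}")
```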

16.
In this paper, we develop a granular input space for neural networks, especially for multilayer perceptrons (MLPs). Unlike a conventional neural network, a neural network with granular input is built by augmenting a well-trained numeric neural network. We explore an efficient way of forming granular input variables so that the corresponding granular outputs of the neural network achieve the highest values of the criteria of specificity (and support). When we augment neural networks by distributing information granularity across the input variables, the output of a network has different levels of sensitivity to different input variables. Capturing the relationship between the input variables and the output becomes of great help for mining knowledge from the data, and in this way important features of the data can easily be found. As an essential design asset, information granules are considered in this construct; their quantification is viewed as levels of granularity, which are given by the expert. The detailed optimization procedure for the allocation of information granularity is realized by an improved partheno-genetic algorithm (IPGA). The proposed algorithm is shown to be effective by numeric studies on synthetic data and on data from the machine learning and StatLib repositories. Moreover, the experimental studies offer a deep insight into the specificity of the input features.
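A hedged sketch of one way to realize a granular input: propagate an interval (box) input through a trained MLP with monotone activations using interval arithmetic; granting more granularity to one input variable widens the output more when that variable is more influential. The weights below are random stand-ins for a trained network, and the IPGA-based allocation search is not reproduced.

```python
import numpy as np

def interval_affine(lo, hi, w, b):
    """Exact bounds of w @ x + b over the box [lo, hi]."""
    wp, wn = np.maximum(w, 0), np.minimum(w, 0)
    return wp @ lo + wn @ hi + b, wp @ hi + wn @ lo + b

def granular_mlp(lo, hi, layers):
    """Interval propagation; a monotone activation preserves the bounds."""
    for w, b in layers:
        lo, hi = interval_affine(lo, hi, w, b)
        lo, hi = np.tanh(lo), np.tanh(hi)
    return lo, hi

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 3)), rng.normal(size=8)),   # stand-in weights
          (rng.normal(size=(1, 8)), rng.normal(size=1))]
x = np.array([0.2, -0.4, 0.9])
for i in range(3):                    # grant all granularity to one input
    eps = np.zeros(3); eps[i] = 0.3
    lo, hi = granular_mlp(x - eps, x + eps, layers)
    print(i, float(hi - lo))          # output width signals sensitivity
```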

17.
18.
Recently, a nonparametric marginal structural model (NPMSM) approach to Causal Inference has been proposed [Neugebauer, R., van der Laan, M., 2006. Nonparametric causal effects based on marginal structural models. J. Statist. Plann. Inference (in press), http://www.sciencedirect.com/science/journal/03783758] as an appealing practical alternative to the original parametric MSM (PMSM) approach introduced by Robins [Robins, J., 1998a. Marginal structural models. In: 1997 Proceedings of the American Statistical Association, American Statistical Association, Alexandria, VA, pp. 1-10]. The new MSM-based causal inference methodology generalizes the concept of causal effects: the proposed nonparametric causal effects are interpreted as summary measures of the causal effects defined with PMSMs. In addition, causal inference with NPMSM does not rely on the assumed correct specification of a parametric MSM but instead defines causal effects based on a user-specified working causal model which can be willingly misspecified. The NPMSM approach was developed for studies with point treatment data or with longitudinal data where the outcome is not time-dependent (typically collected at the end of data collection). In this paper, we generalize this approach to longitudinal studies where the outcome is time-dependent, i.e. collected throughout the span of the studies, and address the subsequent estimation inconsistency which could easily arise from a hasty generalization of the algorithm for maximum likelihood estimation. More generally, we provide an overview of the multiple causal effect representations which have been developed based on MSMs in longitudinal studies.
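For orientation, a hedged sketch of the basic point-treatment building block that this line of work generalizes: an inverse-probability-of-treatment-weighted effect estimate with a logistic model for the treatment mechanism (pure numpy, Newton iterations). This is not the NPMSM estimator of the paper, and stabilized weights are omitted.

```python
import numpy as np

def ipw_effect(a, y, x):
    """a: binary treatment (n,), y: outcome (n,), x: confounders (n, p)."""
    X = np.column_stack([np.ones(len(a)), x])
    beta = np.zeros(X.shape[1])
    for _ in range(25):                  # Newton steps for the logistic fit
        p = 1 / (1 + np.exp(-X @ beta))
        beta += np.linalg.solve((X.T * (p * (1 - p))) @ X, X.T @ (a - p))
    p = 1 / (1 + np.exp(-X @ beta))
    w = np.where(a == 1, 1 / p, 1 / (1 - p))
    return (np.average(y[a == 1], weights=w[a == 1])
            - np.average(y[a == 0], weights=w[a == 0]))

# toy data with confounding: the true causal effect of a on y is 2
rng = np.random.default_rng(0)
n = 4000
x = rng.normal(size=(n, 2))
a = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] - x[:, 1]))))
y = 2.0 * a + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)
print(ipw_effect(a, y, x))               # close to 2
```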

19.
In this study, we introduce and discuss a concept of knowledge transfer in system modeling. In a nutshell, knowledge transfer is about forming ways in which a source of knowledge (namely, an existing model) can be used in the presence of new, very limited experimental evidence. Given the nature of the problem at hand (a situation encountered quite commonly, e.g. in project cost estimation), the new data can be very limited, and this scarcity makes them insufficient for constructing a new model. At the same time, the new data originate from a similar (but not the same) phenomenon (process) as the one for which the original model was constructed, so the existing model, even though it could be applied, has to be treated with a certain level of reservation. Such situations can be encountered, e.g., in software engineering, where in spite of existing similarities each project, process, or product exhibits its own unique characteristics. Taking this into consideration, the existing model is generalized (abstracted) by forming its granular counterpart – a granular model whose parameters are regarded as information granules rather than numeric entities, viz. their non-numeric (granular) version is formed based on the values of the numeric parameters present in the original model. The results produced by the granular model are also granular, and in this manner they become reflective of the differences between the current phenomenon and the process for which the previous model was formed. In this study of knowledge transfer and reusability, information granularity is viewed as an important design asset and as such is subject to optimization. We formulate an optimal information granularity allocation problem: assuming a certain level of granularity, distribute it optimally among the parameters of the model (making them granular) so that a certain data coverage criterion is maximized. While the underlying concept is general and applicable to a variety of models, in this study we discuss its use with fuzzy neural networks, with the intent to clearly visualize the advantages of the approach and emphasize various ways of forming granular versions of the weights (parameters) of the connections of the network. Several granularity allocation protocols (ranging from a uniform distribution of granularity to symmetric and asymmetric allocation schemes) are discussed and the effectiveness of each is quantified. The use of Particle Swarm Optimization (PSO) as the underlying optimization tool to realize optimal granularity allocation is discussed.
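A hedged sketch of the coverage criterion that drives granularity allocation, using a plain linear model in place of a fuzzy neural network: each parameter w_i is widened into an interval w_i ± eps_i|w_i|, and an allocation of a fixed granularity budget is scored by how many new targets fall inside the interval outputs. The paper optimizes this allocation with PSO; here two hand-picked splits of the same budget are compared, and all data are invented.

```python
import numpy as np

def interval_linear(x, w, eps):
    """Interval output of w @ x when each w_i lives in w_i ± eps_i * |w_i|."""
    wl, wh = w - eps * np.abs(w), w + eps * np.abs(w)
    xp, xn = np.maximum(x, 0), np.minimum(x, 0)
    return xp @ wl + xn @ wh, xp @ wh + xn @ wl

def coverage(x, y, w, eps):
    lo, hi = interval_linear(x, w, eps)
    return float(np.mean((lo <= y) & (y <= hi)))

rng = np.random.default_rng(1)
w_old = np.array([2.0, -1.0])               # "source" model weights
x_new = rng.normal(size=(300, 2))           # scarce data from a similar task
y_new = x_new @ np.array([2.3, -1.0]) + 0.05 * rng.normal(size=300)
# same total budget, two allocations: the skewed split should cover more,
# since the mismatch sits entirely in the first weight
print(coverage(x_new, y_new, w_old, np.array([0.10, 0.10])))  # uniform
print(coverage(x_new, y_new, w_old, np.array([0.18, 0.02])))  # skewed
```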

20.
To analyze the causal relations between actual fault data and influencing factors in space fault trees, a method for extracting causal concepts between factors is proposed within the framework of space fault tree theory, based on the idea of factor space. For the fault data analysis, the service time and operating temperature that affect component failure are taken as the influencing factors, and the component failure probability as the target factor. The causal concepts between the influencing factors and the target factor are obtained through background relation analysis and basic-concept semilattice analysis. The extensions and intensions of theoretical and empirical concepts are unified, so that the method takes both theory and practice into account. The results contain three kinds of basic concepts: indivisible basic concepts, intermediate basic concepts, and concepts that contain no failure-probability phase. The first kind are true concepts and can be used for instance-level causal concept analysis; the latter two cannot, and serve only as concepts for classifying objects according to the influencing factors.
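A hedged sketch of the concept-derivation step in formal-concept terms: enumerate the (extent, intent) pairs of a small binary context whose columns play the role of factor phases (e.g. short service time, high temperature, high failure probability). The context is invented and the enumeration is brute force, which is fine only for tiny contexts.

```python
from itertools import chain, combinations

def power_set(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r)
                               for r in range(len(items) + 1))

def intent(ctx, objs):
    """Attributes shared by all objects in objs."""
    return frozenset(a for a in range(len(ctx[0]))
                     if all(ctx[o][a] for o in objs))

def extent(ctx, attrs):
    """Objects possessing all attributes in attrs."""
    return frozenset(o for o in range(len(ctx))
                     if all(ctx[o][a] for a in attrs))

def concepts(ctx):
    """All (extent, intent) pairs; every O'' , O' pair is a formal concept."""
    found = set()
    for objs in map(frozenset, power_set(range(len(ctx)))):
        b = intent(ctx, objs)
        found.add((extent(ctx, b), b))
    return found

# rows: failure records; columns: invented phases such as "short service
# time", "high temperature", "high failure probability"
ctx = [(1, 0, 0),
       (1, 1, 1),
       (0, 1, 1)]
for ext, itt in concepts(ctx):
    print(set(ext), set(itt))
```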
