Similar Documents
20 similar documents found (search time: 0 ms)
1.
We present a knowledge discovery method for graded attributes that is based on an interactive determination of implications (if-then rules) holding between the attributes of a given data set. The corresponding algorithm queries the user in an efficient way about implications between the attributes. The result of the process is a representative set of examples for the entire theory and a set of implications from which all implications that hold between the attributes can be deduced. In many instances, the exploration process may be shortened by using the user's background knowledge, that is, a set of implications the user knows beforehand. The method has been successfully applied in various real-life applications for discrete data. In this paper, we show that attribute exploration with background information can be generalized to graded attributes.

2.
An Overview of Data Mining and Knowledge Discovery
With massive amounts of data stored in databases, mining information and knowledge from databases has become an important issue in recent research. Researchers in many different fields have shown great interest in data mining and knowledge discovery in databases. Several emerging applications in information-providing services, such as data warehousing and on-line services over the Internet, also call for various data mining and knowledge discovery techniques to understand user behavior better, to improve the service provided, and to increase business opportunities. In response to such a demand, this article provides a comprehensive survey of the data mining and knowledge discovery techniques developed recently, and introduces some real application systems as well. It concludes by listing some problems and challenges for further research.

3.
The proliferation of large masses of data has created many new opportunities for those working in science, engineering and business. The field of data mining (DM) and knowledge discovery from databases (KDD) has emerged as a new discipline in engineering and computer science. In the modern sense of DM and KDD, the focus tends to be on extracting information, characterized as knowledge, from data that can be very complex and available in large quantities. Industrial engineering, with the diverse areas it comprises, presents unique opportunities for the application of DM and KDD, and for the development of new concepts and techniques in this field. Many industrial processes are now automated and computerized in order to ensure the quality of production and to minimize production costs. A computerized process records large masses of data during its operation. This real-time data, recorded to ensure the traceability of production steps, can also be used to optimize the process itself. A French truck manufacturer decided to exploit the datasets of measurements recorded during the testing of diesel engines manufactured on their production lines. The goal was to discover knowledge in the data of the engine test process in order to significantly reduce (by about 25%) the processing time. This paper presents the study of knowledge discovery using the KDD method. All the steps of the method were used, and two additional steps were needed. The study allowed us to develop two systems: the discovery application, implemented as a real-time prediction model (achieving a real reduction of 28%), and the discovery support environment, which now allows those who are not experts in statistics to extract their own knowledge for other processes.

4.
This paper describes the nature of mathematical discovery (including concept definition and exploration, example generation, and theorem conjecture and proof), and considers how such an intelligent process can be simulated by a machine. Although the material is drawn primarily from graph theory, the results are immediately relevant to research in mathematical discovery and learning. The thought experiment, a protocol paradigm for the empirical study of mathematical discovery, highlights behavioral objectives for machine simulation. This thought experiment provides an insightful account of the discovery process, motivates a framework for describing mathematical knowledge in terms of object classes, and is a rich source of advice on the design of a system to perform discovery in graph theory. The evaluation criteria for a discovery system, it is argued, must include both a set of behaviors to display (behavioral objectives) and a target set of facts to be discovered (factual knowledge). Cues from the thought experiment are used to formulate two hierarchies of representational languages for graph theory. The first hierarchy is based on the superficial terminology and knowledge of the thought experiment. Generated by formal grammars with set-theoretic semantics, this eminently reasonable approach ultimately fails to meet the factual knowledge criteria. The second hierarchy uses declarative expressions, each of which has a semantic interpretation as a stylized, recursive algorithm that defines a class by generating it correctly and completely. A simple version of one such representation is validated by a successful, implemented system called Graph Theorist (GT) for mathematical research in graph theory. GT generates correct examples, defines and explores new graph theory properties, and conjectures and proves theorems. Several themes run through this paper. The first is the dual goals, behavioral objectives and factual knowledge to be discovered, and the multiplicity of their demands on a discovery system. The second theme is the central role of object classes in knowledge representation. The third is the increased power and flexibility of a constructive (generator) definition over the more traditional predicate (tester) definition. The final theme is the importance of examples and recursion in mathematical knowledge. The results provide important guidance for further research in the simulation of mathematical discovery.

5.
This paper presents the insights gained from applying knowledge discovery in databases (KDD) processes to the development of intelligent models that classify a country's investing risk based on a variety of factors. Inferential data mining techniques, such as C5.0, as well as intelligent learning techniques, such as neural networks, were applied to a dataset of 52 countries. The dataset included 27 variables (economic, stock market performance/risk and regulatory efficiencies) for 52 countries whose investing risk category was assessed in a Wall Street Journal survey of international experts. The results of applying KDD techniques to the dataset are promising, successfully classifying most countries in agreement with the experts' classifications. Implementation details, results, and future plans are also presented.

6.
When studying scientific questions and solving engineering problems, one often needs to determine the relationship between two variables from their experimental data. With traditional data-fitting techniques, the resulting approximate formula is usually an algebraic polynomial whose coefficients are obtained by solving the normal equations derived from the least-squares principle. This traditional approach, however, suffers from an ill-conditioning problem: tiny changes in the elements of the coefficient determinant can cause dramatic changes in the solution. To address this, an empirical-formula discovery technique is adopted that combines AI-based machine discovery with numerical curve fitting, and an improved algorithm for the error-evaluation method in the formula discovery system is proposed, improving the system's usability.
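The ill-conditioning problem described above can be demonstrated numerically: forming the normal equations V^T V a = V^T y roughly squares the condition number of the design matrix, while a QR/SVD-based least-squares solver avoids that amplification. A minimal sketch (the polynomial degree and data are illustrative, not from the paper):

```python
import numpy as np

# Fit y = 1 + 2x + 3x^2 with a degree-7 polynomial basis.
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + 3.0 * x**2

V = np.vander(x, N=8, increasing=True)   # Vandermonde design matrix
normal = V.T @ V                         # matrix of the normal equations

# cond(V^T V) ~ cond(V)^2: small data perturbations are amplified far
# more when the normal equations are solved directly.
print(np.linalg.cond(V))
print(np.linalg.cond(normal))

# An SVD-based solver works on V itself and stays stable.
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
print(np.round(coeffs[:3], 4))   # close to [1, 2, 3]
```

This is why library routines solve least-squares problems via QR or SVD rather than by assembling the normal equations.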

7.
项婧  任劼 《计算机工程与设计》2006,27(15):2905-2908
In recent years, the need for in-depth study of gene expression in cancer cells has kept growing. Machine learning algorithms are widely used in many fields, yet have seen comparatively little use in bioinformatics. This paper systematically studies decision-tree construction, pruning, and related issues; using the acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML) datasets provided by CAMDA2000 (critical assessment of microarray data analysis), it designs and implements a decision-tree classifier based on the ID3 algorithm, simplified with a post-pruning algorithm. Experiments verify the effectiveness of the method: the classifier achieves high accuracy in discriminating the leukemia microarray data, demonstrating the broad applicability of decision-tree algorithms in medical data mining.
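The described pipeline (ID3-style information-gain splits followed by post-pruning) can be sketched with scikit-learn; the synthetic matrix below merely stands in for the CAMDA2000 expression data, and the class labels are illustrative, not the actual ALL/AML annotations:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an expression matrix: 100 samples x 30 "genes",
# with the class determined by one informative gene.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
y = (X[:, 0] > 0).astype(int)   # 0 = "ALL", 1 = "AML" (illustrative only)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# criterion="entropy" gives ID3-style information-gain splits;
# ccp_alpha > 0 applies minimal cost-complexity post-pruning, standing in
# for the paper's post-pruning step.
clf = DecisionTreeClassifier(criterion="entropy", ccp_alpha=0.01,
                             random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```

CART as implemented in scikit-learn is not exactly ID3 (it builds binary trees and handles continuous features natively), but the entropy criterion reproduces the information-gain split choice the abstract refers to.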

8.
Biclusters are subsets of genes that exhibit similar behavior over a set of conditions. A biclustering algorithm is a useful tool for uncovering groups of genes involved in the same cellular processes and groups of conditions under which these processes take place. In this paper, we propose a polynomial time algorithm to identify functionally highly correlated biclusters. Our algorithm identifies (1) gene sets that simultaneously exhibit additive, multiplicative, and combined patterns and allow high levels of noise, (2) multiple, possibly overlapped, and diverse gene sets, (3) biclusters that simultaneously exhibit negatively and positively correlated gene sets, and (4) gene sets for which the functional association is very high. We validate the level of functional association in our method by using the GO database, protein-protein interactions and KEGG pathways.
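One of the pattern types listed above, the additive pattern, is easy to test for in isolation: a purely additive bicluster has entries b[i][j] = r[i] + c[j], so removing row and column effects leaves a vanishing residual. A minimal check (not the paper's algorithm, which also handles multiplicative and combined patterns under noise):

```python
import numpy as np

def additive_residual(B):
    """Max absolute residual after removing row/column additive effects."""
    B = np.asarray(B, dtype=float)
    grand = B.mean()
    row = B.mean(axis=1, keepdims=True)
    col = B.mean(axis=0, keepdims=True)
    return np.abs(B - row - col + grand).max()

# An additive bicluster: every entry is a row effect plus a column effect.
r = np.array([0.0, 1.0, 5.0])[:, None]
c = np.array([2.0, 4.0, 8.0, 3.0])[None, :]
print(additive_residual(r + c))           # ~0: additive pattern
print(additive_residual([[0, 0], [0, 1]]))  # 0.25: not additive
```

A low residual over a submatrix of the expression data is the kind of evidence a bicluster search would accumulate while growing candidate gene/condition sets.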

9.
Persistent homology is a computationally intensive and yet extremely powerful tool for Topological Data Analysis. Applying the tool to a potentially infinite sequence of data objects is a challenging task. For this reason, persistent homology and data stream mining have long been two important but disjoint areas of data science. The first computational model, recently introduced to bridge the gap between the two areas, is useful for detecting steady or gradual changes in data streams, such as certain genomic modifications during the evolution of species. However, that model is not suitable for applications that encounter abrupt changes of extremely short duration. This paper presents another model for computing persistent homology on streaming data that addresses the shortcoming of the previous work. The model is validated on the important real-world application of network anomaly detection. It is shown that, in addition to detecting the occurrence of anomalies or attacks in computer networks, the proposed model is able to visually identify several types of traffic. Moreover, the model can accurately detect abrupt changes of extremely short as well as longer duration in the network traffic. These capabilities are not achievable by the previous model or by traditional data mining techniques.

10.
Knowledge Discovery from Series of Interval Events
Knowledge discovery from data sets can be extensively automated by using data mining software tools. Techniques for mining series of interval events, however, have not been considered. Such time series are common in many applications. In this paper, we propose mining techniques to discover temporal containment relationships in such series. Specifically, an item A is said to contain an item B if an event of type B occurs during the time span of an event of type A, and this is a frequent relationship in the data set. Mining such relationships provides insight about temporal relationships among various items. We implement the technique and analyze trace data collected from a real database application. Experimental results indicate that the proposed mining technique can discover interesting results. We also introduce a quantization technique as a preprocessing step to generalize the method to all time series.
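The containment definition above translates directly into code: A contains B when some B-event's span lies inside some A-event's span, and the relationship is reported when it occurs frequently enough. A naive sketch (the paper's actual mining algorithm and support definition may differ; `min_count` is an assumed frequency threshold):

```python
from collections import Counter

# Events are (type, start, end) triples.
def containments(events, min_count=1):
    """Count (A, B) pairs where a B-event occurs within an A-event's span."""
    pairs = Counter()
    for (ta, sa, ea) in events:
        for (tb, sb, eb) in events:
            if ta != tb and sa <= sb and eb <= ea:
                pairs[(ta, tb)] += 1
    return {p: n for p, n in pairs.items() if n >= min_count}

events = [("A", 0, 10), ("B", 2, 5), ("B", 6, 9), ("C", 8, 12)]
print(containments(events, min_count=2))   # {('A', 'B'): 2}
```

Note that ("A", "C") is not reported: the C-event ends at 12, outside A's span, so no containment holds.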

11.
12.
ILP-based concept discovery in multi-relational data mining
Multi-relational data mining has become popular due to the limitations of propositional problem definitions in structured domains and the tendency to store data in relational databases. Several relational knowledge discovery systems have been developed employing various search strategies, heuristics, language pattern limitations and hypothesis evaluation criteria, in order to cope with an intractably large search space and to generate high-quality patterns. In this work, an ILP-based concept discovery method, namely Confidence-based Concept Discovery (C2D), is described in which strong declarative biases and user-defined specifications are relaxed. Moreover, this new method works directly on relational databases. In addition, a new confidence-based pruning technique is used. We also describe how to define and use aggregate predicates as background knowledge in the proposed method; in order to use aggregate predicates, we show how to handle numerical attributes by applying comparison operators to them. Finally, we analyze the effect of incorporating unrelated facts for generating transitive rules. A set of experiments is conducted on real-world problems to test the performance of the proposed method.

13.
Anonymity preserving pattern discovery
It is generally believed that data mining results do not violate the anonymity of the individuals recorded in the source database. In fact, data mining models and patterns, in order to ensure a required statistical significance, represent a large number of individuals and thus conceal individual identities: this is the case for the minimum support threshold in frequent pattern mining. In this paper we show that this belief is ill-founded. By shifting the concept of k-anonymity from the source data to the extracted patterns, we formally characterize the notion of a threat to anonymity in the context of pattern discovery, and provide a methodology to efficiently and effectively identify all such possible threats that arise from the disclosure of the set of extracted patterns. On this basis, we obtain a formal notion of privacy protection that allows the disclosure of the extracted knowledge while protecting the anonymity of the individuals in the source database. Moreover, in order to handle cases where the threats to anonymity cannot be avoided, we study how to eliminate such threats by means of pattern (not data!) distortion performed in a controlled way.
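The core observation can be made concrete: even if every disclosed pattern individually covers at least k individuals, the difference between the supports of a pattern and of a super-pattern counts the individuals who match the first but not the second, and that group may be smaller than k. A toy detector of this inference channel (an illustration, not the paper's methodology):

```python
from itertools import combinations

def inference_threats(supports, k):
    """Find pattern pairs whose support gap identifies a group of < k people.

    supports: dict mapping an item tuple to its (disclosed) support count.
    """
    threats = []
    for p1, p2 in combinations(supports, 2):
        if set(p1) < set(p2):                # p1 is a proper sub-pattern of p2
            gap = supports[p1] - supports[p2]
            if 0 < gap < k:                  # gap people match p1 but not p2
                threats.append((p1, p2, gap))
    return threats

# Both patterns clear a minimum support of 10, yet their difference pins
# down just 2 individuals (those with a and b but not c): a threat for k=3.
supports = {("a", "b"): 12, ("a", "b", "c"): 10}
print(inference_threats(supports, k=3))
```

This is exactly why shifting k-anonymity from the source data to the pattern set requires reasoning over combinations of disclosed supports, not over each pattern in isolation.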

14.
Relational rule learning algorithms are typically designed to construct classification and prediction rules. However, relational rule learning can also be adapted to subgroup discovery. This paper proposes a propositionalization approach to relational subgroup discovery, achieved through appropriately adapting rule learning and first-order feature construction. The proposed approach was successfully applied to standard ILP problems (East-West trains, King-Rook-King chess endgame and mutagenicity prediction) and two real-life problems (analysis of telephone calls and traffic accident analysis). Editors: Hendrik Blockeel, David Jensen and Stefan Kramer.

15.
Concepts, Architecture, and Methods of Data Mining
This paper first summarizes the concept of data mining and the related schools of thought, then presents an architecture for a data mining system and uses it to introduce the system's main functional components, and finally analyzes the principal data mining methods.

16.
A New Outlier Mining Algorithm for Time-Series Data
Outlier mining is an important part of data mining; this paper studies outlier mining methods for time-series data. The time series is first transformed from the time domain to the frequency domain by the discrete Fourier transform, mapping it to points in a multidimensional space; on this basis, a new distance-based outlier mining algorithm is proposed. Simulation experiments on power-load time-series data from a steel enterprise demonstrate the effectiveness of the algorithm.

17.
Data mining is a powerful method for extracting knowledge from data. Raw data poses various challenges that make traditional methods unsuitable for knowledge extraction, whereas data mining is expected to handle various data types in all formats. The relevance of this paper is underscored by the fact that data mining is an object of research in many different areas. In this paper, we review previous work on knowledge extraction from medical data. The main idea is to describe key papers and provide guidelines to help medical practitioners. Medical data mining is a multidisciplinary field combining medicine and data mining; because of this, previous work should be classified to cover the requirements of users from various fields. Accordingly, we have studied papers published between 1999 and 2013 that aim to extract knowledge from structured medical data. We clarify medical data mining and its main goals, and study each paper with respect to six medical tasks: screening, diagnosis, treatment, prognosis, monitoring and management. Within each task, five data mining approaches are considered: classification, regression, clustering, association and hybrid; each task ends with a brief summary and discussion. A standard framework according to CRISP-DM is additionally adapted to manage all activities. In the discussion, current issues and future trends are presented. The number of works published in this scope is substantial, and it is impossible to discuss all of them in a single paper; we hope this review will make it possible to explore previous work and identify interesting areas for future research.

18.
Given the characteristics of historical coal-mine safety monitoring data and the monitored parameters, a method for rapidly discovering information features in coal-mine safety monitoring data is proposed. The method analyzes sampled data with an error-band based historical-data compression algorithm, detects and stores the data segments that contain important features, interprets those segments, performs topic extraction and association analysis, and studies correlation analysis of gas (methane) sequences, thereby deriving the information features of the key data in a coal-mine safety monitoring system. The method offers a useful reference for improving scientific mine management, mining multi-sensor information, and understanding gas emission patterns in coal mines.
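Error-band compression of historical process data comes in several variants; the paper does not specify which one is used, so the following is a minimal dead-band sketch of the general idea: a sample is stored only when it leaves the tolerance band around the last stored value, so flat stretches collapse while feature-bearing excursions survive.

```python
def compress(samples, band):
    """Keep a sample only when it deviates from the last kept one by > band."""
    kept = [samples[0]]
    for v in samples[1:]:
        if abs(v - kept[-1]) > band:
            kept.append(v)
    return kept

# A mostly flat sensor trace with one excursion: only the level changes
# are stored, the in-band jitter is discarded.
readings = [1.00, 1.02, 1.01, 1.03, 1.80, 1.82, 1.81, 1.05, 1.04]
print(compress(readings, band=0.1))   # [1.0, 1.8, 1.05]
```

The segments where stored samples cluster (here, around the jump to 1.8 and back) are precisely the "segments containing important features" that the downstream topic-extraction and association steps would analyze.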

19.
This paper surveys recent theoretical and applied advances in data mining and knowledge discovery, summarizes the key techniques in research and application in detail, and concludes with an outlook on future theoretical and application trends of data mining and knowledge discovery.

20.
Gene chips are the archetypal microarray technology, offering high throughput and the ability to measure the expression levels of all genes in a genome simultaneously. A major goal of microarray studies is the discovery of gene expression patterns: finding, at the genome level, clusters of functionally similar genes involved in related biological processes, or classifying samples to uncover subtypes, for example classifying cancer samples by expression level to discover molecular subtypes of a disease. Non-negative matrix factorization (NMF) is an unsupervised, non-orthogonal, parts-based matrix decomposition method that has been applied increasingly to classification analysis and cluster discovery in microarray data. This paper systematically introduces the principle, algorithms and applications of NMF, the biological interpretation of the factorization results, quality assessment of the classification results, and NMF-based classification software, and then summarizes and evaluates the performance of NMF in microarray classification and cluster discovery.
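The NMF-based sample clustering surveyed above can be sketched with scikit-learn: factor the nonnegative expression matrix V (genes x samples) as V ≈ W·H, then assign each sample to the "metagene" with the largest entry in its column of H. The block-structured matrix below is a synthetic stand-in for real expression data; the subtype labels are illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF

# Two planted sample groups, each driven by its own block of genes.
rng = np.random.default_rng(0)
block = np.zeros((40, 20))
block[:20, :10] = 5.0      # genes 0-19 high in samples 0-9  ("subtype 1")
block[20:, 10:] = 5.0      # genes 20-39 high in samples 10-19 ("subtype 2")
V = block + rng.uniform(0, 0.5, size=block.shape)   # nonnegative noise

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(V)     # genes x metagenes
H = model.components_          # metagenes x samples
labels = H.argmax(axis=0)      # each sample -> dominant metagene
print(labels)                  # samples 0-9 share one label, 10-19 the other
```

In the microarray literature this argmax assignment is the usual clustering readout, often combined with consensus clustering over many random restarts to choose the rank (number of subtypes); that model-selection step is omitted here.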
