20 similar documents found.
1.
2.
3.
《Expert Systems with Applications》2014,41(10):4950-4958
Social media, and Twitter in particular, is now one of the most popular platforms where people can freely express their opinions. However, it is difficult to extract important summary information from the many millions of tweets sent every hour. In this work we propose a new concept, sentimental causal rules, together with techniques that combine sentiment analysis and causal rule discovery to extract such rules from textual data sources such as Twitter. Sentiment analysis refers to the task of extracting public sentiment from textual data; its value lies in its ability to reflect popularly voiced perceptions stated in natural language. Causal rules, on the other hand, indicate associations between concepts in a context where one (or several) concept(s) cause(s) the other(s). We believe that sentimental causal rules are an effective summarization mechanism that combines causal relations among aspects extracted from textual data with the sentiment embedded in these causal relationships. To show their effectiveness, we conducted experiments on Twitter data collected on the Kurdish political issue in Turkey, which has been an ongoing heated public debate for many years. Our experiments show that sentimental causal rule discovery is an effective method for summarizing information about important aspects of an issue on Twitter, which politicians may further use for better policy making.
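As a rough illustration of the concept described above (not the authors' method), the sketch below pairs a toy sentiment lexicon with simple co-occurrence rule mining over tweets; the lexicon, aspect list, thresholds and example tweets are all hypothetical. Each discovered rule carries both a confidence and the mean sentiment of the tweets supporting it, mirroring the idea of attaching sentiment to causal relationships.

```python
# A minimal illustration (not the authors' algorithm) of pairing sentiment scoring
# with simple rule discovery over tweet-level aspect co-occurrences.
from collections import Counter
from itertools import combinations

# Tiny hand-made sentiment lexicon and aspect list -- purely hypothetical.
LEXICON = {"good": 1, "great": 1, "peace": 1, "bad": -1, "violence": -1}
ASPECTS = {"government", "negotiation", "protest", "election"}

def sentiment(tokens):
    return sum(LEXICON.get(t, 0) for t in tokens)

def sentimental_rules(tweets, min_support=2, min_conf=0.6):
    """Return rules (a -> b, confidence, mean sentiment) from aspect co-occurrence."""
    pair_counts, single_counts, pair_sentiment = Counter(), Counter(), Counter()
    for text in tweets:
        tokens = text.lower().split()
        present = sorted(ASPECTS.intersection(tokens))
        s = sentiment(tokens)
        for a in present:
            single_counts[a] += 1
        for a, b in combinations(present, 2):
            pair_counts[(a, b)] += 1
            pair_sentiment[(a, b)] += s
    rules = []
    for (a, b), n in pair_counts.items():
        if n < min_support:
            continue
        conf = n / single_counts[a]
        if conf >= min_conf:
            rules.append((a, b, conf, pair_sentiment[(a, b)] / n))
    return rules

tweets = [
    "negotiation with the government brings peace",
    "government negotiation is good for peace",
    "protest against the government turned to violence",
]
print(sentimental_rules(tweets))
```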
4.
The explosive growth of the Chinese electronic market has made it possible for companies to better understand consumers' opinions of their products in a timely fashion through online reviews. This study proposes a framework for extracting knowledge from online reviews through text mining and econometric analysis. Specifically, we extract product features, detect topics, and identify determinants of customer satisfaction. An experiment on online reviews from a leading Chinese B2C (business-to-customer) website demonstrated the feasibility of the proposed method. We also present some findings about the characteristics of Chinese reviewers.
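A minimal sketch of the general idea only (not the study's text-mining or econometric models; the feature lexicon and review data are hypothetical): relate mentions of product features to star ratings to gauge which features move satisfaction.

```python
# A toy sketch: mean rating with vs. without each feature mention, a crude stand-in
# for the determinant analysis described in the abstract.
from statistics import mean

FEATURES = ["battery", "screen", "delivery", "price"]  # hypothetical feature lexicon

reviews = [  # (text, star rating) -- invented toy examples
    ("battery lasts long and the screen is sharp", 5),
    ("slow delivery but good price", 3),
    ("battery died after a week", 1),
    ("great price and fast delivery", 4),
]

def feature_effects(reviews):
    """Difference in mean rating between reviews that mention a feature and those that do not."""
    effects = {}
    for f in FEATURES:
        with_f = [r for t, r in reviews if f in t]
        without_f = [r for t, r in reviews if f not in t]
        if with_f and without_f:
            effects[f] = mean(with_f) - mean(without_f)
    return effects

print(feature_effects(reviews))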
5.
Beat Wüthrich 《Journal of Intelligent Information Systems》1993,2(3):245-264
We present a natural and realistic knowledge acquisition and processing scenario. In the first phase, a domain expert identifies deduction rules that he thinks are good indicators of whether a specific target concept is likely to occur. In a second knowledge acquisition phase, a learning algorithm automatically adjusts, corrects and optimizes the deterministic rule hypothesis given by the domain expert, by selecting an appropriate subset of the rule hypotheses and attaching uncertainties to them. Then, in the running phase of the knowledge base, we can arbitrarily combine the learned rule uncertainties with uncertain factual information. Formally, we introduce the natural class of disjunctive probabilistic concepts and prove that this class is efficiently distribution-free learnable. The distribution-free learning model of probabilistic concepts was introduced by Kearns and Schapire and generalizes Valiant's probably approximately correct learning model. We show how to simulate the learned concepts in probabilistic knowledge bases which satisfy the laws of axiomatic probability theory. Finally, we combine the rule uncertainties with uncertain facts and prove the correctness of the combination under an independence assumption.
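The following sketch illustrates the two-phase idea in heavily simplified form (it is not the paper's learning algorithm): confidences for expert-given rules are estimated from examples and then combined with uncertain facts under an independence assumption, here via a noisy-or; the rule and fact names are invented.

```python
# An illustrative sketch: learn a confidence for each expert-given rule from training
# examples, then combine rule confidences with uncertain facts assuming independence.

def estimate_confidences(rules, examples):
    """rules: {name: set of body atoms}; examples: list of (facts_set, target_bool)."""
    conf = {}
    for name, body in rules.items():
        fired = [target for facts, target in examples if body <= facts]
        conf[name] = sum(fired) / len(fired) if fired else 0.0
    return conf

def query(rules, conf, fact_probs):
    """P(target) = noisy-or over rules; each rule fires with probability equal to the
    product of its body-fact probabilities (independence) times its learned confidence."""
    p_not = 1.0
    for name, body in rules.items():
        p_body = 1.0
        for atom in body:
            p_body *= fact_probs.get(atom, 0.0)
        p_not *= 1.0 - conf[name] * p_body
    return 1.0 - p_not

rules = {"r1": {"rain"}, "r2": {"sprinkler"}}          # expert rule hypotheses for "wet"
examples = [({"rain"}, True), ({"rain"}, True), ({"rain"}, False),
            ({"sprinkler"}, True)]
conf = estimate_confidences(rules, examples)
print(conf, query(rules, conf, {"rain": 0.7, "sprinkler": 0.2}))
```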
6.
On optimal rule discovery
Jiuyong Li 《IEEE Transactions on Knowledge and Data Engineering》2006,18(4):460-471
In machine learning and data mining, heuristic and association rules are two dominant schemes for rule discovery. Heuristic rule discovery usually produces a small set of accurate rules, but fails to find many globally optimal rules. Association rule discovery generates all rules satisfying some constraints, but yields too many rules and is infeasible when the minimum support is small. Here, we present a unified framework for the discovery of a family of optimal rule sets and characterize their relationships with other rule-discovery schemes such as nonredundant association rule discovery. We show theoretically and empirically that optimal rule discovery is significantly more efficient than association rule discovery, independent of data structure and implementation. Optimal rule discovery is an efficient alternative to association rule discovery, especially when the minimum support is low.
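The sketch below illustrates one simplified reading of optimal rule sets (not the paper's exact definition or algorithm): association rules for a fixed target are generated, and any rule dominated by a more general rule with at least the same confidence is pruned.

```python
# Generate association rules for one target class, then prune any rule whose
# antecedent has a more general sub-rule with confidence at least as high.
from itertools import combinations

def rules_for_target(transactions, target, min_support=2):
    items = sorted({i for t in transactions for i in t if i != target})
    candidates = {}
    for k in (1, 2):
        for ante in combinations(items, k):
            covered = [t for t in transactions if set(ante) <= t]
            hits = [t for t in covered if target in t]
            if len(covered) >= min_support:
                candidates[ante] = len(hits) / len(covered)
    # "optimality" pruning: drop a rule if some proper subset of its antecedent
    # already achieves equal or higher confidence.
    optimal = {}
    for ante, conf in candidates.items():
        subsets = [s for k in range(1, len(ante)) for s in combinations(ante, k)]
        if not any(candidates.get(s, 0.0) >= conf for s in subsets):
            optimal[ante] = conf
    return optimal

transactions = [{"a", "b", "c"}, {"a", "c"}, {"a", "b"}, {"b", "c"}]
print(rules_for_target(transactions, target="c"))
```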
7.
Updating knowledge bases
We consider the problem of updating a knowledge base, where a knowledge base is realised as a normal (logic) program. We present procedures for deleting an atom from a normal program and inserting an atom into a normal program, concentrating particularly on the case when negative literals appear in the bodies of program clauses. We also prove various properties of the procedures, including their correctness.
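A highly simplified sketch of the setting (the paper's procedures are far more careful, especially about negative literals; everything here is a toy assumption): a ground normal program represented as a list of clauses, with naive insertion and deletion of an atom.

```python
# Clauses are (head, body), where a negative literal is written ("not", atom).

def derivable(program, atom, depth=10):
    """Naive top-down check with a depth bound; 'not' succeeds when the atom fails."""
    if depth == 0:
        return False
    for head, body in program:
        if head != atom:
            continue
        ok = True
        for lit in body:
            if isinstance(lit, tuple) and lit[0] == "not":
                ok = ok and not derivable(program, lit[1], depth - 1)
            else:
                ok = ok and derivable(program, lit, depth - 1)
        if ok:
            return True
    return False

def insert(program, atom):
    """Insert an atom: add it as a fact if it is not already derivable."""
    if not derivable(program, atom):
        program.append((atom, []))
    return program

def delete(program, atom):
    """Delete an atom by removing all clauses with that head (a crude approximation)."""
    return [c for c in program if c[0] != atom]

prog = [("p", ["q", ("not", "r")]), ("q", [])]
print(derivable(prog, "p"))          # True: q holds and r fails
prog = delete(insert(prog, "s"), "q")
print(derivable(prog, "p"))          # False once q has been deleted
```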
8.
Lu J.J., Nerode A., Subrahmanian V.S. 《IEEE Transactions on Knowledge and Data Engineering》1996,8(5):773-785
Deductive databases that interact with, and are accessed by, reasoning agents in the real world (such as logic controllers in automated manufacturing, weapons guidance systems, aircraft landing systems, land-vehicle maneuvering systems, and air-traffic control systems) must have the ability to deal with multiple modes of reasoning. Specifically, the types of reasoning we are concerned with include, among others, reasoning about time, reasoning about quantitative relationships that may be expressed in the form of differential equations or optimization problems, and reasoning about numeric modes of uncertainty about the domain which the database seeks to describe. Such databases may need to handle diverse forms of data structures, and frequently they may require use of the assumption-based nonmonotonic representation of knowledge. A hybrid knowledge base is a theoretical framework capturing all the above modes of reasoning. The theory tightly unifies the constraint logic programming scheme of Jaffar and Lassez (1987), the generalized annotated logic programming theory of Kifer and Subrahmanian (1989), and the stable model semantics of Gelfond and Lifschitz (1988). New techniques are introduced which extend both the work on annotated logic programming and the stable model semantics.
9.
Probabilistic knowledge bases
We define a new fixpoint semantics for rule-based reasoning in the presence of weighted information. The semantics is illustrated on a real-world application requiring such reasoning. Optimizations and approximations of the semantics are shown so as to make the semantics amenable to very large scale real-world applications. We finally prove that the semantics is probabilistic and reduces to the usual fixpoint semantics of stratified Datalog if all information is certain. We have implemented various knowledge discovery systems which automatically generate such probabilistic decision rules. In collaboration with a bank in Hong Kong, we use one such system to forecast currency exchange rates.
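One plausible, much-simplified reading of such a semantics (not necessarily the paper's exact definition; the rule and fact names are invented): a bottom-up fixpoint where each rule carries a weight, and an atom's certainty is the maximum over its rules of the weight times the product of the body certainties.

```python
# With all weights and fact certainties equal to 1.0, the values stay 0/1 and the
# computation behaves like ordinary bottom-up Datalog evaluation.

def fixpoint(rules, facts, iterations=50):
    """rules: list of (head, body_atoms, weight); facts: {atom: certainty}."""
    values = dict(facts)
    for _ in range(iterations):
        changed = False
        for head, body, weight in rules:
            v = weight
            for atom in body:
                v *= values.get(atom, 0.0)
            if v > values.get(head, 0.0):
                values[head] = v
                changed = True
        if not changed:
            break
    return values

rules = [("buy_usd", ["rate_up"], 0.8),
         ("rate_up", ["export_growth", "low_inflation"], 0.9)]
facts = {"export_growth": 1.0, "low_inflation": 0.7}
print(fixpoint(rules, facts))   # rate_up = 0.63, buy_usd = 0.504
```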
10.
As the amount of streaming audio and video available to World Wide Web users grows, tools for analyzing and indexing this content will become increasingly important. Frequently, knowledge management applications and information portals synthesize unstructured text information from the Web, intranets and partner sites. Given this context, we crawl a statistically significant number of Web pages, detect those that contain streaming-media links, crawl the media links to extract associated metadata, and then use the crawl data to build a resource list for Web media. We have used these crawl findings to build a media indexing application that uses content-based indexing methods.
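A small sketch of the detection step only (the file-extension list and page content are assumptions, and this is not the authors' crawler): scan fetched HTML for anchors whose targets look like streaming-media files and collect them as candidate index entries.

```python
import re

MEDIA_EXTENSIONS = (".mp3", ".mp4", ".m3u", ".rm", ".ram", ".asx", ".avi")

def extract_media_links(html, base_url=""):
    """Return (url, link text) pairs for anchors whose href looks like a media file."""
    links = []
    for match in re.finditer(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
                             html, re.IGNORECASE | re.DOTALL):
        href, text = match.group(1), re.sub(r"<[^>]+>", "", match.group(2)).strip()
        if href.lower().endswith(MEDIA_EXTENSIONS):
            links.append((base_url + href if href.startswith("/") else href, text))
    return links

page = '<p>News</p><a href="/clips/speech.rm">Mayor speech</a> <a href="a.html">more</a>'
print(extract_media_links(page, base_url="http://example.org"))
```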
11.
Human knowledge in any area of expertise changes over time. Two types of such knowledge can be identified: time-independent and time-dependent. It is shown that maintaining the latter is harder than maintaining the former. The present paper applies research results from the area of temporal databases in order to maintain a rule-based knowledge base whose content changes with respect to real-world time. It is shown that the approach simplifies the maintenance of time-dependent knowledge. It also enables the study of the evolution of knowledge over time, which is knowledge in its own right. Three distinct solutions are proposed and evaluated. Their common characteristic is that knowledge is stored in a database; therefore, all the advantages of databases are inherited by knowledge bases. Implementations are also reported.
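A tiny sketch of the underlying idea, using a hypothetical schema rather than any of the paper's three solutions: each rule version is stored with a valid-time interval, so the rule base can be queried "as of" any point in real-world time and knowledge evolution can be studied by comparing versions. The rule texts and dates below are made up.

```python
from datetime import date

# (rule_id, rule_text, valid_from, valid_to); valid_to None means "still current".
rule_versions = [
    ("r1", "vat_rate = 0.17", date(2015, 1, 1), date(2019, 3, 31)),
    ("r1", "vat_rate = 0.16", date(2019, 4, 1), None),
    ("r2", "free_shipping if total > 100", date(2018, 6, 1), None),
]

def rules_as_of(when):
    """Return the rule versions valid at the given date."""
    return [(rid, text) for rid, text, start, end in rule_versions
            if start <= when and (end is None or when <= end)]

print(rules_as_of(date(2016, 5, 1)))   # only the old r1 version
print(rules_as_of(date(2020, 1, 1)))   # the current r1 version plus r2
```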
12.
Combining multiple knowledge bases
Combining the knowledge present in multiple knowledge base systems into a single knowledge base is discussed. A knowledge-based system can be considered an extension of a deductive database in that it permits function symbols as part of the theory. Alternative knowledge bases that deal with the same subject matter are considered. The authors define the concept of combining the knowledge present in a set of knowledge bases and present algorithms to maximally combine them so that the combination is consistent with respect to the integrity constraints associated with the knowledge bases. For this, the authors define the concept of maximality and prove that the algorithms presented combine the knowledge bases to generate a maximal theory. The authors also discuss the relationships between combining multiple knowledge bases and the view update problem.
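A toy propositional sketch (the paper's algorithms handle full theories with function symbols and a set-inclusion notion of maximality): combine the facts of several knowledge bases into a largest subset that satisfies simple "not both A and B" integrity constraints.

```python
from itertools import combinations

def consistent(facts, constraints):
    return not any(a in facts and b in facts for a, b in constraints)

def maximal_combination(kbs, constraints):
    """Search subsets of the union, largest first, and return the first consistent one."""
    union = sorted(set().union(*kbs))
    for size in range(len(union), -1, -1):
        for subset in combinations(union, size):
            if consistent(set(subset), constraints):
                return set(subset)
    return set()

kb1 = {"door_open", "alarm_on"}
kb2 = {"door_closed", "lights_off"}
constraints = [("door_open", "door_closed")]
print(maximal_combination([kb1, kb2], constraints))
```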
13.
Updating knowledge bases II
We consider the problem of updating a knowledge base, where a knowledge base is realised as a (logic) program. In a previous paper, we presented procedures for deleting an atom from a normal program and inserting an atom into a normal program, concentrating particularly on the case when negative literals appear in the bodies of program clauses. We also proved various properties of the procedures, including their correctness. Here we present mutually recursive versions of the update procedures and prove their correctness and other properties. We then generalise the procedures so that we can update an (arbitrary) program with an (arbitrary) formula. The correctness of the update procedures for programs is also proved.
14.
Most previous studies on rough sets have focused on attribute reduction and decision rule mining at a single concept level. Data with attribute value taxonomies (AVTs) are, however, common in real-world applications. In this paper, we extend Pawlak's rough set model and propose a novel multi-level rough set model (MLRS) based on AVTs and a full-subtree generalization scheme. In parallel with Pawlak's rough set model, some conclusions related to MLRS are given. Meanwhile, a novel concept of cut reduction based on MLRS is presented. A cut reduction induces the most abstract multi-level decision table that has the same classification ability as the raw decision table; no other multi-level decision table is more abstract. Furthermore, the relationships between attribute reduction in Pawlak's rough set model and cut reduction in MLRS are discussed. We also prove that the problem of cut reduction generation is NP-hard, and develop a heuristic algorithm named CRTDR for computing cut reductions. Finally, an approach named RMTDR for mining multi-level decision rules is provided; it can mine decision rules at different concept levels. Example analysis and comparative experiments show that the proposed methods are efficient and effective in handling problems where data are associated with AVTs.
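An illustrative sketch of the consistency check underlying cut reduction (not the CRTDR heuristic itself; the taxonomy and decision table are hypothetical): values are generalized along an AVT and the generalized table is tested for classification consistency.

```python
# Hypothetical AVT for a single "location" attribute: value -> parent concept.
AVT = {"paris": "europe", "berlin": "europe", "tokyo": "asia",
       "europe": "any", "asia": "any"}

def generalize(value, level):
    """Climb `level` steps up the taxonomy (full-subtree generalization)."""
    for _ in range(level):
        value = AVT.get(value, value)
    return value

def consistent_at(table, level):
    """True if no two rows share generalized condition values but differ in decision."""
    seen = {}
    for location, decision in table:
        key = generalize(location, level)
        if key in seen and seen[key] != decision:
            return False
        seen[key] = decision
    return True

table = [("paris", "ship"), ("berlin", "ship"), ("tokyo", "hold")]
print(consistent_at(table, 0), consistent_at(table, 1), consistent_at(table, 2))
# level 1 ("europe"/"asia") still classifies correctly; level 2 ("any") does not
```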
15.
Many information systems record executed process instances in an event log, a very rich source of information for several process management tasks, such as process mining and trace comparison. In this paper, we present a framework able to convert activities in the event log into higher-level concepts, at different levels of abstraction, on the basis of domain knowledge. Our abstraction mechanism manages non-trivial situations, such as interleaved activities or delays between two activities that abstract to the same concept. Abstracted traces can then be provided as input to an intelligent system implementing a variety of process management tasks, significantly enhancing the quality and usefulness of its output. In particular, we demonstrate how trace abstraction can affect the quality of process discovery, showing that it is possible to obtain more readable and understandable process models. Our experimental results also demonstrate the impact of our approach on the capability of trace comparison and clustering (realized by means of a metric that takes abstraction-phase penalties into account) to highlight (in)correct behaviors while abstracting from details.
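A simplified sketch of the abstraction step (the activity-to-concept mapping is hypothetical, and the framework in the paper additionally handles interleaving and delay penalties that this sketch ignores): each low-level activity is mapped to a domain concept and consecutive events that abstract to the same concept are merged.

```python
MAPPING = {  # low-level activity -> higher-level concept (domain knowledge)
    "blood_sample": "diagnostics", "x_ray": "diagnostics",
    "drug_a": "therapy", "drug_b": "therapy", "discharge_letter": "discharge",
}

def abstract_trace(trace):
    """Collapse runs of events whose activities map to the same concept."""
    abstracted = []
    for activity in trace:
        concept = MAPPING.get(activity, activity)
        if not abstracted or abstracted[-1] != concept:
            abstracted.append(concept)
    return abstracted

trace = ["blood_sample", "x_ray", "drug_a", "drug_b", "discharge_letter"]
print(abstract_trace(trace))   # ['diagnostics', 'therapy', 'discharge']
```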
16.
Christian Komo, Christoph Beierle 《Annals of Mathematics and Artificial Intelligence》2022,90(1):107-144
For nonmonotonic reasoning in the context of a knowledge base $\mathcal{R}$ containing conditionals of the form "If A then usually B", system P...
17.
18.
Architecture for knowledge discovery and knowledge management
In this paper, we propose the I-MIN model for knowledge discovery and knowledge management in evolving databases. The model splits the KDD process into three phases. The schema designed during the first phase abstracts the generic mining requirements of the KDD process and provides a mapping between the generic KDD process and (user-)specific KDD subprocesses. The generic process is executed periodically during the second phase, and windows of condensed knowledge called knowledge concentrates are created. During the third phase, which corresponds to actual mining by the end users, specific KDD subprocesses are invoked to mine the knowledge concentrates. The model provides a set of mining operators for the development of mining applications to discover and renew, preserve and reuse, and share knowledge for effective knowledge management. These operators can be invoked either by using a declarative query language or by writing applications. The architectural proposal emulates a DBMS-like environment for the managers, administrators and end users in an organization. Knowledge management functions, such as sharing and reuse of discovered knowledge among users and periodic updating of the discovered knowledge, are supported. Complete documentation and control of all KDD endeavors in an organization are facilitated by the I-MIN model, which helps in structuring and streamlining the KDD operations in an organization.
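A schematic sketch of the knowledge-concentrate idea (a hypothetical structure, not the I-MIN operators or query language): a window of transactions is periodically condensed into item counts, and later mining queries are answered from the stored concentrates rather than from the raw data.

```python
from collections import Counter

class KnowledgeConcentrate:
    def __init__(self, window_id, transactions):
        self.window_id = window_id
        self.size = len(transactions)
        self.item_counts = Counter(i for t in transactions for i in t)  # condensed form

def frequent_items(concentrates, min_support=0.4):
    """Mine frequent items across windows using only the stored concentrates."""
    total = sum(c.size for c in concentrates)
    counts = Counter()
    for c in concentrates:
        counts.update(c.item_counts)
    return {item: n / total for item, n in counts.items() if n / total >= min_support}

week1 = KnowledgeConcentrate("2024-W01", [{"milk", "bread"}, {"milk"}, {"eggs"}])
week2 = KnowledgeConcentrate("2024-W02", [{"milk", "eggs"}, {"bread"}])
print(frequent_items([week1, week2]))
```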
19.
20.
Verification of non-monotonic knowledge bases
Neli P. Zlatareva 《Decision Support Systems》1997,21(4):253-261
Non-monotonic Knowledge-Based Systems (KBSs) must undergo quality assurance procedures for the following two reasons: (i) belief revision (if such is provided) cannot always guarantee the structural correctness of the knowledge base, and in certain cases may introduce new semantic errors in the revised theory; (ii) non-monotonic theories may have multiple extensions, and some types of functional errors which do not violate structural properties of a given extension are hard to detect without testing the overall performance of the KBS. This paper presents an extension of the distributed verification method, which is meant to reveal structural and functional anomalies in non-monotonic KBSs. Two classes of anomalies are considered: (i) structural anomalies which manifest themselves within a given extension (such as logical inconsistencies, structural incompleteness, and intractabilities caused by circular rule chains), and (ii) functional anomalies related to the overall performance of the KBS (such as the existence of complementary rules and some types of rule subsumptions). The corresponding verification tests are presented and illustrated on an extended example.
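A small sketch of one of the structural checks mentioned above, detecting circular rule chains via cycle detection on the head/body dependency graph (the distributed verification method itself covers many more anomaly types; the rule set is a toy example).

```python
def find_cycle(rules):
    """rules: {head: set of body atoms}. Return an atom on a rule cycle, else None."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {atom: WHITE for atom in rules}

    def visit(atom):
        colour[atom] = GREY
        for dep in rules.get(atom, ()):
            if dep not in rules:          # base fact, cannot lie on a rule cycle
                continue
            if colour[dep] == GREY:
                return dep
            if colour[dep] == WHITE:
                found = visit(dep)
                if found is not None:
                    return found
        colour[atom] = BLACK
        return None

    for atom in rules:
        if colour[atom] == WHITE:
            found = visit(atom)
            if found is not None:
                return found
    return None

rules = {"a": {"b"}, "b": {"c"}, "c": {"a"}, "d": {"a"}}
print(find_cycle(rules))   # reports an atom on the a -> b -> c -> a cycle
```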