33.
Multiply sectioned Bayesian networks (MSBNs) support multiagent probabilistic inference in large, distributed problem domains. Inference with MSBNs can be performed using their compiled representations. Compilation involves moralization and triangulation of a set of local graphical structures. Agent privacy may prevent compiling MSBNs at a central location. In earlier work, agents performed compilation sequentially via a depth-first traversal of the hypertree that organizes the local subnets, so a communication failure between any two agents would halt the entire compilation. In this paper, we present an asynchronous compilation method by which multiple agents compile MSBNs in full parallel. Compared with traversal compilation, the asynchronous method is robust, self-adaptive, and fault-tolerant. Experiments show that both methods produce compilations of similar quality for simple MSBNs, but the asynchronous method produces much higher-quality compilations for complex MSBNs. Empirical study also indicates that the asynchronous method is consistently faster than the traversal one. Copyright © 2009 John Wiley & Sons, Ltd.
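The moralization step mentioned in this abstract is standard: the parents of each node are "married" (connected) and edge directions are dropped. A minimal sketch on a hypothetical toy DAG (the node names and graph are illustrative, not taken from the paper):

```python
# Moralization of a DAG: marry co-parents of each node, then drop directions.
# The toy DAG below is purely illustrative.
from itertools import combinations

def moralize(parents):
    """parents: dict mapping each node to the list of its parents.
    Returns the moral graph as a set of undirected edges (frozensets)."""
    und = set()
    for child, ps in parents.items():
        for p in ps:                      # keep each original edge, undirected
            und.add(frozenset((p, child)))
        for a, b in combinations(ps, 2):  # "marry" every pair of co-parents
            und.add(frozenset((a, b)))
    return und

dag = {"A": [], "B": [], "C": ["A", "B"]}   # A -> C <- B
moral = moralize(dag)
assert frozenset(("A", "B")) in moral       # the added marriage edge
```

Triangulation would then add further fill-in edges so the moral graph has no chordless cycles of length four or more; the paper's contribution is performing these steps across agents without a central site.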
35.
Xindong Wu, Philip S. Yu, Gregory Piatetsky-Shapiro, Nick Cercone, T. Y. Lin, Ramamohanarao Kotagiri, Benjamin W. Wah 《Knowledge and Information Systems》2003,5(2):248-261
At the 2001 IEEE International Conference on Data Mining in San Jose, California, November 29 to December 2, 2001, there was a panel discussion on how data mining research meets practical development. One motivation for organizing the panel was to provide useful advice to practitioners in industry on directions for data mining development. Based on the panel discussion, this paper presents the views and arguments of the panel members, the Conference Chair, and the Program Committee Co-Chairs. As a group, these people have both academic and industrial experience in different data mining related areas such as databases, machine learning, and neural networks. We answer questions such as (1) how far data mining is from practical development, (2) how data mining research differs from practical development, and (3) which areas in data mining are the most promising for practical development.
36.
Multiply sectioned Bayesian networks (MSBNs) support multi-agent probabilistic inference in large, distributed problem domains, where agents (subdomains) are organized by a tree structure called a hypertree. In earlier work, every belief updating method on a hypertree consists of two rounds of propagation, each implemented as a recursive process. Both processes must be started from the same designated (root) hypernode, so agents perform local belief updating in at most a partially parallel manner. Such methods may be unsuitable for practical multi-agent environments because they fail easily when problems occur in communication or in local belief updating. In this paper, we present a fault-tolerant belief updating method for multi-agent probabilistic inference in which multiple agents concurrently perform exact belief updating in full parallel. Temporary problems arising from time to time at some agents or on some communication channels do not prevent the agents from eventually converging to the correct beliefs, and permanently disconnected communication channels do not keep the properly connected portions of the system from finishing belief updating within those portions. Compared to the previous traversal-based belief updating, the proposed approach is not only fault-tolerant but also robust and scalable.
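The convergence-despite-message-loss property described here can be illustrated with a much simpler asynchronous protocol (this is a toy gossip computation, not the paper's exact belief updating algorithm; all names and the drop probability are illustrative):

```python
# Toy illustration: agents on a graph repeatedly exchange their current
# value with neighbours. Randomly dropped messages (temporary channel
# failures) delay, but do not prevent, convergence to the global maximum.
import random

def gossip_max(values, edges, drop_prob=0.3, rounds=200, seed=0):
    rng = random.Random(seed)
    state = dict(values)
    for _ in range(rounds):
        for a, b in edges:                 # every channel attempts each round
            if rng.random() < drop_prob:   # temporary communication failure
                continue
            m = max(state[a], state[b])    # exchange and update locally
            state[a] = state[b] = m
    return state

vals = {"agent1": 3, "agent2": 7, "agent3": 5}
edges = [("agent1", "agent2"), ("agent2", "agent3")]
final = gossip_max(vals, edges)
assert all(v == 7 for v in final.values())
```

As in the paper's setting, no designated root starts the process, and no global round structure is required: any interleaving of successful exchanges eventually reaches the same fixed point.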
37.
Larry Cercone 《Polymer Composites》1991,12(2):81-86
Decreased mechanical strength in a fiber reinforced plastic part can often be traced to poor or incomplete impregnation of the reinforcing fiber with the matrix. To properly understand the impregnation process in the design of new composites manufacturing machinery (specifically in unidirectional tape machines), or to optimize wetout in existing machine designs, the raw process materials and their relationship within the process environment must be examined. The critical factors are: resin viscosity vs. temperature; the work of adhesion between the fiber and resin; and the problem of forcing the resin to completely penetrate a fiber bundle. If these factors are known, nip rolls can be designed to meet a specific process envelope, or in the case of existing equipment, the existing process envelope for specific fiber/matrix combinations can be manipulated for maximum fiber wetout.
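The first critical factor, resin viscosity vs. temperature, is commonly modeled with an Arrhenius-type relation. A minimal sketch follows; the pre-exponential factor and activation energy below are illustrative placeholders, not measured values from this paper:

```python
# Arrhenius-type viscosity model: eta(T) = A * exp(E_a / (R * T)).
# A and E_a here are illustrative constants, not data from the paper.
import math

R = 8.314  # gas constant, J/(mol*K)

def viscosity(T_kelvin, A=1e-6, E_a=5.0e4):
    """Resin viscosity (arbitrary units) at absolute temperature T."""
    return A * math.exp(E_a / (R * T_kelvin))

# Viscosity falls as temperature rises, which eases fiber-bundle penetration.
assert viscosity(400.0) < viscosity(350.0)
```

In practice the constants would be fitted to rheometer data for the specific resin, and the resulting curve used to pick a nip-roll temperature inside the process envelope.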
38.
This paper describes a graphical user interface for database-oriented knowledge discovery systems, DBLEARN, which has been developed for extracting knowledge rules from relational databases. The interface, designed using a query-by-example approach, provides a graphical means of specifying knowledge-discovery tasks. It supplies a graphical browsing facility to help users perceive the structure of the target database. To guide users' task specification, a cooperative, menu-based guidance facility has been integrated into the interface. The interface also supplies a graphical interactive adjustment facility that helps users refine the task specification and improve the quality of the learned knowledge rules.
39.
We present a method to learn maximal generalized decision rules from databases by integrating discretization, generalization, and rough set feature selection. Our method reduces the data both horizontally and vertically. In the first phase, discretization and generalization are integrated: numeric attributes are discretized into a few intervals, the primitive values of symbolic attributes are replaced by high-level concepts, and some obviously superfluous or irrelevant symbolic attributes are eliminated. Horizontal reduction is accomplished by merging identical tuples after substituting an attribute value with its higher-level value in a predefined concept hierarchy (for symbolic attributes) or after discretizing continuous (numeric) attributes. This phase greatly decreases the number of tuples in the database. In the second phase, a novel context-sensitive feature merit measure is used to rank the features, and a subset of relevant attributes is chosen based on rough set theory and the merit values of the features. A reduced table is obtained by removing the attributes that are not in this subset, so the data set is further reduced vertically without destroying the interdependence relationships between classes and attributes. Rough set-based value reduction is then performed on the reduced table, and all redundant condition values are dropped. Finally, the tuples in the reduced table are transformed into a set of maximal generalized decision rules. Experimental results on UCI data sets and a real market database demonstrate that our method can dramatically reduce the feature space and improve learning accuracy.
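The horizontal-reduction idea in the first phase can be sketched with equal-width discretization followed by merging identical tuples (the data, bin count, and discretization scheme below are illustrative assumptions, not the paper's actual measure or experiments):

```python
# Sketch of horizontal reduction: discretize a numeric attribute into
# equal-width intervals, then merge tuples that become identical.
# The rows and bin count are illustrative only.
from collections import Counter

def discretize(value, lo, hi, bins):
    """Map a numeric value in [lo, hi] to an interval index in [0, bins-1]."""
    idx = int((value - lo) / (hi - lo) * bins)
    return min(idx, bins - 1)

rows = [(23.0, "yes"), (24.5, "yes"), (61.0, "no"), (64.0, "no")]
coarse = [(discretize(age, 0, 100, 4), label) for age, label in rows]
merged = Counter(coarse)          # identical coarse tuples collapse, with counts
assert merged[(0, "yes")] == 2 and merged[(2, "no")] == 2
```

Four tuples collapse to two after discretization, which is the tuple-count decrease the abstract describes; the second phase would then drop attribute columns (vertical reduction) using the rough set-based merit ranking.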
40.