In this paper, we propose an iterative approach to increase the computational efficiency of the homotopy analysis method (HAM), an analytic technique for highly nonlinear problems. By means of the Gram–Schmidt process (Arfken et al., 1985) [15], we approximate the right-hand-side terms of the high-order linear sub-equations by a finite set of orthonormal bases. Based on this truncation technique, we introduce the Mth-order iterative HAM, which uses each Mth-order approximation as a new initial guess. The iterative HAM is found to be much more efficient than the standard HAM without truncation, as illustrated by three examples of nonlinear differential equations defined on an infinite domain. This work may greatly improve the computational efficiency of the HAM and of the Mathematica package BVPh for nonlinear BVPs.
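The truncation step above rests on ordinary Gram–Schmidt orthonormalization: project each right-hand side onto a finite orthonormal basis and keep only that projection. A minimal plain-Python sketch, with vectors standing in for the basis functions (`gram_schmidt` and `project` are illustrative names, not part of BVPh):

```python
import math

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize a list of vectors.
    Illustrative sketch, not the HAM-specific routine."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            c = dot(q, w)
            w = [wi - c * qi for wi, qi in zip(w, q)]  # remove component along q
        norm = math.sqrt(dot(w, w))
        if norm > 1e-12:                # drop (near-)linearly-dependent vectors
            basis.append([wi / norm for wi in w])
    return basis

def project(f, basis):
    """Best approximation of f in the span of an orthonormal basis,
    i.e. the truncation sum_k <f, q_k> q_k."""
    out = [0.0] * len(f)
    for q in basis:
        c = sum(a * b for a, b in zip(q, f))
        out = [o + c * qi for o, qi in zip(out, q)]
    return out
```

The residual `f - project(f, basis)` is orthogonal to every basis vector, which is what makes the truncated representation the best approximation in the spanned subspace.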
A hybrid uncertainty theory is developed to bridge the gap between fuzzy set theory and Dempster-Shafer theory. Its basis is the Dempster-Shafer formalism, which is extended to include a complete set of basic operations for manipulating uncertainties in a set-theoretic framework. The new theory, operator-belief theory (OT), retains the probabilistic flavor of Dempster's original point-to-set mappings but includes the potential for defining a wider range of operators like those found in fuzzy set theory.
The basic operations defined for OT in this paper include those for dominance and order, union, intersection, complement, and general mappings. Several sample problems in approximate reasoning are worked out to illustrate the new approach and to compare it with the other theories currently in use. A general method for extending the theory by using fuzzy set theory as a guide is suggested.
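Since OT builds on the Dempster–Shafer formalism, a concrete starting point is Dempster's rule of combination for two basic probability assignments. The sketch below is the standard rule only, not OT's extended operator set; `dempster_combine` is an illustrative name:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments, given as dicts mapping frozenset -> mass.
    Assumes the two assignments are not totally conflicting (k > 0)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    k = 1.0 - conflict                   # normalization constant
    return {s: w / k for s, w in combined.items()}
```

For example, combining m1 = {A: 0.6, {A,B}: 0.4} with m2 = {A: 0.5, {A,B}: 0.5} yields {A: 0.8, {A,B}: 0.2}, since every pairwise intersection containing A contributes to A's mass.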
With the increasing number of available XML documents, numerous approaches for their retrieval have been proposed in the literature. These approaches usually process documents and queries through their tree representations, whether implicitly or explicitly. Although retrieving XML documents can be viewed as a tree-matching problem between the query tree and the document trees, only a few approaches take advantage of the algorithms and methods proposed in graph theory. In this paper, we study the theoretical approaches proposed in the literature for tree matching and examine how they have been adapted to XML querying and retrieval, from both an exact and an approximate matching perspective. This study allows us to highlight aspects of graph theory that have not yet been explored in XML retrieval.
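As a toy instance of the exact tree-matching perspective, the sketch below checks whether a query tree occurs as an exact subtree of a document tree, with trees encoded as `(label, children)` tuples (an assumed encoding for illustration, not a standard XML API):

```python
def exact_subtree_match(doc, query):
    """Return True if the query tree occurs as an exact subtree of the
    document tree. Trees are (label, [children]) tuples; toy sketch of
    the exact tree-matching view of XML querying."""
    def same(a, b):
        # Identical label, same number of children, all children identical.
        return (a[0] == b[0] and len(a[1]) == len(b[1])
                and all(same(x, y) for x, y in zip(a[1], b[1])))
    if same(doc, query):
        return True
    return any(exact_subtree_match(c, query) for c in doc[1])
```

Approximate matching replaces the all-or-nothing `same` test with a cost model such as tree edit distance, which is where the graph-theoretic algorithms surveyed in the paper come in.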
Several approximate indexing schemes have recently been proposed for metric spaces; they sort the objects in the database according to pseudo-scores. It is known that (1) some of them provide a very good trade-off between response time and accuracy, and (2) probability-based pseudo-scores can provide an optimal trade-off in range queries if the probabilities are correctly estimated. Based on these facts, we propose a probabilistic enhancement scheme that can be applied to any pseudo-score-based scheme. Our scheme computes probability-based pseudo-scores from the pseudo-scores produced by the underlying scheme. To estimate the probability-based pseudo-scores, we use object-specific parameters in a logistic regression model and learn the parameters using MAP (maximum a posteriori) estimation and the empirical Bayes method. We also propose a technique that speeds up learning the parameters using pseudo-scores. We applied our scheme to two state-of-the-art schemes, the standard pivot-based scheme and the permutation-based scheme, and evaluated them on various datasets from the Metric Space Library. The results show that our scheme outperforms the conventional schemes in both the number of distance computations and CPU time on all datasets.
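For the standard pivot-based scheme mentioned above, a common pseudo-score is the pivot lower bound max_p |d(q,p) - d(o,p)|, which follows from the triangle inequality; objects are then visited in increasing order of this bound. A generic sketch (function and parameter names are illustrative, not the paper's notation):

```python
def pivot_pseudo_scores(query, objects, pivots, dist):
    """Rank objects by the pivot-based lower bound on their distance to
    the query: max over pivots p of |d(q,p) - d(o,p)| (triangle
    inequality). Returns (pseudo_score, object) pairs in ascending
    order of the bound."""
    dq = [dist(query, p) for p in pivots]       # query-to-pivot distances
    scores = []
    for o in objects:
        lb = max(abs(dq[i] - dist(o, p)) for i, p in enumerate(pivots))
        scores.append((lb, o))
    scores.sort(key=lambda t: t[0])
    return scores
```

Because the bound never exceeds the true distance, visiting objects in this order tends to find close objects early; the probability-based enhancement described in the abstract replaces the raw bound with an estimated probability of relevance.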
Many rough-set-based methods for handling incomplete information systems have been proposed in recent years. However, they are only suitable for incomplete systems with regular attributes, whose domains are not preference-ordered. This paper therefore focuses on a more complex case: the incomplete ordered information system, in which all attributes are criteria. A criterion is an attribute with a preference-ordered domain. To conduct classification analysis in the incomplete ordered information system, the concept of a similarity dominance relation is first proposed. Two types of knowledge reduction are then formed, each preserving a different notion of similarity dominance relation. By introducing the approximate distribution reduct into the incomplete ordered decision system, we obtain the judgment theorems and discernibility matrices associated with four novel approximate distribution reducts. A numerical example is employed to substantiate the conceptual arguments.
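One simplified reading of a similarity dominance relation: object x dominates object y if, on every criterion where both values are known, x's value is at least y's, with unknown values treated as compatible. The paper's exact definition may differ; this sketch only illustrates the idea:

```python
def similarity_dominates(x, y, unknown="*"):
    """True if x similarity-dominates y: on every criterion where both
    values are known, x's value is >= y's; unknown values ('*') are
    treated as compatible with anything. Simplified illustration, not
    necessarily the paper's formal definition."""
    return all(a == unknown or b == unknown or a >= b
               for a, b in zip(x, y))
```

Unlike the classical dominance relation on complete systems, this relation need not be transitive, which is why reducts preserving it require their own judgment theorems and discernibility matrices.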
Partial information in databases can arise when information from several databases is combined. Even if each database is complete for some “world”, the combined databases will not be, and answers to queries against such combined databases can only be approximated. In this paper we describe various situations in which a precise answer cannot be obtained for a query asked against multiple databases. Based on an analysis of these situations, we propose a classification of constructs that can be used to model approximations.
The main goal of the paper is to study several formal models of approximations and their semantics. In particular, we obtain universality properties for these models of approximations. Universality properties suggest a syntax for languages with approximations, based on the operations naturally associated with them. We prove universality properties for most of the approximation constructs and then design languages built around the datatypes given by these constructs. A straightforward approach results in languages with a number of limitations. To overcome those limitations, we show how all the languages can be embedded into a language for conjunctive and disjunctive sets from Libkin and Wong (1996) and demonstrate its usefulness in querying independent databases. We also discuss the semantics of the approximation constructs and the relationships between them.
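One of the simplest approximation constructs is the lower/upper (certain/possible) pair: a tuple is certain if every source returns it and possible if some source does. A toy sketch under that reading (not necessarily the paper's formal semantics):

```python
def approximate_answer(answers_per_db):
    """Approximate the answer to a query asked against several
    databases, each complete only for its own world, as a pair
    (certain, possible): tuples returned by every source vs. tuples
    returned by at least one source. Toy lower/upper construct."""
    sets = [set(a) for a in answers_per_db]
    certain = set.intersection(*sets)   # lower approximation
    possible = set.union(*sets)         # upper approximation
    return certain, possible
```

The true combined answer, whatever it is, lies between the two bounds; the languages studied in the paper manipulate such sandwich-style datatypes directly.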
The increasing prominence of data streams arising in a wide range of advanced applications, such as fraud detection and trend learning, has led to the study of online mining of frequent itemsets (FIs). Unlike mining static databases, mining data streams poses many new challenges. In addition to the one-scan nature, the unbounded memory requirement, and the high data arrival rate of data streams, the combinatorial explosion of itemsets exacerbates the mining task. The high complexity of the FI mining problem hinders the application of stream-mining techniques, and a critical review of existing techniques is needed in order to design and develop efficient mining algorithms and data structures that can match the processing rate of the mining with the high arrival rate of data streams. Within a unifying set of notations and terminologies, we describe in this paper the main techniques for mining data streams and present a comprehensive survey of state-of-the-art algorithms for mining frequent itemsets over data streams. We classify the stream-mining techniques into two categories based on the window model they adopt, in order to provide insight into how and why the techniques are useful. We then further analyze the algorithms according to whether they are exact or approximate and, for approximate approaches, whether they are false-positive or false-negative. We also discuss various interesting issues, including the merits and limitations of existing research and substantive areas for future research.
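A classic example of a false-positive approximate algorithm of the kind surveyed here is Lossy Counting (Manku and Motwani), which undercounts true frequencies by at most εN in one pass with bounded memory. The sketch below handles single items for brevity; itemset versions extend the same bucketing idea:

```python
def lossy_count(stream, epsilon):
    """Lossy Counting: one-pass, bounded-memory, false-positive
    approximate frequency counting over a stream. Each stored count
    undercounts the true frequency by at most epsilon * N. Single-item
    version for brevity; itemset variants use the same bucket scheme."""
    counts, deltas = {}, {}
    width = int(1 / epsilon)             # bucket width = ceil(1/epsilon)
    for n, item in enumerate(stream, 1):
        bucket = (n - 1) // width + 1    # id of the current bucket
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1    # maximum possible undercount
        if n % width == 0:               # prune at bucket boundaries
            for it in list(counts):
                if counts[it] + deltas[it] <= bucket:
                    del counts[it], deltas[it]
    return counts, deltas
```

Because pruning only discards entries whose count plus maximum undercount falls below the bucket id, every truly frequent item survives (no false negatives), while some infrequent items may linger with inflated estimates, which is exactly the false-positive behavior the survey's classification captures.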