Found 20 similar documents (search time: 15 ms)
1.
Louis Monier 《Theoretical computer science》1980,12(1):97-108
We analyse two recent probabilistic primality testing algorithms; the first one is derived from Miller [6] in a formulation given by Rabin [7], and the second one is from Solovay and Strassen [9]. Both decide whether or not an odd number n is prime in time O(m log n M(n)) with an error probability less than α^m, for some constant α < 1. Our comparison shows that the first algorithm is always more efficient than the second, both in probabilistic and algorithmic terms.
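A minimal sketch of the Miller-Rabin style test analysed above (Python; parameter names are ours). For a composite input, the failure probability drops at least geometrically in the number of rounds, which is the α^m behaviour the abstract describes:

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test in the Miller/Rabin formulation.
    For composite n, each random round detects compositeness with
    probability at least 3/4, so the error shrinks geometrically."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as 2**s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)              # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a is a witness: n is composite
    return True                       # probably prime
```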
2.
A comparison of algorithms for inference and learning in probabilistic graphical models  Total citations: 7 (self: 0, others: 7)
Frey BJ Jojic N 《IEEE transactions on pattern analysis and machine intelligence》2005,27(9):1392-1416
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
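One of the approximate techniques the survey reviews, iterated conditional modes, can be sketched on a toy 1-D Ising chain; the energy weights `beta` and `lam` are illustrative assumptions, not values from the paper:

```python
def icm_denoise(obs, beta=2.0, lam=2.0, sweeps=5):
    """Iterated conditional modes on a 1-D Ising chain.

    Labels x_i in {-1, +1}; energy
        E(x) = -lam * sum_i x_i * obs_i - beta * sum_i x_i * x_{i+1}.
    Each sweep greedily sets x_i to the value minimizing its local
    energy, i.e. the mode of its conditional given the neighbours."""
    x = list(obs)
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            def local(v):
                e = -lam * v * obs[i]          # data term
                if i > 0:
                    e -= beta * v * x[i - 1]   # left-neighbour smoothness
                if i < n - 1:
                    e -= beta * v * x[i + 1]   # right-neighbour smoothness
                return e
            x[i] = min((-1, 1), key=local)
    return x
```

With a sufficiently strong coupling term, an isolated flipped label is restored by its neighbours; ICM is fast but only finds a local minimum of the free energy, which is why the survey contrasts it with EM, Gibbs sampling, and belief propagation.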
3.
Two algorithms for least-squares estimation of parameters of a Hammerstein model are compared. Numerical examples demonstrate that the iterative method of Narendra and Gallman produces significantly smaller parameter covariance and slightly smaller rms error than the noniterative method of Chang and Luus, as expected from an analysis of the parameter estimators. In addition, the iterative algorithm is faster for high-order systems.
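A hedged sketch of the noniterative, overparametrized least-squares idea attributed to Chang and Luus, on an assumed second-order FIR Hammerstein system (the nonlinearity, coefficients, and basis here are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed true system: static nonlinearity f(u) = u + 0.5*u**2 followed
# by FIR dynamics y_k = 1.0*f(u_k) + 0.4*f(u_{k-1})  (values invented).
b = np.array([1.0, 0.4])      # linear (FIR) block
c2 = 0.5                      # quadratic coefficient, with c1 fixed to 1

u = rng.uniform(-1.0, 1.0, 500)
v = u + c2 * u**2             # output of the static nonlinearity
y = np.convolve(v, b)[:len(u)]

# Overparametrized least squares: regress y on every lagged basis term;
# each fitted coefficient estimates a product b_i * c_j, so both blocks
# are recovered by fixing c1 = 1 and taking ratios (no iteration).
X = np.column_stack([u, u**2, np.roll(u, 1), np.roll(u, 1)**2])
X[0, 2:] = 0.0                # discard the wrapped-around first sample
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

b_hat = theta[[0, 2]]         # estimates of b_0, b_1 (since c1 = 1)
c2_hat = theta[1] / theta[0]  # (b_0*c2) / (b_0*c1)
```

The Narendra-Gallman method instead alternates two smaller regressions (fix the nonlinearity, fit the linear block, and vice versa), which the abstract reports gives smaller parameter covariance.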
4.
Evolving rule induction algorithms with multi-objective grammar-based genetic programming  Total citations: 4 (self: 4, others: 0)
Multi-objective optimization has played a major role in solving problems where two or more conflicting objectives need to be simultaneously optimized. This paper presents a Multi-Objective grammar-based genetic programming (MOGGP) system that automatically evolves complete rule induction algorithms, which in turn produce both accurate and compact rule models. The system was compared with a single-objective GGP and three other rule induction algorithms. In total, 20 UCI data sets were used to generate and test generic rule induction algorithms, which can now be applied to any classification data set. Experiments showed that, in general, the proposed MOGGP finds rule induction algorithms with competitive predictive accuracies and more compact models than the algorithms it was compared with.
Gisele L. Pappa
5.
《International Journal of Computer Mathematics》2012,89(1-4):291-302
Two algorithms for the eigensolution of real symmetric matrices [8] are discussed. The first is a variant of the classical Jacobi method and the second is also based on orthogonal transforms. Both algorithms may be readily expressed in forms which allow the power of SIMD (single instruction multiple data) machines to be exploited. The performances of the algorithms on the ICL DAP (distributed array processor) and the CRAY-1 are compared.
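The classical Jacobi variant can be sketched as follows (a plain serial Python version; the independent plane rotations in each sweep are what an SIMD machine such as the DAP would batch):

```python
import math

def jacobi_eigenvalues(A, sweeps=10):
    """Cyclic Jacobi method for a real symmetric matrix: repeatedly
    annihilate off-diagonal entries with 2x2 plane rotations until
    the matrix is (numerically) diagonal; returns sorted eigenvalues."""
    n = len(A)
    a = [row[:] for row in A]
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p][q]) < 1e-12:
                    continue
                # rotation angle that zeroes a[p][q]
                theta = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):        # rows p, q of J^T * A
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k] = c * apk - s * aqk
                    a[q][k] = s * apk + c * aqk
                for k in range(n):        # columns p, q of (...) * J
                    akp, akq = a[k][p], a[k][q]
                    a[k][p] = c * akp - s * akq
                    a[k][q] = s * akp + c * akq
    return sorted(a[i][i] for i in range(n))
```

Each rotation is a similarity transform, so the trace and determinant are preserved while the off-diagonal mass decays quadratically over sweeps.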
6.
Rule induction is an important part of learning in expert systems. Rules can help managers make more effective decisions and gain insight into the relationships between decision variables. We present a logic-based approach to rule induction in expert systems which is simple, robust and consistent. We also derive bounds on levels of certainty for combining rules. We apply our approach to the development of rules for the entry decisions of new products. We then discuss how the logic-based approach of rule induction can be used to create a decision support system and the methodology to create such a system.
7.
During the last decade, databases have been growing rapidly in size and number as a result of rapid advances in database capacity and management techniques. This expansive growth in data and databases has caused a pressing need for the development of more powerful techniques to convert the vast pool of data into valuable information. For the purpose of strategic decision-making, many companies and researchers have recognized mining useful information and knowledge from large databases as a key research topic and as an opportunity for major revenues and improving competitiveness. In this paper, we will explore a new rule generation algorithm (based on rough sets theory) that can generate a minimal set of rule reducts, and a rule generation and rule induction program (RGRIP) which can efficiently induce decision rules from conflicting information systems. All the methods will also be illustrated with numerical examples.
8.
Lemuel R. Waitman Douglas H. Fisher Paul H. King 《Journal of Intelligent Information Systems》2006,27(1):49-77
Most rule learning systems posit hard decision boundaries for continuous attributes and point estimates of rule accuracy, with no measures of variance, which may seem arbitrary to a domain expert. These hard boundaries/points change with small perturbations to the training data due to algorithm instability. Moreover, rule induction typically produces a large number of rules that must be filtered and interpreted by an analyst. This paper describes a method of combining rules over multiple bootstrap replications of rule induction so as to reduce the total number of rules presented to an analyst, to measure and increase the stability of the rule induction process, and to provide a measure of variance to continuous attribute decision boundaries and accuracy point estimates. A measure of similarity between rules is also introduced as a basis of multidimensional scaling to visualize rule similarity. The method was applied to perioperative data and to the UCI (University of California, Irvine) thyroid dataset.
Minor revision submitted to the Journal of Intelligent Information Systems, April 2005.
9.
Gerhard Barth 《Information Processing Letters》1984,18(5):249-256
Average case analyses of two algorithms to locate the leftmost occurrence of a string Pattern in a string Text are conducted in this paper. One algorithm is based on a straightforward trial-and-error approach, the other one uses a sophisticated strategy discovered by Knuth, Morris and Pratt (1977). Costs measured are the expected number of comparisons between individual characters. Let Naive and KMP denote the average case complexities of the two algorithms, respectively. We show that 1 − (1/c) + (1/c²) is an accurate approximation for the ratio KMP/Naive, provided both Pattern and Text are random strings over an alphabet of size c. In both cases, the application of Markov chain theory is expedient for performing the analysis. However, in order to get rid of complex conditioning, the Markov chain model for the KMP algorithm is based on some heuristics. This approach is believed to be practically sound. Some indication of the complexity that might be involved in an exact average case analysis of the KMP algorithm can be found in the work by Guibas and Odlyzko (1981).
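The two algorithms under analysis, instrumented with the character-comparison counter that serves as the cost measure above (a sketch; function names are ours):

```python
def naive_count(text, pat):
    """Straightforward trial-and-error search.
    Returns (index of leftmost match or -1, character comparisons)."""
    comps = 0
    for i in range(len(text) - len(pat) + 1):
        j = 0
        while j < len(pat):
            comps += 1
            if text[i + j] != pat[j]:
                break
            j += 1
        if j == len(pat):
            return i, comps
    return -1, comps

def kmp_count(text, pat):
    """Knuth-Morris-Pratt search with the same comparison counter."""
    fail = [0] * len(pat)           # failure function over the pattern
    k = 0
    for i in range(1, len(pat)):
        while k and pat[i] != pat[k]:
            k = fail[k - 1]
        if pat[i] == pat[k]:
            k += 1
        fail[i] = k
    comps = 0
    j = 0
    for i, ch in enumerate(text):
        while j and ch != pat[j]:
            comps += 1
            j = fail[j - 1]         # never re-reads text characters
        comps += 1
        if ch == pat[j]:
            j += 1
        if j == len(pat):
            return i - j + 1, comps
    return -1, comps
```

Counting comparisons on random strings over a c-letter alphabet is how the ratio above can be checked empirically.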
10.
E.A. Abo-Tabl 《Information Sciences》2011,181(12):2587-2596
In the present paper, we investigate the three types of Yao's lower and upper approximations of any set with respect to any similarity relation. These types are based on a right neighborhood. We also define and investigate three other types of approximations for any similarity relation; these new types are based on the intersection of the right neighborhoods. Moreover, we give a comparison between these types. Lastly, the relationship between these definitions is introduced.
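A small sketch of the right-neighborhood construction, assuming the basic Yao-style definition in which the lower approximation collects points whose right neighborhood lies inside the set (the paper compares several such variants; this shows only one):

```python
def right_neighborhood(R, x):
    """xR = {y : (x, y) in R}, the right neighborhood of x under
    a binary (e.g. similarity) relation R given as a set of pairs."""
    return {y for (a, y) in R if a == x}

def lower_upper(universe, R, target):
    """Yao-style approximations of `target` from right neighborhoods:
    x is in the lower approximation when its whole neighborhood is
    contained in the set, and in the upper approximation when the
    neighborhood meets the set."""
    lower = {x for x in universe if right_neighborhood(R, x) <= target}
    upper = {x for x in universe if right_neighborhood(R, x) & target}
    return lower, upper
```

The alternative types the paper introduces replace `right_neighborhood(R, x)` with intersections of right neighborhoods, which shrinks the granules and changes both approximations.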
11.
Flavio Massimiliano Cecchini Martin Riedl Elisabetta Fersini Chris Biemann 《Language Resources and Evaluation》2018,52(3):733-770
This article presents a comparison of different Word Sense Induction (wsi) clustering algorithms on two novel pseudoword data sets of semantic-similarity and co-occurrence-based word graphs, with a special focus on the detection of homonymic polysemy. We follow the original definition of a pseudoword as the combination of two monosemous terms and their contexts to simulate a polysemous word. The evaluation is performed comparing the algorithm’s output on a pseudoword’s ego word graph (i.e., a graph that represents the pseudoword’s context in the corpus) with the known subdivision given by the components corresponding to the monosemous source words forming the pseudoword. The main contribution of this article is to present a self-sufficient pseudoword-based evaluation framework for wsi graph-based clustering algorithms, thereby defining a new evaluation measure (top2) and a secondary clustering process (hyperclustering). To our knowledge, we are the first to conduct and discuss a large-scale systematic pseudoword evaluation targeting the induction of coarse-grained homonymous word senses across a large number of graph clustering algorithms.
12.
Passive microwave algorithms for sea ice concentration: A comparison of two techniques  Total citations: 6 (self: 0, others: 6)
Josefino C. Comiso Donald J. Cavalieri Claire L. Parkinson Per Gloersen 《Remote sensing of environment》1997,60(3):357-384
The most comprehensive large-scale characterization of the global sea ice cover so far has been provided by satellite passive microwave data. Accurate retrieval of ice concentrations from these data is important because of the sensitivity of surface flux (e.g., heat, salt, and water) calculations to small changes in the amount of open water (leads and polynyas) within the polar ice packs. Two algorithms that have been used for deriving ice concentrations from multichannel data are compared. One is the NASA Team algorithm and the other is the Bootstrap algorithm, both of which were developed at NASA's Goddard Space Flight Center. The two algorithms use different channel combinations, reference brightness temperatures, weather filters, and techniques. Analyses are made to evaluate the sensitivity of algorithm results to variations of emissivity and temperature with space and time. To assess the difference in the performance of the two algorithms, analyses were performed with data from both hemispheres and for all seasons. The results show only small differences in the central Arctic in winter but larger disagreements in the seasonal regions and in summer. In some areas in the Antarctic, the Bootstrap technique shows ice concentrations higher than those of the Team algorithm by as much as 25%; whereas, in other areas, it shows ice concentrations lower by as much as 30%. The differences in the results are caused by temperature effects, emissivity effects, and tie point differences. The Team and the Bootstrap results were compared with available Landsat, advanced very high resolution radiometer (AVHRR) and synthetic aperture radar (SAR) data. AVHRR, Landsat, and SAR data sets all yield higher concentrations than the passive microwave algorithms. Inconsistencies among results suggest the need for further validation studies.
13.
Santana R 《Evolutionary computation》2005,13(1):67-97
The question of finding feasible ways for estimating probability distributions is one of the main challenges for Estimation of Distribution Algorithms (EDAs). To estimate the distribution of the selected solutions, EDAs use factorizations constructed according to graphical models. The class of factorizations that can be obtained from these probability models is highly constrained. Expanding the class of factorizations that could be employed for probability approximation is a necessary step for the conception of more robust EDAs. In this paper we introduce a method for learning a more general class of probability factorizations. The method combines a reformulation of a probability approximation procedure known in statistical physics as the Kikuchi approximation of energy, with a novel approach for finding graph decompositions. We present the Markov Network Estimation of Distribution Algorithm (MN-EDA), an EDA that uses Kikuchi approximations to estimate the distribution, and Gibbs Sampling (GS) to generate new points. A systematic empirical evaluation of MN-EDA is done in comparison with different Bayesian network based EDAs. From our experiments we conclude that the algorithm can outperform other EDAs that use traditional methods of probability approximation in the optimization of functions with strong interactions among their variables.
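MN-EDA itself is involved; the basic estimate-then-sample loop it extends can be sketched with the simplest factorization, a fully independent (univariate) model on OneMax. All parameters here are illustrative; MN-EDA replaces this product model with a Kikuchi approximation over a Markov network and samples it with Gibbs sampling:

```python
import random

def umda_onemax(n=30, pop=80, sel=40, gens=60, seed=1):
    """Univariate marginal EDA (UMDA) on OneMax: each generation,
    estimate an independent Bernoulli marginal per bit from the
    selected (truncated) solutions, then sample the next population
    from that factorized distribution. Returns best fitness seen."""
    rng = random.Random(seed)
    probs = [0.5] * n
    best = 0
    for _ in range(gens):
        popu = [[1 if rng.random() < probs[i] else 0 for i in range(n)]
                for _ in range(pop)]
        popu.sort(key=sum, reverse=True)     # OneMax fitness = bit sum
        best = max(best, sum(popu[0]))
        selected = popu[:sel]                # truncation selection
        # re-estimate marginals, clamped to avoid premature fixation
        probs = [min(0.95, max(0.05, sum(ind[i] for ind in selected) / sel))
                 for i in range(n)]
    return best
```

On separable functions like OneMax the univariate model suffices; the paper's point is that functions with strong variable interactions need the richer factorizations Kikuchi approximations provide.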
14.
Many real world problems involve several, usually conflicting, objectives. Multiobjective analysis deals with these problems by locating trade-offs between different optimal solutions. Regarding graph search problems, several algorithms based on best-first and depth-first approaches have been proposed to return the set of all Pareto optimal solutions. This article presents a detailed comparison between two representatives of multiobjective depth-first algorithms, PIDMOA* and MO-DF-BnB. Both of them extend previous single-objective search algorithms with linear-space requirements to the multiobjective case. Experimental analyses of their time performance over tree-shaped search spaces are presented. The results clarify the fitness of both algorithms to parameters like the number or depth of goal nodes.
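The Pareto-optimality test that both algorithms rely on when pruning and collecting solutions can be sketched as follows (minimization convention assumed; function names are ours):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Filter a list of objective vectors down to the non-dominated set,
    i.e. the trade-off surface a multiobjective search must return."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```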
15.
A theoretical comparison of texture algorithms  Total citations: 14 (self: 0, others: 0)
Conners RW Harlow CA 《IEEE transactions on pattern analysis and machine intelligence》1980,(3):204-222
An evaluation of the ability of four texture analysis algorithms to perform automatic texture discrimination will be described. The algorithms which will be examined are the spatial gray level dependence method (SGLDM), the gray level run length method (GLRLM), the gray level difference method (GLDM), and the power spectral method (PSM). The evaluation procedure employed does not depend on the set of features used with each algorithm or the pattern recognition scheme. Rather, what is examined is the amount of texture context information contained in the spatial gray level dependence matrices, the gray level run length matrices, the gray level difference density functions, and the power spectrum. The comparison will be performed in two steps. First, only Markov generated textures will be considered. The Markov textures employed are similar to the ones used by perceptual psychologist B. Julesz in his investigations of human texture perception. These Markov textures provide a convenient mechanism for generating certain example texture pairs which are important in the analysis process. In the second part of the analysis the results obtained by considering only Markov textures will be extended to all textures which can be represented by translation stationary random fields of order two. This generalization clearly includes a much broader class of textures than Markovian ones. The results obtained indicate that the SGLDM is the most powerful algorithm of the four considered, and that the GLDM is more powerful than the PSM.
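The SGLDM's central object, the spatial gray level dependence (co-occurrence) matrix, can be sketched as follows; the displacement and the contrast feature shown are illustrative choices, not the paper's full feature set:

```python
def glcm(image, dx, dy, levels):
    """Spatial gray level dependence (co-occurrence) matrix: counts
    of gray-level pairs (g1, g2) observed at displacement (dx, dy)."""
    h, w = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[image[y][x]][image[y2][x2]] += 1
    return m

def glcm_contrast(m):
    """One texture feature derived from the matrix:
    contrast = sum over (i, j) of (i - j)**2 * p(i, j)."""
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m))
               for j in range(len(m))) / total
```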
16.
We show the existence of nonuniform schemes for the following sampling problem: given a sample space with n points, an unknown set of size n/2, and s random points, it is possible to generate deterministically from them s + k points such that the probability of not hitting the unknown set is exponentially smaller in k than 2^−s. Tight bounds are given for the quality of such schemes. Explicit, uniform versions of these schemes could be used for efficiently reducing the error probability of randomized algorithms. A survey of known constructions (whose quality is very far from the existential result) is included.
17.
ART2 is a self-organizing neural network based on adaptive resonance theory. It performs classification through competitive learning and a self-stabilization mechanism, and can carry out unsupervised self-learning in non-stationary, noisy environments; its learning process quickly recognizes previously learned samples and rapidly adapts to new, unlearned objects. This paper proposes an ART2 neural network algorithm based on slow weight updating, which lowers the learning rate when recognizing and classifying input patterns, thereby slowing pattern drift. The new learning rule achieves good results in classification experiments and alleviates the pattern-drift problem to some extent.
18.
This paper analyzes the design ideas and characteristics of two clustering algorithms: GN clustering for network community detection and AP (affinity propagation) clustering from pattern recognition. Using library borrowing records, a customer-clustering data set was constructed and the two algorithms were compared. The study shows that the two algorithms reveal the structure of customer groups from different angles: the GN clustering results are close to the macroscopic classification of customers, while the AP results reflect the distribution of customer demand. The influence of the algorithms' design principles on the experimental results is discussed. This work provides a reference for improving clustering algorithm design and for data mining of customer behavior.
19.
G. De V. Smit 《Software》1982,12(1):57-66
Three string matching algorithms—straightforward, Knuth-Morris-Pratt and Boyer-Moore—are examined and their time complexities discussed. A comparison of their actual average behaviour is made, based on empirical data presented. It is shown that the Boyer-Moore algorithm is extremely efficient in most cases and that, contrary to the impression one might get from the analytical results, the Knuth-Morris-Pratt algorithm is not significantly better on the average than the straightforward algorithm.
20.
P. Dupont F. Denis Y. Esposito 《Pattern recognition》2005,38(9):1349-1371
This article presents an overview of Probabilistic Automata (PA) and discrete Hidden Markov Models (HMMs), and aims at clarifying the links between them. The first part of this work concentrates on probability distributions generated by these models. Necessary and sufficient conditions for an automaton to define a probabilistic language are detailed. It is proved that probabilistic deterministic automata (PDFA) form a proper subclass of probabilistic non-deterministic automata (PNFA). Two families of equivalent models are described next. On one hand, HMMs and PNFA with no final probabilities generate distributions over complete finite prefix-free sets. On the other hand, HMMs with final probabilities and probabilistic automata generate distributions over strings of finite length. The second part of this article presents several learning models, which formalize the problem of PA induction or, equivalently, the problem of HMM topology induction and parameter estimation. These learning models include the PAC and identification with probability 1 frameworks. Links with Bayesian learning are also discussed. The last part of this article presents an overview of induction algorithms for PA or HMMs using state merging, state splitting, parameter pruning and error-correcting techniques.
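The distributions these models generate can be evaluated with the standard forward algorithm; a sketch for a discrete HMM without final probabilities (the two-state example parameters are invented). Over all observation strings of a fixed length, such a model's probabilities sum to one, matching the "distributions over complete finite prefix-free sets" characterization above:

```python
def forward(obs, start, trans, emit):
    """Forward algorithm for a discrete HMM without final probabilities:
    returns P(obs) summed over all state paths of length len(obs)."""
    # alpha[s] = P(obs[:t+1], state at t is s), initialized at t = 0
    alpha = {s: start[s] * emit[s][obs[0]] for s in start}
    for o in obs[1:]:
        alpha = {s: sum(alpha[r] * trans[r][s] for r in alpha) * emit[s][o]
                 for s in start}
    return sum(alpha.values())
```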