20 similar documents found (search time: 15 ms)
1.
Patrick G. Clark Jerzy W. Grzymala-Busse Wojciech Rzasa 《Journal of Intelligent Information Systems》2016,47(3):515-529
A probabilistic approximation is a generalization of the standard idea of lower and upper approximations defined for equivalence relations. Recently, probabilistic approximations were further generalized to an arbitrary binary relation, so that they may be applied to incomplete data. We discuss two ways to induce rules from incomplete data using probabilistic approximations: applying the true MLEM2 algorithm and an emulated MLEM2 algorithm. In this paper we report novel research comparing the two approaches: new results of experiments on incomplete data with three interpretations of missing attribute values. Our results show that the two approaches do not differ much.
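For readers unfamiliar with the underlying idea, a probabilistic approximation can be sketched for the classical case of an equivalence relation (the generalization to an arbitrary binary relation replaces equivalence classes with characteristic sets). The function name and toy data below are illustrative and not taken from MLEM2:

```python
def probabilistic_approximation(partition, X, alpha):
    """Probabilistic approximation of a concept X: the union of the
    equivalence classes B whose conditional probability P(X | B) >= alpha.
    alpha = 1 yields the classical lower approximation; any alpha just
    above 0 yields the upper approximation."""
    approx = set()
    for block in partition:
        block = set(block)
        if len(block & X) / len(block) >= alpha:
            approx |= block
    return approx

# Toy universe {1..6} partitioned by an equivalence relation.
partition = [{1, 2}, {3, 4, 5}, {6}]
X = {1, 2, 3, 6}                                           # concept to approximate
lower = probabilistic_approximation(partition, X, 1.0)     # {1, 2, 6}
upper = probabilistic_approximation(partition, X, 1e-9)    # {1, 2, 3, 4, 5, 6}
```

Intermediate values of alpha interpolate between the two classical approximations, which is what makes the probabilistic variant a strict generalization.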
2.
Joe Hurd 《The Journal of Logic and Algebraic Programming》2003,56(1-2):3
Using the HOL theorem prover, we apply our formalization of probability theory to specify and verify the Miller–Rabin probabilistic primality test. The version of the test commonly found in algorithm textbooks implicitly accepts probabilistic termination, but our own verified implementation satisfies the stronger property of guaranteed termination. Completing the proof of correctness requires a significant body of group theory and computational number theory to be formalized in the theorem prover. Once verified, the primality test can either be executed in the logic (using rewriting) and used to prove the compositeness of numbers, or manually extracted to Standard ML and used to find highly probable primes.
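The textbook version of the test that the paper refers to can be sketched as follows. This is an unverified illustrative implementation, not the paper's formally verified one; the error bound in the docstring is the standard 4^-rounds guarantee:

```python
import random

def miller_rabin(n, rounds=20):
    """Miller-Rabin probabilistic primality test: returns False if n is
    certainly composite, True if n is probably prime (error < 4**-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # a witnesses the compositeness of n
    return True

assert miller_rabin(2**61 - 1)        # a Mersenne prime
assert not miller_rabin(7919 * 7927)  # product of two primes
```

A `False` answer is a proof of compositeness (a witness was found), while a `True` answer is only probabilistic, which mirrors the paper's distinction between proving compositeness in the logic and finding highly probable primes.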
3.
A comparison of algorithms for inference and learning in probabilistic graphical models
Frey BJ Jojic N 《IEEE transactions on pattern analysis and machine intelligence》2005,27(9):1392-1416
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
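One of the approximate techniques surveyed, Gibbs sampling, is simple enough to sketch on a two-variable model. The toy model and function name below are illustrative (a pair of coupled binary spins), not taken from the paper's vision model:

```python
import random, math

def gibbs_ising_pair(coupling, steps=100_000, seed=0):
    """Gibbs sampling on two coupled binary spins x1, x2 in {-1, +1} with
    P(x1, x2) proportional to exp(coupling * x1 * x2).  Each sweep resamples
    each spin from its exact conditional given the other.  Returns the
    estimated probability that the two spins agree."""
    rng = random.Random(seed)
    x = [1, 1]
    agree = 0
    for _ in range(steps):
        for i in (0, 1):
            other = x[1 - i]
            # Conditional P(x_i = +1 | other) is a logistic function.
            p = 1.0 / (1.0 + math.exp(-2.0 * coupling * other))
            x[i] = 1 if rng.random() < p else -1
        agree += (x[0] == x[1])
    return agree / steps

est = gibbs_ising_pair(1.0)
# Exact value for comparison: e^J / (e^J + e^-J) with J = 1.0
exact = math.exp(1.0) / (math.exp(1.0) + math.exp(-1.0))
```

On a model this small the exact marginal is available in closed form, so the sampler's estimate can be checked directly; in the large loopy models the paper targets, such exact checks are intractable, which is precisely why the approximate techniques matter.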
4.
Two algorithms for least-squares estimation of parameters of a Hammerstein model are compared. Numerical examples demonstrate that the iterative method of Narendra and Gallman produces significantly smaller parameter covariance and slightly smaller rms error than the noniterative method of Chang and Luus, as expected from an analysis of the parameter estimators. In addition, the iterative algorithm is faster for high-order systems.
5.
Gerhard Barth 《Information Processing Letters》1984,18(5):249-256
Average-case analyses of two algorithms for locating the leftmost occurrence of a string Pattern in a string Text are conducted in this paper. One algorithm is based on a straightforward trial-and-error approach; the other uses a sophisticated strategy discovered by Knuth, Morris and Pratt (1977). The cost measured is the expected number of comparisons between individual characters. Let Naive and kmp denote the average-case complexities of the two algorithms, respectively. We show that 1 − (1/c) + (1/c²) is an accurate approximation for the ratio kmp/Naive, provided both Pattern and Text are random strings over an alphabet of size c. In both cases, the application of Markov chain theory is expedient for performing the analysis. However, in order to avoid complex conditioning, the Markov chain model for the kmp algorithm is based on some heuristics. This approach is believed to be practically sound. Some indication of the complexity that might be involved in an exact average-case analysis of the kmp algorithm can be found in the work by Guibas and Odlyzko (1981).
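The comparison-count ratio can be observed empirically. The sketch below implements both algorithms with instrumented character-comparison counters; the sample sizes and seed are arbitrary choices of this example, and a single random sample will only roughly track the paper's average-case formula:

```python
import random

def naive_search(pattern, text):
    """Straightforward trial-and-error matching.
    Returns (index of leftmost occurrence or -1, character comparisons)."""
    m, n, comps = len(pattern), len(text), 0
    for i in range(n - m + 1):
        for j in range(m):
            comps += 1
            if text[i + j] != pattern[j]:
                break
        else:
            return i, comps
    return -1, comps

def kmp_search(pattern, text):
    """Knuth-Morris-Pratt matching with a precomputed failure table.
    Returns (index of leftmost occurrence or -1, character comparisons
    made during the search phase)."""
    m = len(pattern)
    fail, k = [0] * m, 0
    for i in range(1, m):                 # build the failure table
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    comps, k = 0, 0
    for i, ch in enumerate(text):
        while True:
            comps += 1
            if ch == pattern[k]:
                k += 1
                break
            if k == 0:
                break
            k = fail[k - 1]               # fall back, then re-compare ch
        if k == m:
            return i - m + 1, comps
    return -1, comps

# Random Pattern and Text over an alphabet of size c = 4; the pattern is
# long enough that a match is unlikely, so both algorithms scan all of Text.
rng = random.Random(1)
c, alphabet = 4, "abcd"
text = "".join(rng.choice(alphabet) for _ in range(20_000))
pattern = "".join(rng.choice(alphabet) for _ in range(10))
i_naive, comps_naive = naive_search(pattern, text)
i_kmp, comps_kmp = kmp_search(pattern, text)
predicted = 1 - 1 / c + 1 / c**2   # the paper's approximation, 0.8125 for c = 4
```

Both functions report the same leftmost index, and on random input the kmp counter comes out below the naive one, in qualitative agreement with the predicted ratio below 1.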
6.
Javier Galbally Arun Ross Marta Gomez-Barrero Julian Fierrez Javier Ortega-Garcia 《Computer Vision and Image Understanding》2013,117(10):1512-1525
A binary iriscode is a very compact representation of an iris image. For a long time it was assumed that the iriscode did not contain enough information to allow for the reconstruction of the original iris. The present work proposes a novel probabilistic approach based on genetic algorithms to reconstruct iris images from binary templates and analyzes the similarity between the reconstructed synthetic iris image and the original one. The performance of the reconstruction technique is assessed by empirically estimating the probability of successfully matching the synthesized iris image against its true counterpart using a commercial matcher. The experimental results indicate that the reconstructed images look reasonably realistic. While a human expert may not be easily deceived by them, they can successfully deceive a commercial matcher. Furthermore, since the proposed methodology is able to synthesize multiple iris images from a single iriscode, it has other potential applications including privacy enhancement of iris-based systems.
7.
International Journal of Computer Mathematics, 2012, 89(1-4):291-302
Two algorithms for the eigensolution of real symmetric matrices [8] are discussed. The first is a variant of the classical Jacobi method and the second is also based on orthogonal transforms. Both algorithms may be readily expressed in forms which allow the power of SIMD (single instruction multiple data) machines to be exploited. The performances of the algorithms on the ICL DAP (distributed array processor) and the CRAY-1 are compared.
8.
E. Ponslet G. Maglaras R. T. Haftka E. Nikolaidis H. H. Cudney 《Structural and Multidisciplinary Optimization》1995,10(3-4):247-257
This paper describes an application of genetic algorithms to deterministic and probabilistic (reliability-based) optimization of damping augmentation for a truss structure. The probabilistic formulation minimizes the probability of exceeding upper limits on the magnitude of the dynamic response of the structure due to uncertainties in the properties of the damping devices. The corresponding deterministic formulation maximizes a safety margin with respect to these limits. Because this work was done in the context of an experimental comparison of the reliabilities of the resulting designs, antioptimization was used to maximize the contrast between the probabilities of failure of the two designs. This contrast maximization was also performed with a genetic algorithm. The paper describes the genetic algorithm used for the optimization and antioptimization, and a number of modifications to the antioptimization formulation intended to reduce the computational expense to an acceptable level. Optimal designs are obtained for both formulations. The probabilistic design shows a very significant improvement in reliability.
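The genetic-algorithm machinery the paper relies on can be sketched generically. This is a minimal textbook GA (tournament selection, one-point crossover, bit-flip mutation) minimizing a toy one-dimensional function; the paper's GA additionally handles structural reliability constraints and antioptimization, none of which is reproduced here, and all parameter values below are arbitrary:

```python
import random

def genetic_minimize(fitness, n_bits=16, pop=40, gens=60, seed=0):
    """Minimal generational GA over bitstrings encoding x in [0, 1]:
    elitism, 3-way tournament selection, one-point crossover, and
    occasional single-bit mutation."""
    rng = random.Random(seed)

    def decode(bits):
        return int("".join(map(str, bits)), 2) / (2**n_bits - 1)

    population = [[rng.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda b: fitness(decode(b)))
        nxt = scored[:2]                          # keep the two best (elitism)
        while len(nxt) < pop:
            a, b = (min(rng.sample(scored, 3),    # tournament selection
                        key=lambda s: fitness(decode(s)))
                    for _ in range(2))
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.05:               # bit-flip mutation
                child[rng.randrange(n_bits)] ^= 1
            nxt.append(child)
        population = nxt
    return decode(min(population, key=lambda b: fitness(decode(b))))

# Minimize (x - 0.7)^2 on [0, 1]; the GA should land near 0.7.
x_best = genetic_minimize(lambda x: (x - 0.7) ** 2)
```

A reliability-based objective would replace the toy fitness with an estimated probability of failure, which is exactly where the paper's computational-expense concerns arise.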
9.
The design ideas and characteristics of two clustering algorithms are analyzed: the GN algorithm for community detection in networks and the AP (affinity propagation) algorithm from pattern recognition. Using library borrowing records as an example, a customer-clustering dataset was constructed and the two algorithms were compared on it. The study shows that the two algorithms reveal the structure of the customer population from different angles: the GN clustering results are close to the macroscopic classification of customers, while the AP results reflect the distribution of customer demand. The influence of the algorithms' design principles on the experimental results is discussed. This work provides a reference for the design and improvement of clustering algorithms and for data mining of customer behavior.
10.
In view of the characteristics of industrial symbiosis networks, two typical community-detection algorithms, an improved spectral bisection algorithm and an agglomerative algorithm, are applied to partition the Kalundborg and Lubei industrial symbiosis networks into communities. The degree of clustering is quantitatively compared and evaluated, yielding a quantitative description of the evolution of the two networks, which is then compared with their actual development. The results show that community-detection algorithms for complex networks can quantitatively describe the evolution of industrial symbiosis networks, and that the improved spectral bisection algorithm tracks the actual development of the networks more closely than the agglomerative algorithm.
11.
Satellite-based mapping of Canadian boreal forest fires: Evaluation and comparison of algorithms
This paper evaluates annual fire maps that were produced from NOAA-14/AVHRR imagery using an algorithm described in a companion paper (Li et al., International Journal of Remote Sensing, 21, 3057-3069, 2000 (this issue)). Burned area masks covering the Canadian boreal forest were created by compositing the daily maps of fire hot spots over the summer and by examining Normalized Difference Vegetation Index (NDVI) changes after burning. Both masks were compared with fire polygons derived by Canadian fire agencies through aerial surveillance. It was found that the majority of fire events were captured by the satellite-based techniques, but burned area was generally underestimated. The burn boundary formed by the fire pixels detected by satellite was in good agreement with the polygon boundaries, within which, however, some fires were missed by the satellite. The presence of clouds and the low sampling frequency of satellite observation are the two major causes of the underestimation. While this problem is alleviated by taking advantage of NDVI changes, a simple combination of a hot spot technique with an NDVI method is not an ideal solution due to the introduction of new sources of uncertainty. In addition, the performance of the algorithm used in the International Geosphere-Biosphere Programme (IGBP) Data and Information System (IGBP-DIS) for global fire detection was evaluated by comparing its results with ours and with the fire agency reports. It was found that the IGBP-DIS algorithm is capable of detecting the majority of fires over the boreal forest, but also includes many false fires over old burn scars created by fires taking place in previous years. A step-by-step comparison between the two algorithms revealed the causes of the problem and recommendations are made to rectify them.
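The NDVI quantity used for the burn masks is a simple ratio of reflectances; a drop in NDVI after a fire is what flags a burned area. The reflectance values below are hypothetical, chosen only to illustrate the direction of the change:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectances.  Healthy vegetation has high NDVI; a burn scar
    shows up as a post-fire drop."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances before and after a burn (illustrative values).
before = ndvi(nir=0.45, red=0.08)   # dense vegetation, high NDVI (~0.70)
after = ndvi(nir=0.25, red=0.15)    # burned surface, lower NDVI (0.25)
```

Thresholding the before/after difference is the essence of the NDVI-change component of the burned-area masks described above.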
12.
Roberto M. Cesar Jr. Pedro Larrañaga 《Pattern recognition》2005,38(11):2099-2113
A method for segmentation and recognition of image structures based on graph homomorphisms is presented in this paper. It is a model-based recognition method where the input image is over-segmented and the obtained regions are represented by an attributed relational graph (ARG). This graph is then matched against a model graph thus accomplishing the model-based recognition task. This type of problem calls for inexact graph matching through a homomorphism between the graphs since no bijective correspondence can be expected, because of the over-segmentation of the image with respect to the model. The search for the best homomorphism is carried out by optimizing an objective function based on similarities between object and relational attributes defined on the graphs. The following optimization procedures are compared and discussed: deterministic tree search, for which new algorithms are detailed, genetic algorithms and estimation of distribution algorithms. In order to assess the performance of these algorithms using real data, experimental results on supervised classification of facial features using face images from public databases are presented.
13.
Pervasive computing promotes the integration of smart devices in our living spaces to develop services providing assistance to people. Such smart devices are increasingly relying on cloud-based Machine Learning, which raises questions in terms of security (data privacy), reliance (latency), and communication costs. In this context, Federated Learning (FL) has been introduced as a new machine learning paradigm enhancing the use of local devices. At the server level, FL aggregates models learned locally on distributed clients to obtain a more general model. In this way, no private data is sent over the network, and the communication cost is reduced. However, the most popular federated learning algorithms have been shown to be poorly adapted to some highly heterogeneous pervasive computing environments. In this paper, we propose a new FL algorithm, termed FedDist, which can modify models (here, deep neural networks) during training by identifying dissimilarities between neurons among the clients. This makes it possible to account for clients' specificity without impairing generalization. FedDist was evaluated against three state-of-the-art federated learning algorithms on three large heterogeneous mobile Human Activity Recognition datasets. Results have shown the ability of FedDist to adapt to heterogeneous data and the capability of FL to deal with asynchronous situations.
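The server-side aggregation step that FL paradigms share can be sketched with the standard FedAvg baseline: a weighted average of client parameters, with weights proportional to local dataset sizes. This is the baseline family FedDist is compared against; FedDist's own neuron-level dissimilarity analysis is not reproduced here, and the flattened-list model layout is an assumption of this sketch:

```python
def fedavg(client_weights, client_sizes):
    """Server-side Federated Averaging (FedAvg): the global parameters are
    the average of the clients' parameters, weighted by each client's
    number of local training samples."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * size / total for w, size in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Two clients with a flattened two-parameter "model" each; the second
# client holds 3x the data, so it dominates the average.
global_w = fedavg([[1.0, 2.0], [3.0, 4.0]], [100, 300])
# → [2.5, 3.5]
```

Only parameters cross the network in this scheme, never raw samples, which is the privacy and communication argument made above.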
14.
15.
Translated from Kibernetika i Sistemnyi Analiz, No. 1, pp. 129–139, January–February, 1994.
16.
Wang Zhenhu Li Hao Chen Ziming Li Luoxing Hong He 《Structural and Multidisciplinary Optimization》2020,62(1):387-404
Structural and Multidisciplinary Optimization - To increase the range of applicability of decoupling strategies for reliability-based design optimization (RBDO), a sequential optimization and...
17.
18.
Gilles Perrouin Sebastian Oster Sagar Sen Jacques Klein Benoit Baudry Yves le Traon 《Software Quality Journal》2012,20(3-4):605-643
Software Product Lines (SPL) are difficult to validate due to the combinatorics induced by variability, which leads to a combinatorial explosion of the number of derivable products. Exhaustive testing in such a large product space is hardly feasible. Hence, one possible option is to test SPLs by generating test configurations that cover all possible t-way feature interactions (t-wise). This dramatically reduces the number of test products while ensuring reasonable SPL coverage. In this paper, we report our experience applying t-wise techniques to SPLs with two independent toolsets developed by the authors. One focuses on generality and splits the generation problem according to strategies; the other emphasizes efficient generation. To evaluate the respective merits of the approaches, measures such as the number of generated test configurations and the similarity between them are provided. By applying these measures, we were able to derive useful insights for pairwise and t-wise testing of product lines.
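The t = 2 (pairwise) case can be illustrated with a small greedy generator: repeatedly pick the configuration that covers the most not-yet-covered feature-value pairs. This is a generic sketch, not either of the authors' toolsets, and the feature model below is invented for the example:

```python
from itertools import combinations, product

def pairwise_cover(features_values):
    """Greedy construction of a pairwise (2-wise) covering suite: while
    uncovered pairs remain, add the full configuration covering the most
    of them.  Exhaustive over the product space, so only for tiny models."""
    names = list(features_values)
    pairs = {((f1, v1), (f2, v2))
             for f1, f2 in combinations(names, 2)
             for v1 in features_values[f1]
             for v2 in features_values[f2]}
    suite = []
    while pairs:
        best, best_new = None, -1
        for values in product(*features_values.values()):
            cfg = dict(zip(names, values))
            new = sum(1 for (a, b) in pairs
                      if cfg[a[0]] == a[1] and cfg[b[0]] == b[1])
            if new > best_new:
                best, best_new = cfg, new
        suite.append(best)
        pairs = {(a, b) for (a, b) in pairs
                 if not (best[a[0]] == a[1] and best[b[0]] == b[1])}
    return suite

# Invented 3-feature model: the full product space has 2^3 = 8 products,
# but pairwise coverage needs noticeably fewer configurations.
features = {"os": ["linux", "mac"], "db": ["pg", "sqlite"], "ui": ["web", "cli"]}
suite = pairwise_cover(features)
```

Even on this toy model the suite is smaller than the full product space, which is the coverage-versus-cost trade-off the paper measures at a much larger scale.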
19.
Passive microwave algorithms for sea ice concentration: A comparison of two techniques
Josefino C. Comiso Donald J. Cavalieri Claire L. Parkinson Per Gloersen 《Remote sensing of environment》1997,60(3):357-384
The most comprehensive large-scale characterization of the global sea ice cover so far has been provided by satellite passive microwave data. Accurate retrieval of ice concentrations from these data is important because of the sensitivity of surface flux (e.g., heat, salt, and water) calculations to small changes in the amount of open water (leads and polynyas) within the polar ice packs. Two algorithms that have been used for deriving ice concentrations from multichannel data are compared. One is the NASA Team algorithm and the other is the Bootstrap algorithm, both of which were developed at NASA's Goddard Space Flight Center. The two algorithms use different channel combinations, reference brightness temperatures, weather filters, and techniques. Analyses are made to evaluate the sensitivity of algorithm results to variations of emissivity and temperature with space and time. To assess the difference in the performance of the two algorithms, analyses were performed with data from both hemispheres and for all seasons. The results show only small differences in the central Arctic in winter but larger disagreements in the seasonal regions and in summer. In some areas in the Antarctic, the Bootstrap technique shows ice concentrations higher than those of the Team algorithm by as much as 25%; whereas, in other areas, it shows ice concentrations lower by as much as 30%. The differences in the results are caused by temperature effects, emissivity effects, and tie point differences. The Team and the Bootstrap results were compared with available Landsat, advanced very high resolution radiometer (AVHRR) and synthetic aperture radar (SAR) data. AVHRR, Landsat, and SAR data sets all yield higher concentrations than the passive microwave algorithms. Inconsistencies among results suggest the need for further validation studies.
20.
Yasuaki Oishi Author Vitae 《Automatica》2007,43(3):538-545
A randomized approach is considered for a feasibility problem on a parameter-dependent linear matrix inequality (LMI). In particular, gradient-based and ellipsoid-based randomized algorithms are improved by the introduction of a stopping rule. The improved algorithms stop after a bounded number of iterations, and this bound is of polynomial order in the problem size. When the algorithms stop, one of two events has occurred: (i) they have found, with high confidence, a probabilistic solution, which satisfies the given LMI for most of the parameter values; or (ii) they have detected, in an approximate sense, the non-existence of a deterministic solution, which would satisfy the given LMI for all parameter values. These results are important because the original randomized algorithms leave open issues concerning the detection of convergence, the speed of convergence, and the assumption of feasibility. The improved algorithms can be adapted to an optimization problem constrained by a parameter-dependent LMI. A numerical example shows the efficacy of the proposed algorithms.