20 similar documents retrieved.
1.
Edmund Clarke Somesh Jha Will Marrero 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(2):173-188
In this paper we explore how partial-order reduction can make the task of verifying security protocols more efficient. These reduction techniques have been implemented in our tool Brutus. Partial-order reductions have proved very useful in the domain of model checking reactive systems. These reductions are not directly applicable in our context because of additional complications caused by tracking knowledge of various agents. We present partial-order reductions in the context of verifying security protocols and prove their correctness. Experimental results demonstrating the effectiveness of this reduction technique are also presented.
Published online: 24 January 2003
2.
Godefroid P. Peled D. Staskauskas M. 《IEEE Transactions on Software Engineering》1996,22(7):496-507
Formal validation is a powerful technique for automatically checking that a collection of communicating processes is free from concurrency-related errors. Although validation tools invariably find subtle errors that were missed during thorough simulation and testing, the brute-force search they perform can result in excessive memory usage and extremely long running times. Recently, a number of researchers have been investigating techniques known as partial-order methods that can significantly reduce the computational resources needed for formal validation by avoiding redundant exploration of execution scenarios. This paper investigates the behavior of partial-order methods in an industrial setting. We describe the design of a partial-order algorithm for a formal validation tool that has been used on several projects developing software for the Lucent Technologies 5ESS® telephone switching system. We demonstrate the effectiveness of the algorithm by presenting the results of experiments with actual industrial examples drawn from a variety of 5ESS application domains.
3.
Cybernetics and Systems Analysis
4.
A method is proposed for the model reduction of finite-dimensional, linear, time-invariant (FDLTI) plants that uses normalized fractional representations. The method, dubbed fractional balanced reduction, applies balance-and-truncate to a special representation of the graph operator of the plant. This operation yields the graph operator of a reduced-order plant. The method has such properties as the existence of an a priori error bound in the graph metric and preservation of sector containment. Coupling fractional representations with principal component analysis gives a model reduction method that is capable of producing, in a numerically stable way, a good reduced-order model using the whole full-order model. Sector properties are also preserved; these are useful for deciding stability when nonlinearities are in the loop.
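As a rough sketch of the construction (notation ours, not taken from the abstract): write a normalized right coprime factorization of the plant, stack the factors into the graph symbol, and apply balance-and-truncate to that stacked system:

\[
P = N M^{-1}, \qquad N^{\sim}N + M^{\sim}M = I, \qquad
G = \begin{bmatrix} N \\ M \end{bmatrix}
\;\longrightarrow\;
G_r = \begin{bmatrix} N_r \\ M_r \end{bmatrix}, \qquad P_r = N_r M_r^{-1}.
\]

The a priori error bound mentioned above is then of the usual balanced-truncation form, a sum over the discarded Hankel singular values of the graph symbol (stated here only generically, not as the paper's exact constant).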
5.
6.
Fukunaga K Mantock JM 《IEEE transactions on pattern analysis and machine intelligence》1984,6(1):115-118
A nonparametric data reduction technique is proposed. Its goal is to select samples that are "representative" of the entire data set. The technique is iterative and is based on the use of a criterion function and nearest neighbor density estimates. Experiments are presented to demonstrate the algorithm.
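A loose sketch of the idea as we read it (the paper's exact criterion and iteration differ in detail; every name below is ours): keep a subset whose nearest-neighbor density estimates, evaluated at all original samples, stay close to those of the full set, and improve the subset by simple random swaps.

import numpy as np

def knn_density(points, queries, k):
    # crude kNN density estimate: inverse distance from each query
    # to its k-th nearest member of `points`
    d = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    return 1.0 / (d[:, k] + 1e-12)

def reduce_data(X, m, k=5, iters=500, seed=0):
    # pick m rows of X whose density profile, measured at every original
    # sample, approximates that of the full set (greedy random swaps)
    rng = np.random.default_rng(seed)
    target = knn_density(X, X, k)                    # self-distances included; fine for a sketch
    idx = rng.choice(len(X), m, replace=False)
    cost = lambda sel: np.mean((knn_density(X[sel], X, min(k, m - 1)) - target) ** 2)
    best = cost(idx)
    for _ in range(iters):
        cand = int(rng.integers(len(X)))
        if cand in idx:
            continue
        trial = idx.copy()
        trial[rng.integers(m)] = cand
        c = cost(trial)
        if c < best:
            idx, best = trial, c
    return X[idx]

X = np.random.default_rng(1).normal(size=(200, 2))   # toy usage
print(reduce_data(X, 20).shape)                      # (20, 2)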
7.
Adaptive color reduction
Papamarkos N. Atsalakis A.E. Strouthopoulos C.P. 《IEEE transactions on systems, man, and cybernetics. Part B, Cybernetics》2002,32(1):44-56
The paper proposes an algorithm for reducing the number of colors in an image. The proposed adaptive color reduction (ACR) technique achieves color reduction using a tree clustering procedure. In each node of the tree, a self-organized neural network classifier (NNC) is used which is fed by image color values and additional local spatial features. The NNC consists of a principal component analyzer (PCA) and a Kohonen self-organized feature map (SOFM) neural network (NN). The output neurons of the NNC define the color classes for each node. The final image not only has the dominant image colors, but its texture also approaches the local image characteristics used. Using the adaptive procedure and different local features for each level of the tree, the initial color classes can be split even further. For better classification, splitting and merging conditions are used to decide whether color classes must be split or merged. To speed up the entire algorithm and reduce memory requirements, a fractal scanning subsampling technique is used. The method is independent of the color scheme, it is applicable to any type of color image, and it can be easily modified to accommodate any type of spatial features and any type of tree structure. Several experimental and comparative results exhibiting the performance of the proposed technique are presented.
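For flavour, a minimal sketch of just the SOFM colour-clustering core, under our own simplifications (no PCA, no spatial features, no tree of classifiers, no split/merge rules, no fractal subsampling):

import numpy as np

def sofm_palette(pixels, n_colors=8, epochs=3, lr0=0.5, seed=0):
    # train a 1-D Kohonen self-organized feature map on RGB samples;
    # the trained weight vectors serve as the reduced palette
    rng = np.random.default_rng(seed)
    w = pixels[rng.choice(len(pixels), n_colors, replace=False)].astype(float)
    sigma0 = n_colors / 2.0
    total, t = epochs * len(pixels), 0
    for _ in range(epochs):
        for x in pixels[rng.permutation(len(pixels))]:
            frac = 1.0 - t / total
            lr = lr0 * frac                                      # decaying learning rate
            sigma = max(sigma0 * frac, 0.5)                      # shrinking neighbourhood
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))  # best-matching unit
            h = np.exp(-((np.arange(n_colors) - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)                       # pull neighbours toward the sample
            t += 1
    return w

def quantize(image, palette):
    # map every pixel to its nearest palette colour
    flat = image.reshape(-1, 3).astype(float)
    idx = np.argmin(((flat[:, None, :] - palette[None, :, :]) ** 2).sum(-1), axis=1)
    return palette[idx].reshape(image.shape).astype(np.uint8)

img = np.random.default_rng(1).integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
out = quantize(img, sofm_palette(img.reshape(-1, 3), n_colors=8))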
8.
Test-cost-sensitive attribute reduction
In many data mining and machine learning applications, the task of classification has two objectives: decreasing the test cost and improving the classification accuracy. Most existing research work focuses on the latter, with attribute reduction serving as an optional pre-processing stage to remove redundant attributes. In this paper, we point out that when tests must be undertaken in parallel, attribute reduction is mandatory for dealing with the former objective. With this in mind, we formulate the minimal test cost reduct problem, which is a new and more general problem than the classical reduct problem. We also define three metrics to evaluate the performance of reduction algorithms from a statistical viewpoint. A framework for a heuristic algorithm is proposed to deal with the new problem; specifically, an information-gain-based λ-weighted reduction algorithm is designed, where weights are decided by test costs and a non-positive exponent λ, which is the only parameter set by the user. The algorithm is tested with three representative test cost distributions on four UCI (University of California, Irvine) datasets. Experimental results show that there is a trade-off in setting λ, and that a competition approach can improve the quality of the result significantly. This study suggests potential application areas and new research trends concerning attribute reduction.
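As a rough illustration of the weighting described above (our reading only; the paper's exact formula and search strategy may differ), an attribute can be scored by its information gain scaled by its test cost raised to the non-positive exponent λ, so that expensive attributes are penalised:

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # information gain of splitting the decision table on column `attr`
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def weighted_scores(rows, labels, costs, lam=-1.0):
    # rank attributes by cost-weighted information gain (higher is better)
    return sorted(
        ((info_gain(rows, labels, a) * costs[a] ** lam, a) for a in range(len(costs))),
        reverse=True,
    )

# toy decision table: attribute 0 is informative but expensive, attribute 1 is cheap but useless
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]
print(weighted_scores(rows, labels, costs=[4.0, 1.0], lam=-1.0))   # attribute 0 still wins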
9.
10.
Redundancy reduction revisited
Barlow H 《Network (Bristol, England)》2001,12(3):241-253
Soon after Shannon defined the concept of redundancy, it was suggested that it gave insight into mechanisms of sensory processing, perception, intelligence and inference. Can we now judge whether there is anything in this idea, and can we see where it should direct our thinking? This paper argues that the original hypothesis was wrong in over-emphasizing the role of compressive coding and economy in neuron numbers, but right in drawing attention to the importance of redundancy. Furthermore, there is a clear direction in which it now points, namely to the overwhelming importance of probabilities and statistics in neuroscience. The brain has to decide upon actions in a competitive, chance-driven world, and to do this well it must know about and exploit the non-random probabilities and interdependences of objects and events signalled by sensory messages. These are particularly relevant for Bayesian calculations of the optimum course of action. Instead of thinking of neural representations as transformations of stimulus energies, we should regard them as approximate estimates of the probable truths of hypotheses about the current environment, for these are the quantities required by a probabilistic brain working on Bayesian principles.
11.
Fractional-step dimensionality reduction
Lotlikar R. Kothari R. 《IEEE transactions on pattern analysis and machine intelligence》2000,22(6):623-627
Linear projections for dimensionality reduction, computed using linear discriminant analysis (LDA), are commonly based on the optimization of certain separability criteria in the output space. The resulting optimization problem is linear, but these separability criteria are not directly related to the classification accuracy in the output space. Consequently, a trial-and-error procedure has to be invoked, experimenting with different separability criteria that differ in the weighting function used, and selecting the one that performs best on the training set. Often, even the best weighting function among the trial choices results in poor classification of data in the subspace. In this short paper, we introduce the concept of fractional dimensionality and develop an incremental procedure, called fractional-step LDA (F-LDA), to reduce the dimensionality in fractional steps. The F-LDA algorithm is more robust to the selection of the weighting function, and for any given weighting function it finds a subspace in which the classification accuracy is higher than that obtained using LDA.
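For orientation, one common form of such a weighted separability criterion (standard notation, ours rather than the paper's): with class priors p_i, class means m_i, within-class scatter S_w, and a weighting function w(·) of the distance Δ_ij between class means, one seeks the projection A maximizing

\[
J(A) = \operatorname{tr}\!\big( (A S_w A^{\top})^{-1} A S_b A^{\top} \big),
\qquad
S_b = \sum_{i=1}^{C-1}\sum_{j=i+1}^{C} p_i\, p_j\, w(\Delta_{ij})\, (m_i - m_j)(m_i - m_j)^{\top},
\]

and the choice of w is precisely the trial-and-error element that F-LDA is designed to be robust against.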
12.
Georg Struth 《The Journal of Logic and Algebraic Programming》2006,66(2):239
We propose novel algebraic proof techniques for rewrite systems. Church–Rosser theorems and further fundamental statements that do not mention termination are proved in Kleene algebra. Certain reduction and transformation theorems for termination that depend on abstract commutation, cooperation or simulation properties are proved in an extension with infinite iteration. Benefits of the algebraic approach are simple, concise calculational proofs by equational reasoning, a connection with automata-based decision procedures, and a natural formal semantics for rewriting diagrams. It is therefore especially suited for mechanization and automation.
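One representative statement of this kind, in our notation (the paper's development is broader): model two reduction relations as elements a and b of a Kleene algebra; then

\[
b^{*}a^{*} \le a^{*}b^{*} \ \text{(confluence)} \quad\Longrightarrow\quad (a+b)^{*} \le a^{*}b^{*} \ \text{(Church--Rosser)} .
\]

The proof is a one-line star induction: it suffices that 1 + (a+b)\,a^{*}b^{*} \le a^{*}b^{*}, and only the summand b\,a^{*}b^{*} needs the hypothesis, via b\,a^{*}b^{*} \le b^{*}a^{*}b^{*} \le a^{*}b^{*}; the reverse inclusion a^{*}b^{*} \le (a+b)^{*} holds unconditionally.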
13.
In this contribution we examine certain variance properties of model reduction. The focus is on L2 model reduction, but some general results are also presented. These general results can be used to analyze various other model reduction schemes. The models we study are finite impulse response (FIR) and output error (OE) models. We compare the variance of two estimated models. The first one is estimated directly from data, and the other is computed by reducing a high-order model via L2 model reduction. In the FIR case we show that it is never better to estimate the model directly from data than to estimate it via L2 model reduction of a high-order FIR model. For OE models we show that the reduced model has the same variance as the directly estimated one if the reduced model class used contains the true system.
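To fix notation (ours, not the contribution's): an FIR model of order n is y(t) = \sum_{k=1}^{n} b_k u(t-k) + e(t), and L2 model reduction replaces a high-order estimate \hat{G} by the order-r model closest in the L2 sense,

\[
\hat{G}_r = \arg\min_{G_r \in \mathcal{G}_r}\; \frac{1}{2\pi}\int_{-\pi}^{\pi} \big|\hat{G}(e^{i\omega}) - G_r(e^{i\omega})\big|^{2}\, d\omega ,
\]

possibly with a frequency weighting such as the input spectrum. The variance comparison above is between this \hat{G}_r and an order-r model estimated directly from the data.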
14.
15.
Stephen F. Siegel 《Formal Methods in System Design》2012,40(1):1-19
Partial Order Reduction (POR) techniques improve the basic model checking algorithm by reducing the numbers of states and transitions explored in verifying a property of the model. In the “ample set” POR framework for the verification of an LTL-X formula φ, one associates to each state s a subset T_s of the set of all transitions enabled at s. The approach requires that whenever T_s is a proper subset, the transitions in T_s must be invisible, i.e., their execution can never change the truth values of the atomic propositions occurring in φ. In this paper, we show that the invisibility restriction can be relaxed: for propositions that only occur negatively in φ, it suffices that the transitions in T_s merely never change the truth value from true to false, and for those that occur only positively, from false to true. This opens up opportunities for reduction, in many commonly occurring scenarios, that would not be allowed by the stricter invisibility criterion.
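A small illustration (ours) of the relaxed admissibility check described above, with each atomic proposition tagged by how it occurs in φ:

def admissible(before, after, polarity):
    # before/after: dict prop -> truth value at the two ends of the transition
    # polarity:     dict prop -> 'pos', 'neg' or 'both' (how the prop occurs in phi)
    for p, pol in polarity.items():
        b, a = before[p], after[p]
        if pol == 'neg' and b and not a:      # negatively occurring: never true -> false
            return False
        if pol == 'pos' and not b and a:      # positively occurring: never false -> true
            return False
        if pol == 'both' and b != a:          # mixed polarity: classic invisibility
            return False
    return True

# p occurs only negatively in phi, so a false -> true change is admissible
print(admissible({'p': False}, {'p': True}, {'p': 'neg'}))   # True
print(admissible({'p': True}, {'p': False}, {'p': 'neg'}))   # False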
16.
17.
A probabilistic interpretation of cyclic reduction and its relationships with logarithmic reduction
We provide a simple convergence proof for the cyclic reduction algorithm for M/G/1-type Markov chains, together with a probabilistic interpretation that helps to better understand the relationships between logarithmic reduction and cyclic reduction.
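For context (standard background, not taken from the abstract): for an M/G/1-type chain with block transition matrices A_0, A_1, A_2, ..., both cyclic reduction and logarithmic reduction compute the minimal nonnegative solution G of the matrix power series equation

\[
G = \sum_{i=0}^{\infty} A_i\, G^{\,i},
\]

where (G)_{jk} is the probability that, starting one level up in phase j, the chain first reaches the level below in phase k.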
18.
19.
Reductions that aggregate fine-grained transitions into coarser transitions can significantly reduce the cost of automated verification, by reducing the size of the state space. We propose a reduction that can exploit common synchronization disciplines, such as the use of mutual exclusion for accesses to shared data structures. Exploiting them using traditional reduction theorems requires checking that the discipline is followed in the original (i.e., unreduced) system. That check can be prohibitively expensive. This paper presents a reduction that instead requires checking whether the discipline is followed in the reduced system. This check may be much cheaper, because the reachable state space is smaller.
Ernie Cohen
20.
Robust linear dimensionality reduction
We present a novel family of data-driven linear transformations, aimed at finding low-dimensional embeddings of multivariate data in a way that optimally preserves the structure of the data. The well-studied PCA and Fisher's LDA are shown to be special members of this family of transformations, and we demonstrate how to generalize these two methods so as to enhance their performance. Furthermore, our technique is the only one, to the best of our knowledge, that reflects in the resulting embedding both the data coordinates and the pairwise relationships between the data elements. Even more so, when information on the clustering (labeling) decomposition of the data is known, this information can also be integrated in the linear transformation, resulting in embeddings that clearly show the separation between the clusters, as well as their internal structure. All of this makes our technique very flexible and powerful, and lets us cope with kinds of data that other techniques fail to describe properly.
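As background on how PCA and LDA can sit inside one family of linear embeddings (generic textbook forms in our notation, not the paper's actual construction): with total scatter S_t, between-class scatter S_b and within-class scatter S_w, PCA and Fisher's LDA solve

\[
W_{\mathrm{PCA}} = \arg\max_{W^{\top}W = I} \operatorname{tr}\!\big(W^{\top} S_t\, W\big),
\qquad
W_{\mathrm{LDA}} = \arg\max_{W} \operatorname{tr}\!\big((W^{\top} S_w W)^{-1}\, W^{\top} S_b W\big),
\]

so a family that varies the scatter matrices being optimized (or mixes in pairwise relationships and label information) can recover both as special cases.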