Similar Literature
 A total of 20 similar documents were found (search time: 31 ms)
1.
Barak et al. gave a first formalization of obfuscation, describing an obfuscator as an efficient, probabilistic “compiler” that takes as input a program P and produces a new program that has the same functionality as P but is unintelligible. This means that any result an obfuscated program can compute is actually computable given only input/output access (called oracle access) to the program P; we call such results trivial results. On the basis of this informal definition, they proposed a formal definition of obfuscation based on oracle access to programs and showed that no obfuscator can exist under this definition. They also tried to relax the definition and showed that, even when restricted to some common classes of programs, no obfuscator exists. In this work, we show that their definition is too restrictive and lacks a fundamental property, which we formalize through the notion of oracle programs. Oracle programs are an abstract notion that essentially refers to perfectly obfuscated programs. We propose a new definition of obfuscation based on these oracle programs and show that such obfuscators do not exist either. Considering actual implementations of “obfuscators”, we define a new kind of obfuscators, -obfuscators: obfuscators that hide nontrivial results for at least time . By restricting the -requirement to deobfuscation (that is, outputting an intelligible program when given an obfuscated program as input), we show that such obfuscators do exist. Practical -obfuscation methods are presented at the end of this paper, focusing on code protection techniques in a malware context. Since a malware may carry out its action over an amount of time far larger than the analysis time of any automated detection program, these obfuscation methods can be considered efficient enough to greatly thwart automated analysis and keep any antivirus software in check.
This research was conducted during a stay at the Laboratoire de virologie et de cryptologie.
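The closing claim, that protection succeeds whenever analysis takes longer than the analyst can afford, can be illustrated with a toy sketch. This is my own construction, not the paper's: a payload is hidden behind a k-bit brute-force puzzle, so any analyzer needs on the order of 2^k trial decryptions, a crude stand-in for "hidden for at least some time bound".

```python
import hashlib
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def hide(payload: bytes, k: int):
    """Hide a payload (up to 32 bytes) behind a k-bit brute-force puzzle."""
    key = secrets.randbelow(2 ** k).to_bytes(4, "big")
    pad = hashlib.sha256(key).digest()[: len(payload)]
    check = hashlib.sha256(payload).digest()
    return xor_bytes(payload, pad), check  # the key is thrown away

def recover(blob: bytes, check: bytes, k: int):
    # Brute force over all 2^k keys: this loop is what lower-bounds the
    # analyst's running time in this toy model.
    for guess in range(2 ** k):
        key = guess.to_bytes(4, "big")
        pad = hashlib.sha256(key).digest()[: len(blob)]
        candidate = xor_bytes(blob, pad)
        if hashlib.sha256(candidate).digest() == check:
            return candidate
    return None
```

With k = 12 recovery is instant; scaling k pushes the recovery cost past any fixed analysis budget, which is the intuition behind time-bounded obfuscation.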

2.
Hájek introduced the logic enriching the logic BL by a unary connective vt, which is a formalization of Zadeh’s fuzzy truth value “very true”. The corresponding algebras, i.e., BL-algebras with unary operations called vt-operators, which are among other things subdiagonal, are an algebraic counterpart of this logic. Partially ordered commutative integral residuated monoids (pocrims) are a common generalization of both BL-algebras and Heyting algebras. The aim of our paper is to introduce and study algebraic properties of pocrims endowed with “very true”- and “very false”-like operators. Research supported by the Research and Development Council of the Czech Government via project MSN 6198959214.

3.
This paper investigates the self-adaptation behavior of ()-evolution strategies (ES) on the noisy sphere model. To this end, the stochastic system dynamics are approximated at the level of the mean-value dynamics. Based on this “microscopic” analysis, the steady-state behavior of the ES for the scaled-noise scenario and the constant-noise-strength scenario is analyzed theoretically and compared with real ES runs. An explanation is given for the random-walk-like behavior of the mutation strength in the vicinity of the steady state. It is shown that this is a peculiarity of the -ES and that intermediate recombination strategies do not suffer from such behavior.
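A minimal sketch of such an evolution strategy with log-normal mutation-strength self-adaptation and intermediate recombination on the noisy sphere. Parameter choices (tau, noise level, population sizes) are my assumptions for illustration, not the paper's exact setup:

```python
import math
import random

def es_noisy_sphere(n=10, mu=3, lam=12, gens=200, noise=0.01, seed=1):
    """(mu/mu_I, lambda)-ES sketch on f(x) = ||x||^2 + additive noise."""
    rng = random.Random(seed)
    x = [1.0] * n          # parent centroid
    sigma = 0.3            # parent mutation strength
    tau = 1.0 / math.sqrt(2 * n)
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            # each offspring first mutates its own sigma, then its position
            s = sigma * math.exp(tau * rng.gauss(0, 1))
            y = [xi + s * rng.gauss(0, 1) for xi in x]
            f_noisy = sum(v * v for v in y) + noise * rng.gauss(0, 1)
            offspring.append((f_noisy, y, s))
        offspring.sort(key=lambda t: t[0])   # select the mu best (noisy!)
        best = offspring[:mu]
        # intermediate recombination of object variables and sigma
        x = [sum(y[i] for _, y, _ in best) / mu for i in range(n)]
        sigma = sum(s for _, _, s in best) / mu
    return x, sigma

x, sigma = es_noisy_sphere()
print(sum(v * v for v in x))  # residual distance^2, limited by the noise floor
```

Near the steady state the selected sigma values fluctuate rather than settle, which is the random-walk-like behavior the abstract refers to.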

4.
Solving agreement problems deterministically, such as consensus and k-set agreement, in asynchronous distributed systems prone to an unbounded number of process failures has been shown to be impossible. To circumvent this impossibility, unreliable failure detectors for the crash failure model have been widely studied. These are oracles that provide information on failures; the exact nature of that information is defined by a set of abstract properties that a particular class of failure detectors satisfies. The weakest class of failure detectors that allows solving consensus is Ω. This paper considers failure detector classes from the literature that solve k-set agreement in the crash failure model and studies their relative power. It shows that the families of failure detector classes (1 ≤ x ≤ n) and (0 ≤ y ≤ n) can be “added” to provide a failure detector of the class Ω_z (1 ≤ z ≤ n, a generalization of Ω). It also characterizes the power of such an “addition”, namely, can construct Ω_z iff y + z > t, and can construct Ω_z iff x + z > t + 1, where t is the maximum number of processes that can crash in a run. As an example, the paper shows that, while one class allows solving 2-set agreement (but not consensus) and the other allows solving t-set agreement (but not (t − 1)-set agreement), a system with failure detectors of both classes can solve consensus for any value of t. More generally, the paper studies the failure detector classes and Ω_z, and shows which reductions among these classes are possible and which are not. The paper also presents a message-passing Ω_k-based k-set agreement protocol and shows that Ω_k is not enough to solve (k − 1)-set agreement. In that sense, it can be seen as a step toward the characterization of the weakest failure detector class that allows solving the k-set agreement problem. An extended abstract of this paper appeared in the proceedings of PODC 2006 [20].
This work was supported in part by a grant from LAFMI (Franco-Mexican Lab in Computer Science), the European Network of Excellence ReSIST, and PAPIIT-UNAM.

5.
The Atomic Broadcast algorithm described in this paper can deliver messages in two communication steps, even if multiple processes broadcast at the same time. It tags all broadcast messages with the local real time and delivers all messages in the order of these timestamps. Both positive and negative statements are used: “m broadcast at time 51” vs. “no messages broadcast between times 31 and 51”. To prevent crashed processes from blocking the system, the -elected leader broadcasts negative statements on behalf of the processes it suspects to have crashed. A new cheap Generic Broadcast algorithm is used to ensure consistency between conflicting statements. It requires only a majority of correct processes (n > 2f) and, in failure-free runs, delivers all non-conflicting messages in two steps. The main algorithm satisfies several new lower bounds, which are proved in this paper.
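The timestamp-ordering idea with positive and negative statements can be sketched as follows. This is an illustrative simplification (synchronized clocks, no failures, no leader), not the paper's algorithm: a message is deliverable once every process's statements, positive or negative, cover its timestamp.

```python
import heapq

def deliver_in_order(statements, n):
    """statements: list of (proc, t, msg_or_None) in arrival order.
    msg is not None  -> positive statement: proc broadcast msg at time t.
    msg is None      -> negative statement: proc broadcast nothing up to t.
    Returns messages in global timestamp order."""
    covered = [0.0] * n            # highest time each process has accounted for
    pending = []                   # min-heap of (timestamp, msg)
    delivered = []
    for proc, t, msg in statements:
        if msg is not None:
            heapq.heappush(pending, (t, msg))
        covered[proc] = max(covered[proc], t)
        horizon = min(covered)     # every process is accounted for up to here
        while pending and pending[0][0] <= horizon:
            delivered.append(heapq.heappop(pending)[1])
    return delivered
```

A message with timestamp 10 cannot be delivered until every process has either broadcast up to time 10 or stated it broadcast nothing, which is exactly why the leader must issue negative statements for suspected processes.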

6.
We present a novel logic-based framework to automate multi-issue bilateral negotiation in e-commerce settings. The approach exploits logic as the communication language among agents and optimization techniques to find Pareto-efficient agreements. We introduce , a propositional logic extended with concrete domains, which allows one to model relations among issues (both numerical and non-numerical) via logical entailment, unlike well-known approaches that treat issues as uncorrelated. Through it is possible to represent the buyer’s request, the seller’s supply and their respective preferences as formulas endowed with a formal semantics, e.g., “if I spend more than 30,000 € for a sedan then I want more than a two-year warranty and a GPS system included”. We mix logic and utility theory to express preferences both qualitatively and quantitatively. We illustrate the theoretical framework, the logical language, and the one-shot negotiation protocol we adopt, and show that we are able to compute Pareto-efficient outcomes, using a mediator to solve an optimization problem. We prove the computational adequacy of our method by studying the complexity of finding Pareto-efficient solutions in our setting.
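The mediator's job can be sketched on a toy discrete instance. Issue names, utilities and the linking constraint below are made up for illustration; the paper's logical language and entailment machinery are not modeled here:

```python
from itertools import product

# Two issues; a constraint links them in the spirit of logical entailment:
# a warranty longer than 2 years entails an included GPS.
issues = {"warranty_years": [1, 2, 3], "gps": [False, True]}

def buyer_u(a):   # buyer prefers a long warranty and GPS
    return 2 * a["warranty_years"] + (3 if a["gps"] else 0)

def seller_u(a):  # seller prefers a short warranty, mild bonus for selling GPS
    return -a["warranty_years"] + (1 if a["gps"] else 0)

def feasible(a):
    return not (a["warranty_years"] > 2 and not a["gps"])

agreements = [dict(zip(issues, vals)) for vals in product(*issues.values())
              if feasible(dict(zip(issues, vals)))]

def dominates(a, b):  # a Pareto-dominates b
    return (buyer_u(a) >= buyer_u(b) and seller_u(a) >= seller_u(b)
            and (buyer_u(a) > buyer_u(b) or seller_u(a) > seller_u(b)))

pareto = [a for a in agreements if not any(dominates(b, a) for b in agreements)]
# the mediator picks one Pareto-efficient outcome, e.g. maximizing the sum
best = max(pareto, key=lambda a: buyer_u(a) + seller_u(a))
```

Real instances replace the brute-force enumeration with an optimization problem, but the structure (feasibility from entailment, then Pareto filtering) is the same.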

7.
Coupling and self-stabilization
A randomized self-stabilizing algorithm is an algorithm that, whatever the initial configuration, reaches a set of legal configurations in finite time with probability 1. The proof of convergence towards is generally done by exhibiting a potential function , which measures the “vertical” distance from any configuration to , such that decreases with non-null probability at each step of . We propose here a method, based on the notion of coupling, which uses a “horizontal” distance between any pair of configurations, such that decreases in expectation at each step of . In contrast with classical methods, our coupling method does not require knowledge of . In addition to the proof of convergence, the method allows us to assess the convergence rate according to two different measures. Proofs produced by the method are often simpler or give better upper bounds than their classical counterparts, as exemplified here on Herman's mutual exclusion and Iterated Prisoner's Dilemma algorithms in the case of cyclic graphs.
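The coupling idea can be seen on a textbook example rather than the paper's algorithms: two lazy random walks on a cycle, coupled so that their gap shrinks and, once they meet, they move together forever. The time to meet bounds how fast the "horizontal" distance between the two runs reaches zero.

```python
import random

def coupled_walk(n=10, seed=3):
    """Couple two random walks on the cycle Z_n started in different states.
    Classic coupling: at each step exactly one of the two chains (chosen at
    random) takes a +/-1 step, so the gap itself performs a random walk and
    hits 0 in finite expected time. Returns the number of steps to meet."""
    rng = random.Random(seed)
    a, b = 0, n // 2
    steps = 0
    while a != b:
        move = rng.choice([-1, 1])
        if rng.random() < 0.5:
            a = (a + move) % n
        else:
            b = (b + move) % n
        steps += 1
    return steps

print(coupled_walk())
```

The same template underlies the paper's method: exhibit a coupling under which the distance between any two configurations decreases in expectation, and the meeting time yields a convergence-rate bound.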

8.
In this paper we deal with the problem of controlling a safe place/transition net so as to avoid a set of forbidden markings . We say that a given set of markings has property REACH if it is closed under the reachability operator. We assume that all transitions of the net are controllable and that the set of forbidden markings has property REACH. The technique of unfolding is used to design a maximally permissive supervisor for this control problem. The supervisor takes the form of a set of control places to be added to the unfolding of the original net. The approach is also extended to the problem of preventing a larger set of impending forbidden markings. This is a superset of the forbidden markings that also includes all those markings from which, unless the supervisor blocks the plant, a marking in is inevitably reached in a finite number of steps. Finally, we consider the particular case in which the control objective is to design a maximally permissive supervisor for deadlock avoidance, and we show that in this case our procedure can be efficiently implemented by means of linear algebraic techniques. Submitted to Discrete Event Dynamic Systems. A preliminary version of this paper, titled “Control of safe ordinary Petri nets with marking specifications using unfolding,” was published in the Proc. IFAC WODES'04: 7th Work. on Discrete Event Systems (Reims, France), September 2004. The contact author is Alessandro Giua.

9.
Summary. The sufficient-completeness property of equational algebraic specifications has been found useful in providing guidelines for designing abstract data type specifications as well as in proving inductive properties using the inductionless-induction method. The sufficient-completeness property is known to be undecidable in general. In an earlier paper, it was shown to be decidable for constructor-preserving, complete (canonical) term rewriting systems, even when there are relations among constructor symbols. In this paper, the complexity of the sufficient-completeness property is analyzed for different classes of term rewriting systems. A number of results about the complexity of the sufficient-completeness property for complete (canonical) term rewriting systems are proved: (i) the problem is co-NP-complete for term rewriting systems with free constructors (i.e., no relations among constructors are allowed); (ii) the problem remains co-NP-complete for term rewriting systems with unary and nullary constructors, even when there are relations among constructors; (iii) the problem is decidable in almost exponential time for left-linear term rewriting systems with relations among constructors; and (iv) for left-linear complete constructor-preserving rewriting systems, the problem can be decided in steps exponential in n log n, where n is the size of the rewriting system. No better lower bound for the complexity of the sufficient-completeness property for complete (canonical) term rewriting systems with nonlinear left-hand sides is known. An algorithm for left-linear complete constructor-preserving rewriting systems is also discussed. Finally, the sufficient-completeness property is shown to be undecidable for non-linear complete term rewriting systems with associative functions.
These complexity results also apply to the ground-reducibility property (also called inductive reducibility), which is known to be directly related to the sufficient-completeness property. Some of the results in this paper were reported in a paper titled “Complexity of Sufficient-Completeness” presented at the Sixth Conf. on Foundations of Software Technology and Theoretical Computer Science, New Delhi, India, Dec. 1986. The term quasi-reducibility is replaced in this paper by ground-reducibility, as the latter seems to convey much more about the concept than the former. Partially supported by National Science Foundation Grant nos. CCR-8408461 and CCR-8906678. Partially supported by National Science Foundation Grant nos. CCR-8408461 and CCR-9009414. Partially supported by National Science Foundation Grant no. DCR-8603184.

10.
In constructing local Fourier bases and in solving differential equations with nonperiodic solutions through Fourier spectral algorithms, it is necessary to solve the Fourier Extension Problem. This is the task of extending a nonperiodic function, defined on an interval , to a function which is periodic on the larger interval . We derive the asymptotic Fourier coefficients for an infinitely differentiable function which is one on an interval , identically zero for , and varies smoothly in between. Such smoothed “top-hat” functions are “bells” in wavelet theory. Our bell is given, for x ≥ 0, by an explicit formula in the text. By applying steepest descent to approximate the coefficient integrals in the limit of large degree j, we show that when the width L is fixed, the Fourier cosine coefficients a_j of the bell on are proportional to , where Λ(j) is an oscillatory factor of degree given in the text. We also show that, to minimize the error in a Fourier series truncated after the Nth term, the width should be chosen to increase with N as . We derive similar asymptotics for the function f(x) = x as extended by a more sophisticated scheme with overlapping bells; this gives an even faster rate of Fourier convergence.
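The super-algebraic decay of the cosine coefficients of such a bell can be checked numerically. The bell below is a generic C-infinity "top-hat" built from the classic exp(-1/t) smoothstep (my choice for illustration, not the paper's specific bell): it equals 1 on [0, 1], falls smoothly to 0 on [1, 2], and is 0 up to π.

```python
import math

def s(t):
    # C-infinity function vanishing with all derivatives at t = 0
    return math.exp(-1.0 / t) if t > 0 else 0.0

def bell(x, a=1.0, L=1.0):
    """1 on [0, a], 0 beyond a + L, smooth exp(-1/t)-step in between."""
    if x <= a:
        return 1.0
    if x >= a + L:
        return 0.0
    t = (a + L - x) / L
    return s(t) / (s(t) + s(1 - t))

def cosine_coeff(j, npts=4000):
    # a_j = (2/pi) * integral_0^pi bell(x) cos(jx) dx, trapezoid rule
    h = math.pi / npts
    total = 0.5 * (bell(0.0) + bell(math.pi) * math.cos(j * math.pi))
    for k in range(1, npts):
        x = k * h
        total += bell(x) * math.cos(j * x)
    return (2.0 / math.pi) * total * h

# compare a low-frequency band with a high-frequency band of coefficients;
# band maxima are used because individual coefficients oscillate in sign
low  = max(abs(cosine_coeff(j)) for j in range(1, 7))
high = max(abs(cosine_coeff(j)) for j in range(20, 31))
print(low, high)
```

Because all derivatives of the bell vanish at the joins, repeated integration by parts kills every fixed power of 1/j, so the high band is far smaller than the low band, the qualitative behavior the paper's steepest-descent analysis makes precise.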

11.
A hybrid approach for identification of concurrent control chart patterns
Control chart patterns (CCPs) are widely used to identify potential process problems in modern manufacturing industries. The earliest statistical techniques, including the chart and the R chart, are used to monitor the process mean and the process variance, respectively. Recently, pattern recognition techniques based on artificial neural networks (ANNs) have become popular for recognizing unnatural CCPs. However, most of them are limited to recognizing simple CCPs arising from a single type of unnatural variation. In other words, they are incapable of handling concurrent CCPs, where two types of unnatural variation exist together within the manufacturing process. To fill this research gap, this paper presents a hybrid approach based on independent component analysis (ICA) and decision trees (DT) to identify concurrent CCPs. Without loss of generality, six types of concurrent CCPs are used to validate the proposed method. Experimental results show that the proposed approach handles most of the concurrent CCPs successfully. The proposed method has two limitations in real applications: it needs at least two concurrent CCPs to reconstruct their source patterns, and it may be incapable of handling a concurrent pattern caused by two correlated processes (“upward trend” and “upward shift”).

12.
This paper provides an effective method to create an abstract simplicial complex homotopy equivalent to a given set described by nonlinear inequalities (polynomial or not). To our knowledge, no other numerical algorithm is able to deal with this type of problem. The proposed approach divides the set into subsets that are proven to be contractible using interval arithmetic. The method is close to Čech cohomology and uses the nerve theorem. Some examples illustrate the principle of the approach. The algorithm has been implemented.
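The basic interval-arithmetic test such algorithms rely on can be sketched in a few lines: evaluate the defining function over a box; if the resulting interval excludes 0, the box lies entirely inside or entirely outside the set. The toy set below, the unit disc {x² + y² ≤ 1}, is my example; this illustrates the interval tests, not the paper's homotopy-equivalence construction.

```python
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def sqr(self):
        # tight square [x]^2 (tighter than [x]*[x] when 0 is inside)
        cands = (self.lo * self.lo, self.hi * self.hi)
        lo = 0.0 if self.lo <= 0 <= self.hi else min(cands)
        return Interval(lo, max(cands))

def classify(box):
    """box = (Interval x, Interval y); set is {f <= 0}, f = x^2 + y^2 - 1."""
    x, y = box
    f = x.sqr() + y.sqr() - Interval(1.0, 1.0)
    if f.hi <= 0:
        return "inside"    # the whole box satisfies f <= 0
    if f.lo > 0:
        return "outside"   # the whole box violates f <= 0
    return "unknown"       # box straddles the boundary: bisect further
```

Boxes classified "inside" are trivially contractible pieces of the set; "unknown" boxes are bisected, which is the branch-and-prove loop behind interval set-characterization methods.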

13.
In the statistical analysis of measurement results it is often necessary to compute the range of the population variance when we only know the intervals of possible values of the x_i. While can be computed efficiently, the problem of computing is, in general, NP-hard. In our previous paper “Population Variance under Interval Uncertainty: A New Algorithm” (Reliable Computing 12(4) (2006), pp. 273–280) we showed that in a practically important case we can use constraint techniques to compute in time O(n log n). In this paper we provide new algorithms that compute (in all cases) and (for the above case) in linear time O(n). Similar linear-time algorithms are described for computing the range of the entropy when we only know the intervals of possible values of the probabilities p_i. In general, a statistical characteristic ƒ can be more complex, so that even computing ƒ can take much longer than linear time. For such ƒ, the question is how to compute the range in as few calls to ƒ as possible. We show that for convex symmetric functions ƒ, we can compute in n calls to ƒ.
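A brute-force sketch of the interval-variance problem, for intuition rather than efficiency (this is not the paper's O(n) algorithm). The population variance is jointly convex in the x_i, so its maximum over a box is attained at interval endpoints (enumeration is exponential in n, consistent with NP-hardness in general), while its minimum can be found by simple coordinate descent: the optimal x_i is the mean of the others, clamped to [lo_i, hi_i].

```python
from itertools import product

def pop_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def var_max(intervals):
    # max of a convex function over a box is attained at a vertex
    return max(pop_var(p) for p in product(*intervals))

def var_min(intervals, sweeps=100):
    # coordinate descent on a smooth convex function converges to the
    # global minimum; each coordinate's optimum is the clamped mean of
    # the remaining coordinates
    xs = [(lo + hi) / 2 for lo, hi in intervals]
    n = len(xs)
    for _ in range(sweeps):
        for i, (lo, hi) in enumerate(intervals):
            others = (sum(xs) - xs[i]) / (n - 1)
            xs[i] = min(max(others, lo), hi)
    return pop_var(xs)
```

For example, with intervals [0, 1] and [0, 1] the variance ranges over [0, 0.25]: the minimum 0 is reached by any equal pair, the maximum 0.25 by the endpoint pair (0, 1).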

14.
15.
16.
One method malware authors use to defeat detection of their programs is to use morphing engines to rapidly generate a large number of variants. Inspired by previous work in author attribution of natural-language text, we investigate the problem of attributing a malware sample to a morphing engine. Specifically, we present the malware engine attribution problem and formally define three variations of it: MVRP, DENSITY and GEN, which reflect the challenges malware analysts face today. We design and implement heuristics to address these problems and show their effectiveness on a set of well-known malware morphing engines and a real-world malware collection, reaching detection accuracies of 96% and higher. Our experiments confirm the applicability of the proposed approach in practice and indicate that engine attribution may offer a viable enhancement of current defenses against malware.

17.
18.
19.
20.
The computational challenges posed by fluid–structure interaction (FSI) modeling of parachutes include the lightness of the parachute canopy compared to the air masses involved in the parachute dynamics; in the case of “ringsail” parachutes, the geometric complexities created by the construction of the canopy from “rings” and “sails” with hundreds of ring “gaps” and sail “slits”; and, in the case of parachute clusters, the contact between the parachutes. The Team for Advanced Flow Simulation and Modeling () has successfully addressed these computational challenges with the Stabilized Space–Time FSI (SSTFSI) technique, which was developed and improved over the years by the team and serves as the core numerical technology, together with a number of special techniques developed in conjunction with the SSTFSI technique. The quasi-direct and direct coupling techniques developed by the team, which are applicable to cases with incompatible fluid and structure meshes at the interface, yield more robust algorithms for FSI computations where the structure is light and therefore more sensitive to variations in the fluid dynamics forces. The special technique used in dealing with the geometric complexities of the rings and sails is the Homogenized Modeling of Geometric Porosity, developed and improved in recent years by the team. The Surface-Edge-Node Contact Tracking (SENCT) technique was introduced by the team as a contact algorithm whose objective is to prevent the structural surfaces from coming closer than a minimum distance in an FSI computation. The recently introduced conservative version of the SENCT technique is more robust and is now an essential technology in the parachute cluster computations carried out by the team. We provide an overview of the core and special techniques developed by the team, present single-parachute FSI computations carried out for design-parameter studies, and report FSI computation and dynamical analysis of two-parachute clusters.

