Similar Articles
20 similar articles found.
1.
A. Murli, Calcolo 1967, 4(4): 647–676
In this paper we obtain three formulas for computing the Fourier integral. The formulas are essentially a linear combination of values of the function f(t) with weight coefficients, which are independent of f(t) and of the parameter ω. The formulas are derived by means of a polynomial approximation of f(t) on each interval (kmτ, (k+1)mτ), where m is an integer and τ is the period of the trigonometric function.

2.
Regular Random k-SAT: Properties of Balanced Formulas (Cited by 1; self: 0, other: 1)
We consider a model for generating random k-SAT formulas in which each literal occurs approximately the same number of times in the formula clauses (regular random k-SAT). Our experimental results show that such regular random k-SAT instances are much harder than the usual uniform random k-SAT problems. This is in agreement with other results showing that more balanced instances of random combinatorial problems are often much more difficult to solve than uniformly random instances, even at phase transition boundaries. There are almost no formal results known for such problem distributions. The balancing constraints add a dependency between variables that complicates a standard analysis. Regular random 3-SAT exhibits a phase transition as a function of the ratio α of clauses to variables. The transition takes place at approximately α = 3.5. We show that for α > 3.78, random regular 3-SAT formulas are with high probability (w.h.p.) unsatisfiable; here we say that events hold w.h.p. if their probability tends to 1 as n → ∞. We also show that the analysis of a greedy algorithm proposed by Kaporis et al. for the uniform 3-SAT model can be adapted for regular random 3-SAT. In particular, we show that for formulas with ratio α < 2.46, a greedy algorithm finds a satisfying assignment with positive probability.
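As a rough illustration of the balanced model described in this abstract, the sketch below generates clauses from a shuffled pool in which every literal appears (almost) equally often. It is a simplification of our own devising; the actual model also forbids repeated variables within a clause, which this sketch does not enforce.

```python
import random
from collections import Counter

def regular_random_3sat(n, m, seed=0):
    """Sketch of a balanced (regular) random 3-SAT generator: each of the
    2n literals occurs the same number of times, up to rounding.
    Hypothetical helper, not the authors' exact model."""
    rng = random.Random(seed)
    lits = list(range(1, n + 1)) + [-v for v in range(1, n + 1)]
    # A pool of 3m literal slots, filled round-robin so counts stay balanced.
    slots = (lits * (3 * m // (2 * n) + 1))[:3 * m]
    rng.shuffle(slots)
    return [tuple(slots[i:i + 3]) for i in range(0, 3 * m, 3)]

formula = regular_random_3sat(20, 70)
counts = Counter(l for c in formula for l in c)
# Occurrences of any two literals differ by at most 1.
assert max(counts.values()) - min(counts.values()) <= 1
```

The uniform model, by contrast, draws each literal independently, so occurrence counts fluctuate on the order of the square root of their mean.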

3.
We present lower and upper bounds on the size of pi-sigma-pi (ΠΣΠ) formulas computing threshold functions for small thresholds. Our results show that the limitations of ΠΣΠ formulas for computing threshold functions for small thresholds are more pronounced than suggested by the lower bounds for small-depth circuits computing the majority function.

4.
5.
Propositional satisfiability (SAT) is a success story in Computer Science and Artificial Intelligence: SAT solvers are currently used to solve problems in many different application domains, including planning and formal verification. The main reason for this success is that modern SAT solvers can successfully deal with problems having millions of variables. All these solvers are based on the Davis–Logemann–Loveland procedure (dll). In its original version, dll is a decision procedure, but it can be very easily modified in order to return one or all assignments satisfying the input set of clauses, assuming at least one exists. However, in many cases it is not enough to compute assignments satisfying all the input clauses: the returned assignments also have to be “optimal” in some sense, e.g., they have to satisfy as many other constraints—expressed as preferences—as possible. In this paper we start with qualitative preferences on literals, defined as a partially ordered set (poset) of literals. Such a poset induces a poset on total assignments and leads to the definition of an optimal model for a formula ψ as a minimal element of the poset on the models of ψ. We show (i) how dll can be extended in order to return one or all optimal models of ψ (once ψ is converted into clauses, and assuming it is satisfiable), and (ii) how the same procedures can be used to compute optimal models with respect to a qualitative preference on formulas and/or a quantitative preference on literals or formulas. We implemented our ideas and tested the resulting system on a variety of very challenging structured benchmarks. The results indicate that our implementation performs comparably to other state-of-the-art systems tailored for the specific problems we consider.

6.
The concept of truth degrees of formulas in Łukasiewicz n-valued propositional logic Ln is proposed. A limit theorem is obtained, which says that the truth function τn induced by truth degrees converges to the integrated truth function τ as n tends to infinity. Hence this limit theorem builds a bridge between the discrete-valued Łukasiewicz logic and the continuous-valued Łukasiewicz logic. Moreover, the results obtained in the present paper are a natural generalization of the corresponding results in two-valued propositional logic.
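The notion of truth degree can be made concrete: average a formula's truth value over all valuations into the n-valued set {0, 1/(n−1), ..., 1}. The sketch below does this for the Łukasiewicz implication p → q; the function names and the uniform averaging are our own reading of the definition, not the paper's formalism.

```python
from itertools import product
from fractions import Fraction

def luk_impl(a, b):
    """Łukasiewicz implication a -> b = min(1, 1 - a + b)."""
    return min(1, 1 - a + b)

def truth_degree(n, formula, nvars):
    """Truth degree in n-valued Łukasiewicz logic: the average truth value
    of `formula` over all valuations into {0, 1/(n-1), ..., 1}.
    A sketch assuming uniform weight on valuations."""
    vals = [Fraction(i, n - 1) for i in range(n)]
    total = sum(formula(*v) for v in product(vals, repeat=nvars))
    return total / len(vals) ** nvars

# tau_n(p -> q) approaches the integrated truth degree 5/6 as n grows.
print(float(truth_degree(4, luk_impl, 2)), float(truth_degree(40, luk_impl, 2)))
```

For p → q the discrete degrees work out to (5n − 1)/(6n), which converges to the integrated truth degree 5/6, the limit theorem's bridge in miniature.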

7.
In this paper we consider the classical capacities of quantum-classical channels corresponding to measurement of observables. Special attention is paid to the case of continuous observables. We give formulas for the unassisted and entanglement-assisted classical capacities C and C_ea and consider some explicitly solvable cases, which give simple examples of entanglement-breaking channels with C < C_ea. We also elaborate on the ensemble-observable duality to show that C_ea for the measurement channel is related to the χ-quantity for the dual ensemble in the same way as C is related to the accessible information. This provides both the accessible information and the χ-quantity for the quantum ensembles dual to our examples.

8.
In this paper, we consider the implementation of the product c = Ab, where A is an N1 × N3 band matrix with bandwidth ω and b is a vector of size N3 × 1, on bidirectional and unidirectional linear systolic arrays (BLSA and ULSA, respectively). We distinguish the cases 1 ≤ ω ≤ N3 and N3 ≤ ω ≤ N1 + N3 − 1. A modification of the systolic array synthesis procedure based on data dependencies and space-time transformations of the data dependency graph is proposed. The modification enables obtaining both BLSA and ULSA with an optimal number of processing elements (PEs) regardless of the matrix bandwidth. The execution time of the synthesized arrays has been minimized. We derive explicit formulas for the synthesis of these arrays. The performance of the designed arrays is discussed and compared to that of arrays obtained by the standard design procedure.
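The band structure that such arrays exploit can be seen in a plain sequential sketch: each output element touches at most ω entries of b. The names, the dense storage, and the split into lower/upper bandwidths p and q (so ω = p + q + 1) are ours, chosen for clarity rather than fidelity to the paper's design.

```python
def band_matvec(A, b, p, q):
    """Sketch of c = A b for a band matrix A with lower bandwidth p and
    upper bandwidth q (total bandwidth w = p + q + 1). Each c[i] reads at
    most w entries of b -- the locality a linear systolic array exploits.
    A is stored dense here purely for clarity."""
    n1, n3 = len(A), len(b)
    c = [0.0] * n1
    for i in range(n1):
        # Only columns inside the band contribute to row i.
        for j in range(max(0, i - p), min(n3, i + q + 1)):
            c[i] += A[i][j] * b[j]
    return c

A = [[2, 1, 0, 0],
     [1, 2, 1, 0],
     [0, 1, 2, 1]]          # 3 x 4 band matrix, p = q = 1, bandwidth 3
print(band_matvec(A, [1, 1, 1, 1], 1, 1))   # → [3.0, 4.0, 4.0]
```

The dense product would scan all N3 columns per row; the band loop does O(ω) work per output, which is the per-PE workload the synthesis procedure distributes.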

9.
Model checking for a probabilistic branching time logic with fairness (Cited by 4; self: 0, other: 4)
We consider concurrent probabilistic systems, based on probabilistic automata of Segala & Lynch [55], which allow non-deterministic choice between probability distributions. These systems can be decomposed into a collection of “computation trees” which arise by resolving the non-deterministic, but not probabilistic, choices. The presence of non-determinism means that certain liveness properties cannot be established unless fairness is assumed. We introduce a probabilistic branching time logic PBTL, based on the logic TPCTL of Hansson [30] and the logic PCTL of [55], resp. pCTL [14]. The formulas of the logic express properties such as “every request is eventually granted with probability at least p”. We give three interpretations for PBTL on concurrent probabilistic processes: the first is standard, while in the remaining two interpretations the branching time quantifiers are taken to range over a certain kind of fair computation trees. We then present a model checking algorithm for verifying whether a concurrent probabilistic process satisfies a PBTL formula assuming fairness constraints. We also propose adaptations of existing model checking algorithms for pCTL [4, 14] to obtain procedures for PBTL under fairness constraints. The techniques developed in this paper have applications in automatic verification of randomized distributed systems. Received: June 1997 / Accepted: May 1998
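The core computation behind this kind of model checking, the probability of eventually reaching a set of target states under non-deterministic choice, can be sketched by value iteration on a small MDP. This is a generic textbook scheme (fairness constraints aside), not the paper's algorithm; all names are ours.

```python
def max_reach_prob(trans, goal, n_states, iters=200):
    """Sketch: maximal probability of eventually reaching `goal` in a
    concurrent probabilistic system (an MDP), by value iteration.
    trans[s] is a list of distributions, each a {successor: prob} dict;
    the non-deterministic choice picks which distribution fires."""
    p = [1.0 if s in goal else 0.0 for s in range(n_states)]
    for _ in range(iters):
        for s in range(n_states):
            if s in goal:
                continue
            if trans[s]:
                # Adversary resolves non-determinism to maximize reaching.
                p[s] = max(sum(pr * p[t] for t, pr in d.items())
                           for d in trans[s])
    return p

# State 0 chooses between a fair coin (goal state 2 w.p. 1/2) and a loop.
trans = {0: [{1: 0.5, 2: 0.5}, {0: 1.0}], 1: [{1: 1.0}], 2: []}
print(max_reach_prob(trans, goal={2}, n_states=3))   # → [0.5, 0.0, 1.0]
```

Checking a PBTL-style formula "goal is reached with probability at least p" then reduces to comparing the computed value against the bound p.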

10.
The granularity of given temporal information is the level of abstraction at which information is expressed. Different units of measure allow one to represent different granularities. Indeterminacy is often present in temporal information given at different granularities: temporal indeterminacy is related to incomplete knowledge of when the considered fact happened. Focusing on temporal databases, different granularities and indeterminacy have to be considered in expressing valid time, i.e., the time at which the information is true in the modeled reality. In this paper, we propose HMAP (the term is the transliteration of an ancient Greek poetical word meaning “day”), a temporal data model extending the capability of defining valid times with different granularity and/or with indeterminacy. In HMAP, absolute intervals are explicitly represented by their start, end, and duration: in this way, we can represent valid times such as “in December 1998 for five hours”, “from July 1995, for 15 days”, “from March 1997 to October 15, 1997, between 6 and 6:30 p.m.”. HMAP is based on a three-valued logic for managing uncertainty in temporal relationships. Formulas involving different temporal relationships between intervals, instants, and durations can be defined, allowing one to query the database with different granularities, not necessarily related to that of the data. In this paper, we also discuss the complexity of the algorithms for evaluating HMAP formulas, and show that the formulas can be expressed as constraint networks falling into the class of simple temporal problems, which can be solved in polynomial time. Received 6 August 1998 / Accepted 13 July 2000 Published online: 13 February 2001
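A three-valued temporal relationship of the kind described can be sketched as follows: with indeterminate start and end points, "a ends before b starts" may be definitely true, definitely false, or unknown. The interval encoding and names here are our own illustration, not HMAP's syntax.

```python
def before(a, b):
    """Three-valued 'a ends before b starts' for indeterminate intervals.
    An interval is ((earliest_start, latest_start), (earliest_end, latest_end));
    returns True, False, or None for 'unknown'. Illustrative sketch only."""
    _, (ee_a, le_a) = a          # possible end points of a
    (es_b, ls_b), _ = b          # possible start points of b
    if le_a < es_b:
        return True              # a definitely ends before b starts
    if ee_a >= ls_b:
        return False             # a definitely does not
    return None                  # possibly -- unknown

mar97 = ((0, 0), (10, 20))       # ends somewhere in [10, 20]
oct97 = ((15, 25), (30, 30))     # starts somewhere in [15, 25]
print(before(mar97, oct97))      # the end/start ranges overlap → None
```

Conjunctions of such relations, evaluated under Kleene-style three-valued logic, give the flavor of the constraint networks the complexity discussion refers to.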

11.
A method is considered for constructing weighted arithmetic means from ℝ^m to ℝ which possess the interpolation property. Necessary and sufficient conditions are then given for the convergence of such interpolating mean functions to a continuous real function f(ξ), ξ ∈ ℝ^m. Finally, a few formulas are considered as an application of the proposed method. This work has been supported by the Italian Ministry of Scientific and Technological Research (M.U.R.S.T.) and the National Research Council (C.N.R.).

12.
HARDI (High Angular Resolution Diffusion Imaging) is a recent magnetic resonance imaging (MRI) technique for imaging water diffusion processes in fibrous tissues such as brain white matter and muscles. In this article we study left-invariant diffusion on the group of 3D rigid body movements (i.e. the 3D Euclidean motion group) SE(3) and its application to crossing-preserving smoothing of HARDI images. The linear left-invariant (convection-)diffusions are forward Kolmogorov equations of Brownian motions on the space of positions and orientations in 3D embedded in SE(3) and can be solved by ℝ³ ⋊ S² convolution with the corresponding Green's functions. We provide analytic approximation formulas and explicit sharp Gaussian estimates for these Green's functions. In our design and analysis of appropriate (nonlinear) convection-diffusions on HARDI data we explain the underlying differential geometry on SE(3). We write our left-invariant diffusions in covariant derivatives on SE(3) using the Cartan connection. This Cartan connection has constant curvature and constant torsion, and so do the exponential curves, which are the auto-parallels along which our left-invariant diffusion takes place. We provide experiments of our crossing-preserving Euclidean-invariant diffusions on artificial HARDI data containing crossing fibers.

13.
Propositional dynamic logic (PDL) provides a natural setting for semantics of means-end relations involving non-determinism, but such models do not include probabilistic features common to much practical reasoning involving means and ends. We alter the semantics for PDL by adding probabilities to the transition systems and interpreting dynamic formulas 〈α〉ϕ as fuzzy predicates about the reliability of α as a means to ϕ. This gives our semantics a measure of efficacy for means-end relations.

14.
Reverse triple I method of fuzzy reasoning (Cited by 8; self: 1, other: 8)
A theory of the reverse triple I method with sustention degree is presented, using the implication operator R0 in every step of the fuzzy reasoning. Its computational formulas, the supremum for fuzzy modus ponens and the infimum for fuzzy modus tollens, are given respectively. Moreover, through a generalization of this problem, the corresponding formulas of the α-reverse triple I method with sustention degree are also obtained. In addition, the theory of the reverse triple I method with restriction degree is proposed as well, again using the operator R0, and the computational formulas of the infimum for fuzzy modus ponens and the supremum for fuzzy modus tollens are shown.
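For reference, the operator R0 used here is commonly defined as R0(a, b) = 1 if a ≤ b and max(1 − a, b) otherwise, i.e. the residuum of the nilpotent-minimum t-norm; this reading of R0 is our own gloss, not the paper's derivation. The sketch below checks the residuation property exactly with rational arithmetic.

```python
from fractions import Fraction

ONE = Fraction(1)

def nm_tnorm(a, b):
    """Nilpotent-minimum t-norm; the R0 implication below is its residuum
    (our understanding of the operator R0 named in the abstract)."""
    return Fraction(0) if a + b <= 1 else min(a, b)

def R0(a, b):
    """R0(a, b) = 1 if a <= b, else max(1 - a, b)."""
    return ONE if a <= b else max(ONE - a, b)

# Residuation: T(a, b) <= c  iff  a <= R0(b, c), checked exactly on a grid.
grid = [Fraction(i, 10) for i in range(11)]
ok = all((nm_tnorm(a, b) <= c) == (a <= R0(b, c))
         for a in grid for b in grid for c in grid)
print(ok)   # → True
```

Exact rationals matter here: with floating point, boundary cases such as a + b = 1 can land on the wrong side of the comparison and spuriously break the equivalence.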

15.
We present a theory of evaluating integrals of rapidly oscillating functions in various classes of subintegral functions with the use of a mesh information operator on the subintegral functions. The theory allows us to derive and prove optimal (with respect to accuracy and/or performance) and nearly optimal quadrature formulas, to test their quality against well-known and proposed numerical integration algorithms, and to determine their efficiency domains. A technique is proposed to determine the optimal parameters of computational algorithms that obtain the ε-solution of the problem.
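One standard way to tame a rapidly oscillating integrand, in the spirit of the quadrature formulas discussed, is product integration: replace f by a piecewise-linear interpolant and integrate each piece against cos(ωt) in closed form, so the oscillation is absorbed into the weights. The sketch below is our own illustration, not the paper's optimal formulas.

```python
import math

def filon_linear(f, a, b, omega, n):
    """Sketch of product quadrature for I = integral of f(t) cos(omega t):
    f is replaced by its piecewise-linear interpolant on n panels and each
    piece is integrated against cos(omega t) exactly."""
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        x0, x1 = a + k * h, a + (k + 1) * h
        f0, f1 = f(x0), f(x1)
        s = (f1 - f0) / h                      # slope of the interpolant
        def F(t):
            # Antiderivative of (f0 + s (t - x0)) cos(omega t).
            return ((f0 - s * x0) * math.sin(omega * t) / omega
                    + s * (math.cos(omega * t) / omega**2
                           + t * math.sin(omega * t) / omega))
        total += F(x1) - F(x0)
    return total

# f(t) = t on [0, 1]: exact value is cos(w)/w^2 + sin(w)/w - 1/w^2.
omega = 200.0
exact = math.cos(omega) / omega**2 + math.sin(omega) / omega - 1 / omega**2
approx = filon_linear(lambda t: t, 0.0, 1.0, omega, 8)
print(abs(approx - exact))
```

Because the test integrand is itself linear, the rule is exact up to rounding even with 8 panels at ω = 200, a regime where an oscillation-blind rule with the same number of nodes is useless.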

16.
In this paper, we re-examine the results of prior work on methods for computing ad hoc joins. We develop a detailed cost model for predicting join algorithm performance, and we use the model to develop cost formulas for the major ad hoc join methods found in the relational database literature. We show that various pieces of “common wisdom” about join algorithm performance fail to hold up when analyzed carefully, and we use our detailed cost model to derive optimal buffer allocation schemes for each of the join methods examined here. We show that optimizing their buffer allocations can lead to large performance improvements, e.g., as much as a 400% improvement in some cases. We also validate our cost model's predictions by measuring an actual implementation of each join algorithm considered. The results of this work should be directly useful to implementors of relational query optimizers and query processing systems. Edited by M. Adiba. Received May 1993 / Accepted April 1996
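Two of the textbook cost formulas such a model refines can be sketched directly. These are the standard first-approximation forms from the database literature, not the paper's detailed model; pr and ps are page counts, B the number of buffer pages.

```python
import math

def block_nested_loops_io(pr, ps, B):
    """Textbook I/O cost of block nested-loops join: read the outer R once,
    and scan the inner S once per (B-2)-page chunk of R (one page reserved
    for S input, one for output). A generic formula in the spirit of the
    cost model discussed, not the paper's own."""
    return pr + math.ceil(pr / (B - 2)) * ps

def grace_hash_io(pr, ps):
    """Classic Grace hash join: partition both inputs, then probe --
    roughly three passes over each (assuming enough buffers, no skew)."""
    return 3 * (pr + ps)

# With few buffers the inner relation is rescanned 100 times, and the
# "common wisdom" winner flips -- the kind of effect buffer allocation
# decisions control.
print(block_nested_loops_io(1000, 5000, 12), grace_hash_io(1000, 5000))  # → 501000 18000
```

The paper's point is precisely that such coarse formulas hide buffer-allocation effects; the sketch only shows the shape of the comparison they refine.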

17.
Interactive provers typically use higher-order logic, while automatic provers typically use first-order logic. To integrate interactive provers with automatic ones, one must translate higher-order formulas to first-order form. The translation should ideally be both sound and practical. We have investigated several methods of translating function applications, types, and λ-abstractions. Omitting some type information improves the success rate but can be unsound, so the interactive prover must verify the proofs. This paper presents experimental data that compares the translations in respect of their success rates for three automatic provers.

18.
We study algorithms simulating a system evolving with Hamiltonian H = ∑_{j=1}^m H_j, where each of the H_j, j = 1, ..., m, can be simulated efficiently. We are interested in the cost of approximating e^{−iHt}, t ∈ ℝ, with error ε. We consider algorithms based on high-order splitting formulas that play an important role in quantum Hamiltonian simulation. These formulas approximate e^{−iHt} by a product of exponentials involving the H_j, j = 1, ..., m. We obtain an upper bound for the number of required exponentials. Moreover, we derive the order of the optimal splitting method that minimizes our upper bound. We show significant speedups relative to previously known results.
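A first-order splitting formula of the kind analyzed can be demonstrated on the smallest interesting case, H = H1 + H2 with two non-commuting Pauli terms, where the exact evolution is known in closed form, so the n-dependence of the error is visible. All helper names are ours; the paper studies high-order formulas, of which this is only the simplest instance.

```python
import cmath

# 2x2 complex matrices as tuples of rows; minimal helpers for the sketch.
def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(A, B, ca, cb):
    return tuple(tuple(ca * A[i][j] + cb * B[i][j] for j in range(2))
                 for i in range(2))

I2 = ((1, 0), (0, 1))
SZ = ((1, 0), (0, -1))     # H1 = sigma_z
SX = ((0, 1), (1, 0))      # H2 = sigma_x

def exp_pauli(S, t):
    """e^{-i t S} = cos(t) I - i sin(t) S, valid since S^2 = I."""
    return add(I2, S, cmath.cos(t), -1j * cmath.sin(t))

def trotter(t, n):
    """First-order splitting (e^{-i H1 t/n} e^{-i H2 t/n})^n for H = H1 + H2."""
    step = matmul(exp_pauli(SZ, t / n), exp_pauli(SX, t / n))
    U = I2
    for _ in range(n):
        U = matmul(U, step)
    return U

def exact(t):
    """Exact e^{-iHt} for H = sigma_z + sigma_x: H^2 = 2 I, so
    e^{-iHt} = cos(sqrt(2) t) I - i sin(sqrt(2) t) H / sqrt(2)."""
    s = 2 ** 0.5
    return add(I2, add(SZ, SX, 1, 1), cmath.cos(s * t), -1j * cmath.sin(s * t) / s)

def err(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(2) for j in range(2))

# The error of the first-order formula shrinks roughly like 1/n.
print(err(trotter(1.0, 10), exact(1.0)), err(trotter(1.0, 100), exact(1.0)))
```

Higher-order splitting formulas rearrange and weight the exponentials to cancel more commutator terms, which is what drives the cost bounds in the abstract.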

19.
DPLL (for Davis, Putnam, Logemann, and Loveland) algorithms form the largest family of contemporary algorithms for SAT (the propositional satisfiability problem) and are widely used in applications. The recursion trees of DPLL algorithm executions on unsatisfiable formulas are equivalent to tree-like resolution proofs. Therefore, lower bounds for tree-like resolution (known since the 1960s) apply to them. However, these lower bounds say nothing about the behavior of such algorithms on satisfiable formulas. Proving exponential lower bounds for them in the most general setting is impossible without proving P ≠ NP; therefore, to prove lower bounds, one has to restrict the power of branching heuristics. In this paper, we give exponential lower bounds for two families of DPLL algorithms: generalized myopic algorithms, which read up to n^{1−ε} clauses at each step and see the remaining part of the formula without negations, and drunk algorithms, which choose a variable using any complicated rule and then pick its value at random. An extended abstract of this paper appeared in Proceedings of ICALP 2004, LNCS 3142, Springer, 2004, pp. 84–96. Supported by CCR grant CCR-0324906. Supported in part by Russian Science Support Foundation, RAS program of fundamental research “Research in principal areas of contemporary mathematics,” and INTAS grant 04-77-7173. § Supported in part by INTAS grant 04-77-7173.
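A minimal DPLL skeleton, unit propagation plus branching, with none of the branching heuristics the paper restricts, can be sketched as follows (a teaching sketch, not the algorithms analyzed in the paper):

```python
def assign(clauses, lit):
    """Set `lit` true: drop satisfied clauses, shrink those containing -lit."""
    return [c - {-lit} for c in clauses if lit not in c]

def dpll(clauses):
    """Minimal DPLL sketch; clauses are frozensets of nonzero integer
    literals (-v means 'not v'). Returns True iff satisfiable."""
    # Unit propagation: a one-literal clause forces its literal.
    while any(len(c) == 1 for c in clauses):
        unit = next(iter(next(c for c in clauses if len(c) == 1)))
        clauses = assign(clauses, unit)
    if frozenset() in clauses:
        return False          # empty clause: conflict
    if not clauses:
        return True           # all clauses satisfied
    # Branch on some variable (here: just the first one we see).
    v = next(iter(next(iter(clauses))))
    return dpll(assign(clauses, v)) or dpll(assign(clauses, -v))

cnf = [frozenset(s) for s in [{1, 2}, {-1, 2}, {-2, 3}, {-3}]]
print(dpll(cnf))   # → False (the clauses force 3 false, then 2, then a conflict)
```

On unsatisfiable inputs the recursion tree of this procedure is exactly a tree-like resolution refutation, which is why the tree-like resolution lower bounds mentioned above transfer to it.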

20.
In this paper we are interested in quantified propositional formulas in conjunctive normal form with “clauses” of arbitrary shapes, i.e., consisting of applying arbitrary relations to variables. We study the complexity of the evaluation problem, the model checking problem, the equivalence problem, and the counting problem for such formulas, both with and without a bound on the number of quantifier alternations. For each of these computational goals we obtain a full complexity classification: we determine the complexity of each of these problems depending on the set of relations allowed in the input formulas. Thus, on the one hand we exhibit syntactic restrictions of the original problems that are still computationally hard, and on the other hand we identify non-trivial subcases that admit efficient algorithms.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号