Similar documents
20 similar documents found (search time: 31 ms)
1.
2.
3.
4.
In this paper we study small-depth circuits that contain threshold gates (with or without weights) and parity gates. All circuits we consider are of polynomial size. We prove several results that complete the work on characterizing the possible inclusions between many classes defined by small-depth circuits. These results are the following (the gate types are sketched formally after the list):
1.  A single threshold gate with weights cannot in general be replaced by a polynomial fan-in unweighted threshold gate of parity gates.
2.  On the other hand, it can be replaced by a depth-2 unweighted threshold circuit of polynomial size. An extension of this construction is used to prove that whatever can be computed by a depth-d polynomial-size threshold circuit with weights can be computed by a depth-(d+1) polynomial-size unweighted threshold circuit, where d is an arbitrary fixed integer.
3.  A polynomial fan-in threshold gate (with weights) of parity gates cannot in general be replaced by a depth-2 unweighted threshold circuit of polynomial size.
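For reference, a minimal formal sketch of the gate types above (standard definitions; the integrality of the weights is the usual convention and an assumption of this sketch): a threshold gate with weights \(w_1,\dots,w_n\) and threshold \(t\), and a parity gate, compute
\[
T(x_1,\dots,x_n) = 1 \iff \sum_{i=1}^{n} w_i x_i \ge t, \qquad
\oplus(x_1,\dots,x_n) = x_1 \oplus x_2 \oplus \dots \oplus x_n ,
\]
and the unweighted case is \(w_1 = \dots = w_n = 1\). A "threshold gate of parity gates" is a threshold gate whose inputs are the outputs of parity gates.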

5.
We show that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time
–  collapses to the second level of the polynomial-time hierarchy,
–  has polynomial-size circuits, and
–  has publishable proofs (EXPTIME = MA).
We also show that BPP is contained in subexponential time unless exponential time has publishable proofs for infinitely many input lengths. In addition, we show that BPP can be simulated in subexponential time for infinitely many input lengths unless there exist unary languages in MA − P.
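A minimal formalization of the first statement, under the standard reading of "simulated in subexponential time for infinitely many input lengths" (the io-notation is an assumption of this sketch):
\[
\mathrm{EXPTIME} \ne \mathrm{MA} \;\Longrightarrow\; \mathrm{BPP} \subseteq \bigcap_{\varepsilon > 0} \text{io-}\mathrm{DTIME}\big(2^{n^{\varepsilon}}\big),
\]
where \(\text{io-}\mathrm{DTIME}(t)\) contains the languages that agree with some \(\mathrm{DTIME}(t)\) language on infinitely many input lengths.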

6.
It is known that constant-depth Frege proofs of some tautologies require exponential size. No such lower bound result is known for more general proof systems. We consider tree-like sequent calculus proofs in which formulas can contain modular connectives and only the cut formulas are restricted to be of constant depth. Under a plausible hardness assumption concerning small-depth Boolean circuits, we prove exponential lower bounds for such proofs. We prove these lower bounds directly from the computational hardness assumption. We start with a lower bound for cut-free proofs and "lift" it so that it applies to proofs with constant-depth cuts. Using the same approach, we obtain the following additional results. We provide a much simpler proof of a known unconditional lower bound in the case where modular connectives are not used. We establish a conditional exponential separation between the power of constant-depth proofs that use different modular connectives. We show that these tree-like proofs with constant-depth cuts cannot polynomially simulate similar dag-like proofs, even when the dag-like proofs are cut-free. We present a new proof of the non-finite axiomatizability of the theory of bounded arithmetic \(I\Delta_0(R)\). Finally, under a plausible hardness assumption concerning the polynomial-time hierarchy, we show that the hierarchy \({G_i^*}\) of quantified propositional proof systems does not collapse.

7.
In this paper we classify several algorithmic problems in group theory in the complexity classes PZK and SZK (problems with perfect and statistical zero-knowledge proofs, respectively). Prior to this, these problems were known to be in \(\mathrm{AM} \cap \mathrm{coAM}\). As \(\mathrm{SZK} \subseteq \mathrm{AM} \cap \mathrm{coAM}\), we have a tighter upper bound for these problems (the underlying zero-knowledge condition is sketched after the list). Specifically:
•  We show that the permutation group problems Coset Intersection, Double Coset Membership, and Group Conjugacy are in PZK. Further, the complements of these problems also have perfect zero-knowledge proofs (in the liberal sense). We also show that permutation group isomorphism for solvable groups is in PZK. As an ingredient of this protocol, we design a randomized algorithm for sampling short presentations of solvable permutation groups.
•  We show that the complements of all the above problems have concurrent zero-knowledge proofs.
•  We prove that the above problems for black-box groups are in SZK.
•  Finally, we also show that some of the problems have SZK protocols with efficient provers in the sense of Micciancio and Vadhan (Lecture Notes in Comput. Sci. 2729, 282–298, 2003).
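For orientation, a minimal sketch of the zero-knowledge condition behind these classes (standard honest-verifier formulation; the negligible bound \(\nu\) is part of this sketch, not of the abstract): for every yes-instance \(x\) there is a probabilistic polynomial-time simulator \(S\) with
\[
\Delta\big(S(x),\ \mathrm{View}_V(x)\big) \le \nu(|x|),
\]
where \(\mathrm{View}_V(x)\) is the verifier's view of the interaction, \(\Delta\) is statistical (total variation) distance, and \(\nu\) is negligible; PZK requires \(\Delta = 0\), with the liberal variant allowing the simulator to fail with small probability.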

8.
A logic-based system of knowledge representation for natural language discourse has three primary advantages:
–  it has adequate expressive power,
–  it has a well-defined semantics, and
–  it uses simple, sound, general rules of inference.
On the other hand, a standard logic-based system has the following disadvantages:
–  it supports only an exceedingly complex mapping from surface discourse sentences to internal representations, and
–  reasoning about the content of semantically complex discourses is difficult because of the unwieldy complexity of the internalized formulas.

9.
We consider hypotheses about nondeterministic computation that have been studied in different contexts and shown to have interesting consequences:
•  The measure hypothesis: NP does not have p-measure 0.
•  The pseudo-NP hypothesis: there is an NP language that can be distinguished from any DTIME\((2^{n^{\varepsilon}})\) language by an NP refuter.
•  The NP-machine hypothesis: there is an NP machine accepting \(0^*\) for which no \(2^{n^{\varepsilon}}\)-time machine can find infinitely many accepting computations.
We show that the NP-machine hypothesis is implied by each of the first two. Previously, no relationships were known among these three hypotheses. Moreover, we unify previous work by showing that several derandomizations and circuit-size lower bounds that are known to follow from the first two hypotheses also follow from the NP-machine hypothesis. In particular, the NP-machine hypothesis becomes the weakest known uniform hardness hypothesis that derandomizes AM. We also consider UP versions of the above hypotheses, as well as related immunity and scaled dimension hypotheses.
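A minimal formalization of the NP-machine hypothesis as used above (the quantifier placement follows the standard statement and is an assumption of this sketch): there exist an NP machine \(M\) with \(L(M) = 0^*\) and an \(\varepsilon > 0\) such that for every deterministic \(2^{n^{\varepsilon}}\)-time machine \(A\),
\[
A(0^n) \text{ is an accepting computation of } M \text{ on } 0^n \text{ for at most finitely many } n .
\]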

10.
Abstract  The development of complex embedded software systems, as deployed today in telecommunication systems, cars and aircraft, or in the control software of automation systems, requires a structured, modular approach and adequate techniques for precisely describing the requirements, the architecture of the system with its components, the interfaces to the system environment and between the internal components, the interaction between the controlled and the controlling subsystem, and finally the implementation. Great hopes are attached to the early and continuous use of suitable models (keywords: UML ("Unified Modeling Language") and MDA ("Model Driven Architecture")) for making development tasks more manageable. This article describes the theoretical foundations for a consistently model-based approach in the form of a coherent, homogeneous, yet modular toolkit of models, which is strictly required for this purpose. Particular emphasis is placed on the topics of
–  interfaces,
–  hierarchical decomposition,
–  architectures through composition and decomposition,
–  abstraction through layering,
–  realization by state machines,
–  refinement of hierarchy, interfaces, and behavior,
–  changes of abstraction level, and
–  an integrated view of the controlled and controlling subsystems.
As in all other engineering disciplines, this modeling toolkit must conform to a well-thought-out, internally consistent logical-mathematical theory. The theory presented here consists of a set of notations and theorems that provide a basis for scientifically founded, tool-supportable methods and yield an approach pragmatically adapted to the application domains (keyword: domain-specific languages). For a scientifically sound method, the modeling theory, rather than the syntactic form of the modeling language, is central. The representation of models by textual or graphical description means is without doubt an important prerequisite for the practical use of modeling techniques, but must be seen as convenient and fundamentally interchangeable "syntactic sugar".
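A minimal sketch of the kind of interface model such a modular theory rests on (a stream-based formulation in the spirit of the article; the specific notation is an assumption of this sketch, not taken from the abstract): a component with input channels \(I\), output channels \(O\), and message set \(M\) is modeled by its interface behavior
\[
F : \big(I \to M^{\omega}\big) \to \mathcal{P}\big(O \to M^{\omega}\big),
\]
mapping each history of the input channels to the set of possible histories of the output channels. Composition connects output channels to input channels, layering and decomposition are expressed by composing such behaviors, and refinement is inclusion of behavior sets.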

11.
Web-based bid-invitation platforms and reverse auctions are increasingly used by consumers for the procurement of goods and services. An empirical examination shows that in the B2C domain these procurement methods generate considerable benefits for the consumer:
–  Reverse auctions and bid-invitation platforms generate high consumer surplus in the procurement of general and craft services.
–  The level of this consumer surplus is affected by the number of bidders; the duration of the auction and the starting price are less important.
–  In the painting business, prices are considerably lower than through traditional procurement channels.
–  On bid-invitation platforms, in most cases (> 55%) the lowest-priced bid is chosen.

12.
We work with an extension of Resolution, called Res(2), that allows clauses with conjunctions of two literals. In this system there are rules to introduce and eliminate such conjunctions. We prove that the weak pigeonhole principle \(PHP^{cn}_{n}\) and random unsatisfiable CNF formulas require exponential-size proofs in this system. This is the strongest system beyond Resolution for which such lower bounds are known. As a consequence of the result about the weak pigeonhole principle, Res(log) is exponentially more powerful than Res(2). We also prove that Resolution cannot polynomially simulate Res(2) and that Res(2) does not have feasible monotone interpolation, solving an open problem posed by Krajíček.
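A minimal sketch of the Res(2) conjunction rules referred to above (this particular presentation of the rules is an assumption of this sketch): conjunctions of two literals are introduced, eliminated, and cut as follows,
\[
\frac{C \vee \ell_1 \qquad D \vee \ell_2}{C \vee D \vee (\ell_1 \wedge \ell_2)}\;(\wedge\text{-intro}), \qquad
\frac{C \vee (\ell_1 \wedge \ell_2)}{C \vee \ell_i}\;(\wedge\text{-elim},\ i \in \{1,2\}), \qquad
\frac{C \vee (\ell_1 \wedge \ell_2) \qquad D \vee \bar{\ell}_1 \vee \bar{\ell}_2}{C \vee D}\;(\text{cut}).
\]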

13.
Applications of structural optimization are often multidisciplinary in nature and require nontraditional problem formulations. In the optimization system that has been developed, flexibility in terms of system design and problem formulation has been an important aspect. A mathematical programming approach has been adopted, in which function values and sensitivity information are required from each discipline involved. To be able to deal with the demands of ongoing projects, a number of extensions have been implemented in recent years. The system is today used within the Saab and ABB companies. Four recent applications are presented to illustrate how the extended capabilities have been used (a generic problem statement is sketched after the list):
1.  A train structure, optimized with respect to strength and dynamic properties.
2.  A combined shape and thickness optimization of a force measurement sensor.
3.  The shape optimization of an electric motor component with respect to both structural and magnetic properties.
4.  Finally, a composite wing study including structural and aeroelastic considerations.
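A minimal sketch of the mathematical programming formulation that such a system solves (the symbols below are generic assumptions of this sketch, not taken from the paper): minimize a cost function subject to constraints supplied, together with their sensitivities \(\partial f/\partial x\) and \(\partial g_j/\partial x\), by the participating disciplines:
\[
\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad g_j(x) \le 0,\; j = 1,\dots,m, \qquad x^{L} \le x \le x^{U},
\]
where \(x\) collects the design variables (e.g., thicknesses and shape parameters) and the constraints \(g_j\) encode strength, dynamic, magnetic, or aeroelastic requirements.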

14.
This paper explores the connections between two areas pioneered by Shannon: the transmission of information with a fidelity criterion, and the realization of Boolean functions by networks and formulae. We study three phenomena:
1.  The effect of the relative number of 0's and 1's in a function's table on its complexity (a counting sketch follows the list).
2.  The effect of the number of unspecified entries in a partially specified function's table on its complexity.
3.  The effect of the number of errors allowed in the realization of a function on its complexity.
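As context for (1), a minimal counting sketch in the spirit of Shannon's arguments (the asymptotics are assumptions of this sketch): there are \(\binom{2^n}{p\,2^n} = 2^{H(p)\,2^n(1+o(1))}\) functions of \(n\) variables whose table contains a fraction \(p\) of 1's, while only \(2^{O(s \log s)}\) networks of size \(s\) exist, so most such functions require networks of size
\[
\Omega\!\left(\frac{H(p)\,2^{n}}{n}\right), \qquad H(p) = -p\log_2 p - (1-p)\log_2(1-p),
\]
i.e., the complexity of a typical function scales with the binary entropy of its fraction of 1's.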
Our main result is a precise version of the following statement:

15.
Alan Bundy, AI & Society, 2007, 21(4): 659–668
This paper is a modified version of my acceptance lecture for the 1986 SPL-Insight Award. It turned into something of a personal credo, describing my view of
–  the nature of AI,
–  the potential social benefit of applied AI,
–  the importance of basic AI research,
–  the role of logic and the methodology of rational construction,
–  the interplay of applied and basic AI research, and
–  the importance of funding basic AI.
These points are knitted together by an analogy between AI and structural engineering: in particular, between building expert systems and building bridges.

16.
In this paper we prove an exponential lower bound on the size of bounded-depth Frege proofs for the pigeonhole principle (PHP). We also obtain an \(\Omega(\log\log n)\)-depth lower bound for any polynomial-size Frege proof of the pigeonhole principle. Our theorem nearly completes the search for the exact complexity of the PHP, as S. Buss has constructed polynomial-size, \(O(\log n)\)-depth Frege proofs for the PHP. The main lemma in our proof can be viewed as a general Håstad-style switching lemma for restrictions that are partial matchings. Our lower bounds for the pigeonhole principle improve on previous superpolynomial lower bounds.
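A minimal sketch of the standard propositional encoding of \(PHP^{n+1}_{n}\) whose proofs are measured here (the exact clause form is an assumption of this sketch): with variables \(p_{i,j}\) meaning "pigeon \(i\) sits in hole \(j\)", the principle is the tautology
\[
\bigwedge_{i=1}^{n+1} \bigvee_{j=1}^{n} p_{i,j} \;\longrightarrow\; \bigvee_{1 \le i < i' \le n+1} \; \bigvee_{j=1}^{n} \big(p_{i,j} \wedge p_{i',j}\big),
\]
stating that if each of \(n+1\) pigeons occupies one of \(n\) holes, then some two pigeons share a hole.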

17.
We prove new results on the circuit complexity of approximate majority, which is the problem of computing the majority of a given bit string whose fraction of 1's is bounded away from 1/2 (by a constant). We then apply these results to obtain new relationships between probabilistic time, \(\mathrm{BPTime}(t)\), and alternating time, \(\Sigma_{O(1)}\mathrm{Time}(t)\). Our main results are the following (the promise problem is formalized after the list):
1.  We prove that depth-3 circuits with bottom fan-in \((\log n)/2\) that compute approximate majority on \(n\) bits must have size at least \(2^{n^{0.1}}\). As a corollary we obtain that there is no black-box proof that \(\mathrm{BPTime}(t) \subseteq \Sigma_2\mathrm{Time}(o(t^2))\). This complements the (black-box) result that \(\mathrm{BPTime}(t) \subseteq \Sigma_2\mathrm{Time}(t^2 \cdot \mathrm{poly}\log t)\) (Sipser and Gács, STOC ’83; Lautemann, IPL ’83).
2.  We prove that approximate majority is computable by uniform polynomial-size circuits of depth 3. Prior to our work, the only known polynomial-size depth-3 circuits for approximate majority were non-uniform (Ajtai, Ann. Pure Appl. Logic ’83). We also prove that \(\mathrm{BPTime}(t) \subseteq \Sigma_3\mathrm{Time}(t \cdot \mathrm{poly}\log t)\). This complements our results in (1).
3.  We prove new lower bounds for solving \(\mathrm{QSAT}_3 \in \Sigma_3\mathrm{Time}(n \cdot \mathrm{poly}\log n)\) on probabilistic computational models. In particular, we prove that solving \(\mathrm{QSAT}_3\) requires time \(n^{1+\Omega(1)}\) on Turing machines with a random-access input tape and a sequential-access work tape that is initialized with random bits. No nontrivial lower bound was previously known on this model (for a function computable in linear space).
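A minimal formalization of the approximate-majority promise problem above (the constant \(\delta\) and the promise form are assumptions of this sketch): given \(x \in \{0,1\}^n\) with the promise
\[
\frac{|x|}{n} \le \frac{1}{2} - \delta \quad \text{or} \quad \frac{|x|}{n} \ge \frac{1}{2} + \delta, \qquad \delta > 0 \text{ a fixed constant},\ |x| = \text{number of 1's in } x,
\]
output 0 in the first case and 1 in the second; the circuit's behavior on inputs violating the promise is unconstrained.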

18.
This article presents a model of a device for reading (visualizing) hidden magnetic information from holograms combined with a magneto-optical layer. Methods are proposed for forming magnetic images on protected documents and for reading them by magneto-optical techniques. Using the meridional magneto-optical Kerr effect, the reading head makes it possible to visually observe a "blinking effect" from a hologram with a hidden magnetic layer. A mathematical analysis of the magneto-optical Kerr and Faraday effects was carried out. The hidden magnetic image is based on:
–  a hard magnetic layer based on Tb-Fe with perpendicular anisotropy, and
–  soft magnetic layers based on permalloy.
Advantages of the device are:
–  non-contact reading of the magnetic information, and
–  the difficulty of reproducing the magnetic image formation technology.
The optical scheme of the device contains a light source, a polarizer, an analyzer, the hologram with magneto-optical layers, and a permanent magnet. The hologram is placed between the polarizer and the analyzer. The text was submitted by the authors in English.
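A minimal sketch of the readout contrast behind the "blinking effect" (a Malus-law form; the symbols and the simplification to pure rotation are assumptions of this sketch): with the analyzer set at angle \(\theta_A\) from extinction and a Kerr rotation \(\pm\theta_K\) whose sign follows the local magnetization, the intensity transmitted through the analyzer is approximately
\[
I_{\pm} \approx I_0 \sin^2(\theta_A \pm \theta_K),
\]
so oppositely magnetized regions of the hidden image appear with different brightness through the polarizer-analyzer pair.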

19.
It has long been argued that the well-known Milner type inference system implemented in the programming language Standard ML is inadequate [4]. The arguments were presented on the basis of intersection-type inference systems [3]. However, no algorithm has been developed. The intersection-type inference systems are closed under β-equality [1] and have been proved undecidable [7]. This paper presents a new type inference system using conjunction types, which extend the notion of intersection types. The algorithm presented in the author's previous paper [9] is easily adaptable to Standard ML. In this paper we shall discuss the features of the system and its semantic soundness. Unlike the intersection-type inference systems, the conjunction-type inference system is (a representative typing rule is sketched after the list):
1.  decidable,
2.  semantically sound for all types, and
3.  semantically sound and complete for basic types.
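For orientation, a minimal sketch of the intersection-introduction rule that such systems build on (standard formulation; how the conjunction-type system restricts or modifies this rule is not stated in the abstract and is left open here):
\[
\frac{\Gamma \vdash e : \tau_1 \qquad \Gamma \vdash e : \tau_2}{\Gamma \vdash e : \tau_1 \wedge \tau_2}\;(\wedge\mathrm{I})
\]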

20.
Conclusions  
1.  The parameters (periods, amplitudes, and phases) of 87 multiyear solar-activity (SA) rhythms were estimated with high significance from the annual average dynamics of Wolf numbers for 1749–1992; the annual average SA dynamics was then forecast for the 21st century (a sketch of the underlying model follows this list).
2.  The 11-year SA cycle is the superposition of phase-amplitude characteristics of no fewer than 35 rhythms (from No. 25 to No. 59 in Table 1). The most significant heliorhythms in this family are the high-amplitude heliorhythms with periods of 11.00, 10.04, 10.52, 11.95, 8.38, 10.64, and 8.10 years (listed in the order of decreasing amplitude).
3.  The results of our study confirm the high efficiency of the proposed mathematical method for the detection of hidden heliorhythms of various frequencies and unknown periods.
4.  Using our results as a benchmark for comparison of multiyear heliorhythms detected by other mathematical techniques for annual average Wolf numbers (1749–1992), we can analyze comprehensively the strengths and weaknesses of all mathematical methods (including the proposed method) employed in modern heliobiology, chronomedicine, and economics.
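A minimal sketch of the rhythm-superposition model implied by the parameter list above (the baseline term \(W_0\) and the exact harmonic form are assumptions of this sketch): the annual average Wolf number is represented as a sum of harmonics,
\[
W(t) \approx W_0 + \sum_{k=1}^{87} A_k \sin\!\left(\frac{2\pi t}{T_k} + \varphi_k\right),
\]
with period \(T_k\), amplitude \(A_k\), and phase \(\varphi_k\) estimated for each rhythm; the forecast extrapolates this sum beyond 1992.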
Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 143–149, March–April, 1998.
