Similar Documents
20 similar documents found (search time: 31 ms)
1.
In this paper we study small depth circuits that contain threshold gates (with or without weights) and parity gates. All circuits we consider are of polynomial size. We prove several results which complete the work on characterizing possible inclusions between many classes defined by small depth circuits. These results are the following:
1.  A single threshold gate with weights cannot in general be replaced by a polynomial fan-in unweighted threshold gate of parity gates.
2.  On the other hand, it can be replaced by a depth-2 unweighted threshold circuit of polynomial size. An extension of this construction is used to prove that whatever can be computed by a depth-d polynomial size threshold circuit with weights can be computed by a depth-(d+1) polynomial size unweighted threshold circuit, where d is an arbitrary fixed integer.
3.  A polynomial fan-in threshold gate (with weights) of parity gates cannot in general be replaced by a depth 2 unweighted threshold circuit of polynomial size.
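For reference, the gate types compared in these results can be defined as follows (standard definitions; the inputs x_1, ..., x_n are Boolean and the weights w_i and the threshold t are integers):
–  A weighted threshold gate outputs 1 iff w_1·x_1 + ... + w_n·x_n ≥ t.
–  An unweighted threshold gate is the special case with all w_i = 1, i.e., it outputs 1 iff x_1 + ... + x_n ≥ t.
–  A parity gate outputs x_1 ⊕ x_2 ⊕ ... ⊕ x_n, i.e., the sum of its inputs modulo 2.
Thus result 1 says that a circuit consisting of a single unweighted threshold gate whose inputs are parity gates cannot, in general, simulate one weighted threshold gate.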

2.
3.
4.
In this paper, we show that the problem of deterministically factoring multivariate polynomials reduces to the problem of deterministic polynomial identity testing. Specifically, we show that given an arithmetic circuit (either explicitly or via black-box access) that computes a multivariate polynomial f, the task of computing arithmetic circuits for the factors of f can be solved deterministically, given a deterministic algorithm for the polynomial identity testing problem (we require either a white-box or a black-box algorithm, depending on the representation of f). Together with the easy observation that deterministic factoring implies a deterministic algorithm for polynomial identity testing, this establishes an equivalence between these two central derandomization problems of arithmetic complexity. Previously, such an equivalence was known only for multilinear circuits (Shpilka & Volkovich, 2010).
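One standard way to see the easy direction (deterministic factoring implies deterministic identity testing; the paper's own argument may be phrased differently) is the following. Given a circuit C computing f(x_1, ..., x_n), introduce two fresh variables y and z and form g = y·z + f. If f ≡ 0, then g = y·z, which has the nontrivial factors y and z; if f ≢ 0, then g, viewed as a polynomial of degree 1 in y whose coefficients z and f share no common factor, is irreducible. Hence running the assumed deterministic factoring algorithm on g and checking whether it reports a nontrivial factorization decides whether f ≡ 0.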

5.
6.
Consider an information network with threats called attackers; each attacker uses a probability distribution to choose a node of the network to damage. The opponent of the attackers is a protector entity called the defender; the defender scans and cleans some part of the network (specifically, a link) of attacks, which it chooses independently using its own probability distribution. Each attacker wishes to maximize the probability of escaping the defender's cleaning; towards a conflicting objective, the defender aims at maximizing the expected number of attackers it catches. We model this network security scenario as a non-cooperative strategic game on graphs. We are interested in its associated Nash equilibria, where no network entity can unilaterally increase its local objective. We obtain the following results:
•  We obtain an algebraic characterization of (mixed) Nash equilibria.
•  No (non-trivial) instance of the graph-theoretic game has a pure Nash equilibrium. This is an immediate consequence of some covering properties we prove for the supports of the players in all (mixed) Nash equilibria.
•  We coin a natural subclass of mixed Nash equilibria, which we call Matching Nash equilibria, for this graph-theoretic game. Matching Nash equilibria are defined by enriching the necessary covering properties we proved with some additional conditions involving other structural parameters of graphs, such as Independent Sets.
–  We derive a characterization of graphs admitting Matching Nash equilibria. All such graphs have an Expanding Independent Set. The characterization enables a non-deterministic, polynomial time algorithm to compute a Matching Nash equilibrium for any such graph.
–  Bipartite graphs are shown to satisfy the characterization. So, using a polynomial time algorithm to compute a Maximum Matching for a bipartite graph, we obtain, as our main result, a deterministic, polynomial time algorithm to compute a Matching Nash equilibrium for any instance of the game with a bipartite graph.
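Since the bipartite case reduces the main algorithmic work to a Maximum Matching computation, a minimal augmenting-path matching routine is sketched below in Python (the function and variable names are illustrative assumptions; this is only the matching subroutine, not the paper's full construction of a Matching Nash equilibrium):

import sys

def max_bipartite_matching(left, right, adj):
    """Maximum matching via augmenting paths (Kuhn's algorithm).
    left, right: lists of vertices; adj: dict mapping each left vertex
    to the right vertices it is adjacent to."""
    match_right = {v: None for v in right}   # right vertex -> matched left vertex

    def try_augment(u, visited):
        for v in adj.get(u, []):
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] is None or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matching_size = 0
    for u in left:
        if try_augment(u, set()):
            matching_size += 1
    return match_right, matching_size

# Example: a path on four vertices viewed as a bipartite graph.
match, size = max_bipartite_matching(
    ["a", "c"], ["b", "d"], {"a": ["b"], "c": ["b", "d"]})
# size == 2; e.g. the edges a-b and c-d form a maximum matching.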
A preliminary version of this work appeared in the Proceedings of the 16th Annual International Symposium on Algorithms and Computation, X. Deng and D. Du, eds., Lecture Notes in Computer Science, vol. 3827, pp. 288–297, Springer, December 2005. This work has been partially supported by the IST Program of the European Union under contract 001907, and by research funds at University of Cyprus.

7.
This special section is devoted to a selection of papers that originally appeared in two thematic sessions on high-level testing of complex systems at IDPT 2002 and 2003, the 6th and 7th World Conferences on Integrated Design and Process Technology, which took place in Pasadena, CA in June 2002 and in Austin, TX in December 2003, respectively. This collection of papers offers a broad panoramic view of the development of testing and validation technology along several dimensions. It touches on issues such as
– Kinds of systems
– Kinds of testing or validation
– Practical difficulties (in the application area)
– Technical difficulties, e.g., state explosion, heterogeneity, etc.
– Particular approaches, i.e., methods and tools, and whether or not they are linked to other areas such as formal verification, simulation, abstract interpretation, etc.
– Current state of advancement and uptake (conceptual, implemented, industrial product, etc.)
All seven papers present methods, tools, and case studies that aim at using diverse formal techniques for testing complex systems.

8.
We consider distribution-free property-testing of graph connectivity. In this setting of property testing, the distance between functions is measured with respect to a fixed but unknown distribution D on the domain, and the testing algorithm has an oracle access to random sampling from the domain according to this distribution D. This notion of distribution-free testing was previously defined, and testers were shown for very few properties. However, no distribution-free property testing algorithm was known for any graph property. We present the first distribution-free testing algorithms for one of the central properties in this area—graph connectivity (specifically, the problem is mainly interesting in the case of sparse graphs). We introduce three testing models for sparse graphs:
•  A model for bounded-degree graphs,
•  A model for graphs with a bound on the total number of edges (both models were already considered in the context of uniform distribution testing), and
•  A model which is a combination of the two previous testing models; i.e., bounded-degree graphs with a bound on the total number of edges.
We prove that connectivity can be tested in each of these testing models, in a distribution-free manner, using a number of queries that is independent of the size of the graph. This is done by providing a new analysis of previously known connectivity testers (from “standard”, uniform-distribution property-testing) and by introducing some new testers. An extended abstract of this work appeared in the proceedings of RANDOM-APPROX 2004.
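To give a flavor of such testers, here is a rough Python sketch in the spirit of the classical bounded-degree connectivity tester, with vertex sampling done through the distribution oracle; the function names, constants, and parameter choices are illustrative assumptions and are not the paper's actual algorithm or analysis:

import collections

def test_connectivity(sample_vertex, neighbors, eps, degree_bound, num_samples=None):
    """Sketch of a connectivity tester for bounded-degree graphs.
    sample_vertex(): returns a vertex drawn from the (unknown) distribution D.
    neighbors(v): returns the neighbors of v (adjacency-list oracle).
    Rejects if a sampled vertex lies in a small connected component."""
    # Illustrative parameter choices only.
    component_limit = max(1, int(8 / (eps * degree_bound)))
    if num_samples is None:
        num_samples = int(16 / eps)

    for _ in range(num_samples):
        start = sample_vertex()
        # BFS from the sample, stopping once more than component_limit vertices are seen.
        seen = {start}
        queue = collections.deque([start])
        while queue and len(seen) <= component_limit:
            v = queue.popleft()
            for u in neighbors(v):
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
        if not queue and len(seen) <= component_limit:
            return False  # found a small connected component: reject
    return True  # accept: the graph looks connected (or close to it)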

9.
Alan Bundy, AI & Society, 2007, 21(4): 659–668
This paper is a modified version of my acceptance lecture for the 1986 SPL-Insight Award. It turned into something of a personal credo, describing my view of
–  the nature of AI,
–  the potential social benefit of applied AI,
–  the importance of basic AI research,
–  the role of logic and the methodology of rational construction,
–  the interplay of applied and basic AI research, and
–  the importance of funding basic AI.
These points are knitted together by an analogy between AI and structural engineering: in particular, between building expert systems and building bridges.

10.
A logic-based system of knowledge representation for natural language discourse has three primary advantages:
–  It has adequate expressive power,
–  it has a well-defined semantics, and
–  it uses simple, sound, general rules of inference.
On the other hand, a standard logic-based system has the following disadvantages:
–  It supports only an exceedingly complex mapping from surface discourse sentences to internal representations, and
–  reasoning about the content of semantically complex discourses is difficult because of the incommodious complexity of the internalized formulas.

11.
It has long been argued that the well-known Milner type inference system implemented in the programming language Standard ML is inadequate [4]. The arguments were presented on the basis of intersection-type inference systems [3]; however, no algorithm has been developed. Intersection-type inference systems are closed under β-equality [1] and have been proved undecidable [7]. This paper presents a new type inference system using conjunction types, which extends the notion of intersection types. The algorithm presented in the author's previous paper [9] is easily adaptable to Standard ML. In this paper we discuss the features of the system and its semantic soundness. Unlike the intersection-type inference systems, the conjunction-type inference system is:
1.  decidable
2.  semantically sound for all types
3.  semantically sound and complete for basic types.

12.
Web-based bid invitation platforms and reverse auctions are increasingly used by consumers for the procurement of goods and services. An empirical examination shows that in the B2C setting these procurement methods generate considerable benefits for the consumer:
–  Reverse auctions and bid invitation platforms generate high consumer surplus in the procurement of general and crafts services.
–  The level of this consumer surplus is affected by the number of bidders. The duration of the auction and the starting price are less important.
–  In the painting business, prices are considerably lower than with traditional procurement channels.
–  On bid invitation platforms, in most cases (> 55%) the bids with the lowest price are chosen.

13.
We show that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time
–  collapses to the second level of the polynomial-time hierarchy.
–  has polynomial-size circuits and
–  has publishable proofs (EXPTIME=MA).
We also show that BPP is contained in subexponential time unless exponential time has publishable proofs for infinitely many input lengths. In addition, we show that BPP can be simulated in subexponential time for infinitely many input lengths unless there exist unary languages in MA − P.

14.
The rectilinear polygon cover problem is one in which a certain class of features of a rectilinear polygon of n vertices has to be covered with the minimum number of rectangles included in the polygon. In particular, we consider covering the entire interior, the boundary, and the set of corners of the polygon. These problems have important applications in storing images and in the manufacture of integrated circuits. Unfortunately, most of these problems are known to be NP-complete. Hence it is necessary to develop efficient heuristics for these problems or to show that the design of efficient heuristics is impossible. In this paper we show:
(a)  The corner cover problem is NP-complete.
(b)  The boundary and the corner cover problems can be approximated within a ratio of 4 of the optimum in O(n log n) and O(n^1.5) time, respectively.
(c)  No polynomial-time approximation scheme exists for the interior and the boundary cover problems, unless P=NP.
A preliminary version of this result appeared in Proceedings of the Fourth Canadian Conference on Computational Geometry, 1992, pp. 229–235. This research was partially supported by NSF Grant CCR-9114545.

15.
Abstract  The development of complex embedded software systems, as deployed today in telecommunication systems, vehicles and aircraft, or in the control software of automation systems, requires a structured, modular approach and appropriate techniques for precisely describing the requirements, the architecture of the system with its components, the interfaces to the system environment and between the internal components, the interaction between the controlled and the controlling subsystem, and finally the implementation. Great hopes are attached to the early and consistent use of suitable models (keywords: UML ("Unified Modeling Language") and MDA ("Model Driven Architecture")) for making these development tasks more manageable. This article describes the theoretical foundations for a consistently model-based approach in the form of a coherent, homogeneous, and yet modular toolkit of models, which is indispensable for this purpose. Particular emphasis is placed on the topics of
–  interfaces,
–  hierarchical decomposition,
–  architectures through composition and decomposition,
–  abstraction through layering,
–  realization by state machines,
–  refinement of hierarchy, interfaces, and behavior,
–  changes of abstraction level, and
–  an integrated view of the controlled and controlling subsystems.
As in every other engineering discipline, this modeling toolkit must conform to a well-thought-out, internally consistent logical and mathematical theory. The theory presented here consists of a set of notations and theorems that provide a basis for scientifically founded, tool-supportable methods and yield an approach pragmatically adapted to the application domains (keyword: domain-specific languages). For a scientifically sound method, the focus is less on the syntactic form of the modeling language than on the modeling theory. The representation of models by textual or graphical means of description is without doubt an important prerequisite for the practical use of modeling techniques, but it must be regarded as convenient and fundamentally interchangeable "syntactic sugar".
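As one illustration of what such a modeling theory can look like (a generic stream-based formulation, not necessarily the one used in the article): a component with input channels I and output channels O can be modeled as a function F that maps each history of the input channels to the set of possible histories of the output channels, F : (I → M*) → P(O → M*), where M* denotes the (finite or infinite) streams of messages; composition of components then amounts to connecting shared channels and hiding them, and refinement to inclusion of the resulting behavior sets.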

16.
In this paper we consider the p-ary transitive reduction (TR_p) problem, where p>0 is an integer; for p=2 this problem arises in inferring a sparsest possible (biological) signal transduction network consistent with a set of experimental observations, with the goal of minimizing false positive inferences even at the risk of false negatives. Special cases of TR_p have been investigated before in different contexts; the best previous results are as follows:
(1)  The minimum equivalent digraph problem, which corresponds to a special case of TR_1 with no critical edges, is known to be MAX-SNP-hard, admits a polynomial time algorithm with an approximation ratio of 1.617+ε for any constant ε>0 (Chiu and Liu in Sci. Sin. 4:1396–1400, 1965) and can be solved in linear time for directed acyclic graphs (Aho et al. in SIAM J. Comput. 1(2):131–137, 1972).
(2)  A 2-approximation algorithm exists for TR_1 (Frederickson and JàJà in SIAM J. Comput. 10(2):270–283, 1981; Khuller et al. in 19th Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 937–938, 1999).
In this paper, our contributions are as follows:
•  We observe that TR_p, for any integer p>0, can be solved in linear time for directed acyclic graphs using the ideas in Aho et al. (SIAM J. Comput. 1(2):131–137, 1972); see the sketch after this list.
•  We provide a 1.78-approximation for TR_1 that improves the 2-approximation mentioned in (2) above.
•  We provide a (2+o(1))-approximation for TR_p on general graphs for any fixed prime p>1.
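For the DAG observation in the first bullet above, a transitive reduction keeps an edge (u, v) only if v is not reachable from u by any other path. The following Python sketch (with a hypothetical function name) illustrates the idea in the plain p=1 setting with no critical edges; it runs in roughly O(V·E) time per edge check and is not the linear-time construction of Aho et al.:

def transitive_reduction_dag(vertices, edges):
    """Transitive reduction of a DAG: drop edge (u, v) whenever v is
    reachable from u by some path that avoids the edge (u, v) itself."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)

    def reachable(src, dst, skip_edge):
        # DFS from src to dst that never uses the single edge skip_edge.
        stack, seen = [src], {src}
        while stack:
            x = stack.pop()
            if x == dst:
                return True
            for y in adj[x]:
                if (x, y) != skip_edge and y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

    return [(u, v) for u, v in edges if not reachable(u, v, (u, v))]

# Example: edges a->b, b->c, a->c; a->c is implied by a->b->c,
# so the reduction keeps only a->b and b->c.
print(transitive_reduction_dag(["a", "b", "c"], [("a", "b"), ("b", "c"), ("a", "c")]))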
R. Albert’s research was partly supported by a Sloan Research Fellowship in Science and Technology. B. DasGupta’s research was partly supported by NSF grants DBI-0543365, IIS-0612044 and IIS-0346973. E. Sontag’s research was partly supported by NSF grants EIA 0205116 and DMS-0504557.

17.
This paper studies the allocation of discrete resources among multiple agents from a preference theory perspective. More specifically, the paper explores the process of decision making where:
(a)  information is obtained about the preference profiles of each agent
(b)  the information acquired is then used as a basis for finding a socially optimal resource allocation, and
(c)  the costs involved in acquiring information are considered as an integral part of the process.

18.
Conclusions  
1.  The parameters (periods, amplitudes, and phases) of 87 multiyear solar activity (SA) rhythms were estimated with high significance from the annual average dynamics of Wolf numbers for 1749–1992; the annual average SA dynamics was then forecast for the 21st century.
2.  The 11-year SA cycle is the superposition of phase-amplitude characteristics of no fewer than 35 rhythms (from No. 25 to No. 59 in Table 1). The most significant heliorhythms in this family are the high-amplitude heliorhythms with periods of 11.00, 10.04, 10.52, 11.95, 8.38, 10.64, and 8.10 years (listed in the order of decreasing amplitude).
3.  The results of our study confirm the high efficiency of the proposed mathematical method for the detection of hidden heliorhythms of various frequencies and unknown periods.
4.  Using our results as a benchmark for comparison of multiyear heliorhythms detected by other mathematical techniques for annual average Wolf numbers (1749–1992), we can analyze comprehensively the strengths and weaknesses of all mathematical methods (including the proposed method) employed in modern heliobiology, chronomedicine, and economics.
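Schematically, the model implied by these conclusions represents the annual average Wolf number as a superposition of harmonic rhythms (a generic form; the paper's exact parameterization may differ): W(t) ≈ W_0 + Σ_k A_k·sin(2πt/T_k + φ_k), where T_k, A_k, and φ_k are the period, amplitude, and phase of the k-th rhythm; the 87 estimated triples (T_k, A_k, φ_k) are what allow the forecast for the 21st century.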
Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 143–149, March–April, 1998.

19.
Applications of structural optimization are often multidisciplinary in nature and require nontraditional problem formulations. In the optimization system that has been developed, flexibility in terms of system design and problem formulation has been an important aspect. A mathematical programming approach has been adopted, in which function values and sensitivity information are required from each discipline involved. To be able to deal with the demands of ongoing projects, a number of extensions have been implemented in recent years. The system is today used within the Saab and ABB companies. Four recent applications are presented to illustrate how the extended capabilities have been used.
1.  A train structure, optimized with respect to strength and dynamic properties.
2.  A combined shape and thickness optimization of a force measurement sensor.
3.  The shape optimization of an electric motor component with respect to both structural and magnetic properties.
4.  Finally a composite wing study including structural and aeroelastic considerations.
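The mathematical programming setting mentioned above can be summarized in the usual form (a generic statement, not specific to the system described): minimize f(x) subject to g_j(x) ≤ 0 for j = 1, ..., m and x_L ≤ x ≤ x_U, where the vector x collects the design variables (thicknesses, shape parameters, etc.); each participating discipline must then supply to the optimizer the function values f(x), g_j(x) and the sensitivities ∂f/∂x_i, ∂g_j/∂x_i.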

20.
This paper explores the connections between two areas pioneered by Shannon: the transmission of information with a fidelity criterion, and the realization of Boolean functions by networks and formulae. We study three phenomena:
1.  The effect of the relative number of 0's and 1's in a function's table on its complexity.
2.  The effect of the number of unspecified entries in a partially specified function's table on its complexity.
3.  The effect of the number of errors allowed in the realization of a function on its complexity.
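For the first phenomenon, a standard counting heuristic (not the paper's precise result) indicates why the fraction of 1's matters: there are roughly 2^(H(p)·2^n) functions of n variables whose table contains a fraction p of 1's, where H(p) = −p·log_2 p − (1−p)·log_2(1−p), while only about 2^(O(s·log s)) functions are computable by circuits of size s; hence most such functions require size on the order of H(p)·2^n/n, which decreases as p moves away from 1/2.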
Our main result is a precise version of the following statement:
