Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
2.
In this paper we study small depth circuits that contain threshold gates (with or without weights) and parity gates. All circuits we consider are of polynomial size. We prove several results which complete the work on characterizing possible inclusions between many classes defined by small depth circuits. These results are the following:
1.  A single threshold gate with weights cannot in general be replaced by a polynomial fan-in unweighted threshold gate of parity gates.
2.  On the other hand it can be replaced by a depth 2 unweighted threshold circuit of polynomial size. An extension of this construction is used to prove that whatever can be computed by a depth d polynomial size threshold circuit with weights can be computed by a depth d+1 polynomial size unweighted threshold circuit, where d is an arbitrary fixed integer.
3.  A polynomial fan-in threshold gate (with weights) of parity gates cannot in general be replaced by a depth 2 unweighted threshold circuit of polynomial size.
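The gate types compared above are easy to state concretely. The following sketch (an illustrative aid, not part of the paper) evaluates a weighted threshold gate, its unweighted majority special case, and a parity gate on a Boolean input vector:

```python
def weighted_threshold(weights, theta, x):
    """Weighted threshold gate: outputs 1 iff sum_i w_i * x_i >= theta."""
    return int(sum(w * xi for w, xi in zip(weights, x)) >= theta)

def majority(x):
    """Unweighted threshold (majority) gate: all weights 1, theta = n/2."""
    return weighted_threshold([1] * len(x), len(x) / 2, x)

def parity(x):
    """Parity gate: XOR of all inputs."""
    return sum(x) % 2
```

The paper's results concern which small-depth circuits built from these primitives can simulate which others.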

3.
4.
A logic-based system of knowledge representation for natural language discourse has three primary advantages:
•  It has adequate expressive power,
•  it has a well-defined semantics, and
•  it uses simple, sound, general rules of inference.
On the other hand, a standard logic-based system has the following disadvantages:
•  It supports only an exceedingly complex mapping from surface discourse sentences to internal representations, and
•  reasoning about the content of semantically complex discourses is difficult because of the incommodious complexity of the internalized formulas.

5.
We consider hypotheses about nondeterministic computation that have been studied in different contexts and shown to have interesting consequences:
•  The measure hypothesis: NP does not have p-measure 0.
•  The pseudo-NP hypothesis: there is an NP language that can be distinguished from any DTIME language by an NP refuter.
•  The NP-machine hypothesis: there is an NP machine accepting 0* for which no -time machine can find infinitely many accepting computations.
We show that the NP-machine hypothesis is implied by each of the first two. Previously, no relationships were known among these three hypotheses. Moreover, we unify previous work by showing that several derandomizations and circuit-size lower bounds that are known to follow from the first two hypotheses also follow from the NP-machine hypothesis. In particular, the NP-machine hypothesis becomes the weakest known uniform hardness hypothesis that derandomizes AM. We also consider UP versions of the above hypotheses as well as related immunity and scaled dimension hypotheses.

6.
We consider the problem of efficiently sampling Web search engine query results. In turn, using a small random sample instead of the full set of results leads to efficient approximate algorithms for several applications, such as:
•  Determining the set of categories in a given taxonomy spanned by the search results;
•  Finding the range of metadata values associated with the result set in order to enable “multi-faceted search”;
•  Estimating the size of the result set;
•  Data mining associations to the query terms.
We present and analyze efficient algorithms for obtaining uniform random samples applicable to any search engine that is based on posting lists and document-at-a-time evaluation. (To our knowledge, all popular Web search engines, for example, Google, Yahoo Search, MSN Search, Ask, belong to this class.) Furthermore, our algorithm can be modified to follow the modern object-oriented approach whereby posting lists are viewed as streams equipped with a next method, and the next method for Boolean and other complex queries is built from the next method for primitive terms. In our case we show how to construct a basic sample-next(p) method that samples term posting lists with probability p, and show how to construct sample-next(p) methods for Boolean operators (AND, OR, WAND) from primitive methods. Finally, we test the efficiency and quality of our approach on both synthetic and real-world data. A preliminary version of this work has appeared in [3]. Work performed while A. Anagnostopoulos and A.Z. Broder were at IBM T. J. Watson Research Center.
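The sample-next(p) idea can be sketched in a few lines: each docid in a term's posting list survives independently with probability p, realized lazily via geometric skips, and a naive OR sampler takes the smallest docid any child samples. This is my simplified illustration of the interface the abstract describes, not the authors' actual construction (in particular, the paper corrects for docids occurring in several children, which this sketch ignores):

```python
import math
import random

def make_term_sampler(postings, p):
    """Build a sample-next(p) method for a primitive term's posting list:
    each docid survives independently with probability p, implemented by
    skipping a Geometric(p)-distributed number of candidates per call."""
    postings = sorted(postings)

    def sample_next(after):
        # index of the first posting strictly greater than `after`
        i = 0
        while i < len(postings) and postings[i] <= after:
            i += 1
        if p >= 1.0:
            skip = 0
        else:
            u = 1.0 - random.random()          # uniform in (0, 1]
            skip = int(math.log(u) / math.log(1.0 - p))
        i += skip
        return postings[i] if i < len(postings) else None

    return sample_next

def make_or_sampler(child_samplers):
    """Naive sample-next for OR: the smallest docid any child samples."""
    def sample_next(after):
        candidates = [s(after) for s in child_samplers]
        candidates = [c for c in candidates if c is not None]
        return min(candidates) if candidates else None
    return sample_next

def draw_sample(sample_next, start=-1):
    """Iterate sample-next until the stream is exhausted."""
    out, cur = [], start
    while True:
        cur = sample_next(cur)
        if cur is None:
            return out
        out.append(cur)
```

With p = 1 the sampler degenerates to a plain posting-list iterator, which is a convenient sanity check.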

7.
Alan Bundy 《AI & Society》2007,21(4):659-668
This paper is a modified version of my acceptance lecture for the 1986 SPL-Insight Award. It turned into something of a personal credo, describing my view of:
–  the nature of AI,
–  the potential social benefit of applied AI,
–  the importance of basic AI research,
–  the role of logic and the methodology of rational construction,
–  the interplay of applied and basic AI research, and
–  the importance of funding basic AI.
These points are knitted together by an analogy between AI and structural engineering: in particular, between building expert systems and building bridges.

8.
We construct a randomness-efficient averaging sampler that is computable by uniform constant-depth circuits with parity gates (i.e., in uniform AC 0[⊕]). Our sampler matches the parameters achieved by random walks on constant-degree expander graphs, allowing us to apply a variety of expander-based techniques within NC 1. For example, we obtain the following results:
•  Randomness-efficient error-reduction for uniform probabilistic NC 1, TC 0, AC 0[⊕] and AC 0: Any function computable by uniform probabilistic circuits with error 1/3 using r random bits is computable by circuits of the same type with error δ using r + O(log(1/δ)) random bits.
•  An optimal bitwise ϵ-biased generator in AC 0[⊕]: There exists a 1/2^Ω(n)-biased generator G : {0, 1}^O(n) → {0, 1}^(2^n) for which poly(n)-size uniform AC 0[⊕] circuits can compute G(s)_i given (s, i) ∈ {0, 1}^O(n) × {0, 1}^n. This resolves a question raised by Gutfreund and Viola (Random 2004).
•  uniform BP · AC 0 ⊆ uniform AC 0/O(n).
Our sampler is based on the zig-zag graph product of Reingold, Vadhan & Wigderson (Annals of Math 2002), and as part of our analysis we give an elementary proof of a generalization of Gillman's Chernoff Bound for Expander Walks (SIAM Journal on Computing 1998).
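For contrast with the paper's randomness-efficient error reduction, the naive approach amplifies a two-sided-error algorithm by majority vote over independent runs, costing r · O(log(1/δ)) fresh random bits rather than r + O(log(1/δ)). A minimal sketch of that baseline (function names are my own, not the paper's):

```python
from collections import Counter

def amplify(randomized_alg, x, trials, rng):
    """Run a two-sided-error randomized algorithm `trials` times with fresh
    randomness and return the majority answer; by a Chernoff bound,
    O(log(1/delta)) runs of a 1/3-error algorithm reach error delta."""
    votes = Counter(randomized_alg(x, rng) for _ in range(trials))
    return votes.most_common(1)[0][0]
```

The paper's sampler achieves the same error reduction while recycling randomness across the runs.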

9.
10.
This special section is devoted to a selection of papers that originally appeared in two thematic sessions on high-level testing of complex systems at IDPT 2002 and 2003, the 6th and 7th World Conference on Integrated Design and Process Technology, which took place in Pasadena, CA in June 2002 and in Austin, TX in December 2003, respectively. This collection of papers spans a wide panoramic view of the development of testing and validation technology along several dimensions. It touches on issues such as
– Kinds of systems
– Kinds of testing or validation
– Practical difficulties (in the application area)
– Technical difficulties, e.g., state explosion, heterogeneity, etc.
– Particular approaches, i.e., methods and tools, and whether or not they are linked to other areas such as formal verification, simulation, abstract interpretation, etc.
– Current state of advancement and uptake (conceptual, implemented, industrial product, etc.)
All seven papers present methods, tools, and case studies that aim at using diverse formal techniques for testing complex systems.

11.
If a message can have n different values and all values are equally probable, then the entropy of the message is log(n). In the present paper, we discuss the expectation value of the entropy, for an arbitrary probability distribution. We introduce a mixture of all possible probability distributions. We assume that the mixing function is uniform
•  either in flat probability space, i.e. the unitary n-dimensional hypertriangle
•  or in Bhattacharyya’s spherical statistical space, i.e. the unitary n-dimensional hyperoctant.
A computation is a manipulation of an incoming message, i.e. a mapping in probability space:
•  either a reversible mapping, i.e. a symmetry operation (rotation or reflection) in n-dimensional space
•  or an irreversible mapping, i.e. a projection operation from n-dimensional to lower-dimensional space.
During a reversible computation, no isentropic path in the probability space can be found. Therefore we have to conclude that a computation cannot be represented by a message which merely follows a path in n-dimensional probability space. Rather, the point representing the mixing function travels along a path in an infinite-dimensional Hilbert space. In honour of Prof. Dr. Henrik Farkas (Department of Chemical Physics, Technical University of Budapest), an outstanding scientist and most remarkable human being, who unfortunately left us on 21 July 2005.
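The opening fact above (a uniform message over n values has entropy log(n)) can be checked numerically; a short sketch using natural logarithms:

```python
import math

def shannon_entropy(p):
    """H(p) = -sum_i p_i * log(p_i), with the convention 0 * log 0 = 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

# for the uniform distribution over n values, H equals log(n)
n = 8
uniform = [1.0 / n] * n
```

The paper then asks for the expectation of this quantity when p itself is drawn from a uniform mixture over the probability simplex (or the spherical hyperoctant).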

12.
Consider an information network with threats called attackers; each attacker uses a probability distribution to choose a node of the network to damage. Opposing the attackers is a protector entity called the defender; the defender scans and cleans from attacks some part of the network (in particular, a link), which it chooses independently using its own probability distribution. Each attacker wishes to maximize the probability of escaping the defender's cleaning; towards a conflicting objective, the defender aims at maximizing the expected number of attackers it catches. We model this network security scenario as a non-cooperative strategic game on graphs. We are interested in its associated Nash equilibria, where no network entity can unilaterally increase its local objective. We obtain the following results:
•  We obtain an algebraic characterization of (mixed) Nash equilibria.
•  No (non-trivial) instance of the graph-theoretic game has a pure Nash equilibrium. This is an immediate consequence of some covering properties we prove for the supports of the players in all (mixed) Nash equilibria.
•  We coin a natural subclass of mixed Nash equilibria, which we call Matching Nash equilibria, for this graph-theoretic game. Matching Nash equilibria are defined by enriching the necessary covering properties we proved with some additional conditions involving other structural parameters of graphs, such as Independent Sets.
–  We derive a characterization of graphs admitting Matching Nash equilibria. All such graphs have an Expanding Independent Set. The characterization enables a non-deterministic, polynomial time algorithm to compute a Matching Nash equilibrium for any such graph.
–  Bipartite graphs are shown to satisfy the characterization. So, using a polynomial time algorithm to compute a Maximum Matching for a bipartite graph, we obtain, as our main result, a deterministic, polynomial time algorithm to compute a Matching Nash equilibrium for any instance of the game with a bipartite graph.
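The main result above relies on computing a Maximum Matching of a bipartite graph in polynomial time. One standard way to do this is Kuhn's augmenting-path algorithm, shown here as a generic sketch rather than the authors' specific subroutine:

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching.
    adj[u] lists the right-side vertices adjacent to left-side vertex u.
    Returns the matching size and, for each right vertex, its left partner
    (or -1 if unmatched)."""
    match_right = [-1] * n_right

    def try_augment(u, seen):
        # try to match u, possibly re-routing previously matched vertices
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    size = 0
    for u in range(len(adj)):
        if try_augment(u, set()):
            size += 1
    return size, match_right
```

Each augmentation runs in O(E) time, giving O(V·E) overall, which suffices for the polynomial-time claim.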
A preliminary version of this work appeared in the Proceedings of the 16th Annual International Symposium on Algorithms and Computation, X. Deng and D. Du, eds., Lecture Notes in Computer Science, vol. 3827, pp. 288–297, Springer, December 2005. This work has been partially supported by the IST Program of the European Union under contract 001907 ( ), and by research funds at University of Cyprus.

13.
14.
Web-based bid invitation platforms and reverse auctions are increasingly used by consumers for the procurement of goods and services. An empirical examination shows that in the B-2-C setting these procurement methods generate considerable benefits for the consumer:
•  Reverse auctions and bid invitation platforms generate high consumer surplus in the procurement of general and crafts services.
•  The level of this consumer surplus is affected by the number of bidders; the duration of the auction and the starting price are less important.
•  In the painting business, prices are considerably lower than with traditional procurement channels.
•  On bid invitation platforms, in most cases (> 55%) the bids with the lowest price are chosen.

15.
In this paper we classify several algorithmic problems in group theory in the complexity classes PZK and SZK (problems with perfect/statistical zero-knowledge proofs respectively). Prior to this, these problems were known to be in . As , we have a tighter upper bound for these problems. Specifically:
•  We show that the permutation group problems Coset Intersection, Double Coset Membership, Group Conjugacy are in PZK. Further, the complements of these problems also have perfect zero knowledge proofs (in the liberal sense). We also show that permutation group isomorphism for solvable groups is in PZK. As an ingredient of this protocol, we design a randomized algorithm for sampling short presentations of solvable permutation groups.
•  We show that the complement of all the above problems have concurrent zero knowledge proofs.
•  We prove that the above problems for black-box groups are in SZK.
•  Finally, we also show that some of the problems have SZK protocols with efficient provers in the sense of Micciancio and Vadhan (Lecture Notes in Comput. Sci. 2729, 282–298, 2003).
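To give the flavor of such protocols, here is a toy one-round sketch of a perfect zero-knowledge proof for (permutation) Group Conjugacy, loosely modeled on the classic graph-isomorphism protocol; the structure shown is my illustration, not the paper's actual construction. The prover knows g with g·a·g⁻¹ = b, commits to a random conjugate c = h·a·h⁻¹, and reveals either h (relating c to a) or h·g⁻¹ (relating c to b) depending on the verifier's coin:

```python
import random

def compose(p, q):
    """(p ∘ q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conjugate(h, a):
    """h a h^{-1}."""
    return compose(compose(h, a), inverse(h))

def zk_conjugacy_round(a, b, g, n, rng):
    """One honest round: prover knows g with g a g^{-1} = b.
    Returns True iff the verifier's check passes."""
    h = tuple(rng.sample(range(n), n))   # random permutation of {0..n-1}
    c = conjugate(h, a)                  # commitment
    challenge = rng.randrange(2)
    if challenge == 0:
        reveal = h                       # show c is a conjugate of a
        return conjugate(reveal, a) == c
    else:
        reveal = compose(h, inverse(g))  # show c is a conjugate of b:
        return conjugate(reveal, b) == c # (h g^{-1}) b (h g^{-1})^{-1} = h a h^{-1}
```

Since the commitment c is a uniformly random conjugate of a, a simulator can reproduce the verifier's view exactly, which is the intuition behind the "perfect" in PZK.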

16.
This article presents a model of a device for reading (visualizing) hidden magnetic information from holograms combined with a magneto-optical layer. Methods are proposed for forming magnetic images on protected documents and for reading them by magneto-optical means. Using the magneto-optical meridional Kerr effect, the reading head makes it possible to visually observe a "blinking effect" from a hologram with a hidden magnetic layer. A mathematical analysis of the magneto-optical Kerr and Faraday effects was carried out. The hidden magnetic image is based on:
–  a hard magnetic layer based on Tb-Fe with perpendicular anisotropy, or
–  soft magnetic layers based on permalloy.
Advantages of the device:
–  non-contact reading of the magnetic information;
–  the difficulty of reproducing the magnetic image formation technology.
The optical scheme of the device contains a light source, a polarizer, an analyzer, the hologram with magneto-optical layers, and a constant magnet. The hologram is placed between the polarizer and the analyzer. The text was submitted by the authors in English.
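The "blinking effect" can be illustrated with Malus's law: the Kerr effect rotates the polarization plane by ±θ_K depending on the magnetization direction, so the intensity behind the analyzer toggles between two levels. A simplified scalar sketch (illustrative numbers, ellipticity ignored):

```python
import math

def transmitted_intensity(i0, analyzer_angle, kerr_rotation):
    """Malus's law for a polarizer-sample-analyzer chain: the sample's
    Kerr rotation shifts the effective angle between polarizer and analyzer."""
    return i0 * math.cos(analyzer_angle - kerr_rotation) ** 2

theta_k = 0.01                      # Kerr rotation in radians (illustrative)
analyzer = math.pi / 2 - theta_k    # analyzer slightly uncrossed
bright = transmitted_intensity(1.0, analyzer, +theta_k)
dark = transmitted_intensity(1.0, analyzer, -theta_k)
# reversing the magnetization flips the sign of theta_k: bright <-> dark
```

Slightly uncrossing the analyzer is what converts the tiny rotation into a visible intensity contrast.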

17.
18.
We show that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time
–  collapses to the second level of the polynomial-time hierarchy,
–  has polynomial-size circuits, and
–  has publishable proofs (EXPTIME = MA).
We also show that BPP is contained in subexponential time unless exponential time has publishable proofs for infinitely many input lengths. In addition, we show that BPP can be simulated in subexponential time for infinitely many input lengths unless there exist unary languages in MA − P.

19.
Abstract  The development of complex embedded software systems, as deployed today for example in telecommunication systems, cars and aircraft, or in the control software of automation systems, requires a structured, modular approach and appropriate techniques for precisely describing the requirements, the architecture of the system with its components, the interfaces to the system environment and between the internal components, the interaction between the controlled and controlling subsystems, and finally the implementation. Great hopes are attached to the early and consistent use of suitable models (keywords: UML ("Unified Modeling Language") and MDA ("Model Driven Architecture")) to make development tasks more manageable. This article describes the theoretical foundations for a consistently model-based approach in the form of a coherent, homogeneous, yet modular toolkit of models, which is strictly necessary for this purpose. Particular emphasis is placed on the topics of
–  interfaces,
–  hierarchical decomposition,
–  architectures through composition and decomposition,
–  abstraction through layering,
–  realization through state machines,
–  refinement of hierarchy, interfaces, and behavior,
–  changing between levels of abstraction, and
–  an integrated view of the controlled and controlling subsystems.
As in all other engineering disciplines, this modeling toolkit must conform to a well-thought-out, internally consistent logical-mathematical theory. The theory presented here consists of a set of notations and theorems that provide a basis for scientifically founded, tool-supportable methods and yield a procedure pragmatically adapted to the application domains (keyword: domain-specific languages). For a scientifically sound method, it is the modeling theory rather than the syntactic form of the modeling language that is central. The representation of models by textual or graphical means of description is without doubt an important prerequisite for the practical use of modeling techniques, but must be seen as convenient and fundamentally interchangeable "syntactic sugar".
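Two of the modeling ingredients listed above, realization through state machines and architectures through composition, can be hinted at in a few lines; the class and operator names below are illustrative, not the article's notation:

```python
class Mealy:
    """A minimal Mealy machine: its interface is the pair of input/output
    message types, its behavior a transition function
    (state, input) -> (new_state, output)."""
    def __init__(self, initial_state, delta):
        self.state = initial_state
        self.delta = delta

    def step(self, msg):
        self.state, out = self.delta(self.state, msg)
        return out

def pipeline(m1, m2):
    """Sequential composition: m1's output stream is m2's input stream,
    one of the architecture-forming operations the article refers to."""
    class Pipe:
        def step(self, msg):
            return m2.step(m1.step(msg))
    return Pipe()
```

For example, composing a stateless doubler with a running accumulator yields a new machine whose interface hides the internal channel, mirroring the article's point that composition and interface abstraction go together.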

20.
This paper summarizes our recent activities to support people in communicating with each other using public computer network systems. Unlike conventional teleconferencing systems, which are mainly for business meetings, we focus on informal communication in open organizations. So far, three different systems have been developed and actually tested.
•  In Socia, we introduced vision agents which act on behalf of their users in a network. To enable a meeting to be scheduled at a mutually acceptable time, we proposed a scheme called non-committed scheduling.
•  Free Walk supports casual meetings among more than a few people. For this purpose, we provide a 3-D virtual space called community common where participants can behave just as in real life.
•  In the ICMAS'96 Mobile Assistant Project, on the other hand, we conducted an experiment at an actual international conference using 100 personal digital assistants and wireless phones. Various services were provided to increase interaction among the participants of the conference.
Based on these experiences, we are now moving towards community-ware to support people in forming communities based on computer network technologies. Toru Ishida, Dr. Eng.: He received the B.E., M.Eng. and D.Eng. degrees from Kyoto University, Kyoto, Japan, in 1976, 1978 and 1989, respectively. He is currently a professor in the Department of Information Science, Kyoto University. He has been working on "Parallel, Distributed and Multiagent Production Systems" (Springer, 1994) since 1983. He first proposed parallel rule firing, and extended it to distributed rule firing. Organizational self-design was then introduced into distributed production systems to increase adaptiveness. From 1990, he started working on "Real-time Search for Learning Autonomous Agents" (Kluwer Academic Publishers, 1997). Again, organizational adaptation became a central issue in controlling multiple problem-solving agents. He recently initiated the study of "Communityware: Towards Global Collaboration" (John Wiley and Sons, 1998) with his colleagues.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号