Similar Documents (20 results found)
1.
We study random-self-reductions from a structural complexity-theoretic point of view. Specifically, we look at relationships between adaptive and nonadaptive random-self-reductions. We also look at what happens to random-self-reductions if we restrict the number of queries they are allowed to make. We show the following results (a sketch of the definition follows the list):
–  There exist sets that are adaptively random-self-reducible but not nonadaptively random-self-reducible. Under plausible assumptions, there exist such sets in NP.
–  There exists a function that has a nonadaptive (k(n)+1)-random-self-reduction but does not have an adaptive k(n)-random-self-reduction.
–  For any countable class of functions C and any unbounded function k(n), there exists a function that is nonadaptively k(n)-uniformly-random-self-reducible but is not in C/poly. This should be contrasted with Feigenbaum, Kannan, and Nisan's theorem that all nonadaptively 2-uniformly-random-self-reducible sets are in NP/poly.
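For orientation, here is a sketch of the nonadaptive notion in play (our paraphrase of the standard definition; the exact success probability and distribution requirements vary from paper to paper). A function f is nonadaptively k(n)-random-self-reducible if there are polynomial-time computable σ and φ such that, for every x of length n and uniformly random coin sequence r,

\[
f(x) \;=\; \phi\bigl(x,\, r,\, f(\sigma(1,x,r)),\, \ldots,\, f(\sigma(k(n),x,r))\bigr)
\qquad \text{with probability at least } 2/3,
\]

where the distribution of each individual query σ(i,x,r) depends only on |x| and not on x itself; "uniformly" means each query is uniformly distributed. Adaptive reductions may instead choose each query after seeing the answers to the previous ones.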

2.
This special section is devoted to a selection of papers that originally appeared in two thematic sessions on high-level testing of complex systems at IDPT 2002 and 2003, the 6th and 7th World Conference on Integrated Design and Process Technology, which took place in Pasadena, CA in June 2002 and in Austin, TX in December 2003, respectively.

This collection of papers offers a wide panoramic view of the development of testing and validation technology along several dimensions. It touches on issues such as:
– Kinds of systems
– Kinds of testing or validation
– Practical difficulties (in the application area)
– Technical difficulties, e.g., state explosion, heterogeneity, etc.
– Particular approaches, i.e., methods and tools, and whether or not they are linked to other areas such as formal verification, simulation, abstract interpretation, etc.
– Current state of advancement and uptake (conceptual, implemented, industrial product, etc.)
All seven papers present methods, tools, and case studies that aim at using diverse formal techniques for testing complex systems.

3.
Garibaldo, F., & Rebecchi, E. AI & Society, 2004, 18(1): 44-67
In this paper the authors, starting from the experience described and commented on in earlier work by Mancini and Sbordone, deal with the three main epistemological problems that the research group they participated in had to face:
–  The conflicting and ambiguous relationship between psychoanalysis and social research
–  The classical epistemological problem of the relationship between the subject and object of research within the perspective of action research
–  The problem arising from their experience, i.e., the risk of manipulation, and the way to deal with it from an epistemic perspective
The three problems are dealt with one at a time, but from a common perspective, i.e., the attempt to integrate the richness and variety of human subjectivity in social research. As to the relationship between psychoanalysis and social research, a special section is devoted to the implications of an integrated or convergent methodology on team-working in organisations.

4.
Trakhtenbrot calls a set A autoreducible if A is Turing-reducible to A via an oracle Turing machine that never queries the oracle about the input string. Yao considers sets that are autoreducible via probabilistic, polynomial-time oracle Turing machines, and he calls such sets coherent. We continue the study of autoreducible sets, including those that are autoreducible via a family of polynomial-sized circuits, which we call weakly coherent. Sets that are not weakly coherent are called strongly incoherent. We show:
–  If s is superpolynomial and space-constructible, then there is a strongly incoherent set in DSPACE(s(n)).
–  If NEEEXP ⊄ BPEEEXP, then there is a set in NP that is incoherent.
–  If A is complete for any of the classes Σ_i^p, Π_i^p, or Δ_i^p, i ≥ 0, then A is coherent. In particular, all NP-complete sets are coherent.
As corollaries, we obtain new lower bounds on program checkability and random-self-reducibility. These results answer open questions posed by Yao and by Blum and Kannan.
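As a concrete illustration of the underlying notion (ours, not the paper's): SAT is autoreducible, since membership of a formula can be decided by querying the oracle only on formulas obtained by substituting a constant for one variable, none of which equals the input itself. A minimal Python sketch, with a brute-force checker standing in for the SAT oracle:

# Sketch: SAT is autoreducible -- decide the input while querying the
# oracle only on strictly different formulas. Formulas are nested tuples:
# ("const", b), ("var", name), ("not", f), ("and", f, g), ("or", f, g).

def substitute(f, name, val):
    """Replace variable `name` by the constant `val` in formula `f`."""
    tag = f[0]
    if tag == "const":
        return f
    if tag == "var":
        return ("const", val) if f[1] == name else f
    if tag == "not":
        return ("not", substitute(f[1], name, val))
    return (tag, substitute(f[1], name, val), substitute(f[2], name, val))

def variables(f):
    tag = f[0]
    if tag == "const":
        return set()
    if tag == "var":
        return {f[1]}
    if tag == "not":
        return variables(f[1])
    return variables(f[1]) | variables(f[2])

def evaluate(f):
    """Evaluate a closed formula (no variables left)."""
    tag = f[0]
    if tag == "const":
        return f[1]
    if tag == "not":
        return not evaluate(f[1])
    if tag == "and":
        return evaluate(f[1]) and evaluate(f[2])
    return evaluate(f[1]) or evaluate(f[2])

def sat_oracle(f):
    """Stand-in for the SAT oracle: brute-force satisfiability check."""
    vs = variables(f)
    if not vs:
        return evaluate(f)
    x = min(vs)
    return sat_oracle(substitute(f, x, True)) or sat_oracle(substitute(f, x, False))

def autoreduction(f):
    """Decide `f in SAT`; every oracle query differs syntactically from `f`."""
    vs = variables(f)
    if not vs:
        return evaluate(f)  # closed formulas need no oracle at all
    x = min(vs)
    return sat_oracle(substitute(f, x, True)) or sat_oracle(substitute(f, x, False))

# (x or not y) is satisfiable; the oracle is never asked about it verbatim.
phi = ("or", ("var", "x"), ("not", ("var", "y")))
print(autoreduction(phi))  # True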

5.
Alan Bundy. AI & Society, 2007, 21(4): 659-668
This paper is a modified version of my acceptance lecture for the 1986 SPL-Insight Award. It turned into something of a personal credo, describing my view of:
–  the nature of AI
–  the potential social benefit of applied AI
–  the importance of basic AI research
–  the role of logic and the methodology of rational construction
–  the interplay of applied and basic AI research, and
–  the importance of funding basic AI.
These points are knitted together by an analogy between AI and structural engineering: in particular, between building expert systems and building bridges.

6.
It has long been argued that the well-known Milner type inference system implemented in the programming language Standard ML is inadequate [4]. The arguments were presented on the basis of intersection-type inference systems [3], but no algorithm was developed. Intersection-type inference systems are closed under β-equality [1] and have been proved undecidable [7]. This paper presents a new type inference system using conjunction types, which extend the notion of intersection types. The algorithm presented in the author's previous paper [9] is easily adaptable to Standard ML. In this paper we discuss the features of the system and its semantic soundness. Unlike the intersection-type inference systems, the conjunction-type inference system is (see the example after the list):
1.  decidable
2.  semantically sound for all types
3.  semantically sound and complete for basic types.
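The classic example behind the inadequacy argument (our illustration, not drawn from the paper): Milner's system must assign the bound variable a single type, so a term that uses it at two different types is rejected, while an intersection (or conjunction) type describes it directly:

\[
\lambda x.\,(x\;1,\; x\;\mathit{true})
\;:\;
\bigl((\mathit{int}\to\alpha)\wedge(\mathit{bool}\to\beta)\bigr)\to\alpha\times\beta
\]

In Standard ML the corresponding fn x => (x 1, x true) fails to type-check, because x cannot be given both int -> 'a and bool -> 'b.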

7.
A logic-based system of knowledge representation for natural language discourse has three primary advantages:
–  It has adequate expressive power,
–  it has a well-defined semantics, and
–  it uses simple, sound, general rules of inference.
On the other hand, a standard logic-based system has the following disadvantages:
–  It supports only an exceedingly complex mapping from surface discourse sentences to internal representations, and
–  reasoning about the content of semantically complex discourses is difficult because of the incommodious complexity of the internalized formulas.

8.
We consider hypotheses about nondeterministic computation that have been studied in different contexts and shown to have interesting consequences:
•  The measure hypothesis: NP does not have p-measure 0.
•  The pseudo-NP hypothesis: there is an NP language that can be distinguished from any DTIME(2^{n^ε}) language by an NP refuter.
•  The NP-machine hypothesis: there is an NP machine accepting 0* for which no 2^{n^ε}-time machine can find infinitely many accepting computations.
We show that the NP-machine hypothesis is implied by each of the first two. Previously, no relationships were known among these three hypotheses. Moreover, we unify previous work by showing that several derandomizations and circuit-size lower bounds that are known to follow from the first two hypotheses also follow from the NP-machine hypothesis. In particular, the NP-machine hypothesis becomes the weakest known uniform hardness hypothesis that derandomizes AM. We also consider UP versions of the above hypotheses as well as related immunity and scaled dimension hypotheses.

9.
An implicit approximate factorization (AF) algorithm is constructed, which has the following characteristics.
–  In two dimensions: the scheme is unconditionally stable, has a 3×3 stencil, and at steady state has fourth-order spatial accuracy. The temporal evolution is time-accurate to either first or second order, through a choice of parameter.
–  In three dimensions: the scheme has almost the same properties as in two dimensions, except that it is now only conditionally stable, with the stability condition (the CFL number) depending on the cell aspect ratios Δy/Δx and Δz/Δx. The stencil is still compact, and fourth-order accuracy at steady state is maintained.
Numerical experiments on a two-dimensional shock-reflection problem show the expected improvement over lower-order schemes, not only in accuracy (measured by the L2 error) but also in dispersion. It is also shown how the same technique is immediately extendable to Runge-Kutta-type schemes, resulting in improved stability in addition to the enhanced accuracy.
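The generic idea behind approximate factorization, sketched for a two-dimensional implicit step (a textbook form, not this paper's exact operators): the multidimensional implicit operator is replaced by a product of one-dimensional operators, so each time step reduces to a sequence of narrow banded solves, at the price of a splitting error of order Δt²:

\[
\bigl(I + \Delta t\,(A_x + A_y)\bigr)\,\Delta u
\;\approx\;
\bigl(I + \Delta t\,A_x\bigr)\,\bigl(I + \Delta t\,A_y\bigr)\,\Delta u,
\qquad
\text{splitting error}\;\; \Delta t^{2} A_x A_y\,\Delta u .
\]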

10.
This paper studies the allocation of discrete resources among multiple agents from a preference theory perspective. More specifically, the paper explores the process of decision making where:
(a)  information is obtained about the preference profiles of each agent
(b)  the information acquired is then used as a basis for finding a socially optimal resource allocation, and
(c)  the costs involved in acquiring information are considered as an integral part of the process.

11.
A formal definition of a combinatorial topology is presented in this paper for the discrete n-dimensional space defined by the A_n* lattice. The use of this grid instead of the classical ℤ^n is based on two arguments:
– It is the optimal sampling grid in the sense of Shannon’s sampling theorem in 2 and 3 dimensions,
– It induces the simplest discrete topology definition because its dual is a K-simplex.
The interest of this grid in image processing and in the medical imaging field is presented with some examples.
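For intuition in the plane (our note, not the paper's): up to scaling and rotation, A_2* is the hexagonal lattice, generated for instance by

\[
v_1 = (1,\,0), \qquad v_2 = \bigl(\tfrac{1}{2},\, \tfrac{\sqrt{3}}{2}\bigr),
\]

which is the grid that samples a circularly band-limited 2D signal with the fewest points per unit area, hence its optimality in the Shannon sense cited above.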

12.
13.
The SRL (speciate re-entrant logic) of King (1989) is a sound, complete and decidable logic designed specifically to support formalisms for the HPSG (head-driven phrase structure grammar) of Pollard and Sag (1994). The SRL notion of modellability in a signature is particularly important for HPSG, and the present paper modifies an elegant method due to Blackburn and Spaan (1993) in order to prove that
–  modellability in each computable signature is Σ⁰₁,
–  modellability in some finite signature is Σ⁰₁-hard (hence not decidable), and
–  modellability in some finite signature is decidable.
Since each finite signature is a computable signature, we conclude that Σ⁰₁-completeness is the least upper bound on the complexity of modellability both in finite signatures and in computable signatures, though not a lower bound in either.

14.
Traditionally, when the term integration is used to refer to the interoperability of disparate, heterogeneous computer systems, it means the ability to exchange digital data between the systems. For more sophisticated systems designers, integration may mean shared, distributed databases or a federated database system. Within the development of the STandard for the Exchange of Product model data (STEP — ISO 10303), integration refers to an information architecture composed of conceptual constructs that is independent of implementation considerations.

The Integration Information Architecture of STEP is presented and explained. Instead of a flat representation of abstract (i.e., conceptual) data structures, integration within STEP takes place at four different levels:
1.  intra-resource integration;
2.  structural integration of application protocols through integrated resources;
3.  semantic integration of application protocols through application interpreted constructs (AICs);
4.  operational integration through application protocols.

15.
Self-calibration for imaging sensors is essential to many computer vision applications. In this paper, a new stratified self-calibration and metric reconstruction method is proposed for zooming/refocusing cameras under circular motion. Under the assumption of known rotation angles, the circular-motion constraints are first formulated. By enforcing the constraints gradually, metric reconstruction is retrieved up to a two-parameter ambiguity. A closed-form expression of the absolute conic with respect to the two parameters is deduced. The ambiguity is then resolved with the square-pixel assumption of the camera. The advantages of this method are mainly as follows:
(i)  It gives precise results by defining and enforcing the circular motion constraints;
(ii)  It is flexible, in that it allows both the focal lengths and the principal point to vary;
(iii)  It requires no scene constraint.
Experimental results with both synthetic data and real images are presented, demonstrating the accuracy and robustness of the new method.

16.
17.
In this paper we classify several algorithmic problems in group theory in the complexity classes PZK and SZK (problems with perfect/statistical zero-knowledge proofs, respectively). Prior to this, these problems were known to be in AM ∩ coAM. Since SZK ⊆ AM ∩ coAM, we have a tighter upper bound for these problems. Specifically:
•  We show that the permutation group problems Coset Intersection, Double Coset Membership, Group Conjugacy are in PZK. Further, the complements of these problems also have perfect zero knowledge proofs (in the liberal sense). We also show that permutation group isomorphism for solvable groups is in PZK. As an ingredient of this protocol, we design a randomized algorithm for sampling short presentations of solvable permutation groups.
•  We show that the complements of all the above problems have concurrent zero-knowledge proofs.
•  We prove that the above problems for black-box groups are in SZK.
•  Finally, we also show that some of the problems have SZK protocols with efficient provers in the sense of Micciancio and Vadhan (Lecture Notes in Comput. Sci. 2729, 282–298, 2003).

18.
We present quantum algorithms for the following matching problems in unweighted and weighted graphs with n vertices and m edges:
•  Finding a maximal matching in general graphs in time .
•  Finding a maximum matching in general graphs in time .
•  Finding a maximum weight matching in bipartite graphs in time , where N is the largest edge weight.
Our quantum algorithms are faster than the best known classical deterministic algorithms for the corresponding problems. In particular, the second result solves an open question stated in a paper by Ambainis and Špalek (Proceedings of STACS'06, pp. 172–183, 2006).

19.
Web-based bid-invitation platforms and reverse auctions are increasingly used by consumers for the procurement of goods and services. An empirical examination shows that, in B2C settings, these procurement methods generate considerable benefits for the consumer:
–  Reverse auctions and bid-invitation platforms generate high consumer surplus in the procurement of general and craft services.
–  The level of this consumer surplus is affected by the number of bidders; the duration of the auction and the starting price are less important.
–  In the painting business, prices are considerably lower than with traditional procurement channels.
–  On bid-invitation platforms, in most cases (> 55%) the bids with the lowest price are chosen.

20.
The GeodesicViewer realizes exocentric two- and three-dimensional illustrations of lightlike and timelike geodesics in the general theory of relativity. By means of an intuitive graphical user interface, all parameters of a spacetime as well as the initial conditions of the geodesics can be modified interactively.

New version program summary

Program title: GeodesicViewer
Catalogue identifier: AEFP_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFP_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 76 202
No. of bytes in distributed program, including test data, etc.: 1 722 290
Distribution format: tar.gz
Programming language: C++, OpenGL
Computer: All platforms with a C++ compiler, Qt, OpenGL
Operating system: Linux, Mac OS X, Windows
RAM: 24 MBytes
Classification: 1.5
External routines:
•  Motion4D (included in the package)
•  Gnu Scientific Library (GSL) (http://www.gnu.org/software/gsl/)
•  Qt (http://qt.nokia.com/downloads)
•  OpenGL (http://www.opengl.org/)
Catalogue identifier of previous version: AEFP_v1_0
Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 413
Does the new version supersede the previous version?: Yes
Nature of problem: Illustrate geodesics in four-dimensional Lorentzian spacetimes.
Solution method: Integration of ordinary differential equations. 3D rendering via OpenGL.
Reasons for new version: The main reason for the new version was to visualize the parallel transport of the Sachs legs and to show the influence of curved spacetime on a bundle of light rays, as realized in the new version of the Motion4D library (http://cpc.cs.qub.ac.uk/summaries/AEEX_v3_0.html).
Summary of revisions:
•  By choosing the new geodesic type "lightlike_sachs", the parallel transport of the Sachs basis and the integration of the Jacobi equation can be visualized.
•  The 2D representation via Qwt was replaced by an OpenGL 2D implementation to speed up the visualization.
•  Viewing parameters can now be stored in a configuration file (.cfg).
•  Several new objects can be used in the 3D and 2D representations.
•  Several predefined local tetrads can be chosen.
•  There are some minor modifications: new mouse control (rotate on sphere); line smoothing; the current last point in coordinates is shown; mutual-coordinate representation extended; current cursor position in 2D; colors for 2D view.
Running time: Interactive. The examples given take milliseconds.
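To give a flavor of the computation behind such a viewer (a minimal stand-alone sketch, entirely independent of the GeodesicViewer/Motion4D code): a lightlike geodesic in the equatorial plane of the Schwarzschild spacetime obeys the orbit equation d²u/dφ² + u = 3Mu², with u = 1/r and G = c = 1, which a few lines of Python can integrate:

# Minimal sketch (not GeodesicViewer code): bend a light ray around a
# Schwarzschild black hole by integrating u'' + u = 3 M u^2, u = 1/r.
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0                     # black-hole mass, geometric units (G = c = 1)
r0 = 10.0 * M               # start the ray at its turning point r = r0
y0 = [1.0 / r0, 0.0]        # (u, du/dphi) at phi = 0

def rhs(phi, y):
    u, du = y
    return [du, 3.0 * M * u**2 - u]

def escaped(phi, y):        # u -> 0 means the ray has run off to infinity
    return y[0]
escaped.terminal = True

sol = solve_ivp(rhs, (0.0, 2.0 * np.pi), y0, events=escaped, max_step=0.01)
r = 1.0 / np.maximum(sol.y[0], 1e-12)
x, z = r * np.cos(sol.t), r * np.sin(sol.t)   # the orbit in the plane

# Starting at the turning point, total bending = 2 * phi_escape - pi.
print(f"deflection angle ~ {2.0 * sol.t[-1] - np.pi:.3f} rad")

The program itself integrates the full geodesic equations for arbitrary metrics via the Motion4D library; the sketch above is only the simplest special case.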
