1.
This paper deals with the question: how is computation best individuated? Three views are considered:
  1. The semantic view of computation: computation is best individuated by its semantic properties.
  2. The causal view of computation: computation is best individuated by its causal properties.
  3. The functional view of computation: computation is best individuated by its functional properties.
Some scientific theories explain the capacities of brains by appealing to computations that they supposedly perform. The justification usually offered is that computation is individuated semantically. I criticize the reasons given in support of this view and its presupposition of representation and semantics. Furthermore, I argue that the only justified appeal to a representational individuation of computation might be that it is partly individuated by implicit intrinsic representations.

2.
A graph is König-Egerváry if the size of a minimum vertex cover equals that of a maximum matching in the graph. These graphs have been studied extensively from a graph-theoretic point of view. In this paper, we introduce and study the algorithmic complexity of finding König-Egerváry subgraphs of a given graph. In particular, given a graph G and a nonnegative integer k, we are interested in the following questions:
  1. Does there exist a set of k vertices (edges) whose deletion makes the graph König-Egerváry?
  2. Does there exist a set of k vertices (edges) that induce a König-Egerváry subgraph?
We show that these problems are NP-complete and study their complexity from the points of view of approximation and parameterized complexity. Towards this end, we first study the algorithmic complexity of Above Guarantee Vertex Cover, where one is interested in minimizing the additional number of vertices needed beyond the maximum matching size for the vertex cover. Further, while studying the parameterized complexity of the problem of deleting k vertices to obtain a König-Egerváry graph, we show a number of interesting structural results on matchings and vertex covers which could be useful in other contexts.
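For intuition, the following Python sketch (illustrative only, not one of the paper's algorithms) brute-forces a maximum matching and a minimum vertex cover on tiny hypothetical graphs and checks whether their sizes coincide, i.e., whether the graph is König-Egerváry.

```python
from itertools import combinations

def max_matching_size(vertices, edges):
    """Largest set of pairwise vertex-disjoint edges (brute force)."""
    for r in range(len(edges), 0, -1):
        for subset in combinations(edges, r):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):      # edges are pairwise disjoint
                return r
    return 0

def min_vertex_cover_size(vertices, edges):
    """Smallest vertex set touching every edge (brute force)."""
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return r
    return len(vertices)

def is_koenig_egervary(vertices, edges):
    return max_matching_size(vertices, edges) == min_vertex_cover_size(vertices, edges)

# A 5-cycle is NOT König-Egerváry (cover 3 vs matching 2); a 4-cycle is.
c5 = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
c4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
print(is_koenig_egervary(*c5))   # False
print(is_koenig_egervary(*c4))   # True
```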

3.
We present barriers to provable security of two important cryptographic primitives, perfect non-interactive zero knowledge (NIZK) and non-interactive non-malleable commitments:
  1. Black-box reductions cannot be used to demonstrate adaptive soundness (i.e., that soundness holds even if the statement to be proven is chosen as a function of the common reference string) of any statistical NIZK for NP based on any “standard” intractability assumptions.
     
  2. Black-box reductions cannot be used to demonstrate non-malleability of non-interactive, or even 2-message, commitment schemes based on any “standard” intractability assumptions.
     
We emphasize that the above separations apply even if the construction of the considered primitives makes a non-black-box use of the underlying assumption.
As an independent contribution, we suggest a taxonomy of game-based intractability assumptions.

4.
5.
A circuit C compresses a function \({f : \{0,1\}^n\rightarrow \{0,1\}^m}\) if given an input \({x\in \{0,1\}^n}\), the circuit C can shrink x to a shorter \({\ell}\)-bit string x′ such that later, a computationally unbounded solver D will be able to compute f(x) based on x′. In this paper we study the existence of functions which are incompressible by circuits of some fixed polynomial size \({s=n^c}\). Motivated by cryptographic applications, we focus on average-case \({(\ell,\epsilon)}\) incompressibility, which guarantees that on a random input \({x\in \{0,1\}^n}\), for every size s circuit \({C:\{0,1\}^n\rightarrow \{0,1\}^{\ell}}\) and any unbounded solver D, the success probability \({\Pr_x[D(C(x))=f(x)]}\) is upper-bounded by \({2^{-m}+\epsilon}\). While this notion of incompressibility appeared in several works (e.g., Dubrov and Ishai, STOC 06), so far no explicit constructions of efficiently computable incompressible functions were known. In this work, we present the following results:
  1. Assuming that E is hard for exponential size nondeterministic circuits, we construct a polynomial time computable boolean function \({f:\{0,1\}^n\rightarrow \{0,1\}}\) which is incompressible by size \({n^c}\) circuits with communication \({\ell=(1-o(1)) \cdot n}\) and error \({\epsilon=n^{-c}}\). Our technique generalizes to the case of PRGs against nonboolean circuits, improving and simplifying the previous construction of Shaltiel and Artemenko (STOC 14).
  2. We show that it is possible to achieve negligible error parameter \({\epsilon=n^{-\omega(1)}}\) for nonboolean functions. Specifically, assuming that E is hard for exponential size \({\Sigma_3}\)-circuits, we construct a nonboolean function \({f:\{0,1\}^n\rightarrow \{0,1\}^m}\) which is incompressible by size \({n^c}\) circuits with \({\ell=\Omega(n)}\) and extremely small \({\epsilon=n^{-c} \cdot 2^{-m}}\). Our construction combines the techniques of Trevisan and Vadhan (FOCS 00) with a new notion of relative error deterministic extractor which may be of independent interest.
  3. We show that the task of constructing an incompressible boolean function \({f:\{0,1\}^n\rightarrow \{0,1\}}\) with negligible error parameter \({\epsilon}\) cannot be achieved by “existing proof techniques”. Namely, nondeterministic reductions (or even \({\Sigma_i}\) reductions) cannot get \({\epsilon=n^{-\omega(1)}}\) for boolean incompressible functions. Our results also apply to constructions of standard Nisan-Wigderson type PRGs and (standard) boolean functions that are hard on average, explaining, in retrospect, the limitations of existing constructions. Our impossibility result builds on an approach of Shaltiel and Viola (STOC 08).
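As a toy illustration of the \({(\ell,\epsilon)}\) incompressibility definition above (not one of the paper's constructions), the following Python sketch takes f to be the parity function, lets the compressor C keep only the first \(\ell\) bits of x, and evaluates the optimal unbounded solver D, which outputs the most likely value of f(x) given C(x); the parameters n and \(\ell\) are hypothetical.

```python
from collections import Counter, defaultdict
from itertools import product

n, ell = 10, 6                       # toy sizes (hypothetical)

def f(x):                            # parity of all n bits, so m = 1 output bit
    return sum(x) % 2

def C(x):                            # "compressor": keep only the first ell bits
    return x[:ell]

# Optimal unbounded solver D: for each compressed string, output the value of
# f that is most frequent among its preimages.
buckets = defaultdict(Counter)
for x in product((0, 1), repeat=n):
    buckets[C(x)][f(x)] += 1

successes = sum(cnt.most_common(1)[0][1] for cnt in buckets.values())
print("Pr_x[D(C(x)) = f(x)] =", successes / 2 ** n)   # 0.5 for parity
```

For parity, dropping even a single uniformly random bit leaves D with success probability exactly 2^{-m} = 1/2, matching the bound in the definition with \(\epsilon = 0\).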

6.
Information used for decision-making is often semi-quantitative, rather than quantitative in nature. For example, knowing that a student received the top mark in an examination may be more useful than knowing the precise mark achieved; similarly, to be given the information that a company has low market penetration in a given census district might be more important than to be given a numerical value derived from (say) sales divided by average take home pay. The purpose of this paper is to examine how database semantics can be enhanced to provide support for the interpretation of raw data, in order that the significance of the data can be understood. The significance of a reported result is usually dependent on its context, that is, the domain in which it appears, so it is clear that the notion of context must be supported. This paper describes extensions to the object-oriented model which involve equipping it with the means to handle contextual knowledge relating to the patterns of data values, and the means to perform analyses on the data held within the system. We hope to make good use of these extensions in two ways.
  i) As data is retrieved from the database it can be placed in context, that is, some measure of the significance of reported values can be given.
  ii) This new knowledge can be used as a guide to query optimisation, since it allows the optimiser to determine, with greater accuracy, the expected size of the intermediate stages when performing a search operation.
This paper concentrates on the rules necessary for the reporting of significance information, along with a control strategy for firing the rules, and a mechanism for the inheritance of significance data.
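The following Python fragment is only a schematic sketch of the idea of placing retrieved values in context (it is not the paper's object-oriented model or rule language): a hypothetical Context object stores the values observed in one domain, and a simple rule maps a raw value to a qualitative significance label.

```python
class Context:
    """Holds the values observed in one domain, e.g. marks in one examination."""
    def __init__(self, values):
        self.values = sorted(values)

    def significance(self, raw):
        """Rule: place a raw value in context by reporting its rank band."""
        if raw >= self.values[-1]:
            return "top value in this context"
        rank = sum(v <= raw for v in self.values) / len(self.values)
        if rank >= 0.9:
            return "well above typical values"
        if rank <= 0.1:
            return "well below typical values"
        return "unremarkable in this context"

exam = Context([41, 55, 58, 62, 64, 67, 70, 73, 78, 92])   # hypothetical marks
print(exam.significance(92))   # top value in this context
print(exam.significance(60))   # unremarkable in this context
```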

7.
A degree-constrained graph orientation of an undirected graph G is an assignment of a direction to each edge in G such that the outdegree of every vertex in the resulting directed graph satisfies a specified lower and/or upper bound. Such graph orientations have been studied for a long time and various characterizations of their existence are known. In this paper, we consider four related optimization problems introduced in Asahiro et al. (LNCS 7422, 332–343, 2012): for any fixed non-negative integer W, the problems MAX W-LIGHT, MIN W-LIGHT, MAX W-HEAVY, and MIN W-HEAVY take as input an undirected graph G and ask for an orientation of G that maximizes or minimizes the number of vertices with outdegree at most W or at least W. As shown in Asahiro et al. (LNCS 7422, 332–343, 2012).
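To make the problem definitions concrete (a brute-force illustration over a hypothetical graph, not an algorithm from the paper), the sketch below enumerates all \(2^{|E|}\) orientations and solves MAX W-LIGHT, i.e., maximizes the number of vertices with outdegree at most W; the other three variants differ only in the objective.

```python
from itertools import product

def max_w_light(vertices, edges, W):
    """Maximize #vertices with outdegree <= W over all orientations (brute force)."""
    best = -1
    for choice in product((0, 1), repeat=len(edges)):
        outdeg = {v: 0 for v in vertices}
        for (u, v), flip in zip(edges, choice):
            outdeg[v if flip else u] += 1       # orient u->v or v->u
        best = max(best, sum(d <= W for d in outdeg.values()))
    return best

# Hypothetical example: a triangle with a pendant vertex, W = 1.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(max_w_light(V, E, 1))   # 4: all vertices can have outdegree <= 1
```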

8.
In this paper we consider both the maximization variant Max Rep and the minimization variant Min Rep of the famous Label Cover problem. So far the best approximation ratio known for these two problems was \(O(\sqrt{n})\), and indeed some authors suggested the possibility that this ratio is the best approximation factor for these two problems. We show, in fact, that there are an \(O(n^{1/3})\)-approximation algorithm for Max Rep and an \(O(n^{1/3}\log^{2/3} n)\)-approximation algorithm for Min Rep. In addition, we also exhibit a randomized reduction from Densest k-Subgraph to Max Rep, showing that any approximation factor for Max Rep implies the same factor (up to a constant) for Densest k-Subgraph.

9.
Given a simple undirected graph G = (V, E) and an integer k < |V|, the Sparsest k-Subgraph problem asks for a set of k vertices which induces the minimum number of edges. As a generalization of the classical independent set problem, Sparsest k-Subgraph is NP-hard and, in general graphs, not even approximable unless P = NP. Thus, we investigate Sparsest k-Subgraph in graph classes where independent set is polynomial-time solvable, such as subclasses of perfect graphs. Our two main results are the NP-hardness of Sparsest k-Subgraph on chordal graphs, and a greedy 2-approximation algorithm. Finally, we also show how to derive a PTAS for Sparsest k-Subgraph on proper interval graphs.
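As an illustration of the problem statement only (neither the greedy 2-approximation nor the PTAS mentioned above), a brute-force Python sketch on a hypothetical graph:

```python
from itertools import combinations

def sparsest_k_subgraph(vertices, edges, k):
    """Return a k-vertex set inducing the fewest edges (exponential brute force)."""
    def induced_edges(s):
        return sum(u in s and v in s for u, v in edges)
    return min((set(s) for s in combinations(vertices, k)), key=induced_edges)

# Hypothetical graph: a path 0-1-2-3-4 plus the chord (0, 2).
V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2)]
print(sparsest_k_subgraph(V, E, 3))   # {0, 1, 3}: induces a single edge
```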

10.
The Planar Feedback Vertex Set problem asks whether an n-vertex planar graph contains at most k vertices meeting all its cycles. The Face Cover problem asks whether all vertices of a plane graph G lie on the boundary of at most k faces of G. Standard techniques from parameterized algorithm design indicate that both problems can be solved by sub-exponential parameterized algorithms (where k is the parameter). In this paper we improve the algorithmic analysis of both problems by proving a series of combinatorial results relating the branchwidth of planar graphs with their face cover. Combining these results with duality properties of branchwidth allows us to derive analogous results on feedback vertex set. As a consequence, it follows that Planar Feedback Vertex Set and Face Cover can be solved in \(O(2^{15.11\cdot\sqrt{k}}+n^{2})\) and \(O(2^{10.1\cdot\sqrt {k}}+n^{2})\) steps, respectively.

11.
Given a road network G = (V, E), where V (E) denotes the set of vertices (edges) in G, a set of points of interest P, and a query point q residing in G, the reverse furthest neighbors (RFN) query in road networks fetches the set of points p ∈ P that take q as their furthest neighbor compared with all points in P ∪ {q}. This is the monochromatic RFN (MRFN) query. Another interesting version of the RFN query is the bichromatic reverse furthest neighbor (BRFN) query. Given two sets of points P and Q, and a query point q ∈ Q, a BRFN query fetches the set of points p ∈ P that take q as their furthest neighbor compared with all points in Q. This paper presents efficient algorithms for both MRFN and BRFN queries, which utilize landmarks and partitioning-based techniques. Experiments on real datasets confirm the efficiency and scalability of the proposed algorithms.
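A minimal brute-force sketch of the two query definitions (not the landmark- and partitioning-based algorithms of the paper); for simplicity it uses Euclidean distance in the plane rather than road-network distance, and the point sets are hypothetical.

```python
from math import dist

def furthest(point, candidates):
    return max(candidates, key=lambda c: dist(point, c))

def mrfn(P, q):
    """Monochromatic RFN: p in P whose furthest neighbor in P ∪ {q} is q."""
    return [p for p in P if furthest(p, [x for x in P if x != p] + [q]) == q]

def brfn(P, Q, q):
    """Bichromatic RFN: p in P whose furthest neighbor in Q is q."""
    return [p for p in P if furthest(p, Q) == q]

P = [(0, 0), (1, 0), (9, 0)]
Q = [(0, 1), (10, 1)]
print(mrfn(P, (10, 0)))          # [(0, 0), (1, 0)]
print(brfn(P, Q, (10, 1)))       # [(0, 0), (1, 0)]
```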

12.
The paper studies three fundamental problems in graph analytics: computing connected components (CCs), biconnected components (BCCs), and 2-edge-connected components (ECCs) of a graph. With the recent advent of big data, developing efficient distributed algorithms for computing CCs, BCCs and ECCs of a big graph has received increasing interest. As with the existing research efforts, we focus on the Pregel programming model, while the techniques may be extended to other programming models including MapReduce and Spark. The state-of-the-art techniques for computing CCs and BCCs in Pregel incur \(O(m\times \#\text {supersteps})\) total costs for both data communication and computation, where m is the number of edges in a graph and #supersteps is the number of supersteps. Since the network communication speed is usually much slower than the computation speed, communication costs dominate the total running time of the existing techniques. In this paper, we propose a new paradigm based on graph decomposition to compute CCs and BCCs with O(m) total communication cost. The total computation costs of our techniques are also smaller than those of the existing techniques in practice, though theoretically almost the same. Moreover, we also study the distributed computation of ECCs. We are the first to study this problem, and an approach with O(m) total communication cost is proposed. Comprehensive empirical studies demonstrate that our approaches can outperform the existing techniques by one order of magnitude in terms of total running time.
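To see where the \(O(m\times \#\text {supersteps})\) communication cost comes from, here is a small simulation of the classical Pregel-style hash-min label propagation for CCs (the baseline being improved upon, not the decomposition-based technique proposed in the paper), which also counts the messages sent:

```python
def hash_min_cc(adj):
    """Pregel-style connected components: every vertex repeatedly broadcasts the
    smallest component label it has seen; message counting exposes the
    O(m x #supersteps) communication cost."""
    label = {v: v for v in adj}
    supersteps = messages = 0
    changed = True
    while changed:
        supersteps += 1
        inbox = {v: [] for v in adj}
        for v, ns in adj.items():            # each vertex messages all neighbors
            for u in ns:
                inbox[u].append(label[v])
                messages += 1
        changed = False
        for v, received in inbox.items():
            new = min([label[v]] + received)
            if new != label[v]:
                label[v], changed = new, True
    return label, supersteps, messages

# Hypothetical graph: a path 0-1-2-3 and an isolated edge 4-5.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
print(hash_min_cc(adj))
```

Every superstep sends one message per directed edge, so the total message count is 2m times the number of supersteps, which for path-like graphs grows with the diameter.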

13.
A system for monitoring track chamber measurements by using an on-line computer has been developed and realized in Pisa.

    The system consists of

  1) up to seven measuring machines (MM), each provided with a console for displaying messages from the computer.

14.
How can we discover interesting patterns from time-evolving high-speed data streams? How can we analyze the data streams quickly and accurately, with little space overhead? How can we guarantee that the found patterns are self-consistent? High-speed data streams have been receiving increasing attention due to their wide applications such as sensors, network traffic, social networks, etc. The most fundamental task on data streams is frequent pattern mining; in particular, focusing on recency is important in real applications. In this paper, we develop two algorithms for discovering recently frequent patterns in data streams. First, we propose TwMinSwap to find top-k recently frequent items in data streams, which is a deterministic version of our motivating algorithm TwSample providing theoretical guarantees based on item sampling. TwMinSwap improves TwSample in terms of speed, accuracy, and memory usage. Both require only O(k) memory space and do not require any prior knowledge of the stream such as its length or the number of distinct items in it. Second, we propose TwMinSwap-Is to find top-k recently frequent itemsets in data streams. We especially focus on keeping self-consistency of the discovered itemsets, which is the most important property for reliable results, while using O(k) memory space under the assumption of a constant itemset size. Through extensive experiments, we demonstrate that TwMinSwap outperforms all competitors in terms of accuracy and memory usage, with fast running time. We also show that TwMinSwap-Is is more accurate than the competitor and discovers recently frequent itemsets of reasonably large size (at most 5–7) depending on the dataset. Thanks to TwMinSwap and TwMinSwap-Is, we report interesting discoveries in real-world data streams, including the difference in trends between the winning and losing U.S. presidential candidates, and temporal human contact patterns.
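The sketch below is a generic Space-Saving-style counter with exponential time decay, shown only to illustrate the setting of finding top-k recently frequent items under an O(k) memory budget; it is not the authors' TwMinSwap or TwSample, and the decay factor and stream are hypothetical.

```python
def topk_recent(stream, k, decay=0.9):
    """Track top-k recently frequent items with O(k) counters.
    Space-Saving style with exponential time decay (illustrative only)."""
    counts = {}
    for item in stream:
        counts = {x: c * decay for x, c in counts.items()}   # age old evidence
        if item in counts:
            counts[item] += 1.0
        elif len(counts) < k:
            counts[item] = 1.0
        else:
            victim = min(counts, key=counts.get)             # evict smallest counter
            counts[item] = counts.pop(victim) + 1.0
        # O(k) memory, one pass, no prior knowledge of the stream length needed
    return sorted(counts, key=counts.get, reverse=True)

stream = list("aabacbdaeeffefefef")      # hypothetical stream; 'e'/'f' are recent
print(topk_recent(stream, k=3))          # recent items rank ahead of old ones
```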

15.
In the Intervalizing Coloured Graphs problem, one must decide for a given graph G = (V, E) with a proper vertex colouring whether G is a subgraph of a properly coloured interval graph. For the case that the number of colours is fixed, we give an exact algorithm that uses \(2^{\mathcal {O}(n/\log n)}\) time. We also give an \(\mathcal {O}^{\ast }(2^{n})\) algorithm for the case that the number of colours is not fixed.

16.
What do the k-core structures of real-world graphs look like? What are the common patterns and the anomalies? How can we exploit them for applications? A k-core is the maximal subgraph in which all vertices have degree at least k. This concept has been applied to such diverse areas as hierarchical structure analysis, graph visualization, and graph clustering. Here, we explore pervasive patterns related to k-cores that emerge in graphs from diverse domains. Our discoveries are: (1) Mirror Pattern: coreness (i.e., the maximum k such that a vertex belongs to the k-core) is strongly correlated with degree. (2) Core-Triangle Pattern: degeneracy (i.e., the maximum k such that the k-core exists) obeys a 3-to-1 power-law with respect to the count of triangles. (3) Structured Core Pattern: degeneracy-cores are not cliques but have non-trivial structures such as core-periphery and communities. Our algorithmic contributions show the usefulness of these patterns. (1) Core-A, which measures the deviation from Mirror Pattern, successfully spots anomalies in real-world graphs. (2) Core-D, a single-pass streaming algorithm based on Core-Triangle Pattern, accurately estimates degeneracy up to 12 \(\times \) faster than its competitor. (3) Core-S, inspired by Structured Core Pattern, identifies influential spreaders up to 17 \(\times \) faster than its competitors with comparable accuracy.
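A minimal Python sketch of the definitions used above: coreness via the standard minimum-degree peeling algorithm, with degeneracy as the largest coreness (the graph is hypothetical, and this is not the streaming estimator Core-D).

```python
def coreness(adj):
    """Coreness of each vertex by iterative minimum-degree peeling."""
    deg = {v: len(ns) for v, ns in adj.items()}
    remaining, core, k = set(adj), {}, 0
    while remaining:
        v = min(remaining, key=deg.get)      # peel a vertex of minimum degree
        k = max(k, deg[v])
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return core

# Hypothetical graph: a triangle {0,1,2} with a pendant path 2-3-4.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
core = coreness(adj)
print(core)                                  # triangle vertices have coreness 2
print("degeneracy =", max(core.values()))    # 2: the 2-core is the triangle
```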

17.
We consider a class of graphs G(n, r, s) = (V(n, r), E(n, r, s)) defined as follows:
$$V(n,r) = \{ x = (x_1, x_2, \ldots, x_n) : x_i \in \{0,1\},\ x_1 + x_2 + \cdots + x_n = r \}, \qquad E(n,r,s) = \{ \{x,y\} : (x,y) = s \},$$
where (x, y) is the Euclidean scalar product. We study random subgraphs G(G(n, r, s), p) with edges independently chosen from the set E(n, r, s) with probability p each. We find nontrivial lower and upper bounds on the clique number of such graphs.
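A direct Python sketch of the definition, for intuition only (the paper's results concern random subgraphs of these graphs and their clique numbers): vertices are 0/1 vectors with exactly r ones, and two vertices are adjacent when their scalar product equals s.

```python
from itertools import combinations

def G(n, r, s):
    """Build the graph G(n, r, s) directly from its definition."""
    V = [tuple(1 if i in ones else 0 for i in range(n))
         for ones in combinations(range(n), r)]
    E = [(x, y) for x, y in combinations(V, 2)
         if sum(a * b for a, b in zip(x, y)) == s]
    return V, E

V, E = G(n=5, r=2, s=1)                       # small hypothetical parameters
print(len(V), "vertices,", len(E), "edges")   # 10 vertices, 30 edges
```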

18.
We present two parameterized algorithms for the Minimum Fill-in problem, also known as Chordal Completion: given an arbitrary graph G and integer k, can we add at most k edges to G to obtain a chordal graph? Our first algorithm has running time \(\mathcal {O}(k^{2}nm+3.0793^{k})\) and requires polynomial space. This improves the base of the exponential factor in the running time of the best known parameterized algorithm for this problem. We are able to improve this running time even further, at the cost of more space. Our second algorithm has running time \(\mathcal {O}(k^{2}nm+2.35965^{k})\) and requires \(\mathcal {O}^{\ast}(1.7549^{k})\) space. To achieve these results, we present a new lemma describing the edges that can safely be added to achieve a chordal completion with the minimum number of edges, regardless of k.
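To make the problem statement concrete, here is a brute-force sketch (assuming the networkx library is available; this is not one of the parameterized algorithms of the paper):

```python
from itertools import combinations
import networkx as nx

def has_fill_in(G, k):
    """Can at most k edges be added to G to make it chordal? (brute force)"""
    non_edges = [e for e in combinations(G.nodes, 2) if not G.has_edge(*e)]
    for r in range(k + 1):
        for added in combinations(non_edges, r):
            H = G.copy()
            H.add_edges_from(added)
            if nx.is_chordal(H):
                return True
    return False

G = nx.cycle_graph(5)                          # C5 needs 2 chords to become chordal
print(has_fill_in(G, 1), has_fill_in(G, 2))    # False True
```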

19.
Disjunctive Temporal Problems (DTPs) with Preferences (DTPPs) extend DTPs with piece-wise constant preference functions associated with each constraint of the form l ≤ x − y ≤ u, where x, y are (real or integer) variables and l, u are numeric constants. The goal is to find an assignment to the variables of the problem that maximizes the sum of the preference values of the satisfied DTP constraints, where such values are obtained by aggregating the preference functions of the satisfied constraints under a “max” semantics. The state-of-the-art approach in the field, implemented in the native DTPP solver Maxilitis, extends the approach of the native DTP solver Epilitis. In this paper we present alternative approaches that translate DTPPs to Maximum Satisfiability of a set of Boolean combinations of constraints of the form l ∘ x − y ∘ u, with ∘ ∈ {<, ≤}, extending previous work that dealt with constant preference functions only. We prove correctness and completeness of the approaches. Results obtained with the Satisfiability Modulo Theories (SMT) solvers Yices and MathSAT on randomly generated DTPPs and on DTPPs built from real-world benchmarks show that one of our translations is competitive with, and can be faster than, Maxilitis. (This is an extended and revised version of Bourguet et al. 2013.)

20.
In many parallel and distributed multiprocessor systems, the processors are connected based on different types of interconnection networks. The topological structure of an interconnection network is typically modeled as a graph. Among the many kinds of network topologies, the crossed cube is one of the most popular. In this paper, we investigate the panpositionable panconnectedness problem with respect to the crossed cube. A graph G is r-panpositionably panconnected if for any three distinct vertices x, y, z of G and for any integer \(l_1\) satisfying \(r \le l_1 \le |V(G)| - r - 1\), there exists a path \(P = [x, P_1, y, P_2, z]\) in G such that (i) \(P_1\) joins x and y with \(l(P_1) = l_1\) and (ii) \(P_2\) joins y and z with \(l(P_2) = l_2\) for any integer \(l_2\) satisfying \(r \le l_2 \le |V(G)| - l_1 - 1\), where |V(G)| is the total number of vertices in G and \(l(P_1)\) (respectively, \(l(P_2)\)) is the length of path \(P_1\) (respectively, \(P_2\)). By mathematical induction, we demonstrate that the n-dimensional crossed cube \(CQ_n\) is n-panpositionably panconnected. This result indicates that the path embedding joining x and z through an intermediate vertex y in \(CQ_n\) is extremely flexible. Moreover, applying our result, crossed cube problems such as panpositionable pancyclicity, panpositionably Hamiltonian connectedness, and panpositionable Hamiltonicity can be solved.
