Similar Documents
20 similar documents found (search time: 46 ms)
1.
We propose a novel approach, namely local reduction of networks, to extract the global core (GC, for short) from a complex network. The algorithm is built on the small-community phenomenon of networks. The global cores found by our local reduction from some classical graphs and benchmarks convince us that the global core of a network is intuitively the supporting graph of the network, which is "similar to" the original graph; that the global core is small and essential to the global properties of the network; and that the global core, together with the small communities, gives rise to a clear picture of the structure of the network, that is, the galaxy structure of networks. We implement the local reduction to extract the global cores for a series of real networks, and execute a number of experiments to analyze the roles of the global cores for various real networks. For each of the real networks, our experiments show that the found global core is small; that the global core is similar to the original network, in the sense that it follows a power-law degree distribution with exponent close to that of the original network; that the global core is sensitive to errors under both the cascading-failure and physical-attack models, in the sense that a small number of random errors in the global core may cause a major failure of the whole network; and that the global core is a good approximate solution to the r-radius center problem, leading to a galaxy structure of the network.

2.
We initiate the study of property testing of submodularity on the boolean hypercube. Submodular functions come up in a variety of applications in combinatorial optimization. For a vast range of algorithms, the existence of an oracle to a submodular function is assumed. But how does one check if this oracle indeed represents a submodular function? Consider a function f : {0,1}^n → ℝ. The distance to submodularity is the minimum fraction of values of f that need to be modified to make f submodular. If this distance is more than ε > 0, then we say that f is ε-far from being submodular. The aim is to have an efficient procedure that, given input f that is ε-far from being submodular, certifies that f is not submodular. We analyze a natural tester for this problem, and prove that it runs in subexponential time. This gives the first non-trivial tester for submodularity. On the other hand, we prove an interesting lower bound (that is, unfortunately, quite far from the upper bound) suggesting that this tester cannot be efficient in terms of ε. This involves non-trivial examples of functions which are far from submodular and yet do not exhibit too many local violations. We also provide some constructions indicating the difficulty in designing a tester for submodularity. We construct a partial function defined on exponentially many points that cannot be extended to a submodular function, but any strict subset of these values can be extended to a submodular function.
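The natural local check behind such a tester samples "squares" of the hypercube and tests the submodularity inequality f(x ∨ e_i) + f(x ∨ e_j) ≥ f(x ∨ e_i ∨ e_j) + f(x). A minimal sketch of that check only — the paper's tester and its subexponential analysis are considerably more involved, and the trial count here is a hypothetical parameter:

```python
import random

def find_square_violation(f, n, trials=10000):
    """Sample random 'squares' of the boolean hypercube and return a
    witness (x, i, j) violating the submodularity inequality
        f(x + e_i) + f(x + e_j) >= f(x + e_i + e_j) + f(x),
    or None if no violation is found among the sampled squares."""
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        free = [k for k in range(n) if x[k] == 0]  # coordinates that can still be raised
        if len(free) < 2:
            continue
        i, j = random.sample(free, 2)
        xi, xj, xij = x[:], x[:], x[:]
        xi[i] = 1
        xj[j] = 1
        xij[i] = xij[j] = 1
        if f(tuple(xi)) + f(tuple(xj)) < f(tuple(xij)) + f(tuple(x)):
            return (tuple(x), i, j)  # certificate that f is not submodular
    return None

# Example: f(x) = (number of ones)^2 is supermodular, so a violation shows up quickly:
# find_square_violation(lambda x: sum(x) ** 2, n=10)
```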

3.
4.
We consider the problem of finding a square low-rank correction (λC − B)F to a given square pencil (λE − A) such that the new pencil λ(E − CF) − (A − BF) has all its generalised eigenvalues at the origin. We give necessary and sufficient conditions for this problem to have a solution, and we also provide a constructive algorithm to compute F when such a solution exists. We show that this problem is related to the deadbeat control problem of a discrete-time linear system and that an (almost) equivalent formulation is to find a square embedding that has all its finite generalised eigenvalues at the origin.

5.
Rainbow Sort is an unconventional method for sorting, which is based on the physical concepts of refraction and dispersion. It is inspired by the observation that light that traverses a prism is sorted by wavelength. At first sight this “rainbow effect” that appears in nature has nothing to do with a computation in the classical sense; still, it can be used to design a sorting method that has the potential of running in Θ(n) with a space complexity of Θ(n), where n denotes the number of elements that are sorted. In Section 1, some upper and lower bounds for sorting are presented in order to provide a basis for comparisons. In Section 2, the physical background is outlined, the setup and the algorithm are presented, and a lower bound for Rainbow Sort of Ω(n) is derived. In Section 3, we describe essential difficulties that arise when Rainbow Sort is implemented. Particularly, restrictions that apply due to the Heisenberg uncertainty principle have to be considered. Furthermore, we sketch a possible implementation that leads to a running time of O(n + m), where m is the maximum key value, i.e., we assume that there are integer keys between 0 and m. Section 4 concludes with a summary of the complexity and some remarks on open questions, particularly on the treatment of duplicates and the preservation of references from the keys to records that contain the actual data. In Appendix A, a simulator is introduced that can be used to visualise Rainbow Sort.
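In software terms, the O(n + m) implementation alluded to above behaves like a counting sort: each integer key in 0..m is "refracted" into its own wavelength bin and the bins are read back in spectral order. A minimal sketch of that analogy (not the optical setup or the simulator from Appendix A):

```python
def rainbow_sort_sim(keys, m):
    """Counting-sort analogue of Rainbow Sort's O(n + m) bound for
    integer keys in 0..m: disperse keys into per-wavelength bins,
    then sweep the spectrum in order."""
    bins = [0] * (m + 1)              # one bin per distinguishable wavelength
    for k in keys:                    # O(n): dispersion step
        bins[k] += 1                  # duplicates land in the same bin
    out = []
    for k, count in enumerate(bins):  # O(m): read-out step
        out.extend([k] * count)
    return out
```

Note that duplicates collapse into a single bin here, which mirrors the open question about the treatment of duplicates mentioned in the abstract.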

6.
It is commonplace in cognitive science that concepts are individuated in terms of the roles they play in the cognitive lives of thinkers, a view that Jerry Fodor has recently dubbed ‘Concept Pragmatism’. Quinean critics of Pragmatism have long argued that it founders on its commitment to the analytic/synthetic distinction, since without such a distinction there is plausibly no way to distinguish constitutive from non-constitutive roles in cognition. This paper considers Fodor’s empirical arguments against analyticity, and in particular his arguments against lexical decomposition and definitions, and argues that Concept Pragmatists have two viable options with respect to them. First, Concept Pragmatists can confront them head-on, and argue that they do not show that lexical items are semantically primitive or that lexical concepts are internally unstructured. Second, Pragmatists may accept that these arguments show that lexical concepts are atomic, but insist that this need not entail that Pragmatism is false. For there is a viable version of Concept Pragmatism that does not take lexical items to be semantically structured or lexical concepts to be internally structured. Adopting a version of Pragmatism that takes meaning relations to be specified by inference rules, or meaning postulates, allows one to accept the empirical arguments in favor of Concept Atomism while at the same time denying that such arguments show that there are no analyticities. The paper concludes by responding to Fodor’s recent objection that such a version of Concept Pragmatism has unhappy consequences concerning the relation between concept constitution and concept possession.

7.
The software model checker Blast
Blast is an automatic verification tool for checking temporal safety properties of C programs. Given a C program and a temporal safety property, Blast either statically proves that the program satisfies the safety property, or provides an execution path that exhibits a violation of the property (or, since the problem is undecidable, does not terminate). Blast constructs, explores, and refines abstractions of the program state space based on lazy predicate abstraction and interpolation-based predicate discovery. This paper gives an introduction to Blast and demonstrates, through two case studies, how it can be applied to program verification and test-case generation. In the first case study, we use Blast to statically prove memory safety for C programs. We use CCured, a type-based memory-safety analyzer, to annotate a program with run-time assertions that check for safe memory operations. Then, we use Blast to remove as many of the run-time checks as possible (by proving that these checks never fail), and to generate execution scenarios that violate the assertions for the remaining run-time checks. In our second case study, we use Blast to automatically generate test suites that guarantee full coverage with respect to a given predicate. Given a C program and a target predicate p, Blast determines the program locations q for which there exists a program execution that reaches q with p true, and automatically generates a set of test vectors that cause such executions. Our experiments show that Blast can provide automated, precise, and scalable analysis for C programs.

8.
We introduce a new class of random graph models for complex real-world networks, based on the protean graph model by Łuczak and Prałat. Our generalized protean graph models have two distinguishing features. First, they are not growth models, but instead are based on the assumption that a “steady state” of large but finite size has been reached. Second, the models assume that the vertices are ranked according to a given ranking scheme, and the rank of a vertex determines the probability that the vertex receives a link in a given time step. Precisely, the link probability is proportional to the rank raised to the power −α, where the attachment strength α is a tunable parameter. We show that the model leads to a power-law degree distribution with exponent 1 + 1/α for ranking schemes based on a given prestige label, or on the degree of a vertex. We also study a scheme where each vertex receives an initial rank chosen randomly according to a biased distribution. In this case, the degree distribution depends on the distribution of the initial rank. For one particular choice of parameters we obtain a power law with an exponent that depends both on α and on a parameter determining the initial rank distribution.
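As a toy illustration of the degree-based ranking scheme, one attachment step might look as follows (a sketch only; the actual protean model also deletes and re-inserts vertices to maintain its steady state, which is omitted here):

```python
import random

def rank_attachment_step(degrees, alpha):
    """One attachment step under degree-based ranking: vertices are
    ranked by degree (rank 1 = highest degree) and the vertex of rank r
    receives the new link with probability proportional to r**(-alpha)."""
    order = sorted(range(len(degrees)), key=lambda v: -degrees[v])
    weights = [(r + 1) ** (-alpha) for r in range(len(order))]
    target = random.choices(order, weights=weights, k=1)[0]
    degrees[target] += 1
    return target
```

Iterating this step, the empirical degree distribution should approach the power law with exponent 1 + 1/α stated above.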

9.
Data cleaning is a pervasive problem for organizations as they try to reap value from their data. Recent advances in networking and cloud computing technology have fueled a new computing paradigm called Database-as-a-Service, where data management tasks are outsourced to large service providers. In this paper, we consider a Data Cleaning-as-a-Service model that allows a client to interact with a data cleaning provider who hosts curated and sensitive data. We present PACAS: a Privacy-Aware data Cleaning-As-a-Service model that facilitates interaction between the parties, with client query requests for data and a service provider using a data pricing scheme that computes prices according to data sensitivity. We propose new extensions to the model to define generalized data repairs that obfuscate sensitive data to allow data sharing between the client and service provider. We present a new semantic distance measure to quantify the utility of such repairs, and we re-define the notion of consistency in the presence of generalized values. The PACAS model uses (X, Y, L)-anonymity, which extends existing data publishing techniques to consider the semantics in the data while protecting sensitive values. Our evaluation over real data shows that PACAS safeguards semantically related sensitive values and provides lower repair errors compared to existing privacy-aware cleaning techniques.

10.
We provide a novel model to formalize a well-known algorithm, by Chandra and Toueg, that solves Consensus among asynchronous distributed processes in the presence of a particular class of failure detectors (◊S or, equivalently, Ω), under the hypothesis that only a minority of processes may crash. The model is defined as a global transition system that is unambiguously generated by local transition rules. The model is syntax-free in that it does not refer to any form of programming language or pseudo code. We use our model to formally prove that the algorithm is correct.

11.
The trace transform is a novel algorithm that has been shown to be effective in a number of image recognition tasks. It is a generalisation of the Radon transform that has been widely used in image processing for decades. Its generality—allowing multiple functionals to be used in the mapping—leads to an algorithm that can be tailored to specific applications. However, its computational complexity has been a barrier to its widespread adoption. By harnessing the heterogeneous resources on a modern FPGA, the algorithm is significantly accelerated. Here, a flexible system is developed that allows a wide array of functionals to be computed without re-implementing the design. The system is fully scalable, such that the number and complexity of functionals does not affect the speed of the circuit. The heterogeneous resources of the FPGA platform are then used to develop a set of flexible functional blocks that can each implement a number of different mathematical functionals. The combined result of this design is a system that can compute the trace transform on a 256 × 256 pixel image at 26 fps, enabling real-time processing of captured video frames.
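For readers unfamiliar with the transform itself: it evaluates a functional over the samples of the image along every line (angle, offset), and reduces to the Radon transform when that functional is a plain sum. A small software reference with a hypothetical sampling scheme and parameters — emphatically not the FPGA design described above:

```python
import numpy as np

def trace_transform(img, functional, n_angles=180, n_dists=128):
    """Toy trace transform: for each line, parameterised by its angle
    and signed distance from the image centre, sample the image along
    the line and apply 'functional' to the samples. With
    functional=np.sum this is a crude Radon transform."""
    h, w = img.shape
    cx, cy = w / 2.0, h / 2.0
    rmax = np.hypot(cx, cy)
    ts = np.linspace(-rmax, rmax, 2 * max(h, w))  # positions along the line
    out = np.zeros((n_angles, n_dists))
    for ai, theta in enumerate(np.linspace(0, np.pi, n_angles, endpoint=False)):
        c, s = np.cos(theta), np.sin(theta)
        for di, p in enumerate(np.linspace(-rmax, rmax, n_dists)):
            xs = np.rint(cx + p * c - ts * s).astype(int)  # line direction (-s, c)
            ys = np.rint(cy + p * s + ts * c).astype(int)
            ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
            if ok.any():
                out[ai, di] = functional(img[ys[ok], xs[ok]])
    return out
```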

12.
Given a set of positive integers S, the minimum generating set problem consists in finding a set of positive integers T with minimum cardinality such that every element of S can be expressed as the sum of a subset of elements in T. It constitutes a natural problem in combinatorial number theory and is related to some real-world problems, such as planning radiation therapies. We present a new formulation of this problem (based on the terminology for the multiple knapsack problem) that is used to design an evolutionary approach whose performance is driven by three search strategies: a novel random greedy heuristic scheme that is employed to construct initial solutions, a specialized crossover operator inspired by real-parameter crossovers, and a restart mechanism that is incorporated to avoid premature convergence. Computational results for problem instances involving up to 100,000 elements show that our innovative genetic algorithm is a very attractive alternative to the existing approaches.
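The feasibility test underlying the problem — does a candidate T generate all of S? — is a subset-sum check, sketched below (the evolutionary machinery is the paper's contribution; this helper is a hypothetical illustration):

```python
def generates(T, S):
    """Return True iff every element of S can be written as the sum of
    a subset of T (the feasibility test for a candidate generating set)."""
    target = max(S)
    reachable = {0}                          # subset sums expressible so far
    for t in T:
        reachable |= {r + t for r in reachable if r + t <= target}
    return all(s in reachable for s in S)

# Example: T = [1, 2, 4] generates S = [3, 5, 7], since 3=1+2, 5=1+4, 7=1+2+4.
# generates([1, 2, 4], [3, 5, 7])  # -> True
```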

13.
Objective: The current study examines the changes in functional connectivity that occur when expert users adapt to an alternate mapping. Background: Research has indicated that interfaces that are similar will result in more errors and may contribute to confusion. Methods: Six volunteers were recruited to determine the neurophysiological changes that occur when users are exposed to an alternate mapping once an internal mental model is formed. Results: The results indicated a change in synchronization after alterations to the button mappings occurred. By altering the layout or order of the task, a difference in the activation pattern was observed. New areas became synchronized, while synchronized activity that was present in the developed internal model became desynchronized. Altering the complexity of the task resulted in different patterns of activation recorded on the quantitative electroencephalogram (QEEG). Conclusion: Users often form a schema when learning a device, and subsequent interactions are compared to the mental model formed during the initial learning phase. If the newer interface differs significantly, a new schema is formed, resulting in a different pattern of synchronization recorded on the QEEG. Application: This knowledge can assist in the development of new interfaces. If the intent is to create a similar interface design, the activation pattern should remain the same, indicating that the old schema can be applied. An interface that elicits a different cognitive pattern indicates that a new schema was developed.

14.
This paper considers the problem of electing an eventual leader in an asynchronous shared memory system. While this problem has received a lot of attention in message-passing systems, very few solutions have been proposed for shared memory systems. As an eventual leader cannot be elected in a pure asynchronous system prone to process crashes, the paper first proposes to enrich the asynchronous system model with an additional assumption. That assumption (denoted AWB) is particularly weak. It is made up of two complementary parts. More precisely, it requires that, after some time, (1) there is a process whose write accesses to some shared variables are timely, and (2) the timers of t−f other processes are asymptotically well-behaved (t denotes the maximal number of processes that may crash, and f the actual number of process crashes in a run). The asymptotically well-behaved timer notion is a new notion that generalizes and weakens the traditional notion of timers whose durations are required to monotonically increase when the values they are set to increase (a timer works incorrectly when it expires at arbitrary times, i.e., independently of the value it has been set to). The paper then focuses on the design of t-resilient AWB-based eventual leader protocols. “t-resilient” means that each protocol can cope with up to t process crashes (taking t = n−1 provides wait-free protocols, i.e., protocols that can cope with any number of process failures). Two protocols are presented. The first enjoys the following noteworthy properties: after some time only the elected leader has to write the shared memory, and all but one of the shared variables have a bounded domain, whether the execution is finite or infinite. This protocol is consequently optimal with respect to the number of processes that have to write the shared memory. The second protocol guarantees that all the shared variables have a bounded domain. This is obtained at the following additional price: t+1 processes are required to forever write the shared memory. A theorem is proved which states that this price has to be paid by any protocol that elects an eventual leader in a bounded shared memory model. This second protocol is consequently optimal with respect to the number of processes that have to write in such a constrained memory model. In a very interesting way, these protocols show an inherent tradeoff relating the number of processes that have to write the shared memory and the bounded/unbounded attribute of that memory.

15.
We propose a kinematic model of a system moving in an (m + 1)-dimensional euclidean space and consisting of n rigid bars attached successively to each other and subject to the nonholonomic constraints that the instantaneous velocity of the source point of each bar is parallel to that bar. We prove that the associated control system is controllable and feedback equivalent to the m-chained form around any regular configuration. As a consequence, we deduce that the n-bar system is flat and show that the Cartesian position of the source point of the last (from the top) bar is a flat output. The n-bar system is a natural generalisation of the n-trailer system, and we provide a comparison of the flatness properties of both systems.
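For orientation, the single-chain case (m = 1, which covers the planar n-trailer) of the chained normal form mentioned above reads, in standard notation — shown only as a reminder of the target form, not as the paper's derivation:

\[
\dot z_1 = u_1, \qquad \dot z_2 = u_2, \qquad \dot z_j = z_{j-1}\,u_1 \quad (j \ge 3),
\]

with the flat output read off from the bottom of the chain.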

16.
This paper deals with the problem of derivational redundancy in scientific explanation, i.e. the problem that there can be extremely many different explanatory derivations for a natural phenomenon while students and experts mostly come up with one and the same derivation for a phenomenon (modulo the order of applying laws). Given this agreement among humans, we need to have a story of how to select from the space of possible derivations of a phenomenon the derivation that humans come up with. In this paper we argue that the problem of derivational redundancy can be solved by a new notion of “shortest derivation”, by which we mean the derivation that can be constructed by the fewest (and therefore largest) partial derivations of previously derived phenomena that function as “exemplars”. We show how the exemplar-based framework known as “Data-Oriented Parsing” or “DOP” can be employed to select the shortest derivation in scientific explanation. DOP’s shortest derivation of a phenomenon maximizes what is called the “derivational similarity” between a phenomenon and a corpus of exemplars. A preliminary investigation with exemplars from classical and fluid mechanics shows that the shortest derivation closely corresponds to the derivations that humans construct. Our approach also proposes a concrete solution to Kuhn’s problem of how we know on which exemplar a phenomenon can be modeled. We argue that humans model a phenomenon on the exemplar that is derivationally most similar to the phenomenon, i.e. the exemplar from which the largest subtree(s) can be used to derive the phenomenon.

17.
Technology offers substantial benefits to the many people with some form of cognitive disability. But the power of technology often comes in a package whose complexity is a barrier to many users, leading to calls for designs, and especially designs for user interfaces, that are “simple”. This paper analyzes the idea of simplicity, and suggests (a) that simplicity in a user interface is not a unified concept, but rather has distinguishable facets, and (b) that simplicity must be defined in terms of the cognitive capabilities of a user, so that what is “simpler” for one user may be “more complex” for another. Despite (b), the prospects for universal design in this area are good, in that interface technology with the flexibility needed to produce “simple” interfaces for a range of users with different cognitive strengths will be of value in addressing the overall design space of interfaces for a broad audience. While it is possible to sketch the outlines of a useful theory of simplicity, the sketch reveals much that is not fully understood. It also reveals opportunities to rethink the architecture of user interfaces in a way that will benefit user interface development generally.

18.
It has been known that an n-dimensional hypercube (n-cube for short) can always embed a Hamiltonian cycle when the n-cube has no more than n−2 faulty links. In this paper, we study the link-fault tolerant embedding of a Hamiltonian cycle into the folded hypercube, which is a variant of the hypercube obtained by adding a link to every pair of nodes with complementary addresses. We will show that a folded n-cube can tolerate up to n−1 faulty links when embedding a Hamiltonian cycle. We present an algorithm, FT_HAMIL, that finds a Hamiltonian cycle while avoiding any set of faulty links F provided that |F|⩽n−1. An operation, called bit-flip, on links of the hypercube is introduced. Simple yet elegant, bit-flip will be employed by FT_HAMIL as a basic operation to generate a new Hamiltonian cycle from an old one (that contains faulty links). It is worth pointing out that the algorithm is optimal in the sense that for a folded n-cube, n−1 is the maximum number for |F| that can be tolerated, F being an arbitrary set of faulty links.
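The folded n-cube's structure is easy to state in code: each node keeps its n ordinary hypercube links plus one extra link to its bitwise complement, giving degree n + 1 — which is consistent with the n−1 bound above, since n faults at a single node could leave fewer than the two incident links a Hamiltonian cycle requires. A minimal sketch:

```python
def folded_cube_neighbors(v, n):
    """Neighbors of node v (an n-bit address) in the folded n-cube:
    the n hypercube neighbors obtained by flipping one address bit,
    plus the node with the fully complementary address."""
    nbrs = [v ^ (1 << i) for i in range(n)]  # ordinary hypercube links
    nbrs.append(v ^ ((1 << n) - 1))          # extra complement link
    return nbrs
```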

19.
In some applications of triangulation, such as finite-element mesh generation, the aim is to triangulate a given domain, not just a set of points. One approach to meeting this requirement, while maintaining the desirable properties of Delaunay triangulation, has been to enforce the empty circumcircle property of Delaunay triangulation, subject to the additional constraint that the edges of a polygon be covered by edges of the triangulation. In finite-element mesh generation it is usually necessary to include additional points besides the vertices of the domain boundary. This motivates us to ask whether it is possible to triangulate a domain by introducing additional points in such a manner that the Delaunay triangulation of the points includes the edges of the domain boundary. We present algorithms that, given a multiply connected polygonal domain with N vertices, place K additional points on the boundary in O(N log N + K) time such that the polygon is covered by the edges of the Delaunay triangulation of the N + K points. Furthermore, K is the minimum number of additional points such that a circle, passing through the endpoints of each boundary edge segment, exists that does not contain in its interior any other part of the domain boundary. We also show that by adding only one more point per edge, certain degeneracies that may otherwise arise can be avoided.
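A quick way to sanity-check such a construction in practice is to triangulate the augmented point set and verify that the boundary edges appear among the Delaunay edges. A simplified check — it only tests direct edge membership (subdivided boundary edges must be passed as their sub-edges), and the helper name is hypothetical, not the paper's placement algorithm:

```python
import itertools
import numpy as np
from scipy.spatial import Delaunay

def boundary_edges_present(points, boundary_edges):
    """Check that each given boundary edge (a pair of point indices)
    appears directly as an edge of the Delaunay triangulation of 'points'."""
    tri = Delaunay(np.asarray(points, dtype=float))
    dedges = set()
    for simplex in tri.simplices:
        for a, b in itertools.combinations(simplex, 2):
            dedges.add(frozenset((int(a), int(b))))
    return all(frozenset(e) in dedges for e in boundary_edges)
```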

20.
In this paper, we present new monolithic and compositional algorithms to solve the LTL realizability problem. Those new algorithms are based on a reduction of the LTL realizability problem to a game whose winning condition is defined by a universal automaton on infinite words with a k-co-Büchi acceptance condition. This acceptance condition asks that runs visit at most k accepting states, so it implicitly defines a safety game. To obtain efficient algorithms from this construction, we need several additional ingredients. First, we study the structure of the underlying automata constructions, and we show that there exists a partial order that structures the state space of the underlying safety game. This partial order can be used to define an efficient antichain algorithm. Second, we show that the algorithm can be implemented in an incremental way by considering increasing values of k in the acceptance condition. Finally, we show that for large LTL formulas that are written as conjunctions of smaller formulas, we can solve the problem compositionally by first computing winning strategies for each conjunct that appears in the large formula. We report on the behavior of those algorithms on several benchmarks. We show that the compositional algorithms are able to handle LTL formulas that are several pages long.
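Since the k-co-Büchi condition implicitly defines a safety game, the computational kernel is a standard safety fixpoint: repeatedly discard states from which the controller cannot stay inside the safe region. The sketch below shows that kernel on an explicit game graph; it is not the antichain algorithm of the paper, which exploits the partial order symbolically:

```python
def safe_states(states, edges, unsafe, controller_owns):
    """Classical safety-game fixpoint: shrink the candidate safe set
    until every kept controller state has SOME successor inside it and
    every kept environment state has ALL successors inside it.
    'edges' is a list of (src, dst) pairs."""
    safe = set(states) - set(unsafe)
    changed = True
    while changed:
        changed = False
        for s in list(safe):
            succs = [t for (a, t) in edges if a == s]
            ok = (any(t in safe for t in succs) if controller_owns(s)
                  else all(t in safe for t in succs))
            if not ok:
                safe.discard(s)
                changed = True
    return safe
```

The incremental scheme described above then re-runs this kernel for increasing values of k until the initial state becomes winning (or an upper bound on k is reached).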
