Similar Documents
20 similar documents found.
1.
BSP is a bridging model between abstract execution and concrete parallel systems. The structure and abstraction brought by BSP make it possible to write portable parallel programs with scalable performance predictions, without dealing with the low-level details of architectures. In the past, we designed BSML for programming BSP algorithms in ML. However, the simplicity of the BSP model does not fit the complexity of today's hierarchical architectures, such as clusters of machines with multiple multi-core processors. The Multi-BSP model is an extension of the BSP model which brings a tree-based view of the nested components of hierarchical architectures. To program Multi-BSP algorithms in ML, we propose the Multi-ML language as an extension of BSML, where a specific kind of recursion is used to go through a hierarchy of computing nodes. We define a formal semantics of the language and present preliminary experiments which show performance improvements with respect to BSML.
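The abstract does not give Multi-ML's syntax, so the following is only a rough sketch of the multi-BSP idea it describes: the machine is a tree of components, and a computation recurses through the hierarchy, doing local work at the leaves and combining the children's results at each inner node. The names `Node` and `multi_run` are illustrative assumptions, not the Multi-ML API.

```python
# Hypothetical sketch of multi-BSP-style recursion over a machine tree.
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    children: List["Node"]          # empty for a leaf (e.g. a core)

def multi_run(node: Node, leaf_work, combine, depth=0):
    """Recurse through the machine tree, multi-BSP style."""
    if not node.children:           # leaf level: perform the local computation
        return leaf_work(depth)
    # inner level: (conceptually in parallel) run the sub-machines, then combine
    partial = [multi_run(c, leaf_work, combine, depth + 1) for c in node.children]
    return combine(partial)         # plays the role of a level-wide exchange

# Example: 2 processors x 2 cores; each core contributes 1, sums are combined upward.
machine = Node([Node([Node([]), Node([])]), Node([Node([]), Node([])])])
print(multi_run(machine, leaf_work=lambda d: 1, combine=sum))   # -> 4
```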

2.
3.
Given a simple undirected graph G = (V, E) and an integer k < |V|, the Sparsest k-Subgraph problem asks for a set of k vertices which induces the minimum number of edges. As a generalization of the classical independent set problem, Sparsest k-Subgraph is NP-hard and, in general graphs, not even approximable unless P = NP. Thus, we investigate Sparsest k-Subgraph in graph classes where independent set is polynomial-time solvable, such as subclasses of perfect graphs. Our two main results are the NP-hardness of Sparsest k-Subgraph on chordal graphs, and a greedy 2-approximation algorithm. Finally, we also show how to derive a PTAS for Sparsest k-Subgraph on proper interval graphs.
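To make the problem concrete, here is a minimal greedy sketch: repeatedly add the vertex that adds the fewest edges to the already-selected set. This is one natural greedy rule for illustration only; it is not necessarily the exact 2-approximation analyzed in the paper.

```python
# Greedy sketch for Sparsest k-Subgraph (illustrative rule, not the paper's analysis).
def greedy_sparsest_k_subgraph(n, edges, k):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    selected = set()
    for _ in range(k):
        # cost of v = number of edges it would add to the induced subgraph
        v = min((u for u in range(n) if u not in selected),
                key=lambda u: len(adj[u] & selected))
        selected.add(v)
    induced = sum(1 for u, v in edges if u in selected and v in selected)
    return selected, induced

# Tiny example: a triangle plus an isolated vertex; picking 3 vertices can avoid
# taking the whole triangle.
print(greedy_sparsest_k_subgraph(4, [(0, 1), (1, 2), (0, 2)], 3))
```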

4.
Providing high-level tools for parallel programming while sustaining a high level of performance has been a challenge that techniques like Domain Specific Embedded Languages (DSELs) try to solve. In previous works, we investigated the design of such a DSEL, NT2, which provides a Matlab-like syntax for parallel numerical computations inside a C++ library. In this paper, we show how NT2 has been redesigned for shared memory systems in an extensible and portable way. The new NT2 design relies on a tiered parallel skeleton system built using asynchronous task management and automatic compile-time taskification of user-level code. We describe how this system can operate various shared memory runtimes and evaluate the design using two benchmarks implementing linear algebra algorithms.
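NT2 itself is a C++ library and its taskification happens at compile time; the sketch below is only a Python analogy of the "parallel skeleton over asynchronous tasks" idea: a map skeleton chunks the work and submits each chunk as a task to a runtime (here a thread pool). All names are invented for illustration.

```python
# Rough analogy of a map skeleton backed by asynchronous tasks.
from concurrent.futures import ThreadPoolExecutor

def parallel_map_skeleton(fn, data, chunks=4, workers=4):
    size = max(1, len(data) // chunks)
    blocks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(lambda b: [fn(x) for x in b], blk) for blk in blocks]
        out = []
        for f in futures:                # gather results in submission order
            out.extend(f.result())
    return out

print(parallel_map_skeleton(lambda x: x * x, list(range(10))))
```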

5.
In this paper we consider both the maximization variant Max Rep and the minimization variant Min Rep of the famous Label Cover problem. So far the best approximation ratio known for these two problems was O(√n), and indeed some authors suggested the possibility that this ratio is the best approximation factor for these two problems. We show, in fact, that there is an O(n^{1/3})-approximation algorithm for Max Rep and an O(n^{1/3} log^{2/3} n)-approximation algorithm for Min Rep. In addition, we also exhibit a randomized reduction from Densest k-Subgraph to Max Rep, showing that any approximation factor for Max Rep implies the same factor (up to a constant) for Densest k-Subgraph.

6.
A degree-constrained graph orientation of an undirected graph G is an assignment of a direction to each edge in G such that the outdegree of every vertex in the resulting directed graph satisfies a specified lower and/or upper bound. Such graph orientations have been studied for a long time and various characterizations of their existence are known. In this paper, we consider four related optimization problems introduced by Asahiro et al. (LNCS 7422, 332–343, 2012): for any fixed non-negative integer W, the problems MAX W-LIGHT, MIN W-LIGHT, MAX W-HEAVY, and MIN W-HEAVY take as input an undirected graph G and ask for an orientation of G that maximizes or minimizes the number of vertices with outdegree at most W or at least W.
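To make the four objectives concrete, the helper below takes one particular orientation (each edge given as an ordered pair u → v) and counts how many vertices are "W-light" (outdegree at most W) and "W-heavy" (outdegree at least W); these are the quantities the four problems maximize or minimize over all orientations. No optimization is attempted here, and the function name is an assumption for illustration.

```python
# Count W-light and W-heavy vertices for a fixed orientation.
def light_heavy_counts(vertices, oriented_edges, W):
    outdeg = {v: 0 for v in vertices}
    for u, _ in oriented_edges:
        outdeg[u] += 1
    light = sum(1 for v in vertices if outdeg[v] <= W)
    heavy = sum(1 for v in vertices if outdeg[v] >= W)
    return light, heavy

# Path a-b-c oriented a->b->c with W = 1: all three vertices are 1-light,
# and a and b are 1-heavy.
print(light_heavy_counts(["a", "b", "c"], [("a", "b"), ("b", "c")], W=1))  # (3, 2)
```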

7.
Let t be a program-predicate that tests another program p with respect to a given postcondition. Concrete tests d (input data of the program p) are the input data for t. Consider the program t when the values of its argument d are unknown. A proof that the predicate t is true for all input data of the program p is then a verification of p with respect to the given postcondition. In this paper, we describe experiments on automatic verification of a number of cache coherence protocols with the SCP4 supercompiler (an optimizer of programs written in the REFAL-5 functional language).

8.
We present new filtering algorithms for the Disjunctive and Cumulative constraints, each of which improves the complexity of the state-of-the-art algorithms by a factor of log n. We show how to perform Time-Tabling and Detectable Precedences in linear time on the Disjunctive constraint. Furthermore, we present a linear-time Overload Checking for the Disjunctive and Cumulative constraints. Finally, we show how the Not-first/Not-last rule can be enforced in quadratic time for the Cumulative constraint. These algorithms rely on the union-find data structure, which we exploit to introduce a new data structure that we call the time line. This data structure provides constant-time operations that were previously implemented in logarithmic time by the Θ-tree data structure. Experiments show that these new algorithms are competitive even for a small number of tasks and outperform existing algorithms as the number of tasks increases. We also show that the time line can be used to solve specific scheduling problems.
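The abstract says the time line is built on union-find; in the same spirit, here is the classic union-find trick for scheduling unit tasks, where finding the earliest still-free time slot at or after t takes near-constant amortized time. This is an illustration of the underlying idea only, not a reconstruction of the paper's exact data structure, and the class name is invented.

```python
# Union-find over time slots: occupy the earliest free slot >= t.
class TimeSlots:
    def __init__(self, horizon):
        self.parent = list(range(horizon + 2))   # slot i -> earliest free slot >= i

    def _find(self, t):
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]   # path halving
            t = self.parent[t]
        return t

    def occupy_earliest(self, t):
        slot = self._find(t)            # earliest free slot at or after t
        self.parent[slot] = slot + 1    # now taken: point to its successor
        return slot

ts = TimeSlots(horizon=10)
print([ts.occupy_earliest(3) for _ in range(4)])   # [3, 4, 5, 6]
```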

9.
Task assignment on the Internet has been widely applied to many areas, e.g., online labor markets, online paper review, and social activity organization. In this paper, we are concerned with the task assignment problem related to the online labor market, termed ClusterHire. We improve the definition of the ClusterHire problem and propose an efficient and effective algorithm, entitled Influence. In addition, we place a participation constraint on ClusterHire: it constrains the load of each expert in order to keep all members from overworking. For the participation-constrained ClusterHire problem, we devise two algorithms, named ProjectFirst and Era. The former generates a participation-constrained team by adding experts to an initial team, and the latter generates a participation-constrained team by removing the experts with the minimum influence from the universe of experts. The experimental evaluations indicate that 1) Influence performs better than the state-of-the-art algorithms in terms of effectiveness and time efficiency; 2) ProjectFirst performs better than Era in terms of time efficiency, yet Era performs better than ProjectFirst in terms of effectiveness.
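As a toy sketch of the removal-based (Era-like) direction described above: start from all experts and repeatedly drop the expert with the smallest contribution, as long as every required skill stays covered, until the team-size constraint is met. The contribution score, skill model, and constraint used here are invented purely for illustration; the paper's influence measure and participation constraint differ.

```python
# Illustrative removal-based team shrinking (not the paper's Era algorithm).
def shrink_team(experts, required_skills, max_team_size):
    team = dict(experts)                                  # name -> set of skills
    def covered(t):
        return bool(t) and required_skills <= set().union(*t.values())
    while len(team) > max_team_size:
        # removable experts = those whose removal keeps all skills covered
        removable = [e for e in team
                     if covered({k: v for k, v in team.items() if k != e})]
        if not removable:
            break                                          # constraint not satisfiable
        # drop the expert contributing the fewest skills (stand-in for "influence")
        team.pop(min(removable, key=lambda e: len(team[e])))
    return team

experts = {"ann": {"ml", "nlp"}, "bob": {"nlp"}, "eve": {"db", "ml"}, "joe": {"db"}}
print(shrink_team(experts, required_skills={"ml", "nlp", "db"}, max_team_size=2))
```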

10.
This paper discusses the implementation of deformable ring gears in Multi-Body Simulation models (MBS models) of planetary gearboxes, using the example of a wind turbine (WT). For this purpose, an add-on to the MBS software Simpack was developed and tested by the project partners Bosch Rexroth and the Institute for Machine Elements and Machine Design (IME) at RWTH Aachen University. The presented research includes a measurement campaign on a test rig owned by Bosch Rexroth to obtain data for a validation of the newly developed add-on.

11.
In the framework of parameterized complexity, exploring how one parameter affects the complexity of a different parameterized (or unparameterized) problem is of general interest. A well-developed example is the investigation of how the parameter treewidth influences the complexity of (other) graph problems. The reason why such investigations are of general interest is that real-world input distributions for computational problems often inherit structure from the natural computational processes that produce the problem instances (not necessarily in obvious or well-understood ways). The max leaf number ml(G) of a connected graph G is the maximum number of leaves in a spanning tree of G. Exploring questions analogous to the well-studied case of treewidth, we can ask: how hard is it to solve 3-Coloring, Hamilton Path, Minimum Dominating Set, Minimum Bandwidth, or many other problems, for graphs of bounded max leaf number? What optimization problems are W[1]-hard under this parameterization? We do two things:
  (1) We describe much improved FPT algorithms for a large number of graph problems, for input graphs G for which ml(G) ≤ k, based on the polynomial-time extremal structure theory canonically associated to this parameter. We consider improved algorithms both from the point of view of kernelization bounds and in terms of improved fixed-parameter tractable (FPT) runtimes O*(f(k)).
  (2) The way that we obtain these concrete algorithmic results is general and systematic. We describe the approach and raise programmatic questions.

12.
Given a large collection of co-evolving online activities, such as searches for the keywords “Xbox”, “PlayStation” and “Wii”, how can we find patterns and rules? Are these keywords related? If so, are they competing against each other? Can we forecast the volume of user activity for the coming month? We conjecture that online activities compete for user attention in the same way that species in an ecosystem compete for food. We present EcoWeb (Ecosystem on the Web), an intuitive model designed as a non-linear dynamical system for mining large-scale co-evolving online activities. Our second contribution is a novel, parameter-free, and scalable fitting algorithm, EcoWeb-Fit, that estimates the parameters of EcoWeb. Extensive experiments on real data show that EcoWeb is effective, in that it can capture long-range dynamics and meaningful patterns such as seasonalities, and practical, in that it can provide accurate long-range forecasts. EcoWeb consistently outperforms existing methods in terms of both accuracy and execution speed.
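EcoWeb's equations are not reproduced in the abstract, so the sketch below is only a generic competitive Lotka-Volterra simulation illustrating the analogy it draws (online activities competing for attention like species competing for food). The coefficients and function names are invented for the demo and are not EcoWeb's fitted parameters.

```python
# Generic competitive dynamics: dx_i/dt = x_i * (growth_i - sum_j A[i][j] * x_j),
# integrated with simple Euler steps.
def simulate_competition(x0, growth, interaction, steps=200, dt=0.05):
    x = list(x0)
    history = [tuple(x)]
    for _ in range(steps):
        dx = [x[i] * (growth[i] - sum(interaction[i][j] * x[j] for j in range(len(x))))
              for i in range(len(x))]
        x = [max(0.0, x[i] + dt * dx[i]) for i in range(len(x))]
        history.append(tuple(x))
    return history

# Three "activities" (think Xbox / PlayStation / Wii queries) competing for attention.
hist = simulate_competition(
    x0=[0.1, 0.1, 0.1],
    growth=[1.0, 0.9, 0.8],
    interaction=[[1.0, 0.6, 0.5], [0.6, 1.0, 0.5], [0.5, 0.5, 1.0]],
)
print([round(v, 3) for v in hist[-1]])   # long-run activity levels
```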

13.
Many program optimizations require exact knowledge of the sets of array elements that are referenced in or that flow between statements or procedures. Some examples are array privatization, generation of communications in distributed memory machines, or compile-time optimization of cache behavior in hierarchical memory machines. Exact array region analysis is introduced in this article. These regions exactly represent the effects of statements and procedures upon array variables. To represent the flow of these data, we also introduce two new types of array region analyses: IN and OUT regions. The intraprocedural propagation is presented, as well as a general linear framework for interprocedural analyses, which handles array reshapes. The intra- and interprocedural propagation of array regions is implemented in PIPS, the interprocedural parallelizer of Fortran programs developed at École des mines de Paris.

14.
Satisfiability Modulo Theories (SMT) has been widely investigated over the last decade. Recently, researchers have extended SMT to the optimization problem over linear arithmetic constraints. To the best of our knowledge, Symba and OPT-MathSAT are the two most efficient solvers available for this problem. The key algorithms used by Symba and OPT-MathSAT consist of a loop of two procedures: 1) critical finding, which detects a critical point that is very likely to be globally optimal, and 2) global checking, which confirms that the critical point really is globally optimal. In this paper, we propose a new approach based on the Simplex method widely used in operations research. Our fundamental idea is to find several critical points by constructing and solving a series of linear problems with the Simplex method. Our approach replaces the critical-finding algorithms of Symba and OPT-MathSAT, reducing the runtime of critical finding and decreasing the number of executions of global checking. The correctness of our approach is proved. The experiments evaluate our implementation against Symba and OPT-MathSAT on a critical class of problems in real-time systems. Our approach outperforms Symba on 99.6% of the benchmarks and is superior to OPT-MathSAT in large-scale cases where the number of tasks is more than 24. The experimental results demonstrate that our approach has great potential and competitiveness for this optimization problem.
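The full approach loops critical finding and global checking over an SMT formula; only the inner LP step is illustrated below, using SciPy's linear programming solver on a tiny, invented linear-arithmetic objective. The SMT integration and the global check are not shown.

```python
# One "critical finding" style LP solve, assuming SciPy is available.
from scipy.optimize import linprog

# maximize x + 2y  subject to  x + y <= 4,  x <= 3,  y <= 3,  x, y >= 0
res = linprog(c=[-1, -2],                  # linprog minimizes, so negate the objective
              A_ub=[[1, 1], [1, 0], [0, 1]],
              b_ub=[4, 3, 3],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)                     # optimal point and maximum value
```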

15.
The paper studies three fundamental problems in graph analytics: computing connected components (CCs), biconnected components (BCCs), and 2-edge-connected components (ECCs) of a graph. With the recent advent of big data, developing efficient distributed algorithms for computing CCs, BCCs and ECCs of a big graph has received increasing interest. As with the existing research efforts, we focus on the Pregel programming model, while the techniques may be extended to other programming models including MapReduce and Spark. The state-of-the-art techniques for computing CCs and BCCs in Pregel incur O(m × #supersteps) total costs for both data communication and computation, where m is the number of edges in a graph and #supersteps is the number of supersteps. Since the network communication speed is usually much slower than the computation speed, communication costs are the dominant costs of the total running time in the existing techniques. In this paper, we propose a new paradigm based on graph decomposition to compute CCs and BCCs with O(m) total communication cost. The total computation costs of our techniques are also smaller than those of the existing techniques in practice, though theoretically almost the same. Moreover, we also study distributed computation of ECCs. We are the first to study this problem, and an approach with O(m) total communication cost is proposed. Comprehensive empirical studies demonstrate that our approaches can outperform the existing techniques by one order of magnitude in terms of total running time.
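For context, here is a single-machine simulation of the baseline Pregel-style computation of connected components: in every superstep, each vertex adopts the smallest label seen among itself and its neighbors. This is the standard label-propagation baseline whose O(m × #supersteps) communication the decomposition-based approach improves on; it is not the paper's new algorithm.

```python
# Synchronous label propagation simulating Pregel supersteps.
def pregel_cc(vertices, edges):
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    label = {v: v for v in vertices}                 # initial label = own id
    supersteps = 0
    while True:                                      # one loop iteration ~ one superstep
        supersteps += 1
        new = {v: min([label[v]] + [label[u] for u in adj[v]]) for v in vertices}
        if new == label:
            break
        label = new
    return label, supersteps

print(pregel_cc([0, 1, 2, 3, 4], [(0, 1), (1, 2), (3, 4)]))
```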

16.
We study the computational complexity of the existence and verification problems for wonderfully stable partitions (WSPE and WSPV) and of the existence problem for strictly core stable coalition structures (SCSCS) in enemy-oriented hedonic games. In this note, we show that WSPV is NP-complete and that both WSPE and SCSCS are DP-hard, where DP is the second level of the Boolean hierarchy, and we discuss an approach for classifying the latter two problems in terms of their complexity.

17.
We investigate the approximation ratio of the solutions achieved after a one-round walk in linear congestion games. We consider the social functions Sum, defined as the sum of the players' costs, and Max, defined as the maximum cost per player, as measures of the quality of a given solution. For the social function Sum and one-round walks starting from the empty strategy profile, we close the gap between the upper bound of 2 + √5 ≈ 4.24 given in Christodoulou et al. (Proceedings of the 23rd International Symposium on Theoretical Aspects of Computer Science (STACS), LNCS, vol. 3884, pp. 349–360, Springer, Berlin, 2006) and the lower bound of 4 derived in Caragiannis et al. (Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP), LNCS, vol. 4051, pp. 311–322, Springer, Berlin, 2006) by providing a matching lower bound whose construction and analysis require non-trivial arguments. For the social function Max, for which, to the best of our knowledge, no results were known prior to this work, we show an approximation ratio of Θ(n^{3/4}) (resp. Θ(n√n)), where n is the number of players, for one-round walks starting from the empty (resp. an arbitrary) strategy profile.
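The sketch below simulates a one-round walk from the empty profile in a small linear congestion game: each player, in turn, picks the strategy minimizing its own cost given the choices already made, and the social cost Sum is evaluated on the final profile. The instance and function names are invented; this only illustrates the setting, not the lower-bound construction.

```python
# One-round walk from the empty strategy profile in a linear congestion game.
def one_round_walk(players, latency):
    """players: list of strategy lists (each strategy is a set of resources);
    latency[e] = (a, b): resource e used by x players costs a*x + b per user."""
    def unit_cost(e, x):
        a, b = latency[e]
        return a * x + b
    load = {e: 0 for e in latency}
    chosen = []
    for strategies in players:                         # each player moves exactly once
        best = min(strategies,
                   key=lambda s: sum(unit_cost(e, load[e] + 1) for e in s))
        chosen.append(best)
        for e in best:
            load[e] += 1
    total = sum(unit_cost(e, load[e]) for s in chosen for e in s)   # social cost Sum
    return chosen, total

latency = {"r1": (1, 0), "r2": (2, 0), "r3": (1, 1)}
players = [[{"r1"}, {"r2"}], [{"r1"}, {"r3"}], [{"r1", "r2"}, {"r3"}]]
print(one_round_walk(players, latency))
```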

18.
This paper presents a distributed (Bulk-Synchronous Parallel, or BSP) algorithm to compute on-the-fly whether a structured model of a security protocol satisfies a CTL* formula. Using the structured nature of the security protocols allows us to design a simple method to distribute the state space under consideration in a need-driven fashion. Based on this distribution of the states, the algorithm for logical checking of an LTL formula can be simplified and optimised, allowing, with a few tricky modifications, the design of an efficient algorithm for CTL* checking. Some prototype implementations have been developed, allowing us to run benchmarks to investigate the parallel behaviour of our algorithms.

19.
How can we discover interesting patterns from time-evolving high-speed data streams? How can we analyze the data streams quickly and accurately, with little space overhead? How can we guarantee that the found patterns are self-consistent? High-speed data streams have been receiving increasing attention due to their wide applications such as sensors, network traffic, social networks, etc. The most fundamental task on a data stream is frequent pattern mining; in particular, focusing on recentness is important in real applications. In this paper, we develop two algorithms for discovering recently frequent patterns in data streams. First, we propose TwMinSwap to find the top-k recently frequent items in data streams; it is a deterministic version of our motivating algorithm TwSample, which provides theoretical guarantees based on item sampling. TwMinSwap improves on TwSample in terms of speed, accuracy, and memory usage. Both require only O(k) memory space and do not require any prior knowledge about the stream, such as its length or the number of distinct items in it. Second, we propose TwMinSwap-Is to find the top-k recently frequent itemsets in data streams. We especially focus on keeping self-consistency of the discovered itemsets, which is the most important property for reliable results, while using O(k) memory space under the assumption of a constant itemset size. Through extensive experiments, we demonstrate that TwMinSwap outperforms all competitors in terms of accuracy and memory usage, with fast running time. We also show that TwMinSwap-Is is more accurate than the competitor and discovers recently frequent itemsets with reasonably large sizes (at most 5–7) depending on the dataset. Thanks to TwMinSwap and TwMinSwap-Is, we report interesting discoveries in real-world data streams, including the difference in trends between the winning and losing U.S. presidential candidates, and temporal human contact patterns.
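TwMinSwap itself is not specified in the abstract, so the following is only a simple O(k)-memory illustration of the "top-k recently frequent items" task: counts decay exponentially over time to favour recent items, and when a new item arrives while the table is full it replaces the current minimum (a Space-Saving-style swap). The decay rate and function name are assumptions; this is not the paper's algorithm.

```python
# O(k)-memory tracker of recently frequent items (illustrative only).
def track_topk(stream, k, decay=0.9):
    counts = {}                                   # at most k entries at any time
    for item in stream:
        for x in counts:                          # exponential decay favours recent items
            counts[x] *= decay
        if item in counts:
            counts[item] += 1.0
        elif len(counts) < k:
            counts[item] = 1.0
        else:
            victim = min(counts, key=counts.get)  # evict the weakest entry
            counts[item] = counts.pop(victim) + 1.0
    return sorted(counts, key=counts.get, reverse=True)

print(track_topk(list("aabbbccccddddd"), k=2))    # top-2, biased toward recent items
```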

20.