Similar Documents
20 similar documents found.
1.
Inspired by applications in parallel computing, we analyze the setting of work stealing in multithreaded computations. We obtain tight upper bounds on the number of steals when the computation can be modeled by rooted trees. In particular, we show that if the computation with n processors starts with one processor having a complete k-ary tree of height h (and the remaining n − 1 processors having nothing), the maximum possible number of steals is \(\sum_{i=1}^{n}(k-1)^{i}\binom{h}{i}\).
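A short Python helper that merely evaluates the stated bound (it is an illustration of the closed-form expression, not of the work-stealing process itself); the function name and parameters are chosen for this sketch:

```python
from math import comb

def max_steals(n: int, k: int, h: int) -> int:
    # Evaluates sum_{i=1}^{n} (k-1)^i * C(h, i), the claimed upper bound on the
    # number of steals for n processors and a complete k-ary tree of height h.
    return sum((k - 1) ** i * comb(h, i) for i in range(1, n + 1))

print(max_steals(n=4, k=2, h=10))  # example: binary tree of height 10, 4 processors
```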

2.
It is known that a controllable system x′ = Bx + Du, where x is an n-dimensional vector, can be transferred from an arbitrary initial state x(0) = x_0 to an arbitrary final state x(T) = x_T by a control function u(t) in the form of a polynomial in powers of t. In this work, the minimum degree of such a polynomial is revised: it equals 2p + 1, where p − 1 is the minimal number of matrices in the controllability matrix (Kalman criterion) whose rank equals n. A simpler and more natural algorithm is obtained: it first finds the coefficients of a certain polynomial from a system of algebraic equations involving the Wronskian and then, by differentiation, constructs the state and control functions.

3.
We provide a constant time schedulability test for an on-line multiprocessor server handling aperiodic tasks. Dhall's effect is avoided by dividing the tasks into two priority classes based on task utilization: heavy and light. We prove that if the load on the multiprocessor server stays below U_threshold = 3 − √7 ≈ 35.425%, the server can accept an incoming aperiodic task and guarantee that the deadlines of all accepted tasks will be met. The same number 35.425% is also the threshold for a task to be characterized as heavy. The bound U_threshold = 3 − √7 ≈ 35.425% is easy to use, but not sharp if we know the number of processors in the multiprocessor system. Assuming the server to be equipped with m processors, we calculate a formula for the sharp bound U_threshold(m), which converges to U_threshold from above as m → ∞. The results are based on a utilization function u(x) = 2(1 − x)/(2 + √(2 + 2x)). By using this function, the performance of the multiprocessor server can in some cases be improved beyond U_threshold(m) by paying the extra overhead of monitoring the individual utilization of the current tasks.
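A minimal sketch of how such a constant-time admission test could look, assuming the simplest reading of the bound: a task counts as heavy at or above 3 − √7, and an incoming task is accepted only if the total utilization including it stays below that value. The names and the exact acceptance rule are ours; the paper's test may differ in detail:

```python
import math

U_THRESHOLD = 3 - math.sqrt(7)  # ≈ 0.35425

def is_heavy(utilization: float) -> bool:
    # Tasks at or above the threshold fall into the high-priority "heavy" class.
    return utilization >= U_THRESHOLD

def admit(current_load: float, new_task_utilization: float) -> bool:
    # Constant-time test: accept only if the resulting load stays below the threshold.
    return current_load + new_task_utilization < U_THRESHOLD
```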

4.
We analyze the necessary existence conditions for an (a, d)-distance antimagic labeling of a graph G = (V, E) of order n. We obtain theorems that expand the family of non-(a, d)-distance antimagic graphs. In particular, we prove that the crown P_n ⊙ P_1 does not admit an (a, 1)-distance antimagic labeling for n ≥ 2 if a ≥ 2. We determine the values of a for which the path P_n can be an (a, 1)-distance antimagic graph. Among regular graphs, we investigate the case of a circulant graph.
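To make the labeling notion concrete, here is a brute-force checker written for this listing (function name and encoding are ours): it asks whether some bijective labeling f: V → {1, ..., n} makes the vertex weights w(v) = Σ of labels of neighbours of v equal, in some order, to a, a+d, ..., a+(n−1)d. It is only usable for very small graphs:

```python
from itertools import permutations

def is_distance_antimagic(adj, a, d):
    # adj: adjacency lists of a graph on vertices 0..n-1.
    n = len(adj)
    target = sorted(a + i * d for i in range(n))
    for labels in permutations(range(1, n + 1)):
        weights = sorted(sum(labels[u] for u in adj[v]) for v in range(n))
        if weights == target:
            return True
    return False

# Path P4 (0-1-2-3), checking one particular pair (a, d).
print(is_distance_antimagic([[1], [0, 2], [1, 3], [2]], a=2, d=1))
```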

5.
Given two sorted arrays \({A=(a_1,a_2,\ldots ,a_{n_1})}\) and \({B=(b_1,b_2,\ldots ,b_{n_2})}\) whose elements are drawn from a linear range in n, where n = max(n_1, n_2), the merging of two sorted arrays is one of the fundamental problems in computer science. The main contribution of this work is a new merging algorithm on an EREW PRAM. The algorithm is cost optimal, deterministic and simple. It uses \({\frac{n}{\log{n}}}\) processors and O(n) storage. We also give the conditions under which the algorithm runs in constant time on an EREW PRAM.
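For reference, the standard sequential two-pointer merge in Python; this is only the textbook O(n_1 + n_2) baseline for the problem, not the parallel EREW PRAM algorithm of the abstract:

```python
def merge(a, b):
    # Sequential merge of two sorted lists into one sorted list.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

print(merge([1, 3, 5], [2, 4, 6, 8]))  # [1, 2, 3, 4, 5, 6, 8]
```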

6.
Given a tree T=(V,E) of n nodes such that each node v is associated with a value-weight pair (val_v, w_v), where the value val_v is a real number and the weight w_v is a non-negative integer, the density of T is defined as \(\frac{\sum_{v\in V}{\mathit{val}}_{v}}{\sum_{v\in V}w_{v}}\). A subtree of T is a connected subgraph (V′,E′) of T, where V′ ⊆ V and E′ ⊆ E. Given two integers w_min and w_max, the weight-constrained maximum-density subtree problem on T is to find a maximum-density subtree T′=(V′,E′) satisfying \(w_{\min}\le \sum_{v\in V'}w_{v}\le w_{\max}\). In this paper, we first present an O(w_max n)-time algorithm to find a weight-constrained maximum-density path in a tree T, and then present an O(w_max^2 n)-time algorithm to find a weight-constrained maximum-density subtree in T. Finally, given a node subset S ⊆ V, we also present an O(w_max^2 n)-time algorithm to find a weight-constrained maximum-density subtree in T which covers all the nodes in S.
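To make the objective concrete, a small brute-force sketch for the special case where T is itself a path: it enumerates all contiguous subpaths and keeps the densest one whose total weight lies in [w_min, w_max]. This is an O(n²) illustration of the problem statement written for this listing, not the paper's O(w_max·n) algorithm:

```python
def max_density_subpath(vals, weights, w_min, w_max):
    # Brute force over contiguous subpaths of a path graph given as two lists.
    best = None
    for i in range(len(vals)):
        v_sum, w_sum = 0.0, 0
        for j in range(i, len(vals)):
            v_sum += vals[j]
            w_sum += weights[j]
            if w_min <= w_sum <= w_max and w_sum > 0:
                density = v_sum / w_sum
                if best is None or density > best:
                    best = density
    return best

print(max_density_subpath([3.0, -1.0, 4.0, 2.0], [1, 1, 2, 1], w_min=2, w_max=3))
```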

7.
For q-ary Hamming spaces we address the problem of the minimum number of points such that any point of the space is uniquely determined by its (Hamming) distances to them. It is conjectured that for a fixed q and growing dimension n of the Hamming space this number asymptotically behaves as 2n / log_q n. We prove this conjecture for q = 3 and q = 4; for q = 2 its validity has been known for half a century.
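A brute-force checker of the underlying notion, under the natural formalization that a set of "anchor" points determines the space if the vectors of Hamming distances to them are pairwise distinct; the function name and parameters are ours, and it is only practical for very small q and n:

```python
from itertools import product

def resolves(q, n, anchors):
    # True iff every point of {0,...,q-1}^n is uniquely determined by its
    # Hamming distances to the given anchor points.
    def dist(x, y):
        return sum(a != b for a, b in zip(x, y))
    seen = set()
    for x in product(range(q), repeat=n):
        sig = tuple(dist(x, a) for a in anchors)
        if sig in seen:
            return False
        seen.add(sig)
    return True

print(resolves(3, 2, [(0, 0), (0, 1), (1, 0)]))  # these three anchors resolve {0,1,2}^2
```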

8.
We study the physical behavior of the transition of a 5D perfect fluid universe from an early decelerating phase to the current accelerating phase in the framework of f(R, T) theory of gravity in the presence of domain walls. The fifth dimension is not observed because it is compact. To determine the solution of the field equations, we use the concept of a time-dependent deceleration parameter which yields the scale factor a(t) = sinh^{1/n}(αt), where n and α are positive constants. For 0 < n ≤ 1, this generates a class of accelerating models, while for n > 1 the universe attains a phase transition from an early decelerating phase to the present accelerating phase, consistent with the recent observations. Some physical and geometric properties of the models are also discussed.

9.
We consider a class of graphs G(n, r, s) = (V(n, r), E(n, r, s)) defined as follows:
$$V(n,r) = \{ x = (x_1, x_2, \ldots, x_n) : x_i \in \{0,1\},\; x_1 + x_2 + \cdots + x_n = r \},\qquad E(n,r,s) = \{ \{x,y\} : (x,y) = s \},$$
where (x, y) is the Euclidean scalar product. We study random subgraphs G(G(n, r, s), p) with edges independently chosen from the set E(n, r, s) with probability p each. We find nontrivial lower and upper bounds on the clique number of such graphs.
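A small Python sketch that builds G(n, r, s) explicitly for tiny parameters, representing each 0/1 vector by the set of coordinates equal to 1 (so the scalar product of two vertices is the size of the intersection of their supports); the helper name is ours and this is only useful for checking small cases:

```python
from itertools import combinations

def build_G(n, r, s):
    # Vertices: 0/1 vectors with exactly r ones, encoded by their supports.
    vertices = [frozenset(c) for c in combinations(range(n), r)]
    # Edge {x, y} iff the scalar product (x, y) equals s, i.e. |supp(x) ∩ supp(y)| = s.
    edges = [(x, y) for x, y in combinations(vertices, 2) if len(x & y) == s]
    return vertices, edges

V, E = build_G(n=6, r=3, s=1)
print(len(V), len(E))  # 20 vertices; edges join triples sharing exactly one coordinate
```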

10.
The distance graph G(n, 2, 1) is a graph where vertices are identified with two-element subsets of {1, 2, ..., n}, and two vertices are connected by an edge whenever the corresponding subsets have exactly one common element. A random subgraph G_p(n, 2, 1) in the Erdős–Rényi model is obtained by selecting each edge of G(n, 2, 1) with probability p independently of the other edges. We find a lower bound on the independence number of the random subgraph G_{1/2}(n, 2, 1).

11.
The recognition of primitives in digital geometry is deeply linked with separability problems. This framework leads us to consider the following problem of pattern recognition: given a finite lattice set \(S\subset \mathbb {Z}^d\) and a positive integer n, is it possible to separate S from \(\mathbb {Z}^d \setminus S\) by n half-spaces? In other words, does there exist a polyhedron P defined by at most n half-spaces satisfying \(P\cap \mathbb {Z}^d = S\)? The difficulty comes from the infinite number of constraints generated by all the points of \(\mathbb {Z}^d\setminus S\). It makes the decidability of the problem non-straightforward, since the classical algorithms of polyhedral separability cannot be applied in this framework. We conjecture that the problem is nevertheless decidable and prove it under some assumptions: in arbitrary dimension, if the interior of the convex hull of S contains at least one lattice point, or if the dimension d is 2, or if the dimension \(d=3\) and S is not in a specific configuration of lattice width 0 or 1. The proof strategy is to reduce the set of outliers \(\mathbb {Z}^d\setminus S\) to its minimal elements according to a partial order "is in the shadow of." These minimal elements are called the lattice jewels of S. We prove that under some assumptions, the set S admits only a finite number of lattice jewels. The result about the decidability of the problem is a corollary of this fundamental property.

12.
We address the problem of minimizing power consumption when broadcasting a message from one node to all the other nodes in a radio network. To enable power savings for such a problem, we introduce a compelling new data streaming problem which we call the Bad Santa problem. Our results on this problem apply for any situation where: (1) a node can listen to a set of n nodes, out of which at least half are non-faulty and know the correct message; and (2) each of these n nodes sends according to some predetermined schedule which assigns each of them its own unique time slot. In this situation, we show that in order to receive the correct message with probability 1, it is necessary and sufficient for the listening node to listen to a \(\Theta(\sqrt{n})\) expected number of time slots. Moreover, if we allow for repetitions of transmissions so that each sending node sends the message O(log* n) times (i.e., in O(log* n) rounds, each consisting of the n time slots), then listening to an O(log* n) expected number of time slots suffices. We show that this is near optimal. We describe an application of our result to the popular grid model for a radio network. Each node in the network is located on a point in a two-dimensional grid, and whenever a node sends a message m, all awake nodes within L_∞ distance r receive m. In this model, up to \(t<\frac{r}{2}(2r+1)\) nodes within any (2r+1) by (2r+1) square in the grid can suffer Byzantine faults. Moreover, we assume that the nodes that suffer Byzantine faults are chosen and controlled by an adversary that knows everything except for the random bits of each non-faulty node. This type of adversary models worst-case behavior due to malicious attacks on the network; mobile nodes moving around in the network; or static nodes losing power or ceasing to function. Let n = r(2r+1). We show how to solve the broadcast problem in this model with each node sending and receiving an expected \(O(n\log^{2}{|m|}+\sqrt{n}|m|)\) bits, where |m| is the number of bits in m, and, after broadcasting a fingerprint of m, each node is awake for only an expected \(O(\sqrt{n})\) time slots. Moreover, for t ≤ (1 − ε)(r/2)(2r+1), for any constant ε > 0, we can achieve even better energy savings. In particular, if we allow each node to send O(log* n) times, we achieve reliable broadcast with each node sending O(n log^2 |m| + (log* n)|m|) bits and receiving an expected O(n log^2 |m| + (log* n)|m|) bits, and, after broadcasting a fingerprint of m, each node is awake for only an expected O(log* n) time slots. Our results compare favorably with previous protocols that required each node to send Θ(|m|) bits, receive Θ(n|m|) bits and be awake for Θ(n) time slots.

13.
Learning from data that are too big to fit into memory poses great challenges to currently available learning approaches. Averaged n-Dependence Estimators (AnDE) allows for flexible learning from out-of-core data by varying the value of n (the number of super parents). Hence, AnDE is especially appropriate for learning from large quantities of data. The memory requirement of AnDE, however, increases combinatorially with the number of attributes and the parameter n. In large-data learning, the number of attributes is often large, and we also expect a high n to achieve low-bias classification. In order to achieve the lower bias of AnDE with higher n but with a smaller memory requirement, we propose a memory-constrained selective AnDE algorithm in which two passes of learning through the training examples are involved. The first pass performs attribute selection on super parents according to the available memory, whereas the second one learns an AnDE model with parents only on the selected attributes. Extensive experiments show that the new selective AnDE has considerably lower bias and prediction error relative to A\(n'\)DE, where \(n' = n-1\), while maintaining the same space complexity and similar time complexity. The proposed algorithm works well on categorical data; numerical data sets need to be discretized first.

14.
The Laplace matrix is a square matrix L = (ℓ_{ij}) ∈ ℝ^{n×n} in which all off-diagonal elements are nonpositive and all row sums are equal to zero. Each Laplace matrix corresponds to a weighted digraph with positive arc weights. We study the problem of whether the Laplace matrix spectrum is real for digraphs of a special type, consisting of two "counter" (oppositely directed) Hamiltonian cycles in one of which one or two arcs are removed. The characteristic polynomials of the Laplace matrices of these digraphs are expressed through polynomials Z_n(x) that can be obtained from the Chebyshev second-kind polynomials P_{2n}(y) by the substitution y^2 = x. The obtained results relate to properties of products of Chebyshev second-kind polynomials. A direct method for computing the spectrum of the Laplace matrix of such a circuit is given. The obtained results can be used for computing the number of spanning trees in digraphs of the studied type. One possible practical application of these results is the investigation of network topology and the development of new Internet protocols.
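For experimenting with this class of matrices, a small numpy sketch (ours, assuming unit arc weights) that assembles the Laplace matrix of a digraph formed by two oppositely directed Hamiltonian cycles, optionally with some arcs removed, and computes its spectrum numerically:

```python
import numpy as np

def counter_cycles_laplacian(n, removed_arcs=()):
    # Digraph on n nodes: the cycle 0 -> 1 -> ... -> n-1 -> 0 plus the reversed
    # ("counter") cycle, with unit arc weights; removed_arcs lists arcs to drop.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = 1.0   # forward cycle
        A[i, (i - 1) % n] = 1.0   # counter cycle
    for (u, v) in removed_arcs:
        A[u, v] = 0.0
    # Laplace matrix: row sums zero, off-diagonal entries nonpositive.
    return np.diag(A.sum(axis=1)) - A

# Example: spectrum with one arc of the forward cycle removed.
eigs = np.linalg.eigvals(counter_cycles_laplacian(6, removed_arcs=[(0, 1)]))
print(np.sort_complex(eigs))
```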

15.
We investigate a noncooperative bargaining game for partitioning n agents into non-overlapping coalitions. The game has n time periods during which the players are called according to an exogenous agenda to propose offers. With probability \(\delta \), the game ends during any time period \(t<n\). If it does, the first t players on the agenda get a chance to propose but the others do not. Thus, \(\delta \) is a measure of the degree of democracy within the game (ranging from democracy for \(\delta =0\), through increasing levels of authoritarianism as \(\delta \) approaches 1, to dictatorship for \(\delta =1\)). We determine the subgame perfect equilibrium (SPE) and study how a player’s position on the agenda affects his bargaining power. We analyze the relation between the distribution of power of individual players, the level of democracy, and the welfare efficiency of the game. We find that purely democratic games are welfare inefficient and that introducing a degree of authoritarianism into the game makes the distribution of power more equitable and also maximizes welfare. These results remain invariant under two types of player preferences: one where each player’s preference is a total order on the space of possible coalition structures and the other where each player either likes or dislikes a coalition structure. Finally, we show that the SPE partition may or may not be core stable.

16.
In negation-limited complexity, one considers circuits with a limited number of NOT gates, motivated by the gap in our understanding of monotone versus general circuit complexity, and hoping to better understand the power of NOT gates. We give improved lower bounds for the size (the number of AND/OR/NOT gates) of negation-limited circuits computing Parity and for the size of negation-limited inverters. An inverter is a circuit with inputs x_1, …, x_n and outputs ¬x_1, …, ¬x_n. We show that: (a) for n = 2^r − 1, circuits computing Parity with r − 1 NOT gates have size at least 6n − log_2(n+1) − O(1), and (b) for n = 2^r − 1, inverters with r NOT gates have size at least 8n − log_2(n+1) − O(1). We derive the bounds above by considering the minimum size of a circuit with at most r NOT gates that computes Parity for sorted inputs x_1 ≥ ⋯ ≥ x_n. For an arbitrary r, we completely determine this minimum size. It is 2n − r − 2 for odd n and 2n − r − 1 for even n, for ⌈log_2(n+1)⌉ − 1 ≤ r ≤ n/2, and it is ⌈3n/2⌉ − 1 for r ≥ n/2. We also determine the minimum size of an inverter for sorted inputs with at most r NOT gates: it is 4n − 3r for ⌈log_2(n+1)⌉ ≤ r ≤ n. In particular, the negation-limited inverter for sorted inputs due to Fischer, which is a core component in all known constructions of negation-limited inverters, is shown to have the minimum possible size. Our fairly simple lower bound proofs use gate elimination arguments in a somewhat novel way.

17.
We consider the planning problem for freight transportation between two railroad stations. We are required to fulfill orders (transport cars by trains) that arrive at arbitrary time moments and have different values (weights). The speeds of trains moving between the stations may differ. We consider problem settings with both fixed and undefined departure times for the trains. For the problem with fixed train departure times, we propose an algorithm for minimizing the weighted lateness of orders with time complexity O(qn^2 log n), where q is the number of trains and n is the number of orders. For the problem with undefined train departure and arrival times, we construct a Pareto-optimal set of schedules with respect to the criteria wL_max and C_max in O(n^2 max{n log n, q log v}) operations, where v is the number of time windows during which the trains can depart. The proposed algorithm makes it possible to minimize both the weighted lateness wL_max and the total time C_max of fulfilling the freight delivery orders.

18.
We consider the application of the two-armed bandit problem to processing a large number N of data items where two alternative processing methods can be used. We propose a strategy which at the first stages, whose number is at most r − 1, compares the methods, and at the final stage applies only the best method identified by the comparison. We find asymptotically optimal parameters of the strategy and observe that the minimax risk is of the order of N^α, where α = 2^{r−1}/(2^r − 1). Under parallel processing, the total operation time is determined by the number r of stages rather than by the number N of data items.
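For intuition about how quickly the exponent approaches 1/2, a two-line evaluation of α = 2^{r−1}/(2^r − 1) for a few stage counts (illustration only):

```python
for r in (1, 2, 3, 5, 10):
    alpha = 2 ** (r - 1) / (2 ** r - 1)
    print(r, round(alpha, 4))  # 1 -> 1.0, 2 -> 0.6667, 3 -> 0.5714, 5 -> 0.5161, 10 -> 0.5005
```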

19.
We consider the problem of estimating an unknown vector β ∈ R^p in the linear model Y = Xβ + σξ, where ξ ∈ R^n is a standard discrete white Gaussian noise and X is a known n × p matrix with n ≥ p. It is assumed that p is large and X is an ill-conditioned matrix. To estimate β in this situation, we use a family of spectral regularizations of the maximum likelihood method, β_α(Y) = H_α(X^T X) β̂(Y), α ∈ R_+, where β̂(Y) is the maximum likelihood estimate of β and {H_α(·): R_+ → [0, 1], α ∈ R_+} is a given ordered family of functions indexed by the regularization parameter α. The final estimate of β is constructed as a convex combination (in α) of the estimates β_α(Y) with weights chosen based on the observations Y. We present inequalities for large deviations of the norm of the prediction error of this method.
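A minimal numpy sketch of one member of such a family, assuming the Tikhonov-type filter H_α(λ) = λ/(λ + α) as a concrete spectral function (the abstract leaves the family {H_α} abstract, and the convex aggregation over α is omitted here); the function name is ours:

```python
import numpy as np

def spectral_estimate(X, Y, alpha):
    # beta_alpha = H_alpha(X^T X) @ beta_hat, with H_alpha(lam) = lam / (lam + alpha).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # X = U diag(s) Vt
    beta_hat = Vt.T @ ((U.T @ Y) / s)                 # maximum likelihood (least-squares) estimate
    h = s**2 / (s**2 + alpha)                         # filter applied to eigenvalues of X^T X
    return Vt.T @ (h * (Vt @ beta_hat))
```

With this particular filter the estimator coincides with ridge regression; other ordered families, such as a spectral cut-off H_α(λ) = 1{λ ≥ α}, fit the same template.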

20.
We study two natural extensions of Constraint Satisfaction Problems (CSPs). Balance-Max-CSP requires that in any feasible assignment each element of the domain is used an equal number of times. An instance of Hard-Max-CSP consists of soft constraints and hard constraints, and the goal is to maximize the weight of satisfied soft constraints while satisfying all the hard constraints. These two extensions contain many fundamental problems not captured by CSPs, and challenge traditional theories about CSPs in a more general framework. Max-2-SAT and Max-Horn-SAT are the only two nontrivial classes of Boolean CSPs that admit a robust satisfiability algorithm, i.e., an algorithm that finds an assignment satisfying at least a (1 − g(ε)) fraction of constraints given a (1 − ε)-satisfiable instance, where g(ε) → 0 as ε → 0 and g(0) = 0. We prove the inapproximability of these problems with balance or hard constraints, showing that each variant changes the nature of the problems significantly (in different ways). For instance, deciding whether an instance of 2-SAT admits a balanced assignment is NP-hard, and for Max-2-SAT with hard constraints, it is hard to find a constant-factor approximation even on (1 − ε)-satisfiable instances (in particular, the version with hard constraints does not admit a robust satisfiability algorithm). We also study hardness results for a certain CSP over a larger domain capturing ordering constraints: we show that hard constraints rule out constant-factor approximation algorithms. All our hardness results are almost optimal: they completely rule out algorithms with certain properties, or can be matched by simple extensions to existing algorithms.
