Similar Documents
20 similar documents found.
1.
We provide a constant-time schedulability test for an on-line multiprocessor server handling aperiodic tasks. Dhall's effect is avoided by dividing the tasks into two priority classes based on task utilization: heavy and light. We prove that if the load on the multiprocessor server stays below U_threshold = 3 − √7 ≈ 35.425%, the server can accept an incoming aperiodic task and guarantee that the deadlines of all accepted tasks will be met. The same number, 35.425%, is also the threshold for a task to be characterized as heavy. The bound U_threshold = 3 − √7 ≈ 35.425% is easy to use, but not sharp if we know the number of processors in the multiprocessor system. Assuming the server is equipped with m processors, we calculate a formula for the sharp bound U_threshold(m), which converges to U_threshold from above as m → ∞. The results are based on a utilization function u(x) = 2(1 − x)/(2 + √(2 + 2x)). By using this function, the performance of the multiprocessor server can in some cases be improved beyond U_threshold(m) by paying the extra overhead of monitoring the individual utilization of the current tasks.
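A minimal Python sketch of how a constant-time admission check based on the quoted bound could look. The function names (`u`, `can_admit`) and the simple "total utilization below the threshold" test are illustrative assumptions, not the paper's exact algorithm.

```python
import math

U_THRESHOLD = 3 - math.sqrt(7)  # ≈ 0.35425, the utilization bound quoted in the abstract

def u(x: float) -> float:
    """Utilization function u(x) = 2(1 - x) / (2 + sqrt(2 + 2x)) from the abstract."""
    return 2 * (1 - x) / (2 + math.sqrt(2 + 2 * x))

def can_admit(current_total_utilization: float, new_task_utilization: float) -> bool:
    """Illustrative constant-time admission test: accept the new task only if the
    server load stays at or below 3 - sqrt(7)."""
    return current_total_utilization + new_task_utilization <= U_THRESHOLD

# Example: a server loaded at 20% can still admit a 10% task; one at 30% cannot.
print(can_admit(0.20, 0.10))                      # True
print(can_admit(0.30, 0.10))                      # False
# The threshold is a fixed point of u: u(3 - sqrt(7)) = 3 - sqrt(7).
print(abs(u(U_THRESHOLD) - U_THRESHOLD) < 1e-9)   # True
```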

2.
Guaranteeing the eventual execution of tasks on machines that are prone to unpredictable crashes and restarts may be challenging, but is also of high importance. Things become even more complicated when tasks arrive dynamically and have different computational demands, i.e., processing times (or sizes). In this paper, we focus on online task scheduling in such systems, considering one machine and at least two different task sizes. More specifically, algorithms are designed for two different task sizes, while the complementary bounds hold for any number of task sizes bigger than one. We look at the latency and 1-completed-load competitiveness properties of deterministic scheduling algorithms under worst-case scenarios. For this, we assume an adversary that controls the machine crashes and restarts as well as the task arrivals of the system, including their computational demands. More precisely, we investigate the effect of resource augmentation—in the form of processor speedup—on the machine's performance, by looking at the two efficiency measures for different speedups. We first identify the threshold of the speedup under which competitiveness cannot be achieved by any deterministic algorithm, and above which there exists some deterministic algorithm that is competitive. We then propose an online algorithm, named γ-Burst, that achieves both latency and 1-completed-load competitiveness when the speedup is over the threshold. This also proves that the identified threshold is sufficient for competitiveness.

3.
4.
Consideration was given to the classical NP-hard problem 1|r_j|L_max of scheduling theory. An algorithm was presented to determine the optimal schedule for processing n jobs whose parameters satisfy a system of linear constraints. The polynomially solvable area of the problem 1|r_j|L_max was expanded. An algorithm was described to construct a Pareto-optimal set of schedules by the criteria L_max and C_max in O(n³ log n) operations.
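To make the objective concrete, here is a small Python sketch that evaluates L_max = max_j (C_j − d_j) for a fixed job sequence on one machine with release times. The function name and job representation are illustrative; this only computes the 1|r_j|L_max criterion, it is not the optimization algorithm from the paper.

```python
from typing import List, Tuple

def lmax_of_sequence(jobs: List[Tuple[float, float, float]]) -> float:
    """Compute L_max for a given processing order on a single machine.

    Each job is (r_j, p_j, d_j): release time, processing time, due date.
    The machine idles until r_j if it becomes free earlier.
    """
    t = 0.0
    lmax = float("-inf")
    for r, p, d in jobs:
        t = max(t, r) + p          # completion time C_j
        lmax = max(lmax, t - d)    # lateness L_j = C_j - d_j
    return lmax

# Example: two jobs, the second released at time 2.
print(lmax_of_sequence([(0, 3, 4), (2, 2, 6)]))  # C = [3, 5], L = [-1, -1] -> -1.0
```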

5.
In the era of data explosion, a high volume of various data is generated rapidly at every moment; if it is not processed, the profits of its latent information are missed. This is the main current challenge of most enterprises and Internet mega-companies (also known as the big data problem). Big data is composed of three dimensions: Volume, Variety, and Velocity. Velocity refers to high speed, both in data arrival rate (e.g., streaming data) and in data processing (i.e., real-time processing). This paper is concerned with the velocity dimension of big data, so real-time processing of streaming big data is addressed in detail. For any real-time system, being fast is a necessary condition (although not a sufficient one; other concerns, e.g., real-time scheduling, must be addressed too). Fast processing is achieved by parallelism via the proposed deadline-aware dispatching method. For the other prerequisite of real-time processing (i.e., real-time scheduling of the tasks), a hybrid clustering multiprocessor real-time scheduling algorithm is proposed in which both the partitioning and global real-time scheduling approaches are employed to achieve better schedulability and resource utilization with a tolerable overhead. The other components required for real-time processing of streaming big data are also designed and proposed as the real-time streaming big data (RT-SBD) processing engine. Its prototype is implemented, experimentally evaluated, and compared with Storm, a well-known real-time streaming big data processing engine. Experimental results show that the proposed RT-SBD significantly outperforms the Storm engine in terms of proportional deadline miss ratio, tuple latency and system throughput.

6.
We propose an approach to organizing self-testing in a multiprocessor system under certain constraints; in particular, we use special diagnostic graphs, and the number of failures does not exceed a certain value T. We show that a test experiment where each of n processors is tested by two others lets one determine the state of all except possibly two processors for T = 4 and except one for T = 3. The total number of tests does not exceed 2n + 2.

7.
There has been a growing interest in applying human computation – particularly crowdsourcing techniques – to assist in the solution of multimedia, image processing, and computer vision problems which are still too difficult to solve using fully automatic algorithms, and yet relatively easy for humans. In this paper we focus on a specific problem – object segmentation within color images – and compare different solutions which combine color image segmentation algorithms with human efforts, either in the form of an explicit interactive segmentation task or through an implicit collection of valuable human traces with a game. We use Click’n’Cut, a friendly, web-based, interactive segmentation tool that allows segmentation tasks to be assigned to many users, and Ask’nSeek, a game with a purpose designed for object detection and segmentation. The two main contributions of this paper are: (i) We use the results of Click’n’Cut campaigns with different groups of users to examine and quantify the crowdsourcing loss incurred when an interactive segmentation task is assigned to paid crowd-workers, comparing their results to the ones obtained when computer vision experts are asked to perform the same tasks. (ii) Since interactive segmentation tasks are inherently tedious and prone to fatigue, we compare the quality of the results obtained with Click’n’Cut with the ones obtained using a (fun, interactive, and potentially less tedious) game designed for the same purpose. We call this contribution the assessment of the gamification loss, since it refers to how much segmentation quality may be lost when we switch to a game-based approach to the same task. We demonstrate that the crowdsourcing loss is significant when using all the data points from workers, but decreases substantially (and becomes comparable to the quality of expert users performing similar tasks) after performing a modest amount of data analysis and filtering out users whose data are clearly not useful. We also show that – on the other hand – the gamification loss is significantly more severe: the quality of the results drops roughly by half when switching from a focused (yet tedious) task to a more fun and relaxed game environment.

8.
The advent of the Big Data era drives data analysts from different domains to use data mining techniques for data analysis. However, performing data analysis in a specific domain is not trivial; it often requires complex task configuration, onerous integration of algorithms, and efficient execution in distributed environments. Little effort has been devoted to developing effective tools to facilitate data analysts in conducting complex data analysis tasks. In this paper, we design and implement FIU-Miner, a Fast, Integrated, and User-friendly system to ease data analysis. FIU-Miner allows users to rapidly configure a complex data analysis task without writing a single line of code. It also helps users conveniently import and integrate different analysis programs. Further, it significantly balances resource utilization and task execution in heterogeneous environments. Case studies of real-world applications demonstrate the efficacy and effectiveness of our proposed system.

9.
The problem presented in this paper is a generalization of the usual coupled-task scheduling problem in the presence of compatibility constraints. The motivation behind this study is the data acquisition problem for a submarine torpedo. We investigate a particular configuration for coupled tasks (each task is divided into two sub-tasks separated by an idle time), in which the idle time of a coupled task is equal to the sum of the durations of its two sub-tasks. We prove NP-completeness of minimizing the schedule length, and we show that finding a solution to our problem amounts to solving a graph problem which is close to the minimum-disjoint-path cover (min-DCP) problem. We design a (3a+2b)/(2a+2b)-approximation, where a and b (the processing times of the two sub-tasks) are input data such that a > b > 0, which leads to a ratio between 3/2 and 5/4. Using a polynomial-time algorithm developed for a class of graphs of min-DCP, we show that the ratio decreases to (1+√3)/2 ≈ 1.37.
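A short worked check of the range quoted for the approximation ratio ρ(a, b) = (3a+2b)/(2a+2b) under the stated assumption a > b > 0:

```latex
\[
\rho(a,b) = \frac{3a+2b}{2a+2b}, \qquad a > b > 0 .
\]
Writing $t = b/a \in (0,1)$, $\rho = \dfrac{3+2t}{2+2t}$ is decreasing in $t$:
\[
t \to 0^{+}\ (\text{second sub-task negligible}): \quad \rho \to \frac{3}{2},
\qquad
t \to 1^{-}\ (\text{equal sub-tasks}): \quad \rho \to \frac{5}{4},
\]
so the ratio always lies strictly between $\tfrac{5}{4}$ and $\tfrac{3}{2}$, matching the bounds in the abstract.
```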

10.
How can we discover interesting patterns from time-evolving high-speed data streams? How can we analyze the data streams quickly and accurately, with little space overhead? How can we guarantee that the found patterns are self-consistent? High-speed data streams have been receiving increasing attention due to their wide applications such as sensors, network traffic, social networks, etc. The most fundamental task on a data stream is frequent pattern mining; in particular, focusing on recentness is important in real applications. In this paper, we develop two algorithms for discovering recently frequent patterns in data streams. First, we propose TwMinSwap to find top-k recently frequent items in data streams, a deterministic version of our motivating algorithm TwSample, which provides theoretical guarantees based on item sampling. TwMinSwap improves TwSample in terms of speed, accuracy, and memory usage. Both require only O(k) memory space and do not require any prior knowledge about the stream, such as its length or the number of distinct items in it. Second, we propose TwMinSwap-Is to find top-k recently frequent itemsets in data streams. We especially focus on keeping self-consistency of the discovered itemsets, which is the most important property for reliable results, while using O(k) memory space under the assumption of a constant itemset size. Through extensive experiments, we demonstrate that TwMinSwap outperforms all competitors in terms of accuracy and memory usage, with fast running time. We also show that TwMinSwap-Is is more accurate than the competitor and discovers recently frequent itemsets with reasonably large sizes (at most 5–7) depending on the dataset. Thanks to TwMinSwap and TwMinSwap-Is, we report interesting discoveries in real-world data streams, including the difference in trends between the winner and the loser among U.S. presidential candidates, and temporal human contact patterns.
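To make the O(k)-memory idea concrete, here is a minimal Space-Saving-style top-k item counter in Python. It is a generic illustration of bounded-memory frequent-item tracking, not the TwMinSwap algorithm itself (which additionally weights items by recency and gives different guarantees); the function name is hypothetical.

```python
def space_saving_topk(stream, k):
    """Track approximate top-k frequent items using only O(k) counters."""
    counters = {}  # item -> approximate count (an overestimate after evictions)
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # Evict the item with the smallest count; the newcomer inherits that count.
            victim = min(counters, key=counters.get)
            counters[item] = counters.pop(victim) + 1
    return sorted(counters.items(), key=lambda kv: -kv[1])

# Example: the heavy items 'a' and 'b' survive with only k = 2 counters.
print(space_saving_topk("aabababcab", k=2))  # [('a', 5), ('b', 5)]
```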

11.
Resource-conscious technologies for cutting sheet material include the ICP and ECP technologies, which allow fragments of the contours of cutouts to be aligned. In this work, we present a mathematical model for the problem of cutting out parts with these technologies and algorithms for finding cutting-tool routes that satisfy technological constraints. We give a solution to the problem of representing a cutting plan as a plane graph G = (V, F, E), which is a homeomorphic image of the cutting plan. This has allowed us to formalize technological constraints on the trajectory of cutting the parts according to the cutting plan and to propose a series of algorithms for constructing a route in the graph G = (V, F, E) that is an image of an admissible trajectory. Using known coordinates of the preimages of vertices of graph G = (V, F, E) and the locations of fragments of the cutting plan that are preimages of edges of graph G = (V, F, E), the resulting route in the graph G = (V, E) can be interpreted as the cutting tool's trajectory. The proposed algorithms for finding routes in a connected graph G have polynomial computational complexity. To find the optimal route in an unconnected graph G, we need to solve, for every dividing face f of graph G, a travelling salesman problem on the set of faces incident to f.

12.
Disjunctive Temporal Problems (DTPs) with Preferences (DTPPs) extend DTPs with piecewise-constant preference functions associated with each constraint of the form l ≤ x − y ≤ u, where x, y are (real or integer) variables and l, u are numeric constants. The goal is to find an assignment to the variables of the problem that maximizes the sum of the preference values of satisfied DTP constraints, where such values are obtained by aggregating the preference functions of the satisfied constraints under a "max" semantics. The state-of-the-art approach in the field, implemented in the native DTPP solver Maxilitis, extends the approach of the native DTP solver Epilitis. In this paper we present alternative approaches that translate DTPPs to Maximum Satisfiability of a set of Boolean combinations of constraints of the form l ◁ x − y ◁ u, with ◁ ∈ {<, ≤}, extending previous work that dealt with constant preference functions only. We prove correctness and completeness of the approaches. Results obtained with the Satisfiability Modulo Theories (SMT) solvers Yices and MathSAT on randomly generated DTPPs and DTPPs built from real-world benchmarks show that one of our translations is competitive with, and can be faster than, Maxilitis (this is an extended and revised version of Bourguet et al. 2013).
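An illustrative example (invented for exposition, not taken from the paper) of a disjunctive temporal constraint and a piecewise-constant preference function, to fix the notation:

```latex
% One disjunctive temporal constraint over variables x, y, z:
\[
(1 \le x - y \le 5) \;\vee\; (0 \le z - x \le 2)
\]
% A piecewise-constant preference function attached to the first disjunct,
% rewarding tighter separations of x - y:
\[
f(x-y) =
\begin{cases}
2 & \text{if } 1 \le x - y \le 3,\\
1 & \text{if } 3 < x - y \le 5,\\
0 & \text{otherwise,}
\end{cases}
\]
% and under the ``max'' semantics a satisfied disjunctive constraint contributes
% the maximum preference value among its satisfied disjuncts.
```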

13.
The integrality recognition problem is considered on a sequence M_{n,k} of nested relaxations of the Boolean quadric polytope, including the rooted semimetric polytope M_n and the metric polytope M_{n,3}. The constraints of the metric polytope cut off all faces of the rooted semimetric polytope that contain only fractional vertices. This makes it possible to solve the integrality recognition problem on M_n in polynomial time. To solve the integrality recognition problem on the metric polytope, we consider the possibility of cutting off all fractional faces of M_{n,3} by a certain relaxation M_{n,k}. The coordinates of points of the metric polytope are represented in homogeneous form as a three-dimensional block matrix. We show that in studying the question of cutting off the fractional faces of the metric polytope, it is sufficient to consider only constraints in the form of triangle inequalities.

14.
High-utility sequential pattern (HUSP) mining has emerged as an important topic in data mining. A number of studies have been conducted on mining HUSPs, but they are mainly intended for non-streaming data and thus do not take data stream characteristics into consideration. Streaming data are fast-changing, continuously generated, and unbounded in quantity. Such data can easily exhaust computer resources (e.g., memory) unless proper resource-aware mining is performed. In this study, we explore the fundamental problem of how limited memory can best be utilized to produce high-quality HUSPs over a data stream. We design an approximation algorithm, called MAHUSP, that employs memory-adaptive mechanisms to use a bounded portion of memory in order to efficiently discover HUSPs over data streams. An efficient tree structure, called MAS-Tree, is proposed to store potential HUSPs over a data stream. MAHUSP guarantees that all HUSPs are discovered in certain circumstances. Our experimental study shows that our algorithm can not only discover HUSPs over data streams efficiently, but also adapt to memory allocation with limited sacrifices in the quality of discovered HUSPs. Furthermore, in order to show the effectiveness and efficiency of MAHUSP in real-life applications, we apply our proposed algorithm to a web clickstream dataset obtained from a Canadian news portal to showcase users' reading behavior, and to a real biosequence database to identify disease-related gene-regulation sequential patterns. The results show that MAHUSP effectively discovers useful and meaningful patterns in both cases.

15.
We consider application of the two-armed bandit problem to processing a large number N of data items where two alternative processing methods can be used. We propose a strategy which, at the first stages, whose number is at most r − 1, compares the methods, and at the final stage applies only the best one obtained from the comparison. We find asymptotically optimal parameters of the strategy and observe that the minimax risk is of the order of N^α, where α = 2^(r−1)/(2^r − 1). Under parallel processing, the total operation time is determined by the number r of stages rather than by the number N of data items.
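A quick check of how the stated exponent behaves as the number of stages grows:

```latex
\[
\alpha(r) = \frac{2^{\,r-1}}{2^{\,r}-1}:
\qquad
\alpha(1) = 1,\quad
\alpha(2) = \frac{2}{3},\quad
\alpha(3) = \frac{4}{7},\quad
\alpha(r) \xrightarrow[r\to\infty]{} \frac{1}{2},
\]
so adding comparison stages drives the minimax risk order $N^{\alpha}$ toward $N^{1/2}$,
while the parallel running time depends only on $r$, not on $N$.
```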

16.
We study two natural extensions of Constraint Satisfaction Problems (CSPs). Balance-Max-CSP requires that in any feasible assignment each element in the domain is used an equal number of times. An instance of Hard-Max-CSP consists of soft constraints and hard constraints, and the goal is to maximize the weight of satisfied soft constraints while satisfying all the hard constraints. These two extensions contain many fundamental problems not captured by CSPs, and challenge traditional theories about CSPs in a more general framework. Max-2-SAT and Max-Horn-SAT are the only two nontrivial classes of Boolean CSPs that admit a robust satisfiability algorithm, i.e., an algorithm that finds an assignment satisfying at least a (1 − g(ε)) fraction of constraints given a (1 − ε)-satisfiable instance, where g(ε) → 0 as ε → 0, and g(0) = 0. We prove the inapproximability of these problems with balance or hard constraints, showing that each variant changes the nature of the problems significantly (in different ways). For instance, deciding whether an instance of 2-SAT admits a balanced assignment is NP-hard, and for Max-2-SAT with hard constraints, it is hard to find a constant-factor approximation even on (1 − ε)-satisfiable instances (in particular, the version with hard constraints does not admit a robust satisfiability algorithm). We also study hardness results for a certain CSP over a larger domain capturing ordering constraints: we show that hard constraints rule out constant-factor approximation algorithms. All our hardness results are almost optimal — they completely rule out algorithms with certain properties, or can be matched by simple extensions to existing algorithms.

17.
Crowdsourcing applications like Amazon Mechanical Turk (AMT) make it possible to address many difficult tasks (e.g., image tagging and sentiment analysis) on the Internet and make full use of the wisdom of the crowd, where worker quality is one of the most crucial issues for task owners. Thus, a challenging problem is how to effectively and efficiently select high-quality workers so that tasks can be accomplished successfully under a certain budget. Existing methods for the crowd-worker selection problem are mainly based on quality measurement of the crowd workers, who have to register on the crowdsourcing platforms. By connecting OSNs with crowdsourcing applications, social contexts such as social relationships, social trust between participants, and the social positions of participants can assist requestors in selecting one or a group of trustworthy crowdsourcing workers. In this paper, we first present a contextual social network structure and the concept of a Strong Social Component (SSC), which represents a group of workers with high social-context values. We then propose a novel index for SSC and a new efficient and effective algorithm, C-AWSA, to find trustworthy workers who can complete tasks with high quality. The results of our experiments, conducted on four real OSN datasets, illustrate the superiority of our method in trustworthy worker selection.

18.
We consider online mining of correlated heavy hitters (CHH) from a data stream. Given a stream of two-dimensional data, a correlated aggregate query first extracts a substream by applying a predicate along a primary dimension, and then computes an aggregate along a secondary dimension. Prior work on identifying heavy hitters in streams has almost exclusively focused on identifying heavy hitters on a single-dimensional stream, and these yield little insight into the properties of heavy hitters along other dimensions. In typical applications, however, an analyst is interested not only in identifying heavy hitters, but also in understanding further properties such as: what other items appear frequently along with a heavy hitter, or what is the frequency distribution of items that appear along with the heavy hitters. We consider queries of the following form: "In a stream S of (x, y) tuples, on the substream H of all x values that are heavy hitters, maintain those y values that occur frequently with the x values in H". We call this problem CHH. We formulate an approximate version of CHH identification, and present an algorithm for tracking CHHs on a data stream. The algorithm is easy to implement and uses workspace much smaller than the stream itself. We present provable guarantees on the maximum error, as well as detailed experimental results that demonstrate the space-accuracy trade-off.
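A minimal nested-summary sketch in Python that conveys the flavor of correlated heavy-hitter tracking: a small counter set over the primary dimension x and, for each tracked x, another small counter set over its co-occurring y values. It is a generic Misra-Gries-style illustration with hypothetical names (`track_chh`, `mg_update`), not the paper's algorithm or its error guarantees.

```python
from collections import defaultdict

def track_chh(stream, k_primary=10, k_secondary=5):
    """Approximately track heavy x values and, for each, its frequent y companions."""
    x_counts = {}                    # primary heavy-hitter candidates
    y_counts = defaultdict(dict)     # per-x secondary candidates

    def mg_update(counters, item, capacity):
        if item in counters:
            counters[item] += 1
        elif len(counters) < capacity:
            counters[item] = 1
        else:                        # decrement-all step of Misra-Gries
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]

    for x, y in stream:
        mg_update(x_counts, x, k_primary)
        # Drop secondary summaries for x values that fell out of the primary summary,
        # keeping the workspace bounded by O(k_primary * k_secondary).
        for gone in [key for key in y_counts if key not in x_counts]:
            del y_counts[gone]
        if x in x_counts:            # only track y for currently heavy x
            mg_update(y_counts[x], y, k_secondary)

    return {x: dict(ys) for x, ys in y_counts.items() if x in x_counts}

# Example: 'a' dominates the primary dimension, with 'p' as its most frequent companion.
stream = [('a', 'p')] * 6 + [('a', 'q')] * 2 + [('b', 'r')] * 3
print(track_chh(stream, k_primary=2, k_secondary=2))
# {'a': {'p': 6, 'q': 2}, 'b': {'r': 3}}
```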

19.
Enriching logical formalisms with counting capabilities is an important task in view of the needs of many application areas, ranging from database theory to formal verification. In this paper, we consider a very expressive language obtained by enriching linear integer arithmetic with free function symbols and cardinality constraints for interpreted sets. We obtain positive results for a flat fragment via a reduction to the decidability of Presburger arithmetic with unary counting quantifiers (Schweikardt, Arithmetic, first-order logic, and counting quantifiers, ACM TOCL, 2004). We also isolate an easier simple flat subfragment, whose satisfiability is in NP, and we show that this subfragment is adequate for formalizing problems arising in the area of the verification of fault-tolerant distributed algorithms. We finally discuss our first implementation, the related experimental results, as well as further algorithmic problems suggested by model-checking applications.
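An illustrative formula (invented here, not from the paper) in this kind of language, combining linear integer arithmetic, a free function symbol f, and a cardinality constraint on an interpreted set:

```latex
\[
0 \le N
\;\wedge\;
\forall i\,\bigl(0 \le i < N \rightarrow 0 \le f(i) \le 1\bigr)
\;\wedge\;
2 \cdot \bigl|\{\, i \mid 0 \le i < N \wedge f(i) = 1 \,\}\bigr| \;\ge\; N
\]
% "At least half of the first N values of f equal 1" -- the kind of quorum/threshold
% condition that arises when modelling fault-tolerant distributed algorithms.
```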

20.
The topic of Data Stream Processing is a recent and highly active research area dealing with the in-memory, tuple-by-tuple analysis of streaming data. Continuous queries typically consume huge volumes of data received at great velocity. Solutions that persistently store all the input tuples and then perform off-line computation are impractical. Rather, queries must be executed continuously as data cross the streams. The goal of this paper is to present parallel patterns for window-based stateful operators, which are the most representative class of stateful data stream operators. Parallel patterns are presented “à la” Algorithmic Skeleton, by explaining the rationale of each pattern, the preconditions to safely apply it, and the outcome in terms of throughput, latency and memory consumption. The patterns have been implemented in the FastFlow framework targeting off-the-shelf multicores. To the best of our knowledge this is the first time that a similar effort to merge the Data Stream Processing domain and the field of Structured Parallelism has been made.
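To make "window-based stateful operator" concrete, here is a toy single-threaded Python sketch of a keyed tumbling-window sum over count-based windows. It is an illustration of the operator class only; it is unrelated to the FastFlow implementation and the parallel patterns described in the paper, and the function name is hypothetical.

```python
from collections import defaultdict

def tumbling_window_sum(stream, window_size):
    """Keyed tumbling-window sum: emit (key, sum) once `window_size` tuples of a key arrive.

    State (the per-key buffer) is what the paper's parallel patterns would
    partition or replicate across workers.
    """
    buffers = defaultdict(list)
    for key, value in stream:
        buffers[key].append(value)
        if len(buffers[key]) == window_size:   # window complete: fire and reset
            yield key, sum(buffers[key])
            buffers[key].clear()

# Example: per-sensor sums over windows of 3 readings.
readings = [("s1", 2), ("s2", 5), ("s1", 4), ("s1", 6), ("s2", 1), ("s2", 3)]
print(list(tumbling_window_sum(readings, window_size=3)))  # [('s1', 12), ('s2', 9)]
```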
