Similar Literature
20 similar documents retrieved.
1.
Nowadays, more and more computer-based scientific experiments need to handle massive amounts of data. Their data processing consists of multiple computational steps with dependencies among them. A data-intensive scientific workflow is useful for modeling such a process. Since the sequential execution of data-intensive scientific workflows may take a long time, Scientific Workflow Management Systems (SWfMSs) should enable their parallel execution and exploit resources distributed across different infrastructures such as grid and cloud. This paper provides a survey of data-intensive scientific workflow management in SWfMSs and their parallelization techniques. Based on an SWfMS functional architecture, we give a comparative analysis of the existing solutions. Finally, we identify research issues for improving the execution of data-intensive scientific workflows in a multisite cloud.

2.
Over the past decade, process mining has emerged as a new analytical discipline able to answer a variety of questions based on event data. Event logs have a very particular structure: events have timestamps, refer to activities and resources, and need to be correlated to form process instances. Process mining results tend to be very different from classical data mining results; e.g., process discovery may yield end-to-end process models capturing different perspectives rather than decision trees or frequent patterns. A process mining tool like ProM provides hundreds of different process mining techniques, ranging from discovery and conformance checking to filtering and prediction. Typically, a combination of techniques is needed and, for every step, there are different techniques that may be very sensitive to parameter settings. Moreover, event logs may be huge and may need to be decomposed and distributed for analysis. These aspects make it very cumbersome to analyze event logs manually. Process mining should be repeatable and automated. Therefore, we propose a framework to support the analysis of process mining workflows. Existing scientific workflow systems and data mining tools are not tailored towards process mining and the artifacts used for analysis (process models and event logs). This paper structures the basic building blocks needed for process mining and describes various analysis scenarios. Based on these requirements, we implemented RapidProM, a tool supporting scientific workflows for process mining. Examples illustrating the different scenarios are provided to show the feasibility of the approach.

3.
The emergence of Cloud Computing as a model of service provisioning in distributed systems has instigated researchers to explore its pros and cons for executing large-scale scientific applications, i.e., workflows. One of the most challenging problems in clouds is to execute workflows while minimizing both the execution time and the cost incurred by simultaneously using a set of heterogeneous cloud resources. In this paper, we present a Budget and Deadline Constrained Heuristic based upon Heterogeneous Earliest Finish Time (HEFT) to schedule workflow tasks over the available cloud resources. The proposed heuristic offers a beneficial trade-off between execution time and execution cost under the given constraints. It is evaluated on different synthetic workflow applications through simulation and compared with the state-of-the-art algorithm BHEFT. The simulation results show that our scheduling heuristic can significantly decrease the execution cost while producing a makespan as good as that of the best-known scheduling heuristic under the same deadline and budget constraints.
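The heuristic above builds on HEFT, whose core step is ranking tasks by their upward rank before assigning them to resources. The Python sketch below illustrates that ranking step only; the task graph, cost tables, and function names are illustrative assumptions, not the paper's budget/deadline-constrained implementation.

```python
# Illustrative sketch of HEFT's upward-rank computation (not the paper's
# budget/deadline-constrained heuristic). Assumed inputs: a task DAG given as
# successor lists, average computation cost per task, and average
# communication cost per edge.

def upward_rank(succ, comp_cost, comm_cost):
    """rank_u(t) = comp_cost[t] + max over successors s of (comm_cost[(t, s)] + rank_u(s));
    tasks with higher upward rank are scheduled first."""
    memo = {}

    def rank(task):
        if task not in memo:
            memo[task] = comp_cost[task] + max(
                (comm_cost[(task, s)] + rank(s) for s in succ.get(task, [])),
                default=0.0,
            )
        return memo[task]

    return {t: rank(t) for t in comp_cost}

# Toy workflow: t1 -> {t2, t3} -> t4
succ = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"], "t4": []}
comp = {"t1": 10.0, "t2": 8.0, "t3": 12.0, "t4": 6.0}
comm = {("t1", "t2"): 2.0, ("t1", "t3"): 3.0, ("t2", "t4"): 1.0, ("t3", "t4"): 2.0}
print(sorted(upward_rank(succ, comp, comm).items(), key=lambda kv: -kv[1]))
```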

4.
We study two natural extensions of Constraint Satisfaction Problems (CSPs). Balance-Max-CSP requires that in any feasible assignment each element in the domain is used an equal number of times. An instance of Hard-Max-CSP consists of soft constraints and hard constraints, and the goal is to maximize the weight of satisfied soft constraints while satisfying all the hard constraints. These two extensions contain many fundamental problems not captured by CSPs, and challenge traditional theories about CSPs in a more general framework. Max-2-SAT and Max-Horn-SAT are the only two nontrivial classes of Boolean CSPs that admit a robust satisfiability algorithm, i.e., an algorithm that finds an assignment satisfying at least a (1 − g(ε)) fraction of constraints given a (1 − ε)-satisfiable instance, where g(ε) → 0 as ε → 0 and g(0) = 0. We prove the inapproximability of these problems with balance or hard constraints, showing that each variant changes the nature of the problems significantly (in different ways). For instance, deciding whether an instance of 2-SAT admits a balanced assignment is NP-hard, and for Max-2-SAT with hard constraints, it is hard to find a constant-factor approximation even on (1 − ε)-satisfiable instances (in particular, the version with hard constraints does not admit a robust satisfiability algorithm). We also study hardness results for a certain CSP over a larger domain capturing ordering constraints: we show that hard constraints rule out constant-factor approximation algorithms. All our hardness results are almost optimal: they either completely rule out algorithms with certain properties or can be matched by simple extensions to existing algorithms.

5.
The problem of two edge-disjoint paths is to identify two paths \(Q_1\) and \(Q_2\) from a source \(s \in V\) to a target \(t \in V\) without any common arc in a directed connected graph \(G=(V, E)\). In this paper, we present an adaptive stabilizing algorithm for finding a pair of edge-disjoint paths from s to t in G in O(D) rounds with a state-space complexity of \(O(\log n)\) bits per process, where n is the number of nodes and D is the diameter of the graph. The proposed algorithm is optimal with respect to its time complexity and the total length of the shortest paths. In addition, it can also be used to solve the problem for undirected graphs. Since the proposed algorithm is stabilizing, it does not require initialization and is capable of withstanding transient faults; we view a fault that perturbs the state of the system but not its program as a transient fault. The algorithm is also adaptive, since it can deal with topology changes in the form of addition/removal of arcs and/or nodes, as well as changes in the directions of arcs, provided that two edge-disjoint paths between s and t exist after the topology change.
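The contribution above is a distributed self-stabilizing algorithm; as centralized background on the problem itself, two edge-disjoint s-t paths exist in a directed graph exactly when a unit-capacity maximum flow from s to t has value at least 2. The sketch below applies that classical reduction and is not the paper's algorithm; the graph representation and names are assumptions.

```python
# Centralized background sketch only: the abstract's algorithm is distributed and
# self-stabilizing, which this code does not attempt to reproduce. Two edge-disjoint
# s-t paths exist in a directed graph iff a unit-capacity max-flow from s to t is >= 2.
from collections import deque

def has_two_edge_disjoint_paths(edges, s, t):
    cap = {}                      # residual capacities on directed edges
    adj = {}                      # adjacency over residual edges (both directions)
    for u, v in edges:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap.setdefault((v, u), 0)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def augment():                # one BFS augmenting path of capacity 1
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        while parent[v] is not None:
                            cap[(parent[v], v)] -= 1
                            cap[(v, parent[v])] += 1
                            v = parent[v]
                        return True
                    q.append(v)
        return False

    return sum(1 for _ in range(2) if augment()) >= 2

print(has_two_edge_disjoint_paths([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")], "s", "t"))  # True
```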

6.
The emergence of the MapReduce (MR) framework for scaling data mining and machine learning algorithms provides for Volume, while handling Variety and Velocity needs to be skilfully crafted into algorithms. So far, scalable clustering algorithms have focused solely on Volume, taking advantage of the MR framework. In this paper we present a MapReduce algorithm, data aware scalable clustering (DASC), which is capable of handling the 3 Vs of big data by virtue of being (i) single-scan and distributed to handle Volume, (ii) incremental to cope with Velocity, and (iii) versatile in handling numeric and categorical data to accommodate Variety. The DASC algorithm incrementally processes an infinitely growing data set stored on a distributed file system and delivers a quality clustering scheme while ensuring recency of patterns. The algorithm preserves an up-to-date synopsis of the data seen so far; each new data increment is processed and merged with the synopsis. Since the synopsis itself may grow very large, the algorithm stores it as a file, which makes DASC truly scalable. Exclusive clusters are obtained on demand by applying a connected component analysis (CCA) algorithm over the synopsis. CCA presents a subtle roadblock to effective parallelism during clustering. This problem is overcome by accomplishing the task in two stages: in the first stage, hyperclusters are identified based on prevailing data characteristics; the second stage utilizes this knowledge to determine the degree of parallelism, thereby making DASC data-aware. Hyperclusters are distributed over the available compute nodes for discovering embedded clusters in parallel. The staged approach to clustering yields the dual advantage of improved parallelism and the desired complexity in the \(\mathcal {MRC}^0\) class. The DASC algorithm is empirically compared with the incremental K-means and Scalable K-means++ algorithms. Experimentation on real-world and synthetic data with approximately 1.2 billion data points demonstrates the effectiveness of DASC. Empirical observations of DASC execution are in consonance with the theoretical analysis with respect to stability in resource utilization and execution time.

7.
Mining co-occurrence frequency patterns from multiple sequences is a hot topic in bioinformatics. Many seemingly disorganized constituents appear repetitively under different biological matrices, such as PAM250 and BLOSUM62, and are considered hidden frequent patterns (FPs). A hidden FP subject to both gap and flexible approximation operations (replacement, deletion, or insertion) deepens the difficulty of discovering its true occurrences. To effectively discover co-occurrence FPs (Co-FPs) under these conditions, we design a mining algorithm (co-fp-miner) with the following steps: (1) a biological approximation scoring matrix is designed to discover various deformations of a single FP pattern; (2) a data-driven intersection tactic is used to generate candidate Co-FPs; (3) a deterministic Apriori-like rule is proposed to prune unnecessary Co-FPs; and (4) finally, we employ a backtracking matching scheme to validate true Co-FPs. The co-fp-miner algorithm is a unified framework for both exact and approximate mining on multiple sequences. Experiments on DNA and protein sequences demonstrate that co-fp-miner is more efficient than its peers in terms of solution quality, time, and memory consumption.

8.
In this paper, a steganographic scheme adopting the concept of generalized K_d-distance N-dimensional pixel matching is proposed. The generalized pixel matching embeds a B-ary digit (B is a function of K and N) into a cover vector of length N, such that the order-d Minkowski-distance-measured embedding distortion is no larger than K. In contrast to other pixel-matching-based schemes, an N-dimensional reference table is used. By choosing d, K, and N adaptively, an embedding strategy suitable for arbitrary relative capacity can be developed. Additionally, an optimization algorithm, namely the successive iteration algorithm (SIA), is proposed to optimize the codeword assignment in the reference table. Benefiting from the high-dimensional embedding and the optimization algorithm, nearly maximal embedding efficiency is achieved. Compared with other content-free steganographic schemes, the proposed scheme provides better image quality and statistical security. Moreover, the proposed scheme performs comparably to state-of-the-art content-based approaches after combining with image models.
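One way to make the role of B concrete is to count the integer offset vectors of length N whose order-d Minkowski norm is at most K; the B-ary digit is then drawn from that many candidate modifications of the cover vector. This counting rule is an assumption made here for illustration and is not taken from the paper's reference-table construction.

```python
# Hedged illustration of how the base B could depend on K, N and d in the
# generalized pixel matching above: count integer offset vectors of length N
# whose order-d Minkowski norm is at most K. This counting rule is an
# assumption for illustration, not the paper's construction.
from itertools import product

def count_offsets(N, K, d):
    radius = int(K)  # assumes an integer distance bound K
    return sum(
        1
        for vec in product(range(-radius, radius + 1), repeat=N)
        if sum(abs(x) ** d for x in vec) ** (1.0 / d) <= K + 1e-9
    )

for N, K, d in [(2, 1, 1), (2, 1, 2), (3, 1, 2)]:
    B = count_offsets(N, K, d)
    print(f"N={N}, K={K}, d={d}: B={B}")  # roughly log2(B) bits per N-pixel group
```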

9.
Representative skyline computation is a fundamental issue in the database area that has attracted much attention in recent years. A notable definition of the representative skyline is the distance-based representative skyline (DBRS). Given an integer k, a DBRS consists of k representative skyline points chosen to minimize the maximal distance between a non-representative skyline point and its nearest representative. In the 2D space, the state-of-the-art algorithm to compute the DBRS is based on dynamic programming (DP) and takes O(km^2) time, where m is the number of skyline points. Clearly, such a DP-based algorithm cannot be used for handling large-scale datasets due to the quadratic time cost. To overcome this problem, in this paper we propose a new approximate algorithm called ARS and a new exact algorithm named PSRS, based on a carefully designed parametric search technique. We show that the ARS algorithm can guarantee a solution that is at most ε larger than the optimal solution. The proposed ARS and PSRS algorithms run in O(k log^2 m log(T/ε)) and O(k^2 log^3 m) time respectively, where T is no more than the maximal distance between any two skyline points. We also propose an improved exact algorithm, called PSRS+, based on an effective lower and upper bounding technique. We conduct extensive experimental studies over both synthetic and real-world datasets, and the results demonstrate the efficiency and effectiveness of the proposed algorithms.
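To make the DBRS definition concrete, the sketch below only evaluates the objective (the maximal distance from a non-representative skyline point to its nearest representative) and finds the best size-k set by brute force; it is not the paper's DP, ARS, or PSRS algorithm, and the point data are made up.

```python
# Illustrative sketch of the DBRS objective only (not the DP/ARS/PSRS algorithms
# described in the abstract): given all skyline points and a candidate set of k
# representatives, compute the maximal distance from a non-representative point
# to its nearest representative; the DBRS is the size-k set minimizing this.
from itertools import combinations
from math import dist

def dbrs_objective(skyline, reps):
    others = [p for p in skyline if p not in reps]
    return max((min(dist(p, r) for r in reps) for p in others), default=0.0)

def dbrs_brute_force(skyline, k):
    """Exponential-time reference; fine only for tiny inputs."""
    return min(combinations(skyline, k), key=lambda reps: dbrs_objective(skyline, reps))

skyline = [(1.0, 9.0), (2.0, 7.0), (4.0, 4.0), (7.0, 2.0), (9.0, 1.0)]
print(dbrs_brute_force(skyline, 2))
```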

10.
Cellular Learning Automata (CLAs) are hybrid models obtained from the combination of Cellular Automata (CAs) and Learning Automata (LAs). These models can be either open or closed. In a closed CLA, the states of the neighboring cells of each cell, called its local environment, affect the action selection process of the LA of that cell, whereas in an open CLA each cell, in addition to its local environment, has an exclusive environment observed by that cell only and a global environment that can be observed by all cells in the CLA. In dynamic models of CLAs, one aspect such as the structure, the local rule, or the neighborhood radius may change during the evolution of the CLA. CLAs can also be classified as synchronous or asynchronous. In a synchronous CLA, the LAs in all cells are activated synchronously, whereas in an asynchronous CLA they are activated asynchronously. In this paper, a new closed asynchronous dynamic model of CLA, whose structure and number of LAs in each cell may vary with time, is introduced. To show the potential of the proposed model, a landmark clustering algorithm for solving the topology mismatch problem in unstructured peer-to-peer networks is proposed. To evaluate the proposed algorithm, computer simulations have been conducted and the results are compared with those of two existing algorithms for the topology mismatch problem. The proposed algorithm is shown to be superior to the existing algorithms with respect to communication delay and average round-trip time between peers within clusters.

11.
Automatic hair extraction from a given 2D image has long been a challenging problem, especially when complex backgrounds and a wide variety of hairstyles are involved. This paper contributes in three aspects. First, it proposes a novel framework that combines face detection, outlier-aware initial stroke placement, and matting to extract the desired hair region from an input image. Second, it introduces an alpha space to facilitate the choice of matting parameters. Third, it defines a new comparison metric that is well suited for alpha matte comparison. Our results show that, compared with manually drawn trimaps for hair extraction, the proposed automatic algorithm achieves about 86.2% extraction accuracy.

12.
A new algorithm is presented for the modeling and simulation of multi-flexible-body systems. This algorithm is built upon a divide-and-conquer-based multibody dynamics framework, and it is capable of handling arbitrarily large rotations and deformations in articulated flexible bodies. As such, this work extends the current capabilities of the flexible divide-and-conquer algorithm (Mukherjee and Anderson in Comput. Nonlinear Dyn. 2(1):10–21, 2007), which is limited to the use of assumed modes in a floating frame of reference configuration. The present algorithm utilizes existing finite element modeling techniques to construct the equations of motion at the element level as well as at the body level. It is demonstrated that these equations can be assembled and solved using a divide-and-conquer type methodology. In this respect, the new algorithm is applied using the absolute nodal coordinate formulation (ANCF) (Shabana, 1996). The ANCF is selected because of its straightforward implementation and effectiveness in modeling large deformations. It is demonstrated that the present algorithm provides an efficient and robust method for modeling multi-flexible-body systems that employ highly deformable bodies. The new algorithm is tested using three example systems employing deformable bodies in two and three spatial dimensions. The current examples are limited to the ANCF line or cable elements, but the approach may be extended to higher-order elements. In its basic form, the divide-and-conquer algorithm is time and processor optimal, yielding logarithmic complexity O(log(N_b)) when implemented using O(N_b) processors, where N_b is the number of bodies in the system.

13.
Online trip planning is a popular service that has greatly benefited many people. However, little attention has been paid to personalized trip planning, which is even more useful. In this paper, we define a highly expressive personalized route planning query, the Personalized and Sequenced Route (PSR) query, which considers both personalization and a sequenced constraint, and we propose a novel framework to process the query. The framework consists of three phases: guessing, crossover, and refinement. The guessing phase strives to obtain one high-quality route as a baseline to bound the search space within a circular region. The crossover phase heuristically improves the quality of multiple guessed routes via a modified genetic algorithm, which further narrows the radius of the search space. The refinement phase backwardly examines each candidate point and partial route to rule out impossible ones. The combination of these phases efficiently and effectively narrows the search space within a few iterations. In the experiments, we first show our evaluation results for each phase separately, demonstrating the effectiveness of each phase; we then present the evaluation results of their combination, which offers insight into the merits of the proposed framework.

14.
Given a tree T=(V,E) of n nodes such that each node v is associated with a value-weight pair (val_v, w_v), where the value val_v is a real number and the weight w_v is a non-negative integer, the density of T is defined as \(\frac{\sum_{v\in V}{\mathit{val}}_{v}}{\sum_{v\in V}w_{v}}\). A subtree of T is a connected subgraph (V′,E′) of T, where V′ ⊆ V and E′ ⊆ E. Given two integers w_min and w_max, the weight-constrained maximum-density subtree problem on T is to find a maximum-density subtree T′=(V′,E′) satisfying \(w_{\min}\le\sum_{v\in V'}w_{v}\le w_{\max}\). In this paper, we first present an O(w_max n)-time algorithm to find a weight-constrained maximum-density path in a tree T, and then present an O(w_max^2 n)-time algorithm to find a weight-constrained maximum-density subtree in T. Finally, given a node subset S ⊆ V, we also present an O(w_max^2 n)-time algorithm to find a weight-constrained maximum-density subtree of T which covers all the nodes in S.
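As a reference for the problem statement (not the O(w_max n) or O(w_max^2 n) algorithms above), the sketch below finds a weight-constrained maximum-density path naively: it enumerates node pairs, recovers the unique tree path between them, and keeps the densest path whose total weight falls in [w_min, w_max]. The adjacency-list representation and the toy data are assumptions.

```python
# Naive reference only (not the O(w_max * n) algorithm from the abstract):
# enumerate every pair of nodes, recover the unique tree path between them,
# and keep the densest path whose total weight lies in [w_min, w_max].
# Assumes w_min > 0 so the density is always well defined in this toy.
from itertools import combinations

def max_density_path(adj, val, w, w_min, w_max):
    def path(u, target):                       # DFS to recover the unique u-target path
        stack = [(u, [u])]
        while stack:
            node, p = stack.pop()
            if node == target:
                return p
            for nxt in adj[node]:
                if nxt not in p:
                    stack.append((nxt, p + [nxt]))
        return None

    best, best_density = None, float("-inf")
    for u, x in combinations(list(adj), 2):
        p = path(u, x)
        weight = sum(w[v] for v in p)
        if w_min <= weight <= w_max:
            density = sum(val[v] for v in p) / weight
            if density > best_density:
                best, best_density = p, density
    return best, best_density

adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
val = {1: 5.0, 2: -1.0, 3: 4.0, 4: 6.0}
w = {1: 2, 2: 1, 3: 1, 4: 2}
print(max_density_path(adj, val, w, w_min=3, w_max=5))
```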

15.
The Shor algorithm is effective against public-key cryptosystems based on an abelian group. At CRYPTO 2001, Paeng (2001) presented a MOR cryptosystem using a non-abelian group, which can be considered a candidate scheme for resisting quantum attacks. This paper analyses the security of a MOR cryptosystem based on a finite associative algebra using a quantum algorithm. Specifically, let L be a finite associative algebra over a finite field F. Consider a homomorphism φ: Aut(L) → Aut(H) × Aut(I), where I is an ideal of L and H ≅ L/I. We compute dim Im(φ) and dim Ker(φ), and combine them via dim Aut(L) = dim Im(φ) + dim Ker(φ). We prove that Im(φ) = Stab_{Comp(H,I)}(μ + B^2(H, I)) and Ker(φ) ≅ Z^1(H, I). Thus, we can obtain dim Im(φ), since the algorithm for the stabilizer is a standard algorithm among abelian hidden subgroup algorithms. In addition, Z^1(H, I) is equivalent to the solution space of a system of linear equations over the Galois field GF(p), so dim Ker(φ) can be obtained by the enumeration theorem. Furthermore, we can obtain the dimension of the automorphism group Aut(L). Once the map used in the scheme lies in Aut(L), it is possible to effectively compute the cyclic group it generates and recover the private key a. Therefore, the MOR scheme is insecure when based on a finite associative algebra in quantum computation.

16.
The abundance and ubiquity of graphs (e.g., online social networks such as Google+ and Facebook; bibliographic graphs such as DBLP) necessitate effective and efficient search over them. Given a set of keywords that can identify a data subject (DS), a recently proposed keyword search paradigm produces a set of object summaries (OSs) as results. An OS is a tree structure rooted at the DS node (i.e., a node containing the keywords) with surrounding nodes that summarize all data held in the graph about the DS. OS snippets, denoted size-l OSs, have also been investigated. A size-l OS is a partial OS containing l nodes such that the sum of their importance scores is the maximum possible. However, the set of nodes that maximizes the total importance score may result in an uninformative size-l OS, as very important nodes may be repeated in it, dominating other representative information. In view of this limitation, in this paper we investigate the effective and efficient generation of two novel types of OS snippets, namely diverse and proportional size-l OSs, denoted DSize-l and PSize-l OSs. That is, besides the importance of each node, we also consider its pairwise relevance (similarity) to the other nodes in the OS and in the snippet. We conduct an extensive evaluation on two real graphs (DBLP and Google+). We verify effectiveness by collecting user feedback, e.g., by asking DBLP authors (i.e., the DSs themselves) to evaluate our results. In addition, we verify the efficiency of our algorithms and evaluate the quality of the snippets they produce.
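The DSize-l/PSize-l definitions and algorithms are specific to the paper; the sketch below only illustrates the general idea of trading node importance against pairwise similarity with a greedy, MMR-style selection. The weighting parameter, similarity function, and toy data are assumptions, not the paper's scoring.

```python
# Hedged sketch of a greedy, MMR-style trade-off between node importance and
# pairwise similarity, to illustrate the kind of "diverse size-l" selection the
# abstract discusses; the actual DSize-l/PSize-l algorithms are not reproduced.
def diverse_select(nodes, importance, similarity, l, lam=0.7):
    """Greedily pick l nodes, each time maximizing
    lam * importance - (1 - lam) * (max similarity to already chosen nodes)."""
    chosen = []
    remaining = set(nodes)
    while remaining and len(chosen) < l:
        def marginal(n):
            penalty = max((similarity(n, c) for c in chosen), default=0.0)
            return lam * importance[n] - (1 - lam) * penalty
        best = max(remaining, key=marginal)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy example: importance scores plus a similarity that penalizes same-type nodes.
imp = {"paper1": 0.9, "paper2": 0.85, "coauthor": 0.6, "venue": 0.5}
sim = lambda a, b: 0.9 if a.startswith("paper") and b.startswith("paper") else 0.1
print(diverse_select(imp.keys(), imp, sim, l=3))
```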

17.
An optimal probabilistic-planning algorithm solves a problem, usually modeled by a Markov decision process, by finding an optimal policy. In this paper, we study the k best policies problem: finding the k best policies of a discrete Markov decision process. The k best policies, for k > 1, cannot be found directly using dynamic programming. Naïvely, finding the k-th best policy can be Turing-reduced to the optimal planning problem, but the number of problems queried in the naïve algorithm is exponential in k. We show empirically that solving the k best policies problem via this reduction requires unreasonable amounts of time even when k = 3. We then provide two new algorithms. The first is a complete algorithm, based on our theoretical contribution that the k-th best policy differs from the i-th best policy, for some i < k, on exactly one state. The second is an approximate algorithm that skips many less useful policies. We show that both algorithms have good scalability. We also show that the approximate algorithm runs much faster and finds interesting, high-quality policies.

18.
In this paper we present a polynomial-time approximation scheme for the most points covering problem. In the most points covering problem, n points in \(\mathbb{R}^2\), a radius r > 0, and an integer m > 0 are given, and the goal is to cover the maximum number of points with m disks of radius r. The dual of the most points covering problem is the partial covering problem, in which n points in \(\mathbb{R}^2\) are given and we try to cover at least p·n of these n points with the minimum number of disks. Both problems are NP-hard. To solve the most points covering problem, we use the solution of the partial covering problem to obtain an upper bound and then generate a valid solution for the most points covering problem by a careful modification of the partial covering solution. We first present an improved approximation algorithm for the partial covering problem which has a better running time than the previous algorithm for this problem. Using this algorithm, we attain a \((1 - \frac{2\varepsilon}{1+\varepsilon})\)-approximation algorithm for the most points covering problem. The running time of our algorithm is \(O((1+\varepsilon)mn+\varepsilon^{-1}n^{4\sqrt{2}\varepsilon^{-1}+2})\), which is polynomial with respect to both m and n, whereas the previously known algorithm for this problem runs in \(O(n \log n + n\varepsilon^{-6m+6} \log(\frac{1}{\varepsilon}))\), which is exponential in m.
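As a baseline that only illustrates the problem (and is neither the PTAS nor the partial-covering algorithm above), the sketch below greedily places m disks, each centered at the input point covering the most still-uncovered points; restricting candidate centers to the input points is an assumption made purely for illustration.

```python
# Simple greedy heuristic for the most points covering problem, to make the
# problem statement concrete; it is NOT the PTAS from the abstract. Candidate
# disk centers are restricted to the input points themselves (an assumption
# made here purely for illustration).
from math import hypot

def greedy_cover(points, m, r):
    uncovered = set(points)
    centers = []
    for _ in range(m):
        if not uncovered:
            break
        # pick the candidate center covering the most still-uncovered points
        best = max(points, key=lambda c: sum(hypot(c[0]-p[0], c[1]-p[1]) <= r for p in uncovered))
        centers.append(best)
        uncovered = {p for p in uncovered if hypot(best[0]-p[0], best[1]-p[1]) > r}
    return centers, len(points) - len(uncovered)

pts = [(0, 0), (1, 0), (0, 1), (5, 5), (5, 6), (10, 10)]
print(greedy_cover(pts, m=2, r=1.5))  # 2 disks cover 5 of the 6 points
```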

19.
We consider the k-server problem under the advice model of computation when the underlying metric space is sparse. On one side, we introduce Θ(1)-competitive algorithms for a wide range of sparse graphs. These algorithms require advice of (almost) linear size. We show that for graphs of size N and treewidth α, there is an online algorithm that receives O(n(log α + log log N)) bits of advice and optimally serves any sequence of length n. We also prove that if a graph admits a system of μ collective tree (q, r)-spanners, then there is a (q + r)-competitive algorithm which requires O(n(log μ + log log N)) bits of advice. Among other results, this gives a 3-competitive algorithm for planar graphs, when provided with O(n log log N) bits of advice. On the other side, we prove that advice of size Ω(n) is required to obtain a 1-competitive algorithm for sequences of length n, even for the 2-server problem on a path metric of size N ≥ 3. Through another lower-bound argument, we show that at least \(\frac {n}{2}(\log \alpha - 1.22)\) bits of advice are required to obtain an optimal solution for metric spaces of treewidth α, where 4 ≤ α < 2k.

20.
Continuous top-k query over a sliding window is a fundamental problem in databases: it retrieves the k objects with the highest scores whenever the window slides. Existing studies mainly adopt exact algorithms to tackle this type of query; their key idea is to maintain a subset of objects in the window and try to retrieve answers from it. However, all the existing algorithms are sensitive to query parameters and data distribution. In addition, they suffer from expensive overhead for incremental maintenance and thus cannot satisfy real-time requirements. In this paper, we define a novel query named the (ε, δ)-approximate continuous top-k query, which returns approximate answers for the top-k query. To support this query, we propose an efficient framework, named PABF (Probabilistic Approximate Based Framework), for approximate top-k queries over sliding windows. We first maintain a self-adaptive pruning value, which filters out newly arrived objects that have a probability of less than 1 − δ of being a query result. Objects that are not filtered out are combined together if the score difference among them is less than a threshold. To efficiently maintain these combined results, PABF also employs a multi-phase merging algorithm. Theoretical analysis indicates that, even in the worst case, only logarithmic complexity is required for maintaining each candidate.
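PABF's pruning and merging details are specific to the paper; for contrast, the sketch below shows the naive exact baseline for continuous top-k over a count-based sliding window, which recomputes the answer on every slide. Class and parameter names are assumptions.

```python
# Minimal exact baseline for continuous top-k over a count-based sliding window
# (the naive approach the abstract contrasts with; PABF's probabilistic pruning
# and merging are not reproduced here). Object names and scores are illustrative.
from collections import deque
import heapq

class SlidingTopK:
    def __init__(self, window_size, k):
        self.window = deque(maxlen=window_size)   # evicts the oldest object automatically
        self.k = k

    def insert(self, obj_id, score):
        self.window.append((obj_id, score))
        # recompute top-k from scratch on every slide: O(W log k) per update
        return heapq.nlargest(self.k, self.window, key=lambda x: x[1])

topk = SlidingTopK(window_size=4, k=2)
for i, s in enumerate([0.3, 0.9, 0.1, 0.7, 0.8]):
    print(topk.insert(f"o{i}", s))
```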

