Similar Documents
A total of 20 similar documents were found (search time: 343 ms).
1.
In this paper, we consider the on-line scheduling of jobs that may be competing for mutually exclusive resources. We model the conflicts between jobs with a conflict graph, so that the set of all concurrently running jobs must form an independent set in the graph. This model is natural and general enough to have applications in a variety of settings; however, we are motivated by the following two specific applications: traffic intersection control and session scheduling in high-speed local area networks with spatial reuse. Our results focus on two special classes of graphs motivated by our applications: bipartite graphs and interval graphs. The cost function we use is maximum response time. In all of the upper bounds, we devise algorithms that maintain a set of invariants bounding the accumulation of jobs on cliques (edges, in the case of bipartite graphs) in the graph. The lower bounds show that the invariants maintained by the algorithms are tight to within a constant factor. For a specific graph that arises in the traffic intersection control problem, we give a simple algorithm that achieves the optimal competitive ratio.
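As an illustration of the conflict-graph model only (not of the competitive algorithms analyzed in the paper), the following Python sketch runs unit-length jobs under a conflict graph, starting an oldest-first greedy independent set at each time step and reporting the maximum response time; all job names are hypothetical.

```python
def greedy_conflict_schedule(jobs, conflicts):
    """jobs: dict mapping job id -> release time (unit-length jobs).
    conflicts: set of frozensets {a, b} of jobs that may not run together.
    At each step, greedily start an independent set, oldest release first.
    Returns the maximum response time.  Illustrative sketch only."""
    pending = dict(jobs)
    t, max_response = 0, 0
    while pending:
        ready = sorted((rel, j) for j, rel in pending.items() if rel <= t)
        running = []
        for rel, j in ready:
            if all(frozenset((j, k)) not in conflicts for k in running):
                running.append(j)
        for j in running:
            max_response = max(max_response, t + 1 - pending.pop(j))
        t += 1
    return max_response

# Toy traffic-intersection example: two crossing phases conflict.
print(greedy_conflict_schedule({"NS": 0, "EW": 0, "NS2": 1},
                               {frozenset(("NS", "EW"))}))   # prints 2
```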

2.
In this paper, the main components of a workflow system that are relevant to correctness in the presence of concurrency are formalized based on set theory and graph theory. The formalization, which constitutes the theoretical basis of the correctness criterion provided, can be summarized as follows:
- Activities of a workflow are represented through a notation based on set theory, making it possible to formalize the conceptual grouping of activities.
- Control-flow is represented as a special graph based on this set definition; it includes serial composition, parallel composition, conditional branching, and nesting of individual activities and of conceptual activities themselves.
- Data-flow is represented as a directed acyclic graph in conformance with the control-flow graph.
The formalization of correctness of concurrently executing workflow instances is based on this framework by defining two categories of constraints on the workflow environment with which the workflow instances and their activities interact:
- Basic constraints, which specify the correct states of a workflow environment.
- Inter-activity constraints, which define the semantic dependencies among activities, such as an activity requiring the validity of a constraint that is set or verified by a preceding activity.
A basic constraints graph and an inter-activity constraints graph, in conformance with the control-flow and data-flow graphs, are then defined to represent these constraints. These graphs are used in formalizing the intervals among activities where an inter-activity constraint should be maintained and the intervals where a basic constraint remains invalid. A correctness criterion is defined for an interleaved execution of workflow instances using the constraint graphs. A concurrency control mechanism, namely the Constraint Based Concurrency Control technique, is developed based on the correctness criterion. The performance analysis shows the superiority of the proposed technique. Other possible approaches to the problem are also presented.

3.
In our previous articles, we made the case for having an enterprise architecture and discussed the first phases of an architecture development process. The second article concentrated on describing the baseline architecture and defining the target architecture. We complete our discussion of the methodology by focusing on transition and implementation planning. Transition planning focuses on deriving a time-phased set of actions to achieve a given goal, in this case implementation of the target architecture. Large organizations will remediate, renovate, or replace many systems concurrently. In doing so, they must recognize interdependencies among systems and accommodate them in activity scheduling. Implementation planning has a different time frame and a different audience. It maps resources (people, places, things, and funding) to transition planning activities.

4.
The first part of this paper describes an automatic reverse engineering process to infer subsystem abstractions that are useful for a variety of software maintenance activities. This process is based on clustering the graph of modules and module-level dependencies found in the source code into abstract structures, called subsystems, that do not appear in the source code. The clustering process uses evolutionary algorithms to search through the enormous set of possible graph partitions, and is guided by a fitness function designed to measure the quality of individual graph partitions. The second part of this paper focuses on evaluating the results produced by our clustering technique. Our previous research has shown through both qualitative and quantitative studies that our clustering technique produces good results quickly and consistently. In this part of the paper we study the underlying structure of the search space of several open source systems. We also report on some interesting findings that our analysis uncovered by comparing random graphs to graphs representing real software systems.
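For readers who want to experiment with search-based clustering of a module dependency graph, the sketch below uses simple hill climbing over partitions with a toy fitness (intra-cluster minus inter-cluster edges). It stands in for the paper's evolutionary search and its modularization-quality fitness; all names and parameters are illustrative assumptions.

```python
import random

def cluster_fitness(edges, assign):
    """Toy fitness: intra-cluster edges minus inter-cluster edges
    (a stand-in for the modularization-quality measure used in the paper)."""
    intra = sum(1 for u, v in edges if assign[u] == assign[v])
    return intra - (len(edges) - intra)

def hill_climb_clustering(nodes, edges, k, iters=20000, seed=0):
    """Search the space of k-way partitions of the module dependency graph
    by repeated single-module moves, keeping improving (or equal) moves.
    nodes: list of modules; edges: list of (u, v) dependencies."""
    rng = random.Random(seed)
    assign = {n: rng.randrange(k) for n in nodes}
    best = cluster_fitness(edges, assign)
    for _ in range(iters):
        n, c = rng.choice(nodes), rng.randrange(k)
        old = assign[n]
        if c == old:
            continue
        assign[n] = c
        f = cluster_fitness(edges, assign)
        if f >= best:
            best = f
        else:
            assign[n] = old        # revert a worsening move
    return assign, best
```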

5.
Wang Yong, Yun Xiaochun, Li Yifei. Journal of Software, 2008, 19(4): 981-992
Measuring and analyzing the topological characteristics of peer-to-peer (P2P) networks is the basis for solving problems such as P2P optimization and network supervision. P2P networks are large-scale, self-organizing, and highly dynamic complex network systems, so measuring the complete topology of a P2P network accurately is very difficult. Studying the protocol characteristics of P2P networks and analyzing specific P2P topology instances has therefore become a practical way to understand P2P topological properties. Taking the Gnutella network as the measurement target, this paper defines metrics for the accuracy and completeness of P2P topology measurement systems, and designs and implements D-Crawler, a distributed Gnutella topology crawler based on positive feedback; it then analyzes the degree-rank distribution, degree-frequency distribution, and small-world properties of Gnutella topology graphs. Experimental and analytical results show that the properties of P2P topology graphs are closely related to the protocol used and to the behavior of client software, and that the topological relationships among nodes at different levels of the Gnutella network exhibit different characteristics: the subgraph formed by upper-level nodes follows a power law in its degree-rank distribution but a normal distribution in its degree-frequency distribution, whereas lower-level nodes show only a weak power law in their degree-rank distribution but a clear power law in their degree-frequency distribution. Fitting results show that power laws fit the degree-rank distributions and the degree-frequency distribution of lower-level nodes well, while a Gaussian fit works best for the degree probability density of upper-level nodes. The Gnutella network has small-world properties, that is, a large clustering coefficient and a small characteristic path length, but it is not a scale-free graph and does not conform to the BA (Barabási-Albert) growth model; its evolution follows a growth process different from that of the BA model.
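To make this kind of analysis concrete, here is a small Python sketch that computes degree-rank and degree-frequency data from a crawled topology snapshot and estimates a power-law exponent by a log-log least-squares fit. It is a generic illustration, not the D-Crawler tool or the fitting procedure used in the paper; the data layout is an assumption.

```python
import math
from collections import Counter

def degree_data(adj):
    """adj: {node: set of neighbours}.  Returns the degree-rank sequence
    (degrees sorted descending) and the degree-frequency distribution P(k)."""
    degrees = sorted((len(nbrs) for nbrs in adj.values()), reverse=True)
    counts = Counter(degrees)
    n = len(degrees)
    return degrees, {k: c / n for k, c in counts.items()}

def loglog_slope(points):
    """Least-squares slope of log(y) against log(x) for (x, y) pairs with
    x, y > 0; for data following y ~ x**a the slope estimates the exponent a."""
    pts = [(math.log(x), math.log(y)) for x, y in points if x > 0 and y > 0]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

# Degree-rank exponent: slope of degree vs. rank (1-based):
#   loglog_slope([(r, d) for r, d in enumerate(degrees, start=1)])
# Degree-frequency exponent: slope of P(k) vs. k:
#   loglog_slope(sorted(freq.items()))
```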

6.
Designing and reasoning about real-time systems are difficult activities, in which timing and reactive behaviour requirements add significant complexity to system validation. In this paper, a new technique for distributed prototyping of real-time systems is presented. It enables system prototypes to be concurrently developed and tested by a geographically distributed team, in such a way that each developer can validate his or her part of the system against the other parts which are being built in other development sites. A set of tools has been implemented that supports validation of functional and time behaviour through distributed animation of graphical prototypes with a consistent vision of simulated time.

7.
Wizard: a database inference analysis and detection system
The database inference problem is a well-known problem in database security and information system security in general. In order to prevent an adversary from inferring classified information from combinations of unclassified information, a database inference analyst must be able to detect and prevent possible inferences. Detecting database inference problems at database design time provides great power in reducing problems over the lifetime of a database. We have developed and constructed a system called Wizard to analyze databases for their inference problems. The system takes as input a database schema, its constituent instances (if available), and additional human-supplied domain information, and provides a set of associations between entities and/or activities that can be grouped by their potential severity of inference vulnerability. A knowledge acquisition process called microanalysis permits semantic knowledge of a database to be incorporated into the analysis using conceptual graphs. These graphs are then analyzed with respect to inference-relevant domains we call facets, using tools we have developed. We can determine inference problems within single facets as well as some inference problems between two or more facets. The architecture of the system is meant to be general so that further refinements of inference information subdomains can be easily incorporated into the system.

8.
Modern distributed systems contain a large number of objects and must be capable of evolving, without shutting down the complete system, to cater for changing requirements. There is a need for distributed, automated management agents whose behavior also has to change dynamically to reflect the evolution of the system being managed. Policies are a means of specifying and influencing management behavior within a distributed system, without coding the behavior into the manager agents. Our approach is aimed at specifying implementable policies, although policies may be initially specified at the organizational level and then refined to implementable actions. We are concerned with two types of policies. Authorization policies specify what activities a manager is permitted or forbidden to do to a set of target objects and are similar to security access-control policies. Obligation policies specify what activities a manager must or must not do to a set of target objects and essentially define the duties of a manager. Conflicts can arise in the set of policies. Conflicts may also arise during the refinement process between the high-level goals and the implementable policies. The system may have to cater for conflicts such as exceptions to normal authorization policies. The paper reviews policy conflicts, focusing on the problems of conflict detection and resolution. We discuss the various precedence relationships that can be established between policies in order to allow inconsistent policies to coexist within the system, and present a conflict analysis tool that forms part of a role-based management framework. Software development and medical environments are used as example scenarios.
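As a minimal illustration of modality-conflict detection between such policies, the Python sketch below represents each policy as a dictionary with a modality plus subject, target, and action sets, and flags overlapping pairs whose modalities contradict each other. The overlap test and the conflict table are simplifying assumptions for illustration, not the paper's analysis tool.

```python
from itertools import combinations

# Modality pairs assumed contradictory: permitted vs. forbidden,
# obliged vs. must-not, and obliged to do something that is forbidden.
MODALITY_CONFLICTS = {
    frozenset({"auth+", "auth-"}),
    frozenset({"oblig+", "oblig-"}),
    frozenset({"oblig+", "auth-"}),
}

def overlaps(p, q):
    """Simplified overlap test: the policies share at least one subject,
    one target, and one action."""
    return bool(p["subjects"] & q["subjects"]
                and p["targets"] & q["targets"]
                and p["actions"] & q["actions"])

def detect_conflicts(policies):
    """Return id pairs of policies that overlap with contradictory modalities."""
    return [(p["id"], q["id"])
            for p, q in combinations(policies, 2)
            if overlaps(p, q)
            and frozenset({p["modality"], q["modality"]}) in MODALITY_CONFLICTS]
```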

9.
Planning graphs have been shown to be a rich source of heuristic information for many kinds of planners. In many cases, planners must compute a planning graph for each element of a set of states, and the naive technique enumerates the graphs individually. This is equivalent to solving a multiple-source shortest path problem by iterating a single-source algorithm over each source. We introduce a data structure, the state agnostic planning graph, that directly solves the multiple-source problem for the relaxation introduced by planning graphs. The technique can also be characterized as exploiting the overlap present in sets of planning graphs. For the purpose of exposition, we first present the technique in deterministic (classical) planning to capture a set of planning graphs used in forward chaining search. A more prominent application of this technique is in conformant and conditional planning (i.e., search in belief state space), where each search node utilizes a set of planning graphs; an optimization to exploit state overlap between belief states collapses the set of sets of planning graphs to a single set. We describe another extension in conformant probabilistic planning that reuses planning graph samples of probabilistic action outcomes across search nodes to otherwise curb the inherent prediction cost associated with handling probabilistic actions. Finally, we show how to extract a state agnostic relaxed plan that implicitly solves the relaxed planning problem in each of the planning graphs represented by the state agnostic planning graph and reduces each heuristic evaluation to counting the relevant actions in the state agnostic relaxed plan. Our experimental evaluation (using many existing International Planning Competition problems from classical and non-deterministic conformant tracks) quantifies each of these performance boosts, and demonstrates that heuristic belief state space progression planning using our technique is competitive with the state of the art.
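The relaxation that planning graphs compute can be illustrated with the delete-free level expansion below for a single source state; the paper's state agnostic graph shares such structure across many source states, which this single-state sketch does not attempt. Function and argument names are illustrative assumptions.

```python
def relaxed_graph_level(init, goals, actions):
    """Expand a delete-relaxed planning graph from the state `init` and
    return the first level at which every goal fact appears (None if the
    goals are unreachable under the relaxation).
    actions: iterable of (precondition_set, add_effect_set)."""
    facts = set(init)
    fact_level = {f: 0 for f in facts}
    level = 0
    while not set(goals) <= facts:
        new = set()
        for pre, add in actions:
            if pre <= facts:
                new |= add - facts
        if not new:          # fixpoint reached without the goals
            return None
        level += 1
        for f in new:
            fact_level[f] = level
        facts |= new
    return max((fact_level[g] for g in goals), default=0)
```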

10.
Social networks are usually modeled and represented as deterministic graphs, with a set of nodes as users and edges as connections between users of the network. Due to the uncertain and dynamic nature of user behavior and human activities in social networks, their structural and behavioral parameters are time-varying, and for this reason using deterministic graphs for modeling and analyzing the behavior of users may not be appropriate. In this paper, we propose that stochastic graphs, in which the weights associated with edges are random variables, may be a better candidate as a graph model for social network analysis. Thus, we first propose generalizations of some network measures for stochastic graphs and then propose six learning automata based algorithms for calculating these measures when the probability distribution functions of the edge weights of the graph are unknown. Simulations on different synthetic stochastic graphs show that, in order to obtain good estimates of the network measures, the number of samples the proposed algorithms take from the edges of the graph is significantly lower than that required by standard sampling methods, making them well suited to the analysis of human behavior in online social networks.
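The baseline against which such algorithms are compared can be sketched as plain sequential sampling of each edge with a confidence-interval stopping rule, as below. The thresholds and the `sample_edge` callback are assumptions made for illustration; the learning automata based algorithms themselves are not reproduced here.

```python
import math

def estimate_edge_weights(edges, sample_edge, eps=0.05, z=1.96,
                          min_samples=10, max_samples=10000):
    """Estimate the expected weight of each edge of a stochastic graph by
    sequential sampling with a normal-approximation stopping rule: stop
    when the confidence half-width drops below eps * |mean|.
    `sample_edge(e)` draws one weight observation for edge e.
    This is a plain sampling baseline, not the paper's algorithms."""
    estimates = {}
    for e in edges:
        n, mean, m2 = 0, 0.0, 0.0          # Welford running statistics
        while True:
            x = sample_edge(e)
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)
            if n >= min_samples:
                std_err = math.sqrt(m2 / (n - 1) / n)
                if z * std_err <= eps * abs(mean) or n >= max_samples:
                    break
        estimates[e] = (mean, n)
    return estimates
```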

11.
Simulation and verification are two conventional techniques for the analysis of specifications of real-time systems. While simulation is relatively inexpensive in terms of execution time, it only validates the behavior of a system for one particular computation path. On the other hand, verification provides guarantees over the entire set of computation paths of a system, but is, in general, very expensive due to the state-space explosion problem. We introduce a new technique, simulation-verification, which combines the best of both worlds by synthesizing an intermediate analysis method. This method uses simulation to limit the generation of a computation graph to that set of computations consistent with the simulation. This limited computation graph, called a simulation-verification graph, can be one or more orders of magnitude smaller than the full computation graph. A tool, XSVT, is described which implements simulation-verification graphs. Three paradigms for using the new technique are proposed. The paper illustrates the application of the proposed technique via an example of a robot controller for a manufacturing assembly line.

12.
Incremental constraint modelling in a feature modelling system
The techniques of constraint propagation have recently been successfully applied to feature-based design. Because of their speed, constraint propagation methods allow incremental design and rapid local modifications of the part. However, cyclic constraints cause serious problems for current constraint propagation algorithms. Variational geometric design systems can, in principle, manage these cases. Unfortunately, this typically requires complete re-evaluation of the underlying set of constraint equations, making the method unsuitable for interactive use. The proposed system aims to localize the problem of constraint solving and maintenance. The constraint graph of the part or assembly is divided into several independent partial graphs, called subsystems. Afterwards, each subsystem is handled separately using a constraint solving technique selected for that subsystem.

13.
We consider the following problem of scheduling with agreements: a set of jobs must be scheduled non-preemptively on identical machines subject to constraints that only some specific jobs can be scheduled concurrently on different machines. These constraints are represented by an agreement graph, and the aim is to minimize the makespan. This problem is NP-hard. We study the complexity of the problem for two machines and arbitrary bipartite agreement graphs; in particular, we prove the NP-hardness of an open problem posed in the literature, namely the case of two machines with processing times at most 3. We propose list algorithms, with empirical results, for the problem in the general case.
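A simple list algorithm for this model can be sketched in a few lines: jobs are scanned in list order at every integer time step and started only on a free machine and only if they agree with everything currently running. This is one plausible list heuristic under stated assumptions (integer processing times, a fixed priority list), not a specific algorithm from the paper.

```python
def list_schedule_with_agreements(jobs, agree, machines=2):
    """Greedy list scheduling under an agreement graph: a job may run
    concurrently with another job only if the pair is an edge of `agree`.
    jobs: priority list of (job_id, integer processing time);
    agree: set of frozensets of compatible job pairs.
    Returns (makespan, {job: (start, finish)})."""
    remaining = dict(jobs)
    order = [j for j, _ in jobs]
    running, start, schedule = {}, {}, {}
    t = 0
    while remaining or running:
        for j in list(running):                 # advance one time unit
            running[j] -= 1
            if running[j] == 0:
                schedule[j] = (start[j], t)
                del running[j]
        for j in order:                         # fill idle machines
            if len(running) >= machines:
                break
            if j in remaining and all(frozenset((j, k)) in agree
                                      for k in running):
                start[j] = t
                running[j] = remaining.pop(j)
        t += 1
    return t - 1, schedule
```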

14.
On parallelizing the multiprocessor scheduling problem
Existing heuristics for scheduling a node- and edge-weighted directed task graph to multiple processors can produce satisfactory solutions but incur high time complexities, which tend to worsen in more realistic environments with relaxed assumptions. Consequently, these heuristics do not scale well and cannot handle problems of moderate sizes. A natural approach to reducing complexity, while aiming for a similar or potentially better solution, is to parallelize the scheduling algorithm. This can be done by partitioning the task graphs and concurrently generating partial schedules for the partitioned parts, which are then concatenated to obtain the final schedule. The problem, however, is nontrivial, as there exist dependencies among the nodes of a task graph which must be preserved for generating a valid schedule. Moreover, the time clock for scheduling is global for all the processors (that are executing the parallel scheduling algorithm), making the inherent parallelism invisible. In this paper, we introduce a parallel algorithm that is guided by a systematic partitioning of the task graph to perform scheduling using multiple processors. The algorithm schedules both the tasks and messages, is suitable for graphs with arbitrary computation and communication costs, and is applicable to systems with arbitrary network topologies using homogeneous or heterogeneous processors. We have implemented the algorithm on the Intel Paragon and compared it with three closely related algorithms. The experimental results indicate that our algorithm yields higher-quality solutions while using an order of magnitude smaller scheduling times. The algorithm also exhibits an interesting trade-off between the solution quality and speedup while scaling well with the problem size.
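As a reference point for what such heuristics do, here is a compact sequential list-scheduling sketch that ranks tasks by bottom level and places each one on the processor giving the earliest finish time, charging communication only across processors. It illustrates the class of heuristic the paper parallelizes (under the assumption of positive task costs), not the parallel algorithm itself; all data-structure shapes are assumptions.

```python
import functools

def blevel_list_schedule(tasks, succ, comm, n_procs):
    """tasks: {task: computation cost > 0}; succ: {task: [children]};
    comm: {(u, v): communication cost}; n_procs: number of processors.
    Returns ({task: finish time}, {task: processor})."""
    @functools.lru_cache(maxsize=None)
    def blevel(u):                       # longest path from u to an exit task
        return tasks[u] + max((comm[(u, v)] + blevel(v)
                               for v in succ.get(u, [])), default=0)

    # With positive costs, descending b-level order respects precedence.
    order = sorted(tasks, key=blevel, reverse=True)
    pred = {t: [] for t in tasks}
    for u, children in succ.items():
        for v in children:
            pred[v].append(u)

    proc_free = [0.0] * n_procs
    finish, place = {}, {}
    for u in order:
        best = None
        for p in range(n_procs):
            data_ready = max((finish[v] + (0 if place[v] == p else comm[(v, u)])
                              for v in pred[u]), default=0.0)
            start = max(data_ready, proc_free[p])
            if best is None or start + tasks[u] < best[0]:
                best = (start + tasks[u], p)
        finish[u], place[u] = best[0], best[1]
        proc_free[best[1]] = best[0]
    return finish, place
```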

15.
Using agile methods to develop large systems presents a thorny set of issues. If large teams are to produce lots of software functionality quickly, the agile methods involved must scale to meet the task. After all, a small team could create the software if the functionality to be delivered were small or, conversely, if there were enough time to deliver it. Scaling agile teams thus becomes an issue if the only option for meeting a system delivery deadline is to have many developers working concurrently.

16.
International Journal of Computer Mathematics, 2012, 89(9): 1483-1489
The problem of counting rankings satisfying the collinearity condition with respect to two rankings over the metric spaces of rank distance is treated in this paper by transforming it into finding the number of systems of distinct representatives (abbreviated as SDRs) with respect to associated set systems. A formula for the chromatic list expression of graphs is then given in terms of the inclusion-exclusion principle, followed by a formula for the number of SDRs of set systems when the graphs are complete.
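Since the reduction is to counting systems of distinct representatives, a tiny brute-force counter (equivalent to computing the permanent of the incidence matrix) makes the counted object concrete; it is an illustration for small instances, not the formulas derived in the paper.

```python
def count_sdrs(sets):
    """Count the systems of distinct representatives of a set system:
    choices (x_1, ..., x_n) with x_i in sets[i] and all x_i distinct.
    Simple backtracking; exponential, so only for small instances."""
    sets = [frozenset(s) for s in sets]

    def extend(i, used):
        if i == len(sets):
            return 1
        return sum(extend(i + 1, used | {x}) for x in sets[i] if x not in used)

    return extend(0, frozenset())

# Example: {1,2}, {2,3}, {1,3} has exactly two SDRs, (1,2,3) and (2,3,1).
print(count_sdrs([{1, 2}, {2, 3}, {1, 3}]))   # prints 2
```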

17.
In spectral analysis of geometric figures or image boundaries, representing discontinuous shapes with the Fourier trigonometric basis inevitably produces the Gibbs phenomenon, while representation with Walsh functions performs poorly because of its slow convergence. This paper first constructs a class of piecewise polynomial function sets whose breakpoints lie at quaternary rational points (the quaternary U-system, or QU-system for short), which forms a complete orthogonal function system on L2[0,1], and studies its properties, its basis functions, and the formulas for computing Fourier-QU coefficients; explicit expressions for the QU-systems of degrees 1 to 3 are also given. Then, finite partial sums of the Fourier-QU series are used to represent image contours, and describing geometric figures or image contours by a finite number of Fourier-QU coefficients is proposed, yielding a new class of polynomial descriptors, the QU descriptors; the normalized QU descriptors are feature invariants under translation, rotation, and scaling. Finally, numerical experiments confirm that the Fourier-QU series converges faster than the Fourier, Walsh, and Fourier-BU series when approximating univariate square-integrable functions, and they also verify that QU descriptors are an effective class of shape descriptors: the QU distance between images accurately characterizes the similarity between images.

18.
This paper proposes an optimization technique for spot-checking to minimize the computation time in volunteer computing (VC) systems with non-reliable participants. Credibility-based voting with spot-checking is a promising approach to high-performance and reliable VC systems. In this approach, the spot-check rate, which must be set before the computation starts, has a significant impact on performance. Therefore, estimating the optimal spot-check rate is the major concern in minimizing the computation time. The key idea of the estimation is to express the mathematical expectation of the computation time as a function of the spot-check rate. Extensive simulation has shown that the proposed technique always obtains an approximate estimate of the optimal spot-check rate and minimizes the computation time to within an uncertainty of 1%.
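Once the expected computation time is written as a function of the spot-check rate, estimating the optimal rate is a one-dimensional minimization; the sketch below does this with a golden-section search over an assumed unimodal model. The example model in the comment is a toy placeholder, not the expectation derived in the paper.

```python
import math

def optimal_spot_check_rate(expected_time, lo=0.0, hi=0.5, tol=1e-4):
    """Golden-section search for the spot-check rate q in [lo, hi] that
    minimises expected_time(q), assumed unimodal on the interval."""
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if expected_time(c) < expected_time(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Toy model only: spot-check overhead grows with q, redone work shrinks with q.
# q_opt = optimal_spot_check_rate(lambda q: 1.0 / (1.0 - q) + 0.05 / max(q, 1e-9))
```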

19.
Traditional program slicing techniques produce incomplete slices when computing slices of BPEL programs. To address this, a static slicing technique for BPEL based on program dependence graphs is proposed. Taking the characteristics of the BPEL language into account, the technique builds a BPEL program dependence graph and computes BPEL program slices from it. A case study shows that the technique obtains more complete program slices, which can help software engineers test, debug, and maintain BPEL programs.
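The slice computation itself reduces to reverse reachability over the dependence graph; a minimal sketch of that step is shown below, with the BPEL-specific graph construction left out and the graph shape assumed to be a plain adjacency dict.

```python
from collections import deque

def backward_slice(pdg, criterion):
    """Static backward slice over a program dependence graph.
    pdg: {node: iterable of nodes it depends on}, with data and control
    dependences merged; criterion: the statement of interest.
    Returns the set of nodes that may affect the criterion."""
    seen, queue = {criterion}, deque([criterion])
    while queue:
        node = queue.popleft()
        for dep in pdg.get(node, ()):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```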

20.
Manufacturing scheduling is an optimization process that allocates limited manufacturing resources over time among parallel and sequential manufacturing activities. This allocation must obey a set of rules or constraints that reflect the temporal relationships between manufacturing activities and the capacity limitations of a set of shared resources. The allocation also affects a schedule's optimality with respect to criteria such as cost, lateness, or throughput. The globalization of manufacturing makes such optimization increasingly important. To survive in this competitive market, manufacturing enterprises must increase their productivity and profitability through greater shop floor agility. Agent-based manufacturing scheduling systems are a promising way to provide this optimization.
