Similar Documents
20 similar documents found (search time: 46 ms)
1.
Reconfigurable computing systems can be reconfigured at runtime and support partial reconfiguration, which enables tasks to execute in a truly multitasking manner. To manage such systems at runtime, a reconfigurable operating system is needed. The main part of this operating system is the resource-management unit, which performs on-line scheduling and placement of hardware tasks at runtime. Reconfiguration overhead is an important obstacle that limits the performance of on-line scheduling algorithms in reconfigurable computing systems and increases the overall execution time. Configuration reusing (task reusing) can decrease reconfiguration overhead considerably, particularly in periodic applications or applications in which the probability of task recurrence is high. In this paper, we present a technique called reusing-based scheduling (RBS) for on-line scheduling and placement, in which configuration reusing is treated as a central feature in order to reduce reconfiguration overhead and decrease the total execution time of the tasks. Several experiments have been conducted on the proposed algorithm, and the results show considerable improvement in the overall execution time of the tasks.
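As a toy illustration of why configuration reuse pays off, the following hypothetical greedy placer prefers a slot that already holds the required configuration, skipping the reconfiguration delay. This is a sketch only, not the RBS algorithm itself: the slot model and `RECONFIG_DELAY` are invented for the example.

```python
RECONFIG_DELAY = 5  # cycles assumed to load a bitstream into a slot (illustrative)

def schedule(tasks, num_slots):
    """Greedily place (task_type, exec_time) tasks on reconfigurable slots,
    reusing a slot whose resident configuration matches the task type so
    that the reconfiguration delay can be skipped."""
    slots = [{"config": None, "free_at": 0} for _ in range(num_slots)]
    finish = 0
    for task_type, exec_time in tasks:
        # Prefer a slot that already holds this configuration (reuse).
        reusable = [s for s in slots if s["config"] == task_type]
        pool = reusable if reusable else slots
        slot = min(pool, key=lambda s: s["free_at"])
        start = slot["free_at"]
        if slot["config"] != task_type:
            start += RECONFIG_DELAY      # pay the reconfiguration overhead
            slot["config"] = task_type
        slot["free_at"] = start + exec_time
        finish = max(finish, slot["free_at"])
    return finish
```

With one slot, two tasks of the same type pay the delay once (5 + 10 + 10 = 25 cycles), while two different types pay it twice (30 cycles) — the gap the abstract's recurrence argument exploits.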

2.
Semi-online scheduling with machine cost
For most scheduling problems the set of machines is fixed initially and remains unchanged for the duration of the problem. Recently, Imreh and Noga proposed adding the concept of machine cost to scheduling problems and considered the so-called List Model problem. An online algorithm with a competitive ratio of 1.618 was given, while the lower bound is 4/3. In this paper, two different semi-online versions of this problem are studied. In the first case, it is assumed that the processing time of the largest job is known a priori; a semi-online algorithm is presented with a competitive ratio of at most 1.5309, while the lower bound is 4/3. In the second case, it is assumed that the total processing time of all jobs is known in advance; a semi-online algorithm is presented with a competitive ratio of at most 1.414, while the lower bound is 1.161. It is shown that the additional partial information about the jobs makes it possible to construct schedules with a smaller competitive ratio than that of online algorithms.
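For intuition on the second semi-online setting, here is a hedged sketch in the spirit of the List Model: with the total processing time known in advance, buy a number of machines on the order of the square root of the total (each machine costing 1, as in the List Model normalization) and list-schedule greedily. The √T choice and the function name are illustrative assumptions, not the paper's 1.414-competitive algorithm.

```python
import math

def semi_online_total(jobs, total):
    """Illustrative semi-online heuristic: with the total processing time
    known a priori, purchase about sqrt(total) machines up front (cost 1
    each) and assign each arriving job to the least-loaded machine.
    Objective = makespan + machine cost, as in the List Model."""
    m = max(1, round(math.sqrt(total)))
    loads = [0.0] * m
    for p in jobs:
        i = min(range(m), key=loads.__getitem__)  # least-loaded machine
        loads[i] += p
    return max(loads) + m
```

For four unit-balanced jobs of size 4 (total 16), the sketch buys 4 machines and achieves objective 4 + 4 = 8.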

3.
We design a task mapper, TPCM, for assigning tasks to virtual machines, and an application-aware virtual machine scheduler, TPCS, oriented toward parallel computing, to achieve high performance in virtual computing systems. To solve the problem of mapping tasks to virtual machines, a virtual machine mapping algorithm (VMMA) in TPCM is presented to achieve load balance in a cluster. Based on such mapping results, TPCS is constructed from three components: a middleware supporting application-driven scheduling, a device driver in the guest OS kernel, and a virtual machine scheduling algorithm. These components are implemented in the user space, the guest OS, and the CPU virtualization subsystem of the Xen hypervisor, respectively. In TPCS, the progress statuses of tasks are transmitted from the user space to the underlying kernel, enabling the virtual machine scheduling policy to schedule based on the progress of tasks. This policy aims to exchange the completion time of tasks for resource utilization. Experimental results show that TPCM can mine the parallelism among tasks to implement the mapping from tasks to virtual machines based on the relations among subtasks. The TPCS scheduler can complete the tasks in a shorter time than Credit and other schedulers, because it uses task progress to ensure that the tasks in virtual machines complete simultaneously, thereby reducing the time spent in pending, synchronization, communication, and switching. Therefore, parallel tasks can collaborate with each other to achieve higher resource utilization and lower overheads. We conclude that the TPCS scheduler overcomes the shortcomings of present algorithms in perceiving the progress of tasks, making it better than schedulers currently used in parallel computing.

4.
No-wait flow shops with makespan minimization are classified as NP-hard. In this paper, the optimization objective is equivalently transformed to total idle-time minimization. The independence relationship between tasks is analyzed, and objective-increment properties are established for the fundamental operators of the heuristics. The quality of the new schedules generated during a heuristic is judged only by objective increments, not by the traditional method, which computes and compares the objective of a whole schedule. Based on objective increments, the time complexity of the heuristic can be decreased by one order. A seed phase is presented to generate an initial solution according to the transformed objective. Construction and improvement phases are introduced by experimental analysis. The FCH (fast composite heuristic) is proposed and compared with the most effective algorithms currently available for the considered problem. Experimental results show that the effectiveness of the FCH is similar to that of the best methods while requiring far less computation time. The FCH is thus also efficient for real-time scheduling and rescheduling in no-wait flow shops.
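The pairwise-delay view that makes such objective-increment evaluation possible can be sketched as follows. This is the standard no-wait flow-shop makespan formulation (the makespan of a permutation decomposes into a sum of pairwise job delays, so swapping neighbours changes only a few terms), not the FCH itself; function names are ours.

```python
def delay(pi, pj):
    """Minimum start-time offset of job j after job i such that j never
    waits between machines (the no-wait constraint); pi and pj are the
    per-machine processing-time vectors of the two jobs."""
    m = len(pi)
    return max(sum(pi[:k]) - sum(pj[:k - 1]) for k in range(1, m + 1))

def makespan(perm, p):
    """No-wait flow-shop makespan of a permutation: the sum of consecutive
    pairwise delays plus the total processing time of the last job.
    Because the objective is a sum of local delay terms, a neighbourhood
    move can be evaluated from objective increments alone."""
    c = 0
    for a, b in zip(perm, perm[1:]):
        c += delay(p[a], p[b])
    return c + sum(p[perm[-1]])
```

For two jobs on two machines with processing times [[1, 2], [2, 1]], the two orders give makespans 4 and 5, matching a hand-drawn Gantt chart.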

5.
Parameter setting for evolutionary algorithms is still an important issue in evolutionary computation. There are two main approaches to parameter setting: parameter tuning and parameter control. In this paper, we introduce self-adaptive parameter control of a genetic algorithm based on Bayesian network learning and simulation. The nodes of this Bayesian network are genetic algorithm parameters to be controlled. Its structure captures probabilistic conditional (in)dependence relationships between the parameters. They are learned from the best individuals, i.e., the best configurations of the genetic algorithm. Individuals are evaluated by running the genetic algorithm for the respective parameter configuration. Since all these runs are time-consuming tasks, each genetic algorithm uses a small-sized population and is stopped before convergence. In this way promising individuals should not be lost. Experiments with an optimal search problem for simultaneous row and column orderings yield the same optima as state-of-the-art methods but with a sharp reduction in computational time. Moreover, our approach can cope with as yet unsolved high-dimensional problems.

6.
In this paper, we present reduction algorithms based on the principle of Skowron's discernibility matrix: the ordered-attributes method. The completeness of the algorithms for the Pawlak reduct and their uniqueness for a given order of the attributes are proved. Since a discernibility matrix requires memory of size |U|^2, where U is a universe of objects, it would be impossible to apply these algorithms directly to a massive object set. To solve this problem, a so-called quasi-discernibility matrix and two reduction algorithms are proposed. Although the proposed algorithms are incomplete for the Pawlak reduct, their optimal paradigms ensure completeness as long as some conditions are satisfied. Finally, we consider the problem of reduction over distributed object sets.
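The discernibility-matrix principle the algorithms build on can be shown in a small sketch: for each pair of objects with different decisions, record which condition attributes tell them apart; an attribute subset discerns all pairs iff it hits every non-empty entry. This is the standard rough-set formulation, with function names of our choosing.

```python
from itertools import combinations

def discernibility(table, decision):
    """Skowron's discernibility matrix (as a flat list of entries): for
    each pair of objects with different decision values, the set of
    condition attributes on which the pair differs."""
    n, attrs = len(table), range(len(table[0]))
    return [{a for a in attrs if table[i][a] != table[j][a]}
            for i, j in combinations(range(n), 2)
            if decision[i] != decision[j]]

def discerns_all(entries, subset):
    """True iff the attribute subset intersects every non-empty entry —
    the classical hitting-set characterization of (supersets of) reducts."""
    return all(not e or e & subset for e in entries)
```

For the toy table [[0,0],[0,1],[1,1]] with decisions [0,1,1], attribute 1 alone discerns every discernible pair, while attribute 0 alone does not.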

7.
This paper compares the quality and execution times of several algorithms for scheduling service-based workflow applications with changeable service availability and parameters. A workflow is defined as an acyclic directed graph with nodes corresponding to tasks and edges to dependencies between tasks. For each task, one out of several available services needs to be chosen and scheduled to minimize the workflow execution time and keep the cost of service within the budget. During the execution of a workflow, some services may become unavailable, new ones may appear, and costs and execution times may change with a certain probability. Rescheduling is needed to obtain a better schedule. A solution is proposed showing how integer linear programming can be used to solve this problem, yielding optimal solutions for smaller problems and suboptimal solutions for larger ones. It is compared side-by-side with GAIN, divide-and-conquer, and genetic algorithms for various probabilities of service unavailability or change in service parameters. The algorithms are implemented and subsequently tested in a real BeesyCluster environment.

8.
The query space of a similarity query is usually narrowed down by pruning inactive query subspaces, which contain no query results, and keeping active query subspaces, which may contain objects corresponding to the request. However, some active query subspaces may contain no query results at all; these are called false active query subspaces. It is obvious that the performance of query processing degrades in the presence of false active query subspaces. Our experiments show that this problem becomes serious when the data are high-dimensional, and that the number of accesses to false active subspaces increases with the dimensionality. To solve this problem, this paper proposes a space-mapping approach to reducing such unnecessary accesses. A given query space can be refined by filtering within its mapped space. To do so, a mapping strategy called maxgap is proposed to improve the efficiency of the refinement processing. Based on the mapping strategy, an index structure called the MS-tree and query-processing algorithms are presented in this paper. Finally, the performance of the MS-tree is compared with that of other competitors in terms of range queries on a real data set.

9.
Community structure is one of the most important properties in social networks, and community detection has received an enormous amount of attention in recent years. In dynamic networks, the communities may evolve over time, which poses more challenging tasks than in static ones. Community detection in dynamic networks is a problem that can naturally be formulated with two contradictory objectives and consequently solved by multiobjective optimization algorithms. In this paper, a novel multiobjective immune algorithm is proposed to solve the community detection problem in dynamic networks. It employs the framework of the nondominated neighbor immune algorithm to simultaneously optimize modularity and normalized mutual information, which quantitatively measure the quality of the community partitions and the temporal cost, respectively. Problem-specific knowledge is incorporated in the genetic operators and local search to improve the effectiveness and efficiency of our method. Experimental studies based on four synthetic datasets and two real-world social networks demonstrate that our algorithm not only finds community structure and captures community evolution more accurately, but also runs more stably than the state-of-the-art algorithms.
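One of the two objectives, modularity, can be computed with the standard formulation (Newman's Q: the fraction of intra-community edges minus the fraction expected under degree-preserving random rewiring). The graph below is a toy example, not one of the paper's networks.

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman's modularity Q of a partition, given an undirected edge list
    and a mapping node -> community label."""
    m = len(edges)
    intra = sum(1 for u, v in edges if community[u] == community[v]) / m
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Sum of degrees per community gives the expected intra-community fraction.
    tot = defaultdict(int)
    for node, d in deg.items():
        tot[community[node]] += d
    expected = sum((t / (2 * m)) ** 2 for t in tot.values())
    return intra - expected

# Two triangles joined by a single bridge edge: a clear two-community graph.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
```

For this graph, Q = 6/7 − 1/2 ≈ 0.357, which is well above the value of a random partition.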

10.
In machine learning and statistics, classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known. SVMs (support vector machines) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. The basic SVM takes a set of input data and predicts, for each given input, which of two possible classes forms the output, making it a non-probabilistic binary linear classifier. In pattern recognition problems, the selection of the features used to characterize the object to be classified is important. Kernel methods are algorithms that, by replacing the inner product with an appropriate positive definite function, implicitly perform a nonlinear mapping φ of the input data in R^n into a high-dimensional feature space H. Cover's theorem states that if the transformation is nonlinear and the dimensionality of the feature space is high enough, then the input space may be transformed into a new feature space where the patterns are linearly separable with high probability.
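The kernel idea and Cover's theorem can be illustrated with a minimal sketch: a Gaussian kernel as a positive definite function, and a toy explicit feature map under which XOR-labelled points (not linearly separable in the plane) become separable. This is a generic illustration, not tied to any particular SVM implementation.

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2): an inner
    product in an implicit high-dimensional feature space."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def phi(x):
    """Toy explicit feature map illustrating Cover's theorem: adding the
    product feature x1*x2 makes the XOR pattern linearly separable."""
    return (x[0], x[1], x[0] * x[1])

# XOR labels on the corners of a square: inseparable by any line in R^2...
xor = {(1, 1): -1, (-1, -1): -1, (1, -1): 1, (-1, 1): 1}
# ...but in feature space the hyperplane with normal w = (0, 0, -1) separates them.
preds = {x: (1 if -phi(x)[2] > 0 else -1) for x in xor}
```

Every point is classified correctly by the single feature-space hyperplane, while no linear classifier in the original space can do so.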

11.
This paper describes the concept of fuzzy regression analysis based on genetic algorithms. It is shown that the performance of fuzzy regression models may be improved, and the fuzzy modeling technique simplified, by incorporating genetic algorithms into the regression analysis procedure. The effectiveness of the proposed approach is illustrated by simulating a fuzzy linear regression model obtained by other authors and comparing the results. The paper further demonstrates applications of the approach to manufacturing and business problems.

12.
This work improves the genetic algorithm for labelling line drawings proposed by Richard Myers. An adaptive parameter-adjustment scheme is adopted: within a generation, the crossover and mutation rates of individuals whose fitness is above the average vary dynamically, while those of below-average individuals are fixed. Constraints are added when the initial population is created, aiming to reduce the uncertainty of the space it covers and the relatively unreasonable distribution of its individuals. The fitness function is also revised so that the fitness-based selection operator correctly guides the search of the solution space. The genetic algorithm is used to label six different line drawings, with the parameters a and c in the crossover- and mutation-rate formulas as variables. By analysing the trend of the labelling success-rate curves, the influence of the operator parameter settings on the algorithm's performance is examined. The results show that c in [0, 0.05] and a in [0.8, 1.0] constitute the optimal parameter setting for the line-labelling genetic algorithm.
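A minimal sketch of the adaptive-rate rule this kind of scheme uses: individuals at or above the average fitness get crossover and mutation rates that shrink linearly as their fitness approaches the best (protecting good solutions), while below-average individuals keep fixed rates. The linear scaling form, the fixed-rate constants, and the roles of a and c here are illustrative assumptions, not the exact formulas of the cited work.

```python
def adaptive_rates(fitness, mean_fit, max_fit, a=0.9, c=0.03):
    """Return (crossover_rate, mutation_rate) for one individual.
    Below-average individuals use fixed rates; above-average individuals
    get rates scaled by their distance from the population best."""
    fixed = (0.8, 0.05)  # illustrative fixed rates for weak individuals
    if fitness < mean_fit or max_fit == mean_fit:
        return fixed
    scale = (max_fit - fitness) / (max_fit - mean_fit)
    return (a * scale, c * scale)
```

The best individual gets rates (0, 0) and therefore survives unchanged, while weaker individuals keep enough disruption to explore.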

13.
As the state grid opens the market for grid-connected distributed generation, integrating distributed generation into existing distribution systems is the future trend of power systems. Taking the minimization of distribution-network loss and node-voltage deviation as the optimization objectives, and considering branch-current limits, the capacity of individual distributed-generation units, and the total connected capacity as constraints, a model for the optimal placement of large-scale grid-connected distributed generation is constructed. An improved genetic algorithm based on uniform design is proposed for the optimization; it avoids blind trial-and-error in the genetic operators, balances the diversity and convergence speed of the multiobjective Pareto set, and effectively improves the optimization accuracy. Comparative case studies show that a reasonable optimal configuration of distributed generation connected to the distribution network can effectively reduce system losses and improve the voltage stability of the network.

14.
This paper briefly introduces the principles and classification of intrusion detection techniques, surveys the application of genetic algorithms in intrusion detection, and reviews several approaches that combine genetic algorithms with other techniques for intrusion detection.

15.
Genetic algorithms are applied to the optimization of the final-assembly workflow in aircraft heavy maintenance. Taking the shortest completion time as the objective, an optimization model of the final-assembly workflow is built; an "operation-reset" method is combined with a penalty-function method to satisfy the operation and personnel constraints of the assembly work; and an improved elitist-preservation strategy further refines the basic genetic algorithm. Simulation results show that the improved genetic algorithm has a markedly stronger ability to find the optimum than the basic one, verifying the applicability of GA to this scheduled-inspection off-position workflow optimization problem.

16.
The flexible job-shop scheduling problem is a classical NP-hard problem with direct relevance to real production. In recent years, with the development of genetic algorithms, ideas and methods for solving the flexible job-shop scheduling problem with genetic algorithms have emerged in abundance. To promote further progress in this direction, this paper reviews the theory of the flexible job-shop scheduling problem, classifies the existing improved methods, analyses open problems, and discusses future research directions.

17.
The genetic algorithm is a stochastic, parallel search algorithm that simulates biological selection and evolution. Using a mathematical model of image restoration, this paper restores degraded images with a genetic algorithm whose initial population is generated from an embedded chaotic sequence, which effectively overcomes the limited accuracy of restored images in inverse-filtering restoration and yields good restoration results. To address the local convergence and slow convergence speed of genetic algorithms, a chaos-based optimization mechanism is also introduced into the algorithm.
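The chaotic initialization mentioned above can be sketched with a logistic map, a common choice in chaos-embedded genetic algorithms: the sequence x_{k+1} = μ·x_k·(1 − x_k) with μ = 4 wanders ergodically over (0, 1), spreading the initial population over the search space. The seed x0 and the function name are illustrative.

```python
def chaotic_population(size, dim, x0=0.345, mu=4.0):
    """Generate an initial GA population of `size` chromosomes of `dim`
    real-valued genes in (0, 1) from a logistic-map chaotic sequence."""
    pop, x = [], x0
    for _ in range(size):
        chrom = []
        for _ in range(dim):
            x = mu * x * (1 - x)   # logistic map iteration
            chrom.append(x)
        pop.append(chrom)
    return pop
```

Genes drawn this way are deterministic yet non-repeating, so the population covers the unit hypercube more evenly than a poor pseudo-random seed might.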

18.
As a general-purpose method, genetic algorithms have been applied in many fields but relatively little in testability engineering. Based on the principles and general models of testability allocation, this paper uses the idea of concurrent engineering to present the complete process of testability optimization and allocation for complex electronic equipment via a genetic algorithm. The GA-based allocation method closely couples testability with reliability and maintainability, solves the problems of testability allocation in the concurrent design of electronic equipment, and remedies the shortcomings of traditional methods. A comparison with traditional testability-allocation methods demonstrates the high practical value of genetic algorithms in testability allocation: the method can greatly improve the success rate of allocation and reduce the diagnosis and test cost of equipment.

19.
In this paper, we show that genetic algorithms (GA) can be used to control the output of procedural modeling algorithms. We propose an efficient way to encode the choices that have to be made during a procedural generation as a hierarchical genome representation. In combination with mutation and reproduction operations specifically designed for controlled procedural modeling, our GA can evolve a population of individual models close to any high-level goal. Possible scenarios include a volume that should be filled by a procedurally grown tree or a painted silhouette that should be followed by the skyline of a procedurally generated city. These goals are easy for an artist to set up, compared with the tens of thousands of variables that describe the generated model and are chosen by the GA. Previous approaches for controlled procedural modeling use either Reversible Jump Markov Chain Monte Carlo (RJMCMC) or Stochastically-Ordered Sequential Monte Carlo (SOSMC) as the workhorse for the optimization. While RJMCMC converges slowly, requiring multiple hours for the optimization of larger models, it produces high-quality models. SOSMC shows faster convergence under tight time constraints for many models, but can get stuck due to choices made in the early stages of optimization. Our GA shows faster convergence than SOSMC and generates better models than RJMCMC in the long run.

20.
