Similar Documents
20 similar documents found.
1.
In this paper, we present an evolutionary algorithm (EVA) for solving the resource-constrained project scheduling problem with minimum and maximum time lags (RCPSP/max). EVA works on a population of distance-order-preserving activity lists representing feasible or infeasible schedules. The algorithm uses a conglomerate-based crossover operator, whose objective is to exploit problem knowledge to identify and combine the good parts of a solution that have actually contributed to its quality. In a recent paper, Valls et al. (European J. Oper. Res. 165, 375–386, 2005) showed that incorporating a technique called double justification (DJ) into RCPSP heuristic algorithms can substantially improve the results obtained. EVA also applies two double justification operators, DJmax and DJU, adapted to the specific characteristics of RCPSP/max, to improve all solutions generated in the evolutionary process. Computational results on benchmark sets show the merit of the proposed solution method.
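As a concrete illustration of the justification idea, here is a minimal sketch of a double-justification pass, assuming precedence constraints only; resource feasibility is abstracted into a caller-supplied predicate fits(activity, t), and all names are illustrative rather than taken from the paper:

```python
# Minimal double-justification (DJ) sketch. start/dur map activities to
# start times and durations; preds/succs give the precedence graph.
# fits(a, t) must report whether activity a is resource-feasible at time t.

def justify_right(start, dur, succs, horizon, fits):
    # Process activities by decreasing finish time and push each one as
    # late as its successors (or the planning horizon) allow.
    for a in sorted(start, key=lambda x: start[x] + dur[x], reverse=True):
        latest = min((start[s] for s in succs[a]), default=horizon) - dur[a]
        for t in range(latest, start[a] - 1, -1):
            if fits(a, t):
                start[a] = t
                break

def justify_left(start, dur, preds, fits):
    # Mirror pass: pull each activity as early as its predecessors allow.
    for a in sorted(start, key=lambda x: start[x]):
        earliest = max((start[p] + dur[p] for p in preds[a]), default=0)
        for t in range(earliest, start[a] + 1):
            if fits(a, t):
                start[a] = t
                break

def double_justify(start, dur, preds, succs, horizon, fits):
    # DJ = one right-justification pass followed by one left-justification
    # pass; the second pass often shortens the makespan.
    justify_right(start, dur, succs, horizon, fits)
    justify_left(start, dur, preds, fits)
    return start
```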

2.
《Information and Computation》2007,205(9):1470-1490
The SL synchronous programming model is a relaxation of the Esterel synchronous model in which the reaction to the absence of a signal within an instant can only happen at the next instant. In previous work, we revisited the SL synchronous programming model: we discussed an alternative design of the model, introduced a CPS translation to a tail-recursive form, and proposed a notion of bisimulation equivalence. In the present work, we extend the tail-recursive model with first-order data types, obtaining a non-deterministic synchronous model whose complexity is comparable to that of the π-calculus. We show that our approach to bisimulation equivalence can cope with this extension and, in particular, that labelled bisimulation can be characterised as a contextual bisimulation.

3.
Data forwarding is an essential function in wireless sensor networks (WSNs). Geographic forwarding is well known to be an efficient scheme for WSNs, as it requires maintaining only local topology information to forward data to a central gathering point, called the sink (or base station), for further analysis and processing. In this paper, we propose an energy-efficient data forwarding protocol for WSNs, called Weighted Localized Delaunay Triangulation-based data forwarding (WLDT), with the goal of extending the network lifetime. Specifically, WLDT selects as forwarders the sensors with high remaining energy whose locations lie near the shortest path between source sensors and a single sink, thus helping the sensors minimize their average energy consumption. More precisely, WLDT defines checkpoints to build energy-efficient forwarding paths and uses a 1-lookahead scheme to guarantee data delivery to the sink. We show that WLDT, which favors forwarding over short Delaunay edges, achieves an energy gain on the order of 55% under the free-space model and close to 100% under the multi-path model compared to BVGF and GPSR, which forward data over long distances and which we slightly updated to account for energy when selecting next forwarders. We also prove that the checkpoints yield an energy gain on the order of 30% in comparison with a similar protocol, called WLDT-w/c (WLDT without checkpoints), which forwards data over short distances but does not use checkpoints.
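The following sketch illustrates the kind of weighted next-hop choice WLDT describes: prefer neighbours with high residual energy that lie close to the source-sink segment. The scoring function and the alpha weight are assumptions for illustration, not the paper's exact cost model:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dist_to_segment(p, a, b):
    # Distance from point p to segment a-b (the source-sink shortest path).
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return dist(p, (ax + t * dx, ay + t * dy))

def next_hop(current, neighbours, energy, pos, src, sink, alpha=0.5):
    # Score = alpha * residual energy
    #       - (1 - alpha) * distance to the src-sink segment  (illustrative)
    best, best_score = None, -math.inf
    for n in neighbours[current]:
        if dist(pos[n], pos[sink]) >= dist(pos[current], pos[sink]):
            continue  # only consider neighbours making progress to the sink
        score = alpha * energy[n] - (1 - alpha) * dist_to_segment(pos[n], pos[src], pos[sink])
        if score > best_score:
            best, best_score = n, score
    return best  # None -> fall back to a recovery scheme (e.g., 1-lookahead)
```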

4.
Implicative fuzzy associative memories (IFAMs) are single-layer feedforward fuzzy neural networks whose synaptic weights and threshold values are given by implicative fuzzy learning. Despite an excellent tolerance to either positive or negative noise, IFAMs are not suited for patterns corrupted by mixed noise. This paper presents a solution to this problem. Precisely, we first introduce the class of finite IFAMs by replacing the unit interval with a finite chain L. Then, we generalize both finite IFAMs and their dual versions by means of a permutation on L. The resulting models are referred to as permutation-based finite IFAMs (π-IFAMs). We show that a π-IFAM can be viewed as a finite IFAM defined on an alternative lattice structure on L. Thus, π-IFAMs also exhibit optimal absolute storage capacity and one-step convergence in the autoassociative case. Furthermore, computational experiments revealed that a certain π-IFAM, called the Łukasiewicz πμ-IFAM, outperformed several other associative memory models for the reconstruction of gray-scale patterns corrupted by salt-and-pepper noise.
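For context, a sketch of the classical unit-interval Łukasiewicz IFAM (learning by implication, recall by max-T composition), which the paper's finite-chain π-IFAMs generalize; array shapes and names are illustrative:

```python
import numpy as np

def imp_luk(a, b):
    # Lukasiewicz implication I(a, b) = min(1, 1 - a + b)
    return np.minimum(1.0, 1.0 - a + b)

def t_luk(a, b):
    # Lukasiewicz t-norm T(a, b) = max(0, a + b - 1)
    return np.maximum(0.0, a + b - 1.0)

def ifam_learn(X, Y):
    # W[i, j] = min over stored pattern pairs k of I(X[k, j], Y[k, i]).
    K, n = X.shape
    m = Y.shape[1]
    W = np.ones((m, n))
    for k in range(K):
        W = np.minimum(W, imp_luk(X[k][None, :], Y[k][:, None]))
    return W

def ifam_recall(W, x):
    # y[i] = max over j of T(W[i, j], x[j])  (max-T composition).
    return t_luk(W, x[None, :]).max(axis=1)
```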

5.
In [C.S. Iliopoulos, M.S. Rahman, Faster index for property matching, Information Processing Letters 105 (2008) 218–223], Iliopoulos and Rahman proposed a data structure called IDS-PIP for solving the property indexing pattern matching problem. IDS-PIP can be constructed in O(n) time, where n is the length of the text. Then, based on IDS-PIP, each query takes O(m log |Σ| + K) time, where m is the length of the pattern, Σ is the alphabet, and K is the output size. They assume that all intervals in the property π are disjoint. If the intervals in π are not disjoint, they create an equivalent set of disjoint intervals π′ to replace π. However, property π′ is not equivalent to property π at all. In this erratum, we propose a way of finding the correct property π′ and modify IDS-PIP slightly.

6.
According to recent research, the current Internet wastes energy due to an unoptimized network design that does not consider the energy consumption of network elements such as routers and switches. Looking toward energy-saving networks, a generalized problem called the energy consumption minimized network (EMN) has been proposed. However, due to the NP-completeness of this problem, obtaining a solution requires a considerable amount of time, making it practically intractable for large-scale networks. In this paper, we re-formulate the NP-complete EMN problem into a simpler one using a newly defined concept called 'traffic centrality'. We then propose a new ant colony-based self-adaptive energy-saving routing scheme, referred to as A-ESR, which exploits the ant colony optimization (ACO) method to make the Internet more energy efficient. The proposed A-ESR algorithm heuristically solves the re-formulated problem without any supervised control by allowing incoming flows to be autonomously aggregated on specific heavily loaded links while switching off the other, lightly loaded links. Additionally, the A-ESR algorithm adjusts the energy consumption by tuning the aggregation parameter β, which can dramatically reduce energy consumption during nighttime hours (at the expense of a tolerable degradation in network delay). The algorithm also provides a high degree of self-organization thanks to the swarm intelligence of the artificial ants. Simulation results on real IP networks show that the proposed A-ESR algorithm outperforms previous algorithms in terms of energy efficiency, and that this efficiency can be adjusted by tuning β.
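A toy sketch of the ant-colony aggregation mechanism: a link is chosen with probability proportional to its pheromone raised to β, and pheromone grows with the traffic a link already carries, so a larger β aggregates flows more aggressively and frees more links to be switched off. These are generic ACO rules, not the exact A-ESR update:

```python
import random

def choose_next(links, pheromone, beta):
    # Roulette-wheel selection weighted by pheromone ** beta.
    weights = [pheromone[l] ** beta for l in links]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for l, w in zip(links, weights):
        acc += w
        if r <= acc:
            return l
    return links[-1]

def update_pheromone(pheromone, chosen_link, traffic, rho=0.1):
    # Evaporate everywhere, then reinforce the chosen link by its load, so
    # heavily loaded links attract further flows (aggregation).
    for l in pheromone:
        pheromone[l] *= (1.0 - rho)
    pheromone[chosen_link] += traffic
```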

7.
The resource-constrained project scheduling problem (RCPSP) studied here is a well-known classical problem involving resource, precedence, and temporal constraints, and it has many applications. However, the RCPSP is known to be an NP-hard combinatorial problem; restated, it is hard to solve in reasonable time. Therefore, many metaheuristic schemes for finding near-optimal RCPSP solutions have been proposed. Particle swarm optimization (PSO) is one such metaheuristic and has been verified to be an efficient nature-inspired algorithm for many optimization problems. To enhance the efficiency of PSO in solving the RCPSP, an effective scheme is suggested: the justification technique is combined with PSO in the proposed justification particle swarm optimization (JPSO), together with other designed mechanisms. The justification technique adjusts the start time of each activity of the yielded schedule to further shorten the makespan. Moreover, schedules are generated by both a forward-scheduling particle swarm and a backward-scheduling particle swarm. Additionally, a mapping scheme and a modified communication mechanism among particles with a designed gbest ratio (GR) are proposed to further improve the efficiency of JPSO. Simulation results demonstrate that the proposed JPSO provides an effective and efficient approach for solving the RCPSP.
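For reference, the PSO core such schemes build on: a standard velocity/position update over priority vectors, decoded into an activity list by sorting (the usual RCPSP mapping). JPSO's justification, forward/backward swarms, and gbest-ratio mechanisms sit on top of a loop like this; the coefficients below are conventional defaults, not the paper's:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # Standard PSO update: inertia + cognitive pull + social pull.
    for i in range(len(x)):
        r1, r2 = random.random(), random.random()
        v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest[i] - x[i])
        x[i] += v[i]
    return x, v

def to_activity_list(x):
    # Decode a particle: activities sorted by descending priority value;
    # a schedule generation scheme then turns this list into start times.
    return sorted(range(len(x)), key=lambda i: -x[i])
```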

8.
《Parallel Computing》2014,40(10):661-680
Data-flow is a natural approach to parallelism. However, describing dependencies and control between fine-grained data-flow tasks can be complex and introduce unwanted overheads. TALM (an Architecture and Language for Multi-threading) introduces a user-defined coarse-grained parallel data-flow model, where programmers identify code blocks, called super-instructions, to be run in parallel and connect them in a data-flow graph. TALM has been implemented as a hybrid von Neumann/data-flow execution system: the Trebuchet. We have observed that TALM's usefulness largely depends on how programmers specify and connect super-instructions. Thus, we present Couillard, a full compiler that creates, from an annotated C program, a data-flow graph and the C code corresponding to each super-instruction. We show that our toolchain lets one benefit from data-flow execution and explore sophisticated parallel programming techniques with little effort. To evaluate our system we executed a set of real applications on a large multi-core machine. Comparison with popular parallel programming methods shows competitive speedups while providing an easier parallel programming approach. More specifically, for an application that follows the wavefront method, running with big inputs, Trebuchet achieved up to 4.7% speedup over Intel® TBB's flow-graph approach and up to 44% over OpenMP.
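A minimal executor conveying the coarse-grained data-flow idea: super-instructions are plain functions that fire once all of their inputs are ready. This mirrors the TALM model in spirit only; it is not Couillard output and assumes an acyclic graph:

```python
from concurrent.futures import ThreadPoolExecutor

def run_dataflow(tasks, deps, pool_size=4):
    # tasks: name -> callable taking the results of its dependencies
    # deps:  name -> list of dependency names (acyclic)
    results, remaining = {}, dict(deps)
    with ThreadPoolExecutor(pool_size) as pool:
        while remaining:
            ready = [t for t, d in remaining.items()
                     if all(x in results for x in d)]
            futures = {t: pool.submit(tasks[t], *(results[x] for x in remaining[t]))
                       for t in ready}
            for t, f in futures.items():
                results[t] = f.result()
                del remaining[t]
    return results

# Example graph: d depends on b and c, which both depend on a.
out = run_dataflow(
    tasks={"a": lambda: 1, "b": lambda a: a + 1, "c": lambda a: a * 10,
           "d": lambda b, c: b + c},
    deps={"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]},
)
print(out["d"])  # 12
```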

9.
Reducing the energy consumption of low-cost, performance-constrained microcontroller units (MCUs) cannot be achieved with complex energy-minimization techniques (e.g., fine-grained DVFS or thermal management) because of their high overheads. To this end, we propose an energy-efficient multi-core architecture combining two homogeneous cores with different design margins. One is a performance-guaranteed core, called the Heavy Core (HC), fabricated with a worst-case design margin. The other is a low-power core, called the Light Core (LC), which has only a typical-corner design margin. Post-silicon measurements show that the Light core has a 30% lower power density than the Heavy core, with only a small loss in reliability. Furthermore, we derive the energy-optimal workload distribution and propose a runtime environment for Heavy/Light MCU platforms. The runtime decreases overall energy by exploiting available parallelism to minimize the platform's active time. Results show that, depending on the core-to-peripherals power ratio and the Light core's operating frequency, the expected energy savings range from 10% to 20%.
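A back-of-envelope sketch of one plausible workload split: run both cores in parallel and balance the work so they finish together, minimizing the platform's active time. The cost model below (cycles, Hz, watts, constant peripheral power) is an assumption for illustration, not the paper's derivation:

```python
def balanced_split(work, f_hc, f_lc):
    # Share of work for the Heavy core so that work_hc/f_hc == work_lc/f_lc,
    # i.e. both cores finish at the same time.
    share = f_hc / (f_hc + f_lc)
    return share * work, (1 - share) * work

def platform_energy(work, f_hc, f_lc, p_hc, p_lc, p_peripherals):
    # Active time * total platform power; peripherals draw power for the
    # whole active window, which is why minimizing active time matters.
    w_h, _ = balanced_split(work, f_hc, f_lc)
    t = w_h / f_hc
    return t * (p_hc + p_lc + p_peripherals)
```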

10.
Systems that maintain coherence at large granularity, such as shared virtual memory systems, suffer from false sharing and extra communication. Relaxed memory consistency models have been used to alleviate these problems, but at a cost in programming complexity. Release Consistency (RC) and Lazy Release Consistency (LRC) are accepted as offering a reasonable tradeoff between performance and programming complexity. Entry Consistency (EC) offers a more relaxed consistency model, but it requires explicit association of shared data objects with synchronization variables, and the programming burden of providing such associations can be substantial. This paper proposes a new consistency model for such systems, called Scope Consistency (ScC), which offers most of the performance advantages of the EC model without requiring explicit bindings between data and synchronization variables. Instead, ScC dynamically detects the associations implied by the programmer, using a programming interface similar to that of RC or LRC. We propose two ScC protocols: one that uses hardware support for fine-grained remote writes (automatic updates, or AU) and another that is all-software. We compare the AU-based ScC protocol with Automatic Update Release Consistency (AURC), a modified LRC protocol that also takes advantage of automatic-update support. AURC already improves performance substantially over an all-software LRC protocol. For three of the five applications we used, ScC further improves the speedups achieved by AURC by about 10%.

11.
In this paper, we explore the difference between preemption and activity splitting in the resource-constrained project scheduling problem (RCPSP) literature and identify a new class of RCPSPs that allows only non-preemptive activity splitting. Each activity can be processed in multiple modes, and both renewable and non-renewable resources are considered. Renewable resources have time-varying resource limits and vacations. The multi-mode RCPSP (MRCPSP) with non-preemptive activity splitting is shown to be a generalization of the RCPSP with calendarization. Activity ready times and due dates are considered to study their impact on project makespan. Computational experiments compare optimal makespans under three problem settings: RCPSPs without activity splitting (P1), RCPSPs with non-preemptive activity splitting (P2), and preemptive RCPSPs (P3). A precedence tree-based branch-and-bound algorithm is modified as an exact method to find optimal solutions. Resource constraints are incorporated into the general time-window rule, and priority rule-based simple heuristics are proposed to find good initial solutions that tighten the bounding rules. Results indicate that significant makespan reductions are possible when non-preemptive activity splitting or preemption is allowed. The wider the range of the time-varying renewable resource limits and the tighter those limits, the larger the resulting makespan reduction.

12.
Spontaneous capillary flow (SCF) is a powerful method for moving fluids at the microscale. In modern biotechnology, composite channels, sometimes open, are increasingly used, so the ability to predict the occurrence of an SCF is a necessity. In this work, using the Gibbs free energy, we derive a general condition for the establishment of SCF in any composite microchannel of constant cross section, i.e., a microchannel comprising different wall materials and even open parts. It is shown that SCF occurs when the Cassie angle is smaller than π/2 (θ* < π/2). For a homogeneous confined channel, this relation collapses to the well-known hydrophilic contact-angle condition θ < π/2.
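Restated in symbols, as a sketch based on the abstract's own claim (the perimeter-fraction notation, and the treatment of open segments via an effective contact angle, are assumptions here):

```latex
% Generalised Cassie angle \theta^{*} for a cross section whose perimeter
% consists of fractions f_k of material k with contact angle \theta_k
% (open segments entering with an effective contact angle is one common
% convention, assumed here):
\cos\theta^{*} \;=\; \sum_{k} f_k \cos\theta_k , \qquad \sum_{k} f_k = 1 .
% SCF condition: \theta^{*} < \pi/2, i.e. \cos\theta^{*} > 0.
% Homogeneous confined channel (single material, f_1 = 1):
% the condition collapses to \cos\theta > 0, i.e. \theta < \pi/2.
```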

13.
The multi-agent programming contest uses a cow-herding scenario where two teams of cooperative agents compete for resources against each other. We developed such a team of agents using two well-known platforms, one based on a logic-based agent-oriented programming language, called Jason, and the other based on an organisational model, called Moise. While there is significant research on both agent programming and agent organisations, this was one of the first applications of a combined approach where we can program deliberative agents and organise them using a sophisticated organisational model. In this paper, we describe and discuss our contribution to the multi-agent contest using this combination of agent and organisation programming.

14.
Over the last 30 years, several dynamic memory managers (DMMs) have been proposed, including first fit, best fit, segregated fit, and buddy systems. Since DMMs differ in performance, memory usage, and energy consumption, software engineers often face difficult choices in selecting the most suitable approach for their applications. This issue has a special impact in the field of portable consumer embedded systems, which must execute a limited set of multimedia applications (e.g., 3D games, video players, signal processing software) demanding high performance and extensive memory usage at low energy consumption. Recently, we developed a novel methodology based on genetic programming to automatically design custom DMMs, optimizing performance, memory usage, and energy consumption. Although this process is automatic and faster than state-of-the-art optimizations, it demands intensive computation, resulting in a time-consuming process. Parallel processing can therefore be very useful, both to explore more solutions in the same amount of time and to implement new algorithms. In this paper we present a novel parallel evolutionary algorithm for DMM optimization in embedded systems, based on the Discrete Event Specification (DEVS) formalism over a Service Oriented Architecture (SOA) framework. Parallelism significantly improves the performance of the sequential exploration algorithm. On the one hand, for the same number of generations, our parallel optimization framework reaches a speed-up of 86.40× compared with other state-of-the-art approaches. On the other hand, it improves the global quality (i.e., performance, memory usage, and energy consumption) of the final DMM by 36.36% with respect to two well-known general-purpose DMMs and two state-of-the-art optimization methodologies.
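The parallel-evaluation pattern at the heart of such a speed-up, sketched with a process pool; this is illustrative only, since the paper parallelises the search with DEVS over a SOA framework, and the fitness function here is a placeholder:

```python
from multiprocessing import Pool
import random

def fitness(dmm):
    # Placeholder for a costly simulation scoring performance, memory
    # usage and energy consumption of a candidate DMM configuration.
    return -sum(abs(g) for g in dmm)

def mutate(dmm):
    child = list(dmm)
    child[random.randrange(len(child))] += random.choice((-1, 1))
    return child

def evolve(pop, generations=50, workers=8):
    with Pool(workers) as pool:
        for _ in range(generations):
            scores = pool.map(fitness, pop)          # evaluated in parallel
            ranked = [d for _, d in sorted(zip(scores, pop),
                                           key=lambda sc: -sc[0])]
            elite = ranked[: len(pop) // 2]          # truncation selection
            pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return pop[0]
```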

15.

The rapid increase in the number of cores on chips has forced designers to invent new communication methods such as the Network-on-Chip (NoC) paradigm. Advances in integrated-circuit fabrication have even allowed three-dimensional NoC (3D-NoC) implementations. 3D-NoCs have more advantages than their 2D counterparts, such as lower area, higher throughput, better performance, and lower energy consumption; however, they lack design automation algorithms. An important design problem is mapping a given application onto a 3D-NoC topology. In this paper, we present an integer linear programming (ILP) formulation and a novel heuristic algorithm, called CastNet3D, for application mapping onto mesh-based 3D-NoCs with energy minimization as the objective. The algorithm tries to utilize vertical links for communicating nodes as much as possible: vertical links are shorter than horizontal ones and are therefore faster and consume less energy. We compared CastNet3D against the ILP in terms of energy consumption and execution time on several benchmarks. Our results show that CastNet3D obtains close-to-optimal results in much shorter time frames.
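A greedy sketch of energy-aware mapping onto a 3D mesh, placing the heaviest-communicating task pairs first and preferring positions whose route uses the cheaper vertical hops. The heuristic and the energy weights are illustrative assumptions, not CastNet3D itself:

```python
import itertools

def hop_energy(a, b, e_xy=1.0, e_z=0.5):
    # XY/Z Manhattan routing cost; vertical hops assumed cheaper (e_z < e_xy).
    return e_xy * (abs(a[0] - b[0]) + abs(a[1] - b[1])) + e_z * abs(a[2] - b[2])

def greedy_map(comm, mesh_dims):
    # comm: {(task_a, task_b): traffic volume}; returns task -> (x, y, z) tile.
    slots = list(itertools.product(*(range(d) for d in mesh_dims)))
    placed = {}
    for (a, b), vol in sorted(comm.items(), key=lambda kv: -kv[1]):
        for task, other in ((a, b), (b, a)):
            if task in placed:
                continue
            free = [s for s in slots if s not in placed.values()]
            anchor = placed.get(other)
            if anchor is None:
                placed[task] = free[0]
            else:  # put the task where its heaviest edge is cheapest
                placed[task] = min(free, key=lambda s: vol * hop_energy(s, anchor))
    return placed
```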

16.
Embedded systems run applications that exercise the hardware differently depending on the computation task, generating time-varying workloads. Energy can be minimized by selecting the lowest adequate central processing unit (CPU) frequency for each workload. We propose an autonomous, online approach capable of reducing energy consumption by adapting to workload variations, even in an unknown environment. In this approach, we improve the AEWMA algorithm into a new algorithm called AEWMA-MSE, adding functionality to detect workload changes and demonstrating why statistical analysis is better suited to real user cases in a mobile environment. We also propose and validate a new power model for mobile devices based on k-NN regression, which proves to have a better trade-off between execution time and precision than neural-network and linear-regression models. AEWMA-MSE and the proposed power model are integrated into a novel reinforcement learning-based energy management algorithm that selects the appropriate CPU frequency from workload predictions to minimize energy consumption. The approach is validated through simulation using real smartphone data from an ARM Cortex-A7 processor used in a commercial smartphone. Our proposal improves the Q-learning cost function and effectively reduces average energy consumption by 21%, and up to 29%, compared with existing approaches.
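A sketch of the general shape of an EWMA predictor with an MSE-style change test (the exact AEWMA-MSE rules are the paper's; the smoothing factor, window, and threshold here are illustrative). A detected change would trigger a new frequency decision:

```python
class EwmaMse:
    def __init__(self, alpha=0.2, window=20, threshold=4.0):
        self.alpha, self.window, self.threshold = alpha, window, threshold
        self.mean, self.errors = None, []

    def update(self, sample):
        # Exponentially weighted moving average of the workload signal.
        if self.mean is None:
            self.mean = sample
        err = sample - self.mean
        self.mean += self.alpha * err
        # Sliding-window MSE of recent prediction errors; a single error
        # far above the running MSE flags a workload change.
        self.errors = (self.errors + [err * err])[-self.window:]
        mse = sum(self.errors) / len(self.errors)
        changed = (len(self.errors) == self.window
                   and err * err > self.threshold * mse)
        return self.mean, changed
```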

17.
The π-calculus, and in particular its stochastic version, the stochastic π-calculus, is a common modeling formalism for concisely describing the chemical reactions occurring in biochemical systems. However, it remains largely unexplored how to transform a biochemical model expressed in the stochastic π-calculus back into a set of meaningful reactions. To this end, we present a two-step approach: first translating model states to reaction sets, and then visualizing sequences of reaction sets, obtained from state trajectories, as reaction networks. Our translation from model states to reaction sets is formally defined and shown to be correct, in the sense that it reflects the states and transitions as derived from the continuous-time Markov chain semantics of the stochastic π-calculus. Our visualization concept combines high-level measures of network complexity with interactive, table-based network visualizations. It directly reflects the structures introduced in the first step and allows modelers to explore the resulting simulation traces by providing both an overview of a network's evolution and detail inspection on demand.

18.
A model for representing image contours in a form that allows interaction with higher level processes has been proposed by Kass et al. (in Proceedings of First International Conference on Computer Vision, London, 1987, pp. 259–269). This active contour model is defined by an energy functional, and a solution is found using techniques of variational calculus. Amini et al. (in Proceedings, Second International Conference on Computer Vision, 1988, pp. 95–99) have pointed out some of the problems with this approach, including numerical instability and a tendency for points to bunch up on strong portions of an edge contour. They proposed an algorithm for the active contour model using dynamic programming. This approach is more stable and allows the inclusion of hard constraints in addition to the soft constraints inherent in the formulation of the functional; however, it is slow, having complexity O(nm³), where n is the number of points in the contour and m is the size of the neighborhood in which a point can move during a single iteration. In this paper we summarize the strengths and weaknesses of the previous approaches and present a greedy algorithm which has performance comparable to the dynamic programming and variational calculus approaches. It retains the improvements of stability, flexibility, and inclusion of hard constraints introduced by dynamic programming but is more than an order of magnitude faster than that approach, being O(nm). A different formulation is used for the continuity term than that of the previous authors so that points in the contour are more evenly spaced. The even spacing also makes the estimation of curvature more accurate. Because the concept of curvature is basic to the formulation of the contour functional, several curvature approximation methods for discrete curves are presented and evaluated as to efficiency of computation, accuracy of the estimation, and presence of anomalies.
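A sketch of one greedy iteration as described: each point moves to the neighbourhood position minimising a weighted sum of continuity, curvature, and (negative) image-gradient energy, giving O(nm) work per pass. The continuity term penalises deviation from the average spacing, which is what keeps points evenly spaced; the weights and the gradient-magnitude function are placeholders:

```python
import math

def greedy_snake_step(pts, grad_mag, alpha=1.0, beta=1.0, gamma=1.2, reach=1):
    # pts: closed contour as a list of (x, y); grad_mag(p) -> image gradient
    # magnitude at p (placeholder for interpolated image data).
    n = len(pts)
    d_avg = sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n)) / n
    new = list(pts)
    for i in range(n):
        prev, nxt = new[i - 1], pts[(i + 1) % n]
        best, best_e = pts[i], math.inf
        for dx in range(-reach, reach + 1):      # m = (2*reach + 1)^2 moves
            for dy in range(-reach, reach + 1):
                p = (pts[i][0] + dx, pts[i][1] + dy)
                e_cont = (d_avg - math.dist(prev, p)) ** 2   # even spacing
                e_curv = ((prev[0] - 2 * p[0] + nxt[0]) ** 2
                          + (prev[1] - 2 * p[1] + nxt[1]) ** 2)
                e = alpha * e_cont + beta * e_curv - gamma * grad_mag(p)
                if e < best_e:
                    best, best_e = p, e
        new[i] = best
    return new
```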

19.
Both overhearing and overhearing avoidance in a densely deployed sensor network may incur considerable power consumption. In this paper we propose CCS-MAC (collaborative compression strategy-based MAC), a MAC protocol that exploits overheard data, which traditional MAC protocols treat as useless, for cost and energy savings. In particular, CCS-MAC enables different sensor nodes to perform data compression cooperatively on overheard data, so that the redundancy in data prepared for link-layer transmission is eliminated as early as possible. The collaborative compression problem is analyzed and discussed, and a corresponding linear programming model is formulated. Based on it, a heuristic node-selection algorithm with time complexity O(N²) is proposed to solve the linear programming problem. The node-selection algorithm is implemented in CCS-MAC at each sensor node in a distributed manner. Experimental results verify that the proposed CCS-MAC scheme achieves significant energy savings and thereby prolongs the lifetime of the sensor network.

20.
In the Big Data era, the management of the energy consumed by servers and data centers has become a challenging issue for companies, institutions, and countries. In data-centric applications, database management systems are among the major energy consumers when executing complex queries over very large databases. Several initiatives covering both the hardware and software dimensions have been proposed to deal with this issue. They can be classified into two main approaches, assuming that either (a) the database is already deployed on a given platform, or (b) it is not yet deployed. In this study, we focus on the first set of initiatives, with particular interest in physical design, where optimization structures (e.g., indexes, materialized views) are selected to satisfy a given set of non-functional requirements, such as query performance, for a given workload. In this paper, we first propose an initiative, called Eco-Physic, which integrates the energy dimension into physical design when selecting materialized views, one of the redundant optimization structures. Second, we provide a multi-objective formalization of the materialized-view selection problem, considering two non-functional requirements: query performance and energy consumption while executing a given workload. Third, we develop an evolutionary algorithm to solve the problem. This algorithm differs from existing ones in being interactive: database administrators can adjust energy-sensitive parameters at the final stage of the algorithm's execution according to their specifications. Finally, intensive experiments are conducted using our mathematical cost model and a real device for energy measurements. The results underscore the value of our approach as an effective way to save energy while optimizing queries through materialized views.
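Since the formalization is bi-objective, an evolutionary selector needs a Pareto-dominance filter each generation; a minimal sketch, with cost values standing in for the paper's mathematical cost model:

```python
def dominates(a, b):
    # a, b: (query_cost, energy) pairs; smaller is better on both objectives.
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(candidates):
    # candidates: {view_set: (query_cost, energy)}, view_set hashable
    # (e.g., a frozenset of view identifiers). Keeps non-dominated sets.
    return {v: c for v, c in candidates.items()
            if not any(dominates(o, c) for o in candidates.values())}

# Example: the second candidate dominates the first; the third trades off.
front = pareto_front({
    frozenset({"v1"}): (100.0, 9.0),
    frozenset({"v2"}): (80.0, 7.0),
    frozenset({"v1", "v2"}): (60.0, 12.0),
})
print(sorted(len(v) for v in front))  # [1, 2]
```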

