Similar documents
20 similar documents found; search took 15 ms.
1.

Advances in optical data transmission technology have enabled the current expansion of bandwidth-demanding services over the Internet. In addition, the emergence of orthogonal frequency-division multiplexing (OFDM) has opened the possibility of increasing network spectral efficiency by solving the routing, modulation and spectrum assignment (RMSA) problem. Recently, investigators have examined the effects of multiple demands, or multiple virtual topologies, requested at different time periods over a single physical substrate. This makes the RMSA problem harder, with many more instances to solve. Such analysis is required because network traffic does not remain static over time, and demand can increase considerably as new user services arise. Planning the network over multiple periods therefore becomes essential, since it can anticipate the point where demands exceed the bandwidth capacity and cause request blocking in future periods. In this work, we provide a novel mixed integer linear programming (MILP) formulation that solves the RMSA problem for t periods of demands. The model can be used not only to find solutions that minimize the used capacity, but also as an efficient network-planning method, since a single formulation and a single iteration estimate the point of resource exhaustion in each period t. Results for a small network show the efficiency of the proposed MILP formulation. We also propose an alternative version of the formulation with predefined paths, which is less computationally demanding. The results are compared to step-by-step planning, a decomposition strategy that breaks the formulation into t separate steps. The comparison shows that the single multi-period formulation is a good strategy for the problem. By contrast, the step-by-step strategy may require reconfigurations, and possibly service interruptions, from one step to the next.
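The planning idea above can be illustrated with a drastically simplified model. The sketch below is illustrative only: a single link with contiguous-slot first-fit assignment rather than the paper's network-wide MILP, and all names are hypothetical. It shows how one would detect the first period in which demand exceeds the available spectrum.

```python
def first_fit(occupied, width, total_slots):
    """Return the start index of a free contiguous block of `width` slots,
    or None if no such block exists (spectrum-contiguity constraint)."""
    run = 0
    for s in range(total_slots):
        run = run + 1 if s not in occupied else 0
        if run == width:
            return s - width + 1
    return None

def exhaustion_period(demands_per_period, total_slots):
    """demands_per_period: list (one entry per period) of lists of slot widths
    requested in that period. Returns the first 1-based period in which a
    demand must be blocked, or None if all periods fit."""
    occupied = set()
    for t, demands in enumerate(demands_per_period, start=1):
        for width in demands:
            start = first_fit(occupied, width, total_slots)
            if start is None:
                return t                       # resource exhaustion detected
            occupied.update(range(start, start + width))
    return None
```

With 10 slots, two 4-slot demands in period 1 leave only 2 free slots, so a further 4-slot demand in period 2 is blocked.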


2.
In this paper, we consider two new types of two-machine flowshop scheduling problems in which a batching machine is followed by a single machine. In the first type, normal jobs are scheduled on the batching and single machines with transportation between them. In the second type, normal jobs are processed on the batching machine while deteriorating jobs are scheduled on the single machine. For the first type, we formulate the makespan-minimization problem as a mixed integer programming model and prove that it is strongly NP-hard. We then derive a heuristic algorithm with a worst-case error bound, and computational experiments verify the heuristic's effectiveness. For the second type, two objectives are considered. For makespan minimization, we give a polynomial-time optimal algorithm. For minimizing the total completion time, we show that the problem is strongly NP-hard and propose a polynomial-time optimal algorithm for a special case.
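A minimal sketch of evaluating the makespan for the first problem type, under assumptions the abstract leaves open (parallel batching, i.e. a batch occupies the batching machine for its longest job, and a fixed transport delay between machines; all names are hypothetical):

```python
def flowshop_makespan(batches, transport):
    """batches: list of batches; each batch is a list of (p1, p2) tuples,
    where p1 is the job's time on the batching machine and p2 its time on
    the single machine. A batch occupies machine 1 for max(p1) (parallel
    batching assumption); a fixed transport delay separates the machines.
    Jobs in a batch arrive at machine 2 together, so the order within the
    batch does not change the completion time of its last job."""
    t1 = t2 = 0.0
    for batch in batches:
        t1 += max(p1 for p1, _ in batch)   # batch finishes on machine 1
        ready = t1 + transport             # batch arrives at machine 2
        for _, p2 in batch:
            t2 = max(t2, ready) + p2       # sequential processing
    return t2
```

For one batch {(3,2), (1,2)} and transport 1, machine 1 finishes at 3, the jobs arrive at 4, and machine 2 completes them at 8.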

3.
Case-based reasoning (CBR) is the area of artificial intelligence in which problems are solved by adapting solutions that worked for similar problems in the past. The technique can be applied in different domains and with different problem representations. In this paper, a curve base generator system (CuBaGe) is presented. The framework is designed as a domain-independent system for the analysis and prediction of curves and time-series trends, based on CBR technology. CuBaGe employs a novel curve representation based on splines and a corresponding similarity function based on definite integrals. This combination of curve representation and similarity measure produced excellent results on sparse and non-equidistant time series, as demonstrated through a set of experiments. Copyright © 2008 John Wiley & Sons, Ltd.

4.
In this article, we consider a single-machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By time-dependent learning and deterioration effects, we mean that a job's processing time is a function of its starting time and of the total normal processing time of the jobs preceding it in the sequence. The objective is to determine a schedule that minimizes the total completion time. For the case −1 < a < 0, where a denotes the learning index, the problem remains open; we show that an optimal schedule is V-shaped with respect to the jobs' normal processing times. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic performs effectively and efficiently in obtaining near-optimal solutions.
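The V-shape property can be sketched independently of the exact time-dependent model (which is as defined in the article). A minimal, hypothetical construction and check:

```python
def v_shaped_order(normal_times):
    """Arrange jobs so normal processing times are V-shaped: nonincreasing,
    then nondecreasing. One simple construction: sort descending and deal
    jobs alternately to the front and to the back of the sequence."""
    front, back = [], []
    for i, p in enumerate(sorted(normal_times, reverse=True)):
        (front if i % 2 == 0 else back).append(p)
    return front + back[::-1]

def is_v_shaped(seq):
    """True if seq is nonincreasing up to its minimum, nondecreasing after."""
    k = seq.index(min(seq))
    return (all(seq[i] >= seq[i + 1] for i in range(k)) and
            all(seq[i] <= seq[i + 1] for i in range(k, len(seq) - 1)))
```

The heuristics in the paper would search among such V-shaped sequences rather than all n! permutations.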

5.
Through evolution, biomolecules have solved fundamental problems as a highly interactive parallel and distributed system that we are just beginning to decipher. Biomolecular computing (BMC) protocols, however, are unreliable, inefficient and unscalable compared with computational algorithms run in silico. An alternative approach to exploiting these properties is explored: building biomolecular analogs (eDNA) and virtual test tubes in electronics, capturing the best of both worlds. A distributed implementation of such a virtual test tube, Edna, on a cluster of PCs is described; it captures the massive asynchronous parallel interactions typical of BMC. Results are reported from over 1000 experiments that calibrate and benchmark Edna's performance, reproduce and extend Adleman's solution to the Hamiltonian path problem for larger families of graphs than has been possible on a single processor or has actually been carried out in wet labs, and assess the feasibility and performance of DNA-based associative memories. The experiments required a million-fold fewer molecules and are at least as reliable as in vitro experiments, providing strong evidence that the molecular-computing paradigm can be implemented much more efficiently (in time, cost, and probability of success) in silico than in the corresponding wet experiments, at least in the range where Edna can practically be run. The approach also demonstrates intrinsic advantages of using electronic analogs of DNA as genomes for genetic algorithms and evolutionary computation.
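Adleman's benchmark problem, which Edna reproduces with virtual DNA strands, can be stated compactly. A brute-force reference solver (hypothetical names; sequential, so only practical for tiny graphs, which is precisely the limitation BMC-style massive parallelism attacks):

```python
from itertools import permutations

def hamiltonian_path(vertices, edges):
    """Return a directed path visiting every vertex exactly once, or None.
    Enumerates all orderings, mirroring how a DNA computation generates
    all candidate strands and filters out the infeasible ones."""
    edge_set = set(edges)
    for order in permutations(vertices):
        if all((a, b) in edge_set for a, b in zip(order, order[1:])):
            return list(order)
    return None
```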

6.
We consider an M[x]/G/1 queueing system with a vacation time under the single vacation policy, in which the server takes exactly one vacation between two successive busy periods. We derive the steady-state queue size distribution at different points in time, as well as the steady-state distributions of the busy period and the unfinished work (backlog) of this model.

Scope and purpose: This paper addresses model building for manufacturing systems of job-shop type, where the server takes exactly one vacation after the end of each busy period. This vacation can be used as a post-processing time after clearing the jobs in the system. To be more realistic, we further assume that arrivals occur in batches of random size rather than as single units, which covers many practical situations: in job-shop manufacturing, each job may require more than one unit to be manufactured; in digital communication systems, a transmitted message may consist of a random number of packets. Such systems can be modeled as an M[x]/G/1 queue with a single vacation policy, extending the results of Levy and Yechiali, Manage Sci 22 (1975) 202, and Doshi, Queueing Syst 1 (1986) 29.

7.
This paper studies the problems of minimizing total completion time (ΣCi) and makespan (Cmax) on a single batch processing machine with job families and secondary resource constraints. The motivation for this problem is the burn-in operation in the final testing stage of semiconductor manufacturing, where both oven capacity and the number of boards available may constrain scheduling decisions. Because both problems are NP-hard, integer programming formulations are developed for special cases and are then used to develop heuristics. Extensive computational experiments show that the heuristics are capable of consistently obtaining good solutions in modest CPU times.
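One way to picture the batching constraints is a greedy batch-formation sketch (assumptions: incompatible job families cannot share an oven batch, a batch's load may not exceed the oven capacity, and every job fits on its own; names are hypothetical and this is not the paper's heuristic):

```python
from collections import defaultdict

def greedy_batches(jobs, capacity):
    """jobs: list of (family, size). Jobs of different families cannot share
    a batch (as in burn-in ovens); a batch's total size may not exceed the
    oven capacity. Returns a list of (family, [sizes]) batches formed
    greedily, largest jobs first within each family."""
    by_family = defaultdict(list)
    for fam, size in jobs:
        by_family[fam].append(size)
    batches = []
    for fam, sizes in by_family.items():
        cur, load = [], 0
        for s in sorted(sizes, reverse=True):   # assumes each s <= capacity
            if load + s > capacity:
                batches.append((fam, cur))
                cur, load = [], 0
            cur.append(s)
            load += s
        if cur:
            batches.append((fam, cur))
    return batches
```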

8.
The single allocation p-hub center problem is an NP-hard location–allocation problem: locate hub facilities in a network and allocate non-hub nodes to hubs so that the maximum distance/cost between origin–destination pairs is minimized. In this paper we present an exact two-phase algorithm: the first phase computes a set of potentially optimal hub combinations using a shortest-path-based branch and bound, followed by an allocation phase using a reduced-size formulation that returns the optimal solution. To obtain a good upper bound for the branch and bound, we developed a heuristic for the single allocation p-hub center problem based on ant colony optimization. Numerical results on benchmark instances show that the new approach is superior to traditional MIP solvers such as CPLEX. As a result, we provide new optimal solutions for larger problems than those previously reported in the literature, solving problems of up to 400 nodes in reasonable time; to the best of our knowledge, these are the largest such problems solved to date.
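A tiny exhaustive version clarifies the objective (a hypothetical sketch: it enumerates hub sets and uses nearest-hub allocation as a simplification, whereas the true single-allocation problem optimizes the allocation too; only feasible for very small n, unlike the paper's branch and bound):

```python
from itertools import combinations

def p_hub_center(dist, p, alpha=1.0):
    """Enumerate all p-subsets of nodes as hubs; allocate each node to its
    nearest hub; the cost of pair (i, j) routed via hubs k, l is
    dist[i][k] + alpha*dist[k][l] + dist[l][j] (alpha = inter-hub discount).
    Returns (best max pair cost, best hub set)."""
    n = len(dist)
    best_cost, best_hubs = float("inf"), None
    for hubs in combinations(range(n), p):
        alloc = {i: min(hubs, key=lambda h: dist[i][h]) for i in range(n)}
        worst = max(dist[i][alloc[i]] + alpha * dist[alloc[i]][alloc[j]]
                    + dist[alloc[j]][j]
                    for i in range(n) for j in range(n))
        if worst < best_cost:
            best_cost, best_hubs = worst, hubs
    return best_cost, best_hubs
```

On a 4-node line metric with p = 2, the two middle nodes are the optimal hubs.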

9.
Traditional approaches to addressing historical queries assume a single line of time evolution; that is, a system (database, relation) evolves over time through a sequence of transactions. Each transaction always applies to the unique, current state of the system, resulting in a new current state. There are, however, complex applications where the system's state evolves into multiple lines of evolution. In general, this creates a tree (hierarchy) of evolution lines, where each tree node represents the time evolution of a particular subsystem. Multiple lines create novel historical queries, such as vertical or horizontal historical queries. The key characteristic of these problems is that portions of the history are shared; answering historical queries should not necessitate duplication of shared histories, as this could increase the storage requirements dramatically. Both the vertical and horizontal historical queries have two parts: a search part, where the time of interest is located together with the appropriate subsystem, and a reconstruction part, where the subsystem's state is reconstructed for that time. This article focuses on the search part; several reconstruction methods, designed for single evolution lines, can be applied once the appropriate time of interest is located. For both the vertical and the horizontal historical queries, we present algorithms that work without duplicating shared histories. Combinations of the vertical and horizontal queries are possible, and enable searching in both dimensions of the tree of evolutions.

10.
We consider a discrete-time single-server queue in which the idle server waits until the queue size reaches a level N before starting a batch service of N messages; subsequent arrivals during the busy period receive single services. We find the stationary distributions of the queue and system lengths as well as some performance measures. The vacation and busy periods of the system and the number of messages served during a busy period are also analyzed, along with the stationary distributions of the time spent waiting in the queue and in the system. Finally, a total expected cost function is developed to determine the optimal operating N-policy at minimum cost.

11.
This paper addresses scheduling problems for tasks with release and execution times. We present a number of efficient and easy-to-implement algorithms for constructing schedules of minimum makespan when the number of distinct task execution times is fixed. For a set of independent tasks, our algorithm in the single-processor case runs in time linear in the number of tasks; with precedence constraints, it runs in time linear in the sum of the number of tasks and the size of the precedence constraints. In the multiprocessor case, our algorithm constructs minimum-makespan schedules for independent tasks with uniform execution times; it runs in O(n log m) time, where n is the number of tasks and m is the number of processors. Received September 25, 1997; revised June 11, 1998.
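For the single-processor case with independent tasks, the makespan objective itself is easy to state: never idle while work is available. A minimal sketch (sorting-based, O(n log n), rather than the paper's linear-time algorithm that exploits the fixed number of distinct execution times):

```python
def min_makespan(jobs):
    """Single processor, independent jobs with release times (r, p).
    Sequencing in nondecreasing release time and never idling while a job
    is available minimizes the makespan."""
    t = 0
    for r, p in sorted(jobs):      # sort by release time
        t = max(t, r) + p          # wait for release if necessary, then run
    return t
```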

12.
On the Complexity of Adjacent Resource Scheduling
We study the problem of scheduling resource(s) for jobs in an adjacent manner (ARS). The problem relates to fixed-interval scheduling on one hand and to two-dimensional strip packing on the other, and it is closely related to multiprocessor scheduling; its distinguishing characteristic is the resource-adjacency constraint. As an application of ARS, consider an airport where passengers check in for their flight, joining lines before one or more desks where their luggage is checked. To smooth these operations the airport maintains a clear order in the waiting lines: a number n(f) of adjacent desks is assigned exclusively to flight f during a fixed time interval I(f). For each flight in a given planning horizon of discrete time periods, one seeks a feasible assignment to adjacent desks, and the objective is to minimize the total number of desks involved. The paper explores two problem variants and relates them to other scheduling problems. The basic, rectangular version of ARS is a special case of multiprocessor scheduling; the other variant is more general and does not fit any existing scheduling model. After presenting an integer linear program for ARS, we discuss the complexity of both problems as well as of special cases. The decision version of the rectangular problem is strongly NP-complete, and the more general problem is strongly NP-complete already for two time periods. The paper also identifies a number of cases that are solvable in polynomial time.
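The desk-assignment flavor of ARS can be sketched with a greedy placement (a hypothetical heuristic for intuition, not the paper's ILP or complexity results): each flight gets the lowest-indexed block of adjacent desks that is free for its whole interval.

```python
def assign_desks(flights, max_desks):
    """flights: list of (start, end, n); flight needs n adjacent desks over
    the half-open period interval [start, end). Returns {flight index: first
    desk of its block}, or None if some flight cannot be placed."""
    busy = [set() for _ in range(max_desks)]   # busy[d] = occupied periods
    out = {}
    for idx, (start, end, n) in enumerate(flights):
        span = set(range(start, end))
        for d0 in range(max_desks - n + 1):
            if all(not (busy[d0 + k] & span) for k in range(n)):
                for k in range(n):
                    busy[d0 + k].update(span)
                out[idx] = d0
                break
        else:
            return None                        # no adjacent block available
    return out
```

The number of distinct desks actually used by the returned assignment is the quantity ARS minimizes exactly.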

13.
Multi-instance clustering with applications to multi-instance prediction
In the multi-instance learning setting, each object is represented by a bag of multiple instances rather than by a single instance as in traditional learning. Previous work in this area concerns only multi-instance prediction problems, where each bag carries a binary (classification) or real-valued (regression) label; unsupervised multi-instance learning, where bags are unlabeled, had not been studied. In this paper, the problem of unsupervised multi-instance learning is addressed and a multi-instance clustering algorithm named Bamic is proposed. Briefly, by regarding bags as atomic data items and using some form of distance metric to measure distances between bags, Bamic adapts the popular k-medoids algorithm to partition the unlabeled training bags into k disjoint groups. Based on the clustering results, a novel multi-instance prediction algorithm named Bartmip is then developed: each bag is re-represented by a k-dimensional feature vector whose i-th feature is the distance between the bag and the medoid of the i-th group. Bags are thus transformed into feature vectors, so common supervised learners can be trained on the transformed vectors, each associated with its original bag's label. Extensive experiments show that Bamic effectively discovers the underlying structure of the data set and that Bartmip works quite well on various kinds of multi-instance prediction problems.
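A compact sketch of the Bamic idea (assumptions: one plausible bag distance, the minimal Hausdorff distance on 1-D instances, and a deterministic first-k medoid initialization; the paper's exact distance functions and initialization may differ):

```python
def min_hausdorff(bag_a, bag_b):
    """Minimal Hausdorff distance between two bags of 1-D instances:
    the smallest pairwise instance distance."""
    return min(abs(a - b) for a in bag_a for b in bag_b)

def k_medoids_bags(bags, k, iters=20):
    """k-medoids over bags treated as atomic items. Deterministic init
    (first k bags); assumes no cluster becomes empty, which holds for
    well-separated toy data like the test below."""
    medoids = list(range(k))
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for i in range(len(bags)):
            j = min(range(k),
                    key=lambda m: min_hausdorff(bags[i], bags[medoids[m]]))
            clusters[j].append(i)
        # new medoid of each cluster: member minimizing total distance
        new = [min(members,
                   key=lambda c: sum(min_hausdorff(bags[c], bags[o])
                                     for o in members))
               for members in clusters]
        if new == medoids:
            break
        medoids = new
    return medoids, clusters
```

The Bartmip step would then map each bag to the k-vector of its distances to the returned medoids and hand those vectors to any supervised learner.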

14.
We consider linear continuous-time systems with multiplicative noise and polytopic-type parameter uncertainty, and we address the H∞ and H2 filtering problems for these systems. The problems are solved by applying a vertex-dependent Lyapunov function that considerably reduces the overdesign of the classical approach, which uses a single Lyapunov function for the whole parameter range. A new application of the Finsler lemma further decreases the overdesign entailed in the usual derivation of the robust estimation problem. The theory is also extended to robust gain scheduling, where online measurements are used to improve the estimation. Two examples demonstrate the tractability and applicability of the design methods.

15.
16.
17.
《Ergonomics》2012,55(9):1235-1247
Abstract

This study was designed to test the usefulness of the axillary temperature as a circadian marker rhythm, e.g. for shift workers in field experiments where recordings may be required over extended periods. Axillary and rectal temperatures were recorded automatically at 5- or 15-min intervals (Δt) using a 'Chronotherm' ambulatory system. Conventional methods (t-test, analysis of variance (ANOVA), correlation), curve fitting (cosinor) and power spectra were used for statistical analyses. (a) Rectal and axillary temperature rhythms were compared in five subjects over a 36-h span under laboratory conditions. Apart from rather small but constant differences in the respective mesors (24-h mean) and acrophases (peak time location on the 24-h scale), rectal and axillary recordings gave similar results in each individual. (b) In analyses of axillary temperatures recorded over periods of up to 15 days in a further five subjects (a field study with usual activities), only minor changes in circadian rhythm parameters resulted from varying the sampling interval over the range Δt = 15 min to Δt = 240 min, with or without the data recorded during sleep. (c) The axillary temperature recorded at Δt = 15 min over a 13-day span in a 32-year-old worker on a 3-4 day rotating shift schedule had a prominent period (τ) of 22.3 h, although the prominent period of the rhythms of both wrist activity and the sleep-wake cycle remained unaltered at 24 h. Thus internal desynchronization was shown to have occurred in this subject.
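The cosinor analysis mentioned above fits y(t) = M + A·cos(ωt + φ) and reports the mesor M, amplitude A and acrophase φ. A simplified sketch (assumes equidistant sampling over an integer number of cycles, so Fourier projections replace the general least-squares cosinor such studies use):

```python
import math

def simple_cosinor(y, dt, period):
    """Estimate (mesor, amplitude, acrophase) of y(t) = M + A*cos(w*t + phi)
    sampled every dt over a whole number of cycles. Under uniform sampling,
    cos and sin projections recover the regression coefficients exactly."""
    n = len(y)
    w = 2 * math.pi / period
    mesor = sum(y) / n
    beta = 2 / n * sum(v * math.cos(w * i * dt) for i, v in enumerate(y))
    gamma = 2 / n * sum(v * math.sin(w * i * dt) for i, v in enumerate(y))
    amplitude = math.hypot(beta, gamma)
    acrophase = math.atan2(-gamma, beta)   # radians; peak occurs at t = -phi/w
    return mesor, amplitude, acrophase
```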

18.
This article addresses three stability problems related to networked control systems (NCSs) with periodic scheduling, where control systems may have multiple samplings in a hyperperiod (a hyperperiod is a periodically repeated scheduling sequence for all tasks in an NCS). As expected, the analysis of a system with multiple samplings is much richer than in the single-sampling case: for example, a system with two samplings may be stable (unstable) even if it is unstable (stable) under either sampling alone. In this context, it is important to understand how network-induced delays and multiple samplings affect the system's stability. Three particular stability problems involving constant and/or time-varying parameters are investigated, and the corresponding stability regions are derived. Numerical examples and various discussions complete the presentation.
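The remark that stability under each sampling separately implies nothing about the multi-sampling case can be made concrete with a toy discrete-time example (the matrices are illustrative, not from the article): two Schur-stable one-step maps whose alternation diverges.

```python
def spectral_radius_2x2(m):
    """Largest |eigenvalue| of a 2x2 matrix, via the quadratic formula."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = disc ** 0.5
        return max(abs(tr + r), abs(tr - r)) / 2
    return det ** 0.5              # complex pair: |lambda| = sqrt(det)

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A1 = [[0.9, 1.0], [0.0, 0.9]]      # Schur-stable alone: spectral radius 0.9
A2 = [[0.9, 0.0], [1.0, 0.9]]      # Schur-stable alone: spectral radius 0.9
```

The hyperperiod map A1·A2 has spectral radius ≈ 2.34 > 1, so the state grows without bound when the two samplings alternate, even though each map is stable on its own.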

19.
This paper presents a set of efficient graph transformations for local instruction scheduling. These transformations to the data-dependency graph prune redundant and inferior schedules from the solution space of the problem. Optimally scheduling the transformed problems using an enumerative scheduler is faster, and the number of problems solved to optimality within a bounded time is increased. Furthermore, heuristic scheduling of the transformed problems often yields improved schedules for hard problems. The basic node-based transformation runs in O(ne) time, where n is the number of nodes and e is the number of edges in the graph. A generalized subgraph-based transformation runs in O(n²e) time. The transformations are implemented within the GNU Compiler Collection (GCC) and are evaluated experimentally using the SPEC CPU2000 floating-point benchmarks targeted to various processor models. The results show that the transformations are fast and improve the results of both heuristic and optimal scheduling.

20.
Video surveillance using closed-circuit television (CCTV) cameras is one of the fastest growing areas in the field of security technologies. However, existing video surveillance systems are still not at a stage where they can be used for crime prevention: they rely heavily on human observers and are therefore limited by factors such as fatigue and monitoring capability over long periods of time. This work addresses these problems by proposing automatic suspicious-behaviour detection that utilises contextual information via three main components: a context space model, a data stream clustering algorithm, and an inference algorithm. The use of contextual information is still limited in the domain of suspicious-behaviour detection, yet it is nearly impossible to understand human behaviour correctly without considering the context in which it is observed. This work presents experiments using video feeds taken from the CAVIAR dataset and from a camera mounted on one of the buildings (Z-Block) at the Queensland University of Technology, Australia. The experiments show that, by exploiting contextual information, the proposed system makes more accurate detections, especially of behaviours that are suspicious only in some contexts while normal in others. Moreover, this information gives critical feedback to system designers for refining the system.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号