Similar Documents
 20 similar documents were found (search time: 93 ms)
1.
Part Batching and Scheduling in a Flexible Cell to Minimize Setup Costs   (Total citations: 1; self-citations: 0; citations by others: 1)
In this paper we consider the problem of batching parts and scheduling their operations in flexible manufacturing cells. We consider the case in which there is only one processor and no more than k parts may be present in the system at the same time. The objective is to minimize the total number of setups, given that each part requires a sequence of operations and each operation requires a given tool. We prove that the problem is NP-hard even for k = 3, and we develop a branch-and-price scheme for its solution. Moreover, we report extensive computational experience. Finally, we analyze some special cases and related problems.
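To make the objective concrete, the sketch below (an illustrative evaluator only, not the branch-and-price scheme of the paper; the schedule encoding and helper names are our own) counts the tool setups incurred by a given interleaved operation sequence and checks the at-most-k-parts constraint.

```python
def count_setups(schedule, k):
    """schedule: list of (part_id, tool_id) pairs in processing order.
    A part is 'in the system' from its first to its last operation.
    Counts tool setups and checks the at-most-k-open-parts constraint.
    Illustrative evaluator only; the branch-and-price scheme searches
    over such schedules and is not reproduced here."""
    first, last = {}, {}
    for i, (part, _) in enumerate(schedule):
        first.setdefault(part, i)
        last[part] = i
    setups, mounted = 0, None
    for i, (part, tool) in enumerate(schedule):
        open_parts = {p for p in first if first[p] <= i <= last[p]}
        if len(open_parts) > k:
            raise ValueError(f"more than {k} parts open at step {i}")
        if tool != mounted:          # a different tool must be set up
            setups += 1
            mounted = tool
    return setups

# Example: two parts, k = 2; interleaving lets tool "A" be reused.
print(count_setups([(1, "A"), (2, "A"), (1, "B"), (2, "B")], k=2))  # -> 2
```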

2.
In this study, a two-machine flowshop producing identical parts is considered. Each of the identical parts is assumed to require a number of manufacturing operations, and the machines are assumed to be flexible enough to perform different operations. Due to economic or technological constraints, some specific operations are preassigned to one of the machines. The remaining operations, called flexible operations, can be performed on either machine, so that the same flexible operation may be performed on different machines for different parts. The problem is to determine the assignment of the flexible operations to the machines for each part, with the objective of maximizing the throughput rate. We consider various cases regarding the number of parts to be produced and the capacity of the buffer between the machines, and we present solution methods for each variant of the problem.
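As a rough illustration of the assignment decision for a single part, the following subset-sum dynamic program (a sketch under our own simplifying assumptions; it ignores multiple parts and buffer capacity, which the paper's methods handle) balances the flexible operations across the two machines so as to minimize the bottleneck load.

```python
def balance_flexible_ops(pre1, pre2, flexible):
    """pre1, pre2: total processing time of the operations preassigned to
    machines 1 and 2 for one part; flexible: processing times of operations
    that may go on either machine.  Returns the flexible operations sent to
    machine 1 and the resulting bottleneck load (a throughput proxy)."""
    total = pre1 + pre2 + sum(flexible)
    reachable = {pre1: []}                 # machine-1 load -> flexible ops on machine 1
    for t in flexible:
        updated = dict(reachable)
        for load, chosen in reachable.items():
            updated.setdefault(load + t, chosen + [t])
        reachable = updated
    best = min(reachable, key=lambda load: max(load, total - load))
    return reachable[best], max(best, total - best)

# Example: preassigned loads 5 and 3, flexible operations of length 2, 2, 4.
print(balance_flexible_ops(5, 3, [2, 2, 4]))   # -> ([2], 9): loads 7 and 9
```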

3.
We consider a robotic cell served by a dual-gripper robot. The cell consists of identical CNC machines placed linearly and a material-handling robot that loads/unloads the machines and transports the parts between them. Identical parts are to be processed in this system, and the CNC machines are capable of performing all the operations that a part requires. We consider the problem of sequencing the activities of the robot in order to maximize the throughput rate. As a consequence of the flexibility of the CNC machines, a new class of robot move sequences, called pure cycles, arises. In a pure cycle, the robot loads and unloads each machine once, and each part is processed on exactly one of the machines. The problem is thus to determine the pure cycle that maximizes the throughput rate. We first determine the feasibility conditions for pure cycles and prove some basic results that reduce the number of feasible pure cycles to be investigated. We analyze 2-machine robotic cells in detail and prove that five cycles among the huge number of feasible pure cycles dominate the rest. We determine the parameter regions in which each of the five cycles is optimal. We also analyze the performance improvement that can be attained by using a dual-gripper robot and provide managerial insights.

4.
We consider the problem of rendezvous, proximity operations, and docking of an autonomous spacecraft. The problem can be conveniently divided into three phases: (1) the rendezvous phase; (2) the docking phase; and (3) the docked phase. In each phase the task to perform is different and requires a different control algorithm. Angle and range measurements are available for the entire mission, but the constraints and the tasks to perform differ from phase to phase. Because of these differences in constraints, available measurements, and tasks, we study the problem using a hybrid systems approach, in which the system has different modes of operation and a suitable controller is to be designed for each mode. Following this approach, we characterize the family of individual controllers and the properties they should induce in the closed-loop system to solve the problem within each phase of operation. Furthermore, we propose a supervisory algorithm that robustly coordinates the individual controllers so as to provide a solution to the overall problem. In addition, we present specific controller designs that solve the control problems for the individual phases and validate them numerically.
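The toy sketch below illustrates the supervisory structure of switching between phase controllers; the threshold, the switching rule, and the controller stubs are invented placeholders, not the robust supervisor or the controller designs proposed in the paper.

```python
from enum import Enum, auto

class Phase(Enum):
    RENDEZVOUS = auto()
    DOCKING = auto()
    DOCKED = auto()

def select_phase(range_m, docked, r_prox=250.0):
    """Toy supervisory logic: choose the active phase from the available
    range measurement and a docking-complete flag.  r_prox and the rule
    are illustrative assumptions (the paper's supervisor adds robustness
    and hysteresis, which are omitted here)."""
    if docked:
        return Phase.DOCKED
    return Phase.DOCKING if range_m <= r_prox else Phase.RENDEZVOUS

def control(range_m, docked, controllers):
    """Dispatch to the controller registered for the current phase."""
    return controllers[select_phase(range_m, docked)](range_m)

controllers = {
    Phase.RENDEZVOUS: lambda r: f"rendezvous guidance, range {r:.0f} m",
    Phase.DOCKING:    lambda r: f"docking controller, range {r:.0f} m",
    Phase.DOCKED:     lambda r: "docked-phase / station-keeping controller",
}
print(control(1200.0, False, controllers))   # rendezvous phase
print(control(80.0, False, controllers))     # docking phase
```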

5.
We present a general analysis of the problem of sequencing operations in bufferless robotic cell flow shops with parallel machines. Our focus is on cells that produce identical parts. The objective is to find a cyclic sequence of robot moves that maximizes the steady-state throughput. Parallel machines are used in industry to increase throughput, most typically at bottleneck processes with larger processing times. Efficient use of parallel machines requires that several parts be processed in one cycle of robot movements. We analyze such cycles for constant travel-time robotic cells. Because the number of cycles that produce several parts is very large, we focus on a subclass called blocked cycles, within which we find a dominating subclass called LCM cycles. The results and the analysis in this paper offer practitioners (i) guidelines to determine whether parallel machines will be cost-effective for a given implementation, (ii) a simple formula for determining how many copies of each machine are required to meet a particular throughput rate, and (iii) an optimal sequence of robot moves for a cell with parallel machines under a common condition on the processing times.

6.
Data-intensive Grid applications need access to large data sets that may each be replicated on different resources. Minimizing the overhead of transferring these data sets to the resources where the applications are executed requires that appropriate computational and data resources be selected. In this paper, we consider the problem of scheduling an application composed of a set of independent tasks, each of which requires multiple data sets that are each replicated on multiple resources. We break this problem into two parts: first, matching each task (or job) to one compute resource for executing the job and to one storage resource for each data set required by the job; and second, assigning the set of tasks to the selected resources. We model the first part as an instance of the well-known Set Covering Problem (SCP) and apply a known heuristic for SCP to match jobs to resources. The second part is tackled by extending the existing MinMin and Sufferage algorithms to schedule the set of distributed data-intensive tasks. Through simulation, we experimentally compare the SCP-based matching heuristic to others in conjunction with the task scheduling algorithms and present the results.
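For the second part, the MinMin idea can be sketched as follows; the function and parameter names are ours, and the estimated-completion-time callback would, for data-intensive tasks, fold in the data-transfer times that the paper's matching step accounts for.

```python
def minmin_schedule(tasks, resources, est):
    """MinMin heuristic sketch: est(task, resource, ready) returns the
    estimated completion time of the task on that resource given the
    resource's current ready time.  Repeatedly schedules the task whose
    best achievable completion time is smallest.  The SCP-based matching
    step from the paper is not reproduced here."""
    ready = {r: 0.0 for r in resources}
    schedule, unscheduled = [], set(tasks)
    while unscheduled:
        best = None
        for t in unscheduled:
            r_best = min(resources, key=lambda r: est(t, r, ready[r]))
            ct = est(t, r_best, ready[r_best])
            if best is None or ct < best[0]:
                best = (ct, t, r_best)
        ct, t, r = best
        ready[r] = ct
        schedule.append((t, r, ct))
        unscheduled.remove(t)
    return schedule

# Example with a simple execution-time matrix (no data transfer modelled):
exec_time = {("t1", "r1"): 3, ("t1", "r2"): 5,
             ("t2", "r1"): 4, ("t2", "r2"): 2,
             ("t3", "r1"): 6, ("t3", "r2"): 6}
est = lambda t, r, ready: ready + exec_time[(t, r)]
print(minmin_schedule(["t1", "t2", "t3"], ["r1", "r2"], est))
```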

7.
In this paper, we present a version of the linear hash structure algorithm that increases concurrency using a multi-level transaction model. We exploit the semantics of the linear hash operations at each level of transaction nesting to allow more concurrency. We implement each linear hash operation by a sequence of operations at a lower level of abstraction; each leaf-level linear hash operation is a combination of search and read/write operations. We consider locks at both the vertex (page) and key (tuple) levels to further increase concurrency. As undo-based recovery is not possible with multi-level transactions, we use compensation-based undo to achieve atomicity. We have implemented our model using object-oriented technology and a multithreading paradigm. In our implementation, linear hash operations such as find, insert, delete, split, and merge are implemented as methods and correspond to multi-level transactions.
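For readers unfamiliar with the underlying structure, the following minimal, single-threaded sketch shows the find/insert/split mechanics of linear hashing; it deliberately omits the page/tuple locking, multi-level transactions, and compensation-based undo that are the contribution of the paper, and the class layout is our own.

```python
class LinearHashTable:
    """Single-threaded linear hashing sketch (no duplicate-key handling)."""

    def __init__(self, initial_buckets=2, max_load=2.0):
        self.n0 = initial_buckets        # buckets at the start of a round
        self.level = 0                   # number of completed doubling rounds
        self.next_split = 0              # next bucket to split in this round
        self.buckets = [[] for _ in range(initial_buckets)]
        self.count = 0
        self.max_load = max_load

    def _index(self, key):
        h = hash(key)
        idx = h % (self.n0 * 2 ** self.level)
        if idx < self.next_split:        # that bucket was already split
            idx = h % (self.n0 * 2 ** (self.level + 1))
        return idx

    def find(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

    def insert(self, key, value):
        self.buckets[self._index(key)].append((key, value))
        self.count += 1
        if self.count / len(self.buckets) > self.max_load:
            self._split()

    def _split(self):
        """Split the bucket pointed to by next_split, append one new bucket,
        and redistribute its records with the next round's hash function."""
        old = self.buckets[self.next_split]
        self.buckets[self.next_split] = []
        self.buckets.append([])
        mod = self.n0 * 2 ** (self.level + 1)
        self.next_split += 1
        if self.next_split == self.n0 * 2 ** self.level:
            self.level += 1              # round finished: table has doubled
            self.next_split = 0
        for k, v in old:
            self.buckets[hash(k) % mod].append((k, v))

table = LinearHashTable()
for i in range(20):
    table.insert(i, str(i))
print(table.find(13))   # -> '13'
```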

8.
A scheduling system is proposed and developed for a special type of flow shop. In this flow shop there is one machine at each stage, and a job may require multiple operations at each stage. The first operation of a job on stage j cannot start until the last operation of the job on stage j - 1 has finished. Pre-emption of the operations of a job is not allowed. The flow shop considered here has another feature, namely time lags between the multiple operations of a job: moving from one operation of a job to another requires a finite amount of time. This time lag is independent of the sequence and need not be the same for all operations or jobs. During a time lag of a job, operations of other jobs may be processed. The problem originates from a flexible manufacturing system scheduling problem in which, between operations of a job on the same workstation, refixturing of the parts has to take place in a load/unload station, accompanied by (manual) transportation activities. In this paper a scheduling system is proposed in which the inherent structure of this flow shop is used in the formulation of lower bounds on the makespan. A number of lower bounds are developed and discussed; their use makes it possible to generate a schedule that minimizes makespan or to construct approximate solutions. Finally, some heuristic procedures for this type of flow shop are proposed and compared with well-known heuristic scheduling rules for job shop/flow shop scheduling. An earlier version of this paper was presented at the First International Conference on Industrial Engineering and Production Management, 2–4 June 1993, Mons, Belgium.
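As a baseline for what such bounds look like, the sketch below computes two elementary lower bounds on the makespan, a machine-load bound and a job-chain bound that includes the job's time lags. These are weaker than, and only illustrative of, the structure-exploiting bounds developed in the paper, and the job encoding is our own.

```python
def simple_makespan_bounds(jobs):
    """jobs: list of jobs; each job is a list of (machine, proc_time, lag_after)
    tuples in technological order (lag_after = time lag to the job's next
    operation).  Returns max(machine bound, job bound), where
      machine bound = total processing time routed to any single machine,
      job bound     = one job's processing times plus its internal time lags."""
    machine_work = {}
    job_bound = 0
    for job in jobs:
        length = sum(p for (_, p, _) in job) + sum(lag for (_, _, lag) in job[:-1])
        job_bound = max(job_bound, length)
        for (m, p, _) in job:
            machine_work[m] = machine_work.get(m, 0) + p
    machine_bound = max(machine_work.values(), default=0)
    return max(machine_bound, job_bound)

# Example: two jobs on two stages with a refixturing lag inside job 1.
jobs = [[("M1", 3, 2), ("M1", 4, 0), ("M2", 5, 0)],
        [("M1", 6, 0), ("M2", 2, 0)]]
print(simple_makespan_bounds(jobs))   # -> 14 (job 1: 3 + 2 + 4 + 5)
```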

9.
We consider the effect on system performance of distributing a database in the form of multiple copies at distinct sites. The purpose of our analysis is to determine the gain in READ throughput that can be obtained in the presence of the consistency-preserving algorithms that have to be executed when UPDATE operations are carried out on each copy. We show that READ throughput diminishes if the number of copies exceeds an optimal value. The theoretical model we develop is applied to a system in which consistency is preserved through the use of Ellis' ring algorithm.
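A stylized capacity model (our own illustration, not the paper's analysis of Ellis' ring algorithm) shows why an optimal number of copies exists: reads scale with the number of copies, but every update consumes capacity at every copy plus coordination work that grows with the number of copies. All parameter values below are assumptions.

```python
def read_throughput(n, mu, lam_u, c_apply, c_coord):
    """Each of the n sites has mu units of service capacity per second; every
    UPDATE (arrival rate lam_u) costs each site c_apply to apply locally plus
    c_coord * n of coordination work (e.g., messages circulating on a ring);
    whatever capacity remains at each site serves READs."""
    per_site_reads = mu - lam_u * (c_apply + c_coord * n)
    return max(0.0, n * per_site_reads)

# Because the per-site update overhead grows with n, total READ capacity is a
# downward parabola in n, so adding copies beyond an optimal number hurts.
best_n = max(range(1, 21),
             key=lambda n: read_throughput(n, mu=100, lam_u=2, c_apply=1, c_coord=1.5))
print(best_n, read_throughput(best_n, 100, 2, 1, 1.5))   # -> 16 800.0
```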

10.
We consider the verification of programs manipulating dynamic linked data structures such as various forms of singly and doubly-linked lists or trees. We consider properties that are important for this kind of system, such as the absence of null-pointer dereferences, absence of garbage, shape properties, etc. We develop a verification method based on a novel use of tree automata to represent heap configurations. A heap is split into several "separated" parts such that each of them can be represented by a tree automaton. The automata can refer to each other, allowing the different parts of the heap to mutually refer to their boundaries. Moreover, we allow a hierarchical representation of heaps by letting the alphabets of the tree automata contain other, nested tree automata. Program instructions can easily be encoded as operations on our representation structure. This allows verification of programs based on symbolic state-space exploration together with refinable abstraction within the framework of abstract regular tree model checking. A motivation for the approach is to combine advantages of automata-based approaches (higher generality and flexibility of the abstraction) with some advantages of separation-logic-based approaches (efficiency). We have implemented our approach and tested it successfully on multiple non-trivial case studies.

11.
In this paper we study the effect of the sampling strategy on the achievable accuracy in linear system identification experiments. We regard the experiment as being composed of a number of subexperiments and impose the constraint that the sampling rate be constant within each subexperiment. We investigate the effect of having different sampling rates in the different subexperiments and show that the optimal information return can be achieved with a finite number of subexperiments. We develop the theory in two parts: the first relating to the identification of a stochastic process from output observations, and the second relating to the identification of an input-output transfer function from noisy observations. We present a number of examples which show that the appropriate choice of sampling strategy can be of paramount importance in identification experiments. We also show that the design can be extended in a straightforward manner to safeguard against a single non-informative experiment in the case of a diffuse prior distribution for the parameters.

12.
To exploit the power saving offered by the wireless multicast advantage property, we consider the case of source-initiated multicast traffic. Current research activity on the minimum energy multicast (MEM) problem has focused on devising efficient centralized greedy algorithms for static ad hoc networks. In this paper, we consider mobile ad hoc networks (MANETs) that use omnidirectional antennas and have limited energy resources. We propose the design and an initial evaluation of the distributed minimum energy multicast (DMEM) algorithm for MANETs, which attempts to reduce as much as possible the total RF energy required by the multicast communication. Several localized operations are presented for the DMEM algorithm, in which each node requires only knowledge of, and distances to, its neighboring tree nodes. Through extensive simulation studies, we show that these operations are very efficient both in terms of energy saving and operation overhead.
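The wireless multicast advantage mentioned above can be stated in one line: with an omnidirectional antenna, a single transmission that reaches the farthest intended neighbour also reaches all nearer ones. The snippet below illustrates this cost model; the path-loss exponent and the energy model are assumptions, not parameters from the paper.

```python
def multicast_tx_energy(child_distances, alpha=2.0):
    """Energy for one node to cover a set of children with one omnidirectional
    transmission: max(d)**alpha rather than sum(d**alpha), because the farthest
    child determines the required transmit power."""
    return max(d ** alpha for d in child_distances) if child_distances else 0.0

# One broadcast to children at 10 m, 25 m and 40 m costs 40**2 = 1600 energy
# units, versus 10**2 + 25**2 + 40**2 = 2325 for three separate unicasts.
print(multicast_tx_energy([10, 25, 40]))   # -> 1600.0
```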

13.
We consider an operation assignment problem that arose from a printed circuit (PC) board assembly process. Components can be inserted on boards either manually or by machine. The objective is to determine an assignment of components (operations) to a set of capacitated machines (with the remaining components inserted manually) that minimizes the total set-up and processing cost for assembling all boards. The problem can be formulated as a mixed integer linear program, but this formulation is too large to be solved in practice. For the case of one machine, we present two different solution heuristics and show that, while each can be arbitrarily bad in the worst case, on average the algorithms perform quite well. For the case of multiple machines, we present four different solution heuristics. We discuss the implementation of our results at Hewlett-Packard.

14.
We consider multi-period part selection and loading problems in flexible manufacturing systems with the objective of minimizing subcontracting costs. The part selection problem is to select sets of part types and to determine their quantities to be produced during the upcoming planning horizon while satisfying the due dates of all orders for the parts; the loading problem involves allocating operations and the required tools to machines. Production demands that cannot be satisfied by the system in a given period, because of machine capacity or tool magazine capacity constraints, are met through subcontracting. For the part selection and loading problems, we develop three iterative algorithms, called the forward algorithm, the backward algorithm, and the capacity approximation algorithm, that solve the part selection and loading problems iteratively for each period. To compare the three algorithms, a series of computational experiments is carried out on randomly generated test problems.

15.
Efficient rescue operations require a high level of situation awareness among decision-makers and first responders in order to carry out operations successfully and reduce losses. Moreover, a common operational picture among the involved actors is required to support decision-making. Different organisations and agencies therefore have to collaborate, cooperate, and coordinate their actions with one another, so effective interactions and communications between participants are vital. However, emergency actors still lack adequate support for exchanging information effectively and for ensuring a common operational picture that leads to shared situational awareness. For this reason, we aim to develop and implement Rescue MODES, a communication system designed to support situation awareness among French emergency actors in rescue operations. In this paper, we examine and analyse the actors' activities and interactions so that the system is based on their real needs. We start by studying and modelling the communications, interactions, and information flow; this modelling is based on an application ontology. We then define the requirements for good communication in these operations, present some existing systems in France, and examine how each system responds to these requirements.

16.
A multi-resolution topological representation for non-manifold meshes   (Total citations: 1; self-citations: 0; citations by others: 1)
We address the problem of representing and processing, at different levels of detail, 3D objects described by simplicial meshes that consist of parts of mixed dimensions and have a non-manifold topology. First, we describe a multi-resolution model, which we call a non-manifold multi-tessellation (NMT), and we consider the selective refinement query, which is at the heart of several analysis operations on multi-resolution meshes. Next, we focus on a specific instance of an NMT, generated by simplifying simplicial meshes based on vertex-pair contraction, and we describe a compact data structure for encoding such a model. We also propose a new data structure for two-dimensional simplicial meshes, capable of representing both connectivity and adjacency information with a small memory overhead, which is used to describe the mesh extracted from an NMT through selective refinement. Finally, we present algorithms to efficiently perform updates on such a data structure.

17.
18.
We propose in this paper novel cooperative distributed MPC algorithms for tracking piecewise constant setpoints in linear discrete-time systems. The available literature on cooperative tracking requires that each local controller use the centralized state dynamics while optimizing over its local input sequence, and that each local controller consider a centralized target model. The proposed algorithms instead use a suitably augmented local system, which in general has lower dimension than the centralized system. The same parsimonious parameterization is exploited to define a target model in which only a subset of the overall steady-state input is the decision variable. Consequently, the optimization problems to be solved by each local controller are simpler. We also present a distributed offset-free MPC algorithm for tracking in the presence of modeling errors and disturbances, and we illustrate the main features and advantages of the proposed methods by means of a multiple-evaporator process case study.

19.
An expert system approach for die and mold making operations   (Total citations: 4; self-citations: 0; citations by others: 0)
In the modern manufacturing of sophisticated parts with 3D sculptured surfaces, die and mold making operations are the most widely used machining processes for removing unwanted material. To manufacture a die or a mold, many different cutting tools are involved, from deep-hole drills to the smallest ball-nose end mills. Since the tool specifications differ widely from one another, each mold or die has its own complicated shape, and many machining rules must be taken into account, a great deal of expertise is needed in planning the machining operations. An expert system (DieEX) developed for this purpose is described in the present work. The geometry and material of the workpiece, the tool material, the tool condition, and the operation type are taken as input values, and recommendations about the tool type, tool specifications, work-holding method, type of milling operation, direction of feed, and offset values are provided.

20.
Minimizing Total Weighted Tardiness in a Generalized Job Shop   (Total citations: 1; self-citations: 0; citations by others: 1)
We consider a generalization of the classical job shop scheduling problem with release times, positive end–start time lags, and a general precedence graph. As the objective we consider total weighted tardiness. We use a tabu search algorithm to search for the order in which the operations are processed on each machine. Given a sequence of operations on each machine, we determine optimal starting times by solving a maximum cost flow problem, and this solution is used to determine the neighborhood for our tabu search algorithm. All sequences in the neighborhood are obtained by swapping certain pairs of adjacent operations. We show that only swaps that possess a certain property can improve the current solution; if no such swap is available in the neighborhood, then the current solution is globally optimal. In the computational results we compare our method with other procedures proposed in the literature. Our tabu search algorithm appears to be effective with respect to both running time and solution quality. The research was carried out at the Technische Universiteit Eindhoven and the Universiteit Utrecht with support of Baan and the Future and Emerging Technologies programme of the EU under contract number IST-1999-14186 (ALCOM-FT).
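The flavour of the adjacent-swap tabu search can be sketched on a deliberately simplified problem, a single machine with a plain permutation objective, rather than the job-shop setting in which starting times come from a maximum cost flow problem; the tenure, iteration limit, and starting rule below are arbitrary assumptions.

```python
def weighted_tardiness(seq, p, d, w):
    """Total weighted tardiness of a permutation on a single machine."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += w[j] * max(0, t - d[j])
    return total

def tabu_adjacent_swaps(p, d, w, iters=200, tenure=7):
    """Tabu search whose neighborhood consists of swaps of adjacent jobs."""
    n = len(p)
    current = sorted(range(n), key=lambda j: d[j])       # EDD start solution
    best, best_cost = current[:], weighted_tardiness(current, p, d, w)
    tabu = {}                                            # swapped pair -> expiry iteration
    for it in range(iters):
        moves = []
        for i in range(n - 1):
            pair = frozenset((current[i], current[i + 1]))
            neigh = current[:i] + [current[i + 1], current[i]] + current[i + 2:]
            cost = weighted_tardiness(neigh, p, d, w)
            if tabu.get(pair, -1) <= it or cost < best_cost:   # aspiration criterion
                moves.append((cost, pair, neigh))
        if not moves:
            break
        cost, pair, current = min(moves, key=lambda m: m[0])
        tabu[pair] = it + tenure
        if cost < best_cost:
            best, best_cost = current[:], cost
    return best, best_cost

p = {0: 4, 1: 3, 2: 7, 3: 2}           # processing times
d = {0: 5, 1: 6, 2: 8, 3: 4}           # due dates
w = {0: 1, 1: 2, 2: 3, 3: 1}           # tardiness weights
print(tabu_adjacent_swaps(p, d, w))
```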
