Similar Documents
Found 20 similar documents (search time: 937 ms)
1.
Stability and performance analysis of mixed product run-to-run control   (total citations: 1; self-citations: 1; citations by others: 1)
Run-to-run control has been widely used in batch manufacturing processes to reduce variations. However, in batch processes, many different products are fabricated on the same set of process tools with different recipes. Two intuitive ways of defining a control scheme for such a mixed production mode are: (i) each run of every product is used to estimate a common tool disturbance parameter, i.e., a "tool-based" approach; (ii) a single disturbance parameter that describes the combined effect of both tool and product is estimated from the results of runs of a particular product on a specific tool, i.e., a "product-based" approach. In this study, a model two-product plant was developed to investigate the "tool-based" and "product-based" approaches. The closed-loop responses are derived analytically and the control performances are evaluated. We found that a "tool-based" approach is unstable when the plant is non-stationary and the plant-model mismatches differ between products. A "product-based" controller is stable, but its performance is inferior to single-product control when the drift is significant. While the controller for frequent products can be tuned in a similar manner as in single-product control, a more active controller should be used for infrequent products, which experience a larger drift between runs. The results were substantiated for a larger system with multiple products, multiple plants, and a random production schedule.
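A minimal sketch of the two estimation schemes compared above, using an EWMA disturbance estimator and an assumed linear plant y = beta*u + disturbance (the class, variable names, and numbers are illustrative, not the paper's notation):

    # Schematic EWMA run-to-run controller for a mixed-product tool.
    class R2RController:
        def __init__(self, beta_model, target, weight=0.3):
            self.beta = beta_model      # assumed process gain (may mismatch the true gain)
            self.target = target        # desired output for this product
            self.weight = weight        # EWMA filter weight (a larger weight = a more "active" controller)
            self.disturbance = 0.0      # current disturbance estimate

        def recipe(self):
            # Invert the assumed linear model y = beta*u + disturbance.
            return (self.target - self.disturbance) / self.beta

        def update(self, u, y):
            # EWMA update of the disturbance estimate from the last run.
            observed = y - self.beta * u
            self.disturbance = (self.weight * observed
                                + (1.0 - self.weight) * self.disturbance)

    # "Product-based" scheme: independent controllers per product on the same tool;
    # the infrequent product B gets a larger EWMA weight to cope with larger run-to-run drift.
    controllers = {"A": R2RController(beta_model=1.0, target=50.0, weight=0.2),
                   "B": R2RController(beta_model=1.2, target=80.0, weight=0.6)}

In a "tool-based" variant, all products on the tool would share a single disturbance estimate; as noted above, that scheme can go unstable when the plant drifts and the model mismatch differs across products.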

2.
Inventory management is an important area of production control. In 1999, Pfohl et al. [Pfohl, H.-C., Cullmann, O., & Stölzle, W. (1999). Inventory management with statistical process control: Simulation and evaluation. Journal of Business Logistics, 20, 101–120] developed a real-time inventory decision support system that uses individual control charts to monitor the inventory level (i.e., stock quantity) and the market demand, together with a series of decision rules that help the inventory manager determine when and how much to order. In the present paper, a real-time inventory decision system is proposed by incorporating the Western Electric run rules into the decision rules of that system. Since demand data sometimes follow a time-series pattern (i.e., they may be autocorrelated), the proposed decision system employs an ARMA control chart to monitor the market demand and an individual control chart to monitor the inventory level. A simulation study is conducted to investigate the effects of demand pattern and autocorrelation on the proposed inventory decision system and to verify its effectiveness. The index "service level" is selected as the key indicator of system performance. The results of the simulation study show that the performance of the proposed inventory decision system is quite consistent, with a service level always greater than 90% across the various demand patterns.
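For reference, the four classic Western Electric run rules mentioned above can be checked with a few lines of code; in this sketch (function and variable names are illustrative) a signal is raised on whichever series is being monitored:

    def western_electric_signal(points, mean, sigma):
        """Return True if the most recent points violate any of the four
        classic Western Electric run rules."""
        z = [(x - mean) / sigma for x in points]
        r1 = abs(z[-1]) > 3                                       # 1 point beyond 3-sigma
        r2 = len(z) >= 3 and (sum(v > 2 for v in z[-3:]) >= 2
                              or sum(v < -2 for v in z[-3:]) >= 2)  # 2 of 3 beyond 2-sigma, same side
        r3 = len(z) >= 5 and (sum(v > 1 for v in z[-5:]) >= 4
                              or sum(v < -1 for v in z[-5:]) >= 4)  # 4 of 5 beyond 1-sigma, same side
        r4 = len(z) >= 8 and (all(v > 0 for v in z[-8:])
                              or all(v < 0 for v in z[-8:]))        # 8 in a row on one side of the centre line
        return r1 or r2 or r3 or r4

In the proposed system, a signal of this kind on the inventory or demand chart would trigger the corresponding ordering decision rule.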

3.
A crucial step in modeling a system is determining the values of the parameters to use in the model. In this paper we assume that we have a set of measurements collected from an operational system, and that an appropriate model of the system (e.g., based on queueing theory) has been developed. Not infrequently, proper values for certain parameters of this model are difficult to estimate from the available data (because the corresponding parameters have unclear physical meaning, because they cannot be obtained directly from the available measurements, etc.). Hence, we need a technique to determine the missing parameter values, i.e., to calibrate the model. As an alternative to unscalable "brute force" techniques, we propose to view model calibration as a non-linear optimization problem with constraints. The resulting method is conceptually simple and easy to implement. Our contribution is twofold. First, we propose improved definitions of the "objective function" that quantifies the "distance" between the performance indices produced by the model and the values obtained from measurements. Second, we develop a customized derivative-free optimization (DFO) technique whose original feature is the ability to allow temporary constraint violations. This technique allows us to solve the optimization problem accurately, thereby providing the "right" parameter values. We illustrate our method using two simple real-life case studies.
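A generic sketch of the calibration loop described above, not the authors' customized DFO technique (which additionally allows temporary constraint violations): the objective measures a relative distance between measured and model-predicted indices, and an off-the-shelf derivative-free optimizer searches the parameter space. The toy M/M/1 model and all numbers are illustrative.

    import numpy as np
    from scipy.optimize import minimize

    # Performance indices measured on the operational system
    # (utilisation, mean response time); toy values for illustration.
    measured = np.array([0.80, 1.25])
    arrival_rate = 4.0   # assumed known from the measurements

    def run_model(params):
        # Toy M/M/1 stand-in for "the model of the system":
        # the unknown parameter to calibrate is the service rate mu.
        mu = params[0]
        rho = arrival_rate / mu
        resp = 1.0 / (mu - arrival_rate) if mu > arrival_rate else 1e6
        return np.array([rho, resp])

    def objective(params):
        predicted = run_model(params)
        # Relative squared error, so indices on different scales weigh comparably.
        return float(np.sum(((predicted - measured) / measured) ** 2))

    # Derivative-free search (Nelder-Mead) over the unknown parameter,
    # standing in for the paper's customised DFO technique.
    result = minimize(objective, x0=np.array([6.0]), method="Nelder-Mead")
    print(result.x)   # calibrated service rate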

4.
This paper deals with a human-assisted knowledge extraction method for extracting "if…then…" rules from a small set of machining data. The presented method utilizes both probabilistic reasoning and fuzzy logical reasoning so as to benefit from the machining data as well as from the judgment and preferences of a machinist. Using the extracted rules, one can determine the values of operational parameters (feed, cutting velocity, etc.) that ensure the desired machining performance (e.g., keeping surface roughness within a stipulated range such as "moderate"). The usefulness of the method is demonstrated by applying it in a real-life machining knowledge extraction situation and comparing it with an inductive-learning-based knowledge extraction method (i.e., ID3). As the concept of manufacturing automation shifts toward supporting humans with computers, the presented method provides valuable hints to developers of future computer-integrated manufacturing systems.
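To make the flavour of such "if…then…" rules concrete, the sketch below evaluates one hypothetical extracted rule with triangular fuzzy membership functions and picks the candidate operating point that fires it most strongly; the rule, the categories, and all numbers are invented for illustration and are not taken from the paper's data.

    def tri(x, a, b, c):
        """Triangular membership function on [a, c] with peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Hypothetical extracted rule:
    #   IF feed is "low" AND cutting velocity is "high" THEN roughness is "moderate"
    def rule_strength(feed, velocity):
        feed_low = tri(feed, 0.05, 0.10, 0.20)               # mm/rev
        velocity_high = tri(velocity, 150.0, 220.0, 300.0)    # m/min
        return min(feed_low, velocity_high)                   # AND as minimum

    # Pick the candidate operating point that fires the rule most strongly.
    candidates = [(0.08, 200.0), (0.12, 180.0), (0.18, 240.0)]
    best = max(candidates, key=lambda fv: rule_strength(*fv))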

5.
Shortest distance and reliability of probabilistic networks   (total citations: 1; self-citations: 0; citations by others: 1)
When the "length" of a link is not deterministic but governed by a stochastic process, the "shortest" path between two points in the network is not necessarily always composed of the same links and depends on the state of the network. For example, in communication and transportation networks, the travel time on a link is not deterministic, and the fastest path between two points is not fixed. This paper presents an algorithm to compute the expected shortest travel time between two nodes in a network in which the travel time on each link has a given independent discrete probability distribution. The algorithm assumes knowledge of all the paths between the two nodes; methods to determine those paths are referenced. In reliability computations (i.e., the probability that two given points are connected by a path), each link is associated with a probability of "failure" and a probability of "success". Since "failure" implies infinite travel time, the algorithm simultaneously computes reliability. The paper also discusses the algorithm's capability to simultaneously compute other performance measures that are useful in the analysis of emergency services operating on a network.
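A brute-force sketch of the computation described above: given the candidate paths and an independent discrete travel-time distribution per link (with infinity standing in for link failure), enumerate the joint link states, take the fastest path in each state, and accumulate both the expected travel time and the reliability. The network data are illustrative.

    import itertools, math

    # Discrete travel-time distribution per link: list of (time, probability).
    # math.inf models a failed link, so reliability falls out of the same pass.
    links = {
        "a": [(2.0, 0.7), (5.0, 0.2), (math.inf, 0.1)],
        "b": [(3.0, 0.9), (math.inf, 0.1)],
        "c": [(4.0, 1.0)],
    }
    # Candidate paths between the two nodes (assumed known, as in the paper).
    paths = [("a", "b"), ("c",)]

    expected_time, reliability = 0.0, 0.0
    for combo in itertools.product(*(links[l] for l in links)):
        state = dict(zip(links, combo))                # link -> (time, prob) in this network state
        prob = math.prod(p for _, p in combo)
        fastest = min(sum(state[l][0] for l in path) for path in paths)
        if math.isfinite(fastest):
            reliability += prob
            expected_time += prob * fastest

    # Expectation conditional on the two nodes being connected at all.
    expected_time /= reliability

A practical algorithm would organise this computation more efficiently, but the quantities it produces are the ones defined above.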

6.
We investigate two distinct issues related to resource allocation heuristics: robustness and failure rate. The target system consists of a number of sensors feeding a set of heterogeneous applications that execute continuously on a set of heterogeneous machines connected by high-speed heterogeneous links. Two quality of service (QoS) constraints must be satisfied: a maximum end-to-end latency and a minimum throughput. A failure occurs if no allocation is found that allows the system to meet its QoS constraints. The system is expected to operate in an uncertain environment where the workload, i.e., the load presented by the set of sensors, is likely to change unpredictably, possibly resulting in a QoS violation. The focus of this paper is the design of a static heuristic that: (a) determines a robust resource allocation, i.e., a resource allocation that maximizes the allowable increase in workload until a run-time reallocation of resources is required to avoid a QoS violation, and (b) has a very low failure rate (i.e., the percentage of problem instances on which the heuristic fails). Two such heuristics proposed in this study are a genetic algorithm and a simulated annealing heuristic. Both were "seeded" with the best solution found by a set of fast greedy heuristics.

7.
The surveillance of a manoeuvring target with multiple sensors in a coordinated manner requires a method for selecting and positioning groups of sensors in real time. Herein, the principles of dispatching, as used for the effective operation of service vehicles, are considered. The object trajectory is first discretized into a number of demand instants (data acquisition times), to which groups of sensors are assigned. Heuristic rules are used to determine the composition of each sensor group by evaluating the potential contribution of each sensor. In the case of dynamic sensors, the position of each sensor with respect to the target is also specified. Our proposed approach aims to improve the quality of the surveillance data in three ways: (1) the assigned sensors are manoeuvred into "optimal" sensing positions, (2) the uncertainty of the measured data is mitigated through sensor fusion, and (3) the poses of the unassigned sensors are adjusted to ensure that the surveillance system can react to future object manoeuvres. If a priori target trajectory information is available, system performance may be further improved by optimizing the initial pose of each sensor off-line. The advantages of dispatching dynamic sensors over comparable static-sensor systems are demonstrated through comprehensive simulations.

8.
Fingerprint matching is often affected by the presence of intrinsically low-quality fingerprints and by various distortions introduced during the acquisition process. An effective way to account for within-class variations is to capture multiple enrollment impressions of a finger. The focus of this work is on effectively combining minutiae information from multiple impressions of the same finger in order to increase coverage area, restore missing minutiae, and eliminate spurious ones. We propose a new, minutiae-based template synthesis algorithm that merges several enrollment feature sets into a "super-template". We have performed extensive experiments and comparisons that demonstrate the effectiveness of the proposed approach on a challenging public database (i.e., FVC2000 Db1) containing small-area, low-quality fingerprints.
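A much-simplified sketch of the merging idea, not the paper's algorithm: assuming the impressions have already been aligned to a common coordinate frame, minutiae that fall close together across impressions are clustered, clusters seen in enough impressions are kept (restoring missing minutiae), and one-off detections are discarded as spurious. Thresholds and names are illustrative.

    import math

    def merge_minutiae(impressions, dist_thresh=8.0, min_support=2):
        """impressions: list of minutiae lists, each minutia an (x, y, angle) tuple,
        already aligned to a common coordinate frame."""
        clusters = []   # each cluster: {"points": [...], "support": set of impression ids, "centroid": (x, y, angle)}
        for idx, minutiae in enumerate(impressions):
            for m in minutiae:
                for c in clusters:
                    cx, cy, _ = c["centroid"]
                    if math.hypot(m[0] - cx, m[1] - cy) < dist_thresh:
                        c["points"].append(m)
                        c["support"].add(idx)
                        n = len(c["points"])
                        # Naive centroid update (a real implementation would
                        # average the angle with wrap-around handling).
                        c["centroid"] = tuple(sum(p[k] for p in c["points"]) / n
                                              for k in range(3))
                        break
                else:
                    clusters.append({"points": [m], "support": {idx}, "centroid": m})
        # Keep only minutiae corroborated by at least min_support impressions.
        return [c["centroid"] for c in clusters if len(c["support"]) >= min_support]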

9.
Current development approaches for e-learning systems fail to explain in a clear and consistent way the pedagogical principles that support them. Moreover, decisions regarding the structuring of each component proposed by these approaches are taken mainly by the designer/developer. As a result, the ensuing e-learning systems reflect "common sense" rather than a theoretically informed and systematic design. The present paper proposes a global architecture model for any e-learning system whose blocks are extracted from an analysis of the main approaches that currently guide the development of these kinds of systems. We use Kipling's famous questions to define and structure the blocks of the proposed model, and we base the answers to these questions on two disciplines that are close to e-learning: presential (face-to-face) education, i.e., its pedagogical theories, and knowledge management.

10.
Efficient constrained local model fitting for non-rigid face alignment   (total citations: 1; self-citations: 1; citations by others: 0)
Active appearance models (AAMs) have demonstrated great utility when employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance but has poor real-time performance (2–3 fps). The "project-out" algorithm for fitting an AAM runs faster than real time (>200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method achieves performance superior to the "simultaneous" AAM algorithm along with real-time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also explore theoretically in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU MultiPIE database.
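The weighted least-squares combination step can be written compactly. In the sketch below (notation assumed, not the paper's), patch-expert i returns a local displacement d_i with confidence w_i, J_i is the Jacobian of landmark i's position with respect to the global warp parameters, and the global update p minimizes sum_i w_i ||J_i p - d_i||^2; because each landmark contributes its own term, the local searches can run in parallel.

    import numpy as np

    def combine_displacements(jacobians, displacements, weights):
        """Solve the weighted least-squares problem
            min_p  sum_i w_i * || J_i @ p - d_i ||^2
        for the global warp parameter update p.
        jacobians:     list of (2, n_params) arrays, one per landmark
        displacements: list of length-2 arrays (local search results)
        weights:       list of scalars (patch-expert confidences)."""
        n_params = jacobians[0].shape[1]
        A = np.zeros((n_params, n_params))
        b = np.zeros(n_params)
        for J, d, w in zip(jacobians, displacements, weights):
            A += w * J.T @ J   # normal equations accumulate independently per landmark,
            b += w * J.T @ d   # which is what makes the local searches easy to parallelize
        return np.linalg.solve(A, b)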

11.
12.
In traditional distributed power control (DPC) algorithms, every user in the system is treated in the same way, i.e., the same power control algorithm is applied to every user. In this paper, we divide the users into different groups depending on their channel conditions and apply a different DPC algorithm to each group. Our motivation comes from the fact that each DPC algorithm has its own advantages and drawbacks, and our aim in this paper is to "combine" the advantages of different DPC algorithms using soft computing techniques. In the simulation results, we choose the Foschini and Miljanic algorithm [3], which has relatively fast convergence but is not robust against time-varying link gain changes and CIR estimation errors, and the fixed-step algorithm of Kim [3], which is robust but converges slowly. By "combining" these two algorithms with soft computing techniques, the resulting algorithm has fast convergence and is robust. Acknowledgments: This work was supported in part by GETA (Finnish Academy Graduate School on Electronics, Telecommunications and Automation), Finland.
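A schematic of the "combining" idea: the two base updates below follow the standard forms found in the DPC literature, while the blending weight is a crude stand-in for the paper's soft-computing (fuzzy) rule; all names and the value of delta are illustrative.

    def fm_update(p, cir, cir_target):
        # Foschini-Miljanic style update: fast convergence, sensitive to CIR errors.
        return p * cir_target / cir

    def fixed_step_update(p, cir, cir_target, delta=1.12):
        # Fixed-step update: robust but slow (power moves by a fixed factor).
        return p * delta if cir < cir_target else p / delta

    def combined_update(p, cir, cir_target, cir_error_estimate):
        # Soft blend: trust the aggressive update when the CIR estimate looks
        # reliable, fall back to the robust one when it does not.
        w = max(0.0, min(1.0, 1.0 - cir_error_estimate))   # stand-in for a fuzzy rule
        return (w * fm_update(p, cir, cir_target)
                + (1.0 - w) * fixed_step_update(p, cir, cir_target))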

13.
This investigation focuses on the information utilization process, i.e., what it means for a decision-maker to utilize an information system. A definition of utilization is adopted which states that an information system is utilized if its output is included in the Human Information Processing system of a decision-maker. The definition is further refined by segmenting utilization into two distinct subsystems, a Human Information Processing system and a Data Selection system. As an initial investigation of the utilization process, it was decided to study specifically the relationship of the Data Selection system to the Human Information Processing system. The primary aim of this investigation was to determine whether factors internal to a decision-maker may affect the data selection process. "Cognitive style" was chosen as representative of such internal factors. Measures of data selection and cognitive style were created, with particular emphasis placed on the development of an instrument to measure cognitive style. An experiment was designed to investigate the effect of this factor on data selection. The results of this experiment indicate a strong relationship between cognitive style and data selection.

14.
The high investment cost of flexible manufacturing systems (FMS) requires their management to be effective and efficient. Effectiveness in managing an FMS includes addressing machine loading, scheduling parts, and dispatching vehicles, as well as the quality of the solution. The problem is therefore inevitably multi-criteria, and the decision maker's judgement may contribute to the quality of the solution and the system's performance. On the other hand, each of these FMS problems is hard to optimize due to its large and discrete solution space (NP-hard). The FMS manager must address each of these problems hierarchically (separately) or simultaneously (aggregately) in a limited time. The efficiency of the management is related to the response time.

Here we propose a decision support system that uses an evolutionary algorithm (EA) with a memory of "good" past experiments as its solution engine. Therefore, even in the absence of an expert decision maker, the performance of the solution engine and the quality of the solutions are maintained.

The experiences of the decision maker(s) are collected in a database (i.e., a memory-base) that contains problem characteristics, the modeling parameters of the evolutionary program, and the quality of the solution. The solution engine in the decision support system uses the information contained in the memory-base when solving the current problem. The initial population is created by a memory-based seeding algorithm that incorporates information extracted from the quality solutions available in the database. Therefore, the performance of the engine is designed to improve gradually with each use. Comparisons over a set of randomly generated test problems indicate that EAs with the proposed memory-based seeding perform well. Consequently, the proposed DSS improves not only the effectiveness (better solutions) but also the efficiency (shorter response time) of the decision maker(s).
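A schematic of the memory-based seeding step; the case-base schema, the similarity measure, and the population split are illustrative assumptions rather than the paper's design.

    import random

    def seed_population(problem, case_base, pop_size, similarity, adapt, random_solution,
                        seed_fraction=0.3):
        """Build the initial EA population: a fraction adapted from the most similar
        past cases in the memory-base, the rest generated randomly."""
        ranked = sorted(case_base, key=lambda case: similarity(problem, case), reverse=True)
        n_seeded = min(int(seed_fraction * pop_size), len(ranked))
        seeded = [adapt(case["solution"], problem) for case in ranked[:n_seeded]]
        randoms = [random_solution(problem) for _ in range(pop_size - len(seeded))]
        population = seeded + randoms
        random.shuffle(population)
        return population

As the memory-base grows with each solved problem, the seeded fraction starts closer to good regions of the search space, which is what allows the response time to shrink over time.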


15.
The Rule-Based (RB) and Artificial Neural Network (ANN) approaches to expert system development have each demonstrated specific advantages and disadvantages. These two approaches can be integrated to exploit the advantages and minimize the disadvantages of each method used alone. An RB/ANN integrated approach is proposed to facilitate the development of an expert system that provides a "high-performance" knowledge-based network, an explanation facility, and an input/output facility. In this case study, an expert system designed to assist managers in forecasting the performance of stock prices is developed to demonstrate the advantages of this integrated approach and how it can enhance support for managerial decision making.

16.
In this paper, we propose a Markov chain-based analytical framework for modeling the behavior of the medium access control (MAC) protocol in IEEE 802.15.4 wireless networks. Two scenarios are of interest. First, we consider networks where the (sensor) nodes communicate directly with the network coordinator (the final sink). Then, we consider cluster-tree (CT) scenarios where the sources communicate with the coordinator through a series of intermediate relays, which forward the received packets and do not generate traffic of their own. In both scenarios, no acknowledgment messages are used to confirm successful data packet deliveries, and communications are beacon-enabled (i.e., they rely on synchronization packets denoted as "beacons"). In all cases, our focus is on networks where the sources and the relays have finite queues (denoted as buffers) to store data packets. The network performance is evaluated in terms of aggregate network throughput and packet delivery delay. The performance predicted by the proposed analytical framework is in very good agreement with realistic ns-2 simulation results.
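Whatever the specific chain, the performance figures fall out of its stationary distribution. The generic sketch below solves pi = pi P for a toy three-state slot-level chain and reads the aggregate throughput off the visit rate of the "successful transmission" state; the chain, its transition probabilities, and the slot/payload figures are purely illustrative and are not the chain constructed in the paper.

    import numpy as np

    # Toy slot-level chain: state 0 = idle/backoff, 1 = successful transmission,
    # 2 = collision or buffer overflow.  Rows sum to 1.
    P = np.array([[0.70, 0.25, 0.05],
                  [0.60, 0.30, 0.10],
                  [0.80, 0.15, 0.05]])

    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    pi /= pi.sum()

    payload_bits, slot_seconds = 800, 0.00096   # illustrative 802.15.4-like figures
    throughput_bps = pi[1] * payload_bits / slot_seconds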

17.
Clinical decision support systems (CDSS) and their logic syntaxes include the coding of notifications (e.g., Arden Syntax). This paper describes the rationale for segregating policies, user preferences, and clinical monitoring rules into "advanced notification" and "clinical" components, which together form a novel and complex CDSS. Notification rules and hospital policies are abstracted from care-provider roles and alerting mechanisms, respectively. User-defined preferences determine which devices are to be used for receiving notifications. Our design differs from previous notification systems in that it integrates a versatile notification platform supporting a wide range of mobile devices with an XML/HL7-compliant communication protocol.

18.
In this paper, we address a fundamental problem related to the induction of Boolean logic: Given a set of data, represented as a set of binary "true n-vectors" (or "positive examples") and a set of "false n-vectors" (or "negative examples"), we establish a Boolean function (or an extension) f, so that f is true (resp., false) in every given true (resp., false) vector. We shall further require that such an extension belongs to a certain specified class of functions, e.g., the class of positive functions, the class of Horn functions, and so on. The class of functions represents our a priori knowledge or hypothesis about the extension f, which may be obtained from experience or from the analysis of mechanisms that may or may not cause the phenomena under consideration. The real-world data may contain errors, e.g., measurement and classification errors might come in when obtaining data, or there may be some other influential factors not represented as variables in the vectors. In such situations, we have to give up the goal of establishing an extension that is perfectly consistent with the given data, and we are satisfied with an extension f having the minimum number of misclassifications. Both problems, i.e., the problem of finding an extension within a specified class of Boolean functions and the problem of finding a minimum error extension in that class, will be extensively studied in this paper. For certain classes we shall provide polynomial algorithms, and for other cases we prove their NP-hardness.
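For the class of positive (monotone) functions mentioned above, the existence question has a particularly simple answer: an extension exists exactly when no false vector componentwise dominates a true vector, in which case "x is true iff it dominates some given true vector" is a witness extension. A sketch of that check follows (0/1 tuples, illustrative names); the minimum-error variant requires more than this simple check.

    def leq(u, v):
        """Componentwise u <= v for 0/1 tuples."""
        return all(a <= b for a, b in zip(u, v))

    def positive_extension(true_vectors, false_vectors):
        """Return a positive (monotone) Boolean extension consistent with the data,
        or None if no such extension exists."""
        for t in true_vectors:
            for f in false_vectors:
                if leq(t, f):
                    return None          # a false vector dominates a true vector
        # Witness extension: x is true iff it dominates some given true vector.
        return lambda x: any(leq(t, x) for t in true_vectors)

    # Example: T = {(1,0,1)}, F = {(0,1,1)} admits a positive extension;
    # adding (1,1,1) to F would rule one out.
    f = positive_extension([(1, 0, 1)], [(0, 1, 1)])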

19.
This paper investigates the field of manufacturing system control. The subject is a fascinating one, given the importance it has gained in recent decades at both the research and industrial levels. At the same time, it seems to the author that much of the complexity intrinsic to the subject lies in the different meanings, or levels of abstraction, that the terms "manufacturing system" and "control" may carry. The presented research aims to approach the topic in a concrete fashion, i.e., by developing a control software system for a specific, though easily generalized, robotized manufacturing cell. Two different development methodologies for a cell control system, from conceptual design to actual implementation, are presented and compared: the former, based on ladder logic diagrams, for a PLC-controlled manufacturing cell; the latter, based on object-oriented modeling and programming techniques, for a PC-controlled manufacturing cell. The analysis was conducted considering the internal and external requirements of the manufacturing system, mostly driven by the contemporary industrial need for reconfigurable control systems, widely acknowledged as the critical key to success in the new era of mass customization.

20.
Information retention from PowerPoint and traditional lectures   (total citations: 1; self-citations: 0; citations by others: 1)
The benefit of PowerPoint is continuously debated, but both supporters and detractors have insufficient empirical evidence. Its use in university lectures has prompted investigations of PowerPoint's effects on student performance (e.g., overall quiz/exam scores) in comparison with lectures based on overhead projectors, traditional lectures (e.g., "chalk-and-talk"), and online lectures. Thus far, comparisons of overall exam scores have yielded mixed results. The present study decomposes overall quiz scores into auditory, graphic, and alphanumeric scores to reveal new insights into the effects of PowerPoint presentations on student performance. The analyses considered retention of lecture information presented to students without PowerPoint (i.e., a traditional lecture), auditory information presented alongside PowerPoint, and visual (i.e., graphic and alphanumeric) information displayed on PowerPoint slides. Data were collected from 62 students via a quiz and a questionnaire. Students retained 15% less information delivered verbally by the lecturer during PowerPoint presentations, yet they preferred PowerPoint presentations over traditional presentations.

