Sorted by relevance: 17 results found (search time: 15 ms)
1.
In this paper we follow previous “pseudo-stochastic” approaches that solve stochastic control problems by using deterministic optimal control methods. In a similar manner to the certainty equivalence principle, the suggested model maximizes a given profit function of the expected system outcome. However, unlike the certainty equivalence principle, we model the expected influences of all future events (including those that are expected beyond the planning horizon), as encapsulated by their density functions and not only by their mean values. The model is applied to the optimal scheduling of multiple part-types on a single machine that is subject to random failures and repairs. The objective of the scheduler is to maximize the profit function of the produced multiple-part mix. A numerical study is performed to evaluate the suggested pseudo-stochastic solutions under various conditions. These solutions are compared to a profit upper bound of the stochastic optimal control solutions.
2.
Ben-Gal, Irad; Bukchin, Joseph. IIE Transactions, 2002, 34(4): 375-391.
The increasing use of computerized tools for virtual manufacturing in workstation design has two main advantages over traditional methods: first, it enables the designer to examine a large number of design solutions; and second, simulation of the work task may be performed in order to obtain the values of various performance measures. In this paper a new structural methodology for workstation design is presented. Factorial experiments and the response surface methodology are integrated in order to reduce the number of examined design solutions and to obtain an estimate for the best design configuration with respect to multi-objective requirements.
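The paper's design problem involves multiple factors and objectives, but the response-surface step can be sketched on a single illustrative factor. The quadratic model, the noise values, and the optimum near 0.3 below are invented for the example:

```python
import numpy as np

def fit_quadratic_surface(x, y):
    """Least-squares fit of y ≈ b0 + b1*x + b2*x^2 (one-factor response surface)."""
    A = np.column_stack([np.ones_like(x), x, x ** 2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Simulated performance measure from a replicated 3-level factorial design
x = np.array([-1.0, 0.0, 1.0, -1.0, 0.0, 1.0])
y = 5 - (x - 0.3) ** 2 + np.array([0.05, -0.02, 0.01, -0.04, 0.03, -0.01])

b0, b1, b2 = fit_quadratic_surface(x, y)
x_star = -b1 / (2 * b2)      # stationary point of the fitted surface
print(round(x_star, 2))      # close to the simulated optimum at 0.3
```

In the full methodology the fitted surface is used to pick the next design configurations to examine, instead of simulating every candidate.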
3.
In this paper, an information theoretic approach is applied to analyze the performance of a decentralized control system. The control system plays the role of a correcting device which decreases the uncertainties associated with state variables of a production line by applying an appropriate “correcting signal” for each deviation from the target. In particular, a distributed feedback control policy is considered to govern a transfer production line, which consists of machines and buffers and processes a single part type in response to a stochastic demand. It is shown how the uncertainty of the demand propagates dynamically into the production system, causing uncertainties associated with buffer levels and machine production rates. The paper proposes upper estimates for these uncertainties as functions of the demand variance, parameters of the distributed controllers and some physical properties of the production line. The bounds are based on dynamic entropy measures of the system state and the control variables. Some practical implications for decentralized controller design are proposed, an information-economical analysis is presented and a numerical study is performed.
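The entropy measures underlying such bounds can be illustrated with a toy computation; the two distributions below are invented for the example, with the tighter one standing in for a buffer-level distribution concentrated by feedback control:

```python
import math

def entropy(p):
    """Shannon entropy H(p) in bits of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

demand = [0.25, 0.25, 0.25, 0.25]   # four equally likely demand levels
buffer = [0.7, 0.2, 0.1]            # a feedback-concentrated buffer state

print(entropy(demand))  # 2.0 bits: maximal uncertainty over 4 levels
print(entropy(buffer))  # about 1.16 bits: control has reduced uncertainty
```

The paper's bounds relate such state entropies dynamically to the demand variance and the controller parameters; this sketch only shows the static measure being bounded.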
4.
Conventional Statistical Process Control (SPC) schemes fail to monitor nonlinear and finite-state processes that often result from feedback-controlled processes. SPC methods that are designed to monitor autocorrelated processes usually assume a known model (often an ARIMA) that might poorly describe the real process. In this paper, we present a novel SPC methodology based on context modeling of finite-state processes. The method utilizes a series of context-tree models to estimate the conditional distribution of the process output given the context of previous observations. The Kullback-Leibler divergence statistic is derived to indicate significant changes in the trees along the process. The method is implemented in a simulated flexible manufacturing system in order to detect significant changes in its production mix ratio output.
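The context-tree machinery is not reproduced here, but the Kullback-Leibler divergence on which the monitoring statistic relies can be sketched directly; the two symbol distributions below are invented, standing in for an in-control reference tree node and a monitored one:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) for discrete distributions.
    A small eps avoids log(0); both inputs are renormalized."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

p_ref = [0.5, 0.3, 0.2]   # reference (in-control) distribution over 3 states
p_mon = [0.2, 0.3, 0.5]   # monitored distribution after a shift

print(kl_divergence(p_ref, p_ref))  # 0: no change signaled
print(kl_divergence(p_ref, p_mon))  # positive: a shift in the process
```

In the method itself, this divergence is computed between whole context trees and compared against a control limit to decide whether the process has changed.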
5.
In certain types of processes, verification of the quality of the output units is possible only after the entire batch has been processed. We develop a model that prescribes which units should be inspected and how the units that were not inspected should be disposed of, in order to minimize the expected sum of inspection costs and disposition error costs, for processes that are subject to random failure and recovery. The model is based on a dynamic programming algorithm that has a low computational complexity. The study also includes a sensitivity analysis under a variety of cost and probability scenarios, supplemented by an analysis of the smallest batch that requires inspection, the expected number of inspections, and the performance of an easy-to-implement heuristic.
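The paper's dynamic program handles random failure and recovery; as a much-simplified illustration only, if one assumes the process fails at most once and never recovers, the first defective unit can be located with a logarithmic number of inspections. The batch below is invented for the example:

```python
def first_bad_unit(units):
    """Binary search for the first defective unit, assuming units are good
    up to an unknown failure point and defective afterwards (a simplified
    stand-in; the paper's model also allows recovery)."""
    lo, hi, cost = 0, len(units), 0
    while lo < hi:
        mid = (lo + hi) // 2
        cost += 1              # one inspection performed
        if units[mid]:         # True = unit is good
            lo = mid + 1
        else:
            hi = mid
    return lo, cost            # index of first bad unit, inspections used

batch = [True] * 7 + [False] * 5   # process failed after the 7th unit
print(first_bad_unit(batch))       # (7, 4)
```

With recovery allowed, good and bad runs can alternate, which is why the paper needs a dynamic program over posterior state probabilities rather than a simple search.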
6.
Universal compression algorithms can detect recurring patterns in any type of temporal data, including financial data, for the purpose of compression. The universal algorithms actually find a model of the data that can be used for either compression or prediction. We present a universal Variable Order Markov (VOM) model and use it to test the weak form of the Efficient Market Hypothesis (EMH). The EMH is tested for 12 pairs of international intra-day currency exchange rates, using one-year series sampled at 1, 5, 10, 15, 20, 25 and 30 minute intervals. Statistically significant compression is detected in all the time series, and the high-frequency series are also predictable above random. However, the predictability of the model is not sufficient to generate a profitable trading strategy; thus, the Forex market turns out to be efficient, at least most of the time.
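The VOM model itself is not reproduced here, but the underlying idea, that compressibility reveals recurring structure, can be sketched with a general-purpose compressor. Here zlib stands in for the universal algorithm, and the two sequences are toy stand-ins for discretized exchange-rate changes:

```python
import random
import zlib

def compression_ratio(symbols):
    """Ratio of compressed to raw length; well below 1 suggests
    recurring (hence predictable) structure in the sequence."""
    raw = bytes(symbols)
    return len(zlib.compress(raw, 9)) / len(raw)

periodic = [1, 2, 3, 4] * 250                       # strongly patterned series
random.seed(0)
noisy = [random.randrange(256) for _ in range(1000)]  # incompressible series

print(compression_ratio(periodic))  # far below 1: structure detected
print(compression_ratio(noisy))     # near (or slightly above) 1: no structure
```

A statistically significant ratio below 1 on market data plays the role of the paper's compression test, though the VOM model additionally yields explicit next-symbol predictions.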
7.
This work evaluates the importance of approximate Fourier phase information in the phase retrieval problem. The main discovery is that a rough phase estimate (up to π/2 rad) allows development of very efficient algorithms whose reconstruction time is an order of magnitude faster than that of the current method of choice, the hybrid input-output (HIO) algorithm. Moreover, a heuristic explanation is provided of why continuous optimization methods like gradient descent or Newton-type algorithms fail when applied to the phase retrieval problem and how the approximate phase information can remedy this situation. Numerical simulations are presented to demonstrate the validity of our analysis and success of our reconstruction method even in cases where the HIO algorithm fails, namely, complex-valued signals without tight support information.
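The reconstruction algorithm is not reproduced here, but the value of a rough phase estimate can be demonstrated directly: restoring a signal from exact Fourier magnitudes with mildly corrupted phases recovers it far better than with random phases. The signal and the corruption bound (±π/8, well inside π/2) are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)          # toy real-valued signal
X = np.fft.fft(x)
mag, phase = np.abs(X), np.angle(X)

def reconstruct(phase_estimate):
    """Rebuild the signal from exact magnitudes and an estimated phase."""
    return np.real(np.fft.ifft(mag * np.exp(1j * phase_estimate)))

def rel_error(x_hat):
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)

rough = phase + rng.uniform(-np.pi / 8, np.pi / 8, size=phase.shape)
blind = rng.uniform(-np.pi, np.pi, size=phase.shape)

print(rel_error(reconstruct(rough)))  # small: rough phase nearly suffices
print(rel_error(reconstruct(blind)))  # large: magnitudes alone do not
```

This is only the starting point; the paper's contribution is using such a rough estimate to drive fast iterative refinement where HIO or plain gradient methods stall.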
8.
Two approaches to the multigrid solution of the quasi-geostrophic equations, a fundamental nonlinear system of partial differential equations that models large-scale planetary flows, are investigated. One approach employs standard coarsening with pointwise SOR relaxation; the other uses line relaxation with partial coarsening. The latter solver is implemented in turbulent-flow simulations on the CRAY C-90 supercomputer. This solver is robust with respect to anisotropy of the operator due to stratification, and it efficiently exploits the vectorization and parallelization capabilities of the machine. The approach taken is applicable to more complex related systems.
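Pointwise SOR as a relaxation step can be sketched on the model Poisson problem rather than the quasi-geostrophic system itself; grid size and relaxation parameter below are illustrative, and a real multigrid cycle would use only a few sweeps per level rather than iterating to convergence:

```python
import numpy as np

def sor_poisson(f, u, omega=1.7, sweeps=100):
    """Pointwise SOR sweeps for the 2-D Poisson equation -lap(u) = f on the
    unit square with zero Dirichlet boundary values (toy smoother sketch)."""
    n = u.shape[0]
    h2 = 1.0 / (n - 1) ** 2           # grid spacing squared
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Gauss-Seidel value, then over-relax toward it
                gs = 0.25 * (u[i - 1, j] + u[i + 1, j]
                             + u[i, j - 1] + u[i, j + 1] + h2 * f[i, j])
                u[i, j] += omega * (gs - u[i, j])
    return u

n = 17
u = sor_poisson(np.ones((n, n)), np.zeros((n, n)))
print(u[n // 2, n // 2])   # center value, near 0.074 for this problem
```

Line relaxation, the paper's second ingredient, would instead solve a whole grid line at once, which is what makes the solver robust to the stratification-induced anisotropy.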
9.
Ben-Gal, Irad; Caramanis, Michael. IIE Transactions, 2002, 34(12): 1087-1100.
The paper considers a sequential Design Of Experiments (DOE) scheme. Our objective is to maximize both information and economic measures over a feasible set of experiments. Optimal DOE strategies are developed by introducing information criteria based on measures adopted from information theory. The evolution of acquired information along various stages of experimentation is analyzed for linear models with a Gaussian noise term. We show that for particular cases, although the amount of information is unbounded, the desired rate of acquiring information decreases with the number of experiments. This observation implies that at a certain point in time it is no longer efficient to continue experimenting. Accordingly, we investigate methods of stochastic dynamic programming under imperfect state information as appropriate means to obtain optimal experimentation policies. We propose cost-to-go functions that model the trade-off between the cost of additional experiments and the benefit of incremental information. We formulate a general stochastic dynamic programming framework for design of experiments and illustrate it by analytic and numerical implementation examples.
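The diminishing rate of information acquisition can be illustrated with a D-optimality-style proxy: the increment of log det(XᵀX) contributed by each new experiment shrinks as experiments accumulate. The linear-Gaussian setting matches the paper; the random designs below are illustrative:

```python
import numpy as np

def information_gain(X_prior, x_new):
    """Increment in log det(X^T X) from adding one experiment row x_new,
    a D-optimality proxy for the information carried by that experiment."""
    M0 = X_prior.T @ X_prior
    M1 = M0 + np.outer(x_new, x_new)
    return np.linalg.slogdet(M1)[1] - np.linalg.slogdet(M0)[1]

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))     # 5 initial experiments, 2 model terms
gains = []
for _ in range(20):
    x = rng.standard_normal(2)      # candidate next experiment
    gains.append(information_gain(X, x))
    X = np.vstack([X, x])

print(gains[0], gains[-1])  # early vs. late marginal information gain
```

Each increment equals log(1 + xᵀ(XᵀX)⁻¹x), which tends to zero as the design grows; weighing this shrinking benefit against a fixed experiment cost is exactly the stopping trade-off the paper's cost-to-go functions formalize.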
10.
The classical funnel experiment was used by Deming to promote the idea of statistical process control (SPC). The popular example illustrates that the implementation of simple feedback rules to stationary processes violates the independence assumption and prevents the implementation of conventional SPC. However, Deming did not indicate how to implement SPC in the presence of such feedback rules. This pedagogical gap is addressed here by introducing a simple feedback rule to the funnel example that results in a nonlinear process to which the traditional SPC methods cannot be applied. The proposed method of Markov-based SPC, which is a simplified version of the context-based SPC method, is shown to monitor the modified process well. Copyright © 2007 John Wiley & Sons, Ltd.
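The funnel experiment with an over-adjusting feedback rule is easy to simulate. The sketch below reproduces the classic result for a rule akin to Deming's Rule 2 (compensate for every deviation), which doubles the variance and correlates successive drops; the paper's own rule is nonlinear and differs from this one, and the parameters here are illustrative:

```python
import random

def funnel(rule, n=10000, sigma=1.0, seed=1):
    """Drop marbles through a funnel aimed at target 0; each drop lands at
    the aim point plus noise.  'rule1': never adjust the aim.
    'rule2': move the aim to cancel the last drop's deviation."""
    random.seed(seed)
    aim, hits = 0.0, []
    for _ in range(n):
        hit = aim + random.gauss(0.0, sigma)
        hits.append(hit)
        if rule == "rule2":
            aim -= hit        # over-adjustment: compensate the full deviation
    return hits

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(variance(funnel("rule1")))  # near 1: the process left alone
print(variance(funnel("rule2")))  # near 2: over-adjustment doubles variance
```

Under Rule 2 each hit equals the current noise minus the previous one, so consecutive hits are negatively correlated, which is precisely the violation of independence that defeats conventional SPC charts on such processes.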