1.
In this paper, we suggest a potential use of data compression measures, such as entropy and Huffman coding, to assess the effects of noise factors on the reliability of tested systems. In particular, we extend the Taguchi method for robust design by computing the entropy of the percent-contribution values of the noise factors. The new measures are computed as early as the parameter-design stage and, together with the traditional S/N ratios, enable the specification of a robust design. Assuming that (some of) the noise factors should be neutralized, the entropy of a design reflects the effort that will be required in the tolerance-design stage to reach a more reliable system. Using a small example, we illustrate how the new measure might alter the designer's decision relative to the traditional Taguchi method and ultimately yield a system with a lower quality loss. Assuming that the percent-contribution values reflect the probability that a noise factor triggers a disturbance in the system response, a series of probabilistic algorithms can be applied to the robust design problem. We focus on the Huffman coding algorithm and show how to implement it so that the designer obtains the minimal expected number of tests needed to find the disturbing noise factor. The entropy measure, in this case, provides a lower bound on the algorithm's performance.
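A minimal sketch of the two measures discussed above, under assumed percent-contribution values for four hypothetical noise factors: the entropy of the contributions gives the lower bound on the expected number of tests, and a Huffman code built over the same values gives the expected number of sequential tests needed to isolate the disturbing factor.

    import heapq
    import math

    # Hypothetical percent contributions of four noise factors (must sum to 1).
    p = {"N1": 0.50, "N2": 0.25, "N3": 0.15, "N4": 0.10}

    # Entropy of the contribution values (bits): lower bound on the expected number of tests.
    entropy = -sum(q * math.log2(q) for q in p.values())

    # Huffman coding over the same values: the code length of a factor equals the
    # number of sequential tests needed to isolate it in the resulting testing tree.
    heap = [(q, [name]) for name, q in p.items()]
    heapq.heapify(heap)
    depth = {name: 0 for name in p}
    while len(heap) > 1:
        q1, names1 = heapq.heappop(heap)
        q2, names2 = heapq.heappop(heap)
        for name in names1 + names2:
            depth[name] += 1                      # one more test on the path to this factor
        heapq.heappush(heap, (q1 + q2, names1 + names2))

    expected_tests = sum(p[name] * depth[name] for name in p)
    print(f"entropy = {entropy:.3f} bits, expected number of tests = {expected_tests:.3f}")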
2.
BACKGROUND: Lead III ST-segment depression during acute anterior wall myocardial infarction (AMI) has been attributed to reciprocal changes. However, the value of the T-wave direction (positive or negative) in predicting the site of obstruction and the type of the left anterior descending (LAD) artery is not clear and has not been studied before. HYPOTHESIS: The aim of the study was to assess retrospectively the correlation between two patterns of lead III ST-segment depression and the type of LAD artery and its level of obstruction during first AMI. METHODS: The study group consisted of 48 consecutive patients, admitted to the coronary care unit for first AMI, who showed ST-segment elevation in lead aVL and ST-segment depression in lead III on the admission 12-lead electrocardiogram. The patients were divided by T-wave direction into Group 1 (n = 31), negative T wave, and Group 2 (n = 17), positive T wave. The coronary angiogram was evaluated for the type of LAD ("wrapped", i.e., surrounding the apex, or not), the site of obstruction (pre- or postdiagonal branch), and other significant coronary artery obstructions. RESULTS: Mean lead III ST-segment depression was 1.99 +/- 1.32 mm in Group 1 and 1.13 +/- 0.74 mm in Group 2 (p = 0.004); mean ST-segment elevation in aVL was 1.35 +/- 0.84 mm and 1.23 +/- 0.5 mm, respectively (p = 0.5). A wrapped LAD was found in 12 patients (38.7%) in Group 1 and in 13 patients (76.4%) in Group 2 (p = 0.02). The sensitivity of lead III ST-segment depression with a positive T wave for predicting a wrapped LAD was 52%, the specificity was 82%, and the positive predictive value was 76%. On angiography, 25 patients (80%) in Group 1 and 13 (76%) in Group 2 had prediagonal occlusion of the LAD (p = 0.77). No significant difference between groups was found for right and circumflex coronary artery involvement or the incidence of multivessel disease. CONCLUSIONS: The presence of lead III ST-segment depression with a positive T wave, associated with ST-segment elevation in aVL in the early course of AMI, can serve as an early electrocardiographic marker of prediagonal occlusion of a "wrapped" LAD.
3.
In this paper we follow previous “pseudo-stochastic” approaches that solve stochastic control problems by using deterministic optimal control methods. In a similar manner to the certainty equivalence principle, the suggested model maximizes a given profit function of the expected system outcome. However, unlike the certainty equivalence principle, we model the expected influences of all future events (including those that are expected beyond the planning horizon), as encapsulated by their density functions and not only by their mean values. The model is applied to the optimal scheduling of multiple part-types on a single machine that is subject to random failures and repairs. The objective of the scheduler is to maximize the profit function of the produced multiple-part mix. A numerical study is performed to evaluate the suggested pseudo-stochastic solutions under various conditions. These solutions are compared to a profit upper bound of the stochastic optimal control solutions.
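The distinction from the certainty equivalence principle can be illustrated with a toy calculation (not the paper's model): carrying the full density of a random influence, here a hypothetical machine-availability distribution, gives a different value to optimize than plugging in its mean alone.

    import numpy as np

    rng = np.random.default_rng(0)

    def profit(availability):
        # Illustrative concave profit of the produced part mix (not the paper's function).
        return 100.0 * np.sqrt(availability)

    # Machine availability over the horizon, encapsulated by a Beta density.
    samples = rng.beta(2.0, 1.0, size=100_000)

    certainty_equivalent = profit(samples.mean())    # value obtained from the mean alone
    density_based = profit(samples).mean()           # expectation taken over the density

    print(f"profit at the mean availability : {certainty_equivalent:.2f}")
    print(f"expected profit over the density: {density_based:.2f}")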
4.
Ben-Gal, Irad; Bukchin, Joseph. IIE Transactions, 2002, 34(4): 375-391
The increasing use of computerized tools for virtual manufacturing in workstation design has two main advantages over traditional methods: first, it enables the designer to examine a large number of design solutions; and second, simulation of the work task may be performed in order to obtain the values of various performance measures. In this paper a new structural methodology for workstation design is presented. Factorial experiments and the response surface methodology are integrated in order to reduce the number of examined design solutions and obtain an estimate for the best design configuration with respect to multi-objective requirements.
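A hedged sketch of the integration described above, using hypothetical data: responses of a two-factor, three-level factorial design are fitted with a second-order response surface, which is then searched for the best configuration. The factor coding, response values and single objective are illustrative assumptions, not the paper's case study.

    import numpy as np

    # Hypothetical factorial data: two design factors coded to [-1, 1]
    # (e.g., work-surface height and reach distance) and a performance score.
    levels = [-1.0, 0.0, 1.0]
    X = np.array([(a, b) for a in levels for b in levels])
    y = np.array([62, 70, 65, 71, 82, 74, 66, 73, 64], dtype=float)

    # Second-order response surface:
    # y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
    def design_matrix(points):
        x1, x2 = points[:, 0], points[:, 1]
        return np.column_stack([np.ones(len(points)), x1, x2, x1**2, x2**2, x1 * x2])

    beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

    # Search the fitted surface on a fine grid for the estimated best configuration.
    grid = np.array([(a, b) for a in np.linspace(-1, 1, 41) for b in np.linspace(-1, 1, 41)])
    predicted = design_matrix(grid) @ beta
    best = grid[np.argmax(predicted)]
    print(f"estimated best configuration (coded units): {best}, predicted score {predicted.max():.1f}")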
5.
6.
In this paper, an information theoretic approach is applied to analyze the performance of a decentralized control system. The control system plays the role of a correcting device which decreases the uncertainties associated with state variables of a production line by applying an appropriate “correcting signal” for each deviation from the target. In particular, a distributed feedback control policy is considered to govern a transfer production line, which consists of machines and buffers and processes a single part type in response to a stochastic demand. It is shown how the uncertainty of the demand propagates dynamically into the production system, causing uncertainties associated with buffer levels and machine production rates. The paper proposes upper estimates for these uncertainties as functions of the demand variance, parameters of the distributed controllers and some physical properties of the production line. The bounds are based on dynamic entropy measures of the system state and the control variables. Some practical implications for the area of decentralized controller design are proposed, an information-economical analysis is presented and a numerical study is performed.
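A rough illustration of how demand uncertainty propagates into buffer-level uncertainty, using a toy single-buffer line with a simple surplus-based feedback controller; the Poisson demand, capacity and target values are assumptions, and the empirical entropies stand in for the paper's dynamic entropy measures.

    import numpy as np
    from collections import Counter

    rng = np.random.default_rng(1)

    def empirical_entropy(values):
        # Plug-in entropy estimate (bits) of a discrete time series.
        counts, n = Counter(values), len(values)
        return -sum(c / n * np.log2(c / n) for c in counts.values())

    T, capacity, target = 50_000, 5, 10
    demand = rng.poisson(3.0, size=T)        # stochastic demand per period
    buffer_level, buffer_trace = 0, []
    for d in demand:
        # Surplus-based feedback controller: produce the shortfall below target,
        # limited by the machine capacity; the buffer may go negative (backlog).
        production = int(np.clip(target - buffer_level + d, 0, capacity))
        buffer_level += production - d
        buffer_trace.append(buffer_level)

    print(f"H(demand)       = {empirical_entropy(list(demand)):.2f} bits")
    print(f"H(buffer level) = {empirical_entropy(buffer_trace):.2f} bits")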
7.
Conventional Statistical Process Control (SPC) schemes fail to monitor nonlinear and finite-state processes that often result from feedback-controlled processes. SPC methods that are designed to monitor autocorrelated processes usually assume a known model (often an ARIMA) that might poorly describe the real process. In this paper, we present a novel SPC methodology based on context modeling of finite-state processes. The method utilizes a series of context-tree models to estimate the conditional distribution of the process output given the context of previous observations. The Kullback-Leibler divergence statistic is derived to indicate significant changes in the trees along the process. The method is implemented in a simulated flexible manufacturing system in order to detect significant changes in its production mix ratio output.
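A simplified sketch of the monitoring idea, using fixed-depth contexts rather than the paper's full context-tree construction: conditional next-symbol distributions are estimated from an in-control window, and the Kullback-Leibler divergence of a monitored window against them serves as the change statistic. The alphabet, mix probabilities and window sizes are assumptions.

    import numpy as np
    from collections import defaultdict, Counter

    def conditional_counts(seq, depth=1):
        # Count next-symbol occurrences for every context of the given depth.
        table = defaultdict(Counter)
        for i in range(depth, len(seq)):
            table[tuple(seq[i - depth:i])][seq[i]] += 1
        return table

    def kl_statistic(ref, mon, alphabet, eps=1e-6):
        # Context-frequency-weighted KL divergence (bits) between the monitored
        # and the reference conditional distributions.
        total = 0.0
        n_mon = sum(sum(c.values()) for c in mon.values())
        for ctx, counts in mon.items():
            weight = sum(counts.values()) / n_mon
            ref_counts = ref.get(ctx, Counter())
            p = np.array([counts[a] + eps for a in alphabet], dtype=float)
            q = np.array([ref_counts[a] + eps for a in alphabet], dtype=float)
            p, q = p / p.sum(), q / q.sum()
            total += weight * float(np.sum(p * np.log2(p / q)))
        return total

    rng = np.random.default_rng(2)
    alphabet = [0, 1, 2]                                     # e.g., part types in the mix
    reference = conditional_counts(list(rng.choice(alphabet, 5000, p=[0.5, 0.3, 0.2])))
    in_control = conditional_counts(list(rng.choice(alphabet, 1000, p=[0.5, 0.3, 0.2])))
    shifted = conditional_counts(list(rng.choice(alphabet, 1000, p=[0.3, 0.3, 0.4])))

    print(f"in-control window: {kl_statistic(reference, in_control, alphabet):.4f}")
    print(f"shifted window   : {kl_statistic(reference, shifted, alphabet):.4f}")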
8.
In certain types of processes, verification of the quality of the output units is possible only after the entire batch has been processed. We develop a model that prescribes which units should be inspected and how the units that were not inspected should be disposed of, in order to minimize the expected sum of inspection costs and disposition error costs, for processes that are subject to random failure and recovery. The model is based on a dynamic programming algorithm that has a low computational complexity. The study also includes a sensitivity analysis under a variety of cost and probability scenarios, supplemented by an analysis of the smallest batch that requires inspection, the expected number of inspections, and the performance of an easy-to-implement heuristic.
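A toy dynamic program in the same spirit, under strong simplifying assumptions that are not the paper's model: a single geometrically distributed failure point, equal costs for both disposition-error types, and a perfect inspection that reveals whether the process had already failed at the inspected unit.

    from functools import lru_cache

    n, q = 10, 0.15            # batch size; per-unit failure probability (assumed)
    c_I, c_e = 1.0, 4.0        # cost of one inspection; cost of one disposition error (assumed)

    # Prior of the failure point F: units j >= F are nonconforming; F = n + 1 means no failure.
    prior = [0.0] + [q * (1 - q) ** (j - 1) for j in range(1, n + 1)] + [(1 - q) ** n]

    def mass(lo, hi):
        # P(lo < F <= hi)
        return sum(prior[lo + 1:hi + 1])

    @lru_cache(maxsize=None)
    def V(a, b):
        # Minimal expected cost when units <= a are known good and F lies in (a, b].
        uncertain = range(a + 1, b)
        total = mass(a, b)
        if total == 0.0 or not uncertain:
            return 0.0
        # Option 1: stop, and disposition every uncertain unit optimally (accept or scrap).
        stop = sum(c_e * min(mass(a, j), mass(j, b)) / total for j in uncertain)
        # Option 2: inspect some unit k and continue recursively.
        best = stop
        for k in uncertain:
            p_good = mass(k, b) / total                  # probability unit k is conforming
            best = min(best, c_I + p_good * V(k, b) + (1.0 - p_good) * V(a, k))
        return best

    print(f"minimal expected inspection + disposition cost: {V(0, n + 1):.3f}")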
9.
Universal compression algorithms can detect recurring patterns in any type of temporal data, including financial data, for the purpose of compression. The universal algorithms actually find a model of the data that can be used for either compression or prediction. We present a universal Variable Order Markov (VOM) model and use it to test the weak form of the Efficient Market Hypothesis (EMH). The EMH is tested on 12 pairs of international intra-day currency exchange rates, using one-year series sampled at 1, 5, 10, 15, 20, 25 and 30 minute intervals. Statistically significant compression is detected in all the time series, and the high-frequency series are also predictable above the random level. However, the predictability of the model is not sufficient to generate a profitable trading strategy; thus, the Forex market turns out to be efficient, at least most of the time.
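A simplified stand-in for the compression test, assuming synthetic binarized series rather than real exchange-rate data: a sequential fixed-order Markov predictor (a special case of a VOM model) assigns code lengths to each symbol, and an average below 1 bit/symbol indicates compressibility, hence predictability above random.

    import numpy as np
    from collections import defaultdict, Counter

    def code_length_bits(seq, order=3, alphabet=(0, 1)):
        # Average code length (bits/symbol) of a sequential, add-one-smoothed
        # Markov predictor of the given order -- a simplified stand-in for a VOM model.
        counts = defaultdict(Counter)
        total_bits = 0.0
        for i, symbol in enumerate(seq):
            context = tuple(seq[max(0, i - order):i])
            c = counts[context]
            prob = (c[symbol] + 1) / (sum(c.values()) + len(alphabet))   # predict first...
            total_bits -= np.log2(prob)
            c[symbol] += 1                                               # ...then update
        return total_bits / len(seq)

    rng = np.random.default_rng(3)
    # Binarized "returns": 1 if the rate moved up, 0 otherwise (synthetic series here;
    # the study used one-year intra-day currency exchange-rate series).
    random_walk = list(rng.integers(0, 2, size=20_000))
    persistent = [0]
    for _ in range(19_999):                    # mildly autocorrelated series
        persistent.append(persistent[-1] if rng.random() < 0.6 else 1 - persistent[-1])

    print(f"random walk: {code_length_bits(random_walk):.3f} bits/symbol (1.0 = incompressible)")
    print(f"persistent : {code_length_bits(persistent):.3f} bits/symbol")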
10.
Ben-Gal, Irad; Caramanis, Michael. IIE Transactions, 2002, 34(12): 1087-1100
The paper considers a sequential Design Of Experiments (DOE) scheme. Our objective is to maximize both information and economic measures over a feasible set of experiments. Optimal DOE strategies are developed by introducing information criteria based on measures adopted from information theory. The evolution of acquired information along various stages of experimentation is analyzed for linear models with a Gaussian noise term. We show that for particular cases, although the amount of information is unbounded, the desired rate of acquiring information decreases with the number of experiments. This observation implies that at a certain point in time it is no longer efficient to continue experimenting. Accordingly, we investigate methods of stochastic dynamic programming under imperfect state information as appropriate means to obtain optimal experimentation policies. We propose cost-to-go functions that model the trade-off between the cost of additional experiments and the benefit of incremental information. We formulate a general stochastic dynamic programming framework for design of experiments and illustrate it by analytic and numerical implementation examples.
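A small numerical illustration of the diminishing rate of acquiring information for a linear model with Gaussian noise, using the standard mutual-information expression for Bayesian linear regression; the prior and noise variances and the random (rather than optimized) design rows are assumptions, and the paper's specific criteria and dynamic-programming formulation are not reproduced.

    import numpy as np

    rng = np.random.default_rng(4)
    d, sigma2, tau2 = 3, 1.0, 4.0          # model size, noise variance, prior variance (assumed)

    def information_bits(X):
        # Mutual information (bits) between the Gaussian parameter vector of a linear
        # model y = X b + noise and the observations, with prior b ~ N(0, tau2 * I):
        # I = 0.5 * log2 det(I + (tau2 / sigma2) * X'X).
        M = np.eye(d) + (tau2 / sigma2) * X.T @ X
        return 0.5 * np.log2(np.linalg.det(M))

    # Randomly chosen design rows; in the paper the experiments are chosen optimally.
    X = rng.normal(size=(40, d))
    info = np.array([information_bits(X[:k]) for k in range(41)])
    gains = np.diff(info)                  # information acquired by each new experiment

    print(f"information after  5 experiments: {info[5]:.2f} bits (gain of the 5th: {gains[4]:.3f})")
    print(f"information after 40 experiments: {info[40]:.2f} bits (gain of the 40th: {gains[39]:.3f})")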