Similar Literature
20 similar documents found.
1.
This paper describes the operation of an algorithm for synchronizing the time of computers using messages transmitted over packet-switched networks such as the Internet. The algorithm configures itself to realize any specified performance level at minimum cost (measured in computer cycles or network bandwidth). If the highest possible accuracy is requested, the performance will be limited by the larger of the instability of the local clock oscillator or the noise in the measurement process between the client and the server; uncertainties of about 8 ms RMS have been obtained using standard workstations and average network connections. Lower accuracy can be realized at substantially lower cost because the cost varies approximately as the inverse of the accuracy squared over a wide range of these parameters. The algorithm makes better use of scarce network bandwidth than previous methods. This improvement is realized by using a pure frequency-locked loop (rather than the mixed frequency/phase-locking algorithms currently proposed for NTP) with unequal spacing between calibration cycles. The result is a cleaner separation between network noise and clock noise, which is especially important when the highest possible accuracy is desired. The algorithm also improves on the pure-FLL "Interlock" algorithm described previously because it is self-configuring. In addition to supporting an explicit trade-off between cost and accuracy, the algorithm provides better performance than previous methods because it is better able to adapt itself to fluctuations in the asymmetry of the network delay.
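To illustrate the core idea of a pure frequency-locked-loop clock discipline with unequally spaced calibration cycles, here is a minimal sketch. It is not the paper's algorithm; the gain, noise levels and calibration intervals are assumptions made for the example.

```python
"""Minimal sketch of a pure frequency-locked-loop (FLL) clock discipline.

Not the paper's algorithm; it only illustrates steering the local clock rate
from offset measurements taken at unequal calibration intervals. All names
and parameter values are illustrative assumptions.
"""
import random

TRUE_FREQ_ERROR = 5e-6      # local oscillator runs 5 ppm fast (assumed)
MEAS_NOISE_S = 2e-3         # RMS network measurement noise, seconds (assumed)
GAIN = 0.5                  # loop gain for the frequency correction (assumed)

random.seed(1)

freq_correction = 0.0       # applied rate correction (dimensionless)
local_offset = 0.0          # accumulated local clock offset vs. true time (s)
t = 0.0
prev_t, prev_meas = None, None

# Unequally spaced calibration cycles, as in the abstract.
intervals = [16, 32, 64, 64, 128, 96, 128, 256, 192, 256]

for dt in intervals:
    # The local clock drifts at the residual rate between calibrations.
    local_offset += (TRUE_FREQ_ERROR - freq_correction) * dt
    t += dt

    # Offset measurement against the server, corrupted by network noise.
    meas = local_offset + random.gauss(0.0, MEAS_NOISE_S)

    if prev_t is not None:
        # FLL step: estimate the residual frequency error from the offset
        # slope, then fold it into the rate correction (no phase step).
        freq_est = (meas - prev_meas) / (t - prev_t)
        freq_correction += GAIN * freq_est

    prev_t, prev_meas = t, meas
    print(f"t={t:6.0f}s  measured offset={meas*1e3:7.3f} ms  "
          f"residual freq={(TRUE_FREQ_ERROR - freq_correction)*1e6:6.3f} ppm")
```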

2.
This paper investigates strategies for improving supply chain delivery timeliness when the delivery time follows an asymmetric Laplace distribution. Delivery performance is measured using a cost-based analytical model which evaluates the expected cost of early and late delivery. The paper presents a set of propositions that define the effect of changes in the parameters of the delivery time distribution on the expected penalty cost for untimely delivery when a supplier uses an optimally positioned delivery window to minimise the expected cost of untimely delivery. Increasing the scale parameter increases the expected penalty cost, increasing the skewness decreases it, and the location parameter has no effect. The effects are illustrated in a numerical example with real-world supplier data. The results can be used in developing strategies for improving delivery performance from a supplier's perspective and define how the delivery time distribution parameters can be modified to decrease the expected penalty cost of untimely delivery. The paper proposes a general approach to modelling delivery performance improvement that can be applied to other delivery time distribution forms. The approach can serve as guidance for practitioners undertaking a programme to improve delivery performance.
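A purely numerical sketch of the expected early/late penalty under an asymmetric Laplace delivery time is shown below. It is not the paper's closed-form propositions: the two-rate parameterization of the distribution, the cost values and the fixed window width are all assumptions, and the window position is found by a simple grid search.

```python
"""Sketch: expected early/late penalty cost under an asymmetric Laplace
delivery time, with the delivery window positioned by grid search.

A numerical illustration, not the paper's analytical model; the two-rate
parameterization and all parameter values are assumptions.
"""
import numpy as np

THETA = 10.0              # location (mode) of the delivery time, days (assumed)
LAM_L, LAM_R = 1.2, 0.6   # left/right exponential rates (control scale/skewness)
C_EARLY, C_LATE = 2.0, 5.0  # penalty per day early / late (assumed)
WIDTH = 2.0               # fixed delivery-window width, days (assumed)


def alaplace_pdf(x, theta=THETA, lam_l=LAM_L, lam_r=LAM_R):
    """Asymmetric Laplace pdf written with separate left/right decay rates."""
    k = lam_l * lam_r / (lam_l + lam_r)
    return np.where(x >= theta,
                    k * np.exp(-lam_r * (x - theta)),
                    k * np.exp(-lam_l * (theta - x)))


def expected_penalty(window_start, width=WIDTH):
    """E[cost] = c_e*E[(L-X)+] + c_l*E[(X-U)+] by numerical integration."""
    lo, hi = window_start, window_start + width
    x = np.linspace(THETA - 15.0, THETA + 30.0, 20001)
    f = alaplace_pdf(x)
    early = np.maximum(lo - x, 0.0)
    late = np.maximum(x - hi, 0.0)
    return np.trapz((C_EARLY * early + C_LATE * late) * f, x)


# Grid search for the optimally positioned window, as the supplier would use.
starts = np.linspace(THETA - 5.0, THETA + 5.0, 401)
costs = [expected_penalty(s) for s in starts]
best = int(np.argmin(costs))
print(f"best window start: {starts[best]:.2f}  expected penalty: {costs[best]:.3f}")
```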

3.
The testing local area network, TestLAN, is an approach to the design of integrated testing systems. Its objective is to link a network of testers such that distributed clients (jobs) requiring testing services can be served efficiently with minimum delays through the use of dynamic priority assignment, time-out, and resource allocation protocols. Previous research has shown this approach to improve performance, but assumed that the protocols are static. Events such as demand fluctuation and design changes, however, can affect the usefulness of the protocol logic, thereby adversely affecting the performance of the system. This paper investigates the thresholds of change and the time at which protocols should be re-evaluated for possible changes. The results of the analysis show that significant improvement can be gained by protocol adaptation, with certain cases realizing improvements in flow time of as much as 32% compared to the non-adaptable TestLAN. However, differences among adaptation methods are more difficult to generalize, as they depend on the particular cases. In many cases, adapting the protocol logic when the threshold Δ (the percentage change of a system parameter in either direction) lies in the range 0.2 ≤ Δ ≤ 0.4 reduces flow time significantly, although these thresholds are not general. The importance of this analysis is that the experiments cover a large space of general conditions, and general design recommendations for protocol adaptation are developed.

4.
The commercialization of magnetic refrigerators depends upon the ability to meet performance targets while having acceptable equipment costs. This paper links device design parameters and performance of magnetic refrigerators to the cost of cooling delivered. A device configuration parameter, D, is defined that links the field volume to the volume of magnetocaloric material. Combined with the magnet performance parameter, efficiency, and specific exergetic cooling, the cost structure of a magnetic refrigerator is determined. Some magnetic refrigerators reported in the literature are classified using their configuration parameters, and are then compared in terms of demonstrated performance using results available in the literature. The required improvement in performance is calculated such that the cost of a magnetic refrigerator would be equivalent to a conventional compressor-based device. Finally, some of the reasons for different performance are discussed with a focus on opportunities for improvements.

5.
This paper addresses the design of a blood supply chain (SC) network considering blood group compatibility. To this aim, a bi-objective mathematical programming model is developed which minimises the total cost as well as the maximum unsatisfied demand. Due to the uncertain nature of some input parameters, two novel robust possibilistic programming models are proposed based on the credibility measure. The data of a real case study are then used to illustrate the applicability and performance of the proposed models and to validate the proposed robust possibilistic programming approach. The obtained results show the superiority of the developed models and significant cost savings compared to the currently existing blood SC network.
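One common way to handle such a bi-objective model is the epsilon-constraint method: optimise total cost while bounding the maximum unsatisfied demand. The sketch below is a deterministic toy, not the paper's robust possibilistic formulation; supplies, demands and unit costs are made-up numbers, and only ABO compatibility (no Rh factor) is modelled.

```python
"""Sketch: epsilon-constraint handling of a bi-objective blood allocation LP
(minimise total cost; minimise maximum unsatisfied demand).

Deterministic toy, not the paper's robust possibilistic model; all data are
hypothetical and only ABO compatibility is used.
"""
import numpy as np
from scipy.optimize import linprog

groups = ["O", "A", "B", "AB"]
can_receive_from = {"O": ["O"], "A": ["A", "O"], "B": ["B", "O"],
                    "AB": ["AB", "A", "B", "O"]}
supply = {"O": 35, "A": 25, "B": 15, "AB": 10}
demand = {"O": 40, "A": 30, "B": 20, "AB": 10}

pairs = [(d, r) for r in groups for d in can_receive_from[r]]
n_x, n_u = len(pairs), len(groups)

# Objective 1: total shipping cost (substitution assumed costlier than same-group).
c = np.array([1.0 if d == r else 1.5 for d, r in pairs] + [0.0] * n_u)

# Supply constraints: sum_r x[d, r] <= supply[d].
A_ub = np.zeros((len(groups), n_x + n_u))
for i, d in enumerate(groups):
    for j, (dd, _) in enumerate(pairs):
        if dd == d:
            A_ub[i, j] = 1.0
b_ub = np.array([supply[d] for d in groups])

# Demand balance: sum_d x[d, r] + u[r] = demand[r].
A_eq = np.zeros((len(groups), n_x + n_u))
for i, r in enumerate(groups):
    for j, (_, rr) in enumerate(pairs):
        if rr == r:
            A_eq[i, j] = 1.0
    A_eq[i, n_x + i] = 1.0
b_eq = np.array([demand[r] for r in groups])

# Objective 2 (max unsatisfied demand) is capped at epsilon via the u-bounds.
for eps in [15, 11, 8, 5]:
    bounds = [(0, None)] * n_x + [(0, eps)] * n_u
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    if res.status == 0:
        u = res.x[n_x:]
        print(f"eps={eps:2d}  cost={res.fun:7.2f}  max unsatisfied={u.max():5.2f}")
    else:
        print(f"eps={eps:2d}  infeasible")
```

Sweeping epsilon traces the trade-off between the two objectives: tighter caps on unmet demand force more (and costlier) substitution shipments.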

6.
It has long been recognized that poor quality can only result in higher costs. Yet the idea of reducing cost through better quality is not fully realized. Current models for the economic design of control charts provide strategies to maintain existing quality levels. In this research, a comprehensive cost model is developed that incorporates two cost functions: a reactive function, which accounts for all quality-related costs incurred while maintaining a stable level of the process, and a proactive function, which accounts for the cost of process improvement. Using incremental economics, the two cost functions are combined to allow an evaluation of process improvement alternatives based on their economic worth. Procedures for obtaining economically optimum designs for controlling the process mean are developed, and designed experiments are used to investigate model performance over a wide range of input parameters. The results indicate that the model is sensitive to changes in 13 parameters, especially when the magnitude of the process shift is small. Copyright © 2004 John Wiley & Sons, Ltd.
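The incremental-economics comparison amounts to weighing the reduction in reactive (quality) cost against the proactive (improvement) cost. A minimal sketch follows; the hourly cost figures are hypothetical placeholders, whereas in the paper they would come from the economic control-chart cost model.

```python
"""Sketch: incremental-economics comparison of process improvement
alternatives against a reactive (status quo) quality cost baseline.

All cost figures are hypothetical placeholders.
"""

# Reactive cost of running the current, stable-but-imperfect process ($/h).
baseline_reactive_cost = 120.0

# Each alternative: (name, reactive cost after improvement $/h,
#                    proactive improvement cost amortised to $/h).
alternatives = [
    ("reduce shift frequency", 85.0, 20.0),
    ("reduce shift magnitude", 95.0, 10.0),
    ("automated sensing",      70.0, 60.0),
]

for name, reactive_after, proactive_rate in alternatives:
    # Incremental worth = savings in reactive cost minus proactive spend.
    net_worth = (baseline_reactive_cost - reactive_after) - proactive_rate
    verdict = "worth adopting" if net_worth > 0 else "not economical"
    print(f"{name:24s} net worth = {net_worth:+7.2f} $/h  -> {verdict}")
```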

7.
Synthesizing Josephson stepwise approximated sine waves to produce an ac quantum standard is limited by errors in the waveform transitions between quantized voltages. Many parameters have an impact on the shape of the transients. A simple model with a single equivalent time constant can be used to evaluate the influence of these parameters. Measurements of the transients allow establishment of the value of the equivalent time constant and prediction of the response to variations of the parameters. We have experimentally confirmed the influence of changes in the bias current used for the quantized voltages and in the microwave power applied. Under usual operating conditions, the model predicts that increasing the number of samples per period nonlinearly reduces the difference between measured and ideal root-mean-square (rms) values. This behavior was confirmed through measurements with thermal converters.
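The single-time-constant picture can be reproduced numerically: build the ideal stepwise sine, attach an exponential settling transient to every transition, and compare the resulting rms value with the ideal one as the number of samples per period grows. The sketch below does exactly that; the frequency, time constant and oversampling factor are illustrative, not measured values.

```python
"""Sketch: effect of a finite transition time constant on the rms value of a
stepwise-approximated sine wave (single-time-constant model).

All waveform parameters are illustrative assumptions.
"""
import numpy as np

F0 = 50.0          # synthesized frequency, Hz (assumed)
TAU = 2e-6         # equivalent transition time constant, s (assumed)
OVERSAMPLE = 2000  # simulation points per step

def rms_error(samples_per_period):
    """|rms(stepwise wave with exponential transitions) - rms(ideal sine)|."""
    period = 1.0 / F0
    dt = period / (samples_per_period * OVERSAMPLE)
    t = np.arange(samples_per_period * OVERSAMPLE) * dt

    # Ideal stepwise approximation: sample-and-hold of the sine.
    step_index = (t // (period / samples_per_period)).astype(int)
    step_times = step_index * period / samples_per_period
    levels = np.sin(2 * np.pi * F0 * step_times)

    # Exponential settling from the previous level at each transition.
    prev_levels = np.sin(2 * np.pi * F0 * (step_times - period / samples_per_period))
    wave = levels + (prev_levels - levels) * np.exp(-(t - step_times) / TAU)

    ideal_rms = 1.0 / np.sqrt(2.0)
    return abs(np.sqrt(np.mean(wave ** 2)) - ideal_rms)

for n in [8, 16, 32, 64, 128]:
    print(f"samples/period = {n:4d}   |rms error| = {rms_error(n):.3e}")
```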

8.
Performance measurement is a fundamental instrument of management. For maintenance management, one of the key issues is to ensure that the maintenance activities planned and executed have given the expected results. This can be facilitated by effective use of rigorously defined key performance indicators (KPI) that are able to measure important aspects of the maintenance function. In this paper, an industrial survey was carried out to explore the use of performance measurement in maintenance management. Based on the survey responses, analyses were performed on the most commonly used KPI; how these KPI are sourced or chosen; the influence of the manufacturing environment and maintenance objectives on KPI choice; and the effective use of these KPI in decision support and performance improvement. It was found that maintenance performance measurement is dominated by lagging indicators (equipment, maintenance cost and safety performance), with less use of leading (maintenance work process) indicators. The results showed no direct correlation between the maintenance objectives pursued and the KPI used. Further analysis showed that only a minority of the companies have a high percentage of decisions and changes triggered by KPI use, and only a few are satisfied with their performance measurement systems. Correlation analysis showed a strong positive linear relationship between the degree of satisfaction and the process changes/decisions triggered by KPI use, with the least satisfied respondents having the fewest decisions and changes triggered by KPI use. The results indicate some ineffectiveness of performance measurement systems in driving performance improvement in industry.

9.
Risk-adjusted control charts have been widely used to monitor surgical quality and detect risks in surgical performance. Most previous approaches focus on shifts in the location parameter and, at most, on the existence of a scale parameter, and therefore do not fully measure the scale parameter at different levels. By ignoring the magnitude of the scale parameter, such monitoring methods cannot detect the variations in surgical mortality that the scale parameter measures and that are needed to reflect surgical quality improvement. A method for detecting such variations in surgical quality is therefore of interest. This paper uses a new weighted h-likelihood method to obtain a weighted score test for surgical risks from a logistic model. An exponentially weighted moving average chart can then be constructed to monitor changes in the variance of the risks, which is of interest in practical surgical monitoring programs. Simulation results indicate that the proposed approach performs more efficiently than existing methods under various magnitudes of shift in the scale parameter and different pre-set threshold stabilities. In addition, application of the proposed method to real surgical data from the Surgical Outcome Monitoring and Improvement Program in Hong Kong reveals both improvement and deterioration in a hospital's outcomes.
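The EWMA machinery itself is standard and easy to illustrate. The sketch below runs an EWMA chart on squared standardized scores to flag an increase in the variance of a monitored statistic; the paper's weighted h-likelihood score test is not reproduced, the scores are synthetic, and the smoothing constant, limit width and simulated shift are assumptions (the control limits use a normal approximation).

```python
"""Sketch: an EWMA chart on squared standardized scores, used to flag an
increase in the variance of a monitored risk statistic.

Synthetic data only; lam, L and the simulated variance shift are assumptions,
and the limits rely on a normal approximation.
"""
import math
import random

random.seed(7)

lam = 0.1                             # EWMA smoothing constant (assumed)
L = 2.7                               # control-limit width in sigma units (assumed)
mu_v, sigma_v = 1.0, math.sqrt(2.0)   # in-control mean/std of a squared N(0,1) score

# Synthetic standardized scores: in control for 50 periods, then the
# standard deviation of the risk scores increases by 80%.
scores = [random.gauss(0.0, 1.0) for _ in range(50)] + \
         [random.gauss(0.0, 1.8) for _ in range(50)]

z, signals = mu_v, []
for t, s in enumerate(scores, start=1):
    z = lam * (s * s) + (1.0 - lam) * z
    # Time-varying EWMA standard deviation.
    sd_z = sigma_v * math.sqrt(lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t)))
    if z > mu_v + L * sd_z:
        signals.append(t)

print("out-of-control signals at periods:", signals if signals else "none")
```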

10.
Wang P, Farrell G, Semenova Y, Rajan G. Applied Optics, 2008, 47(16): 2921-2925
It is shown that manufacturing tolerances of the fiber parameters, bend radius and numerical aperture (NA), significantly influence the fiber bend loss performance and spectral response of a fiber-based edge filter. A theoretical model, validated by experimental results, is used to determine the changes in key spectral parameters of an edge filter resulting from changes within the manufacturing tolerance ranges of both the bend radius and the NA. Finally, it is shown that bend-radius tuning during fabrication of such filters is a means of mitigating the effect of manufacturing variations.

11.
The coupled-line thru-reflect-line (TRL) calibration technique is developed to measure the differential-mode (DM) and common-mode (CM) scattering parameters of a symmetric coupled-line discontinuity structure. Under DM and CM excitation conditions, a four-port symmetric coupled-line network can be treated as equivalent DM and CM two-port half-circuits. The developed coupled-line TRL calibrators can likewise be treated as equivalent DM and CM half-calibrators, and then applied to measure the DM and CM scattering parameters of the symmetric coupled-line discontinuity structure. To validate the effectiveness of the developed measurement method, the DM and CM characteristics of a coupled-line guided-wave structure, the periodically non-uniform coupled microstrip-line structure, are measured over the frequency range of 1-8 GHz. The measured results are shown to be in reasonable agreement with the simulated results, demonstrating that the developed coupled-line TRL technique is an effective approach to characterising the symmetric coupled-line discontinuity structure.
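The DM/CM half-circuit treatment rests on the standard conversion from single-ended 4-port S-parameters to mixed-mode S-parameters. The sketch below shows that conversion only, not the TRL calibration itself; the port pairing (1,2) and (3,4) for the two balanced ports is an assumption, and the example matrix is made up.

```python
"""Sketch: converting standard 4-port S-parameters to mixed-mode
(differential/common) S-parameters.

Illustrates the DM/CM decomposition behind the half-circuit treatment, not
the TRL calibration. Port pairing assumed: (1,2) -> balanced port A,
(3,4) -> balanced port B; adjust M if your numbering differs.
"""
import numpy as np

# Standard-to-mixed-mode transformation matrix (rows: dA, dB, cA, cB).
M = (1.0 / np.sqrt(2.0)) * np.array([
    [1, -1, 0,  0],
    [0,  0, 1, -1],
    [1,  1, 0,  0],
    [0,  0, 1,  1],
], dtype=complex)


def to_mixed_mode(s4: np.ndarray) -> dict:
    """Return the 2x2 DM/CM blocks of a 4-port S-parameter matrix."""
    smm = M @ s4 @ M.conj().T          # M is real orthogonal, so M^-1 = M^T
    return {"Sdd": smm[0:2, 0:2], "Sdc": smm[0:2, 2:4],
            "Scd": smm[2:4, 0:2], "Scc": smm[2:4, 2:4]}


if __name__ == "__main__":
    # Illustrative (non-physical) 4-port matrix at a single frequency point.
    rng = np.random.default_rng(0)
    s4 = 0.3 * (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
    s4 = 0.5 * (s4 + s4.T)             # enforce reciprocity for the example
    for name, block in to_mixed_mode(s4).items():
        print(name, np.round(block, 3), sep="\n")
```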

12.
This paper presents a novel approach to the characterization of dielectrics exhibiting dispersion with frequency and to the derivation of their equivalent circuits. The approach is based on the utilization of a generalized relaxation time distribution (GRTD), facilitated by analysis of the dispersion in the complex dielectric-constant plane. New parametric characterizations are introduced, leading to a direct derivation of a distributed resistance-capacitance (R-C) network. The validity of the new methodology is examined by comparing reported measurements of the a.c. resistivity and permittivity of zinc oxide (ZnO) varistors with those obtained from the distributed-parameter predictions. A discrete form of the equivalent network is also derived, consisting of 16 R-C branches. The results of the distributed parameters and the discrete equivalent network are shown to agree satisfactorily with measurements over the frequency range of 30 Hz to 10 MHz. Remarkable agreement was also obtained between measured dispersion and network predictions for a perfluoropolyether microemulsion polymer over the frequency range of about 100 kHz up to 100 MHz using 18 R-C branches.
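A generic version of the relaxation-time-to-R-C mapping can be sketched as a discrete Debye-sum fit: fit the measured complex permittivity with relaxation terms on a fixed time-constant grid, then map each significant term to a series R-C branch. This is not the paper's GRTD formulation; the synthetic "measured" data, tau grid and empty-cell capacitance C0 are all assumptions.

```python
"""Sketch: fitting complex permittivity with discrete relaxation (Debye)
terms and mapping each term to a series R-C branch.

Generic discrete fit, not the paper's GRTD method; all data are synthetic.
"""
import numpy as np
from scipy.optimize import nnls

C0 = 1e-12                         # empty-cell capacitance, F (assumed)
freqs = np.logspace(2, 7, 60)      # 100 Hz .. 10 MHz
w = 2 * np.pi * freqs

# Synthetic "measurement": eps*(w) = eps_inf + sum_k d_eps_k / (1 + j w tau_k)
eps_inf_true = 5.0
true_terms = [(40.0, 1e-4), (15.0, 1e-6)]        # (delta_eps, tau)
eps_meas = eps_inf_true + sum(d / (1 + 1j * w * tau) for d, tau in true_terms)
eps_meas += 0.05 * np.random.default_rng(0).standard_normal(len(w))   # noise

# Fit: fixed log-spaced tau grid, non-negative delta_eps_k plus eps_inf.
taus = np.logspace(-8, -2, 13)
A = np.column_stack([1.0 / (1 + 1j * w * t) for t in taus] + [np.ones_like(w)])
A_ri = np.vstack([A.real, A.imag])               # stack real and imaginary parts
b_ri = np.concatenate([eps_meas.real, eps_meas.imag])
coeffs, _ = nnls(A_ri, b_ri)
d_eps, eps_inf = coeffs[:-1], coeffs[-1]

print(f"fitted eps_inf = {eps_inf:.2f}")
for tau, d in zip(taus, d_eps):
    if d > 1e-3:
        # Each significant relaxation maps to a series R-C branch:
        # C_k = delta_eps_k * C0 and R_k = tau_k / C_k.
        Ck = d * C0
        Rk = tau / Ck
        print(f"tau = {tau:8.1e} s  delta_eps = {d:6.2f}  "
              f"C = {Ck:9.3e} F  R = {Rk:9.3e} ohm")
```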

13.
The specific wear rate of composite materials plays a significant role in industry, but the processes used to measure it are both time- and cost-consuming. It is therefore essential to suggest a modeling method to predict the specific wear rate and analyze the effect of its parameters. Computational methods such as artificial neural networks (ANN), fuzzy inference systems (FIS) and the adaptive neuro-fuzzy inference system (ANFIS) are nowadays widely regarded as applicable modeling tools. ANFIS integrates the capabilities of a neural network (NN) and a fuzzy system (FS). The present paper investigates the prediction of the specific wear rate of epoxy composites of various compositions using ANFIS. The obtained results show that ANFIS is a powerful tool for modeling the specific wear rate; the mean squared error (MSE) obtained for the testing sets was 0.0071.

14.
Facilities layout, being a significant contributor to manufacturing performance, has been studied many times over the past few decades. Existing studies are mainly based on material handling cost and have neglected several critical variations inherent in a manufacturing system. The static nature of the available models has reduced the quality of performance estimates and prevented truly optimal layouts from being achieved. A queuing network model, an established tool for quantifying the variations of a system and operational performance factors such as work-in-process (WIP) and utilisation, can significantly help decision makers in solving a facilities layout problem. The queuing model utilised in this paper extends existing models by concurrently incorporating several operational features: availability of raw material, alternate routing of parts, effectiveness of a maintenance facility, quality of products, and availability of processing tools and material handling equipment. A queuing model, however, is not an optimisation tool in itself. A genetic algorithm, an effective search process for exploring a large search space, has therefore been selected and implemented to solve the layout problem modelled with queuing theory. This combination provides a unique opportunity to consider stochastic variations while achieving a good layout. A layout problem with unequal-area facilities is considered in this paper. A good layout solution is one that minimises four cost components: WIP cost, material handling cost, deviation cost and relocation cost. Observations from experimental analysis are also reported. The proposed methodology demonstrates the potential to integrate several related decision-making problems in a unified framework.
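For illustration, the following sketch shows only the genetic-algorithm machinery applied to a layout permutation with a material handling (flow times distance) objective. It is a simplification of the paper's approach: the queuing-based WIP, deviation and relocation cost terms are omitted, equal-area slots are assumed, and the flows, coordinates and GA settings are made up.

```python
"""Sketch: a genetic algorithm assigning facilities to candidate slots to
minimise material handling cost (flow x distance).

GA machinery only; the paper's objective also includes queuing-based WIP and
other cost terms and handles unequal-area facilities. All data are made up.
"""
import random

random.seed(3)

N = 6                                    # facilities = candidate slots (equal-area simplification)
coords = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]   # slot centroids
flow = [[0, 5, 2, 4, 0, 0],
        [5, 0, 3, 0, 2, 0],
        [2, 3, 0, 0, 0, 6],
        [4, 0, 0, 0, 5, 0],
        [0, 2, 0, 5, 0, 3],
        [0, 0, 6, 0, 3, 0]]

def dist(a, b):
    return abs(coords[a][0] - coords[b][0]) + abs(coords[a][1] - coords[b][1])

def handling_cost(perm):
    """perm[i] = slot assigned to facility i."""
    return sum(flow[i][j] * dist(perm[i], perm[j])
               for i in range(N) for j in range(i + 1, N))

def crossover(p1, p2):
    """Order crossover (OX) keeping each slot used exactly once."""
    a, b = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[a:b + 1] = p1[a:b + 1]
    fill = [s for s in p2 if s not in child]
    for k in range(N):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(perm, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(N), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

pop = [random.sample(range(N), N) for _ in range(30)]
for gen in range(200):
    pop.sort(key=handling_cost)
    elite = pop[:10]
    children = [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]
    pop = elite + children

best = min(pop, key=handling_cost)
print("best assignment (facility -> slot):", best,
      " handling cost:", handling_cost(best))
```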

15.
Improved OCXO's oven using active thermal insulation
This paper shows how the performance of thermal enclosures can be improved by using a compensating system, the principle of which was described by F. Walls a few years ago (41st AFCS, 1987). It is shown that, because of the thermal network between the outside temperature, the temperature sensor and the device to be regulated, the latter may undergo residual temperature variations which reduce the overall thermal efficiency of the oven. The paper shows how the thermal transfer functions can be measured using an experimental setup in which the node temperatures are measured by thermal sensors. By identifying the thermal response of the nodes with the theoretical transfer function under external-temperature or heater excitation, the components of the equivalent R-C network can be determined. Knowing these thermal transfer functions, it is then possible to use a compensating system which can eliminate the parasitic static as well as dynamic thermal effects. Validating measurements and experimental results are presented which show the strong improvement achieved by this compensating system with respect to the conventional approach.
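The basic identification step, extracting an equivalent R-C product from a measured thermal step response, can be sketched with a simple first-order fit. The data and parameter values below are synthetic, and the single-node model is only a stand-in for the full thermal network described in the abstract.

```python
"""Sketch: extracting an equivalent thermal time constant (R-C product) from
a node's step response to an external-temperature or heater step.

First-order model T(t) = T_final + (T_0 - T_final) * exp(-t / tau);
the "measured" data and all values are synthetic.
"""
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, t_final, t0, tau):
    return t_final + (t0 - t_final) * np.exp(-t / tau)

# Synthetic measurement: node warming from 25 C to 70 C with tau = 300 s.
rng = np.random.default_rng(42)
t = np.linspace(0, 2000, 200)                      # seconds
measured = step_response(t, 70.0, 25.0, 300.0) + rng.normal(0, 0.2, t.size)

# Fit the first-order model; p0 is a rough initial guess.
popt, _ = curve_fit(step_response, t, measured, p0=(60.0, 20.0, 100.0))
t_final, t0, tau = popt
print(f"fitted T_final = {t_final:.2f} C, T_0 = {t0:.2f} C, tau = {tau:.1f} s")

# With the thermal resistance R known (assumed here), C follows from tau = R*C.
R_thermal = 15.0        # K/W, assumed
print(f"implied thermal capacitance C = {tau / R_thermal:.1f} J/K")
```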

16.
The semiconductor industry in Taiwan has received excellent performance ratings in the past. SinoPac statistics from January 2016 show that two of the top four global packaging and testing companies, ASE Group and SPIL, are from Taiwan. In the IC packaging process, the wire bonding machine requires careful attention; it is also costly, accounting for approximately 50% of the equipment investment. This paper therefore focuses on the production problems of wire bonding machines. Based on on-site interviews, three key control modules are identified which may drive a machine into an out-of-control state and are responsible for 90% of the defective products. A mathematical model is developed to determine the optimal production time of an imperfect production process. Taking the time value of money into account, the objective is to minimise the total of the set-up cost, inventory cost and defect cost. In addition to applying a Maclaurin series approximation, a mathematical property and an effective solution range are derived to help obtain near-optimal solutions. To justify the solution quality, the bisection method is applied to practical data. Finally, managerial insights are explored from the observed outputs through changes of various parameters, and these explorations are confirmed by experts in the field.
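The role of the bisection method can be illustrated on a simplified imperfect-production cost rate. The sketch below is not the paper's model: the time value of money is ignored, the defect term uses a small-shift-rate approximation, and all parameter values are hypothetical.

```python
"""Sketch: bisection to find a near-optimal production run time for a
simplified imperfect-production cost rate (setup + holding + defect cost).

Not the paper's model; no discounting and all parameters are hypothetical.
"""

K = 500.0      # setup cost per production run
D = 400.0      # demand rate, units/day
P = 1000.0     # production rate, units/day
h = 0.05       # holding cost, per unit per day
c_d = 2.0      # cost per defective unit
lam = 0.1      # rate of shifting out of control, per day of running

def cost_rate(T):
    """Expected cost per day as a function of the run time T (days)."""
    setup = K * D / (P * T)
    holding = 0.5 * h * (P - D) * T
    defects = 0.5 * c_d * D * lam * T     # ~ P*lam*T^2/2 defectives per run
    return setup + holding + defects

def d_cost(T, eps=1e-6):
    """Numerical derivative of the cost rate."""
    return (cost_rate(T + eps) - cost_rate(T - eps)) / (2 * eps)

# Bisection on the derivative over a bracket where it changes sign.
lo, hi = 0.1, 10.0
assert d_cost(lo) < 0 < d_cost(hi)
while hi - lo > 1e-6:
    mid = 0.5 * (lo + hi)
    if d_cost(mid) < 0:
        lo = mid
    else:
        hi = mid

T_star = 0.5 * (lo + hi)
print(f"near-optimal run time: {T_star:.3f} days, "
      f"cost rate: {cost_rate(T_star):.2f} per day")
```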

17.
The costs of calibration
Characterization of the operating parameters of newly developed instruments is addressed by the several meanings of calibration. In its commonly used sense, calibration verifies reference values and checks the graduations by performing an operational check through the measuring range. This alone can cost an appreciable portion of the sale price of the instrument or circuit board. A broader meaning of calibration, however, is to fix the graduations, which requires establishing the performance characteristics of the device. Equipment to perform the basic calibration may cost over $100,000, while testers to establish performance run in excess of $1,000,000. The costs of such equipment and the time to do the tests are a necessary part of the pricing of each circuit board or instrument made.

18.
Optimal preventive maintenance in a production inventory system
We consider a production inventory system that produces a single product type, with inventory maintained according to an (S, s) policy. Exogenous demand for the product arrives according to a random process, and unsatisfied demands are not backordered. Such a make-to-stock production inventory policy is found very commonly in the discrete-part manufacturing industry, e.g., automotive spare parts manufacturing. The demand arrival process is assumed to be Poisson, while the unit production time, the time between failures, and the repair and maintenance times are assumed to have general probability distributions. We conjecture that, for any such system, the downtime due to failures can be reduced through preventive maintenance, resulting in a possible increase in system performance. We develop a mathematical model of the system and derive expressions for several performance measures. One such measure (cost benefit) is used as the basis for optimal determination of the maintenance parameters. The model application is explained via a detailed study of 21 variants of a numerical example problem. The optimal maintenance policies (obtained using a numerical search technique) vary widely depending on the problem parameters. Plots of the cost benefit versus the system characteristic parameters (such as demand arrival rate, failure rate and production rate) reveal the parameter sensitivities. The results show that the actual values of the failure and maintenance costs, and their ratio, are significant in determining the sensitivities of the system parameters.
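The benefit of preventive maintenance can be illustrated in isolation with the classical age-based maintenance cost rate, which is only a stand-in for the paper's full production-inventory model. The Weibull parameters and cost values below are hypothetical; PM pays off here because the shape parameter exceeds 1 (wear-out), whereas for exponential failures it would not.

```python
"""Sketch: cost rate of an age-based preventive maintenance policy
(maintain at age T or at failure, whichever comes first).

Classical age-replacement arithmetic only; all values are hypothetical.
"""
import numpy as np

SHAPE, SCALE = 2.5, 100.0     # Weibull failure time, hours (assumed)
C_PM, C_FAIL = 200.0, 1500.0  # cost of planned PM vs. failure repair (assumed)

def reliability(t):
    return np.exp(-(t / SCALE) ** SHAPE)

def cost_rate(T, n=2000):
    """[c_p*R(T) + c_f*(1-R(T))] / E[cycle length], cycle = min(failure, T)."""
    t = np.linspace(0.0, T, n)
    expected_cycle = np.trapz(reliability(t), t)   # E[min(X, T)]
    expected_cost = C_PM * reliability(T) + C_FAIL * (1.0 - reliability(T))
    return expected_cost / expected_cycle

candidates = np.linspace(10.0, 300.0, 59)
rates = [cost_rate(T) for T in candidates]
best = int(np.argmin(rates))
run_to_failure = cost_rate(10 * SCALE)             # effectively no PM
print(f"best PM age: {candidates[best]:.0f} h, cost rate {rates[best]:.3f}/h")
print(f"run-to-failure cost rate: {run_to_failure:.3f}/h")
```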

19.
Analysis of millimetre-wave radar stepped-frequency signals and implementation of a motion compensation method
This paper analyses the parameters of the millimetre-wave radar stepped-frequency signal and the effect of complex target motion on the one-dimensional range profile synthesized from this signal, and studies a method for estimating the target velocity under triangular-wave modulation. Simulations show that the method offers high velocity-measurement accuracy and good noise immunity, and its run time on an ADSP21062 signal-processing board is 1.2 ms.
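The effect of target motion on a stepped-frequency range profile can be demonstrated generically: synthesize the profile by an inverse DFT over the frequency steps and compare the stationary and moving cases. The sketch below is not the paper's estimation method, and the waveform parameters and point-target scene are made up.

```python
"""Sketch: synthesizing a one-dimensional range profile from stepped-frequency
returns and showing how uncompensated radial motion shifts the profile.

Generic illustration only; all parameters and the scene are made up.
"""
import numpy as np

C = 3e8
F0 = 35e9          # carrier, Hz (assumed mm-wave band)
DF = 5e6           # frequency step, Hz
N = 64             # steps per burst
T_PRI = 10e-6      # pulse repetition interval, s

def burst_returns(ranges_rcs, velocity):
    """Complex return for each frequency step from point scatterers."""
    n = np.arange(N)
    f = F0 + n * DF
    t = n * T_PRI
    sig = np.zeros(N, dtype=complex)
    for r0, a in ranges_rcs:
        r = r0 + velocity * t                     # range migrates during the burst
        sig += a * np.exp(-1j * 4 * np.pi * f * r / C)
    return sig

def range_profile(sig):
    prof = np.abs(np.fft.ifft(sig))
    bin_size = C / (2 * N * DF)                   # range resolution
    return prof, bin_size

scene = [(12.0, 1.0), (15.0, 0.6)]                # (relative range m, amplitude)

for v in [0.0, 30.0]:                             # stationary vs. 30 m/s receding
    prof, dr = range_profile(burst_returns(scene, v))
    peak = int(np.argmax(prof))
    print(f"v = {v:5.1f} m/s  strongest peak at bin {peak:2d} "
          f"(~{peak * dr:5.2f} m), peak/mean = {prof[peak] / prof.mean():.1f}")
```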

20.
We investigate mean-variance interactions of processing time as applied to process improvement and capacity design. For general capacity-cost and flow-cost functions, we demonstrate that production processes fall into one of six regions on the mean-variance interaction plane, each with its own policy implications. The general model is specialized to the case of an M/G/1 queue with linear and separable mean and variance costs, and with flow costs proportional to mean queue length. Optimal solutions for the processing-time mean and variance are derived, and easily obtained operating parameters are used to identify appropriate process improvement policies. A simulation example of a production network taken from industry verifies the efficacy of the linear M/G/1 model in a more general setting. We conclude that intelligent management of both processing capacity (i.e. mean processing time) and processing-time variance can be powerful tools for both capacity design and process improvement.
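The mean-variance trade-off at a single M/G/1 station can be illustrated with the Pollaczek-Khinchine formula plus an assumed linear cost structure. The coefficients, baseline and candidate (mean, variance) pairs below are hypothetical, not the paper's data or its optimal solutions.

```python
"""Sketch: evaluating mean/variance improvement options for an M/G/1 station,
with flow cost proportional to the mean number in system (Pollaczek-Khinchine).

Only the standard M/G/1 formula is used; the linear cost coefficients and the
candidate options are hypothetical.
"""
import math

LAM = 0.8                  # arrival rate, jobs/hour (assumed)
M0, V0 = 1.0, 0.64         # baseline service mean (h) and variance (h^2)
C_MEAN = 400.0             # cost per hour of mean reduction (assumed)
C_VAR = 150.0              # cost per h^2 of variance reduction (assumed)
C_FLOW = 120.0             # flow cost per job in system per hour (assumed)

def mean_in_system(lam, m, v):
    """Pollaczek-Khinchine: L = rho + lam^2 (v + m^2) / (2 (1 - rho))."""
    rho = lam * m
    if rho >= 1.0:
        return math.inf
    return rho + lam ** 2 * (v + m ** 2) / (2.0 * (1.0 - rho))

candidates = [
    ("do nothing",      M0,  V0),
    ("reduce variance", M0,  0.16),
    ("reduce mean",     0.9, V0),
    ("reduce both",     0.9, 0.25),
]

for name, m, v in candidates:
    improvement = C_MEAN * (M0 - m) + C_VAR * (V0 - v)
    flow = C_FLOW * mean_in_system(LAM, m, v)
    print(f"{name:16s} m={m:.2f} v={v:.2f}  improvement={improvement:7.1f}  "
          f"flow={flow:7.1f}  total={improvement + flow:7.1f}")
```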
