Similar Documents
20 similar documents found.
1.
Identification is the selection of the model type and the model order from measured data of a process with unknown characteristics. If the observations themselves are used, a good time-series model for stochastic data can be identified automatically. The selected model is an adequate representation of the statistically significant spectral details in the observed process. Sometimes, however, identification has to be based on far fewer than N characteristics of the data. The reduced statistical information is assumed to consist of a long autoregressive (AR) model. That AR model then has to be used for the estimation of moving average (MA) and combined ARMA models and for the selection of the best model orders. The accuracy of ARMA models is improved by using four different types of initial estimates in a first stage. After a second stage, the most favorable initial estimates for the data at hand can be selected automatically from the fit of the estimated ARMA models to the given long AR model. The same principle is used to select the best type of time-series model and the best model order. No spectral information is lost by using only the long AR representation instead of all the data: the quality of the model identified from a long AR model is comparable to that of the best time-series model that can be computed when all observations are available.
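As an illustration of the reduced-statistics idea, the sketch below implements one classical route from a long AR model to MA estimates (Durbin's method) in plain numpy. The helper names, the long AR order of 30, and the example signal are all illustrative; the paper's four initial-estimate variants and its order-selection step are not reproduced here.

```python
import numpy as np

def yule_walker(x, order, demean=True):
    """Fit AR(order) by Levinson-Durbin on sample autocovariances.
    Returns the prediction-error filter c (c[0] = 1) and the residual variance."""
    x = np.asarray(x, float)
    if demean:
        x = x - x.mean()
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(order + 1)])
    c, err = np.zeros(order + 1), r[0]
    c[0] = 1.0
    for m in range(1, order + 1):
        k = -(c[:m] @ r[m:0:-1]) / err          # reflection coefficient
        c[:m + 1] = c[:m + 1] + k * c[m::-1]    # Levinson order update
        err *= 1.0 - k * k
    return c, err

def durbin_ma(x, q, long_order=30):
    """Durbin's method: fit a long AR model to the data, then fit an AR model
    to the long-AR coefficient sequence; the result estimates the MA(q) polynomial."""
    c_long, _ = yule_walker(x, long_order)
    b, _ = yule_walker(c_long, q, demean=False)  # treat the filter as a 'signal'
    return b

rng = np.random.default_rng(0)
e = rng.standard_normal(10_000)
x = e[1:] + 0.6 * e[:-1]                # true MA(1): x_t = e_t + 0.6 e_{t-1}
print(durbin_ma(x, q=1))                # approximately [1, 0.6]
```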

2.
The singular value decomposition (SVD) autoregressive moving average (ARMA) procedure is applied to computer-generated synthetic Doppler signals as well as to in-vivo Doppler data recorded in the carotid artery. Two essential algorithmic parameters (the initially proposed model order and the number of overdetermined equations used) prove difficult to choose, and the resulting spectra depend strongly on both. For the simulated data, model orders of (3, 3) provide good results. When the SVD-ARMA algorithm is applied to in-vivo Doppler signals, however, no single set of model orders produces consistent spectral estimates throughout the cardiac cycle, and altering the model orders also necessitates the selection of new algorithmic parameters. Hence, the SVD-ARMA approach cannot be considered suitable as a spectral estimation technique for real-time Doppler ultrasound systems.
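A rough numpy illustration of the procedure's core step: an overdetermined set of modified Yule-Walker equations solved after truncating small singular values. The parameters p, q, n_eq, and rank below are exactly the kind of sensitive choices the abstract criticizes; the function names and test signal are illustrative.

```python
import numpy as np

def acf(x, maxlag):
    """Sample autocorrelation up to maxlag."""
    x = np.asarray(x, float) - x.mean()
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(maxlag + 1)])
    return r / r[0]

def svd_myw_ar(x, p, q, n_eq, rank):
    """AR part of an ARMA(p, q) model from overdetermined modified Yule-Walker
    equations r[k] = sum_j a[j] r[k-j], k = q+1 .. q+n_eq, solved with an SVD
    truncated to `rank` singular values."""
    r = acf(x, q + n_eq + p)
    A = np.array([[r[abs(k - j)] for j in range(1, p + 1)]
                  for k in range(q + 1, q + n_eq + 1)])
    b = r[q + 1:q + n_eq + 1]
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < rank, 1.0 / s, 0.0)  # drop small SVs
    return Vt.T @ (s_inv * (U.T @ b))  # estimates of a[1..p]

rng = np.random.default_rng(0)
e = rng.standard_normal(8_000)
x = np.empty_like(e); x[0] = e[0]
for t in range(1, len(e)):               # true ARMA(1,1): a = 0.7, b = 0.5
    x[t] = 0.7 * x[t - 1] + e[t] + 0.5 * e[t - 1]
for n_eq in (6, 12, 24):                 # estimates shift with this choice,
    print(n_eq, svd_myw_ar(x, p=3, q=3, n_eq=n_eq, rank=2))  # mirroring the abstract
```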

3.
The decision as to whether a contaminated site poses a threat to human health and should be cleaned up relies increasingly on risk assessment models. However, the more sophisticated risk assessment models become, the greater the concern with the uncertainty in, and thus the credibility of, risk assessment. In particular, when several models are equally plausible, decision makers are confused by model uncertainty and unsure which model to choose for making decisions objectively. When the correctness of the different models cannot easily be judged after objective analysis, the cost incurred during risk assessment has to be considered in order to make an efficient decision. To support an efficient and objective remediation decision, this study develops a methodology that prices the least required reduction of uncertainty and uses that cost measure in the selection of candidate models. The focus is on identifying the effort involved in reducing the input uncertainty to the point at which it no longer hinders the decision under each equally plausible model. First, the methodology combines a nested Monte Carlo simulation, rank correlation coefficients, and explicit decision criteria to identify the key uncertain inputs that would influence the decision. It then calculates the cost of the required reduction of input uncertainty in each model through a convergence ratio, which measures the needed narrowing of each key input's spread. Finally, the most appropriate model is selected on the basis of the convergence ratio and cost. A contaminated-site case is used to demonstrate the methodology.
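A toy sketch of the key-input screening step: sample the uncertain inputs, push them through a stand-in risk model, and rank the inputs by Spearman rank correlation with the output. The risk model, input distributions, and decision threshold are all invented for illustration; the paper nests this inside a two-level Monte Carlo simulation.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 5_000
# Hypothetical uncertain inputs of a contaminated-site risk model (illustrative only)
inputs = {
    "source_conc": rng.lognormal(mean=2.0, sigma=0.8, size=n),   # mg/kg
    "intake_rate": rng.normal(1.0, 0.2, size=n).clip(min=0.1),   # L/day
    "dilution":    rng.uniform(5.0, 50.0, size=n),
}
risk = inputs["source_conc"] * inputs["intake_rate"] / inputs["dilution"]  # stand-in model

threshold = 1.0   # explicit decision criterion: remediate if exceedance is material
print("P(exceed) =", np.mean(risk > threshold))
for name, v in inputs.items():
    rho, _ = spearmanr(v, risk)             # rank correlation with the output
    print(f"{name:12s} rho = {rho:+.2f}")   # large |rho| marks a key input to tighten
```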

4.
An estimation algorithm for stationary random data automatically selects a single time-series (TS) model for a given number of observations. The parameters of that model accurately represent the spectral density and the autocovariance function of the data. Increased computational speed has made it possible to compute hundreds of TS models and to select only one. The computer program uses a selection criterion to determine the best model type and model order from a large number of candidates. The selected model includes all statistically significant details present in the data, and no more. The spectral density of very high-order TS models is the same as the raw periodogram, and their autocorrelation function can be the same as the lagged-product (LP) estimate; the periodogram and the LP autocorrelation function are therefore very high-order TS candidates. In practice, however, those high-order models are never selected, because they contain many insignificant details. The automatic selection lets the data speak for themselves: a single model is selected without user interaction. The automatic program can be implemented in measurement instruments for maintenance, or in radar, to detect differences in signal properties automatically.
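In miniature, the selection loop looks as follows, with statsmodels and plain AIC standing in for the paper's dedicated selection criterion; the AR(1) test signal and candidate ranges are illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
e = rng.standard_normal(2_000)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(e)):          # true AR(1) with coefficient 0.7
    x[t] = 0.7 * x[t - 1] + e[t]

best = None
for p in range(0, 6):               # AR, MA, and combined ARMA candidates
    for q in range(0, 3):
        if p == q == 0:
            continue
        try:
            res = ARIMA(x, order=(p, 0, q)).fit()
        except Exception:
            continue                # robustness: skip candidates that fail to converge
        if best is None or res.aic < best[0]:
            best = (res.aic, p, q)
print("selected (p, q):", best[1:])  # typically (1, 0) for this signal
```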

5.
The paper considers the possible ARMA models that can be derived from the discrete-time state-space model. This is achieved through the definition of the regular observation matrix. ARMA models of different orders are obtained, and the number of identified parameters of these models is determined. The almost-sure existence of the minimal-parameter ARMA model is shown. On this basis, a classification of ARMA models for vibrating systems is presented.
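One standard ingredient in deriving ARMA models from a state-space form is that the AR polynomial of the output ARMA model equals the characteristic polynomial of the state matrix (by Cayley-Hamilton); a two-line numpy check with an arbitrary 2x2 state matrix:

```python
import numpy as np

# Discrete-time state matrix of a hypothetical two-mode vibrating system
A = np.array([[0.9,  0.2],
              [-0.3, 0.7]])
ar_poly = np.poly(A)   # characteristic polynomial = AR part of the output ARMA model
print(ar_poly)         # [1, -1.6, 0.69]: y_t - 1.6 y_{t-1} + 0.69 y_{t-2} = MA terms
```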

6.
Time-series analysis if data are randomly missing
Maximum-likelihood (ML) theory offers an elegant asymptotic solution for estimating the parameters of time-series models. Unfortunately, the finite-sample performance of ML algorithms is often disappointing, especially in missing-data problems. The likelihood function is symmetric with respect to the unit circle for the estimated zeros of time-series models. As a consequence, the unit circle is either a local maximum or a local minimum of the likelihood of moving-average (MA) models. This is a trap for nonlinear optimization algorithms, which often converge to poor models with estimated zeros precisely on the unit circle. With ML estimation, it is much easier to estimate a long autoregressive (AR) model with only poles. The parameters of that long AR model can then be used to estimate MA and autoregressive moving-average (ARMA) models of different orders. The accuracy of the estimated AR, MA, and ARMA spectra is very good, and the robustness is excellent as long as the AR order is below 10 or 15. For still higher AR orders, up to about 60, whether convergence to a useful model is possible depends on the missing fraction and on the specific properties of the data at hand.
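The unit-circle trap is easy to verify numerically: an MA(1) with zero b and innovation variance s^2 yields exactly the same Gaussian likelihood as the mirrored zero 1/b with variance b^2 s^2, because both produce identical autocovariances. A short check with arbitrary values:

```python
import numpy as np

def ma1_cov(b, s2, n):
    """Exact covariance matrix of n consecutive samples of x_t = e_t + b e_{t-1}."""
    g0, g1 = s2 * (1 + b * b), s2 * b
    C = np.zeros((n, n))
    np.fill_diagonal(C, g0)
    idx = np.arange(n - 1)
    C[idx, idx + 1] = C[idx + 1, idx] = g1
    return C

b, s2, n = 0.5, 1.0, 6
print(np.allclose(ma1_cov(b, s2, n), ma1_cov(1 / b, b * b * s2, n)))  # True
# Identical covariances imply identical likelihoods for any data set, so the
# unit circle separates two equivalent parameterizations and optimizers stall on it.
```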

7.
Automatic spectral analysis with time series models
Increased computational speed and more robust algorithms have made it possible to identify automatically a well-fitting time series model for stochastic data. More than 500 models can be computed, of which only one is selected, and that one is certainly among the better models, if not the very best. The selected model characterizes the spectral density of the data. Time series models are excellent for random data if the model type and the model order are known. For unknown data characteristics, a large number of candidate models has to be computed, which necessarily includes model orders that are too low or too high as well as models of the wrong type, so robust estimation methods are required. The computer selects a model order for each of the three model types; from those three, the model type with the smallest expected prediction error is selected. That unique selected model includes precisely the statistically significant details present in the data.
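Once a model has been selected, its spectral density follows directly from the parameters via h(f) = sigma^2 |B(e^{-i 2 pi f})|^2 / |A(e^{-i 2 pi f})|^2; a small numpy helper (the coefficients below are arbitrary):

```python
import numpy as np

def arma_spectrum(ar, ma, sigma2, freqs):
    """Spectral density of an ARMA model; ar and ma include the leading 1."""
    z = np.exp(-2j * np.pi * np.asarray(freqs))
    A = np.polyval(ar[::-1], z)     # sum_k a_k z^k with z = e^{-i 2 pi f}
    B = np.polyval(ma[::-1], z)
    return sigma2 * np.abs(B) ** 2 / np.abs(A) ** 2

f = np.linspace(0, 0.5, 256)                     # frequency in cycles per sample
h = arma_spectrum(ar=[1, -0.7], ma=[1, 0.4], sigma2=1.0, freqs=f)
print(h[:3])
```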

8.
This paper presents a new weighted fuzzy multi-objective model for the integrated supplier selection, order quantity allocation, and customer order scheduling problem, aimed at preparing a responsive, order-oriented supply chain in a make-to-order manufacturing system. The total cost and quality of purchased parts, as well as the reliability of on-time delivery of customer orders, are objectives of the model. Flexible suppliers, moreover, can contribute to the responsiveness and flexibility of the entire supply chain in the face of uncertain customer orders; a mathematical measure is therefore developed for evaluating the volume flexibility of suppliers and is treated as a further objective. Finally, to capture the interdependencies between the selection criteria and to handle inconsistent and uncertain judgments, a fuzzy analytic network process method is used to identify top suppliers, which forms the last objective. To optimise these objectives, the decision-maker must decide from which suppliers to purchase the parts needed to assemble the customer orders, how to allocate the demand for parts among the selected suppliers, and how to schedule the customer orders for assembled products over the planning horizon. Numerical examples are presented and a computational analysis is reported.
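Setting the fuzzy ANP machinery aside, the order-allocation decision at the core of such models can be pictured with a crisp weighted-sum stand-in: split the demand for one part across suppliers by trading unit cost against defect rate under capacity limits. All numbers and weights below are hypothetical; the paper's model is fuzzy, multi-objective, and also schedules the customer orders.

```python
import numpy as np
from scipy.optimize import linprog

demand = 100.0
cost   = np.array([4.0, 5.0, 4.5])      # unit purchase cost per supplier
defect = np.array([0.04, 0.01, 0.02])   # expected defect fraction
cap    = np.array([60.0, 50.0, 70.0])   # supplier capacity

w_cost, w_qual = 0.6, 0.4               # decision-maker weights on the two objectives
c = w_cost * cost / cost.max() + w_qual * defect / defect.max()  # scalarized objective

res = linprog(c,
              A_eq=np.ones((1, 3)), b_eq=[demand],   # allocate the full demand
              bounds=[(0, u) for u in cap])
print(res.x)                            # order quantity assigned to each supplier
```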

9.
Laser-Doppler anemometry (LDA) measures the velocity of gases and liquids with observations irregularly spaced in time. Equidistant resampling turns out to be better than slotting techniques. After resampling, two ways of spectral estimation are compared: a windowed periodogram, and the spectrum of a time-series model, namely an estimated autoregressive moving average (ARMA) process whose orders are selected automatically from the data with an objective statistical criterion. Typically, the ARMA spectrum is better than the best windowed periodogram.
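A compact sketch of the pipeline the abstract compares: interpolate the irregular samples onto an equidistant grid, then estimate both a windowed periodogram and an AR-model spectrum. The AR order is fixed by hand here, whereas the paper selects ARMA orders with a statistical criterion; the signal and rates are illustrative, and one-/two-sided scaling conventions are glossed over.

```python
import numpy as np
from scipy.signal import welch
from statsmodels.regression.linear_model import yule_walker

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 100.0, 4_000))          # irregular LDA sample times
x = np.sin(2 * np.pi * 0.8 * t) + 0.5 * rng.standard_normal(t.size)

fs = 40.0                                          # resampling rate
tg = np.arange(t[0], t[-1], 1 / fs)
xg = np.interp(tg, t, x)                           # equidistant resampling

f_w, p_w = welch(xg, fs=fs, nperseg=512)           # windowed periodogram

order = 12                                         # AR order, fixed for the sketch
rho, sigma = yule_walker(xg - xg.mean(), order=order)
f_a = np.linspace(0, fs / 2, 256)
A = 1 - sum(rho[k] * np.exp(-2j * np.pi * f_a / fs * (k + 1)) for k in range(order))
p_a = sigma ** 2 / fs / np.abs(A) ** 2             # AR-model spectrum
```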

10.

Multilevel modeling is often used in the social sciences for analyzing data that have a hierarchical structure, e.g., students nested within schools. In an earlier study, we investigated the performance of various prediction rules for predicting a future observable within a hierarchical data set (Afshartous & de Leeuw, 2004). Here we apply the multilevel prediction approach to the NELS:88 educational data in order to assess predictive performance on a real data set; four candidate models are considered, and predictions are evaluated via both cross-validation and bootstrapping. The goal is to develop model selection criteria that assess the predictive ability of candidate multilevel models. We also introduce two plots that 1) help visualize the amount by which the multilevel model predictions are “shrunk” or translated from the OLS predictions, and 2) help identify whether there are groups for which the predictions are particularly good or bad.
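The "shrinkage" that the first plot visualizes has a closed form in the simplest random-intercept case: the prediction for group j is pulled from the group OLS mean toward the grand mean by the factor lambda_j = tau^2 / (tau^2 + sigma^2 / n_j). A small numpy illustration with invented values (not the NELS:88 data):

```python
import numpy as np

rng = np.random.default_rng(4)
tau2, sigma2 = 4.0, 9.0              # between-school and within-school variances
n_j = np.array([5, 20, 80])          # students sampled per school
true = rng.normal(50.0, np.sqrt(tau2), size=3)
ybar = true + rng.normal(0, np.sqrt(sigma2 / n_j))   # per-school OLS means
grand = ybar.mean()

lam = tau2 / (tau2 + sigma2 / n_j)   # shrinkage factor: approaches 1 as n_j grows
pred = lam * ybar + (1 - lam) * grand
for n, o, p in zip(n_j, ybar, pred):
    print(f"n={n:3d}  OLS={o:6.2f}  shrunk={p:6.2f}")  # small groups move the most
```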


11.
Condition-based maintenance (CBM) is an important maintenance strategy in practice. In this paper, we propose a CBM method that effectively incorporates system health observations into maintenance decision making to minimise the total maintenance cost and its variability. In this method, the system degradation process is described by a Cox proportional hazards (PH) model, and the proposed framework includes simulation of the failure process and maintenance policy optimisation using the adaptive nested partition with sequential selection (ANP-SS) method, which adaptively selects or creates the most promising region of candidates to improve efficiency. Unlike existing CBM strategies, the proposed method relaxes some restrictions on the system degradation model and takes the cost variation as one of the optimisation objectives. A real industry case study demonstrates the effectiveness of the framework.
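A bare-bones Monte Carlo stand-in for the policy-evaluation loop inside such a framework: degradation follows a random walk, failure follows a Cox-style hazard exp(beta * level), and a preventive-replacement threshold is scored on both mean cost and cost variability. All parameters are hypothetical, per-cycle cost stands in for a full renewal-reward cost rate, and the paper optimizes with ANP-SS rather than the grid used here.

```python
import numpy as np

rng = np.random.default_rng(5)
C_PREV, C_FAIL, BETA, BASE = 1.0, 10.0, 0.5, 0.002

def episode_cost(threshold):
    z = 0.0
    for _ in range(10_000):
        z += abs(rng.normal(0.0, 0.1))              # monotone degradation path
        if rng.random() < BASE * np.exp(BETA * z):  # Cox-style failure hazard
            return C_FAIL                           # corrective replacement
        if z >= threshold:
            return C_PREV                           # preventive replacement
    return C_PREV

for thr in (2.0, 4.0, 6.0):
    costs = np.array([episode_cost(thr) for _ in range(2_000)])
    # Both objectives of the abstract: expected cost and cost variability
    print(f"thr={thr}: mean={costs.mean():.2f}  std={costs.std():.2f}")
```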

12.
An integrated approach to the interrelated activities of product design, materials selection, and cost estimation is proposed. The wide range of engineering materials is first narrowed to a limited number of candidates using design limitations and performance requirements. Each candidate material is used to develop an optimum design, which is then used in cost estimation. An optimization technique, such as benefit-cost analysis, is used to select the optimum design-material combination. A case study illustrates the use of the integrated approach.
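The final screening step reduces to ranking the surviving design-material combinations by a benefit-cost measure; a minimal illustration with invented candidates and numbers:

```python
# Hypothetical combinations that survived the performance-requirement screening
candidates = {
    "Al 6061 / design A":    {"benefit": 7.2, "cost": 3.1},
    "Steel 4340 / design B": {"benefit": 8.5, "cost": 4.9},
    "GFRP / design C":       {"benefit": 6.8, "cost": 2.5},
}
best = max(candidates, key=lambda k: candidates[k]["benefit"] / candidates[k]["cost"])
print(best)   # the highest benefit-cost ratio wins the design-material selection
```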

13.
Prediction of underwater explosion shock response spectra based on an ARMA model
To address the difficulty of predicting the shock environment in the anti-shock design of shipboard equipment, a method is proposed that uses an autoregressive moving average (ARMA) model to predict the underwater explosion shock response spectrum, with a genetic algorithm introduced to optimize the ARMA model orders. The underwater explosion shock spectrum model for shipboard equipment and the ARMA model are introduced theoretically. The sample data needed for modeling are obtained by detrending and stationarizing the shock response signals produced by finite element simulation software; analysis of the statistical properties of the sample autocorrelation and partial autocorrelation functions verifies the feasibility of ARMA modeling. On this basis, the genetic-algorithm-optimized ARMA model is used to predict the underwater explosion shock response signal, and the prediction performance is evaluated by analyzing the prediction error. The results show that the ARMA model optimized by the genetic algorithm predicts the underwater explosion shock response signal well, thereby supporting the anti-shock design of shipboard equipment.
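A stripped-down sketch of the order-optimization idea: a tiny genetic loop over candidate (p, q) orders, with AIC standing in as the fitness (the paper ties fitness to shock-spectrum prediction error, and the synthetic series below replaces the detrended, stationarized simulation output).

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
e = rng.standard_normal(1_500)
x = np.convolve(e, [1.0, 0.5], mode="valid")      # synthetic stationarized signal

def fitness(p, q):
    try:
        return ARIMA(x, order=(int(p), 0, int(q))).fit().aic
    except Exception:
        return np.inf                             # penalize non-converging orders

pop = [(rng.integers(1, 6), rng.integers(0, 4)) for _ in range(6)]
for gen in range(5):
    pop.sort(key=lambda pq: fitness(*pq))
    parents = pop[:3]                             # selection: keep the best half
    children = [(max(1, p + rng.integers(-1, 2)), max(0, q + rng.integers(-1, 2)))
                for p, q in parents]              # mutation of the orders
    pop = parents + children
print("best (p, q):", min(pop, key=lambda pq: fitness(*pq)))
```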

14.
Building on an analysis of the respective strengths and weaknesses of the weighted average method, the fuzzy comprehensive evaluation method, TOPSIS, and grey relational analysis, and to overcome the limitations of any single evaluation method, a combined evaluation scheme for engineering materials selection is established by applying rank-sum theory and mode theory to the four multi-criteria evaluation methods above. Taking the selection of a material for a cryogenic storage tank as an example, eight evaluation indices are chosen from functional and economic perspectives, the index weights for ten candidate materials are obtained by the analytic hierarchy process, and the combined evaluation scheme is applied. The results show that full-hard type 301 stainless steel is the best material for the cryogenic storage tank, which agrees with objective reality, and the ranking produced by the combined scheme is better than that of any single evaluation method. Using the combined evaluation scheme for materials selection in engineering design helps compensate for the shortcomings of single evaluation methods and is a powerful tool for engineering materials selection decisions.
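Once each single method has produced a ranking of the candidate materials, the rank-sum part of the combination is straightforward; a minimal sketch with invented rankings (the paper's mode-theory tie-handling is omitted):

```python
import numpy as np

materials = ["301 full-hard", "304", "316L", "Al 5083"]
# Rank of each material (1 = best) under four hypothetical single methods:
# weighted average, fuzzy comprehensive, TOPSIS, grey relational analysis
ranks = np.array([[1, 1, 2, 1],
                  [2, 3, 1, 2],
                  [3, 2, 3, 4],
                  [4, 4, 4, 3]])
rank_sum = ranks.sum(axis=1)                  # rank-sum (Borda-like) aggregation
for i in np.argsort(rank_sum):
    print(materials[i], rank_sum[i])          # smallest sum = best combined rank
```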

15.
In the field of information security, coreference resolution of entities remains an understudied problem. A hybrid method is proposed to address it. The work consists of two parts: the first extracts all candidates (noun phrases, pronouns, entities, and nested phrases) from a given document and classifies them; the second performs coreference resolution on the selected candidates. In the first part, a method combining rules with a deep learning model (Dictionary BiLSTM-Attention-CRF, or DBAC) is proposed to extract and classify all candidates in the text. The DBAC model introduces a domain-dictionary matching mechanism and derives new features for words and their contexts from the dictionary. In this way, full use is made of the entities and entity-type information contained in the domain dictionary, which helps with the recognition of both rare and long entities. In the second part, candidates are divided by part of speech into pronoun candidates and noun-phrase candidates; coreference resolution is handled by hand-crafted rules for the pronoun candidates and by machine learning for the noun-phrase candidates. Finally, a dataset of information security texts is created to evaluate the methods. The experimental results show that the proposed model outperforms the baseline models.
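The dictionary-matching mechanism can be pictured as tagging tokens with BIO-style type features wherever a domain-dictionary entry matches, before the neural model sees them; a toy sketch with an invented dictionary and sentence:

```python
# Toy domain dictionary: surface form -> entity type (illustrative entries only)
domain_dict = {"sql injection": "ATTACK",
               "apache struts": "SOFTWARE",
               "cve-2017-5638": "VULN_ID"}

def dict_features(tokens):
    """BIO-style dictionary features, matching longest dictionary entries first."""
    feats = ["O"] * len(tokens)
    for i in range(len(tokens)):
        for j in range(len(tokens), i, -1):       # longest span starting at i wins
            span = " ".join(tokens[i:j]).lower()
            if span in domain_dict and feats[i] == "O":
                etype = domain_dict[span]
                feats[i] = "B-" + etype
                feats[i + 1:j] = ["I-" + etype] * (j - i - 1)
                break
    return feats

tokens = "The SQL injection exploited Apache Struts via CVE-2017-5638 .".split()
print(list(zip(tokens, dict_features(tokens))))
```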

16.
Do lead time constraints merely force a re-think and re-optimisation of inventory positioning along the supply chain, or can they affect the design of the supply chain itself? To answer this question, we integrate lead time constraints into a multi-echelon supply chain design model, taking on the difficulty of combining in one model the long-term decisions (facility location, supplier selection) with the midterm decisions (inventory placement and replenishment, delivery lead time). The model guarantees that the quoted lead time associated with each customer order is respected and that the different stocks (raw materials, intermediate and final products) at the different stages of the supply chain are replenished between any pair of consecutive orders. We use the model to investigate the impact of the quoted lead time and the customer's order frequency on supply chain design decisions and costs. Some of our results indicate that lead time constraints can bring the manufacturing and distribution sites close to the demand zone and lead to the selection of local suppliers in spite of their higher cost.
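A toy version of the coupling between design and lead-time decisions, written as a mixed-integer program with scipy.optimize.milp (SciPy 1.9+); all costs and lead times are invented. Assignments that would violate a customer's quoted lead time are forbidden, which here forces a second, closer site to open despite its fixed cost.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: [y0, y1, x00, x01, x10, x11]; y_i opens site i, x_ij serves customer j from i
fixed = np.array([10.0, 6.0])                  # fixed cost of opening each site
ship  = np.array([[1.0, 4.0],
                  [5.0, 1.5]])                 # shipping cost site i -> customer j
lead  = np.array([[2, 6],
                  [7, 3]])                     # delivery lead time (days)
quoted = np.array([4, 4])                      # quoted lead time per customer

c = np.concatenate([fixed, ship.ravel()])
ub = np.ones(6)
ub[2:] = (lead <= quoted).ravel().astype(float)  # forbid quote-breaking assignments

A_assign = np.zeros((2, 6))
A_assign[0, [2, 4]] = 1                        # customer 0 served exactly once
A_assign[1, [3, 5]] = 1                        # customer 1 served exactly once
A_link = np.zeros((4, 6))                      # x_ij - y_i <= 0: only open sites serve
for i in range(2):
    for j in range(2):
        A_link[2 * i + j, 2 + 2 * i + j] = 1
        A_link[2 * i + j, i] = -1

res = milp(c, integrality=np.ones(6),
           bounds=Bounds(np.zeros(6), ub),
           constraints=[LinearConstraint(A_assign, 1, 1),
                        LinearConstraint(A_link, -np.inf, 0)])
print(res.x)   # tight quotes open both local sites; loose quotes would open only one
```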

17.
This paper addresses an advanced manufacturing technology selection problem by proposing a new common-weight multi-criteria decision-making (MCDM) approach within the evaluation framework of data envelopment analysis (DEA). We improve existing technology selection models with a new mathematical formulation that simplifies the calculation and extends applicability to general situations with multiple inputs and multiple outputs. Further, an algorithm based on mixed-integer linear programming and dichotomy is provided to solve the proposed model. Compared with previous approaches to technology selection, ours makes several new contributions. First, it guarantees that exactly one decision-making unit (DMU) (a technology) is evaluated as efficient and selected as the best performer while maximising the minimum efficiency over all DMUs. Second, the number of mixed-integer linear programs to solve is independent of the number of candidates. In addition, it guarantees the uniqueness of the final optimal set of common weights. Two benchmark instances are used to compare the proposed approach with existing ones, and a computational experiment with randomly generated instances further shows that the approach suits situations with large datasets.
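The dichotomy in such an algorithm can be sketched as a bisection on the guaranteed minimum efficiency t: for fixed t, asking whether common weights (u, v) exist with t <= efficiency_j <= 1 for every DMU is a linear feasibility problem. The data below are invented, and the paper's mixed-integer tie-breaking that enforces a unique winner is omitted.

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0]])   # inputs of 3 DMUs (technologies)
Y = np.array([[5.0], [6.0], [4.0]])                  # outputs

def feasible(t, eps=1e-6):
    """Do common weights (u, v) exist with t <= eff_j <= 1 for every DMU j?"""
    nu, nv = Y.shape[1], X.shape[1]
    A = np.vstack([np.hstack([-Y, t * X]),           # u.y_j >= t * v.x_j
                   np.hstack([Y, -X])])              # u.y_j <= v.x_j (efficiency <= 1)
    res = linprog(np.zeros(nu + nv), A_ub=A, b_ub=np.zeros(len(A)),
                  A_eq=[[0.0] * nu + [1.0] * nv], b_eq=[1.0],  # normalize sum(v) = 1
                  bounds=[(eps, None)] * (nu + nv))
    return res.status == 0

lo, hi = 0.0, 1.0
for _ in range(30):                                  # dichotomy on min efficiency t
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print("max-min common-weight efficiency ~", lo)
```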

18.
Facts and fiction in spectral analysis
The analysis is limited to the spectral analysis of stationary stochastic processes with unknown spectral density. The main spectral estimation methods are parametric, with time series models, or nonparametric, with a windowed periodogram. A single time series model is chosen with a statistical criterion from three previously estimated and selected models: the best autoregressive (AR) model, the best moving average (MA) model, and the best combined ARMA model. The accuracy of the spectrum computed from this single selected time series model is compared with the accuracy of several windowed periodogram estimates. The time series model generally gives a spectrum that is better than the best possible windowed periodogram. It is a fact that a single good time series model can be selected automatically for stochastic data with unknown spectral density; it is fiction that objective choices between windowed periodograms can be made.

19.
Activity-based costing (ABC) was developed to address the deficiencies of traditional accounting systems in the modern business environment by helping managers understand product and customer profitability and identify priority areas for process improvement. In this study, ABC concepts are integrated with a mixed integer program (MIP) for order management and profitability analysis in the case of a firm facing demand in excess of capacity. The model considers unit-level, batch, and order-related costs within a mixed-integer programming formulation representing the firm's operating structure. Profit and service levels over a 20-period planning horizon are the performance criteria used for model evaluation, in a comparison with the results of a Theory of Constraints (TOC) formulation. The analysis of these competing models provides guidelines for applying order management models that consider production planning and profitability analysis simultaneously, helps managers understand product and customer profitability, and identifies priority areas for process improvement. The results indicate that the ABC-based model is more effective than the TOC-based formulation in increasing profitability and reducing inventory levels, making better use of overhead cost information in the selection of orders.

20.
Quasi-hitting sets and their application in model-based fault diagnosis
王巍, 李瀛. 《振动工程学报》 (Journal of Vibration Engineering), 1999, 12(3): 434-438
Model-based diagnostic reasoning, also known as deep-knowledge-based diagnostic reasoning, exploits deep knowledge of system structure and behavior. It overcomes the inherent drawback of traditional fault-diagnosis expert systems, namely their excessive reliance on expert experience, and has therefore attracted wide research interest. Model-based fault diagnosis usually proceeds in two steps: the first is domain-dependent conflict recognition, and the second is domain-independent candidate generation. This paper studies candidate generation in model-based fault diagnosis, introduces the concept of the quasi-hitting set, presents a method for computing minimal hitting sets from quasi-hitting sets, and proves the correctness of the method. On this basis, a recursive candidate-generation algorithm based on quasi-hitting sets is developed. Because the solution process involves only operations between two sets, the algorithm is simple and practical, greatly reduces the computational cost of diagnosis, and is easy to implement. For complex diagnosed systems in particular, the algorithm markedly improves diagnostic efficiency and can meet real-time requirements.
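For the domain-independent candidate-generation step, a compact brute-force reference implementation of minimal hitting sets over conflict sets (not the paper's quasi-hitting-set recursion, which avoids this enumeration); the conflict sets below are hypothetical:

```python
from itertools import combinations

def minimal_hitting_sets(conflicts):
    """All minimal sets that intersect every conflict set (diagnosis candidates)."""
    universe = sorted(set().union(*conflicts))
    found = []
    for size in range(1, len(universe) + 1):   # increasing size keeps results minimal
        for cand in combinations(universe, size):
            s = set(cand)
            if all(s & c for c in conflicts) and not any(f <= s for f in found):
                found.append(s)
    return found

# Conflict sets from a hypothetical model-based diagnosis over components c1..c4
conflicts = [{"c1", "c2"}, {"c2", "c3"}, {"c1", "c3", "c4"}]
print(minimal_hitting_sets(conflicts))
# -> {c1,c2}, {c1,c3}, {c2,c3}, {c2,c4}: each hits every conflict set
```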
