Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper describes computer-based heuristic models for Reliability and Maintainability (R&M) allocation for large systems. The model is an embellishment of the Maintenance Allocation Program (MAP) developed at McDonnell Douglas Corporation. The new version of MAP, known as REMAP (Reliability Embellished MAP), is a decision support tool for contractors involved in large-scale design projects such as aircraft design. REMAP allows the user to supply preliminary R&M information about the system and performs "what if" analyses on the requirements needed to meet the desired design reliability specification.

2.
A one-dimensional simulation procedure is developed for estimating structural reliability in multi-dimensional load and resistance space, with the loads represented as stochastic processes. The technique is based on the idea of using 'strips' of points parallel to each other and sampled on the limit state hyperplanes. The 'local' outcrossing rate and the zero-time failure probability Pf(0) associated with the narrow strips are derived using the conditional reliability index. When the domain boundary consists of a set of limit states, second-order bounds are used to obtain a lower-bound approximation of the outcrossing rate and Pf(0) associated with the union of a set of λ strips. Examples show that for high-reliability problems, λ may be much smaller than the number of limit states without significant loss of accuracy and with considerable savings in computation time. The simulations were also found to converge quite quickly even without importance sampling.

3.
Adaptive simulation for system reliability analysis of large structures (cited by 6: 0 self-citations, 6 by others)
This article proposes an efficient simulation-based methodology to estimate the system reliability of large structures. The proposed method uses a hybrid approach: first, a probabilistic enumeration technique is used to identify a significant system failure sequence. This provides an initial sampling domain for an adaptive importance sampling procedure. As further simulations are performed, information about other significant sequences is incorporated to refine the sampling domain and to estimate the failure probability of the system. The adaptive sampling overcomes the restrictive assumptions of analytical techniques, yet achieves the robustness and accuracy of basic Monte Carlo simulation in an efficient manner. In this article, the proposed method is implemented using the ANSYS finite element software, and is applied to the system reliability estimation of two redundant structural systems, a six-story building frame and a transmission tower. Both ductile and brittle failure modes are considered. The method is structured in a modular form such that it can be easily applied to different types of problems and commercial software, thus facilitating practical application.
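The article's adaptive importance sampling cannot be reproduced from the abstract alone, but the core idea of concentrating samples near a failure region can be illustrated with plain (non-adaptive) importance sampling centered on an assumed design point. All numbers below are illustrative; the limit state g(x) = beta - x with x ~ N(0,1) has the known failure probability Phi(-beta), which makes the estimate easy to check:

```python
import math
import random

random.seed(0)

def phi(z):
    """Standard normal CDF via erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

beta = 3.0      # reliability index of the limit state g(x) = beta - x (assumed)
n = 20000

# Sample from N(beta, 1), centered on the design point, and weight each
# failing sample by the density ratio f(x)/h(x).
est = 0.0
for _ in range(n):
    x = random.gauss(beta, 1.0)
    if beta - x < 0.0:                        # failure event g(x) < 0
        est += math.exp(-x * x / 2.0 + (x - beta) ** 2 / 2.0)
est /= n

exact = phi(-beta)                            # exact P(g < 0) for comparison
print(est, exact)
```

Shifting the sampling density to the design point makes failures common instead of rare, so far fewer samples are needed than with crude Monte Carlo for the same accuracy.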

4.
Kan Guangyuan, He Xiaoyan, Li Jiren, Ding Liuqian, Zhang Dawei, Lei Tianjie, Hong Yang, Liang Ke, Zuo Depeng, Bao Zhenxin, Zhang Mengjie 《Neural computing & applications》2018, 29(7): 577-593

An artificial neural network (ANN)-based data-driven model is an effective and robust tool for multi-input single-output (MISO) system simulation tasks. However, several difficulties degrade the performance of the ANN model, including the hard problems of topology design and parameter training, and the balance between simulation accuracy and generalization capability. To overcome these difficulties, this paper proposes a novel hybrid data-driven model named KEK. The KEK model couples the K-means method for input clustering, an ensemble back-propagation (BP) ANN for output estimation, and the K-nearest neighbor (KNN) method for output error estimation. A novel calibration method is also proposed for automatic and global calibration of the KEK model. To compare model performance, the ANN model, the KNN model, and the proposed KEK model were applied to two tasks: simulation of the Peak benchmark function and real-world daily total electricity load forecasting. The testing results indicate that the KEK model outperformed the other two models and showed very good simulation accuracy and generalization capability in MISO system simulation tasks.
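As a loose illustration of the KEK structure, the sketch below clusters scalar inputs with a tiny K-means and fits one regressor per cluster; a closed-form linear model stands in for the ensemble BP ANN, and the KNN error-correction stage is omitted. All data and values are invented for illustration:

```python
import random

random.seed(1)

def kmeans_1d(xs, k, iters=20):
    """Plain Lloyd iterations on scalar inputs."""
    centers = random.sample(xs, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda i: abs(x - centers[i]))].append(x)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def fit_linear(pts):
    """Closed-form 1-D least squares: returns slope and intercept."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# Toy data with two regimes: y = 2x below x = 5, y = 10 - x above.
data = [(x / 10.0, 2.0 * x / 10.0 if x < 50 else 10.0 - x / 10.0) for x in range(100)]
centers = kmeans_1d([x for x, _ in data], 2)

def cluster_of(x):
    return min(range(len(centers)), key=lambda i: abs(x - centers[i]))

models = {j: fit_linear([(x, y) for x, y in data if cluster_of(x) == j])
          for j in range(len(centers))}

def predict(x):
    a, b = models[cluster_of(x)]
    return a * x + b

print(predict(2.0))   # lower regime, so roughly 2 * 2.0
```

Clustering first lets each local model face a simpler, near-linear sub-problem, which is the motivation the abstract gives for the hybrid design.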


5.
A neural network model that processes financial input data is developed to estimate the market price of options at closing. The network's ability to estimate closing prices is compared to the Black-Scholes model, the most widely used model for the pricing of options. Comparisons reveal that the mean squared error for the neural network is less than that of the Black-Scholes model in about half of the cases examined. The differences and similarities in the two modeling approaches are discussed. The neural network, which uses the same financial data as the Black-Scholes model, requires no distribution assumptions and learns the relationships between the financial input data and the option price from the historical data. The option-valuation equilibrium model of Black-Scholes determines option prices under the assumptions that prices follow a continuous time path and that the instantaneous volatility is nonstochastic.
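For reference, the Black-Scholes benchmark the network is compared against is the standard closed-form European call price (a textbook implementation, not the paper's network; the inputs below are example values):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """European call under Black-Scholes: continuous price paths,
    constant (nonstochastic) volatility."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# At-the-money call: S = K = 100, one year, r = 5%, sigma = 20%.
print(round(bs_call(100, 100, 1.0, 0.05, 0.2), 4))   # ≈ 10.4506
```

The formula's distributional assumptions are exactly what the neural network avoids: it learns the input-to-price mapping from historical data instead.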

6.
Software cost estimation is an important concern for software managers and other software professionals. The hypothesized model in this research suggests that an organization's use of an estimate influences its estimating practices, which in turn influence both the basis of the estimating process and the accuracy of the estimate. The model also suggests that the estimating basis directly influences the accuracy of the estimate. A study of business information systems managers and professionals at 112 different organizations, using causal analysis with the Equations Modeling System (EQS), refined the model. The refined model shows that no managerial practice in this study discourages the use of intuition, guessing, and personal memory in cost estimating. Although user commitment and accountability appear to foster algorithm-based estimating, such an algorithmic basis does not portend greater accuracy. Only one managerial practice, the use of the estimate in performance evaluations of software managers and professionals, presages greater accuracy. By implication, the research suggests, somewhat ironically, that the most effective way to improve estimating accuracy may be to make estimators, developers, and managers more accountable for the estimate even though it may be impossible to direct them explicitly on how to produce a more accurate one.

7.
Mattel Toys is a leading toy manufacturing company, and Barbie is one of Mattel's major product lines. Mattel develops several variations of the toy every year, which requires many duplicate tools for standard figure parts as well as tools for parts similar to the standard parts. The tooling engineer estimates the types and numbers of tools to be built to support a given production rate. This estimate, represented in a tool plan, is prepared manually, giving rise to inaccuracies. A decision support system was developed to estimate the number of duplicate tools as well as the budgets to build them, and its accuracy was checked through a case study.

8.
Combat system effectiveness simulation (CoSES) plays an irreplaceable role in measuring the effectiveness of combat systems. Decades of research and practice have identified composable modeling and multi-domain modeling as the two major modeling requirements in CoSES. Current effectiveness simulation research attempts to cope with the structural and behavioral complexity of CoSES within a single unified technological space, and is thus limited by existing modeling paradigms and fails to meet these two requirements. In this work, we propose a model framework-based, domain-specific composable modeling method to solve this problem. The method builds a common model framework from application-invariant knowledge of CoSES, and designs domain-specific modeling infrastructures for subdomains as corresponding extension points of the framework to support the modeling of application-variant knowledge. The method therefore supports domain-specific modeling in multiple subdomains and the composition of subsystem models across subdomains on top of the model framework. A case study shows that the method raises the modeling abstraction level, supports generative modeling, and promotes model reuse and composability.
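The framework-plus-extension-point idea can be caricatured in a few lines: application-invariant composition logic over an abstract component interface, with subdomain models plugged in as extensions. All class names here are illustrative, not from the paper:

```python
from abc import ABC, abstractmethod

class ComponentModel(ABC):
    """Extension point of the common model framework."""
    @abstractmethod
    def step(self, t: float) -> str: ...

class RadarModel(ComponentModel):
    """A subdomain-specific model plugged into the framework."""
    def step(self, t):
        return f"radar scan at t={t}"

class MissileModel(ComponentModel):
    """Another subdomain-specific extension."""
    def step(self, t):
        return f"missile update at t={t}"

class SystemModel:
    """Application-invariant composition logic: advance every component."""
    def __init__(self, parts):
        self.parts = parts

    def step(self, t):
        return [p.step(t) for p in self.parts]

system = SystemModel([RadarModel(), MissileModel()])
print(system.step(0.0))
```

The framework never changes when a new subdomain is added; only a new `ComponentModel` subclass does, which is the composability the abstract claims.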

9.
Modeling the generation of a wind farm and its effect on power system reliability is a challenging task, largely due to the random behavior of the output power. In this paper, we propose a new probabilistic model for assessing the reliability of wind farms in a power system at hierarchical level II (HLII), using Monte Carlo simulation. The proposed model captures the effect of correlation between wind and load on reliability calculations. It can also be used to rank candidate points of the network for installing new wind farms so as to improve the reliability of the whole system. A simple grid at hierarchical level I (HLI) and a network in the north-eastern region of Iran are studied. Simulation results show that the correlation between wind and load significantly affects reliability.
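A minimal sketch of the correlation effect, assuming normally distributed load and a wind driver linked to it through a Gaussian copula. All capacities and distributions are invented for illustration; the paper's actual wind and load models are not given in the abstract:

```python
import math
import random

random.seed(2)

def lolp(rho, n=50000):
    """Loss-of-load probability with wind output correlated with load
    through a Gaussian copula (rho = correlation of the driving normals)."""
    conv = 95.0                                  # conventional capacity, MW (assumed)
    failures = 0
    for _ in range(n):
        z1 = random.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
        load = 90.0 + 10.0 * z1                  # load ~ N(90, 10) (assumed)
        wind = max(0.0, 10.0 + 5.0 * z2)         # wind farm output, MW (assumed)
        if load > conv + wind:
            failures += 1
    return failures / n

# When the wind blows hardest at peak load, shortfalls are rarer.
print(lolp(0.8), lolp(0.0))
```

The two printed values differ by several-fold, which is the abstract's point: ignoring the wind-load correlation materially distorts the reliability estimate.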

10.
11.
Structural and Multidisciplinary Optimization - Time-dependent global reliability sensitivity can quantify the effect of input variables in their whole distribution ranges on the time-dependent...

12.
13.
A prospective payment system based on diagnosis procedure combination (DPC/PPS) was introduced to acute care hospitals in Japan in April 2003. In order to increase hospital income, hospitals must shorten the average length of stay (ALOS) and increase the number of patients. We constructed a simulation program for evaluating the relationships among ALOS, bed occupation rate (BOR) and hospital income of hospitals in which DPC/PPS has been introduced. This program can precisely evaluate the hospital income by regulating the ALOS and the number of patients for each DPC. By using this program, it is possible to predict the optimum ALOS and optimum number of inpatients for each DPC in order to increase hospital income.
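The ALOS-income relationship can be illustrated with an assumed payment model that mixes a per-case fee with a per-diem component; with invented numbers (not the paper's DPC tariffs), shortening ALOS raises admissions and total income:

```python
def annual_income(beds, bor, alos, fee_per_case, per_diem):
    """Admissions = occupied bed-days / ALOS; income mixes a per-case fee
    with a per-diem component (all numbers below are invented)."""
    admissions = beds * 365 * bor / alos
    return admissions * (fee_per_case + per_diem * alos)

shorter = annual_income(beds=300, bor=0.85, alos=12.0,
                        fee_per_case=500_000, per_diem=20_000)
longer = annual_income(beds=300, bor=0.85, alos=16.0,
                       fee_per_case=500_000, per_diem=20_000)
print(shorter > longer)   # shorter stays admit more patients per bed
```

With BOR held fixed, income is proportional to fee_per_case / alos + per_diem, so it falls monotonically as ALOS grows; the real trade-off the program evaluates is that admissions cannot rise without patient demand.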

14.
Missingness frequently complicates the analysis of longitudinal data. A popular solution for incomplete longitudinal data is likelihood-based methods, for example with linear, generalized linear, or non-linear mixed models, owing to their validity under the assumption of missing at random (MAR). Semi-parametric methods such as generalized estimating equations (GEEs) offer another attractive approach but require the assumption of missing completely at random (MCAR). Weighted GEE (WGEE) has been proposed as an elegant way to ensure validity under MAR. Alternatively, multiple imputation (MI) can be used to pre-process incomplete data, after which GEE is applied (MI-GEE). Focusing on incomplete binary repeated measures, the two methods are compared using both asymptotic and small-sample simulations, in a variety of correctly and incorrectly specified models. In spite of the asymptotic unbiasedness of WGEE, the results provide striking evidence that MI-GEE is both less biased and more accurate at the small to moderate sample sizes which typically arise in clinical trials.

15.
16.
This paper develops a Monte Carlo simulation (MCS) approach to estimate the system reliability of a multistate manufacturing network with parallel production lines (MMN-PPL) considering finite buffer storage. System reliability is the probability that all workstations provide sufficient capacity to satisfy a specified demand and that the buffers possess adequate storage. The buffers are modeled as part of the network-structured MMN-PPL, and their storage usage is analyzed on this basis. MCS algorithms are developed to generate the capacity states and to check the storage usage of buffers, determining whether the demand can be satisfied. System reliability of the MMN-PPL is then estimated through simulation, with reasonable accuracy and runtime. Two practical examples, a tile manufacturing system and a touch-panel manufacturing system, show that system reliability is overestimated when buffer storage is assumed to be infinite. Demand satisfaction probability is further addressed to guide the choice of a proper production policy.
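A stripped-down version of the capacity-state sampling step, for a series line with assumed multistate capacity distributions. The buffer-storage bookkeeping that is central to the paper is omitted here, so this is only the "generate capacity state and check demand" half of the algorithm:

```python
import random

random.seed(3)

# Per-workstation multistate capacity distributions: (capacity, probability).
# All values are invented for illustration.
stations = [
    [(0, 0.05), (2, 0.15), (4, 0.80)],
    [(0, 0.02), (3, 0.18), (5, 0.80)],
    [(0, 0.05), (2, 0.10), (4, 0.85)],
]

def sample_capacity(states):
    """Draw one capacity from a discrete (capacity, probability) list."""
    u, acc = random.random(), 0.0
    for cap, p in states:
        acc += p
        if u <= acc:
            return cap
    return states[-1][0]

def system_reliability(demand, n=100_000):
    """P(every workstation can pass the demand); the bottleneck limits flow."""
    ok = sum(1 for _ in range(n)
             if min(sample_capacity(s) for s in stations) >= demand)
    return ok / n

# Exact value for demand 4 is 0.80 * 0.80 * 0.85 = 0.544.
print(system_reliability(demand=4))
```

Adding finite buffers, as the paper does, can only lower this estimate, which is why assuming infinite storage overestimates system reliability.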

17.
Zoran, Igor 《Data & Knowledge Engineering》2008, 67(3): 504-516
The paper compares different approaches to estimating the reliability of individual predictions in regression. We compare the sensitivity-based reliability estimates developed in our previous work with four approaches from the literature: variance of bagged models, local cross-validation, density estimation, and local modeling. By combining pairs of individual estimates, we compose a combined estimate that performs better than the individual estimates. We tested the estimates by running data from 28 domains through eight regression models: regression trees, linear regression, neural networks, bagging, support vector machines, locally weighted regression, random forests, and generalized additive models. The results demonstrate the potential of the sensitivity-based estimate, as well as of local modeling of the prediction error with regression trees. Among the tested approaches, the best average performance was achieved by the bagging variance approach, which performed best with neural networks, bagging, and locally weighted regression.
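The bagging-variance estimate that performed best can be sketched directly: train models on bootstrap replicates and use the spread of their predictions as a per-instance reliability score (toy data and linear base models, chosen for brevity rather than taken from the paper):

```python
import random

random.seed(4)

def fit_linear(pts):
    """Closed-form 1-D least squares: returns slope and intercept."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# Toy regression problem: y = 3x + noise, x observed only on [0, 19].
data = [(float(x), 3.0 * x + random.gauss(0.0, 1.0)) for x in range(20)]

# Bag of 30 models, each fit on a bootstrap replicate of the data.
models = [fit_linear([random.choice(data) for _ in data]) for _ in range(30)]

def reliability(x):
    """Variance of the bagged predictions: higher means less reliable."""
    preds = [a * x + b for a, b in models]
    m = sum(preds) / len(preds)
    return sum((p - m) ** 2 for p in preds) / len(preds)

# Predictions far outside the training range disagree more across the bag.
print(reliability(10.0) < reliability(40.0))
```

The score is model-agnostic: any regressor can replace `fit_linear`, which is why the approach could be evaluated across all eight model families in the study.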

18.
Thomas Most 《Computers & Structures》2011,89(17-18):1664-1672
In this paper, several methods for model assessment under uncertainty are discussed. Sensitivity analysis is performed to quantify the influence of the individual model input parameters. In addition to the well-known analysis of a single model, a new procedure is proposed for quantifying the influence of the model choice on the uncertainty of the model prediction. Furthermore, a procedure is presented that can be used to estimate the model framework uncertainty and that enables selection of the optimal model with the best compromise between model input and framework uncertainty. Finally, Bayesian methods for model selection are extended to model assessment without measurements, using model averaging as a reference.

19.
20.
Zero-Inertia Models (ZIMs), or Diffusion-Wave Models (DWMs), have been widely used in flood modelling in the last decade. In this work, an alternative formulation is proposed based on a new depth-positivity-preserving condition to solve the zero-inertia governing equation. The new condition does not use a flux limiter and is practical for flood simulations with wetting and drying over complex domain topographies. Two time stepping methods are considered and studied along with the proposed numerical model. The first one is based on the Courant-Friedrichs-Lewy (CFL) condition, which is widely used to control the time step for the explicit shallow water equation solvers; the second one is the adaptive time stepping (ATS) reported by Hunter et al. [1], which was specifically designed for a DWM. Numerical results and root-mean-square-error (RMSE) analysis show that the new model is able to provide stable and accurate solutions without the necessity for a flux limiter. Computational efficiency is significantly improved under the CFL constraint.
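The CFL constraint used for time-step control can be written down for a 1-D grid (depths and velocities below are illustrative; the ATS rule of Hunter et al. is not reproduced here):

```python
import math

def cfl_dt(depths, velocities, dx, courant=0.7, g=9.81):
    """Explicit shallow-water stability limit:
    dt <= C * dx / max(|u| + sqrt(g * h)) over all wet cells."""
    wave = max(abs(u) + math.sqrt(g * h) for h, u in zip(depths, velocities))
    return courant * dx / wave

h = [0.5, 1.0, 2.0, 1.5]     # water depths, m (illustrative)
u = [0.2, 0.5, 1.0, 0.3]     # flow velocities, m/s (illustrative)
print(cfl_dt(h, u, dx=10.0))
```

Because the limit scales linearly with dx rather than with dx squared (as ATS does for a diffusion-wave model), the CFL constraint typically allows much larger steps on fine grids, consistent with the efficiency gain reported above.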


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号