2.
In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems under study consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly, so for reliability analysis and system optimization it is meaningful to treat component reliability estimates as random variables with associated estimation uncertainty. In this research, the system design process is formulated as a multiple-objective optimization problem: maximize an estimate of system reliability and minimize the variance of that reliability estimate. The two objectives are combined by penalizing the variance of prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with and without redundancy. For many design problems, multiple functionally equivalent software versions exhibit failure correlation even if they have been developed independently. The failure correlation may result from faults in the software specification, faults in a voting algorithm, and/or related faults shared by any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied to solve this optimization problem, and reasonable and interesting results are obtained and discussed.
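As an illustration of the penalized-variance objective described above, the sketch below combines a system reliability estimate with its propagated variance. It is a minimal sketch, assuming a plain series structure, statistically independent component estimates, and first-order (delta-method) variance propagation; the paper's N-Version Programming and Recovery Block models with failure correlation are richer, and all numbers here are hypothetical.

```python
import math

def penalized_series_objective(means, variances, penalty_weight):
    """Penalized objective: maximize E[R_sys] - w * Var[R_sys].

    First-order (delta-method) variance propagation for a series system,
    assuming independent component reliability estimates -- a simplified
    stand-in for the paper's correlated NVP/Recovery Block models.
    """
    r_hat = math.prod(means)  # point estimate of system reliability
    # For a series system dR/dr_i = R / r_i, so Var[R] ~ sum (R/r_i)^2 * Var[r_i]
    var_hat = sum((r_hat / m) ** 2 * v for m, v in zip(means, variances))
    return r_hat - penalty_weight * var_hat, r_hat, var_hat

# Hypothetical component estimates: (mean, variance) of each reliability estimate.
obj, r, v = penalized_series_objective(
    means=[0.99, 0.97, 0.95], variances=[1e-4, 4e-4, 9e-4], penalty_weight=10.0
)
print(f"R_hat={r:.4f}  Var={v:.2e}  penalized objective={obj:.4f}")
```

A genetic algorithm would then search over redundancy configurations using this penalized value as its fitness.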
3.
Optimization of system reliability in the presence of common cause failures
The redundancy allocation problem is formulated with the objective of maximizing system reliability in the presence of common cause failures. These failures are events that lead to the simultaneous failure of multiple components due to a shared cause; when they are considered, component failure times are no longer independent. This new formulation offers several distinct benefits compared to traditional formulations of the redundancy allocation problem. For some systems, recognizing common cause failure events is critical so that the overall reliability estimate and the associated design realistically reflect the true reliability behavior of the system. Since common cause failure events vary from one system to another, three different interpretations of the reliability estimation problem are presented. This is the first time that component mixing together with the inclusion of common cause failure events has been addressed in the redundancy allocation problem. Three non-linear optimization models are presented, and solutions to three different problem types are obtained. The results support the position that consideration of common cause failures leads to different and preferred “optimal” design strategies.
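To see why common cause failures change the preferred design, consider a minimal sketch of a 1-out-of-n parallel subsystem subject to a single common-cause event. The beta-factor-style split below is an illustrative assumption, not the paper's exact formulation:

```python
import math

def parallel_with_common_cause(component_rels, p_cc):
    """1-out-of-n parallel subsystem with a common-cause failure event.

    The subsystem survives only if the common-cause event does not occur
    (probability 1 - p_cc) AND at least one component survives its
    independent failure mode. Assumes the common-cause event is
    independent of the individual component failures.
    """
    at_least_one = 1.0 - math.prod(1.0 - r for r in component_rels)
    return (1.0 - p_cc) * at_least_one

# Redundancy shows diminishing returns once the common-cause risk dominates.
for n in (1, 2, 3, 4):
    print(n, round(parallel_with_common_cause([0.9] * n, p_cc=0.02), 5))
```

The printed values plateau near 1 - p_cc, which is why an optimizer that recognizes common cause failures may favor mixing component types over piling on identical redundancy.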
4.
A flexible procedure is described and demonstrated to determine approximate confidence intervals for system reliability when there is uncertainty regarding component reliability information. The approach is robust and applies to many system-design configurations and component time-to-failure distributions, resulting in few restrictions on the use of these confidence intervals. The methods do not require any parametric assumptions for component reliability or time-to-failure, and they allow type-I or type-II censored data records. The confidence intervals are based on the variance of the component and system reliability estimates and a lognormal distribution assumption for the system reliability estimate. This approach applies to any system design that can be decomposed into series and/or parallel connections between the components. To evaluate the validity of the confidence limits, numerous simulations were performed for two hypothetical systems with different data sample sizes and confidence levels. The test cases and empirical results demonstrate that this new method for estimating confidence intervals provides good coverage, can be readily applied, requires only minimal computational effort, and applies to a much greater range of design configurations and data types than other methods. For many design problems, these confidence intervals are preferable because there is no requirement for an exponential time-to-failure distribution, nor are component data limited to binomial data.
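A minimal sketch of one common lognormal-based construction is given below: the standard deviation of ln(R_hat) is approximated to first order from the variance of the reliability estimate. The exact procedure in the paper may differ, and the inputs are hypothetical.

```python
import math
from statistics import NormalDist

def lognormal_reliability_ci(r_hat, var_r, confidence=0.95):
    """Approximate two-sided confidence interval for system reliability,
    assuming the reliability estimate is lognormally distributed.

    sd[ln R_hat] is taken as sqrt(Var[R_hat]) / R_hat (first-order
    delta method) -- one common construction, shown for illustration.
    """
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    sigma_ln = math.sqrt(var_r) / r_hat
    lower = r_hat * math.exp(-z * sigma_ln)
    upper = min(1.0, r_hat * math.exp(z * sigma_ln))  # reliability cannot exceed 1
    return lower, upper

print(lognormal_reliability_ci(r_hat=0.95, var_r=4e-4))
```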
5.
An algorithm is presented that solves the redundancy-allocation problem when the objective is to maximize a lower percentile of the system time-to-failure distribution. The algorithm uses a genetic algorithm to search the prospective solution space and a bisection search as a function evaluator. Previously, the problem has most often been formulated to maximize system reliability. For many engineering-design problems, this new formulation is more appropriate because there is often no clearly defined mission time on which to base component and system reliability. Additionally, most system designers and users are risk-averse, and maximizing a lower percentile of the system time-to-failure distribution is a more conservative (less risky) strategy than maximizing the mean or median time-to-failure. Results from over 60 examples clearly indicate that the preferred system design is sensitive to the user's perceived risk. We infer from these results that engineering-design decisions need to consider risk explicitly, and that use of mean time-to-failure as a singular measure of product integrity is insufficient. Similarly, the use of system reliability as the principal performance measure is unwise unless mission time is clearly defined.
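The bisection "function evaluator" mentioned above can be sketched directly: for a candidate design with system reliability function R(t), the 100p-th percentile of the time-to-failure distribution is the t at which R(t) = 1 - p. A minimal sketch, assuming R is monotone decreasing in t; the example design and rates are hypothetical:

```python
import math

def ttf_percentile(reliability_fn, p, t_hi=1e6, tol=1e-6):
    """Bisection search for the 100p-th percentile of time-to-failure:
    the time t at which system reliability drops to 1 - p.
    Assumes reliability_fn is monotone decreasing in t.
    """
    target = 1.0 - p
    lo, hi = 0.0, t_hi
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if reliability_fn(mid) > target:
            lo = mid  # system still more reliable than the target; move right
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical design: two exponential components (rate 0.001/hr) in parallel.
R = lambda t: 1.0 - (1.0 - math.exp(-0.001 * t)) ** 2
print(ttf_percentile(R, p=0.10))  # 10th percentile, roughly 380 hours
```

A genetic algorithm then maximizes this percentile over candidate redundancy allocations.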
6.
A custom genetic algorithm was developed and implemented to solve multiple-objective multi-state reliability optimization design problems. Many real-world engineering design problems are multi-objective in nature, and several of them have various levels of system performance, ranging from perfectly functioning to completely failed. This multi-objective genetic algorithm uses the universal moment generating function approach to evaluate the different reliability or availability indices of the system. The components are characterized by different performance levels, cost, weight, and reliability. The solution to the multi-objective multi-state problem is a set of solutions, known as the Pareto front, from which the analyst may choose one solution for system implementation. Two illustrative examples are presented to show the performance of the algorithm; for both, the multi-objective formulation is the maximization of system availability and the minimization of both system cost and weight.
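The universal moment generating function (UGF) evaluation can be illustrated compactly: each component is a distribution over performance levels, and subsystems are combined with a structure operator. A minimal sketch with hypothetical components and performance levels:

```python
from itertools import product

def compose(u, v, op):
    """Combine two UGFs (dicts mapping performance level -> probability)
    with a structure operator: e.g., sum for parallel capacities,
    min for series flow. Equal resulting levels are collected."""
    out = {}
    for (g1, p1), (g2, p2) in product(u.items(), v.items()):
        g = op(g1, g2)
        out[g] = out.get(g, 0.0) + p1 * p2
    return out

def availability(ugf, demand):
    """Probability that system performance meets or exceeds the demand."""
    return sum(p for g, p in ugf.items() if g >= demand)

# Two hypothetical multi-state components (performance level: probability).
c1 = {0: 0.05, 50: 0.25, 100: 0.70}
c2 = {0: 0.10, 80: 0.90}
system = compose(c1, c2, op=lambda a, b: a + b)  # parallel: capacities add
print(availability(system, demand=100))
```

In the multi-objective genetic algorithm, this availability index would be one fitness component alongside cost and weight.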
7.
Multi-objective scheduling problems: Determination of pruned Pareto sets
There are often multiple competing objectives in industrial scheduling and production planning problems. Two practical methods are presented to efficiently identify promising solutions from among a Pareto optimal set for multi-objective scheduling problems. Generally, multi-objective optimization problems can be solved by combining the objectives into a single objective using equivalent cost conversions, utility theory, etc., or by determining a Pareto optimal set. Pareto optimal sets, or representative subsets, can be found using a multi-objective genetic algorithm or by other means. In practice, the decision maker ultimately has to select one solution from this set for system implementation. However, the Pareto optimal set is often large and cumbersome, making the post-Pareto analysis phase potentially difficult, especially as the number of objectives increases. Our research addresses this post-Pareto analysis phase, and two methods are presented to filter the Pareto optimal set down to a subset of promising or desirable solutions. The first method prunes using non-numerical objective-function ranking preferences. The second prunes using data clustering: the k-means algorithm is used to find clusters of similar solutions in the Pareto optimal set, leaving the decision maker with just k general solutions from which to choose. These methods are general, and they are demonstrated on two multi-objective problems: the scheduling of the bottleneck operation of a printed wiring board manufacturing line, and a more general scheduling problem.
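The clustering-based pruning can be sketched with scikit-learn's k-means: cluster the objective vectors of the Pareto optimal set and return the member of each cluster closest to its centroid. The min-max normalization and the toy two-objective data below are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

def prune_pareto(points, k):
    """Reduce a Pareto optimal set to k representative solutions:
    cluster with k-means, then pick each cluster's member nearest
    its centroid. Returns indices into `points`."""
    pts = np.asarray(points, dtype=float)
    span = np.ptp(pts, axis=0)
    norm = (pts - pts.min(axis=0)) / np.where(span == 0, 1.0, span)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(norm)
    reps = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(norm[members] - km.cluster_centers_[c], axis=1)
        reps.append(int(members[np.argmin(dists)]))
    return reps

# Hypothetical Pareto set for a scheduling problem: (makespan, total tardiness).
pareto = [(10, 5.0), (12, 4.1), (15, 3.0), (18, 2.6), (25, 1.2)]
print(prune_pareto(pareto, k=2))  # two representative schedules
```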
8.
A new approach to the electricity generation expansion problem is proposed to simultaneously minimize multiple objectives, such as cost and air emissions, including CO2 and NOx, over a long-term planning horizon. In this problem, system expansion decisions determine the type of power generation to build (coal, nuclear, wind, etc.), where the new generation asset should be located, and in which time period the expansion should take place. We find a Pareto front for the multi-objective generation expansion planning problem that explicitly considers the availability of system components over the planning horizon and operational dispatching decisions. Monte Carlo simulation is used to generate numerous scenarios based on component availabilities and anticipated energy demand. The problem is then formulated as a mixed integer linear program, and optimal solutions are found for the simulated scenarios with a combined objective function covering the multiple problem objectives. The different objectives are combined using dimensionless weights, and a Pareto front can be traced by varying these weights. The mathematical model is demonstrated on an example problem, with interesting results indicating how expansion decisions vary depending on whether minimizing cost or minimizing greenhouse gas emissions and pollutants is given higher priority.
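The weight-sweeping scalarization described above can be sketched in a few lines. Everything here is hypothetical (candidate plans, scales, weights); in the paper the decision variables live inside a mixed integer linear program evaluated over Monte Carlo scenarios, which this sketch does not reproduce:

```python
def weighted_objective(cost, co2, nox, weights, scales):
    """Dimensionless weighted sum of the objectives: each objective is
    divided by a reference scale before weighting, so the weights are
    comparable across units ($, kt CO2, kt NOx)."""
    w_cost, w_co2, w_nox = weights
    return (w_cost * cost / scales["cost"]
            + w_co2 * co2 / scales["co2"]
            + w_nox * nox / scales["nox"])

# Hypothetical expansion plans: (cost $M, CO2 kt/yr, NOx kt/yr).
plans = {
    "coal-heavy":   (900, 5200, 9.5),
    "gas+wind":     (1100, 2100, 4.0),
    "nuclear+wind": (1400, 600, 1.2),
}
scales = {"cost": 900.0, "co2": 600.0, "nox": 1.2}
for w_cost in (0.8, 0.5, 0.2):  # vary the weights to trace a Pareto front
    w = (w_cost, (1 - w_cost) / 2, (1 - w_cost) / 2)
    best = min(plans, key=lambda name: weighted_objective(*plans[name], w, scales))
    print(f"cost weight {w_cost:.1f}: choose {best}")
```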
9.
A solution methodology is described and demonstrated to determine optimal design configurations for nonrepairable series-parallel systems with cold-standby redundancy. This formulation considers non-constant component hazard functions and imperfect switching. The objective of the redundancy allocation problem is to select from available components and determine an optimal design configuration that maximizes system reliability. For cold-standby redundancy, other formulations have generally required exponential component time-to-failure and perfect-switching assumptions. Here, there are multiple component choices available for each subsystem, and component time-to-failure follows an Erlang distribution. Optimal solutions are determined using an equivalent problem formulation and integer programming. Compared to other available algorithms, the methodology presented here more accurately models many engineering design problems with cold-standby redundancy. Previously, it had been difficult to determine optimal solutions for this class of problems, or even to efficiently calculate system reliability. The methodology is successfully demonstrated on a large problem with 14 subsystems.
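For intuition, the special case of exponential component lifetimes admits a closed form: with one active unit, n cold spares, and a switch that succeeds with probability p at each failure, subsystem reliability is a switch-weighted Poisson (Erlang) sum. A minimal sketch of that simpler case; the paper's Erlang component lifetimes are more general, and the parameters below are hypothetical:

```python
import math

def cold_standby_reliability(rate, t, n_spares, p_switch):
    """Reliability at time t of a cold-standby subsystem: one active
    exponential component (failure rate `rate`), n_spares identical cold
    spares, and an imperfect switch that succeeds with probability
    p_switch at each failure. j counts failures absorbed by time t."""
    return sum(
        (p_switch ** j) * math.exp(-rate * t) * (rate * t) ** j / math.factorial(j)
        for j in range(n_spares + 1)
    )

# Hypothetical subsystem: rate 0.001/hr, two cold spares, 95%-reliable switch.
print(cold_standby_reliability(rate=0.001, t=1000.0, n_spares=2, p_switch=0.95))
```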
10.
Summary & Conclusions: This paper addresses system reliability optimization when component reliability estimates are treated as random variables with estimation uncertainty. System reliability optimization algorithms generally assume that component reliability values are known exactly, i.e., that they are deterministic; in practice, that is rarely the case. For risk-averse system design, the estimation uncertainty propagated from the component estimates may result in unacceptable estimation uncertainty at the system level. The system design problem is thus formulated with multiple objectives: (1) maximize the system reliability estimate, and (2) minimize its associated variance. This formulation of the reliability optimization problem is new, and the resulting solutions offer a unique perspective on system design. Once formulated in this manner, standard multiple-objective concepts, including Pareto optimality, are used to determine solutions. Pareto optimality is attractive for this type of problem because it gives decision-makers the flexibility to choose the best-compromise solution. Pareto optimal solutions were found by solving a series of weighted-objective problems with incrementally varied weights. Several sample systems are solved to demonstrate the approach: the first example is a hypothetical series-parallel system, and the second is the fault-tolerant distributed system architecture for a voice recognition system. The results indicate that significantly different designs are obtained when the formulation incorporates estimation uncertainty. If decision-makers are risk averse and wish to consider estimation uncertainty, previously available methodologies are likely to be inadequate.
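The Pareto-optimality concept used above reduces, for two objectives, to a simple dominance filter over candidate designs. A minimal sketch with hypothetical (reliability estimate, variance) pairs:

```python
def pareto_front(designs):
    """Non-dominated designs for (maximize reliability estimate,
    minimize its variance). A design is dominated if another design
    is at least as good in both objectives and strictly better in one."""
    front = []
    for i, (r1, v1) in enumerate(designs):
        dominated = any(
            r2 >= r1 and v2 <= v1 and (r2 > r1 or v2 < v1)
            for j, (r2, v2) in enumerate(designs)
            if j != i
        )
        if not dominated:
            front.append((r1, v1))
    return front

# Hypothetical candidate designs: (system reliability estimate, variance).
designs = [(0.990, 4e-4), (0.985, 1e-4), (0.992, 9e-4), (0.984, 2e-4)]
print(pareto_front(designs))  # (0.984, 2e-4) is dominated and dropped
```

Sweeping the weights in the combined objective then traces out this front one solution at a time.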