20 similar documents found (search time: 15 ms)
1.
The goal of this study is to present an efficient strategy for reliability analysis of multidisciplinary analysis systems. Existing methods perform the reliability analysis using nonlinear optimization techniques, mainly because they directly apply multidisciplinary design optimization (MDO) frameworks to the reliability analysis formulation. Accordingly, the reliability analysis and the multidisciplinary analysis (MDA) are tightly coupled in a single optimizer, which hampers the use of recursive and function-approximation-based reliability analysis methods such as the first-order reliability method (FORM). In order to implement an efficient reliability analysis method for multidisciplinary analysis systems, we propose a new strategy named the sequential approach to reliability analysis for multidisciplinary analysis systems (SARAM). In this approach, the reliability analysis and the MDA are decomposed and arranged sequentially, forming a recursive loop. The key features are as follows. First, by the nature of the recursive loop, the approach can use the efficient advanced first-order reliability method (AFORM), which converges quickly in many cases and requires only the value and the gradient of the limit-state function. Second, the decomposed architecture makes it possible to execute concurrent subsystem analyses for both the reliability analysis and the MDA; these concurrent subsystem analyses are conducted using the global sensitivity equation (GSE). The efficiency of the SARAM method was verified on two illustrative examples taken from the literature. Compared with existing methods, it required the fewest subsystem analyses while maintaining accuracy.
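At the heart of the approach is AFORM, which needs only limit-state values and gradients. Below is a minimal sketch of the underlying Hasofer-Lind/Rackwitz-Fiessler (HL-RF) recursion in standard normal space; it illustrates the kind of recursion SARAM can exploit, not the SARAM implementation itself, and the toy limit state is invented for illustration.

```python
import numpy as np

def aform_hlrf(g, grad_g, n_dim, tol=1e-6, max_iter=100):
    """HL-RF recursion: g and grad_g are the limit-state value and
    gradient in standard normal (u) space. Returns beta and the MPP."""
    u = np.zeros(n_dim)                  # start at the mean point
    for _ in range(max_iter):
        gv, gr = g(u), grad_g(u)
        # Project onto the linearized limit state g(u) = 0
        u_new = ((gr @ u - gv) / (gr @ gr)) * gr
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

# Toy linear limit state g(u) = 3 - u1 - u2 (exact beta = 3/sqrt(2))
beta, u_star = aform_hlrf(lambda u: 3 - u[0] - u[1],
                          lambda u: np.array([-1.0, -1.0]), n_dim=2)
print(beta)  # ~2.1213
```

For a linear limit state the recursion reaches the exact reliability index in a single step, which is why only values and gradients of the limit-state function are needed.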
2.
3.
It is difficult to model a distributed parameter system (DPS) because of its infinite-dimensional time/space nature and unknown nonlinear uncertainties, yet a low-dimensional, simple nonlinear model is often required for practical applications. In this paper, a spatio-temporal Volterra model with a series of spatio-temporal kernels is proposed for modeling unknown nonlinear DPSs. To estimate these kernels, they are expanded onto spatial and temporal bases with unknown coefficients. To reduce the model dimension and parametric complexity in the spatial domain, the Karhunen–Loève (KL) method is used to find the dominant spatial bases; to reduce the parametric complexity in the temporal domain, Laguerre polynomials are selected as the temporal bases. Using the Galerkin method, this spatio-temporal modeling then becomes a linear regression problem, and the unknown parameters can be estimated easily with the least-squares method in the temporal domain. After time/space synthesis, the spatio-temporal Volterra model is constructed. The convergence of the parameter estimation can be guaranteed under certain conditions. The model has a low-dimensional, simple nonlinear structure, which is useful for the prediction and control of the DPS. Simulations and an experiment demonstrate the effectiveness of the proposed modeling method.
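The spatial reduction step is standard enough to sketch: the dominant Karhunen–Loève bases are the leading left singular vectors of a snapshot matrix, after which the temporal coefficients follow from a linear least-squares fit. The snapshot data below is synthetic and the 99% energy threshold is an assumed choice; the Volterra kernel expansion itself is omitted.

```python
import numpy as np

# Synthetic spatio-temporal snapshots Y[space, time] of a DPS output
x = np.linspace(0, 1, 50)[:, None]          # 50 spatial points
t = np.linspace(0, 10, 200)[None, :]        # 200 time samples
Y = np.sin(np.pi * x) * np.cos(t) + 0.3 * np.sin(2*np.pi*x) * np.sin(2*t)

# Karhunen-Loeve: dominant spatial bases = leading left singular vectors
U, s, _ = np.linalg.svd(Y, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1
Phi = U[:, :k]                               # k dominant spatial bases

# Galerkin-style projection: temporal coefficients by least squares
A, *_ = np.linalg.lstsq(Phi, Y, rcond=None)  # A has shape (k, time)
print(k, np.linalg.norm(Y - Phi @ A) / np.linalg.norm(Y))
```

For this separable two-mode field, two bases reconstruct the snapshots to machine precision, which is the dimension reduction the KL step is after.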
4.
In this research we propose a novel method of face recognition based on texture and shape information. Age-invariant face recognition enables matching of an image obtained at a given point in time against an image of the same individual obtained earlier, and thus has important applications, notably in law enforcement. We investigate models built on different levels of data granularity: at the global level a model is built on training data that encompasses the entire set of available individuals; at the local level, data from homogeneous sub-populations is used; and at the individual level a personalized model is built for each individual. We narrow the search space by dividing the whole database into subspaces, improving recognition time. We use a two-phase process for age-invariant face recognition: in the first phase we identify the correct subspace using a probabilistic method, and in the second phase we find the probe image within that subspace. Finally, we use a decision-tree approach to combine models built from shape and texture features. Our empirical results show that the local and personalized models perform best on both Rank-1 accuracy and recognition time.
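As a rough illustration of the two-phase idea (narrowing to a subspace, then searching only within it), the sketch below stands in k-means centroid scores for the paper's probabilistic subspace model and random vectors for its shape/texture features; all names and data are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
gallery = rng.normal(size=(600, 64))      # stand-in face feature vectors
labels = np.arange(600)                   # one identity per gallery image

# Offline: partition the gallery into subspaces to shrink the search space
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(gallery)

def identify(probe):
    # Phase 1: choose the most plausible subspace (here: nearest centroid)
    c = km.predict(probe[None, :])[0]
    idx = np.where(km.labels_ == c)[0]
    # Phase 2: nearest-neighbour match inside that subspace only
    d = np.linalg.norm(gallery[idx] - probe, axis=1)
    return labels[idx[np.argmin(d)]]

print(identify(gallery[42]))  # -> 42
```

The payoff is that phase 2 scans roughly one tenth of the gallery, which is where the recognition-time improvement comes from.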
5.
Modeling of distributed parameter processes is challenging because of their complex spatio-temporal nature, nonlinearities, and uncertainties. In this study, a spatio-temporal Hammerstein modeling approach is proposed for nonlinear distributed parameter processes. First, the static nonlinear part and the distributed dynamical linear part of the Hammerstein model are expanded onto sets of spatial and temporal basis functions. To reduce the parametric complexity, the Karhunen–Loève decomposition is used to find the dominant spatial bases, with Laguerre polynomials selected as the temporal bases. Then, using the Galerkin method, the spatio-temporal modeling is reduced to a traditional temporal modeling problem, and the unknown parameters can be estimated easily using least-squares estimation and the singular value decomposition. In the presence of unmodeled dynamics, a multi-channel modeling framework is proposed to further improve the modeling performance. The convergence of the modeling can be guaranteed under certain conditions. Simulations show the effectiveness of this modeling method and its potential for a wide range of distributed processes.
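The final estimation step is concrete enough to sketch on a lumped (non-distributed) Hammerstein system: the bilinear parameters enter the output linearly, so least squares recovers their product matrix, and a rank-one SVD factorization separates the static nonlinearity from the linear dynamics up to a common scale. The FIR linear part and the noise-free data below are simplifying assumptions; the KL/Galerkin spatial reduction is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true = np.array([1.0, -0.5, 0.25])   # static nonlinearity f(u) = sum_i a_i u^i
g_true = np.array([0.8, 0.4, 0.1])     # FIR linear dynamics g

u = rng.uniform(-1, 1, 500)
f = sum(a * u**(i + 1) for i, a in enumerate(a_true))
y = np.convolve(f, g_true)[:len(u)]    # noise-free Hammerstein output

# The bilinear parameters Theta = g a^T enter the output linearly,
# so they can be estimated by ordinary least squares.
N, p, m = len(u), len(a_true), len(g_true)
Phi = np.array([[u[t - j]**(i + 1) for j in range(m) for i in range(p)]
                for t in range(m, N)])
theta, *_ = np.linalg.lstsq(Phi, y[m:], rcond=None)

# A rank-one SVD factorization separates g and a (up to a common scale).
U, s, Vt = np.linalg.svd(theta.reshape(m, p))
g_hat, a_hat = np.sqrt(s[0]) * U[:, 0], np.sqrt(s[0]) * Vt[0]
print(np.allclose(np.outer(g_hat, a_hat), np.outer(g_true, a_true)))
```

The scale ambiguity between the two parts is inherent to Hammerstein models; only the product g a^T is identifiable, which is exactly what the rank-one factorization recovers.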
6.
Ergonomics, 2012, 55(5-6): 499-512
When fault tree analysis (FTA) is used to analyse the reliability of a system that includes human functions, the correspondence between an abnormal event and the human reaction to it may become ambiguous. To address this shortcoming, a case study was conducted in this research to develop a new technique termed 'Corrective Operation Diagram Analysis', and an attempt was made to establish the correspondence between equipment/hardware reliability analysis using FTA and human reliability analysis.
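The FTA side of such an analysis comes down to propagating basic-event probabilities through AND/OR gates. A minimal sketch, assuming independent basic events and a hypothetical top event:

```python
from functools import reduce

def p_and(ps):   # all inputs must fail (independence assumed)
    return reduce(lambda a, b: a * b, ps)

def p_or(ps):    # any single input failing triggers the event
    return 1 - reduce(lambda a, b: a * (1 - b), ps, 1.0)

# Hypothetical top event: (pump fails AND backup fails) OR alarm missed
p_top = p_or([p_and([0.01, 0.05]), 0.002])
print(p_top)  # ~0.0025
```

The ambiguity the paper targets sits outside this arithmetic: deciding which human corrective operation corresponds to which abnormal event before any probability can be assigned.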
7.
The optimization procedure is one of the key techniques for addressing the computational and organizational complexities of multidisciplinary design optimization (MDO). Motivated by the idea of synthetically exploiting the advantages of multiple existing optimization procedures while complying with the general process of satellite system design optimization in the conceptual design phase, a multistage-multilevel MDO procedure, termed MDF-CSSO, is proposed in this paper by integrating multiple-discipline-feasible (MDF) and concurrent subspace optimization (CSSO). In the first stage, approximation surrogates of the high-fidelity disciplinary models are built independently by disciplinary specialists, based on which the single-level MDF procedure is used to quickly identify the promising region and roughly locate the optimum of the MDO problem. In the second stage, the disciplinary specialists further investigate and improve the baseline design obtained in the first stage using the high-fidelity disciplinary models. CSSO is used to organize the concurrent disciplinary optimization and system coordination so as to allow disciplinary autonomy. To enhance the reliability and robustness of the design under uncertainties, a probabilistic version of MDF-CSSO (PMDF-CSSO) is developed to solve uncertainty-based optimization problems. The effectiveness of the proposed methods is verified with one MDO benchmark test and one practical satellite conceptual design optimization problem, followed by concluding remarks and future research prospects.
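The inner workhorse of an MDF procedure is a full multidisciplinary analysis (MDA) at every optimizer iterate. A Gauss-Seidel MDA on the well-known two-discipline Sellar benchmark illustrates that loop; this is a generic MDF ingredient, not the MDF-CSSO code, and the starting values are arbitrary.

```python
import numpy as np

def mda_gauss_seidel(x, z1, z2, tol=1e-10, max_iter=100):
    """Fixed-point MDA for the two-discipline Sellar benchmark."""
    y1, y2 = 1.0, 1.0
    for _ in range(max_iter):
        y1_new = z1**2 + z2 + x - 0.2 * y2              # discipline 1
        y2_new = np.sqrt(max(y1_new, 0.0)) + z1 + z2    # discipline 2
        if abs(y1_new - y1) + abs(y2_new - y2) < tol:
            return y1_new, y2_new
        y1, y2 = y1_new, y2_new
    return y1, y2

# In MDF the optimizer sees objective/constraints only through a converged MDA
x, z1, z2 = 1.0, 5.0, 2.0
y1, y2 = mda_gauss_seidel(x, z1, z2)
f = x**2 + z2 + y1 + np.exp(-y2)   # Sellar objective at this design point
print(y1, y2, f)
```

Because every function evaluation hides a converged MDA, MDF is expensive but robust, which is why MDF-CSSO confines it to cheap surrogates in the first stage.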
8.
Wang Lei, Xiong Chuang, Wang Xiaojun, Liu Guanhua, Shi Qinghe. Structural and Multidisciplinary Optimization, 2019, 60(3): 1079-1095
To meet the rising demand for high reliability in complex multidisciplinary engineering systems, more attention has been paid to reliability-based...
9.
Yi Deng, Jiacun Wang, J. J. P. Tsai, K. Beznosov. IEEE Transactions on Knowledge and Data Engineering, 2003, 15(5): 1099-1119
Security system architecture governs the composition of components in security systems and the interactions between them. It plays a central role in the design of software security systems that ensure secure access to distributed resources in a networked environment. In particular, the composition of the system must consistently assure the security policies it is supposed to enforce. However, there is currently no rigorous and systematic way to predict and assure such critical properties in security system design. A systematic approach is introduced to address this problem. We present a methodology for modeling security system architecture and for verifying whether required security constraints are assured by the composition of the components. We introduce the concept of security constraint patterns, which formally specify the generic form of security policies that all implementations of the system architecture must enforce. The analysis of the architecture is driven by the propagation of the global security constraints onto the components in an incremental process. We show that our methodology is both flexible and scalable, and argue that it not only ensures the integrity of critical early design decisions but also provides a framework to guide correct implementations of the design. We demonstrate the methodology through a case study in which we model and analyze the architecture of the Resource Access Decision (RAD) Facility, an OMG standard for application-level authorization service.
10.
Yonathan Bard. Performance Evaluation, 1981, 1(4): 225-248
This paper describes an approach to system modeling based on heuristic mean value analysis. The virtues of the approach are conceptual simplicity and computational efficiency. The approach can be applied to a large variety of systems, and can handle features such as resource constraints, tightly and loosely coupled multiprocessors, distributed processing, and certain types of CPU priorities. Extensive validation results are presented, including truly predictive situations. The paper is intended primarily as a tutorial on the method and its applications, rather than as an exposition of research results.
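The recursion at the core of mean value analysis is only a few lines for a closed product-form network; heuristic MVA replaces the exact arrival-instant queue lengths with approximations but keeps this structure. A sketch of the exact version, with invented service times and visit ratios:

```python
def mva(service, visits, n_jobs):
    """Exact MVA for a closed network of FCFS queueing stations.

    service[k]: mean service time at station k; visits[k]: visit ratio.
    Returns system throughput and per-station mean queue lengths."""
    K = len(service)
    q = [0.0] * K                       # queue lengths with 0 jobs
    for n in range(1, n_jobs + 1):
        # Residence time: service stretched by the queue seen on arrival
        r = [visits[k] * service[k] * (1 + q[k]) for k in range(K)]
        x = n / sum(r)                  # throughput via Little's law
        q = [x * r[k] for k in range(K)]
    return x, q

x, q = mva(service=[0.02, 0.05], visits=[1.0, 2.0], n_jobs=10)
print(x, q)
```

The conceptual simplicity the abstract claims is visible here: one pass over population sizes, with Little's law closing the loop at each step.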
11.
Julita Vassileva. User Modeling and User-Adapted Interaction, 1996, 6(2-3): 185-223
The development of user-adaptive systems is of increasing importance for industrial applications. User modeling emerged from the need to represent knowledge about the user in the system, in order to allow informed decisions on how to adapt to the user's needs. Most of the research in this field, however, has been theoretical and top-down. Our approach, in contrast, was driven by the needs of the application and shows features of bottom-up, user-centered design. We have implemented a user modeling component supporting a task-based interface to a hypermedia information system for hospitals and tested it under realistic conditions. A new architecture for user modeling has been developed which focuses on the tasks performed by users. It allows adaptive browsing support for users with different levels of experience, as well as a degree of adaptability. The requirements analysis shows that the differences in the information needs of users with different levels of experience are not only quantitative but qualitative: experienced users are not only able to cope with a wider browsing space, but sometimes prefer to organize their search in a different way. That is why the user model and the interface of the system are designed to support a smooth transition between the access options provided to novice users and to expert users.
12.
Cann AP, Connolly M, Ruuska R, MacNeil M, Birmingham TB, Vandervoort AA, Callaghan JP. Ergonomics, 2008, 51(4): 556-572
Despite the ongoing health problem of repetitive strain injuries, few tools currently available for ergonomic evaluation of cumulative loading have well-documented evidence of reliability and validity. The purpose of this study was to determine the inter-rater reliability of a posture-matching-based analysis tool (3DMatch, University of Waterloo) for predicting cumulative and peak spinal loads. A total of 30 food service workers were each videotaped for a 1-h period while performing typical work activities, and a single work task was randomly selected from each for analysis by two raters. Inter-rater reliability was determined using intraclass correlation coefficients (ICC, model 2,1) and standard errors of measurement for cumulative and peak spinal and shoulder loading variables across all subjects. Overall, 85.5% of variables had moderate to excellent inter-rater reliability, with ICCs ranging from 0.30 to 0.99 for all cumulative and peak loading variables. 3DMatch was found to be a reliable ergonomic tool when more than one rater is involved.
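ICC model 2,1 is computed from the two-way random-effects ANOVA mean squares. A minimal numpy version of the Shrout-Fleiss formula follows, illustrated on the classic Shrout and Fleiss (1979) example ratings rather than this study's own data:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1) for an (n subjects x k raters) score matrix Y."""
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    ms_c = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
    ss_e = np.sum((Y - Y.mean(axis=1, keepdims=True)
                     - Y.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_e = ss_e / ((n - 1) * (k - 1))                            # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e
                            + k * (ms_c - ms_e) / n)

scores = np.array([[9., 2., 5., 8.],
                   [6., 1., 3., 2.],
                   [8., 4., 6., 8.],
                   [7., 1., 2., 6.],
                   [10., 5., 6., 9.],
                   [6., 2., 4., 7.]])   # Shrout & Fleiss (1979) example
print(icc_2_1(scores))  # ~0.29
```

Unlike ICC(3,1), this form treats raters as random and so penalizes systematic between-rater offsets, which is the appropriate choice when any pair of raters might use the tool.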
13.
Ergonomics, 2012, 55(4): 556-572
Despite the ongoing health problem of repetitive strain injuries, few tools currently available for ergonomic evaluation of cumulative loading have well-documented evidence of reliability and validity. The purpose of this study was to determine the inter-rater reliability of a posture-matching-based analysis tool (3DMatch, University of Waterloo) for predicting cumulative and peak spinal loads. A total of 30 food service workers were each videotaped for a 1-h period while performing typical work activities, and a single work task was randomly selected from each for analysis by two raters. Inter-rater reliability was determined using intraclass correlation coefficients (ICC, model 2,1) and standard errors of measurement for cumulative and peak spinal and shoulder loading variables across all subjects. Overall, 85.5% of variables had moderate to excellent inter-rater reliability, with ICCs ranging from 0.30 to 0.99 for all cumulative and peak loading variables. 3DMatch was found to be a reliable ergonomic tool when more than one rater is involved.
14.
André J. Torii, Rafael H. Lopez, Leandro F. F. Miguel. Structural and Multidisciplinary Optimization, 2016, 54(2): 317-332
Several papers are available in the literature on methods that decouple the reliability analysis and the structural optimization in order to solve RBDO problems. Most of them focus on strategies that employ the first-order reliability method (FORM) to approximate the reliability constraints. Despite all these developments, one limitation has prevailed: the lack of accuracy in the approximation of the reliability constraints due to the use of FORM. In this paper, a novel approach for RBDO is presented to overcome this limitation. The approach uses the concept of shifting vectors, originally developed in the context of sequential optimization and reliability assessment (SORA), but the shifting vectors are found and updated with a novel strategy. The resulting framework is able to use any technique for the reliability analysis stage, such as Monte Carlo simulation, second-order reliability methods, or stochastic polynomials. The proposed approach thus overcomes the aforementioned limitation of most RBDO decoupling techniques, which require FORM for the reliability analysis. Several examples are analyzed to show the effectiveness of the methodology, with a focus on examples that are poorly solved, or cannot be tackled at all, by FORM-based approaches, such as highly nonlinear limit-state functions containing a maximum operator, or problems with discrete random variables. It should be remarked that the proposed approach was not developed to be more computationally efficient than FORM-based RBDO decoupling strategies, but to allow the use of any, including more accurate, reliability analysis methods.
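The shifting-vector idea can be sketched in its classical SORA form (the paper's novel update rule is not reproduced here): after each deterministic optimization, an inverse-MPP search at the target reliability index yields a shift s, and the next deterministic cycle enforces g(d - s) >= 0. The linear limit state and independent normal variables below are assumptions chosen so the result can be checked analytically.

```python
import numpy as np
from scipy.optimize import minimize

def fd_grad(f, x, h=1e-6):
    return np.array([(f(x + h*e) - f(x - h*e)) / (2*h)
                     for e in np.eye(len(x))])

def g(x):                           # limit state: failure when g(x) < 0
    return x[0] + x[1] - 8.0

sigma, beta_t = np.array([1.0, 1.0]), 2.0   # X ~ N(d, sigma), target beta
s, d = np.zeros(2), np.array([5.0, 5.0])    # shifting vector, initial design

for cycle in range(5):
    # Deterministic optimization with the constraint shifted by s
    res = minimize(lambda d: d.sum(), d,
                   constraints={'type': 'ineq',
                                'fun': lambda d: g(d - s)})
    d = res.x
    # Reliability stage: inverse MPP at beta_t (AMV iteration); any
    # reliability method could be substituted here, which is the point
    u = np.zeros(2)
    for _ in range(20):
        grad = fd_grad(lambda uu: g(d + sigma * uu), u)
        u = -beta_t * grad / np.linalg.norm(grad)
    s = -sigma * u                  # shift = mean minus MPP, componentwise

print(d, g(d - s))                  # d1 + d2 -> 8 + 2*sqrt(2) ~ 10.83
```

Replacing the AMV block with, say, a Monte Carlo quantile estimate changes only the reliability stage, which mirrors the paper's argument that the decoupling need not be tied to FORM.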
15.
A new approach to fuzzy-neural system modeling
We develop simple but effective fuzzy-rule-based models of complex systems from input-output data. We introduce a simple fuzzy-neural network for modeling systems, and we prove that it can represent any continuous function over a compact set. We introduce "fuzzy curves" and use them to: 1) identify significant input variables, 2) determine model structure, and 3) set the initial weights in the fuzzy-neural network model. Our method for input identification is computationally simple and, since we determine the proper network structure and initial weights in advance, we can train the network rapidly. Viewing the network as a fuzzy model gives insight into the real system and provides a method to simplify the neural network.
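A fuzzy curve for an input traces the Gaussian-membership-weighted average of the outputs over that input's range; inputs whose curve is nearly flat are insignificant. A sketch of this construction, with the 20% membership width and the test data being assumed choices:

```python
import numpy as np

def fuzzy_curve_range(x, y, n_grid=100, width_frac=0.2):
    """Range of the fuzzy curve c(x); a larger range marks a more
    significant input variable."""
    b = width_frac * (x.max() - x.min())          # membership width
    grid = np.linspace(x.min(), x.max(), n_grid)
    mu = np.exp(-((grid[:, None] - x[None, :]) / b) ** 2)  # memberships
    c = (mu @ y) / mu.sum(axis=1)                 # defuzzified curve
    return c.max() - c.min()

rng = np.random.default_rng(3)
x1, x2 = rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300)
y = np.sin(2 * x1) + 0.01 * x2                    # x1 matters, x2 barely
print(fuzzy_curve_range(x1, y), fuzzy_curve_range(x2, y))
```

Ranking inputs by this single scalar is what makes the identification step cheap: no network training is needed before discarding irrelevant variables.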
16.
David J. Pate, Justin Gray, Brian J. German. Structural and Multidisciplinary Optimization, 2014, 49(5): 743-760
The formulation of multidisciplinary design, analysis, and optimization (MDAO) problems has become increasingly complex as the number of analysis tools and design variables included in typical studies has grown. This growth in the scale and scope of MDAO problems has been motivated by the need to incorporate additional disciplines and to expand the parametric design space to enable the exploration of unconventional design concepts. In this context, given a large set of disciplinary analysis tools, the problem of determining a feasible data flow between tools to produce a specified set of system-level outputs is combinatorially challenging. The difficulty is compounded in multi-fidelity problems, which are of increasing interest to the MDAO community. In this paper, we propose an approach for addressing this problem based on the formalism of graph theory. The approach begins by constructing the maximal connectivity graph (MCG) describing all possible interconnections between a set of analysis tools. Graph operations are then conducted to reduce the MCG to a fundamental problem graph (FPG) that describes the connectivity of analysis tools needed to solve a specified system-level design problem. The FPG does not predispose a particular solution procedure; any relevant MDO solution architecture could be selected to implement the optimization. Finally, the solution architecture can be represented in a problem solution graph (PSG). The graph approach is applied to an example problem based on a commercial aircraft MDAO study.
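The reduction from MCG to FPG can be caricatured as reachability pruning: keep only the tools whose outputs are transitively needed to produce the requested system-level outputs. The toy tool set below is invented, and the authors' graph operations are richer than this sketch:

```python
# Tools mapped to (inputs, outputs); a toy MCG for illustration only
tools = {
    'aero':    ({'geometry', 'mach'}, {'lift', 'drag'}),
    'weights': ({'geometry'},         {'mass'}),
    'range':   ({'drag', 'mass'},     {'range'}),
    'noise':   ({'lift'},             {'noise'}),   # irrelevant to 'range'
}

def reduce_to_fpg(tools, required_outputs):
    """Keep only tools whose outputs are (transitively) needed."""
    needed, keep = set(required_outputs), set()
    changed = True
    while changed:
        changed = False
        for name, (ins, outs) in tools.items():
            if name not in keep and outs & needed:
                keep.add(name)
                needed |= ins          # this tool's inputs become needed
                changed = True
    return keep

print(reduce_to_fpg(tools, {'range'}))  # {'aero', 'weights', 'range'}
```

Note that the pruned graph says nothing about execution order or coupling resolution; as in the paper, choosing an MDO architecture on top of it is a separate step.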
17.
S. S. Gokhale, Michael Rung-Tsong Lyu. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 31(8): 643-656
Structure-based techniques enable an analysis of the influence of individual components on the application reliability. In an effort to ensure analytical tractability, prevalent structure-based analysis techniques are based on assumptions which preclude the use of these techniques for reliability analysis during the testing and operational phases. In this paper, we develop simulation procedures to assess the impact of individual components on the reliability of an application in the presence of fault detection and repair strategies that may be employed during testing. We also develop simulation procedures to analyze the application reliability for various operational configurations. We illustrate the potential of simulation procedures using several examples. Based on the results of these examples, we provide novel insights into how testing and repair strategies can be tailored depending on the application structure to achieve the desired reliability in a cost-effective manner. We also discuss how the results could be used to explore alternative operational configurations of a software application taking into consideration the application structure so as to cause minimal interruption in the field.
18.
Xu Li, Wenkun Gao, Liangxian Gu, Chunlin Gong, Zhao Jing, Hua Su. Structural and Multidisciplinary Optimization, 2017, 56(5): 1077-1092
By coupling a low-fidelity (LF) model with high-fidelity (HF) samples, the variable-fidelity model (VFM) offers an efficient way to overcome the expensive-computation challenge in multidisciplinary design optimization (MDO). In this paper, a cooperative radial basis function (Co-RBF) method for the VFM is proposed by modifying the basis functions of RBF. The ordinary RBF method is constructed on the HF samples alone, whereas the Co-RBF method incorporates the entire information of the LF model with the HF samples: the LF model is regarded as a basis function of Co-RBF, and the HF samples are used to compute the Co-RBF model coefficients. Two numerical functions and three engineering problems are adopted to verify the proposed Co-RBF method. The predictive results of Co-RBF are compared with those of RBF and Co-Kriging, and show that the Co-RBF method improves the efficiency, accuracy, and robustness of existing VFMs.
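The key construction is easy to sketch: the LF model is appended as one extra basis function to the Gaussian radial bases centred at the HF samples, and the combined coefficients are fitted on the HF data. The sketch below uses the well-known Forrester one-dimensional multi-fidelity test pair; the kernel width and this exact formulation are assumptions, not necessarily the paper's.

```python
import numpy as np

def f_hi(x):  return (6*x - 2)**2 * np.sin(12*x - 4)      # expensive model
def f_lo(x):  return 0.5 * f_hi(x) + 10 * (x - 0.5) - 5   # cheap model

X = np.array([0.0, 0.4, 0.6, 1.0])       # few HF samples
y = f_hi(X)

def co_rbf_fit(X, y, f_lo, eps=2.0):
    # Basis matrix: Gaussian RBFs at the HF samples plus the LF model itself
    Phi = np.exp(-(eps * (X[:, None] - X[None, :]))**2)
    Phi = np.hstack([Phi, f_lo(X)[:, None]])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda xq: (np.exp(-(eps * (np.atleast_1d(xq)[:, None]
                                       - X[None, :]))**2) @ w[:-1]
                       + w[-1] * f_lo(np.atleast_1d(xq)))

model = co_rbf_fit(X, y, f_lo)
print(model(np.array([0.2, 0.8])), f_hi(np.array([0.2, 0.8])))
```

Because the LF model carries the global trend, the radial bases only have to correct local discrepancies, which is why few HF samples suffice.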
19.
Evidence theory employs a much more general and flexible framework for quantifying epistemic uncertainty, and has therefore recently been adopted for reliability analysis of engineering structures. However, the large computational cost caused by its discrete nature significantly limits its practicability. This paper proposes an efficient response surface (RS) method to evaluate structural reliability using evidence theory, and hence improves its applicability to engineering problems. A new design-of-experiments technique is developed, whose key issue is the search for the important control points: the intersections of the limit-state surface with the uncertainty domain, which contribute significantly to the accuracy of the subsequently established RS. Based on these points, a highly precise radial-basis-function RS approximating the actual limit-state surface is established. With the RS, the reliability interval can be computed efficiently for the structure. Four numerical examples are investigated to demonstrate the effectiveness of the proposed method.
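Once an RS surrogate of the limit state is available, evidence-theory reliability reduces to checking, focal element by focal element, whether the surrogate is safe over all, part, or none of the box, accumulating belief and plausibility. A sketch with invented focal elements, using corner evaluation for the bounds (valid only for a monotonic limit state; in general the extrema over each box must be found by optimization):

```python
import numpy as np
from itertools import product

def g(x):                         # limit state (the RS surrogate in practice)
    return x[0] + 2 * x[1] - 3.0  # safe when g >= 0

# Focal elements: interval boxes with basic probability assignments (BPA)
focal = [(((0.0, 1.0), (0.0, 1.0)), 0.3),
         (((1.0, 2.0), (0.5, 1.5)), 0.5),
         (((2.0, 3.0), (1.5, 2.5)), 0.2)]

bel = pl = 0.0
for box, m in focal:
    # Bounds of g over the box via corner evaluation (monotonic g assumed)
    vals = [g(np.array(c)) for c in product(*box)]
    g_min, g_max = min(vals), max(vals)
    if g_min >= 0:  bel += m      # entire box safe -> counts toward belief
    if g_max >= 0:  pl += m       # partly safe -> counts toward plausibility

print(bel, pl)                    # reliability interval [Bel, Pl]
```

The width of [Bel, Pl] reflects the epistemic uncertainty itself; it shrinks as focal elements are refined, not as more samples are drawn, which is the behaviour that distinguishes evidence theory from probabilistic reliability.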
20.
José Cortiñas Abrahantes, Tomasz Burzykowski. Computational Statistics & Data Analysis, 2010, 54(6): 1457-1466
The linear mixed-effects model has become a standard tool for the analysis of continuous hierarchical data such as repeated measures or data from meta-analyses. In certain situations, however, the model poses unavoidable computational problems. In the context of surrogate markers, this problem arises when an estimation- and prediction-based approach is used for the evaluation of surrogate endpoints: convergence problems can occur, mainly due to small between-trial variability or a small number of trials. A number of alternative, simplified strategies have been proposed and studied for normally distributed data, but no such study has been conducted for other types of endpoints. The aim here is to study whether such simplified strategies, which always ignore individual-level surrogacy, can also be applied when both the surrogate and the true endpoints are of failure-time type. Simulations show that the three simplified strategies produce biased estimates, especially when the strength of the individual-level association differs from the strength of the trial-level association. It is therefore recommended not to use simplified strategies when dealing with failure-time data, in contrast to the case of normally distributed data, for which they are recommended. A possible reason for this discrepancy is that ignoring the individual-level association influences the estimates of the mean-structure parameters, which in turn distorts the estimates of the trial-level association.