Similar Documents
20 similar documents found.
1.
Determining optimal conditions for protein storage while maintaining a high level of protein activity is an important question in pharmaceutical research. A designed experiment based on a space-filling design was conducted to understand the effects of factors affecting protein storage and to establish optimal storage conditions. Different model-selection strategies to identify important factors may lead to very different answers about optimal conditions. Uncertainty about which factors are important, or model uncertainty, can be a critical issue in decision-making. We use Bayesian variable selection methods for linear models to identify important variables in the protein storage data, while accounting for model uncertainty. We also use the Bayesian framework to build predictions based on a large family of models, rather than an individual model, and to evaluate the probability that certain candidate storage conditions are optimal.
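As a concrete illustration of prediction under model uncertainty, here is a minimal sketch (not the authors' code) that weights every subset of three hypothetical storage factors by the BIC approximation to its posterior model probability; the factor names and synthetic data are assumptions.

```python
# Bayesian model averaging over all subsets of candidate factors,
# using the BIC approximation to posterior model probabilities.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, names = 40, ["temp", "pH", "agitation"]      # hypothetical storage factors
X = rng.normal(size=(n, 3))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

def bic(cols):
    Xd = np.column_stack([np.ones(n), X[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    return n * np.log(rss / n) + Xd.shape[1] * np.log(n)

models, bics = [], []
for r in range(len(names) + 1):
    for s in itertools.combinations(range(len(names)), r):
        models.append(s)
        bics.append(bic(s))

# Posterior model probabilities under a uniform model prior.
w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()
for s, p in sorted(zip(models, w), key=lambda t: -t[1]):
    print([names[i] for i in s], round(p, 3))
```

Predictions averaged with these weights, rather than taken from the single best model, carry the model uncertainty through to the answer.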

2.
Grid-Based Heuristic Method for Multifactor Landfill Siting
Siting a landfill requires the processing of a large amount of spatial data, and processing spatial data manually is tedious. A geographical information system (GIS), although capable of handling spatial data in siting analyses, generally lacks an optimization function. Optimization models are available for use with a GIS, but they usually have difficulty finding the optimal site within a large area in acceptable computational time, and they are not directly usable with a raster-based GIS. To overcome these difficulties, this study developed a two-stage heuristic method. Multiple factors for landfill siting are considered, and a weighted sum is computed to evaluate the suitability of a candidate site. The method first finds areas with significantly high potential and then applies a previously developed mixed-integer programming model to locate the optimal site within those areas, significantly reducing the computational time required to solve a large siting problem. A case study demonstrates the effectiveness of the proposed method, and a comparison with the previously developed model is provided and discussed.
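The stage-one screening can be pictured in a few lines of NumPy; the raster layers, weights, and the 5% cutoff below are illustrative assumptions, and the stage-two mixed-integer optimization is omitted.

```python
# Stage 1 of a two-stage screen: weighted-sum suitability on raster layers,
# then keep only high-potential cells for the (omitted) MIP stage.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical factor rasters scaled to [0, 1]: land cost, groundwater depth, road distance.
layers = rng.random(size=(3, 100, 100))
weights = np.array([0.5, 0.3, 0.2])                 # assumed factor weights

suitability = np.tensordot(weights, layers, axes=1)  # weighted sum per cell
threshold = np.quantile(suitability, 0.95)           # keep top 5% of cells
candidates = np.argwhere(suitability >= threshold)   # (row, col) indices for stage 2
print(f"{len(candidates)} candidate cells passed to the optimization stage")
```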

3.
Computational modeling is being used increasingly in neuroscience. In deriving such models, inference issues such as model selection, model complexity, and model comparison must be addressed constantly. In this article we present briefly the Bayesian approach to inference. Under a simple set of commonsense axioms, there exists essentially a unique way of reasoning under uncertainty by assigning a degree of confidence to any hypothesis or model, given the available data and prior information. Such degrees of confidence must obey all the rules governing probabilities and can be updated accordingly as more data becomes available. While the Bayesian methodology can be applied to any type of model, as an example we outline its use for an important, and increasingly standard, class of models in computational neuroscience--compartmental models of single neurons. Inference issues are particularly relevant for these models: their parameter spaces are typically very large, neurophysiological and neuroanatomical data are still sparse, and probabilistic aspects are often ignored. As a tutorial, we demonstrate the Bayesian approach on a class of one-compartment models with varying numbers of conductances. We then apply Bayesian methods on a compartmental model of a real neuron to determine the optimal amount of noise to add to the model to give it a level of spike time variability comparable to that found in the real cell.

4.
A good understanding of environmental effects on structural modal properties is essential for the reliable performance of vibration-based damage diagnosis methods. In this paper, a method combining principal component analysis (PCA) and support vector regression (SVR) is proposed for modeling the temperature-caused variability of modal frequencies in structures instrumented with long-term monitoring systems. PCA is first applied to extract principal components from the measured temperatures for dimensionality reduction. The predominant feature vectors, together with the measured modal frequencies, are then fed into a support vector algorithm to formulate regression models that can account for the thermal inertia effect. The research focuses on proper selection of the hyperparameters to obtain SVR models with good generalization performance; a grid search with cross-validation and a heuristic method are used to determine the optimal hyperparameter values. The proposed method is compared with a method that trains SVR models directly on the measurement data and with multivariate linear regression (MLR), using long-term measurement data from a cable-stayed bridge. It is shown that PCA-compressed features make the training and validation of SVR models more efficient in terms of both accuracy and computational cost, and that the formulated SVR model generalizes much better than the MLR model. When continuously measured data are available, the SVR model that accounts for the thermal inertia effect achieves more accurate predictions than the one that does not.
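A hedged sketch of this pipeline using scikit-learn: PCA compression followed by an RBF-kernel SVR whose hyperparameters are chosen by grid search with cross-validation. The sensor count, synthetic data, and grid values are assumptions, not the paper's settings.

```python
# PCA-compressed temperature features feeding an SVR, with hyperparameters
# chosen by grid search and cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
T = rng.normal(size=(500, 20))                      # 20 temperature sensors (synthetic)
freq = 2.0 - 0.01 * T[:, :5].mean(axis=1) + rng.normal(scale=0.002, size=500)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=4)),                   # dimensionality reduction
    ("svr", SVR(kernel="rbf")),
])
grid = {"svr__C": [1, 10, 100], "svr__gamma": [0.01, 0.1, 1], "svr__epsilon": [1e-4, 1e-3]}
search = GridSearchCV(pipe, grid, cv=5, scoring="neg_mean_squared_error")
search.fit(T, freq)
print(search.best_params_, -search.best_score_)
```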

5.
Popular methods for fitting unidimensional item response theory (IRT) models to data assume that the latent variable is normally distributed in the population of respondents, but this can be unreasonable for some variables. Ramsay-curve IRT (RC-IRT) was developed to detect and correct for this nonnormality. The primary aims of this article are to introduce RC-IRT less technically than it has been described elsewhere; to evaluate RC-IRT for ordinal data via simulation, including new approaches for model selection; and to illustrate RC-IRT with empirical examples. The empirical examples demonstrate the utility of RC-IRT for real data, and the simulation study indicates that when the latent distribution is skewed, RC-IRT results can be more accurate than those based on the normal model. Along with a plot of candidate curves, the Hannan-Quinn criterion is recommended for model selection.
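The recommended Hannan-Quinn criterion is easy to state in code; the log-likelihoods and parameter counts below are hypothetical placeholders, not results from the article.

```python
# Hannan-Quinn criterion for comparing candidate latent-distribution models.
import numpy as np

def hannan_quinn(log_lik, k, n):
    """HQ = -2 log L + 2 k ln(ln n); smaller is better."""
    return -2.0 * log_lik + 2.0 * k * np.log(np.log(n))

# Hypothetical fits: normal model vs. Ramsay-curve models of increasing order.
fits = {"normal": (-5210.4, 30), "RC order 3": (-5198.7, 32), "RC order 4": (-5197.9, 33)}
n = 1000
for name, (ll, k) in fits.items():
    print(name, round(hannan_quinn(ll, k, n), 1))
```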

6.
The problem of crack detection has been studied by many researchers, and many methods of approaching the problem have been developed. To quantify the crack extent, most methods follow the model updating approach. This approach treats the crack location and extent as model parameters, which are then identified by minimizing the discrepancy between the modeled and the measured dynamic responses. Most methods following this approach focus on the detection of a single crack or multicracks in situations in which the number of cracks is known. The main objective of this paper is to address the crack detection problem in a general situation in which the number of cracks is not known in advance. The crack detection methodology proposed in this paper consists of two phases. In the first phase, different classes of models are employed to model the beam with different numbers of cracks, and the Bayesian model class selection method is then employed to identify the most plausible class of models based on the set of measured dynamic data in order to identify the number of cracks on the beam. In the second phase, the posterior (updated) probability density function of the crack locations and the corresponding extents is calculated using the Bayesian statistical framework. As a result, the uncertainties that may have been introduced by measurement noise and modeling error can be explicitly dealt with. The methodology proposed herein has been verified and demonstrated through a comprehensive series of numerical case studies, in which noisy data were generated by a Bernoulli–Euler beam with semirigid connections. The results of these studies show that the proposed methodology can correctly identify the number of cracks even when the crack extent is small. The effects of measurement noise, modeling error, and the complexity of the class of identification model on the crack detection results have also been studied and are discussed in this paper.
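Phase one can be sketched as Bayesian model class selection over the number of cracks. The snippet below uses a BIC-type asymptotic approximation to the log-evidence with made-up best-fit log-likelihoods; the paper's actual evidence computation is more involved.

```python
# Phase 1 sketch: rank model classes M_k (k cracks, 2 parameters each) by an
# asymptotic (BIC-type) approximation to the log-evidence.
import numpy as np

n_data = 120                                   # number of measured modal data points
# Hypothetical best-fit log-likelihoods for classes with k = 0..3 cracks.
log_lik = {0: -402.1, 1: -310.5, 2: -288.4, 3: -287.9}

log_ev = {k: ll - 0.5 * (2 * k) * np.log(n_data) for k, ll in log_lik.items()}
z = np.array(list(log_ev.values()))
post = np.exp(z - z.max()); post /= post.sum()  # posterior over classes, uniform prior
for k, p in zip(log_ev, post):
    print(f"{k} crack(s): P(M_k | D) = {p:.3f}")
```

The complexity penalty is what lets the method stop at the right number of cracks instead of always preferring more.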

7.
Emergency flood management is enhanced by models that can estimate the timing and location of flooding. Typically, flood routing and inundation prediction are accomplished with one-dimensional (1D) models, which have been the models of choice because they are computationally simple and quick. However, such models do not adequately represent the complex physical processes of shallow flows in the floodplain or in urban areas. Two-dimensional (2D) models based on the full hydrodynamic equations can represent these complex flow phenomena and are therefore recommended by the National Research Council for increased use in flood analysis studies; their major limitation is computational cost. Two-dimensional flood models are prime candidates for parallel computing, but traditional methods and equipment (e.g., the message-passing paradigm) require more complex code refactoring and hardware setup, and such hardware systems may not be available or accessible to modelers conducting flood analyses. This paper presents a 2D flood model that implements multithreading for use on now-prevalent multicore computers. This desktop parallel computing architecture has been shown to reduce computation time by a factor of 14 on a 16-processor computer and, when coupled with a wet-cell tracking algorithm, by a factor of as much as 310. These gains make high-fidelity flood modeling more feasible for inundation studies on readily available desktop computers.
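The wet-cell tracking idea, advancing only cells that are wet or adjacent to a wet cell, can be sketched with a toy diffusive update; this illustrates the bookkeeping, not the paper's hydrodynamic solver or its threading implementation.

```python
# Wet-cell tracking sketch: advance a 2D storage-cell flood model only on cells
# that are wet or border a wet cell, instead of sweeping the whole grid.
import numpy as np

ny, nx, dry_tol = 200, 200, 1e-6
depth = np.zeros((ny, nx)); depth[100, 100] = 2.0   # point source of flooding

def active_mask(depth):
    wet = depth > dry_tol
    act = wet.copy()                 # dilate the wet region by one cell
    act[1:, :] |= wet[:-1, :]; act[:-1, :] |= wet[1:, :]
    act[:, 1:] |= wet[:, :-1]; act[:, :-1] |= wet[:, 1:]
    return act

def step(depth, dt=0.1, k=0.5):
    """Toy diffusive exchange between neighbors, applied on active cells only."""
    act = active_mask(depth)
    # For brevity the Laplacian is formed globally (and np.roll wraps at edges);
    # the point is that the update touches active cells only.
    lap = (np.roll(depth, 1, 0) + np.roll(depth, -1, 0) +
           np.roll(depth, 1, 1) + np.roll(depth, -1, 1) - 4 * depth)
    flux = np.zeros_like(depth)
    flux[act] = k * lap[act]         # skip the dry interior entirely
    return np.clip(depth + dt * flux, 0.0, None)

for _ in range(100):
    depth = step(depth)
print("wet cells:", int((depth > dry_tol).sum()))
```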

8.
A mathematical model is developed to allow derivation of optimal treatment schedules for the radiotherapy of exponentially growing tumours. Preliminary calculations based on available data suggest that optimal schedules would in general be more protracted than conventional schedules and might achieve a significantly better tumour cell kill without causing excessive damage to normal connective tissue. The model is too simple, and the data inadequate, for the conclusions to be used as a guide to clinical practice at present. However, the analysis can be extended to more realistic models, which may be of clinical benefit when the appropriate data can be obtained.
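A minimal worked formulation of the type described, assuming linear-quadratic cell kill per fraction and exponential regrowth; this is a generic textbook model, not necessarily the paper's exact equations.

```latex
% Tumour cells after n fractions of dose d delivered over total time T,
% assuming exponential regrowth at rate \lambda and linear-quadratic kill:
N(T) = N_0 \exp\!\bigl(\lambda T - n(\alpha d + \beta d^2)\bigr).
% Protracting the schedule trades extra regrowth \lambda T against reduced
% normal-tissue damage; an optimal schedule maximizes the log cell kill
% n(\alpha d + \beta d^2) - \lambda T subject to a tolerance constraint.
```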

9.
Neural network (NN) models for time series forecasting were first used in economics. In this paper, NN models are applied to forecasting the settlement of chimney foundations, using data sets measured in the field. Seven models with different input series are developed to determine the optimal network structure; the model that uses the previous nine months' settlement values as input performs best. The analysis shows that the settlements predicted by this optimal model agree well with the field measurements. In addition, NN performance clearly improves as the number of data points in the input series increases, but the improvement plateaus once the input series reaches a certain length. This demonstrates that a time-series-based NN model can be successfully applied to predict foundation settlement.
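The lag-window setup is the heart of such a model. A minimal sketch, assuming a synthetic settlement record and a small scikit-learn network in place of the paper's architecture:

```python
# Settlement forecasting with a lag window: the previous p monthly settlement
# values are the network inputs, the next month's value is the target.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.arange(120)
settle = 50 * (1 - np.exp(-t / 40)) + rng.normal(scale=0.3, size=t.size)  # synthetic record

def make_windows(series, p):
    X = np.array([series[i:i + p] for i in range(len(series) - p)])
    return X, series[p:]

p = 9                                            # nine previous months, as in the paper
X, y = make_windows(settle, p)
X_tr, y_tr, X_te, y_te = X[:90], y[:90], X[90:], y[90:]
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
print("test RMSE:", np.sqrt(np.mean((net.predict(X_te) - y_te) ** 2)))
```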

10.
Selecting an optimal project delivery system is a critical task for owners seeking to ensure project success, and the selection is a complex decision-making process. The complexity arises from uncertain or ill-defined parameters and from the multicriteria structure of such decisions. In this study, a decision-aid model using the analytic hierarchy process (AHP) coupled with rough-approximation concepts is developed to assist owners. The selection criteria are determined by studying a number of benchmarks, and the model ranks the alternative delivery systems by considering both the benchmark results and the owner's opinion. In the interval AHP, an optimization procedure obtains upper and lower linear programming models to determine interval priorities for the alternative project delivery systems. When alternatives are incomparable, as is likely in uncertain decision making, the model uses rough-set-based measures to reduce the decision criteria to a subset that can fully rank the alternatives. A real-world case study demonstrates the applicability and usefulness of the methodology.
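The crisp-AHP core of such a model, priority weights from a pairwise comparison matrix plus a consistency check, looks like this; the judgments are hypothetical, and the paper's interval extension and rough-set reduction are not reproduced.

```python
# Crisp-AHP core: priority weights from a pairwise comparison matrix via the
# principal eigenvector, plus the consistency ratio.
import numpy as np

# Hypothetical owner judgments on three criteria (cost, schedule, risk),
# Saaty 1-9 scale; A[i, j] = importance of criterion i relative to j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
w = np.abs(vecs[:, i].real); w /= w.sum()          # priority vector

n = A.shape[0]
ci = (vals.real[i] - n) / (n - 1)                  # consistency index
cr = ci / 0.58                                     # random index RI = 0.58 for n = 3
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```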

11.
This paper presents a newly developed simulation-based approach to Bayesian model updating, model class selection, and model averaging called the transitional Markov chain Monte Carlo (TMCMC) approach. The idea behind TMCMC is to avoid sampling directly from difficult target probability density functions (PDFs) by instead sampling from a series of intermediate PDFs that converge to the target PDF and are easier to sample from. The approach is motivated by the adaptive Metropolis–Hastings method developed by Beck and Au in 2002 and is based on Markov chain Monte Carlo. It is shown that TMCMC can draw samples from several kinds of difficult PDFs (e.g., multimodal PDFs, very peaked PDFs, and PDFs with flat manifolds). TMCMC can also estimate the evidence of the chosen probabilistic model class conditioned on the measured data, a key component of Bayesian model class selection and model averaging. Three examples demonstrate the effectiveness of the TMCMC approach in Bayesian model updating, model class selection, and model averaging.
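A compact sketch of the TMCMC recipe on a toy bimodal target: each stage picks the next tempering exponent so the weight coefficient of variation stays near 1, reweights and resamples, applies one Metropolis-Hastings move per chain, and accumulates the evidence from the stage-mean weights. The COV target and proposal scale are tuning assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_lik(th):
    # Bimodal toy likelihood: two narrow Gaussians at +/-2.
    return np.logaddexp(-0.5 * ((th - 2) / 0.3) ** 2,
                        -0.5 * ((th + 2) / 0.3) ** 2)

N = 2000
th = rng.uniform(-5, 5, N)                  # samples from a U(-5, 5) prior
beta, log_ev = 0.0, 0.0
while beta < 1.0:
    ll = log_lik(th)

    def cov_w(b):                           # weight COV for tempering step beta -> b
        w = np.exp((b - beta) * (ll - ll.max()))
        return w.std() / w.mean()

    if cov_w(1.0) <= 1.0:
        new_beta = 1.0
    else:                                   # bisect so the weight COV is about 1
        lo, hi = beta, 1.0
        for _ in range(40):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if cov_w(mid) <= 1.0 else (lo, mid)
        new_beta = lo
    dll = (new_beta - beta) * ll
    log_ev += np.logaddexp.reduce(dll) - np.log(N)   # stage evidence increment
    w = np.exp(dll - dll.max()); w /= w.sum()
    th = th[rng.choice(N, N, p=w)]          # resample according to the weights
    prop = th + rng.normal(scale=max(th.std(), 1e-3), size=N)  # one MH move per chain
    ok = (np.abs(prop) < 5) & (np.log(rng.random(N)) <
                               new_beta * (log_lik(prop) - log_lik(th)))
    th = np.where(ok, prop, th)
    beta = new_beta

print("log-evidence estimate:", round(log_ev, 3))
```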

12.
Bouc–Wen class models have been widely used to efficiently describe smooth hysteretic behavior in time history and random vibration analyses. This paper proposes a generalized Bouc–Wen model with sufficient flexibility in shape control to describe highly asymmetric hysteresis loops. Also introduced is a mathematical relation between the shape-control parameters and the slopes of the hysteresis loops, so that the model parameters can be identified systematically in conjunction with available parameter identification methods. For use in nonlinear random vibration analysis by the equivalent linearization method, closed-form expressions are derived for the coefficients of the equivalent linear system in terms of the second moments of the response quantities. As an example application, the proposed model is successfully fitted to the highly asymmetric hysteresis loops obtained in laboratory experiments for flexible connectors used in electrical substations. The model is then employed to investigate the effect of dynamic interaction between interconnected electrical substation equipment by nonlinear time-history and random vibration analyses.
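For reference, the classical symmetric Bouc-Wen law integrated under an imposed sinusoidal displacement; the paper's generalized model adds asymmetric shape-control terms that are omitted here, and all parameter values are illustrative.

```python
# Classical (symmetric) Bouc-Wen hysteresis: the auxiliary variable z obeys
# dz/dt = xdot * (A - (beta*sign(xdot*z) + gamma)*|z|^n).
import numpy as np
from scipy.integrate import solve_ivp

A, beta, gamma, n_exp = 1.0, 0.5, 0.5, 1.0        # standard Bouc-Wen parameters
alpha, k = 0.1, 1.0                                # post-to-pre yield stiffness ratio

x = lambda t: np.sin(t)                            # imposed displacement history
xdot = lambda t: np.cos(t)

def dz(t, z):
    zd = xdot(t)
    return zd * (A - (beta * np.sign(zd * z) + gamma) * np.abs(z) ** n_exp)

sol = solve_ivp(dz, (0, 6 * np.pi), [0.0], max_step=0.01)
t, z = sol.t, sol.y[0]
force = alpha * k * x(t) + (1 - alpha) * k * z     # total restoring force
print("peak hysteretic force:", round(force.max(), 3))
```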

13.
The set of statistical methods available to developmentalists is continually expanding, allowing questions about change over time to be addressed in new, informative ways. Indeed, new developments in methods for modeling change over time make new research questions possible. Latent transition analysis, a longitudinal extension of latent class analysis, can be used to model development in discrete latent variables, for example stage processes, across two or more time points. The current article illustrates the approach using a new SAS procedure, PROC LTA, to model change over time in adolescent and young adult dating and sexual risk behavior. Gender differences are examined, and substance use behaviors are included as predictors of initial status in dating and sexual risk behavior and of transitions over time.

14.
Civil engineering graduates need to be competent in hydraulic theory, as well as in the application of that theory to the solution of practical problems. Teachers of hydraulic design are faced with the dilemma that most realistic hydraulics problems are too complex to solve by hand, while most commercially available software packages obscure the theoretical background of their algorithms. Equation solvers provide a valuable tool for bridging these gaps. Students can develop an appropriate linear or nonlinear mathematical model to depict a realistic system, then use an equation solver package to solve that model for any combination of input data desired. Computer-based studio classrooms further enhance the learning experience by allowing students to solve problems under the instructor's supervision during class periods. This paper describes how effective equation solvers and the studio classroom can be in teaching hydraulic design for water distribution systems and open-channel flow. The theory is developed in class through printed notes. Students then develop the nonlinear mathematical model for a simple example, solve the model using an equation solver, and check the correctness of the solution. Students can investigate the dynamic response and the sensitivity of the model by varying the input values given to the equation solver. Next they apply the theory and solution methods to a practical applications exercise. The final step is to complete a comprehensive, realistic design problem. Students are required to present their results to the class at all stages of the process. Course-end evaluation scores have risen significantly since the class was converted to the studio format. Student comments indicate that they consider equation solvers a valuable engineering design tool, not only for learning but in professional practice as well. The instructor has observed that students learn and retain the theory much better when they can apply it immediately to realistic problems. Much more realistic and sophisticated quizzes can be given when students have computers available to assist with the analysis.
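The kind of exercise described maps directly onto a numerical equation solver. A sketch using SciPy in place of a commercial solver package: normal depth of a trapezoidal channel from Manning's equation, with assumed design values.

```python
# Solve Manning's equation Q = (1/n) A R^(2/3) S^(1/2) for the normal depth
# of a trapezoidal channel (SI units).
from scipy.optimize import brentq

Q, n, S = 15.0, 0.015, 0.001     # design flow (m^3/s), Manning n, bed slope
b, m = 4.0, 2.0                  # bottom width (m), side slope m:1 (H:V)

def manning_residual(y):
    area = (b + m * y) * y                       # flow area
    perim = b + 2 * y * (1 + m ** 2) ** 0.5      # wetted perimeter
    return (1 / n) * area * (area / perim) ** (2 / 3) * S ** 0.5 - Q

y_n = brentq(manning_residual, 1e-6, 10.0)       # normal depth bracketed in (0, 10] m
print(f"normal depth = {y_n:.3f} m")
```

Students can vary Q, n, or S and re-solve instantly, which is exactly the sensitivity exploration the studio format encourages.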

15.
The compression index is an important soil property that is essential to many geotechnical designs. Over the decades, a number of empirical correlations have been proposed to relate compressibility to other soil index properties, such as the liquid limit, plasticity index, in situ water content, void ratio, and specific gravity. The reliability, and thus the predictability, of these correlations has often been questioned. Moreover, selecting between simple and complicated models is a difficult task that often depends on subjective judgment: a more complicated model obviously fits the data better but does not necessarily offer an acceptable degree of robustness to measurement noise and modeling error. In the present study, the Bayesian probabilistic approach to model class selection is used to revisit the empirical multivariate linear regression formula for the compression index. The criterion for selecting the formula structure is the plausibility of a class of formulas conditional on the measurements, rather than the likelihood alone. The plausibility balances data-fitting capability against sensitivity to measurement and modeling error, which is quantified by the Ockham factor. The Bayesian method is applied to a data set of 795 records, including the compression index and other well-known geotechnical index properties of marine clay samples collected from various sites in South Korea. The correlation formula linking the compression index to the initial void ratio and liquid limit possesses the highest plausibility among a total of 18 candidate classes of formulas. The physical significance of this most plausible correlation is addressed; it is consistent with previous studies, and the Bayesian method provides confirmation from another angle.
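The flavor of the model-class comparison can be sketched on synthetic data, using BIC as a rough stand-in for the Ockham-factor-penalized plausibility the paper actually computes; the candidate formulas, coefficients, and noise level are assumptions.

```python
# Compare candidate compression-index formulas on synthetic data;
# BIC's complexity penalty plays the role of the Ockham factor here.
import numpy as np

rng = np.random.default_rng(5)
m = 300
e0 = rng.uniform(0.8, 2.5, m)                    # initial void ratio
LL = rng.uniform(30, 90, m)                      # liquid limit (%)
wn = rng.uniform(25, 80, m)                      # natural water content (%)
Cc = 0.01 + 0.25 * (e0 - 0.5) + 0.002 * LL + rng.normal(scale=0.03, size=m)

classes = {"e0": [e0], "LL": [LL], "wn": [wn],
           "e0+LL": [e0, LL], "e0+LL+wn": [e0, LL, wn]}

def bic(cols):
    Xd = np.column_stack([np.ones(m)] + cols)
    beta, *_ = np.linalg.lstsq(Xd, Cc, rcond=None)
    rss = np.sum((Cc - Xd @ beta) ** 2)
    return m * np.log(rss / m) + Xd.shape[1] * np.log(m)

for name, cols in sorted(classes.items(), key=lambda kv: bic(kv[1])):
    print(f"{name:10s} BIC = {bic(cols):8.1f}")
```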

16.
In epidemiological investigations that require estimating integrated exposures over long intervals spanning years or decades, the quantitative assignment of exposure levels by simplistic models may prove inadequate for most applications. This difficulty can be partially addressed by modifying the mathematical models used to predict the dispersion of emissions from pollution sources. A theoretical model based on the atmospheric dispersion of contaminants is proposed. While the development of the theoretical model is straightforward, its data requirements may impose some limitations. The methods developed to resolve or alleviate these limitations suggest that many currently used environmental exposure assignment techniques may be too crude to be of value; even the more sophisticated method proposed here can only be used with some reservations. Although several difficulties associated with environmental exposure estimation remain unresolved, careful and rigorous analysis of the available data, together with the method suggested here, can reduce exposure misclassification errors to acceptable levels. The quantitative estimates of the limitations are based on estimation procedures and aerometric data from hilly terrain, and thus represent a test of the method under extreme conditions.
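For orientation, the standard Gaussian-plume kernel that such dispersion-based exposure models build on; the sigma curves below are generic Briggs-type rural approximations, assumed here rather than taken from the paper.

```python
# Steady point-source Gaussian plume with ground reflection.
import numpy as np

def plume(x, y, z, Q=10.0, u=3.0, H=50.0):
    """Concentration (g/m^3) at (x, y, z) m downwind of a Q g/s stack."""
    sig_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)   # Briggs-type rural curves (assumed)
    sig_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    lateral = np.exp(-y ** 2 / (2 * sig_y ** 2))
    vertical = (np.exp(-(z - H) ** 2 / (2 * sig_z ** 2)) +
                np.exp(-(z + H) ** 2 / (2 * sig_z ** 2)))  # image-source reflection
    return Q / (2 * np.pi * u * sig_y * sig_z) * lateral * vertical

print(f"ground-level centreline at 1 km: {plume(1000.0, 0.0, 0.0):.2e} g/m^3")
```

Integrating such a kernel over years of meteorological records is what turns a dispersion model into a long-term exposure estimate.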

17.
The question of which structural equation model should be selected when multitrait-multimethod (MTMM) data are analyzed is of interest to many researchers. In the past, attempts to find a well-fitting model have often been data-driven and highly arbitrary. In the present article, the authors argue that the measurement design (the type of methods used) should guide the choice of the statistical model used to analyze the data. In this respect, the authors distinguish between (a) interchangeable methods, (b) structurally different methods, and (c) the combination of both kinds of methods, and they present an appropriate model for each type. All models allow measurement error to be separated from trait influences and trait-specific method effects. For interchangeable methods, a multilevel confirmatory factor model is presented; for structurally different methods, the correlated trait-correlated method minus one [CT-C(M-1)] model is recommended. Finally, the authors demonstrate how to appropriately analyze data from MTMM designs that simultaneously use interchangeable and structurally different methods. All models are applied to empirical data to illustrate their proper use, and some implications and guidelines for modeling MTMM data are discussed.

18.
In many dynamic analysis procedures the size of the problem is curtailed by truncating the set of modes selected for study. A preferable approach for a structure such as a large space frame is to identify and include the natural modes in order of increasing importance. Two of the many criteria that can be used to determine importance are examined in this paper. The first establishes a "completeness index" reflecting the modal identities satisfied by the natural frequencies and integrals of the mode shapes. The second involves displacements at a point of excitation and at a response point. Both selection methods are implemented on a simplified model of the Initial Operating Configuration Space Station, and comparative transient response analyses are made. It is shown that mode selection is a function of more than one variable: the first method performs well in reducing model error, yet the second is necessary to predict the modes excited by various forcing functions when control of displacements is the chief concern. Different mode selection criteria will be necessary when other variables are involved.
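The paper's completeness index is not reproduced here, but a related and widely used importance criterion, ranking modes by effective modal mass toward a given excitation, gives the flavor; the chain model below is an assumed toy structure.

```python
# Rank modes by effective modal mass toward a uniform excitation direction.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(6)
ndof = 8
M = np.diag(rng.uniform(1.0, 3.0, ndof))          # lumped mass matrix
k = rng.uniform(50.0, 150.0, ndof)                # inter-story stiffnesses (chain)
K = np.zeros((ndof, ndof))
for i in range(ndof):
    K[i, i] = k[i] + (k[i + 1] if i + 1 < ndof else 0.0)
    if i + 1 < ndof:
        K[i, i + 1] = K[i + 1, i] = -k[i + 1]

w2, phi = eigh(K, M)                              # modes are M-orthonormal
r = np.ones(ndof)                                 # influence vector (uniform excitation)
gamma = phi.T @ M @ r                             # modal participation factors
eff_mass = gamma ** 2                             # effective modal masses
order = np.argsort(eff_mass)[::-1]
print("modes ranked by importance:", order + 1)
print("cumulative mass fraction:", np.round(np.cumsum(eff_mass[order]) / M.trace(), 3))
```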

19.
Path tracking control of articulated vehicles is a key technology in mine automation, and the mathematical model and the path-tracking control method are two major research topics within it. On the modeling side, the classical no-slip kinematic model of an articulated vehicle is well suited as the reference model for low-speed path tracking control, whereas using a kinematic model with side slip as the reference model may aggravate the slip. In addition, the four-degree-of-freedom dynamic model of an articulated vehicle derived with the Newton-Euler method meets the needs of path tracking control in principle, but the current four-degree-of-freedom model cannot reflect transient and steady-state steering characteristics at the same time, a problem that remains to be solved. On the control side, methods without feedforward information, such as feedback linearization control, optimal control, and sliding-mode control, cannot effectively reduce the large errors that arise when an articulated vehicle tracks a reference path containing large curvature discontinuities. Feedforward-feedback control can address this problem, but when the reference path contains curvature discontinuities of different magnitudes the preview distance must be adjusted automatically. Model predictive control, and nonlinear model predictive control (NMPC) in particular, exploits feedforward information more effectively and needs no preview distance, and can therefore markedly improve tracking accuracy on reference paths with large curvature discontinuities. For NMPC-based path tracking control of articulated vehicles, three issues call for further research. First, the maximum tracking error still tends to grow as the reference speed increases. Second, current NMPC controllers use a kinematic model as the prediction model and therefore cannot counteract the loss of accuracy and safety caused by lateral velocity when the vehicle runs at higher reference speeds. Finally, the real-time performance of the method needs to be improved.
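A hedged sketch of NMPC path tracking on the classical no-slip kinematic model mentioned above (states x, y, front heading theta, articulation angle gamma; the control input is the articulation rate at constant speed). Geometry, weights, horizon, and the reference path are illustrative assumptions.

```python
# Receding-horizon NMPC on the no-slip articulated-vehicle kinematics:
# theta_dot = (v*sin(gamma) + l2*gamma_dot) / (l1*cos(gamma) + l2).
import numpy as np
from scipy.optimize import minimize

v, l1, l2, dt, Hp = 2.0, 1.2, 1.6, 0.1, 15        # speed, pivot offsets, step, horizon

def f(s, gdot):
    x, y, th, g = s
    thdot = (v * np.sin(g) + l2 * gdot) / (l1 * np.cos(g) + l2)
    return np.array([x + dt * v * np.cos(th), y + dt * v * np.sin(th),
                     th + dt * thdot, g + dt * gdot])

def ref(t):                                        # assumed reference path
    x = v * t
    return np.array([x, 0.5 * np.sin(0.3 * x)])

def cost(u, s0, t0):
    s, c = s0.copy(), 0.0
    for i, gdot in enumerate(u):                   # roll out the prediction model
        s = f(s, gdot)
        c += np.sum((s[:2] - ref(t0 + (i + 1) * dt)) ** 2) + 0.05 * gdot ** 2
    return c

s, t, u0 = np.array([0.0, 0.3, 0.0, 0.0]), 0.0, np.zeros(Hp)
for _ in range(50):                                # receding-horizon loop
    res = minimize(cost, u0, args=(s, t), bounds=[(-0.5, 0.5)] * Hp, method="SLSQP")
    s, t = f(s, res.x[0]), t + dt                  # apply the first input only
    u0 = np.r_[res.x[1:], 0.0]                     # warm start the next solve
print("final cross-track error:", round(abs(s[1] - ref(t)[1]), 4))
```

Because the whole horizon sees the upcoming path, no preview-distance parameter has to be tuned, which is the advantage the abstract attributes to NMPC.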

20.
The use of design/build (DB) contracting by transportation agencies has been steadily increasing as a project delivery system for large, complex highway projects. However, moving to DB from traditional design-bid-build procurement can be a challenge. One significant barrier is gaining acceptance of a best-value selection process in which the technical aspects of a proposal are evaluated separately and then combined with price to determine the winning proposal. These technical aspects consist mostly of qualitative criteria, leaving room for human error or bias, and any perceived bias or influence in the selection process can lead to public mistrust and protests by bidders. A rigorous quantitative analysis of the evaluation process is therefore important for determining whether bias exists and for eliminating it. The paper discusses two potential sources of bias in the DB selection process, the evaluators and the weighting model, and presents mathematical models to detect and remove these biases should they exist. A score normalization model deals with biases from the evaluators; a graphical weight-space volume model and a Monte Carlo statistical sampling model are then developed to remove biases from the weighting model. The models are tested and demonstrated using results from the DB bridge replacement project for the collapsed Mississippi River bridge of Interstate 35W in Minneapolis.
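Both debiasing steps can be sketched compactly: z-score normalization within each evaluator to cancel severity differences, then Monte Carlo sampling of criterion weights to test rank robustness. The score array, nominal weights, and Dirichlet concentration are assumptions, not the paper's models.

```python
# Two debiasing steps: evaluator score normalization, then Monte Carlo
# sampling of the weighting model to check whether the winner is robust.
import numpy as np

rng = np.random.default_rng(7)
# scores[e, p, c]: 4 evaluators x 3 proposals x 3 technical criteria (hypothetical)
scores = rng.uniform(60, 95, size=(4, 3, 3))

# 1) Normalize within each evaluator so harsh/lenient grading cancels out.
z = (scores - scores.mean(axis=(1, 2), keepdims=True)) / scores.std(axis=(1, 2), keepdims=True)
tech = z.mean(axis=0)                              # consensus (proposal x criterion)

# 2) Sample weight vectors from the simplex around the nominal weights.
nominal = np.array([0.5, 0.3, 0.2])
wins = np.zeros(3)
for _ in range(10_000):
    w = rng.dirichlet(nominal * 20)                # concentrated around the nominal mix
    wins[np.argmax(tech @ w)] += 1
print("win probability per proposal:", wins / wins.sum())
```

A winner whose win probability stays near 1 across sampled weights is insensitive to the weighting model, which is the robustness the paper's weight-space analysis targets.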
