Similar Literature
1.
Bayesian inference has commonly been performed on nonlinear mixed effects models. However, there is a lack of research into performing Bayesian optimal design for nonlinear mixed effects models, especially those that require searches to be performed over several design variables. This is likely due to the fact that it is much more computationally intensive to perform optimal experimental design for nonlinear mixed effects models than it is to perform inference in the Bayesian framework. Fully Bayesian experimental designs for nonlinear mixed effects models are presented, which involve the use of simulation-based optimal design methods to search over both continuous and discrete design spaces. The design problem is to determine the optimal number of subjects and samples per subject, as well as the (near) optimal urine sampling times for a population pharmacokinetic study in horses, so that the population pharmacokinetic parameters can be precisely estimated, subject to cost constraints. The optimal sampling strategies, in terms of the number of subjects and the number of samples per subject, were found to be substantially different between the examples considered in this work, which highlights the fact that the designs are rather problem-dependent and can be addressed using the methods presented.
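The discrete part of such a design search can be illustrated with a toy model. The sketch below is not the paper's pharmacokinetic setting: it uses a simple random-intercept model y_ij = mu + b_i + e_ij, for which the Fisher information for mu has the closed form N*m / (sigma2 + m*tau2), and grid-searches the number of subjects N and samples per subject m under a cost constraint. All variances, costs, and the budget are invented numbers.

```python
import numpy as np

# Random-intercept model: b_i ~ N(0, tau2), e_ij ~ N(0, sigma2).
# Information for the population mean mu: N*m / (sigma2 + m*tau2).
sigma2, tau2 = 1.0, 0.5          # within- and between-subject variance
c_subject, c_sample = 10.0, 1.0  # cost per subject / per sample (made up)
budget = 200.0

best = None
for N in range(1, 21):           # number of subjects
    for m in range(1, 21):       # samples per subject
        cost = c_subject * N + c_sample * N * m
        if cost > budget:
            continue             # design violates the cost constraint
        info = N * m / (sigma2 + m * tau2)
        if best is None or info > best[0]:
            best = (info, N, m, cost)

info, N_opt, m_opt, cost = best
print(f"optimal design: {N_opt} subjects x {m_opt} samples, info={info:.2f}")
```

Even in this toy version, the optimum trades subjects against samples per subject: with strong between-subject variance, many subjects with few samples each beat few subjects with many samples.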

2.
3.
4.
Many design problems can be formulated in terms of a number of nonlinear equations and inequalities. A solution of such equations is called a design. In the present paper a method is given for the random generation of a large number of designs, which can be ordered according to some criterion. Using high-speed computer calculation, a large number of random designs is generated from which the designer can make a choice. A trend in selecting certain designs can be imposed: a requirement can be optimized, or an optimal compromise between two or more conflicting requirements can be determined. An arbitrary number of variables can be specified. An interactive program has been written and implemented on a number of interactive computer graphics systems. Using the graphic functions, search areas in which the optimum is expected to lie can be modified interactively, thus increasing the speed of convergence. Information about the equations can be given either interactively or in a file, thereby relieving the user from the task of programming. The program has been applied to the design of gear pairs, steam condensers, and bearings.
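The generate-filter-rank idea can be sketched in a few lines. The inequality constraints and the criterion below are invented placeholders, not the paper's gear-pair or bearing equations: draw many random candidate designs, keep the feasible ones, and sort them so the designer can choose from the top.

```python
import numpy as np

rng = np.random.default_rng(0)

def feasible(x):
    # placeholder design inequalities: x0*x1 >= 2 and x0 + x1 <= 5
    return x[0] * x[1] >= 2.0 and x[0] + x[1] <= 5.0

def criterion(x):
    # placeholder requirement to minimize, e.g. a weight-like quantity
    return x[0] ** 2 + 2.0 * x[1] ** 2

# random generation of a large number of candidate designs in a box
candidates = rng.uniform(0.5, 4.0, size=(5000, 2))
designs = [x for x in candidates if feasible(x)]
ranked = sorted(designs, key=criterion)   # designer picks from the top
best = ranked[0]
print("best random design:", best, "criterion:", criterion(best))
```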

5.
Pool-Based Unsupervised Active Learning for Linear Regression
刘子昂, 蒋雪, 伍冬睿. 《自动化学报》(Acta Automatica Sinica), 2021, 47(12): 2771-2783
In many real-world machine learning applications, large amounts of unlabeled data are easy to obtain, but labeling them costs considerable time and money. In such settings, one would like to select the most valuable samples for labeling, so that a good machine learning model can be trained from only a small amount of labeled data. Active learning has been widely applied to problems of this kind. However, most existing active learning approaches assume a supervised setting: an initial model is trained from a small number of labeled samples, new samples are queried based on the model, and the model is then iteratively updated. Active learning in the unsupervised setting, i.e., optimally selecting the initial training samples to label when no label information at all is available, has rarely been considered. The problem becomes more difficult in this setting, because no label information can be exploited. For this scenario, this paper studies pool-based unsupervised linear regression and proposes a new active learning approach that simultaneously considers three criteria: informativeness, representativeness, and diversity. Extensive experiments on three linear regression models (ridge regression, LASSO (least absolute shrinkage and selection operator), and linear support vector regression) and 12 datasets from different application domains validate its effectiveness.
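A heavily simplified version of unsupervised pool-based selection is sketched below. It keeps only two of the paper's three criteria: the first point is chosen for representativeness (closest to the pool centroid) and subsequent points greedily for diversity (maximize the minimum distance to the already-selected points); the pool data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
pool = rng.normal(size=(200, 3))   # unlabeled pool, 3 features
budget = 10                        # how many samples we can afford to label

# representativeness: start from the point closest to the pool centroid
centroid = pool.mean(axis=0)
selected = [int(np.argmin(np.linalg.norm(pool - centroid, axis=1)))]

# diversity: greedily add the point farthest from the selected set
while len(selected) < budget:
    d = np.linalg.norm(pool[:, None, :] - pool[selected][None, :, :], axis=2)
    min_dist = d.min(axis=1)       # distance to nearest selected point
    min_dist[selected] = -np.inf   # never re-select a point
    selected.append(int(np.argmax(min_dist)))

print("indices to label:", selected)
```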

6.
For the algorithmic construction of optimal experimental designs, it is important to be able to evaluate small modifications of given designs in terms of the optimality criteria at a low computational cost. This can be achieved by using powerful update formulas for the optimality criteria during the design construction. The derivation of such update formulas for evaluating the impact of changes to the levels of easy-to-change factors and hard-to-change factors in split-plot designs as well as the impact of a swap of points between blocks or whole plots in block designs or split-plot designs is described.
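The simplest instance of such an update formula is the matrix determinant lemma for the D-criterion: adding a run x to a design with information matrix M = X'X gives det(M + x x') = det(M) * (1 + x' M^{-1} x), so a candidate change can be scored without re-factorizing M. The check below uses random data for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(12, 4))      # current design: 12 runs, 4 parameters
x = rng.normal(size=4)            # candidate run to add

M = X.T @ X
Minv = np.linalg.inv(M)
updated_det = np.linalg.det(M) * (1.0 + x @ Minv @ x)   # cheap update
direct_det = np.linalg.det(M + np.outer(x, x))          # full recompute
print(updated_det, direct_det)
```

With M^{-1} maintained incrementally, the update costs O(p^2) per candidate instead of O(p^3), which is what makes exchange-type construction algorithms fast.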

7.
Effective design of parallel matrix multiplication algorithms relies on the consideration of many interdependent issues based on the underlying parallel machine or network upon which such algorithms will be implemented, as well as the type of methodology utilized by an algorithm. In this paper, we determine the parallel complexity of multiplying two (not necessarily square) matrices on parallel distributed-memory machines and/or networks. In other words, we provide an achievable parallel run-time that cannot be beaten by any algorithm (known or unknown) for solving this problem. In addition, any algorithm that claims to be optimal must attain this run-time. In order to obtain results that are general and useful throughout a span of machines, we base our results on the well-known LogP model. Furthermore, three important criteria must be considered in order to determine the running time of a parallel algorithm, namely: (i) local computational tasks, (ii) the initial data layout, and (iii) the communication schedule. We provide optimality results by first proving general lower bounds on parallel run-time. These lower bounds lead to significant insights on (i)–(iii) above. In particular, we identify what types of data layouts and communication schedules are needed in order to obtain optimal run-times. We prove that no one data layout can achieve optimal running times for all cases. Instead, optimal layouts depend on the dimensions of each matrix and on the number of processors. Lastly, optimal algorithms are provided.
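Why the data layout matters can be seen with standard order-of-magnitude communication counts (this is an illustration, not the paper's LogP bounds, and it ignores the L, o, g machine parameters): for square n x n matrices on p processors, a 1D row layout forces each processor to receive essentially all of B (~n^2 words), while a 2D block layout only needs one block row of A and one block column of B (~2*n^2/sqrt(p) words).

```python
import math

def words_1d(n, p):
    # 1D row layout: every processor needs all of B
    return n * n

def words_2d(n, p):
    # 2D block layout: one block row of A + one block column of B
    return 2 * n * n / math.sqrt(p)

n, p = 4096, 64
print("1D layout words/proc:", words_1d(n, p))
print("2D layout words/proc:", words_2d(n, p))
```

For rectangular matrices the comparison shifts with the matrix dimensions, which is the intuition behind the paper's result that no single layout is optimal for all cases.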

8.
Integral controllability is necessary and sufficient for a multivariable model to be usable in a decoupling controller with integral action that can be arbitrarily detuned without jeopardizing closed-loop robust stability. The design of experiments for identification of integral controllable models is challenging, because it must satisfy cumbersome eigenvalue inequalities involving a coupling between the real system and its model. To address this challenge, an optimization-based mathematical framework is developed that characterizes efficient identification experiments ensuring integral controllability. The proposed framework recovers well known experiment designs but also produces new ones of both theoretical and practical interest. Such designs are expressed either analytically or as a result of numerical optimization and are demonstrated in a number of examples. These designs can be easily implemented in industrial practice. By combining additional objectives or constraints of interest, the proposed framework can further serve as a basis for new experiment designs in future work.

9.
This paper presents some approaches to the optimal design of stacked-ply composite flywheels. The laminations of the disk are constructed such that the principal fiber direction is either tangential or radial. In this study, optimization problems are formulated to maximize the energy density of the flywheel. This is accomplished by allowing arbitrary, continuous variation of the orientation of the fibers in the radial plies. The paper compares designs based on minimizing cost functions related to (1) the maximum stress, (2) the maximum strain, and (3) the Tsai–Wu failure criterion. It is shown that the optimized designs provide an improvement in the flywheel energy density when compared to a standard stacked-ply design. The results also show that, for a given disk design, the estimate of the energy density can vary greatly depending on the failure criterion employed.

10.
Companies frequently decide on the location and design for new facilities in a sequential way. However, for a fixed number of new facilities, the company might be able to improve its profit by taking its decisions for all the facilities simultaneously. In this paper we compare three different strategies: simultaneous location and independent design of two facilities in the plane, the same with equal designs, and the sequential approach of determining each facility in turn. The basic model is profit maximization for the chain, taking market share, location costs and design costs into account. The market share captured by each facility depends on the distance to the customers (location) and its quality (design), through a probabilistic Huff-like model. Recent research on this type of models was aimed at finding global optima for a single new facility, holding quality fixed or variable, but no exact algorithm has been proposed to find optimal solutions for more than one facility. We develop such an exact interval branch-and-bound algorithm to solve both simultaneous location and design two-facility problems. Then, we present computational results and exhibit the differences in locations and qualities of the optimal solutions one may obtain by the sequential and simultaneous approaches.
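A minimal sketch of a Huff-like capture model shows how location and quality enter the objective together: each customer splits demand among facilities in proportion to an attraction term, here taken as quality / (1 + distance). The coordinates, qualities, and demand weights below are invented numbers, and real Huff models would also include competitor facilities.

```python
import numpy as np

customers = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
demand = np.array([10.0, 20.0, 15.0])
facilities = np.array([[1.0, 0.0], [3.0, 1.0]])   # locations (design vars)
quality = np.array([2.0, 1.0])                     # qualities (design vars)

# customer-to-facility distances, shape (customers, facilities)
dist = np.linalg.norm(customers[:, None, :] - facilities[None, :, :], axis=2)
attraction = quality[None, :] / (1.0 + dist)
share = attraction / attraction.sum(axis=1, keepdims=True)
captured = demand @ share          # demand captured by each facility
print("captured demand per facility:", captured)
```

Because `captured` is a smooth but non-convex function of the facility coordinates and qualities, global methods such as the paper's interval branch-and-bound are needed to certify optimality.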

11.
1 Introduction  Filters are widely used in many applications of signal processing, and filter design is an important research problem in many diverse application areas. The filters we usually refer to are temporal filters, which pass the frequency components of interest and attenuate the others. A spatial filter passes the signal radiating from a specific location and attenuates signals from other locations. The beamformer, which is widely used in radar, sonar, and wireless communications, is a kind of …

12.
In most industrial applications, only limited statistical information is available to describe the input uncertainty model due to expensive experimental testing costs. It would be unreliable to use the estimated input uncertainty model obtained from insufficient data for the design optimization. Furthermore, when input variables are correlated, we would obtain a non-optimum design if we assume that they are independent. In this paper, two methods for problems with a lack of input statistical information—possibility-based design optimization (PBDO) and reliability-based design optimization (RBDO) with confidence level on the input model—are compared using mathematical examples and an Abrams M1A1 tank roadarm example. The comparison study shows that PBDO could provide an unreliable optimum design when the number of samples is very small. In addition, PBDO provides an optimum design that is too conservative when the number of samples is relatively large. Furthermore, the obtained PBDO designs do not converge to the optimum design obtained using the true input distribution as the number of samples increases. On the other hand, RBDO with confidence level on the input model provides a conservative and reliable optimum design in a stable manner. The obtained RBDO designs converge to the optimum design obtained using the true input distribution as the number of samples increases.

13.
Design of optimal plans for environmental planning and management applications should ideally consider the multiple quantitative and qualitative criteria relevant to the problem. For example, in ground water monitoring design problems, qualitative criteria such as acceptable spatial extent and shape of the contaminant plume predicted from the monitored locations can be equally important as the typical quantitative criteria such as economic costs and contaminant prediction accuracy. Incorporation of qualitative criteria in the problem-solving process is typically done in one of two ways: (a) quantifying approximate representations of the qualitative criteria, which are then used as additional criteria during the optimization process, or (b) post-optimization analysis of designs by experts to evaluate the overall performance of the optimized designs with respect to the qualitative criteria. These approaches, however, may not adequately represent all of the relevant qualitative information that affect a human expert involved in design (e.g. engineers, stakeholders, regulators, etc.), and do not necessarily incorporate the effect of the expert's own learning process on the suitability of the final design. The Interactive Genetic Algorithm with Mixed Initiative Interaction (IGAMII) is a novel approach that addresses these limitations by using a collaborative human-computer search strategy to assist users in designing optimized solutions to their applications, while also learning about their problem. The algorithm adaptively learns from the expert's feedback, and explores multiple designs that meet her/his criteria using both the human expert and a simulated model of the expert's responses in a collaborative fashion. The algorithm provides an introspection-based learning framework for the human expert and uses the human's subjective confidence measures to adjust the optimization search process to the transient learning process of the user. 
This paper presents the design and testing of this computational framework, and the benefits of using this approach for solving groundwater monitoring design problems.

14.
Optimal Experiment Design (OED) is a well-developed concept for regression problems that are linear-in-the-parameters. In the case of experiment design to identify nonlinear Takagi-Sugeno (TS) models, non-model-based approaches or OED restricted to the local model parameters (assuming the partitioning to be given) have been proposed. In this article, a Fisher Information Matrix (FIM) based OED method is proposed that considers both local model and partition parameters. Due to the nonlinear model, the FIM depends on the model parameters that are the subject of the subsequent identification. To resolve this paradoxical situation, at first a model-free space filling design (such as Latin Hypercube Sampling) is carried out. The collected data permits making design decisions such as determining the number of local models and identifying the parameters of an initial TS model. This initial TS model permits a FIM-based OED, such that data is collected which is optimal for a TS model. The estimates of this first stage will in general not be ideal. To become robust against parameter mismatch, a sequential optimal design is applied. In this work the focus is on D-optimal designs. The proposed method is demonstrated for three nonlinear regression problems: an industrial axial compressor and two test functions.
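The two-stage idea can be illustrated with a model that is linear in its parameters (regressors [1, u, u^2]) instead of a TS model: start from a small space-filling design, then greedily add the candidate input that most increases det(X'X), i.e. the D-criterion. This is a hedged simplification; the article's method applies the FIM to local model and partition parameters.

```python
import numpy as np

def regressors(u):
    return np.array([1.0, u, u * u])

candidates = np.linspace(-1.0, 1.0, 21)
chosen = [-1.0, 0.0, 1.0]                    # stage 1: space-filling start
X = np.vstack([regressors(u) for u in chosen])
for _ in range(3):                           # stage 2: greedy D-optimal adds
    best_u, best_det = None, -np.inf
    for u in candidates:
        Xt = np.vstack([X, regressors(u)])
        d = np.linalg.det(Xt.T @ Xt)         # D-criterion of augmented design
        if d > best_det:
            best_det, best_u = d, u
    chosen.append(best_u)
    X = np.vstack([X, regressors(best_u)])

print("design points:", chosen)
```

For a quadratic model on [-1, 1], the greedy additions land on the known D-optimal support {-1, 0, 1}, which is a useful sanity check for the implementation.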

15.
Cost‐efficient multi‐objective design optimization of antennas is presented. The framework exploits auxiliary data‐driven surrogates, a multi‐objective evolutionary algorithm for initial Pareto front identification, response correction techniques for design refinement, as well as generalized domain segmentation. The purpose of this last mechanism is to reduce the volume of the design space region that needs to be sampled in order to construct the surrogate model, and, consequently, limit the number of training data points required. The recently introduced segmentation concept is generalized here to allow for handling an arbitrary number of design objectives. Its operation is illustrated using an ultra‐wideband monopole optimized for best in‐band reflection, minimum gain variability, and minimum size. When compared with conventional surrogate‐based approach, segmentation leads to reduction of the initial Pareto identification cost by over 20%. Numerical results are supported by experimental validation of the selected Pareto‐optimal antenna designs.

16.
Because innovative and creative design is essential to a successful product, this work brings the benefits of generative design to the conceptual phase of the product development process so that designers/engineers can effectively explore and create ingenious designs and make better design decisions. We propose a generative design technique (GDT), called Space-filling-GDT (Sf-GDT), for the creation of innovative designs. The proposed Sf-GDT has the ability to create variant optimal design alternatives for a given computer-aided design (CAD) model. An effective GDT should generate design alternatives that cover the entire design space. Toward that end, the criterion of space-filling is utilized, which uniformly distributes designs in the design space, thereby giving a designer a better understanding of possible design options. To avoid creating similar designs, a weighted-grid-search approach is developed and integrated into the Sf-GDT. One of the core contributions of this work lies in the ability of Sf-GDT to explore hybrid design spaces consisting of both continuous and discrete parameters, either with or without geometric constraints. A parameter-free optimization technique, called the Jaya algorithm, is integrated into the Sf-GDT to generate optimal designs. Three design parameterization and space formulation strategies (explicit, interactive, and autonomous) are proposed to set up a promising search region(s) for optimization. Two user interfaces (web-based and Windows-based) are also developed to use Sf-GDT with existing CAD software having parametric design capabilities. Based on the experiments in this study, Sf-GDT can generate creative design alternatives for a given model and outperforms existing state-of-the-art techniques.
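The space-filling criterion at the heart of this approach can be sketched with a maximin rule: among candidate sets of design alternatives, prefer the one whose closest pair is farthest apart. The 2-D box, the number of alternatives, and the random-restart search below are arbitrary simplifications; the actual Sf-GDT couples the criterion with a weighted grid search and the Jaya optimizer.

```python
import numpy as np

rng = np.random.default_rng(3)

def min_pairwise_dist(P):
    # smallest distance between any two design alternatives in the set
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    return d[np.triu_indices(len(P), k=1)].min()

best_set, best_score = None, -np.inf
for _ in range(200):                       # crude random restarts
    P = rng.uniform(0.0, 1.0, size=(8, 2)) # 8 design alternatives in a box
    score = min_pairwise_dist(P)
    if score > best_score:
        best_set, best_score = P, score

print("maximin separation achieved:", round(best_score, 3))
```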

17.
This study extends Duncan's [1] model to two different manufacturing process models, in which the process either continues or is discontinued in operation during the search for the assignable cause. A more realistic assumption considered in this paper is that the cost of repair and the net hourly out-of-control income are functions of detection delay. In the continuous model, detection delay is defined as the elapsed time from when the shift of the process occurs until it is identified by control charts and the assignable cause is eliminated. The discontinuous model defines detection delay as the time interval from the occurrence of the process shift to the completion of testing a set of samples and interpreting the results. An efficient procedure is developed to determine the optimal designs without using any approximation approach. Thus, the proposed procedure can obtain the truly optimal designs rather than the approximate designs determined by Duncan [1] and other subsequent researchers. This paper illustrates several numerical examples and makes some relevant comparisons. The results indicate that this optimal solution procedure is more accurate than that of Panagos et al. [2]. Also, detection delay is sensitive to the economic design of control charts.
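What "economic design" optimizes can be shown with a simplified Duncan-style cost function for an X-bar chart: pick the sample size n, sampling interval h, and control-limit width k to minimize expected cost per hour. The cost rates, the shift size delta, and the detection-delay term (the common h/power - h/2 approximation) below are illustrative, not the paper's exact model.

```python
import math

Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF

def hourly_cost(n, h, k, delta=1.0, lam=0.02,
                c_sample=0.5, c_false=50.0, c_out=100.0):
    alpha = 2.0 * Phi(-k)                          # false-alarm probability
    power = Phi(delta * math.sqrt(n) - k) + Phi(-delta * math.sqrt(n) - k)
    delay = h / power - h / 2.0                    # mean detection delay
    false_alarms = alpha * (1.0 / lam) / h         # alarms while in control
    cycle = 1.0 / lam + delay                      # expected cycle length
    return (c_sample * n / h
            + (c_false * false_alarms + c_out * delay) / cycle)

best = min(((hourly_cost(n, h, k), n, h, k)
            for n in range(2, 9)
            for h in (0.5, 1.0, 2.0, 4.0)
            for k in (2.0, 2.5, 3.0, 3.5)), key=lambda t: t[0])
print("cost/hr=%.2f at n=%d, h=%.1f, k=%.1f" % best)
```

The grid search stands in for the exact solution procedure the paper develops; the point of the sketch is only that the design variables (n, h, k) trade sampling cost against false alarms and detection delay.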

18.
Robust model selection procedures control the undue influence that outliers can have on the selection criteria by using both robust point estimators and a bounded loss function when measuring either the goodness-of-fit or the expected prediction error of each model. Furthermore, to avoid favoring over-fitting models, these two measures can be combined with a penalty term for the size of the model. The expected prediction error conditional on the observed data may be estimated using the bootstrap. However, bootstrapping robust estimators becomes extremely time consuming on moderate to high dimensional data sets. It is shown that the expected prediction error can be estimated using a very fast and robust bootstrap method, and that this approach yields a consistent model selection method that is computationally feasible even for a relatively large number of covariates. Moreover, as opposed to other bootstrap methods, this proposal avoids the numerical problems associated with the small bootstrap samples required to obtain consistent model selection criteria. The finite-sample performance of the fast and robust bootstrap model selection method is investigated through a simulation study while its feasibility and good performance on moderately large regression models are illustrated on several real data examples.
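A schematic version of the selection criterion (not the fast robust bootstrap itself) is sketched below: score candidate models by a bootstrap estimate of expected prediction error under a bounded Huber loss, plus a penalty on model size. For brevity both candidate models are fit by least squares here; the paper replaces this with robust point estimators and a fast bootstrap approximation, and the data, penalty weight, and outlier mechanism are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def huber(r, c=1.345):
    # bounded loss: quadratic near zero, linear in the tails
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r * r, c * a - 0.5 * c * c)

n = 120
x = rng.uniform(-2, 2, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)
y[:5] += 15.0                                  # a few gross outliers
models = {"linear": np.column_stack([np.ones(n), x]),
          "cubic": np.column_stack([np.ones(n), x, x**2, x**3])}

scores = {}
for name, X in models.items():
    errs = []
    for _ in range(100):                       # bootstrap resamples
        idx = rng.integers(0, n, n)
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        errs.append(huber(y - X @ beta).mean())  # error on the full data
    scores[name] = np.mean(errs) + 0.1 * X.shape[1]  # size penalty
print(scores)
```

With the bounded loss, the five gross outliers cannot dominate the criterion, and the size penalty keeps the over-parameterized cubic model from winning on noise.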

19.
Since the cost of installing and maintaining sensors is usually high, sensor locations should always be strategically selected to extract most of the information. For inferring certain quantities of interest (QoIs) using sensor data, it is desirable to explore the dependency between observables and QoIs to identify optimal placement of sensors. Mutual information is a popular dependency measure; however, its estimation in high dimensions is challenging, as it requires a large number of samples. This also comes at a significant computational cost when samples are obtained by simulating complex physics-based models. Similarly, identifying the optimal design/location requires a large number of mutual information evaluations to explore a continuous design space. To address these challenges, two novel approaches are proposed. First, instead of estimating mutual information in high dimensions, we map the limited number of samples onto a lower dimensional space while capturing dependencies between the QoIs and observables. We then estimate a lower bound of the original mutual information in this low dimensional space, which becomes our new dependence measure between QoIs and observables. Second, we use Bayesian optimization to search for optimal sensor locations in a continuous design space while reducing the number of lower bound evaluations. Numerical results on both synthetic and real data are provided to compare the performance of the lower bound with the estimate of mutual information in high dimensions, and a puff-based dispersion model is used to evaluate the sensor placement of the Bayesian optimization for a chemical release problem. The results show that the proposed approaches are both effective and efficient in capturing dependencies and inferring the QoIs.
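A toy version of the dimension-reduction step: instead of estimating mutual information between a scalar QoI and a high-dimensional observable vector, project the observables onto one learned direction (here simply the least-squares direction, standing in for the paper's mapping) and apply the Gaussian formula -0.5*log(1 - rho^2) to the projection. For jointly Gaussian data this is a valid lower bound on the full mutual information; the synthetic data below are made up.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 2000, 10
X = rng.normal(size=(n, d))                 # high-dimensional observables
w_true = rng.normal(size=d)
q = X @ w_true + rng.normal(0, 2.0, n)      # scalar quantity of interest

w, *_ = np.linalg.lstsq(X, q, rcond=None)   # learned 1-D projection
z = X @ w
rho = np.corrcoef(z, q)[0, 1]
mi_lb = -0.5 * np.log(1.0 - rho ** 2)       # nats, Gaussian lower bound
print("correlation:", round(rho, 3), "MI lower bound:", round(mi_lb, 3))
```

In a placement search, this cheap scalar would be evaluated at each candidate sensor configuration and maximized, e.g. by Bayesian optimization.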

20.
Group-randomized study designs are useful when individually-randomized designs either are not possible, or will not be able to estimate the parameters of interest. Group-randomized trials often have a small number of experimental units or groups and strong geographically-induced between-unit correlation, thereby increasing the chance of obtaining a "bad" randomization outcome. It has been suggested to highly constrain the design by restricting it to those allocations that meet specified criteria based on certain covariates available at baseline. We describe a SAS macro that allocates treatment conditions in a two-arm stratified group-randomized design that ensures balance on relevant covariates. The application of the macro is illustrated using two examples of group-randomized designs.
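The constrained-randomization idea is shown below in Python rather than SAS: enumerate all ways to split 8 groups into two arms of 4, keep only allocations whose arms are balanced on a baseline covariate, then randomly pick one. The covariate values and the balance tolerance are invented for illustration.

```python
import itertools
import random

baseline = [3.1, 2.8, 3.5, 2.9, 3.0, 3.6, 2.7, 3.2]   # one value per group
groups = range(len(baseline))
tol = 0.2                                              # allowed arm-mean gap

acceptable = []
for arm_a in itertools.combinations(groups, 4):
    arm_b = [g for g in groups if g not in arm_a]
    mean_a = sum(baseline[g] for g in arm_a) / 4
    mean_b = sum(baseline[g] for g in arm_b) / 4
    if abs(mean_a - mean_b) <= tol:                    # balance constraint
        acceptable.append((arm_a, tuple(arm_b)))

random.seed(0)
arm_a, arm_b = random.choice(acceptable)   # the realized randomization
print(len(acceptable), "acceptable allocations; chose", arm_a, "vs", arm_b)
```

Restricting the draw to the acceptable set is what rules out the "bad" randomization outcomes while preserving a valid randomization distribution over the remaining allocations.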
