Similar Articles
20 similar articles retrieved.
1.
We propose a simulation-based method for computing optimal group sequential tests that minimize the average sample size while meeting significance level and power requirements. Optimal designs can be used directly in clinical trials and also provide a standard for assessing the efficiency of other designs. The proposed method is conceptually simple and straightforward to apply.
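
A minimal illustration of the idea, assuming a one-sided two-stage design with a known-variance z-statistic (the boundaries and stage sizes below are illustrative, not optimized): Monte Carlo simulation estimates the power and average sample size of a candidate design, which an outer search could then minimize subject to the error constraints.

```python
# A minimal sketch (not the authors' algorithm): Monte Carlo evaluation of a
# two-stage group sequential z-test, estimating power and average sample size.
import numpy as np

rng = np.random.default_rng(0)

def simulate_design(n1, n2, c1_futility, c1_efficacy, c2, mu, sigma=1.0, reps=20_000):
    """Return (power, average sample size) for a two-stage design.

    Stage 1: observe n1 subjects; stop for futility if z < c1_futility,
    stop for efficacy if z > c1_efficacy; otherwise continue to stage 2.
    """
    rejections, total_n = 0, 0
    for _ in range(reps):
        x1 = rng.normal(mu, sigma, n1)
        z1 = x1.mean() * np.sqrt(n1) / sigma
        if z1 > c1_efficacy:
            rejections += 1
            total_n += n1
        elif z1 < c1_futility:
            total_n += n1
        else:
            x2 = rng.normal(mu, sigma, n2)
            pooled = np.concatenate([x1, x2])
            z = pooled.mean() * np.sqrt(n1 + n2) / sigma
            rejections += z > c2
            total_n += n1 + n2
    return rejections / reps, total_n / reps

# Type I error at mu = 0 and power at mu = 0.5 for illustrative boundaries;
# an optimizer would search (n1, n2, boundaries) to minimize average sample size.
print(simulate_design(20, 20, 0.0, 2.5, 1.9, mu=0.0))
print(simulate_design(20, 20, 0.0, 2.5, 1.9, mu=0.5))
```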

2.
Abstract

A new approach to adaptive design of clinical trials is proposed in a general multiparameter exponential family setting, based on generalized likelihood ratio statistics and optimal sequential testing theory. These designs are easy to implement, maintain the prescribed Type I error probability, and are asymptotically efficient. Practical issues involved in clinical trials allowing mid-course adaptation and the large literature on this subject are discussed, and comparisons between the proposed and existing designs are presented in extensive simulation studies of their finite-sample performance, measured in terms of the expected sample size and power functions.
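
For context, a toy sketch of the building block the abstract names, under the assumption of a unit-variance normal model (not the paper's general exponential-family setting): the generalized likelihood ratio statistic for H0: mu = 0, monitored against an illustrative stopping boundary at interim looks.

```python
# A minimal sketch: for N(mu, 1) data, the log-GLR for H0: mu = 0 is
# sup_mu log[L(mu)/L(0)] = n * xbar^2 / 2, checked at a few interim analyses.
import numpy as np

rng = np.random.default_rng(6)
x = rng.normal(0.3, 1.0, 200)            # data from the true mu = 0.3
boundary = 3.0                           # illustrative stopping boundary

for n in (50, 100, 150, 200):            # interim analyses
    xbar = x[:n].mean()
    glr = n * xbar**2 / 2.0              # log-GLR for a unit-variance normal
    print(f"n={n}: log-GLR = {glr:.2f}")
    if glr > boundary:
        print("boundary crossed -> stop and reject H0")
        break
```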

3.
Abstract

We consider a clinical trial with three competing treatments and study designs that allocate subjects sequentially in order to maximize the power of relevant tests. Two different criteria are considered: the first is to find the best treatment and the second is to order all three. The power converges to one at an exponential rate, and we use large deviations theory to find the allocation that maximizes this rate. Under the first criterion the optimal allocation has the plausible property that it assigns only a small fraction of subjects to the inferior treatment. The optimal allocation depends heavily on the unknown parameters; therefore, to implement it, a sequential adaptive scheme is considered. At each stage of the trial the parameters are estimated and the next subject is allocated according to the estimated optimal allocation. We study the asymptotic properties of this design by large deviations theory and its small-sample behavior by simulation. Our results demonstrate that, unlike in the two-treatment case, adaptive design can provide a significant improvement in power.
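
A sketch of the adaptive scheme's skeleton only; the paper's large-deviation-rate-optimal allocation is replaced here by a hypothetical Neyman-type plug-in rule (allocation proportional to estimated standard deviations, with a floor on every arm), and the Bernoulli response rates are invented for illustration.

```python
# A minimal sketch: after each response, parameters are re-estimated and the
# next subject is assigned according to a plug-in allocation rule.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.3, 0.5, 0.6])   # hypothetical Bernoulli success rates

counts = np.ones(3)                      # one pseudo-observation per arm
successes = np.full(3, 0.5)              # uniform prior guess (half a success)

def plug_in_allocation(p_hat, floor=0.1):
    # Hypothetical Neyman-type rule standing in for the rate-optimal allocation.
    sd = np.sqrt(p_hat * (1 - p_hat)) + 1e-9
    w = np.maximum(sd / sd.sum(), floor)
    return w / w.sum()

for _ in range(300):
    p_hat = successes / counts
    arm = rng.choice(3, p=plug_in_allocation(p_hat))
    counts[arm] += 1
    successes[arm] += rng.random() < true_means[arm]

print("allocation fractions:", np.round(counts / counts.sum(), 3))
print("estimated rates:     ", np.round(successes / counts, 3))
```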

4.
Practical methods are still needed for optimizing the design of chemical processes while allowing for uncertain specifications and future changes in economic parameters. This problem fits the two-stage formulation of Dantzig [1]. Unfortunately, we found several practical difficulties in applying this two-stage analysis to realistic process design models. Some of these can be overcome by proper formulation of the process model or by improvements in the computational algorithm. In the end, we obtained, for two realistic process models, optimal designs with improved flexibility compared to designs based on fixed parameters; the well-known inequalities were satisfied. More generally, the paper includes suggestions for practical, efficient use of the two-stage approach in process design.
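
A minimal sketch of the two-stage formulation on a toy capacity-sizing problem, with entirely hypothetical costs and demand scenarios: the first-stage design decision is fixed before the uncertainty resolves, and the second stage chooses the cheapest feasible operation per scenario.

```python
# A minimal sketch of the two-stage idea (not the paper's process models):
# stage 1 fixes a design capacity d before the uncertain demand theta is known;
# stage 2 chooses the cheapest feasible operation for each demand scenario.
import numpy as np

scenarios = np.array([80.0, 100.0, 120.0])   # hypothetical demand scenarios
probs = np.array([0.3, 0.4, 0.3])

def second_stage_cost(d, theta):
    # Operate up to capacity d; unmet demand is bought externally at a premium.
    produced = min(d, theta)
    shortfall = theta - produced
    return 2.0 * produced + 10.0 * shortfall

def total_cost(d):
    capital = 50.0 + 1.5 * d
    expected_recourse = np.dot(probs, [second_stage_cost(d, t) for t in scenarios])
    return capital + expected_recourse

candidates = np.linspace(60, 140, 81)
best = min(candidates, key=total_cost)
print(f"optimal capacity = {best:.1f}, expected cost = {total_cost(best):.1f}")
# A fixed-parameter design sized for the mean demand (100) would be less
# flexible: it pays the shortfall premium in the high-demand scenario.
```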

5.
We present a molecular clustering approach for the efficient incorporation of solvent design information into process synthesis in the integrated design of solvent/process systems. The approach is to be used in conjunction with an integrated solvent/process design approach in which the solvent design stage utilises multi-objective optimisation to identify Pareto optimal solvent candidates that are subsequently evaluated in a process synthesis stage. We propose to introduce the solvent design information into the process synthesis stage through the use of molecular clusters. Partitioning the original Pareto optimal set of solvents yields smaller, compact groups of similar solvent molecules, from which representative molecules are introduced into the process synthesis model as discrete options to determine the optimal process performance associated with the optimal solvent. We investigate two clustering strategies, serial and parallel clustering, that make it possible to exploit the solvent-process design interactions effectively and to minimise the computational demands of the process synthesis stage. We further propose a clustering heuristic probability that can aid decision making in clustering and can significantly accelerate the search for the best integrated solvent-process systems. The presented method is illustrated with case studies in the design of solvents for liquid-liquid extraction, gas absorption and extractive distillation systems.
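
A minimal sketch of the partitioning step, assuming hypothetical property descriptors for the Pareto-optimal candidates: k-means groups similar solvents, and the member nearest each centroid is exported to the process synthesis model as a discrete option.

```python
# A minimal sketch (data and descriptors are hypothetical): Pareto-optimal
# solvent candidates, described by property vectors, are partitioned with
# k-means; the candidate nearest each centroid becomes a representative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Rows: solvent candidates; columns: e.g. normalized solubility parameter,
# boiling point, molar volume (illustrative descriptors only).
pareto_solvents = rng.random((40, 3))

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pareto_solvents)
representatives = []
for k in range(5):
    members = np.flatnonzero(km.labels_ == k)
    dists = np.linalg.norm(pareto_solvents[members] - km.cluster_centers_[k], axis=1)
    representatives.append(int(members[dists.argmin()]))

print("representative candidate indices:", representatives)
```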

6.
Abstract

In this overview article, I focus on adaptive designs in "learn" clinical studies, the exploratory phase of the drug development process designed and carried out to establish drug efficacy and dose-response relationships. These designs directly address the goals of the learn-phase trial with respect to identifying the dose to carry forward into the confirmatory phase, estimating the likelihood of success in the confirmatory trial, and efficient early stopping for efficacy or for futility. A critical component of these designs is a dose-response model for efficacy and/or safety endpoints that captures prior information about the form and location of the clinically important dose-response relationship. An additional ingredient in the Bayesian approach is a prior distribution for the unknown parameters. Efficiency is gained by appropriate incorporation of longitudinal models that allow efficient use of all available information.

7.
Adaptive input design (also called online redesign of experiments) for parameter estimation is very effective for compensating uncertainties in nonlinear processes. Moreover, it enables substantial savings in experimental effort and greater reliability in modeling. We present theoretical details and experimental results from real-time adaptive optimal input design for parameter estimation. The case study considers the separation of three benzoates by reversed-phase liquid chromatography. Following a receding-horizon scheme, adaptive D-optimal input designs are generated for precise determination of competitive adsorption isotherm parameters. Moreover, numerical techniques are discussed for regularizing the arising ill-posed problems, caused for example by scarce measurements, lack of prior information about parameters, low sensitivities, and parameter correlations. The estimated parameter values are successfully validated by Frontal Analysis, and the benefits of optimal input designs over various standard/heuristic input designs are highlighted in terms of parameter accuracy and precision.
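
A minimal sketch of D-optimal input design on a toy two-parameter exponential-decay model rather than the paper's chromatographic adsorption model: candidate sampling times are picked greedily to maximize the determinant of the Fisher information at the current parameter estimates.

```python
# A minimal sketch: greedy D-optimal selection of sampling times for
# y = a * exp(-k * t), using output sensitivities w.r.t. (a, k).
import numpy as np

a, k = 1.0, 0.5                      # current parameter estimates (hypothetical)
candidates = np.linspace(0.1, 10.0, 100)

def sensitivity(t):
    # Gradient of the model output w.r.t. (a, k) at the current estimates.
    return np.array([np.exp(-k * t), -a * t * np.exp(-k * t)])

chosen, fim = [], 1e-8 * np.eye(2)   # tiny ridge keeps det well defined
for _ in range(6):
    gains = [np.linalg.det(fim + np.outer(sensitivity(t), sensitivity(t)))
             for t in candidates]
    t_best = candidates[int(np.argmax(gains))]
    chosen.append(round(float(t_best), 2))
    fim += np.outer(sensitivity(t_best), sensitivity(t_best))

print("D-optimal sampling times:", sorted(chosen))
# In the adaptive (receding-horizon) version, the parameter estimates and
# hence the design would be refreshed after each new measurement.
```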

8.
We consider the problem of providing a fixed-width confidence interval for the difference of two normal means when the variances are unknown and unequal. We propose a two-stage procedure that differs from those of Chapman (1950) and Ghosh (1975). The procedure provides the desired confidence, subject to the restriction on the width, for certain values of the design parameter h. Values of h are given by the Monte Carlo method for various combinations of first-stage sample size and confidence level. Finally, it is shown that the procedure is asymptotically more efficient than those of Chapman and Ghosh with respect to total sample size as the width of the interval approaches zero.
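
A minimal sketch of a Stein/Chapman-type two-stage rule on invented data; the design constant h and the first-stage size are placeholders, not values from the paper: first-stage sample variances determine how many further observations each population needs for a fixed-width interval.

```python
# A minimal sketch: two-stage sampling so the CI for mu1 - mu2 has half-width
# at most d when the two variances are unknown and unequal.
import math
import numpy as np

rng = np.random.default_rng(3)
n0, d, h = 15, 0.5, 2.2          # first-stage size, half-width, design constant

x0 = rng.normal(1.0, 2.0, n0)    # population 1 (variances unknown, unequal)
y0 = rng.normal(0.0, 1.0, n0)    # population 2

def final_size(sample):
    # Larger first-stage variance -> larger final sample for that population.
    s2 = sample.var(ddof=1)
    return max(n0, math.ceil((h / d) ** 2 * s2))

n1, n2 = final_size(x0), final_size(y0)
x = np.concatenate([x0, rng.normal(1.0, 2.0, n1 - n0)])
y = np.concatenate([y0, rng.normal(0.0, 1.0, n2 - n0)])

center = x.mean() - y.mean()
print(f"n1={n1}, n2={n2}, CI = ({center - d:.3f}, {center + d:.3f})")
```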

9.
In eco-design, environmental aspects are integrated into the earliest stage of design with the aim of reducing adverse environmental impacts throughout a product's life cycle. An eco-design problem is therefore multi-objective: several objectives (environmental, economic, and technological) are to be optimized simultaneously. The optimization of industrial processes usually requires solving expensive multi-objective optimization problems (MOPs). Aiming to solve MOPs efficiently with a limited computational budget, this paper proposes a new framework called AMOEA-MAP. The framework relies on the structure of the NSGA-II algorithm and possesses two novel operators: a memory-based adaptive partitioning strategy, which provides an adaptive reticulation of the search space for quick identification of optimal zones with less computational effort, and a bi-population evolutionary algorithm tailored for expensive optimization problems. To ascertain its generality, the framework is first tested on several challenging benchmarks. Its performance is subsequently validated on a real-world eco-design problem.

10.
We consider the construction of one-sided group sequential designs whose stopping rule includes boundaries for early stopping, either to accept for futility or to reject for efficacy. The traditional assumption that all patients have the same likelihood of benefiting from the treatment is sometimes unrealistic and can lead to underestimating the required sample size. This motivates us to power the design for an alternative in which the treatment group observations come from a mixture of normal distributions. For the proposed setting, we use standardized test statistics based on sample means, and the test turns out to be an L-optimal similar test. Stopping boundaries and arm size for the design are determined by Type I and Type II error spending equations. We demonstrate the need for larger arm sizes when trying to detect a mixture alternative rather than a pure shift alternative. The unknown-variance case is discussed. With the mixture model, we discuss a more general definition of treatment effect and its maximum likelihood estimator.
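
A small simulation, with illustrative parameters, of the sample-size point the abstract makes: at equal mean shift, a mixture alternative inflates the treatment-arm variance and so yields lower power than a pure shift, implying larger arm sizes to detect it.

```python
# A minimal sketch (illustrative parameters only): the treatment arm is 60%
# responders N(1, 1) and 40% non-responders N(0, 1), compared with a pure
# shift N(0.6, 1) of equal mean, against an N(0, 1) control arm.
import numpy as np

rng = np.random.default_rng(4)
n, z_crit, reps = 50, 1.645, 20_000

def power(sampler):
    hits = 0
    for _ in range(reps):
        trt, ctl = sampler(n), rng.normal(0, 1, n)
        z = (trt.mean() - ctl.mean()) / np.sqrt(trt.var(ddof=1)/n + ctl.var(ddof=1)/n)
        hits += z > z_crit
    return hits / reps

mixture = lambda m: np.where(rng.random(m) < 0.6, rng.normal(1, 1, m), rng.normal(0, 1, m))
shift = lambda m: rng.normal(0.6, 1, m)

print("power, mixture alternative:", power(mixture))
print("power, pure shift:         ", power(shift))
```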

11.
A new framework to automate, augment, and accelerate steps in computer-aided molecular design is presented. The problem is tackled in three stages: (1) composition design, (2) structure determination, and (3) extended design. Composition identification and structure determination are decoupled to achieve computational efficiency. Using approximate group-contribution methods in the first stage, molecular compositions that fit design targets are identified. In the second stage, isomer structures of solution compositions are determined systematically, and structure-based property corrections are used to refine the solution pool. In the final stage, the design is extended beyond the scope of group-contribution methods by using problem-specific property models. At each design stage, novel optimization models and graph theoretic algorithms generate a large and diverse pool of candidates using an assortment of property models. The wide applicability and computational efficiency of the proposed methodology are illustrated through three case studies. © 2013 American Institute of Chemical Engineers AIChE J, 59: 3686–3701, 2013
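
A minimal sketch of the first-stage idea: estimating a property from group contributions. The Joback-type form and group values below are quoted from common tabulations and should be treated as illustrative rather than authoritative.

```python
# A minimal sketch: group-contribution estimate of normal boiling point,
# Joback-type form Tb = 198.2 + sum of group values (values as commonly
# tabulated; treat as illustrative).
TB_CONTRIB = {"-CH3": 23.58, "-CH2-": 22.88, "-OH": 92.88}

def boiling_point(groups):
    """Estimate Tb in kelvin from a {group: count} composition."""
    return 198.2 + sum(TB_CONTRIB[g] * n for g, n in groups.items())

# Ethanol = CH3-CH2-OH; the estimate (~337.5 K) is close to the measured
# 351 K, the kind of approximate screening the composition stage relies on.
print(f"{boiling_point({'-CH3': 1, '-CH2-': 1, '-OH': 1}):.1f} K")
```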

12.
In this paper, significant developments, current challenges, and future opportunities in the field of chemical product design using computer-aided molecular design (CAMD) tools are highlighted. As the focus shifts to the design of novel and improved chemical products, traditional heuristic-based approaches may no longer be effective in designing optimal products. This has led to the extensive development and application of CAMD tools: methods that combine property prediction models with computer-assisted search in the design of various chemical products. The introduction and development of different classes of property prediction methods in the overall product design process are discussed. The exploration and application of CAMD tools in numerous single-component product designs, in mixture design, and later in integrated process-product design are reviewed. Difficulties and possible future extensions of CAMD are then discussed in detail. The highlighted challenges and opportunities mainly concern the need for further exploration and development of property models, suitable design scales and computational effort, and a sustainable chemical product design framework. To produce a chemical product in a sustainable way, the role of each level in a chemical product design enterprise hierarchy is discussed. In addition to process parameters and product quality, environmental, health, and safety performance must be considered in shaping a sustainable chemical product design framework. Finally, recent developments and opportunities in the design of ionic liquids using molecular design techniques are discussed.

13.
Biomass is a sustainable source of energy that can be utilised to produce value-added products such as biochemicals and biomaterials. To produce a sustainable supply of such value-added products, an integrated biorefinery is required: a processing facility that integrates multiple biomass conversion pathways to produce value-added products. To date, various biomass conversion pathways are available to convert biomass into a wide range of products. Because of the large number of available pathways, various systematic screening tools have been developed to address the process design aspect of an integrated biorefinery. Process design, however, is often interlinked with product design, as it is important to identify the optimal molecule (based on desired product properties) before designing its optimal production routes. Where the desired product properties cannot be met by a single-component chemical product, a mixture of chemicals is required. In this respect, product and process design decisions are a challenging task for an integrated biorefinery. In this work, a novel two-stage optimisation approach is developed to identify the optimal conversion pathways in an integrated biorefinery that convert biomass into the optimal mixtures in terms of target product properties. In the first stage, the optimal mixture is designed via the computer-aided molecular design (CAMD) technique, a reverse-engineering approach that predicts molecules with optimal properties using property prediction models. Different classes of property models, such as group contribution (GC) models and quantitative structure-property relationships (QSPR), are adapted in this work. The main component of the mixture is first determined from the target product properties; additive components are then identified to form an optimal mixture with the main component based on the desired product properties. Once the optimal mixture is determined, the second stage identifies the optimal conversion pathways via a superstructure-based mathematical optimisation approach, so the optimal pathways can be determined for different optimisation objectives (e.g., highest product yield, lowest environmental impact). To illustrate the proposed methodology, a case study on the design of fuel additives as a mixture of different molecules from palm-based biomass is presented: optimal fuel additives are designed from optimal target properties, and the conversion pathways that convert biomass into these additives with the highest product yield and best economic performance are then identified.
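
A toy sketch of the second-stage pathway selection, with entirely hypothetical routes and yields: once the target mixture is fixed, the superstructure of conversion steps can be enumerated (or optimized) for the best overall yield.

```python
# A minimal sketch (pathway names and yields are hypothetical): enumerate the
# superstructure of biomass-to-additive routes and pick the best yield chain.
from itertools import product

step1 = {"gasification": 0.7, "hydrolysis": 0.8}          # biomass -> intermediate
step2 = {"fermentation": 0.6, "catalytic upgrade": 0.75}  # intermediate -> additive

best = max(product(step1, step2), key=lambda p: step1[p[0]] * step2[p[1]])
overall = step1[best[0]] * step2[best[1]]
print(f"optimal pathway: {' -> '.join(best)}, overall yield = {overall:.2f}")
```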

14.
Vacuum/pressure swing adsorption is an attractive and often energy-efficient separation process for some applications. There is, however, often a trade-off among the different objectives: purity, recovery, and power consumption. Identifying those trade-offs is possible through multi-objective optimisation methods, but this is computationally challenging because of the size of the search space and the need for high-fidelity simulations imposed by the inherently dynamic nature of the process. This paper presents the use of surrogate modelling to address the computational requirements of the high-fidelity simulations needed to evaluate alternative designs. We present SbNSGA-II ALM, a surrogate-based NSGA-II: a robust and fast multi-objective optimisation method based on kriging surrogate models and NSGA-II with the Active Learning MacKay (ALM) design criterion. The method is evaluated on an industrially relevant case study, a two-column six-step system for CO2/N2 separation, where a five-fold reduction in computational effort is observed.
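
A minimal sketch of the ALM infill criterion on a toy one-dimensional function standing in for the expensive cycle simulation: fit a kriging (Gaussian process) surrogate, then sample where its predictive standard deviation is largest. The full method couples this update with NSGA-II across the competing objectives.

```python
# A minimal sketch: Active Learning MacKay (ALM) picks the next sample at the
# point of maximum predictive variance of the kriging surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive(x):
    return np.sin(3 * x) + 0.5 * x                   # stand-in simulator

X = np.array([[0.2], [1.0], [2.5]])                  # initial design
y = expensive(X.ravel())
grid = np.linspace(0.0, 3.0, 300).reshape(-1, 1)

for _ in range(8):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
    _, std = gp.predict(grid, return_std=True)
    x_new = grid[int(np.argmax(std))]                # ALM: most uncertain point
    X = np.vstack([X, x_new])
    y = np.append(y, expensive(x_new[0]))

print("sampled inputs:", np.round(X.ravel(), 2))
```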

15.
Abstract

In this article, we consider a variety of inference problems for high-dimensional data. The purpose of this article is to suggest directions for future research and possible solutions to p ≫ n problems by using new types of two-stage estimation methodologies. This is the first attempt to apply sequential analysis to high-dimensional statistical inference while ensuring prespecified accuracy. We offer sample size determination for inference problems by creating new types of multivariate two-stage procedures. The most important and basic idea for developing the theory and methodologies is asymptotic normality as p → ∞. Using this asymptotic normality, we first give (a) a given-bandwidth confidence region for the square loss. In addition, we give (b) a two-sample test assuring prespecified size and power simultaneously, together with (c) an equality test for two covariance matrices. We also give (d) a two-stage discriminant procedure that controls misclassification rates at no more than a prespecified value. Moreover, we propose (e) a two-stage variable selection procedure that screens variables in the first stage and selects a significant set of associated variables from among a set of candidate variables in the second stage. Following the variable selection procedure, we consider (f) variable selection for high-dimensional regression that compares favorably with the lasso in terms of assured accuracy and computational cost. Further, we consider variable selection for classification and propose (g) a two-stage discriminant procedure after screening some variables. Finally, we consider (h) pathway analysis for high-dimensional data by constructing a multiple test of correlation coefficients.

16.
In this paper, we develop appropriate sampling methodologies for testing hypotheses regarding the difference of mean values from two independent (or dependent) normal populations when their variances are unknown and unequal. We design two-stage and purely sequential testing methodologies for comparing the unknown means, determining the appropriate sample sizes while controlling both Type I and Type II error probabilities at or below preassigned levels α and β, respectively. Such methodologies are constructed under both unequal and equal sample-size designs. We prove that both the two-stage and the purely sequential testing strategies enjoy a number of practically appealing properties. Extensive computer simulations and real data analyses empirically validate our theoretical findings.
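
For contrast with the paper's setting, a toy sketch of a purely sequential test that controls both error probabilities, here Wald's SPRT in the simpler known-variance case (the paper's procedures handle unknown, unequal variances):

```python
# A minimal sketch: Wald's SPRT for H0: mu = 0 vs H1: mu = 0.5 on the
# difference of paired observations, sampling until a boundary is crossed.
import numpy as np

rng = np.random.default_rng(7)
alpha, beta, mu1, sigma2 = 0.05, 0.10, 0.5, 2.0   # sigma2: var of a pair difference
A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

llr, n = 0.0, 0
while B < llr < A:
    d = rng.normal(0.5, 1.0) - rng.normal(0.0, 1.0)   # sample one new pair
    llr += (mu1 * d - mu1**2 / 2.0) / sigma2          # log-likelihood ratio step
    n += 1

print("reject H0" if llr >= A else "accept H0", f"after n={n} pairs")
```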

17.
Process uncertainty is almost always an issue during the design of chemical processes (CP). In the open literature it has been shown that consideration of process uncertainties in optimal design necessitates the incorporation of process flexibility. Such an optimal design can presumably operate reliably in the presence of process and modeling uncertainty. Halemane and Grossmann (1983) introduced a feasibility function for evaluating CP flexibility. They also formulated a two-stage optimization problem for estimating the optimal design margins. These formulations, however, are based implicitly on the assumption that during the operation stage, uncertain parameters can be determined with enough precision. This assumption is rather restrictive and is often not met in practice. When available experimental information at the operation stage does not allow a more precise estimate of some of the uncertain parameters, new formulations of the flexibility condition and the optimization problem under uncertainty are needed. In this article, we propose such formulations, followed by some computational experiments.
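
A minimal sketch of the feasibility function on a toy model with hypothetical constraints and ranges: a design d passes if, for every value of the uncertain parameter, some control adjustment keeps all constraints satisfied.

```python
# A minimal sketch of the Halemane-Grossmann feasibility test: a design d is
# flexible if chi(d) = max_theta min_z max_j f_j(d, z, theta) <= 0.
import numpy as np

def constraints(d, z, theta):
    # Two illustrative inequality constraints f_j(d, z, theta) <= 0.
    return np.array([theta - d + z, -z + 0.25 * theta - 0.5])

thetas = np.linspace(0.0, 1.0, 21)      # uncertain parameter range
zs = np.linspace(0.0, 1.0, 21)          # available control adjustments

def chi(d):
    worst = -np.inf
    for theta in thetas:
        best_z = min(max(constraints(d, z, theta)) for z in zs)
        worst = max(worst, best_z)
    return worst

for d in (0.8, 1.0, 1.3):
    verdict = "flexible" if chi(d) <= 0 else "infeasible somewhere"
    print(f"design d={d}: chi={chi(d):+.3f} -> {verdict}")
```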

18.
This paper presents a comprehensive simultaneous synthesis approach based on a stage-wise superstructure to design cost-optimal heat exchanger networks (HEN). It is well known that simultaneous synthesis models have very complicated mixed-integer nonlinear programming formulations, which are non-convex and non-continuous and have many local optima; no algorithm can currently be expected to find the global solution to the simultaneous HEN synthesis problem in polynomial time. To reduce computational complexity, simplifying structural assumptions, such as no stream splits, stream splits with isothermal mixing, or no split stream flowing through more than one exchanger, are commonly adopted to prune the search space, at the expense of neglecting important alternatives in the network configuration. In this work, a flexible stage-wise superstructure is proposed to control the solution performance and the search space efficiently. At each stage of the superstructure, whether or not streams are split is determined at random or by the experience of the designer. In this way, various candidate series and split network designs featuring the lowest annual cost can be found. Moreover, an efficient two-level optimisation algorithm employing a genetic algorithm and particle swarm optimisation is used to solve the presented model. Three case studies show the applicability of the proposed methodology, and the results show that the new approach finds more economical networks than those generated by other methods. © 2012 Canadian Society for Chemical Engineering

19.
Sequential Analysis, 2013, 32(3): 413–426
Abstract

In this paper, the methodology needed to design group sequential trials for ordinal categorical data based on the Mann-Whitney-Wilcoxon test is presented and illustrated with two practical examples. Curtailment for futility is an important component in improving the performance of the methods in terms of expected sample size under the null hypothesis. Simulation can also be employed to verify the asymptotic properties claimed by the method and, if necessary, to fine-tune the design to better approximate the desired operating characteristics.
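
A minimal sketch, with illustrative boundaries rather than the paper's design, of how such a trial can be evaluated by simulation: a two-look group sequential Mann-Whitney-Wilcoxon test on ordinal data with curtailment for futility at the interim.

```python
# A minimal sketch: simulate rejection rate and expected sample size of a
# two-look MWW design with an interim futility stop (boundaries illustrative).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(5)

def one_trial(shift, n_interim=30, n_final=60, p_futility=0.5, alpha_final=0.045):
    x = rng.integers(0, 5, n_final)                          # control: 5 categories
    y = np.minimum(rng.integers(0, 5, n_final) + shift, 4)   # shifted treatment
    p_int = mannwhitneyu(y[:n_interim], x[:n_interim], alternative="greater").pvalue
    if p_int > p_futility:                                   # curtail for futility
        return False, n_interim
    p_fin = mannwhitneyu(y, x, alternative="greater").pvalue
    return p_fin < alpha_final, n_final

for shift, label in [(0, "null"), (1, "alternative")]:
    results = [one_trial(shift) for _ in range(5_000)]
    rej = np.mean([r for r, _ in results])
    asn = np.mean([n for _, n in results])
    print(f"{label}: rejection rate={rej:.3f}, expected sample size/arm={asn:.1f}")
```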

20.
Quantifying the effect of the exogenous parameters regulating megakaryopoiesis would enhance the design of robust and efficient protocols for producing platelets. We developed a computational model based on time-dependent ordinary differential equations (ODEs) that decouples the expansion and differentiation kinetics of cells using a subpopulation dynamic model. The model describes the behavior of umbilical cord blood (UCB)-derived cells in response to external stimuli during expansion and megakaryocytic differentiation ex vivo. We observed that the expansion rate of Mk progenitors and the production of mature Mks were higher when TPO was included in the expansion stage and cytokines were added during the differentiation stage. Our computational approach suggests that Mk progenitors are an important intermediate population whose dynamics should be optimized in order to establish an efficient protocol. The model provides important insights into the dynamics of cell subpopulations during megakaryopoiesis and could contribute to the rational design of cell-based therapy bioprocesses.
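
A minimal sketch of a subpopulation ODE model in the abstract's spirit; the structure (progenitors expanding and differentiating into mature megakaryocytes) follows the description, while all rate constants are hypothetical stand-ins for the paper's fitted values.

```python
# A minimal sketch: progenitors P expand and differentiate into mature
# megakaryocytes M; cytokines such as TPO raise the differentiation rate.
import numpy as np
from scipy.integrate import solve_ivp

def megakaryopoiesis(t, y, k_grow, k_diff, k_death):
    P, M = y
    dP = k_grow * P - k_diff * P          # expansion minus differentiation
    dM = k_diff * P - k_death * M         # maturation minus loss
    return [dP, dM]

t_span, y0 = (0.0, 14.0), [1.0, 0.0]      # 14 days from 1 unit of progenitors
for label, k_diff in [("without cytokines", 0.05), ("with TPO + cytokines", 0.25)]:
    sol = solve_ivp(megakaryopoiesis, t_span, y0, args=(0.6, k_diff, 0.1))
    P_end, M_end = sol.y[:, -1]
    print(f"{label}: progenitors={P_end:.1f}, mature Mks={M_end:.1f}")
```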
