Similar Documents
1.
We present the creation and use of a generalized cost function methodology based on costlets for automated optimization of conformal and intensity-modulated radiotherapy treatment plans. In our approach, cost functions are created by combining clinically relevant “costlets”. Each costlet is created by the user, using an “evaluator” of the plan or dose distribution which is incorporated into a function or “modifier” to create an individual costlet. Dose statistics, dose-volume points, biological model results, non-dosimetric parameters, and any other information can be converted into a costlet. A wide variety of different types of costlets can be used concurrently. Individual costlet changes affect not only the results for that structure, but also all the other structures in the plan (e.g., a change in a normal tissue costlet can have large effects on target volume results as well as the normal tissue). Effective cost functions can be created from combinations of dose-based costlets, dose-volume costlets, biological model costlets, and other parameters. Generalized cost functions based on costlets have been demonstrated, and show potential for allowing input of numerous clinical issues into the optimization process, thereby helping to achieve clinically useful optimized plans. In this paper, we describe and illustrate the use of costlets in an automated planning system developed and used clinically at the University of Michigan Medical Center. We place particular emphasis on the flexibility of the system, and its ability to discover a variety of plans making various trade-offs between clinical goals of the treatment that may be difficult to meet simultaneously.
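
The costlet construction lends itself to a compact sketch. The Python fragment below is illustrative only (the evaluators, modifiers, weights, and structures are invented, not the authors' implementation): each costlet pairs an evaluator of the dose distribution with a penalty-style modifier, and the overall cost is their weighted combination.

```python
import numpy as np

# Hypothetical costlet-style cost function: each costlet wraps an "evaluator"
# of the dose distribution in a "modifier", and the total cost combines all
# costlets. Names, penalty forms, and clinical limits are illustrative.

def mean_dose(dose, mask):                # evaluator: mean dose in a structure
    return dose[mask].mean()

def dose_at_volume(dose, mask, frac):     # evaluator: dose-volume point
    return np.quantile(dose[mask], 1.0 - frac)

def overdose_penalty(value, limit, power=2.0):   # modifier: one-sided penalty
    return max(0.0, value - limit) ** power

def underdose_penalty(value, goal, power=2.0):   # modifier: target coverage
    return max(0.0, goal - value) ** power

def total_cost(dose, costlets):
    # costlet = (weight, evaluator(dose) -> scalar, modifier(scalar) -> cost)
    return sum(w * modify(evaluate(dose)) for w, evaluate, modify in costlets)

# Toy plan: one target costlet and one normal-tissue costlet used concurrently.
dose = np.random.default_rng(0).uniform(0, 70, size=1000)   # toy dose grid (Gy)
target = np.zeros(1000, bool); target[:200] = True
cord = np.zeros(1000, bool); cord[200:300] = True

costlets = [
    (10.0, lambda d: dose_at_volume(d, target, 0.95),
           lambda v: underdose_penalty(v, goal=60.0)),      # D95 >= 60 Gy
    (1.0,  lambda d: mean_dose(d, cord),
           lambda v: overdose_penalty(v, limit=45.0)),      # cord mean <= 45 Gy
]
print(total_cost(dose, costlets))
```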

2.
This paper develops an automated approach to planning for mass tactical airborne operations. The proposed tool enables the user to load aircraft properly, according to the mission and user specifications, so that the minimum amount of time is required to seize all assigned objectives. The methodology is based on a hybrid approach in which the first portion is a mathematical model that provides the optimal manifest under “perfect conditions”. This mathematical model is represented by a transportation network, and can be optimized using a transportation algorithm. The optimum solution from the mathematical model is input to a simulation model that introduces the inherent variability induced by wind conditions, drift, aircraft location and speed, and delays between jumper exit times. The simulation returns the expected, best, and worst arrival times at the assigned objectives. This hybrid approach allows a large problem to be solved efficiently, with substantial time savings.
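
As a rough illustration of the two-stage idea (a toy balanced transportation problem and Monte Carlo perturbation, not the paper's actual network or data), the sketch below first finds the minimum-time manifest under perfect conditions and then adds random drift and exit-delay terms to report expected, best, and worst seizure times.

```python
import numpy as np
from scipy.optimize import linprog

# Stage 1: transportation problem assigning jumpers to objectives under
# "perfect conditions". Costs, supplies, and demands are invented.
cost = np.array([[4.0, 6.0, 9.0],     # minutes from aircraft i's drop point
                 [5.0, 4.0, 7.0]])    # to objective j
supply = [60, 60]                     # jumpers per aircraft
demand = [40, 50, 30]                 # jumpers required per objective

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                    # each aircraft's load is fully assigned
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                    # each objective gets its required jumpers
    row = np.zeros(m * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(demand[j])
res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
plan = res.x.reshape(m, n)            # optimal manifest

# Stage 2: Monte Carlo simulation of arrival times around the optimal plan.
rng = np.random.default_rng(1)
samples = []
for _ in range(5000):
    drift = rng.normal(0.0, 1.5, size=(m, n))    # wind/drift variability
    delay = rng.exponential(0.2, size=(m, n))    # jumper exit delays
    t = np.where(plan > 0, cost + drift + delay, np.nan)
    samples.append(np.nanmax(t))      # time until the last objective is seized
samples = np.array(samples)
print(f"expected {samples.mean():.1f}, best {samples.min():.1f}, "
      f"worst {samples.max():.1f} minutes")
```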

3.
The aim of this study was to determine whether better characterization of materials can establish, for quality assurance purposes, their suitability for direct tableting. We tried to show that a methodological approach combining chemical, physical and technological aspects could control the direct compression process. We chose orthoboric acid as a study model for direct compression. From a chemical point of view, our findings show only one crystalline molecular structure (X-ray diffraction, DSC and pycnometry), which indicates a homogeneous chemical system. Concerning the particulate state (sieving and microscopy), granularity is very different between the two forms, “crystalline” ABC and “powder” ABP.

Technological studies show a rheological and mechanical difference, as demonstrated on the one hand by the behaviour of the bulk powder (volumenometer) and on the other by processability on the tableting machine (EKO reciprocating press). We attribute this difference in behaviour solely to granularity. Consequently, we consider that in this case, controlling granularity means controlling the direct tableting process.

4.
5.
A primary function of production management is the control of inventories. The typical manner of economic inventory control calls for a “pre-stocking” approach in which the focus is on the reorder quantities which should be put into inventory. A second manner of controlling inventories is what can be called a “post-stocking” analysis. Here the focus is on how much of the present inventory already in stock should be declared surplus and disposed of. This paper describes the development and implementation of an analytical procedure for this second issue, excess inventory. The procedure is described in the context of an application to a General Motors carburetor assembly process in which product structure interactions play a significant role.
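
The economic trade-off behind such a post-stocking analysis can be sketched in a few lines. The retention rule below is a textbook-style illustration under assumed parameters, not the procedure actually implemented at General Motors: stock is retained only as far as forecast demand can consume it before cumulative holding cost exceeds what disposal would recover today.

```python
# Hypothetical sketch of a "post-stocking" surplus calculation: given stock
# already on hand, keep only what forecast demand can consume before the
# cumulative holding cost exceeds what disposal would recover today; the
# rest is declared surplus. All parameter values are illustrative.

def surplus_quantity(on_hand, annual_demand,
                     holding_rate=0.25, salvage_fraction=0.30):
    # Economic retention period (years): beyond this horizon, holding a unit
    # costs more than the salvage value recovered by disposing of it now.
    retention_years = salvage_fraction / holding_rate
    keep = annual_demand * retention_years     # stock expected to be consumed
    return max(0.0, on_hand - keep)

# A part with 1,000 units on hand but only 300 units/year of forecast demand:
print(surplus_quantity(on_hand=1000, annual_demand=300))   # -> 640.0
```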

6.
Process-oriented tolerancing for multi-station assembly systems
In multi-station manufacturing systems, the quality of final products is significantly affected by product design as well as process variables. Historically, however, tolerance research has primarily focused on allocating tolerances based on the product design characteristics of each component. Currently, there are no analytical approaches to optimally allocate tolerances to integrate product and process variables in multi-station manufacturing processes at minimum cost. The concept of process-oriented tolerancing expands the current tolerancing practices, which bound errors related to product variables, to explicitly include process variables. The resulting methodology extends the concept of “part interchangeability” into “process interchangeability,” which is critical due to increasing requirements related to the selection of suppliers and benchmarking. The proposed methodology is based on the development and integration of three models: (i) the tolerance-variation relation; (ii) variation propagation; and (iii) process degradation. The tolerance-variation model is based on a pin-hole fixture mechanism in multi-station assembly processes. The variation propagation model utilizes a state space representation but uses a station index instead of a time index. Dynamic process effects such as tool wear are also incorporated into the framework of process-oriented tolerancing, which provides the capability to design tolerances for the whole life-cycle of a production system. The tolerances of process variables are optimally allocated by solving a nonlinear constrained optimization problem. An industry case study is used to illustrate the proposed approach.
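
The station-indexed state-space model mentioned above can be made concrete with a small covariance-propagation sketch. The matrices, station count, and tolerance values below are invented placeholders, not the paper's model: the part deviation state evolves as x_{k+1} = A_k x_k + B_k u_k + w_k, with k a station index and u_k the fixture (pin-hole) deviations, and tightening the fixture tolerance visibly shrinks final product variation.

```python
import numpy as np

# Station-indexed state-space sketch: x_{k+1} = A_k x_k + B_k u_k + w_k,
# where k indexes stations rather than time, x is the part deviation state,
# u the fixture deviations introduced at station k, and w unmodeled noise.
# All matrices and tolerance values below are illustrative placeholders.

rng = np.random.default_rng(0)
n_states, n_stations = 3, 4
A = [np.eye(n_states) + 0.05 * rng.standard_normal((n_states, n_states))
     for _ in range(n_stations)]                  # deviation reorientation
B = [0.5 * np.eye(n_states) for _ in range(n_stations)]   # fixture influence

def propagate_covariance(Sigma_u, Sigma_w):
    # Propagate deviation covariance station by station:
    # Sigma_{k+1} = A_k Sigma_k A_k' + B_k Sigma_u B_k' + Sigma_w
    Sigma = np.zeros((n_states, n_states))        # incoming parts are nominal
    for k in range(n_stations):
        Sigma = A[k] @ Sigma @ A[k].T + B[k] @ Sigma_u @ B[k].T + Sigma_w
    return Sigma

# Tighter fixture tolerances (smaller Sigma_u) shrink final product variation:
for tol in (0.10, 0.05):
    Sigma_u = tol**2 * np.eye(n_states)
    Sigma_w = 1e-4 * np.eye(n_states)
    print(tol, np.sqrt(np.diag(propagate_covariance(Sigma_u, Sigma_w))))
```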

7.
A process quality index (PQI) system comprising three process quality indices is proposed. Statistical tolerancing based on the PQIs provides a standardized interface between process quality requirements and control chart design. By adding constraints derived from the PQI-based statistical tolerance zone to the center lines of x̄-R or x̄-s control charts, a new statistical process control method is established that guarantees both preset quality and a stable process state. This not only enhances the function of control charts, but also provides guidance for process quality planning, statistical tolerancing, and the concurrent design of the SPC parameters that guarantee preset quality.
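
A minimal sketch of the interface this describes, with invented index definitions and limits (the paper's three PQIs are not reproduced here): a capability-style index gates the design, and the x̄ chart center line is constrained to a statistical tolerance band around the specification midpoint so that an in-control process also meets the preset quality.

```python
import numpy as np

# Illustrative sketch, not the paper's exact PQI formulation: a capability-
# style index is computed from specification limits, and the x-bar chart
# center line is accepted only inside a statistical tolerance band, so that
# "in control" also implies "meets the preset quality requirement".

def quality_index(usl, lsl, mu, sigma):
    # Cpk-style index; the paper's three PQIs are not reproduced here.
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

def design_xbar_chart(data, usl, lsl, n, pqi_min=1.33):
    mu, sigma = data.mean(), data.std(ddof=1)
    if quality_index(usl, lsl, mu, sigma) < pqi_min:
        raise ValueError("process cannot guarantee the preset quality")
    # Statistical tolerance band that the center line must respect:
    delta = (usl - lsl) / 2.0 - 3.0 * pqi_min * sigma
    mid = (usl + lsl) / 2.0
    center = float(np.clip(mu, mid - delta, mid + delta))
    se = sigma / np.sqrt(n)                      # std dev of subgroup means
    return center - 3 * se, center, center + 3 * se   # LCL, CL, UCL

rng = np.random.default_rng(2)
data = rng.normal(10.02, 0.05, size=200)
print(design_xbar_chart(data, usl=10.3, lsl=9.7, n=5))
```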

8.
The freezing process is widely used in the food industry. In the 1970s, French regulatory authorities, in collaboration with the food industry, created the concept of the «surgélation» process with the objective of improving the image of high-quality frozen foods. The process of “surgélation”, which could be translated as “super freezing”, corresponds to a freezing process for which a final temperature of −18 °C must be reached “as fast as possible”. This concept was proposed in opposition to the conventional “freezing” process, for which no specific freezing rate is expected and the final storage temperature may be only −12 °C. The objective of this work is to propose a methodology to evaluate the mean amount of ice in a complex food as a function of temperature, and to deduce the target temperature below which the food may be considered “frozen”. Based on the definition proposed by the IIF-IIR red book, this target temperature has been defined as the temperature at which 80% of the freezable water is frozen. A case study is proposed with a model food made of two constituents.
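
Under the classic Raoult-type assumption that the frozen fraction of freezable water follows x(T) = 1 − Tf/T (with Tf the initial freezing point, temperatures in °C), the 80% target temperature can be computed directly. The sketch below is illustrative; Tf and the model form are assumptions, and a real food would use a measured curve (e.g. from DSC).

```python
import numpy as np

# Minimal sketch of an ice-fraction curve and the "frozen" target
# temperature, assuming the Raoult-based relation x(T) = 1 - Tf/T for the
# frozen fraction of freezable water (T, Tf in deg C below zero, Tf the
# initial freezing point). Real foods require measured curves.

def frozen_fraction(T, Tf=-1.5):
    """Fraction of freezable water frozen at temperature T (deg C)."""
    T = np.asarray(T, dtype=float)
    return np.where(T < Tf, 1.0 - Tf / T, 0.0)

def target_temperature(frac=0.80, Tf=-1.5):
    # Invert x = 1 - Tf/T for the temperature at which `frac` is frozen.
    return Tf / (1.0 - frac)

print(frozen_fraction([-5.0, -10.0, -18.0]))   # rises toward 1
print(target_temperature())                    # -7.5 deg C for 80% frozen
```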

9.
Macrosegregation in direct-chill casting of aluminium alloys
Semi-continuous direct-chill (DC) casting holds a prominent position in commercial aluminium alloy processing, especially in the production of large-sized ingots. Macrosegregation, which is the non-uniform chemical composition over the length scale of a casting, is one of the major defects that occur during this process. The fact that macrosegregation is essentially unaffected by subsequent heat treatment (hence constitutes an irreversible defect) leaves us with little choice but to control it during the casting stage. Despite over a century of research into the phenomenon of macrosegregation in castings and good understanding of the underlying mechanisms, the contributions of these mechanisms to the overall macrosegregation picture, and the interplay between these mechanisms and structure formation during solidification, are still unclear. This review attempts to fill this gap based on the published data and the authors' own results. The following features make this review unique: results of computer simulations are used in order to separate the effects of different macrosegregation mechanisms, and the issue of grain refining is specifically discussed in relation to macrosegregation. This report is structured as follows. Macrosegregation as a phenomenon is defined in the Introduction. In the “Direct-chill casting – process parameters, solidification and structure patterns” section, direct-chill casting, the role of process parameters and the evolution of structural features in the as-cast billets are described. In the “Macrosegregation in direct-chill casting of aluminium alloys” section, macrosegregation mechanisms are elucidated in a historical perspective and correlations with DC casting process parameters and structural features are made. The issue of how to control macrosegregation in direct-chill casting is also dealt with in the same section. In the “Role of grain refining” section, the effect of grain refining on macrosegregation is introduced, the current understanding is described and the contentious issues are outlined. The review finishes with concluding remarks and an outline of future research.

10.
A general and complete methodology is presented to facilitate systematic modeling and design of polymer processes during the early development period. To capture and handle the subjective type of uncertainty embedded in preliminary process development, fuzzy theories are used as a basis to model and design the process in the presence of ambiguity and vagueness. Physical membership functions are developed for mapping the relation between process variables and the associated fuzzy uncertainties. Based on the qualitative results generated using our previously proposed “linguistic based preliminary design method,” process modeling can proceed even in the absence of any process governing equations. The modeling is carried out by establishing an appropriate fuzzy reasoning system which provides a specific functional mapping that relates input process variables to one or more output performance parameters. A reduced yet feasible domain is generated by our qualitative design scheme to constrain the process variables; any optimization routine can then be employed to search for a proper process design. We demonstrate the effectiveness of the proposed methodology by its application to a typical compression molding process.
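
A toy Mamdani-style fuzzy reasoning step of the kind described, with invented membership functions and rules (the paper's physical membership functions are not reproduced): a single process variable is fuzzified, linguistic rules are fired, and a crisp performance estimate is recovered by centroid defuzzification.

```python
import numpy as np

# Toy fuzzy reasoning sketch: map one process variable (mold temperature)
# to one performance parameter (part quality) through linguistic rules,
# with no governing equations required. Membership functions and rules
# are invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function on points a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def infer_quality(temp):
    # Fuzzify the input: how "low", "medium", "high" is this temperature?
    low = tri(temp, 120, 140, 160)
    med = tri(temp, 150, 170, 190)
    high = tri(temp, 180, 200, 220)
    # Rules: low -> poor, medium -> good, high -> fair (e.g. degradation).
    y = np.linspace(0, 10, 201)              # quality-score universe
    poor, good, fair = tri(y, 0, 2, 4), tri(y, 6, 8, 10), tri(y, 3, 5, 7)
    agg = np.maximum.reduce([np.minimum(low, poor),
                             np.minimum(med, good),
                             np.minimum(high, fair)])
    return (y * agg).sum() / agg.sum()       # centroid defuzzification

for t in (135, 170, 205):
    print(t, round(infer_quality(t), 2))
```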

11.
The structure and mechanical properties of new types of non-crystalline metallic composites, namely “glass-quasi-crystal”, “glass-disclinated nanocrystal” and “quasi-crystal(-glass)-disclinated nanocrystal” composites, are theoretically examined. In particular, a theoretical model is proposed which effectively describes the relationship between plastic deformation and the growth of the glassy phase in metallic “glass-quasi-crystal” composite materials. Basic features of both the structure and the mechanical properties of the “glass-disclinated nanocrystal” and “quasi-crystal(-glass)-disclinated nanocrystal” composites are also examined. It is shown that such composites are characterized by a very high yield stress.

12.
Quenching of a liquid Pb droplet containing 8217 atoms to T ≈ 0.65Tm is studied on the nanosecond time scale using molecular dynamics and a “glue” force model. Crystallization takes place, and analysis of the final structure reveals an icosahedral-like shape, even though cuboctahedral single-crystal structures are energetically favoured by our potential at all sizes. This result appears to be a consequence of the rapid formation of {111} crystallization fronts at the liquid surface which move inward, and provides an explanation for the rapid structural fluctuations observed in electron microscopy experiments.

13.
14.
15.
In May 1987 the United States Food and Drug Administration published the final version of a guideline for process validation for pharmaceutical manufacturing. The document incorporated the comments from the pharmaceutical industry gathered after the publication of three draft versions in 1983, 1984 and 1986.

The presentation will cover the current definition of process validation as well as terms such as “worst case” and “installation qualification”.

The stages of process validation will be discussed, including the written plan (protocol); records to be maintained; suitability of raw materials; equipment performance qualification; the number of runs required; and acceptance criteria.

Specifics for solid dosage forms will be presented, along with details on batch record instructions and the establishment of acceptable range limits.

Circumstances and requirements for revalidation will be discussed as well as the validation of current finished dosage forms by retrospective validation.

16.
17.
A novel technique is presented for indirectly monitoring threshold exceedance in a sparsely-instrumented structure represented by a linear dynamic model subject to uncertain excitation modeled as a Gaussian process. The goal is to answer the following question: given incomplete output data from a structure excited by uncertain dynamic loading, what is the probability that any particular unobserved response of the structure exceeds a prescribed threshold? It is assumed that a good linear dynamic model of the target structure has previously been identified using dynamic test data. The technique is useful for monitoring the serviceability limit states of a structure subject to unmeasured “small-amplitude” ambient excitation (e.g. wind excitation or non-damaging earthquake ground motions), or for monitoring the damage status of equipment housed in the structure that is vulnerable to such excitation. The ISEE algorithm developed by Au and Beck in 2000 is used to efficiently estimate the threshold exceedance (first-passage) probability by stochastic simulation. To improve computational efficiency for the monitoring problem, a new state-space version of ISEE is developed that incorporates state-estimation and a newly-developed state-sampling technique. The computational efficiency of the proposed technique is demonstrated through two numerical examples that show that it is vastly superior to Monte Carlo simulation in estimating the first-passage probability. Moreover, the approach produces useful by-products, including estimates for the model state and the uncertain excitation.
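
The quantity being estimated can be illustrated with a brute-force Monte Carlo baseline, which is exactly the approach ISEE is designed to beat; the importance-sampling and state-estimation machinery is not reproduced here, and the single-degree-of-freedom model and threshold below are invented for illustration.

```python
import numpy as np

# Brute-force Monte Carlo estimate of a first-passage probability for a
# linear model under Gaussian white-noise excitation (illustrative SDOF
# system; ISEE estimates the same quantity far more efficiently).
rng = np.random.default_rng(3)

dt, wn, zeta = 0.01, 2 * np.pi, 0.05       # step, natural freq, damping
A = np.array([[1.0, dt],
              [-wn**2 * dt, 1.0 - 2 * zeta * wn * dt]])
b = np.array([0.0, dt])                    # excitation enters the velocity

def first_passage_prob(threshold, T=1000, n_sim=5000, sigma_w=1.0):
    x = np.zeros((n_sim, 2))               # [displacement, velocity] per run
    alive = np.ones(n_sim, dtype=bool)     # runs that have not yet crossed
    for _ in range(T):
        w = rng.normal(0.0, sigma_w, size=n_sim)
        x = x @ A.T + np.outer(w, b)
        alive &= np.abs(x[:, 0]) <= threshold   # monitor unobserved response
    return 1.0 - alive.mean()

print(first_passage_prob(threshold=0.05))  # a small probability; costly by MC
```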

18.
Statistical process modeling is widely used in industry for forecasting production outcomes, for process control and for process optimization. Applying a prediction model in a production process allows the user to calibrate/predict the mean of the distribution of the process outcomes and to partition the overall variation in the distribution of the process outcomes into explained (by the model) and unexplained (residual) variation, thus reducing the unexplained variability. The additional information about the process behavior can be used prior to the sampling procedure and may help to reduce the sample size required to classify a lot. This research focuses on the development of a model-based sampling plan based on Cpk (the process capability index). It is an extension of a multistage acceptance sampling plan also based on Cpk (Negrin et al., Quality Engineering 2009; 21:306–318; Quality and Reliability Engineering International 2011; 27:3–14). The advantage of this sampling plan is that the sample size needed depends directly and quantitatively on the quality of the process (Cpk), whereas other sampling plans such as MIL-STD-414 (Sampling Procedures and Tables for Inspection by Variables for Percent Defective, Department of Defense, Washington, DC, 1957) use only qualitative measures. The objective of this paper is to further refine the needed sample size by using a predictive model for the lot's expectation. We developed model-based sample size formulae which depend directly on the quality of the prediction model (as measured by R2) and adjust the ‘not model-based’ multistage sampling plan developed in Negrin et al. accordingly. A simulation study was conducted to compare the model-based and ‘not model-based’ sampling plans. It is found that when R2 = 0, the model-based and ‘not model-based’ sampling plans require the same sample sizes in order to classify the lots. However, as R2 becomes larger, the sample size required by the model-based sampling plan becomes smaller than the one required by the ‘not model-based’ sampling plan. In addition, it is found that the reduction of the sample size achieved by the model-based sampling plan becomes more significant as Cpk tends to 1, and can be achieved without increasing the proportion of classification errors. Finally, the suggested sampling plan was applied to a real data set from a chemicals manufacturing process for illustration.
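
The qualitative behavior reported above is easy to mimic in a few lines. The formulae below are assumed for illustration only (the paper's actual sample-size formulae are not reproduced): a placeholder base sample size shrinks with Cpk, and the model-based plan scales it by the unexplained variance fraction 1 − R², so R² = 0 leaves it unchanged and large R² shrinks it.

```python
import math

# Illustrative sketch only; the paper's formulae are not reproduced. The
# idea conveyed: a prediction model explains R^2 of the outcome variance,
# so classification can rest on the residual variance (1 - R^2) * sigma^2,
# shrinking the required sample size. Both rules below are assumptions.

def base_sample_size(cpk, n_full=125, n_min=5):
    # Placeholder 'not model-based' rule: better processes (higher Cpk)
    # need fewer samples; the multistage plan's actual tables differ.
    return max(n_min, math.ceil(n_full / max(cpk, 0.5) ** 4))

def model_based_sample_size(cpk, r2, n_min=5):
    # Assumed scaling by the unexplained variance fraction (1 - R^2).
    return max(n_min, math.ceil(base_sample_size(cpk) * (1.0 - r2)))

for cpk in (1.0, 1.33, 1.67):
    for r2 in (0.0, 0.5, 0.9):
        print(f"Cpk={cpk:.2f} R2={r2:.1f} -> "
              f"n={model_based_sample_size(cpk, r2)}")
```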

19.
Material heterogeneities and discontinuities such as porosity, second-phase particles, and other defects at meso/micro/nano scales determine the fatigue life, strength, and fracture behavior of aluminum castings. In order to achieve better performance of these alloys, a design-centered, computer-aided renovative approach is proposed. Here, the term “design-centered” is used to distinguish the new approach from the traditional trial-and-error design approach by formulating a clear objective, offering a scientific foundation, and developing an effective computer-aided tool for alloy development. A criterion for tailoring the “child” microstructure, obtained from the “parent” microstructure through statistical correlation, is proposed for fatigue design at the initial stage. A dislocation pileup model has been developed. This dislocation model, combined with an optimization analysis, provides an analytical small-scale solution for silicon particles and dendrite cells to enhance both fatigue performance and strength for pore-controlled castings. It can also be used to further tailor microstructures. In addition, a conceptual damage sensitivity map for fatigue life design is proposed. In this map there are critical pore sizes, above which fatigue life is controlled by pores; otherwise it is controlled by other mechanisms such as silicon particles and dendrite cells. In the latter case, the proposed criteria and the dislocation model are the foundations of a guideline in the design-centered approach to maximize both the fatigue life and strength of Al-Si-based light-weight alloys.
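
The classical pile-up relations underlying such a dislocation model can be stated compactly. The sketch below uses textbook formulae and typical aluminium constants; it is not the paper's model, and taking the dendrite cell size as the pile-up length is an assumption made here for illustration.

```python
import math

# Classical dislocation pile-up sketch (textbook relations, not the paper's
# full model): a pile-up of length L, taken here to be limited by the
# dendrite cell size, under resolved shear stress tau contains about
# n = pi * (1 - nu) * L * tau / (mu * b) dislocations and concentrates a
# stress of roughly n * tau at its tip, e.g. at a silicon particle.

mu, nu, b = 26e9, 0.33, 2.86e-10   # shear modulus (Pa), Poisson, Burgers (m)

def pileup_dislocations(tau, L):
    return math.pi * (1.0 - nu) * L * tau / (mu * b)

def tip_stress(tau, L):
    return pileup_dislocations(tau, L) * tau

# Refining the dendrite cell size from 60 um to 30 um halves the pile-up
# length and hence the stress concentrated on a silicon particle:
for L in (60e-6, 30e-6):
    print(f"L={L * 1e6:.0f} um  tip stress={tip_stress(10e6, L) / 1e9:.2f} GPa")
```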

20.
《Quality Engineering》2007,19(2):93-100
Difficulties can occur in the operation of traditional control charts. A principal reason for this is that the data coming from a typical operating process do not vary about a fixed mean. It is shown how, by using a nonstationary model, a continuously updated local mean level is provided. This can be used to produce (a) a bounded adjustment chart that tells you when to adjust the process to achieve maximum economy and (b) a Shewhart monitoring chart, applied to the deviations from the local mean, that seeks assignable causes of trouble. Estimation of the mean and “standard deviation” is not required.
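
A minimal sketch of the two charts working together, assuming an EWMA-tracked local level over a slowly drifting process (a standard pairing in Box-style bounded adjustment); the smoothing weight, adjustment bound, and alarm limit are illustrative choices, not the article's values.

```python
import numpy as np

# (a) a bounded adjustment chart driven by an EWMA local-mean estimate, and
# (b) Shewhart monitoring of the deviations from that local mean.
rng = np.random.default_rng(4)
lam, limit = 0.3, 2.0          # EWMA weight and adjustment bound (assumed)

def run(n=300):
    level, z, adjustments = 0.0, 0.0, []
    for t in range(n):
        level += rng.normal(0.0, 0.15)        # slow nonstationary drift
        y = level + rng.normal(0.0, 1.0)      # observed deviation from target
        z = lam * y + (1 - lam) * z           # continuously updated local mean
        if abs(z) > limit:                    # (a) adjust only when economic
            level -= z                        # apply compensating adjustment
            adjustments.append(t)
            z = 0.0
        elif abs(y - z) > 3.0:                # (b) assignable-cause signal
            print(f"t={t}: possible assignable cause")
    print("adjustments at:", adjustments)

run()
```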
