Similar Documents
A total of 20 similar documents were retrieved.
1.
An algorithm is developed for computing proton dose distributions in the therapeutic energy range (100-250 MeV). The goal is to provide accurate pencil beam dose distributions for two-dimensional or three-dimensional simulations of possible intensity-modulated proton therapy delivery schemes. The algorithm is based on Molière's theory of lateral deflections, which accurately describes the distribution of lateral deflections suffered by incident charged particles. The theory is applied to nonuniform targets through the usual pencil beam approximation, which assumes that all protons from a given pencil beam pass through the same material at each depth. Fluence-to-dose conversion is made via Monte Carlo calculated broad-field central-axis depth-dose curves, which account for attenuation due to nuclear collisions and range straggling. Calculation speed is enhanced by using a best-fit Gaussian approximation of the radial distribution function at depth. Representative pencil beam and spread-out Bragg-peak computations are presented at 250 MeV and 160 MeV in water. Computed lateral full widths at half maximum in water, at the Bragg peak, agree with the expected theoretical values to within 1% at 160 MeV and to within 3% at 250 MeV. This algorithm differs from convolution methods in that the effect of the depth of any inhomogeneities in density or atomic composition is accounted for in a rigorous fashion. The algorithm differs from Fermi-Eyges-based methods by accounting in a rigorous way for the effect of non-small-angle scattering and screening due to atomic electrons. The computational burden is only slightly greater than that expected using the less rigorous Fermi-Eyges theory.
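
For illustration only (not the authors' algorithm): a minimal Python sketch of the best-fit Gaussian approximation of the radial distribution at depth, where the depth dependence of the width sigma(z) is a placeholder assumption rather than a Molière-theory or Monte Carlo fit.

```python
import numpy as np

def gaussian_pencil_profile(r, sigma):
    """Normalized 2-D radial Gaussian: integrates to 1 over the transverse plane."""
    return np.exp(-r**2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

def sigma_of_depth(z_cm, a=0.02, b=0.0015):
    """Placeholder width model: sigma grows with depth z (cm); the coefficients
    are illustrative assumptions, not fitted values from the paper."""
    return a * z_cm + b * z_cm**2

z = 20.0                        # depth in water, cm (assumed)
r = np.linspace(0.0, 2.0, 201)  # radial distance from the beam axis, cm
profile = gaussian_pencil_profile(r, sigma_of_depth(z))

# The FWHM of a Gaussian is 2*sqrt(2*ln 2)*sigma, the quantity compared in the abstract.
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_of_depth(z)
print(f"sigma = {sigma_of_depth(z):.3f} cm, FWHM = {fwhm:.3f} cm")
```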

2.
The FE-lspd model is a two-component electron beam model that distinguishes between electrons that can be described by small-angle transport theory and electrons that are too widely scattered for small-angle transport theory to be applicable. The two components are called the primary beam and the laterally scattered primary distribution (lspd). The primary beam component incorporates a simple version of the Fermi-Eyges model and dominates dose calculations at therapeutic depths. The lspd component corrects errors in the lateral spreading of the primary beam component, thereby improving the accuracy with which the FE-lspd model calculates dose distributions in blocked fields. Comparisons were made between dose profiles and central-axis depth dose distributions in small fields calculated by the FE-lspd, Fermi-Eyges and EGS4 Monte Carlo models for a 10 MeV beam in a homogeneous water phantom. The maximum difference between the dose calculated using the FE-lspd model and EGS4 Monte Carlo is about 6% at a field diameter of about 1 cm, and less than 2% for field sizes greater than 3 cm in diameter. The maximum difference between the Fermi-Eyges and Monte Carlo calculations is about 18% at a field diameter of about 2.5 cm. A comparison was made with the central-axis depth dose distribution measured in water for a 3 cm diameter field in a 10 MeV clinical electron beam. The errors in the dose distribution were found to be less than 2% using the FE-lspd model but almost 18% using the Fermi-Eyges model. A comparison was also made with pencil beam profiles calculated using the second-order Fermi-Eyges transport model.

3.
Characteristics of dual-foil scattered electron beams shaped with a multileaf collimator (MLC) (instead of an applicator system) were studied. The electron beams, with energies between 10 and 25 MeV, were produced by a racetrack microtron using a dual-foil scattering system. For a range of field sizes, depth dose curves, profiles, penumbra width, angular spread in air, and effective and virtual source positions were compared. Measurements were made when the MLC alone provided collimation and when an applicator provided collimation. Identical penumbra widths were obtained at a source-to-surface distance of 85 cm for the MLC and 110 cm for the applicator. The MLC-shaped beams had characteristics similar to those of other machines that use trimmers or applicators to collimate scanned or scattered electron beams. Values of the effective source position and the angular spread parameter for the MLC beams were similar to those of the dual-foil scattered beams of the Varian Clinac 2100 CD and the scanned beams of the Sagittaire linear accelerators. A model, based on Fermi-Eyges multiple scattering theory, was adapted and applied successfully to predict penumbra width as a function of collimator-surface distance.
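
A hedged sketch of how a Fermi-Eyges-style model can predict penumbra growth with collimator-to-surface distance: the field edge is blurred by a Gaussian whose width grows with the air gap through the in-air angular spread. The numerical values (initial edge width, angular spread) are illustrative assumptions, not the measured machine parameters.

```python
import numpy as np
from scipy.stats import norm

def edge_sigma(sigma0_cm, theta0_rad, gap_cm):
    """Gaussian width of the field edge at the surface: the initial width at the
    collimator broadened by drift through the air gap with angular spread theta0."""
    return np.hypot(sigma0_cm, theta0_rad * gap_cm)

def penumbra_80_20(sigma_cm):
    """80%-20% width of an error-function edge with Gaussian width sigma."""
    return 2.0 * norm.ppf(0.8) * sigma_cm

# Illustrative (assumed) values: 1 mm edge width at the MLC, 0.03 rad angular spread.
for gap in (20.0, 40.0, 60.0):   # collimator-to-surface distance, cm
    s = edge_sigma(0.1, 0.03, gap)
    print(f"gap = {gap:4.0f} cm -> penumbra ≈ {penumbra_80_20(s):.2f} cm")
```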

4.
5.
Reflectometric measurements provide an objective assessment of the directionality of the photoreceptors in the human retina. Measurements are obtained by imaging, at the pupil plane, the distribution of light reflected off the human fundus in a bleached condition. We propose that scattering as well as waveguiding must be included in a model of the intensity distribution at the pupil plane. For scattering, the cone-photoreceptor array is treated as a random rough surface, characterized by the correlation length T (related to the distance between scatterers, i.e., the mean cone spacing) and the roughness standard deviation sigma (assuming random variations of the cone outer-segment lengths that produce random phase differences). For realistic values of T and sigma, the Kirchhoff approximation can be used to compute the scattering distribution. The scattered component of the distribution can be fitted to a Gaussian function whose width depends only on T and lambda. Actual measurements vary with experimental conditions (exposure time, retinal eccentricity, and lambda) in a manner consistent with the scattering model. However, photoreceptor directionality must be included in the model to explain the actual location of the peak of the intensity distribution in the pupil plane and the total angular spread of light.
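
To make the Gaussian description concrete, here is a small sketch (with synthetic data and assumed parameter values, not actual reflectometric measurements) that fits a Gaussian to a pupil-plane intensity profile and reports its peak position and width.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, width):
    """Gaussian model for the scattered intensity across the pupil plane."""
    return amplitude * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

# Synthetic pupil-plane intensity: a Gaussian of assumed width 2.5 mm plus noise.
rng = np.random.default_rng(0)
x_mm = np.linspace(-4.0, 4.0, 81)
truth = gaussian(x_mm, 1.0, 0.3, 2.5)
data = truth + rng.normal(scale=0.02, size=x_mm.size)

popt, _ = curve_fit(gaussian, x_mm, data, p0=(1.0, 0.0, 2.0))
print(f"fitted peak position = {popt[1]:.2f} mm, fitted width = {popt[2]:.2f} mm")
```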

6.
We analyse the limits of the diffusion approximation to the time-independent equation of radiative transfer for homogeneous and heterogeneous biological media. Analytical calculations and finite-difference simulations based on diffusion theory are compared with discrete-ordinate, finite-difference transport calculations. The influence of the ratio of the absorption and transport scattering coefficients (mu_a/mu_s') on the accuracy of the diffusion approximation is quantified, and different definitions of the diffusion coefficient, D, are discussed. We also address effects caused by void-like heterogeneities in which absorption and scattering are very small compared with the surrounding medium. Based on results for simple homogeneous and heterogeneous systems, we analyse diffusion and transport calculations of light propagation in the human brain. For these simulations we convert density maps obtained from magnetic resonance imaging (MRI) into optical-parameter maps (mu_a and mu_s') of the brain. We show that diffusion theory fails to describe light propagation accurately in highly absorbing regions, such as a haematoma, and in void-like spaces, such as the ventricles and the subarachnoid space.
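
A short worked example of the two diffusion-coefficient definitions alluded to above, using assumed tissue-like optical properties; it shows that the choice of D matters only when mu_a is no longer negligible compared with mu_s'.

```python
import numpy as np

def diffusion_coefficients(mu_a, mu_s_prime):
    """Two common definitions of the diffusion coefficient D (units: mm):
    one including absorption, one based on the transport scattering coefficient only."""
    d_with_absorption = 1.0 / (3.0 * (mu_a + mu_s_prime))
    d_scatter_only = 1.0 / (3.0 * mu_s_prime)
    return d_with_absorption, d_scatter_only

# Illustrative tissue-like optical properties in 1/mm (assumed, not from the paper).
for mu_a in (0.001, 0.01, 0.1):
    mu_s_prime = 1.0
    d1, d2 = diffusion_coefficients(mu_a, mu_s_prime)
    mu_eff = np.sqrt(mu_a / d1)   # effective attenuation coefficient, 1/mm
    print(f"mu_a/mu_s' = {mu_a / mu_s_prime:6.3f}: D = {d1:.4f} or {d2:.4f} mm, "
          f"mu_eff = {mu_eff:.4f} 1/mm")
```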

7.
In diffraction tomography, the spatial distribution of the scattering object is reconstructed from the measured scattered data. For a scattering object that is illuminated with plane-wave radiation, under the condition of weak scattering one can invoke the Born (or the Rytov) approximation to linearize the equation for the scattered field (or the scattered phase) and derive a relationship between the scattered field (or the scattered phase) and the distribution of the scattering object. Reconstruction methods such as the Fourier domain interpolation methods and the filtered backpropagation method have been developed previously. However, the underlying relationship among and the noise properties of these methods are not evident. We introduce the concepts of ideal and modified sinograms. Analysis of the relationships between, and the noise properties of, the two sinograms reveals infinite classes of methods for image reconstruction in diffraction tomography that include the previously proposed methods as special members. The methods in these classes are mathematically identical, but they respond to noise and numerical errors differently.

8.
The dislocation-network theory of Harper-Dorn (H-D) creep is reformulated using a new equation for the kinetics of growth of individual dislocation links in the network. The new kinetic equation has no impact on the scaled differential equation derived previously, which predicts the distribution of link lengths. However, the new theory predicts slightly different behavior for the kinetics of static recovery and leads to a new equation for the strain rate, which is expressed in terms of parameters that can be evaluated independently. This equation is valid not only for steady-state H-D creep but also for primary creep, provided the instantaneous value of the dislocation density is known. Using data on the variation of dislocation density with time, calculated values of the creep rates for Al deformed in the H-D regime agree with experimentally measured values to within a factor of 2. Creep curves for Al are calculated with the same degree of accuracy. These calculations involve no adjustable parameters. Steady-state creep rates for many materials presumably deformed in the H-D creep regime are compared with the predictions of the new equation for the strain rate. The calculated values agree with experimentally measured data to within a factor of about 150, which compares well with the predictions of other equations proposed in the literature. This article is based on a presentation made at the workshop entitled “Mechanisms of Elevated Temperature Plasticity and Fracture,” which was held June 27–29, 2001, in San Diego, CA, concurrent with the 2001 Joint Applied Mechanics and Materials Summer Conference. The workshop was sponsored by Basic Energy Sciences of the United States Department of Energy.

9.
Clinical dose calculations are often performed by scaling distances from a dose distribution measured in one medium to calculate the dose in another. These perturbation calculations have the mathematical form of a mapping. In this paper we identify five conditions required for particle transport to reduce to this form and develop a new mapping for electrons which approximately satisfies these conditions. This continuous scattering mapping is based on two parameters: the scattering power of the medium, which determines the shape of the scaling paths, and the stopping power of the medium, which determines where the energy is deposited along these paths. Pencil beam dose distributions are calculated with EGS4 in one medium and mapped to other media. The resultant distributions are compared with EGS4 calculations done directly in the second medium. The accuracy of the mapping algorithm is shown to be superior to both linear density scaling and the MDAH electron pencil beam algorithm [Kenneth R. Hogstrom, Michael D. Mills, and Peter R. Almond, "Electron beam dose calculations," Phys. Med. Biol. 26, 445-459 (1981)] for pencil beams in homogeneous media and inhomogeneous phantoms (both slab and nonslab geometries) for a variety of materials of clinical interest.
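
For contrast with the continuous scattering mapping, the sketch below implements the simpler linear density scaling baseline mentioned in the abstract: geometric depths are converted to water-equivalent (radiological) depths by integrating relative density along the beam axis. The slab phantom values are assumptions for illustration.

```python
def radiological_depth(z_cm, layer_bounds_cm, layer_densities):
    """Water-equivalent depth via linear density scaling: integrate relative
    density along the beam axis down to the geometric depth z_cm."""
    depth, prev = 0.0, 0.0
    for bound, rho in zip(layer_bounds_cm, layer_densities):
        top = min(z_cm, bound)
        if top > prev:
            depth += (top - prev) * rho
        prev = bound
        if z_cm <= bound:
            break
    return depth

# Assumed slab phantom: 2 cm water, 3 cm lung-like material (rho = 0.3), water below.
bounds = [2.0, 5.0, 30.0]
densities = [1.0, 0.3, 1.0]
for z in (1.0, 3.0, 6.0):
    print(f"geometric depth {z:.1f} cm -> water-equivalent depth "
          f"{radiological_depth(z, bounds, densities):.2f} cm")
```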

10.
Longitudinal Dispersion Coefficient in Straight Rivers
An analytical method is developed to determine the longitudinal dispersion coefficient in Fischer's triple integral expression for natural rivers. The method is based on the hydraulic geometry relationship for stable rivers and on the assumption that the uniform-flow formula is valid for local depth-averaged variables. For straight alluvial rivers, a new transverse profile equation for channel shape and local flow depth is derived, and the lateral distribution of the deviation of the local velocity from the cross-sectionally averaged value is then determined. The suggested expression for the transverse mixing coefficient and direct integration of Fischer's triple integral are employed to determine a new theoretical equation for the longitudinal dispersion coefficient. By comparing with 73 sets of field data and with the equations proposed by other investigators, it is shown that the derived equation, containing the improved transverse mixing coefficient, predicts the longitudinal dispersion coefficient of natural rivers more accurately.
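
A hedged numerical sketch of Fischer's triple-integral expression in one commonly quoted form; treat the exact form, and all the channel values below, as assumptions if they differ from the paper's.

```python
import numpy as np
from scipy.integrate import trapezoid, cumulative_trapezoid

def fischer_dispersion(y, h, u_dev, eps_t):
    """Numerically evaluate K = -(1/A) * int u'h * [int 1/(eps_t h) * int u'h dy] dy.
    y: transverse coordinate; h: local depth; u_dev: deviation of the depth-averaged
    velocity from the cross-sectional mean; eps_t: transverse mixing coefficient."""
    area = trapezoid(h, y)
    inner = cumulative_trapezoid(u_dev * h, y, initial=0.0)            # int u'h dy
    middle = cumulative_trapezoid(inner / (eps_t * h), y, initial=0.0) # int inner/(eps_t h) dy
    return -trapezoid(u_dev * h * middle, y) / area

# Illustrative idealized channel (assumed values, not field data):
y = np.linspace(0.0, 50.0, 501)           # channel width 50 m
h = np.full_like(y, 2.0)                  # uniform 2 m depth
u_dev = 0.3 * np.cos(np.pi * y / 50.0)    # velocity deviation with zero cross-sectional mean
eps_t = 0.02                              # transverse mixing coefficient, m^2/s
print(f"K ≈ {fischer_dispersion(y, h, u_dev, eps_t):.1f} m^2/s")
```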

11.
A new asymptotic expansion is applied to approximate reliability integrals. The asymptotic approximation reduces the problem of evaluating a multidimensional probability integral to solving an unconstrained minimization problem. Approximations are developed in both the transformed (independent, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of second-order reliability method integrals. In many cases it may be computationally expensive to transform to normal variables, and an approximation using the probability distribution of the original variables can be used instead. Examples are presented illustrating the accuracy of the approximations, and results are compared with some existing approximations of reliability integrals.
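
The paper's asymptotic expansion is not reproduced here; for orientation, the sketch below evaluates the classical asymptotic second-order reliability (SORM) formula of Breitung, Pf ≈ Phi(-beta) * prod_i (1 + beta*kappa_i)^(-1/2), with assumed values of the reliability index and principal curvatures.

```python
import numpy as np
from scipy.stats import norm

def sorm_breitung(beta, curvatures):
    """Classical asymptotic SORM estimate of the failure probability, where beta is
    the reliability index and kappa_i are the principal curvatures of the
    limit-state surface at the design point."""
    kappas = np.asarray(curvatures, dtype=float)
    return norm.cdf(-beta) * np.prod(1.0 / np.sqrt(1.0 + beta * kappas))

# Illustrative values (assumed): reliability index 3.0, two principal curvatures.
print(f"FORM: Pf ≈ {norm.cdf(-3.0):.3e}")
print(f"SORM: Pf ≈ {sorm_breitung(3.0, [0.1, -0.05]):.3e}")
```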

12.
The sensitivity theory developed for nuclear engineering applications is applied to radiotherapy planning. After emphasizing the mathematical equivalence of solving the Boltzmann equation in forward and adjoint spaces, both approaches are implemented to calculate the sensitivities of the dose distributions in a mathematical phantom to changes in the source of radiation. Tagging the functionals of the radiation field with the origin of the source (forward or adjoint) makes it possible to calculate the sensitivity of the dose to the position, angular distribution, intensity, and spectrum of the source in a very efficient way using presently available codes and hardware. There is therefore potential for a new, accurate and potentially faster method that does not rely on a trial-and-error methodology.
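
A toy discrete analogue of the forward/adjoint equivalence (an illustration, not the authors' Boltzmann solver): for a linear system A*phi = s and a dose functional D = d^T phi, a single adjoint solve yields the sensitivity of D to every source component at once.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete "transport" operator, source, and dose-response vector (all assumed).
n = 6
A = np.eye(n) * 3.0 + 0.2 * rng.random((n, n))   # well-conditioned linear operator
s = rng.random(n)                                # source strengths per region
d = rng.random(n)                                # detector/dose weighting

# Forward solve gives the dose for the unperturbed source.
phi = np.linalg.solve(A, s)
dose = d @ phi

# One adjoint solve gives dD/ds_j for every source component j, since D = phi_adj^T s.
phi_adj = np.linalg.solve(A.T, d)
sensitivities = phi_adj

print(f"dose = {dose:.4f}")
print("dD/ds_j from adjoint :", np.round(sensitivities, 4))

# Check one component against a finite difference on the forward model.
j, eps = 2, 1e-6
s_pert = s.copy()
s_pert[j] += eps
fd = (d @ np.linalg.solve(A, s_pert) - dose) / eps
print(f"finite-difference check (j={j}): {fd:.4f}")
```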

13.
Recent advances in neutron and X-ray sources and instrumentation, new and improved scattering techniques, and molecular biology techniques, which have permitted facile preparation of samples, have each led to new opportunities in using small-angle scattering to study the conformations and interactions of biological macromolecules in solution as a function of their properties. For example, new instrumentation on synchrotron sources has facilitated time-resolved studies that yield insights into protein folding. More powerful neutron sources, combined with molecular biology tools that isotopically label samples, have facilitated studies of biomolecular interactions, including those involving active enzymes.

14.
15.
New Point Estimates for Probability Moments
There are many areas of structural safety and structural dynamics in which it is often desirable to compute the first few statistical moments of a function of random variables. The usual approximation is the Taylor expansion method, which requires the computation of derivatives. To avoid computing derivatives, point estimates for probability moments have been proposed. However, their accuracy is quite low, and the estimating points may sometimes lie outside the region in which the random variable is defined. In the present paper, new point estimates for probability moments are proposed, in which increasing the number of estimating points is easier because the estimating points are independent of the random variable in its original space and high-order moments of the random variables are not required. With this approximation, the practicability and accuracy of point estimates can be much improved.
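
The paper's new estimating points are not reproduced here; as a baseline, this sketch implements the classical symmetric two-point estimate, which evaluates the function at mu ± sigma with equal weights to approximate the first two moments of Y = g(X).

```python
import numpy as np

def two_point_moments(g, mu, sigma):
    """Classical two-point estimate (symmetric case): evaluate g at mu ± sigma
    with weights 1/2 and form the mean and standard deviation of Y = g(X)."""
    y = np.array([g(mu - sigma), g(mu + sigma)])
    mean = 0.5 * y.sum()
    var = 0.5 * ((y - mean) ** 2).sum()
    return mean, np.sqrt(var)

# Example: Y = X^2 with X having mu = 2, sigma = 0.5; the exact mean is mu^2 + sigma^2 = 4.25.
mean_y, std_y = two_point_moments(lambda x: x ** 2, 2.0, 0.5)
print(f"point-estimate mean ≈ {mean_y:.3f}, std ≈ {std_y:.3f}")
```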

16.
The theory behind ideal sedimentation tanks assumes that the fluid moves in uniform flow. Numerous studies have documented nonuniform flow patterns, which explains why the solids removal efficiency of real clarifiers does not match theoretical predictions. The problem worsens when the influent flow rate exceeds what the clarifier was designed to handle. This research shows that introducing a highly porous bed of “dendrite” fibers into clarifiers designed for the pulp and paper industry removed some of the nonuniformities, as shown by the residence time distribution (RTD). These clarifiers have RTDs similar to those of their waste-treatment counterparts, so the new technology is expected to have similar effects in waste-treatment systems. The bed acts as a resistor to nonaxial flow, reducing the radial and angular components of velocity. It is also shown that the greatest effect on the bulk flow patterns occurs when the bed is positioned such that all of the overflow passes through it; increasing the bed thickness also increases the effect. Analysis of these results was performed with a new model for RTDs based on the Weibull distribution, which is mathematically similar to the equation for a mixed-flow RTD.
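
A minimal sketch of a Weibull-form residence time distribution (parameter values assumed); a shape parameter of k = 1 recovers the exponential RTD of an ideal mixed-flow vessel, which is the mathematical similarity noted above.

```python
import numpy as np

def weibull_rtd(t, scale, shape):
    """Residence-time distribution of Weibull form:
    E(t) = (k/lam) * (t/lam)**(k-1) * exp(-(t/lam)**k).
    shape = 1 reduces to the exponential RTD of an ideal mixed-flow (CSTR) vessel."""
    t = np.asarray(t, dtype=float)
    return (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

t = np.linspace(0.01, 200.0, 5)   # minutes (assumed time base)
print("mixed-flow-like (k=1):", np.round(weibull_rtd(t, scale=40.0, shape=1.0), 4))
print("narrower RTD   (k=2):", np.round(weibull_rtd(t, scale=40.0, shape=2.0), 4))
```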

17.
In this study, a theoretical method for predicting the longitudinal dispersion coefficient is developed based on the transverse velocity distribution in natural streams. Equations for the transverse velocity profile in irregular cross sections of natural streams are analyzed. Among the velocity profile equations tested in this study, the beta distribution equation, which is a probability density function, is considered the most appropriate model for describing the complex behavior of the transverse velocity structure of irregular natural streams. A new equation for the longitudinal dispersion coefficient, based on the beta function for the transverse velocity profile, is developed. A comparison of the proposed equation with existing equations and with observed longitudinal dispersion coefficients reveals that the proposed equation shows better agreement with the observed data than the existing equations.

18.
In this paper we propose a new theoretical thermo-mechanical explanation of the uneven transverse temperature distribution, along the width, of thin and wide hot-rolled strip. In particular, we base our reasoning on the irregular pressure/friction distribution, which leads to uneven heat generation. A 2-D mathematical model to calculate the transverse temperature distribution is presented, both to give a physical explanation of the problem and to serve as an essential basis for building a corresponding FEM simulation model in which heat loss and heat generation are both considered. Deformation and friction heat are both described in detail, as they are of paramount importance to our reasoning. For a clearer and more logical analysis, the heat-generation problem is split into two parts: one for the strip centre and one for the sides, corresponding to the temperature peak points at 100 mm from the strip edge. Finally, it is shown how the new theoretical model leads to the correct interpretation of the measured uneven temperature distribution.

19.
Acta Metallurgica, 1987, 35(11): 2671-2678
A theory of isothermal grain growth in polycrystalline solids, which treats grain growth as a statistical or stochastic process, is presented. In this treatment, the deterministic equation for the rate of grain growth is made stochastic by the addition of a “noise” term. The noise, or fluctuations, is used to model the effect of the complex, topologically connected structure of the specimen on grain boundary motion, in addition to the motion directed by surface tension forces. These considerations lead to a second-order partial differential equation (a Fokker-Planck equation) for the grain size distribution. Many of the major attributes of grain growth are shown to be a natural consequence of this equation. The solution obtained for this equation is a modified form of the Rayleigh distribution, which in many respects is similar to the log-normal distribution. The grain size distribution is also obtained from independent statistical considerations and is shown to be approximately log-normal. An extension of the mathematical analysis to the case of Ostwald ripening is indicated.
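
To make the distribution comparison concrete, the sketch below evaluates a Rayleigh density and a log-normal density normalized to the same mean grain size; the log-normal shape parameter is an assumed value, not one fitted to grain-growth data.

```python
import numpy as np
from scipy.stats import rayleigh, lognorm

# Grain size normalized by the mean grain size.
r = np.linspace(0.05, 3.0, 7)

ray_scale = 1.0 / np.sqrt(np.pi / 2.0)   # Rayleigh scale giving mean = 1
s = 0.4                                  # assumed log-normal shape parameter
ln_scale = np.exp(-0.5 * s**2)           # log-normal scale giving mean = 1

print("Rayleigh  :", np.round(rayleigh.pdf(r, scale=ray_scale), 3))
print("Log-normal:", np.round(lognorm.pdf(r, s, scale=ln_scale), 3))
```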

20.
The split-distribution-function method is one of the methods for computing the particle-size distribution of ultrafine powders from small-angle X-ray scattering data. In this paper, the stability of the split-distribution-function solution is studied by optimizing the coefficient matrix, adding a damping factor, and applying a least-squares treatment. All computations were carried out on a computer. The results show that, when the corresponding conditions are satisfied, the mean deviation of the calculated particle-size distribution is no larger than the error of the measured scattering intensities.
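
The split-distribution-function method itself is not reproduced here; the sketch below shows a generic damped (Tikhonov-style) least-squares inversion of a linear scattering model, with a synthetic spherical-particle kernel and data standing in for the real SAXS problem. All names and parameter values are assumptions for illustration.

```python
import numpy as np

def damped_least_squares(A, b, damping):
    """Solve min ||A x - b||^2 + damping * ||x||^2 via the normal equations;
    the added damping factor stabilizes the otherwise ill-conditioned inversion."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + damping * np.eye(n), A.T @ b)

def sphere_form_factor(q, radii):
    """Scattering kernel for homogeneous spheres, P(qR) = [3(sin x - x cos x)/x^3]^2."""
    x = np.outer(q, radii)
    return (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

# Synthetic stand-in for the SAXS problem: I(q) = sum_j K[q, R_j] * w_j plus noise.
rng = np.random.default_rng(2)
q = np.linspace(0.05, 1.0, 80)                        # scattering vector, 1/nm
radii = np.linspace(2.0, 20.0, 25)                    # particle radii, nm
K = sphere_form_factor(q, radii)

w_true = np.exp(-0.5 * ((radii - 8.0) / 2.0) ** 2)    # assumed size distribution
intensity = K @ w_true
intensity += rng.normal(scale=0.01 * intensity.max(), size=q.size)

w_est = damped_least_squares(K, intensity, damping=1e-2)
w_est = np.clip(w_est, 0.0, None)                     # particle weights cannot be negative
print("estimated distribution peaks near R =", radii[np.argmax(w_est)], "nm")
```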
