Similar Documents
 20 similar documents found
1.
The fit of tumor multiplicity data from 93 mouse skin, lung, and liver carcinogenicity experiments to Poisson, negative binomial, and normal distributions was studied. The data were fitted well by the negative binomial distribution. This distribution has two parameters, the mean tumor multiplicity and an exponent determined by the interanimal homogeneity of tumor response. The value of the latter parameter was related to animal strain and the target tissue studied in the carcinogenicity experiments. The null distribution of the two-sample likelihood ratio test based on the negative binomial model with a common exponent was shown by simulation studies to be approximately χ² with 1 d.f. Simulation also indicated that the likelihood ratio test performs sufficiently better when the negative binomial model is valid to make it more attractive than the more commonly used Wilcoxon test or Student's t test. Charts for estimating the number of animals per group required to detect specified differences in tumor multiplicities are provided for several commonly used assays.
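For illustration, a minimal Python (NumPy/SciPy) sketch of a two-sample likelihood ratio test under a negative binomial model with a common exponent, in the spirit of the test described above. The data, group sizes, and helper names are invented for the example, not taken from the paper.

```python
# Hedged sketch: two-sample LRT for tumor multiplicities under a negative
# binomial model with a shared exponent k; illustrative simulated data.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
control = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + 3.0), size=40)  # mean 3
treated = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + 5.0), size=40)  # mean 5

def nb_loglik(counts, mean, k):
    """Negative binomial log-likelihood parameterized by mean and exponent k."""
    p = k / (k + mean)
    return stats.nbinom.logpmf(counts, k, p).sum()

def fit(groups, common_mean):
    """Maximize the joint log-likelihood with a shared exponent k."""
    def negll(theta):
        k = np.exp(theta[0])                      # keep k positive
        if common_mean:
            means = [np.exp(theta[1])] * len(groups)
        else:
            means = np.exp(theta[1:])
        return -sum(nb_loglik(g, m, k) for g, m in zip(groups, means))
    n_means = 1 if common_mean else len(groups)
    res = optimize.minimize(negll, x0=np.zeros(1 + n_means), method="Nelder-Mead")
    return -res.fun

ll_full = fit([control, treated], common_mean=False)   # separate means
ll_null = fit([control, treated], common_mean=True)    # one common mean
lrt = 2.0 * (ll_full - ll_null)
print(f"LRT = {lrt:.2f}, p = {stats.chi2.sf(lrt, df=1):.4f}")  # chi2, 1 d.f.
```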

2.
3.
The paper deals with the effect of a lowered speed limit on the number of accidents involving fatalities, injuries, and vehicle damage on Swedish motorways. Two models extending the Poisson and negative binomial count-data models are used for estimation. The extended models account for both overdispersion and potential dependence between successive counts. Inferences about the parameters depend on the assumed form of overdispersion. It is found that the speed limit reduction has decreased the number of accidents involving minor injuries and vehicle damage. Furthermore, the models allowing for serial correlation are shown to have the best ex ante forecasting performance.
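As a hedged sketch of the baseline comparison only (the paper's serial-correlation extensions are not reproduced here), one could fit Poisson and negative binomial regressions to accident counts with a speed-limit dummy via statsmodels. All data and the effect size below are simulated stand-ins.

```python
# Illustrative Poisson vs. negative binomial count regression with a
# policy-change dummy; simulated monthly accident counts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
months = 120
limit_lowered = (np.arange(months) >= 60).astype(float)  # policy change at t=60
mu = np.exp(3.0 - 0.25 * limit_lowered)                  # fewer accidents after
counts = rng.negative_binomial(n=5, p=5 / (5 + mu))      # overdispersed counts

X = sm.add_constant(limit_lowered)
poisson = sm.Poisson(counts, X).fit(disp=False)
negbin = sm.NegativeBinomial(counts, X).fit(disp=False)

# Poisson SEs understate uncertainty when counts are overdispersed.
print("Poisson effect:", np.round(poisson.params[1], 3), "SE", np.round(poisson.bse[1], 3))
print("NegBin  effect:", np.round(negbin.params[1], 3), "SE", np.round(negbin.bse[1], 3))
```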

4.
Some models of the process by which individuals seek medical care suggest the negative binomial as the underlying distribution of the frequencies of consultations in a given practice. Data from the 1970-71 National Morbidity Survey of General Practice are used to test these competing models. It is shown that the negative binomial distribution successfully fits consultation frequencies in aggregate and in subdivisions according to age, sex, and duration of registration. In this article it is assumed that the consultation process has two components: the patient's decision to visit his doctor for a new illness and the follow-up visits that result from this new problem. Supplementing previous evidence that the distribution of episodes of new illnesses follows a negative binomial distribution, this article shows that consultation frequencies among individuals presenting with one new illness also follow a negative binomial distribution. A unifying model is required to synthesize these findings.
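A small illustrative sketch of checking a negative binomial fit to consultation frequencies: moment estimates of the two parameters followed by a chi-square goodness-of-fit test. The counts are simulated; the survey data themselves are not reproduced.

```python
# Method-of-moments NB fit plus chi-square goodness of fit on grouped counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
consults = rng.negative_binomial(n=1.5, p=1.5 / (1.5 + 4.0), size=2000)

m, v = consults.mean(), consults.var(ddof=1)
k_hat = m**2 / (v - m)            # moment estimates (requires v > m)
p_hat = k_hat / (k_hat + m)

# Observed vs. expected frequencies for 0..9 consultations, 10+ pooled.
edges = np.arange(11)
obs = np.array([(consults == c).sum() for c in edges[:-1]] + [(consults >= 10).sum()])
probs = stats.nbinom.pmf(edges[:-1], k_hat, p_hat)
probs = np.append(probs, 1.0 - probs.sum())          # pool the upper tail
exp = probs * consults.size

chi2, p = stats.chisquare(obs, exp, ddof=2)          # 2 fitted parameters
print(f"chi2 = {chi2:.1f}, p = {p:.3f}")
```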

5.
Used simulation techniques to study the accuracy of estimates of the mean, variance, and lower credibility value of true validities produced by the independent multiplicative model and by a modified dependent model that takes the correlation between range restriction and criterion reliability artifacts into account. Sample sizes (n = 50 or 100) and the number of studies/analyses (50) were selected to be consistent with the typical parameters found in 129 validity generalization analyses. The mean, standard deviation, and shape of the true validity distributions were systematically varied. It is concluded that the independent and modified dependent estimates were typically accurate, although the credibility estimates were affected by the extremely skewed distributions. The mean and variance estimates from both models were not affected by distribution shape. Results support the applicability of the 1st 2 authors' (see record 1981-27033-001) models and F. L. Schmidt and J. E. Hunter's (see record 1978-11448-001) noninteractive model. Some limitations of the "bare-bones" sampling-error-only approach are noted.
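A rough "bare-bones" flavoured sketch of the sampling-error-only logic (artifact corrections for range restriction and criterion unreliability are deliberately omitted); all numbers are illustrative, not the study's.

```python
# Simulate observed validities, then recover the true-validity mean/SD by
# subtracting the expected sampling-error variance; 90% credibility bound.
import numpy as np

rng = np.random.default_rng(11)
n_studies, n = 50, 100
rho = rng.normal(0.30, 0.08, n_studies)            # true validities
se = (1 - rho**2) / np.sqrt(n - 1)                 # approx sampling SD of r
r_obs = rho + rng.normal(0, se)

var_err = ((1 - r_obs.mean()**2) ** 2) / (n - 1)   # expected sampling variance
var_true = r_obs.var(ddof=1) - var_err             # residual = true variance
mean_hat, sd_hat = r_obs.mean(), np.sqrt(max(var_true, 0.0))
print(f"mean = {mean_hat:.3f}, SD = {sd_hat:.3f}, "
      f"90% credibility lower bound ~ {mean_hat - 1.28 * sd_hat:.3f}")
```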

6.
The mirror effect refers to findings from studies of recognition memory consistent with the idea that the underlying "strength" distributions are symmetric around their midpoint separating studied and nonstudied items. Attention-likelihood theory assumes underlying binomial distributions of marked features and claims that old-item differences result from differential attention across conditions during study. The symmetry arises because subjects use the likelihood ratio as the basis for decision. The author analyzes the model and argues that one of the main criticisms (the complexity of the likelihood-ratio decision rule) is unwarranted. A further analysis shows that other distributions (the Poisson and the hypergeometric) can also produce a mirror effect. Even with the binomial distribution, a variety of parameter values can produce a mirror effect, and with the right combination of parameter values, differential attention across conditions is not necessary for a mirror effect to occur.
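A small numerical sketch of the binomial mechanism: with a likelihood-ratio criterion, a higher-attention condition yields both more hits and fewer false alarms. Feature counts and marking probabilities below are invented for illustration.

```python
# Binomial attention-likelihood sketch: respond "old" when LR(k) >= 1.
import numpy as np
from scipy import stats

N = 100          # number of features per item
p_new = 0.20     # marking probability for nonstudied items (both conditions)

def rates(p_old):
    """Hit/false-alarm rates when responding 'old' iff LR(k) >= 1."""
    k = np.arange(N + 1)
    log_lr = stats.binom.logpmf(k, N, p_old) - stats.binom.logpmf(k, N, p_new)
    k_c = k[log_lr >= 0].min()                    # criterion where LR crosses 1
    hit = stats.binom.sf(k_c - 1, N, p_old)       # P(K >= k_c | old)
    fa = stats.binom.sf(k_c - 1, N, p_new)        # P(K >= k_c | new)
    return hit, fa

h_weak, fa_weak = rates(p_old=0.30)      # low-attention (weak) condition
h_strong, fa_strong = rates(p_old=0.45)  # high-attention (strong) condition
# Mirror ordering: FA(strong) < FA(weak) < H(weak) < H(strong)
print(f"{fa_strong:.3f} < {fa_weak:.3f} < {h_weak:.3f} < {h_strong:.3f}")
```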

7.
We consider two- and 50-compartment lung models for use with two techniques used to investigate the efficiency of the lungs: the Multiple Breath Nitrogen Washout (MBNW) technique, used for investigating the ventilation-volume distribution, and the Multiple Inert Gas Elimination Technique (MIGET), used for investigating the ventilation-perfusion distribution. In each of these techniques pulmonary respiratory gas exchange is described by conservation-of-mass equations which may be written in identical form, and in each the underlying distributions of ventilation to volume and ventilation to perfusion are assumed to be continuous functions (usually a linear sum of log-normal distributions). The mathematical models used to describe the lung have predominantly used a collection of discrete compartments to approximate these continuous distributions; the most commonly used models have one, two, or 50 compartments. In this paper, we begin by showing that in the limit as the width of the peaks of the distribution tends to zero, the continuous distribution may be replaced by a single discrete compartment placed at each peak. We investigate the various methods used previously for parameter recovery, show that one commonly used method for the MBNW is not suitable, and suggest a modification to this recovery technique. Using simulated error-free data, we show that both the two-compartment and the 50-compartment models contain information about the ventilation-volume (or ventilation-perfusion) distribution, and we investigate the extent to which this information can be used to recover the parameters which define these distributions. We go on to use Monte Carlo methods to investigate the stability of the recovery process.
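A hedged numerical sketch of the limiting argument, using the classic MIGET retention relation R(λ) = Σ qᵢ·λ/(λ + V̇/Q̇ᵢ): as a log-normal ventilation-perfusion peak narrows, its retention curve approaches that of one discrete compartment placed at the peak. Grid sizes and parameter values are ours, not the paper's.

```python
# Continuous (log-normal) vs. single-compartment MIGET retention curves.
import numpy as np

def retention(vq, weights, lam):
    """Inert-gas retention sum(q_i * lam / (lam + V/Q_i)) for partition coeff lam."""
    return np.sum(weights * lam / (lam + vq))

lam = np.logspace(-3, 2, 6)          # partition coefficients of the test gases
vq = np.logspace(-2, 2, 400)         # ventilation-perfusion grid
mode = 1.0                           # location of the single peak

for sigma in (0.5, 0.1, 0.02):       # narrower and narrower log-normal peak
    w = np.exp(-0.5 * (np.log(vq / mode) / sigma) ** 2)
    w /= w.sum()                     # normalize perfusion weights
    cont = np.array([retention(vq, w, l) for l in lam])
    disc = lam / (lam + mode)        # one compartment at the peak
    print(f"sigma={sigma:4.2f}  max |continuous - discrete| = "
          f"{np.abs(cont - disc).max():.4f}")
```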

8.
Observations from a study of the development of ovulations into embryos for Texel sheep are analysed with a model for count data that are under- or overdispersed relative to binomial variation. The analysis is based on maximum quasi-likelihood (McCullagh and Nelder, 1989, Generalized Linear Models, 2nd edition, London: Chapman and Hall), following an approach suggested by Williams (1982, Applied Statistics 31, 144-148). The dispersion parameter is developed as a combination of a variance component representing shared maternal effects and a correlation, typically negative, between ovulations within ewes. The number of ovulations (the binomial denominator) is included as an explanatory variable.
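A quasi-likelihood flavoured sketch (simpler than Williams' exact iterative procedure): fit a binomial GLM to simulated embryos-per-ovulation data and rescale inference by the Pearson dispersion estimate, with the ovulation count entering as an explanatory variable as in the paper. All data are invented.

```python
# Binomial GLM with a Pearson-based dispersion correction (quasi-binomial).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
ewes = 150
ovulations = rng.integers(1, 5, size=ewes)              # binomial denominators
p_true = 1 / (1 + np.exp(-(1.2 - 0.3 * ovulations)))    # fewer survive per extra ovum
embryos = rng.binomial(ovulations, p_true)

endog = np.column_stack([embryos, ovulations - embryos])  # (successes, failures)
X = sm.add_constant(ovulations.astype(float))
fit = sm.GLM(endog, X, family=sm.families.Binomial()).fit()

phi = fit.pearson_chi2 / fit.df_resid     # dispersion relative to binomial
print(f"dispersion estimate phi = {phi:.2f} (1 = pure binomial)")
print("quasi-binomial std errors:", fit.bse * np.sqrt(phi))
```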

9.
Stochastic weather generators are statistical models that produce random numbers resembling the observed weather data on which they have been fitted; they are widely used in meteorological and hydrological simulations. For modeling daily precipitation in weather generators, first-order Markov chain-dependent exponential, gamma, mixed-exponential, and lognormal distributions can be used. To examine the performance of these four distributions for precipitation simulation, they were fitted to observed data collected at 10 stations in the watershed of the Yishu River. The parameters of these models were estimated using a maximum-likelihood technique implemented with genetic algorithms. Parameters for each calendar month and the Fourier series describing parameters for the whole year were estimated separately. The Bayesian information criterion (BIC), simulated monthly mean, maximum daily value, and variance were tested and compared to evaluate the fitness and performance of these models. The results indicate that the lognormal and mixed-exponential distributions give smaller BICs, but their stochastic simulations overestimate and underestimate precipitation, respectively, while the gamma and exponential distributions give larger BICs but reproduce monthly mean precipitation very well. When these distributions were fitted using Fourier series, they all underestimated the above statistics for the months of June, July, and August.
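An illustrative sketch of the model-comparison step: maximum-likelihood fits of exponential and gamma distributions to wet-day precipitation amounts, compared by BIC. The data are simulated stand-ins for the station records, and plain numerical MLE is used in place of genetic algorithms.

```python
# Fit candidate wet-day precipitation distributions and compare BIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
rain = rng.gamma(shape=0.8, scale=6.0, size=1500)   # mm on wet days

fits = {
    "exponential": (stats.expon, stats.expon.fit(rain, floc=0)),
    "gamma": (stats.gamma, stats.gamma.fit(rain, floc=0)),
}
n = rain.size
for name, (dist, params) in fits.items():
    ll = dist.logpdf(rain, *params).sum()
    k = len(params) - 1                     # floc=0 was held fixed
    bic = k * np.log(n) - 2.0 * ll
    print(f"{name:12s} BIC = {bic:9.1f}  params = {np.round(params, 3)}")
```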

10.
The shape invariant model is a semi-parametric approach to estimating a functional relationship from clustered data (multiple observations on each of a number of individuals). The common response curve shape over individuals is estimated by adjusting for individual scaling differences while pooling shape information. In practice, the common response curve is restricted to some flexible family of functions. This paper introduces the use of a free-knot spline shape function and reduces the number of parameters in the shape invariant model by assuming a random distribution on the parameters that control the individual scaling of the shape function. New graphical diagnostics are presented, parameter identifiability and estimation are discussed, and an example is presented.
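A hedged sketch of a simplified shape invariant fit: a common spline shape with fixed (not free) knots, individual amplitude and shift parameters, and identifiability pinned down by fixing the first subject's scaling, echoing the identifiability issue the paper discusses. All data and names are illustrative.

```python
# Shape invariant model sketch: y_ij = a_i * f(x_ij - b_i) + noise,
# with f a fixed-knot cubic B-spline shared across subjects.
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import least_squares

rng = np.random.default_rng(9)
n_subj, n_obs = 8, 40
x = np.linspace(0.0, 10.0, n_obs)
shape = lambda t: np.exp(-0.5 * ((t - 5.0) / 1.5) ** 2)   # true common shape
a_true = rng.uniform(0.5, 2.0, n_subj)
b_true = rng.uniform(-1.0, 1.0, n_subj)
y = np.array([a * shape(x - b) for a, b in zip(a_true, b_true)])
y += rng.normal(scale=0.03, size=y.shape)

knots = np.concatenate([np.zeros(4), np.linspace(1, 9, 7), np.full(4, 10.0)])
n_coef = len(knots) - 4                                   # cubic spline

def residuals(theta):
    coef = theta[:n_coef]
    # Identifiability: subject 0 has scale 1 and shift 0 by convention.
    a = np.concatenate([[1.0], theta[n_coef:n_coef + n_subj - 1]])
    b = np.concatenate([[0.0], theta[n_coef + n_subj - 1:]])
    f = BSpline(knots, coef, 3, extrapolate=True)
    return np.concatenate([y[i] - a[i] * f(x - b[i]) for i in range(n_subj)])

theta0 = np.concatenate([np.full(n_coef, 0.5), np.ones(n_subj - 1), np.zeros(n_subj - 1)])
fit = least_squares(residuals, theta0)
print("scales relative to subject 0:", np.round(fit.x[n_coef:n_coef + n_subj - 1], 2))
print("true relative scales:        ", np.round(a_true[1:] / a_true[0], 2))
```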

11.
It has been known for some time that regional blood flows within an organ are not uniform. Useful measures of heterogeneity of regional blood flows are the standard deviation and coefficient of variation or relative dispersion of the probability density function (PDF) of regional flows obtained from the regional concentrations of tracers that are deposited in proportion to blood flow. When a mathematical model is used to analyze dilution curves after tracer solute administration, for many solutes it is important to account for flow heterogeneity and the wide range of transit times through multiple pathways in parallel. Failure to do so leads to bias in the estimates of volumes of distribution and membrane conductances. Since in practice the number of paths used should be relatively small, the analysis is sensitive to the choice of the individual elements used to approximate the distribution of flows or transit times. Presented here is a method for modeling heterogeneous flow through an organ using a scheme that covers both the high flow and long transit time extremes of the flow distribution. With this method, numerical experiments are performed to determine the errors made in estimating parameters when flow heterogeneity is ignored, in both the absence and presence of noise. The magnitude of the errors in the estimates depends upon the system parameters, the amount of flow heterogeneity present, and whether the shape of the input function is known. In some cases, some parameters may be estimated to within 10% when heterogeneity is ignored (homogeneous model), but errors of 15-20% may result, even when the level of heterogeneity is modest. In repeated trials in the presence of 5% noise, the mean of the estimates was always closer to the true value with the heterogeneous model than when heterogeneity was ignored, but the distributions of the estimates from the homogeneous and heterogeneous models overlapped for some parameters when outflow dilution curves were analyzed. The separation between the distributions was further reduced when tissue content curves were analyzed. It is concluded that multipath models accounting for flow heterogeneity are a vehicle for assessing the effects of flow heterogeneity under the conditions applicable to specific laboratory protocols, that efforts should be made to assess the actual level of flow heterogeneity in the organ being studied, and that the errors in parameter estimates are generally smaller when the input function is known rather than estimated by deconvolution.
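A short sketch of the heterogeneity measures mentioned above: the relative dispersion (coefficient of variation) of regional flows, here computed from simulated deposition-weighted values rather than real tracer data.

```python
# Relative dispersion (SD/mean) of simulated regional flows.
import numpy as np

rng = np.random.default_rng(5)
# Regional flows (ml/min/g) for pieces of an organ, log-normally heterogeneous.
flows = rng.lognormal(mean=0.0, sigma=0.35, size=500)

mean_flow = flows.mean()
sd_flow = flows.std(ddof=1)
print(f"mean = {mean_flow:.3f}, SD = {sd_flow:.3f}, "
      f"relative dispersion = {sd_flow / mean_flow:.3f}")
```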

12.
Methods for fitting radiation survival curves to data obtained from endpoint-dilution assays are described. It is shown that for functional forms such as the linear-quadratic model the problem can be recast as a generalized linear model (GLM) and the data fitted using standard software. For functional forms which are not capable of being linearized, such as the multitarget model, the direct maximum likelihood (DML) techniques of Thames et al. (1986) can be used. Both these techniques produce exact maximum likelihood parameter estimates. Compared with the weighted least-squares (WLS) approach traditionally employed, these approaches avoid the need to approximate the binomial distribution of the number of negative wells by a normal distribution, and avoid the biases introduced by the need for arbitrary treatment of data points with 0 or 100% negative wells. The results of fitting using the novel GLM and DML approaches are compared with those obtained using the WLS method on a large series of datasets. For most datasets the WLS method performs well, compared with the exact method, but in a small number of cases the WLS predicted parameter estimates can be in error by as much as their estimated standard errors. A method for the use of a concurrent control to correct for interexperimental variation is outlined. The methods have been implemented in a Fortran computer program using the NAG subroutine library.
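A hedged sketch of the GLM recasting for the linear-quadratic model: with wells scored positive when at least one clonogen survives, the positive-well fraction follows a binomial GLM with a complementary log-log link that is linear in dose and dose². Data are simulated; the link-class name (sm.families.links.CLogLog) follows current statsmodels and may differ in older versions.

```python
# Binomial GLM with cloglog link: cloglog(P(positive)) = log(cells) - a*D - b*D^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
dose = np.repeat([0.0, 1.0, 2.0, 4.0, 6.0, 8.0], 60)   # Gy, 60 wells per dose
alpha, beta, cells = 0.3, 0.03, 10.0                    # clonogens per well
surv = np.exp(-alpha * dose - beta * dose**2)
positive = rng.binomial(1, 1.0 - np.exp(-cells * surv))  # well grows a clone

X = sm.add_constant(np.column_stack([dose, dose**2]))
glm = sm.GLM(positive, X,
             family=sm.families.Binomial(link=sm.families.links.CLogLog())).fit()
print(glm.params)   # expect roughly [log(10), -0.3, -0.03]
```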

13.
14.
In recent years, deep neural networks have repeatedly set new state-of-the-art results on tasks in computer vision and natural language processing and have become one of the most actively studied research directions. Despite their strong performance, deep network models remain difficult to deploy on hardware-constrained embedded or mobile devices because of their enormous parameter counts and the associated storage and computation costs. Research has shown that deep models based on convolutional neural networks are inherently over-parameterized, containing parameters that contribute nothing to the final result, which provides theoretical support for deep network model compression. How to reduce model size while preserving accuracy has therefore become a topic of intense interest. This article classifies and summarizes the achievements and progress in model compression made by researchers in China and abroad in recent years, evaluates their strengths and weaknesses, and discusses open problems in model compression as well as directions for future work.

15.
We developed and applied pharmacokinetic-pharmacodynamic (PK-PD) models to characterize in vitro bacterial rate of killing as a function of ceftazidime concentrations over time. For PK-PD modeling, data obtained during continuous and intermittent infusion of ceftazidime in Pseudomonas aeruginosa killing experiments with an in vitro pharmacokinetic model were used. The basic PK-PD model was a maximum-effect model which described the number of viable bacteria (N) as a function of the growth rate (λ) and killing rate (ε) according to the equation dN/dt = [λ − ε · C^γ/(EC50^γ + C^γ)] · N, where γ is the Hill factor, C is the concentration of antibiotic, and EC50 is the concentration of antibiotic at which 50% of the maximum effect is obtained. Next, four different models with increasing complexity were analyzed by using the EDSIM program (MediWare, Groningen, The Netherlands). These models incorporated either an adaptation rate factor and a maximum number of bacteria (Nmax) factor or combinations of the two parameters. In addition, a two-population model was evaluated. Model discrimination was by Akaike's information criterion. The experimental data were best described by the model which included an Nmax term and a rate term for adaptation for a period up to 36 h. The absolute values for maximal growth rate and killing rate in this model were different from those in the original experiment, but net growth rates were comparable. It is concluded that the derived models can describe bacterial growth and killing in the presence of antibiotic concentrations mimicking human pharmacokinetics. Application of these models will eventually provide us with parameters which can be used for further dosage optimization.
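A runnable sketch integrating the basic maximum-effect model quoted above for an assumed intermittent-dosing concentration profile; parameter values and the concentration function are illustrative, not the fitted estimates from the study.

```python
# Integrate dN/dt = [lambda - eps*C^g/(EC50^g + C^g)] * N for a bolus-dosing
# concentration profile (assumed piecewise-exponential).
import numpy as np
from scipy.integrate import solve_ivp

lam, eps, ec50, gamma = 1.0, 2.5, 4.0, 2.0   # /h, /h, mg/L, Hill factor

def conc(t, dose_interval=8.0, c_peak=20.0, k_el=0.35):
    """Ceftazidime concentration for repeated bolus dosing (illustrative)."""
    return c_peak * np.exp(-k_el * (t % dose_interval))

def dN_dt(t, n):
    c = conc(t)
    kill = eps * c**gamma / (ec50**gamma + c**gamma)
    return (lam - kill) * n

sol = solve_ivp(dN_dt, t_span=(0.0, 24.0), y0=[1e6],
                t_eval=np.linspace(0.0, 24.0, 97), rtol=1e-8)
for t, n in zip(sol.t[::16], sol.y[0][::16]):
    print(f"t = {t:5.1f} h   log10 CFU/ml = {np.log10(n):5.2f}")
```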

16.
The population-dependent concept of reliability is used in test score models such as classical test theory and the binomial error model, whereas in item response models, the population-independent concept of information is used. Reliability and information apply to both test score and item response models. Information is a conditional definition of precision, that is, the precision for a given subject; reliability is an unconditional definition, that is, the precision for a population of subjects. Information and reliability do not distinguish test score and item response models. The main distinction is that the parameters are specific for the test and the subject in test score models, whereas in item response models, the item parameters are separated from the subject parameters.
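An illustrative sketch of the conditional/unconditional contrast: Fisher information at a given ability under a Rasch model versus a rough population-level reliability built from the average error variance. Item difficulties, the ability distribution, and the reliability approximation are all assumptions of the example.

```python
# Conditional precision (test information at theta) vs. unconditional
# precision (approximate reliability over a population of abilities).
import numpy as np

rng = np.random.default_rng(10)
b = np.linspace(-2, 2, 20)                      # Rasch item difficulties

def info(theta):
    """Test information at ability theta: sum of P*(1-P) over Rasch items."""
    p = 1 / (1 + np.exp(-(np.asarray(theta)[:, None] - b)))
    return (p * (1 - p)).sum(axis=1)

theta_pop = rng.normal(0, 1, 50_000)            # population of subjects
sem2 = 1 / info(theta_pop)                      # conditional error variance
reliability = np.var(theta_pop) / (np.var(theta_pop) + sem2.mean())
print(f"information at theta=0: {info(np.array([0.0]))[0]:.2f}")
print(f"population reliability (approx): {reliability:.3f}")
```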

17.
The regression models appropriate for counted data have seen little use in psychology. This article describes problems that occur when ordinary linear regression is used to analyze count data and presents 3 alternative regression models. The simplest, the Poisson regression model, is likely to be misleading unless restrictive assumptions are met because individual counts are usually more variable ("overdispersed") than is implied by the model. This model can be modified in 2 ways to accommodate this problem. In the overdispersed model, a factor can be estimated that corrects the regression model's inferential statistics. In the second alternative, the negative binomial regression model, a random term reflecting unexplained between-subject differences is included in the regression model. The authors compare the advantages of these approaches.
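A compact sketch of the problem the article describes: Poisson regression applied to overdispersed counts understates uncertainty, which the Pearson dispersion diagnostic flags and a negative binomial model absorbs. The outcome and predictor are simulated stand-ins.

```python
# Overdispersion diagnostic on a Poisson fit, then a negative binomial fit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.normal(size=500)
mu = np.exp(1.0 + 0.4 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + mu))   # counts more variable than Poisson

X = sm.add_constant(x)
pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print("Pearson chi2 / df =", round(pois.pearson_chi2 / pois.df_resid, 2))  # >> 1

nb = sm.NegativeBinomial(y, X).fit(disp=False)
print("Poisson SEs:", np.round(pois.bse, 3), " NB SEs:", np.round(nb.bse, 3))
```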

18.
The current-voltage relations obtained by integrating the Nernst-Planck equations for a variety of energy profiles are obtained. A simple and approximate method for comparing these relations is described. The method is based on using a linearized transform of current-voltage relations for an Eyring single barrier model. A parameter, gamma, related to the location of the single barrier in the Eyring model, and to the shape of the barrier in other models, is readily obtained from the slopes of the linearized relations. It is then a simple matter to determine whether a given current-voltage relation allows discrimination between any particular energy profiles. The results show that the equivalent Eyring model does not always place the peak energy barrier in the same position as other models and that quite large errors in the assignment of position may be made if such a model is used. The results are also used to test the ability of some experimental current-voltage diagrams to discriminate between various energy profiles.

19.
We present and analyze a model for the interaction of human immunodeficiency virus type 1 (HIV-1) with target cells that includes a time delay between initial infection and the formation of productively infected cells. Assuming that the variation among cells with respect to this 'intracellular' delay can be approximated by a gamma distribution, a highly flexible distribution that can mimic a variety of biologically plausible delays, we provide analytical solutions for the expected decline in plasma virus concentration after the initiation of antiretroviral therapy with one or more protease inhibitors. We then use the model to investigate whether the parameters that characterize viral dynamics can be identified from biological data. Using non-linear least-squares regression to fit the model to simulated data in which the delays conform to a gamma distribution, we show that good estimates for free viral clearance rates, infected cell death rates, and parameters characterizing the gamma distribution can be obtained. For simulated data sets in which the delays were generated using other biologically plausible distributions, reasonably good estimates for viral clearance rates, infected cell death rates, and mean delay times can be obtained using the gamma-delay model. For simulated data sets that include added simulated noise, viral clearance rate estimates are not as reliable. If the mean intracellular delay is known, however, we show that reasonable estimates for the viral clearance rate can be obtained by taking the harmonic mean of viral clearance rate estimates from a group of patients. These results demonstrate that it is possible to incorporate distributed intracellular delays into existing models for HIV dynamics and to use these refined models to estimate the half-life of free virus from data on the decline in HIV-1 RNA following treatment.
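A hedged sketch of one standard way to realize a gamma-distributed intracellular delay in an ODE model, the "linear chain trick" (an Erlang(n, n/τ) delay encoded as n sequential infected-cell stages). This simulates the post-therapy decline only; it is not the paper's fitting procedure, and the burst-size factor and initial conditions are assumptions.

```python
# Gamma (Erlang) intracellular delay via the linear chain trick.
import numpy as np
from scipy.integrate import solve_ivp

n_stages, tau = 4, 1.0         # gamma shape and mean delay (days)
delta, c = 0.5, 3.0            # infected-cell death, virion clearance (/day)
k_stage = n_stages / tau       # per-stage transition rate

def rhs(t, y):
    """Post-therapy decay: no new infections; stages feed productive cells."""
    stages, prod, virus = y[:n_stages], y[n_stages], y[n_stages + 1]
    d_stages = -k_stage * stages
    d_stages[1:] += k_stage * stages[:-1]
    d_prod = k_stage * stages[-1] - delta * prod
    d_virus = 100.0 * delta * prod - c * virus   # burst-size factor assumed
    return np.concatenate([d_stages, [d_prod, d_virus]])

y0 = np.concatenate([np.full(n_stages, 1e3), [1e4, 1e5]])
sol = solve_ivp(rhs, (0.0, 10.0), y0, t_eval=np.linspace(0, 10, 11), rtol=1e-8)
for t, v in zip(sol.t, sol.y[-1]):
    print(f"day {t:4.1f}: log10 viral load = {np.log10(v):5.2f}")
```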

20.
Hybrid Approach for Addressing Uncertainty in Risk Assessments
Parameter uncertainty is a major aspect of the model-based estimation of the risk of human exposure to pollutants. The Monte Carlo method, which applies probability theory to address model parameter uncertainty, relies on a statistical representation of available information. In recent years, other uncertainty theories have been proposed as alternative approaches to address model parameter uncertainty in situations where available information is insufficient to identify statistically representative probability distributions, due in particular to data scarcity. The simplest such theory is possibility theory, which uses so-called fuzzy numbers to represent model parameter uncertainty. In practice, it may occur that certain model parameters can be reasonably represented by probability distributions, because there are sufficient data available to substantiate such distributions by statistical analysis, while others are better represented by fuzzy numbers (due to data scarcity). The question then arises as to how these two modes of representation of model parameter uncertainty can be combined for the purpose of estimating the risk of exposure. This paper proposes an approach (termed a hybrid approach) which combines Monte Carlo random sampling of probability distribution functions with fuzzy calculus. The approach is applied to a real case of estimation of human exposure, via vegetable consumption, to cadmium present in the surficial soils of an industrial site located in the north of France. The application illustrates the potential of the proposed approach, which allows the uncertainty affecting model parameters to be represented in a way that is consistent with the information at hand. Also, because the hybrid approach takes advantage of the “rich” information provided by probability distributions, while retaining the conservative character of fuzzy calculus, it is believed to hold value in terms of a “reasonable” application of the precautionary principle.
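A minimal sketch of the hybrid scheme: Monte Carlo sampling of a probabilistic input combined with α-cut interval propagation of a fuzzy input through a monotone exposure model. The model and all numbers are illustrative, not the site-specific cadmium assessment.

```python
# Hybrid Monte Carlo + fuzzy (alpha-cut) propagation through dose = C*IR/BW.
import numpy as np

rng = np.random.default_rng(8)
n_mc = 2000
alphas = np.linspace(0.0, 1.0, 6)

# Probabilistic input: cadmium concentration in vegetables (mg/kg).
conc = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n_mc)

# Fuzzy input: daily vegetable intake (kg/day) as a triangular fuzzy number.
low, mode, high = 0.1, 0.25, 0.5
def alpha_cut(a):
    """Interval of the triangular fuzzy number at membership level a."""
    return low + a * (mode - low), high - a * (high - mode)

body_weight = 70.0   # kg, treated as fixed here
for a in alphas:
    lo, hi = alpha_cut(a)
    dose_lo = conc * lo / body_weight   # model is monotone in intake, so
    dose_hi = conc * hi / body_weight   # interval endpoints map to endpoints
    print(f"alpha={a:3.1f}: 95th pct dose in "
          f"[{np.percentile(dose_lo, 95):.5f}, {np.percentile(dose_hi, 95):.5f}] mg/kg-day")
```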
