Similar literature
20 similar documents retrieved (search time: 15 ms)
1.
We introduce, for the first time, a new class of Birnbaum–Saunders nonlinear regression models potentially useful in lifetime data analysis. The class generalizes the regression model described by Rieck and Nedelman [Rieck, J.R., Nedelman, J.R., 1991. A log-linear model for the Birnbaum–Saunders distribution. Technometrics 33, 51–60]. We discuss maximum-likelihood estimation for the parameters of the model and derive closed-form expressions for the second-order biases of these estimates. Our formulae are easily computed as ordinary linear regressions and are then used to define bias-corrected maximum-likelihood estimates. Some simulation results show that the bias-correction scheme yields nearly unbiased estimates without increasing the mean squared errors. Two empirical applications are analysed and discussed.
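The log-linear model this class generalizes can be illustrated numerically. The sketch below is a minimal assumed setup (all names, parameter values and sample sizes are illustrative, and the bias-correction step itself is not reproduced): data are simulated with a sinh-normal error, as in the Rieck–Nedelman formulation, and the parameters are recovered by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Hypothetical simulation from a log-linear Birnbaum-Saunders model:
# y = b0 + b1*x + eps, with eps = 2*arcsinh(alpha*Z/2), Z ~ N(0,1),
# i.e. eps follows a sinh-normal distribution.
n = 2000
b0_true, b1_true, alpha_true = 1.0, 0.5, 0.8
x = rng.uniform(0, 2, n)
z = rng.standard_normal(n)
y = b0_true + b1_true * x + 2.0 * np.arcsinh(alpha_true * z / 2.0)

def negloglik(theta):
    b0, b1, log_alpha = theta
    alpha = np.exp(log_alpha)                # keep alpha > 0
    eps = y - b0 - b1 * x
    zhat = (2.0 / alpha) * np.sinh(eps / 2.0)
    # log-density of the sinh-normal error: N(0,1) density of zhat
    # times the Jacobian |dz/deps| = cosh(eps/2)/alpha
    return -np.sum(-0.5 * zhat**2 - 0.5 * np.log(2 * np.pi)
                   + np.log(np.cosh(eps / 2.0)) - np.log(alpha))

res = minimize(negloglik, x0=np.zeros(3), method="BFGS")
b0_hat, b1_hat, alpha_hat = res.x[0], res.x[1], np.exp(res.x[2])
```

With a sample this large the uncorrected MLEs are already close to the truth; the paper's second-order bias correction matters most in small samples.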

2.
In this paper, several diagnostic measures are proposed based on the case-deletion model for log-Birnbaum-Saunders regression models (LBSRM); these complement the recent work of Galea et al. [2004. Influence diagnostics in log-Birnbaum-Saunders regression models. J. Appl. Statist. 31, 1049-1064], who studied influence diagnostics for LBSRM mainly through local influence analysis. It is shown that the case-deletion model is equivalent to the mean-shift outlier model in LBSRM, and an outlier test is presented based on the mean-shift outlier model. Furthermore, we investigate a test of homogeneity for the shape parameter in LBSRM, a problem mentioned by both Rieck and Nedelman [1991. A log-linear model for the Birnbaum-Saunders distribution. Technometrics 33, 51-60] and Galea et al. [2004]. We obtain the likelihood ratio and score statistics for such a test. Finally, a numerical example is given to illustrate our methodology, and the properties of the likelihood ratio and score statistics are investigated through Monte Carlo simulations.

3.
Smoothing spline ANOVA (SSANOVA) provides an approach to semiparametric function estimation based on an ANOVA-type decomposition. Wahba et al. (1995) decomposed the regression function, via a tensor-sum decomposition of inner product spaces, into orthogonal subspaces, so that the effects of the estimated functions from each subspace can be viewed independently. Recent research on smoothing spline ANOVA focuses on either frequentist approaches or a Bayesian framework for variable selection and prediction. In our approach, we seek “objective” priors especially suited to estimation. The prior for linear terms, including level effects, is a variant of the Zellner–Siow prior (Zellner and Siow, 1980), and the prior for a smooth effect is specified in terms of effective degrees of freedom. We study this fully Bayesian SSANOVA model for Gaussian response variables, and the method is illustrated with a real data set.

4.
This paper proposes a new method and algorithm for predicting multivariate responses in a regression setting. Research into the classification of high-dimension, low-sample-size (HDLSS) data, in particular microarray data, has made considerable advances, but regression prediction for high-dimensional data with continuous responses has received less attention. Recently, Bair et al. (2006) proposed an efficient prediction method based on supervised principal component regression (PCR). Motivated by the fact that using a larger number of principal components results in better regression performance, this paper extends the method of Bair et al. in several ways: a comprehensive variable ranking is combined with a selection of the best number of components for PCR, and the new method further extends to regression with multivariate responses. The new method is particularly suited to addressing HDLSS problems. Applications to simulated and real data demonstrate the performance of the new method. Comparisons with the findings of Bair et al. (2006) show that, for high-dimensional data in particular, the new ranking results in a smaller number of predictors and smaller errors.
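The supervised-PCR idea (rank variables by association with the response, then run PCR on the survivors) can be sketched as follows. This is a minimal assumed illustration with a univariate response and synthetic data driven by one latent factor; sizes, names and thresholds are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical HDLSS data: p >> n, and the first 10 variables share a
# latent factor that also drives the response.
n, p, k_top, n_comp = 60, 500, 20, 3
X = rng.standard_normal((n, p))
factor = rng.standard_normal(n)
X[:, :10] += 2.0 * factor[:, None]
y = 3.0 * factor + 0.5 * rng.standard_normal(n)

# Step 1 -- variable ranking: score each predictor by its absolute
# correlation with the response, keep the top k_top.
Xc, yc = X - X.mean(0), y - y.mean()
scores = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
keep = np.argsort(scores)[::-1][:k_top]

# Step 2 -- PCR on the selected block: PCA via SVD, then regress y on
# the leading principal component scores.
U, s, Vt = np.linalg.svd(Xc[:, keep], full_matrices=False)
Z = U[:, :n_comp] * s[:n_comp]
gamma, *_ = np.linalg.lstsq(Z, yc, rcond=None)
r2 = 1 - np.sum((yc - Z @ gamma) ** 2) / np.sum(yc ** 2)
```

The ranking step screens out most of the noise variables, so the subsequent PCA concentrates on the signal-bearing block.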

5.
This note points out that the framework proposed in Wang et al. (2012) is equivalent to the conventional de-coupling framework introduced in some textbooks; see, e.g., Bar-Shalom et al. (2001).

6.
Modeling the dependence of credit ratings is an important issue for portfolio credit risk analysis. Multivariate Markov chain models are a feasible mathematical tool for modeling the dependence of credit ratings. Here we develop a flexible multivariate Markov chain model for modeling the dependence of credit ratings. The proposed model provides a parsimonious way to capture both the cross-sectional and temporal associations among ratings of individual entities. The number of model parameters is of the order O(sm² + s²m), where m is the number of rating categories and s is the number of entities in a credit portfolio. The proposed model is also easy to implement. The estimation method is formulated as a set of s linear programming problems, and the estimation algorithm can be implemented easily in a Microsoft Excel worksheet; see Ching et al., Int J Math Educ Sci Eng 35:921–932 (2004). We illustrate the practical implementation of the proposed model using real ratings data. We evaluate risk measures, such as Value at Risk and Expected Shortfall, for a credit portfolio using the proposed model and compare the risk measures with those arising from Ching et al., IMR Preprint Series (2007), and Siu et al., Quant Finance 5:543–556 (2005).
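The per-entity linear programming step can be sketched as below. This is an assumed minimal setup (random stand-in data, illustrative names), not the paper's calibrated model: for entity j, mixing weights λ_jk ≥ 0 summing to 1 are chosen to minimize the max-norm gap between entity j's stationary rating vector and its mixture prediction.

```python
import numpy as np
from scipy.optimize import linprog

m, s = 3, 2                                   # rating categories, entities
rng = np.random.default_rng(5)

# Assumed inputs: estimated cross-transition matrices P[j, k] (row-
# stochastic) and stationary rating distributions xhat[k].
P = rng.random((s, s, m, m))
P /= P.sum(axis=3, keepdims=True)
xhat = rng.random((s, m))
xhat /= xhat.sum(axis=1, keepdims=True)

j = 0
# Column k of B is the prediction of entity j's stationary vector from
# entity k's chain: P[j, k]^T xhat[k].
B = np.column_stack([P[j, k].T @ xhat[k] for k in range(s)])

# LP variables: [lambda_1, ..., lambda_s, w]; minimize the bound w on
# |B @ lambda - xhat_j| componentwise.
c = np.r_[np.zeros(s), 1.0]
A_ub = np.vstack([np.c_[B, -np.ones(m)],      #  B@lam - xhat_j <= w
                  np.c_[-B, -np.ones(m)]])    # -(B@lam - xhat_j) <= w
b_ub = np.r_[xhat[j], -xhat[j]]
A_eq = np.r_[np.ones(s), 0.0][None, :]        # weights sum to one
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (s + 1))
lam, w = res.x[:s], res.x[s]
```

Solving one such LP per entity gives all the mixing weights, which is what keeps the parameter count at the stated O(sm² + s²m) order.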

7.
This note is motivated by recent works of Xie et al. (2009) and Xiang et al. (2007). Herein, we simplify the score statistic presented by Xie et al. (2009) for testing overdispersion in the zero-inflated generalized Poisson (ZIGP) mixed model, and discuss an extension to test overdispersion in zero-inflated Poisson (ZIP) mixed models. Examples highlight the application of the extended results. An extensive simulation study of testing overdispersion in the Poisson mixed model indicates that the proposed score statistics maintain the nominal level reasonably well. In practice, the appropriate model is chosen based on the approximate mean-variance relationship in the data, and a formal score test based on the asymptotic standard normal distribution can be employed for testing overdispersion. A case study is provided to illustrate procedures for data analysis.
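The ZIGP/ZIP score statistics themselves are involved; to convey the flavor of score-based overdispersion testing, here is a minimal Dean-type statistic for an ordinary (intercept-only) Poisson fit on simulated data. Everything here is an assumed illustration, not the statistic derived in the note; the statistic is asymptotically standard normal under the Poisson null.

```python
import numpy as np

rng = np.random.default_rng(7)

def dean_score_stat(y, mu):
    """Dean-type score statistic for overdispersion in a Poisson model;
    approximately N(0,1) under the Poisson null, large when Var > mean."""
    return np.sum((y - mu) ** 2 - y) / np.sqrt(2.0 * np.sum(mu ** 2))

n = 2000
# Under the null: plain Poisson counts, intercept-only fit (mu = mean).
y0 = rng.poisson(3.0, n)
t0 = dean_score_stat(y0, np.full(n, y0.mean()))

# Under overdispersion: negative binomial with the same mean 3, var 7.5.
y1 = rng.negative_binomial(n=2, p=2 / 5, size=n)
t1 = dean_score_stat(y1, np.full(n, y1.mean()))
```

A one-sided comparison of the statistic to the standard normal quantile then gives the formal test described in the abstract's closing remarks.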

8.
Sylvester double sums, first introduced by Sylvester (1840, 1853), are symmetric expressions in the roots of two polynomials, while subresultants are defined through the coefficients of these polynomials (see Apery and Jouanolou (2006) and Basu et al. (2003) for references on subresultants). As pointed out by Sylvester, the two notions are very closely related: Sylvester double sums and subresultants are equal up to a multiplicative non-zero constant in the ground field. Two proofs are already known: that of Lascoux and Pragacz (2003), using Schur functions, and that of d’Andrea et al. (2007), using manipulations of matrices. The purpose of this paper is to give a new, simple proof using similar inductive properties of double sums and subresultants.
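A quick numerical illustration of the root/coefficient duality behind Sylvester's theorem (the double sums themselves are not computed here): for monic f and g, the resultant equals both the product of root differences, a symmetric expression in the roots, and the determinant of the Sylvester matrix, a polynomial in the coefficients. The polynomials below are an arbitrary illustrative pair.

```python
import numpy as np

f = np.array([1.0, 0.0, 1.0])    # x^2 + 1, roots +-i
g = np.array([1.0, 0.0, -1.0])   # x^2 - 1, roots +-1

# Root side: Res(f, g) = prod_{i,j} (alpha_i - beta_j) for monic f, g.
alpha, beta = np.roots(f), np.roots(g)
res_roots = np.prod([a - b for a in alpha for b in beta])

# Coefficient side: determinant of the 4x4 Sylvester matrix of f and g.
S = np.array([[1, 0,  1,  0],
              [0, 1,  0,  1],
              [1, 0, -1,  0],
              [0, 1,  0, -1]], dtype=float)
res_coeffs = np.linalg.det(S)
```

Both computations give 4 for this pair; Sylvester's double sums interpolate between these two descriptions, and the subresultants are their coefficient-side counterparts.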

9.
Code OK1 is a fast and precise three-dimensional computer program designed for simulations of heavy ion beam (HIB) irradiation of a directly driven spherical fuel pellet in heavy ion fusion (HIF). OK1 computes the three-dimensional energy deposition profile on a spherical fuel pellet and evaluates the HIB irradiation non-uniformity, both of which are valuable for optimization of the beam parameters and the fuel pellet structure, as well as for further HIF experiment design. The code is open and complete, and can be easily modified or adapted for users' purposes in this field.

Program summary

Title of program: OK1
Catalogue identifier: ADST
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC (Pentium 4, ∼1 GHz or more recommended)
Operating system: Windows or UNIX
Program language used: C++
Memory required to execute with typical data: 911 MB
No. of bits in a word: 32
No. of processors used: 1 CPU
Has the code been vectorized or parallelized: No
No. of bytes in distributed program, including test data: 16 557
Distribution format: tar gzip file
Keywords: heavy ion beam, inertial confinement fusion, energy deposition, fuel pellet
Nature of physical problem: Nuclear fusion energy may have attractive features as one of our energy resources. In this paper we focus on heavy ion inertial confinement fusion (HIF). Due to the favorable energy deposition behavior of heavy ions in matter [J.J. Barnard et al., UCRL-LR-108095, 1991; C. Deutsch et al., J. Plasma Fusion Res. 77 (2001) 33; T. Someya et al., Fusion Sci. Tech. (2003), submitted], a heavy ion beam (HIB) is expected to be one of the energy driver candidates for operating a future inertial confinement fusion power plant. For successful fuel ignition and fusion energy release, a stringent requirement is imposed on the HIB irradiation non-uniformity, which should be less than a few percent [T. Someya et al., Fusion Sci. Tech. (2003), submitted; M.H. Emery et al., Phys. Rev. Lett. 48 (1982) 253; S. Kawata et al., J. Phys. Soc. Jpn. 53 (1984) 3416]. To meet this requirement we need to evaluate the non-uniformity of a realistic HIB irradiation and energy deposition pattern. The HIB irradiation and non-uniformity evaluations are sophisticated and difficult to perform analytically. Based on our code, one can numerically obtain a three-dimensional profile of the energy deposition and evaluate the HIB irradiation non-uniformity onto a spherical target for a specific HIB parameter set in HIF.
Method of solution: The OK1 code is based on the stopping power of ions in matter [J.J. Barnard et al., UCRL-LR-108095, 1991; C. Deutsch et al., J. Plasma Fusion Res. 77 (2001) 33; T. Someya et al., Fusion Sci. Tech. (2003), submitted; M.H. Emery et al., Phys. Rev. Lett. 48 (1982) 253; S. Kawata et al., J. Phys. Soc. Jpn. 53 (1984) 3416; T. Mehlhorn, SAND80-0038, 1980; H.H. Andersen, J.F. Ziegler, Pergamon Press, 1977, p. 3]. The code simulates a multi-beam irradiation, obtains the 3D energy deposition profile of the fuel pellet and evaluates the deposition non-uniformity.
Restrictions on the complexity of the problem: None
Typical running time: The execution time depends on the number of beams in the simulated irradiation and its characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is less than 2 s on a PC with a Pentium 4, 2.2 GHz CPU (e.g., in Test 2, with 600 beams, the running time is about 18 minutes).
Unusual features of the program: None

10.
The fourth-order compact approximation for the spatial second derivative and several linearized approaches, including the time-lagging method of Zhang et al. (1995), the local-extrapolation technique of Chang et al. (1999) and the recent scheme of Dahlby et al. (2009), are considered in constructing fourth-order linearized compact difference (FLCD) schemes for generalized NLS equations. By applying a new time-lagging linearized approach, we propose a symmetric fourth-order linearized compact difference (SFLCD) scheme, which is shown to be more robust in long-time simulations of plane-wave, breather, periodic traveling-wave and solitary-wave solutions. Numerical experiments suggest that the SFLCD scheme is slightly more accurate than some other FLCD schemes and the split-step compact difference scheme of Dehghan and Taleei (2010). Compared with the time-splitting pseudospectral method of Bao et al. (2003), our SFLCD method is more suitable for oscillating solutions or problems with a rapidly varying potential.

11.
Correlated or clustered failure time data often occur in medical studies, among other fields (Cai and Prentice, 1995; Kalbfleisch and Prentice, 2002), and sometimes such data arise together with interval censoring (Wang et al., 2006). Furthermore, the failure time of interest may be related to the cluster size. For example, Williamson et al. (2008) discussed such an example arising from a lymphatic filariasis study. A simple and common approach to the analysis of these data is to simplify or convert interval-censored data to right-censored data, owing to the lack of proper inference procedures for direct analysis. In this paper, two procedures are presented for regression analysis of clustered failure time data that allow both interval censoring and informative cluster size. Simulation studies are conducted to evaluate the presented approaches, and they are applied to a motivating example.

12.
Identifying effective literacy instruction programs has been a focal point for governments, educators and parents over the last few decades (Ontario Ministry of Education, 2004, 2006; Council of Ontario Directors of Education, 2011). Given the increasing use of computer technologies in the classroom and in the home, a variety of information communication technology (ICT) interventions for learning have been introduced. Meta-analyses comparing the impact of these programs on learning, however, have yielded inconsistent findings (Andrews et al., 2007; Torgerson and Zhu, 2003; Slavin et al., 2008; Slavin et al., 2009). The present tertiary meta-analytic review re-assesses outcomes presented in three previous meta-analyses. Four moderator variables were examined: the systematic review from which each outcome was retrieved, training and support, implementation fidelity, and who delivered the intervention (teacher versus researcher). When training and support was entered as a moderator variable, the small overall effectiveness of the ICTs (ES = 0.18), similar to that found in previous research, increased significantly (ES = 0.57). These findings indicate the importance of including implementation factors, such as training and support, when considering the relative effectiveness of ICT interventions.

13.
In this paper, we study a new iteration process for a finite family of nonself asymptotically nonexpansive mappings with errors in Banach spaces. We prove some weak and strong convergence theorems for this new iteration process. The results of this paper improve and extend the corresponding results of Chidume et al. (2003) [10], Osilike and Aniagbosor (2000) [3], Schu (1991) [4], Takahashi and Kim (1998) [9], Tian et al. (2007) [18], Wang (2006) [11], Yang (2007) [17] and others.

14.
We consider a nonlinear discrete-time population model for the dynamics of an age-structured species. This model has the form of a Lur'e feedback system (well known in control theory) and is a particular case of the system studied by Townley et al. (2012). The main objective is to show that, in this case, the range of nonlinearities for which the existence of a globally asymptotically stable non-zero equilibrium can be guaranteed is considerably larger than in the main result of Townley et al. (2012). We illustrate our results with several biologically meaningful examples.
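A toy instance of such a Lur'e-type population model can be iterated directly (all parameter values below are assumed for illustration, not taken from the paper): a two-stage juvenile/adult model x_{t+1} = A x_t + b f(cᵀ x_t), where the nonlinearity f is a saturating Beverton–Holt recruitment, settles onto its positive equilibrium.

```python
import numpy as np

# Hypothetical two-stage model in Lur'e form x_{t+1} = A x_t + b f(c^T x_t):
# state x = (juveniles, adults).
A = np.array([[0.0, 0.0],
              [0.7, 0.8]])          # juveniles mature, adults persist
b = np.array([1.0, 0.0])            # recruitment enters the juvenile stage
c = np.array([0.0, 1.0])            # recruitment driven by adult abundance
f = lambda s: 5.0 * s / (1.0 + s)   # Beverton-Holt: increasing, saturating

x = np.array([0.1, 0.1])
for _ in range(2000):
    x = A @ x + b * f(c @ x)

# Non-zero equilibrium by hand: a = 0.7 j + 0.8 a and j = f(a)
# give a* = 16.5 adults and j* = 33/7 juveniles.
```

Because the map is monotone and f saturates, the trajectory grows away from the unstable zero state and converges to the unique positive equilibrium, the behavior whose parameter range the paper enlarges.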

15.
Big data are a prominent source of value capable of generating competitive advantage and superior business performance. This paper represents the first empirical investigation of the theoretical model proposed by Grover et al. (2018), considering the mediating effects of four value creation mechanisms on the relationship between big data analytics capabilities (BDAC) and four value targets. The four value creation mechanisms investigated (the sources of the value being pursued) are transparency, access, discovery, and proactive adaptation, while the four value targets (the impacts of the value creation process) are organization performance, business process improvement, customer experience and market enhancement, and product and service innovation. The proposed empirical validation of Grover et al.’s (2018) model adopts an econometric analysis applied to data gathered through a survey of 256 BDA experts. The results reveal that transparency mediates the relationship for all the value targets, while access and proactive adaptation mediate only for some value targets, and discovery has no mediating effect. Theoretical and practical implications are discussed at the end of the paper.

16.
An emerging trend in DNA computing consists of the algorithmic analysis of new molecular biology technologies and, more generally, of more effective tools to tackle computational biology problems. An algorithmic understanding of the interaction between DNA molecules has become the focus of research that initially aimed to solve mathematical problems by processing data within biomolecules. In this paper a novel mechanism of DNA recombination is discussed, which has turned out to be a good implementation key for developing new procedures for DNA manipulation (Franco et al., DNA extraction by cross pairing PCR, 2005; Franco et al., DNA recombination by XPCR, 2006; Manca and Franco, Math Biosci 211:282–298, 2008). It is called XPCR, as it is a variant of the polymerase chain reaction (PCR), which revolutionized molecular biology as a technique for cyclic amplification of DNA segments. A few DNA algorithms are proposed, which have been experimentally proven in different contexts, such as mutagenesis (Franco, Biomolecular computing—combinatorial algorithms and laboratory experiments, 2006), multiple concatenation, gene-driven DNA extraction (Franco et al., DNA extraction by cross pairing PCR, 2005), and generation of DNA libraries (Franco et al., DNA recombination by XPCR, 2006), and some related ongoing work is outlined.

17.
In this paper, we introduce a new modified Ishikawa iterative process for computing fixed points of an infinite family of nonexpansive mappings in the framework of Banach spaces. We then establish a strong convergence theorem for the proposed iterative scheme under some mild conditions, which solves a variational inequality. The results obtained in this paper extend and improve on the recent results of Qin et al. [Strong convergence theorems for an infinite family of nonexpansive mappings in Banach spaces, Journal of Computational and Applied Mathematics 230 (1) (2009) 121–127], Cho et al. [Approximation of common fixed points of an infinite family of nonexpansive mappings in Banach spaces, Computers and Mathematics with Applications 56 (2008) 2058–2064] and many others.
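For intuition, the classical single-map Ishikawa iteration underlying such schemes can be run on a concrete nonexpansive map: T = cos on the real line (|cos'| ≤ 1), whose unique fixed point is the Dottie number ≈ 0.739085. The constant step sizes below are illustrative; the paper's scheme for an infinite family generalizes this two-step averaging pattern.

```python
import math

T = math.cos          # nonexpansive on R, fixed point ~ 0.739085
x = 0.0
alpha, beta = 0.5, 0.5
for _ in range(200):
    # Ishikawa iteration: an inner averaged step feeds an outer one.
    y = (1 - beta) * x + beta * T(x)
    x = (1 - alpha) * x + alpha * T(y)
```

The averaging makes each update a strict interpolation between the current point and its image, which is what drives convergence for merely nonexpansive (not contractive) maps.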

18.
The aim of this paper is to derive diagnostic procedures based on the case-deletion model for symmetrical nonlinear regression models, complementing Galea et al. (2005), who developed local influence diagnostics under some perturbation schemes. This class of models includes all symmetric continuous error distributions, covering both light- and heavy-tailed distributions such as the Student-t, logistic-I and -II, power exponential, generalized Student-t, generalized logistic and contaminated normal, among others. Thus, these models can be checked for robustness to outliers in the response variable, and diagnostic methods may be a useful tool for an appropriate choice. First, an iterative process for parameter estimation as well as some inferential results are presented. In addition, we present the results of a simulation study in which the characteristics of heavy-tailed models are evaluated in the presence of outliers. Then, we derive diagnostic measures such as Cook's distance, the W-K statistic, the one-step approach and the likelihood displacement, generalizing results obtained for normal nonlinear regression models. We also present simulation studies that illustrate the behavior of the proposed diagnostic measures. Finally, we consider two real data sets previously analyzed under normal nonlinear regression models. The diagnostic analysis indicates that a Student-t nonlinear regression model seems to fit the data better than the normal nonlinear regression model, as well as other symmetrical nonlinear models, in the sense of robustness against extreme observations.
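The case-deletion idea behind these diagnostics can be sketched in the normal linear special case, with one planted response outlier in simulated data (an assumed illustration; the paper generalizes such measures to symmetrical nonlinear models): refit without each case in turn and measure the shift in the fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)

n, p = 50, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(n)
y[0] += 15.0                      # plant one outlier in the response

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (n - p)      # residual variance estimate

def cook_distance(i):
    """Case-deletion Cook's distance: refit without case i and measure
    the induced shift in the coefficient vector."""
    mask = np.arange(n) != i
    beta_i, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    d = beta - beta_i
    return d @ (X.T @ X) @ d / (p * s2)

D = np.array([cook_distance(i) for i in range(n)])
```

The planted outlier dominates the resulting influence profile, which is exactly the behavior the paper's case-deletion measures are designed to flag in the nonlinear symmetric setting.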

19.
We present a revised version of the BilKristal tool of Okuyan et al. (2007). We converted the development environment to Microsoft Visual Studio 2005 to resolve compatibility issues, added multi-core CPU support, and improved the graphics functions to increase performance. Discovered bugs were fixed, and exporting functionality to a material visualization tool was added.

20.
We present a Brownian Dynamics model of biological molecule separation in periodic nanofilter arrays. The biological molecules are modeled using the Worm-Like-Chain model with hydrodynamic interactions. We focus on short dsDNA molecules; this places the separation process either in the Ogston sieving regime or the transition region between Ogston sieving and entropic trapping. Our simulation results are validated using the experimental results of Fu et al. (Phys Rev Lett 97:018103, 2006); particular attention is paid to the model's ability to quantitatively capture experimental results using realistic values of all physical parameters. Our simulation results show that molecule mobility is sensitive to the device geometry. Moreover, our model is used to validate the theoretical prediction of Li et al. (Anal Bioanal Chem 394:427–435, 2009), who proposed a separation process featuring an asymmetric device and an electric field of alternating polarity. Good agreement is found between our simulation results and the predictions of the theoretical model of Li et al.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号