Similar Literature
20 similar documents found
1.
Describes the log-linear model as a framework for analyzing effects in multidimensional contingency tables, i.e., tables of frequencies formed by 2 or more variables of classification. Variables are considered to have nominal categories. A general-purpose analysis, similar to analysis of variance, is proposed for such tables. 2 test procedures are considered: (a) maximum likelihood estimation of expected cell frequencies and associated chi-square tests, and (b) chi-square tests based on logarithms of adjusted cell frequencies. In addition, 2 multiple-comparison methods, related to the latter approach, are considered as supplementary or alternative procedures.
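Neither test procedure is spelled out in the abstract; the sketch below, using a hypothetical 2 × 3 table, illustrates the two familiar statistics it refers to: the Pearson chi-square based on maximum-likelihood expected cell frequencies, and the likelihood-ratio statistic computed from logarithms of cell frequencies.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2-way contingency table (rows: groups, cols: response category).
table = np.array([[30, 15, 5],
                  [20, 25, 10]])

# (a) Maximum-likelihood expected cell frequencies under independence,
# with the associated Pearson chi-square test.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
print("expected frequencies:\n", expected)

# (b) The log-linear analogue based on logarithms of cell frequencies:
# the likelihood-ratio statistic G^2.
g2, p_g2, _, _ = chi2_contingency(table, lambda_="log-likelihood")
print(f"G^2 = {g2:.2f}, p = {p_g2:.4f}")
```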

2.
Kraemer and Jacklin (1979) proposed a method of analysis of univariate dyadic social interactions or relational data, and Mendoza and Graziano (1982) extended this method to multivariate relations. Their approach is based on an analysis-of-variance-type model that contains parameters characterizing the behavior of actors and partners and their interactions on each relation. The techniques presented in this article offer an alternative approach to the multivariate analysis of social interactions, recognizing that many relations yield discrete-valued data and thus are better modeled using methods designed for categorical data. This alternative approach is also more general because it allows more types of models to be fit. We illustrate the approach using the same data analyzed by the earlier methods.

3.
The methodology of discrete-event simulation provides a promising alternative solution to designing and analyzing dynamic, complicated, and interactive construction systems. In this paper, an attempt is made to extend the previous work of simplifying construction simulation by delving into the fundamental approaches for discrete-event simulation. A new simplified discrete-event simulation approach (SDESA) is presented through extracting the constructive features from the existing event/activity-based simulation methods; both the algorithm and the model structure of simulation are streamlined such that simulating construction systems is made as easy as applying the critical path method (CPM). Two applications based on real road construction projects in Hong Kong serve as case studies to illustrate the methodology of simulation modeling with SDESA and reveal the simplicity and effectiveness of SDESA in modeling complex construction systems and achieving the preset objectives of such modeling. They are a granular base-course construction system featuring both cyclic and linear processes and an asphalt paving construction system with complicated technological/logical constraints. As a general-purpose method for construction planning, SDESA enables practitioners to deal with what the CPM-based network analysis method fails to solve by offering discrete-event simulation capabilities. Furthermore, the SDESA can potentially be adapted to special-purpose simulation tools to tackle large and complicated construction systems of practical size that have yet to find convenient solutions with existing simulation methods.
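SDESA's own algorithm is not reproduced in the abstract; as a point of reference, here is a minimal event-list discrete-event engine for a cyclic hauling process of the kind described. The fleet size, activity durations, and shift length are all hypothetical.

```python
import heapq
import random

random.seed(42)

# Toy cyclic earthmoving process: 3 trucks share one loader, then haul,
# dump, and return (the kind of cyclic system the paper targets).
NUM_TRUCKS, SIM_END = 3, 480.0           # fleet size, shift length (min)
events, seq = [], 0                       # event list: (time, seq, kind)

def schedule(time, kind):
    global seq
    heapq.heappush(events, (time, seq, kind))
    seq += 1

loader_free, waiting, loads_done = True, 0, 0
for _ in range(NUM_TRUCKS):
    schedule(0.0, "arrive")

while events:
    now, _, kind = heapq.heappop(events)
    if now > SIM_END:
        break
    if kind == "arrive":                  # truck reaches the loader
        if loader_free:
            loader_free = False
            schedule(now + random.uniform(4, 6), "loaded")
        else:
            waiting += 1
    elif kind == "loaded":                # loading ends; truck departs
        loads_done += 1
        schedule(now + random.uniform(18, 25), "arrive")  # haul+dump+return
        if waiting:                       # next queued truck starts loading
            waiting -= 1
            schedule(now + random.uniform(4, 6), "loaded")
        else:
            loader_free = True

print(f"loads completed in {SIM_END:.0f} min: {loads_done}")
```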

4.
5.
The mode of within-locus gene action that prevails in most genomic regions is termed the major genomic mode; that is, it characterizes the within-locus allelic effects across most of the genome. Determining whether dominance or overdominance is the major genomic mode is important for two long-standing evolutionary genetics issues: 1. How is the genetic variation in most genomic regions maintained? 2. What is the major mechanism for heterosis? Many efforts have been made, but almost all of them suffer from explanatory difficulties. Here we propose an alternative inference approach, based on existing theoretical results on the correlation between the recombination rate and the level of neutral variation in different genomic regions. A positive correlation suggests dominance, and a negative correlation overdominance, as the major genomic mode. A zero correlation implies either few selected sites or a roughly equal composition and distribution of dominant and overdominant regions in the genome, depending on the data distribution. This approach not only avoids the problems associated with earlier approaches, but is also particularly useful in organisms where controlled breeding is difficult. Well-corroborated data in Drosophila and recently emerging data in mice and humans all suggest dominance as the major genomic mode.
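The inference rule itself is simple to state in code. A sketch with simulated per-region data (both arrays below are hypothetical stand-ins for real recombination-rate and diversity estimates):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical per-region estimates: recombination rate (cM/Mb) and
# neutral nucleotide diversity (pi) for 50 genomic regions.
recomb = rng.gamma(shape=2.0, scale=1.5, size=50)
pi = 0.002 * recomb / recomb.mean() + rng.normal(0, 5e-4, size=50)

rho, p = spearmanr(recomb, pi)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")

# Decision rule from the abstract:
#   rho > 0  -> dominance as the major genomic mode
#   rho < 0  -> overdominance as the major genomic mode
#   rho ~ 0  -> few selected sites, or a balanced mix of both modes
if p > 0.05:
    print("no significant correlation: few selected sites or a mixed genome")
elif rho > 0:
    print("positive correlation: dominance as the major genomic mode")
else:
    print("negative correlation: overdominance as the major genomic mode")
```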

6.
Suppose the number of 2 x 2 tables is large relative to the average table size, and the observations within a given table are dependent, as occurs in longitudinal or family-based case-control studies. We consider fitting regression models to the odds ratios using table-level covariates. The focus is on methods to obtain valid inferences for the regression parameters beta when the dependence structure is unknown. In this setting, Liang (1985, Biometrika 72, 678-682) has shown that inference based on the noncentral hypergeometric likelihood is sensitive to misspecification of the dependence structure. In contrast, estimating functions based on the Mantel-Haenszel method yield consistent estimators of beta. We show here that, under the estimating function approach, Wald's confidence interval for beta performs well in multiplicative regression models but unfortunately has poor coverage probabilities when an additive regression model is adopted. As an alternative to Wald inference, we present a Mantel-Haenszel quasi-likelihood function based on integrating the Mantel-Haenszel estimating function. A simulation study demonstrates that, in medium-sized samples, the Mantel-Haenszel quasi-likelihood approach yields better inferences than other methods under an additive regression model and inferences comparable to Wald's method under a multiplicative model. We illustrate the use of this quasi-likelihood method in a study of the familial risk of schizophrenia.
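The Mantel-Haenszel estimating function underlying this work yields the familiar closed-form common odds-ratio estimator. A minimal sketch with hypothetical strata:

```python
import numpy as np

# Stratified 2x2 tables, one per family/cluster: rows = exposed/unexposed,
# columns = case/control. Each row of `tables` holds (a, b, c, d).
tables = np.array([
    [12,  5,  6, 10],
    [ 8,  4,  5,  9],
    [15,  7,  9, 14],
], dtype=float)

a, b, c, d = tables.T
n = tables.sum(axis=1)

# Mantel-Haenszel common odds-ratio estimator:
#   OR_MH = sum_k(a_k * d_k / n_k) / sum_k(b_k * c_k / n_k)
or_mh = (a * d / n).sum() / (b * c / n).sum()
print(f"Mantel-Haenszel OR = {or_mh:.3f}, log OR = {np.log(or_mh):.3f}")
```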

7.
Missing data commonly exist in the operational records of wastewater treatment plants, such as influent and effluent water quality data. To deal with missing data, time series models that characterize trend, lag, and seasonality may be applied. In this paper, two time-series-model-based methods are developed to replace missing data: two-directional exponential smoothing (TES) and TES with added white noise (TESWN). The methods are compared with traditional missing-data-replacement methods, both in predicting missing values from influent data and in the subsequent effect when the resulting influent time series are used as input to process simulation models. The TES method is shown to be most appropriate when the goal is to minimize the average error associated with the missing value. The TESWN method is shown to be better suited for characterizing the amount of uncertainty that may be associated with the missing values.
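The paper's exact TES recursion is not given in the abstract; the sketch below assumes the natural reading: smooth the series exponentially in both directions, average the two smooths across the gap, and for TESWN add white noise scaled to the observed residual spread. The data and smoothing constant are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical daily influent concentration with a block of missing values.
s = pd.Series(200 + 30 * np.sin(np.arange(120) / 10) + rng.normal(0, 5, 120))
s.iloc[50:60] = np.nan

alpha = 0.3
fwd = s.ewm(alpha=alpha, ignore_na=True).mean()               # forward smooth
bwd = s[::-1].ewm(alpha=alpha, ignore_na=True).mean()[::-1]   # backward smooth

gap = s.isna()
tes = s.copy()
tes[gap] = (fwd[gap] + bwd[gap]) / 2                          # TES fill

# TESWN: add white noise scaled to the residual spread, so filled values
# carry realistic variability instead of an over-smooth line.
resid_sd = (s - fwd)[~gap].std()
teswn = tes.copy()
teswn[gap] += rng.normal(0, resid_sd, gap.sum())

print(tes.iloc[48:62].round(1).tolist())
```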

8.
An important, frequent, and unresolved problem in treatment research is deciding how to analyze outcome data when some of the data are missing. After a brief review of alternative procedures and the underlying models on which they are based, an approach is presented for dealing with the most common situation: comparing outcome results in a 2-group, randomized design in the presence of missing data. The proposed analysis is based on the concept of "modeling our ignorance": examining all possible outcomes, given a known number of missing results with a binary outcome, and then describing the distribution of those results. This method allows the researcher to define the range of all results that could have been obtained had the missing data been observed. Extensions to more complex designs are discussed.
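Because the outcomes are binary and the number of missing results is known, "modeling our ignorance" reduces to enumerating, per arm, how many of the missing results could have been successes. A minimal sketch with hypothetical counts:

```python
from itertools import product

# Two-group randomized trial with a binary outcome: observed (successes,
# failures) plus a known number of missing outcomes per arm (hypothetical).
obs = {"treat": (18, 7), "control": (12, 13)}
missing = {"treat": 3, "control": 4}

diffs = []
for mt, mc in product(range(missing["treat"] + 1),
                      range(missing["control"] + 1)):
    # Fill mt treatment and mc control missing outcomes with successes;
    # every (mt, mc) pair is one possible complete-data "world".
    st, ft = obs["treat"][0] + mt, obs["treat"][1] + missing["treat"] - mt
    sc, fc = obs["control"][0] + mc, obs["control"][1] + missing["control"] - mc
    diffs.append(st / (st + ft) - sc / (sc + fc))

print(f"risk difference ranges from {min(diffs):.3f} to {max(diffs):.3f} "
      f"over {len(diffs)} possible complete outcomes")
```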

9.
This paper reviews the methodological approaches that allow analysis of the mechanisms underlying development and differentiation. Progress in investigating the mechanisms of embryogenesis is related to the discovery of gene families in the Drosophila genome that are responsible for different periods of embryogenesis. The true revolution in studies of developmental mechanisms began with the application of molecular-genetic methods to the analysis of Drosophila mutant lines. Identification and analysis of the genes controlling regeneration is one of the most effective paths toward understanding the mechanisms underlying regeneration. No mutations affecting regeneration are known, so the development of alternative (i.e., not based on mutation analysis) methods of discovering the genes controlling regeneration is necessary for investigating the genetic mechanisms of regeneration. The advantages and drawbacks of the two main approaches for discovering the genes controlling regeneration are considered. The first approach is based on producing a bank of sequences expressed in the regenerating structures, followed by screening of the bank with known probes; it also involves analysis of the structure, function, and expression pattern of the homologs obtained. The second approach is based on subtractive hybridization, which allows identification of the genes specifically expressed in the regenerating structures. This approach made it possible to identify, for the first time, new genes specifically expressed during lens and retina regeneration in amphibians.

10.
Highway construction often causes an additional road user cost (RUC) to motorists due to traffic flow interruption and congestion in work zones. Consequently, facility owners, such as the Florida Department of Transportation (FDOT), are often interested in using alternative contracting methods such as A+B contracting to expedite construction. Although many of these contracting methods rely on the RUC to determine incentives or disincentives, no standard method for RUC calculation is available to FDOT district engineers. In addition, existing methods are neither practical nor user-friendly for determining incentives or disincentives. This study intends to develop a RUC calculation procedure for the FDOT that focuses on using data that are easily accessible to FDOT district engineers, such as drawings and maintenance of traffic plans. The procedure is developed based on traffic analysis methods published in the Highway Capacity Manual, previous studies on user benefit analysis and work zones, and empirical data specific to Florida. Case studies are used to illustrate the procedure and to compare it with two other existing models, the Arizona model and the queue and user cost evaluation of work zone model, through correlation analysis, comparison of calculation assumptions, and data input analysis. This study shows that the suggested procedure produces consistent RUC estimates.
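The FDOT procedure itself relies on Highway Capacity Manual methods and Florida-specific data not shown here; the sketch below illustrates only the core deterministic-queueing arithmetic common to RUC models. The hourly volumes, work-zone capacity, and value of time are all hypothetical.

```python
# Hourly demand (veh/h) for one day and the reduced work-zone capacity.
demand = [350, 300, 400, 900, 1500, 1700, 1400, 1200, 1100, 1000, 1000, 1100,
          1200, 1200, 1300, 1500, 1700, 1800, 1400, 1000, 800, 600, 500, 400]
capacity = 1300          # veh/h through the work zone (hypothetical)
value_of_time = 22.0     # $/veh-h (hypothetical blended car/truck rate)

queue = 0.0              # vehicles stored in the queue
delay_veh_hours = 0.0
for volume in demand:
    queue = max(0.0, queue + volume - capacity)
    # Deterministic queueing: delay this hour is roughly the queue length
    # times one hour; the end-of-hour queue is used as a coarse proxy.
    delay_veh_hours += queue

ruc = delay_veh_hours * value_of_time
print(f"queue delay: {delay_veh_hours:.0f} veh-h, daily RUC = ${ruc:,.0f}")
```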

11.
Evaluation of the impact of nosocomial infection on the duration of hospital stay usually relies on estimates obtained in prospective cohort studies. However, the statistical methods used to estimate the extra length of stay are usually not adequate: a naive comparison of the duration of stay in infected and non-infected patients does not properly estimate the extra hospitalisation time due to nosocomial infections. Matching for the duration of stay prior to infection can compensate in part for the bias of such ad hoc methods. New model-based approaches have been developed to estimate the excess length of stay, and it will be demonstrated that statistical models based on multivariate counting processes provide an appropriate framework for analysing the occurrence and impact of nosocomial infections. We propose and investigate new approaches to estimating the extra time spent in hospital attributable to nosocomial infections, based on functionals of the transition probabilities in multistate models. Additionally, within the class of structural nested failure time models, an alternative approach to estimating the extra stay due to nosocomial infections is derived. The methods are illustrated using data from a cohort study on 756 patients admitted to intensive care units at the University Hospital in Freiburg.
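A toy discrete-time version of a three-state model (in hospital without infection, in hospital with infection, discharged) shows how the extra stay emerges as a functional of the transition probabilities. The daily rates are hypothetical, and the paper's actual estimators are far more general.

```python
import numpy as np

rng = np.random.default_rng(3)

# Daily transition probabilities (hypothetical): state 0 = in hospital,
# no infection; 1 = in hospital, infected; 2 = discharged.
p_infect, p_disch0, p_disch1 = 0.03, 0.12, 0.07   # infection slows discharge

def mean_los(p_inf, n=100_000):
    # Simulate patients day by day until discharge; return the average stay.
    state = np.zeros(n, dtype=int)
    days = np.zeros(n)
    active = state < 2
    while active.any():
        u = rng.random(active.sum())
        s = state[active]
        new = s.copy()
        new[(s == 0) & (u < p_disch0)] = 2
        new[(s == 0) & (u >= p_disch0) & (u < p_disch0 + p_inf)] = 1
        new[(s == 1) & (u < p_disch1)] = 2
        state[active] = new
        days[active] += 1
        active = state < 2
    return days.mean()

# Extra stay attributable to infection: compare the system with the
# infection transition switched on versus switched off.
extra = mean_los(p_infect) - mean_los(0.0)
print(f"extra hospital days attributable to infection: {extra:.2f}")
```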

12.
It is difficult to assess hypothetical models in poorly measured domains such as neuroendocrinology. Without a large library of observations to constrain inference, executing such incomplete models implies making assumptions, and mutually exclusive assumptions must be kept in separate worlds. We define a general abductive multiple-worlds engine that assesses such models by (i) generating the worlds and (ii) testing whether these worlds contain known behaviour. World generation is constrained via the use of relevant envisionment. We describe QCM, a modeling language for compartmental models that can be processed by this inference engine. This tool has been used to find faults in theories published in international refereed journals; i.e., QCM can detect faults that are invisible to other methods. The generality and computational limits of this approach are discussed. In short, the approach is applicable to any representation that can be compiled into an and-or graph, provided the graphs are not too big or too intricate (fanout < 7).
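A toy rendering of the multiple-worlds idea, far simpler than QCM's compartmental models: mutually exclusive assumptions generate separate worlds, and only worlds whose predictions contain the known behaviour survive. All model content below is hypothetical.

```python
from itertools import product

# Mutually exclusive assumptions about two unmeasured quantities.
assumptions = {
    "stimulus_effect": ["raises_hormone", "lowers_hormone"],
    "receptor":        ["sensitive", "desensitized"],
}

def predict(world):
    # Hypothetical qualitative model: hormone level rises only if the
    # stimulus raises it AND the receptor pathway is sensitive.
    if (world["stimulus_effect"] == "raises_hormone"
            and world["receptor"] == "sensitive"):
        return {"hormone": "up"}
    return {"hormone": "not_up"}

observed = {"hormone": "up"}

# (i) generate every world; (ii) keep those containing the known behaviour.
# Surviving worlds are the abductive explanations of the observation.
worlds = [dict(zip(assumptions, vals))
          for vals in product(*assumptions.values())]
survivors = [w for w in worlds if predict(w) == observed]
print(f"{len(survivors)} of {len(worlds)} worlds explain the data:", survivors)
```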

13.
Conditional inference methods are proposed for the odds ratio between binary exposure and disease variables when only the probability of exposure is known for each study subject. We develop a conditional likelihood approach that removes nuisance parameters and permits inferences to be made about important parameters in log odds ratio regression models. We also discuss a heuristic procedure based on estimating the (unknown) number of truly exposed individuals; this procedure provides a simple framework for interpreting our likelihood-based statistics, and leads to a Mantel-Haenszel-type estimator and a goodness-of-fit test. As an example of the use of this methodology, we present an analysis of some genetic data of Swift et al. (1976, Cancer Research 36, 209-215).

14.
Paul E. Meehl's work on the clinical versus statistical prediction controversy is reviewed. His contributions included the following: putting the controversy center stage in applied psychology; clarifying concepts underpinning the debate (especially his crucial distinction between ways of gathering data and ways of combining them); establishing that the controversy was real and not concocted; analyzing clinical inference from both theoretical and probabilistic points of view; and reviewing studies that compared the accuracy of these 2 methods of data combination. Meehl's (1954/1996) conclusion that statistical prediction consistently outperforms clinical judgment has stood up extremely well for half a century. His conceptual analyses have not been significantly improved since he published them in the 1950s and 1960s. His work in this area contains several citation classics, which are part of the working knowledge of all competent applied psychologists today.

15.
The analysis of the human genome is one of the most significant topics in both biology and medical science, and there is a growing need for a well-designed database system for searching and analyzing human genome data. We developed a deductive database system to search and analyze nucleotide sequence data derived from the GenBank primate data. A deductive database is a next-generation database system containing an inference mechanism that can handle problems beyond the capabilities of classical database systems. Database queries are described as logical rules. Because the rules are declarative and do not require the procedural commands usually used in computer programs, they are simple to write even for molecular biologists who are not expert programmers. Furthermore, queries based on logical rules are powerful enough to express complicated biological problems; recursive rules in particular are suitable for examining secondary structures of nucleotide sequences. In our analysis of TfR's IRE, we noted five stem-and-loop structures.
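As an illustration of the recursive rules mentioned above (not the system's actual query language), here is a Datalog-style rule over toy sequence-feature facts, evaluated by naive bottom-up iteration to a fixpoint in Python. The feature schema is hypothetical.

```python
# Facts: feature X lies immediately upstream of feature Y (hypothetical schema).
upstream = {("promoter", "ire"), ("ire", "cds"), ("cds", "polyA")}

# Recursive rule, Datalog style:
#   precedes(X, Y) :- upstream(X, Y).
#   precedes(X, Z) :- upstream(X, Y), precedes(Y, Z).
# Evaluated here by naive bottom-up iteration until no new facts appear.
precedes = set(upstream)
changed = True
while changed:
    changed = False
    for (x, y) in upstream:
        for (y2, z) in list(precedes):
            if y == y2 and (x, z) not in precedes:
                precedes.add((x, z))
                changed = True

# Query: every (X, Z) pair with precedes(X, Z), i.e. the transitive closure.
print(sorted(precedes))
```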

16.
Deformable models in medical image analysis: a survey
This article surveys deformable models, a promising and vigorously researched computer-assisted medical image analysis technique. Among model-based techniques, deformable models offer a unique and powerful approach to image analysis that combines geometry, physics and approximation theory. They have proven to be effective in segmenting, matching and tracking anatomic structures by exploiting (bottom-up) constraints derived from the image data together with (top-down) a priori knowledge about the location, size and shape of these structures. Deformable models are capable of accommodating the significant variability of biological structures over time and across different individuals. Furthermore, they support highly intuitive interaction mechanisms that, when necessary, allow medical scientists and practitioners to bring their expertise to bear on the model-based image interpretation task. This article reviews the rapidly expanding body of work on the development and application of deformable models to problems of fundamental importance in medical image analysis, including segmentation, shape representation, matching and motion tracking.
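For a concrete taste of one classical deformable model, the sketch below runs scikit-image's active-contour ("snake") implementation on a bundled sample image. The parameter values follow the library's documentation example rather than this survey, and the (row, col) snake convention assumes a recent scikit-image version.

```python
import numpy as np
from skimage import data
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = rgb2gray(data.astronaut())

# Initialize the snake as a circle around the structure of interest; the
# contour then deforms under image forces subject to smoothness constraints
# (alpha: elasticity, beta: rigidity), the "physics" of the model.
s = np.linspace(0, 2 * np.pi, 400)
init = np.stack([100 + 100 * np.sin(s),      # rows
                 220 + 100 * np.cos(s)], 1)  # cols

snake = active_contour(gaussian(img, sigma=3), init,
                       alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)   # (400, 2): the fitted contour coordinates
```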

17.
Terrestrial laser scanning (TLS) provides a rapid remote-sensing technique for modeling 3D objects. Previous work applying TLS to structural analysis has demonstrated its effectiveness in capturing simple beam deflections and modeling existing structures. This paper extends TLS to damage detection and volumetric change analysis for a full-scale structural test specimen. Importantly, it provides the framework necessary for such applications, combined with an analysis approach that does not require tedious development of complex surfaces. Intuitive slicing analysis methods are presented, which can be automated for rapid generation of results. The proposed approach proved consistent with conventional photographic and surface analysis methods; furthermore, the TLS data provided additional insight into geometric change not apparent using conventional methods. As with any digital record, a key benefit of the proposed approach is the resulting virtual test specimen, which remains available for posttest analysis long after the original specimen is demolished. Uncertainties that large TLS data sets, mixed pixels, and parallax can introduce into the TLS analysis are also discussed.
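A minimal version of the slicing analysis: cut the cloud into horizontal bands and report each band's cross-sectional area (for 2-D input, scipy's ConvexHull.volume is the enclosed area). The point cloud below is synthetic, with an artificial section loss standing in for damage.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(5)

# Synthetic TLS point cloud of a roughly cylindrical specimen (x, y, z in m),
# with a reduced radius above z = 2 m standing in for damage.
n = 20_000
z = rng.uniform(0, 3, n)
theta = rng.uniform(0, 2 * np.pi, n)
r = 0.5 + 0.01 * rng.normal(size=n) - 0.05 * (z > 2)
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

# Slice the cloud into horizontal bands and report cross-sectional area
# per slice; comparing slices between scans reveals volumetric change.
for z0 in np.arange(0, 3, 0.5):
    band = pts[(pts[:, 2] >= z0) & (pts[:, 2] < z0 + 0.5)]
    area = ConvexHull(band[:, :2]).volume
    print(f"slice {z0:.1f}-{z0 + 0.5:.1f} m: area = {area:.3f} m^2")
```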

18.
Formal stochastic simulation has been recognized as a remedy for the shortcomings inherent in classic critical path method (CPM)/program evaluation and review technique (PERT) analysis. An accurate and efficient method of identifying critical activities is essential for conducting PERT simulation. This paper discusses the derivation of a PERT simulation model that incorporates the discrete-event modeling approach and a simplified critical-activity identification method, in an attempt to overcome the limitations and enhance the computing efficiency of classic CPM/PERT analysis. A case study was conducted to validate the developed model and compare it to classic CPM/PERT analysis. The developed model showed marked enhancement in analyzing the risk of project schedule overrun and in determining activity criticality. In addition, the beta distribution and its subjective fitting methods are discussed to complement the PERT simulation model. This new solution to CPM network analysis can provide project management with a convenient tool to assess alternative scenarios based on computer simulation and risk analysis.
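A compact Monte Carlo sketch of PERT simulation as described: beta-distributed activity durations from the classic subjective fit, a CPM forward and backward pass per run, and estimates of the overrun probability and activity criticality indices. The four-activity network and the deadline are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

# Activity -> (optimistic a, most likely m, pessimistic b) and predecessors.
acts = {"A": (2, 4, 8), "B": (3, 5, 9), "C": (1, 2, 4), "D": (4, 6, 10)}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]                    # topological order
succs = {k: [s for s in order if k in preds[s]] for k in order}
deadline, n_runs = 16.0, 20_000

def sample_beta_pert(a, m, b):
    # Subjective beta fit used in PERT, with mean (a + 4m + b) / 6.
    mu = (a + 4 * m + b) / 6
    al = 1 + 4 * (mu - a) / (b - a)
    be = 1 + 4 * (b - mu) / (b - a)
    return a + (b - a) * rng.beta(al, be)

overruns = 0
crit = dict.fromkeys(order, 0)
for _ in range(n_runs):
    d = {k: sample_beta_pert(*acts[k]) for k in order}
    ef = {}                                      # forward pass: early finish
    for k in order:
        ef[k] = max((ef[p] for p in preds[k]), default=0.0) + d[k]
    T = max(ef.values())
    overruns += T > deadline
    lf = {}                                      # backward pass: late finish
    for k in reversed(order):
        lf[k] = min((lf[s] - d[s] for s in succs[k]), default=T)
    for k in order:                              # zero total float => critical
        crit[k] += abs(lf[k] - ef[k]) < 1e-9

print(f"P(project duration > {deadline}) = {overruns / n_runs:.3f}")
print("criticality indices:", {k: round(v / n_runs, 3) for k, v in crit.items()})
```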

19.
Metro systems usually offer an attractive mass transit alternative in most large cities. Such infrastructure requires proper maintenance and rehabilitation (M&R) programs to keep it within an acceptable level of operational and safety performance. The Markov decision process (MDP) has been widely used to find the optimal M&R decision policy in situations that involve uncertainty. A drawback of the traditional MDP approach is that it uses a discrete number of states in the analysis as well as a stationary transition probability matrix (TPM). The MDP is also based on the Markovian or “memory-less” property, which does not necessarily hold for all aging infrastructure. This research presents a case study on a deteriorating slab in the Montreal metro. The traditional MDP is employed with linear programming to determine the optimal rehabilitation profile. Three different methods are employed for calculating life-cycle cost: (1) the average expected discounted cost per time period that is normally used with the traditional MDP; (2) a continuous rating approach; and (3) a dynamic, or time-dependent, TPM. Results revealed that the continuous rating approach provides lower values than the traditional approach, and that the dynamic TPM better reflects infrastructure behavior but necessitates additional data gathering. This research mainly benefits metro management agencies and enhances MDP practice by overcoming some downsides of the traditional methodology.
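The traditional MDP-with-linear-programming step can be sketched with scipy: maximize the sum of state values subject to V(s) <= c(s,a) + gamma * sum_s' P(s'|s,a) V(s'), whose solution gives the optimal discounted life-cycle cost and policy. The three condition states, costs, and transition matrices below are hypothetical, not the Montreal case-study data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy slab-condition MDP: states 0 (good), 1 (fair), 2 (poor);
# actions 0 = do nothing, 1 = rehabilitate. All numbers hypothetical.
gamma = 0.95
cost = np.array([[0.0,  8.0],     # per-period cost c(s, a)
                 [2.0,  8.0],
                 [10.0, 12.0]])
P = np.array([                    # P[a][s, s'] transition matrices
    [[0.8, 0.2, 0.0], [0.0, 0.7, 0.3], [0.0, 0.0, 1.0]],   # do nothing
    [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.7, 0.2, 0.1]],   # rehabilitate
])
nS, nA = cost.shape

# LP for the discounted-cost MDP: maximize sum(V) subject to
#   V(s) <= c(s, a) + gamma * sum_s' P(s'|s, a) V(s')   for every (s, a).
A_ub, b_ub = [], []
for s in range(nS):
    for a in range(nA):
        row = -gamma * P[a][s]
        row[s] += 1.0
        A_ub.append(row)
        b_ub.append(cost[s, a])
res = linprog(c=-np.ones(nS), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * nS)

V = res.x                          # optimal expected discounted costs
Q = cost + gamma * np.array([P[a] @ V for a in range(nA)]).T
print("optimal values:", np.round(V, 2), "policy:", Q.argmin(axis=1))
```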

20.
Pattern-mixture models stratify incomplete data by the pattern of missing values and formulate distinct models within each stratum. Pattern-mixture models are developed for analyzing a random sample on continuous variables y1, y2 when values of y2 are nonrandomly missing. Methods for scalar y1 and y2 are generalized here to vector y1 and y2 with additional fixed covariates x. Parameters in these models are identified by alternative assumptions about the missing-data mechanism. Models may be underidentified (in which case additional assumptions are needed), just-identified, or overidentified. Maximum likelihood and Bayesian methods are developed for the latter two situations, using the EM and SEM algorithms and direct and iterative simulation methods. The methods are illustrated on a data set involving alternative dosage regimens for the treatment of schizophrenia using haloperidol and on a regression example. Sensitivity to alternative assumptions about the missing-data mechanism is assessed, and the new methods are compared with complete-case analysis and maximum likelihood for a probit selection model.
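The basic pattern-mixture computation, stratify by missingness pattern, model each stratum, and mix over the observed pattern proportions, in a toy form. The identifying restriction used here (borrowing the y2-given-y1 regression from the complete pattern) is one simple choice, not the paper's full set of alternative assumptions; all data are simulated.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(9)

# Toy bivariate sample; y2 is missing with probability increasing in y1,
# so complete cases under-represent high y1 (and hence high y2).
n = 2_000
y1 = rng.normal(0, 1, n)
y2 = 0.5 * y1 + rng.normal(0, 1, n)
missing = rng.random(n) < 1 / (1 + np.exp(-y1))
df = pd.DataFrame({"y1": y1, "y2": np.where(missing, np.nan, y2),
                   "pattern": np.where(missing, "y2 missing", "complete")})

# Pattern-mixture step: a distinct (here, trivial) model per pattern,
# mixed over the observed pattern proportions.
stats = df.groupby("pattern").agg(n=("y1", "size"),
                                  y1_mean=("y1", "mean"),
                                  y2_mean=("y2", "mean"))
w = stats["n"] / n

# Identifying restriction: borrow the y2|y1 regression from the complete
# pattern to fill in the incomplete pattern's y2 mean.
comp = df[df.pattern == "complete"]
slope, intercept = np.polyfit(comp.y1, comp.y2, 1)
y2_miss = intercept + slope * df.loc[df.pattern == "y2 missing", "y1"].mean()
mix = (w["complete"] * stats.loc["complete", "y2_mean"]
       + w["y2 missing"] * y2_miss)
print(f"complete-case E[y2]: {stats.loc['complete', 'y2_mean']:.3f}, "
      f"pattern-mixture E[y2]: {mix:.3f} (truth 0)")
```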
