20 similar documents found.
1.
In this paper we present the results of a simulation study to explore the ability of Bayesian parametric and nonparametric models to provide an adequate fit to count data of the type that would routinely be analyzed parametrically either through fixed-effects or random-effects Poisson models. The context of the study is a randomized controlled trial with two groups (treatment and control). Our nonparametric approach uses several modeling formulations based on Dirichlet process priors. We find that the nonparametric models are able to flexibly adapt to the data, to offer rich posterior inference, and to provide, in a variety of settings, more accurate predictive inference than parametric models.
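As a rough illustration of the kind of nonparametric formulation the abstract describes, the sketch below draws counts for a two-arm trial from a Dirichlet process mixture of Poisson rates, approximated by truncated stick-breaking; the concentration parameter, Gamma base measure, and truncation level are illustrative assumptions, not the authors' actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_mixture_poisson_counts(n, alpha=1.0, base_shape=2.0, base_rate=0.5,
                              truncation=50, rng=rng):
    """Draw n counts from a DP mixture of Poisson distributions.

    The DP prior over Poisson rates is approximated by truncated
    stick-breaking; the Gamma base measure and alpha are illustrative.
    """
    # Stick-breaking weights: v_k ~ Beta(1, alpha), w_k = v_k * prod_{j<k}(1 - v_j)
    v = rng.beta(1.0, alpha, size=truncation)
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    w /= w.sum()                      # renormalise the truncated weights
    # Atom locations: Poisson rates drawn from the Gamma base measure
    rates = rng.gamma(base_shape, 1.0 / base_rate, size=truncation)
    # Each subject picks a mixture component, then a Poisson count
    components = rng.choice(truncation, size=n, p=w)
    return rng.poisson(rates[components])

# Hypothetical two-arm trial: control vs. treatment counts
control = dp_mixture_poisson_counts(100)
treatment = dp_mixture_poisson_counts(100, base_rate=1.0)  # lower mean rate
print(control.mean(), treatment.mean())
```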
2.
A case study comparison is made of four declarative programming languages: Miranda (a trademark of Research Software Ltd.), Fp, Equations and Prolog. The case study is used as a vehicle to assess both expressiveness and performance aspects of these languages. In expressive power, Miranda is judged to be superior, Equations and Prolog are rated about even in second place, and Fp is judged least expressive. Performance was quite poor for all the available implementations. Reasons for this are explored and the implications are discussed.
3.
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be ineffective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify the parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LP τ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum number of samples needed is 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
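The Morris One-At-a-Time screening singled out above as the most efficient method can be sketched in a few lines. The self-contained version below builds random trajectories on a p-level grid and reports the usual mu* and sigma statistics; the test function and settings are placeholders, not the SAC-SMA model or PSUADE's implementation.

```python
import numpy as np

def morris_screening(func, n_params, n_trajectories=40, p_levels=4, rng=None):
    """Morris One-At-a-Time elementary effects on the unit hypercube.

    Returns mu* (mean absolute elementary effect) and sigma per parameter.
    """
    rng = np.random.default_rng(rng)
    delta = p_levels / (2.0 * (p_levels - 1))
    grid = np.arange(p_levels // 2) / (p_levels - 1)   # start levels that allow +delta
    effects = [[] for _ in range(n_params)]
    for _ in range(n_trajectories):
        x = rng.choice(grid, size=n_params)            # random base point
        y = func(x)
        for i in rng.permutation(n_params):            # perturb factors one at a time
            x_new = x.copy()
            x_new[i] += delta
            y_new = func(x_new)
            effects[i].append((y_new - y) / delta)
            x, y = x_new, y_new
    mu_star = np.array([np.mean(np.abs(e)) for e in effects])
    sigma = np.array([np.std(e) for e in effects])
    return mu_star, sigma

# Toy model: only the first three of six parameters matter
toy = lambda x: 4 * x[0] + 2 * x[1] ** 2 + x[2] - 0.01 * x[5]
mu_star, sigma = morris_screening(toy, n_params=6)
print(np.round(mu_star, 3))
```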
4.
An experiment with the cover set induction method in RRL is presented through a mechanical proof of Ramsey's theorem in graph theory. The proof is similar to the proof obtained by Kaufmann using the Boyer-Moore theorem prover. We show that this similarity is not unusual, because there is a close relationship between the Boyer-Moore logic and the algebraic specification of abstract data types on which the cover set induction method is based. (This implies that many proofs done by the Boyer-Moore theorem prover can be reproduced by RRL.) Our experiment shows that RRL can automatically prove all the lemmas in Ramsey's theorem, while the Boyer-Moore theorem prover needs several hints from the user and takes much longer (CPU time) to finish. Partially supported by National Science Foundation Grants Nos. CCR-9202838 and INT-9016100.
5.
This work examines the sensitivity of the different channels of the HSB (Humidity Sensor for Brazil), on board the AQUA satellite, for the purpose of retrieving surface rainfall over land. The analysis is carried out in two steps: (a) a theoretical study performed using two radiative transfer models, RTTOV and the so-called Eddington method; and (b) the determination of the correlation between coincident measurements of HSB brightness temperatures and radar rainfall estimates during the DRY-TO-WET/AMC/LBA field campaign held in the Amazon region during September and October 2002. Theoretical results indicate the sensitivity of the HSB to water vapour content and cloud liquid water in the precipitation estimation. Theoretical and experimental analyses show that the 150 and 183±7 GHz channels are better suited to estimating precipitation than the 183±1 and 183±3 GHz channels. The simulation analyses clearly show a hierarchy of physical effects that determine the brightness temperature of these channels: scattering by rain and ice dominates over absorption by liquid water, and liquid water absorption in turn dominates over water vapour absorption. The results show that the 150 and 183±7 GHz channels are more sensitive to variations in liquid water and ice than the 183±1 and 183±3 GHz channels. Precipitation estimation using these channels is best suited to low precipitation rates, since the brightness temperature saturates rapidly in intense precipitation. A case study to estimate precipitation using the radar data has shown that it is possible to fit a curve relating the precipitation rate to the 150 GHz brightness temperature with good accuracy at low precipitation rates.
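A fit of the kind mentioned for the 150 GHz channel could be set up as below with SciPy; the exponential form, coefficients, and synthetic data are assumptions made for illustration, not the relationship actually derived from the LBA radar data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed functional form: rain rate decreases as the scattering-depressed
# 150 GHz brightness temperature increases (illustrative, not the paper's fit).
def rain_rate(tb_150, a, b, tb_ref):
    return a * np.exp(-b * (tb_150 - tb_ref))

# Synthetic "radar vs. HSB" pairs standing in for the field-campaign data
tb = np.linspace(230.0, 280.0, 40)                      # brightness temperature [K]
rr_true = rain_rate(tb, a=8.0, b=0.05, tb_ref=240.0)
rr_obs = rr_true + np.random.default_rng(1).normal(0, 0.3, tb.size)

params, cov = curve_fit(rain_rate, tb, rr_obs, p0=(5.0, 0.03, 250.0))
print("fitted a, b, tb_ref:", np.round(params, 3))
```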
6.
Numerical characteristics of various Kalman filter algorithms are illustrated with a realistic orbit determination study. The case study of this paper highlights the numerical deficiencies of the conventional and stabilized Kalman algorithms. Computational errors associated with these algorithms are found to be so large as to obscure important mismodeling effects and thus cause misleading estimates of filter accuracy. The positive result of this study is that the U-D covariance factorization algorithm has excellent numerical properties and is computationally efficient, having CPU costs that differ negligibly from the conventional Kalman costs. Accuracies of the U-D filter using single precision arithmetic consistently match the double precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
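The U-D covariance factorization referred to here writes P = U D Uᵀ with U unit upper triangular and D diagonal, so the filter propagates U and D instead of P itself. Below is a minimal NumPy sketch of the factorization step only, not the full Bierman/Thornton measurement and time updates used in the filter.

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite P as U @ diag(d) @ U.T,
    with U unit upper triangular. Sketch of the decomposition only."""
    P = np.array(P, dtype=float)
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        # Diagonal entry, then the column above it
        d[j] = P[j, j] - np.sum(d[j + 1:] * U[j, j + 1:] ** 2)
        for i in range(j):
            U[i, j] = (P[i, j] - np.sum(d[j + 1:] * U[i, j + 1:] * U[j, j + 1:])) / d[j]
    return U, d

# Quick check on a random covariance matrix
A = np.random.default_rng(2).normal(size=(4, 4))
P = A @ A.T + 4 * np.eye(4)
U, d = ud_factorize(P)
print(np.allclose(U @ np.diag(d) @ U.T, P))   # expect True
```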
7.
In recent years, steering a quality-management system (QMS) has become a key strategic consideration in businesses. Indeed, companies constantly need to optimize their industrial tools to increase their productivity and to continually improve the effectiveness and efficiency of their systems. To solve such problems, two approaches were developed: the Pareto Analytical-Hierarchy Process (PAHP) and the Multichoice Goal Programming (MCGP) methods. The first integrates the Pareto concept and Analytical-Hierarchy Process (AHP) methods and the second combines the MCGP model with AHP methods. The goal was to determine the best solution while simultaneously verifying multiobjective-optimization functions and satisfying different constraints for a real-world case study. The latter was chosen because it presents a major problem for controlling the quality levels of production lines. A comparative study between the two approaches provides a path for designing a tool for decision support to ensure the effectiveness of a corporate QMS.
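To illustrate the AHP building block shared by both approaches, the sketch below derives priority weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio; the comparison matrix itself is a made-up example, not the case study's data.

```python
import numpy as np

# Saaty's random consistency indices for matrices of size 1..8
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix
    (principal right eigenvector), plus the consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)          # consistency index
    cr = ci / RANDOM_INDEX[n]             # consistency ratio (want < 0.1)
    return w, cr

# Hypothetical comparison of three quality criteria (reciprocal matrix)
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print(np.round(w, 3), "CR =", round(cr, 3))
```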
8.
This paper is the case study of a compiler for a very large language implemented for a minicomputer. Although the language is special purpose and perhaps not of general interest, the techniques used and the performance obtained make the compiler itself somewhat interesting.
10.
The focus of this paper is the methodology for testing ellipsoidal symmetry, which was recently proposed by Koltchinskii and Sakhanenko [Koltchinskii, V., Sakhanenko, L. 2000. Testing for ellipsoidal symmetry of a multivariate distribution. In: Giné, E., Mason, D., Wellner, J. (Eds.), High Dimensional Probability II. In: Progress in Probability, Birkhäuser, Boston, pp. 493-510]. It is a class of omnibus bootstrap tests that are affine invariant and consistent against any fixed alternative. First, we study their behavior under a sequence of local alternatives. Secondly, a finite sample comparison study of this new class of tests with other popular methods given by Beran, Manzotti et al., and Huffer et al. is carried out. We find that the new tests outperform other methods in preserving the level and have superior power for most of the chosen alternatives. We also suggest a tool for identifying periods of financial instability and crises when these tests are applied to the distribution of the return rates of stock market indices. These tests can be used in place of tests for normality of asset return distributions since ellipsoidally symmetric distributions are the natural extensions of multivariate normal distributions, so that the capital asset pricing model holds.
11.
Software performance engineering (SPE) provides an approach to constructing systems to meet performance objectives. The authors illustrate the application of SPE to an example with some real-time properties and demonstrate how to compare performance characteristics of design alternatives. They show how SPE can be integrated with design methods and demonstrate that performance requirements can be achieved without sacrificing other desirable design qualities such as understandability, maintainability, and reusability.
12.
This paper seeks to test and determine a suitable aggregation method to represent a set of rankings made by individual decision makers (DMs). A case study of triage prioritization is used to test the aggregation methods. Triage is a decision-making process by which patients are prioritized according to their medical condition and chance of survival on arrival at the emergency department (ED). There is considerable subjective decision-making in the process, which leads to discrepancies among nurses. Four rank aggregation methods are applied to the prioritization data, and then an expert evaluates the results and judges them on practicality and acceptability. The proposed recommendation for preference aggregation is the method of the estimation of utility intervals. Expert opinion is highly valued in a decision-making environment such as this, where experience and intuition are key to successful job performance and outcomes.
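Of the rank aggregation methods tested, the simplest baseline to show is a Borda count; the sketch below aggregates hypothetical triage rankings this way. The paper's recommended method, the estimation of utility intervals, is more involved and is not reproduced here.

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate individual rankings (best-to-worst lists) with a Borda count.
    Each item scores (n_items - position); the highest total wins."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            scores[item] += n - position
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical prioritisations of five patients by three triage nurses
nurse_rankings = [
    ["P3", "P1", "P4", "P2", "P5"],
    ["P1", "P3", "P2", "P4", "P5"],
    ["P3", "P4", "P1", "P5", "P2"],
]
print(borda_aggregate(nurse_rankings))   # consensus order, best first
```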
13.
A model developed for radiative transfer in the atmosphere-sea system is applied to remotely-sensed radiances to evaluate its applicability in the determination of atmospheric turbidity. Data measured at different atmospheric turbidities over clear sea water by a multispectral scanner installed on an aeroplane have been used. Comparisons of aerosol optical thicknesses obtained from remotely-sensed data and from ground-based atmospheric transmittance measurements are given and discussed.
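For context, ground-based retrievals of aerosol optical thickness from transmittance measurements typically rest on the Beer-Lambert relation; this is the textbook relation, not necessarily the exact formulation used in the paper:

$$T(\lambda) = e^{-m\,\tau(\lambda)} \quad\Rightarrow\quad \tau_a(\lambda) = -\frac{\ln T(\lambda)}{m} - \tau_R(\lambda) - \tau_g(\lambda),$$

where $m$ is the relative air mass, $\tau$ the total optical thickness, and $\tau_R$, $\tau_g$ the Rayleigh and gaseous contributions removed to isolate the aerosol term $\tau_a$.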
14.
VME is a new high performance standard bus for multimicroprocessor systems. Its characteristics originate in the 68000 microprocessor's interface signals. Processors with other interface characteristics can, however, also be used in VME systems. The case study of the interfacing of a 6809-based subsystem to the VME bus is presented. A mixture of hardware and software has been used to implement the matching functions at low cost. The 6809-based subsystem can perform all VME-related functions: bus requester in multiprocessor environments, bus master, interrupt handler, bus slave and interrupter. The last two functions are performed by a dual-port memory included in the subsystem. The subsystem is designed to be used as the main processor in small VME systems or as an intelligent peripheral controller in larger VME systems.
15.
This paper presents lessons learned from an experiment to reverse engineer a program. A reverse engineering process was used as part of a project to develop an Ada implementation of a Fortran program and upgrade the existing documentation. To accomplish this, design information was extracted from the Fortran source code and entered into a software development environment. The extracted design information was used to implement a new version of the program written in Ada. This experiment revealed issues about recovering design information, such as separating design details from implementation details, dealing with incomplete or erroneous information, traceability of information between implementation and recovered design, and re-engineering. The reverse engineering process used to recover the design and the experience gained during the study are reported.
16.
In an attempt to improve the representation used for software design, this paper describes the representation used on an actual project. Improvement through the use of case studies such as this one requires explicit analysis; this is presented in the following paper, and the two should be read together.
17.
Based on our hands-on experience in developing a service-based problem-solving environment for bioinformatics research, we present in this paper a particular approach called VINCA4Science to business-oriented modeling of service functionalities, to Web services virtualization, and to user-centric utilization of modeled artifacts. Some practical effects regarding the improvement of domain stability, efficiency and scalability are illustrated.
18.
This paper describes work undertaken as part of a three-stranded project. The main deliverable will be a suite of software tools, classes and components, as well as teaching resources and methodologies to facilitate the production of intelligent multimedia tutoring systems. The three strands of this project explore differing aspects of multimedia, and while they stand as separate pieces of work in their own right, they also form the building blocks of the final deliverable, which is an intelligent tutoring system. Intermediate outcomes include a multimedia shell and the Manley Group Demonstrator. This paper focuses on the design and development of the "true multimedia" demonstrator that has now reached the stage of being a fairly mature prototype. It is used commercially by a communications company to illustrate the varying levels of sophistication which can be achieved in presentation systems and provides an exemplar demonstrating the varying degrees of functionality possible using current technology. The main requirement in the design and implementation has been usability both for end users and content developers. Real-time control of the multimedia devices is achieved using sophisticated software which is tightly integrated with dedicated control hardware. The software can be used to produce canned presentations or highly interactive systems. The system is installed in a dedicated presentation suite, thus allowing the true nature of multimedia to be demonstrated. It is scalable in that it can be used to produce stand-alone portable solutions, highly interactive touch-screen controlled applications or complete multi-device high end presentations. This is achieved using the appropriate combination of software tools and hardware for a specific application. Macromedia Director is used to produce visual and creative material for the system, while Macromedia Authorware can be used to provide interactivity.
19.
The authors' previous work (1986, 1987) utilized the particular structure of manipulator dynamics to develop a simple, globally convergent adaptive controller for manipulator trajectory control problems. After summarizing the basic algorithm, they demonstrate the approach on a high-speed two-degree-of-freedom semi-direct-drive robot. They show that the dynamic parameters of the manipulator, assumed to be initially unknown, can be estimated within the first half second of a typical run, and that accordingly, the manipulator trajectory can be precisely controlled. These experimental results demonstrate that the adaptive controller enjoys essentially the same level of robustness to unmodeled dynamics as a PD (proportional-derivative) controller, yet achieves much better tracking accuracy than either PD or computed-torque schemes. Its superior performance for high-speed operations, in the presence of parametric and nonparametric uncertainties, and its relative computational simplicity make it an attractive option both for addressing complex industrial tasks and for simplifying high-level programming of more standard operations.
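As a toy counterpart to the manipulator experiments described above, the sketch below runs a one-degree-of-freedom version of a Slotine-Li style adaptive tracking controller with unknown inertia and damping; the plant, gains, and trajectory are invented for illustration and are far simpler than the two-degree-of-freedom semi-direct-drive arm used in the paper.

```python
import numpy as np

# True (unknown to the controller) 1-DOF plant: m*qdd + b*qd = tau
m_true, b_true = 2.0, 0.5
lam, k, gamma = 10.0, 5.0, np.diag([3.0, 3.0])   # control and adaptation gains

dt, T = 1e-3, 3.0
q, qd = 0.0, 0.0
a_hat = np.zeros(2)                              # estimates of [m, b], start at zero

for ti in np.arange(0.0, T, dt):
    # Desired trajectory and its derivatives
    qdes, qdes_d, qdes_dd = np.sin(ti), np.cos(ti), -np.sin(ti)
    e, e_d = q - qdes, qd - qdes_d
    s = e_d + lam * e                            # composite tracking error
    qr_dd = qdes_dd - lam * e_d                  # reference acceleration
    Y = np.array([qr_dd, qd])                    # regressor: Y @ [m, b]
    tau = Y @ a_hat - k * s                      # control law
    a_hat = a_hat - gamma @ Y * s * dt           # adaptation law: a_hat_dot = -Gamma*Y'*s
    # Integrate the true plant (explicit Euler)
    qdd = (tau - b_true * qd) / m_true
    qd += qdd * dt
    q += qd * dt

print("estimated [m, b]:", np.round(a_hat, 2), " final tracking error:", round(abs(e), 4))
```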
20.
Three sampling designs — simple random, stratified random, and systematic sampling — are compared on the basis of precision of estimated loss of intact humid tropical forest area in the Brazilian Legal Amazon from 2000 to 2005. MODIS-derived deforestation is used to partition the study area into strata to intensify sampling within forest clearing hotspots. The precision of the estimator of deforestation area for each design is calculated from a population of wall-to-wall PRODES deforestation data available for the study area. Both systematic and stratified sampling yield smaller standard errors than simple random sampling, and the stratified design has smaller standard errors than the systematic design at each sample size evaluated. The results of this case study demonstrate the utility of a stratified design based on MODIS-derived deforestation data to improve precision of the estimated loss of intact forest area as estimated from sampling Landsat imagery.
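The design comparison can be mimicked on synthetic data: the sketch below builds a wall-to-wall "population" of per-block cleared areas, then contrasts the standard error of the mean under simple random sampling with the stratified estimator under proportional allocation, using made-up strata and numbers rather than the PRODES/MODIS data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic population of per-block cleared area (ha): a "hotspot" stratum with
# large, variable clearings and a background stratum with little clearing.
hotspot = rng.gamma(4.0, 50.0, size=2_000)
background = rng.gamma(0.5, 5.0, size=18_000)
population = np.concatenate([hotspot, background])
strata = [hotspot, background]

def se_simple_random(pop, n):
    """SE of the sample mean under simple random sampling without replacement."""
    N = pop.size
    return np.sqrt((1 - n / N) * pop.var(ddof=1) / n)

def se_stratified(strata, n_total):
    """SE of the stratified mean with proportional allocation."""
    N = sum(s.size for s in strata)
    se2 = 0.0
    for s in strata:
        w, n_h = s.size / N, max(2, round(n_total * s.size / N))
        se2 += w ** 2 * (1 - n_h / s.size) * s.var(ddof=1) / n_h
    return np.sqrt(se2)

n = 500
print("SRS SE:       ", round(se_simple_random(population, n), 3))
print("Stratified SE:", round(se_stratified(strata, n), 3))
```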