Similar Literature
20 similar records retrieved (search time: 31 ms)
1.
We discuss the program TWINAN90, which can perform several different types of analysis of twin data. TWINAN90 incorporates the ANOVA-based twin analyses from the TWINAN twin analysis program and also includes maximum likelihood estimation of parameters from three path models. Another feature of TWINAN90 is the optional output of a pedigree file which can be read by the quantitative genetics package FISHER. The diagnostic features of the program also make TWINAN90 useful for preliminary analyses prior to the use of more sophisticated modeling procedures available in packages such as LISREL and FISHER. An annotated printout from TWINAN90 is presented to illustrate the statistical analyses performed by the program.
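As a hedged illustration (not code from TWINAN90 itself): the ANOVA-based twin analyses mentioned above typically reduce to intraclass correlations for monozygotic and dizygotic pairs, from which a Falconer-style heritability estimate can be formed. The data below are simulated and the function is a minimal sketch.

```python
import numpy as np

def intraclass_corr(pairs):
    """ANOVA-based intraclass correlation for twin pairs (n_pairs x 2 array)."""
    pairs = np.asarray(pairs, dtype=float)
    n = pairs.shape[0]
    grand_mean = pairs.mean()
    pair_means = pairs.mean(axis=1)
    msb = 2.0 * np.sum((pair_means - grand_mean) ** 2) / (n - 1)   # between-pair mean square
    msw = np.sum((pairs - pair_means[:, None]) ** 2) / n           # within-pair mean square
    return (msb - msw) / (msb + msw)

rng = np.random.default_rng(0)
# Simulated phenotypes: MZ pairs more correlated than DZ pairs.
mz = rng.multivariate_normal([0, 0], [[1, .8], [.8, 1]], size=200)
dz = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], size=200)

r_mz, r_dz = intraclass_corr(mz), intraclass_corr(dz)
h2 = 2 * (r_mz - r_dz)   # Falconer's heritability estimate
print(f"r_MZ={r_mz:.2f}  r_DZ={r_dz:.2f}  h2={h2:.2f}")
```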

2.
Using data collected throughout a major project, the authors apply common statistical methods to quantitatively assess and evaluate improvements in a large contractor's software-maintenance process. The results show where improvements are needed, and examining the change in statistical results lets you quantitatively evaluate the effectiveness of those improvements. We selected a process-assessment methodology developed by J.E. Henry (1993) that follows Total Quality Management principles and is based on Watts Humphrey's Process Maturity Framework. It lets you use a process-modeling technique based on control-flow diagrams to define an organization's maintenance process. After collecting process and product data throughout the maintenance process, you analyze the data using parametric and nonparametric statistical techniques. The statistical results and the process model help you assess and guide improvements in the organization's maintenance process. The method uses common statistical tests to quantify relationships among maintenance activities and process and product characteristics. These relationships, in turn, tell you more about the maintenance process and how requirements changes affect the product.
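As a hedged sketch of the kind of nonparametric analysis described above (not the authors' actual test suite), the fragment below uses SciPy's Spearman rank correlation to relate a process characteristic to a product characteristic; the variable names and values are illustrative.

```python
import numpy as np
from scipy import stats

# Illustrative per-release maintenance data (hypothetical values).
requirements_changes = np.array([3, 7, 2, 11, 5, 9, 4, 14, 6, 8])
defects_reported     = np.array([5, 9, 3, 15, 6, 12, 4, 18, 8, 10])

# Nonparametric test: does rework scale with requirements churn?
rho, p_value = stats.spearmanr(requirements_changes, defects_reported)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```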

3.
Procedures for comparing and evaluating aspects of the user interface of statistical computer packages are described. These procedures are implemented in a study of three packages (SPSS, BMDP, and Minitab) by a class of 21 students with some statistical background. It was found that most participants exhibited consistent personal preferences among the packages. In selecting packages to solve specific problems, however, their choice was determined more by issues of good statistical practice than by personal preference for overall package features.

4.
With the advent of increasingly integrated, powerful and inexpensive digital electronics, relatively powerful computers have become available to the general public. Along with this technological boom there has been a concomitant increase in the availability of over-the-counter software packages which can be used by research scientists for program development. In the past, the development of computer programs for the collection of large amounts of time-based data was expensive and time consuming; however, the introduction of the current generation of 16-bit microcomputers and associated hardware and software packages has enabled investigators with only a rudimentary knowledge of computers and interfacing to begin to design programs. The schemes and algorithms described in this article, developed using BASICA on an IBM Personal Computer, can serve other investigators as models for the assembly of their own programs for the collection, manipulation and plotting of time-based data. The incorporation of inexpensive computer graphics hardware and software, which provided a simple solution to the problem of analysis and presentation of large amounts of data, is also discussed.

5.
When using statistical computer packages, we generally rely on the results they produce. We are aware that numerical approximations are made and trust that the best algorithms are chosen to perform them. Most manuals give instructions about the precision of calculations, and some report how missing values are handled. What we may be unaware of is that some packages can invent results when creating atomic formulas and compounding complex formulas out of atomic ones, which inflates sample sizes and can lead to incorrect statistical decisions. Two simple indicator variables, with missing values positioned so that the results should always be missing values, were tested as numerical, logical, and character variables by compounding them with the connectives 'and' (&) and 'or' (|) to form new indicator variables. The results show that one of three well-known packages does not handle missing values in a statistically correct way, and that all three build atomic formulas out of character variables by assigning the value false (0) to missing values, which can be considered a statistical error. The conclusion is that statisticians and users of statistics must be aware of how the packages they use handle logical operations on missing values, otherwise wrong statistical decisions can be made, and that programmers of statistical packages should correct their algorithms so that their packages do not invent non-existent values.
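A minimal sketch of the behaviour the abstract warns about, using pandas' nullable boolean dtype; this illustrates three-valued logic for missing values and is not one of the packages tested in the paper.

```python
import pandas as pd

# Two indicator variables with a missing value in each.
a = pd.Series([True, False, pd.NA], dtype="boolean")
b = pd.Series([pd.NA, pd.NA, True], dtype="boolean")

# Kleene (three-valued) logic: the result stays missing unless it is
# fully determined by the non-missing operand.
print(a & b)  # [<NA>, False, <NA>]
print(a | b)  # [True, <NA>, True]

# A package that silently coerced NA to False here would "invent" values
# and inflate the effective sample size of the derived indicator.
```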

6.
7.
The tremendous increase in recent years in the demand for, and availability of, computer software packages has led researchers to investigate better ways of evaluating them for selection. The selection of software packages is complicated because the criteria for evaluation depend on the system type. For example, decision support systems should be judged largely on the basis of subjective criteria, while transaction processing systems can be evaluated primarily on quantifiable factors. We discuss the use of a well-known model from site selection as a means to evaluate software packages according to subjective or objective factors. We illustrate the application of the model with a small example.
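The abstract does not reproduce the model itself; as a hedged sketch, a factor-rating (weighted scoring) scheme of the kind commonly used in site selection can be written as below, where the criteria, weights, and scores are invented for illustration.

```python
# Hypothetical weighted-scoring (factor rating) evaluation of candidate packages.
criteria = {            # weight of each evaluation criterion (sums to 1.0)
    "functionality":  0.35,
    "ease_of_use":    0.25,
    "vendor_support": 0.20,
    "cost":           0.20,
}

packages = {            # 1-10 scores per criterion (subjective or objective)
    "Package A": {"functionality": 8, "ease_of_use": 6, "vendor_support": 7, "cost": 5},
    "Package B": {"functionality": 6, "ease_of_use": 9, "vendor_support": 6, "cost": 8},
}

for name, scores in packages.items():
    total = sum(criteria[c] * scores[c] for c in criteria)
    print(f"{name}: weighted score = {total:.2f}")
```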

8.
Abstract: The use of diverse features in detecting variability of electroencephalogram (EEG) signals is presented. The classification accuracies of the modified mixture of experts (MME), which was trained on diverse features, were obtained. Eigenvector methods (Pisarenko, multiple signal classification (MUSIC), and minimum-norm) were selected to generate the power spectral density estimates. Features from the power spectral density estimates and the Lyapunov exponents of the EEG signals were computed, and statistical features were calculated to depict their distribution. The statistical features, which were used to obtain the diverse features of the EEG signals, were then input into the implemented neural network models for training and testing purposes. The present study demonstrates that the MME trained on the diverse features achieves high accuracy rates (total classification accuracy of the MME is 98.33%).
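As a hedged illustration of one of the eigenvector methods named above (not the authors' implementation), a MUSIC pseudospectrum can be estimated from the noise-subspace eigenvectors of the signal's autocorrelation matrix; the window length, model order, and test signal below are arbitrary choices.

```python
import numpy as np

def music_pseudospectrum(x, order=4, m=20, nfreq=512):
    """MUSIC power spectral density estimate of a 1-D signal x."""
    x = np.asarray(x, dtype=float)
    # Correlation matrix from m-sample lagged snapshots.
    snapshots = np.array([x[i:i + m] for i in range(len(x) - m)])
    r = snapshots.T @ snapshots / snapshots.shape[0]
    eigvals, eigvecs = np.linalg.eigh(r)           # eigenvalues in ascending order
    noise_subspace = eigvecs[:, : m - order]       # eigenvectors of the smallest eigenvalues
    freqs = np.linspace(0.0, 0.5, nfreq)
    steering = np.exp(-2j * np.pi * np.outer(np.arange(m), freqs))
    denom = np.sum(np.abs(noise_subspace.conj().T @ steering) ** 2, axis=0)
    return freqs, 1.0 / denom

# Synthetic test signal: two sinusoids plus noise.
rng = np.random.default_rng(1)
n = np.arange(1024)
x = np.sin(2 * np.pi * 0.1 * n) + 0.5 * np.sin(2 * np.pi * 0.25 * n) + 0.3 * rng.standard_normal(1024)

f, p = music_pseudospectrum(x)
# Simple statistical features of the spectrum, as inputs to a classifier.
features = [p.max(), p.mean(), p.std(), f[p.argmax()]]
print(features)
```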

9.
Environment for statistical computing
This paper is a short exposition on the current state of the art in statistical software. The main aims are to examine current tendencies in information technologies for statistics and data analysis, and in particular to describe selected programs and systems. We start with statistical packages, i.e. suites of computer programs that are specialized in statistical analysis, that enable people to obtain the results of standard statistical procedures without requiring low-level numerical programming, and that provide data-management facilities. A big surprise for many statisticians is that the most typical representative in this domain is Microsoft Excel. Aside from that, we touch upon a few commercial packages, a few general public license packages, and a few analysis packages with statistics add-ons. An integrated environment for statistical computing and graphics is essential for developing and understanding new techniques in statistics. Such an environment must essentially be a programming language. Therefore, we take a closer look at several typical representatives of these types of programmes, and at a few general-purpose languages with statistics libraries. However, there exists quite a clear distinction between practical and theoretical approaches to most statistical work. The majority of software products for statistics are on the practical side, using numerical and graphical methods to provide the user access to existing methods. On the other hand, software packages specifically designed just for pure statistical-mathematical modelling do not exist. Nevertheless, all available computer algebra and/or mathematical systems offer tools for theoretical statistical work. Therefore, we take a look at some possibilities in this area. Finally, we summarize several major driving forces that will, according to our strong belief, influence the statistical software development process in the near future. Due to limited space, these discussions are for the most part cursory in nature. This paper is based on the personal experience of the author, as described in a series of papers on statistical software and environments for statistical computing (in Czech, for the Czech Statistical Society Newsletter and other publications) [1], and on information available on the Internet. Particularly good and interesting sources of information are the Google search engine [12], Wikipedia [25], and the journal Scientific Computing World [22].

10.
11.
Out-of-core Data Management for Path Tracing on Hybrid Resources
We present a software system that enables path-traced rendering of complex scenes. The system consists of two primary components: an application layer that implements the basic rendering algorithm, and an out-of-core scheduling and data-management layer designed to assist the application layer in exploiting hybrid computational resources (e.g., CPUs and GPUs) simultaneously. We describe the basic system architecture, discuss design decisions of the system's data-management layer, and outline an efficient implementation of a path tracer application in which GPUs perform functions such as ray tracing, shadow tracing, importance-driven light sampling, and surface shading. The use of GPUs speeds up the runtime of these components by factors ranging from two to twenty, resulting in a substantial overall increase in rendering speed. The path tracer scales well with the number of CPUs and GPUs and the memory per node, as well as with the number of nodes. The result is a system that can render large, complex scenes with strong performance and scalability.

12.
In an iterative design process, a large amount of engineering data needs to be processed. Owing to the limitations of traditional software, the engineering data cannot be handled simultaneously and are usually divided into geometric and non-geometric data in order to be managed by separate systems. In the spring industry, which requires the repeated definition of complicated shapes, design engineers need special interfaces for efficient product design and drafting. In this paper, a CAD-integrated engineering-data-management system is developed and implemented for spring design in order to simplify the drafting and data-management processes. This research focuses on three main issues that can also be applied to other applications, particularly component designs: (1) product model definition, (2) CAD-database communication, and (3) human-machine interface development. With the definition of the product model, the system identifies which data should be accessed from data files to generate the proper drawings, and which database structure should be constructed for the application domain. Through CAD-database communication, when engineers modify the geometric or non-geometric parameters of a product design, these parametric values are simultaneously updated in the database. Furthermore, the human-machine interface enhances efficiency in routine engineering-data-management and design/redesign tasks.

13.
This paper considers the boundary between classical statistical packages on the one hand and programming languages on the other. The former are easy to use but relatively inflexible, while the latter are very flexible but have steep learning curves. A new class of software is now being developed that allows both flexibility and ease of use. This concept is illustrated by the program GAUSSX, which provides an econometric shell for GAUSS.

14.
Various receptor methodologies have been developed in recent decades to investigate the geographical origins of atmospheric pollution, based either on wind data or on back-trajectory analyses. To date, only a few software packages exist that make use of one or the other approach. We present here ZeFir, an Igor-based package specifically designed to achieve a comprehensive geographical-origin analysis using a single statistical tool. ZeFir puts the emphasis on a user-friendly experience in order to facilitate and speed up working time. Key parameters can be easily controlled, and unique innovative features bring geographical-origin work to another level.
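As a hedged sketch of a wind-based receptor technique of the kind such packages automate (not ZeFir's own code, which is an Igor Pro package), a conditional probability function (CPF) gives, per wind sector, the probability that the measured concentration exceeds a chosen percentile; the data below are simulated.

```python
import numpy as np

def conditional_probability_function(wind_dir, conc, n_sectors=16, percentile=75):
    """CPF: fraction of samples per wind sector whose concentration exceeds
    the given percentile of the whole series."""
    threshold = np.percentile(conc, percentile)
    edges = np.linspace(0, 360, n_sectors + 1)
    sector = np.digitize(wind_dir % 360, edges) - 1
    cpf = np.full(n_sectors, np.nan)
    for s in range(n_sectors):
        in_sector = sector == s
        if in_sector.sum() > 0:
            cpf[s] = np.mean(conc[in_sector] > threshold)
    return edges[:-1], cpf

# Simulated data: a pollution source roughly to the north-east (~45 degrees).
rng = np.random.default_rng(2)
wd = rng.uniform(0, 360, 2000)
conc = rng.lognormal(mean=0.0, sigma=0.5, size=2000) + 2.0 * np.exp(-((wd - 45) / 30) ** 2)

sectors, cpf = conditional_probability_function(wd, conc)
print(dict(zip(sectors.astype(int), np.round(cpf, 2))))
```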

15.
The recent availability of spreadsheets has provided new opportunities for pupil and student use of microcomputers. Whilst much has been written on the use of database and word-processing packages, the educational uses of spreadsheets are relatively undocumented. In this paper, reference is made to exploratory work aimed at providing opportunities for additional experience in problem solving.

16.
A further investigation of our intelligent machine vision system for pattern recognition and texture image classification is discussed in this paper. A data set of 335 texture images is to be classified into several classes based on their texture similarities, while no a priori human vision expert knowledge about the classes is available. Hence, unsupervised learning and self-organizing map (SOM) neural networks are used to solve the classification problem. Nevertheless, in some of the experiments, a supervised texture analysis method is also considered for comparison purposes. Four major experiments are conducted: in the first, classifiers are trained using all the extracted features without any statistical preprocessing; in the second, the available features are normalized before being fed to a classifier; in the third, the trained classifiers use linear transformations of the original features, obtained after preprocessing with principal component analysis; and in the last, transforms of the features obtained after applying linear discriminant analysis are used. During the simulation, each test is performed 50 times using the proposed algorithm. Results from the unsupervised learning, after training, testing, and validation of the SOMs, are analyzed and critically compared with results from other authors.
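As a hedged sketch of the unsupervised pipeline described above (not the authors' system), the fragment below standardizes hypothetical texture features, projects them with PCA, and trains a very small self-organizing map written directly in NumPy; the grid size, learning rate, and iteration count are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
features = rng.normal(size=(335, 24))            # placeholder texture features

# Experiment 2/3-style preprocessing: normalization, then PCA projection.
x = StandardScaler().fit_transform(features)
x = PCA(n_components=8).fit_transform(x)

# Minimal rectangular SOM trained with a shrinking neighbourhood.
grid_h, grid_w, n_iter = 5, 5, 3000
weights = rng.normal(size=(grid_h, grid_w, x.shape[1]))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

for t in range(n_iter):
    sample = x[rng.integers(len(x))]
    dist = np.linalg.norm(weights - sample, axis=2)
    bmu = np.unravel_index(dist.argmin(), dist.shape)            # best-matching unit
    sigma = 2.0 * np.exp(-t / n_iter)                            # neighbourhood radius
    lr = 0.5 * np.exp(-t / n_iter)                               # learning rate
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]  # neighbourhood kernel
    weights += lr * h * (sample - weights)

# Cluster label for each image = coordinates of its best-matching unit.
labels = [np.unravel_index(np.linalg.norm(weights - xi, axis=2).argmin(), (grid_h, grid_w)) for xi in x]
print(labels[:5])
```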

17.
Texture analysis has been used extensively in the computer-assisted interpretation of digital imagery. A popular texture feature extraction approach is the grey-level co-occurrence probability (GLCP) method. Most investigations consider the use of GLCP texture features for classification purposes only and do not address segmentation performance. Specifically, for segmentation, pixels located near texture boundaries have a tendency to be misclassified, so boundary preservation when using GLCP texture features for image segmentation is important. An advancement which exploits spatial relationships has been implemented; the generated features are referred to as weighted GLCP (WGLCP) texture features. In addition, an investigation into selecting suitable GLCP parameters for improved boundary preservation is presented. The tests show that WGLCP features provide improved boundary preservation and segmentation accuracy at a computational cost, and that the GLCP correlation statistic should not be used when segmenting images with high-contrast texture boundaries.
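As a hedged sketch (the paper describes grey-level co-occurrence probability features; scikit-image's closely related grey-level co-occurrence matrix utilities are used here for illustration, with arbitrary distances, angles, and a synthetic image):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in scikit-image < 0.19

rng = np.random.default_rng(4)
image = rng.integers(0, 32, size=(64, 64), dtype=np.uint8)   # synthetic 32-level texture patch

# Co-occurrence matrices for a one-pixel offset at four orientations.
glcm = graycomatrix(image, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=32, symmetric=True, normed=True)

# Classic Haralick-style statistics per (distance, angle) pair.
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).ravel())
```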

18.
The paper describes CAL packages developed for use in a degree course in civil engineering; all are used in interactive mode through a terminal with a keyboard and visual display unit. They relate to the subjects of hydraulics and structures, although the application of CAL is being extended more widely. Common features in their development are emphasized; each is described in outline and an indication is given of the ways in which they may be employed. The hydraulics packages are arranged both to explain methods of calculation and to permit rapid computation of otherwise lengthy examples. The first open-channel package covers uniform flow and, hence, the calculation of normal depths; the second covers non-uniform flow and surface profiles. The surge-shaft package deals with oscillations of level in a surge shaft on a pipeline supplying a turbine to which the flow is abruptly changed. The structural design packages cover the topic of steel beams. They emphasize principles and concepts and the construction of bending moment and shear force diagrams. Moreover, they are devised to give the student concentrated design experience, enabling him rapidly to establish what section is appropriate to meet the requirements of deflection, shear and bending for given lengths, loading and grade of steel. Both rolled and welded beams are covered. All programs are written in FORTRAN IV, with routines to provide for such matters as free-format input/output and data range-testing. Graphical displays are made possible by the use of a microprocessor-based graphics-option controller used in conjunction with a standard alphanumeric terminal and VDU.
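A hedged sketch of the kind of calculation the first open-channel package automates (normal depth from uniform-flow theory); the channel geometry, roughness, and solver choice below are illustrative, and the original packages were written in FORTRAN IV rather than Python.

```python
from scipy.optimize import brentq

def normal_depth(Q, b, n, S):
    """Normal depth of uniform flow in a rectangular channel (Manning's equation, SI units).

    Q: discharge (m^3/s), b: channel width (m), n: Manning roughness, S: bed slope.
    """
    def residual(y):
        area = b * y                       # flow area
        perimeter = b + 2 * y              # wetted perimeter
        radius = area / perimeter          # hydraulic radius
        return (1.0 / n) * area * radius ** (2.0 / 3.0) * S ** 0.5 - Q

    return brentq(residual, 1e-6, 100.0)   # bracket the root between very small and very large depths

# Example: 10 m^3/s in a 4 m wide concrete channel on a 0.1% slope.
print(f"normal depth = {normal_depth(Q=10.0, b=4.0, n=0.013, S=0.001):.3f} m")
```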

19.
This paper presents the design and development of an object-oriented framework for computational mechanics. The framework has been designed to address some of the major deficiencies in existing computational mechanics software packages, namely by (a) having a sound design using the state of the art in software engineering, and (b) providing model-manipulation features that are common to a large set of computational mechanics problems. The domain-specific features provided by the framework are a geometry subsystem specifically designed for computational mechanics, an interpreted Computational Mechanics Language (CML), a structure for the management of analysis projects, a comprehensive data model, model development, model query and analysis management. The domain-independent features provided by the framework are a drawing subsystem for data visualization, a database server, a quantity subsystem, a simple GUI and an online help server. It is demonstrated that the framework can be used to develop applications that can: (a) extend or modify important parts of the framework to suit their own needs; (b) use CML for rapid prototyping and for extending the functionality of the framework; (c) significantly ease the task of conducting parametric studies; (d) significantly ease the task of modeling evolutionary problems; (e) be easily interfaced with existing analysis programs; and (f) be used to carry out basic computational mechanics research. It is hoped that the framework will substantially ease the task of creating families of software applications that apply existing and upcoming theories of computational mechanics to solve both academic and real-world interdisciplinary simulation problems.

20.
The automated or semi-automated collection of data from patients has become more common because of the growing availability of computers to clinicians. Slack and his colleagues [1] demonstrated the value of this technique: the computer used could elicit medical histories which were at times fuller than those obtained during a clinical interview, and such computer interviews were found to be well tolerated by patients. These findings have subsequently been confirmed in psychiatric practice [2–7]. The use of automated interviews in psychiatry has been reviewed by Mizutani [8], who draws attention to the potential saving of clinicians' time if computer interviews are used for screening purposes, as originally noted by Coddington and King [3] and by Carr and Ghosh [6]. The widespread availability and increasing power of the microcomputer will encourage further exploration of this method. The clinical interview in child and adolescent psychiatric practice has been in a state of change. There is increasing use of family-centred interviewing techniques, which often have, as a central objective, the promotion of change at a relatively early stage in assessment. In comparison with orthodox techniques, which depend on full assessment prior to the introduction of therapy, these techniques may make it more difficult at the outset to establish clearly and thoroughly which features of the child are causing concern, and their significance. A possible solution to this problem is to precede the initial interview with a data-gathering phase which does not involve the clinician. This would be possible if a computer were used to elicit appropriate information before the initial interview with the family. If this method were acceptable to the family, it is possible that the more 'neutral'-seeming aspect of an automated questionnaire would augment the clinician's first contact with them in a useful manner.
