Similar Documents
20 similar documents found (search time: 31 ms)
1.
Nowadays, typical software and systems engineering projects in various industrial sectors (automotive, telecommunications, etc.) involve hundreds of developers using a considerable number of different tools. Thus, the data of a project as a whole is distributed over these tools. It is therefore necessary to make the relationships between the different tool data repositories visible and to keep them consistent with each other. This remains a nightmare due to the lack of domain-specifically adaptable tool and data integration solutions that support the maintenance of traceability links, semi-automatic consistency checking, and update propagation. Currently used solutions are usually hand-coded one-way transformations between pairs of tools. In this article we present a rule-based approach that allows the declarative specification of data integration rules. It is based on the formalism of triple graph grammars and uses directed graphs to represent MOF-compliant (meta)models. As a result, we give an answer to OMG's request for proposals for a MOF-compliant “queries, views, and transformation” (QVT) approach from the “model driven application development” (MDA) field.
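As a rough illustration of the triple-graph-grammar idea (source and target models connected by explicit correspondence nodes that carry traceability, consistency checking, and update propagation), here is a minimal Python sketch. All names here (Correspondence, forward_rule, the requirement-to-test-case mapping) are hypothetical; the paper's rules operate on MOF-compliant graph models, not Python dictionaries.

```python
class Correspondence:
    """A traceability link between a source and a target model element."""
    def __init__(self, src, tgt):
        self.src, self.tgt = src, tgt

def forward_rule(requirement):
    """Rule 'Requirement -> TestCase': create the target element plus link."""
    test_case = {"name": "test_" + requirement["id"]}
    return test_case, Correspondence(requirement, test_case)

def inconsistent(links):
    """Semi-automatic consistency check: links whose two ends diverged."""
    return [c for c in links if c.tgt["name"] != "test_" + c.src["id"]]

reqs = [{"id": "R1"}, {"id": "R2"}]
links = [forward_rule(r)[1] for r in reqs]

reqs[0]["id"] = "R1b"                        # a developer renames a requirement
for link in inconsistent(links):             # update propagation along the link
    link.tgt["name"] = "test_" + link.src["id"]
print([c.tgt["name"] for c in links])        # ['test_R1b', 'test_R2']
```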

2.
The aim of this work is to define computer-aided optimum operation and tool sequences to be used in a Generative Process Planning System developed for rotational parts. The software developed for this purpose has a modular structure. Cutting tools are selected automatically using machinability data, workpiece feature information, machine tool data, the workholding method, and the set-up number. An optimum tool sequence is characterised by a minimum number of tool changes and minimum tool travel time. Tool and operation sequences for minimum tool changes are optimised with a method based on “Rank Order Clustering”.
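The “Rank Order Clustering” step the abstract refers to is, in its general (King) form, an iterative sorting of a binary incidence matrix. The sketch below is a generic implementation, not the paper's software; it shows how operations that share tools get grouped into blocks.

```python
import numpy as np

def rank_order_clustering(m):
    """King's Rank Order Clustering: repeatedly sort rows, then columns,
    by their binary-number weights until the ordering is stable."""
    m = m.copy()
    rows, cols = np.arange(m.shape[0]), np.arange(m.shape[1])
    while True:
        r = np.argsort(-(m @ (2 ** np.arange(m.shape[1])[::-1])), kind="stable")
        m, rows = m[r], rows[r]
        c = np.argsort(-((2 ** np.arange(m.shape[0])[::-1]) @ m), kind="stable")
        m, cols = m[:, c], cols[c]
        if (r == np.arange(len(r))).all() and (c == np.arange(len(c))).all():
            return m, rows, cols

# Rows = operations, columns = tools; 1 means the operation needs the tool.
incidence = np.array([[1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1]])
blocked, row_order, col_order = rank_order_clustering(incidence)
print(blocked)   # block-diagonal grouping: operations sharing tools cluster
```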

3.
For the two-sample censored data problem, Pepe and Fleming [Pepe, M.S., Fleming, T.R., 1989. Weighted Kaplan-Meier statistics: A class of distance tests for censored survival data. Biometrics 45, 497-507] introduced the weighted Kaplan-Meier (WKM) statistics. From these statistics we define stochastic processes which can be approximated by zero-mean martingales. Conditional distributions of the processes, given the data, can be easily approximated through simulation techniques. Based on a comparison of these processes, we construct a supremum test to assess model adequacy. Monte Carlo simulations are conducted to evaluate the size and power properties of the proposed test and to compare them with those of the WKM and log-rank tests. The procedures are illustrated using real data.
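For flavor, here is a simplified sketch of a Pepe-Fleming-style distance statistic: Kaplan-Meier curves are computed for two censored samples and their difference is integrated. As a simplifying assumption it uses a unit weight ŵ(t) = 1 rather than the censoring-based weights of the actual WKM statistic, and the synthetic data are illustrative.

```python
import numpy as np
rng = np.random.default_rng(1)

def km_curve(time, event, grid):
    """Kaplan-Meier survival estimate evaluated at each grid point."""
    s = []
    for g in grid:
        surv = 1.0
        for t in np.unique(time[(event == 1) & (time <= g)]):
            surv *= 1.0 - ((time == t) & (event == 1)).sum() / (time >= t).sum()
        s.append(surv)
    return np.array(s)

def sample(n, rate):
    """Exponential survival times with independent exponential censoring."""
    x, c = rng.exponential(1 / rate, n), rng.exponential(2.0, n)
    return np.minimum(x, c), (x <= c).astype(int)

t1, e1 = sample(100, 1.0)
t2, e2 = sample(100, 1.3)
grid = np.linspace(0, 2, 200)
n1, n2 = len(t1), len(t2)
# Unit-weight version of the Pepe-Fleming distance statistic:
# WKM = sqrt(n1*n2/(n1+n2)) * integral of (S1_hat - S2_hat) dt.
wkm = np.sqrt(n1 * n2 / (n1 + n2)) * np.trapz(
    km_curve(t1, e1, grid) - km_curve(t2, e2, grid), grid)
print(round(wkm, 3))
```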

4.
The development of complex models can be greatly facilitated by the utilization of libraries of reusable model components. In this paper we describe an object-oriented module specification formalism (MSF) for implementing archivable modules in support of continuous spatial modeling. This declarative formalism provides the high level of abstraction necessary for maximum generality, provides enough detail to allow a dynamic simulation to be generated automatically, and avoids the “hard-coded” implementation of space-time dynamics that makes procedural specifications of limited usefulness for specifying archivable modules. A set of these MSF modules can be hierarchically linked to create a parsimonious model specification, or “parsi-model”. The parsi-model exists within the context of a modeling environment (an integrated set of software tools which provide the computer services necessary for simulation development and execution), which can offer simulation services that are not possible in a loosely-coupled “federated” environment, such as graphical module development and configuration, automatic differentiation of model equations, run-time visualization of the data and dynamics of any variable in the simulation, transparent distributed computing within each module, and fully configurable space-time representations. We believe this approach has great potential for bringing the power of modular model development into the collaborative simulation arena.
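To make the declarative idea concrete, here is a toy "module specification" in Python: state variables and rate expressions are declared as data, and a generic driver generates the simulation from the declaration automatically. The structure and names are hypothetical and far simpler than the MSF, which targets spatial models.

```python
# A declarative module: no hard-coded integration loop inside the module.
module = {
    "states": {"biomass": 1.0, "detritus": 0.0},
    "params": {"growth": 0.3, "death": 0.1},
    "rates": {
        "biomass":  lambda s, p: (p["growth"] - p["death"]) * s["biomass"],
        "detritus": lambda s, p: p["death"] * s["biomass"],
    },
}

def simulate(spec, dt=0.1, steps=50):
    """Generate a forward-Euler simulation directly from the declaration."""
    s, p = dict(spec["states"]), spec["params"]
    for _ in range(steps):
        deltas = {k: f(s, p) * dt for k, f in spec["rates"].items()}
        for k, d in deltas.items():
            s[k] += d
    return s

print(simulate(module))
```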

5.
Optimizing disk storage to support statistical analysis operations
Data stored in spreadsheets and relational database tables can be viewed as “worksheets” consisting of rows and columns, with rows corresponding to records. Correspondingly, the typical practice is to store the data on disk in row-major order. While this practice is reasonable in many cases, it is not necessarily the best when computation is dominated by column-based statistics. This short note discusses the performance tradeoffs between row-major and column-major storage of data in the context of statistical data analysis. A comparison of a software package utilizing column-major storage with one using row-major storage confirms our results.
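The tradeoff is easy to demonstrate. The sketch below stores the same matrix in C (row-major) and Fortran (column-major) order and times column-by-column statistics over each layout; the exact gap depends on the machine's cache, but column access over column-major storage reads contiguous memory.

```python
import numpy as np, timeit

# Same data, two physical layouts: C order keeps each row contiguous
# (row-major), Fortran order keeps each column contiguous (column-major).
rows, cols = 200_000, 50
row_major = np.ascontiguousarray(np.random.rand(rows, cols))
col_major = np.asfortranarray(row_major)

# Column-based statistics touch one column at a time: strided reads in the
# row-major layout, contiguous streaming reads in the column-major layout.
t_row = timeit.timeit(lambda: [row_major[:, j].mean() for j in range(cols)], number=5)
t_col = timeit.timeit(lambda: [col_major[:, j].mean() for j in range(cols)], number=5)
print(f"row-major: {t_row:.3f}s   column-major: {t_col:.3f}s")
```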

6.
Higher-order asymptotics is an active area of development in theoretical statistics. However, most existing work in higher-order asymptotics is directed at theoretical aspects. This paper attempts to incorporate higher-order inference procedures into S-plus, a widely used statistical software package. An algorithm is developed in the settings of generalized linear models and nonlinear regression models. The proposed algorithm generalizes the standard S-plus functions “glim” and “nls” in the sense that both first-order and higher-order p-values are provided, and it is straightforward to use.
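For a flavor of what a "higher-order p-value" adds over a first-order one, here is the standard Barndorff-Nielsen r* computation for a scalar canonical exponential family (an exponential-rate test), sketched in Python rather than S-plus. This is a generic textbook construction, not the paper's algorithm, and the data are simulated.

```python
import numpy as np
from scipy.stats import norm

# Small exponential sample; test H0: rate = 1 in the natural
# parameterisation eta = -rate, where the r* formula is simple.
rng = np.random.default_rng(0)
x = rng.exponential(scale=1 / 1.4, size=15)    # true rate 1.4
n, S = len(x), x.sum()
lam0 = 1.0
eta_hat, eta0 = -n / S, -lam0

def loglik(eta):
    return n * np.log(-eta) + eta * S

r = np.sign(eta_hat - eta0) * np.sqrt(2 * (loglik(eta_hat) - loglik(eta0)))
q = (eta_hat - eta0) * np.sqrt(n / eta_hat**2)   # Wald pivot; j(eta) = n/eta^2
r_star = r + np.log(q / r) / r                   # Barndorff-Nielsen r*
# One-sided p-values for H1: rate > lam0 (r near 0 would need special care).
print("first-order p :", norm.cdf(r))            # based on Phi(r)
print("higher-order p:", norm.cdf(r_star))       # based on Phi(r*)
```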

7.
Environmental (spatial) monitoring of different variables often involves left-censored observations falling below the minimum detection limit (MDL) of the instruments used to quantify them. Several methods for predicting the variables at new locations, given left-censored observations of a stationary spatial process, are compared. The methods use versions of kriging predictors, i.e. the best linear unbiased predictors minimizing the mean squared prediction error. A semi-naive method is proposed that iteratively determines imputed values at censored locations together with variogram estimation. It is compared with a computationally intensive method relying on Gaussian assumptions, as well as with two distribution-free methods that impute the MDL, or the MDL divided by two, at the locations with censored values. The predictive performance of the methods is compared in a simulation study for both Gaussian and non-Gaussian processes and discussed in relation to the complexity of the methods from a user’s perspective. The method relying on Gaussian assumptions performs, as expected, best not only for Gaussian processes but also for other processes with symmetric marginal distributions; some of the (semi-)naive methods also work well in these cases. For processes with skewed marginal distributions the (semi-)naive methods work better. The main differences in predictive performance arise for small true values; for large true values no difference between the methods is apparent.
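A sketch of the simplest distribution-free baseline compared in the paper: impute MDL/2 at censored locations, then apply ordinary kriging. The exponential covariance model and all constants are illustrative assumptions; the paper's semi-naive method additionally iterates imputation together with variogram estimation.

```python
import numpy as np

def ordinary_kriging(coords, values, new, cov):
    """Ordinary kriging: BLUP weights from the covariance system with a
    Lagrange multiplier enforcing that the weights sum to one."""
    n = len(values)
    K = np.ones((n + 1, n + 1)); K[n, n] = 0.0
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
    K[:n, :n] = cov(d)
    k = np.append(cov(np.linalg.norm(coords - new, axis=1)), 1.0)
    w = np.linalg.solve(K, k)
    return w[:n] @ values

cov = lambda h: np.exp(-h / 0.5)             # exponential covariance model
rng = np.random.default_rng(3)
coords = rng.random((30, 2))
z = rng.lognormal(mean=0.0, sigma=0.7, size=30)

MDL = 0.6
observed = np.where(z < MDL, MDL / 2, z)     # naive MDL/2 imputation
print(ordinary_kriging(coords, observed, np.array([0.5, 0.5]), cov))
```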

8.
This paper presents the findings of an investigation into the role and use of landscape visualization software for landscape and environmental planning in Germany. It examines the challenges and requirements of 3D visualization technology and its potential for application in landscape and environmental planning. Relevant literature and comparable surveys are reviewed in order to determine the current state of affairs, and the general and international relevance of the results is assessed. In 2000, a survey of user requirements for 3D landscape simulation software, including the demand for specific features, was conducted within the framework of a feasibility study for a visualization tool. As part of the Germany-wide survey, comprehensive questionnaires were sent to 1044 respondents from a pool of private landscape planning consultancies, freelance landscape architects, and public planning and environmental authorities. The survey showed that 3D landscape visualization has a positive image in Germany, among both user and non-user groups of visualization tools. Twenty-eight percent of private consultancies and freelance landscape architects, as well as 7% of public authorities, stated that they already used 3D simulation software. Those respondents who did not use 3D simulation software cited insufficient computer equipment, the planners' lack of technical expertise, and cost-related aspects as reasons for not yet having adopted the technology. “Ease of learning” and “interoperability” are deemed the most important features of 3D simulation software, whereas factors such as “high interactivity”, “representability of ecological processes” and “photo-realism” are, surprisingly, regarded as much less important. Users of 3D visualization software are particularly concerned about the insufficient representation of plants and habitats in simulations. Looking to the future, the vast majority of respondents (91%) expect increased benefits for landscape planning from 3D visualization software, are convinced of the advantages of the technology, and are eager to integrate 3D landscape visualizations into their working practices.

9.
This paper presents a method of assigning letter grades that is based on five preliminary grading routines and includes both absolute grading and relative, or “adaptive-level”, grading. The routines are programmed in microcomputer spreadsheet software. The teacher then chooses the final grade distribution either from the suggested routine distributions or by making an appropriate adjustment to them. It is hoped that, with the help of microcomputers, this method may be a first step toward a uniform grading process that can be standardized among different instructors and, for each instructor, over time.
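A toy version of the two ingredient routines: an absolute cutoff rule, and a relative ("adaptive-level") rule anchored to the class mean and standard deviation. The cutoff values below are illustrative assumptions, not the paper's five routines.

```python
from statistics import mean, stdev

def absolute_grade(score, cutoffs={"A": 90, "B": 80, "C": 70, "D": 60}):
    """Fixed cutoffs, independent of how the class performed."""
    return next((g for g, c in cutoffs.items() if score >= c), "F")

def adaptive_grade(score, scores):
    """Cutoffs re-anchored to the class mean and standard deviation."""
    m, s = mean(scores), stdev(scores)
    cutoffs = {"A": m + 1.5 * s, "B": m + 0.5 * s,
               "C": m - 0.5 * s, "D": m - 1.5 * s}
    return next((g for g, c in cutoffs.items() if score >= c), "F")

scores = [93, 85, 77, 71, 64, 58, 88, 69]
for x in scores:
    print(x, absolute_grade(x), adaptive_grade(x, scores))
```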

10.
Design by Contract is a software engineering practice that allows semantic information to be added to a class or interface to precisely specify the conditions required for its correct operation. The basic constructs of Design by Contract are method preconditions and postconditions, and class invariants. This paper presents a detailed design and implementation overview of jContractor, a freely available tool that allows programmers to write “contracts” as standard Java methods following an intuitive naming convention. Preconditions, postconditions, and invariants can be associated with, or inherited by, any class or interface. jContractor performs on-the-fly bytecode instrumentation to detect violations of the contract specification during a program's execution. jContractor's bytecode engineering technique allows it to specify and check contracts even when source code is not available. jContractor is a pure Java library providing a rich set of syntactic constructs for expressing contracts without extending the Java language or runtime environment. These constructs include support for predicate logic expressions and for referencing entry values of attributes and return values of methods. Fine-grained control over the level of monitoring is possible at runtime. Since contract methods are allowed to use unconstrained Java expressions, in addition to runtime verification they can perform additional runtime monitoring, logging, and testing.
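jContractor itself works by Java bytecode instrumentation, but the naming-convention idea can be sketched in a few lines of Python. This is a loose analogue, not jContractor's API, and the method-name suffixes only approximate its Java convention.

```python
import functools

def contracted(cls):
    """Wrap each method having '<name>_Precondition'/'<name>_Postcondition'
    companions so both contracts are checked around every call."""
    for name, fn in list(vars(cls).items()):
        pre = getattr(cls, name + "_Precondition", None)
        post = getattr(cls, name + "_Postcondition", None)
        if callable(fn) and (pre or post):
            def wrap(fn=fn, pre=pre, post=post):
                @functools.wraps(fn)
                def checked(self, *args):
                    assert pre is None or pre(self, *args), "precondition"
                    result = fn(self, *args)
                    assert post is None or post(self, *args, result), "postcondition"
                    return result
                return checked
            setattr(cls, name, wrap())
    return cls

@contracted
class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount
    def deposit_Precondition(self, amount):
        return amount > 0
    def deposit_Postcondition(self, amount, result):
        return self.balance >= amount

a = Account()
a.deposit(10)                    # passes both contracts
try:
    a.deposit(-5)                # violates the precondition
except AssertionError as e:
    print("contract violated:", e)
```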

11.
“Fuzzy Functions”, determined by the least squares estimation (LSE) technique, are proposed for the development of fuzzy system models. These “Fuzzy Functions with LSE” are proposed as alternative representation and reasoning schemas to fuzzy rule base approaches. They can be obtained and implemented more easily by those who lack in-depth knowledge of fuzzy theory: working knowledge of a fuzzy clustering algorithm such as FCM, or one of its variations, is sufficient to obtain membership values of input vectors. The membership values, together with the scalar input variables, are then used by the LSE technique to determine a “Fuzzy Function” for each cluster identified by FCM. These functions differ from “Fuzzy Rule Base” approaches as well as from “Fuzzy Regression” approaches. Various transformations of the membership values are included as new variables in addition to the original selected scalar input variables; at times, a logistic transformation of non-scalar original selected input variables may also be included as a new variable. A comparison of the “Fuzzy Functions-LSE” approach with the Ordinary Least Squares Estimation (OLSE) approach shows that “Fuzzy Functions-LSE” provides better results, on the order of 10% or more with respect to the RMSE measure, for both training and test data sets.
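A compact sketch of the pipeline the abstract describes: fuzzy c-means memberships are computed, each cluster gets a least-squares "fuzzy function" in which the membership value (and a transform of it) appears as an extra explanatory variable, and predictions blend the per-cluster functions by membership. The bare-bones FCM and the data are minimal stand-ins, not the paper's method in detail.

```python
import numpy as np
rng = np.random.default_rng(7)

def fcm(x, c=2, m=2.0, iters=50):
    """Bare-bones 1-D fuzzy c-means returning the membership matrix U (n x c)."""
    centers = rng.choice(x, c, replace=False)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)) *
                   np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
        centers = (U**m).T @ x / (U**m).sum(axis=0)
    return U

# Toy data: piecewise-linear response discovered via two fuzzy clusters.
x = rng.uniform(0, 10, 200)
y = np.where(x < 5, 2 * x, 20 - x) + rng.normal(0, 0.3, 200)

U = fcm(x)
design = lambda u: np.column_stack([np.ones_like(x), x, u, u**2])
models = [np.linalg.lstsq(design(U[:, j]), y, rcond=None)[0]
          for j in range(U.shape[1])]          # one "fuzzy function" per cluster

# Prediction: membership-weighted blend of the per-cluster fuzzy functions.
y_hat = sum(U[:, j] * (design(U[:, j]) @ models[j]) for j in range(U.shape[1]))
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```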

12.
Stochastic programming brings together models of optimum resource allocation and models of randomness to create a robust decision-making framework. The models of randomness, with their finite, discrete realisations, are called scenario generators. In this paper, we investigate the role of such a tool within the context of a combined information and decision support system. We explain how two well-developed modelling paradigms, decision models and simulation models, can be combined to create “business analytics” based on ex-ante decisions and ex-post evaluation. We also examine how these models can be integrated with data marts of analytic organisational data and decision data. Recent developments in on-line analytical processing (OLAP) tools and multidimensional data viewing are taken into consideration. Finally, we introduce illustrative examples of optimisation and simulation models and of results analysis to explain our multifaceted view of modelling. Our main objective in this paper is to explain to the information systems (IS) community how advanced models and their software realisations can be integrated with advanced IS and DSS tools.
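A miniature example of the ex-ante/ex-post pattern: a scenario generator produces discrete demand realisations, an order quantity is chosen ex ante to maximise expected profit over the scenarios, and the decision is then evaluated ex post under one realised scenario. All numbers are illustrative, and brute-force search stands in for a real optimiser.

```python
import numpy as np

rng = np.random.default_rng(11)
scenarios = rng.lognormal(mean=4.0, sigma=0.4, size=500)   # demand scenarios
prob = np.full(scenarios.size, 1.0 / scenarios.size)       # equal weights

price, cost, salvage = 10.0, 6.0, 2.0

def expected_profit(q):
    """Expectation of second-stage profit over the scenario set."""
    sold = np.minimum(q, scenarios)
    return prob @ (price * sold + salvage * (q - sold) - cost * q)

candidates = np.linspace(0, 150, 301)
q_star = candidates[np.argmax([expected_profit(q) for q in candidates])]
print("ex-ante optimal order:", q_star)

# Ex-post evaluation: profit realised under one particular scenario.
d = scenarios[0]
print("ex-post profit:", price * min(q_star, d)
      + salvage * max(q_star - d, 0) - cost * q_star)
```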

13.
Stochastic simulations are becoming increasingly important in numerous engineering applications. Solving the governing equations is complicated by high-dimensional spaces and the presence of randomness. In this paper we present libMoM, a software library for solving various types of stochastic differential equations (SDEs) and for estimating statistical distributions from moments. The library provides a suite of tools to solve various SDEs using the method of moments (MoM), as well as to estimate statistical distributions from the moments using moment-matching algorithms. For a large class of problems, MoM provides efficient solutions compared with other stochastic simulation techniques such as Monte Carlo (MC). In the physical sciences, the moments of the distribution are usually the primary quantities of interest. The library enables the solution of moment equations derived from a variety of SDEs, with closure using non-standard Gaussian quadrature. In engineering risk assessment and decision making, statistical distributions are required; the library implements tools for fitting the Generalized Lambda Distribution (GLD) to the given moments. The objectives of this paper are (1) to briefly outline the theory behind moment methods for solving SDEs and estimating statistical distributions; (2) to describe the organization of the software and its user interfaces; and (3) to discuss the use of standard software engineering tools for regression testing and to aid collaboration, distribution, and further development. A number of representative examples of the use of libMoM in various engineering applications are presented, and future areas of research are discussed.
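As a generic illustration of the method of moments for an SDE (not libMoM's API): for geometric Brownian motion the moment hierarchy closes exactly, so the moment ODEs can be solved in closed form and cross-checked against Monte Carlo.

```python
import numpy as np

# For dX = mu*X dt + sigma*X dW, Ito's formula gives a closed hierarchy:
#   d/dt E[X^k] = (k*mu + 0.5*k*(k-1)*sigma^2) * E[X^k].
mu, sigma, x0, T = 0.1, 0.3, 1.0, 1.0
ks = np.arange(1, 5)

moments = x0**ks * np.exp((ks * mu + 0.5 * ks * (ks - 1) * sigma**2) * T)

# Monte Carlo cross-check using the exact GBM solution at time T.
rng = np.random.default_rng(2)
paths = x0 * np.exp((mu - 0.5 * sigma**2) * T
                    + sigma * np.sqrt(T) * rng.standard_normal(200_000))
mc = np.array([np.mean(paths**k) for k in ks])

for k, m, e in zip(ks, moments, mc):
    print(f"E[X^{k}]: moment ODE {m:.4f}   Monte Carlo {e:.4f}")
```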

14.
Cyclostratigraphic studies of rhythmically bedded pelagic/hemipelagic carbonate sequences, which are important for understanding paleo-oceanographic conditions and paleoclimatic controls, can be facilitated by computer-assisted grayscale analysis. A simple yet flexible algorithm, “docore”, written for the MATLAB software package and its associated image processing toolbox, can be used to generate, from rapidly scanned images, detailed grayscale curves or time series that closely approximate carbonate time series. The potential of “docore”-derived grayscale data as a proxy for carbonate is demonstrated through comparative analyses of carbonate, organic carbon, and grayscale data from a core drilled within the Upper Cretaceous Demopolis Chalk of western Alabama.
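The core operation behind such a grayscale curve can be sketched in a few lines: average image intensity across the core's width to get one brightness value per depth increment. A synthetic "scanned core" stands in for a real image file; this shows the general idea only, not the "docore" algorithm itself.

```python
import numpy as np

depth_px, width_px = 400, 60
rng = np.random.default_rng(5)
cycles = 128 + 60 * np.sin(np.linspace(0, 8 * np.pi, depth_px))  # bedding rhythms
core_image = cycles[:, None] + rng.normal(0, 10, (depth_px, width_px))

grayscale_curve = core_image.mean(axis=1)   # one brightness value per row
# Lighter (carbonate-rich) vs darker (clay/organic-rich) beds appear as
# maxima and minima of this curve, giving a carbonate proxy time series.
print(grayscale_curve[:5].round(1))
```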

15.
The threat of cyber attacks motivates the need to monitor Internet traffic data for potentially abnormal behavior. Due to the enormous volumes of such data, statistical process monitoring tools, such as those traditionally used on data in the product manufacturing arena, are inadequate. “Exotic” data may indicate a potential attack; detecting such data requires a characterization of “typical” data. We devise some new graphical displays, including a “skyline plot”, that permit ready visual identification of unusual Internet traffic patterns in “streaming” data, and use appropriate statistical measures to help identify potential cyber attacks. These methods are illustrated on a moderate-sized data set (135,605 records) collected at George Mason University.
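As a generic stand-in (explicitly not the paper's "skyline plot"), the sketch below shows the kind of statistical process monitoring the abstract alludes to: a stream of per-interval traffic counts checked against an EWMA control chart. The baseline parameters, data, and injected burst are all made up.

```python
import numpy as np

rng = np.random.default_rng(8)
traffic = rng.poisson(100, 500).astype(float)   # per-interval record counts
traffic[350:360] += 80                          # injected burst, e.g. a scan

lam, ewma, flagged = 0.2, traffic[0], []
mu, sd = 100.0, 10.0                            # characterisation of "typical"
limit = mu + 3 * sd * np.sqrt(lam / (2 - lam))  # asymptotic EWMA control limit
for i, x in enumerate(traffic):
    ewma = lam * x + (1 - lam) * ewma           # exponentially weighted average
    if ewma > limit:
        flagged.append(i)
print("first flagged interval:", flagged[0] if flagged else None)
```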

16.
17.
Environmental modelling is done more and more by practising ecologists rather than by computer scientists or mathematicians. This is because a broad spectrum of development tools is available that allow graphical coding of complex models of dynamic systems and help to abstract from the mathematical issues of the modelled system and the related numerical problems of estimating solutions. In this contribution, we study how different modelling tools treat a test system, a highly non-linear predator-prey model, and how the numerical solutions vary. We show that solutions (a) differ across development tools even when the same numerical procedure is selected; (b) depend on undocumented implementation details; (c) vary even for the same tool across different versions; and (d) are generated without any notification of numerical problems even when these could be identified. We conclude that improved documentation of the numerical methods used in modelling software is essential to ensure that process-based models formulated in these modelling packages do not become “black box” models due to uncertainty in the integration methods.
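The paper's point in miniature: the same predator-prey model produces visibly different trajectories under different integration schemes. The sketch below integrates a Lotka-Volterra system with forward Euler and with classical RK4; parameter values and step size are illustrative, not the paper's test settings.

```python
import numpy as np

def f(u, a=1.0, b=0.5, c=0.5, d=2.0):
    """Lotka-Volterra right-hand side: prey x, predator y."""
    x, y = u
    return np.array([a * x - b * x * y, c * x * y - d * y])

def euler(u, dt):
    return u + dt * f(u)

def rk4(u, dt):
    k1 = f(u); k2 = f(u + dt / 2 * k1)
    k3 = f(u + dt / 2 * k2); k4 = f(u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

u_e = u_r = np.array([2.0, 1.0])
for _ in range(int(20 / 0.05)):          # integrate to t = 20 with dt = 0.05
    u_e, u_r = euler(u_e, 0.05), rk4(u_r, 0.05)
print("Euler:", u_e.round(3), "  RK4:", u_r.round(3))
# Euler drifts (the closed orbits spiral outward); RK4 stays near the cycle.
```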

18.
Designers of computer-based material are currently forced by the available design tools to express interactivity with concepts derived from the logical-mathematical paradigm of computer science. For designers without special training as programmers, this represents a barrier. The three psychological experiments presented here indicate that it is possible to express interactive behavior in a more direct fashion by letting designers compose software from interaction elements with built-in behavior. The resulting “kinaesthetic thinking” of the software designers shows similarities with visual and musical thinking. To support this style of design, it might be necessary to rebuild parts of today's software from scratch using simple interactive building blocks. As an illustration, a design tool based on a pixel-level agent architecture is presented.

19.
Agile software development (ASD) is an emerging approach in software engineering, initially advocated by a group of 17 software professionals who practice a set of “lightweight” methods and share a common set of values of software development. In this paper, we advance the state of the art of research in this area by conducting a survey-based ex-post-facto study to identify factors that, from the perspective of ASD practitioners, influence the success of projects adopting ASD practices. We describe the hypothetical success-factors framework we developed to address our research question, the hypotheses we conjectured, the research methodology, the data analysis techniques we used to validate the hypotheses, and the results we obtained from the data analysis. The study was conducted using an unprecedentedly large-scale survey of respondents who practice ASD and who had past experience practicing plan-driven software development. The study indicates that nine of the 14 hypothesized factors have a statistically significant relationship with “Success”. The important success factors found are: customer satisfaction, customer collaboration, customer commitment, decision time, corporate culture, control, personal characteristics, societal culture, and training and learning.

20.
Gordon, Franz, Paul. Journal of Systems and Software, 2009, 82(9): 1403-1418
The use of model checkers for automated software testing has received some attention in the literature: it is convenient because it allows fully automated generation of test suites for many different test objectives. On the other hand, model checkers were not originally meant to be used this way but for formal verification, so using model checkers for testing is sometimes perceived as a “hack”. Indeed, several drawbacks result from the use of model checkers for test case generation. If model checkers were designed or adapted to take into account the needs that arise from the application to software testing, this could lead to significant improvements in test suite quality and performance. In this paper we identify the drawbacks of current model checkers when used for testing, illustrate techniques to overcome these problems, and show how they could be integrated into the model checking process. In essence, the described techniques can be seen as a general road map for turning model checkers into general-purpose testing tools.
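The standard trick this line of work builds on can be shown in miniature: to generate a test that reaches some coverage goal, assert the "trap property" that the goal is unreachable, and let the model checker's counterexample serve as the test case. Below, a BFS reachability search stands in for a real model checker, and the little protocol model is invented for illustration.

```python
from collections import deque

# Tiny model of a connection protocol: (state, input) -> next state.
transitions = {
    ("idle", "open"): "opening", ("opening", "ack"): "ready",
    ("ready", "send"): "ready",  ("ready", "close"): "idle",
}

def counterexample(init, trap_state):
    """Shortest input sequence reaching trap_state, or None (property holds)."""
    queue, seen = deque([(init, [])]), {init}
    while queue:
        state, trace = queue.popleft()
        if state == trap_state:
            return trace                      # violation witness = test case
        for (s, inp), nxt in transitions.items():
            if s == state and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [inp]))
    return None

# Trap property "never reach state 'ready'" is violated; its witness is
# exactly the input sequence a tester needs to cover that state.
print(counterexample("idle", "ready"))        # ['open', 'ack']
```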
