Similar Documents
20 similar documents found.
1.
Workflow management systems are becoming a relevant support for a large class of business applications, and many workflow models as well as commercial products are currently available. While the large availability of tools facilitates the development and the fulfilment of customer requirements, workflow application development still requires methodological guidelines that drive the developers in the complex task of rapidly producing effective applications. In fact, it is necessary to identify and model the business processes, to design the interfaces towards existing cooperating systems, and to manage implementation aspects in an integrated way. This paper presents the WIRES methodology for developing workflow applications under a uniform modelling paradigm – UML modelling tools with some extensions – that covers the entire life cycle of these applications: from conceptual analysis to implementation. High-level analysis is performed under different perspectives, including a business and an organisational perspective. Distribution, interoperability and cooperation with external information systems are considered in this early stage. A set of “workflowability” criteria is provided in order to identify which candidate processes are suited to be implemented as workflows. Non-functional requirements receive particular emphasis in that they are among the most important criteria for deciding whether workflow technology can actually be useful for implementing the business process at hand. The design phase tackles aspects of concurrency and cooperation, distributed transactions and exception handling. Reuse of component workflows, available in a repository as workflow fragments, is a distinguishing feature of the method. Implementation aspects are presented in terms of rules that guide the selection of a commercial workflow management system suitable for supporting the designed processes, coupled with guidelines for mapping the designed workflows onto the model offered by the selected system.

2.
段雷  万建成 《计算机科学》2004,31(Z1):161-164
As the complexity of Web applications grows, the need for systematic methods for developing them is becoming ever more pressing. This paper proposes an MDA-based Web application development method, MDHDM, which uses the characteristics of MDA to remedy the shortcomings of existing development methods. We describe the design steps of MDHDM and present its overall structure, the related models, and the mapping rules between the models. Finally, MDHDM is further illustrated through the analysis of a development example.

3.
《Computers & chemistry》1996,20(4):403-418
A computer code is reported that automatically translates user-written source texts of electrochemical reaction mechanisms into corresponding target texts of the mathematical equations that govern the kinetics of electrochemical systems under transient conditions. The rules of the language enabling symbolic specification of the reaction mechanisms, the compiler options, and conventions regarding the target formulae are outlined and illustrated by examples. A considerable diversity of reaction mechanisms involving equilibrium, non-equilibrium reversible or irreversible reactions that can be electrochemical, heterogeneous non-electrochemical or homogeneous, is permitted. The reactions may involve bulk species (distributed in the electrolyte volume) and interfacial species (localized at the electrodes) of variable or constant concentrations, and electrons. The transient conditions may correspond to a number of electrochemical techniques, including the potential-step method, linear potential scan voltammetry and chronopotentiometry. For kinetic problems in one-dimensional space geometry the generated governing equations take the general form of the reaction-advection-diffusion partial differential equations for the concentrations of bulk species (with initial and boundary conditions), optionally coupled with algebraic, ordinary differential or differential-algebraic equations for the concentrations of interfacial species. The governing equations can be obtained in the form of ELSIM problem definitions, enabling further solution by means of this simulation program.
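For reference, the reaction-advection-diffusion form mentioned above can be written, for the concentration c_i(x, t) of a bulk species in one-dimensional geometry, as (generic notation assumed here, not the paper's own):

```latex
\frac{\partial c_i}{\partial t}
  = D_i \frac{\partial^2 c_i}{\partial x^2}
  - v \frac{\partial c_i}{\partial x}
  + \sum_j \nu_{ij}\, r_j(c_1, \dots, c_N)
```

where D_i is the diffusion coefficient, v the convection velocity, r_j the rate of the j-th homogeneous reaction and ν_ij its stoichiometric coefficient; initial and boundary conditions at the electrode couple these equations to the interfacial species and to the applied electrochemical technique.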

4.
In this paper, a new procedure to continuously adjust weights in a multi-layered neural network is proposed. The network is initially trained by using a traditional backpropagation algorithm. After this first step, a non-linear programming technique is used to properly calculate the new weight sets online. This methodology is tailored to be used in time-varying (non-stationary) models, eliminating the necessity for retraining. Numerical results for a controlled experiment and for real data are presented.
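One plausible way to write such an online adjustment (purely illustrative; the sliding-window objective, the window length m and the bound δ below are assumptions, not taken from the paper) is the non-linear program solved at each time step t:

```latex
\min_{w_t}\; \sum_{k=t-m+1}^{t} \big(y_k - f(x_k; w_t)\big)^2
\quad \text{s.t.} \quad \lVert w_t - w_{t-1} \rVert \le \delta
```

where f(·; w) is the network trained by backpropagation, the sum tracks the m most recent observations of the non-stationary process, and the constraint keeps the new weights close to the previous ones so that no full retraining is required.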

5.
This paper presents an extension of the AAA rapid prototyping methodology for the optimized implementation of real-time applications onto reconfigurable circuits. The extension is based on a unified model of factorized data dependence graphs, used both to specify the application algorithm and to deduce the possible implementations onto reconfigurable hardware. This is formalized in terms of graph transformations. The resulting seamless transformation flow has been implemented in a CAD software tool called SynDEx-IC.

6.
Compared with rigid bodies, the modelling and analysis of flexible bodies is relatively complex, and the point distribution model (PDM) is currently one of the most effective ways to model flexible bodies. In many situations, however, several instantaneous models, or a point distribution model at an arbitrary time, must be constructed according to the temporal variation of the object under study. For this situation an interpolation algorithm is proposed that generates a transitional model at an intermediate time from two adjacent known point distribution models. Taking linear interpolation as an example, the mean shape and the feature representation of an interpolated model can be generated at any given time: the mean shape is computed as a linear combination of the two neighbouring models, while the feature representation is obtained by analysing the eigenvectors of the corresponding covariance matrix. Computer simulations and tests on real images show that this model interpolation technique is feasible and of practical value.
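As an illustration of the linear case described in the abstract (the notation below is assumed, not quoted from the paper), a transitional model at an intermediate time fraction α ∈ [0, 1] between two adjacent known models can be formed as

```latex
\bar{x}(\alpha) = (1-\alpha)\,\bar{x}_1 + \alpha\,\bar{x}_2, \qquad
\Sigma(\alpha) = (1-\alpha)\,\Sigma_1 + \alpha\,\Sigma_2
```

with the mean shape obtained as the linear combination of the two neighbouring mean shapes, and the feature representation obtained from an eigenvector analysis of the interpolated covariance matrix Σ(α).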

7.
Optical models for direct volume rendering
This tutorial survey paper reviews several different models for light interaction with volume densities of absorbing, glowing, reflecting, and/or scattering material. They are, in order of increasing realism, absorption only, emission only, emission and absorption combined, single scattering of external illumination without shadows, single scattering with shadows, and multiple scattering. For each model the paper provides the physical assumptions, describes the applications for which it is appropriate, derives the differential or integral equations for light transport, presents calculation methods for solving them, and shows output images for a data set representing a cloud. Special attention is given to calculation methods for the multiple scattering model.
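For orientation (standard notation, not reproduced from the paper), the combined emission and absorption model leads to the volume rendering integral along a ray of length D:

```latex
I(D) = I_0 \exp\!\Big(-\int_0^D \tau(t)\,dt\Big)
     + \int_0^D g(s)\,\exp\!\Big(-\int_s^D \tau(t)\,dt\Big)\,ds
```

where τ is the extinction (absorption) coefficient, g the source (emission) term and I_0 the background intensity entering the volume; the absorption-only and emission-only models are recovered by dropping the corresponding term.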

8.
A truly functional Bayesian method for detecting temporally differentially expressed genes between two experimental conditions is presented. The method distinguishes between two biologically different set-ups, one in which the two samples are interchangeable, and one in which the second sample is a modification of the first, i.e. the two samples are non-interchangeable. This distinction leads to two different Bayesian models, which allow more flexibility in modeling gene expression profiles. The method allows one to identify differentially expressed genes, to rank them and to estimate their expression profiles. The proposed procedure successfully deals with various technical difficulties which arise in microarray time-course experiments, such as a small number of observations, non-uniform sampling intervals and the presence of missing data or repeated measurements. The procedure allows one to account for various types of error, thus offering a good compromise between nonparametric and normality-assumption-based techniques. In addition, all evaluations are carried out using analytic expressions, hence the entire procedure requires very little computational effort. The performance of the procedure is studied using simulated and real data.

9.
Fluorescence energy transfer (FRET) experiments of site-specifically labelled proteins allow one to determine distances between residues at the single-molecule level, which provides information on the three-dimensional structural dynamics of the biomolecule. To systematically extract this information from the experimental data, we describe a program that generates an ensemble of configurations of residues in space that agree with the experimental distances between these positions. Furthermore, a fluctuation analysis allows one to determine the structural accuracy from the experimental error.

Program summary

Title of program: FRETsg
Catalogue identifier: ADTU
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTU
Computer: SGI Octane, Pentium II/III, Athlon MP, DEC Alpha
Operating system: Unix, Linux, Windows98/NT/XP
Programming language used: ANSI C
No. of bits in a word: 32 or 64
No. of processors used: 1
No. of bytes in distributed program, including test data, etc.: 11407
No. of lines in distributed program, including test data, etc.: 1647
Distribution format: gzipped tar file
Nature of the physical problem: Given an arbitrary number of distance distributions between an arbitrary number of points in three-dimensional space, find all configurations (set of coordinates) that obey the given distances.
Method of solution: Each distance is described by a harmonic potential. Starting from random initial configurations, their total energy is minimized by steepest descent. Fluctuations of positions are chosen to generate distance distribution widths that best fit the given values.
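The method of solution can be sketched in C roughly as follows (an illustrative re-implementation, not the distributed FRETsg source; the fixed step size and the absence of a line search are simplifications):

```c
#include <math.h>
#include <stdlib.h>

/* One point per labelled residue, n points in 3-D space. */
typedef struct { double x, y, z; } Point;

/* Harmonic distance restraint between points i and j with target distance d. */
typedef struct { int i, j; double d; } Restraint;

/* Total energy E = sum_k (|r_i - r_j| - d_k)^2 and its gradient. */
static double energy_and_gradient(const Point *p, int n,
                                  const Restraint *r, int m,
                                  Point *grad)
{
    double e = 0.0;
    for (int k = 0; k < n; k++) grad[k].x = grad[k].y = grad[k].z = 0.0;
    for (int k = 0; k < m; k++) {
        double dx = p[r[k].i].x - p[r[k].j].x;
        double dy = p[r[k].i].y - p[r[k].j].y;
        double dz = p[r[k].i].z - p[r[k].j].z;
        double len = sqrt(dx*dx + dy*dy + dz*dz) + 1e-12;
        double diff = len - r[k].d;
        e += diff * diff;
        double f = 2.0 * diff / len;   /* dE/d|r| projected onto the bond vector */
        grad[r[k].i].x += f * dx;  grad[r[k].i].y += f * dy;  grad[r[k].i].z += f * dz;
        grad[r[k].j].x -= f * dx;  grad[r[k].j].y -= f * dy;  grad[r[k].j].z -= f * dz;
    }
    return e;
}

/* Steepest descent from a (random) initial configuration. */
static void minimize(Point *p, int n, const Restraint *r, int m,
                     int iters, double step)
{
    Point *g = malloc(n * sizeof *g);
    for (int it = 0; it < iters; it++) {
        energy_and_gradient(p, n, r, m, g);
        for (int k = 0; k < n; k++) {
            p[k].x -= step * g[k].x;
            p[k].y -= step * g[k].y;
            p[k].z -= step * g[k].z;
        }
    }
    free(g);
}
```

Repeating the minimization from many random starting configurations yields the ensemble of configurations from which the structural accuracy is then estimated.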

10.
An algorithm for the rectification of uncalibrated images is presented and applied to a variety of cases. The algorithm generates the rectifying transformations directly from the geometrical relationship between the images, using any three correspondences in the images to define a reference plane. A small set of correspondences is used to calculate an initial rectification. Additional correspondences are introduced semi-automatically, by correlating regions of the rectified images. Since the rectified images of surfaces in the reference plane have no relative distortion, features can be matched very accurately by correlation, allowing small changes in disparity to be detected. In the 3-d reconstruction of an architectural scene, differences in depth are resolved to about 0.001 of the distance from camera to subject.

11.
Several development approaches have been proposed to cope with the increasing complexity of embedded system design. The most widely used approaches are those using models as the main artifacts to be constructed and maintained. The desired role of models is to ease, systematize and standardize the approach to the construction of software-based systems. To enforce reuse and interconnect the process of model specification and system development with models, we promote a model-based approach coupled with a model repository. In this paper, we propose a model-driven engineering methodological approach for the development of a model repository and an operational architecture for development tools. In addition, we provide evidence of the benefits and feasibility of our approach by reporting on a preliminary prototype that provides a model-based repository of security and dependability (S&D) pattern models. Finally, we apply the proposed approach in practice to a use case from the railway domain with strong S&D requirements.

12.
On common processors, integer multiplication is many times faster than integer division. Dividing a numerator n by a divisor d is mathematically equivalent to multiplication by the inverse of the divisor (n/d = n ∗ 1/d). If the divisor is known in advance, or if repeated integer divisions will be performed with the same divisor, it can be beneficial to substitute a less costly multiplication for an expensive division. Currently, the remainder of the division by a constant is computed from the quotient by a multiplication and a subtraction. However, if just the remainder is desired and the quotient is unneeded, this may be suboptimal. We present a generally applicable algorithm to compute the remainder more directly. Specifically, we use the fractional portion of the product of the numerator and the inverse of the divisor. On this basis, we also present a new and simpler divisibility algorithm to detect nonzero remainders. We also derive new tight bounds on the precision required when representing the inverse of the divisor. Furthermore, we present simple C implementations that beat the optimized code produced by state-of-the-art C compilers on recent x64 processors (e.g., Intel Skylake and AMD Ryzen), sometimes by more than 25%. On all tested platforms, including 64-bit ARM and POWER8, our divisibility test functions are faster than state-of-the-art Granlund-Montgomery divisibility test functions, sometimes by more than 50%.
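The core idea can be sketched as follows for 32-bit unsigned operands (a minimal sketch in the spirit of the fractional-inverse approach described above; the function names and the choice of a 64-bit fixed-point inverse are assumptions, not the authors' exact code):

```c
#include <stdint.h>

typedef unsigned __int128 uint128_t;   /* GCC/Clang extension on 64-bit targets */

/* Precompute c = ceil(2^64 / d), the 0.64 fixed-point inverse of d (assumes d > 1). */
static inline uint64_t compute_c(uint32_t d) {
    return UINT64_MAX / d + 1;
}

/* Remainder n % d: c * n (mod 2^64) is the fractional part of n/d scaled by 2^64;
   multiplying it by d and keeping the top 64 bits recovers the remainder. */
static inline uint32_t fastmod_u32(uint32_t n, uint64_t c, uint32_t d) {
    uint64_t lowbits = c * n;                          /* wraps mod 2^64 on purpose */
    return (uint32_t)(((uint128_t)lowbits * d) >> 64);
}

/* Divisibility test: n is a multiple of d exactly when the scaled fractional part
   is small enough, i.e. c * n <= c - 1. */
static inline int is_divisible(uint32_t n, uint64_t c) {
    return c * n <= c - 1;
}
```

Precomputing c once per divisor turns every remainder computation into two multiplications, which is what lets such code compete with the quotient-then-subtract sequences emitted by optimizing compilers.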

13.
Traditional statistical models for speech recognition have mostly been based on a Bayesian framework using generative models such as hidden Markov models (HMMs). This paper focuses on a new framework for speech recognition using maximum entropy direct modeling, where the probability of a state or word sequence given an observation sequence is computed directly from the model. In contrast to HMMs, features can be asynchronous and overlapping. This model therefore allows for the potential combination of many different types of features, which need not be statistically independent of each other. In this paper, a specific kind of direct model, the maximum entropy Markov model (MEMM), is studied. Even with conventional acoustic features, the approach already shows promising results for phone-level decoding. The MEMM significantly outperforms traditional HMMs in word error rate when used as a stand-alone acoustic model. Preliminary results combining the MEMM scores with HMM and language model scores show modest improvements over the best HMM speech recognizer.
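In the usual MEMM formulation (standard notation, assumed here rather than quoted from the paper), the probability of a state sequence given the observations is computed directly as

```latex
P(s_{1:T} \mid o_{1:T}) = \prod_{t=1}^{T} P(s_t \mid s_{t-1}, o_t),
\qquad
P(s_t \mid s_{t-1}, o_t)
  = \frac{1}{Z(s_{t-1}, o_t)}
    \exp\!\Big(\sum_k \lambda_k f_k(s_t, s_{t-1}, o_t)\Big)
```

where the f_k are feature functions that may be asynchronous and overlapping, the λ_k are weights estimated under the maximum entropy criterion, and Z normalizes over the possible states s_t.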

14.
The design of sifting experiments is considered. The properties of superoptimal designs discovered by Meshalkin [3, 4] are investigated. Translated from Kibernetika, No. 3, pp. 91–97, May–June, 1991.

15.
A software model can be analysed for non-functional requirements by extending it with suitable annotations and transforming it into analysis models for the corresponding non-functional properties. For quantitative performance evaluation, suitable annotations are standardized in the “UML Profile for Modeling and Analysis of Real-Time Embedded systems” (MARTE) and its predecessor, the “UML Profile for Schedulability, Performance and Time”. A range of different performance model types (such as queueing networks, Petri nets, stochastic process algebra) may be used for analysis. In this work, an intermediate “Core Scenario Model” (CSM) is used in the transformation from the source software model to the target performance model. CSM focuses on how the system behaviour uses the system resources. The semantic gap between the software model and the performance model must be bridged by (1) information supplied in the performance annotations, (2) interpretation of the global behaviour expressed in the CSM, and (3) the process of constructing the performance model. Flexibility is required for specifying sets of alternative cases, for choosing where this bridging information is supplied, and for overriding values. It is also essential to be able to trace the source of values used in a particular performance estimate. The performance model in turn can be used to verify responsiveness and scalability of a software system, to discover architectural limitations at an early stage of development, and to develop efficient performance tests. This paper describes how the semantic gap between software models in UML+MARTE and performance models (based on queueing or Petri nets) can be bridged using transformations based on CSMs, and how the transformation challenges are addressed.

16.
Quite a few software companies use both function-oriented and object-oriented techniques in their software development process; for example, a system may be developed with a function-oriented analysis model followed by an object-oriented design method. It is therefore necessary to find a way to transform one kind of model into the other. This paper proposes a flexible and practical strategy for transforming a function-oriented analysis model into an object-oriented design model.

17.
《Computers & chemistry》1987,11(3):153-158
This paper describes a data acquisition and test monitoring system designed to manage electrochemical tests of secondary batteries and of their cathodic materials. The software developed allows various protocols for galvanostatic cycling as a function of time or of voltage, as well as for charge-discharge steps associated with relaxation periods. The software package has been used on a time-sharing mini-computer. Thus, while data collection is in progress, one can examine graphically in real time the voltage evolution and the relaxation kinetics of the material under study.

18.
Web applications can be classified as hybrids between hypermedia and information systems. They have a relatively simple distributed architecture from the user viewpoint, but a complex dynamic architecture from the designer viewpoint. They need to respond to operation by an unlimited number of heterogeneously skilled users, address security and privacy concerns, access heterogeneous, up-to-date information sources, and exhibit dynamic behaviors that involve such processes as code transferring. Common system development methods can model some of these aspects, but none of them is sufficient to specify the large spectrum of Web application concepts and requirements. This paper introduces OPM/Web, an extension to the Object-Process Methodology (OPM) that satisfies the functional, structural and behavioral Web-based information system requirements. The main extensions of OPM/Web are adding properties of links to express requirements, such as those related to encryption; extending the zooming and unfolding facilities to increase modularity; cleanly separating declarations and instances of code to model code transferring; and adding global data integrity and control constraints to express dependence or temporal relations among (physically) separate modules. We present a case study that helps evaluate OPM/Web and compare it to an extension of the Unified Modeling Language (UML) for the Web application domain.

19.
In this paper, we will discuss a methodology developed and applied in the European ITERATE project with the objective of designing experiments that will provide data to seed the numerical model of operator behaviour in different surface transport modes: road vehicles, rail transport and ships. The experiments aim to investigate how new technologies support different types of operators in different contexts. A structured approach was adopted. Firstly, an initial selection of the systems to be investigated was made, describing the support they provide for operators. Hypotheses were formulated on the effects of operator parameters on the interaction with the systems. A final selection of systems for the experiments was made, focusing on systems providing support for collision avoidance and speed management. The operator parameters (culture, attitude and personality, experience, driver state (such as fatigue) and the demands of the task) were operationalised and piloted. The next step was the development of scenarios to be implemented in a driving simulator. In the last step, the final experiments were designed and detailed.

20.
To address the strong nonlinearity of semi-batch polymerization, the uncertainty of the model parameters, and the presence of path and terminal constraints, multi-stage nonlinear model predictive control is applied to the design of a controller for the semi-batch polymerization process. The method uses a discrete scenario tree to represent the influence of the uncertain quantities on the system, so that future control inputs depend on the previously realized uncertainties, which yields a closed-loop robust control scheme. To improve solution efficiency, the problem is solved with a simultaneous approach based on orthogonal collocation on finite elements: by choosing suitable Radau collocation points, the control and state variables are discretized simultaneously, the dynamic optimization problem is transformed into an NLP problem, and it is solved with the interior-point solver IPOPT. The effectiveness of the proposed method is verified on a semi-batch polymerization model provided by BASF SE.
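A compact way to write the scenario-tree (multi-stage) NMPC problem described above is (generic notation, not the exact formulation used in the paper):

```latex
\min_{u_k^j}\;\; \sum_{j=1}^{S} \omega_j \sum_{k=0}^{N-1} L\big(x_k^j, u_k^j\big)
\quad \text{s.t.} \quad
x_{k+1}^j = f\big(x_k^j, u_k^j, d^j\big),\;\;
g\big(x_k^j, u_k^j\big) \le 0,\;\;
u_k^j = u_k^l \ \text{if scenarios } j, l \text{ share a node at stage } k
```

where each scenario j corresponds to one realization d^j of the uncertain parameters with probability weight ω_j, path and terminal constraints enter through g, and the last (non-anticipativity) condition enforces that a control input may only depend on the uncertainty already realized. After discretization by orthogonal collocation on Radau points, this problem becomes the NLP handed to IPOPT.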
