Similar Documents
20 similar documents found (search time: 614 ms)
1.
We have developed novel techniques for component-based specification of programming languages. In our approach, the semantics of each fundamental programming construct is specified independently, using an inherently modular framework such that no reformulation is needed when constructs are combined. A language specification consists of an unrestricted context-free grammar for the syntax of programs, together with an analysis of each language construct in terms of fundamental constructs. An open-ended collection of fundamental constructs is currently being developed. When supported by appropriate tools, our techniques allow a more agile approach to the design, modelling, and implementation of programming and domain-specific languages. In particular, our approach encourages language designers to proceed incrementally, using prototype implementations generated from specifications to test tentative designs. The components of our specifications are independent and highly reusable, so initial language specifications can be rapidly produced, and can easily evolve in response to changing design decisions. In this paper, we outline our approach, and relate it to the practices and principles of agile modelling.
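The component-based idea above can be illustrated with a minimal sketch (not the authors' actual framework): each fundamental construct is specified independently as its own class, and a source-language construct is analysed as a composition of them. All names here are illustrative assumptions.

```python
# Each fundamental construct is specified independently; combining them
# requires no reformulation of the existing ones.

class Funcon:
    """Base class for independently specified fundamental constructs."""
    def eval(self, env):
        raise NotImplementedError

class Lit(Funcon):                      # a literal value
    def __init__(self, value): self.value = value
    def eval(self, env): return self.value

class Bound(Funcon):                    # look up a bound identifier
    def __init__(self, name): self.name = name
    def eval(self, env): return env[self.name]

class Scope(Funcon):                    # evaluate body with an extra binding
    def __init__(self, name, rhs, body):
        self.name, self.rhs, self.body = name, rhs, body
    def eval(self, env):
        return self.body.eval({**env, self.name: self.rhs.eval(env)})

class Plus(Funcon):                     # numeric addition
    def __init__(self, a, b): self.a, self.b = a, b
    def eval(self, env): return self.a.eval(env) + self.b.eval(env)

# A hypothetical source construct `let x = 2 in x + 3` is analysed as a
# composition of the fundamental constructs above:
prog = Scope("x", Lit(2), Plus(Bound("x"), Lit(3)))
print(prog.eval({}))  # 5
```

Adding a new source construct then means writing a new translation into existing fundamental constructs, not reworking their semantics.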

2.
3.
The modelling of complex workflows is an important problem-solving technique within healthcare settings. However, currently most workflow models use a simplified flow chart of patient flow obtained using on-site observations, group-based debates and brainstorming sessions, together with historic patient data. This paper presents a systematic and semi-automatic methodology for knowledge acquisition with detailed process representation, using sequential interviews of people in the key roles involved in the service delivery process. The proposed methodology allows the modelling of roles, interactions, actions, and decisions involved in the service delivery process. This approach is based on protocol generation and analysis techniques such as: (i) initial protocol generation based on qualitative interviews of radiology staff, (ii) extraction of key features of the service delivery process, (iii) discovering the relationships among the key features extracted, and (iv) a graphical representation of the final structured model of the service delivery process. The methodology is demonstrated through a case study of a magnetic resonance (MR) scanning service-delivery process in the radiology department of a large hospital. A set of guidelines is also presented in this paper to visually analyze the resulting process model for identifying process vulnerabilities. A comparative analysis of different workflow models is also conducted.

4.
Today’s interconnected socio-economic and environmental challenges require the combination and reuse of existing integrated modelling solutions. This paper contributes to this overall research area by reviewing a wide range of currently available frameworks, systems and emerging technologies for integrated modelling in the environmental sciences. Based on a systematic review of the literature, we group related studies and papers into viewpoints and elaborate on shared and diverging characteristics. Our analysis shows that component-based modelling frameworks and scientific workflow systems have traditionally been used for solving technical integration challenges, but ultimately, the appropriate framework or system strongly depends on the particular environmental phenomenon under investigation. The study also shows that – in general – individual integrated modelling solutions do not benefit from components and models that are provided by others. It is this island (or silo) situation that results in low levels of model reuse for multi-disciplinary settings. This seems mainly due to the fact that the field as such is highly complex and diverse. A unique integrated modelling solution, capable of dealing with any environmental scenario, seems to be unaffordable because of the great variety of data formats, models, environmental phenomena, stakeholder networks, user perspectives and social aspects. Nevertheless, we conclude that the combination of modelling tools which address complementary viewpoints – such as service-based approaches combined with scientific workflow systems, or resource modelling on top of virtual research environments – could lead to sustainable information systems, which would advance model sharing, reuse and integration. Next steps for improving this form of multi-disciplinary interoperability are sketched.

5.
Although Entity-Relationship (ER) modelling techniques are commonly used for information modelling, Object-Role Modelling (ORM) techniques are becoming increasingly popular, partly because they include detailed design procedures providing guidelines for the modeller. As with the ER approach, a number of different ORM techniques exist. In this paper, we propose an integration of two theoretically well founded ORM techniques: FORM and PSM. Our main focus is on a common terminological framework, and on the notion of subtyping. Subtyping has long been an important feature of semantic approaches to conceptual schema design. It is also the concept in which FORM and PSM differ the most in their formalization. The subtyping issue is discussed from three different viewpoints covering syntactical, identification, and population issues. Finally, a wider comparison of approaches to subtyping is made, which encompasses other ER-based and ORM-based information modelling techniques, and highlights how formal subtype definitions facilitate a comprehensive specification of subtype constraints.

6.
There are as yet no fully comprehensive techniques for specifying, verifying and testing unconventional computations. In this paper, we propose a generally applicable and designer-friendly specification strategy based on a generalized variant of Eilenberg’s X-machine model of computation. Our approach, which extends existing approaches to SXM test-based verification, is arguably capable of modelling very general unconventional computations, and would allow implementations to be verified fully against their specifications.
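The X-machine model mentioned above pairs a finite control with a memory, labelling each transition with a processing function. A minimal sketch (the example machine and all names are illustrative, not from the paper):

```python
# A stream X-machine sketch: transitions are labelled with processing
# functions that act on a memory and an input symbol.

def inc(mem, sym):    # processing function: add the input to the memory
    return mem + sym

def reset(mem, sym):  # processing function: discard the memory
    return 0

class XMachine:
    def __init__(self, transitions, start_state, init_mem):
        # transitions: (state, label) -> (processing function, next state)
        self.transitions = transitions
        self.state, self.mem = start_state, init_mem

    def step(self, label, sym):
        func, nxt = self.transitions[(self.state, label)]
        self.mem, self.state = func(self.mem, sym), nxt

m = XMachine({("idle", "add"): (inc, "idle"),
              ("idle", "clear"): (reset, "idle")},
             "idle", 0)
m.step("add", 3)
m.step("add", 4)
print(m.mem)  # 7
m.step("clear", None)
print(m.mem)  # 0
```

Test-based verification in this style exercises the processing functions and the control structure separately, which is what makes the specification designer-friendly.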

7.
One of the major challenges for enterprises is to stay competitive in today's changing market environment. This can be supported by business process models which on the one hand are consistent and adequate (requirement #1), and on the other hand can be enacted and operated in an easy, fast, straightforward and integrated way (requirement #2). The CIMOSA architecture provides the basis for business process modelling to fulfil both of the above requirements. It supports the creation of consistent process models and makes it possible to identify almost all of the information required for the development of a workflow model. These models can be implemented using one of the commercial workflow management systems. In this paper we present the methodology, based on the CIMOSA architecture, that we have developed to build a workflow model in Lotus Notes for the forecasting and production planning processes in a tiles manufacturing enterprise. The CIMOSA approach has also been used to design the necessary software applications for processing the information of the resulting workflow system.

8.
Workflow management systems (WfMS) are widely used by business enterprises as tools for administrating, automating and scheduling the business process activities with the available resources. Since the control flow specifications of workflows are manually designed, they entail assumptions and errors, leading to inaccurate workflow models. Decision points, the XOR nodes in a workflow graph model, determine the path chosen toward completion of any process invocation. In this work, we show that positioning the decision points at their earliest points can improve process efficiency by decreasing their uncertainties and identifying redundant activities. We present novel techniques to discover the earliest positions by analyzing workflow logs and to transform the model graph. The experimental results show that the transformed model is more efficient with respect to its average execution time and uncertainty, when compared to the original model.
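The intuition behind moving a decision point earlier can be illustrated with a much simplified sketch (not the paper's log-analysis algorithm): an XOR decision could already be taken just after the last activity that produces data the decision reads; any work done between that point and the decision on a branch that is later rejected is potentially redundant. The example workflow and all names are assumptions.

```python
# Simplified illustration of the "earliest decision point" idea.

def earliest_position(sequence, produces, decision_reads):
    """Return the index after which the decision could already be taken:
    just past the last activity producing any data the decision reads."""
    last = -1
    for i, activity in enumerate(sequence):
        if produces.get(activity, set()) & decision_reads:
            last = i
    return last + 1

seq = ["receive_order", "check_stock", "pack", "bill"]
produces = {"receive_order": {"order"}, "check_stock": {"in_stock"}}

# A hypothetical "ship vs. cancel" decision only reads `in_stock`,
# so it can be taken right after activity index 1 (check_stock),
# before `pack` and `bill` are ever executed on the losing branch:
print(earliest_position(seq, produces, {"in_stock"}))  # 2
```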

9.
An adaptive optimal scheduling and controller design is presented that attempts to improve the performance of beer membrane filtration over that currently obtained by operators. The research was performed as part of a large European research project called EU Cafe, with the aim of investigating the potential of advanced modelling and control to improve the production and quality of food. Significant improvements are demonstrated in this paper through simulation experiments. Optimal scheduling and control comprises a mixed integer non-linear programming problem (MINLP). By making some suitable assumptions that are approximately satisfied in practice, we manage to significantly simplify the problem by turning it into an ordinary non-linear programming problem (NLP) for which solution methods are readily available. The adaptive part of our scheduler and controller performs model parameter adaptations. These are also obtained by solving associated NLP problems. During cleaning stages in between membrane filtrations enough time is available to solve the NLP problems. This allows for real-time implementation.

10.
The aim of this paper is to present a modelling scheme for programming methods and to illustrate it on Jackson's programming method. We first give a formal semantics to the objects of this method and we model the basic strategy of matching trees in order to build a program structure. In the next section we study how to support a formal development, its automation, and the building of a formal specification within the scope of our model. Then an example is developed. The last section addresses alternative strategies suggested by the method in order to solve clash problems, where the basic strategy fails. Boundary and ordering clash situations are presented and their strategies are modelled.

11.
Inverting the adage that a data type is just a simple programming language, we take the position that a programming language is, semantically, just a complex data type; evaluation of a program is just another operation in the data type. The algebraic approach to data types may then be applied. We make a distinction between specification and modelling, and we emphasize the use of first-order identities as a specification language rather than as a tool for model-building. Denotational and operational semantics are discussed. Techniques are introduced for proving the equivalence of specifications. Reynolds' lambda-calculus interpreter is analyzed as an example.

12.
Current provenance stores associated with workflow management systems (WfMSs) capture enough coarse-grained information to describe which datasets were used and which processes were run. While this information is enough to rebuild a workflow run, it is not enough to facilitate user understanding. Because the data is manipulated via a series of black boxes, it is often impossible for a human to understand what happened to the data. In this work, we highlight the missing information that can assist user understanding. Unfortunately, provenance information is already very complex and difficult for a user to comprehend, which can be exacerbated by adding the extra information needed for deeper black-box understanding. In order to alleviate this, we develop a model of provenance answers that follow a “roll up”, “drill down” strategy. We evaluate these techniques to determine if users have better understanding of provenance information. We show how this information can be captured by workflow management systems, and that the structures and information needed for this model are a negligible addition to standard provenance stores. Finally, we implement these techniques in a real provenance system, and evaluate implementation feasibility.
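The roll-up/drill-down strategy can be sketched on an assumed nested provenance structure (not the paper's actual schema; all step names are hypothetical): each coarse-grained step carries its fine-grained sub-steps, and a query either summarises a level or expands one black box.

```python
# "Roll up" / "drill down" over a nested provenance record (assumed layout).

provenance = {
    "align_genomes": {                     # coarse-grained: the black box
        "used": ["reads.fq"],
        "steps": {                         # fine-grained: inside the box
            "trim_adapters": {"used": ["reads.fq"], "steps": {}},
            "map_reads": {"used": ["trimmed.fq"], "steps": {}},
        },
    },
}

def roll_up(prov):
    """Summarise: top-level step names only."""
    return sorted(prov)

def drill_down(prov, step):
    """Expand one step into its sub-steps."""
    return sorted(prov[step]["steps"])

print(roll_up(provenance))                      # ['align_genomes']
print(drill_down(provenance, "align_genomes"))  # ['map_reads', 'trim_adapters']
```

Because the fine-grained records reuse the same shape as the coarse ones, storing them is a small addition to an existing provenance store.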

13.
Using predicate transformers as a basis, we give semantics and refinement rules for mixed specifications that allow UNITY style specifications to be written as a combination of abstract program and temporal properties. From the point of view of the programmer, mixed specifications may be considered a generalization of the UNITY specification notation to allow safety properties to be specified by abstract programs in addition to temporal properties. Alternatively, mixed specifications may be viewed as a generalization of the UNITY programming notation to allow arbitrary safety and progress properties in a generalized ‘always section’. The UNITY substitution axiom is handled in a novel way by replacing it with a refinement rule. The predicate transformer foundation allows known techniques for algorithmic and data refinement for weakest-precondition-based programming to be applied to both safety and progress properties. In this paper, we define the predicate transformer based specifications, specialize the refinement techniques to them, demonstrate soundness, and illustrate the approach with a substantial example. Received: 1 April 1996 / 6 March 1997
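The predicate-transformer basis can be illustrated with a tiny executable sketch (not the paper's formalism): predicates are modelled as boolean functions on states, and a statement is the transformer mapping a postcondition to its weakest precondition, with wp(x := e, Q) = Q[e/x] and wp(S1; S2, Q) = wp(S1, wp(S2, Q)).

```python
# Weakest-precondition transformers over states represented as dicts.

def assign(var, expr):
    """wp(var := expr, Q) = Q with expr substituted for var."""
    def wp(post):
        return lambda s: post({**s, var: expr(s)})
    return wp

def seq(t1, t2):
    """wp(S1; S2, Q) = wp(S1, wp(S2, Q))."""
    return lambda post: t1(t2(post))

post = lambda s: s["x"] > 10            # postcondition: x > 10
prog = seq(assign("x", lambda s: s["x"] + 1),
           assign("x", lambda s: 2 * s["x"]))

pre = prog(post)                        # weakest precondition of prog wrt post
print(pre({"x": 5}))   # True:  (5+1)*2 = 12 > 10
print(pre({"x": 4}))   # False: (4+1)*2 = 10, and 10 > 10 fails
```

Refinement then amounts to checking an implication between such transformers, which is what lets the same machinery cover both safety and progress properties.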

14.
15.
16.
Context: Learning can be regarded as knowledge construction in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners’ competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations.
Objectives: The current methods in requirements engineering are capable of modelling the common user’s behaviour in the domain of knowledge construction. The users’ requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing an individual user’s requirements and transforming the requirements into personalised information provision specifications, so that the right information can be provided to the right user for the right purpose.
Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement.
Result: The research work has produced an ontology model with a set of techniques which support the functions for profiling users’ requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications.
Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with the techniques, can further enhance RE approaches for modelling the individual user’s needs and discovering the user’s requirements.

17.
We describe a behavioural modelling approach based on the concept of a “Protocol Machine”, a machine whose behaviour is governed by rules that determine whether it accepts or refuses events that are presented to it. We show how these machines can be composed in the manner of mixins to model object behaviour, and show how the approach provides a basis for defining reusable fine-grained behavioural abstractions. We suggest that this approach provides better encapsulation of object behaviour than traditional object modelling techniques when modelling transactional business systems. We relate the approach to work going on in model-driven approaches, specifically the Model Driven Architecture initiative sponsored by the Object Management Group.
Communicated by August-Wilhelm Scheer
Ashley McNeile is a practitioner with over 25 years of experience in systems development and IT-related management consultancy. His main areas of interest are requirements analysis techniques and model execution, and in 2001 he founded Metamaxim Ltd. to pioneer new techniques in these areas. He has published and presented widely on object-oriented development methodology and systems architecture.
Nicholas Simons has been working with formal methods of system specification since their introduction, and has over 20 years of experience in building tools for system design, code generation and reverse engineering. In addition, he lectures on systems analysis and design, Web programming and project planning. He is a co-founder and director of Metamaxim Ltd.
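The accept/refuse discipline and mixin-style composition can be sketched as follows (an illustrative reading, not Metamaxim's tooling; the order/credit example is hypothetical): the composite accepts an event only if every constituent machine that recognises the event accepts it in its current state.

```python
# Protocol Machines: each machine accepts or refuses events; composition
# requires agreement of all machines that recognise the event.

class ProtocolMachine:
    def __init__(self, rules, state):
        self.rules = rules   # (state, event) -> next state
        self.state = state

    def events(self):
        return {e for (_, e) in self.rules}

    def accepts(self, event):
        return (self.state, event) in self.rules

    def fire(self, event):
        self.state = self.rules[(self.state, event)]

class Composite:
    def __init__(self, *machines):
        self.machines = machines

    def handle(self, event):
        involved = [m for m in self.machines if event in m.events()]
        if involved and all(m.accepts(event) for m in involved):
            for m in involved:
                m.fire(event)
            return True
        return False   # event refused

order = ProtocolMachine({("open", "add_line"): "open",
                         ("open", "confirm"): "confirmed"}, "open")
credit = ProtocolMachine({("ok", "confirm"): "ok"}, "ok")
c = Composite(order, credit)
print(c.handle("add_line"))  # True
print(c.handle("confirm"))   # True: both order and credit accept
print(c.handle("add_line"))  # False: order machine refuses after confirm
```

Because each machine only states its own protocol, fine-grained behaviours like the credit check can be reused across object models without entangling their rules.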

18.
As an art form between drawing and sculpture, relief has been widely used in a variety of media for signs, narratives, decorations and other purposes. Traditional relief creation relies on both professional skills and artistic expertise, which is extremely time-consuming. Recently, automatic or semi-automatic relief modelling from a 3D object or a 2D image has been a subject of interest in computer graphics. Various methods have been proposed to generate reliefs with few user interactions or minor human effort, while preserving or enhancing the appearance of the input. This survey provides a comprehensive review of the advances in computer-assisted relief modelling during the past decade. First, we provide an overview of relief types and their art characteristics. Then, we introduce the key techniques of object-space methods and image-space methods respectively. Advantages and limitations of each category are discussed in detail. We conclude the report by discussing directions for possible future research.

19.
Most process modelling techniques exist without any firm theoretical foundation. This results in a lack of model validation, which can be in terms of model consistency, feasibility and goal compliance. Moreover, these techniques are mostly deterministic in nature and not applicable to stochastic systems. In this paper, we propose an ontology-based stochastic process modelling framework that further provides a specialization to failure and reliability issues. The framework is notation independent, and is primarily rooted in Bunge’s ontology. The well-established constructs of reliability theory are also mapped, to facilitate the modelling of failure-prone systems.

20.
In many industries, structural change through E-Commerce is challenging firms to re-align their strategies as well as re-engineer their business processes with new competitive environments, while taking advantage of technological opportunities. This article presents E-MEMO, a method for multi-perspective enterprise modelling with special emphasis on processes and technologies for E-Commerce. It serves to analyse and design corporate information systems that are balanced with a company’s E-Commerce strategy and its organisation. E-MEMO offers specific languages for modelling strategies, business processes, and related resources. In addition, it provides a library of reference models, including strategy networks to guide strategic planning and models of business processes. In order to further support the implementation of information systems, a transformation has been defined that allows for generating workflow schemata from business process models. Since design-oriented research is not predominant in the Information Systems field, the epistemological challenges of the chosen research approach are discussed, too.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号