20 similar documents found; search took 46 ms.
1.
A minimized automaton representation of reachable states (total citations: 1; self: 0; others: 1)
Gerard J. Holzmann Anuj Puri 《International Journal on Software Tools for Technology Transfer (STTT)》1999,2(3):270-278
We consider the problem of storing a set S ⊂ Σ^k as a deterministic finite automaton (DFA). We show that inserting a new string σ ∈ Σ^k into, or deleting a string from, the set S represented as a minimized DFA can be done in expected time O(k|Σ|), while preserving
the minimality of the DFA. The method can be applied to reduce the memory requirements of model checkers that are based on
explicit state enumeration. As an example, we discuss an implementation of the method for the model checker Spin.
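The merged-state representation behind this result can be illustrated with a small sketch (all names are hypothetical; this is not the authors' O(k|Σ|) incremental algorithm). Identifying each state with its right language merges equivalent states, which yields the minimal DFA for a set of equal-length strings:

```python
def build_min_dfa(strings):
    """Build a minimized acceptor for a set of equal-length strings.
    States are identified by their right language (the set of
    accepted suffixes), so equivalent states are merged."""
    states = {}  # frozenset of suffixes -> state id
    trans = {}   # (state id, symbol) -> state id

    def state_of(suffixes):
        key = frozenset(suffixes)
        if key in states:
            return states[key]
        sid = len(states)
        states[key] = sid
        by_sym = {}
        for s in suffixes:
            if s:  # group remaining suffixes by first symbol
                by_sym.setdefault(s[0], set()).add(s[1:])
        for sym, rest in by_sym.items():
            trans[(sid, sym)] = state_of(rest)
        return sid

    start = state_of(set(strings))
    accept = states.get(frozenset({""}))
    return start, trans, accept

def accepts(dfa, word):
    """Run the DFA on a word and report acceptance."""
    start, trans, accept = dfa
    state = start
    for sym in word:
        if (state, sym) not in trans:
            return False
        state = trans[(state, sym)]
    return state == accept
```

For example, "abc" and "xbc" share all states after their first symbol; this sharing is what the minimized-DFA representation exploits in a model checker's state store.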
2.
Fabio Casati Maria Grazia Fugini Isabelle Mirbel Barbara Pernici 《Requirements Engineering》2002,7(2):73-106
Workflow management systems are becoming a relevant support for a large class of business applications, and many workflow
models as well as commercial products are currently available. While the wide availability of tools facilitates the development
and the fulfilment of customer requirements, workflow application development still requires methodological guidelines that
drive the developers in the complex task of rapidly producing effective applications. In fact, it is necessary to identify
and model the business processes, to design the interfaces towards existing cooperating systems, and to manage implementation
aspects in an integrated way. This paper presents the WIRES methodology for developing workflow applications under a uniform
modelling paradigm – UML modelling tools with some extensions – that covers the entire life cycle of these applications: from
conceptual analysis to implementation. High-level analysis is performed under different perspectives, including a business and an organisational perspective. Distribution, interoperability and cooperation with external information systems are considered in this early
stage. A set of “workflowability” criteria is provided in order to identify which candidate processes are suited to be implemented
as workflows. Non-functional requirements receive particular emphasis in that they are among the most important criteria for
deciding whether workflow technology can be actually useful for implementing the business process at hand. The design phase
tackles aspects of concurrency and cooperation, distributed transactions and exception handling. Reuse of component workflows,
available in a repository as workflow fragments, is a distinguishing feature of the method. Implementation aspects are presented
in terms of rules that guide in the selection of a commercial workflow management system suitable for supporting the designed
processes, coupled with guidelines for mapping the designed workflows onto the model offered by the selected system.
3.
Round-Trip Prototyping Based on Integrated Functional and User Interface Requirements Specifications
Requirements engineering in the new millennium is facing an increasing diversity of computerised devices comprising an increasing
diversity of interaction styles for an increasing diversity of user groups. Thus the incorporation of user interface requirements
into software requirements specifications becomes more and more mandatory. Validating these requirements specifications with
hand-made, throw-away prototypes is not only expensive, but also bears the danger that validation results are not accurately
fed back into the requirements specification. In this paper, we propose an enhancement of the requirements specification method
SCORES for an explicit capturing of user interface requirements. The advantages of the approach are threefold. First, the
user interface requirements specification is UML-compliant and integrated into the functional requirements specification.
Second, prototypes for validation purposes can semi-automatically be generated. Third, the model-based generation of prototypes
allows for ‘round-trip prototyping’ such that manual changes of the prototype during the validation process are automatically
fed back into the requirements specification.
4.
This paper looks from an ethnographic viewpoint at the case of two information systems in a multinational engineering consultancy.
It proposes using the rich findings from ethnographic analysis during requirements discovery. The paper shows how context
– organisational and social – can be taken into account during an information system development process. Socio-technical
approaches are holistic in nature and provide opportunities to produce information systems utilising social science insights,
computer science technical competence and psychological approaches. These approaches provide fact-finding methods that are
appropriate to system participants’ and organisational stakeholders’ needs.
The paper recommends a method of modelling that results in a computerised information system data model that reflects the
conflicting and competing data and multiple perspectives of participants and stakeholders, and that improves interactivity
and conflict management.
5.
Why do the business requirements and the final software product often have little in common? Why are stakeholders, developers
and managers reluctant to embrace a full requirements process? Why does everybody say, ‘We don’t have time for requirements’?
Why is the potentially most beneficial part of the development process ignored or short-changed?
Following are some observations about why the real requirements for the product often go undiscovered. We will address this
by focusing on the different concerns of the people involved in requirements.
6.
Edward E. Cobb 《The VLDB Journal The International Journal on Very Large Data Bases》1997,6(3):173-190
Businesses today are searching for information solutions that enable them to compete in the global marketplace. To minimize
risk, these solutions must build on existing investments, permit the best technology to be applied to the problem, and be
manageable. Object technology, with its promise of improved productivity and quality in application development, delivers
these characteristics but, to date, its deployment in commercial business applications has been limited. One possible reason
is the absence of the transaction paradigm, widely used in commercial environments and essential for reliable business applications.
For object technology to become a serious contender in the construction of these solutions, several elements are required:
– technology for transactional objects. In December 1994, the Object Management Group adopted a specification for an object
transaction service (OTS). The OTS specifies mechanisms for defining and manipulating transactions. Though derived from the X/Open distributed
transaction processing model, OTS contains additional enhancements specifically designed for the object environment. Similar
technology from Microsoft appeared at the end of 1995.
– methodologies for building new business systems from existing parts. Business process re-engineering is forcing businesses
to improve the operations that bring products to market. Workflow computing, used in conjunction with “object wrappers”, provides tools to both define and track the execution of business processes that leverage existing applications and infrastructure.
– an execution environment which satisfies the requirements of the operational needs of the business. Transaction processing
(TP) monitor technology, though widely accepted for mainframe transaction processing, has yet to enjoy similar success in
the client/server marketplace. Instead the database vendors, with their extensive tool suites, dominate. As object brokers
mature they will require many of the functions of today's TP monitors. Marrying these two technologies can produce a robust
execution environment which offers a superior alternative for building and deploying client/server applications.
Edited by Andreas Reuter, Received February 1995 / Revised August 1995 / Accepted May 1996
7.
Yonit Kesten Amir Pnueli 《International Journal on Software Tools for Technology Transfer (STTT)》2000,2(4):328-342
In spite of the impressive progress in the development of the two main methods for formal verification of reactive systems
– Symbolic Model Checking and Deductive Verification – these methods are still limited in their ability to handle large systems. It
is generally recognized that the only way these methods can ever scale up is by the extensive use of abstraction and modularization,
which break the task of verifying a large system into several smaller tasks of verifying simpler systems.
In this paper, we review the two main tools of compositionality and abstraction in the framework of linear temporal logic.
We illustrate the application of these two methods for the reduction of an infinite-state system into a finite-state system
that can then be verified using model checking.
The technical contributions contained in this paper are a full formulation of abstraction when applied to a system with both
weak and strong fairness requirements and to a general temporal formula, and a presentation of a compositional framework for
shared variables and its application for forming network invariants.
8.
Flip Korn Alexandros Labrinidis Yannis Kotidis Christos Faloutsos 《The VLDB Journal The International Journal on Very Large Data Bases》2000,8(3-4):254-266
Association Rule Mining algorithms operate on a data matrix (e.g., customers × products) to derive association rules [AIS93b, SA96]. We propose a new paradigm, namely, Ratio Rules, which are quantifiable in that we can measure the “goodness” of a set of discovered rules. We also propose the “guessing
error” as a measure of the “goodness”, that is, the root-mean-square error of the reconstructed values of the cells of the
given matrix, when we pretend that they are unknown. Another contribution is a novel method to guess missing/hidden values
from the Ratio Rules that our method derives. For example, if somebody bought $10 of milk and $3 of bread, our rules can “guess”
the amount spent on butter. Thus, unlike association rules, Ratio Rules can perform a variety of important tasks such as forecasting,
answering “what-if” scenarios, detecting outliers, and visualizing the data. Moreover, we show that we can compute Ratio Rules
in a single pass over the data set with small memory requirements (a few small matrices), in contrast to association rule mining methods
which require multiple passes and/or large memory. Experiments on several real data sets (e.g., basketball and baseball statistics,
biological data) demonstrate that the proposed method: (a) leads to rules that make sense; (b) can find large itemsets in
binary matrices, even in the presence of noise; and (c) consistently achieves a “guessing error” of up to 5 times less than
using straightforward column averages.
Received: March 15, 1999 / Accepted: November 1, 1999
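Ratio Rules are derived from an eigen-analysis of the data matrix. The toy sketch below (all names hypothetical; plain power iteration rather than the authors' single-pass method) illustrates how a dominant direction can “guess” a hidden cell on strongly correlated data, in the spirit of the milk/bread/butter example:

```python
def first_ratio_rule(rows, iters=100):
    """Dominant eigenvector of X^T X via power iteration: the first
    'ratio rule' direction relating the columns (toy version)."""
    n = len(rows[0])
    v = [1.0] * n
    for _ in range(iters):
        xv = [sum(r[j] * v[j] for j in range(n)) for r in rows]  # X v
        w = [sum(rows[i][j] * xv[i] for i in range(len(rows)))   # X^T (X v)
             for j in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def guess_missing(rule, known, missing_idx):
    """Least-squares projection of the known cells onto the rule
    direction, then read off the missing cell."""
    scale = (sum(rule[j] * val for j, val in known.items())
             / sum(rule[j] ** 2 for j in known))
    return scale * rule[missing_idx]
```

On rank-one data where bread is 30% and butter 50% of the milk amount, knowing $10 of milk and $3 of bread yields a guess of about $5 for butter.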
9.
Christopher J. Atkinson 《Requirements Engineering》2000,5(2):67-73
The contributors to this special issue focus on socio-technical and soft approaches to information requirements elicitation
and systems development. They represent a growing body of research and practice in this field. This review presents an overview
and analysis of the salient themes within the papers, encompassing their common underlying framework, the methodologies and
tools and techniques presented, the organisational situations in which they are deployed and the issues they seek to address.
It will be argued in the review that the contributions to this special edition exemplify the ‘post-methodological era’ and
the ‘contingency approaches’ from which it is formed.
10.
We present a shared memory algorithm that allows a set of f+1 processes to wait-free “simulate” a larger system of n processes that may also exhibit up to f stopping failures.
Applying this simulation algorithm to the k-set-agreement problem enables conversion of an arbitrary k-fault-tolerant n-process solution for the k-set-agreement problem into a wait-free (k+1)-process solution for the same problem. Since the (k+1)-process k-set-agreement problem has been shown to have no wait-free solution [5,18,26], this transformation implies that there is no
k-fault-tolerant solution to the n-process k-set-agreement problem, for any n.
More generally, the algorithm satisfies the requirements of a fault-tolerant distributed simulation. The distributed simulation implements a notion of fault-tolerant reducibility between decision problems. This paper defines these notions and gives examples of their application to fundamental distributed
computing problems.
The algorithm is presented and verified in terms of I/O automata. The presentation has a great deal of interesting modularity,
expressed by I/O automaton composition and both forward and backward simulation relations. Composition is used to include
a safe agreement module as a subroutine. Forward and backward simulation relations are used to view the algorithm as implementing a multi-try snapshot strategy.
The main algorithm works in snapshot shared memory systems; a simple modification of the algorithm that works in read/write
shared memory systems is also presented.
Received: February 2001 / Accepted: February 2001
11.
H. Courteney 《Cognition, Technology & Work》2000,2(3):142-153
Cognitive engineering has developed enormously over the last fifteen years. Yet, despite many excellent research projects
and publications, its full potential has not been embraced by mainstream system design. This paper will examine the reasons
for this failure and argue that the problem is not simply inertia or lack of education. There are strong organisational influences
that cause resistance to this particular approach. The discipline itself has characteristics that make it fragile in the modern
corporate structure. In addition, the cognitive engineers themselves are not blameless in the equation. They appear to have
done exactly what they criticise the engineering community for doing: they have packaged their product in a manner that is
not ‘user friendly’ to its target population, not structured to suit its application, and not output in the format required.
Suggestions will be made to rectify the situation: a list of actions is proposed for practising cognitive engineers to make
their product more likely to enjoy widespread uptake.
12.
The requirements specification – as the outcome of the requirements engineering process – falls short of capturing other useful
information generated during this process, such as the justification for selected requirements, trade-offs negotiated by stakeholders
and alternative requirements that were discarded. In the context of evolving systems and distributed development, this information
is essential. Rationale methods focus on capturing and structuring this missing information. In this paper, we propose an
integrated process with dedicated guidance for capturing requirements and their rationale, discuss its tool support and describe
the experiences we gained during several case studies with students. Although the idea of integrating rationale methods with
requirements engineering is not new, few research projects so far have focused on smooth integration, dedicated tool support
and detailed guidance for such methods.
13.
The success of the Object Management Group's General Inter-ORB Protocol (GIOP) is leading to the desire to deploy GIOP in
an ever-wider range of application areas, many of which are significantly more demanding than traditional areas in terms of
performance. The well-known performance limitations of present day GIOP-based object request brokers (ORBs) are therefore
increasingly being seen as a problem. To help address this problem, this paper discusses a GIOP implementation which has high
performance and quality of service support as explicit goals. The implementation, which is embedded in a research ORB called
Gopi, is modular and extensible in nature and includes novel optimization techniques which should be separately portable to other
ORB environments. This paper focuses on the message protocol aspects of Gopi's GIOP implementation; higher layer issues such as marshalling and operation demultiplexing are not covered in detail. Figures
are provided which position Gopi's GIOP performance against comparable ORBs. The paper also discusses some of the design decisions that have been made in
the development of the GIOP protocol in the light of our implementation experience.
Received: May 2000 / Accepted: December 2000
14.
The elicitation or communication of user requirements comprises an early and critical but highly error-prone stage in system
development. Socially oriented methodologies provide more support for user involvement in design than the rigidity of more
traditional methods, improving user–designer communication and the ‘capture’ of requirements. A more emergent
and collaborative view of requirements elicitation and communication is required to encompass the user, contextual and organisational
factors. Drawing on the accompanying literature on communication issues in requirements elicitation, a four-dimensional framework
is outlined and used to appraise comparatively four different methodologies seeking to promote a closer working relationship
between users and designers. The facilitation of communication between users and designers is subject to discussion of the
ways in which communicative activities can be ‘optimised’ for successful requirements gathering, by making recommendations
based on the four dimensions to provide fruitful considerations for system designers.
15.
Variability is a central concept in software product family development. Variability empowers constructive reuse and facilitates
the derivation of different, customer-specific products from the product family. If many customer-specific requirements can
be realised by exploiting the product family variability, the reuse achieved is obviously high. If not, the reuse is low.
It is thus important that the variability of the product family is adequately considered when eliciting requirements from
the customer.
In this paper we sketch the challenges for requirements engineering for product family applications. More precisely we elaborate
on the need to communicate the variability of the product family to the customer. We differentiate between variability aspects
which are essential for the customer and aspects which are more related to the technical realisation and need thus not be
communicated to the customer. Motivated by the successful usage of use cases in single product development we propose use
cases as communication medium for the product family variability. We discuss and illustrate which customer relevant variability
aspects can be represented with use cases, and for which aspects use cases are not suitable. Moreover we propose extensions
to use case diagrams to support an intuitive representation of customer relevant variability aspects.
Received: 14 October 2002 / Accepted: 8 January 2003
Published online: 27 February 2003
This work was partially funded by the CAFé project “From Concept to Application in System Family Engineering”; Eureka Σ! 2023
Programme, ITEA Project ip00004 (BMBF, Förderkennzeichen 01 IS 002 C) and the state of North Rhine-Westphalia. This paper is a
significant extension of the paper “Modellierung der Variabilität einer Produktfamilie” [15].
16.
Handling message semantics with Generic Broadcast protocols (total citations: 1; self: 0; others: 1)
Summary. Message ordering is a fundamental abstraction in distributed systems. However, ordering guarantees are usually purely “syntactic,”
that is, message “semantics” is not taken into consideration despite the fact that in several cases semantic information about
messages could be exploited to avoid ordering messages unnecessarily. In this paper we define the Generic Broadcast problem, which orders messages only if needed, based on the semantics of the messages. The semantic information about messages
is introduced by conflict relations. We show that Reliable Broadcast and Atomic Broadcast are special instances of Generic
Broadcast. The paper also presents two algorithms that solve Generic Broadcast.
Received: August 2000 / Accepted: August 2001
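The conflict relation can be made concrete with a hypothetical example (bank-account semantics invented here for illustration, not taken from the paper): deposits commute with each other and need no mutual ordering, while a withdrawal conflicts with every other operation.

```python
def conflicts(m1, m2):
    """Hypothetical conflict relation: deposits commute with each
    other; withdrawals conflict with everything."""
    return m1[0] == "withdraw" or m2[0] == "withdraw"

def needs_ordering(batch):
    """Generic Broadcast pays the ordering cost only for messages
    related by the conflict relation."""
    return any(conflicts(a, b)
               for i, a in enumerate(batch)
               for b in batch[i + 1:])
```

With the empty conflict relation this degenerates to Reliable Broadcast (nothing is ordered); with the total relation it degenerates to Atomic Broadcast (everything is ordered), matching the special instances mentioned above.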
17.
Summary. We prove the existence of a “universal” synchronous self-stabilizing protocol, that is, a protocol that allows a distributed
system to stabilize to a desired nonreactive behaviour (as long as a protocol stabilizing to that behaviour exists). Previous
proposals required drastic increases in asymmetry and knowledge to work, whereas our protocol does not use any additional
knowledge, and does not require more symmetry-breaking conditions than available; thus, it is also stabilizing with respect
to dynamic changes in the topology. We prove an optimal quiescence time n+D for a synchronous network of n processors and diameter D; the protocol can be made finite state with a negligible loss in quiescence time. Moreover, an optimal D+1 protocol is given for the case of unique identifiers. As a consequence, we provide an effective proof technique that allows
one to show whether self-stabilization to a certain behaviour is possible under a wide range of models.
Received: January 1999 / Accepted: July 2001
18.
Erich Hartmann 《The Visual computer》2001,17(7):445-456
A surface can be represented in normalform as h(x) = 0 with ∥∇h∥ = 1. The normalform function h is (unlike the latter cases) not differentiable at curve points. Despite this disadvantage, the normalform is a suitable
tool for designing surfaces which can be treated as common implicit surfaces. Many examples (bisector surfaces, constant distance
sum/product surfaces, metamorphoses, blending surfaces, smooth approximation surfaces) demonstrate applications of the normalform
to surface design.
Published online: 25 July 2001
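A minimal numeric illustration of the normalform property (a sketch under the signed-distance reading; names are hypothetical): for a sphere, h(x) = ∥x − c∥ − r vanishes on the surface and satisfies ∥∇h∥ = 1 away from the center.

```python
import math

def h_sphere(p, center=(0.0, 0.0, 0.0), r=1.0):
    """Normalform of a sphere: signed distance to the surface."""
    return math.dist(p, center) - r

def grad_norm(f, p, eps=1e-6):
    """Central-difference estimate of the gradient magnitude of f at p."""
    g = []
    for i in range(len(p)):
        hi, lo = list(p), list(p)
        hi[i] += eps
        lo[i] -= eps
        g.append((f(hi) - f(lo)) / (2 * eps))
    return math.hypot(*g)
```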
19.
Carlo Combi Giuseppe Pozzi 《The VLDB Journal The International Journal on Very Large Data Bases》2001,9(4):294-311
The granularity of given temporal information is the level of abstraction at which information is expressed. Different units of measure allow
one to represent different granularities. Indeterminacy is often present in temporal information given at different granularities:
temporal indeterminacy is related to incomplete knowledge of when the considered fact happened. Focusing on temporal databases, different granularities
and indeterminacy have to be considered in expressing valid time, i.e., the time at which the information is true in the modeled
reality. In this paper, we propose HMAP (the term is a transliteration of an ancient Greek poetical word meaning “day”), a temporal data model extending the capability
of defining valid times with different granularity and/or with indeterminacy. In HMAP, absolute intervals are explicitly represented by their start, end, and duration: in this way, we can represent valid times such as “in December 1998 for five hours”, “from July 1995, for 15 days”, “from March
1997 to October 15, 1997, between 6 and 6:30 p.m.”. HMAP is based on a three-valued logic, for managing uncertainty in temporal relationships. Formulas involving different temporal
relationships between intervals, instants, and durations can be defined, allowing one to query the database with different
granularities, not necessarily related to that of data. In this paper, we also discuss the complexity of algorithms, allowing
us to evaluate HMAP formulas, and show that the formulas can be expressed as constraint networks falling into the class of simple temporal problems,
which can be solved in polynomial time.
Received 6 August 1998 / Accepted 13 July 2000 / Published online: 13 February 2001
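The three-valued flavour of such temporal comparisons can be sketched for indeterminate instants known only up to (earliest, latest) bounds (a simplification for illustration, not the HMAP definitions):

```python
TRUE, FALSE, UNDEF = "true", "false", "undefined"

def before(a, b):
    """Three-valued 'a before b' for indeterminate instants given as
    (earliest, latest) bounds."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    if a_hi < b_lo:      # a certainly ends before b can start
        return TRUE
    if a_lo >= b_hi:     # a cannot start before b's latest instant
        return FALSE
    return UNDEF         # the bounds overlap: truth value unknown
```

Queries at a coarser granularity than the stored data naturally produce such overlapping bounds, which is where the third truth value earns its keep.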
20.
Shared memory provides a convenient programming model for parallel applications. However, such a model is provided on physically
distributed memory systems at the expense of execution efficiency. For this reason, applications can
give minimum consistency requirements on the memory system, thus allowing alternatives to the shared memory model to be used
which exploit the underlying machine more efficiently. To be effective, these requirements need to be specified in a precise
way and to be amenable to formal analysis. Most approaches to formally specifying consistency conditions on memory systems
have been from the viewpoint of the machine rather than from the application domain.
In this paper we show how requirements on memory systems can be given from the viewpoint of the application domain formally
in a first-order theory MemReq, to improve the requirements engineering process for such systems. We show the general use of MemReq in expressing major classes of requirements for memory systems and conduct a case study of the use of MemReq in a real-life parallel system out of which the formalism arose.
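As a much weaker, machine-level cousin of such consistency requirements, one can check that a proposed total order of memory operations is legal, i.e., every read returns the most recent write to its location (a hypothetical sketch, not the first-order MemReq theory):

```python
def legal_serialization(ops):
    """Check that a proposed total order of memory operations is
    legal: every read returns the value of the most recent write
    to the same location."""
    mem = {}  # location -> last written value
    for op, loc, val in ops:
        if op == "write":
            mem[loc] = val
        elif op == "read" and mem.get(loc) != val:
            return False  # read saw a stale or unwritten value
    return True
```

Consistency conditions such as sequential consistency are then statements about the existence of such a legal order compatible with each process's program order.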