Found 20 similar documents; search took 62 ms.
1.
The traditional style of working with computers generally revolves around the computer being used as a tool, with individual
users directly initiating operations and waiting for their results. A more recent paradigm of human-computer interaction,
based on the indirect management of computing resources, is agent-based interaction. The idea of delegation plays a key part
in this approach to computer-based work, which allows individuals to relinquish the routine, mechanistic parts of their everyday
tasks, having them performed automatically instead. Adaptive interfaces combine elements of both these approaches, where the
goal is to have the interface adapt to its users rather than the reverse. This paper addresses some of the issues arising
from a practical software development process which aimed to support individuals using this style of interaction. This paper
documents the development of a set of classes which implement an architecture for adaptive interfaces. These classes are intended
to be used as part of larger user interface systems which are to exhibit adaptive behaviour. One approach to the implementation
of an adaptive interface is to use a set of software “agents”– simple processes which effectively run “in the background”–
to decompose the task of implementing the interface. These agents form part of a larger adaptive interface architecture, which
in turn forms a component of the adaptive system.
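The agent decomposition described above can be sketched in outline. Everything below — the class names, the event model, and the "offer a macro" adaptation — is an illustrative assumption, not the authors' actual class library.

```python
class Agent:
    """A background agent that watches user events and may suggest an adaptation."""
    def observe(self, event):
        raise NotImplementedError

class RepetitionAgent(Agent):
    """Detects repeated commands and offers to automate them (invented example agent)."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.counts = {}

    def observe(self, event):
        self.counts[event] = self.counts.get(event, 0) + 1
        if self.counts[event] >= self.threshold:
            return f"offer-macro:{event}"  # proposed adaptation
        return None

class AdaptiveInterface:
    """Dispatches each user event to every registered agent, collecting suggestions."""
    def __init__(self, agents):
        self.agents = agents

    def handle(self, event):
        return [s for a in self.agents for s in [a.observe(event)] if s]

ui = AdaptiveInterface([RepetitionAgent(threshold=2)])
ui.handle("open-file")
print(ui.handle("open-file"))  # ['offer-macro:open-file']
```

The point of the decomposition is that each agent runs "in the background" of the dispatch loop and can be added or removed without touching the rest of the interface.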
2.
Gerd Schürmann 《Multimedia Systems》1996,4(5):281-295
Electronic mail for traditional text exchange, as an asynchronous means of communication between computer users, is widely
used in many application areas. Whereas multimedia mail systems – including text, graphics, still images, audio, video and
documents – were long limited to isolated communities, at least two very promising approaches are now under development: MIME
(Multipurpose Internet Mail Extensions), an extension of Internet mail, and the Multimedia Teleservice based on CCITT Recommendation
X.400(88), being developed within the BERKOM project funded by the German TELEKOM. In the latter, the store-and-forward mechanism
inherent to electronic mail is complemented by an additional exchange mechanism allowing the resolution of references to message
content, e.g. video. Such references may be put into a message in place of the content itself. Internet/MIME and OSI/X.400,
their interworking, asynchronous information server access via multimedia mail, and possible future developments, especially
in the area of asynchronous Computer Supported Cooperative Work (CSCW), are discussed.
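The closest standard MIME mechanism to the reference-resolution idea above is the `message/external-body` media type, where the message carries access information instead of the bulky content itself. The sketch below builds such a message with Python's standard `email` package; the addresses, host and file name are invented for illustration.

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.base import MIMEBase

msg = MIMEMultipart("mixed")
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Subject"] = "Multimedia mail with external video reference"

msg.attach(MIMEText("The demo recording is referenced below.", "plain"))

# RFC 2046 message/external-body: the part holds access information,
# and the receiver fetches the actual content on demand.
ref = MIMEBase("message", "external-body")
ref.set_param("access-type", "anon-ftp")
ref.set_param("site", "ftp.example.org")
ref.set_param("name", "demo.mpg")
ref.set_payload("Content-Type: video/mpeg\n\n")
msg.attach(ref)

print(msg["Subject"])
```

Mailing the video itself would use an ordinary `video/mpeg` part instead; the external-body variant is what keeps the store-and-forward message small.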
3.
Jean-Charles Pomerol 《Requirements Engineering》1998,3(3-4):174-181
In this paper, we address the question of how flesh and blood decision makers manage the combinatorial explosion in scenario
development for decision making under uncertainty. The first assumption is that the decision makers try to undertake ‘robust’
actions. For the decision maker a robust action is an action that has sufficiently good results whatever the events are. We
examine the psychological as well as the theoretical problems raised by the notion of robustness. Finally, we address the
false sense of ‘risk control’ reported by decision makers. We argue that this feeling of ‘risk control’ results from the belief
that one can postpone action until after nature moves. This ‘action postponement’ amounts to changing look-ahead reasoning into diagnosis.
We illustrate these ideas in the framework of software development and examine some possible implications for requirements
analysis.
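The notion of a robust action — one with sufficiently good results whatever the events are — is close to the classical maximin rule. The toy sketch below, with invented payoffs over software-development scenarios, shows that reading; it is an interpretation, not the paper's formal model.

```python
# Payoff of each action under each uncertain event (numbers invented).
payoff = {
    "prototype-first":    {"requirements-stable": 7, "requirements-shift": 6},
    "big-design-upfront": {"requirements-stable": 9, "requirements-shift": 2},
}

def robust_action(payoff):
    """Return the action maximizing the worst-case payoff (maximin)."""
    return max(payoff, key=lambda a: min(payoff[a].values()))

print(robust_action(payoff))  # prototype-first: worst case 6 beats worst case 2
```

Note how the maximin choice forgoes the best possible outcome (9) for protection against the worst one (2) — exactly the trade-off the abstract attributes to robustness-seeking decision makers.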
4.
In control systems, the interfaces between software and its embedding environment are a major source of costly errors. For
example, Lutz reported that 20–35% of the safety-related errors discovered during integration and system testing of two spacecraft
were related to the interfaces between the software and the embedding hardware. Also, the software’s operating environment
is likely to change over time, further complicating the issues related to system-level inter-component communication. In this
paper we discuss a formal approach to the specification and analysis of inter-component communication using a revised version
of RSML (Requirements State Machine Language). The formalism allows rigorous specification of the physical aspects of the
inter-component communication and forces encapsulation of communication-related properties in well-defined and easy-to-read
interface specifications. This enables us both to analyse a system design to detect incompatibilities between connected components
and to use the interface specifications as safety kernels to enforce safety constraints.
5.
The concept of multiplicity in UML derives from that of cardinality in entity-relationship modeling techniques. The UML documentation
defines this concept but at the same time acknowledges some lack of obviousness in the specification of multiplicities for
n-ary associations. This paper shows an ambiguity in the definition given by the UML documentation and proposes a clarification
to this definition, as well as the use of outer and inner multiplicities as a simple extension to the current notation to
represent other multiplicity constraints, such as participation constraints, that are equally valuable in understanding n-ary
associations.
Initial submission: 16 January 2002 / Revised submission: 17 October 2002
Published online: 2 December 2002
* A previous shorter version of this paper was presented under the title “Semantics of the Minimum Multiplicity in Ternary
Associations in UML” at The 4th International Conference on the Unified Modeling Language-UML’2001, October 1–5 2001, Toronto,
Ontario, Canada, Springer Verlag, LNCS 2185, pp. 329–341.
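The usual UML reading touched on above — the multiplicity at one end of an n-ary association constrains how many instances may co-occur with each combination of instances at all the *other* ends — can be made concrete with a small checker. The data model and function below are invented for illustration.

```python
from collections import defaultdict

def check_max(tuples, end, maximum):
    """Check a UML-style maximum multiplicity at position `end` of an
    n-ary association: for each combination of values at the other ends,
    at most `maximum` distinct values may appear at `end`."""
    seen = defaultdict(set)
    for t in tuples:
        others = tuple(v for i, v in enumerate(t) if i != end)
        seen[others].add(t[end])
    return all(len(vals) <= maximum for vals in seen.values())

# (teacher, course, room) links of a ternary association
links = [("ann", "math", "r1"), ("ann", "math", "r2"), ("bob", "cs", "r1")]
print(check_max(links, end=2, maximum=1))  # False: (ann, math) occurs with two rooms
```

Participation constraints (the "inner" multiplicities the paper argues for) quantify over a single end rather than over combinations of the other ends, which is precisely why the standard notation cannot express them.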
6.
Integration – supporting multiple application classes with heterogeneous performance requirements – is an emerging trend
in networks, file systems, and operating systems. We evaluate two architectural alternatives – partitioned and integrated
– for designing next-generation file systems. Whereas a partitioned server employs a separate file system for each application
class, an integrated file server multiplexes its resources among all application classes; we evaluate the performance of the
two architectures with respect to sharing of disk bandwidth among the application classes. We show that although the problem
of sharing disk bandwidth in integrated file systems is conceptually similar to that of sharing network link bandwidth in
integrated services networks, the arguments that demonstrate the superiority of integrated services networks over separate
networks are not applicable to file systems. Furthermore, we show that: an integrated server outperforms the partitioned server
in a large operating region and has slightly worse performance in the remaining region; the capacity of an integrated server
is larger than that of the partitioned server; and an integrated server outperforms the partitioned server by a factor of
up to 6 in the presence of bursty workloads.
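The bursty-workload advantage reported above has a simple intuition: a static partition cannot lend one class's idle bandwidth to another, while an integrated, work-conserving server can. The toy model below, with invented demands and policies, illustrates only that intuition, not the paper's evaluation.

```python
def partitioned(demands, shares):
    """Static split: each class is capped at its fixed share (total capacity 1.0)."""
    return {c: min(d, shares[c]) for c, d in demands.items()}

def integrated(demands, capacity=1.0):
    """Work-conserving split: bandwidth left idle by one class is
    redistributed among the classes that still have demand."""
    served = {c: 0.0 for c in demands}
    backlog = dict(demands)
    remaining = capacity
    while backlog and remaining > 1e-9:
        fair = remaining / len(backlog)
        for c in list(backlog):
            take = min(backlog[c], fair)
            served[c] += take
            backlog[c] -= take
            remaining -= take
            if backlog[c] < 1e-9:
                del backlog[c]
    return served

# A bursty real-time class next to a lightly loaded file-transfer class:
demands = {"real-time": 0.8, "file-transfer": 0.1}
print(partitioned(demands, {"real-time": 0.5, "file-transfer": 0.5}))
print(integrated(demands))
```

Under the static 50/50 split the real-time burst is clipped at 0.5 even though 0.4 of the capacity sits idle; the integrated server serves the full 0.8.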
7.
Theo C. Ruys Ed Brinksma 《International Journal on Software Tools for Technology Transfer (STTT)》2003,4(2):246-259
In this paper we take a closer look at the automated analysis of designs, in particular of verification by model checking.
Model checking tools are increasingly being used for the verification of real-life systems in an industrial context. In addition
to ongoing research aimed at curbing the complexity of dealing with the inherent state space explosion problem – which allows
us to apply these techniques to ever larger systems – attention must now also be paid to the methodology of model checking,
to decide how to use these techniques to their best advantage. Model checking “in the large” causes a substantial proliferation
of interrelated models and model checking sessions that must be carefully managed in order to control the overall verification
process. We show that in order to do this well both notational and tool support are required. We discuss the use of software
configuration management techniques and tools to manage and control the verification trajectory. We present Xspin/Project,
an extension to Xspin, which automatically controls and manages the validation trajectory when using the model checker Spin.
Published online: 18 June 2002
8.
Edward E. Cobb 《The VLDB Journal The International Journal on Very Large Data Bases》1997,6(3):173-190
Businesses today are searching for information solutions that enable them to compete in the global marketplace. To minimize
risk, these solutions must build on existing investments, permit the best technology to be applied to the problem, and be
manageable. Object technology, with its promise of improved productivity and quality in application development, delivers
these characteristics but, to date, its deployment in commercial business applications has been limited. One possible reason
is the absence of the transaction paradigm, widely used in commercial environments and essential for reliable business applications.
For object technology to be a serious contender in the construction of these solutions, three things are required:
– technology for transactional objects. In December 1994, the Object Management Group adopted a specification for an object
transaction service (OTS). The OTS specifies mechanisms for defining and manipulating transactions. Though derived from the X/Open distributed
transaction processing model, OTS contains additional enhancements specifically designed for the object environment. Similar
technology from Microsoft appeared at the end of 1995.
– methodologies for building new business systems from existing parts. Business process re-engineering is forcing businesses
to improve the operations that bring their products to market. Workflow computing, when used in conjunction with “object wrappers”, provides tools to both define and track the execution of business processes that leverage existing applications and infrastructure.
– an execution environment which satisfies the requirements of the operational needs of the business. Transaction processing
(TP) monitor technology, though widely accepted for mainframe transaction processing, has yet to enjoy similar success in
the client/server marketplace. Instead the database vendors, with their extensive tool suites, dominate. As object brokers
mature they will require many of the functions of today's TP monitors. Marrying these two technologies can produce a robust
execution environment which offers a superior alternative for building and deploying client/server applications.
Edited by Andreas Reuter, Received February 1995 / Revised August 1995 / Accepted May 1996
9.
This paper looks from an ethnographic viewpoint at the case of two information systems in a multinational engineering consultancy.
It proposes using the rich findings from ethnographic analysis during requirements discovery. The paper shows how context
– organisational and social – can be taken into account during an information system development process. Socio-technical
approaches are holistic in nature and provide opportunities to produce information systems utilising social science insights,
computer science technical competence and psychological approaches. These approaches provide fact-finding methods that are
appropriate to system participants’ and organisational stakeholders’ needs.
The paper recommends a method of modelling that results in a computerised information system data model that reflects the
conflicting and competing data and multiple perspectives of participants and stakeholders, and that improves interactivity
and conflict management.
10.
In video processing, a common first step is to segment the videos into physical units, generally called shots. A shot is a video segment that consists of one continuous action. In general, these physical units need to be clustered
to form more semantically significant units, such as scenes, sequences, programs, etc. This is the so-called story-based video
structuring. Automatic video structuring is of great importance for video browsing and retrieval. The shots or scenes are
usually described by one or several representative frames, called key frames. Viewed from a higher level, key frames of some shots might be redundant in terms of semantics. In this paper, we propose
automatic solutions to the problems of: (i) video partitioning, (ii) key frame computing, (iii) key frame pruning. For the
first problem, an algorithm called “net comparison” is devised. It is accurate and fast because it uses both statistical and
spatial information in an image and does not have to process the entire image. For the last two problems, we develop an original
image similarity criterion, which considers both spatial layout and detail content in an image. For this purpose, coefficients
of wavelet decomposition are used to derive parameter vectors accounting for the above two aspects. The parameters exhibit
(quasi-) invariant properties, thus making the algorithm robust for many types of object/camera motions and scaling variances.
The novel “seek and spread” strategy used in key frame computing allows us to obtain a large representative range for the
key frames. Inter-shot redundancy of the key frames is suppressed using the same image similarity measure. Experimental results
demonstrate the effectiveness and efficiency of our techniques.
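The paper's "net comparison" algorithm and wavelet-based similarity measure are not reproduced here; the sketch below shows only the generic baseline idea that such partitioning methods refine: declare a shot boundary wherever consecutive frame signatures differ by more than a threshold. The two-bin histogram "frames" and the threshold are invented for illustration.

```python
def histogram_distance(h1, h2):
    """L1 distance between two normalized grey-level histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_shot_boundaries(frames, threshold=0.5):
    """Indices i where frame i starts a new shot."""
    return [i for i in range(1, len(frames))
            if histogram_distance(frames[i - 1], frames[i]) > threshold]

frames = [[1.0, 0.0], [0.9, 0.1],   # shot 1: gradual change within a shot
          [0.1, 0.9], [0.0, 1.0]]   # shot 2: abrupt cut at index 2
print(detect_shot_boundaries(frames))  # [2]
```

Global histograms ignore spatial layout, which is exactly the weakness the paper's spatial-plus-statistical comparison and wavelet-based criterion are designed to overcome.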
11.
In this paper we argue that substitution-based function allocation methods (such as MABA-MABA, or Men-Are-Better-At/Machines-Are-Better-At
lists) cannot provide progress on human–automation co-ordination. Quantitative ‘who does what’ allocation does not work because
the real effects of automation are qualitative: it transforms human practice and forces people to adapt their skills and routines.
Rather than re-inventing or refining substitution-based methods, we propose that the more pressing question on human–automation
co-ordination is ‘How do we make them get along together?’
Correspondence and offprint requests to: S. W. A. Dekker, Department of Mechanical Engineering, IKP, Linköping Institute of Technology, SE-581 83 Linköping, Sweden.
Tel.: +46 13 281646; Fax: +46 13 282579; Email: sidde@ikp.liu.se
12.
This article offers a research update on a 3-year programme initiated by the Kamloops Art Gallery and the University College
of the Cariboo in Kamloops, British Columbia. The programme is supported by a ‘Community–University Research Alliance’ grant
from the Social Sciences and Humanities Research Council of Canada, and the collaboration focuses on the cultural future of
small cities – on how cultural and arts organisations work together (or fail to work together) in a small city setting. If
not by definition, then certainly by default, ‘culture’ is associated with big city life: big cities are equated commonly
with ‘big culture’; small cities with something less. The Cultural Future of Small Cities research group seeks to provide
a more nuanced view of what constitutes culture in a small Canadian city. In particular, the researchers are exploring notions
of social capital and community asset building: in this context, ‘visual and verbal representation’, ‘home’, ‘community’ and
the need to define a local ‘sense of place’ have emerged as important themes. As the Small Cities programme begins its second
year, a unique but key aspect has become the artist-as-researcher.
Correspondence and offprint requests to: L. Dubinsky, Kamloops Art Gallery, 101–465 Victoria Street, Kamloops, BC V2C 2A9 Canada. Tel.: 250-828-3543; Email: ldubinsky@museums.ca
13.
E. Francesconi M. Gori S. Marinai G. Soda 《International Journal on Document Analysis and Recognition》2001,3(3):160-168
In this paper we describe the connectionist-based classification engine of an OCR system. The classification engine is based
on a new modular connectionist architecture, where a multilayer perceptron (MLP) acting as a classifier is properly combined
with a set of autoassociators – one for each class – trained to copy the input to the output layer. The MLP-based classifier
selects a small group of classes with high scores, which are afterwards verified by the corresponding autoassociators. The learning
samples used to train the classifiers are constructed by means of a synthetic noise generator starting from a few grey-level
characters labeled by the user. We report experimental results for comparing three neural architectures: an MLP-based classifier,
an autoassociator-based classifier, and the proposed combined architecture. The experiments show that the proposed architecture
exhibits the best performance, without increasing significantly the computational burden.
Received March 6, 2000 / Revised July 12, 2000
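The classifier/verifier combination described above can be sketched as two stages: the MLP proposes its highest-scoring classes, and the per-class autoassociators re-rank those candidates by reconstruction error. The networks are replaced by stub score tables here; all names and numbers are illustrative, not the authors' implementation.

```python
def top_k(scores, k=2):
    """Classes with the k highest classifier scores (the MLP's shortlist)."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

def verify(candidates, reconstruction_error):
    """Pick the candidate whose autoassociator reconstructs the input best."""
    return min(candidates, key=reconstruction_error)

scores = {"A": 0.70, "B": 0.25, "O": 0.05}  # MLP output (stub)
errors = {"A": 0.40, "B": 0.10, "O": 0.90}  # per-class autoassociator error (stub)

candidates = top_k(scores, k=2)             # ['A', 'B']
print(verify(candidates, errors.get))       # 'B' wins on reconstruction error
```

The two-stage shape is what keeps the cost down: only the shortlisted classes ever run their (more expensive) autoassociator check.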
14.
Fabio Casati Maria Grazia Fugini Isabelle Mirbel Barbara Pernici 《Requirements Engineering》2002,7(2):73-106
Workflow management systems are becoming a relevant support for a large class of business applications, and many workflow
models as well as commercial products are currently available. While the large availability of tools facilitates the development
and the fulfilment of customer requirements, workflow application development still requires methodological guidelines that
drive the developers in the complex task of rapidly producing effective applications. In fact, it is necessary to identify
and model the business processes, to design the interfaces towards existing cooperating systems, and to manage implementation
aspects in an integrated way. This paper presents the WIRES methodology for developing workflow applications under a uniform
modelling paradigm – UML modelling tools with some extensions – that covers the whole life cycle of these applications: from
conceptual analysis to implementation. High-level analysis is performed under different perspectives, including a business and an organisational perspective. Distribution, interoperability and cooperation with external information systems are considered in this early
stage. A set of “workflowability” criteria is provided in order to identify which candidate processes are suited to be implemented
as workflows. Non-functional requirements receive particular emphasis in that they are among the most important criteria for
deciding whether workflow technology can be actually useful for implementing the business process at hand. The design phase
tackles aspects of concurrency and cooperation, distributed transactions and exception handling. Reuse of component workflows,
available in a repository as workflow fragments, is a distinguishing feature of the method. Implementation aspects are presented
in terms of rules that guide in the selection of a commercial workflow management system suitable for supporting the designed
processes, coupled with guidelines for mapping the designed workflows onto the model offered by the selected system.
15.
Andrew Fano 《Personal and Ubiquitous Computing》2001,5(1):12-15
The promise of mobile devices lies not in their capacity to duplicate the capabilities of desktop machines, but rather in
their promise of enabling location-specific tasks. One of the challenges that must be addressed if they are to be used in
this way is how intuitive interfaces for mobile devices can be designed that enable access to location-specific services usable
across locations. We are developing a prototype mobile valet application that presents location-specific services organised
around the tasks associated with a location. The basic elements of the interface exploit commonalities in the way we address
tasks at various locations just as the familiar “file” and “edit” menus in various software applications exploit regularities
in software tasks.
16.
Data overload is a generic and tremendously difficult problem that has only grown with each new wave of technological capabilities.
As a generic and persistent problem, three observations are in need of explanation: Why is data overload so difficult to address?
Why has each wave of technology exacerbated, rather than resolved, data overload? How are people, as adaptive responsible
agents in context, able to cope with the challenge of data overload? In this paper, first we examine three different characterisations
that have been offered to capture the nature of the data overload problem and how they lead to different proposed solutions.
As a result, we propose that (a) data overload is difficult because of the context sensitivity problem – meaning lies, not
in data, but in relationships of data to interests and expectations and (b) new waves of technology exacerbate data overload
when they ignore or try to finesse context sensitivity. The paper then summarises the mechanisms of human perception and cognition
that enable people to focus on the relevant subset of the available data despite the fact that what is interesting depends
on context. By focusing attention on the root issues that make data overload a difficult problem and on people’s fundamental
competence, we have identified a set of constraints that all potential solutions must meet. Notable among these constraints
is the idea that organisation precedes selectivity. These constraints point toward regions of the solution space that have
been little explored. In order to place data in context, designers need to display data in a conceptual space that depicts
the relationships, events and contrasts that are informative in a field of practice.
17.
S. Bernardi S. Donatelli A. Horváth 《International Journal on Software Tools for Technology Transfer (STTT)》2001,3(4):417-430
An implementation of compositionality for stochastic well-formed nets (SWN) and, consequently, for generalized stochastic
Petri nets (GSPN) has been recently included in the GreatSPN tool. Given two SWNs and a labelling function for places and
transitions, it is possible to produce a third one as a superposition of places and transitions of equal label. Colour domains
and arc functions of SWNs have to be treated appropriately. The main motivation for this extension was the need to evaluate
a library of fault-tolerant “mechanisms” that have been recently defined, and are now under implementation, in a European
project called TIRAN. The goal of the TIRAN project is to devise a portable software solution to the problem of fault tolerance
in embedded systems, while the goal of the evaluation is to provide evidence of the efficacy of the proposed solution. Modularity
being a natural “must” for the project, we have tried to reflect it in our modelling effort. In this paper, we discuss the
implementation of compositionality in the GreatSPN tool, and we show its use for the modelling of one of the TIRAN mechanisms,
the so-called local voter.
Published online: 24 August 2001
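Label-based superposition as described above can be caricatured on plain, uncoloured transitions: transitions of equal label are fused into one, unlabelled ones stay separate. Colour domains and arc functions — the SWN-specific machinery the paper actually treats — are omitted, and the data model below is an invented illustration, not GreatSPN's.

```python
def superpose(net1, net2):
    """Compose two nets by fusing transitions that carry the same label;
    transitions without a shared label are kept as they are."""
    shared = (set(net1.values()) & set(net2.values())) - {None}
    kept = [t for net in (net1, net2)
            for t, lab in net.items() if lab not in shared]
    fused = [f"fused:{lab}" for lab in sorted(shared)]
    return kept + fused

# Each toy net maps transition name -> label (None = not exported)
controller = {"start": "go", "fail": None}
voter = {"vote": "go", "decide": None}
print(superpose(controller, voter))  # ['fail', 'decide', 'fused:go']
```

Fusing on labels rather than names is what makes the composition modular: the local voter model and each mechanism model only need to agree on a labelling, not on each other's internals.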
18.
Klaus Havelund Willem Visser 《International Journal on Software Tools for Technology Transfer (STTT)》2002,4(1):8-20
This paper introduces a special section of the STTT journal containing a selection of papers that were presented at the 7th
international SPIN workshop, Stanford, 30 August – 1 September 2000. The workshop was named SPIN Model Checking and Software
Verification, with an emphasis on model checking of programs. The paper outlines the motivation for stressing software verification,
rather than only design and model verification, by presenting the work done in the Automated Software Engineering group at
NASA Ames Research Center within the last 5 years. This includes work in software model checking, testing-like technologies
and static analysis.
Published online: 2 October 2002
19.
Karl H.E. Kroemer 《Universal Access in the Information Society》2001,1(2):99-160
This bibliography covers the period from 1878 through 1999. It contains, in chronological order, a thorough sampling of the
literature concerning the design and use of keyboards. The sources are selected and annotated to reflect the status of engineering
and technology know-how, and knowledge about ergonomic aspects of the use of the keyboards with, first, mechanical typewriters,
then electric typewriters and finally, from the 1960s on, computers. The bibliography illustrates the origin of Sholes’ 1878
QWERTY keyboard and its continued use in spite of its many shortcomings, which may be – at least partially – the reason for
cumulative trauma disorders in yesteryear’s typists and today’s keyboarders.
Published online: 6 September 2001
20.
This paper points out the necessity of careful decision making by nuclear power plant (NPP) operators, based on the critical
parameters of an NPP, to maintain safety when these parameters are out of range. Yet under strong time pressure, it is virtually
impossible to make optimal decisions in these conditions. The automation of recovery actions may therefore be needed. Considering
the requirements for such automation, the paper proposes an autonomous system in collaboration with the human (i.e., an agent
system) that will remain effective even during unforeseen conditions. The numerical simulation study showed the effectiveness
of the proposed system. The desired relationship between human and machine as a joint system, based on a new concept, is also
proposed.