20 similar documents found; search time: 15 ms
1.
The present paper proposes a methodological framework for the design and evaluation of information technology systems supporting
complex cognitive tasks. The aim of the methodological framework is to permit the design of systems which: (1) address the
cognitive difficulties met by potential users in performing complex problem-solving tasks; (2) improve potential users’
problem-solving performance; and (3) achieve compatibility with potential users’ competences and working environment. After
a short review of the weaknesses of existing systems supposed to support complex cognitive tasks, the theoretical foundations
of the proposed methodology are presented. These are the ergonomic work analysis of French ergonomists, cognitive engineering, cognitive anthropology–ethnomethodology and activity theory. The third section of the paper describes the generic ergonomic model, which constitutes a frame of reference useful for the analyst of the work situation to which the information technology
system is addressed. In the fourth section, the proposed methodology is outlined, and in the fifth a case study demonstrating
an application of the methodology is summarised. In the epilogue, the differences between the proposed methodological framework
and other more conventional approaches are discussed. Finally, directions for future developments of the problem-driven approach
are proposed.
2.
Björn Regnell Martin Höst Johan Natt och Dag Per Beremark Thomas Hjelm 《Requirements Engineering》2001,6(1):51-62
When developing packaged software, which is sold ‘off-the-shelf’ on a worldwide marketplace, it is essential to collect needs
and opportunities from different market segments and use this information in the prioritisation of requirements for the next
software release. This paper presents an industrial case study where a distributed prioritisation process is proposed, observed
and evaluated. The stakeholders in the requirements prioritisation process include marketing offices distributed around the
world. A major objective of the distributed prioritisation is to gather and highlight the differences and similarities in
the requirement priorities of the different market segments. The evaluation through questionnaires shows that the stakeholders
found the process useful. The paper also presents novel approaches to visualise the priority distribution among stakeholders,
together with measures on disagreement and satisfaction. Product management found the proposed charts valuable as decision
support when selecting requirements for the next release, as they revealed unforeseen differences among stakeholder priorities.
Conclusions on stakeholder tactics are provided and issues of further research are identified, including ways of addressing
identified challenges.
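The disagreement and satisfaction measures mentioned above can be illustrated with a minimal sketch. The specific formulas here (standard deviation as disagreement, weight coverage as satisfaction) and the stakeholder/requirement names are illustrative assumptions, not the measures defined in the paper:

```python
# Sketch of distributed-prioritisation aggregation with simple disagreement
# and satisfaction measures. Assumptions: each stakeholder assigns weights
# summing to 1; disagreement = sample std-dev across stakeholders;
# satisfaction = fraction of a stakeholder's weight covered by the release.
from statistics import mean, stdev

def aggregate(priorities):
    """priorities: {stakeholder: {requirement: weight}}."""
    reqs = sorted(next(iter(priorities.values())))
    combined = {r: mean(p[r] for p in priorities.values()) for r in reqs}
    disagreement = {r: stdev(p[r] for p in priorities.values()) for r in reqs}
    return combined, disagreement

def satisfaction(stakeholder, selected, all_reqs):
    """Share of this stakeholder's total priority weight in the selected set."""
    return sum(stakeholder[r] for r in selected) / sum(stakeholder[r] for r in all_reqs)
```

A requirement with a high combined priority but a high disagreement value is exactly the kind of unforeseen difference among market segments the charts are meant to reveal.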
3.
Abstract. Parallel systems provide an approach to robust computing. The motivation for this work arises from using modern parallel
environments in intermediate-level feature extraction. This study presents parallel algorithms for the Hough transform (HT)
and the randomized Hough transform (RHT). The algorithms are analyzed in two parallel environments: multiprocessor computers
and workstation networks. The results suggest that both environments are suitable for the parallelization of HT. Because the scalability
of the parallel RHT is weaker than that of HT, only the multiprocessor environment is suitable for it. The limited scalability forces
us to use adaptive techniques to obtain good results regardless of the number of processors. Despite the fact that the speedups
with HT are greater than with RHT, in terms of total computation time, the new parallel RHT algorithm outperforms the parallel
HT.
Received: 8 December 2001 / Accepted: 5 June 2002
Correspondence to: V. Kyrki
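The Hough transform at the core of this work parallelizes naturally over edge points: each worker fills a private accumulator that is summed at the end. A minimal sequential sketch of the accumulator (parameter choices are illustrative):

```python
# Minimal Hough transform for line detection over the (theta, rho) space.
# Parallelization over points is straightforward: split `points` into chunks,
# accumulate per chunk, and sum the accumulators, mirroring the
# multiprocessor strategy discussed in the abstract.
import math

def hough_lines(points, n_theta=180, rho_max=100):
    acc = [[0] * (2 * rho_max + 1) for _ in range(n_theta)]
    thetas = [math.pi * t / n_theta for t in range(n_theta)]
    for x, y in points:
        for t, theta in enumerate(thetas):
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            if -rho_max <= rho <= rho_max:
                acc[t][rho + rho_max] += 1   # vote for the line (theta, rho)
    return acc, thetas
```

The randomized variant instead samples point pairs and votes for the single line they define, which shrinks the accumulator but, as the abstract notes, scales less well across workstations.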
4.
Jocelyn Sérot Dominique Ginhac Roland Chapuis Jean-Pierre Dérutin 《Machine Vision and Applications》2001,12(6):271-290
We present a design methodology for real-time vision applications aiming at significantly reducing the design-implement-validate
cycle time on dedicated parallel platforms. This methodology is based upon the concept of algorithmic skeletons, i.e., higher
order program constructs encapsulating recurring forms of parallel computations and hiding their low-level implementation
details. Parallel programs are built by simply selecting and composing instances of skeletons chosen in a predefined basis.
A complete parallel programming environment was built to support the presented methodology. It comprises a library of vision-specific
skeletons and a chain of tools capable of turning an architecture-independent skeletal specification of an application into
an optimized, deadlock-free distributed executive for a wide range of parallel platforms. This skeleton basis was defined
after a careful analysis of a large corpus of existing parallel vision applications. The source program is a purely functional
specification of the algorithm in which the structure of a parallel application is expressed only as combination of a limited
number of skeletons. This specification is compiled down to a parametric process graph, which is subsequently mapped onto
the actual physical topology using a third-party CAD software. It can also be executed on any sequential platform to check
the correctness of the parallel algorithm. The applicability of the proposed methodology and associated tools has been demonstrated
by parallelizing several realistic real-time vision applications both on a multi-processor platform and a network of workstations.
It is here illustrated with a complete road-tracking algorithm based upon white-line detection. This experiment showed a dramatic
reduction in development times (hence the term fast prototyping), while keeping performances on par with those obtained with
the handcrafted parallel version.
Received: 22 July 1999 / Accepted: 9 November 2000
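The skeleton idea above can be sketched with higher-order functions: parallel patterns are captured once and programs are built only by composing instances. The skeleton names (`farm`, `pipe`) follow common usage in the skeletons literature; the paper's actual vision-specific basis differs in detail, and this sequential sketch only models the semantics:

```python
# Toy algorithmic skeletons: recurring parallel patterns as higher-order
# functions. A real implementation maps farm workers to processes; here the
# sequential semantics stand in for it.
def farm(worker):
    """Data parallelism: apply worker independently to each item."""
    return lambda items: [worker(x) for x in items]

def pipe(*stages):
    """Task parallelism: chain stages so outputs feed the next stage."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

# A "vision" program expressed purely as a composition of skeleton instances:
binarize = farm(lambda px: 1 if px > 128 else 0)   # hypothetical worker
count_bright = sum
program = pipe(binarize, count_bright)
```

Because the program is a pure composition, it can run unchanged on a sequential platform to check correctness, exactly the property the abstract highlights.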
5.
Deepak Kapur Mahadevan Subramaniam 《International Journal on Software Tools for Technology Transfer (STTT)》2000,3(1):32-65
We show that existing theorem proving technology can be used effectively for mechanically verifying a family of arithmetic
circuits. A theorem prover implementing (i) a decision procedure for quantifier-free Presburger arithmetic with uninterpreted
function symbols, (ii) conditional rewriting, and (iii) heuristics for carefully selecting induction schemes from terminating
recursive function definitions, all well integrated with backtracking, can automatically verify number-theoretic properties
of parameterized and generic adders, multipliers and division circuits. This is illustrated using our theorem prover Rewrite Rule Laboratory (RRL). To our knowledge, this is the first such demonstration of the capabilities of a theorem prover mechanizing induction.
The above features of RRL are briefly discussed using illustrations from the verification of adder, multiplier and division
circuits. Extensions to the prover likely to make it even more effective for hardware verification are discussed. Furthermore,
it is believed that these results are scalable, and the proposed approach is likely to be effective for other arithmetic circuits
as well.
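For contrast with the inductive proofs RRL produces, the number-theoretic specification of a parameterized adder can at least be checked exhaustively for small widths. This sketch is plain testing, not mechanized induction, and the bit-vector encoding is an assumption:

```python
# Parameterized ripple-carry adder over little-endian bit vectors, checked
# exhaustively against the arithmetic spec a + b for small widths. RRL proves
# the same property for all widths by induction; this only samples it.
def ripple_add(a_bits, b_bits):
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)                  # sum bit of full adder
        carry = (a & b) | (carry & (a ^ b))        # carry-out of full adder
    return out + [carry]

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

def verify(width):
    """Check the number-theoretic spec for every input pair of this width."""
    return all(from_bits(ripple_add(to_bits(a, width), to_bits(b, width))) == a + b
               for a in range(2 ** width) for b in range(2 ** width))
```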
6.
This paper looks from an ethnographic viewpoint at the case of two information systems in a multinational engineering consultancy.
It proposes using the rich findings from ethnographic analysis during requirements discovery. The paper shows how context
– organisational and social – can be taken into account during an information system development process. Socio-technical
approaches are holistic in nature and provide opportunities to produce information systems utilising social science insights,
computer science technical competence and psychological approaches. These approaches provide fact-finding methods that are
appropriate to system participants’ and organisational stakeholders’ needs.
The paper recommends a method of modelling that results in a computerised information system data model that reflects the
conflicting and competing data and multiple perspectives of participants and stakeholders, and that improves interactivity
and conflict management.
7.
UnQL: a query language and algebra for semistructured data based on structural recursion
Peter Buneman Mary Fernandez Dan Suciu 《The VLDB Journal The International Journal on Very Large Data Bases》2000,9(1):76-110
Abstract. This paper presents structural recursion as the basis of the syntax and semantics of query languages for semistructured data
and XML. We describe a simple and powerful query language based on pattern matching and show that it can be expressed using
structural recursion, which is introduced as a top-down, recursive function, similar to the way XSL is defined on XML trees.
On cyclic data, structural recursion can be defined in two equivalent ways: as a recursive function which evaluates the data
top-down and remembers all its calls to avoid infinite loops, or as a bulk evaluation which processes the entire data in parallel
using only traditional relational algebra operators. The latter makes it possible for optimization techniques in relational
queries to be applied to structural recursion. We show that the composition of two structural recursion queries can be expressed
as a single such query, and this is used as the basis of an optimization method for mediator systems. Several other formal
properties are established: structural recursion can be expressed in first-order logic extended with transitive closure; its
data complexity is PTIME; and over relational data it is a conservative extension of the relational calculus. The underlying
data model is based on value equality, formally defined with bisimulation. Structural recursion is shown to be invariant with
respect to value equality.
Received: July 9, 1999 / Accepted: December 24, 1999
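The memoized top-down evaluation described above can be sketched concretely: the recursion remembers the nodes it has visited, so cyclic data cannot cause infinite loops. The graph encoding (`{node: [(edge_label, successor)]}`) and the particular query (collect reachable edge labels) are illustrative assumptions, not UnQL syntax:

```python
# Structural recursion over edge-labelled graph data, evaluated top-down with
# remembered calls so that cycles terminate, as described in the abstract.
def reachable_labels(graph, node, seen=None):
    """Collect every edge label reachable from `node`; safe on cyclic data."""
    if seen is None:
        seen = set()
    if node in seen:          # call already made: cut off the cycle
        return set()
    seen.add(node)
    labels = set()
    for label, succ in graph.get(node, []):
        labels.add(label)
        labels |= reachable_labels(graph, succ, seen)
    return labels
```

The equivalent bulk formulation would compute the same result with relational operators over the whole edge set, which is what opens the door to relational optimization.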
8.
Sunil Prabhakar Divyakant Agrawal Amr El Abbadi Ambuj Singh Terence Smith 《Multimedia Systems》2003,8(6):459-469
Abstract. With rapid advances in computer and communication technologies, there is an increasing demand to build and maintain large
image repositories. To reduce the demands on I/O and network resources, multi-resolution representations are being proposed
for the storage organization of images. Image decomposition techniques such as wavelets can be used to provide these multi-resolution images. The original image is represented by several coefficients, one of them
with visual similarity to the original image, but at a lower resolution. These visually similar coefficients can be thought
of as thumbnails or icons of the original image. This paper addresses the problem of storing these multi-resolution coefficients on disks so that thumbnail
browsing as well as image reconstruction can be performed efficiently. Several strategies are evaluated to store the image
coefficients on parallel disks. These strategies can be classified into two broad classes, depending on whether the access
pattern of the images is used in the placement. Disk simulation is used to evaluate the performance of these strategies. Simulation
results are validated against experiments with real disks, and are found to be in good qualitative agreement. The
results indicate that significant performance improvements can be achieved with as few as four disks by placing image coefficients
based upon browsing access patterns.
Work supported by a research grant from NSF/ARPA/NASA IRI9411330 and NSF instrumentation grant CDA-9421978 and NSF Career
grant No. IIS-9985019, and NSF grant 0010044-CCR.
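The multi-resolution decomposition the abstract refers to can be illustrated with one level of a 2-D Haar transform: the averaged (LL) band is the visually similar, lower-resolution coefficient that serves as the thumbnail. The abstract does not commit to Haar specifically; this is a minimal stand-in for "wavelets":

```python
# One level of a 2-D Haar decomposition. The LL band is the thumbnail-like
# coefficient; LH/HL/HH hold the detail needed to reconstruct full resolution.
def haar2d_level(img):
    """img: 2-D list with even dimensions. Returns (LL, LH, HL, HH) bands."""
    ll, lh, hl, hh = [], [], [], []
    for i in range(0, len(img), 2):
        ll_r, lh_r, hl_r, hh_r = [], [], [], []
        for j in range(0, len(img[0]), 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll_r.append((a + b + c + d) / 4)   # average: the thumbnail pixel
            lh_r.append((a - b + c - d) / 4)   # horizontal detail
            hl_r.append((a + b - c - d) / 4)   # vertical detail
            hh_r.append((a - b - c + d) / 4)   # diagonal detail
        ll.append(ll_r); lh.append(lh_r); hl.append(hl_r); hh.append(hh_r)
    return ll, lh, hl, hh
```

The placement question the paper studies is then which disks should hold the LL bands (hot, for browsing) versus the detail bands (needed only for reconstruction).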
9.
In the light of the developing discourse on the relative merits of ‘hard’ and ‘soft’ approaches to information systems development,
we present a case study application of a methodology which attempts to dissolve such dualities. Personal Construct Psychology
(PCP) offers, as a unity, the construing person who is both biology and culture. PCP argues that both the world and the person’s
construct system are phenomenologically real and that the viability of any particular construct system depends only on its
usefulness to the construing person. In this study, we used PCP to explore the organisational context of information use and
distribution in a large hospital. We used repertory grids, a PCP technique, to elicit from 16 members of staff their personal
construals of information from different sources in the hospital. The results highlight the relationship between meaningful
information and meaningfully active relationships, a theme which we discuss in terms of the development of the hospital information
system and in terms of the value of PCP in dissolving hard–soft dichotomies.
10.
The paper deals with the problems of staircase artifacts and low-contrast boundary smoothing in filtering magnetic resonance
(MR) brain tomograms based on geometry-driven diffusion (GDD). A novel method of model-based GDD filtering of MR
brain tomograms is proposed to tackle these problems. It is based on a local adaptation of the conductance that is defined
for each diffusion iteration within the variable limits. The local adaptation uses a neighborhood inhomogeneity measure, pixel
dissimilarity, while gradient histograms of MR brain template regions are used as the variable limits for the conductance.
A methodology is developed for implementing the template image selected from an MR brain atlas to the model-based GDD filtering.
The proposed method is tested on an MR brain phantom. The methodology developed is exemplified on the real MR brain tomogram
with the corresponding template selected from the Brainweb. The performance of the developed algorithms is evaluated quantitatively
and visually.
Received: 1 September 1998 / Accepted: 20 August 2000
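The locally adapted conductance can be sketched in one dimension. The adaptation rule below (clamping a neighbourhood dissimilarity to fixed limits) is an illustrative assumption; the paper derives its variable limits from gradient histograms of brain-template regions:

```python
# 1-D sketch of geometry-driven (Perona-Malik style) diffusion with a
# conductance parameter k adapted per pixel from local dissimilarity and
# clamped to [k_min, k_max], echoing the local adaptation in the abstract.
def diffuse(signal, iters=10, dt=0.2, k_min=1.0, k_max=10.0):
    s = list(signal)
    for _ in range(iters):
        nxt = s[:]
        for i in range(1, len(s) - 1):
            # local inhomogeneity measure: largest neighbour dissimilarity
            k = min(max(abs(s[i + 1] - s[i]), abs(s[i] - s[i - 1]), k_min), k_max)
            def g(grad):                       # conductance: small at edges
                return 1.0 / (1.0 + (grad / k) ** 2)
            right = s[i + 1] - s[i]
            left = s[i - 1] - s[i]
            nxt[i] = s[i] + dt * (g(abs(right)) * right + g(abs(left)) * left)
        s = nxt
    return s
```

A large step in the signal keeps its conductance low and so survives smoothing, which is how such schemes avoid blurring low-contrast boundaries away.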
11.
Odd-Wiking Rahlff Rolf Kenneth Rolfsen Jo Herstad 《Personal and Ubiquitous Computing》2001,5(1):50-53
Wearables are often described with a focus on providing the user with wearable information access and communication means.
The contextual information retrieval aspect is, however, an essential feature of such systems, as in, for example, the Remembrance Agent [1] where manually entered search-terms
are used for presenting relevant situational information, or as in different location-based systems [2]. In this position paper we outline a general framework of contextually aware wearable systems, and suggest how such mechanisms,
collecting massive traces of the user context, may lead to several other interesting uses in what we will call context trace technology.
12.
Sérgio Vale Aguiar Campos Edmund Clarke 《International Journal on Software Tools for Technology Transfer (STTT)》1999,2(3):260-269
The task of checking if a computer system satisfies its timing specifications is extremely important. These systems are often
used in critical applications where failure to meet a deadline can have serious or even fatal consequences. This paper presents
an efficient method for performing this verification task. In the proposed method a real-time system is modeled by a state-transition
graph represented by binary decision diagrams. Efficient symbolic algorithms exhaustively explore the state space to determine
whether the system satisfies a given specification. In addition, our approach computes quantitative timing information such
as minimum and maximum time delays between given events. These results provide insight into the behavior of the system and
assist in the determination of its temporal correctness. The technique evaluates how well the system works or how seriously
it fails, as opposed to only whether it works or not. Based on these techniques a verification tool called Verus has been constructed. It has been used in the verification of several industrial real-time systems such as the robotics system
described below. This demonstrates that the method proposed is efficient enough to be used in real-world designs. The examples
verified show how the information produced can assist in designing more efficient and reliable real-time systems.
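The quantitative timing computation described above reduces, on an explicit state-transition graph, to shortest-path search: the minimum delay between two events is the fewest transitions from a start state to a state satisfying the target condition. Verus does this symbolically over BDDs; this explicit sketch only shows the underlying computation:

```python
# Minimum-delay computation on an explicit state-transition graph via BFS.
# `succ` maps each state to its successors; `target` is a predicate on states.
# Maximum delays need a longest-path analysis and are not shown here.
from collections import deque

def min_delay(succ, starts, target):
    """Minimum number of transitions from any start to a target state, or None."""
    frontier = deque((s, 0) for s in starts)
    seen = set(starts)
    while frontier:
        state, d = frontier.popleft()
        if target(state):
            return d
        for nxt in succ.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None        # target unreachable
```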
13.
Doe-Wan Kim Tapas Kanungo 《International Journal on Document Analysis and Recognition》2002,5(1):47-66
Geometric groundtruth at the character, word, and line levels is crucial for designing and evaluating optical character recognition
(OCR) algorithms. Kanungo and Haralick proposed a closed-loop methodology for generating geometric groundtruth for rescanned
document images. The procedure assumed that the original image and the corresponding groundtruth were available. It automatically
registered the original image to the rescanned one using four corner points and then transformed the original groundtruth
using the estimated registration transformation. In this paper, we present an attributed branch-and-bound algorithm for establishing
the point correspondence that uses all the data points. We group the original feature points into blobs and use corners of blobs for matching. The Euclidean distance
between character centroids is used as the error metric. We conducted experiments on synthetic point sets with varying layout
complexity to characterize the performance of two matching algorithms. We also report results on experiments conducted using
the University of Washington dataset. Finally, we show examples of application of this methodology for generating groundtruth
for microfilmed and FAXed versions of the University of Washington dataset documents.
Received: July 24, 2001 / Accepted: May 20, 2002
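The error metric above, Euclidean distance between character centroids, can be shown with a deliberately simple matcher. This greedy nearest-neighbour version is only a stand-in for the paper's attributed branch-and-bound search, which optimizes the correspondence globally:

```python
# Greedy point correspondence between groundtruth centroids (`src`) and
# rescanned centroids (`dst`) under the Euclidean metric, with a distance
# cutoff. Assumption: the threshold value is illustrative.
import math

def match_centroids(src, dst, max_dist=5.0):
    pairs, used = [], set()
    for i, (x1, y1) in enumerate(src):
        best, best_d = None, max_dist
        for j, (x2, y2) in enumerate(dst):
            d = math.hypot(x1 - x2, y1 - y2)
            if j not in used and d <= best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs
```

Greedy matching can commit to a bad early pair; branch-and-bound avoids this by bounding the total centroid-distance error over complete assignments.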
14.
Location Models from the Perspective of Context-Aware Applications and Mobile Ad Hoc Networks
Location models are crucial for providing location-dependent data to context-aware applications. In this paper, we present
two approaches for modeling location information taken from an infrastructure-based and an ad hoc network-based application scenario. From these approaches we derive requirements for a general location modeling language
for ubiquitous computing.
Correspondence to: M. Bauer, Fakultät Informatik, Universität Stuttgart, Breitwiesenstr. 20-22, D-70565 Stuttgart, Germany. Email: mabauer@informatik.uni-stuttgart.de
15.
A system to navigate a robot into a ship structure
Markus Vincze Minu Ayromlou Carlos Beltran Antonios Gasteratos Simon Hoffgaard Ole Madsen Wolfgang Ponweiser Michael Zillich 《Machine Vision and Applications》2003,14(1):15-25
Abstract. A prototype system has been built to navigate a walking robot into a ship structure. The 8-legged robot is equipped with
an active stereo head. From the CAD model of the ship, good viewpoints are selected such that the head can look at locations
with sufficient edge features, which are extracted automatically for each view. The pose of the robot is estimated from the
features detected by two vision approaches. One approach searches in stereo images for junctions and measures the 3-D position.
The other method uses monocular images and tracks 2-D edge features. Robust tracking is achieved with a method of edge projected
integration of cues (EPIC). Two inclinometers are used to stabilise the head while the robot moves. The results of the final
demonstration to navigate the robot within centimetre accuracy are given.
16.
Fabio Casati Maria Grazia Fugini Isabelle Mirbel Barbara Pernici 《Requirements Engineering》2002,7(2):73-106
Workflow management systems are becoming a relevant support for a large class of business applications, and many workflow
models as well as commercial products are currently available. While the large availability of tools facilitates the development
and the fulfilment of customer requirements, workflow application development still requires methodological guidelines that
drive the developers in the complex task of rapidly producing effective applications. In fact, it is necessary to identify
and model the business processes, to design the interfaces towards existing cooperating systems, and to manage implementation
aspects in an integrated way. This paper presents the WIRES methodology for developing workflow applications under a uniform
modelling paradigm – UML modelling tools with some extensions – that covers all the life cycle of these applications: from
conceptual analysis to implementation. High-level analysis is performed under different perspectives, including a business and an organisational perspective. Distribution, interoperability and cooperation with external information systems are considered in this early
stage. A set of “workflowability” criteria is provided in order to identify which candidate processes are suited to be implemented
as workflows. Non-functional requirements receive particular emphasis in that they are among the most important criteria for
deciding whether workflow technology can be actually useful for implementing the business process at hand. The design phase
tackles aspects of concurrency and cooperation, distributed transactions and exception handling. Reuse of component workflows,
available in a repository as workflow fragments, is a distinguishing feature of the method. Implementation aspects are presented
in terms of rules that guide in the selection of a commercial workflow management system suitable for supporting the designed
processes, coupled with guidelines for mapping the designed workflows onto the model offered by the selected system.
17.
Paola Carrara Daniela Fogli Giuseppe Fresta Piero Mussio 《Universal Access in the Information Society》2002,1(4):288-304
This paper proposes a new effective strategy for designing and implementing interactive systems overcoming culture, skill
and situation hurdles in Human-Computer Interaction (HCI). The strategy to identify and reduce these hurdles is developed
in the framework of a methodology based on a recently introduced model of HCI, and exploits the technological innovations
of XML (Extensible Markup Language). HCI is modelled as a cyclic process in which the user and the interactive system communicate
by materializing and interpreting a sequence of messages. The interaction process is formalized by specifying both the physical
message appearance and the computational aspect of the interaction. This formalization allows the adoption of notation traditionally
adopted by users in their workplaces as the starting point of the interactive system design. In this way, the human–system
interaction language takes into account the users’ culture. Moreover, the methodology permits user representatives to build
a hierarchy of systems progressively adapted to users’ situations, skills and habits, according to the work organization in
the domain considered. The strategy is proved to be effective by describing how to implement it using BANCO (Browsing Adaptive
Network for Changing user Operativity), a feasibility prototype based on XML, which allows the hierarchy implementation and
system adaptations. Several examples from an environmental case under study are used throughout the paper to illustrate the
methodology and the effectiveness of the technology adopted.
Published online: 4 June 2002
18.
A survey of approaches to automatic schema matching
Erhard Rahm Philip A. Bernstein 《The VLDB Journal The International Journal on Very Large Data Bases》2001,10(4):334-350
Schema matching is a basic problem in many database application domains, such as data integration, E-business, data warehousing,
and semantic query processing. In current implementations, schema matching is typically performed manually, which has significant
limitations. On the other hand, previous research papers have proposed many techniques to achieve a partial automation of
the match operation for specific application domains. We present a taxonomy that covers many of these existing approaches,
and we describe the approaches in some detail. In particular, we distinguish between schema-level and instance-level, element-level
and structure-level, and language-based and constraint-based matchers. Based on our classification we review some previous
match implementations thereby indicating which part of the solution space they cover. We intend our taxonomy and review of
past work to be useful when comparing different approaches to schema matching, when developing a new match algorithm, and
when implementing a schema matching component.
Received: 5 February 2001 / Accepted: 6 September 2001 Published online: 21 November 2001
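One cell of the taxonomy, an element-level, schema-level, language-based matcher, can be sketched by comparing element names with string similarity. The threshold and the use of `difflib` are illustrative choices, not prescribed by the survey:

```python
# Element-level name matching between two schemas: every pair of element
# names whose string similarity clears a threshold becomes a match candidate.
# Real matchers combine this with structure-, instance- and constraint-based
# evidence, as the survey's taxonomy lays out.
from difflib import SequenceMatcher

def name_match(schema_a, schema_b, threshold=0.6):
    """schema_a, schema_b: lists of element names. Returns (a, b, score) triples."""
    matches = []
    for a in schema_a:
        for b in schema_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                matches.append((a, b, round(score, 2)))
    return matches
```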
19.
Abstract. This paper describes the design of a reconfigurable architecture for implementing image processing algorithms. This architecture
is a pipeline of small identical processing elements that contain a programmable logic device (FPGA) and double port memories.
This processing system has been adapted to accelerate the computation of differential algorithms. The log-polar vision selectively
reduces the amount of data to be processed and simplifies several vision algorithms, making possible their implementation
using few hardware resources. The reconfigurable architecture has been designed with implementation in mind and has been employed
in an autonomous platform, which has power-consumption, size and weight restrictions. Two different vision algorithms have
been implemented in the reconfigurable pipeline, for which some experimental results are shown.
Received: 30 March 2001 / Accepted: 11 February 2002
This work has been supported by the Ministerio de Ciencia y Tecnología and FEDER under project TIC2001-3546
Correspondence to: J.A. Boluda
20.
Xiangyun Ye Mohamed Cheriet Ching Y. Suen Ke Liu 《International Journal on Document Analysis and Recognition》1999,2(2-3):53-66
This paper presents a technique for extracting the user-entered information from bankcheck images based on a layout-driven
item extraction method. The baselines of checks are detected and eliminated by using gray-level mathematical morphology. A
priori information about the positions of data is integrated into a combination of top-down and bottom-up analyses of check
images. The handwritten information is extracted by a local thresholding technique and the information lost during baseline
elimination is restored by mathematical morphology with dynamic kernels. A goal-directed evaluation of the extraction approaches
is proposed, and both qualitative and quantitative analyses show noticeable advantages of the proposed approach over the existing
approaches.
Received June 16, 1998 / Revised June 18, 1999
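The local thresholding step mentioned above can be illustrated with a minimal sketch: each pixel is binarized against the mean of its neighbourhood minus an offset, so faint handwriting can survive an uneven check background. The window size and offset are illustrative parameters, not values from the paper:

```python
# Local (adaptive) thresholding of a grayscale image stored as a 2-D list.
# A pixel counts as ink (1) when it is darker than its neighbourhood mean by
# more than `offset`; the window is (2*win+1) x (2*win+1), clipped at borders.
def local_threshold(img, win=1, offset=10):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[x][y]
                    for x in range(max(0, i - win), min(h, i + win + 1))
                    for y in range(max(0, j - win), min(w, j + win + 1))]
            neighbourhood_mean = sum(vals) / len(vals)
            out[i][j] = 1 if img[i][j] < neighbourhood_mean - offset else 0
    return out
```

Unlike a single global threshold, this keeps a dark stroke on a light local background even when other regions of the check are darker overall.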