Similar Documents (20 results found)
1.
Easy-to-use audio/video authoring tools play a crucial role in moving multimedia software from research curiosity to mainstream applications. However, research in multimedia authoring systems has rarely been documented in the literature. This paper describes the design and implementation of an interactive video authoring system called Zodiac, which employs an innovative edit history abstraction to support several unique editing features not found in existing commercial and research video editing systems. Zodiac provides users with a conceptually clean and semantically powerful branching history model of edit operations to organize the authoring process, and to navigate among versions of authored documents. In addition, by analyzing the edit history, Zodiac is able to reliably detect a composed video stream's shot and scene boundaries, which facilitates interactive video browsing. Zodiac also features a video object annotation capability that allows users to associate annotations with moving objects in a video sequence. The annotations themselves can be text, image, audio, or video. Zodiac is built on top of MMFS, a file system specifically designed for interactive multimedia development environments, and implements an internal buffer manager that supports transparent lossless compression/decompression. Shot/scene detection, video object annotation, and buffer management all exploit the edit history information for performance optimization.
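The branching edit-history model is the centrepiece of the abstract above. The sketch below is only a minimal illustration of such a structure, not Zodiac's design; the class and operation names (EditNode, BranchingHistory, "insert-clip") are invented.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class EditNode:
    """One edit operation in a branching history tree (hypothetical names)."""
    op: str                                   # e.g. "cut", "paste", "annotate-object"
    args: dict = field(default_factory=dict)
    parent: Optional["EditNode"] = None
    children: List["EditNode"] = field(default_factory=list)

class BranchingHistory:
    """Tree of edit operations; the current node identifies a document version."""
    def __init__(self):
        self.root = EditNode(op="empty-document")
        self.current = self.root

    def apply(self, op: str, **args) -> EditNode:
        """Record a new edit as a child of the current version and move to it."""
        node = EditNode(op=op, args=args, parent=self.current)
        self.current.children.append(node)
        self.current = node
        return node

    def undo(self):
        """Move back to the parent version; the edit stays in the tree as a branch."""
        if self.current.parent is not None:
            self.current = self.current.parent

    def version_path(self) -> List[str]:
        """Operations leading from the root to the current version."""
        path, node = [], self.current
        while node is not None:
            path.append(node.op)
            node = node.parent
        return list(reversed(path))

# Example: two alternative branches grow from the same base version.
h = BranchingHistory()
h.apply("insert-clip", clip="intro.mov")
h.undo()
h.apply("insert-clip", clip="alt-intro.mov")   # creates a sibling branch
print(h.version_path())                        # ['empty-document', 'insert-clip']
```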

2.
When authoring multimedia scenarios, and in particular scenarios with user interaction, where the sequence and time of occurrence of interactions is not predefined, it is difficult to guarantee the consistency of the resulting scenarios. As a consequence, the execution of the scenario may result in unexpected behavior or inconsistent use of media. The present paper proposes a methodology for checking the temporal integrity of interactive multimedia document (IMD) scenarios at authoring time at various levels. The IMD flow is mainly defined by the events occurring during the IMD session. Integrity checking consists of a set of discrete steps, during which we transform the scenario into temporal constraint networks representing the constraints linking the different possible events in the scenario. Temporal constraint verification techniques are applied to verify the integrity of the scenario, deriving a minimal network, showing possible temporal relationships between events given a set of constraints. Received June 9, 1998 / Accepted November 10, 1999
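To make the idea of temporal constraint verification concrete, the following sketch (not taken from the paper) encodes pairwise bounds between event times as a simple temporal network and tightens them with a shortest-path pass; a negative self-distance signals an inconsistent scenario. Event names and bounds are invented.

```python
import math
from itertools import product

def minimal_network(n, constraints):
    """Simple temporal network over events 0..n-1.

    constraints: dict {(i, j): (lo, hi)} meaning lo <= t_j - t_i <= hi.
    Returns the minimal network as a distance matrix, or None if inconsistent.
    """
    INF = math.inf
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), (lo, hi) in constraints.items():
        d[i][j] = min(d[i][j], hi)    # t_j - t_i <= hi
        d[j][i] = min(d[j][i], -lo)   # t_i - t_j <= -lo
    # Floyd-Warshall tightens every pairwise bound (k is the outer loop variable).
    for k, i, j in product(range(n), repeat=3):
        if d[i][k] + d[k][j] < d[i][j]:
            d[i][j] = d[i][k] + d[k][j]
    # A negative cycle (d[i][i] < 0) means the scenario is temporally inconsistent.
    if any(d[i][i] < 0 for i in range(n)):
        return None
    return d

# Events: 0 = video starts, 1 = caption shown, 2 = user clicks "next".
net = minimal_network(3, {(0, 1): (2, 5),    # caption 2..5 s after video start
                          (1, 2): (0, 10),   # click 0..10 s after caption
                          (0, 2): (3, 12)})  # click 3..12 s after video start
print("consistent" if net is not None else "inconsistent")
```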

3.
This paper addresses the political nature of requirements for large systems, and argues that requirements engineering theory and practice must become more engaged with these issues. It argues that large-scale system requirements are constructed through a political decision process, whereby requirements emerge as a set of mappings between consecutive solution spaces justified by a problem space of concern to a set of principals. These solution spaces are complex socio-technical ensembles that often exhibit non-linear behaviour in expansion due to domain complexity and political ambiguity. Stabilisation of solutions into agreed-on specifications occurs only through the exercise of organisational power. Effective requirements engineering in such cases is most effectively seen as a form of heterogeneous engineering in which technical, social, economic and institutional factors are brought together in a current solution space that provides the baseline for construction of proposed new solution spaces.

4.
An airborne air-to-ground data link communication interface was evaluated in a multi-sector-planning scenario using an Airbus A 340 full flight simulator. In a close-to-reality experimental setting, eight professional crews performed a flight mission in a mixed voice/data link environment. Experimental factors were the medium (voice vs. data link), workload (low vs. high) and the role in the cockpit (pilot flying vs. pilot non-flying). Data link communication and the usability of the newly developed communication interface were rated positively by the pilots, but there is a clear preference for using a data link only during the cruise phase. Cognitive demands were determined for selected sections of en-route flight. Demands are affected mainly by increased communication needs. In the pilots’ view, although a data link has no effect on safety or the possibilities of intervention, it causes more problems. The subjective workload, as measured with the NASA Task Load Index, increased moderately under data link conditions. A data link has no general effect on pilots’ situation awareness, although flight plan negotiations with a data link cause a distraction of attention from monitoring tasks. The use of a data link has an impact on air-to-ground as well as intra-crew communication. Under data link conditions the pilot non-flying plays a more active role in the cockpit. Before introducing data link communication, several aspects of crew resource management have to be reconsidered.

5.
We present a shared memory algorithm that allows a set of f+1 processes to wait-free “simulate” a larger system of n processes, that may also exhibit up to f stopping failures. Applying this simulation algorithm to the k-set-agreement problem enables conversion of an arbitrary k-fault-tolerant n-process solution for the k-set-agreement problem into a wait-free (k+1)-process solution for the same problem. Since the (k+1)-process k-set-agreement problem has been shown to have no wait-free solution [5,18,26], this transformation implies that there is no k-fault-tolerant solution to the n-process k-set-agreement problem, for any n. More generally, the algorithm satisfies the requirements of a fault-tolerant distributed simulation. The distributed simulation implements a notion of fault-tolerant reducibility between decision problems. This paper defines these notions and gives examples of their application to fundamental distributed computing problems. The algorithm is presented and verified in terms of I/O automata. The presentation has a great deal of interesting modularity, expressed by I/O automaton composition and both forward and backward simulation relations. Composition is used to include a safe agreement module as a subroutine. Forward and backward simulation relations are used to view the algorithm as implementing a multi-try snapshot strategy. The main algorithm works in snapshot shared memory systems; a simple modification of the algorithm that works in read/write shared memory systems is also presented. Received: February 2001 / Accepted: February 2001

6.
We discuss the problem of capturing media streams which occur during a live lecture in class or during a telepresentation. Instead of presenting yet another method or system for capturing the classroom experience, we introduce some informal guidelines and show their importance for such a system. We derive from these guidelines a formal framework for sets of data streams and an application model to handle these sets so that a real-time replay becomes possible. The Authoring on the Fly system is a possible realization of a framework which follows these guidelines. It allows the capture and real-time replay of data streams captured during a (tele)presentation, including audio, video, and whiteboard action streams. This article gives an overview of the different AoF system components for the various phases of the teaching and learning cycle. It comprises an integrated text and graphics editor for the preparation of pages to be loaded by the whiteboard during the presentation phase. The recording component of the system captures various data streams of the live presentation. They are postprocessed by the system so that they become instances of the class of media for whose replay the general application model was developed. From a global point of view, the Authoring on the Fly system allows one to merge three apparently distinct tasks – teaching in class, telepresentation, and multimedia authoring – into one single activity. The system has been used routinely for recording telepresentations over the MBone net and has already led to a large number of multimedia documents which have been integrated automatically into Web-based teaching and learning environments.
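As a rough illustration of capturing and replaying timestamped data streams, the toy sketch below stamps events from several streams against a common clock and replays them in merged timestamp order. It is not the AoF application model; all names are invented.

```python
import time
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EventStream:
    """One captured stream (e.g. whiteboard actions) as (timestamp, payload) pairs."""
    name: str
    events: List[Tuple[float, str]] = field(default_factory=list)

class Recorder:
    """Stamps incoming events with a common clock so streams stay synchronized."""
    def __init__(self):
        self.t0 = time.monotonic()
        self.streams = {}

    def record(self, stream: str, payload: str):
        s = self.streams.setdefault(stream, EventStream(stream))
        s.events.append((time.monotonic() - self.t0, payload))

def replay(streams, speed=1.0):
    """Merge all streams by timestamp and re-emit them in (scaled) real time."""
    merged = sorted((t, s.name, p) for s in streams.values() for t, p in s.events)
    start = time.monotonic()
    for t, name, payload in merged:
        delay = t / speed - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        print(f"{t:6.2f}s  {name}: {payload}")

rec = Recorder()
rec.record("whiteboard", "load page 1")
rec.record("audio", "chunk 0")
replay(rec.streams, speed=2.0)
```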

7.
8.
Human error is responsible for a large proportion of the anaesthesia mishaps that occur annually in the United States. Ventilation-related events (VRE) constitute a significant number of anaesthesia critical incidents. Monitoring equipment and their displays extend anaesthesiologists’ resources during VRE but at the expense of additional cognitive demands. This project is a cognitive analysis of intraoperative (inside the operating room) VRE. Goal–means networks were utilised to build a problem-solving model of clinicians’ management of patients’ ventilation during anaesthesia and to map the demands of VRE. The model was also used to identify challenging VRE that were then simulated using a comprehensive anaesthesia simulator. The response of eight experienced clinicians was captured on videotape and analysed to investigate the effectiveness of medical equipment in supporting clinical decision making during VRE.

9.
The present study began with an assessment of the reliability and usefulness of an existing minor event coding system in a British ‘high-consequence’ industry. It was discovered that although the system produced replicable data, the causal inferences it produced failed, when tested in a reliability trial, to meet the normal criteria for statistical reliability. It was therefore felt necessary to create a new model of the human factors component of action in this industry, from which a model of human factors error in the same industry could be inferred. A set of codes (to facilitate statistical analysis) was deduced from this error model and then tested in a new reliability trial. The results from this trial were very encouraging, and after a six-month pilot study in which it demonstrated its usefulness as a trend and patterning tool, the system is now being phased in within this industry.

10.
Towards video-based immersive environments
Video provides a comprehensive visual record of environment activity over time. Thus, video data is an attractive source of information for the creation of virtual worlds which require some real-world fidelity. This paper describes the use of multiple streams of video data for the creation of immersive virtual environments. We outline our multiple perspective interactive video (MPI-Video) architecture which provides the infrastructure for the processing and analysis of multiple streams of video data. Our MPI-Video system performs automated analysis of the raw video and constructs a model of the environment and object activity within this environment. This model provides a comprehensive representation of the world monitored by the cameras which, in turn, can be used in the construction of a virtual world. In addition, using the information produced and maintained by the MPI-Video system, our immersive video system generates virtual video sequences. These are sequences of the dynamic environment from an arbitrary view point generated using the real camera data. Such sequences allow a user to navigate through the environment and provide a sense of immersion in the scene. We discuss results from our MPI-Video prototype, outline algorithms for the construction of virtual views and provide examples of a variety of such immersive video sequences.

11.
Target recognition is a multilevel process requiring a sequence of algorithms at low, intermediate and high levels. Generally, such systems are open loop, with no feedback between levels, and assuring their performance at a given probability of correct identification (PCI) and probability of false alarm (Pf) is a key challenge in computer vision and pattern recognition research. In this paper, a robust closed-loop system for recognition of SAR images based on reinforcement learning is presented. The parameters in model-based SAR target recognition are learned. The method meets performance specifications by using PCI and Pf as feedback for the learning system. It has been experimentally validated by learning the parameters of the recognition system for SAR imagery, successfully recognizing articulated targets, targets of different configurations and targets at different depression angles.
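The sketch below illustrates the closed-loop idea of using PCI and Pf as feedback for tuning a recognition parameter. It is a plain stochastic search on synthetic scores, not the paper's reinforcement learning method; every name and number is invented.

```python
import random

def evaluate(threshold, samples):
    """Score a recognizer parameter on labelled samples; returns (PCI, Pf)."""
    tp = sum(1 for score, is_target in samples if is_target and score >= threshold)
    fp = sum(1 for score, is_target in samples if not is_target and score >= threshold)
    n_target = sum(1 for _, is_target in samples if is_target)
    n_clutter = len(samples) - n_target
    return tp / n_target, fp / n_clutter

def tune(samples, pci_goal=0.9, pf_goal=0.1, episodes=200):
    """Closed-loop parameter search: PCI and Pf act as the feedback signal."""
    best_t, best_reward = 0.5, float("-inf")
    for _ in range(episodes):
        t = min(1.0, max(0.0, best_t + random.uniform(-0.05, 0.05)))  # explore nearby
        pci, pf = evaluate(t, samples)
        reward = (pci - pci_goal) - (pf - pf_goal)   # reward meeting both specs
        if reward > best_reward:
            best_t, best_reward = t, reward
    return best_t

# Synthetic detection scores: targets score high, clutter scores low.
samples = [(random.gauss(0.7, 0.1), True) for _ in range(200)] + \
          [(random.gauss(0.4, 0.1), False) for _ in range(200)]
print("learned threshold:", round(tune(samples), 3))
```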

12.
We show that existing theorem proving technology can be used effectively for mechanically verifying a family of arithmetic circuits. A theorem prover implementing (i) a decision procedure for quantifier-free Presburger arithmetic with uninterpreted function symbols, (ii) conditional rewriting, and (iii) heuristics for carefully selecting induction schemes from terminating recursive function definitions, all well integrated with backtracking, can automatically verify number-theoretic properties of parameterized and generic adders, multipliers and division circuits. This is illustrated using our theorem prover Rewrite Rule Laboratory (RRL). To our knowledge, this is the first such demonstration of the capabilities of a theorem prover mechanizing induction. The above features of RRL are briefly discussed using illustrations from the verification of adder, multiplier and division circuits. Extensions to the prover likely to make it even more effective for hardware verification are discussed. Furthermore, it is believed that these results are scalable, and the proposed approach is likely to be effective for other arithmetic circuits as well.

13.
Inter-object references are one of the key concepts of object-relational and object-oriented database systems. In this work, we investigate alternative techniques to implement inter-object references and make the best use of them in query processing, i.e., in evaluating functional joins. We will give a comprehensive overview and performance evaluation of all known techniques for simple (single-valued) as well as multi-valued functional joins. Furthermore, we will describe special order-preserving functional-join techniques that are particularly attractive for decision support queries that require ordered results. While most of the presentation of this paper is focused on object-relational and object-oriented database systems, some of the results can also be applied to plain relational databases because index nested-loop joins along key/foreign-key relationships, as they are frequently found in relational databases, are just one particular way to execute a functional join. Received February 28, 1999 / Accepted September 27, 1999
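The following toy sketch contrasts two ways of evaluating a single-valued functional join: dereferencing each object reference as it is encountered (index nested-loop style) and a sort-based, order-preserving variant that batches lookups in OID order and then restores the input order. The data, the names, and the in-memory dict standing in for an object store are all invented.

```python
# Hypothetical data: each order row references a customer object by OID.
orders = [{"order_id": 1, "cust_oid": 42}, {"order_id": 2, "cust_oid": 7},
          {"order_id": 3, "cust_oid": 42}]
customers = {7: {"name": "Ada"}, 42: {"name": "Grace"}}   # OID -> object

def functional_join_naive(rows, objects):
    """Dereference each reference as it is encountered (index nested-loop style)."""
    return [{**row, **objects[row["cust_oid"]]} for row in rows]

def functional_join_order_preserving(rows, objects):
    """Batch the dereferences in OID order (better locality), then restore input order."""
    tagged = sorted(enumerate(rows), key=lambda t: t[1]["cust_oid"])   # sort by OID
    joined = [(pos, {**row, **objects[row["cust_oid"]]}) for pos, row in tagged]
    return [r for _, r in sorted(joined)]                              # back to input order

print(functional_join_order_preserving(orders, customers))
```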

14.
This article covers the conception and development of Empedia, a new locative software environment for mobile phones specifically designed for expanded archives, documentary and heritage/historical interpretation, using situated and collaborative learning, at resonant and related sites. It examines the development methods employed in a number of pilot-project workshops, designed specifically to test the reception of rich media assets and augmented reality features in a simple open source user interface and authoring environment for iPhone and browser consumption. The research projects at the Institute of Creative Technologies (IOCT) examined here include: a D. H. Lawrence Heritage Blue Line trail in Eastwood, Nottinghamshire (2009); Riverains, a dramatised history trail in Shoreditch, London (2010); and the use of collaborative documentary in Codes of Disobedience and Dysfunctionality in Athens (2011).

15.
This paper discusses multimedia and hypermedia modeling, authoring and formatting tools, presenting the proposals of the HyperProp system and comparing them to related work. It also highlights several research challenges that still need to be addressed. Moreover, it stresses the importance of document logical structuring and considers the use of compositions in order to represent context relations, synchronization relations, derivation relations and task relations in hypermedia systems. It discusses temporal and spatial synchronization among multimedia objects and briefly presents the HyperProp graphical authoring and formatting tools. Integration between the proposed system and the WWW is also addressed.

16.
Detection, segmentation, and classification of specific objects are the key building blocks of a computer vision system for image analysis. This paper presents a unified model-based approach to these three tasks. It is based on using unsupervised learning to find a set of templates specific to the objects being outlined by the user. The templates are formed by averaging the shapes that belong to a particular cluster, and are used to guide a probabilistic search through the space of possible objects. The main difference from previously reported methods is the use of on-line learning, ideal for highly repetitive tasks. This results in faster and more accurate object detection, as system performance improves with continued use. Further, the information gained through clustering and user feedback is used to classify the objects for problems in which shape is relevant to the classification. The effectiveness of the resulting system is demonstrated in two applications: a medical diagnosis task using cytological images, and a vehicle recognition task. Received: 5 November 2000 / Accepted: 29 June 2001
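As an illustration of forming templates by averaging the shapes in a cluster, the sketch below clusters fixed-length shape vectors with a basic k-means loop and returns the cluster means as templates. It is not the paper's method; the shape encoding and all names are invented.

```python
import random

def kmeans_templates(shapes, k=2, iters=20):
    """Cluster fixed-length shape vectors and average each cluster into a template.

    shapes: list of equal-length lists of floats (e.g. radial contour samples).
    Returns k templates (the cluster means).
    """
    centers = random.sample(shapes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for s in shapes:
            # assign the shape to the nearest center (squared Euclidean distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(s, centers[c])))
            clusters[j].append(s)
        for j, members in enumerate(clusters):
            if members:   # template = element-wise mean of the member shapes
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return centers

# Two synthetic shape families: "round" contours near 1.0 and "elongated" near 0.5.
shapes = [[random.gauss(1.0, 0.05) for _ in range(8)] for _ in range(30)] + \
         [[random.gauss(0.5, 0.05) for _ in range(8)] for _ in range(30)]
templates = kmeans_templates(shapes)
print([round(t[0], 2) for t in templates])
```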

17.
A new parallel hybrid decision fusion methodology is proposed. It is demonstrated that existing parallel multiple expert decision combination approaches can be divided into two broad categories based on the implicit decision emphasis implemented. The first category consists of methods implementing computationally intensive decision frameworks incorporating a priori information about the target task domain and the reliability of the participating experts, while the second category encompasses approaches implementing group consensus without assigning any importance to the reliability of the experts and ignoring other contextual information. The methodology proposed in this paper is a hybridisation of these two approaches and has shown significant performance enhancements in terms of higher overall recognition rates along with lower substitution rates. Detailed analysis using two different databases supports this claim. Received January 19, 1999 / Revised March 20, 2000
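The sketch below illustrates the two decision-combination categories the abstract contrasts, a plain consensus vote and a reliability-weighted vote, plus one possible hybrid of the two. The specific hybrid rule and all numbers are invented for illustration and are not the paper's methodology.

```python
from collections import Counter

def majority_vote(labels):
    """Plain group consensus: every expert counts equally."""
    return Counter(labels).most_common(1)[0][0]

def weighted_vote(labels, reliabilities):
    """A priori knowledge: each expert's vote is weighted by its estimated reliability."""
    scores = Counter()
    for label, w in zip(labels, reliabilities):
        scores[label] += w
    return scores.most_common(1)[0][0]

def hybrid_fusion(labels, reliabilities, agreement=0.75):
    """Hybrid rule (invented for illustration): trust plain consensus when the experts
    largely agree, otherwise fall back on the reliability-weighted decision."""
    top_label, top_count = Counter(labels).most_common(1)[0]
    if top_count / len(labels) >= agreement:
        return top_label
    return weighted_vote(labels, reliabilities)

labels = ["A", "B", "A", "B", "B"]          # decisions of five experts
reliabilities = [0.9, 0.6, 0.8, 0.5, 0.55]  # estimated per-expert reliability
print(hybrid_fusion(labels, reliabilities)) # weighted path wins here: "A"
```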

18.
In many decision-making scenarios, decision makers require rapid feedback to their queries, which typically involve aggregates. The traditional blocking execution model can no longer meet the demands of these users. One promising approach in the literature, called online aggregation, evaluates an aggregation query progressively as follows: as soon as certain data have been evaluated, approximate answers are produced with their respective running confidence intervals; as more data are examined, the answers and their corresponding running confidence intervals are refined. In this paper, we extend this approach to handle nested queries with aggregates (i.e., at least one inner query block is an aggregate query) by providing users with (approximate) answers progressively as the inner aggregation query blocks are evaluated. We address the new issues posed by nested queries. In particular, the answer space begins with a superset of the final answers and is refined as the aggregates from the inner query blocks are refined. For the intermediary answers to be meaningful, they have to be interpreted with the aggregates from the inner queries. We also propose a multi-threaded model for evaluating such queries: each query block is assigned to a thread, and the threads can be evaluated concurrently and independently. The time slice across the threads is nondeterministic in the sense that the user controls the relative rate at which these subqueries are being evaluated. For enumerative nested queries, we propose a priority-based evaluation strategy to present answers that are certainly in the final answer space first, before presenting those whose validity may be affected as the inner query aggregates are refined. We implemented a prototype system using Java and evaluated our system. Results for nested queries with a single level and with multiple levels of nesting are reported. Our results show the effectiveness of the proposed mechanisms in providing progressive feedback that reduces the initial waiting time of users significantly without sacrificing the quality of the answers. Received April 25, 2000 / Accepted June 27, 2000
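As a minimal illustration of online aggregation, the sketch below maintains a running AVG estimate with a normal-approximation confidence interval as rows stream in, reporting progressively tighter intervals. It covers only a single flat aggregate, not the nested-query machinery of the paper, and all names are invented.

```python
import math
import random

def online_average(rows, z=1.96, report_every=100):
    """Progressively estimate an AVG aggregate with a running confidence interval.

    Yields (estimate, half_width) as more rows are examined; the interval is the
    usual normal-approximation CI for a mean under random-order access.
    """
    n = mean = m2 = 0.0
    for x in rows:
        n += 1
        delta = x - mean
        mean += delta / n                 # Welford's update for the running mean
        m2 += delta * (x - mean)          # ... and for the sum of squared deviations
        if n >= 2 and n % report_every == 0:
            half = z * math.sqrt((m2 / (n - 1)) / n)
            yield mean, half

random.seed(0)
rows = (random.gauss(50, 10) for _ in range(5000))
for est, half in online_average(rows, report_every=1000):
    print(f"AVG ~= {est:6.2f}  +/- {half:4.2f}")
```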

19.
Constructing stories is a type of playing that involves mobilizing the storyteller’s imagination and finding original ways to convey narrative intentions. When a child invents a story, there is a natural interaction with the local environment and the use of various means of expression. We adopted a user-centered approach to design POGO, a playful environment which utilizes the child’s physical environment and sensory modalities. Pogo is a system of active tools that enable children to create stories by connecting physical and virtual environments. By providing children with the possibility of capturing and manipulating images and various media, and combining them in sequential form, Pogo triggered new strategies in the construction of narrative logic, time and space, in the construction of the episodes and in the visual narration.

20.
Different replication algorithms provide different solutions to the same basic problem. However, there is no precise specification of the problem itself, only of particular classes of solutions, such as active replication and primary-backup. Having a precise specification of the problem would help us better understand the space of possible solutions and possibly come up with new ones. We present a formal definition of the problem solved by replication in the form of a correctness criterion called x-ability (exactly-once ability). An x-able service has obligations to its environment and its clients. It must update its environment under exactly-once semantics. Furthermore, it must provide idempotent, non-blocking request processing and deliver consistent results to its clients. We illustrate the value of x-ability through a novel replication protocol that handles non-determinism and external side-effects. The replication protocol is asynchronous in the sense that it may vary, at run-time and according to the asynchrony of the system, between some form of primary-backup and some form of active replication. Received: December 2000 / Accepted: September 2001
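The toy sketch below illustrates the exactly-once and idempotence obligations mentioned in the abstract: a request ID lets a retried request replay its cached result instead of re-applying its side effect. It is a single-process illustration that ignores replication and failover entirely; all names are invented.

```python
class ExactlyOnceService:
    """Toy server: request IDs make re-execution of a retried request idempotent."""
    def __init__(self):
        self.completed = {}   # request_id -> cached result
        self.balance = 0      # the "environment" updated by requests

    def handle(self, request_id, amount):
        if request_id in self.completed:          # duplicate (e.g. a client retry
            return self.completed[request_id]     # after a failover): replay the result
        self.balance += amount                    # side effect applied exactly once
        result = self.balance
        self.completed[request_id] = result
        return result

svc = ExactlyOnceService()
print(svc.handle("req-1", 100))   # 100
print(svc.handle("req-1", 100))   # still 100: the retry does not re-apply the update
print(svc.handle("req-2", -30))   # 70
```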
