20 similar documents found; search took 31 milliseconds.
1.
Traditional DPIV algorithms are mainly based on region-correlation computation. This approach is widely used because it is conceptually simple and easy to operate, but it has well-known drawbacks: it is slow and produces many mismatched points. Based on the imaging characteristics of DPIV and the physical properties of the objects under study, this paper proposes a source-free affine model of the flow field and, combined with a modified global optical-flow model, derives a new method for computing DPIV.
2.
Tony Manninen 《Personal and Ubiquitous Computing》2002,6(5-6):390-406
This paper relates to the problems of designing rich interaction, in the context of multi-player games, that would adequately
support communication, control and co-ordination. The aspects of fun and rich experiences, usually required within the entertainment
context, are easily overlooked in technologically driven system design. The concepts of a future ubiquitous game can be difficult
to comprehend and evaluate in cases where a fully functioning physical prototype is not an option. One solution for the problem
is Contextual Virtual Reality Prototyping that adds the missing context to the design simulations. The product can be designed
and demonstrated in the corresponding environment, thus making it easier to understand the use-cases of, for example, a mobile
device that has various location-dependent features. The main contribution of this research is the design and development
approach that supports the creation of rich interaction. The primary emphasis of the approach is to avoid purely technologically
driven design and development, but rather to provide a supporting, or even a guiding, approach that focuses on the creative
process and conceptual understanding of rich interaction. This conceptually grounded content production-oriented approach
to interactive system design is described and evaluated.
Correspondence to: T. Manninen, Department of Information Processing Science, University of Oulu, PO Box 3000, 90014 Oulun Yliopisto, Finland.
Email: tony.manninen@oulu.fi
3.
Wood inspection with non-supervised clustering (cited by 9: 0 self-citations, 9 by others)
Abstract. The appearance of sawn timber has huge natural variations that the human inspector easily compensates for mentally when determining
the types of defects and the grade of each board. However, for automatic wood inspection systems these variations are a major
source of complication, which makes it difficult to apply textbook methodologies for visual inspection. These methodologies
generally aim at systems that are trained in a supervised manner with samples of defects and good material, but selecting
and labeling the samples is an error-prone process that limits the accuracy that can be achieved. We present a non-supervised
clustering-based approach for detecting and recognizing defects in lumber boards. A key idea is to employ a self-organizing
map (SOM) for discriminating between sound wood and defects. Human involvement needed for training is minimal. The approach
has been tested with color images of lumber boards, and the achieved false detection and error escape rates are low. The approach
also provides a self-intuitive visual user interface.
Received: 16 December 2000 / Accepted: 8 December 2001
Correspondence to: O. Silvén
4.
We present a novel technique, called 2-Phase Service Model, for streaming videos to home users in a limited-bandwidth environment. This scheme first delivers some number of non-adjacent
data fragments to the client in Phase 1. The missing fragments are then transmitted in Phase 2 as the client is playing back
the video. This approach offers many benefits. The isochronous bandwidth required for Phase 2 can be controlled within the
capability of the transport medium. The data fragments received during Phase 1 can be used to provide an excellent preview
of the video. They can also be used to facilitate VCR-style operations such as fast-forward and fast-reverse. Systems designed
based on this method are less expensive because the fast-forward and fast-reverse versions of the video files are no longer
needed. Eliminating these files also improves system performance because mapping between the regular files and their fast-forward
and fast-reverse versions is no longer part of the VCR operations. Furthermore, since each client machine handles its own
VCR-style interaction, this technique is very scalable. We provide simulation results to show that 2-Phase Service Model is
able to handle VCR functions efficiently. We also implement a video player called FRVplayer. With this prototype, we
are able to judge that the visual quality of the previews and VCR-style operations is excellent. These features are essential
to many important applications. We discuss the application of FRVplayer in the design of a video management system, called
VideoCenter. This system is intended for Internet applications such as digital video libraries.
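The fragment scheduling described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the function name `split_phases` and the fixed-stride policy (Phase 1 delivers every `stride`-th fragment, Phase 2 fills in the rest during playback) are assumptions made for the sake of the example.

```python
def split_phases(num_fragments: int, stride: int):
    """Partition fragment indices into two delivery phases.

    Phase 1 delivers every `stride`-th fragment (non-adjacent), giving a
    coarse preview that also supports fast-forward/fast-reverse.
    Phase 2 streams the remaining fragments while the video plays back.
    """
    phase1 = list(range(0, num_fragments, stride))
    phase2 = [i for i in range(num_fragments) if i % stride != 0]
    return phase1, phase2
```

Every fragment belongs to exactly one phase, so the union reconstructs the full video.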
5.
Klaus U. Schulz Stoyan Mihov 《International Journal on Document Analysis and Recognition》2002,5(1):67-85
The Levenshtein distance between two words is the minimal number of insertions, deletions or substitutions that are needed
to transform one word into the other. Levenshtein automata of degree n for a word W are defined as finite state automata that recognize the set of all words V where the Levenshtein distance between V and W does not exceed n. We show how to compute, for any fixed bound n and any input word W, a deterministic Levenshtein automaton of degree n for W in time linear to the length of W. Given an electronic dictionary that is implemented in the form of a trie or a finite state automaton, the Levenshtein automaton
for W can be used to control search in the lexicon in such a way that exactly the lexical words V are generated where the Levenshtein distance between V and W does not exceed the given bound. This leads to a very fast method for correcting corrupted input words of unrestricted text
using large electronic dictionaries. We then introduce a second method that avoids the explicit computation of Levenshtein
automata and leads to even improved efficiency. Evaluation results are given that also address variants of both methods that
are based on modified Levenshtein distances where further primitive edit operations (transpositions, merges and splits) are
used.
Received: 13 February 2002 / Accepted: 13 March 2002
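The distance underlying these automata is easy to state in code. The following is the standard dynamic-programming formulation of the Levenshtein distance itself; the automaton construction described in the paper is more involved and is not reproduced here.

```python
def levenshtein(v: str, w: str) -> int:
    """Minimal number of insertions, deletions, or substitutions
    needed to transform v into w (classic dynamic program)."""
    # prev[j] holds the distance between the processed prefix of v
    # and w[:j]; rows are filled one character of v at a time.
    prev = list(range(len(w) + 1))
    for i, cv in enumerate(v, start=1):
        curr = [i]
        for j, cw in enumerate(w, start=1):
            cost = 0 if cv == cw else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution or match
        prev = curr
    return prev[-1]
```

A Levenshtein automaton of degree n for a word W accepts exactly the words V with `levenshtein(V, W) <= n`, so this function serves as the reference predicate the automaton accelerates.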
6.
This paper presents an efficient method for creating the animation of flexible objects. The mass-spring model was used to
represent flexible objects. The easiest approach to creating animation with the mass-spring model is the explicit Euler method,
but the method has a serious weakness in that it suffers from an instability problem. The implicit integration method is a possible
solution, but a critical flaw of the implicit method is that it involves a large linear system. This paper presents an approximate
implicit method for the mass-spring model. The proposed technique stably updates the state of n mass points in O(n) time when the total number of springs is O(n). In order to increase the efficiency of simulation or reduce the numerical errors of the proposed approximate implicit method,
the number of mass points must be as small as possible. However, coarse discretization with a small number of mass points
generates an unrealistic appearance for a cloth model. By introducing a wrinkled cubic spline curve, we propose a new technique
that generates realistic details of the cloth model, even though a small number of mass points are used for simulation.
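As a point of reference, the explicit Euler baseline that the paper improves upon can be sketched for a 1-D chain of mass points. The constants (`mass`, stiffness `k`, time step `dt`) and the function name are illustrative assumptions; choosing `k` or `dt` too large in this scheme produces exactly the instability the abstract mentions.

```python
def euler_step(pos, vel, springs, mass=1.0, k=10.0, dt=0.01):
    """One explicit Euler step for a 1-D chain of mass points.

    pos, vel: lists of scalar positions and velocities.
    springs:  list of (i, j, rest_length) tuples connecting points i and j.
    """
    n = len(pos)
    force = [0.0] * n
    for i, j, rest in springs:
        stretch = (pos[j] - pos[i]) - rest
        f = k * stretch          # Hooke's law along the chain
        force[i] += f
        force[j] -= f
    new_vel = [v + dt * f / mass for v, f in zip(vel, force)]
    # Explicit scheme: positions advance with the *old* velocities,
    # which is the source of the instability for stiff springs.
    new_pos = [p + dt * v for p, v in zip(pos, vel)]
    return new_pos, new_vel
```

An implicit scheme would instead solve for the new velocities using forces evaluated at the new state, at the cost of a linear system; the paper's contribution is an O(n) approximation of that solve.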
7.
Easy-to-use audio/video authoring tools play a crucial role in moving multimedia software from research curiosity to mainstream
applications. However, research in multimedia authoring systems has rarely been documented in the literature. This paper describes
the design and implementation of an interactive video authoring system called Zodiac, which employs an innovative edit history abstraction to support several unique editing features not found in existing commercial
and research video editing systems. Zodiac provides users a conceptually clean and semantically powerful branching history model of edit operations to organize the authoring process, and to navigate among versions of authored documents. In addition,
by analyzing the edit history, Zodiac is able to reliably detect a composed video stream's shot and scene boundaries, which facilitates interactive video browsing.
Zodiac also features a video object annotation capability that allows users to associate annotations to moving objects in a video sequence. The annotations themselves could
be text, image, audio, or video. Zodiac is built on top of MMFS, a file system specifically designed for interactive multimedia development environments, and implements an internal buffer
manager that supports transparent lossless compression/decompression. Shot/scene detection, video object annotation, and buffer
management all exploit the edit history information for performance optimization.
8.
We propose a system that simultaneously utilizes the stereo disparity and optical flow information of real-time stereo grayscale
multiresolution images for the recognition of objects and gestures in human interactions. For real-time calculation of the
disparity and optical flow information of a stereo image, the system first creates pyramid images using a Gaussian filter.
The system then determines the disparity and optical flow of a low-density image and extracts attention regions in a high-density
image. The three foremost regions are recognized using higher-order local autocorrelation features and linear discriminant
analysis. As the recognition method is view based, the system can process face and hand recognition simultaneously in
real time. The recognition features are independent of parallel translations, so the system can use unstable extractions from
stereo depth information. We demonstrate that the system can discriminate the users, monitor the basic movements of the user,
smoothly learn an object presented by users, and can communicate with users by hand signs learned in advance.
Received: 31 January 2000 / Accepted: 1 May 2001
Correspondence to: I. Yoda (e-mail: yoda@ieee.org, Tel.: +81-298-615941, Fax: +81-298-613313)
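The pyramid construction mentioned above can be illustrated with a minimal sketch. For brevity a 2x2 box average stands in for the Gaussian filter, and the function names are assumptions; the idea is the same: each level halves the resolution so that disparity and optical flow can be computed cheaply on coarse levels.

```python
def pyramid_level(img):
    """One pyramid reduction: 2x2 average then 2x subsample.

    img is a 2-D list of grey levels with even dimensions.
    (A box kernel stands in for the Gaussian here.)
    """
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w)] for y in range(h)]

def build_pyramid(img, levels):
    """Full image pyramid: level 0 is the original image."""
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(pyramid_level(pyr[-1]))
    return pyr
```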
9.
Ashish Mehta James Geller Yehoshua Perl Erich Neuhold 《The VLDB Journal The International Journal on Very Large Data Bases》1998,7(1):25-47
A path-method is used as a mechanism in object-oriented databases (OODBs) to retrieve or to update information relevant to one class that
is not stored with that class but with some other class. A path-method is a method which traverses from one class through
a chain of connections between classes and accesses information at another class. However, it is a difficult task for a casual
user or even an application programmer to write path-methods to facilitate queries. This is because it might require comprehensive
knowledge of many classes of the conceptual schema that are not directly involved in the query, and therefore may not even
be included in a user's (incomplete) view about the contents of the database. We have developed a system, called path-method generator (PMG), which generates path-methods automatically according to a user's database-manipulating requests. The PMG offers the
user one of the possible path-methods and the user verifies from his knowledge of the intended purpose of the request whether
that path-method is the desired one. If the path-method is rejected, then the user can utilize his now increased knowledge
about the database to request (with additional parameters given) another offer from the PMG. The PMG is based on access weights attached to the connections between classes and precomputed access relevance between every pair of classes of the OODB. Specific rules for access weight assignment and algorithms for computing access
relevance appeared in our previous papers [MGPF92, MGPF93, MGPF96]. In this paper, we present a variety of traversal algorithms
based on access weights and precomputed access relevance. Experiments identify some of these algorithms as very successful
in generating most desired path-methods. The PMG system utilizes these successful algorithms and is thus an efficient tool
for aiding the user with the difficult task of querying and updating a large OODB.
Received July 19, 1993 / Accepted May 16, 1997
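The idea of traversing class connections guided by access weights can be illustrated with a cheapest-path sketch. This is not the PMG's actual algorithm (the paper presents several traversal algorithms and also uses precomputed access relevance); the cost model `1 - access_weight`, the graph encoding, and the names here are assumptions for the sake of the example.

```python
import heapq

def best_path(graph, start, goal):
    """Cheapest chain of class connections from `start` to `goal`.

    graph: {class_name: [(neighbour_class, access_weight), ...]}
    Cost of an edge is 1 - access_weight, so strongly relevant
    connections (weight near 1) are preferred.
    """
    heap = [(0.0, start, [start])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return path          # first pop of goal is the cheapest chain
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + 1.0 - weight, nxt, path + [nxt]))
    return None                  # no connecting chain exists
```

The candidate path would then be offered to the user for verification, exactly as the abstract describes; rejection could trigger a search for the next-cheapest chain.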
10.
Alex Aizman 《International Journal on Software Tools for Technology Transfer (STTT)》2001,3(4):456-468
Advances in technology raise expectations. As far as software engineering is concerned, the common expectation is that coding
and deploying applications is going to be simple. It seems, though, that software engineering is not getting easier; the complexity has instead moved into the application domain. One source of complexity is application concurrency. It is not an uncommon
development practice that concurrency and transaction management in multi-user, multi-threaded, event-driven applications
are postponed until after most of the required functionality is implemented. This situation has various explanations. On the
one hand, business logic may require access and modification of large sets of inter-connected application objects. On the
other, testing and stress-testing of this logic becomes possible only at advanced stages of product development. At these
stages, increasing lock granularities may appear to be less "expensive" than debugging race conditions and deadlocks. Coarse-grained
locking has, of course, an adverse effect on application scalability.
Declaring rules of concurrency outside of the application may solve part of the problem. This paper presents an approach allowing
developers to define concurrency in application-specific terms, design it in the early stages of development, and implement
it using a documented API of the concurrency engine (CE). Simple notation makes it possible to record concurrency specifications in terms of application operations, relationships between application resources, and synchronization conflicts between operations. These concepts are demonstrated on examples. The final sections include the CE UML diagram, notes on API usage, and performance
benchmarks.
Published online: 25 July 2001
11.
Ada Wai-chee Fu Polly Mei-shuen Chan Yin-Ling Cheung Yiu Sang Moon 《The VLDB Journal The International Journal on Very Large Data Bases》2000,9(2):154-173
Abstract. For some multimedia applications, it has been found that domain objects cannot be represented as feature vectors in a multidimensional
space. Instead, pair-wise distances between data objects are the only input. To support content-based retrieval, one approach
maps each object to a k-dimensional (k-d) point and tries to preserve the distances among the points. Then, existing spatial access index methods such as the R-trees
and KD-trees can support fast searching on the resulting k-d points. However, information loss is inevitable with such an approach since the distances between data objects can only
be preserved to a certain extent. Here we investigate the use of a distance-based indexing method. In particular, we apply
the vantage point tree (vp-tree) method. There are two important problems for the vp-tree method that warrant further investigation,
the n-nearest neighbors search and the updating mechanisms. We study an n-nearest neighbors search algorithm for the vp-tree, which is shown by experiments to scale up well with the size of the dataset
and the desired number of nearest neighbors, n. Experiments also show that the searching in the vp-tree is more efficient than that for the -tree and the M-tree. Next, we propose solutions for the update problem for the vp-tree, and show by experiments that the algorithms are
efficient and effective. Finally, we investigate the problem of selecting the vantage point, propose a few alternative methods, and study their impact on the number of distance computations.
Received June 9, 1998 / Accepted January 31, 2000
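A minimal vantage-point tree with branch-and-bound nearest-neighbour search can be sketched as follows. The naive vantage-point choice (first element) and the restriction to 1-nearest-neighbour are simplifications; the paper addresses n-nearest-neighbour search, update mechanisms, and vantage-point selection. All names here are assumptions.

```python
def build_vptree(points, dist):
    """Build a vantage-point tree over `points` using metric `dist`.

    Each node is (vantage, median_radius, inner_subtree, outer_subtree):
    inner holds points strictly inside the median ball, outer the rest.
    """
    if not points:
        return None
    vp, rest = points[0], points[1:]     # naive vantage-point choice
    if not rest:
        return (vp, 0.0, None, None)
    ds = sorted(dist(vp, p) for p in rest)
    mu = ds[len(ds) // 2]                # median distance splits the ball
    inner = [p for p in rest if dist(vp, p) < mu]
    outer = [p for p in rest if dist(vp, p) >= mu]
    return (vp, mu, build_vptree(inner, dist), build_vptree(outer, dist))

def nearest(node, q, dist, best=None):
    """Branch-and-bound 1-nearest-neighbour search; returns (distance, point)."""
    if node is None:
        return best
    vp, mu, inner, outer = node
    d = dist(vp, q)
    if best is None or d < best[0]:
        best = (d, vp)
    # Descend into the side containing q first; the triangle inequality
    # lets us prune the other side unless q is close to the ball boundary.
    first, second = (inner, outer) if d < mu else (outer, inner)
    best = nearest(first, q, dist, best)
    if abs(d - mu) < best[0]:
        best = nearest(second, q, dist, best)
    return best
```

Because only the metric is used, this works on objects given purely by pair-wise distances, which is exactly the setting the abstract motivates.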
12.
Deepak Kapur Mahadevan Subramaniam 《International Journal on Software Tools for Technology Transfer (STTT)》2000,3(1):32-65
We show that existing theorem proving technology can be used effectively for mechanically verifying a family of arithmetic
circuits. A theorem prover implementing (i) a decision procedure for quantifier-free Presburger arithmetic with uninterpreted function symbols, (ii) conditional rewriting, and (iii) heuristics for carefully selecting induction schemes from terminating recursive function definitions, (iv) all well integrated with backtracking, can automatically verify number-theoretic properties
of parameterized and generic adders, multipliers and division circuits. This is illustrated using our theorem prover Rewrite Rule Laboratory (RRL). To our knowledge, this is the first such demonstration of the capabilities of a theorem prover mechanizing induction.
The above features of RRL are briefly discussed using illustrations from the verification of adder, multiplier and division
circuits. Extensions to the prover likely to make it even more effective for hardware verification are discussed. Furthermore,
it is believed that these results are scalable, and the proposed approach is likely to be effective for other arithmetic circuits
as well.
13.
Nizami Cummins 《Personal and Ubiquitous Computing》2002,6(5-6):362-370
This paper investigates how many users of commercial interactive systems are not proper agents within the interactive narrative,
largely due to the dynamics of branding in cyberspace. Parallels are drawn between the dynamic personalization of e-CRM engines
and context aware computing systems. Several seminal games are discussed as examples of systems in which very different relationships
exist between users and the system. Arguments are made for designing e-commerce interactive systems that install into games,
inside the game narrative.
Correspondence to: Ms N. Cummins, Preject Brand Communications Consultancy, Unit P, Carlton Works Studios, Asylum Road, London SE15 2SB, UK.
Email: nizami@preject.com
14.
In this paper we define a requirements-level execution semantics for object-oriented statecharts and show how properties of
a system specified by these statecharts can be model checked using tool support for model checkers. Our execution semantics
is requirements-level because it uses the perfect technology assumption, which abstracts from limitations imposed by an implementation.
Statecharts describe object life cycles. Our semantics includes synchronous and asynchronous communication between objects
and creation and deletion of objects. Our tool support presents a graphical front-end to model checkers, making these tools
usable to people who are not specialists in model checking. The model-checking approach presented in this paper is embedded
in an informal but precise method for software requirements and design. We discuss some of our experiences with model checking.
Correspondence and offprint requests to: Rik Eshuis, Department of Computer Science, University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands. Email: eshuis@cs.utwente.nl
15.
Ajay D. Kshemkalyani 《Distributed Computing》1998,11(4):169-189
Summary. In a distributed system, high-level actions can be modeled by nonatomic events. This paper proposes causality relations between
distributed nonatomic events and provides efficient testing conditions for the relations. The relations provide a fine-grained
granularity to specify causality relations between distributed nonatomic events. The set of relations between nonatomic events
is complete in first-order predicate logic, using only the causality relation between atomic events. For a pair of distributed nonatomic events X and Y, the evaluation of any of the causality relations requires a number of integer comparisons that is polynomial in the numbers of nodes on which the two nonatomic events X and Y occur. In this paper, we show that this polynomial complexity of evaluation can be simplified to a linear complexity using properties of partial orders. Specifically, we show that the relations can be evaluated in a number of integer comparisons that is linear in these node counts. During the derivation of the efficient testing conditions, we also define special system execution prefixes
associated with distributed nonatomic events and examine their knowledge-theoretic significance.
Received: July 1997 / Accepted: May 1998
16.
Fast template matching using bounded partial correlation (cited by 8: 0 self-citations, 8 by others)
This paper describes a novel, fast template-matching technique, referred to as bounded partial correlation (BPC), based on
the normalised cross-correlation (NCC) function. The technique consists in checking at each search position a suitable elimination
condition relying on the evaluation of an upper-bound for the NCC function. The check allows for rapidly skipping the positions
that cannot provide a better degree of match with respect to the current best-matching one. The upper-bounding function incorporates
partial information from the actual cross-correlation function and can be calculated very efficiently using a recursive scheme.
We show also a simple improvement to the basic BPC formulation that provides additional computational benefits and renders
the technique more robust with respect to the parameters choice.
Received: 2 November 2000 / Accepted: 25 July 2001
Correspondence to: L. Di Stefano
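The NCC function that BPC bounds can be sketched directly. This shows plain exhaustive NCC matching, not the BPC elimination test itself; the guard for all-zero windows and the function names are assumptions added for robustness of the example.

```python
import math

def ncc(image, template, x, y):
    """Normalised cross-correlation of `template` against the window of
    `image` anchored at column x, row y. Both are 2-D lists of grey levels."""
    th, tw = len(template), len(template[0])
    num = sw = st = 0.0
    for r in range(th):
        for c in range(tw):
            w = image[y + r][x + c]
            t = template[r][c]
            num += w * t
            sw += w * w
            st += t * t
    if sw == 0.0 or st == 0.0:     # degenerate all-zero window or template
        return 0.0
    return num / math.sqrt(sw * st)

def best_match(image, template):
    """Exhaustive search for the (x, y) position maximising NCC.

    BPC accelerates exactly this loop by skipping positions whose
    NCC upper bound cannot beat the current best match.
    """
    th, tw = len(template), len(template[0])
    positions = [(x, y)
                 for y in range(len(image) - th + 1)
                 for x in range(len(image[0]) - tw + 1)]
    return max(positions, key=lambda p: ncc(image, template, p[0], p[1]))
```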
17.
A bin picking system based on depth from defocus (cited by 3: 0 self-citations, 3 by others)
It is generally accepted that to develop versatile bin-picking systems capable of grasping and manipulation operations, accurate
3-D information is required. To accomplish this goal, we have developed a fast and precise range sensor based on active depth from defocus (DFD). This sensor is used in conjunction with a three-component vision system, which is able to recognize and evaluate the
attitude of 3-D objects. The first component performs scene segmentation using an edge-based approach. Since edges are used
to detect the object boundaries, a key issue consists of improving the quality of edge detection. The second component attempts
to recognize the object placed on the top of the object pile using a model-driven approach in which the segmented surfaces
are compared with those stored in the model database. Finally, the attitude of the recognized object is evaluated using an
eigenimage approach augmented with range data analysis. The full bin-picking system will be outlined, and a number of experimental
results will be examined.
Received: 2 December 2000 / Accepted: 9 September 2001
Correspondence to: O. Ghita
18.
Multi-tuple interpolation using Fourier descriptors (cited by 1: 0 self-citations, 1 by others)
19.
Cynthia E. Irvine Timothy Levin Jeffery D. Wilson David Shifflett Barbara Pereira 《Requirements Engineering》2002,7(4):192-206
Requirements specifications for high-assurance secure systems are rare in the open literature. This paper examines the development
of a requirements document for a multilevel secure system that must meet stringent assurance and evaluation requirements.
The system is designed to be secure, yet combines popular commercial components with specialised high-assurance ones. Functional
and non-functional requirements pertinent to security are discussed. A multidimensional threat model is presented. The threat
model accounts for the developmental and operational phases of system evolution and for each phase accounts for both physical
and non-physical threats. We describe our team-based method for developing a requirements document and relate that process
to techniques in requirements engineering. The system requirements document presented provides a calibration point for future
security requirements engineering techniques intended to meet both functional and assurance goals.
*The views expressed in this paper are those of the authors and should not be construed to reflect those of their employers
or the Department of Defense. This work was supported in part by the MSHN project of the DARPA/ITO Quorum programme and by
the MYSEA project of the DARPA/ATO CHATS programme.
Correspondence and offprint requests to: T. Levin, Department of Computer Science, Naval Postgraduate School, Monterey, CA 93943-5118, USA. Tel.: +1 831 656 2339;
Fax: +1 831 656 2814; Email: levin@nps.navy.mil
20.
David Crandall Sameer Antani Rangachar Kasturi 《International Journal on Document Analysis and Recognition》2003,5(2-3):138-157
Abstract. The popularity of digital video is increasing rapidly. To help users navigate libraries of video, algorithms that automatically
index video based on content are needed. One approach is to extract text appearing in video, which often reflects a scene's
semantic content. This is a difficult problem due to the unconstrained nature of general-purpose video. Text can have arbitrary
color, size, and orientation. Backgrounds may be complex and changing. Most work so far has made restrictive assumptions about
the nature of text occurring in video. Such work is therefore not directly applicable to unconstrained, general-purpose video.
In addition, most work so far has focused only on detecting the spatial extent of text in individual video frames. However,
text occurring in video usually persists for several seconds. This constitutes a text event that should be entered only once
in the video index. Therefore it is also necessary to determine the temporal extent of text events. This is a non-trivial
problem because text may move, rotate, grow, shrink, or otherwise change over time. Such text effects are common in television
programs and commercials but so far have received little attention in the literature. This paper discusses detecting, binarizing,
and tracking caption text in general-purpose MPEG-1 video. Solutions are proposed for each of these problems and compared
with existing work found in the literature.
Received: January 29, 2002 / Accepted: September 13, 2002
D. Crandall is now with Eastman Kodak Company, 1700 Dewey Avenue, Rochester, NY 14650-1816, USA; e-mail: david.crandall@kodak.com
S. Antani is now with the National Library of Medicine, 8600 Rockville Pike, Bethesda, MD 20894, USA; e-mail: antani@nlm.nih.gov
Correspondence to: David Crandall