Found 20 similar documents (search time: 15 ms)
1.
Traditional DPIV algorithms are mainly based on region-correlation computation. This approach is widely used for its conceptual simplicity and ease of operation, but it suffers from well-known drawbacks such as slow speed and many mismatched points. Based on the imaging characteristics of DPIV and the physical properties of the objects under study, this paper proposes a source-free affine model of the flow field and combines it with a modified global optical-flow model to form a new method for computing DPIV.
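The region-correlation step this abstract contrasts with can be sketched as follows: locate the peak of the cross-correlation between two interrogation windows to estimate the particle displacement. This is a minimal sketch of the classic baseline, with synthetic random data standing in for particle images; it is not the paper's affine/optical-flow method.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement between two interrogation
    windows by locating the peak of their circular cross-correlation,
    computed via the FFT (the classic region-correlation DPIV step)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # corr[k] = sum_n a[n + k] * b[n]; peaks where a shifted by k matches b
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices above half the window size wrap around to negative shifts
    shift = tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
    return shift

# Synthetic check: the second window sees the same content shifted by (2, 3)
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
win_a = frame[8:40, 8:40]
win_b = frame[10:42, 11:43]
print(piv_displacement(win_a, win_b))
```

Because the non-overlapping window borders are uncorrelated noise, the correlation peak at the true shift dominates even though the windows are not exact circular shifts of each other.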
2.
3.
An alternative, hybrid approach to disparity estimation based on the phase difference technique is presented. The proposed technique combines the robustness of the matching method with the sub-pixel accuracy of the phase difference approach. A matching between the phases of the left and right signals is introduced so that the phase difference method can operate in a reduced disparity range. In this framework, a new criterion for detecting signal singularities is proposed. The presented test cases show that the proposed technique greatly improves the accuracy and density of the disparity estimates.
Received: 24 June 1997 / Accepted: 15 September 1998
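The underlying phase-difference idea can be sketched in one dimension: filter both scanlines with a complex Gabor filter and convert the local phase difference into a shift estimate d = Δφ/ω. This is a sketch of the classic technique on a synthetic sinusoid tuned to the filter frequency, not the paper's hybrid matching variant; the filter parameters are illustrative.

```python
import numpy as np

def phase_disparity(left, right, omega=np.pi / 4, sigma=4.0):
    """1-D phase-difference disparity: convolve both scanlines with a
    complex Gabor filter and divide the wrapped phase difference by the
    filter's centre frequency. Positive d means right[x] = left[x + d]."""
    n = np.arange(-8, 9)
    gabor = np.exp(1j * omega * n) * np.exp(-n**2 / (2 * sigma**2))
    gl = np.convolve(left, gabor, mode="same")
    gr = np.convolve(right, gabor, mode="same")
    dphi = np.angle(gr * np.conj(gl))   # wrapped phase difference
    return dphi / omega                 # sub-pixel disparity per sample

# Synthetic scanlines at the filter's tuned frequency, shifted by 2 pixels
x = np.arange(256)
left = np.sin((np.pi / 4) * x)
right = np.sin((np.pi / 4) * (x + 2))
d = phase_disparity(left, right)
print(round(float(np.median(d[20:-20])), 2))
```

The phase wraps at ±π, which is why the method only works inside a reduced disparity range; the matching stage described in the abstract is what brings the signals into that range first.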
4.
Sparse optic flow maps are general enough to obtain useful information about camera motion. Usually, correspondences among
features over an image sequence are estimated by radiometric similarity. When the camera moves under known conditions, global
geometrical constraints can be introduced in order to obtain a more robust estimation of the optic flow. In this paper, a
method is proposed for the computation of a robust sparse optic flow (OF) which integrates the geometrical constraints induced
by camera motion to verify the correspondences obtained by radiometric-similarity-based techniques. A raw OF map is estimated
by matching features by correlation. The verification of the resulting correspondences is formulated as an optimization problem
that is implemented on a Hopfield neural network (HNN). Additional constraints imposed in the energy function permit us to
achieve subpixel accuracy in the image locations of matched features. Convergence of the HNN is reached in a small enough
number of iterations to make the proposed method suitable for real-time processing. It is shown that the proposed method is
also suitable for identifying independently moving objects in front of a moving vehicle.
Received: 26 December 1995 / Accepted: 20 February 1997
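The geometric-verification idea in this abstract, screening radiometric matches against constraints induced by known camera motion, can be sketched with the epipolar constraint x₂ᵀE x₁ = 0, where E = [t]×R. This is a minimal sketch of that screening step, not the paper's Hopfield-network formulation; the point and motion values are synthetic.

```python
import numpy as np

def epipolar_residual(x1, x2, R, t):
    """Residual of the epipolar constraint x2' E x1 with E = [t]x R.
    For known camera motion (R, t), correspondences found by radiometric
    similarity can be accepted or rejected by thresholding this value."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])   # cross-product matrix [t]x
    return float(x2 @ (tx @ R) @ x1)

# Pure forward translation, no rotation; one static 3-D point seen twice.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
X = np.array([1.0, 2.0, 5.0])          # point in frame-1 coordinates
x1 = X / X[2]                          # normalized image coords, frame 1
X2 = R @ X + t
x2 = X2 / X2[2]                        # normalized image coords, frame 2
good = epipolar_residual(x1, x2, R, t)               # ~0: consistent match
bad = epipolar_residual(x1, np.array([0.5, 0.1, 1.0]), R, t)  # outlier
print(good, bad)
```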
5.
6.
Norifumi Katafuchi, Mutsuo Sano, Shuichi Ohara, Masashi Okudaira. Machine Vision and Applications, 2000, 12(4): 170-176
A new method based on an optics model for highly reliable surface inspection of industrial parts has been developed. This
method uses multiple images taken under different camera conditions. Phong's model is employed for surface reflection, and
then the albedo and the reflection model parameters are estimated by the least squares method. The developed method has advantages
over conventional binarization in that it can easily determine the threshold of product acceptability and cope with changes
in light intensity when detecting defects.
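When the shininess exponent is held fixed, estimating the remaining reflection coefficients from multiple images becomes a linear least-squares problem. The sketch below fits diffuse and specular coefficients in a Phong-type model I = k_d·diffuse + k_s·specular from synthetic readings; it is a linearized illustration of the abstract's idea, not the paper's exact parameterization.

```python
import numpy as np

def fit_phong_coeffs(intensities, diffuse, specular):
    """Least-squares fit of the diffuse and specular coefficients in a
    Phong-type model, given the per-image geometric terms (n.l and
    (r.v)^alpha) for each camera/light configuration."""
    A = np.column_stack([diffuse, specular])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs  # [kd, ks]

# Synthetic readings from five camera/light configurations
diffuse = np.array([0.9, 0.7, 0.5, 0.3, 0.1])    # n.l per configuration
specular = np.array([0.6, 0.2, 0.8, 0.1, 0.0])   # (r.v)^alpha per config
true_kd, true_ks = 0.8, 0.3
I = true_kd * diffuse + true_ks * specular
kd, ks = fit_phong_coeffs(I, diffuse, specular)
print(round(float(kd), 3), round(float(ks), 3))
```

Defects then show up as pixels whose fitted albedo deviates from the product's nominal value, which is less sensitive to lighting changes than a fixed binarization threshold.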
7.
A model-driven approach for real-time road recognition
This article describes a method for detecting and tracking road edges in images from an on-board monocular monochrome camera. Its implementation on dedicated hardware within the VELAC project is also presented. The method is based on four modules: (1) detection of road edges in the image by a model-driven algorithm that uses a statistical model of the lane sides to handle occlusions or imperfections of the road marking (the model is initialized by an off-line training step); (2) localization of the vehicle within the lane in which it is travelling; (3) tracking, to define a new search space for road edges in the next image; and (4) management of lane numbers to determine the lane in which the vehicle is travelling. The algorithm is implemented so as to validate the method in a real-time context. Results obtained on marked and unmarked road images show the robustness and precision of the method.
Received: 18 November 2000 / Accepted: 7 May 2001
8.
A real-time vision module for interactive perceptual agents
Bruce A. Maxwell, Nathaniel Fairfield, Nikolas Johnson, Pukar Malla, Paul Dickson, Suor Kim, Stephanie Wojtkowski, Thomas Stepleton. Machine Vision and Applications, 2003, 14(1): 72-82
Abstract. Interactive robotics demands real-time visual information about the environment. Real-time vision processing, however, places
a heavy load on the robot's limited resources, which must accommodate multiple other processes running simultaneously. This
paper describes a vision module capable of providing real-time information from ten or more operators while maintaining at
least a 20-Hz frame rate and leaving sufficient processor time for a robot's other capabilities. The vision module uses a
probabilistic scheduling algorithm to ensure both timely information flow and a fast frame capture. In addition, it tightly
integrates the vision operators with control of a pan-tilt-zoom camera. The vision module makes its information available
to other modules in the robot architecture through a shared memory structure. The information provided by the vision module
includes the operator information along with a time stamp indicating information relevance. Because of this design, our robots
are able to react in a timely manner to a wide variety of visual events.
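A probabilistic scheduler in the spirit described above can be sketched as: each frame, draw operators with probability proportional to a priority weight until an estimated time budget is spent. The operator names, priorities, and costs below are invented for illustration; the module's actual scheduling algorithm is not published code.

```python
import random

def schedule(operators, budget_ms, rng):
    """Pick a subset of vision operators to run this frame: repeatedly draw
    an operator with probability proportional to its priority, keeping it
    only if its estimated cost still fits the per-frame time budget."""
    remaining = dict(operators)        # name -> (priority, cost_ms)
    chosen, spent = [], 0.0
    while remaining:
        names = list(remaining)
        weights = [remaining[n][0] for n in names]
        name = rng.choices(names, weights=weights)[0]
        prio, cost = remaining.pop(name)
        if spent + cost > budget_ms:
            continue                   # too expensive this frame; skip it
        chosen.append(name)
        spent += cost
    return chosen, spent

rng = random.Random(0)
ops = {"motion": (5, 10.0), "face": (3, 25.0),
       "color": (4, 5.0), "text": (1, 30.0)}
chosen, spent = schedule(ops, budget_ms=40.0, rng=rng)
print(chosen, spent)
```

High-priority, cheap operators run nearly every frame while expensive ones run intermittently, which is one way to hold a fixed frame rate while still servicing many operators over time.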
9.
Diego López de Ipiña, Paulo R. S. Mendonça, Andy Hopper. Personal and Ubiquitous Computing, 2002, 6(3): 206-219
Sentient Computing provides computers with perception so that they can react and provide assistance to user activities. Physical
spaces are made sentient when they are wired with networks of sensors capturing context data, which is communicated to computing
devices spread through the environment. These devices interpret the information provided and react by performing the actions
expected by the user. Among the types of context information provided by sensors, location has proven to be especially useful. Since location is an important context that changes whenever the user moves, a reliable
location-tracking system is critical to many sentient applications. However, the sensor technologies used in indoor location
tracking are expensive and complex to deploy, configure and maintain. These factors have prevented a wider adoption of Sentient
Computing in our living and working spaces. This paper presents TRIP, a low-cost and easily deployable vision-based sensor
technology addressing these issues. TRIP employs off-the-shelf hardware (low-cost CCD cameras and PCs) and printable 2-D circular
markers for entity identification and location. The usability of TRIP is illustrated through the implementation of several
sentient applications.
10.
Bing Wang 《International Journal on Digital Libraries》1999,2(2-3):91-110
A digital library (DL) consists of a database which contains library information and a user interface which provides a visual
window for users to search relevant information stored in the database. Thus, an abstract structure of a digital library can
be defined as a combination of a special purpose database and a user-friendly interface. This paper addresses one of the fundamental aspects of such
a combination. This is the formal data structure for linking an object oriented database with hypermedia to support digital
libraries. It is important to establish a formal structure for a digital library in order to efficiently maintain different
types of library information. This article discusses how to build an object oriented hybrid system to support digital libraries.
In particular, we focus on the discussion of a general purpose data model for digital libraries and the design of the corresponding
hypermedia interface. The significant features of this research are, first, a formalized data model to define a digital library
system structure; second, a practical approach to manage the global schema of a library system; and finally, a design strategy
to integrate hypermedia with databases to support a wide range of application areas.
Received: 15 December 1997 / Revised: June 1999
11.
Jeffrey A. Fayman, Oded Sudarsky, Ehud Rivlin, Michael Rudzsky. Machine Vision and Applications, 2001, 13(1): 25-37
We present a new active vision technique called zoom tracking. Zoom tracking is the continuous adjustment of a camera's focal
length in order to keep a constant-sized image of an object moving along the camera's optical axis. Two methods for performing
zoom tracking are presented: a closed-loop visual feedback algorithm based on optical flow, and use of depth information obtained
from an autofocus camera's range sensor. We explore two uses of zoom tracking: recovery of depth information and improving
the performance of scale-variant algorithms. We show that the image stability provided by zoom tracking improves the performance
of algorithms that are scale variant, such as correlation-based trackers. While zoom tracking cannot totally compensate for
an object's motion, due to the effect of perspective distortion, an analysis of this distortion provides a quantitative estimate
of the performance of zoom tracking. Zoom tracking can be used to reconstruct a depth map of the tracked object. We show that
under normal circumstances this reconstruction is much more accurate than depth from zooming, and works over a greater range
than depth from axial motion while providing, in the worst case, only slightly less accurate results. Finally, we show how
zoom tracking can also be used in time-to-contact calculations.
Received: 15 February 2000 / Accepted: 19 June 2000
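Under the pinhole model, the image size of an object at depth Z scales as f/Z, so restoring a target image size only requires scaling the focal length by the ratio of target to measured size. This one-line sketch shows the feedback step implied by that relation; it is an illustration of the zoom-tracking idea, not the paper's optical-flow or range-sensor controller, and the numbers are made up.

```python
def zoom_update(focal_mm, measured_px, target_px):
    """One closed-loop zoom-tracking step: image size is proportional to
    focal length, so scale the focal length by target/measured size to
    keep a constant-sized image of an object moving along the optical axis."""
    return focal_mm * target_px / measured_px

# Object recedes and its image shrinks from 120 px to 80 px at f = 30 mm;
# compute the focal length that restores the 120-px image.
f = zoom_update(30.0, measured_px=80.0, target_px=120.0)
print(f)
```

Inverting the same relation is what allows depth recovery: if f must grow by a factor k to hold the image size constant, the object's depth grew by the same factor.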
12.
Chiang Lee, Chi-Sheng Shih, Yaw-Huei Chen. The VLDB Journal: The International Journal on Very Large Data Bases, 2001, 9(4): 327-343
Traditional algorithms for optimizing the execution order of joins are no longer valid when selections and projections involve methods and become very expensive operations. Selections and projections may even be more costly than joins, so that they are pulled above joins rather than pushed down in a query tree. In this paper, we take a fundamental look at how to approach
query optimization from a top-down design perspective, rather than trying to force one model to fit into another. We present
a graph model which is designed to characterize execution plans. Each edge and each vertex of the graph is assigned a weight
to model execution plans. We also design algorithms that use these weights to optimize the execution order of operations.
A cost model of these algorithms is developed. Experiments are conducted on the basis of this cost model. The results show
that our algorithms are superior to similar work proposed in the literature.
Received: 20 April 1999 / Accepted: 9 August 2000 / Published online: 20 April 2001
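A concrete instance of the problem this abstract addresses is ordering expensive selections: the textbook heuristic ranks predicates by (selectivity − 1)/cost, ascending, so cheap and highly selective predicates run first. The sketch below shows that classic rank metric, which the paper's graph-based optimizer generalizes; it is not the paper's own algorithm, and the predicate names and numbers are invented.

```python
def order_expensive_selections(preds):
    """Order selection predicates by the classic rank metric
    (selectivity - 1) / per-tuple cost, ascending: the most 'profitable'
    predicates (cheap, highly selective) are evaluated first."""
    return sorted(preds, key=lambda p: (p[1] - 1.0) / p[2])

# (name, selectivity, per-tuple cost); 'method_call' models an expensive
# user-defined method, the case where join-only optimizers break down.
preds = [("cheap_vague", 0.9, 1.0),
         ("method_call", 0.1, 50.0),
         ("cheap_sharp", 0.2, 1.0)]
print([p[0] for p in order_expensive_selections(preds)])
```

Note that the highly selective but expensive method call is deliberately deferred to last here, which is exactly the behavior that motivates pulling such operations above joins.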
13.
Jocelyn Sérot, Dominique Ginhac, Roland Chapuis, Jean-Pierre Dérutin. Machine Vision and Applications, 2001, 12(6): 271-290
We present a design methodology for real-time vision applications aiming at significantly reducing the design-implement-validate
cycle time on dedicated parallel platforms. This methodology is based upon the concept of algorithmic skeletons, i.e., higher
order program constructs encapsulating recurring forms of parallel computations and hiding their low-level implementation
details. Parallel programs are built by simply selecting and composing instances of skeletons chosen in a predefined basis.
A complete parallel programming environment was built to support the presented methodology. It comprises a library of vision-specific
skeletons and a chain of tools capable of turning an architecture-independent skeletal specification of an application into
an optimized, deadlock-free distributed executive for a wide range of parallel platforms. This skeleton basis was defined
after a careful analysis of a large corpus of existing parallel vision applications. The source program is a purely functional
specification of the algorithm in which the structure of a parallel application is expressed only as combination of a limited
number of skeletons. This specification is compiled down to a parametric process graph, which is subsequently mapped onto
the actual physical topology using a third-party CAD software. It can also be executed on any sequential platform to check
the correctness of the parallel algorithm. The applicability of the proposed methodology and associated tools has been demonstrated
by parallelizing several realistic real-time vision applications both on a multi-processor platform and a network of workstations.
It is here illustrated with a complete road-tracking algorithm based upon white-line detection. This experiment showed a dramatic
reduction in development time (hence the term fast prototyping), while keeping performance on par with that obtained with the handcrafted parallel version.
Received: 22 July 1999 / Accepted: 9 November 2000
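The skeleton idea above, higher-order constructs that hide low-level parallel machinery, can be sketched with the simplest skeleton of all, a "farm" that applies a worker function to independent data items in parallel. This is an illustrative sketch using Python threads; the paper's skeletons target dedicated parallel platforms and compile to process graphs, not this runtime.

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, keeps it portable

def farm(worker, items, n_workers=4):
    """Minimal 'farm' skeleton: distribute independent items over a pool
    of workers and collect results in order, hiding scheduling and
    synchronization from the application programmer."""
    with Pool(n_workers) as pool:
        return pool.map(worker, items)

# Programs are built by composing skeleton instances; a per-pixel filter
# farmed over image tiles would follow the same shape as this toy example.
print(farm(lambda x: x * x, range(6)))
```

Because the skeleton fixes the parallel structure, the same functional specification can also run sequentially (a plain `map`) to check correctness, mirroring the abstract's point about sequential validation.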
14.
A system to navigate a robot into a ship structure
Markus Vincze, Minu Ayromlou, Carlos Beltran, Antonios Gasteratos, Simon Hoffgaard, Ole Madsen, Wolfgang Ponweiser, Michael Zillich. Machine Vision and Applications, 2003, 14(1): 15-25
Abstract. A prototype system has been built to navigate a walking robot into a ship structure. The 8-legged robot is equipped with an active stereo head. Good viewpoints are selected from the CAD model of the ship so that the head can look at locations with sufficient edge features, which are extracted automatically for each view. The pose of the robot is estimated from the features detected by two vision approaches. One approach searches stereo images for junctions and measures their 3-D positions. The other tracks 2-D edge features in monocular images. Robust tracking is achieved with a method of edge-projected integration of cues (EPIC). Two inclinometers are used to stabilise the head while the robot moves. The results of the final demonstration, navigating the robot with centimetre accuracy, are given.
15.
A modified version of the CDWT optical flow algorithm developed by Magarey and Kingsbury is applied to the problem of moving-target
detection in noisy infrared image sequences, in the case where the sensor is also moving. Frame differencing is used to detect
pixel-size targets moving in strongly cluttered backgrounds. To compensate for sensor motion, prior to differencing, the background
is registered spatially using the estimated motion field between the frames. Results of applying the method to three image
sequences show that the target SNR is higher when the estimated motion field for the whole scene is explicitly regularized.
A comparison with another optical flow algorithm is also presented.
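The registration-then-differencing step described above can be sketched as: warp the previous frame by the estimated background motion field, then difference against the current frame, so that only motion not explained by the sensor survives. This is a nearest-pixel sketch on a tiny synthetic scene, not the CDWT estimator or its regularization.

```python
import numpy as np

def compensated_difference(prev, curr, flow):
    """Warp the previous frame by the estimated motion field (dx, dy per
    pixel, nearest-pixel only) and return the absolute difference against
    the current frame; an independently moving target survives the warp."""
    h, w = prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - flow[..., 1].round().astype(int), 0, h - 1)
    src_x = np.clip(xs - flow[..., 0].round().astype(int), 0, w - 1)
    return np.abs(curr - prev[src_y, src_x])

# Background (a vertical line) shifts one pixel right with the sensor;
# one pixel-size 'target' moves independently.
prev = np.zeros((8, 8))
prev[:, 3] = 1.0
prev[6, 6] = 5.0
curr = np.zeros((8, 8))
curr[:, 4] = 1.0
curr[5, 5] = 5.0
flow = np.zeros((8, 8, 2))
flow[..., 0] = 1.0                 # uniform estimated background motion
diff = compensated_difference(prev, curr, flow)
print(diff.max())
```

After compensation the background line cancels exactly, and only the target's old and new positions remain in the difference image, which is what makes pixel-size targets detectable in clutter.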
16.
A database model for object dynamics
M.P. Papazoglou, B.J. Krämer. The VLDB Journal: The International Journal on Very Large Data Bases, 1997, 6(2): 73-96
To effectively model complex applications in which constantly changing situations can be represented, a database system must
be able to support the runtime specification of structural and behavioral nuances for objects on an individual or group basis.
This paper introduces the role mechanism as an extension of object-oriented databases to support unanticipated behavioral
oscillations for objects that may attain many types and share a single object identity. A role refers to the ability to represent
object dynamics by seamlessly integrating idiosyncratic behavior, possibly in response to external events, with pre-existing
object behavior specified at instance creation time. In this manner, the same object can simultaneously be an instance of
different classes which symbolize the different roles that this object assumes. The role concept and its underlying linguistic
scheme simplify the design requirements of complex applications that need to create and manipulate dynamic objects.
Edited by D. McLeod / Received March 1994 / Accepted January 1996
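The role mechanism can be sketched as one object identity that gains and sheds roles (extra state and behavior) at runtime while its base type stays fixed at creation. The class below is a toy illustration of that idea in plain Python; the paper defines roles inside an object-oriented database with its own linguistic scheme, and the attribute names here are invented.

```python
class RoleObject:
    """One object identity that can dynamically acquire roles; attribute
    lookup falls through to the attached roles, so role behavior shadows
    or extends the behavior fixed at instance creation."""

    def __init__(self, oid):
        self.oid = oid
        self.roles = {}                  # role name -> attribute dict

    def add_role(self, name, **attrs):
        self.roles[name] = dict(attrs)   # attach idiosyncratic state

    def drop_role(self, name):
        self.roles.pop(name, None)       # roles can be shed at runtime

    def __getattr__(self, attr):
        # Called only when normal lookup fails: search attached roles.
        for attrs in self.roles.values():
            if attr in attrs:
                return attrs[attr]
        raise AttributeError(attr)

p = RoleObject(oid=42)
p.add_role("employee", salary=50000)     # same identity, new type facet
p.add_role("student", university="UniX")
print(p.salary, p.university)
p.drop_role("student")                   # role shed; identity unchanged
```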
17.
We present a method of colour shade grading for industrial inspection of surfaces, the differences of which are at the threshold
of human perception. This method converts the input data from the electronic sensor to the corresponding data as they would
have been viewed using the human vision system. Then their differences are computed using a perceptually uniform colour space,
thus approximating the way the human experts would grade the product. The transformation from the electronic sensor to the
human sensor makes use of synthetic metameric data to determine the transformation parameters. The method has been tested
using real data.
Received: 17 November 1997 / Accepted: 15 September 1998
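Once colours are expressed in a perceptually uniform space, grading reduces to a distance computation: shades whose difference falls below a just-noticeable threshold grade as the same. The sketch below uses the CIE76 Delta-E (Euclidean distance in CIELAB) with a commonly quoted JND of about 2.3; it illustrates only the uniform-space comparison step, not the paper's sensor-to-observer transformation, and the Lab values are invented.

```python
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 colour difference: Euclidean distance between two colours in
    the perceptually uniform CIELAB space, so equal distances correspond
    roughly to equal perceived differences."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

JND = 2.3                              # commonly quoted just-noticeable ΔE
batch_a = (52.0, 12.5, -8.0)           # L*, a*, b* of the reference shade
batch_b = (52.4, 12.1, -7.6)           # candidate product shade
d = delta_e(batch_a, batch_b)
print(round(d, 3), d < JND)
```

Differences of this size sit right at the threshold of human perception, which is the regime the abstract targets.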
18.
19.
20.
Hwan-Chul Park, Se-Young Ok, Young-Jung Yu, Hwan-Gue Cho. International Journal on Document Analysis and Recognition, 2001, 4(2): 115-130
Automatic character recognition and image understanding of a given paper document are among the main objectives of computer vision. For these problems, a basic step is to isolate characters and then group the isolated characters into words. In this paper, we propose a new method for extracting characters from a mixed text/graphic machine-printed document and an algorithm
for distinguishing words from the isolated characters. For extracting characters, we exploit several features (size, elongation,
and density) of characters and propose a characteristic value for classification using the run-length frequency of the image
component. In the context of word grouping, previous works have largely been concerned with words which are placed on a horizontal
or vertical line. Our word grouping algorithm can group words which are on inclined lines, intersecting lines, and even curved
lines. To do this, we introduce the 3D neighborhood graph model which is very useful and efficient for character classification
and word grouping. In the 3D neighborhood graph model, each connected component of a text image segment is mapped onto 3D
space according to the area of the bounding box and positional information from the document. We conducted tests with more
than 20 English documents and more than 10 oriental documents scanned from books, brochures, and magazines. Experimental
results show that more than 95% of words are successfully extracted from general documents, even in very complicated oriental
documents.
Received: August 3, 2001 / Accepted: August 8, 2001
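The run-length feature mentioned above can be sketched directly: text components concentrate in short black runs, while graphics produce long ones, so the distribution of run lengths separates the two classes. This is a sketch of the run-length computation on one synthetic binary row; the paper's exact characteristic value and the 3D neighborhood graph are not reproduced here.

```python
import numpy as np

def run_lengths(row):
    """Lengths of horizontal black runs in a binary row; a histogram of
    these lengths over a component is the kind of run-length frequency
    feature used to classify components as text or graphics."""
    runs, count = [], 0
    for px in row:
        if px:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:                      # close a run that reaches the row end
        runs.append(count)
    return runs

# Synthetic binary scanline: short runs (text-like) and one long run
row = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1])
print(run_lengths(row))
```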