20 similar documents found (search time: 0 ms)
1.
Presently, man-machine interface development is a widespread research activity. A system to understand hand drawn architectural
drawings in a CAD environment is presented in this paper. To understand a document, we have to identify its building elements
and their structural properties. An attributed graph structure is chosen as a symbolic representation of the input document
and the patterns to recognize in it. An inexact subgraph isomorphism procedure using relaxation labeling techniques is performed.
In this paper we focus on how to speed up the matching. There is a building element, the walls, characterized by a hatching
pattern. Using a straight line Hough transform (SLHT)-based method, we recognize this pattern, characterized by parallel straight
lines, and remove from the input graph the edges belonging to this pattern. The isomorphism is then applied to the remainder
of the input graph. When all the building elements have been recognized, the document is redrawn, correcting the inaccurate
strokes obtained from a hand-drawn input.
Received 6 June 1996 / Accepted 4 February 1997
2.
3.
Discretization errors in the Hough transform
Straight line-segments in a two-dimensional image can be detected with the Hough transform by searching peaks in a parameter space. The influence on the Hough transform of the quantization of the parameter space, the quantization of the image and the width of the line-segment is investigated in this paper.
The Hough transform was improved by O'Gorman and Clowes by taking into account the gradient direction. The resulting scatter of the peaks can be reduced by using a weighting function in the transform. Examples of asbestos preparations are given.
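The peak-search formulation summarized above can be sketched in a few lines of Python (a minimal illustration, not the authors' implementation; the function name, bin sizes, and test points are our own):

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Vote in (theta, rho) space: each point votes for every line through it."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, int(round(rho / rho_res)))
            acc[key] = acc.get(key, 0) + 1
    return acc

# Twenty points on the vertical line x = 5: the returned peak is the bin
# for theta = 0, rho = 5, with one vote per point.
pts = [(5, y) for y in range(20)]
acc = hough_lines(pts)
(t_peak, r_peak), votes = max(acc.items(), key=lambda kv: kv[1])
```

The quantization of both axes (`n_theta`, `rho_res`) controls how sharply votes concentrate in one cell, which is exactly the discretization effect this abstract analyzes.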
4.
Rapid computation of the Hough transform is necessary in many computer vision applications. One of the major approaches for fast Hough transform computation is based on the use of a small random sample of the data set rather than the full set. Two different algorithms within this family are the randomized Hough transform (RHT) and the probabilistic Hough transform (PHT). There have been contradictory views on the relative merits and drawbacks of the RHT and the PHT. In this paper, a unified theoretical framework for analyzing the RHT and the PHT is established. The performance of the two algorithms is characterized both theoretically and experimentally. Clear guidelines for selecting the algorithm that is most suitable for a given application are provided. We show that, when considering the basic algorithms, the RHT is better suited for the analysis of high-quality low-noise edge images, while for the analysis of noisy low-quality images the PHT should be selected.
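The core RHT idea, casting one vote per randomly sampled point pair rather than one vote per point per parameter cell, can be sketched as follows (a toy illustration under our own naming and binning assumptions):

```python
import math
import random

def rht(points, n_samples=2000, angle_res=math.radians(1), rho_res=1.0, seed=0):
    """Randomized Hough transform: each random point pair casts a single vote."""
    rng = random.Random(seed)
    acc = {}
    for _ in range(n_samples):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        # Normal direction of the line through the pair, folded into [0, pi).
        theta = math.atan2(x2 - x1, y1 - y2) % math.pi
        rho = x1 * math.cos(theta) + y1 * math.sin(theta)
        key = (round(theta / angle_res), round(rho / rho_res))
        acc[key] = acc.get(key, 0) + 1
    return acc

# Noise-free points on the horizontal line y = 3: every sampled pair votes
# for the same cell, so the accumulator stays tiny.
pts = [(x, 3) for x in range(30)]
acc = rht(pts)
```

On a real edge map the pair sampling also hits noise points, which is why the guidance above matters: RHT concentrates its votes well only when the edge image is clean.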
5.
A technique for real-time object recognition in digital images is described. On the one hand, our approach combines robustness against occlusions, clutter, arbitrary illumination changes, and noise with invariance under rigid motion, i.e., translation and rotation. On the other hand, the computational effort is small in order to fulfill requirements of real-time applications. Our approach uses a modification of the generalized Hough transform (GHT) to improve the GHT's performance: A novel efficient limitation of the search space in combination with a hierarchical search strategy is implemented to reduce the computational effort. To meet the demands for high precision in industrial tasks, a subsequent refinement adjusts the final pose parameters. An empirical performance evaluation of the modified GHT is presented by comparing it to two standard 2D object recognition techniques.
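The R-table mechanism at the heart of the GHT, the base algorithm this paper modifies, can be illustrated with a toy example; gradient directions are assumed to be pre-quantized into bins, and all names here are our own:

```python
from collections import defaultdict

def build_r_table(template, ref):
    """R-table: gradient-direction bin -> displacements to the reference point."""
    rx, ry = ref
    table = defaultdict(list)
    for x, y, phi in template:
        table[phi].append((rx - x, ry - y))
    return table

def ght_vote(table, image_pts):
    """Each image edge point votes for candidate reference-point locations."""
    acc = defaultdict(int)
    for x, y, phi in image_pts:
        for dx, dy in table[phi]:
            acc[(x + dx, y + dy)] += 1
    return acc

# Four template edge points with distinct direction bins, reference at (2, 2);
# the "image" is the same shape translated by (10, 7).
template = [(0, 0, 0), (4, 0, 1), (4, 4, 2), (0, 4, 3)]
table = build_r_table(template, (2, 2))
image = [(x + 10, y + 7, phi) for x, y, phi in template]
acc = ght_vote(table, image)
peak = max(acc, key=acc.get)
```

The paper's contribution sits on top of this scheme: limiting the search space and searching hierarchically so that the voting stays cheap enough for real-time use.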
6.
An algorithm to implement the Hough transform for the detection of a straight line on a pyramidal architecture is presented. The algorithm consists of two phases. The first phase, called block-projection, takes constant time. The second phase, called block-combination, is repeated log n times and takes a total of O(n^(1/2)) time for the detection of all straight lines having a given slope on an n×n image; if there are p different slopes to be detected, then the total time becomes O(pn^(1/2)).
7.
A fast digital Radon transform based on recursively defined digital straight lines is described, which has the sequential complexity of N2 log N additions for an N × N image. This transform can be used to evaluate the Hough transform to detect straight lines in a digital image. Whilst a parallel implementation of the Hough transform algorithm is difficult because of global memory access requirements, the fast digital Radon transform is vectorizable and therefore well suited for parallel computation. The structure of the fast algorithm is shown to be quite similar to the FFT algorithm for decimation in frequency. It is demonstrated that even for sequential computation the fast Radon transform is an attractive alternative to the classical Hough transform algorithm.
8.
Subspace-based line detection (SLIDE) is a novel approach for straight line fitting that has recently been suggested by Aghajan
and Kailath. It is based on an analogy made between a straight line in an image and a planar propagating wavefront impinging
on an array of sensors. Efficient sensor array processing algorithms are used to detect the parameters of the line. SLIDE
is computationally cheaper than the Hough transform, but it has been unclear whether this saving comes at a cost. In particular, it has not been known how the breakdown points of SLIDE relate to those of the Hough transform. We compare the failure modes and limitations of the two algorithms and demonstrate that SLIDE is significantly less robust than the Hough transform.
9.
10.
REFLICS: Real-time flow imaging and classification system
Sadahiro Iwamoto, David M. Checkley Jr., Mohan M. Trivedi 《Machine Vision and Applications》2001,13(1):1-13
An accurate analysis of a large dynamic system like our oceans requires spatially fine and temporally matched data collection
methods. Current methods to estimate fish stock size from pelagic (marine) fish egg abundance by using ships to take point
samples of fish eggs have large margins of error due to spatial and temporal undersampling. The real-time flow imaging and
classification system (REFLICS) enhances fish egg sampling by obtaining continuous, accurate information on fish egg abundance
as the ship cruises along in the area of interest. REFLICS images the dynamic flow with a progressive-scan area camera (60
frames/s) and a synchronized strobe in backlighting configuration. Digitization and processing occur on a dual-processor Pentium
II PC and a pipeline-based image-processing board. REFLICS uses a segmentation algorithm to locate fish-egg-like objects in
the image and then a classifier to determine fish egg, species, and development stage (age). We present an integrated system
design of REFLICS and performance results. REFLICS can perform in real time (60 Hz), classify fish eggs with low false negative
rates on real data collected from a cruise, and work in harsh conditions aboard ships at sea. REFLICS enables cost-effective,
real-time assessment of pelagic fish eggs for research and management.
Received: 12 April 2000 / Accepted: 6 July 2000
11.
Real-time system = discrete system + clock variables
Rajeev Alur, Thomas A. Henzinger 《International Journal on Software Tools for Technology Transfer (STTT)》1997,1(1-2):86-109
This paper introduces, gently but rigorously, the clock approach to real-time programming. We present with mathematical precision,
assuming no prerequisites other than familiarity with logical and programming notations, the concepts that are necessary for
understanding, writing, and executing clock programs. In keeping with an expository style, all references are clustered in
bibliographic remarks at the end of each section. The first appendix presents proof rules for verifying temporal properties
of clock programs. The second appendix points to selected literature on formal methods and tools for programming with clocks.
In particular, the timed automaton, which is a finite-state machine equipped with clocks, has become a standard paradigm for
real-time model checking; it underlies the tools HyTech, Kronos, and Uppaal, which are discussed elsewhere in this volume.
12.
Parallel systems provide an approach to robust computing. The motivation for this work arises from using modern parallel
environments in intermediate-level feature extraction. This study presents parallel algorithms for the Hough transform (HT)
and the randomized Hough transform (RHT). The algorithms are analyzed in two parallel environments: multiprocessor computers
and workstation networks. The results suggest that both environments are suitable for the parallelization of HT. Because scalability
of the parallel RHT is weaker than with HT, only the multiprocessor environment is suitable. The limited scalability forces
us to use adaptive techniques to obtain good results regardless of the number of processors. Despite the fact that the speedups
with HT are greater than with RHT, in terms of total computation time, the new parallel RHT algorithm outperforms the parallel
HT.
Received: 8 December 2001 / Accepted: 5 June 2002
Correspondence to: V. Kyrki
13.
In this paper, an improved Hough transform (HT) method is proposed to robustly detect line segments in images with complicated backgrounds. The work focuses on detecting line segments of distinct lengths, totally independent of prior knowledge of the original image. Based on the characteristics of accumulation distribution obtained by conventional HT, a local operator is implemented to enhance the difference between the accumulation peaks caused by line segments and noise. Through analysis of the effect of the operator, a global threshold is obtained in the histogram of the enhanced accumulator to detect peaks. Experimental results are provided to demonstrate the efficiency and robustness of the proposed method.
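The paper's local operator is not reproduced here, but one plausible operator of this kind, subtracting the local neighborhood mean so that sharp accumulator peaks stand out from diffuse noise, can be sketched as:

```python
def enhance_peaks(acc, k=1):
    """Subtract the local (2k+1)x(2k+1) mean from each accumulator cell."""
    h, w = len(acc), len(acc[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            total, n = 0, 0
            for di in range(-k, k + 1):
                for dj in range(-k, k + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        total += acc[ii][jj]
                        n += 1
            out[i][j] = acc[i][j] - total / n
    return out

# A flat accumulator with one sharp peak: the peak is amplified relative to
# the background, which is pushed toward zero.
acc = [[1] * 5 for _ in range(5)]
acc[2][2] = 10
out = enhance_peaks(acc)
```

A global threshold on the histogram of such an enhanced accumulator, as the abstract describes, then separates line peaks from noise.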
14.
Accurate and efficient vectorization of line drawings is essential for any higher level processing in document analysis and
recognition systems. In spite of the prevalence of vectorization and line detection methods, no standard for their performance
evaluation protocol exists. We propose a protocol for evaluating both straight and circular line extraction to help compare,
select, improve, and even design line detection algorithms to be incorporated into line drawing recognition and understanding
systems. The protocol involves both positive and negative sets of indices, at pixel and vector levels. Time efficiency is
also included in the protocol. The protocol may be extended to handle lines of any shape as well as other classes of graphic
objects. 相似文献
15.
Hough Transform (HT) is recognized as a powerful tool for graphic element extraction from images due to its global vision and robustness in noisy or degraded environments. However, the application of HT has long been limited to small-size images. Besides the well-known heavy computation in the accumulation, the peak detection and the line verification become much more time-consuming for large-size images. Another limitation is that most existing HT-based line recognition methods are not able to detect line thickness, which is essential for large-size images, usually engineering drawings. We believe these limitations arise because these methods work only in the HT parameter space. This paper therefore proposes a new HT-based line recognition method, which utilizes both the HT parameter space and the image space. The proposed method devises an image-based gradient prediction to accelerate the accumulation, introduces a boundary recorder to eliminate redundant analyses in the line verification, and develops an image-based line verification algorithm to detect line thickness and reduce false detections as well. It also proposes pixel removal to avoid overlapping lines instead of rigidly suppressing the N×N neighborhood. We perform experiments on real images of different sizes in terms of speed and detection accuracy. The experimental results demonstrate a significant performance improvement, especially for large-size images.
16.
17.
Weft-skew angle detection is a key step in fabric straightening, and detecting the skew angle quickly and accurately is important for straightening quality. To address the slow speed and limited accuracy of existing image-based fabric straightening methods, a fast weft-skew detection method for fabric images is proposed, based on the Hough transform and multi-projection fast Fourier transform (FFT) analysis. First, the captured fabric image is filtered in the frequency domain via the Fourier transform and then inverse-transformed, removing image regions that do not indicate the weft direction. Next, the image is convolved with the Sobel edge-direction operator to obtain an edge-direction map from which weft-direction information is extracted, and morphological filtering yields a weft-yarn skeleton map that further trims the weft region and reduces the computational load. Finally, the Hough transform and multi-projection FFT analysis yield the weft-skew angle of the fabric image. Tests on different fabric types show that the algorithm runs in under 0.55 s with an error below 0.2°, balancing detection accuracy and speed, and meets the requirements of practical engineering applications.
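As a simplified stand-in for the multi-projection analysis this entry describes (skipping the FFT and morphological stages; the names and the variance-scoring rule are our own assumptions), candidate skew angles can be scored by how sharply the image projects along each direction:

```python
import math

def skew_angle(img, angles):
    """Score each candidate angle by the variance of its projection profile;
    projecting along the true weft direction gives the sharpest profile."""
    h, w = len(img), len(img[0])
    best, best_score = None, -1.0
    for a in angles:
        t = math.tan(math.radians(a))
        prof = {}
        for y in range(h):
            for x in range(w):
                r = round(y - x * t)  # intercept bin of the line through (x, y)
                prof[r] = prof.get(r, 0) + img[y][x]
        vals = list(prof.values())
        mean = sum(vals) / len(vals)
        score = sum((v - mean) ** 2 for v in vals) / len(vals)
        if score > best_score:
            best, best_score = a, score
    return best

# Synthetic "fabric": a horizontal stripe every 4 rows, i.e. zero weft skew.
img = [[1 if y % 4 == 0 else 0 for x in range(40)] for y in range(40)]
angle = skew_angle(img, range(-3, 4))
```

The Hough transform stage in the paper plays the analogous role of finding the dominant yarn direction, but on the pre-filtered skeleton image rather than raw pixels.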
18.
Iris localization, which determines the inner and outer boundaries of the iris in an eye image, is the first step of iris recognition. The Hough transform is the classical localization algorithm, but it demands high-quality input images and has a long running time. Based on the gray-level characteristics of eye images, an improved Hough-transform localization algorithm combined with morphological processing is proposed. The image is binarized and morphologically processed to segment the pupil; pupil boundary points are then detected with the Sobel edge operator, and the inner iris boundary is located by least-squares fitting. Guided by prior knowledge and morphological processing, a Hough transform is then applied to locate the outer iris boundary. Experiments show that the proposed algorithm performs considerably better than the conventional Hough transform and can be used in the preprocessing stage of practical iris recognition.
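The inner-boundary step combines edge points with a least-squares circle fit. The abstract does not say which fit; a common choice, shown here purely as an assumption, is the algebraic (Kasa) fit, which reduces circle fitting to a 3×3 linear system:

```python
import math

def fit_circle(pts):
    """Kasa fit: least-squares solution of x^2 + y^2 + D*x + E*y + F = 0."""
    # Accumulate the 3x3 normal equations A u = b for u = (D, E, F).
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in pts:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * rhs
    # Gaussian elimination with partial pivoting, then back-substitution.
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            for j in range(c, 3):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    u = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        u[r] = (b[r] - sum(A[r][j] * u[j] for j in range(r + 1, 3))) / A[r][r]
    D, E, F = u
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - F)

# Noise-free points on a circle centred at (3, -1) with radius 5.
pts = [(3 + 5 * math.cos(2 * math.pi * k / 12),
        -1 + 5 * math.sin(2 * math.pi * k / 12)) for k in range(12)]
cx, cy, r = fit_circle(pts)
```

With Sobel-detected pupil boundary points as `pts`, the recovered centre and radius give the inner iris boundary without any Hough voting, which is where the speedup over the all-Hough approach comes from.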
19.
This letter presents a binary Hough transform (BHT) derived from the conventional Hough transform with slope/intercept parameterization and a systolic architecture for its efficient implementation using only adders and delay-elements.
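The slope/intercept voting that the BHT builds on can be sketched in software (the systolic adder/delay-element architecture itself is not reproduced; names are illustrative):

```python
def slope_intercept_hough(points, slopes):
    """For each quantized slope m, each point votes for the intercept c = y - m*x."""
    acc = {}
    for m in slopes:
        for x, y in points:
            key = (m, y - m * x)
            acc[key] = acc.get(key, 0) + 1
    return acc

# Integer points on y = 2*x + 3: all ten votes at slope m = 2 pile onto c = 3.
pts = [(x, 2 * x + 3) for x in range(10)]
acc = slope_intercept_hough(pts, range(-2, 3))
(m, c), votes = max(acc.items(), key=lambda kv: kv[1])
```

With integer slopes the product m*x can be formed by repeated addition across successive x, which is why a hardware realization needs only adders and delay elements.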
20.
Amer Dawoud, Mohamed Kamel 《International Journal on Document Analysis and Recognition》2002,5(1):28-38
Binarization of document images with poor contrast, strong noise, complex patterns, and variable modalities in the gray-scale
histograms is a challenging problem. A new binarization algorithm has been developed to address this problem for personal
cheque images. The main contribution of this approach is optimizing the binarization of a part of the document image that
suffers from noise interference, referred to as the Target Sub-Image (TSI), using information easily extracted from another
noise-free part of the same image, referred to as the Model Sub-Image (MSI). Simple spatial features extracted from MSI are
used as a model for handwriting strokes. This model captures the underlying characteristics of the writing strokes, and is
invariant to the handwriting style or content. This model is then utilized to guide the binarization in the TSI. Another contribution
is a new technique for the structural analysis of document images, which we call “Wavelet Partial Reconstruction” (WPR). The
algorithm was tested on 4,200 cheque images and the results show significant improvement in binarization quality in comparison
with other well-established algorithms.
Received: October 10, 2001 / Accepted: May 7, 2002
This research was supported in part by NCR and NSERC's industrial postgraduate scholarship No. 239464.
A simplified version of this paper has been presented at ICDAR 2001 [3].