20 similar documents found; search took 9 ms
1.
Line detection in images through regularized Hough transform    Total citations: 17 (self: 0, other: 17)
The problem of determining the location and orientation of straight lines in images is of great importance in the fields of computer vision and image processing. Traditionally, the Hough transform (a special case of the Radon transform) has been widely used to solve this problem for binary images. In this paper, we pose the problem of detecting straight lines in gray-scale images as an inverse problem. Our formulation is based on the use of the inverse Radon operator, which relates the parameters determining the location and orientation of the lines in the image to the noisy input image. The advantage of this formulation is that we can then approach line detection within a regularization framework and enhance the performance of the Hough-based line detector by incorporating prior information in the form of regularization. We discuss the types of regularizers that are useful for this problem and derive efficient computational schemes to solve the resulting optimization problems, enabling their use in large applications. Finally, we show how our new approach can alternatively be viewed as finding an optimal representation of the noisy image in terms of elements chosen from a dictionary of lines. This interpretation relates the problem of Hough-based line finding to the body of work on adaptive signal representation.
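The regularized formulation builds on the classical Hough voting scheme for binary edge maps. As a point of reference, a minimal sketch of that baseline (function names, quantization steps, and thresholds are illustrative, not the authors' code):

```python
import math

def hough_lines(edge_points, n_theta=180, rho_step=0.5, rho_max=15.0):
    """Classical Hough voting: each edge point (x, y) votes for every
    quantized line rho = x*cos(theta) + y*sin(theta) passing through it;
    peaks in the accumulator mark detected lines."""
    acc = {}
    for x, y in edge_points:
        for ti in range(n_theta):
            theta = ti * math.pi / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ri = round((rho + rho_max) / rho_step)  # quantized rho index
            acc[(ti, ri)] = acc.get((ti, ri), 0) + 1
    return acc

def strongest_line(acc, n_theta=180, rho_step=0.5, rho_max=15.0):
    """Return (theta, rho, votes) for the accumulator peak."""
    (ti, ri), votes = max(acc.items(), key=lambda kv: kv[1])
    return ti * math.pi / n_theta, ri * rho_step - rho_max, votes
```

Ten collinear points at y = 5 concentrate all ten votes in the cell near (theta, rho) = (pi/2, 5); the paper's regularization then replaces this raw peak-picking with an inverse-problem solve over the same parameterization.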
2.
3.
Automatic generation of fast discrete signal transforms    Total citations: 1 (self: 0, other: 1)
This paper presents an algorithm that symbolically derives fast versions of a broad class of discrete signal transforms, including, but not limited to, the discrete Fourier and discrete trigonometric transforms. This is achieved by finding fast sparse matrix factorizations of the matrix representations of these transforms. Unlike previous methods, the algorithm is entirely automatic and uses the defining matrix as its sole input. The sparse matrix factorization algorithm consists of two steps: first, the “symmetry” of the matrix is computed in the form of a pair of group representations; second, the representations are stepwise decomposed, giving rise to a sparse factorization of the original transform matrix. We have successfully demonstrated the method by automatically computing efficient transforms in several important cases: for the DFT, we obtain the Cooley-Tukey (1965) FFT; for a class of transforms including the DCT, type II, the number of arithmetic operations for our fast transforms matches that of the best-known algorithms. Our approach provides new insights into, and interpretations of, the structure of these signal transforms and the question of why fast algorithms exist. The sparse matrix factorization algorithm is implemented within the software package AREP.
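The kind of sparse factorization such an algorithm discovers can be checked by hand for the 4-point DFT, where the Cooley-Tukey split writes the dense matrix as a product of three sparse factors. A worked sketch verifying this identity numerically (this is an illustration of the target factorization, not the AREP implementation):

```python
import cmath

def dft_matrix(n):
    """Dense DFT matrix F[j][k] = w^(j*k), with w = exp(-2*pi*i/n)."""
    w = cmath.exp(-2j * cmath.pi / n)
    return [[w ** (j * k) for k in range(n)] for j in range(n)]

def matmul(A, B):
    """Plain dense matrix product, sufficient for a 4x4 check."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

w = cmath.exp(-2j * cmath.pi / 4)  # twiddle factor, w = -i
# Even-odd permutation, block diagonal (F2, F2), and butterfly stage;
# every factor has at most two nonzeros per row, which is the source
# of the speedup.
P = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
B = [[1, 1, 0, 0], [1, -1, 0, 0], [0, 0, 1, 1], [0, 0, 1, -1]]
A = [[1, 0, 1, 0], [0, 1, 0, w], [1, 0, -1, 0], [0, 1, 0, -w]]

def factored_dft4():
    """Product A * diag(F2, F2) * P, which should equal the dense F4."""
    return matmul(A, matmul(B, P))
```

Multiplying an input vector by the three sparse factors in sequence costs far fewer operations than the dense matrix-vector product, which is exactly the saving the paper's search automates.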
4.
5.
6.
A new method, the triplet circular Hough transform, is proposed for circle detection in image processing and pattern recognition. In this method, a curve in an image is first detected. Next, a sequence of three points on the curve is selected; the parameters (a, b, r) corresponding to each triplet are calculated by solving the circle equation of the curve, and two 2-D accumulators A(a, b) and R(a, b) are incremented by 1 and r, respectively. The parameters {(a, b, r)} of the circles fitting the curve are then determined from A(a, b) and R(a, b) by searching for local maxima over A(a, b). Because no computation loops over the center (a, b) and/or radius r are needed, the method is faster than the basic and directional-gradient methods; it also needs much less memory for accumulation.
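The triplet accumulation can be sketched as follows; the quantization step and function names are illustrative assumptions, not the paper's code:

```python
import math

def circle_from_triplet(p1, p2, p3):
    """Solve the circle (a, b, r) through three points via the
    standard determinant form of the circle equation."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        return None  # collinear points: no circle
    s1, s2, s3 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2, x3 * x3 + y3 * y3
    a = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    b = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return a, b, math.hypot(x1 - a, y1 - b)

def triplet_hough(curve_points, quant=1.0):
    """Accumulate A(a, b) with 1 and R(a, b) with r for successive
    triplets along the traced curve; the peak of A gives the circle
    center and R/A its mean radius. No loop over (a, b) or r is needed."""
    A, R = {}, {}
    for t in zip(curve_points, curve_points[1:], curve_points[2:]):
        c = circle_from_triplet(*t)
        if c is None:
            continue
        a, b, r = c
        key = (round(a / quant), round(b / quant))
        A[key] = A.get(key, 0) + 1
        R[key] = R.get(key, 0.0) + r
    key = max(A, key=A.get)
    return key[0] * quant, key[1] * quant, R[key] / A[key]
```

For points sampled from a single circle, every triplet votes for the same center cell, so the peak of A(a, b) and the ratio R/A recover (a, b, r) directly.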
7.
A fast Hough transform for segment detection    Total citations: 8 (self: 0, other: 8)
The authors describe a new algorithm for the fast Hough transform (FHT) that satisfactorily solves the problems of other fast algorithms proposed in the literature (erroneous solutions, point redundancy, scaling, and detection of straight lines of different sizes) and needs less storage space. By using the information generated by the algorithm for the detection of straight lines, they manage to detect the segments of the image without appreciable computational overhead. They also discuss the performance and parallelization of the algorithm and show its efficiency with some examples.
8.
9.
A line-segment detection algorithm based on the Hough transform    Total citations: 2 (self: 2, other: 0)
To overcome the Hough transform's need for large amounts of storage, a new improved algorithm is proposed based on an analysis of how the traditional Hough transform is defined. The algorithm adopts a many-points-to-one-point matching rule: each line-segment parameter cell in the parameter space is processed individually, and the corresponding template in image space is sought; only the image feature points that satisfy the template-matching condition vote for that cell. Line segments are detected by dynamically locating local peak points and recording the endpoint positions of the corresponding segments in image space. Because the whole parameter space need not be stored in advance, the algorithm saves a large amount of memory, retains the same accuracy as the traditional Hough transform, and is easy to parallelize.
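The cell-by-cell strategy can be sketched as follows; thresholds, quantization, and names are illustrative assumptions, not the paper's implementation:

```python
import math

def detect_segments(points, n_theta=90, rho_step=1.0, rho_max=20.0,
                    tol=0.5, min_votes=8):
    """Visit one (theta, rho) parameter cell at a time, build its line
    template, and let only the feature points matching that template
    vote. A qualifying cell is reported together with the segment's
    endpoint positions, so no global accumulator array is ever stored."""
    segments = []
    n_rho = int(2 * rho_max / rho_step) + 1
    for ti in range(n_theta):
        theta = ti * math.pi / n_theta
        ct, st = math.cos(theta), math.sin(theta)
        for ri in range(n_rho):
            rho = -rho_max + ri * rho_step
            # template match: points within tol of this cell's line
            on_line = [(x, y) for x, y in points
                       if abs(x * ct + y * st - rho) < tol]
            if len(on_line) >= min_votes:
                # endpoints: extreme matches along the line direction
                on_line.sort(key=lambda p: -p[0] * st + p[1] * ct)
                segments.append((theta, rho, on_line[0], on_line[-1]))
    return segments
```

Only one cell's template is live at any moment, which is the memory saving the abstract describes; the parallelization is equally direct, since the cells are independent.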
10.
Improved line detection and localization based on the Hough transform    Total citations: 5 (self: 3, other: 2)
Standard Hough transform line detection is applied, various improved Hough transform algorithms are studied, and a new Hough-based detection algorithm is designed; experimental comparison shows that it achieves higher accuracy.
11.
The randomized Hough transform (RHT) is an effective method for circle detection. But when dealing with complex multi-circle images, random sampling produces many invalid accumulations and results in a large number of calculations. In this paper, by selecting three points of the candidate circle, a fast multi-circle detection algorithm based on the randomized Hough transform is presented. Experimental results demonstrate that the proposed scheme can quickly detect multiple circles and is strongly robust.
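A sketch of the three-point randomized scheme follows; the sampling budget, tolerances, and support threshold are illustrative assumptions, not the paper's values:

```python
import math
import random

def fit_circle(p1, p2, p3):
    """Circle (a, b, r) through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        return None
    s1, s2, s3 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2, x3 * x3 + y3 * y3
    a = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    b = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return a, b, math.hypot(x1 - a, y1 - b)

def rht_multi_circle(points, n_iters=2000, dist_tol=0.3,
                     min_support=20, seed=0):
    """Randomized Hough transform for multiple circles: sample three
    edge points, fit the candidate circle they determine, and accept
    it when enough edge points lie on it. Inliers of an accepted
    circle are removed so later samples target the remaining circles."""
    rng = random.Random(seed)
    remaining = list(points)
    circles = []
    for _ in range(n_iters):
        if len(remaining) < min_support:
            break
        c = fit_circle(*rng.sample(remaining, 3))
        if c is None:
            continue
        a, b, r = c
        inliers = [p for p in remaining
                   if abs(math.hypot(p[0] - a, p[1] - b) - r) < dist_tol]
        if len(inliers) >= min_support:
            circles.append((a, b, r))
            remaining = [p for p in remaining if p not in inliers]
    return circles
```

Removing the inliers of each accepted circle is what keeps the invalid-accumulation rate down as circles are peeled off one by one.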
12.
Novel detection of conics using 2-D Hough planes 总被引:1,自引:0,他引:1
The authors present a new approach to the use of the Hough transform for the detection of ellipses in a 2-D image. In the proposed algorithm, the conventional 5-D Hough voting space is replaced by four 2-D Hough planes, which require only 90 kbytes of memory for a 384×256 image. One of the main differences between the proposed transform and other techniques is the way feature points are extracted from the image under question. For the accumulation process in the Hough domain, an inherent property of the suggested algorithm is its built-in capability for verification. Experimental results from the authors' work on real and synthetic images show a significant improvement in recognition compared to other algorithms. Furthermore, the proposed algorithm is applicable to the detection of both circular and elliptical objects concurrently.
13.
14.
15.
SRAM-based FPGAs are subjected to ion radiation in many operating environments. Following the current trend of shrinking device feature sizes and increasing die area, newer FPGAs are more susceptible to radiation-induced errors. Single-event upsets (SEUs), also known as soft errors, account for a considerable share of radiation-induced errors. SEUs are difficult to detect and correct when they affect memory elements in the FPGA that are used for the implementation of finite state machines (FSMs). Conventional practice for improving FPGA design reliability in the presence of soft errors is configuration memory scrubbing and component redundancy. Configuration memory scrubbing, although suitable for the combinational logic in an FPGA design, does not work for sequential blocks such as FSMs, because the state bits stored in flip-flops (FFs) are variable and change their value after each state transition. Component redundancy, which is also used to mitigate soft errors, comes at the expense of significant area overhead and increased power consumption compared to non-redundant designs. In this paper, we propose an alternative approach that implements the FSM using synchronous embedded memory blocks to enhance runtime reliability without a significant increase in power consumption. Experiments conducted on various benchmark FSMs show that this approach has higher reliability, lower area overhead, and lower power consumption than a component redundancy technique.
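One way to picture the idea of moving an FSM's transition function into a synchronous memory block is a next-state lookup table. A behavioural sketch (the encoding and names are illustrative assumptions, not the paper's FPGA implementation):

```python
def build_fsm_rom(transitions, n_states, n_inputs):
    """Flatten the next-state function into a ROM image addressed by
    (state, input). On an FPGA this table sits in a block RAM whose
    contents are static and therefore scrubbable, unlike the live
    value held in a state flip-flop."""
    rom = [0] * (n_states * n_inputs)
    for (state, inp), nxt in transitions.items():
        rom[state * n_inputs + inp] = nxt
    return rom

def run_fsm(rom, n_inputs, start, inputs):
    """One memory read per clock: the address is formed from the
    current state and the input symbol."""
    state = start
    for inp in inputs:
        state = rom[state * n_inputs + inp]
    return state
```

A three-state detector of the input pattern "1, 1" (states: nothing seen, one 1 seen, pattern seen) illustrates the encoding; the transition logic lives entirely in the table, not in combinational gates around flip-flops.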
16.
A fast circle detection method using a variant of a Hough-like technique is reported. The proposed technique is simple to implement, computationally efficient, and robust to noise. In general, to evaluate circle parameters for all possible point triplets in an edge image containing n points, nC3 enumerations of the points have to be examined. However, if specific relations among the circle points are sought, the required number of enumerations can be reduced. The authors propose one such scheme using point triplets possessing the right-angle property, which reduces the required enumerations to nC2. Moreover, a novel processing strategy known as hypothesis filtering is introduced. The strategy includes two hypothesis constraints, termed consistency checking with gradient angles and neighbouring-point validation. Experimental results demonstrate the performance of the method in detecting circles in both synthetic and real images. Since the proposed method adopts a right-angle criterion for hypotheses, circles occluded or broken by more than one half may not be detected; test results show that this limitation is acceptable. Compared with established Hough transform techniques, the main strengths of the proposed detection method are its attractively low computational and memory complexities and its good detection accuracy.
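The right-angle property exploited here is Thales' theorem: any point of a circle sees a diameter at a right angle. A sketch of one plausible reading of the scheme (with a naive witness-counting verification step; tolerances and names are illustrative assumptions, not the authors' method):

```python
import math

def right_angle_circles(points, angle_tol=0.05, min_witnesses=3):
    """nC2 circle hypotheses: each point pair (p, q) is hypothesised
    to be a diameter. By Thales' theorem every other edge point of
    that circle sees (p, q) at a right angle, so counting
    near-right-angle witnesses verifies the hypothesis without a
    third enumeration loop over triplets."""
    found = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            (x1, y1), (x2, y2) = points[i], points[j]
            a, b = (x1 + x2) / 2, (y1 + y2) / 2      # hypothesised center
            r = math.hypot(x1 - x2, y1 - y2) / 2      # hypothesised radius
            witnesses = 0
            for k, (x, y) in enumerate(points):
                if k in (i, j):
                    continue
                dot = (x1 - x) * (x2 - x) + (y1 - y) * (y2 - y)
                n1 = math.hypot(x1 - x, y1 - y)
                n2 = math.hypot(x2 - x, y2 - y)
                if n1 > 0 and n2 > 0 and abs(dot) / (n1 * n2) < angle_tol:
                    witnesses += 1
            if witnesses >= min_witnesses:
                found.append((a, b, r, witnesses))
    # keep the hypothesis with the most right-angle witnesses
    return max(found, key=lambda c: c[3]) if found else None
```

Because a right angle exists only when the chord is a diameter, more-than-half-occluded circles may leave no valid diameter pair, which matches the limitation stated in the abstract.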
17.
18.
Efficient and accurate object detection algorithms play a vital role in video surveillance, autonomous navigation, and many other fields. To address the limited speed and poor robustness of existing object detection algorithms, a fast object detection method based on objectness estimation and Hough forests is proposed. First, objectness estimation, driven by a bottom-up visual attention mechanism, extracts a set of object candidates from the image. Then, Hough-forest object detection is performed within the regions of interest determined by the candidate set to locate object centers. Finally, object size is determined by combining the scale information of the objectness candidate box containing each object center. Experimental results show that the method greatly increases detection speed while also improving the detection accuracy of the Hough-forest object detector.
19.
This paper first presents a fast W-transform (FWT) algorithm for computing one-dimensional cyclic and skew-cyclic convolutions. By using this FWT in conjunction with the fast polynomial transform (FPT), an efficient algorithm is then proposed for calculating the two-dimensional cyclic convolution (2D CC). Compared to the conventional row-column 2D discrete Fourier transform algorithm or the FPT/fast Fourier transform algorithm for the 2D CC, the proposed algorithm achieves 65% or 40% savings in the number of multiplications, respectively. The number of additions required is also reduced considerably.
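The savings are measured against computing the quantity defined below. A direct reference implementation of the 2-D cyclic convolution, useful for checking any fast algorithm against the definition (this is the naive baseline, not the FWT/FPT method):

```python
def cyclic_conv2d(x, h):
    """Direct 2-D cyclic convolution:
    y[m][n] = sum_k sum_l x[k][l] * h[(m - k) mod M][(n - l) mod N].
    Costs M*N*M*N multiplications; fast algorithms must reproduce
    exactly this output with fewer operations."""
    M, N = len(x), len(x[0])
    y = [[0] * N for _ in range(M)]
    for m in range(M):
        for n in range(N):
            s = 0
            for k in range(M):
                for l in range(N):
                    s += x[k][l] * h[(m - k) % M][(n - l) % N]
            y[m][n] = s
    return y
```

The indices wrap modulo the array dimensions, which is what distinguishes cyclic from linear convolution and what makes transform-domain methods applicable.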
20.
Automatic image orientation detection    Total citations: 3 (self: 0, other: 3)
Vailaya A., Zhang H., Changjiang Yang, Feng-I Liu, Jain A.K. IEEE Transactions on Image Processing, 2002, 11(7): 746-755
We present an algorithm for automatic image orientation estimation using a Bayesian learning framework. We demonstrate that a small codebook (whose optimal size is selected using a modified MDL criterion) extracted from a learning vector quantizer (LVQ) can be used to estimate the class-conditional densities of the observed features needed for the Bayesian methodology. We further show how principal component analysis (PCA) and linear discriminant analysis (LDA) can be used as feature extraction mechanisms to remove redundancies in the high-dimensional feature vectors used for classification. The proposed method is compared with four commonly used classifiers: k-nearest neighbor, support vector machine (SVM), a mixture of Gaussians, and the hierarchical discriminating regression (HDR) tree. Experiments on a database of 16 344 images show that our algorithm achieves an accuracy of approximately 98% on the training set and over 97% on an independent test set. A slight further improvement in classification accuracy is achieved by employing classifier combination techniques.
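The codebook-based density estimate can be sketched in miniature as follows. This is a toy illustration with made-up two-dimensional features and a fixed codebook, standing in for the paper's LVQ-trained codebook and MDL size selection:

```python
def nearest(codebook, f):
    """Index of the codeword closest to feature vector f."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(codebook[i], f)))

def train(features_by_class, codebook):
    """A per-class histogram over nearest-codeword indices approximates
    the class-conditional density P(codeword | class)."""
    models = {}
    for cls, feats in features_by_class.items():
        counts = [1e-6] * len(codebook)  # small floor avoids zero densities
        for f in feats:
            counts[nearest(codebook, f)] += 1
        total = sum(counts)
        models[cls] = [c / total for c in counts]
    return models

def classify(models, codebook, f):
    """MAP decision under uniform priors over the orientation classes."""
    i = nearest(codebook, f)
    return max(models, key=lambda cls: models[cls][i])
```

Quantizing features to a small codebook is what makes the density estimate tractable; the MDL criterion in the paper controls how coarse that quantization is allowed to be.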