20 similar records found; search took 546 ms
1.
Legendre orthogonal moments are widely used in pattern recognition and image analysis, but because of their computational cost, fast algorithms for them have not been well developed, and existing methods are limited to binary images. This paper proposes a fast algorithm for the Legendre orthogonal moments of gray-scale images. Using the recurrence relation of the Legendre polynomials, a recursive formula for the one-dimensional Legendre moments is derived. With this relation, the one-dimensional Legendre moment L_p can be obtained from a set of initial values L_1(a), a < p, and L_0(a), a < p − 1, and the two-dimensional Legendre moment L_pq can then be computed with the one-dimensional algorithm. To reduce the complexity further, a fast Systolic-array-based algorithm is used to compute L_1(a) and L_0(a). Compared with the direct method, the fast algorithm greatly reduces the number of multiplications and thus lowers the overall computational complexity.
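The recursive schemes in this line of work rest on the standard three-term Legendre recurrence. The sketch below is an illustration of that building block only, not the paper's Systolic-array algorithm; `legendre_moment_1d` and its midpoint sampling are assumptions made for the example.

```python
def legendre_poly(n, x):
    """Evaluate P_n(x) via the three-term recurrence
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    if n == 0:
        return 1.0
    p_prev, p_curr = 1.0, x
    for k in range(1, n):
        p_prev, p_curr = p_curr, ((2 * k + 1) * x * p_curr - k * p_prev) / (k + 1)
    return p_curr

def legendre_moment_1d(f, p, samples=1000):
    """Approximate the 1-D Legendre moment
    L_p = (2p+1)/2 * integral_{-1}^{1} P_p(x) f(x) dx
    with a midpoint rule over `samples` cells."""
    h = 2.0 / samples
    total = 0.0
    for i in range(samples):
        x = -1.0 + (i + 0.5) * h
        total += legendre_poly(p, x) * f(x)
    return (2 * p + 1) / 2.0 * total * h
```

By orthogonality, feeding f = P_2 back in should give L_2 ≈ 1 and L_0 ≈ 0, which makes a convenient sanity check.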
2.
A novel algorithm that permits the fast and accurate computation of the Legendre image moments is introduced in this paper. The proposed algorithm is based on the block representation of an image and on a new image representation scheme, the Image Slice Representation (ISR) method. The ISR method decomposes a gray-scale image into an expansion of several two-level images of different intensities (slices) and thus enables the partial application of the well-known Image Block Representation (IBR) algorithm to each image component. Moreover, using the resulting set of image blocks, the computation of the Legendre moments can be accelerated through appropriate computation schemes. Extensive experiments show that the proposed methodology is highly efficient in calculating Legendre moments on gray-scale as well as binary images. The newly introduced algorithm is suitable for computing the Legendre moments in pattern recognition and computer vision applications, where the images consist of objects presented in a scene.
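One simple variant of the slice idea can be sketched as follows; the helper names are hypothetical, and the ISR paper's own decomposition (which feeds blocks to IBR) is more elaborate than this minimal per-intensity masking.

```python
def intensity_slices(image):
    """Decompose a gray-scale image (list of lists of ints) into
    two-level 'slices': one binary mask per distinct non-zero intensity."""
    intensities = sorted({v for row in image for v in row if v != 0})
    return {g: [[1 if v == g else 0 for v in row] for row in image]
            for g in intensities}

def reconstruct(slices, height, width):
    """Re-assemble the image as the intensity-weighted sum of its slices."""
    out = [[0] * width for _ in range(height)]
    for g, mask in slices.items():
        for i in range(height):
            for j in range(width):
                out[i][j] += g * mask[i][j]
    return out
```

Because each slice is binary, any fast binary-image moment routine can be applied per slice and the results combined with the intensity weights.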
3.
Chee-Way Chong, P. Raveendran, R. Mukundan 《Pattern recognition》2003,36(3):731-742
This paper presents a comparative analysis of the time taken by existing and proposed methods to compute the Zernike moments Zpq. The existing methods comprise the Direct, Belkasim's, Prata's, Kintner's and Coefficient methods. We propose a new technique, denoted the q-recursive method, specifically for fast computation of Zernike moments. It uses radial polynomials of fixed order p with a varying index q to compute Zernike moments. Fast computation is achieved because it uses polynomials of higher index q to derive the polynomials of lower index q and does not use any factorial terms. Individual orders of moments can be calculated independently without employing lower- or higher-order moments. This is especially useful in cases where only selected orders of Zernike moments are needed as pattern features. The performance of the existing and proposed methods is analyzed experimentally by calculating Zernike moments of orders 0 to p and of a specific order p using binary and grayscale images. In both cases, the q-recursive method takes the shortest time to compute Zernike moments.
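The q-recursive relations themselves are given in the paper; the Direct method it benchmarks against evaluates the radial polynomial straight from its factorial definition, which can be sketched as a baseline like this:

```python
from math import factorial

def zernike_radial(p, q, r):
    """Direct evaluation of the Zernike radial polynomial
    R_pq(r) = sum_{s=0}^{(p-|q|)/2} (-1)^s (p-s)! /
              (s! ((p+|q|)/2 - s)! ((p-|q|)/2 - s)!) * r^(p-2s).
    Requires p >= |q| with p - |q| even; the factorial terms here
    are exactly what the q-recursive method avoids."""
    q = abs(q)
    total = 0.0
    for s in range((p - q) // 2 + 1):
        c = ((-1) ** s * factorial(p - s)
             / (factorial(s)
                * factorial((p + q) // 2 - s)
                * factorial((p - q) // 2 - s)))
        total += c * r ** (p - 2 * s)
    return total
```

Known closed forms such as R_20(r) = 2r² − 1 and R_44(r) = r⁴ make quick correctness checks for any faster replacement.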
4.
Moments constitute a well-known tool in the field of image analysis and pattern recognition, but they suffer from the drawback of high computational cost. Efforts to reduce the required computational complexity have been reported, mainly focused on binary images, but recently some approaches for gray images have also been presented. In this study, we propose a simple but effective approach for the computation of gray image moments. The gray image is decomposed into a set of binary images. Some of these binary images are substituted by an ideal image, called a “half-intensity” image. The remaining binary images are represented using the image block representation concept, and their moments are computed quickly using block techniques. The proposed method computes approximate moment values with an error of 2–3% from the exact values and operates in real time (i.e., at video rate). The procedure is parameterized by the number m of “half-intensity” images used, which controls the approximation error and the speed gain of the method. The computational complexity is O(kL²), where k is the number of blocks and L is the moment order.
5.
Souad Benabdelkader, Mohammed Boulemden 《Pattern recognition》2005,38(8):1289-1294
The fuzzy c-partition entropy approach for threshold selection behaves well in segmenting images, but the size of the search space increases very rapidly with the number of parameters needed to determine the membership function. The computational complexity of the fuzzy 2-partition entropy approach is bounded by O(L³). In this paper, a recursive scheme is proposed which decreases the computational complexity of the basic algorithm to O(L²). The approach does not need the calculation of the membership function, and the processing time of each image is reduced from more than 5 min to less than 20 s.
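The paper's exact recursion is not reproduced in this listing; as a hedged illustration, prefix (cumulative) sums are the standard device behind such O(L³) → O(L²) reductions, letting every histogram-interval quantity inside the threshold search be read off in O(1):

```python
def prefix_sums(hist):
    """Cumulative sums of a histogram so any interval sum
    S(a, b) = sum(hist[a..b]) costs O(1) instead of O(L)."""
    acc, out = 0, [0]
    for h in hist:
        acc += h
        out.append(acc)
    return out  # out[b + 1] - out[a] == sum(hist[a..b])

def interval_sum(ps, a, b):
    """Interval sum over hist[a..b] from the precomputed prefix sums."""
    return ps[b + 1] - ps[a]
```

A threshold search that evaluates all O(L²) parameter pairs then pays O(1) per candidate instead of re-summing the histogram each time.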
6.
7.
Roman M. Palenichka, Marek B. Zaremba 《Pattern recognition》2003,36(7):1519-1528
This paper describes a fast algorithm to compute local axial moments used in the detection of objects of interest in images. The basic idea is the elimination of redundant operations while computing axial moments for two neighboring angles of orientation. The main result is that the complexity of the recursive computation of axial moments becomes independent of the total number of computed moments at a given point, i.e., it is of the order O(N) where N is the size of the data set. This result is of great importance in computer vision since many feature extraction methods rely on the computation of axial moments. The use of this algorithm for fast object skeletonization in images by orthogonal regression fitting is described in detail, with the experimental results confirming the theoretical computational complexity.
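One way to see why per-angle cost can become constant: once the second-order sums Σx², Σy², Σxy of the point set are accumulated in a single O(N) pass, the axial moment about a line through the reference point at any angle θ follows in O(1). This closed-form route is an assumption for illustration, not necessarily the paper's angle-to-angle recursion.

```python
from math import sin, cos

def second_order_sums(points):
    """One O(N) pass accumulating sum(x^2), sum(y^2), sum(x*y)
    for points given relative to the chosen reference (e.g. centroid)."""
    sxx = syy = sxy = 0.0
    for x, y in points:
        sxx += x * x
        syy += y * y
        sxy += x * y
    return sxx, syy, sxy

def axial_moment(sums, theta):
    """Axial moment about the line through the origin at angle theta:
    sum of squared point-to-line distances d = x sin(theta) - y cos(theta)."""
    sxx, syy, sxy = sums
    s, c = sin(theta), cos(theta)
    return s * s * sxx - 2 * s * c * sxy + c * c * syy
```

Sweeping many orientations (as orthogonal regression fitting does) then costs O(1) per angle after the single accumulation pass.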
8.
Khalid M. Hosny 《Pattern recognition letters》2011,32(9):1305-1314
A new method is proposed for fast and low-complexity computation of exact 3D Legendre moments. The proposed method consists of three main steps. In the first step, the symmetry property is employed, reducing the computational complexity by 87%. In the second step, exact values of 3D Legendre moments are obtained by mathematically integrating the Legendre polynomials over digital image voxels. An algorithm is employed to significantly accelerate the computational process; in this algorithm, the equations of 3D Legendre moments are treated in separated form. The proposed method is then applied to determine translation-scale invariance of 3D Legendre moments in a very simple way. Numerical experiments are performed and the results are compared with those of existing methods. Complexity analysis and the results of the numerical experiments confirm the efficiency of the proposed method.
9.
Prosenjit Bose, Vida Dujmović, Ferran Hurtado, Pat Morin 《Computer Vision and Image Understanding》2009,113(10):1027-1038
A binary image I is Ba,Wb-connected, where a, b ∈ {4, 8}, if its foreground is a-connected and its background is b-connected. We consider a local modification of a Ba,Wb-connected image I in which a black pixel can be interchanged with an adjacent white pixel provided that this preserves the connectivity of both the foreground and the background of I. We show that for any (a, b) ∈ {(4, 8), (8, 4), (8, 8)}, any two Ba,Wb-connected images I and J, each with n black pixels, differ by a sequence of Θ(n²) interchanges. We also show that any two B4,W4-connected images I and J, each with n black pixels, differ by a sequence of O(n⁴) interchanges.
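The interchange condition can be checked directly with component counting. The sketch below is a brute-force reference check, not part of the paper's combinatorial argument; the helper names are illustrative.

```python
from collections import deque

def components(img, value, conn):
    """Count connected components of pixels equal to `value`
    under 4- or 8-connectivity, via BFS flood fill."""
    h, w = len(img), len(img[0])
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if conn == 8:
        steps += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    seen, count = set(), 0
    for i in range(h):
        for j in range(w):
            if img[i][j] == value and (i, j) not in seen:
                count += 1
                queue = deque([(i, j)])
                seen.add((i, j))
                while queue:
                    ci, cj = queue.popleft()
                    for di, dj in steps:
                        ni, nj = ci + di, cj + dj
                        if (0 <= ni < h and 0 <= nj < w
                                and img[ni][nj] == value
                                and (ni, nj) not in seen):
                            seen.add((ni, nj))
                            queue.append((ni, nj))
    return count

def valid_interchange(img, a, b, black, white):
    """True if swapping the black pixel `black` with the adjacent white
    pixel `white` keeps the number of a-connected foreground and
    b-connected background components unchanged (img is restored)."""
    before = (components(img, 1, a), components(img, 0, b))
    (bi, bj), (wi, wj) = black, white
    img[bi][bj], img[wi][wj] = 0, 1
    after = (components(img, 1, a), components(img, 0, b))
    img[bi][bj], img[wi][wj] = 1, 0  # restore the original image
    return before == after
```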
10.
F. Dubeau 《Computing》1996,57(4):365-369
From f(x) = x^n − r and a polynomial Q_p(y) = Σ_{i=0}^{p} a_i y^i, we consider Newton's method to solve F_p(x) = Q_p(f(x)) = 0. We obtain convergent iterative methods of order p + 1 to find r^{1/n} for arbitrary p.
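For the simplest member of the family, p = 1 with Q_1(y) = y, the scheme reduces to plain Newton iteration on f(x) = x^n − r, of order p + 1 = 2. A minimal sketch of that base case (the higher-p variants require the specific Q_p coefficients from the paper):

```python
def nth_root(r, n, x0=1.0, tol=1e-12, max_iter=100):
    """Newton's method on f(x) = x**n - r: the p = 1 case
    (Q_1(y) = y) of the family above, quadratically convergent."""
    x = x0
    for _ in range(max_iter):
        fx = x ** n - r
        if abs(fx) < tol:
            break
        x -= fx / (n * x ** (n - 1))  # x <- x - f(x)/f'(x)
    return x
```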
11.
Discrete L_p-approximation, especially for 1 ≤ p ≤ 2, is useful for practical purposes. We describe our experience with an algorithm for its numerical computation.
12.
The Euler quotient modulo an odd prime power p^r (r > 1) can be uniquely decomposed as a p-adic number of the form \(\frac{u^{(p-1)p^{r-1}} - 1}{p^r} \equiv a_0(u) + a_1(u)p + \cdots + a_{r-1}(u)p^{r-1} \pmod{p^r}, \gcd(u, p) = 1,\) where 0 ≤ a_j(u) < p for 0 ≤ j ≤ r − 1, and we set all a_j(u) = 0 if gcd(u, p) > 1. We first study certain arithmetic properties of the level sequences (a_j(u))_{u≥0} over \(\mathbb{F}_p\) by introducing a new quotient. Then we determine the exact values of the linear complexity of (a_j(u))_{u≥0} and the values of the k-error linear complexity for binary sequences defined by (a_j(u))_{u≥0}.
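The digit decomposition itself is directly computable: since φ(p^r) = (p − 1)p^{r−1}, Euler's theorem guarantees p^r divides the numerator, and the quotient mod p^r is determined by u^{φ(p^r)} mod p^{2r}. A sketch (function name is illustrative):

```python
from math import gcd

def euler_quotient_digits(u, p, r):
    """p-adic digits a_0(u), ..., a_{r-1}(u) of the Euler quotient
    (u^((p-1)p^(r-1)) - 1) / p^r  (mod p^r), with all digits set to 0
    when gcd(u, p) > 1."""
    if gcd(u, p) > 1:
        return [0] * r
    e = (p - 1) * p ** (r - 1)           # phi(p^r)
    num = pow(u, e, p ** (2 * r)) - 1    # numerator mod p^(2r) suffices
    q = (num // p ** r) % p ** r         # the Euler quotient mod p^r
    digits = []
    for _ in range(r):                   # base-p expansion, least digit first
        digits.append(q % p)
        q //= p
    return digits
```

For p = 3, r = 2, u = 2: (2^6 − 1)/9 = 7 = 1 + 2·3, giving digits a_0 = 1, a_1 = 2.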
13.
Given a real number sequence A = (a_1, a_2, …, a_n), an average lower bound L, and an average upper bound U, the Average-Constrained Maximum-Sum Segment problem is to locate a segment A(i, j) = (a_i, a_{i+1}, …, a_j) that maximizes ∑_{i≤k≤j} a_k subject to L ≤ (∑_{i≤k≤j} a_k)/(j − i + 1) ≤ U. In this paper, we give an O(n)-time algorithm for the case where the average upper bound is ineffective, i.e., U = ∞. On the other hand, we prove that the time complexity of the problem with an effective average upper bound is Ω(n log n) even if the average lower bound is ineffective, i.e., L = −∞.
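A brute-force O(n²) reference implementation of the problem definition (with U ineffective) is easy to state; it is only a checker for the problem, not the paper's O(n) algorithm, and the function name is illustrative.

```python
def max_avg_constrained_segment(a, lower):
    """Among all segments a[i..j] whose average is >= `lower`
    (U = infinity case), return (best_sum, i, j), or None if no
    segment qualifies.  O(n^2) brute force; the paper does this in O(n)."""
    n = len(a)
    best = None
    for i in range(n):
        seg_sum = 0.0
        for j in range(i, n):
            seg_sum += a[j]                      # sum of a[i..j]
            if seg_sum / (j - i + 1) >= lower:   # average constraint
                if best is None or seg_sum > best[0]:
                    best = (seg_sum, i, j)
    return best
```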
14.
《Intelligent Data Analysis》1998,2(1-4):265-286
The main problem considered in this paper is the binarization of categorical (nominal) attributes having a very large number of values (204 in our application). A small number of relevant binary attributes is derived from each initial attribute. Suppose that we want to binarize a categorical attribute v with L values, where L is large or very large. The total number of binary attributes that can be extracted from v is 2^(L−1) − 1, which for a large L is prohibitive. Our idea is to select only those binary attributes that are predictive; these constitute a small fraction of all possible binary attributes. To do this, the key idea is to group the L values of the categorical attribute by means of a hierarchical clustering method. For that, we need to define a similarity between values, associated with their predictive power. By clustering the L values into a small number of clusters J, we define a new categorical attribute with only J values. The hierarchical clustering method we use, AVL, allows us to choose a significant value for J. Now, we could consider using all the 2^(J−1) − 1 binary attributes associated with this new categorical attribute. Nevertheless, the J values are tree-structured, because we have used a hierarchical clustering method. We profit from this and consider only about 2 × J binary attributes. If L is extremely large, for complexity and statistical reasons, we might not be able to apply a clustering algorithm directly. In this case, we start by “factorizing” v into a pair (v1, v2), each with about √L values. For a simple example, consider an attribute v with only four values m1, m2, m3, m4. Obviously, in this example, there is no need to factorize the set of values of v, because it has a very small number of values.
Nevertheless, for illustration purposes, v could be decomposed (factorized) into two attributes with only two values each; the correspondence between the values of v and (v1, v2) would be
v | v1 | v2
---|---|---
m1 | 1 | 1
m2 | 1 | 2
m3 | 2 | 1
m4 | 2 | 2
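The √L "factorization" above can be sketched by splitting each value's index into two sub-indices of a base-m numbering with m = ⌈√L⌉; this index-splitting scheme is one simple way to realize the decomposition, reproducing the m1..m4 table for L = 4.

```python
from math import ceil, sqrt

def factorize_values(values):
    """Split a categorical attribute with L values into a pair of
    attributes with about sqrt(L) values each: the k-th value (0-based)
    maps to the 1-based pair (k // m + 1, k % m + 1), m = ceil(sqrt(L))."""
    m = ceil(sqrt(len(values)))
    return {v: (k // m + 1, k % m + 1) for k, v in enumerate(values)}
```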