Similar Documents
 20 similar documents found (search time: 15 ms)
1.
The rotation, scaling and translation invariance of image moments is highly significant in image recognition. Legendre moments, a classical family of orthogonal moments, have been widely used in image analysis and recognition. Since Legendre moments are defined in Cartesian coordinates, rotation invariance is difficult to achieve. In this paper, we first derive two types of transformed Legendre polynomials: substituted and weighted radial shifted Legendre polynomials. Based on these two types of polynomials, two radial orthogonal moments, named substituted radial shifted Legendre moments (SRSLMs) and weighted radial shifted Legendre moments (WRSLMs), are proposed. The proposed moments are orthogonal in the polar coordinate domain and can be thought of as generalized and orthogonalized complex moments. They offer better image reconstruction performance, lower information redundancy and higher noise robustness than existing radial orthogonal moments. Finally, a mathematical framework for obtaining the rotation, scaling and translation invariants of these two types of radial shifted Legendre moments is provided. Theoretical and experimental results show the superiority of the proposed methods in terms of image reconstruction capability and invariant recognition accuracy under both noisy and noise-free conditions.
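The classical Cartesian Legendre moments from which the radial shifted variants above depart can be sketched in a few lines. The snippet below is a minimal discrete approximation on the pixel grid mapped onto [-1, 1]^2; the function name and sampling scheme are illustrative, not taken from the paper.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def legendre_moments(img, order):
    """Legendre moments lambda_pq of a grayscale image mapped onto [-1, 1]^2.

    Minimal discrete approximation: the integral is replaced by a sum over
    pixel centres, which is adequate for low orders.
    """
    N, M = img.shape
    x = -1 + (2 * np.arange(N) + 1) / N   # pixel centres in [-1, 1]
    y = -1 + (2 * np.arange(M) + 1) / M
    # Px[p] holds the Legendre polynomial P_p evaluated at the pixel centres
    Px = np.stack([Legendre.basis(p)(x) for p in range(order + 1)])
    Py = np.stack([Legendre.basis(q)(y) for q in range(order + 1)])
    lam = np.zeros((order + 1, order + 1))
    for p in range(order + 1):
        for q in range(order + 1):
            # (2p+1)(2q+1)/4 continuous norm times the dx*dy = (2/N)(2/M) cell
            norm = (2 * p + 1) * (2 * q + 1) / (N * M)
            lam[p, q] = norm * (Px[p] @ img @ Py[q])
    return lam
```

Because the basis is Cartesian, a rotated image scrambles these coefficients, which is exactly the limitation the radial shifted Legendre construction addresses.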

2.
In this paper, we introduce new sets of 2D and 3D rotation, scaling and translation invariants based on orthogonal radial Racah moments, together with the mathematical framework to derive them. First, we propose new 2D radial Racah moments based on a polar representation of an object by one-dimensional orthogonal discrete Racah polynomials on a non-uniform lattice and a circular function. Second, we present new 3D radial Racah moments using a spherical representation of a volumetric image by one-dimensional orthogonal discrete Racah polynomials and a spherical function. Third, 2D and 3D invariants are extracted from the proposed 2D and 3D radial Racah moments, respectively. To validate the proposed approach, we address three problems: 2D/3D image reconstruction; invariance to 2D/3D rotation, scaling and translation; and pattern recognition. Experimental results show that the Racah moments outperform the Krawtchouk moments, both with and without noise. Moreover, the reconstruction converges rapidly to the original image using the 2D and 3D radial Racah moments, and the test 2D/3D images are clearly recognized from sets of images drawn from the COIL-20 database (2D) and the PSB database (3D).
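The construction shared by this and the following radial-moment papers — resample the image on a polar grid, then pair a 1-D radial kernel with a circular Fourier factor — can be sketched generically. In the snippet below, plain monomials r^n stand in for the orthogonal discrete radial polynomials (Racah, Hahn, Meixner, ...), since the rotation-invariance mechanism is the same for any radial kernel; all names and grid sizes are illustrative.

```python
import numpy as np

def radial_moments(img, n_max, m_max, n_r=32, n_theta=64):
    """phi[n, m] = sum over (r, theta) of R_n(r) * f(r, theta) * exp(-j*m*theta),
    with the image resampled (nearest neighbour) on an n_r x n_theta polar grid.
    Monomials r**n stand in for the orthogonal radial polynomials."""
    N, M = img.shape
    cy, cx = (N - 1) / 2, (M - 1) / 2
    r = np.linspace(0.0, 1.0, n_r)
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta, indexing="ij")
    # nearest-neighbour resampling of the image on the polar grid
    ys = np.clip(np.round(cy + R * cy * np.sin(T)).astype(int), 0, N - 1)
    xs = np.clip(np.round(cx + R * cx * np.cos(T)).astype(int), 0, M - 1)
    f_polar = img[ys, xs]
    phi = np.zeros((n_max + 1, m_max + 1), dtype=complex)
    for n in range(n_max + 1):
        for m in range(m_max + 1):
            phi[n, m] = (R ** n * np.exp(-1j * m * T) * f_polar).sum()
    return phi
```

A rotation of the image by alpha multiplies phi[n, m] by exp(-j*m*alpha), so the magnitudes |phi[n, m]| are rotation invariants regardless of which radial polynomial family is used.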

3.
Recently, orthogonal moments have become efficient tools for two-dimensional and three-dimensional (2D and 3D) images, not only in pattern recognition and image vision but also in image processing and engineering applications. Yet deriving 3D rotation invariants remains a major difficulty. In this paper, we propose new sets of 2D and 3D rotation, scaling and translation invariants based on orthogonal radial Hahn moments, together with the mathematical framework to derive them. First, this paper introduces new 2D radial Hahn moments based on a polar representation of an object by one-dimensional orthogonal discrete Hahn polynomials and a circular function. Second, we present new 3D radial Hahn moments using a spherical representation of a volumetric image by one-dimensional orthogonal discrete Hahn polynomials and a spherical function. Third, 2D and 3D invariants are derived from the proposed 2D and 3D radial Hahn moments, respectively. To test the proposed approach, we address three issues: image reconstruction; invariance to rotation, scaling and translation; and pattern recognition. Experimental results show that the Hahn moments outperform the Krawtchouk moments, both with and without noise. Moreover, the reconstruction converges quickly to the original image using the 2D and 3D radial Hahn moments, and the test images are clearly recognized from sets of images drawn from the COIL-20 database (2D) and the Princeton Shape Benchmark (PSB) database (3D).

4.
In this paper, we propose a new set of 2D and 3D rotation invariants based on orthogonal radial Meixner moments, together with the mathematical framework to derive them. First, this paper introduces new 2D radial Meixner moments based on a polar representation of an object by one-dimensional orthogonal discrete Meixner polynomials and a circular function. Second, we present new 3D radial Meixner moments using a spherical representation of a volumetric image by one-dimensional orthogonal discrete Meixner polynomials and a spherical function. Further 2D and 3D rotational invariants are derived from the proposed 2D and 3D radial Meixner moments, respectively. To validate the proposed approach, three issues are addressed: image reconstruction, rotational invariance and pattern recognition. Experimental results show that the Meixner moments outperform the Krawtchouk moments, both with and without noise. Moreover, the reconstructed volumetric image converges quickly to the original using the 2D and 3D radial Meixner moments, and the test images are clearly recognized from a set of images drawn from the PSB database.

5.
Invariance to rotation, scaling and translation is of great importance in 3D image classification and recognition. Tchebichef moments, a classical family of orthogonal moments, have been widely used in image analysis and recognition. Since Tchebichef moments are represented in Cartesian coordinates, rotation invariance is not easy to achieve. In this paper, we propose a new set of 3D rotation, scaling and translation invariants based on radial Tchebichef moments, together with the mathematical framework to derive them. We present new 3D radial Tchebichef moments using a spherical representation of a volumetric image by one-dimensional orthogonal discrete Tchebichef polynomials and a spherical function. They offer better image reconstruction performance, lower information redundancy and higher noise robustness than existing radial orthogonal moments. Finally, a mathematical framework for obtaining the rotation, scaling and translation invariants of these radial Tchebichef moments is provided. Theoretical and experimental results show the superiority of the proposed method in terms of image reconstruction capability and invariant recognition accuracy under both noisy and noise-free conditions. The experiments also show that the Tchebichef moments outperform the Krawtchouk moments, both with and without noise. Moreover, the reconstructed 3D image converges quickly to the original using the 3D radial Tchebichef moments, and the test images are clearly recognized from a set of images drawn from the PSB database.

6.
7.

In this work, we propose new sets of 2D and 3D rotation invariants based on orthogonal radial dual Hahn moments, which are orthogonal on a non-uniform lattice, together with the mathematical framework to derive them. First, this paper presents new 2D radial dual Hahn moments based on a polar representation of an image by one-dimensional orthogonal discrete dual Hahn polynomials and a circular function. The dual Hahn polynomials are a generalization of the Tchebichef and Krawtchouk polynomials. Second, we introduce new 3D radial dual Hahn moments employing a spherical representation of a volumetric image by one-dimensional orthogonal discrete dual Hahn polynomials and a spherical function. The 2D and 3D rotational invariants are then extracted from the proposed 2D and 3D radial dual Hahn moments, respectively. To test the proposed approach, three problems, namely image reconstruction, rotational invariance and pattern recognition, are attempted using the proposed moments. Experimental results show that the radial dual Hahn moments outperform the radial Tchebichef and Krawtchouk moments, both with and without noise. Moreover, the reconstruction converges quickly to the original image using the 2D and 3D radial dual Hahn moments, and the test images are clearly recognized from sets of images drawn from the COIL-20 database (2D) and the PSB database (3D).


8.

Orthogonal moments and their invariants to geometric transformations of gray-scale images are widely used in many pattern recognition and image processing applications. In this paper, we propose a new set of orthogonal polynomials called adapted Gegenbauer–Chebyshev polynomials (AGC). This new set is used as a basis function to define the orthogonal adapted Gegenbauer–Chebyshev moments (AGCMs). The rotation, scaling and translation invariance of the AGCMs is derived and analyzed. We provide a novel series of image feature vectors based on the adapted Gegenbauer–Chebyshev orthogonal moment invariants (AGCMIs). We implement a novel image classification system using the proposed feature vectors and a fuzzy k-means classifier. A series of experiments is performed to validate this new set of orthogonal moments and compare its performance with existing orthogonal moments, such as the Legendre, Gegenbauer and Tchebichef invariant moments, using three different image databases: the MPEG7-CE Shape database, the Columbia Object Image Library (COIL-20) database and the ORL-faces database. The obtained results confirm the superiority of the proposed AGCMs over the existing moments in image representation and recognition.


9.
10.
New Invariant Moments for Non-Uniformly Scaled Images
The usual regular moment functions are only invariant to image translation, rotation and uniform scaling. These moment invariants are no longer invariant when an image is scaled non-uniformly in the x- and y-axis directions. This paper addresses this problem by presenting a new technique to obtain moments that are invariant to non-uniform scaling. However, this technique produces a set of features that are only invariant to translation and uniform/non-uniform scaling. To obtain invariance to rotation, moments are calculated with respect to the x-y axes of the image. To perform this, a neural network is used to estimate the angle of rotation from the x-y axes, and the image is rotated back accordingly. Consequently, we are able to obtain features that are invariant to translation, rotation and uniform/non-uniform scaling. The mathematical background behind the development and invariance of the new moments is presented. The results of experimental studies using English alphabets and Arabic numerals scaled uniformly/non-uniformly, rotated and translated are discussed to further verify the validity of the new moments.
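One standard way to cancel unequal scaling, which can serve as a baseline for the moments described above, is to normalize each central moment by powers of mu_00, mu_20 and mu_02 chosen so that the scale factors cancel: under x -> ax, y -> by, mu_pq picks up a^(p+1) * b^(q+1), and nu_pq = mu_pq * mu_00^((p+q)/2 + 1) / (mu_20^((p+1)/2) * mu_02^((q+1)/2)) is unchanged. The sketch below is this illustrative normalization, not necessarily the exact feature set derived in the paper.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a grayscale image (p along x/columns)."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()

def aniso_invariant(img, p, q):
    """Moment normalised against unequal scaling in x and y.

    Under x -> ax, y -> by every mu picks up matching powers of a and b,
    and the exponents below are chosen so those powers cancel exactly.
    """
    mu00 = central_moment(img, 0, 0)
    mu20 = central_moment(img, 2, 0)
    mu02 = central_moment(img, 0, 2)
    mu = central_moment(img, p, q)
    return (mu * mu00 ** ((p + q) / 2 + 1)
            / (mu20 ** ((p + 1) / 2) * mu02 ** ((q + 1) / 2)))
```

Because central moments are used, the features are translation invariant as well; rotation invariance still needs the separate angle-estimation step the abstract describes.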

11.
This paper addresses bivariate orthogonal polynomials formed as tensor products of two different orthogonal polynomials in one variable. These bivariate orthogonal polynomials are used to define several new types of continuous and discrete orthogonal moments. Some elementary properties of the proposed continuous Chebyshev–Gegenbauer moments (CGM), Gegenbauer–Legendre moments (GLM) and Chebyshev–Legendre moments (CLM), as well as the discrete Tchebichef–Krawtchouk moments (TKM), Tchebichef–Hahn moments (THM) and Krawtchouk–Hahn moments (KHM), are presented. We also detail the application of the corresponding moments to describing noise-free and noisy images. Specifically, the local information of an image can be flexibly emphasized by adjusting the parameters of the bivariate orthogonal polynomials. The global extraction capability is also demonstrated by reconstructing an image using these bivariate polynomials as the kernels of a reversible image transform. Comparisons with known moments are performed, and the results show that the proposed moments are useful in the field of image analysis. Furthermore, the study investigates invariant pattern recognition using three proposed moment invariants that are independent of rotation, scale and translation, and an example is given of using the proposed moment invariants as pattern features in a texture classification application.

12.
Conventional regular moment functions have been proposed as pattern-sensitive features in image classification and recognition applications, but they are only invariant to translation, rotation and equal scaling. It is shown that the conventional regular moment invariants are no longer invariant when the image is scaled unequally in the x- and y-axis directions. We address this problem by presenting a technique that makes the regular moment functions invariant to unequal scaling. However, the technique produces a set of features that are only invariant to translation, unequal/equal scaling and reflection; they are not invariant to rotation. To make them invariant to rotation, moments are calculated with respect to the principal axis of the image, which requires the exact angle of rotation to be known. However, determining this angle from the second-order moments also includes an undesired tilt angle, which differs for different scaling factors in the x- and y-axis directions, so the tilt angle for the particular image must be obtained before the amount of rotation can be determined correctly. To solve this problem, a neural network trained with the back-propagation learning algorithm estimates the tilt angle of the image, from which the amount of rotation is determined. Next, the new moments are derived, and a Fuzzy ARTMAP network is used to classify the images into their respective classes. Sets of experiments involving images rotated and scaled unequally in the x- and y-axis directions are carried out to demonstrate the validity of the proposed technique.

13.
Moment functions defined using a polar coordinate representation of the image space, such as radial moments and Zernike moments, are used in several recognition tasks requiring rotation invariance. However, this coordinate representation does not easily yield translation-invariant functions, which are also widely sought after in pattern recognition applications. This paper presents a mathematical framework for the derivation of translation invariants of radial moments defined in polar form. Using a direct application of this framework, translation-invariant functions of Zernike moments are derived algebraically from the corresponding central moments. The derived functions are developed for both non-symmetrical and symmetrical images, and they mitigate the zero values obtained for odd-order moments of symmetrical images. Vision applications generally resort to image normalization to achieve translation invariance; the proposed method eliminates this requirement by building the translation invariance property into the Zernike feature set itself. The performance of the derived invariant sets is experimentally confirmed using a set of binary Latin and English characters.
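For contrast, the normalization route that the paper's algebraic invariants avoid can be sketched directly: compute Zernike moments about the image centroid, so that |Z_nm| is invariant to both translation and rotation by construction. The function name, sampling and scale convention below are illustrative, not the paper's.

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Zernike moment Z_nm computed about the image centroid (baseline
    normalization approach). Requires n - |m| even and non-negative."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (xs * img).sum() / m00, (ys * img).sum() / m00
    scale = max(img.shape) / 2
    x, y = (xs - xc) / scale, (ys - yc) / scale   # unit-disc coordinates
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    # Zernike radial polynomial R_nm(rho)
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + abs(m)) // 2 - s)
                * factorial((n - abs(m)) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    V = R * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * (img * V * mask).sum() / scale ** 2
```

Shifting the object inside the frame moves the centroid with it, so the moments are unchanged; the paper's contribution is to achieve the same property algebraically, without re-centring the sampling grid.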

14.
A new method for template matching, invariant to image translation, rotation and scaling, is proposed. In the first step of our approach, the ring-projection transform (RPT) converts the 2D template in a circular region into a 1D gray-level signal as a function of radius. The advantages of the RPT are that it is inherently rotation invariant and that it reduces the computational complexity of normalized correlation (NC). Template matching is then performed by constructing a parametric template vector (PTV) of the 1D gray-level signal from differently scaled templates of the object. The merits of our approach are that it achieves not only rotation invariance but also scale invariance. Additionally, it is conceptually simple, easy to implement, and requires only one training image. Experimental results show that the proposed approach is faster and performs better than the parametric template (PT) method and the affine moment invariants (AMIs) method under image rotation or scaling. Moreover, our approach not only achieves a high accuracy rate under changes of translation, rotation and scale, but also estimates the scale of the target object in the input scene. Experiments with Gaussian noise demonstrate that the proposed algorithm robustly detects the target object under changes of translation, orientation and scale, indicating that the approach is suitable for on-line template matching with scene translation, rotation and scaling.
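The ring-projection step can be sketched directly: average the gray levels over concentric rings around the template centre, so that rotating the template merely permutes pixels within each ring. The snippet below is a minimal nearest-ring binning; the function name and binning choices are illustrative rather than taken from the paper.

```python
import numpy as np

def ring_projection(img):
    """Ring-projection transform: collapse a square template to a 1-D signal
    p(r) = mean gray level over the ring of (rounded) radius r around the
    centre. Rotation permutes pixels within each ring, so p(r) is
    (approximately) rotation invariant."""
    n = img.shape[0]
    c = (n - 1) / 2
    y, x = np.mgrid[0:n, 0:n]
    r = np.round(np.hypot(y - c, x - c)).astype(int)  # ring index per pixel
    r_max = int(c)
    return np.array([img[r == k].mean() for k in range(r_max + 1)])
```

Matching then correlates these 1-D signatures (the PTV stage) instead of full 2-D templates, which is where the complexity reduction over plain normalized correlation comes from.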

15.
16.
The idea of Bessel-Fourier moments (BFMs) for image analysis and rotation-only invariant image recognition was proposed recently. In this paper, we extend the previous work and propose a new method for rotation, scaling and translation (RST) invariant texture recognition using Bessel-Fourier moments. Compared with other moment-based methods, the radial polynomials of Bessel-Fourier moments have more zeros, and these zeros are more evenly distributed. This makes Bessel-Fourier moments more suitable for invariant texture recognition as a generalization of orthogonal complex moments. In the experiments, we obtained three test sets of 16, 24 and 54 texture images by translating, rotating and scaling them separately. The correct classification percentages (CCPs) are compared with those of orthogonal Fourier-Mellin moment and Zernike moment based methods under both noise-free and noisy conditions. Experimental results validate the theoretical derivation: BFMs perform better in recognition capability and noise robustness for RST texture recognition, under both noise-free and noisy conditions, than orthogonal Fourier-Mellin moment and Zernike moment based methods.
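The Bessel-Fourier radial kernels are J_v(lambda_n * r), where lambda_n runs over the positive zeros of the Bessel function; the even spread of those zeros is what the abstract credits for the descriptor's quality. A numpy-only sketch (assuming v = 0, with the integral representation standing in for scipy.special) locates the first few kernel frequencies:

```python
import numpy as np

def j0(x):
    """Bessel J0 via the integral representation
    J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt, by the trapezoidal rule."""
    t = np.linspace(0.0, np.pi, 2001)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    f = np.cos(np.outer(x, np.sin(t)))
    dt = t[1] - t[0]
    return (f.sum(axis=1) - 0.5 * (f[:, 0] + f[:, -1])) * dt / np.pi

def j0_zeros(k, step=0.1, hi=50.0):
    """First k positive zeros lambda_n of J0, by sign change + bisection.
    The Bessel-Fourier radial kernels are then J0(lambda_n * r) on [0, 1]."""
    xs = np.arange(step, hi, step)
    ys = j0(xs)
    zeros = []
    for i in np.nonzero(np.sign(ys[:-1]) != np.sign(ys[1:]))[0]:
        a, b = xs[i], xs[i + 1]
        fa = j0(a)[0]
        for _ in range(60):                     # bisection to float precision
            mid = 0.5 * (a + b)
            if np.sign(j0(mid)[0]) == np.sign(fa):
                a, fa = mid, j0(mid)[0]
            else:
                b = mid
        zeros.append(0.5 * (a + b))
        if len(zeros) == k:
            break
    return np.array(zeros)
```

The first three zeros come out near 2.4048, 5.5201 and 8.6537, with near-uniform spacing (approaching pi) further out, which is the "more evenly distributed zeros" property the abstract refers to.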

17.
To make more effective use of color information for the recognition of color face images, a new algorithm based on the quaternion representation of color images is proposed. First, quaternion pseudo-Zernike moments (QPZMs) are defined based on the quaternion representation of color images and quaternion algebra, extending the traditional pseudo-Zernike moments (PZMs), which mainly handle gray-scale images, to color images. Then, quaternion-valued invariants of color face images under rotation, scaling and translation (RST) transformations are constructed from the QPZMs. Finally, these robust invariant features are combined with a quaternion back-propagation neural network (QBPNN) classifier for color face recognition. Experimental results show that, compared with existing quaternion-based algorithms, the proposed algorithm is more robust to variations in expression, illumination and position.

18.
A watermarking algorithm based on Krawtchouk moments is proposed: selected original Krawtchouk moments are modified and the image is reconstructed to obtain the watermarked image. Based on the relationship between Krawtchouk moments and geometric moments, geometric invariant moments with translation, scaling and rotation invariance are used to detect the watermark. Experiments show that, compared with detection using Krawtchouk invariant moments, the algorithm is more robust against geometric attacks such as large-angle rotation and image translation.

19.
Resisting geometric attacks is one of the key problems in robust watermarking research. To achieve watermark synchronization, a coarse-to-fine geometric synchronization algorithm combining Zernike moments and a wavelet-domain template is proposed. First, the rotation and scaling parameters are estimated from the Zernike moments of the translation-normalized image, and the translation parameters are estimated from the centroid shift between the rotation- and scaling-corrected image and the original image. Then, starting from these coarse estimates, precise identification and correction of the rotation, scaling and translation (RST) parameters are achieved by matching the wavelet-domain template; this synchronization method greatly reduces the search space. Watermark embedding and detection use a wavelet-domain vector hidden Markov model (DWT-HMM). Simulation results show that the coarse-to-fine geometric synchronization method combined with the HMM-based watermarking algorithm can effectively resist various single and combined attacks from the StirMark benchmark, and that the algorithm has good robustness.

20.
李宗民, 刘玉杰, 李华. 《软件学报》 (Journal of Software), 2007, 18(Z1): 71-76
A 3D polar-radius surface moment is proposed and applied to 3D model retrieval. The 3D polar-radius moment is an invariant possessing translation, rotation and scaling invariance; extending it to the 3D polar-radius surface moment yields new invariant moments. The method does not require voxelizing 3D models represented by triangle meshes, which improves both computation speed and accuracy. Moreover, the recognition algorithm based on this 3D polar-radius surface moment achieves a good recognition rate.
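Working directly on the triangle mesh, one plausible instantiation of such surface moments (the exact definition in the paper may differ) is M_p = (1/A) * sum_i A_i * r_i^p, with A_i the face areas and r_i the distances of face centroids from the area-weighted mesh centre: centring removes translation, the radius is unchanged by rotation, and dividing by the mean radius removes scale.

```python
import numpy as np

def polar_radius_surface_moments(verts, faces, p_max=4):
    """Area-weighted polar-radius moments of a triangle mesh, computed on the
    faces themselves (no voxelisation). Illustrative definition; invariant to
    translation, rotation and uniform scaling by construction."""
    tri = verts[faces]                                    # (F, 3, 3) corners
    centroids = tri.mean(axis=1)                          # face centroids
    areas = 0.5 * np.linalg.norm(
        np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
    centre = (centroids * areas[:, None]).sum(axis=0) / areas.sum()
    r = np.linalg.norm(centroids - centre, axis=1)        # rotation invariant
    r = r / ((areas * r).sum() / areas.sum())             # mean radius -> 1
    return np.array([(areas * r ** p).sum() / areas.sum()
                     for p in range(p_max + 1)])
```

Because everything is computed from the triangles directly, the cost is linear in the number of faces, which is the speed and accuracy advantage over voxelizing the mesh first.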
