Similar Articles
20 similar records found
1.
A set of orthonormal polynomials is proposed for image reconstruction from projection data. The relationship between the projection moments and image moments is discussed in detail, and some interesting properties are demonstrated. Simulation results are provided to validate the method and to compare its performance with previous work.

2.
3.
The Radon transform and its inverse (a filtered backprojection) are receiving increasing attention for applications in image reconstruction. As data collection capabilities and image reconstruction algorithms have become more sophisticated, the computational intensity of these problems has drastically increased. Parallel processing techniques are being used to implement high-speed hardware designs that will speed up this computationally burdensome task. Parallel arrays of digital signal processing (DSP) chips may be used to compute the Radon transform and back-projection for high-speed image reconstruction. In this paper we describe computation of the Radon transform and back-projection using a parallel pipelined processor architecture of DSP chips, and we evaluate the accuracy of the computations and the quality of the reconstructed images. To justify the computational approach selected, alternative procedures for computation of the Radon transform and back-projection are described, and their performance using the 32-bit fixed-point arithmetic of the selected DSP chips is compared. We present, evaluate, and compare the simulated performances of implementations of these procedures on two fixed-point DSP chips: the TI TMS32020 and the AT&T DSP16.
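As a rough illustration of the transform pair discussed above, here is a minimal NumPy sketch of a discrete Radon transform (rotate-and-sum with nearest-neighbour sampling) and unfiltered backprojection. This is only a floating-point toy, not the paper's fixed-point DSP implementation; the phantom and sampling choices are illustrative.

```python
import numpy as np

def radon(img, angles):
    """Discrete Radon transform: rotate the sampling grid, then sum columns."""
    n = img.shape[0]
    sino = np.zeros((len(angles), n))
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    for k, a in enumerate(angles):
        # nearest-neighbour sampling of the rotated coordinates
        xr = np.cos(a) * xs + np.sin(a) * ys
        yr = -np.sin(a) * xs + np.cos(a) * ys
        xi = np.clip(np.round(xr + (n - 1) / 2.0).astype(int), 0, n - 1)
        yi = np.clip(np.round(yr + (n - 1) / 2.0).astype(int), 0, n - 1)
        sino[k] = img[yi, xi].sum(axis=0)
    return sino

def backproject(sino, angles, n):
    """Unfiltered backprojection: smear each projection back across the image."""
    recon = np.zeros((n, n))
    ys, xs = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    for k, a in enumerate(angles):
        t = np.cos(a) * xs + np.sin(a) * ys      # detector coordinate of each pixel
        ti = np.clip(np.round(t + (n - 1) / 2.0).astype(int), 0, n - 1)
        recon += sino[k, ti]
    return recon / len(angles)

n = 32
phantom = np.zeros((n, n))
phantom[12:20, 12:20] = 1.0                      # centred square phantom
angles = np.linspace(0, np.pi, 45, endpoint=False)
sino = radon(phantom, angles)
recon = backproject(sino, angles, n)
```

Without the ramp filter the reconstruction is blurred, which is exactly the step the filtered-backprojection hardware accelerates.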

4.
This paper proposes a novel full-reference quality assessment (QA) metric that automatically assesses the quality of an image in the discrete orthogonal moments domain. The metric is constructed by representing the spatial information of an image using low-order moments. The computation, up to fourth-order moments, is performed on each individual (8×8) non-overlapping block for both the test and reference images. The computed moments of both the test and reference images are then combined to determine the moment correlation index of each block in each order; nine moment correlation indices are used in this study. Next, the mean of each moment correlation index is computed, and the single quality score of the test image with respect to its reference is obtained by taking the mean of the computed means of all the moment correlation indices. The proposed objective metrics based on two discrete orthogonal moments, Tchebichef and Krawtchouk moments, are developed and their performances evaluated by comparing them with subjective ratings on several publicly available databases. The proposed discrete orthogonal moments based metric performs competitively with state-of-the-art models in terms of quality prediction while outperforming them in terms of computational speed.
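A minimal sketch of the blockwise pipeline described above, under stated assumptions: the orthonormal polynomial basis is obtained by QR (Gram-Schmidt) over the block's sample points, which for 8-point blocks coincides with normalised discrete Tchebichef polynomials; the per-moment similarity formula below is a simple illustrative stand-in for the paper's nine per-order correlation indices, not the published definition.

```python
import numpy as np

def orthonormal_poly_basis(N, order):
    # Gram-Schmidt on 1, x, x^2, ... over the points 0..N-1 yields
    # normalised discrete Tchebichef polynomials as rows.
    x = np.arange(N, dtype=float)
    V = np.vander(x, order + 1, increasing=True)   # columns: x^0 .. x^order
    Q, _ = np.linalg.qr(V)
    return Q.T

def block_moments(block, T):
    return T @ block @ T.T                          # separable 2-D moments

def moment_correlation(ref, tst, order=2, bs=8, eps=1e-9):
    """Mean per-moment similarity over non-overlapping bs x bs blocks.
    With order=2 each block yields a 3x3 grid, i.e. nine moments."""
    T = orthonormal_poly_basis(bs, order)
    scores = []
    for i in range(0, ref.shape[0] - bs + 1, bs):
        for j in range(0, ref.shape[1] - bs + 1, bs):
            mr = block_moments(ref[i:i+bs, j:j+bs], T).ravel()
            mt = block_moments(tst[i:i+bs, j:j+bs], T).ravel()
            # similarity is 1 when the moments match, < 1 otherwise
            scores.append((2*mr*mt + eps) / (mr**2 + mt**2 + eps))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (32, 32))
same = moment_correlation(img, img)
noisy = moment_correlation(img, img + rng.normal(0, 40, img.shape))
```

An identical pair scores 1, and distortion lowers the score, mirroring how the published metric tracks degradation.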

5.
The outputs of cascaded digital filters operating as accumulators are combined with simplified Tchebichef polynomials to form Tchebichef moments (TMs). In this paper, we derive a simplified recurrence relationship for computing Tchebichef polynomials based on Z-transform properties. This paves the way for the implementation of a second-order digital filter to accelerate the computation of the Tchebichef polynomials. Then, some aspects of digital filter design for image reconstruction from TMs are addressed. The proposed digital filter structure for reconstruction is based on the 2D convolution between the digital filter outputs used in the computation of the TMs and the impulse response of the proposed digital filter. These filters operate as difference operators and accordingly act on the transformed image moment sets to reconstruct the original image. Experimental results show that both proposed algorithms, for computing TMs and inverse Tchebichef moments (ITMs), outperform existing methods in terms of computation speed.
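To make the accumulator idea concrete, here is a sketch of the classic filter-based moment trick that this family of methods builds on: a cascade of accumulators (each an all-pole filter y[n] = y[n-1] + x[n]) produces binomial-weighted sums of the input, from which geometric moments (and, by linear combination, polynomial moments) can be recovered. This illustrates the principle only; it is not the paper's second-order Tchebichef filter structure.

```python
import numpy as np

def cascaded_accumulators(signal, order):
    """Pass the signal through (order+1) cascaded accumulators.
    After the whole signal has passed, the output of stage k equals
    sum_x f(x) * C(N-1-x+k, k), a binomial-weighted sum."""
    outs, s = [], np.asarray(signal, dtype=float)
    for _ in range(order + 1):
        s = np.cumsum(s)          # one accumulator stage
        outs.append(s[-1])
    return outs

f = np.array([1.0, 2.0, 3.0, 4.0])
A0, A1 = cascaded_accumulators(f, 1)
N = len(f)
m0 = A0               # zeroth geometric moment: sum f(x)
m1 = N * A0 - A1      # first geometric moment sum x*f(x), recovered from A0, A1
```

The accumulator outputs are cheap (additions only); the moment set is obtained afterwards by a small linear combination, which is what makes a digital-filter formulation attractive on hardware.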

6.
A novel methodology is proposed in this paper to accelerate the computation of discrete orthogonal image moments. The computation scheme is mainly based on a new image representation method, the image slice representation (ISR) method, according to which an image can be expressed as an appropriate combination of several non-overlapping intensity slices. This representation decomposes an image into a number of binary slices of the same size whose pixels take one of two intensities: black or a single other gray-level value. The image can therefore be described in a more compact way. Once the image is partitioned into intensity slices, the computation of the image moments can be accelerated, as the moments can be computed using decoupled computation forms. The proposed algorithm constitutes a unified methodology that can be applied to any discrete moment family in the same way, and it produces similarly promising results, as concluded through a detailed experimental investigation.
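The decoupling rests on linearity: the moment of the image equals the sum, over slices, of each slice's gray level times the moment of its binary mask. A minimal sketch with geometric moments standing in for an arbitrary discrete moment family (the function names are illustrative, not the paper's):

```python
import numpy as np

def intensity_slices(img):
    """ISR-style decomposition: one binary slice per distinct gray level."""
    return {int(g): (img == g).astype(float) for g in np.unique(img)}

def geometric_moment(img, p, q):
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float((x**p * y**q * img).sum())

def moment_via_slices(img, p, q):
    # moment of the image = sum over slices of
    # (gray level) * (moment of the binary slice)
    return sum(g * geometric_moment(s, p, q)
               for g, s in intensity_slices(img).items())

rng = np.random.default_rng(1)
img = rng.integers(0, 8, (16, 16))
direct = geometric_moment(img, 1, 1)
sliced = moment_via_slices(img, 1, 1)
```

The speed-up in practice comes from the fact that moments of binary masks admit fast block-based formulas, so each slice is cheaper than a full gray-level computation.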

7.
The paper concerns a two-stage method of local and global information processing used to extract low-contrast, linear features embedded in SAR images degraded by speckle. The first processing stage, which involves the scaling laws of fractal geometry, identifies regions in images where there is a high probability of a target feature. Such features cause deviations from the background-only statistics; by computing these deviations, the statistics of the target features are automatically separated from those of the background. The result of this processing is then passed, in the form of a binary image, to the second stage, where it is parametrically transformed and a threshold applied to the transform is used to deduce the location of the target feature within the image. The method is robust, automatic, and computationally realistic. It is illustrated using data from a SAR image (C-band, VV polarisation).

8.
This paper presents the discrete wavelet transform (DWT) and its inverse (IDWT) with Haar wavelets as tools to compute variable-size interpolated versions of an image at optimum computational load. As a human observer moves closer to or farther from a scene, the retinal image of the scene zooms in or out, respectively. This zooming in or out can be modeled using variable-scale interpolation. The paper proposes a novel way of applying the DWT and IDWT in a piecewise manner, by non-uniform down- or up-sampling of the images, to obtain partially sampled versions of the images. The partially sampled versions are then aggregated to produce the final variable-scale interpolated images. The non-uniform down- or up-sampling here is a function of the required scale of interpolation. Appropriate zero padding is used to make the images suitable for the required non-uniform sampling and the subsequent interpolation to the required scale. The concept of a zeroth-level DWT is introduced, which serves as the basis for interpolating images to sizes larger than the original. The main emphasis is on computing variable-size images at lower computational load without compromising image quality. The interpolated images at different sizes and the reconstructed images are benchmarked using statistical parameters and visual comparison. The proposed approach has been found to perform better than bilinear and bicubic interpolation techniques.
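A minimal one-level Haar analysis/synthesis pair, plus a 2x upscaling step in the spirit of the "zeroth-level" idea (treat the signal itself as an approximation band and synthesise with zero details). This is a sketch of the Haar machinery only, not the paper's non-uniform-sampling scheme for arbitrary scales.

```python
import numpy as np

def haar_dwt_1d(x):
    """One level of the orthonormal Haar DWT: approximation and detail bands."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def haar_idwt_1d(a, d):
    """Inverse of one Haar level (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

sig = np.array([4.0, 2.0, 6.0, 8.0, 1.0, 3.0, 5.0, 7.0])
a, d = haar_dwt_1d(sig)
rec = haar_idwt_1d(a, d)              # exact reconstruction

# 2x upscaling: feed the signal in as an approximation band with zero details.
# The sqrt(2) factor compensates the orthonormal scaling, giving sample-and-hold
# doubling.
up = haar_idwt_1d(sig * np.sqrt(2.0), np.zeros_like(sig))
```

For Haar, synthesising with zero details reproduces each input sample twice; fractional scales then follow from the paper's non-uniform sampling of such bands.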

9.
Cris L., Michael, Piet W., Lucas J. Pattern Recognition, 2005, 38(12): 2494-2505
The generalized Radon (or Hough) transform is a well-known tool for detecting parameterized shapes in an image. The Radon transform is a mapping between the image space and a parameter space. The coordinates of a point in the latter correspond to the parameters of a shape in the image. The amplitude at that point corresponds to the amount of evidence for that shape. In this paper we discuss three important aspects of the Radon transform. The first aspect is discretization. Using concepts from sampling theory we derive a set of sampling criteria for the generalized Radon transform. The second aspect is accuracy. For the specific case of the Radon transform for spheres, we examine how well the location of the maxima matches the true parameters. We derive a correction term to reduce the bias in the estimated radii. The third aspect concerns a projection-based algorithm to reduce memory requirements.
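A toy sketch of the generalized Radon/Hough transform for the circle case discussed above: every edge pixel votes for all candidate centres at a fixed known radius, and the accumulator maximum estimates the true centre. The grid sizes and angular sampling here are illustrative; the paper's sampling criteria tell you how fine this sampling must actually be.

```python
import numpy as np

def hough_circles(edges, radius):
    """Hough accumulator for circles of a known radius: each edge pixel
    votes along a circle of candidate centres."""
    H = np.zeros_like(edges, dtype=float)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        cy = np.round(y + radius * np.sin(thetas)).astype(int)
        cx = np.round(x + radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < H.shape[0]) & (cx >= 0) & (cx < H.shape[1])
        np.add.at(H, (cy[ok], cx[ok]), 1.0)      # unbuffered accumulation
    return H

# synthetic edge map: circle of radius 8 centred at (16, 16) in a 32x32 image
n, r, c = 32, 8, 16
t = np.linspace(0, 2 * np.pi, 200)
edges = np.zeros((n, n))
edges[np.round(c + r * np.sin(t)).astype(int),
      np.round(c + r * np.cos(t)).astype(int)] = 1
H = hough_circles(edges, r)
peak = np.unravel_index(np.argmax(H), H.shape)
```

Because of discretization, the peak can land a pixel off the true centre; that bias is precisely the kind of effect the paper's correction term addresses for estimated radii.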

10.
This study proposes a novel near-infrared face recognition algorithm based on a combination of local and global features. In this method, local features are extracted from partitioned images by means of the undecimated discrete wavelet transform (UDWT), and global features are extracted from the whole face image by means of Zernike moments (ZMs). Spectral regression discriminant analysis (SRDA) is then used to reduce the dimension of the features. In order to make full use of global and local features and further improve performance, a decision fusion technique using a weighted sum rule is employed. Experiments conducted on the CASIA NIR database and the PolyU-NIRFD database indicate that the proposed method has superior overall performance compared to several other methods in the presence of facial expressions, eyeglasses, head rotation, image noise, and misalignments. Moreover, its computational time is acceptable for online face recognition systems.

11.
A new method for focus measure computation is proposed to reconstruct 3D shape from an image sequence acquired under a varying focus plane. Adaptive histogram equalization is applied to enhance varying contrast across different image regions for better detection of sharp intensity variations. The fast discrete curvelet transform (FDCT) is employed for an enhanced representation of singularities along curves in an input image, followed by noise removal using a bivariate shrinkage scheme based on locally estimated variance. The FDCT coefficients with high activity are exploited to detect high-frequency variations of pixel intensities in the sequence of images. Finally, the focus measure is computed using the neighborhood support of these coefficients to reconstruct the shape and a well-focused image of the scene being probed.
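The shape-from-focus principle above can be sketched independently of the specific transform: compute a per-pixel focus response for each frame, then pick the frame index with the strongest response as the depth label. Below, a sum-modified-Laplacian response is used as a simple stand-in for the paper's curvelet-coefficient activity measure (a labelled assumption, not the published method).

```python
import numpy as np

def focus_measure(img):
    """Sum-modified-Laplacian: absolute second differences in x and y."""
    dxx = np.abs(2 * img[1:-1, :] - img[:-2, :] - img[2:, :])[:, 1:-1]
    dyy = np.abs(2 * img[:, 1:-1] - img[:, :-2] - img[:, 2:])[1:-1, :]
    return dxx + dyy

def depth_from_focus(stack):
    """Per pixel, pick the frame index with the strongest focus response."""
    fm = np.stack([focus_measure(f) for f in stack])
    return np.argmax(fm, axis=0)

# toy stack: frame 1 is sharp (checkerboard), frames 0 and 2 are defocused (flat)
sharp = np.indices((18, 18)).sum(axis=0) % 2 * 1.0
flat = np.full((18, 18), 0.5)
depth = depth_from_focus([flat, sharp, flat])
```

Replacing the response with transform-domain coefficient activity (as in the paper) changes only `focus_measure`; the argmax-over-frames structure stays the same.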

12.
Zernike moments, which are superior to geometric moments because of their special properties of image reconstruction and immunity to noise, suffer from several discretization errors. These errors lead to poor quality of the reconstructed image and wide variations in the numerical values of the moments. The predominant factor, as observed in this paper, is the discrete integer implementation of the steps involved in moment calculation. It is shown that by modifying the algorithms to use a discrete float implementation, the quality of the reconstructed image improves significantly and the first-order moment becomes zero. Low-order Zernike moments have been found to be stable under linear transformations, while the high-order moments show large variations. The large variations in high-order moments, however, do not greatly affect the quality of the reconstructed image, implying that they should be ignored when numerical values of moments are used as features. The 11 functions based on geometric moments have also been found to be stable under linear transformations, and thus these can be used as features. Pixel-level analysis of the images has been carried out to strengthen the results.
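A floating-point sketch of Zernike moment computation over the unit disc, using the standard factorial expansion of the radial polynomial. For a constant image, symmetry makes the first-order moment vanish, which is the behaviour the paper reports for the float implementation; the normalisation and grid mapping below are one common convention, not necessarily the paper's exact discretization.

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m via the standard factorial expansion."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s) /
             (factorial(s) * factorial((n + m) // 2 - s)
              * factorial((n - m) // 2 - s)))
        R = R + c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    """A_nm over the unit disc inscribed in the image, float throughout."""
    N = img.shape[0]
    y, x = (np.mgrid[0:N, 0:N] - (N - 1) / 2.0) / (N / 2.0)
    rho = np.sqrt(x ** 2 + y ** 2)
    mask = rho <= 1.0
    V = zernike_radial(n, m, rho) * np.exp(1j * m * np.arctan2(y, x))
    return (n + 1) / np.pi * (img * np.conj(V) * mask).sum() * (2.0 / N) ** 2

const = np.ones((64, 64))
a00 = zernike_moment(const, 0, 0)   # close to 1 for a unit constant image
a11 = zernike_moment(const, 1, 1)   # vanishes by symmetry on a centred grid
```

The residual error in `a00` is the geometric discretization error of approximating the disc by pixel centres, the kind of error the paper quantifies.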

13.
An investigation of a fault diagnostic technique for internal combustion engines using the discrete wavelet transform (DWT) and a neural network is presented in this paper. Generally, the sound emission signal serves as a promising alternative for condition monitoring and fault diagnosis in rotating machinery when the vibration signal is not available. Most conventional fault diagnosis techniques using sound emission and vibration signals are based on analyzing the signal amplitude in the time or frequency domain. The continuous wavelet transform (CWT) technique was developed to obtain both time-domain and frequency-domain information, but it often requires a long computing time. In the present study, a DWT technique combined with energy-spectrum feature selection and neural network fault classification is proposed to remedy these shortcomings without losing the original properties. The features of the sound emission signal at different resolution levels are extracted by multi-resolution analysis and Parseval's theorem [Gaing, Z. L. (2004). Wavelet-based neural network for power disturbance recognition and classification. IEEE Transactions on Power Delivery, 19, 1560-1568]. Following the algorithm of Daubechies [Daubechies, I. (1988). Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41, 909-996], the "db4", "db8" and "db20" wavelet functions are adopted to perform the proposed DWT technique. These features are then used for fault recognition with a neural network. The experimental results indicate that the proposed system using the sound emission signal is effective and can be used for fault diagnosis under various engine operating conditions.
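The feature-extraction step above can be sketched compactly: decompose the signal into orthonormal wavelet bands and use the relative energy of each band as a feature vector. Parseval's theorem guarantees that the band energies sum to the signal energy, so the relative energies sum to one. The sketch below uses Haar for brevity; the paper uses db4/db8/db20, which only changes the filter taps.

```python
import numpy as np

def haar_level(x):
    """One orthonormal Haar analysis level."""
    return ((x[0::2] + x[1::2]) / np.sqrt(2.0),
            (x[0::2] - x[1::2]) / np.sqrt(2.0))

def wavelet_energy_features(signal, levels):
    """Relative energy of each detail band plus the final approximation band.
    By Parseval (orthonormal transform), the energies sum to the signal energy."""
    feats, a = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_level(a)
        feats.append(float((d ** 2).sum()))
    feats.append(float((a ** 2).sum()))
    total = sum(feats)
    return [f / total for f in feats]

t = np.arange(64)
sig = np.sin(2 * np.pi * t / 4.0)         # tone at a quarter of the sample rate
feats = wavelet_energy_features(sig, 3)   # [d1, d2, d3, a3] relative energies
```

For this tone the energy splits evenly between the first two detail bands under Haar, giving the feature vector [0.5, 0.5, 0, 0]; an engine-fault signature would concentrate energy in the bands characteristic of the fault.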

14.
Ridgelet-based fusion of multispectral and panchromatic images
A rectangular-to-radial array conversion algorithm based on bilinear interpolation is applied to give an implementation of the discrete ridgelet transform, which is then used in a fusion algorithm for multispectral and panchromatic images. The fused results are compared with those of the wavelet transform in terms of sharpness, gray-level variance, and information entropy. The experimental results show that, compared with the wavelet transform, the ridgelet transform handles line and plane singularities better, and the fused images obtained with the ridgelet transform surpass those of the wavelet transform in sharpness and the other measures.

15.
A video speed-measurement algorithm based on the Radon transform
To extract vehicle speed automatically from video surveillance images, a video speed-measurement algorithm based on the Radon transform is proposed. Exploiting the spatio-temporal characteristics of vehicle trajectories, road traffic markings are used to establish a distance mapping between the image and the road, which simplifies the conditions and procedure of on-site calibration. A temporal image-stacking method is used to build the spatio-temporal representation of vehicle trajectories, and Radon-transform-based image processing realizes video speed measurement of moving vehicles. The algorithm is robust and has much lower computational complexity than traditional methods.

16.
Multi-frame image super-resolution (SR) has recently become an active area of research. The orthogonal rotation invariant moments (ORIMs) have several useful characteristics which make them very suitable for multi-frame image super-resolution. Among the various ORIMs, Zernike moments (ZMs) and pseudo-Zernike moments (PZMs)-based SR approaches, i.e., NLM-ZMs and NLM-PZMs, have already shown improved SR performance for multi-frame image super-resolution. However, it is well known that among the many ORIMs, orthogonal Fourier-Mellin moments (OFMMs) demonstrate better noise robustness and image representation capabilities for small images than ZMs and PZMs. Therefore, in this paper, we propose a multi-frame image super-resolution approach using OFMMs. The proposed approach is based on the NLM framework because of its inherent capability of estimating motion implicitly. We refer to this approach as NLM-OFMMs-I. Also, a novel idea of using OFMMs-based interpolation in place of traditional Lanczos interpolation for obtaining an initial estimate of the HR sequence is presented; this variant is referred to as NLM-OFMMs-II. Detailed experimental analysis demonstrates the effectiveness of the proposed OFMMs-based SR approaches in generating high-quality HR images in the presence of factors such as image noise, global motion, local motion, and rotation between the image frames.

17.
18.
The discrete cosine transform (DCT) is a powerful transform for extracting proper features for face recognition. After applying the DCT to entire face images, some of the coefficients are selected to construct feature vectors. Most conventional approaches select coefficients in a zigzag manner or by zonal masking. In some cases, the low-frequency coefficients are discarded in order to compensate for illumination variations. Since the discrimination power of the coefficients is not uniform and some are more discriminant than others, a higher recognition rate can be achieved by using discriminant coefficients (DCs) as feature vectors. Discrimination power analysis (DPA) is a statistical analysis based on the properties of the DCT coefficients and the concept of discrimination; it searches for the coefficients that discriminate between classes better than others. The proposed approach, unlike the conventional ones, is data-dependent and able to find the DCs of each database. Simulation results of various coefficient selection (CS) approaches on the ORL and Yale databases confirm the success of the proposed approach. The DPA-based approaches achieve the performance of PCA/LDA or better, with less complexity. The proposed method can be applied to any feature selection problem, not only DCT coefficients. In addition, new modifications of PCA and LDA, namely DPA-PCA and DPA-LDA, are proposed, in which the DCs selected by DPA are used as the input to these transforms. Simulation results of DPA-PCA and DPA-LDA on the ORL and Yale databases verify the improvement obtained with these modifications.
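A sketch of the DPA idea under a stated assumption: discrimination power is scored here as the ratio of between-class to within-class variance of each 2-D DCT coefficient (a Fisher-style criterion standing in for the paper's exact statistic). On toy data where two classes differ only in one DCT basis pattern, the score peaks at exactly that coefficient.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix."""
    k, n = np.mgrid[0:N, 0:N]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2.0 * N))
    C[0] /= np.sqrt(2.0)
    return C

def discrimination_power(imgs, labels):
    """Per-coefficient between-class / within-class variance ratio."""
    C = dct_matrix(imgs.shape[1])
    coeffs = np.array([(C @ im @ C.T).ravel() for im in imgs])
    classes = np.unique(labels)
    grand = coeffs.mean(axis=0)
    between = sum((coeffs[labels == c].mean(axis=0) - grand) ** 2
                  for c in classes)
    within = sum(coeffs[labels == c].var(axis=0) for c in classes)
    return between / (within + 1e-12)

rng = np.random.default_rng(2)
# two toy classes differing only in one horizontal-frequency pattern (k = 2)
base = rng.normal(0, 1, (8, 8))
pattern = np.cos(np.pi * (2 * np.arange(8) + 1) * 2 / (2 * 8))
A = np.array([base + 0.1 * rng.normal(0, 1, (8, 8)) for _ in range(10)])
B = np.array([base + 2 * pattern[None, :] + 0.1 * rng.normal(0, 1, (8, 8))
              for _ in range(10)])
imgs = np.concatenate([A, B])
labels = np.array([0] * 10 + [1] * 10)
dp = discrimination_power(imgs, labels)
best = int(np.argmax(dp))   # ravel index of DCT coefficient (0, 2)
```

Selecting the top-scoring coefficients as features is the data-dependent step that distinguishes DPA from fixed zigzag or zonal masks.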

19.
Efficient decoding of dual-tone multi-frequency (DTMF) signals can be achieved using the sub-band non-uniform discrete Fourier transform (SB-NDFT). In this paper, the details of its implementation on the ADSP-2192 processor are presented. The decoder's computational complexity and computational speed are compared for different implementations of the SB-NDFT algorithm, with and without optimization for the chosen DSP, the ADSP-2192. The algorithm is tested on the DSP for various types of input signals, and the results are compared with those from Matlab®. Problems with other DTMF decoding algorithms that use the conventional discrete Fourier transform (DFT) and the non-uniform discrete Fourier transform (NDFT) are also addressed.
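For context, DTMF decoding only needs the DFT evaluated at the eight keypad frequencies, which is why single-bin methods beat a full DFT. Here is a sketch using the classic Goertzel recursion for the per-frequency power (a related single-bin technique, not the paper's SB-NDFT):

```python
import math

DTMF_LOW = [697, 770, 852, 941]          # row frequencies (Hz)
DTMF_HIGH = [1209, 1336, 1477, 1633]     # column frequencies (Hz)
KEYS = [['1', '2', '3', 'A'],
        ['4', '5', '6', 'B'],
        ['7', '8', '9', 'C'],
        ['*', '0', '#', 'D']]

def goertzel_power(samples, freq, fs):
    """Squared DFT magnitude at `freq` via the Goertzel recursion."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / fs)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def decode_dtmf(samples, fs):
    """Pick the strongest row and column tone and map to the keypad."""
    lo = max(range(4), key=lambda i: goertzel_power(samples, DTMF_LOW[i], fs))
    hi = max(range(4), key=lambda i: goertzel_power(samples, DTMF_HIGH[i], fs))
    return KEYS[lo][hi]

fs = 8000
n = [k / fs for k in range(400)]         # 50 ms frame
tone5 = [math.sin(2 * math.pi * 770 * t) + math.sin(2 * math.pi * 1336 * t)
         for t in n]
key = decode_dtmf(tone5, fs)
```

Each Goertzel bin costs one multiply-accumulate per sample, which is the kind of per-bin saving that makes such decoders attractive on fixed-point DSPs.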

20.
Compression and encryption technologies are important for efficiently addressing network bandwidth and security issues. A novel scheme, called the Image Compression Encryption Scheme (ICES), is presented. It combines the Haar discrete wavelet transform (DWT), significance-linked connected component analysis (SLCCA), and the Advanced Encryption Standard (AES); this combination allows the ICES to reduce the overall processing time efficiently. This study develops a novel hardware system to compress and encrypt an image in real time using the ICES. The proposed system exploits parallel processing to increase the throughput of the cryptosystem for Internet multimedia applications. With hardware acceleration for encryption and decryption, the DWT, SLCCA, and the AES algorithm are implemented on an FPGA. Using a pipeline structure, a very high data throughput of 330 Mbit/s at a clock frequency of 40 MHz was obtained. The ICES is therefore secure, fast, and suited to high-speed network protocols such as ATM (Asynchronous Transfer Mode) and FDDI (Fiber Distributed Data Interface), and to Internet multimedia applications. Shih-Ching Ou is a senior professor in the Department of Electrical Engineering, National Central University. His research interests include computer-aided design, e-learning systems, and virtual reality. Since August 2004 he has been a professor at Leader University, where he serves as Director of Research and Development and Chairman of the Institute of Applied Information. He has published a number of international journal and conference papers related to these areas, and is currently the head of the Bioinformatics & CAD Laboratory. Hung-Yuan Chung joined the Department of Electrical Engineering at National Central University, Chung-li, Taiwan as an associate professor in August 1987 and was promoted to professor in August 1992.
In addition, he is a registered professional engineer in the R.O.C. and a life member of the CIEE and the CIE. He received the Outstanding Electrical Engineer award of the Chinese Institute of Electrical Engineering in October 2003. His research and teaching interests include system theory and control, adaptive control, fuzzy control, neural network applications, and microcomputer-based control applications. Wen-Tsai Sung is a PhD candidate in the Department of Electrical Engineering, National Central University, Taiwan. His research interests include computer-aided design, web-based learning systems, bioinformatics, and virtual reality. He has published a number of international journal and conference papers related to these areas. He received a BS degree from the Department of Industrial Education, National Taiwan Normal University, Taiwan in 1993 and an MS degree from the Department of Electrical Engineering, National Central University, Taiwan in 2000. He won the Dragon Thesis Award, under which his master's thesis, entitled "Integrated computer graphics system in a virtual environment", was recognized as the most outstanding academic research; the sponsor was the Acer Foundation (Acer Universal Computer Co.). He is currently pursuing his PhD in the Department of Electrical Engineering, National Central University, as a researcher in the Bioinformatics & CAD Laboratory.
