Similar Documents (20 results found)
1.
A set of orthonormal polynomials is proposed for image reconstruction from projection data. The relationship between the projection moments and image moments is discussed in detail, and some interesting properties are demonstrated. Simulation results are provided to validate the method and to compare its performance with previous works.

2.
3.
The Radon transform and its inverse (a filtered backprojection) are receiving increasing attention for applications in image reconstruction. As data collection capabilities and image reconstruction algorithms have become more sophisticated, the computational intensity of these problems has drastically increased. Parallel processing techniques are being used to implement high-speed hardware designs that will speed up this computationally burdensome task. Parallel arrays of digital signal processing (DSP) chips may be used to compute the Radon transform and backprojection for high-speed image reconstruction. In this paper we describe computation of the Radon transform and backprojection using a parallel pipelined processor architecture of DSP chips, and we evaluate the accuracy of the computations and the quality of the reconstructed images. To justify the computational approach selected, alternative procedures for computing the Radon transform and backprojection are described, and their performance using the 32-bit fixed-point arithmetic of the selected DSP chips is compared. We present, evaluate, and compare the simulated performance of implementations of these procedures on two fixed-point DSP chips: the TI TMS32020 and the AT&T DSP16.
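As a floating-point reference for the two procedures, the discrete Radon transform and an unfiltered backprojection can be sketched with plain NumPy (nearest-neighbour sampling; a sketch only, not the fixed-point DSP pipeline evaluated in the paper):

```python
import numpy as np

def radon(image, thetas):
    """Discrete Radon transform: line-integral projections of a square image.
    Nearest-neighbour sampling along each ray (illustrative sketch)."""
    n = image.shape[0]
    c = (n - 1) / 2.0
    sino = np.zeros((len(thetas), n))
    coords = np.arange(n) - c
    for k, th in enumerate(thetas):
        ct, st = np.cos(th), np.sin(th)
        for i, s in enumerate(coords):            # detector position
            # sample points along the ray at signed distance s from the centre
            x = s * ct - coords * st + c
            y = s * st + coords * ct + c
            xi, yi = np.rint(x).astype(int), np.rint(y).astype(int)
            ok = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
            sino[k, i] = image[yi[ok], xi[ok]].sum()
    return sino

def backproject(sino, thetas):
    """Unfiltered backprojection: smear each projection back across the grid."""
    n = sino.shape[1]
    c = (n - 1) / 2.0
    recon = np.zeros((n, n))
    ys, xs = np.mgrid[0:n, 0:n]
    for k, th in enumerate(thetas):
        s = (xs - c) * np.cos(th) + (ys - c) * np.sin(th)
        si = np.clip(np.rint(s + c).astype(int), 0, n - 1)
        recon += sino[k, si]
    return recon / len(thetas)
```

Because no ramp filter is applied, the backprojection blurs point features with the characteristic 1/r halo; a filtered backprojection would sharpen it.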

4.
The outputs of cascaded digital filters operating as accumulators are combined with simplified Tchebichef polynomials to form Tchebichef moments (TMs). In this paper, we derive a simplified recurrence relationship for computing Tchebichef polynomials based on Z-transform properties. This paves the way for a second-order digital filter implementation that accelerates the computation of the Tchebichef polynomials. Then, some aspects of digital filter design for image reconstruction from TMs are addressed. The proposed digital filter structure for reconstruction is based on the 2D convolution between the digital filter outputs used in the computation of the TMs and the impulse response of the proposed digital filter. These filters operate as difference operators and accordingly act on the transformed image moment sets to reconstruct the original image. Experimental results show that both of the proposed algorithms, for computing TMs and inverse Tchebichef moments (ITMs), outperform existing methods in terms of computation speed.
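For context, the scaled discrete Tchebichef polynomials obey a well-known three-term recurrence; a minimal sketch (the conventional recurrence, not the simplified Z-transform-derived filter form the paper proposes):

```python
import numpy as np

def tchebichef_polys(N, max_order):
    """Scaled discrete Tchebichef polynomials t_n(x) for x = 0..N-1,
    generated by the standard three-term recurrence (illustrative sketch)."""
    t = np.zeros((max_order + 1, N))
    x = np.arange(N)
    t[0] = 1.0
    if max_order >= 1:
        t[1] = (2 * x + 1 - N) / N
    for n in range(2, max_order + 1):
        # t_n = [ (2n-1) t_1 t_{n-1} - (n-1)(1 - ((n-1)/N)^2) t_{n-2} ] / n
        t[n] = ((2 * n - 1) * t[1] * t[n - 1]
                - (n - 1) * (1 - ((n - 1) / N) ** 2) * t[n - 2]) / n
    return t
```

Orthogonality over x = 0..N−1 is what makes these polynomials usable as moment kernels; the recurrence avoids evaluating hypergeometric sums per point.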

5.
This paper proposes a novel full-reference quality assessment (QA) metric that automatically assesses the quality of an image in the discrete orthogonal moments domain. The metric is constructed by representing the spatial information of an image using low-order moments. The computation, up to fourth-order moments, is performed on each individual (8×8) non-overlapping block of both the test and reference images. The computed moments of the test and reference images are then combined to determine the moment correlation index of each block at each order; nine moment correlation indices are used in this study. Next, the mean of each moment correlation index is computed, and the single quality score of the test image with respect to its reference is obtained as the mean of these nine means. Objective metrics based on two discrete orthogonal moment families, Tchebichef and Krawtchouk, are developed, and their performance is evaluated against subjective ratings on several publicly available databases. The proposed discrete orthogonal moments based metric performs competitively with state-of-the-art models in terms of quality prediction while outperforming them in terms of computational speed.

6.
A novel methodology is proposed in this paper to accelerate the computation of discrete orthogonal image moments. The computation scheme is based mainly on a new image representation method, the image slice representation (ISR) method, according to which an image can be expressed as the combination of several non-overlapping intensity slices. This representation decomposes an image into a number of binary slices of the same size, each of whose pixels take two intensities: black or one other gray-level value. The image slice representation can therefore be applied to describe the image in a more compact way. Once the image is partitioned into intensity slices, the computation of the image moments can be accelerated, as the moments can be computed using decoupled computation forms. The proposed algorithm constitutes a unified methodology that can be applied to any discrete moment family in the same way and produces similarly promising results, as concluded through a detailed experimental investigation.
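The slice idea can be illustrated with ordinary geometric moments: since the image is a gray-level-weighted sum of binary masks, any linear moment is the same weighted sum of the masks' moments (an illustrative sketch; the paper applies this to discrete orthogonal moment families):

```python
import numpy as np

def moments_via_slices(image, p_max, q_max):
    """Geometric moments M_pq computed by decomposing the image into
    binary intensity slices, in the spirit of the ISR method (sketch)."""
    n_rows, n_cols = image.shape
    yp = np.arange(n_rows, dtype=float)[:, None] ** np.arange(p_max + 1)  # y^p
    xq = np.arange(n_cols, dtype=float)[:, None] ** np.arange(q_max + 1)  # x^q
    M = np.zeros((p_max + 1, q_max + 1))
    for v in np.unique(image):
        if v == 0:
            continue
        mask = (image == v).astype(float)
        # moment of a binary slice: sum of y^p * x^q over its nonzero pixels
        M += v * (yp.T @ mask @ xq)
    return M
```

The per-slice moments involve only additions over binary masks, which is where the decoupled, accelerated forms come from.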

7.
The paper concerns a two-stage method of local and global information processing used to extract low-contrast linear features embedded in SAR images degraded by speckle. The first processing stage, which involves the scaling laws of fractal geometry, identifies regions in images where there is a high probability of a target feature. Such features cause deviations from the background-only statistics, so by computing these deviations the statistics of the target features are automatically separated from those of the background. The result of this processing is then passed, in the form of a binary image, to the second stage, where it is parametrically transformed and a threshold applied to the transform to deduce the location of the target feature within the image. The method is robust, automatic, and computationally realistic. It is illustrated using data from a SAR image (C-band, VV polarisation).

8.
Cris L.  Michael  Piet W.  Lucas J. 《Pattern recognition》2005,38(12):2494-2505
The generalized Radon (or Hough) transform is a well-known tool for detecting parameterized shapes in an image. The Radon transform is a mapping between the image space and a parameter space. The coordinates of a point in the latter correspond to the parameters of a shape in the image. The amplitude at that point corresponds to the amount of evidence for that shape. In this paper we discuss three important aspects of the Radon transform. The first aspect is discretization. Using concepts from sampling theory we derive a set of sampling criteria for the generalized Radon transform. The second aspect is accuracy. For the specific case of the Radon transform for spheres, we examine how well the location of the maxima matches the true parameters. We derive a correction term to reduce the bias in the estimated radii. The third aspect concerns a projection-based algorithm to reduce memory requirements.
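For the simplest parameterized shape, a circle of known radius, the vote-accumulation reading of the transform can be sketched as follows (an illustration of the image-space-to-parameter-space mapping, not the paper's sphere transform or its bias correction):

```python
import numpy as np

def hough_circle(edges, radius):
    """Accumulate evidence for circle centres of a fixed, known radius:
    each edge pixel votes along a circle of that radius in parameter space."""
    acc = np.zeros_like(edges, dtype=float)
    n_rows, n_cols = edges.shape
    thetas = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        cy = np.rint(y + radius * np.sin(thetas)).astype(int)
        cx = np.rint(x + radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < n_rows) & (cx >= 0) & (cx < n_cols)
        np.add.at(acc, (cy[ok], cx[ok]), 1.0)
    return acc
```

The peak of the accumulator is the estimated centre; the discretization of both the vote circle and the accumulator grid is exactly the kind of sampling issue the paper analyses.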

9.
A new method for focus measure computation is proposed to reconstruct 3D shape from an image sequence acquired under a varying focus plane. Adaptive histogram equalization is applied to enhance the varying contrast across different image regions for better detection of sharp intensity variations. The fast discrete curvelet transform (FDCT) is employed for an enhanced representation of singularities along curves in an input image, followed by noise removal using a bivariate shrinkage scheme based on locally estimated variance. The FDCT coefficients with high activity are exploited to detect high-frequency variations of pixel intensities in a sequence of images. Finally, a focus measure is computed using the neighborhood support of these coefficients to reconstruct the shape and a well-focused image of the scene being probed.

10.
Zernike moments, which are superior to geometric moments because of their special image reconstruction properties and immunity to noise, suffer from several discretization errors. These errors lead to poor quality of the reconstructed image and wide variations in the numerical values of the moments. The predominant factor, as observed in this paper, is the discrete integer implementation of the steps involved in moment calculation. It is shown that by modifying the algorithms to use a discrete float implementation, the quality of the reconstructed image improves significantly and the first-order moment becomes zero. Low-order Zernike moments are found to be stable under linear transformations, while the high-order moments show large variations. The large variations in high-order moments, however, do not greatly affect the quality of the reconstructed image, implying that they should be ignored when the numerical values of moments are used as features. The 11 functions based on geometric moments are also found to be stable under linear transformations and thus can be used as features. Pixel-level analysis of the images has been carried out to strengthen the results.
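A float implementation of the moment computation the abstract refers to can be sketched as follows (unit-disc mapping and the standard radial-polynomial formula; a sketch, not the paper's exact algorithm):

```python
import numpy as np
from math import factorial

def zernike_moment(image, n, m):
    """Zernike moment A_nm of a square image mapped onto the unit disc.
    Floating-point throughout, since integer discretization of these
    steps is what the paper identifies as the dominant error source."""
    N = image.shape[0]
    c = (N - 1) / 2.0
    y, x = np.mgrid[0:N, 0:N]
    # map pixel centres into the unit disc (float, not integer, coordinates)
    xn, yn = (x - c) / c, (y - c) / c
    rho = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    inside = rho <= 1.0
    # radial polynomial R_nm(rho), standard factorial formula
    R = np.zeros_like(rho)
    for s in range((n - abs(m)) // 2 + 1):
        coef = ((-1) ** s * factorial(n - s)
                / (factorial(s)
                   * factorial((n + abs(m)) // 2 - s)
                   * factorial((n - abs(m)) // 2 - s)))
        R += coef * rho ** (n - 2 * s)
    V = R * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(image[inside] * V[inside]) * (1.0 / c) ** 2
```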

11.
An investigation of a fault diagnostic technique for internal combustion engines using the discrete wavelet transform (DWT) and a neural network is presented in this paper. The sound emission signal serves as a promising alternative for condition monitoring and fault diagnosis in rotating machinery when the vibration signal is not available. Most conventional fault diagnosis techniques using sound emission and vibration signals are based on analyzing the signal amplitude in the time or frequency domain. The continuous wavelet transform (CWT) technique was developed to obtain both time-domain and frequency-domain information, but it often requires a long computing time. In the present study, a DWT technique combined with energy-spectrum feature selection and neural-network fault classification is proposed to overcome this shortcoming without losing the original properties. The features of the sound emission signal at different resolution levels are extracted by multi-resolution analysis and Parseval's theorem [Gaing, Z. L. (2004). Wavelet-based neural network for power disturbance recognition and classification. IEEE Transactions on Power Delivery, 19, 1560–1568]. The algorithm follows previous work by Daubechies [Daubechies, I. (1988). Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41, 909–996]; the "db4", "db8" and "db20" wavelet functions are adopted to perform the proposed DWT technique. These features are then used for fault recognition using a neural network. The experimental results indicate that the proposed system using the sound emission signal is effective and can be used for fault diagnosis under various engine operating conditions.
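The Parseval-style energy features can be sketched with the simplest wavelet (Haar here for brevity; the paper uses the db4, db8 and db20 functions):

```python
import numpy as np

def haar_dwt_energies(signal, levels):
    """Multi-resolution energy features: relative energy of each Haar
    detail band plus the final approximation band. Parseval's theorem
    guarantees the band energies sum to the signal energy (sketch)."""
    a = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        approx = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # low-pass branch
        detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # high-pass branch
        energies.append(np.sum(detail ** 2))
        a = approx
    energies.append(np.sum(a ** 2))
    total = sum(energies)
    return [e / total for e in energies]
```

The resulting normalized band-energy vector is the kind of compact feature vector that can be fed to a neural-network classifier.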

12.
A video speed-measurement algorithm based on the Radon transform
To extract vehicle speed automatically from video surveillance images, a video speed-measurement algorithm based on the Radon transform is proposed. Exploiting the spatio-temporal characteristics of vehicle trajectories, road traffic markings are used to establish a distance mapping between the image and the road, which simplifies the conditions and procedure of on-site calibration. A temporal image-stacking method establishes the spatio-temporal relationship of vehicle trajectories, and Radon-transform-based image processing then realizes video speed measurement of moving vehicles. The algorithm is robust and has much lower computational complexity than traditional methods.

13.
14.
The discrete cosine transform (DCT) is a powerful transform for extracting features for face recognition. After applying the DCT to an entire face image, some of the coefficients are selected to construct feature vectors. Most conventional approaches select coefficients in a zigzag manner or by zonal masking. In some cases, the low-frequency coefficients are discarded in order to compensate for illumination variations. Since the discrimination power of the coefficients is not uniform, a higher recognition rate can be achieved by using the discriminant coefficients (DCs) as feature vectors. Discrimination power analysis (DPA) is a statistical analysis based on the properties of the DCT coefficients and the concept of discrimination: it searches for the coefficients with the most power to discriminate between classes. The proposed approach, unlike the conventional approaches, is data-dependent and is able to find the DCs of each database. Simulation results of various coefficient selection (CS) approaches on the ORL and Yale databases confirm the success of the proposed approach. The DPA-based approaches achieve the performance of PCA/LDA or better, with less complexity. The proposed method can be applied to any feature selection problem, not only DCT coefficients. In addition, new modifications of PCA and LDA are proposed, namely DPA-PCA and DPA-LDA, in which the DCs selected by DPA are used as the input of these transforms. Simulation results of DPA-PCA and DPA-LDA on the ORL and Yale databases verify the improvement obtained with these modifications.
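The conventional zigzag selection that DPA is compared against can be sketched as follows (a generic baseline, not the DPA method itself):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II via the transform matrix (NumPy only)."""
    N = block.shape[0]
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    C = np.cos(np.pi * (2 * n + 1) * k / (2 * N)) * np.sqrt(2.0 / N)
    C[0, :] = np.sqrt(1.0 / N)
    return C @ block @ C.T

def zigzag_features(image, num_coeffs):
    """Conventional selection: keep the first num_coeffs DCT coefficients
    in JPEG-style zigzag order (DC first, then increasing frequency)."""
    coeffs = dct2(image.astype(float))
    N = coeffs.shape[0]
    order = sorted(((i, j) for i in range(N) for j in range(N)),
                   key=lambda ij: (ij[0] + ij[1],
                                   ij[0] if (ij[0] + ij[1]) % 2 else -ij[0]))
    return np.array([coeffs[i, j] for i, j in order[:num_coeffs]])
```

DPA would instead rank the N×N coefficient positions by a between-class versus within-class statistic estimated from the training data and keep the top-ranked ones.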

15.
Compression and encryption technologies are important for efficiently addressing network bandwidth and security issues. A novel scheme, called the Image Compression Encryption Scheme (ICES), is presented. It combines the Haar Discrete Wavelet Transform (DWT), Significance-Linked Connected Component Analysis (SLCCA), and the Advanced Encryption Standard (AES), and thereby reduces the overall processing time. This study develops a novel hardware system that compresses and encrypts an image in real time using this scheme. The proposed system exploits parallel processing to increase the throughput of the cryptosystem for Internet multimedia applications. Using hardware acceleration for encryption and decryption, the DWT, SLCCA and AES algorithms are implemented on an FPGA. With a pipeline structure, a very high data throughput of 330 Mbit/s at a clock frequency of 40 MHz was obtained. The ICES is therefore secure, fast, and suited to high-speed network protocols such as ATM (Asynchronous Transfer Mode) and FDDI (Fiber Distributed Data Interface), as well as to Internet multimedia applications. Shih-Ching Ou is a senior professor in the Department of Electrical Engineering, National Central University. His research interests include computer-aided design, e-learning systems, and virtual reality. In August 2004 he became a professor at Leader University and its Director of Research and Development; he now also chairs its Institute of Applied Information. He has published a number of international journal and conference papers in these areas and currently heads the Bioinformatics & CAD Laboratory. Hung-Yuan Chung joined the Department of Electrical Engineering at National Central University, Chung-li, Taiwan as an associate professor in August 1987 and was promoted to professor in August 1992.
In addition, he is a registered professional engineer in the R.O.C. and a life member of the CIEE and the CIE. He received the Outstanding Electrical Engineer award of the Chinese Institute of Electrical Engineering in October 2003. His research and teaching interests include system theory and control, adaptive control, fuzzy control, neural network applications, and microcomputer-based control applications. Wen-Tsai Sung is a PhD candidate in the Department of Electrical Engineering, National Central University, Taiwan. His research interests include computer-aided design, web-based learning systems, bioinformatics, and virtual reality. He has published a number of international journal and conference papers in these areas. He received a BS degree from the Department of Industrial Education, National Taiwan Normal University, Taiwan in 1993, and an MS degree from the Department of Electrical Engineering, National Central University, Taiwan in 2000. He won the Dragon Thesis Award, which recognizes the most outstanding master's-degree academic research, for the thesis "Integrated computer graphics system in a virtual environment"; the sponsor was the Acer Foundation (Acer Universal Computer Co.). He is currently a PhD researcher in the Bioinformatics & CAD Laboratory.

16.
Efficient decoding of Dual Tone Multi-Frequency (DTMF) signals can be achieved using the sub-band non-uniform discrete Fourier transform (SB-NDFT). In this paper, the details of its implementation on the ADSP-2192 processor are presented. The decoder's computational complexity and computational speed on this processor are compared for different implementations of the SB-NDFT algorithm, with and without optimization for the chosen DSP. The algorithm is tested for various types of input signals on the DSP, and the results are compared with those from Matlab®. Problems with other DTMF decoding algorithms that use the conventional discrete Fourier transform (DFT) and the non-uniform discrete Fourier transform (NDFT) are also addressed.
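As a conventional-DFT baseline of the kind the paper compares against, single-frequency evaluation at the eight DTMF tones is commonly done with the Goertzel recursion (a standard technique, not the SB-NDFT itself):

```python
import math

DTMF_FREQS = [697, 770, 852, 941, 1209, 1336, 1477, 1633]  # Hz

def goertzel_power(samples, freq, fs):
    """Squared magnitude of the DFT evaluated at one target frequency."""
    w = 2.0 * math.pi * freq / fs
    coeff = 2.0 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def detect_dtmf(samples, fs=8000):
    """Pick the strongest low-group and high-group tone."""
    powers = {f: goertzel_power(samples, f, fs) for f in DTMF_FREQS}
    low = max(DTMF_FREQS[:4], key=powers.get)
    high = max(DTMF_FREQS[4:], key=powers.get)
    return low, high
```

The Goertzel filter needs one multiply per sample per frequency; the SB-NDFT's appeal is reducing this cost further by working on sub-band samples.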

17.
The information in a digital picture may be compressed by summing gray levels over known paths, or rays, of the picture. A projection is an ordered set of ray sums. Sets of projections may be used to store pictures in compressed form. Given a projection set, reconstruction is achieved by operating on the projection set to produce either the original picture or an approximation to it (called a similar picture). The latter is due to the ambiguity of the reconstruction technique inherent in the data compression.

This paper describes the ambiguity problem resulting in picture reconstruction from projection sets. The concept of switching components, a key factor in the ambiguity, is discussed. Switching components are groups of picture points which can be changed in a certain way (therefore changing the original picture to a “similar” picture) without affecting the projection set.

A technique is developed that addresses itself first to unambiguous areas of the picture in reconstruction. This technique is used along with more traditional methods for the doubtful areas.
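A switching component is easy to exhibit: the two 2×2 patterns below differ everywhere yet have identical horizontal and vertical projections, so ray sums alone cannot distinguish the pictures containing them:

```python
import numpy as np

def projections(picture):
    """Horizontal and vertical ray sums (row and column sums) of a picture."""
    return picture.sum(axis=1), picture.sum(axis=0)

# two pictures differing only in a switching component
a = np.array([[1, 0],
              [0, 1]])
b = np.array([[0, 1],
              [1, 0]])
```

Any picture containing such a component can be flipped into a distinct "similar" picture with the same projection set, which is exactly the ambiguity discussed above.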


18.
Masayuki Iizuka 《Displays》1994,15(4):226-240
The comparison between modified and/or compressed digital images and the original digital images, i.e. reconstructed digital images, may be demonstrated simply by means of two-dimensional orthogonal transform techniques and adaptive spatial filters in the spatial-frequency domain. The main purpose of this study is to examine how much the visual appearance of the reconstructed images is affected by two types of parameters: the threshold value and the weighting coefficients of the adaptive spatial filter, which is generally applied in small divided block domains in connection with simplified digital image compression techniques. Moreover, the suitability of 3D visualization and quasi-colour representation techniques is verified as a means of intuitively understanding the discriminating features of reconstructed images. In summary, this study examines the contrast between the visual appearance of difference images and fidelity evaluation of image quality.

19.
When an image is given with only some measurable data, e.g. projections, the most important task is to reconstruct it, i.e. to find an image that reproduces the measured data. Such tomographic problems arise frequently in the theory and applications of image processing. In this paper, memetic algorithms on triangular grids are investigated for the reconstruction of binary images from their three- and six-direction projections. The algorithm generates an initial population using the network-flow algorithm for two of the input projections. The reconstructed images evolve towards an optimal solution, or close to it, by using crossover operators and guided mutation operators. The quality of the images is improved by using switching components and a compactness operator.

20.
The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time–frequency features. The latter was achieved by using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time–frequency methods. ANOVA and t-test were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. The ANOVA followed by a post hoc Bonferroni was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted features means were statistically similar for a small number of features. A separability measure showed that time–frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in separability of linear features between 5- and 0.5-min HRV segments but separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. 
The best result was obtained from features using 5-min HRV segments classified by the LD classifier. A combination of linear/nonlinear features from HRV signals is effective in automatic sleep staging. Moreover, time–frequency features are more informative than others. In addition, a separability measure and classification results showed that HRV signal features, especially nonlinear features, extracted from 5-min segments are more discriminative than those from 0.5-min segments in automatic sleep staging.
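Two of the standard time-domain HRV features mentioned above can be computed directly from a segment of RR intervals (SDNN and RMSSD; illustrative only, since the paper's full feature set also includes nonlinear and EMD/DWT-based measures):

```python
import math

def hrv_time_features(rr_ms):
    """Two classic time-domain HRV features over one segment of RR
    intervals in milliseconds: SDNN (overall variability) and RMSSD
    (short-term, beat-to-beat variability)."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / n)
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd
```

Computed per 5-min or 0.5-min segment, such features form part of the vector fed to the LD/QD classifiers.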
