Similar Documents
20 similar documents retrieved.
1.
In this paper, a new spatio-temporal filtering method for removing noise from image sequences is proposed. This method combines motion compensation and signal decomposition to account for the effects of object motion. Because of object motion, image sequences are temporally nonstationary, which requires the use of adaptive filters. By motion-compensating the sequence prior to filtering, nonstationarities, i.e., parts of the signal that are momentarily not stationary, can be reduced significantly. However, since not all nonstationarities can be accounted for by motion, a motion-compensated signal still contains nonstationarities. An adaptive algorithm based on order statistics is described that decomposes the motion-compensated signal into a noise-free nonstationary part and a noisy stationary part. An RLS filter is then used to filter the noise from the stationary signal. Our new method is experimentally compared with various noise filtering approaches from the literature.
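The core idea, independent of the paper's specific order-statistics decomposition and RLS filter, can be illustrated with a minimal motion-compensated recursive temporal filter. The sketch below is an illustrative assumption rather than the authors' algorithm: it warps the filtered history onto the current frame with dense optical flow and blends the two; `alpha` is a hypothetical smoothing weight.

```python
import cv2
import numpy as np

def mc_temporal_filter(frames, alpha=0.6):
    """Recursive temporal noise filter with motion compensation.

    frames: list of grayscale uint8 frames of identical size.
    alpha:  weight given to the motion-compensated filtered history.
    """
    prev_u8 = frames[0]
    filtered = prev_u8.astype(np.float32)
    out = [prev_u8.copy()]
    h, w = prev_u8.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    for cur_u8 in frames[1:]:
        # Dense flow from the current to the previous frame, used to pull the
        # filtered history into the current frame's coordinates.
        flow = cv2.calcOpticalFlowFarneback(cur_u8, prev_u8, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        map_x = grid_x + flow[..., 0]
        map_y = grid_y + flow[..., 1]
        warped = cv2.remap(filtered, map_x, map_y, cv2.INTER_LINEAR)
        # Recursive blend: well-compensated (stationary) regions get averaged,
        # the rest follows the new observation.
        filtered = alpha * warped + (1.0 - alpha) * cur_u8.astype(np.float32)
        out.append(np.clip(filtered, 0, 255).astype(np.uint8))
        prev_u8 = cur_u8
    return out
```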

2.
Accelerated Hough transform using rectangular image decomposition
A novel fast method for evaluating the Hough transform is proposed, which can be used to accelerate detection of prevalent linear formations in binary images. An image is decomposed into rectangular blocks and the contribution of each whole block to the Hough transform space is evaluated, rather than the contribution of each image point. The resulting acceleration in the calculation of the Hough transform field is demonstrated in two image processing experiments related to object axis identification and skew detection of digitised documents.
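For reference, here is the plain point-by-point Hough voting loop that the rectangular-block decomposition is designed to accelerate; this baseline (not the paper's block method) costs one vote per foreground pixel per discretised angle.

```python
import numpy as np

def hough_lines(binary_img, n_theta=180):
    """Baseline point-wise Hough voting for lines x*cos(t) + y*sin(t) = rho."""
    h, w = binary_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    ys, xs = np.nonzero(binary_img)
    for x, y in zip(xs, ys):
        # Every foreground pixel votes once per discretised angle.
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas
```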

3.
Presents a new algorithm that utilizes mathematical morphology for pyramidal coding of color images. The authors obtain lossy color image compression by using block truncation coding at the pyramid levels to attain reduced data rates. The pyramid approach is attractive due to low computational complexity, simple parallel implementation, and the ability to produce acceptable color images at moderate data rates. In many applications, the progressive transmission capability of the algorithm is very useful. The authors show experimental results for color images at data rates of 1.89 bits/pixel.
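Block truncation coding itself, which the pyramid levels rely on, is compact enough to sketch: each block is replaced by a bit plane and two reconstruction levels chosen to preserve the block's first two sample moments. This is a generic grayscale BTC sketch, not the authors' color pyramid coder.

```python
import numpy as np

def btc_encode_block(block):
    """Moment-preserving BTC of one block: returns (bit_plane, low, high)."""
    m, s = block.mean(), block.std()
    bits = block >= m
    q, n = bits.sum(), block.size
    if q == 0 or q == n:                  # flat block: a single level suffices
        return bits, m, m
    low = m - s * np.sqrt(q / (n - q))
    high = m + s * np.sqrt((n - q) / q)
    return bits, low, high

def btc_decode_block(bits, low, high):
    return np.where(bits, high, low)

def btc_image(img, bs=4):
    """Apply BTC block-wise; image dimensions assumed divisible by bs."""
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(0, img.shape[0], bs):
        for j in range(0, img.shape[1], bs):
            blk = img[i:i + bs, j:j + bs].astype(np.float64)
            out[i:i + bs, j:j + bs] = btc_decode_block(*btc_encode_block(blk))
    return out
```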

4.
The recent explosion in multimedia and networking applications places a great demand on efficient transmission of images at low bit rates with high security. Mixing several existing standard encryption techniques with image encoding tends to change the compression ratio greatly. In this paper, a novel image encryption algorithm is embedded as part of the JPEG image encoding scheme to meet three major requirements: (1) to provide temporal security against a casual observer, (2) to preserve the compression ratio, and (3) to remain compliant with the JPEG file format. In the proposed algorithm, the modified DCT blocks are confused by a fuzzy PN sequence. In addition, the DCT coefficients of each modified DCT block are converted to unique uncorrelated symbols, which are confused by another fuzzy PN sequence. Finally, the variable-length encoded bits are encrypted by a chaotic stream cipher. An amalgamation of all three techniques with a random combination of seeds provides the required security against a casual listener/observer, where the security needed is only on the order of a few hours.
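The final stage, encrypting the entropy-coded bits with a chaotic stream cipher, can be illustrated with a logistic-map keystream XORed onto the bitstream bytes. This is a generic sketch of the idea only: the map parameters and the byte quantisation below are illustrative assumptions, not the paper's cipher, and a bare logistic-map cipher is not secure for real use.

```python
import numpy as np

def logistic_keystream(n_bytes, x0=0.3579, r=3.99):
    """Generate n_bytes of keystream from the logistic map x <- r*x*(1-x)."""
    x = x0
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF      # quantise the chaotic state to a byte
    return out

def xor_encrypt(data: bytes, x0=0.3579, r=3.99) -> bytes:
    ks = logistic_keystream(len(data), x0, r)
    return bytes(b ^ int(k) for b, k in zip(data, ks))

# XOR is its own inverse, so the same call decrypts:
ct = xor_encrypt(b"entropy-coded JPEG bits")
assert xor_encrypt(ct) == b"entropy-coded JPEG bits"
```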

5.
Passive optical velocity measurement using image sequences
Autonomous and covert measurement techniques are of great practical value for obtaining the flight velocity of UAVs and other aircraft. A technique is proposed for measuring aircraft velocity from ground imagery captured by an optical camera. The paper describes the working principle in detail: the aircraft's velocity is computed from the displacement of the same scene between two overlapping images and the time interval between them. The key image matching step is analysed, using the SIFT algorithm to obtain stable feature points that are robust to rotation and scale change. The factors affecting measurement accuracy are analysed theoretically, a method for improving accuracy by using multiple frames is given, and the error sources and influencing factors in the velocity measurement are analysed comprehensively. Theoretical study and analysis show that the optical-image-based velocity estimation scheme is sound and that its accuracy meets the requirements of aircraft velocity measurement.
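In essence, the speed estimate is (pixel displacement × ground sampling distance) / frame interval. A minimal sketch with OpenCV's SIFT and a ratio test is shown below; the known ground sampling distance `gsd_m`, the frame interval `dt_s`, and the use of a median displacement are illustrative assumptions, not the paper's full error analysis or multi-frame scheme.

```python
import cv2
import numpy as np

def estimate_speed(img1, img2, gsd_m, dt_s):
    """Estimate platform speed (m/s) from two overlapping grayscale frames.

    gsd_m: ground sampling distance in metres per pixel (assumed known).
    dt_s:  time between the two exposures in seconds.
    """
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    # Lowe ratio test to keep distinctive correspondences only.
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if not good:
        raise ValueError("no reliable matches between the frames")
    disp = np.array([np.subtract(k2[m.trainIdx].pt, k1[m.queryIdx].pt)
                     for m in good])
    shift_px = np.linalg.norm(np.median(disp, axis=0))   # robust global shift
    return shift_px * gsd_m / dt_s
```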

6.
Demonstrates the utility of the Gabor expansion as a new tool in geophysical research. The Gabor expansion provides good time-frequency (or space-wavenumber) localization and is ideally suited to represent nonstationary processes. The properties of this tool are demonstrated by expanding an FM-chirp waveform and azimuth cuts taken from two different SAR ocean images. The effects of filtering in Gabor phase space are also investigated.

7.
Cho, J.-H.; Kim, S.-D. Electronics Letters, 2004, 40(18): 1109-1110.
An algorithm for object detection in image sequences using spatio-temporal thresholding with a spatio-temporal distance metric is proposed. The distance metric combines intensity and gradient at the feature level rather than at the decision level. In the model update process, a truncated variable adaptation rate is used, which controls the adaptation rate according to the model statistics so that they are maintained properly throughout the sequence. Experimental results in various environments show that the average performance of the proposed algorithm is good.

8.
A morphological subband decomposition with perfect reconstruction is proposed. Critical subsampling is achieved. The reconstructed images using this decomposition do not suffer from any ringing effect. In order to avoid poor texture representation by the morphological filters, an adaptive subband decomposition is introduced that chooses linear filters on textured regions and morphological filters otherwise. A simple and efficient texture detection criterion is proposed and applied to the adaptive decomposition. Comparisons to other coding techniques such as JPEG and linear subband coding show that the proposed scheme performs significantly better in terms of both PSNR and visual quality.
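The filter-switching idea can be sketched in isolation: smooth textured regions with a linear filter and the rest with a morphological open-close. The block-wise standard-deviation criterion and thresholds below are illustrative assumptions, and the sketch does not reproduce the paper's critically subsampled subband decomposition with perfect reconstruction.

```python
import cv2
import numpy as np

def adaptive_smooth(img, block=16, texture_thresh=12.0):
    """Linear filter on textured blocks, morphological open-close elsewhere."""
    img = np.asarray(img, np.uint8)
    linear = cv2.GaussianBlur(img, (5, 5), 1.0)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    morpho = cv2.morphologyEx(cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel),
                              cv2.MORPH_CLOSE, kernel)
    out = np.empty_like(img)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            blk = img[i:i + block, j:j + block]
            textured = blk.std() > texture_thresh   # simple texture criterion
            src = linear if textured else morpho
            out[i:i + block, j:j + block] = src[i:i + block, j:j + block]
    return out
```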

9.
Fractal coding has proved useful for image compression. In fractal coding, an image is represented by a number of self-transformations (fractal code) from which an approximation of the original image can be reconstructed. The authors present a block-constrained fractal coding scheme and a nona-tree decomposition based matching strategy for content-based image retrieval. In the coding scheme, an image is partitioned into non-overlapping blocks with a size close to that of a query iconic image. The fractal code is generated for each block independently. In the similarity measure of the fractal code, an improved nona-tree decomposition scheme is adopted to avoid matching the fractal code globally, in order to reduce computational complexity. The experimental results show that the authors' coding scheme and matching strategy are useful for image retrieval, and compare favourably with two other tested methods in terms of storage usage and computing time.

10.
Using clues from neurobiological adaptation, we have developed the constant-statistics (CS) algorithm for nonuniformity correction of infrared focal plane arrays (IRFPAs) and other imaging arrays. The CS model provides an efficient implementation that can also eliminate much of the ghosting artifact that plagues all scene-based nonuniformity correction (NUC) algorithms. The CS algorithm with deghosting is demonstrated on synthetic and real infrared (IR) sequences and shown to improve the overall accuracy of the correction procedure.
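The constant-statistics assumption is that, over enough frames, every detector element sees the same temporal mean and variance, so each pixel can be corrected with its own running offset (mean) and gain (standard deviation). The sketch below shows this core correction only, as a batch computation and without the deghosting extension.

```python
import numpy as np

def constant_statistics_nuc(frames):
    """Scene-based nonuniformity correction under the constant-statistics model.

    frames: array of shape (T, H, W). Returns corrected frames in which each
    pixel is normalised by its own temporal mean (offset) and std (gain).
    """
    frames = np.asarray(frames, dtype=np.float64)
    offset = frames.mean(axis=0)               # per-pixel offset estimate
    gain = frames.std(axis=0)
    gain[gain < 1e-6] = 1e-6                   # guard dead/constant pixels
    corrected = (frames - offset) / gain
    # Rescale to a common display range using the global statistics.
    return corrected * gain.mean() + offset.mean()
```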

11.
A method is presented for the processing of temporal image sequences to enhance a desired process and suppress an undesired (interfering) process and random noise. Furthermore, the processed information is contained in a single frame which is easily interpreted. The method consists of collecting information about the desired and interfering processes from the frames of the given image sequence. The information is in the form of vectors that characterize the temporal properties of the processes. Matrices are formed by performing outer product expansions on these vectors and an eigenvector matrix is found which will simultaneously diagonalize these matrices. By calculating the inner product of a selected eigenvector from this matrix with the image sequence, an enhanced image of the desired process is obtained. A parameter can be adjusted which will increase the amount of suppression for either random noise or the interfering process. At one limit setting of this parameter, a matched filter for the desired process results, while at the other extreme, very high attenuation of the interfering process will occur. Simulations which demonstrate the effectiveness of this technique are presented along with results obtained by processing a radiographic temporal image sequence.
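One way to read the simultaneous diagonalisation step is as a generalised eigenvalue problem between the outer-product matrix of the desired temporal signature and a weighted sum of the interference matrix and an identity (noise) term. The sketch below is an interpretation under that assumption, not the paper's exact formulation; the signature vectors `s_d`, `s_i` and the trade-off weight `beta` are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def enhancement_vector(s_d, s_i, beta=0.5):
    """Temporal weight vector that enhances the desired process.

    s_d, s_i: temporal signature vectors (length = number of frames) of the
              desired and interfering processes.
    beta:     trade-off in [0, 1); 0 yields a matched filter for s_d, values
              near 1 strongly attenuate the interfering process.
    """
    s_d = np.asarray(s_d, float)
    s_i = np.asarray(s_i, float)
    A = np.outer(s_d, s_d)                          # desired-process matrix
    B = beta * np.outer(s_i, s_i) + (1 - beta) * np.eye(len(s_d))
    w, V = eigh(A, B)                               # generalised eigenproblem
    return V[:, np.argmax(w)]                       # largest-ratio eigenvector

def enhance_sequence(frames, weights):
    """Collapse a (T, H, W) sequence into one enhanced frame."""
    return np.tensordot(weights, np.asarray(frames, float), axes=(0, 0))
```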

12.
No-reference image quality assessment using visual codebooks
The goal of no-reference objective image quality assessment (NR-IQA) is to develop a computational model that can predict the human-perceived quality of distorted images accurately and automatically, without any prior knowledge of reference images. Most existing NR-IQA approaches are distortion specific and are typically limited to one or two specific types of distortions. In most practical applications, however, information about the distortion type is not available. In this paper, we propose a general-purpose NR-IQA approach based on visual codebooks. A visual codebook consisting of Gabor-filter-based local features extracted from local image patches is used to capture complex statistics of a natural image. The codebook encodes statistics by quantizing the feature space and accumulating histograms of patch appearances. This method does not assume any specific type of distortion; however, when evaluating images with a particular type of distortion, it does require examples with the same or similar distortion for training. Experimental results demonstrate that the quality score predicted by our method is consistent with human-perceived image quality. The proposed method is comparable to state-of-the-art general-purpose NR-IQA methods and outperforms the full-reference image quality metrics peak signal-to-noise ratio and structural similarity index on the Laboratory for Image and Video Engineering (LIVE) IQA database.
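The codebook pipeline (quantise local features, histogram codeword occurrences, regress onto quality scores) can be sketched as follows. For brevity the sketch uses raw patch intensities instead of the paper's Gabor-filter-based features, and the patch size, codebook size, and k-means settings are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def patch_features(img, patch=8, step=8):
    """Flattened non-overlapping patches as simple local features
    (a stand-in for the Gabor-filter-based features in the paper)."""
    img = np.asarray(img, float)
    return np.array([img[i:i + patch, j:j + patch].ravel()
                     for i in range(0, img.shape[0] - patch + 1, step)
                     for j in range(0, img.shape[1] - patch + 1, step)])

def build_codebook(train_imgs, n_words=64):
    """Learn a visual codebook by clustering patch features."""
    feats = np.vstack([patch_features(im) for im in train_imgs])
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(feats)

def codeword_histogram(img, codebook):
    """Image signature: normalised histogram of codeword occurrences."""
    words = codebook.predict(patch_features(img))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# A regressor (e.g. SVR) trained on such histograms against subjective
# scores then predicts the quality of unseen images.
```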

13.
No-reference image quality assessment using structural activity
Presuming that human visual perception is highly sensitive to the structural information in a scene, we propose the concept of structural activity (SA), together with a model of an SA indicator, in a new framework for no-reference (NR) image quality assessment (QA). The proposed framework estimates image quality by quantifying SA information of different visual significance. We propose several alternative implementations of the SA indicator as examples to demonstrate the effectiveness of the SA-motivated framework. Comprehensive testing demonstrates that the SA indicator model exhibits satisfactory performance in comparison with subjective quality scores as well as representative full-reference (FR) image quality measures.

14.
Reduced-reference image quality assessment (RR IQA) aims to evaluate the perceptual quality of a distorted image from partial information about the corresponding reference image. In this paper, a novel RR IQA metric based on the moment method is proposed. We observe that the first and second moments of the wavelet coefficients of natural images change in an approximately regular way, that this regularity is disturbed by different types of distortion, and that the disturbance is relevant to human perception of quality. We measure the difference of these statistical parameters between the reference and distorted images to predict the visual quality degradation. The introduced IQA metric is easy to implement and has relatively low computational complexity. Experimental results on the Laboratory for Image and Video Engineering (LIVE) and Tampere Image Database (TID) image databases indicate that the proposed metric has good predictive performance.
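A minimal sketch of the moment-comparison idea with PyWavelets: the reference side transmits only the mean and variance of each detail subband, and the metric sums the discrepancies on the distorted side. The wavelet, the number of levels, and the unweighted summation are illustrative assumptions, not the paper's calibrated metric.

```python
import numpy as np
import pywt

def subband_moments(img, wavelet="db4", levels=3):
    """First and second moments (mean, variance) of each detail subband."""
    coeffs = pywt.wavedec2(np.asarray(img, float), wavelet, level=levels)
    feats = []
    for detail in coeffs[1:]:              # skip the approximation band
        for band in detail:                # (cH, cV, cD) at each level
            feats.append((band.mean(), band.var()))
    return np.array(feats)

def rr_quality_distance(ref_feats, dist_img, wavelet="db4", levels=3):
    """Reduced-reference distance: larger means more perceptual degradation."""
    dist_feats = subband_moments(dist_img, wavelet, levels)
    return float(np.abs(ref_feats - dist_feats).sum())

# Usage: transmit subband_moments(reference) as the reduced reference, then
# score each received image with rr_quality_distance(...).
```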

15.
In this work, we bring together object tracking and digital watermarking to solve the spatio-temporal object adjacency problem in image sequences. Spatio-temporal relationships are established by embedding objects with unique digital watermarks and then by propagating the watermark frame by frame. Watermark propagation is accomplished by an existing object tracking module so that a tracked object acquires its watermark from the correspondences established by the object tracker. The spatio-temporally marked image sequences can then be searched to establish spatial and temporal adjacency among objects without using traditional spatio-temporal graphs. Borrowing from graph theory, we construct binary adjacency matrices among tracked objects and develop interpretation rules to establish a track history for each object. Track history can be used to determine the arrival of new objects in frames or the changing of spatial and temporal positions of objects with respect to each other as they move through time and space.

16.
Contrast is the difference in brightness and color that makes an object distinguishable. Contrast enhancement (CE) is a technique used to improve the visual quality of an image for human recognition. This study proposes a new methodology called high-dimensional model representation (HDMR) for enhancing contrast in digital images. The novelty of HDMR is that the method first decomposes the image into its dimensions, then represents the image using the superposition of decomposed components and finally enhances contrast in the image by adding certain HDMR components to the representation. HDMR has high performance as a CE technique in both grayscale and color images when compared with some state-of-the-art methods.

17.
A digital watermark hiding method based on visual characteristics and wavelet decomposition
This paper proposes a new method for hiding digital watermarks. Instead of the conventional sequence code or bit stream, the watermark is treated as a binary image, and it is hidden by combining a human visual system (HVS) model with a multiscale DWT decomposition of the image. Experiments show that the new method achieves good results both in reducing the visual distortion of the transformed original image and in reducing the distortion of the extracted hidden watermark image, and its robustness is also good. This is a promising new approach to digital watermark hiding.
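The embedding idea, hiding a binary watermark image in wavelet detail coefficients, can be sketched with PyWavelets. The single-level embedding in the diagonal band, the fixed strength `alpha`, and the non-blind extraction below are illustrative simplifications; the paper shapes a multiscale embedding with an HVS model.

```python
import numpy as np
import pywt

def embed_watermark(host, wm_bits, alpha=8.0, wavelet="haar"):
    """Additively embed a binary watermark into the diagonal detail band.

    host:    grayscale host image (H, W), with H and W even.
    wm_bits: binary array with the shape of the cD band, i.e. (H/2, W/2).
    """
    cA, (cH, cV, cD) = pywt.dwt2(np.asarray(host, float), wavelet)
    wm = np.where(np.asarray(wm_bits, float) > 0, 1.0, -1.0)
    marked = pywt.idwt2((cA, (cH, cV, cD + alpha * wm)), wavelet)
    return np.clip(marked, 0, 255)

def extract_watermark(marked, original, wavelet="haar"):
    """Non-blind extraction: compare detail bands of marked and original."""
    _, (_, _, cD_m) = pywt.dwt2(np.asarray(marked, float), wavelet)
    _, (_, _, cD_o) = pywt.dwt2(np.asarray(original, float), wavelet)
    return (cD_m - cD_o) > 0
```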

18.
Image fusion has been receiving increasing attention in the research community across a wide spectrum of applications. Several algorithms in the spatial and frequency domains have been developed for this purpose. In this paper we propose a novel algorithm based on fractional Fourier domains, which are intermediate between the spatial and frequency domains. The proposed image fusion scheme is based on decomposition of the source images (or their transformed versions) into self-fractional Fourier functions. The decomposed images are then fused by a maximum-absolute-value selection rule. The selected images are combined and the inverse transformation is taken to obtain the final fused image. The proposed decomposition scheme, and the use of a transformation before the decomposition step, offer additional degrees of freedom in the image fusion scheme. Simulation results of the proposed scheme for different transformations of the source images, on two different sets of images, are also presented. The simulation results show that applying a transformation before the decomposition step improves the quality of the fused image. In particular, the results of using the fractional Fourier transform and the discrete cosine transform before the decomposition step are encouraging.
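The maximum-absolute-value selection rule at the heart of the scheme is easy to show in isolation. The sketch below applies it in the 2-D DCT domain (one of the transforms the paper considers before decomposition); the self-fractional Fourier decomposition step itself is omitted, so this is only the fusion rule, not the full algorithm.

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse_max_abs(img_a, img_b):
    """Fuse two registered grayscale images of the same size.

    Each DCT coefficient of the fused image is taken from whichever source
    image has the larger absolute coefficient (max-abs selection rule).
    """
    A = dctn(np.asarray(img_a, float), norm="ortho")
    B = dctn(np.asarray(img_b, float), norm="ortho")
    fused_coeffs = np.where(np.abs(A) >= np.abs(B), A, B)
    return idctn(fused_coeffs, norm="ortho")
```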

19.
This paper deals with the image quality assessment (IQA) task using a natural image statistics approach. A reduced-reference IQA (RR IQA) measure based on the bidimensional empirical mode decomposition is introduced. First, we decompose both the reference and distorted images into intrinsic mode functions (IMFs) and use the generalized Gaussian density (GGD) to model the IMF coefficients of the reference image. We then measure the impairment of a distorted image by the fitting error between the IMF coefficient histogram of the distorted image and the estimated IMF coefficient distribution of the reference image, using the Kullback–Leibler divergence (KLD). Furthermore, to predict quality, we propose a new support vector machine (SVM) based classification approach as an alternative to logistic-function-based regression. To validate the proposed measure, three benchmark datasets are used in our experiments. Results demonstrate that the proposed metric compares favorably with alternative solutions for a wide range of degradations encountered in practical situations.
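The impairment measure reduces to a Kullback–Leibler divergence between the coefficient distribution on the distorted side and the modelled distribution on the reference side. A minimal histogram-based KLD is sketched below; the BEMD decomposition and the GGD fit are omitted, and the bin count is an illustrative assumption.

```python
import numpy as np

def kld_from_samples(ref_coeffs, dist_coeffs, bins=128, eps=1e-10):
    """Kullback-Leibler divergence D(dist || ref) from coefficient samples."""
    ref_coeffs = np.asarray(ref_coeffs, float).ravel()
    dist_coeffs = np.asarray(dist_coeffs, float).ravel()
    lo = min(ref_coeffs.min(), dist_coeffs.min())
    hi = max(ref_coeffs.max(), dist_coeffs.max())
    p, _ = np.histogram(dist_coeffs, bins=bins, range=(lo, hi))
    q, _ = np.histogram(ref_coeffs, bins=bins, range=(lo, hi))
    p = p / p.sum() + eps                  # normalise and avoid log(0)
    q = q / q.sum() + eps
    return float(np.sum(p * np.log(p / q)))
```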

20.
In this paper, we present a complete system for the recognition and localization of a three-dimensional (3-D) model from a sequence of monocular images with known motion. The originality of this system is twofold. First, it uses a purely 3-D approach, starting from the 3-D reconstruction of the scene and ending by the 3-D matching of the model. Second, unlike most monocular systems, we do not use token tracking to match successive images. Rather, subpixel contour matching is used to recover more precisely complete 3-D contours. In contrast with the token tracking approaches, which yield a representation of the 3-D scene based on disconnected segments or points, this approach provides us with a denser and higher level representation of the scene. The reconstructed contours are fused along successive images using a simple result derived from the Kalman filter theory. The fusion process increases the localization precision and the robustness of the 3-D reconstruction. Finally, corners are extracted from the 3-D contours. They are used to generate hypotheses of the model position, using a hypothesize-and-verify algorithm that is described in detail. This algorithm yields a robust recognition and precise localization of the model in the scene. Results are presented on infrared image sequences with different resolutions, demonstrating the precision of the localization as well as the robustness and the low computational complexity of the algorithms.
