20 similar documents found.
1.
A method is proposed for correcting eye gaze in self-shot video through three steps: detection and localization of the target eyes; identification and localization of the sclera, iris, and pupil within the eye image; and re-projection of the iris and pupil images. During video self-capture and network video calls, when the user is looking at the display device rather than directly at the capture device, the method produces a live video image on the display in which the user appears to look straight into the camera.
2.
Vijayaraghavan Thirumalai Pascal Frossard 《Journal of Visual Communication and Image Representation》2013,24(6):649-660
This paper addresses the problem of correlation estimation in sets of compressed images. We consider a framework where the images are represented in the form of linear measurements, due to low-complexity sensing or security requirements. We assume that the images are correlated through the displacement of visual objects caused by motion or viewpoint change, and that the correlation is effectively represented by optical-flow or motion-field models. The correlation is estimated in the compressed domain by jointly processing the linear measurements. We first show that the correlated images can be efficiently related using a linear operator. Using this linear relationship, we then describe the dependencies between images in the compressed domain. We further cast a regularized optimization problem, solved with a Graph Cut algorithm, in which the correlation is estimated so as to satisfy both data-consistency and motion-smoothness objectives. We analyze the correlation estimation performance in detail and quantify the penalty due to image compression. Extensive experiments in stereo and video imaging applications show that our solution stays competitive with methods that implement complex image reconstruction steps prior to correlation estimation. Finally, we use the estimated correlation in a novel joint image reconstruction scheme based on an optimization problem with sparsity priors on the reconstructed images. Additional experiments show that our correlation estimation algorithm leads to effective reconstruction of image pairs in distributed image coding schemes, outperforming independent reconstruction algorithms by 2–4 dB.
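As a rough illustration of estimating correlation directly from linear measurements, the sketch below tests candidate displacements of one image against the measurements of the other. The random Gaussian measurement matrix and the global-translation motion model are simplifying assumptions; the paper estimates dense motion fields with a regularized Graph Cut formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: x2 is a globally shifted copy of x1 (a stand-in for the
# motion / viewpoint change that correlates the two images).
n = 32
x1 = rng.random((n, n))
x2 = np.roll(x1, (3, -2), axis=(0, 1))

# Linear measurements of the second image (no image reconstruction anywhere).
m = 200  # m << n*n
Phi = rng.standard_normal((m, n * n))
y2 = Phi @ x2.ravel()

# A shift d acts on x1 as a linear operator, so Phi @ shift_d(x1) can be
# compared with y2 directly in the compressed domain: keep the candidate
# motion with the best data consistency.
best = min(
    ((dy, dx) for dy in range(-4, 5) for dx in range(-4, 5)),
    key=lambda d: np.linalg.norm(y2 - Phi @ np.roll(x1, d, axis=(0, 1)).ravel()),
)
print(best)  # (3, -2)
```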
3.
Synthetic aperture radar (SAR) imagery is an important global all-weather surveillance and mapping satellite imaging system. As space-borne systems have limited storage capacity, SAR images must be compressed heavily, which is practical only with lossy compression schemes. As a result, SAR images need to be enhanced at earth stations. The work reported in this paper addresses the adaptive removal of compression artefacts from SAR images. SAR images compressed with the JPEG utility at significantly low bit rates are enhanced by adaptively removing coding artefacts and speckle noise. As edges carry significant information in satellite imagery, a significant-edge image is used for edge enhancement with selective removal of noisy edges. Further, an image sharpness metric is proposed to serve as an objective no-reference measure of the sharpness of SAR images.
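The abstract does not specify the proposed sharpness metric; the snippet below is a generic no-reference stand-in of the same flavour (mean gradient magnitude), included only to make the idea of an objective, reference-free sharpness score concrete.

```python
import numpy as np

def sharpness_score(img):
    """No-reference sharpness score: mean gradient magnitude over the image.

    A generic stand-in for the (unspecified) metric the paper proposes;
    higher values indicate sharper, less blurred content.
    """
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy).mean())
```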
4.
Noise degrades the performance of any image compression algorithm. However, at very low bit rates, image coders effectively filter out noise that may be present in the image, enabling the coder to operate closer to the noise-free case. Unfortunately, at these low bit rates the quality of the compressed image is reduced and very distinctive coding artifacts occur. This paper proposes a combined restoration of the compressed image from both the artifacts introduced by the coder and the additive noise. The proposed approach is applied to images corrupted by data-dependent Poisson noise and to images corrupted by film-grain noise when compressed using a block transform coder such as JPEG. The approach has proved effective in terms of visual quality and peak signal-to-noise ratio (PSNR) when tested on simulated and real images.
5.
In this paper, we propose a novel learning-based image restoration scheme for compressed images that suppresses compression artifacts and recovers high-frequency (HF) components using priors learnt from a training set of natural images. The JPEG compression process is simulated by a degradation model comprising signal attenuation and additive Gaussian noise. Based on this degradation model, the input image is first locally filtered to remove the Gaussian noise. Subsequently, the learning-based restoration algorithm reproduces the HF component to compensate for the attenuation: a Markov-chain-based mapping strategy generates HF primitives from the learnt codebook. Finally, a quantization constraint algorithm keeps the reconstructed image coefficients within a valid range, preventing over-smoothing and thus improving image quality. Experimental results demonstrate that the proposed scheme produces higher-quality images in terms of both objective and subjective quality.
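The quantization constraint step is the most mechanical part of such pipelines: each restored DCT coefficient must stay inside the quantization cell implied by the transmitted index. A minimal sketch, assuming 8×8 JPEG-style blocks and a known quantization table; the function name and the orthonormal-DCT convention are illustrative choices, not the paper's exact procedure.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantization_constraint(restored_block, quant_indices, q):
    """Project one restored 8x8 block back onto its JPEG quantization cell.

    Each original DCT coefficient c satisfied k = round(c / q), so c lies in
    [q*(k - 0.5), q*(k + 0.5)]; clipping the restored coefficients to that
    interval is a convex projection that keeps the restoration consistent
    with the transmitted data. quant_indices holds the decoded integer
    indices k; q is the 8x8 quantization table.
    """
    coeffs = dctn(restored_block, norm="ortho")
    lo = q * (quant_indices - 0.5)
    hi = q * (quant_indices + 0.5)
    return idctn(np.clip(coeffs, lo, hi), norm="ortho")
```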
6.
Regularised restoration of vector quantisation compressed images
Choy S.S.O. Chan Y.-H. Siu W.-C. 《IEE Proceedings - Vision, Image and Signal Processing》1999,146(3):165-171
The authors study the application of image restoration technology to improving the coding performance of a vector quantisation (VQ) image compression codec. Restoration of VQ-compressed images is rarely addressed in the literature, and direct application of existing restoration techniques is generally inadequate for this problem. A restoration algorithm specific to VQ-compressed images is proposed that makes good use of the codebook to derive a priori information for restoration. The proposed algorithm is shown to improve the quality of a VQ-compressed image to a much greater extent than existing restoration approaches. As no information other than the codebook is required to carry out the restoration, no transmission overhead is incurred; the algorithm is therefore fully compatible with any VQ codec when used to improve coding performance.
7.
Compressed sensing is widely applied for the compression and reconstruction of images and videos by projecting the pixel values onto lower-dimensional measurements. The signal is reconstructed from these measurements at the receiver using various recovery procedures. Greedy algorithms are often used for such recovery; they repeatedly solve a least-squares problem to find the best match with minimum error, a time-consuming and complex process that creates a trade-off between reconstruction quality and algorithmic performance. This work proposes a non-iterative method, viz., the non-iterative pseudo-inverse-based recovery algorithm (NIPIRA), for reconstructing compressively sensed images and videos with low complexity and run time while maintaining reconstruction quality. NIPIRA gives a minimum PSNR of 32 dB for very few measurements (M/N = 0.3125) and an accuracy above 97%. Elapsed time decreases by more than 92% compared with iterative algorithms. NIPIRA is also tested against many other objective measures, and its complexity is s times lower than that of existing recovery algorithms.
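The core non-iterative step is a single pseudo-inverse multiplication, in contrast to the repeated least-squares solves of greedy recovery. A minimal sketch under that reading; the full NIPIRA pipeline is not described in the abstract, so only the generic one-shot step is shown.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 80                      # M/N = 0.3125, as in the abstract
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

x = np.zeros(n)                     # a sparse test signal
x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
y = Phi @ x                         # compressive measurements

# One-shot recovery: pinv(Phi) @ y is the minimum-l2-norm solution of
# y = Phi x. There is no iterative residual matching as in greedy solvers,
# which is where the speed advantage comes from; the result matches the
# measurements exactly but is not by itself a sparse estimate.
x_hat = np.linalg.pinv(Phi) @ y
print(np.linalg.norm(y - Phi @ x_hat))  # ~0
```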
8.
Projection-based spatially adaptive reconstruction of block-transform compressed images
Yongyi Yang Galatsanos N.P. Katsaggelos A.K. 《IEEE transactions on image processing》1995,4(7):896-908
At the present time, block-transform coding is probably the most popular approach to image compression, and compressed images are normally decoded using only the transmitted transform data. We formulate image decoding as an image recovery problem: the decoded image is reconstructed using not only the transmitted data but also the prior knowledge that images before compression do not display between-block discontinuities. A spatially adaptive image recovery algorithm is proposed based on the theory of projections onto convex sets (POCS). Apart from the data constraint set, this algorithm uses a new constraint set that enforces between-block smoothness. The novelty of this set is that it captures both the local statistical properties of the image and human perceptual characteristics. A simplified spatially adaptive recovery algorithm is also proposed, together with an analysis of its computational complexity. Numerical experiments demonstrate that the proposed algorithms work better than both the JPEG deblocking recommendation and our previous projection-based image decoding approach.
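The POCS machinery reduces to alternating projections onto the constraint sets. The skeleton below is a simplified rendering, assuming 8×8 blocks and using plain boundary averaging in place of the paper's spatially adaptive smoothness set; `project_data` stands for the projection onto the transmitted-coefficient constraint.

```python
import numpy as np

def pocs_decode(x0, project_data, n_iter=20):
    """Alternate projections onto two constraint sets (simplified POCS loop).

    project_data enforces consistency with the transmitted transform data
    (e.g. the quantization-cell projection sketched under entry 5); the
    in-line step below relaxes jumps across 8x8 block boundaries, a crude
    stand-in for the paper's spatially adaptive smoothness set.
    """
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        x = project_data(x)
        for k in range(8, x.shape[0], 8):      # horizontal boundaries
            mean = 0.5 * (x[k - 1, :] + x[k, :])
            x[k - 1, :] = 0.5 * (x[k - 1, :] + mean)
            x[k, :] = 0.5 * (x[k, :] + mean)
        for k in range(8, x.shape[1], 8):      # vertical boundaries
            mean = 0.5 * (x[:, k - 1] + x[:, k])
            x[:, k - 1] = 0.5 * (x[:, k - 1] + mean)
            x[:, k] = 0.5 * (x[:, k] + mean)
    return x
```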
9.
Wenbo Xu Yun Tian Jiaru Lin 《AEUE-International Journal of Electronics and Communications》2013,67(2):98-101
Recently, a segmented analog-to-information conversion (S-AIC) structure that can obtain more samples than the number of branches of mixers and integrators (BMIs) was proposed by Taheri and Vorobyov. To reduce the complexity of S-AIC, in this paper we propose a partial segmented AIC (PS-AIC) structure, in which the BMIs are divided into groups and each group works only within its own partial period, non-overlapping in time. The proposed PS-AIC offers an attractive trade-off between complexity and error performance. We also prove that the equivalent measurement matrix of PS-AIC satisfies the restricted isometry property. Simulations verify the effectiveness of PS-AIC.
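Since each group of BMIs integrates only over its own non-overlapping time segment, the equivalent measurement matrix is block-diagonal across segments. The construction below is a schematic guess at that structure; the ±1 mixing sequences and the even split of rows and columns are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def ps_aic_matrix(n_bmi, n_groups, n, rng):
    """Equivalent measurement matrix of a partially segmented AIC (schematic).

    The BMIs are split into n_groups groups; each group mixes and integrates
    only over its own non-overlapping time segment, so the matrix is
    block-diagonal across segments.
    """
    seg = n // n_groups
    rows_per_group = n_bmi // n_groups
    Phi = np.zeros((n_bmi, n))
    for g in range(n_groups):
        r = slice(g * rows_per_group, (g + 1) * rows_per_group)
        c = slice(g * seg, (g + 1) * seg)
        Phi[r, c] = rng.choice([-1.0, 1.0], size=(rows_per_group, seg))
    return Phi
```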
10.
《Signal Processing: Image Communication》2009,24(10):814-824
Recent developments in video coding technology have brought new possibilities for utilising features inherently embedded in the encoded bit-stream for applications such as video adaptation and analysis. Due to the proliferation of surveillance video, there is strong demand for highly efficient and reliable object tracking algorithms. This paper presents a new approach to fast compressed-domain analysis that utilises motion data from the encoded bit-streams to achieve low-complexity object tracking in surveillance videos. The algorithm estimates the trajectories of video objects using compressed-domain motion vectors extracted directly from standard H.264/MPEG-4 Advanced Video Coding (AVC) and Scalable Video Coding (SVC) bit-streams. The experimental results show tracking precision comparable to standard uncompressed-domain algorithms, while maintaining low computational complexity and fast processing, making the algorithm suitable for real-time and streaming applications where good estimates of object trajectories must be computed quickly.
11.
In the field of video protection, selective encryption (SE) is a scheme that ensures the visual security of a video by encrypting only a small part of the data. This paper presents a new SE algorithm for H.264/AVC videos in context-adaptive variable-length coding (CAVLC) mode. The algorithm controls the number of encrypted AC coefficients (ACs) of the integer transform in the entropy encoder. Two visual quality measures, the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM), are used to measure the visual confidentiality level of each video frame and to control the number of encrypted ACs. Moreover, a new psychovisual metric for measuring flicker is introduced, the so-called temporal structural similarity (TSSIM). The method can be applied to intra- and inter-frame video sequences. Several experimental results show the efficiency of the proposed method.
12.
Capacity estimates for data hiding in compressed images
We present an information-theoretic approach to estimating the number of bits that can be hidden in still images, that is, the capacity of the data-hiding channel. We show how adding the message signal or signature in a suitable transform domain, rather than the spatial domain, can significantly increase the channel capacity. Most state-of-the-art data-hiding schemes embed bits in some transform domain, as it has always been implicitly understood that a decomposition would help; yet, although most methods reported in the literature use DCT or wavelet decompositions, the choice of transform is not obvious. We compare the achievable data-hiding capacities of several decompositions (DCT, DFT, Hadamard, and subband transforms) and show that the magnitude-DFT decomposition performs best among those compared.
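The information-theoretic backbone of such estimates is the additive-noise channel capacity, C = ½ log2(1 + S/N) bits per coefficient, summed over the transform coefficients. The sketch below applies this in the DFT domain; tying the signature power to a fixed fraction of the host coefficient power is a masking assumption made here for illustration, not the paper's model.

```python
import numpy as np

def capacity_estimate_bits(image, noise_var, alpha=0.01):
    """Back-of-the-envelope data-hiding capacity in the magnitude-DFT domain.

    Each coefficient is treated as an independent additive-Gaussian channel
    of capacity 0.5 * log2(1 + S/N). Setting the signature power S to a
    fraction alpha of the host coefficient power is a perceptual-masking
    assumption made for this sketch.
    """
    F = np.fft.fft2(image.astype(float))
    host_power = np.abs(F) ** 2 / image.size
    per_coeff = 0.5 * np.log2(1.0 + alpha * host_power / noise_var)
    return float(per_coeff.sum())
```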
13.
Adaptive data hiding based on VQ compressed images
Data hiding embeds secret data into various forms of digital media such as text, image, audio, and video. With the rapid growth of network communication, data-hiding techniques are widely used for copyright protection, caption embedding, and covert communication. The authors propose an adaptive algorithm for embedding data into VQ-compressed images that varies the embedding process according to the amount of hidden data. The proposed method provides more effective hiding and higher-quality images than conventional methods. Results of experimental comparisons are also presented.
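A common baseline for hiding bits in a VQ bit-stream is index partitioning: the codebook is split into two halves and the encoder's codeword choice is constrained by the bit to embed. A minimal sketch of that baseline; the partition-by-parity rule is an illustrative choice, not the authors' adaptive algorithm.

```python
import numpy as np

def vq_embed(blocks, codebook, bits):
    """Hide one bit per image block by constraining the VQ codeword choice.

    The codebook is split into two halves by index parity; to embed bit b,
    the nearest codeword is searched only among codewords whose index parity
    equals b. The decoder extracts bits simply as index % 2.
    """
    idx = np.arange(len(codebook))
    out = []
    for block, b in zip(blocks, bits):
        cand = idx[idx % 2 == b]
        d = ((codebook[cand] - block) ** 2).sum(axis=1)
        out.append(cand[np.argmin(d)])
    return np.array(out)
```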
14.
Spatially adaptive high-resolution image reconstruction of DCT-based compressed images
Sung Cheol Park Moon Gi Kang Segall C.A. Katsaggelos A.K. 《IEEE transactions on image processing》2004,13(4):573-585
The problem of recovering a high-resolution image from a sequence of low-resolution, DCT-based compressed observations is considered in this paper. The introduction of compression complicates the recovery problem. We analyze the DCT quantization noise and propose to model it in the spatial domain as a colored Gaussian process, which allows the quantization noise to be estimated at low bit rates without explicit knowledge of the original image frame; we propose a method that estimates the quantization noise simultaneously with the high-resolution data. We also incorporate a nonstationary image prior model to address blocking and ringing artifacts while preserving edges. To facilitate the simultaneous estimation, we employ a regularization functional to determine the regularization parameter without prior knowledge of the reconstruction procedure. The smoothing functional to be minimized is then formulated to have a global minimizer, in spite of its nonlinearity, by enforcing convergence and convexity requirements. Experiments illustrate the benefit of the proposed method compared with traditional high-resolution image reconstruction methods; quantitative and qualitative comparisons are provided.
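A schematic form of the smoothing functional the abstract describes (the notation below is an assumption for illustration, not the paper's exact formulation) is

$$\hat{x} \;=\; \arg\min_{x}\; (y - DHx)^{\top} K^{-1} (y - DHx) \;+\; \lambda(x)\,\lVert Cx \rVert^{2},$$

where $y$ stacks the decoded low-resolution observations, $H$ models motion and blur, $D$ is the downsampling operator, $K$ is the spatial-domain covariance of the colored Gaussian quantization noise, $C$ is a high-pass regularization operator, and $\lambda(x)$ is the data-adaptively determined regularization parameter.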
15.
Removing the blocking artifacts of block-based DCT compressed images
One of the major drawbacks of block-based DCT compression methods is that they may produce visible artifacts at block boundaries due to coarse quantization of the coefficients. We propose an adaptive approach that reduces blockiness in both the DCT and spatial domains to suppress block-to-block discontinuities. For smooth regions, our method exploits the fact that the original pixel levels within a block are continuous, and uses this property together with the correlation between neighboring blocks to reduce pixel discontinuities across boundaries. For texture and edge regions, an edge-preserving smoothing filter is applied. Simulation results show that the proposed algorithm significantly reduces the blocking artifacts of still and video images, as judged by both objective and subjective measures.
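The region-switched idea can be made concrete with a toy filter: correct strongly across a boundary where the jump is small (smooth region), mildly where it is large (likely a real edge or texture). The threshold and weights below are illustrative, only vertical 8×8 boundaries are treated, and this is a stand-in for the paper's two-domain method.

```python
import numpy as np

def adaptive_deblock(img, t=20.0):
    """Mode-switched smoothing across vertical 8x8 block boundaries (toy).

    For each boundary pixel pair: if the jump is small the region is treated
    as smooth and corrected strongly; large jumps are assumed to be real
    edges or texture and receive only a mild correction.
    """
    out = img.astype(float)
    for k in range(8, img.shape[1], 8):
        jump = out[:, k] - out[:, k - 1]
        corr = np.where(np.abs(jump) < t, 0.25 * jump, 0.05 * jump)
        out[:, k - 1] += corr
        out[:, k] -= corr
    return out
```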
16.
Combined edge crispiness and statistical differencing for deblocking JPEG compressed images
In this work, a new approach is proposed for dealing with blocking effects in JPEG compressed images. High-frequency details of the coded images are mainly contaminated by quantization noise; preserving image details while reducing the effect of quantization noise as much as possible can improve the performance of any enhancement method. To achieve this goal along with removal of the blocking effect, the high-frequency components of the image are first extracted by high-pass filtering. The result is then scaled by a factor that depends on the compression ratio and subtracted from the observed image. This preprocessed image is used to design an adaptive filter that depends on its statistical behavior, and the adaptive filter is then applied to it. The result shows high SNR, significantly improved separation between blocking noise and image features, and effective reduction of image blurring. Further steps preserve the global and local edges of the processed image, remove blocking noise, and ensure smoothness without blurring; these steps are dedicated to removing blocking artifacts and enhancing feature regularities. The approach is evaluated against other techniques both subjectively and quantitatively.
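The first stage (extract high frequencies, scale by a compression-dependent factor, subtract) is essentially inverse unsharp masking. A minimal sketch, using a Gaussian high-pass as an illustrative filter choice and a hand-picked scale factor standing in for the compression-ratio-dependent one.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def suppress_hf_noise(img, scale=0.5):
    """First stage of the described pipeline (sketch): extract the
    high-frequency component, scale it by a factor tied to the compression
    ratio, and subtract it from the observed image to attenuate quantization
    noise before the adaptive filtering stage.
    """
    img = img.astype(float)
    highpass = img - gaussian_filter(img, sigma=1.5)
    return img - scale * highpass
```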
17.
Argenti F. Benelli G. Mecocci A. 《IEEE Journal on Selected Areas in Communications》1993,11(1):46-58
Digital high-definition TV (HDTV) signals are generally compressed to reduce transmission bandwidth requirements. A compression algorithm for bit-rate reduction of HDTV images using the wavelet transform is presented, and the major problems related to the transmission of a compressed HDTV signal are analyzed. Transmission is examined both on a noisy channel and on an asynchronous transfer mode (ATM) network. The effects of channel noise on the reconstructed image are determined, and a solution to mitigate the degradation of image quality is presented. A model for the output bit rate of the HDTV coder is derived and used to simulate transmission through an ATM multiplexer, so that the network's main performance parameters can be determined.
18.
Haowei Liu Ming-Ting Sun Ruei-Cheng Wu Shiaw-Shian Yu 《Journal of Visual Communication and Image Representation》2011,22(5):432-439
Most automatic event detection methods for video surveillance detect target events using features extracted in the pixel domain. In practice, however, surveillance videos are often compressed, so it is desirable to perform event detection directly in the compressed domain, without decoding the video for analysis purposes. In this paper, we investigate the use of motion trajectories for video activity detection in the compressed domain and show that reliable motion trajectories can be extracted directly from compressed H.264 video streams. To overcome problems caused by unreliable motion vectors, we propose to include information from the compressed-domain prediction residuals to make the tracking more robust. We use a real-world application, detecting vacant or occupied parking spaces, to demonstrate the effectiveness of the proposed approach, and we also demonstrate its robustness to different encoder settings and lighting conditions.
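At its simplest, compressed-domain tracking accumulates the bit-stream motion vectors inside the target's region from frame to frame. The sketch below does exactly that, using a per-region median as a cheap guard against outlier vectors; the paper's residual-based robustification is not reproduced here, and all names are illustrative.

```python
import numpy as np

def track_from_motion_vectors(mv_fields, bbox):
    """Follow a target using only decoder-side motion vectors (schematic).

    mv_fields: iterable of (H, W, 2) arrays of per-block motion vectors read
    from the bit-stream; bbox = (row, col, h, w) in block units. Each frame
    the box moves by the median vector inside it, a cheap guard against the
    unreliable vectors that the paper instead handles with prediction
    residuals.
    """
    r, c, h, w = bbox
    trajectory = [(r, c)]
    for mv in mv_fields:
        dr, dc = np.median(mv[r:r + h, c:c + w].reshape(-1, 2), axis=0)
        r = int(np.clip(round(r + dr), 0, mv.shape[0] - h))
        c = int(np.clip(round(c + dc), 0, mv.shape[1] - w))
        trajectory.append((r, c))
    return trajectory
```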
19.
Robust real-time segmentation of images and videos using a smooth-spline snake-based algorithm
Frederic Precioso Michel Barlaud Thierry Blu Michael Unser 《IEEE transactions on image processing》2005,14(7):910-924
This paper deals with fast image and video segmentation using active contours. Region-based active contours using level sets are powerful techniques for video segmentation, but they suffer from high computational cost. A parametric active contour method based on B-spline interpolation has been proposed to greatly reduce the computational cost, but it is sensitive to noise. Here, we relax the rigid interpolation constraint to make the method robust in the presence of noise: by using smoothing splines, we trade a tunable amount of interpolation error for a smoother spline curve. Experiments on natural sequences show that this new flexibility yields segmentation results of higher quality at no additional computational cost, so real-time processing for moving-object segmentation is preserved.
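The interpolation-versus-smoothing trade-off is exactly what standard smoothing-spline routines expose through their smoothing factor. A small illustration with SciPy; the contour data and the value of s are made up for the demo.

```python
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(2)

# Noisy samples of a closed contour (a circle), standing in for snake nodes.
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
x = np.cos(t) + 0.05 * rng.standard_normal(60)
y = np.sin(t) + 0.05 * rng.standard_normal(60)

# s=0 would force exact B-spline interpolation (noise-sensitive, as in the
# earlier parametric method); s>0 fits a smoothing spline that trades a
# controlled amount of interpolation error for a smoother curve, which is
# the relaxation the paper exploits.
tck_smooth, _ = splprep([x, y], s=0.5, per=True)
xs, ys = splev(np.linspace(0, 1, 200), tck_smooth)
```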