Similar Documents
 20 similar documents were retrieved.
1.
Surveys the field of super resolution (SR) processing for compressed video. The introduction of motion vectors, compression noise, and additional redundancies within the image sequence makes this problem fertile ground for novel processing methods. In conducting this survey, though, we develop and present all techniques within the Bayesian framework. This adds consistency to the presentation and facilitates comparison between the different methods. The article is organized as follows. We define the acquisition system utilized by the surveyed procedures. Then we formulate the HR problem within the Bayesian framework and survey models for the acquisition and compression systems. This requires consideration of both the motion vectors and transform coefficients within the compressed bit stream. We survey models for the original HR image intensities and displacement values. We discuss solutions for the SR problem and provide examples of several approaches.
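Most of the surveyed methods amount to maximizing a posterior that combines a data-fidelity term (how well the blurred, decimated high-resolution estimate explains the observed low-resolution frames) with a prior on the high-resolution image. The Python/NumPy sketch below shows that structure for a single frame with a Gaussian blur, simple decimation, and a Laplacian (Gaussian MRF) prior; the operators, prior, and step size are illustrative assumptions, not the acquisition or compression model of any particular surveyed method (motion vectors and quantization noise are ignored).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def map_sr(y, scale=2, sigma_psf=1.0, lam=0.05, n_iter=100, step=0.2):
    """Minimal MAP super-resolution sketch for a single low-resolution frame.

    Model: y = D B x + n, with B a Gaussian blur and D decimation by `scale`.
    The prior is a Gaussian MRF on the Laplacian of x, so we minimise
    ||D B x - y||^2 + lam * ||L x||^2 by plain gradient descent.
    """
    y = y.astype(float)
    H, W = y.shape
    x = np.kron(y, np.ones((scale, scale)))      # initial guess: pixel replication

    def down(img):                               # D B: blur, then decimate
        return gaussian_filter(img, sigma_psf)[::scale, ::scale]

    def up(img):                                 # (D B)^T: zero-fill, then blur
        z = np.zeros((H * scale, W * scale))
        z[::scale, ::scale] = img
        return gaussian_filter(z, sigma_psf)

    for _ in range(n_iter):
        grad_data = up(down(x) - y)              # gradient of the data-fidelity term
        grad_prior = laplace(laplace(x))         # gradient of ||L x||^2 (up to a constant)
        x -= step * (grad_data + lam * grad_prior)
    return x
```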

2.
A general framework for anisotropic diffusion of multivalued images is presented. We propose an evolution equation where, at each point in time, the directions and magnitudes of the maximal and minimal rate of change in the vector-image are first evaluated. These are given by eigenvectors and eigenvalues of the first fundamental form in the given image metric. Then, the image diffuses via a system of coupled differential equations in the direction of minimal change. The diffusion "strength" is controlled by a function that measures the degree of dissimilarity between the eigenvalues. We apply the proposed framework to the filtering of color images represented in CIE-L*a*b* space.
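A minimal sketch of this scheme for an RGB or L*a*b* image: the first fundamental form is assembled from the channel gradients, its eigenvalues give the maximal/minimal rates of change, and each channel is diffused along the minimal-change direction with a strength that decays with the eigenvalue dissimilarity. The strength function `g` and the constant `k` are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def multivalued_diffusion(img, n_iter=20, dt=0.15, k=0.02):
    """Sketch of anisotropic diffusion for a multivalued image (shape H x W x C)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        Iy = np.gradient(u, axis=0)                    # per-channel derivatives
        Ix = np.gradient(u, axis=1)
        # First fundamental form g = [[E, F], [F, G]], summed over channels.
        E = (Ix * Ix).sum(axis=2)
        F = (Ix * Iy).sum(axis=2)
        G = (Iy * Iy).sum(axis=2)
        tr, det = E + G, E * G - F * F
        disc = np.sqrt(np.maximum(tr * tr - 4.0 * det, 0.0))
        lam_plus, lam_minus = (tr + disc) / 2.0, (tr - disc) / 2.0
        # Direction of minimal change: perpendicular to the maximal-change eigenvector.
        theta = 0.5 * np.arctan2(2.0 * F, E - G) + np.pi / 2.0
        c, s = np.cos(theta), np.sin(theta)
        # Diffusion strength falls off with the eigenvalue dissimilarity.
        g = 1.0 / (1.0 + (lam_plus - lam_minus) / k)
        for ch in range(u.shape[2]):
            uy, ux = np.gradient(u[..., ch])
            uyy, uyx = np.gradient(uy)
            uxy, uxx = np.gradient(ux)
            # Second directional derivative along the minimal-change direction.
            u_tt = c * c * uxx + 2.0 * c * s * uxy + s * s * uyy
            u[..., ch] += dt * g * u_tt
    return u
```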

3.
A modified regularization algorithm is proposed to enhance compressed video by restoring predictive-coded pictures. Since most video coding standards adopt a hybrid structure of macroblock-based motion compensation and block discrete cosine transform, blocking artifacts occur both at block boundaries and in block interiors, and the degradation caused by quantization acts on the differential (prediction-error) images. Based on this observation, a new degradation model for differential images is presented first. The corresponding restoration algorithm then processes the differential images directly, before the decoded images are reconstructed. Two constraints, directional continuity at the block boundary and within the block interior, are used to define convex sets for restoring the differential images. The proposed differential-domain restoration algorithm is compared with the corresponding reconstructed-domain algorithm using the same degradation model and an equivalent set of constraints, and it outperforms the reconstructed-domain algorithm both analytically and experimentally.

4.
The rapid growth of embedded control applications places extremely demanding requirements on today's microcontrollers: complex control algorithms over large numbers of digital/analog input signals must be processed within a bounded, short response time while the appropriate output signals are generated.

5.
Compressed video is degraded in quality due to the introduction of coding artifacts. A two-step subjective experiment was performed to evaluate the most visible artifacts and their relation to video quality for AVS and H.264 compressed video. In the first step, non-expert viewers were asked to score the image quality degradation as a function of compression ratio for various video sequences and to indicate which artifact was perceived during scoring. In the second step, eight trained viewers scored the strength of the three artifacts reported as most perceivable in the first step: blurring, blocking, and color distortion. The quality performance of AVS and H.264 was also compared; an analysis of covariance indicated that the two were very close. A linear regression analysis showed that, for the CIF videos, 96% of the variance in quality degradation could be predicted by linearly combining the normalized strengths of the three most visible artifacts.
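The final step is an ordinary least-squares fit of the degradation score against the three normalized artifact strengths. A small sketch of that fit follows; the numbers are placeholders, not the study's data.

```python
import numpy as np

# Hypothetical per-sequence data: normalized strengths of the three most visible
# artifacts (blurring, blocking, color distortion) and the measured degradation.
S = np.array([[0.6, 0.4, 0.1],
              [0.2, 0.7, 0.3],
              [0.8, 0.5, 0.2],
              [0.1, 0.2, 0.1]])
d = np.array([0.55, 0.45, 0.70, 0.15])

# Least-squares fit of d ~ w0 + w1*blur + w2*block + w3*color.
A = np.hstack([np.ones((len(d), 1)), S])
w, *_ = np.linalg.lstsq(A, d, rcond=None)
pred = A @ w
r2 = 1 - ((d - pred) ** 2).sum() / ((d - d.mean()) ** 2).sum()
print("weights:", w, "explained variance R^2:", r2)
```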

6.
A class of fuzzy adaptive filters for color digital image processing is proposed. The filters have a generalized structure: by choosing different distance functions, different forms of existing filters are obtained as special cases. Simulation results show that the filters perform well.
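As a sketch of such a distance-driven generalized structure (without the fuzzy weighting itself), the function below ranks the color vectors in each window by their aggregate distance to the other window members; with the Euclidean distance this reduces to the classical vector median filter, and swapping in another distance function yields another member of the family. The function name and parameters are illustrative.

```python
import numpy as np

def vector_order_filter(img, radius=1, dist=None):
    """Generalized color filter: output the window vector with minimal aggregate distance."""
    if dist is None:
        dist = lambda a, b: np.linalg.norm(a - b, axis=-1)   # L2 distance by default
    H, W, C = img.shape
    out = img.astype(float).copy()
    r = radius
    for i in range(r, H - r):
        for j in range(r, W - r):
            win = img[i - r:i + r + 1, j - r:j + r + 1].reshape(-1, C).astype(float)
            # Aggregate distance of each window member to all the others.
            agg = dist(win[:, None, :], win[None, :, :]).sum(axis=1)
            out[i, j] = win[np.argmin(agg)]
    return out
```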

7.
8.
《信息技术》, 2015, (9): 28-31
Stereoscopic video quality assessment has become one of the key problems affecting stereoscopic imaging technology. This paper proposes an objective method for assessing stereoscopic video quality. First, indicators that reflect stereoscopic video quality are extracted, including luminance contrast distortion, structural similarity, and depth fidelity. Regression analysis is then used to determine the weight of each indicator's contribution to stereoscopic video quality, yielding a mathematical model relating the indicators to video quality. Experimental results show that the method agrees well with subjective assessment and better reflects the characteristics of human vision.

9.
Li B., Sang N., Cao Z., Zhang T. Electronics Letters, 2005, 41(20): 1107-1109
A new angiogram image enhancement algorithm based on anisotropic diffusion is proposed. The method adaptively chooses the conductance parameter, which strongly influences the entire diffusion process. Experimental results show that the new method is more effective than the original anisotropic diffusion.
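The abstract does not state the adaptation rule, so the sketch below uses a common stand-in: at every iteration the Perona-Malik conductance K is re-estimated from a percentile of the current gradient-magnitude histogram. It illustrates adaptive conductance selection in general, not this paper's specific scheme.

```python
import numpy as np

def adaptive_pm_diffusion(img, n_iter=30, dt=0.2, pct=90):
    """Perona-Malik diffusion with a conductance parameter re-estimated each iteration."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # One-sided differences toward the four neighbors.
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        grad_mag = np.hypot(*np.gradient(u))
        K = np.percentile(grad_mag, pct) + 1e-12    # adaptively chosen conductance parameter
        g = lambda d: np.exp(-(d / K) ** 2)         # Perona-Malik conductance function
        u += dt * (g(np.abs(dN)) * dN + g(np.abs(dS)) * dS +
                   g(np.abs(dE)) * dE + g(np.abs(dW)) * dW)
    return u
```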

10.
We extend the well-known scalar image bilateral filtering technique to diffusion tensor magnetic resonance images (DTMRI). The scalar version of bilateral image filtering is extended to perform edge-preserving smoothing of DT field data. The bilateral DT filtering is performed in the Log-Euclidean framework which guarantees valid output tensors. Smoothing is achieved by weighted averaging of neighboring tensors. Analogous to bilateral filtering of scalar images, the weights are chosen to be inversely proportional to two distance measures: the geometrical Euclidean distance between the spatial locations of tensors and the dissimilarity of tensors. We describe the noniterative DT smoothing equation in closed form and show how interpolation of DT data is treated as a special case of bilateral filtering where only spatial distance is used. We evaluate different recent DT tensor dissimilarity metrics including the Log-Euclidean, the similarity-invariant Log-Euclidean, the square root of the J-divergence, and the distance scaled mutual diffusion coefficient. We present qualitative and quantitative smoothing and interpolation results and show their effect on segmentation, for both synthetic DT field data, as well as real cardiac and brain DTMRI data.
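A compact sketch of the filter as described: tensors are mapped to the log domain (so the weighted average, after exponentiation, is guaranteed symmetric positive definite), averaged with weights that decay with spatial distance and with Log-Euclidean dissimilarity, and mapped back. The array layout, parameter values, and brute-force loops are illustrative assumptions. Making `sigma_t` very large leaves only the spatial weight, which corresponds to the interpolation/smoothing special case mentioned above.

```python
import numpy as np

def _log_spd(T):
    """Matrix logarithm of a symmetric positive-definite tensor."""
    w, V = np.linalg.eigh(T)
    return (V * np.log(np.maximum(w, 1e-12))) @ V.T

def _exp_sym(L):
    """Matrix exponential of a symmetric tensor (always SPD)."""
    w, V = np.linalg.eigh(L)
    return (V * np.exp(w)) @ V.T

def bilateral_dt_filter(field, radius=2, sigma_s=1.5, sigma_t=0.5):
    """Bilateral smoothing of a tensor field `field` of shape (H, W, 3, 3)."""
    H, W = field.shape[:2]
    logs = np.empty_like(field, dtype=float)
    for i in range(H):
        for j in range(W):
            logs[i, j] = _log_spd(field[i, j])
    out = np.empty_like(logs)
    r = radius
    for i in range(H):
        for j in range(W):
            acc, wsum = np.zeros((3, 3)), 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < H and 0 <= jj < W):
                        continue
                    d_spatial2 = di * di + dj * dj
                    d_tensor2 = np.sum((logs[ii, jj] - logs[i, j]) ** 2)  # Log-Euclidean distance^2
                    w = np.exp(-d_spatial2 / (2 * sigma_s ** 2)
                               - d_tensor2 / (2 * sigma_t ** 2))
                    acc += w * logs[ii, jj]
                    wsum += w
            out[i, j] = _exp_sym(acc / wsum)       # back from the log domain: valid SPD tensor
    return out
```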

11.
12.
Image quality assessment (IQA) has been intensively studied, especially for the full-reference (FR) scenario. However, only the mean-squared error (MSE) is widely employed in compression. Why do other IQA metrics work so poorly there? We first summarize three main limitations: computational time, portability, and working manner. To address these problems, we propose in this paper a new content-weighted MSE (CW-MSE) method to assess the quality of compressed images. The design principle of our model is to use adaptive Gaussian convolution to estimate the influence of image content in a block-based manner, thereby approximating human visual perception of image quality. Results of experiments on six popular subjective image quality databases (LIVE, TID2008, CSIQ, IVC, Toyama, and TID2013) confirm the superiority of our CW-MSE over state-of-the-art FR IQA approaches.
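The paper's exact weighting is not reproduced here; the sketch below only shows the general shape of a content-weighted MSE: block MSEs are averaged with weights derived from a Gaussian-smoothed local-variance (activity) map of the reference, so busy blocks that mask errors count less. The weight formula and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cw_mse(ref, deg, block=8, sigma=1.5):
    """Illustrative content-weighted MSE between a reference and a degraded image."""
    ref = ref.astype(float)
    deg = deg.astype(float)
    # Local activity of the reference via Gaussian convolution.
    mu = gaussian_filter(ref, sigma)
    var = gaussian_filter(ref * ref, sigma) - mu * mu
    H, W = ref.shape
    num = den = 0.0
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            err = np.mean((ref[i:i+block, j:j+block] - deg[i:i+block, j:j+block]) ** 2)
            w = 1.0 / (1.0 + np.mean(var[i:i+block, j:j+block]))   # illustrative masking weight
            num += w * err
            den += w
    return num / den
```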

13.
A fuzzy-logic-based scene change detection method for MPEG compressed video
金红, 周源华. 《通信学报》, 2000, 21(7): 57-62
Automatic detection of shot boundaries is an indispensable first step toward content-based video retrieval. Most existing scene change detection methods operate on uncompressed video, while more and more video data exists in compressed form. This paper proposes a new scene change detection algorithm for MPEG compressed video: it uses the DC sequence and the motion vectors to compute the pixel difference, the histogram difference, a statistical difference, and the proportion of macroblocks carrying "true" motion vectors, and then combines these quantities with fuzzy logic, with the membership functions determined adaptively. Experiments show that this shot detection algorithm achieves a high detection rate and …
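A toy sketch of the fuzzy combination step follows; the membership functions, rules, and normalization are illustrative assumptions (the paper determines the memberships adaptively).

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0, 1)

def cut_score(pixel_diff, hist_diff, stat_diff, true_mv_ratio):
    """Fuzzy scene-change score from four inputs assumed normalized to [0, 1]."""
    large = lambda x: tri(x, 0.4, 1.0, 1.6)      # "large difference"
    small = lambda x: tri(x, -0.6, 0.0, 0.6)     # "small value"
    # Rule 1: pixel, histogram, and statistical differences all large -> cut.
    r1 = min(large(pixel_diff), large(hist_diff), large(stat_diff))
    # Rule 2: few reliable motion vectors and a large histogram difference -> cut.
    r2 = min(small(true_mv_ratio), large(hist_diff))
    return max(r1, r2)                            # fuzzy OR of the rule outputs

print(cut_score(0.9, 0.8, 0.7, 0.1))   # high score: likely a shot boundary
print(cut_score(0.1, 0.2, 0.1, 0.9))   # low score: likely not a shot boundary
```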

14.
This paper describes a multistage perceptual quality assessment (MPQA) model for compressed images. The motivation for developing a perceptual quality assessment is to measure (in)visible differences between original and processed images. The MPQA produces visible distortion maps and quantitative error measures informed by considerations of the human visual system (HVS). Original and decompressed images are decomposed into different spatial frequency bands and orientations, modeling the human cortex. Contrast errors are calculated for each frequency and orientation and masked as a function of contrast sensitivity and background uncertainty. The spatially masked contrast error measurements are then combined across frequency bands and orientations to produce a single perceptual distortion visibility map (PDVM). A perceptual quality rating (PQR) is calculated from the PDVM and transformed onto a one-to-five scale, PQR(1-5), for direct comparison with the mean opinion score generally used in subjective ratings. The proposed MPQA model builds on existing perceptual quality assessment models but is differentiated by the inclusion of contrast masking as a function of background uncertainty. A pilot clinical study on wavelet-compressed digital angiograms was performed on a sample set of images to identify diagnostically acceptable reconstructions. Our results show that the PQR(1-5) of diagnostically acceptable lossy reconstructions agrees better with cardiologists' responses than objective error measures such as peak signal-to-noise ratio. A Perceptual thresholding and CSF-based Uniform quantization (PCU) method is also proposed using the vision models presented in this paper. The vision models are implemented in the thresholding and quantization stages of a compression algorithm and are shown to give improved compression-ratio performance with less visible distortion than the embedded zerotree wavelet (EZW) coder.
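A greatly simplified sketch of that pipeline: differences of Gaussians stand in for the band/orientation decomposition, a crude activity-based term stands in for contrast masking, and the pooled map is mapped onto a 1-5 rating. All constants are placeholder assumptions, not the calibrated MPQA model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pdvm_pqr(orig, dec, sigmas=(1, 2, 4, 8)):
    """Toy distortion-visibility map and 1-5 quality rating for a decoded image."""
    orig = orig.astype(float)
    dec = dec.astype(float)
    pdvm = np.zeros_like(orig)
    prev_o, prev_d = orig, dec
    for s in sigmas:
        lo_o, lo_d = gaussian_filter(orig, s), gaussian_filter(dec, s)
        band_o, band_d = prev_o - lo_o, prev_d - lo_d        # one spatial-frequency band
        masking = 1.0 + gaussian_filter(np.abs(band_o), s)   # stronger content masks more
        pdvm += np.abs(band_o - band_d) / masking            # masked contrast error
        prev_o, prev_d = lo_o, lo_d
    pooled = pdvm.mean()
    pqr = 1.0 + 4.0 / (1.0 + pooled)    # map pooled distortion to 1 (bad) .. 5 (good)
    return pdvm, pqr
```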

15.
CT images of human bone are contaminated by noise from X-ray scattering, quantization, and the electronics. A clear CT image is important for locating a fracture and judging its severity, so the noise in bone images must be reduced to make image detail visible. To this end, a denoising method for orthopedic CT images based on fuzzy anisotropic diffusion is proposed, which effectively reduces the noise in bone images. Experiments show that the method determines a reasonable gradient threshold and, while removing image noise, preserves image edges and details well.

16.
JPEG2000 is known as an efficient standard for encoding images. However, at very low bit-rates, artifacts or distortions can be observed in decoded images. In order to improve the visual quality of decoded images and make them perceptually acceptable, we propose in this work a new preprocessing scheme. The scheme consists of preprocessing the image to be encoded with a nonlinear filter prior to JPEG2000 compression. More specifically, the input image is decomposed into low- and high-frequency sub-images using morphological filtering. Each sub-image is then compressed using JPEG2000, with a different bit-rate assigned to each sub-image. To evaluate the quality of the reconstructed image, two metrics are used: (a) peak signal-to-noise ratio, for the visual quality of the low-frequency sub-image, and (b) the structural similarity index measure, for the visual quality of the high-frequency sub-image. Experimental results on the reconstructed images show that, at low bit-rates, the proposed scheme provides better visual quality than a direct use of JPEG2000 (without any preprocessing).
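A sketch of the decomposition step follows; the opening/closing average is one plausible morphological smoother assumed here (the abstract does not specify the exact filter). The two sub-images would then go to separate JPEG2000 encodes with different bit-rates, and quality would be checked with PSNR and SSIM (e.g. `skimage.metrics.peak_signal_noise_ratio` / `structural_similarity`).

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def split_low_high(img, size=5):
    """Morphological split into a low-frequency sub-image and its high-frequency residual."""
    img = img.astype(float)
    low = 0.5 * (grey_opening(img, size=size) + grey_closing(img, size=size))
    high = img - low                  # reconstruction after decoding is low + high
    return low, high
```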

17.
Objective assessment of network video quality is challenging because the video is distorted by several factors, including transmission and compression. In order to improve on existing objective methods, an assessment method based on a Mamdani fuzzy inference system is proposed. First, six quality parameters are introduced and fed into a fuzzy logic controller. Second, the outputs of that stage are used as the inputs of another fuzzy logic controller, whose inference yields the objective quality of the network video. Finally, the performance of the proposed method is validated on four videos under different network environments and compared with other methods. The experimental results show that the proposed method improves the agreement between subjective and objective assessment.
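Below is a minimal Mamdani engine written out by hand (rather than with a fuzzy-logic library), showing the mechanics: fuzzification with triangular memberships, min for rule firing, max aggregation, and centroid defuzzification. Two inputs and two rules stand in for the six parameters and the two cascaded controllers described above; all memberships and rules are illustrative assumptions.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def mamdani_quality(blockiness, blur):
    """Crisp quality score in [0, 1] from two degradation inputs in [0, 1]."""
    q = np.linspace(0.0, 1.0, 201)                  # output universe: quality
    good_q = tri(q, 0.5, 1.0, 1.5)
    poor_q = tri(q, -0.5, 0.0, 0.5)
    low = lambda x: tri(x, -0.5, 0.0, 0.5)
    high = lambda x: tri(x, 0.5, 1.0, 1.5)
    # Rule 1: low blockiness AND low blur   -> good quality.
    # Rule 2: high blockiness OR high blur  -> poor quality.
    fire1 = min(low(blockiness), low(blur))
    fire2 = max(high(blockiness), high(blur))
    agg = np.maximum(np.minimum(fire1, good_q), np.minimum(fire2, poor_q))
    return (q * agg).sum() / (agg.sum() + 1e-12)    # centroid defuzzification

print(mamdani_quality(0.1, 0.2))   # mild degradation -> high quality score
print(mamdani_quality(0.8, 0.7))   # strong degradation -> low quality score
```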

18.
In this paper, we focus on the problem of speckle removal by means of anisotropic diffusion and, specifically, on the importance of correctly estimating the statistics involved. First, we derive an anisotropic diffusion filter that does not depend on a linear approximation of the assumed speckle model, which is the case for a previously reported filter, namely SRAD. Then, we focus on the estimation of the coefficients of variation of both signal and noise. Our experiments indicate that the neighborhoods used for parameter estimation do not need to coincide with those used in the diffusion equations. Finally, we show that, as long as the estimates are good enough, the filter proposed here and SRAD perform fairly similarly, a fact that emphasizes the importance of correctly estimating the coefficients of variation.
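A simplified single-step sketch that emphasizes the estimation point: the noise coefficient of variation is taken from a user-selected homogeneous region, the signal coefficient of variation from a sliding window, and the two drive a diffusion coefficient. Windowed statistics replace the exact gradient/Laplacian-based SRAD estimator, and the update is a plain conductance-weighted neighbor average, so this is an assumption-laden illustration rather than either filter discussed in the paper. `noise_region` is an index tuple such as `(slice(0, 30), slice(0, 30))`.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_diffusion_step(img, noise_region, win=5, dt=0.1):
    """One diffusion step of a speckle-reducing filter, statistics estimated separately."""
    u = img.astype(float)
    hom = u[noise_region]
    q0 = hom.std() / (hom.mean() + 1e-12)             # noise coefficient of variation

    mu = uniform_filter(u, win)
    var = uniform_filter(u * u, win) - mu * mu
    q = np.sqrt(np.maximum(var, 0.0)) / (mu + 1e-12)  # local signal coefficient of variation

    # Conductance: ~1 in homogeneous speckle (q close to q0), ~0 at edges (q >> q0).
    c = 1.0 / (1.0 + (q ** 2 - q0 ** 2) / (q0 ** 2 * (1.0 + q0 ** 2) + 1e-12))
    c = np.clip(c, 0.0, 1.0)

    # Simple conductance-weighted update over the four neighbors.
    dN = np.roll(u, 1, 0) - u
    dS = np.roll(u, -1, 0) - u
    dE = np.roll(u, -1, 1) - u
    dW = np.roll(u, 1, 1) - u
    return u + dt * c * (dN + dS + dE + dW)
```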

19.
A new methodology to measure coded image/video quality using the just-noticeable-difference (JND) idea was proposed in Lin et al. (2015). Several small JND-based image/video quality datasets were released by the Media Communications Lab at the University of Southern California in Jin et al. (2016) and Wang et al. (2016) [3]. In this work, we present an effort to build a large-scale JND-based coded video quality dataset. The dataset consists of 220 5-s sequences in four resolutions (1920×1080, 1280×720, 960×540, and 640×360). Each of the 880 video clips is encoded with the H.264/AVC codec at QP = 1, …, 51, and the first three JND points are measured with 30+ subjects. The dataset is called the "VideoSet", an acronym for "Video Subject Evaluation Test (SET)". This work describes the subjective test procedure, the detection and removal of outlying measurements, and the properties of the collected JND data. Finally, the significance and implications of the VideoSet for future video coding research and standardization efforts are pointed out. All source/coded video clips as well as the measured JND data in the VideoSet are available to the public in the IEEE DataPort (Wang et al., 2016 [4]).
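If visibility of the coding difference is assumed monotone in QP, a JND point can be located by bisection over the QP axis. The sketch below uses a hypothetical `visible_diff(qp)` oracle standing in for the subjective comparison; the actual VideoSet protocol (30+ subjects, outlier removal) is more involved.

```python
def first_jnd(visible_diff, qp_lo=1, qp_hi=51):
    """Smallest QP at which viewers notice a difference, found by bisection.

    `visible_diff(qp)` returns True when the clip coded at `qp` is
    distinguishable from the reference (a hypothetical subjective oracle).
    """
    if not visible_diff(qp_hi):
        return None                       # no visible difference even at the coarsest QP
    while qp_lo < qp_hi:
        mid = (qp_lo + qp_hi) // 2
        if visible_diff(mid):
            qp_hi = mid
        else:
            qp_lo = mid + 1
    return qp_lo
```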

20.
Manipulation and compositing of MC-DCT compressed video
Many advanced video applications require manipulations of compressed video signals. Popular video manipulation functions include overlap (opaque or semitransparent), translation, scaling, linear filtering, rotation, and pixel multiplication. We propose algorithms to manipulate compressed video in the compressed domain. Specifically, we focus on compression algorithms using the discrete cosine transform (DCT) with or without motion compensation (MC). Such compression systems include JPEG, motion JPEG, MPEG, and H.261. We derive a complete set of algorithms for all aforementioned manipulation functions in the transform domain, in which video signals are represented by quantized transform coefficients. Due to a much lower data rate and the elimination of decompression/compression conversion, the transform-domain approach has great potential in reducing the computational complexity. The actual computational speedup depends on the specific manipulation functions and the compression characteristics of the input video, such as the compression rate and the nonzero motion vector percentage. The proposed techniques can be applied to general orthogonal transforms, such as the discrete trigonometric transform. For compression systems incorporating MC (such as MPEG), we propose a new decoding algorithm to reconstruct the video in the transform domain and then perform the desired manipulations in the transform domain. The same technique can be applied to efficient video transcoding (e.g., from MPEG to JPEG) with minimal decoding.
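The key enabler for the overlap/compositing functions is that the DCT is linear, so pixel-domain blending commutes with the transform and can be applied directly to (dequantized) coefficients. A tiny numerical check of this property, with illustrative 8×8 blocks and alpha value:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
X = rng.random((8, 8))                         # two illustrative 8x8 pixel blocks
Y = rng.random((8, 8))
alpha = 0.3                                    # semi-transparent overlap factor

# Blend in the pixel domain, then transform ...
blend_pixel = dctn(alpha * X + (1 - alpha) * Y, norm='ortho')
# ... versus blending the DCT coefficients directly.
blend_dct = alpha * dctn(X, norm='ortho') + (1 - alpha) * dctn(Y, norm='ortho')
assert np.allclose(blend_pixel, blend_dct)     # compositing commutes with the DCT

recon = idctn(blend_dct, norm='ortho')         # back to the pixel domain if needed
```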
