Similar Documents (20 results)
1.
Compressed Video Sensing (CVS), also called distributed compressed video sensing, is a video compression approach that combines compressed sensing (CS) with distributed video coding (DVC). In CVS, each frame is divided into blocks and compressively sampled, the measurements are DPCM-coded, and the result is quantized with either a uniform or a non-uniform quantizer. Most existing CVS quantizers are designed under the assumption that the sampled or residual data follow a Gaussian distribution. In this work, the compressively sampled data are further analyzed with a Kolmogorov-Smirnov test, and a simple, efficient quantizer is designed by training the quantization codebook with Lloyd's optimal quantizer criterion. Experiments show that, compared with conventional quantization methods, the proposed quantizer reduces BD-Rate by about 14.2% and improves BD-PSNR by about 0.11 dB, improving both the compression efficiency and the reconstruction quality of CVS.
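As a rough illustration of the codebook-training step described in this abstract (not the authors' implementation), the Python sketch below runs Lloyd's iterations on a set of training measurements; the synthetic Gaussian data, the number of levels, and the iteration count are assumptions.

```python
import numpy as np

def lloyd_quantizer(samples, n_levels=8, n_iter=50):
    """Train quantizer output levels with Lloyd's iterations:
    alternate nearest-level assignment and centroid update."""
    # Initialize the codebook from evenly spaced quantiles of the data.
    levels = np.quantile(samples, np.linspace(0.05, 0.95, n_levels))
    for _ in range(n_iter):
        # Assign each sample to its nearest reconstruction level.
        idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
        # Move each level to the centroid (mean) of its cell.
        for k in range(n_levels):
            if np.any(idx == k):
                levels[k] = samples[idx == k].mean()
    return np.sort(levels)

# Hypothetical training data standing in for block-wise CS measurements.
rng = np.random.default_rng(0)
measurements = rng.normal(0.0, 1.0, size=10000)
codebook = lloyd_quantizer(measurements)
print("trained reconstruction levels:", np.round(codebook, 3))
```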

2.
Wei, Henglu; Zhou, Wei; Zhang, Xiu; Zhou, Xin; Duan, Zhemin. Multimedia Tools and Applications, 2019, 78(1): 363-387

Transform/quantization (T/Q) is one of the most computationally complex modules in High Efficiency Video Coding (HEVC). In this paper, an all-zero block (AZB) detection algorithm for HEVC is proposed to reduce the complexity of T/Q. The proposed AZB detection algorithm is based on the quantization level of the maximum transform coefficient and is suitable for both the uniform quantizer (UQ) and the rate-distortion optimized quantizer (RDOQ). Experimental results show that 46% of the T/Q complexity can be reduced for UQ and 42% for RDOQ, with negligible loss of video quality and compression efficiency, outperforming state-of-the-art methods.
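The following Python sketch illustrates the general all-zero-block idea, under the assumption of an orthonormal DCT and a plain uniform quantizer rather than the HEVC integer transform and UQ/RDOQ: a residual block whose largest transform coefficient quantizes to zero can be flagged so that the remaining T/Q work is skipped. The block sizes, data, and quantization step are illustrative.

```python
import numpy as np
from scipy.fft import dctn  # orthonormal 2-D DCT as a stand-in for the HEVC core transform

def is_all_zero_block(residual, qstep):
    """Declare an all-zero block (AZB) when even the largest transform
    coefficient would be quantized to zero, so T/Q can be skipped."""
    coeffs = dctn(residual, norm="ortho")
    max_coeff = np.abs(coeffs).max()
    # Plain uniform quantizer with rounding; HEVC's UQ/RDOQ differ in detail.
    return int(round(max_coeff / qstep)) == 0

rng = np.random.default_rng(1)
flat_block = rng.normal(0.0, 0.5, size=(8, 8))   # small prediction residual
busy_block = rng.normal(0.0, 20.0, size=(8, 8))  # large prediction residual
print(is_all_zero_block(flat_block, qstep=16.0))   # likely True  -> skip T/Q
print(is_all_zero_block(busy_block, qstep=16.0))   # likely False -> full T/Q
```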


3.

In this paper, we propose a new no-reference image quality assessment method for JPEG compressed images. In contrast to most existing approaches, the proposed method considers the compression process when assessing blocking effects in JPEG compressed images. These images exhibit blocking artifacts at high compression ratios. Quantization of the discrete cosine transform (DCT) coefficients is the main mechanism in the JPEG algorithm for trading off image quality against compression ratio. As the compression ratio increases, the DCT coefficients are reduced further by quantization, and this coarse quantization causes blocking effects in the compressed image. We propose to use the DCT coefficient values to score image quality in terms of blocking artifacts. An image may contain uniform and non-uniform blocks, associated with low- and high-frequency information, respectively. Once an image is compressed with JPEG, inherently non-uniform blocks may become uniform due to quantization, whereas inherently uniform blocks stay uniform. In the proposed method, inherently non-uniform blocks are first distinguished from inherently uniform blocks using a sharpness map. If the DCT coefficients of an inherently non-uniform block are not significant, this indicates that the block was coarsely quantized. Hence, the DCT coefficients of the inherently non-uniform blocks are used to assess image quality. Experimental results on various image databases show that the proposed blockiness metric correlates well with subjective scores and outperforms existing metrics.
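A minimal Python sketch of the mechanism the metric relies on (not the proposed metric itself): as the quantization step grows, the AC coefficients of an 8×8 DCT block vanish and the decoded block becomes increasingly uniform. The block content and step sizes are arbitrary assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Coarse quantization of the 8x8 DCT zeroes the AC coefficients, turning an
# inherently non-uniform block into a (near-)uniform one in the decoded image.
rng = np.random.default_rng(2)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # textured block

coeffs = dctn(block - 128.0, norm="ortho")
for qstep in (4, 32, 256):  # illustrative quantization step sizes
    quantized = np.round(coeffs / qstep) * qstep
    decoded = idctn(quantized, norm="ortho") + 128.0
    significant_ac = np.count_nonzero(quantized) - (quantized[0, 0] != 0)
    print(f"qstep={qstep:3d}  significant AC coeffs={significant_ac:2d}  "
          f"decoded-block std={decoded.std():6.2f}")
```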


4.
To improve the decoded quality of depth maps in distributed multi-view plus depth (DMVD) video coding, a wavelet-domain non-uniform quantization scheme for distributed depth video is proposed that combines the subband level and the subband coefficients, allocating more bits to edges so as to improve edge quality in the depth map. Based on the coefficient distribution of the wavelet-transformed depth map, the low-frequency coefficients at level N are quantized uniformly, while the high-frequency coefficients of the other levels are quantized non-uniformly. In the non-uniform quantization of the high-frequency coefficients, a larger quantization step is used for coefficients near zero; as the coefficient magnitude grows, the step size shrinks and the quantization becomes finer, improving the quality of edge details in the depth map. Experimental results show that for the "Dancer" and "PoznanHall2" depth sequences, which contain many pronounced edges, the algorithm effectively improves edge quality and thus the rate-distortion (R-D) performance, by up to 1.2 dB; for the "Newspaper" and "Balloons" depth sequences, whose edge regions are small and blurred, the R-D performance is still improved by about 0.3 dB.
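A rough Python sketch of a magnitude-dependent non-uniform quantizer of the kind described: coefficients near zero are quantized coarsely and larger (edge-related) coefficients more finely. The breakpoints and step sizes are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Illustrative magnitude bands (low edge, high edge, step size):
# coarse near zero, progressively finer for larger wavelet coefficients.
BANDS = [(0.0, 8.0, 8.0),      # |c| in [0, 8)  -> step 8 (coarse)
         (8.0, 32.0, 4.0),     # |c| in [8, 32) -> step 4
         (32.0, np.inf, 2.0)]  # |c| >= 32      -> step 2 (fine, edges)

def nonuniform_quantize(coeffs):
    """Quantize each high-frequency wavelet coefficient with a step size
    that shrinks as the coefficient magnitude grows."""
    out = np.zeros_like(coeffs, dtype=float)
    mag = np.abs(coeffs)
    for low, high, step in BANDS:
        mask = (mag >= low) & (mag < high)
        out[mask] = np.round(coeffs[mask] / step) * step
    return out

rng = np.random.default_rng(3)
highpass = rng.laplace(0.0, 10.0, size=16)  # stand-in high-frequency subband
print(np.round(highpass, 1))
print(nonuniform_quantize(highpass))
```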

5.
6.
Suppose independent observations Xi, i=1,…,n are drawn from a mixture model, where λ is a scalar and Q(λ) is a nondegenerate distribution of unspecified form. We consider estimating Q(λ) by the nonparametric maximum likelihood (NPML) method under two scenarios: (1) the likelihood is penalized by a functional g(Q); and (2) Q is subject to a constraint g(Q)=g0. We propose a simple and reliable algorithm, termed VDM/ECM, for estimating Q when the likelihood is penalized by a linear functional, and show via a linearization procedure that the algorithm also applies to the more general case where the penalty is not linear but a function of linear functionals. The constrained NPMLE can be found with this algorithm by penalizing the quadratic distance |g(Q)−g0|² with a large penalty factor γ>0. The algorithm is illustrated with two real data sets.

7.
In this paper, an upper bound expression for the Q-function approximation is proposed. This expression defines a class of approximations that are upper bounds under the particular condition derived in the paper. The proposed upper bound approximation of the Q-function, and the approximations derived from it, are applied to the signal-to-quantization-noise ratio (SQNR) calculation of variance-matched scalar quantization of a Gaussian source. The proposed Q-function approximation, which has a simple parametric analytical form, is optimized over its parameter with respect to the relative error (RE) of approximation for the particular problem considered. Specifically, three different optimization methods are proposed, so that different Q-function approximations are provided in the paper. Moreover, a way to extend the obtained results is provided so that the proposal can facilitate any mathematical analysis involving Q-function calculation. The results indicate that the proposed Q-function approximations not only outperform numerous previously reported approximations in terms of accuracy, but also enable the derivation of a reasonably accurate closed-form formula for the SQNR of variance-matched scalar quantization of a Gaussian source, which is not achievable with previously reported Q-function approximations of similar analytical complexity. The results presented in this paper are applicable to many signal processing and communication problems requiring Q-function calculation.
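For context, a small Python check of a classical upper bound of this type, the well-known Chernoff-type bound Q(x) ≤ ½·exp(−x²/2) for x ≥ 0 (not the paper's optimized parametric approximation), against the exact Q-function:

```python
import numpy as np
from scipy.special import erfc

def q_exact(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def q_upper_bound(x):
    """Classical Chernoff-type upper bound Q(x) <= 0.5*exp(-x^2/2), x >= 0."""
    return 0.5 * np.exp(-x ** 2 / 2.0)

x = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
exact = q_exact(x)
bound = q_upper_bound(x)
rel_err = (bound - exact) / exact  # relative error of the approximation
for xi, e, b, r in zip(x, exact, bound, rel_err):
    print(f"x={xi:.1f}  Q={e:.3e}  bound={b:.3e}  RE={r:.2%}")
```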

8.
Yan Yang. Information Sciences, 2007, 177(22): 4922-4933
This paper deals with a general α-decomposition problem of fuzzy relations, which can be stated as follows: given a fuzzy relation R ∈ F(X×Y), determine two fuzzy relations Q ∈ F(X×Z) and T ∈ F(Z×Y) whose composition generally α-decomposes R, where X (resp. Y) is a finite set. We first point out that every fuzzy relation R is always generally α-decomposable, and give an algorithm to construct such Q and T for a given R. We then show that the general content ρ(R) is equal to the chromatic number of the simple graph FR generated by R; therefore, finding an exact algorithm for calculating ρ(R) is an NP-complete problem.

9.
The Contourlet transform incorporates a directional filter bank and possesses a multi-directionality that the wavelet transform cannot express, so it captures the edge and contour information of natural images very effectively. The JPEG2000 compression standard adopts the wavelet transform together with dead-zone uniform quantization. Building on the JPEG2000 standard, this paper proposes a new coding scheme that combines the Contourlet transform with the optimal (Lloyd-Max) quantizer. Dead-zone quantization is likewise applied within the optimal quantizer.
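A minimal Python sketch of a dead-zone scalar quantizer of the kind used in JPEG2000; the step size, the doubled-width zero bin, and the mid-bin reconstruction offset are illustrative choices, not the exact standard parameters.

```python
import numpy as np

def deadzone_quantize(coeffs, step=8.0):
    """Dead-zone scalar quantizer: index = sign(c)*floor(|c|/step).
    The zero bin is twice as wide (width 2*step) as the other bins."""
    return np.sign(coeffs) * np.floor(np.abs(coeffs) / step)

def deadzone_dequantize(indices, step=8.0):
    """Mid-bin reconstruction for nonzero indices, zero otherwise."""
    return np.where(indices == 0, 0.0,
                    np.sign(indices) * (np.abs(indices) + 0.5) * step)

rng = np.random.default_rng(4)
coeffs = rng.laplace(0.0, 6.0, size=10)  # stand-in transform coefficients
q = deadzone_quantize(coeffs)
print(np.round(coeffs, 1))
print(q)
print(np.round(deadzone_dequantize(q), 1))
```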

10.
In this paper, an algorithm is developed for designing asymptotically optimal unrestricted uniform polar quantization (UUPQ) of bivariate Gaussian random variables that is more efficient and more accurate than the existing algorithms on this subject. The proposed algorithm is iterative and defines the analytical model of asymptotically optimal UUPQ in only a few iterations. The UUPQ model is further improved via optimization of the last magnitude reconstruction level so that the mean squared error (MSE) is minimal. Moreover, for straightforward performance assessment of the analytical UUPQ model, an asymptotic formula for the signal-to-quantization-noise ratio (SQNR) is derived, which is reasonably accurate for any rate R greater than or equal to 2.5 bits/sample. It is demonstrated empirically that the asymptotically optimal UUPQ model outperforms previous UUPQ models in terms of SQNR. Finally, the transition from the analytical to the practically designed UUPQ model, an important aspect of quantizer design, is considered, and a novel method to achieve it is provided. The proposed method is applicable to the practical design of any unrestricted polar quantization.

11.
In this paper, we consider the problem of a fault-free Hamiltonian cycle passing through prescribed edges in an n-dimensional hypercube Qn with some faulty edges. We obtain the following result: let n ≥ 2, F ⊆ E(Qn), and E0 ⊆ E(Qn)\F with 1 ≤ |E0| ≤ 2n−3 and |F| < n − (⌊|E0|/2⌋ + 1). If the subgraph induced by E0 is a linear forest (i.e., pairwise vertex-disjoint paths), then in the graph Qn − F all edges of E0 lie on a Hamiltonian cycle.

12.
Quantizer design for minimizing the mean square error (MSE) was developed independently by Lloyd and Max, who tabulated the output and decision levels for the Gaussian distribution for both uniform and nonuniform quantization. This design was subsequently extended to other standard distributions such as the Rayleigh and Laplacian. Preliminary error analysis in image processing has shown that image reconstruction based on minimum MSE is not optimal: quantizer design based on minimizing powers of the quantization error other than two yields less degraded images. This motivates the development of quantization tables for the most frequently used distributions based on the mean fourth power error (MFPE) and mean sixth power error (MSPE). The latter criteria give better quantizer behavior in regions of large luminance changes, contours, edges, etc., leading to subjectively higher-quality images. This research focuses on optimal quantizer design minimizing MFPE and MSPE. Quantization ranges and output levels, based on both uniform and nonuniform spacing, are developed for the Gaussian, Rayleigh and Laplacian distributions. The tables, obtained by numerical techniques, are based on the normalized standard deviation. All computations were implemented with double-precision accuracy (64 bits) with an algorithmic error in the range 10⁻⁶–10⁻⁸ on an IBM 370/155 digital computer, and plotting routines were implemented on Tektronix hardware (an interactive graphics package) using a DEC-20 digital computer. Other relevant parameters, such as the distortion ratio (ratio of the distortion of the uniform to the nonuniform quantizer), distortion, and entropy as functions of the number of quantization levels, are also illustrated. These tabular and graphical data will be useful in digital communications applications such as image processing.
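As a rough Python illustration of quantizer design under a mean p-th power error criterion (here p = 4, i.e., MFPE), the sketch below alternates nearest-level assignment with a numerical update of each output level; the Gaussian training sample, the number of levels, and the iteration count are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def design_quantizer(samples, n_levels=4, p=4, n_iter=30):
    """Lloyd-style design minimizing the mean p-th power error:
    alternate nearest-level assignment and per-cell level optimization."""
    levels = np.quantile(samples, np.linspace(0.1, 0.9, n_levels))
    for _ in range(n_iter):
        idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
        for k in range(n_levels):
            cell = samples[idx == k]
            if cell.size < 2:
                continue  # keep the current level for (nearly) empty cells
            # Output level minimizing mean |x - y|^p over its cell
            # (for p = 2 this reduces to the cell mean).
            res = minimize_scalar(lambda y: np.mean(np.abs(cell - y) ** p),
                                  bounds=(cell.min(), cell.max()),
                                  method="bounded")
            levels[k] = res.x
    return np.sort(levels)

rng = np.random.default_rng(5)
gaussian = rng.normal(0.0, 1.0, size=20000)  # unit-variance Gaussian source
print("MFPE output levels:", np.round(design_quantizer(gaussian), 3))
```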

13.
This paper is concerned with the stability analysis of a networked control system in which communication from the controller to the plant input takes place over a digital channel subject to packet-dropouts and finite-level quantization. No acknowledgments of receipt are available to the controller. To alleviate the effect of packet-dropouts, the controller transmits tentative plant input sequences. Within this setup, we derive a sufficient condition for small ℓ∞ signal ℓ∞ stability of the networked control system. This condition requires the maximum number of consecutive packet-dropouts to be bounded. We also elucidate the trade-off that exists between disturbance attenuation, the step size of the quantizer, and the maximum number of consecutive packet-dropouts.
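A toy Python simulation of the packetized-control idea described above (not the paper's controller or stability analysis): at each step the controller transmits a sequence of tentative future inputs, and on a packet-dropout the actuator falls back to the buffered sequence from the last successfully received packet. The scalar plant, horizon, control law, and dropout pattern are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
a, b = 1.1, 1.0          # scalar unstable plant: x+ = a*x + b*u
horizon = 4              # length of the tentative input sequence per packet
x, buffer, age = 5.0, np.zeros(horizon), 0

for k in range(20):
    # Controller: tentative input sequence for the predicted states.
    u_seq, x_pred = [], x
    for _ in range(horizon):
        u = -(a - 0.5) / b * x_pred   # place the predicted closed loop at 0.5
        u_seq.append(u)
        x_pred = a * x_pred + b * u
    dropped = rng.random() < 0.3      # 30% i.i.d. packet-dropouts (assumption)
    if not dropped:
        buffer, age = np.array(u_seq), 0   # refresh the actuator buffer
    elif age < horizon - 1:
        age += 1                           # advance along the buffered sequence
    u_applied = buffer[age]                # keep reusing the last entry if needed
    x = a * x + b * u_applied
    print(f"k={k:2d} dropped={dropped!s:5} u={u_applied:7.3f} x={x:7.3f}")
```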

14.
Color quantization is an important operation with many applications in graphics and image processing. Most quantization methods are essentially based on data clustering algorithms. However, despite its popularity as a general purpose clustering algorithm, k-means has not received much respect in the color quantization literature because of its high computational requirements and sensitivity to initialization. In this paper, we investigate the performance of k-means as a color quantizer. We implement fast and exact variants of k-means with several initialization schemes and then compare the resulting quantizers to some of the most popular quantizers in the literature. Experiments on a diverse set of images demonstrate that an efficient implementation of k-means with an appropriate initialization strategy can in fact serve as a very effective color quantizer.
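A compact Python sketch of color quantization with plain k-means (a generic implementation, not the fast/exact variants or initialization schemes evaluated in the paper); the random test image and palette size are assumptions.

```python
import numpy as np

def kmeans_color_quantize(img, n_colors=16, n_iter=20, seed=0):
    """Quantize an RGB image by clustering its pixels with plain k-means
    and replacing each pixel by its cluster centroid (palette color)."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 3).astype(float)
    # Initialize the palette from randomly chosen pixels.
    palette = pixels[rng.choice(len(pixels), n_colors, replace=False)]
    for _ in range(n_iter):
        # Assign every pixel to the nearest palette color.
        d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Move each palette color to the centroid of its pixels.
        for k in range(n_colors):
            if np.any(labels == k):
                palette[k] = pixels[labels == k].mean(axis=0)
    return palette[labels].reshape(img.shape).astype(np.uint8), palette

rng = np.random.default_rng(7)
demo_img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
quantized, palette = kmeans_color_quantize(demo_img)
print("palette shape:", palette.shape, "quantized shape:", quantized.shape)
```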

15.
This paper addresses the problem of optimally inserting idle time into a single-machine schedule when the sequence is fixed and the cost of each job is a convex function of its completion time. We propose a pseudo-polynomial time algorithm to find a solution within some tolerance of optimality in the solution space, i.e., each completion time will belong to a small time interval z within which the optimal solution lies. Letting H be the planning horizon and |J| the number of jobs, the proposed algorithm is superior to the current best algorithm in terms of time-complexity when |J|<H/z.

16.
We consider the problem of determining the maximum and minimum of the Rényi divergences Dλ(P||Q) and Dλ(Q||P) for two probability distributions P and Q of discrete random variables X and Y, provided that the probability distribution P and the parameter α of the α-coupling between X and Y are fixed, i.e., provided that Pr{X = Y} = α.
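For reference, a small Python helper computing the Rényi divergence of order λ between two discrete distributions (the definition only; the coupling-constrained extremization studied in the paper is not reproduced). The example distributions are arbitrary.

```python
import numpy as np

def renyi_divergence(p, q, lam):
    """Renyi divergence of order lam != 1:
    D_lam(P||Q) = 1/(lam-1) * log( sum_i p_i^lam * q_i^(1-lam) )."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p ** lam * q ** (1.0 - lam))) / (lam - 1.0)

P = [0.5, 0.3, 0.2]
Q = [0.2, 0.3, 0.5]
for lam in (0.5, 2.0, 5.0):
    print(f"lam={lam}:  D(P||Q)={renyi_divergence(P, Q, lam):.4f}  "
          f"D(Q||P)={renyi_divergence(Q, P, lam):.4f}")
```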

17.
The aim of this paper is to find a quantization technique that has low implementation complexity and asymptotic performance arbitrarily close to the optimum. More specifically, the goal is to develop a new vector quantizer design procedure for a memoryless Gaussian source that yields vector quantizers with excellent performance and the structure required for fast quantization. To achieve this, a fast lattice-encoding algorithm is combined with a geometric approach to generate a model of a geometric piecewise-uniform lattice vector quantizer. Expressions for the granular distortion and the optimal number of output points in each region are derived, and both exact and approximate asymptotic analyses are carried out. In this process, the probability density function of the input signal vector is treated as constant within each region. The analysis demonstrates the existence of piecewise-constant approximations to the input-vector probability density function that are optimal for the proposed geometric piecewise-uniform vector quantizer. The considered quantization technique is near optimal for a memoryless Gaussian source; in other words, this paper proposes a method for near-optimum, low-complexity vector quantizer design based on probability density function discretization. The presented methodology gives a signal-to-quantization-noise ratio that in some cases differs from the optimum by 0.1 dB or less. Improvements of the considered model over some existing techniques, in both performance and complexity, are also demonstrated.

18.
Distributed Video Coding (DVC) has been proposed for an increasing range of new application domains. This rise is motivated by its very attractive features: the flexibility to build very low-cost video encoders and the high built-in error resilience when operating over noisy communication channels. Yet the compression efficiency of DVC notably lags behind the state of the art in video coding and compression, H.264/AVC in particular. In this context, a novel coding solution for DVC is presented in this paper, which promises to move its rate-distortion (RD) performance towards the state of the art. Turbo Trellis Coded Modulation (TTCM), with its attractive coding gain in channel coding, is utilized, and its impact in both pixel-domain and transform-domain DVC frameworks is discussed. Simulations show a significant gain in RD performance compared with state-of-the-art Turbo-coding-based DVC implementations.

19.
20.
In a graph G=(V,E), a bisection (X,Y) is a partition of V into sets X and Y such that |X| ≤ |Y| ≤ |X|+1. The size of (X,Y) is the number of edges between X and Y. In the Max Bisection problem we are given a graph G=(V,E) and are required to find a bisection of maximum size. It is not hard to see that ⌈|E|/2⌉ is a tight lower bound on the maximum size of a bisection of G. We study the parameterized complexity of the following parameterized problem, called Max Bisection above Tight Lower Bound (Max-Bisec-ATLB): decide whether a graph G=(V,E) has a bisection of size at least ⌈|E|/2⌉+k, where k is the parameter. We show that this parameterized problem has a kernel with O(k²) vertices and O(k³) edges, i.e., every instance of Max-Bisec-ATLB is equivalent to an instance of Max-Bisec-ATLB on a graph with at most O(k²) vertices and O(k³) edges.
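A small Python illustration of the lower bound mentioned above (an empirical check only, not the paper's kernelization): sampling random balanced bisections of a random graph and comparing the best cut found with ⌈|E|/2⌉. The graph size and edge probability are assumptions.

```python
import numpy as np
from math import ceil

rng = np.random.default_rng(8)
n, p = 20, 0.3                       # assumed graph size and edge probability
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]
target = ceil(len(edges) / 2)        # the tight lower bound ceil(|E|/2)

best = 0
for _ in range(200):                 # sample random balanced bisections
    perm = rng.permutation(n)
    side = np.zeros(n, dtype=bool)
    side[perm[: n // 2]] = True      # |X| and |Y| differ by at most one
    cut = sum(side[u] != side[v] for u, v in edges)
    best = max(best, cut)

print(f"|E| = {len(edges)}, lower bound = {target}, best bisection found = {best}")
```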
