Similar Documents
20 similar documents found (search time: 609 ms).
1.
A hierarchical quantization technique for hierarchical B-frame coding is proposed. By exploiting the temporal correlation among the B-frames within a GOP, quantization step sizes with different weights are assigned to B-frames at different levels in pyramid order, thereby optimizing the coding. Simulation results show that, while maintaining the same reconstructed image quality, the proposed hierarchical quantization saves considerably more bitrate than the conventional scheme that uses a single quantization step factor.
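A minimal sketch of the level-dependent step-size assignment, in Python; the GOP layout and the per-level QP offsets below are illustrative assumptions, not values from the paper:

```python
# Level-dependent QP assignment for hierarchical B-frames.
# The per-level offsets and the GOP layout are hypothetical.

HIER_B_QP_OFFSET = {0: 0, 1: 2, 2: 3, 3: 4}  # deeper level -> coarser step (assumed)

def qp_for_frame(base_qp: int, pyramid_level: int) -> int:
    """Quantization parameter for a B-frame at the given pyramid level."""
    return base_qp + HIER_B_QP_OFFSET.get(pyramid_level, 4)

# An 8-frame GOP with a 3-level B-pyramid (layout assumed).
gop_levels = [0, 3, 2, 3, 1, 3, 2, 3]
print([qp_for_frame(26, lvl) for lvl in gop_levels])
```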

2.
高冰冰  张长海  吕帅 《计算机科学》2010,37(11):252-256
This paper introduces the conditional planning problem and related solvers, focusing on logic-based encodings. Translation methods based on quantified Boolean formulas (QBF) are analyzed in detail, and three different forms of QBF encoding are presented. Finally, the three encodings are compared, the relative merits of translations based on propositional formulas versus quantified Boolean formulas are analyzed, and future research directions and development trends for QBF-based planning are discussed.

3.
Codebook design method for enhanced dual-predictive multi-stage vector quantization of speech spectral parameters
Linear predictive coding (LPC) parameters representing the speech spectral envelope are widely used in speech coding algorithms. Very-low-bit-rate speech coders require the spectral parameters to be encoded with as few bits as possible. This paper presents a codebook design method for an enhanced dual-predictive multi-stage vector quantization (EDPMSVQ) of speech spectral parameters. The improved multi-stage vector quantizer fully exploits both the short-term and long-term correlations of the spectral parameters: it adopts a multi-stage vector quantizer (MSVQ) with memory and applies a distinct prediction coefficient to each dimension of the spectral parameter vector. Furthermore, by distinguishing strongly correlated from weakly correlated adjacent frames, it uses two separate sets of predictor values, one for each case, which further reduces the coding rate. The EDPMSVQ method achieves nearly "transparent" quantization of the speech spectral parameters at 20 bits, while slightly reducing the computational complexity of quantization and greatly reducing the required storage.
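The dual-prediction selection step might look like the following sketch, assuming a random (untrained) residual codebook and made-up per-dimension predictor coefficients for the strong- and weak-correlation cases:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 10))   # hypothetical residual codebook (untrained)
pred_strong = np.full(10, 0.8)         # assumed coefficients, strongly correlated frames
pred_weak = np.full(10, 0.3)           # assumed coefficients, weakly correlated frames

def quantize_frame(x, prev_x):
    """Try both predictor sets, quantize the residual, keep the better one."""
    best = None
    for coeffs in (pred_strong, pred_weak):
        residual = x - coeffs * prev_x
        idx = int(np.argmin(np.sum((codebook - residual) ** 2, axis=1)))
        recon = coeffs * prev_x + codebook[idx]
        err = float(np.sum((x - recon) ** 2))
        if best is None or err < best[0]:
            best = (err, idx, recon)
    return best[1], best[2]            # codeword index, reconstructed vector

idx, recon = quantize_frame(rng.normal(size=10), rng.normal(size=10))
```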

4.
周汀  陈亮  章倩苓 《计算机学报》1999,22(12):1317-1319
A new vector quantization coding algorithm for images is proposed. The algorithm combines correlation-based vector quantization with address vector quantization: the coding address of each block is first predicted from the states of neighboring blocks, blocks for which the correlation prediction fails are coded by address-code matching, and adaptive arithmetic coding is applied to the result. Test results show that the bit rate drops by about 38%-68% relative to memoryless vector quantization, and by more than about 25% relative to the address vector quantization algorithm proposed by An Ping and the correlation vector quantization algorithm proposed by Wang Wei et al.

5.
The gradient is a characteristic feature of an image, and considering gradient information in several directions simultaneously exploits it more effectively. A multi-direction gradient local phase quantization (LPQ) pattern algorithm for texture is therefore proposed. It first extracts image gradients in different directions, then encodes the gradient map of each direction with local phase quantization, and finally concatenates the phase-quantized features of all directions into a single matching feature vector. To exploit the gradient information more fully, a block-wise variant of local phase quantization is also investigated. Experiments on two texture databases and one palmprint database show that applying local phase quantization to the gradients in each direction is an effective texture feature extraction algorithm.
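For illustration, a simplified LPQ pass over one directional gradient map; the 7x7 window and the four low-frequency STFT points follow the common LPQ formulation and are assumptions rather than the paper's exact settings:

```python
import numpy as np
from scipy.signal import convolve2d

def lpq_codes(img, win=7):
    """8-bit LPQ codes from the signs of four low-frequency STFT points."""
    x = np.arange(win) - win // 2
    a = 1.0 / win                          # lowest non-zero frequency
    w0 = np.ones(win)                      # DC kernel
    w1 = np.exp(-2j * np.pi * a * x)       # complex exponential kernel
    freqs = [(w0, w1), (w1, w0), (w1, w1), (w1, w1.conj())]
    bits = []
    for wr, wc in freqs:                   # separable 2-D STFT responses
        resp = convolve2d(convolve2d(img, wr[:, None], mode="same"),
                          wc[None, :], mode="same")
        bits += [resp.real >= 0, resp.imag >= 0]
    code = np.zeros(img.shape, dtype=np.uint8)
    for k, b in enumerate(bits):           # pack the 8 sign bits per pixel
        code |= b.astype(np.uint8) << k
    return code

img = np.random.rand(64, 64)
gx = np.gradient(img, axis=1)              # one directional gradient map
hist = np.bincount(lpq_codes(gx).ravel(), minlength=256)  # matching feature
```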

6.
This paper discusses the application of neural network techniques to vector quantization in speech coding. A neural-network vector quantization algorithm can compress the codebook dimensionality and speed up the codebook search, thereby improving the quantization. Applying this optimized vector quantization to speech coding reduces computational complexity and improves coding quality.

7.
The side-match vector quantizer (SMVQ) is a branch of the finite-state vector quantizer (FSVQ). It is suited to compressing images with high inter-block correlation; its advantage is that, at a similar bit rate, its coding quality exceeds that of the conventional exhaustive-search vector quantizer, while its drawbacks are heavy computation and a fixed bit rate. This paper proposes an improved side-match vector quantizer. Test results show that the improved algorithm is a variable-rate coder with a lower bit rate than SMVQ, faster coding, and improved coding quality.
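The core side-match step, selecting a small state codebook by border agreement with the already-decoded neighbors, can be sketched as follows (codebook and block sizes are assumed):

```python
import numpy as np

def side_match_state_codebook(codebook, upper_block, left_block, state_size=16):
    """Indices of the codewords whose borders best match the decoded neighbors."""
    top_err = np.sum((codebook[:, 0, :] - upper_block[-1, :]) ** 2, axis=1)
    left_err = np.sum((codebook[:, :, 0] - left_block[:, -1]) ** 2, axis=1)
    return np.argsort(top_err + left_err)[:state_size]

B = 4
codebook = np.random.rand(256, B, B)   # hypothetical 256-codeword codebook
upper = np.random.rand(B, B)           # already-decoded block above
left = np.random.rand(B, B)            # already-decoded block to the left
state = side_match_state_codebook(codebook, upper, left)
# The encoder then searches only these 16 codewords instead of all 256.
```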

8.
To address the difficulty of haptic interaction and haptic text reading on mobile devices, a haptic-perception quantization and encoding model is proposed, together with two haptic text-encoding schemes: a vibration-count indexed character table and Morse code. The model first quantizes the vibration duration, interval duration, and number of consecutive vibrations of vibrotactile signals; user perception of these quantized vibrations is then measured experimentally; different vibration durations and counts are encoded; and finally the codes are combined to express text. Usability experiments on character recognition rate and user satisfaction show that quantized perception design and encoding of vibrotactile signals effectively improve haptic reading efficiency, yielding higher reading efficiency and better ease of use than traditional Braille-style encoding schemes. The model can be used to design haptic interaction interfaces on mobile devices and to build efficient haptic text-reading systems.
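A minimal sketch of the Morse-style scheme: characters map to quantized vibrate/pause durations. The timing values are illustrative assumptions, not the perception thresholds measured in the paper:

```python
# Character -> vibration pattern mapping (durations in milliseconds).
DOT, DASH, GAP = 120, 360, 120                 # assumed quantized durations
MORSE = {"s": "...", "o": "---", "a": ".-", "b": "-..."}  # excerpt only

def vibration_pattern(text):
    """Return alternating [vibrate, pause, vibrate, ...] durations."""
    pattern = []
    for ch in text.lower():
        code = MORSE.get(ch)
        if not code:
            continue                           # character not in the excerpt table
        for symbol in code:
            pattern += [DOT if symbol == "." else DASH, GAP]
        pattern[-1] = 3 * GAP                  # longer pause between characters
    return pattern[:-1] if pattern else pattern  # drop the trailing pause

print(vibration_pattern("sos"))
```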

9.
Objective: Hashing-based retrieval is a classic approach in image retrieval. Its principle is that images similar in the original space are projected by hash functions and quantized so that they obtain similar hash codes in Hamming space. Such methods generally involve two steps: projection and quantization. Projection is mostly done by reducing dimensionality with principal component analysis, but the quantization step differs greatly between methods. For data with unevenly distributed information, traditional image hashing methods quantize every dimension with the same fixed number of code bits, which leads to low coding efficiency and low quantization precision. This paper therefore proposes a product quantization method based on Huffman coding. Method: First, product quantization is applied to the dimensionality-reduced data so that its distribution in the original space is well preserved. Then the variance of each subspace is used as the measure of its information content and as the basis for bit allocation. Finally, with the help of a Huffman tree, more code bits are allocated to subspaces with larger variance. Result: Experiments on the public datasets MNIST, NUS-WIDE, and 22K LabelMe show that, compared with the original product quantization, the proposed method reduces quantization error by 49% on average and improves mean average precision by 19%. On MNIST, compared with the related transform coding (TC) method at code lengths from 32 to 256 bits, the training time of the proposed method is about 22.5 s shorter on average. Conclusion: The proposed multi-bit product quantization hashing method improves coding efficiency and quantization precision, outperforms comparable algorithms in mean average precision, recall, and other measures, and can be applied effectively to image retrieval.
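The variance-driven bit allocation can be sketched as below; a simple proportional rule stands in for the paper's Huffman-tree construction:

```python
import numpy as np

def allocate_bits(X, n_subspaces=4, total_bits=16):
    """Give each PQ subspace a bit budget proportional to its variance."""
    subspaces = np.split(X, n_subspaces, axis=1)
    var = np.array([s.var() for s in subspaces])
    bits = np.maximum(1, np.round(total_bits * var / var.sum())).astype(int)
    while bits.sum() > total_bits:             # repair rounding overshoot
        bits[np.argmax(bits)] -= 1
    while bits.sum() < total_bits:
        bits[np.argmin(bits)] += 1
    return bits                                # subspace codebook sizes: 2**bits

# High-variance dimensions concentrated in the first subspace.
X = np.random.randn(1000, 32) * np.array([4.0] * 8 + [1.0] * 24)
print(allocate_bits(X))                        # first subspace gets more bits
```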

10.
王文涛 《计算机与数字工程》2007,35(11):106-107,179
An algorithm is proposed that applies the idea of non-uniform quantization to coding the low-frequency wavelet subband coefficients. The energy distribution of the low-frequency wavelet coefficients is first examined level by level; then, according to the magnitude characteristics of the coefficients, the lowest-frequency wavelet coefficients are adaptively quantized with a larger step size, while the remaining higher-frequency wavelet coefficients use a smaller step size. Experimental results show improved coding performance at low bit rates.
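A minimal sketch of the two-step-size idea, assuming PyWavelets for the decomposition and arbitrary step-size values:

```python
import numpy as np
import pywt

# Decompose, then quantize the lowest-frequency (approximation) band with a
# larger step and the remaining detail bands with a smaller one.
img = np.random.rand(128, 128)
cA, *details = pywt.wavedec2(img, "haar", level=3)
q_cA = np.round(cA / 16.0)                     # larger step (assumed value)
q_details = [tuple(np.round(d / 4.0) for d in lvl) for lvl in details]
```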

11.
Recently, many wavelet-based compression algorithms have appeared, but at low bit rates the quantization of wavelet coefficients introduces quantization noise that severely degrades the quality of the compressed image. To address this, a post-processing algorithm for wavelet-compressed images based on restoration theory is proposed. On the established post-processing model, a constrained least-squares formulation is solved, and guideline values for the optimal parameters are given experimentally. Experimental results show that the algorithm improves the visual quality of compressed images at low bit rates while also raising the PSNR somewhat; comparative experiments with other algorithms confirm its advantage at low bit rates.

12.
A lossless compression method for hyperspectral images based on trellis-coded quantization
Remote sensing images involve enormous volumes of data, which puts great pressure on limited storage space and transmission bandwidth. Hyperspectral images, moreover, are expensive to acquire, have a wide range of applications, and generally must be compressed without any loss of information; without effective compression methods, their widespread use would be severely limited. Trellis-coded quantization (TCQ) borrows the ideas of signal-set expansion, signal-set partitioning, and trellis state transitions from trellis-coded modulation (TCM); it offers good mean-squared-error (MSE) performance at moderate computational complexity, but so far TCQ has mainly been applied to lossy image compression. To compress hyperspectral images effectively and losslessly, this paper introduces TCQ into lossless hyperspectral compression and, drawing on the characteristics of hyperspectral images, proposes a lossless compression method based on the wavelet transform and TCQ. Experimental results show that the algorithm compresses hyperspectral images better than the lossless algorithms of JPEG2000 and JPEG-LS.

13.
Reversible hiding in DCT-based compressed images
This paper presents a lossless and reversible steganography scheme for hiding secret data in each block of quantized discrete cosine transform (DCT) coefficients in JPEG images. In this scheme, two successive zero coefficients of the medium-frequency components in each block are used to hide the secret data. Furthermore, the scheme modifies the quantization table to maintain the quality of the stego-image. Experimental results confirm that the proposed scheme provides acceptable stego-image quality and successfully achieves reversibility.
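The embedding step might be sketched as follows; the zig-zag positions treated as "medium frequency" and the mapping of bit pairs to coefficient values are illustrative assumptions:

```python
import numpy as np

MID = list(range(16, 48))   # assumed "medium-frequency" zig-zag positions

def embed_two_bits(zigzag_block, bits):
    """Hide two bits in the first pair of successive zero mid-band coefficients."""
    blk = zigzag_block.copy()
    for i in MID[:-1]:
        if blk[i] == 0 and blk[i + 1] == 0:
            blk[i], blk[i + 1] = bits[0], bits[1]   # assumed value mapping
            return blk
    return None                                     # block cannot carry data

block = np.zeros(64, dtype=int)   # a fully quantized-to-zero block, for demo
print(embed_two_bits(block, (1, 0))[16:20])
```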

14.
This paper proposes a novel scalable authentication scheme that utilizes the progressive enhancement functionality of JPEG 2000 scalable image coding. The proposed method first models wavelet-based quality-scalable coding to identify the effect of quantization and de-quantization on wavelet coefficient magnitudes and on the data embedded within such coefficients as a watermark. A relationship is then established between the watermark extraction rule and the embedding rule, using the magnitudes of the reconstructed and original coefficients. The method ranks the wavelet coefficients according to their ability to retain the embedded watermark data intact under the various quantization levels corresponding to quality enhancements. Watermark data is then embedded into the wavelet coefficients according to their rank, followed by JPEG 2000 embedded coding. At the decoder, as more quality and resolution layers are decoded, the authentication metric improves; the complexity of the authentication process thus increases gradually with the number of quality and resolution enhancements. Low-complexity authentication is available at low-quality, low-resolution decoding, enabling real-time authentication for resource-constrained applications without affecting the authentication metric. Compared with existing methods, the proposed method achieves highly robust scalable authentication of JPEG 2000 coded images.

15.
《Real》2002,8(4):265-275
We examine color quantization of images using trellis-coded quantization (TCQ) in both the RGB and YUV domains. Together with a simple dithering scheme, an eight-bit trellis-coded color quantizer reproduces images that are visually indistinguishable from the 24-bit originals; it can be viewed as a predictive trellis-coded color quantization scheme. We also study trellis-coded vector quantization (TCVQ) for color quantization. Our proposed TCQ-based color quantization schemes are universal in the sense that no training or look-up table is needed. The complexity of TCQ is linear in image size, making trellis-coded color quantization suitable for color printing, real-time interactive graphics, and window-based display environments.

16.
To ensure the integrity of images compressed using block truncation coding (BTC), a tamper detection and image recovery scheme is proposed in this paper. In this scheme, the size of the authentication data can be adaptively selected according to the user's requirements. The authentication data is embedded in the value differences of the quantization levels in each BTC-compressed image block, and multiple copies of the recovery data are embedded into the bit maps of the smooth blocks. Experimental results show that the proposed scheme performs well in terms of detection precision and embedded image quality, and that tampered areas can be roughly recovered.
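A minimal sketch of BTC coding of one block plus a parity-based embedding of one authentication bit in the quantization-level difference; the parity rule is an assumption standing in for the paper's embedding:

```python
import numpy as np

def btc_block(block):
    """BTC-compress one block: two quantization levels plus a bit map."""
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean() if bitmap.any() else mean
    low = block[~bitmap].mean() if (~bitmap).any() else mean
    return int(round(low)), int(round(high)), bitmap

def embed_auth_bit(low, high, bit):
    """Force the parity of (high - low) to carry one authentication bit."""
    if (high - low) % 2 != bit:
        high += 1                 # one-level adjustment, assumed acceptable
    return low, high

block = np.random.randint(0, 256, (4, 4))
low, high, bitmap = btc_block(block)
low, high = embed_auth_bit(low, high, 1)
```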

17.
Color quantization is a common image processing technique in which full-color images are displayed using a limited palette of colors. The choice of a good palette is crucial, as it directly determines the quality of the resulting image. Standard quantization approaches aim to minimize the mean squared error (MSE) between the original and the quantized image, which does not correspond well to how humans perceive image differences. In this article, we introduce a color quantization algorithm that hybridizes an optimization scheme with an image quality metric that mimics the human visual system. Rather than minimizing the MSE, its objective is to maximize image fidelity as evaluated by S-CIELAB, an image quality metric that has been shown to work well for various image processing tasks. In particular, we employ a variant of simulated annealing with an objective function describing the S-CIELAB image quality of the quantized image compared with its original. Experimental results on a set of standard images demonstrate the superiority of our approach in terms of achieved image quality.
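A minimal sketch of the annealing loop, with plain MSE standing in for the S-CIELAB objective (which requires a full spatial-color model); the cooling schedule and move size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(pixels, palette):
    """Map each pixel to its nearest palette color."""
    d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
    return palette[np.argmin(d, axis=1)]

def anneal_palette(pixels, n_colors=8, steps=2000, t0=10.0):
    palette = pixels[rng.choice(len(pixels), n_colors)]
    cost = ((pixels - quantize(pixels, palette)) ** 2).mean()
    for k in range(steps):
        t = t0 * (1 - k / steps)               # linear cooling (assumed)
        cand = palette.copy()
        i = rng.integers(n_colors)
        cand[i] = np.clip(cand[i] + rng.normal(0, 8, 3), 0, 255)  # perturb one color
        c = ((pixels - quantize(pixels, cand)) ** 2).mean()
        if c < cost or rng.random() < np.exp((cost - c) / max(t, 1e-9)):
            palette, cost = cand, c            # accept better or, rarely, worse
    return palette

pixels = rng.integers(0, 256, (500, 3)).astype(float)
print(anneal_palette(pixels).round())
```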

18.
A novel image compression coding framework integrating super-resolution reconstruction, and its implementation
A novel image compression coding framework integrating super-resolution reconstruction is proposed. At the encoder, the input image is downsampled by a factor of 2, and the downsampled image is encoded and decoded with the JPEG standard; the decoded image is then reconstructed by learning-based super-resolution using a dictionary trained in advance on an external training set. To further improve the quality of the decoded image, a feedback path is designed into the framework: at the encoder, the super-resolution reconstruction is subtracted from the original image to obtain an auxiliary residual image, and at the decoder this residual compensates for the high-frequency detail lost during super-resolution reconstruction. With the residual image coded at a low bit rate, the quality of the decoded image is improved substantially. In addition, the framework is controlled by a single quantization parameter, which makes it quite practical. Experimental results show that, at the same peak signal-to-noise ratio, the method reduces the coding bit rate considerably compared with the JPEG standard and achieves much higher compression ratios.
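The framework's data flow can be sketched with Pillow, with bicubic upsampling standing in for the paper's dictionary-based super-resolution; the quality setting is assumed:

```python
import io
import numpy as np
from PIL import Image

def jpeg_round_trip(img, quality=75):
    """Encode and decode with the JPEG standard (quality is assumed)."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("L")

orig = Image.fromarray(np.random.randint(0, 256, (128, 128), np.uint8), "L")
low = orig.resize((64, 64), Image.BICUBIC)     # encoder: factor-2 downsample
dec = jpeg_round_trip(low)                     # JPEG codec round trip
sr = dec.resize((128, 128), Image.BICUBIC)     # stand-in for dictionary-based SR
residual = np.asarray(orig, np.int16) - np.asarray(sr, np.int16)
# The residual would be coded at low rate and added back at the decoder.
```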

19.
In this paper, an image data compression scheme based on the Periodic Haar Piecewise-Linear (PHL) transform and quantization tables is proposed. The effectiveness of the compression for different classes of images is evaluated, and a comparison of compression quality using the PHL and DCT transforms is given.

20.
Most existing BTC (Block Truncation Coding) based watermarking algorithms do not fully exploit visual perception of the host images; they can neither preserve the visual quality of stego-images nor recover the original images without distortion. To solve this, a new reversible visible watermarking scheme in the AMBTC (Absolute Moment Block Truncation Coding) domain is proposed. First, the scheme uses an adaptive pixel circular shift operation, adapted to the local properties of the image, to embed the visible watermark into the two-level (one-bit) nonparametric quantization levels of AMBTC according to the parity of the bit plane of the AMBTC triple. The watermark signal can then be extracted according to the parity of the bit plane. Experimental results show that the algorithm achieves high visual quality of stego-images and recovers the original BTC-compressed image losslessly. Moreover, it is robust against common signal-processing attacks. Owing to the simplicity of AMBTC and the resulting low time consumption, the visible watermarking algorithm can be applied to the copyright protection of digital images in real-time environments.
