Similar Documents
20 similar documents retrieved (search time: 109 ms)
1.
High-Performance Arithmetic Coding for Small Alphabets   (Total citations: 1; self-citations: 0; citations by others: 1)
薛晓辉  高文 《计算机学报》1997,20(11):974-981
Building on an improved arithmetic coding scheme, this paper proposes a high-performance arithmetic coding algorithm suited to small alphabets. Both the coding component and the modeling component are specially designed for the small-alphabet case. In the coding component, the improved arithmetic coder is further reworked into a multiplication-free arithmetic coder; analysis shows that its redundancy is no greater than the most recent result of Printz et al., with coding efficiency approaching 100%. In the modeling component, a fast algorithm for an adaptive high-order statistical model is proposed. Experimental results show that the algorithm achieves efficient, fast compression of small-alphabet sources.
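For context on this entry: the core of any arithmetic coder is interval narrowing driven by a symbol model. The sketch below is a minimal exact-arithmetic version with an adaptive order-0 count model, using Python's `Fraction` rather than the paper's multiplication-free integer coder; all names are illustrative.

```python
from fractions import Fraction

def intervals(counts):
    """Map each symbol to its cumulative-probability sub-interval of [0, 1)."""
    total = sum(counts.values())
    low, out = 0, {}
    for s in sorted(counts):
        out[s] = (Fraction(low, total), Fraction(low + counts[s], total))
        low += counts[s]
    return out

def encode(msg, alphabet):
    counts = {s: 1 for s in alphabet}      # adaptive model: start uniform
    lo, hi = Fraction(0), Fraction(1)
    for s in msg:
        a, b = intervals(counts)[s]
        lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b   # narrow the interval
        counts[s] += 1                     # update the model after each symbol
    return (lo + hi) / 2                   # any rational inside the final interval

def decode(code, n, alphabet):
    counts = {s: 1 for s in alphabet}      # decoder mirrors the encoder's model
    lo, hi = Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        x = (code - lo) / (hi - lo)
        for s, (a, b) in intervals(counts).items():
            if a <= x < b:
                out.append(s)
                lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
                counts[s] += 1
                break
    return "".join(out)
```

Because encoder and decoder update identical count tables, the decoder retraces the same interval-narrowing steps exactly.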

2.
Dynamics in the service environment degrade the performance of fault-diagnosis algorithms. To reduce this effect, the dynamics of the service environment are analyzed and a multi-layer management model is proposed for modeling the service system: a bipartite Bayesian network captures the dependency model, and a binary symmetric channel models the noise. For the dynamic fault-set environment created by automatic fault-repair mechanisms, the prior fault probabilities within the current window are corrected using fault-duration statistics; for the dynamic model environment, an expected model is built from the original model and the symptom-observation times within the current window. Simulation results show that the algorithm effectively diagnoses Internet-service faults in dynamic environments.

3.
Research on Simulation Modeling of Fluid Catalytic Cracking Production   (Total citations: 1; self-citations: 0; citations by others: 1)
This paper summarizes special techniques for the mathematical modeling of the FCC reactor-regenerator system. Since the sources of feed oil at the Luoyang Petrochemical plant are relatively fixed, the approach combines mechanistic models supplemented by empirical statistical models, and steady-state models supplemented by transient dynamic compensation. Six practical techniques for determining model parameters are described, such as how to obtain valid data, how to fix parameters by comparing mechanistic-model results against statistical-model results, and long-term experimental calibration of model parameters. The resulting models offer good steady-state and dynamic accuracy, their computation speed meets the requirements of fast simulation, and they have produced good economic returns in practice.

4.
A Keyed Subliminal-Channel Communication Algorithm   (Total citations: 3; self-citations: 0; citations by others: 3)
李顺东  覃征 《计算机学报》2003,26(1):125-128
A new method for establishing a keyed subliminal channel is proposed; channels built with this method require little computation and offer relatively wide bandwidth. There are two main results. First, it is proved that languages over different alphabets have the same capacity to express information. Second, using this fact, a keyed subliminal-channel communication model is constructed by hiding information in transformations between languages over different alphabets. Practical subliminal channels can be built by instantiating the model, and the instantiation process is exactly the process of fixing a concrete key. This provides rigorous theory and practical techniques for building usable subliminal channels, with application prospects in covert communication.

5.
Adaptive Model Research on Dynamic Vehicle Positioning with GPS   (Total citations: 7; self-citations: 0; citations by others: 7)
A GPS dynamic positioning system model is proposed and applied to a vehicle navigation and positioning system, with clear benefits. GPS errors are treated as a Markov process, and the "current" statistical model for maneuvering-carrier motion is adopted to build a filtering model and an adaptive Kalman filtering algorithm for dynamic vehicle navigation and positioning with GPS. Simulation results show that with the proposed strong-tracking dynamic positioning model and algorithm, both the accuracy and the practicality of the vehicle navigation system improve markedly over the previous design.
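The filtering step behind positioning models of this kind can be illustrated with a scalar Kalman filter — a much-simplified stand-in for the paper's "current" statistical model and adaptive filter; `q` and `r` here are assumed process- and measurement-noise variances, not values from the paper.

```python
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: x_k = x_{k-1} + w, z_k = x_k + v."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: variance grows by process noise
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # correct with the innovation z - x
        p = (1.0 - k) * p          # posterior variance shrinks
        estimates.append(x)
    return estimates, p
```

With repeated measurements of a fixed position, the estimate converges to that position and the posterior variance shrinks toward zero.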

6.
Dynamic inner partial least squares (DiPLS) is a data-driven dynamic extension of projection to latent structures, used for dynamic feature extraction and key-performance-indicator prediction. In large equipment systems, the samples collected by sensors at the current time are influenced by historical samples and may contain substantial noise. In dynamic feature extraction, because DiPLS does not extract principal components in descending order, considerable variation remains in the residual space, dynamic and static information are hard to separate effectively, and fault-detection performance suffers. This paper therefore proposes a fault-detection method based on dynamic inner total projection to latent structures (DiTPLS). First, a dynamic model is built with dynamic inner PLS and a vector autoregressive model to detect faults, capturing quality-related dynamic information; then an improved dynamic latent-variable model based on structured dynamic principal component analysis decomposes the residuals, extracting quality-irrelevant dynamic information and static information, and suitable statistics are constructed for fault detection. Numerical simulations and the Tennessee Eastman process experiment verify the effectiveness of DiTPLS.

7.
Sensor Fault Detection Based on Dynamic Principal Component Analysis   (Total citations: 2; self-citations: 0; citations by others: 2)
A sensor fault-detection method based on dynamic principal component analysis is proposed. Using the data matrix from the previous t time steps together with the current time step, a multivariate, multi-time autoregressive statistical model is built; the principal-component data matrix is computed and a dynamic PCA model established. Taking the measurement period of the slowest sensor as the unified sampling period, and four consecutive sampling periods as one diagnosis period, a dynamic three-dimensional measurement matrix is constructed, and a squared prediction error, exponentially weighted moving average (SPE-EWMA) model of the residuals is used to detect sensor faults. Under the assumption that only sensor faults occur, several typical gradual and abrupt faults during an engine start-up were simulated; the results show that the algorithm tracks the detection indices in real time and accurately identifies the faulty sensors.
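The SPE-EWMA monitoring statistic mentioned in this entry can be sketched as a generic EWMA over squared prediction errors with a control limit; the smoothing factor `lam` and the limit below are illustrative choices, not the paper's tuned values.

```python
def spe_ewma(spe_values, lam=0.2, limit=None):
    """EWMA-smoothed squared prediction errors; optionally flag limit crossings."""
    s = spe_values[0]
    smoothed = [s]
    for v in spe_values[1:]:
        s = lam * v + (1.0 - lam) * s   # exponential smoothing of the SPE
        smoothed.append(s)
    if limit is None:
        return smoothed
    return smoothed, [v > limit for v in smoothed]
```

Smoothing suppresses single-sample noise spikes, so only a sustained rise in prediction error (a real sensor fault) drives the statistic over the limit.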

8.
The computer control system for blast-furnace stock-house screening predicts the belt-scale weighing values and builds a dynamic statistical prediction model to control the vibrating feeders; a blast-furnace charging simulation program was also developed, shortening on-site commissioning time.

9.
高全胜  洪炳熔 《软件学报》2007,18(9):2356-2364
Using motion-capture data, a statistical model of virtual-human motion is learned in order to create realistic, controllable virtual-human motion. The proposed method clusters the original motion data to extract local dynamic motion features, called dynamic textures, which are described with linear dynamical systems; those systems with clear semantics are selectively annotated to build an annotated dynamic-texture graph. With this statistical model, realistic and controllable virtual-human motion can be generated. Results show that the method produces smooth, natural human motion in interactive environments.

10.
杨占栋  解梅 《计算机工程》2011,37(24):150-151
In face recognition, factors such as illumination, expression, and pose greatly increase the time and space complexity of the computation. To address this, a new statistical appearance model for images is proposed: the gray-level co-occurrence matrix (GLCM) is introduced into the active shape model, and a semi-active appearance model is built by computing the GLCM of shape-aligned images. Experiments on the ORL face database show that, compared with the active appearance model, the proposed model achieves higher recognition accuracy and runs faster.

11.
Order-0 Dynamic-Alphabet Arithmetic Coding of Chinese Text   (Total citations: 1; self-citations: 1; citations by others: 0)
This paper investigates how to construct an order-0 statistical model for Chinese text and proposes a highly effective Chinese text-compression algorithm. Even with this most elementary model, the coding efficiency on Chinese text already exceeds that of hybrid LZ-plus-Huffman coding. Since the order-0 statistical model underlies statistical models of every higher order, this work is an important reference for text-compression research on Chinese and other large-character-set scripts, such as Japanese and Korean.

12.
Hung‐Yan Gu 《Software》2005,35(11):1027-1039
In this paper, a large‐alphabet‐oriented scheme is proposed for both Chinese and English text compression. Our scheme parses Chinese text with the alphabet defined by Big‐5 code, and parses English text with some rules designed here. Thus, the alphabet used for English is not a word alphabet. After a token is parsed out from the input text, zero‐, first‐, and second‐order Markov models are used to estimate the occurrence probabilities of this token. Then, the probabilities estimated are blended and accumulated in order to perform arithmetic coding. To implement arithmetic coding under a large alphabet and probability‐blending condition, a way to partition count‐value range is studied. Our scheme has been programmed and can be executed as a software package. Then, typical Chinese and English text files are compressed to study the influences of alphabet size and prediction order. On average, our compression scheme can reduce a text file's size to 33.9% for Chinese and to 23.3% for English text. These rates are comparable with or better than those obtained by popular data compression packages. Copyright © 2005 John Wiley & Sons, Ltd.

13.
Luis Rueda 《Information Sciences》2006,176(12):1656-1683
Adaptive coding techniques have been increasingly used in lossless data compression. They are suitable for a wide range of applications, in which on-line compression is required, including communications, internet, e-mail, and e-commerce. In this paper, we present an adaptive Fano coding method applicable to binary and multi-symbol code alphabets. We introduce the corresponding partitioning procedure that deals with consecutive partitionings, and that possesses, what we have called, the nearly-equal-probability property, i.e. that satisfy the principles of Fano coding. To determine the optimal partitioning, we propose a brute-force algorithm that searches the entire space of all possible partitionings. We show that this algorithm operates in polynomial-time complexity on the size of the input alphabet, where the degree of the polynomial is given by the size of the output alphabet. As opposed to this, we also propose a greedy algorithm that quickly finds a sub-optimal, but accurate, consecutive partitioning. The empirical results on real-life benchmark data files demonstrate that our scheme compresses and decompresses faster than adaptive Huffman coding, while consuming less memory resources.
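The nearly-equal-probability partitioning that Fano coding relies on can be sketched for a binary output alphabet: at each step, split the symbol list at the cut that makes the two halves' total weights as equal as possible. This is a static greedy illustration, not the paper's adaptive multi-symbol method.

```python
def fano_codes(weights):
    """Binary Fano codes: recursively split symbols into halves of nearly equal weight."""
    items = sorted(weights.items(), key=lambda kv: -kv[1])
    codes = {}

    def split(seg, prefix):
        if len(seg) == 1:
            codes[seg[0][0]] = prefix or "0"
            return
        total = sum(w for _, w in seg)
        run, cut, best = 0, 1, None
        for i in range(1, len(seg)):
            run += seg[i - 1][1]
            diff = abs(total - 2 * run)   # imbalance if we cut before position i
            if best is None or diff < best:
                best, cut = diff, i
        split(seg[:cut], prefix + "0")    # left half gets bit 0
        split(seg[cut:], prefix + "1")    # right half gets bit 1

    split(items, "")
    return codes
```

Because every split assigns disjoint bit prefixes, the resulting code is prefix-free by construction.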

14.
Binary wavelet transform (BWT) has several distinct advantages over the real wavelet transform (RWT), such as the conservation of alphabet size of wavelet coefficients, no quantization introduced during the transform and the simple Boolean operations involved. Thus, fewer coding passes are needed and no sign bits are required in the compression of transformed coefficients. However, the use of BWT for embedded grayscale image compression is not well established. This paper proposes a novel context-based binary wavelet transform coding approach (CBWTC) that combines the BWT with a high-order context-based arithmetic coding scheme for embedded compression of grayscale images. In our CBWTC algorithm, BWT is applied to decorrelate the linear correlations among image coefficients without expanding the alphabet size of symbols. To match the CBWTC algorithm, we employ the gray code representation (GCR) to remove the statistical dependencies among bi-level bitplane images and develop a combined arithmetic coding scheme. In the proposed combined arithmetic coding scheme, three highpass BWT coefficients at the same location are combined to form an octave symbol and then encoded with a ternary arithmetic coder. In this way, the compression performance of our CBWTC algorithm is improved in that it not only alleviates the degradation of predictability caused by the BWT, but also eliminates the correlation of BWT coefficients in the same level subbands. The conditional context of the CBWTC is properly modeled by exploiting the characteristics of the BWT as well as taking advantage of non-causal adaptive context modeling. Experimental results show that the average coding performance of the CBWTC is superior to that of state-of-the-art grayscale image coders, and always outperforms the JBIG2 algorithm and other BWT-based binary coding techniques for a set of test images with different characteristics and resolutions.
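The gray code representation (GCR) used above to decorrelate bitplanes is the standard binary-reflected Gray code, under which consecutive integers differ in exactly one bit. A minimal sketch of the forward and inverse mappings:

```python
def to_gray(n):
    """Binary-reflected Gray code of a non-negative integer n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray mapping by XOR-folding the bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

The one-bit-change property is what reduces statistical dependence between adjacent bitplanes: a small change in pixel intensity flips only a single bitplane value.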

15.
A prediction-model construction method based on local minimum entropy is proposed; it distinguishes the different probability distributions of the bits to be coded more effectively and thereby compresses wavelet coefficients efficiently. First, prediction coefficients are selected according to the correlations among wavelet coefficients, and a correlation prediction function is constructed to combine the predictions of multiple coefficients. Then, with entropy minimization as the criterion, a stepwise-selection procedure merges the classes produced by the prediction function, establishing a locally optimal predictive classification model. Combined with entropy coding, this yields efficient compression of wavelet coefficients. Experimental results show that, compared with the JPEG2000 image-compression standard, the proposed method improves both the subjective and the objective quality of the reconstructed images, with objective quality higher by 0.4 dB on average.

16.
For the case of a set of equally probable words to be encoded by a coding alphabet in which each new symbol is more costly than the last, it is clear that the average word cost (equivalent to the total in this case) of an exhaustive prefix code varies with the subset chosen from the possible alphabet. The present paper establishes the nature of the variation and shows that the average word length is non-decreasing up to a point, and non-increasing beyond it, thus making any search for a best alphabet simple. The result is established first for an alphabet with costs {1,2,3,…}, which is important in information retrieval applications, then for arbitrary but strictly increasing costs, and for arbitrary non-decreasing costs.

17.
Linear Control and Prediction of Fractal Coding Time   (Total citations: 1; self-citations: 0; citations by others: 1)
Taking the Brownian (fractal) dimension of the image as a texture feature, the image blocks involved in coding are clustered and sorted, giving precise control over the number of domain blocks compared for each range block. Further, by excluding flat blocks and building the domain-block pool from a mean image, a fast fractal coding method is obtained in which coding time can be linearly controlled and predicted through the number of domain-block comparisons. Experiments show that, compared with existing classification and clustering methods, this method achieves better speed-up and decoding quality at the same compression ratio.

18.
Research Progress in Distributed Video Coding and Decoding   (Total citations: 3; self-citations: 0; citations by others: 3)
In video compression, distributed coding is a recently emerged paradigm, built on the information theory established in the 1970s by Slepian and Wolf and by Wyner and Ziv. Compared with conventional coding, distributed video coding is entirely new in both principle and implementation. After introducing the basic principles of distributed coding, this paper surveys the latest research progress in each component of distributed video coding and discusses future directions.

19.
Embedded zerotree wavelet coding based on the wavelet transform is a simple, effective image-coding algorithm. By analyzing the algorithm's characteristics, this paper improves the lengthy zerotree-construction step to shorten coding time; and, to address the abrupt jumps seen when images are displayed on current GIS visualization platforms, a progressive network transmission scheme is designed that exploits characteristics of human vision.

20.
The Huffman algorithm allows for constructing optimal prefix‐codes with O(n·log n) complexity. As the number of symbols n grows, so does the complexity of building the code‐words. In this paper, a new algorithm and implementation are proposed that achieve nearly optimal coding without sorting the probabilities or building a tree of codes. The complexity is proportional to the maximum code length, making the algorithm especially attractive for large alphabets. The focus is put on achieving almost optimal coding with a fast implementation, suitable for real‐time compression of large volumes of data. A practical case example about checkpoint file compression is presented, providing encouraging results. Copyright © 2015 John Wiley & Sons, Ltd.
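In the spirit of this entry, one standard way to avoid an explicit code tree is canonical code assignment: given only the code length of each symbol, code-words are derived by counting. The sketch below assumes the supplied lengths satisfy the Kraft inequality; it illustrates the tree-free idea, not the paper's exact algorithm.

```python
def canonical_codes(lengths):
    """Assign canonical prefix code-words from code lengths alone (no tree)."""
    syms = sorted(lengths, key=lambda s: (lengths[s], s))  # by length, then symbol
    code, prev_len, out = 0, 0, {}
    for s in syms:
        code <<= lengths[s] - prev_len        # extend the running code to the new length
        out[s] = format(code, "0{}b".format(lengths[s]))
        code += 1                             # next code-word at this length
        prev_len = lengths[s]
    return out
```

Because codes at each length are consecutive integers, a decoder needs only the per-length counts and first codes, which is what makes the representation attractive for large alphabets.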

