20 similar documents found; search took 0 ms
1.
For today’s very-large-scale integrated circuits, test data volume is recognized as a major contributor to the cost of manufacturing testing. Test data compression addresses this problem by reducing the test data volume without affecting overall system performance. This paper proposes a new test data compression technique using selective sparse storage. Test sets are partitioned into four kinds of blocks of uniform length: all-0 blocks, all-1 blocks, sparse blocks, and characterless blocks. Blocks are encoded according to their frequency of occurrence: as 0, as 10, as 110 + the number of sparse bits + the locations of all the sparse bits, and as 111 + the block itself, respectively. Two algorithms are proposed for selecting the sparse blocks from test sets. A theoretical analysis of the selective sparse storage shows that the new compression technique outperforms conventional test data compression approaches. Experimental results illustrate the flexibility and efficiency of the new method, consistent with the theoretical analysis.
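The four-way block code described above can be sketched in a few lines. The widths chosen for the count and position fields, and the assumption that the "sparse bits" of a block are its 1s, are illustrative choices rather than details taken from the paper:

```python
import math

def encode_blocks(bits, block_len=8, sparse_max=2):
    """Sketch of the four-way block code: '0' for all-0 blocks,
    '10' for all-1 blocks, '110' + count + positions for sparse
    blocks, and '111' + the raw block for characterless blocks."""
    cnt_w = max(1, math.ceil(math.log2(sparse_max + 1)))  # width of the count field
    pos_w = max(1, math.ceil(math.log2(block_len)))       # width of each position field
    out = []
    for i in range(0, len(bits), block_len):
        blk = bits[i:i + block_len]
        ones = [j for j, b in enumerate(blk) if b == '1']
        if not ones:                        # all-0 block
            out.append('0')
        elif len(ones) == len(blk):         # all-1 block
            out.append('10')
        elif len(ones) <= sparse_max:       # sparse block: few 1s, store their positions
            out.append('110' + format(len(ones), f'0{cnt_w}b')
                       + ''.join(format(p, f'0{pos_w}b') for p in ones))
        else:                               # characterless block: stored verbatim
            out.append('111' + blk)
    return ''.join(out)
```

With 8-bit blocks, an all-0 block followed by an all-1 block compresses from 16 bits to the 3-bit string `010`, while a block with a single 1 costs 3 + 2 + 3 = 8 bits under these field widths.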
2.
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2008, 16(11): 1429-1440
3.
Fractal Interpolation for Image Magnification and Compression Coding
This paper discusses the random fractal interpolation method and its applications to image magnification and image compression coding. Experimental results show that fractal interpolation achieves good results in both image magnification and compression coding.
4.
ECG Data Compression Using Fourier Descriptors
5.
A Data Compression Method Based on the Orthogonal Wavelet Packet Transform
This paper applies the best wavelet packet basis transform to data compression. Scalar and vector quantization schemes are designed for the differing statistical characteristics of the wavelet packet coefficients in different frequency bands. Applying the proposed method to a set of real measurement data yields a high compression ratio at low distortion.
6.
An Approximate Repeat Vector (ARV) model is proposed to describe redundant segments in DNA sequences. By incorporating bioinformatic characteristics of the data into compression preprocessing and constructing the coding codebook from ARV vectors, an asymmetric DNA sequence compression algorithm, BioLZMA-2, is presented. The algorithm employs CLIPSO-MA, a Memetic improvement of particle swarm optimization, for intelligent optimization of the compression codebook, which effectively improves coding performance. Experimental results on standard test sequences show that BioLZMA-2 achieves higher compression ratios than existing DNA sequence compression methods.
7.
8.
SAR Raw Data Compression Based on the Wavelet Packet Transform
This paper reviews the advantages of the wavelet packet transform and trellis-coded quantization and, taking the characteristics of synthetic aperture radar (SAR) raw data into account, proposes a SAR raw data compression algorithm based on the two. The performance of the algorithm is analyzed on both simulated and real data, and comparisons with other raw data compression algorithms demonstrate its effectiveness.
9.
Test data compression is an efficient methodology for reducing the large test data volume of system-on-a-chip designs. In this paper, a variable-to-variable-length compression method based on encoding runs of compatible patterns is presented. The test set is divided into a number of sequences, each constituted by a series of compatible patterns, in which information such as the pattern length and the number of pattern runs is encoded. Theoretical analyses trace the evolution of the proposed Multi-Dimensional Pattern Run-Length Compression (MD-PRC) from one-dimensional to three-dimensional PRC. To demonstrate the effectiveness of the method, experiments are conducted on both the larger ISCAS’89 benchmarks and industrial circuits with a large number of don’t-cares. Results show that the method achieves significant compression of the test data volume and adapts well to industrial-size circuits.
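A one-dimensional sketch of the runs-of-compatible-patterns idea, where `x` marks a don't-care bit: two patterns are compatible when every bit position agrees or is a don't-care in at least one of them, and a run merges its patterns by letting specified bits override don't-cares. The function names and merging rule here are illustrative, not the paper's exact formulation:

```python
def compatible(a, b):
    """Two equal-length patterns are compatible if each bit pair
    agrees or contains a don't-care ('x')."""
    return all(p == q or 'x' in (p, q) for p, q in zip(a, b))

def pattern_runs(bits, plen):
    """Split the test data into fixed-length patterns and collapse
    consecutive compatible patterns into (merged_pattern, run_length)
    pairs -- a 1-D pattern run-length sketch."""
    pats = [bits[i:i + plen] for i in range(0, len(bits), plen)]
    runs = []
    cur, n = pats[0], 1
    for p in pats[1:]:
        if compatible(cur, p):
            # merge: a specified bit wins over a don't-care
            cur = ''.join(b if a == 'x' else a for a, b in zip(cur, p))
            n += 1
        else:
            runs.append((cur, n))
            cur, n = p, 1
    runs.append((cur, n))
    return runs
```

For example, the string `0101xx01` with 2-bit patterns collapses to a single run of four patterns compatible with `01`; the longer the runs, the fewer codewords the encoder must emit.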
10.
Test data volume has increased multi-fold due to the need for quality assurance of the various parts of a circuit design at the deep-submicron level. Huge memory is required to store this enormous test data, which increases not only the cost of the ATE but also the test application time. This paper presents an optimal selective count-compatible pattern run-length (OSCCPRL) encoding scheme for achieving maximum compression and reducing the test cost. OSCCPRL is a hybrid technique that amalgamates the benefits of two other techniques: 10-coded run length (10C) and the selective CCPRL (SCCPRL) proposed here, which improve on the 9C and CCPRL techniques, respectively. In OSCCPRL, the entire data set is segmented into blocks and further compressed using inter-block and intra-block merging. SCCPRL encodes the compatible blocks, while 10C encodes at the sub-block (half-block) level. If no compatibility is found at the block/sub-block level, the unique pattern is retained as-is in the encoded data along with the necessary categorization bits. The decompression architecture is described, and it is shown how, with the addition of just a few FSM states, better test data compression can be achieved than with previous schemes. Simulation results for various ISCAS benchmark circuits show that the proposed OSCCPRL technique provides an average compression efficiency of around 80%.
11.
To improve frame-memory compression performance, this paper proposes a lossy frame-memory compression algorithm based on directional interpolation prediction and variable-length coding (DIPVLC). Reference pixels are first obtained by adaptive texture-direction interpolation to produce prediction residuals; the residuals are then quantized under an optimized rate-distortion model; finally, the quantized residuals are variable-length coded with a run-length Golomb algorithm. Experimental results show that, compared with the content-aware adaptive quantization (CAAQ) frame-memory compression algorithm, the proposed algorithm not only loses less PSNR but also improves the compression ratio by 10.05% while reducing coding time by 10.62%.
12.
13.
The recently developed technique of arithmetic coding, in conjunction with a Markov model of the source, is a powerful method of data compression in situations where a linear treatment is inappropriate. Adaptive coding allows the model to be constructed dynamically by both encoder and decoder during the course of the transmission, and has been shown to incur a smaller coding overhead than explicit transmission of the model's statistics. But there is a basic conflict between the desire to use high-order Markov models and the need to have them formed quickly as the initial part of the message is sent. This paper describes how the conflict can be resolved with partial string matching, and reports experimental results showing that mixed-case English text can be coded in as little as 2.2 bits/character with no prior knowledge of the source.
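The context-fallback idea behind prediction by partial matching can be sketched without the arithmetic coder: try the longest context first, and on each miss multiply in an escape probability and drop to a shorter context, ending at a uniform order-(-1) model. The escape estimate below follows the spirit of PPM's method A (one count reserved for escape); the function is an illustrative simplification, not the paper's algorithm:

```python
from collections import Counter

def ppm_predict(history, symbol, max_order=2, alphabet=None):
    """Estimate P(symbol | history) by falling back through shorter
    contexts, charging an escape probability at each miss."""
    if alphabet is None:
        alphabet = sorted(set(history)) or [symbol]
    p = 1.0
    for order in range(max_order, -1, -1):
        ctx = history[len(history) - order:] if order else ''
        # count the symbols that followed this context in the history
        counts = Counter(history[i + order]
                         for i in range(len(history) - order)
                         if history[i:i + order] == ctx)
        total = sum(counts.values())
        if total == 0:
            continue                       # unseen context: fall through (simplified)
        if symbol in counts:
            return p * counts[symbol] / (total + 1)  # +1 reserves escape mass
        p *= 1.0 / (total + 1)                       # pay the escape, shorten context
    return p / len(alphabet)               # order -1: uniform over the alphabet
```

After the history `abab`, the order-2 context `ab` has seen only `a`, so `ppm_predict('abab', 'a')` returns 1/2, while a novel symbol escapes through every order and lands on the uniform model. (The per-symbol estimates of this sketch need not sum exactly to one.)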
14.
15.
In practical applications, we often have to deal with high-order data; for example, a grayscale image and a video clip are intrinsically a 2nd-order tensor and a 3rd-order tensor, respectively. To handle such high-order data, conventional methods vectorize it in advance, which often destroys the intrinsic structure of the data and invites the curse of dimensionality. For this reason, we consider the problem of high-order data representation and classification and propose a tensor-based Fisher discriminant analysis (FDA), a generalized version of FDA named GFDA. Experimental results show that GFDA outperforms existing methods, such as 2-directional 2-dimensional principal component analysis ((2D)2PCA), 2-directional 2-dimensional linear discriminant analysis ((2D)2LDA), and multilinear discriminant analysis (MDA), in high-order data classification under a lower compression ratio.
16.
17.
Fast Vector-Quantization Compression of 3-D Spectral Image Data Based on Spectral-Feature Coding
Using vector quantization, the spectrum corresponding to each pixel in the three-dimensional spectral image data of an imaging spectrometer is defined as a vector. Each spectral vector is encoded with a binary coding method based on spectral features, and the encoded vectors are used for fast codeword matching. This 3-D spectral image compression method not only greatly speeds up processing (30 times faster than conventional vector quantization with a 256-codeword codebook, and 43 times faster with 4096 codewords) but also improves the fidelity of the reconstructed image. The vector construction method defined here not only preserves the spectral characteristics effectively, but also …
18.
On Using Exponential-Golomb Codes and Subexponential Codes for System-on-a-Chip Test Data Compression
We examine how exponential-Golomb codes and subexponential codes can be used for the compression of scan test data in core-based system-on-a-chip (SOC) designs. These codes are well known in the data compression domain, but their application to SOC testing has not been explored before. We show that these codes often provide slightly higher compression than alternative methods that have been proposed recently.
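For reference, the order-k exponential-Golomb codeword of a non-negative integer n is the binary form of n + 2^k, preceded by enough zeros to make the code self-delimiting (one zero per bit beyond the first k+1). A minimal sketch, with a function name of my own choosing:

```python
def exp_golomb(n, k=0):
    """Order-k exponential-Golomb codeword for an integer n >= 0:
    write n + 2**k in binary, then prepend one '0' for each bit
    of that body beyond the first k + 1."""
    body = format(n + (1 << k), 'b')
    return '0' * (len(body) - 1 - k) + body
```

In a scan-test setting such codes typically encode run lengths of identical bits, so short runs get short codewords: for order 0 the lengths 0, 1, 2, 3 map to `1`, `010`, `011`, `00100`.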
19.
One-dimensional fractal interpolation image coding fits the gray-level curve of a digital image with a fractal curve constructed from interpolation-point data, thereby achieving compression. Decoding amounts to computing the attractor of the iterated function system (IFS) constructed from the interpolation-point data. Because of the particular nature of image data and of the fractal-interpolation iteration, neither the random iteration algorithm nor the usual fixed iteration algorithm is applicable. This paper designs a fast, memory-efficient decoding algorithm and analyzes its complexity. As a component of the fractal interpolation method, the algorithm can equally be used in other application areas of fractal interpolation.