20 similar documents found (search time: 15 ms)
1.
Jia Li, Xiao Liu, Yubin Zhang, Yu Hu, Xiaowei Li, Qiang Xu 《Integration, the VLSI Journal》2011,44(3):205-216
Ever-increasing test data volume and excessive test power are two of the main concerns of VLSI testing. The "don't-care" bits (also known as X-bits) in a given test cube can be exploited for test data compression and/or test power reduction, but these techniques may conflict with each other because the very same X-bits are likely to be used for different optimization objectives. This paper proposes a capture-power-aware test compression scheme that keeps capture power under a safe limit with little loss in test compression ratio. Experimental results on benchmark circuits validate the effectiveness of the proposed solution.
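The X-filling idea that such techniques compete over can be illustrated with a short sketch. The snippet below shows plain adjacent fill, a common low-power filling heuristic; it is a minimal illustration of the general trade-off, not the paper's capture-power-aware algorithm, and the `adjacent_fill`/`transitions` helper names are hypothetical.

```python
# Minimal sketch: adjacent fill of don't-care (X) bits in a test cube.
# Copying the nearest preceding specified bit into each X reduces bit
# transitions, a common proxy for scan power. Illustrative only; the
# paper's capture-power-aware scheme balances this against compression.

def adjacent_fill(cube: str) -> str:
    """Fill each 'X' with the nearest preceding specified bit ('0' if none)."""
    filled, last = [], '0'
    for bit in cube:
        if bit == 'X':
            filled.append(last)
        else:
            filled.append(bit)
            last = bit
    return ''.join(filled)

def transitions(bits: str) -> int:
    """Count adjacent bit transitions (proxy for scan shift power)."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

cube = "1XX0XXX1XX0"
filled = adjacent_fill(cube)
print(filled, transitions(filled))   # 11100001110 3
```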
2.
Test data compression using alternating variable run-length code
Bo Ye, Qian Zhao, Duo Zhou, Xiaohua Wang, Min Luo 《Integration, the VLSI Journal》2011,44(2):103-110
This paper presents a unified test data compression approach that simultaneously reduces test data volume, scan power consumption and test application time for a system-on-a-chip (SoC). The proposed approach is based on alternating variable run-length (AVR) codes for test data compression. A formal analysis of scan power consumption and test application time is presented. The analysis shows that a careful mapping of the don't-cares in pre-computed test sets to 1s and 0s leads to significant savings in peak and average power consumption, without requiring slower scan clocks. The proposed technique also reduces testing time compared to a conventional scan-based scheme. Alternating variable run-length codes can efficiently compress data streams composed of runs of both 0s and 1s. The decompression architecture is also presented. Experimental results for ISCAS'89 benchmark circuits and a production circuit show that the proposed approach greatly reduces test data volume and scan power consumption in all cases.
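To make the alternating-run idea concrete, here is a minimal sketch in which a bitstream is reduced to the lengths of its alternating 0/1 runs and each length is written with an Elias gamma code. The gamma code is a stand-in assumption; the paper defines its own AVR code table.

```python
# Minimal sketch of the alternating-run idea behind AVR-style codes:
# since runs of 0s and 1s strictly alternate, only the first symbol
# and the run lengths need to be transmitted.

from itertools import groupby

def gamma(n: int) -> str:
    """Elias gamma code for n >= 1 (stand-in for the AVR code table)."""
    b = bin(n)[2:]
    return '0' * (len(b) - 1) + b

def avr_like_encode(bits: str) -> str:
    runs = [(sym, len(list(g))) for sym, g in groupby(bits)]
    # The leading bit tells the decoder the symbol of the first run;
    # after that the runs alternate, so only lengths follow.
    return runs[0][0] + ''.join(gamma(length) for _, length in runs)

data = "0000001111100000000111"
print(avr_like_encode(data))
```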
3.
This paper proposes a new high-capacity reversible data hiding scheme for encrypted images. The content owner first divides the cover image into blocks. Then, block permutation and a bitwise stream cipher are applied to encrypt the image. Upon receiving the encrypted image, the data hider analyzes the image blocks and adaptively decides an optimal block-type labeling strategy. Based on the adaptive block encoding, the image is compressed to vacate spare room, and the secret data are encrypted and embedded into the spare space. According to the granted authority, the receiver can restore the cover image, extract the secret data, or do both. Experimental results show that the embedding capacity of the proposed scheme outperforms state-of-the-art schemes. In addition, the security level and robustness of the proposed scheme are also investigated.
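A minimal sketch of the encryption step (block division, key-driven block permutation, then a bitwise stream cipher) is given below. The 4×4 block size and the use of Python's `random` module as the keystream source are illustrative assumptions, not the paper's cipher.

```python
# Minimal sketch of block-permutation + stream-cipher image encryption.
# A real scheme would use a cryptographic keystream; random.Random is
# used here only to keep the sketch self-contained and repeatable.

import random

def encrypt(img, block=4, key=2024):
    h, w = len(img), len(img[0])          # assumes h, w multiples of block
    rng = random.Random(key)
    # Collect blocks in raster order.
    blocks = [[img[y + dy][x + dx] for dy in range(block) for dx in range(block)]
              for y in range(0, h, block) for x in range(0, w, block)]
    rng.shuffle(blocks)                                 # block permutation
    flat = [p for b in blocks for p in b]
    return [p ^ rng.randrange(256) for p in flat]       # bitwise stream cipher

img = [[(x * 7 + y * 13) % 256 for x in range(8)] for y in range(8)]
print(encrypt(img)[:8])
```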
4.
The paper proposes a new test data compression scheme for testing embedded cores with multiple scan chains. The new scheme allows broadcasting identical test data to several scan chains whenever the cells at the same depth are compatible for the test pattern currently being applied. It thus efficiently exploits the compatibility of scan cells among the scan chain segments, increases the amount of test data delivered in broadcast mode, and effectively reduces both test data volume and test application time. It requires neither a complex compression algorithm nor costly hardware. Experimental results demonstrate the efficiency and versatility of the proposed method.
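The compatibility test at the heart of broadcast mode is simple to state in code: two scan-chain segments can share one broadcast value as long as no bit position pits a specified 0 against a specified 1. The sketch below illustrates only that rule; `compatible` and `merge` are hypothetical helper names.

```python
# Minimal sketch of scan-segment compatibility for broadcast-mode test:
# 'X' (don't-care) matches anything, so only a specified 0 facing a
# specified 1 breaks compatibility.

def compatible(seg_a: str, seg_b: str) -> bool:
    return all(a == b or 'X' in (a, b) for a, b in zip(seg_a, seg_b))

def merge(seg_a: str, seg_b: str) -> str:
    """Broadcast value serving both segments (assumes compatibility)."""
    return ''.join(b if a == 'X' else a for a, b in zip(seg_a, seg_b))

print(compatible("1X0X", "1XX1"), merge("1X0X", "1XX1"))  # True 1X01
```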
5.
As portable consumer electronics see ever wider use, power consumption has gradually become the dominant concern in video codec design. In particular, the latest coding standard H.264/AVC adopts a variety of new, advanced compression strategies: the encoder achieves higher compression efficiency, but these new features force the H.264/AVC decoder to perform a large number of external memory accesses. Memory access bandwidth therefore becomes a key factor in overall system cost; for example, battery-powered consumer devices playing high-definition video must deliver better and longer video playback at lower power. This work proposes an adjustable reference frame compression algorithm targeting the memory bandwidth problem in video decoding systems, reducing the power consumption of the video decoder by lowering the bandwidth of external memory accesses.
6.
Shan Xiaohong, Bi Guangguo 《Journal of Electronics (China)》2007,24(1):54-59
This paper explores the potential of using accurate but outdated channel estimates for adaptive modulation. The work is novel in that the analysis is conditioned on block-by-block adaptation. First, we define a new quantity, the Tolerable Average Use Delay (TAUD), which indicates the ability of an adaptation scheme to tolerate delay in the channel estimation results. We find that for variable-power schemes, TAUD is a constant that depends on the target Bit Error Rate (BER), average power and Doppler frequency, while for constant-power schemes it depends on the adaptation block length as well. Finally, we investigate the relation between the delay-tolerating performance and the spectral efficiency and give a system design criterion: the delay-tolerating performance is improved at the price of a lower data rate.
7.
S. Sivanantham, M. Padmavathy, Ganga Gopakumar, P.S. Mallick, J. Raja Paul Perinbam 《Integration, the VLSI Journal》2014
In this paper, we present two multistage compression techniques to reduce the test data volume in scan test applications. We propose two encoding schemes, namely alternating frequency-directed equal-run-length (AFDER) coding and run-length-based Huffman coding (RLHC). Together with the nine-coded compression technique, these encoding schemes enhance the test data compression ratio. In the first stage, the pre-generated test cubes with unspecified bits are encoded using the nine-coded compression scheme. The proposed encoding schemes then exploit the properties of the compressed data to enhance compression further. This multistage compression is especially effective when the percentage of don't-cares in a test set is very high. We also present a simple decoder architecture to recover the original data. Experimental results on ISCAS'89 benchmark circuits confirm average compression ratios of 74.2% and 77.5% with the proposed 9C-AFDER and 9C-RLHC schemes respectively.
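The generic two-stage flavor of run-length-plus-Huffman coding can be sketched briefly: extract run lengths, then Huffman-code the run-length symbols. This is an assumption-level illustration of the idea, not the paper's AFDER or RLHC code tables.

```python
# Minimal sketch of run-length + Huffman coding in the spirit of RLHC:
# the bitstream is reduced to run lengths, and the run-length symbols
# are Huffman-coded according to their frequencies.

import heapq
from collections import Counter
from itertools import groupby

def huffman_table(symbols):
    # Heap entries carry a unique index so ties never compare tuples.
    heap = [(n, i, (s,)) for i, (s, n) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    code = {s: '' for s in set(symbols)}
    while len(heap) > 1:
        n1, _, g1 = heapq.heappop(heap)
        n2, i2, g2 = heapq.heappop(heap)
        for s in g1:
            code[s] = '0' + code[s]
        for s in g2:
            code[s] = '1' + code[s]
        heapq.heappush(heap, (n1 + n2, i2, g1 + g2))
    return code

bits = "000011110000001111000000"
runs = [len(list(g)) for _, g in groupby(bits)]
table = huffman_table(runs)
print(''.join(table[r] for r in runs))
```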
8.
Zhirong Gao, Chengyi Xiong, Lixin Ding, Cheng Zhou 《Journal of Visual Communication and Image Representation》2013,24(7):885-894
The emerging compressive sensing (CS) theory points to a promising way of developing novel, efficient data compression techniques, even though it was originally proposed to achieve dimension-reduced sampling and thereby save sampling cost. However, the non-adaptive projection representation of natural images in the conventional CS (CCS) framework can lead to inefficient compression performance compared with classical image compression standards such as JPEG and JPEG 2000. In this paper, two simple methods are investigated for block CS (BCS) with a discrete cosine transform (DCT) based image representation for compression applications. One is called coefficient random permutation (CRP); the other is termed adaptive sampling (AS). The CRP method balances the sparsity of the sampled vectors in the DCT domain of the image and thereby improves CS sampling efficiency. AS designs an adaptive measurement matrix based on the energy distribution characteristics of the image in the DCT domain, which is effective in enhancing CS performance. Experimental results demonstrate that the proposed methods reduce the dimension of the BCS-based image representation and/or improve the recovered image quality. The proposed BCS-based image representation scheme could be an efficient alternative for encrypted image compression and/or robust image compression applications.
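As a baseline for what the CRP and AS refinements improve on, the sketch below shows plain block compressed sensing: each flattened block x is reduced to y = Φx with M < N random Gaussian measurements. The block size, subrate, and Gaussian Φ are illustrative assumptions.

```python
# Minimal sketch of block compressed sensing (BCS): dimension-reduced
# sampling of each image block with a shared random measurement matrix.
# Reconstruction and the paper's CRP/AS refinements are not shown.

import numpy as np

rng = np.random.default_rng(0)
B = 8                       # block side, so N = 64 samples per block
M = 16                      # measurements per block (subrate 0.25)
Phi = rng.standard_normal((M, B * B)) / np.sqrt(M)

image = rng.integers(0, 256, size=(32, 32)).astype(float)
measurements = []
for y0 in range(0, 32, B):
    for x0 in range(0, 32, B):
        x = image[y0:y0 + B, x0:x0 + B].reshape(-1)
        measurements.append(Phi @ x)        # dimension-reduced sampling

print(len(measurements), measurements[0].shape)   # 16 blocks, (16,) each
```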
9.
A new run-length-based test data compression scheme, equal-run-length coding (ERLC), is presented. It encodes both runs of 0s and runs of 1s and exploits the relationship between two consecutive runs: when two consecutive runs have equal length, the whole second run is represented by a shorter codeword. A scheme for filling the don't-care bits is proposed to maximize the number of consecutive equal-length runs. Compared with other known schemes, the proposed scheme achieves a higher compression ratio with low area overhead. The merits of the proposed algorithm are experimentally verified on the larger ISCAS'89 benchmark circuits.
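The equal-run shortcut can be sketched as follows: a run repeating the previous run's length is replaced by a one-bit marker, while any other run is sent as an escape bit plus an explicit length (Elias gamma here as a stand-in for the paper's code table).

```python
# Minimal sketch of the equal-run idea in ERLC: a run whose length
# equals the previous run's length costs a single marker bit.

from itertools import groupby

def gamma(n: int) -> str:
    b = bin(n)[2:]
    return '0' * (len(b) - 1) + b

def erlc_like(bits: str) -> str:
    out, prev = [bits[0]], None          # leading bit fixes run polarity
    for _, g in groupby(bits):
        n = len(list(g))
        if n == prev:
            out.append('1')              # short "same length again" marker
        else:
            out.append('0' + gamma(n))   # escape + explicit length
        prev = n
    return ''.join(out)

print(erlc_like("000111000111001"))
```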
10.
In this paper, we propose an entropy minimization histogram mergence (EMHM) scheme that can significantly reduce the number of grayscales with nonzero pixel populations (GSNPP) without visible loss of image quality. We prove in theory that the entropy of an image is reduced after histogram mergence and that the reduction in entropy is maximized by our EMHM. The reduction in image entropy benefits entropy encoding, since the minimum average codeword length per source symbol is the entropy of the source signal according to Shannon's first theorem. Extensive experimental results show that our EMHM can reduce the code length of entropy coding, such as Huffman, Shannon, and arithmetic coding, by over 20% while preserving subjective and objective image quality very well. Moreover, the performance of some classic lossy image compression techniques, such as JPEG, JPEG2000, and Better Portable Graphics (BPG), can be improved by preprocessing images with our EMHM.
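The core observation, that merging two occupied graylevels can only lower Shannon entropy, is easy to verify numerically. The sketch below greedily merges the neighboring occupied pair that gives the largest entropy drop; it is a toy illustration of the principle, not the paper's EMHM optimization.

```python
# Minimal numeric check of the mergence principle: merging two occupied
# graylevels into one bin lowers Shannon entropy (entropy is concave).

import math
from collections import Counter

def entropy(hist, total):
    return -sum(n / total * math.log2(n / total) for n in hist.values())

pixels = [0, 0, 1, 1, 1, 2, 3, 3, 5, 5, 5, 5]
hist, total = Counter(pixels), len(pixels)
print("entropy before:", round(entropy(hist, total), 4))

def merge_change(a, b):
    """Entropy change (always <= 0) from merging occupied levels a and b."""
    pa, pb = hist[a] / total, hist[b] / total
    merged = -(pa + pb) * math.log2(pa + pb)
    return merged - (-pa * math.log2(pa) - pb * math.log2(pb))

# Greedily merge the neighboring occupied pair with the largest drop.
levels = sorted(hist)
a, b = min(zip(levels, levels[1:]), key=lambda p: merge_change(*p))
hist[a] += hist.pop(b)
print("merged", (a, b), "-> entropy:", round(entropy(hist, total), 4))
```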
11.
This paper presents an SoC test data compression method based on an improved grouping of the FDR code. A simple preprocessing of the don't-care bits in the original test set raises the frequency with which specified 0s appear in runs. Building on the FDR code, the grouping scheme is then modified; theoretical analysis shows that its compression ratio is slightly higher than that of the original FDR code, especially for short runs. Software implementations of both encoding methods were written in C, and the experimental results confirm the effectiveness and high compression ratio of the improved-grouping FDR coding method.
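For reference, here is a sketch of the classic FDR (frequency-directed run-length) code that the improved grouping starts from: a run of 0s of length L falls in group k (covering lengths 2^k − 2 through 2^(k+1) − 3), and its codeword is a prefix of k−1 ones plus a 0, followed by a k-bit offset. The paper's modified grouping is not reproduced.

```python
# Minimal sketch of the classic FDR code for a run of 0s of length L.

def fdr_codeword(run_length: int) -> str:
    k, start = 1, 0
    while run_length >= start + (1 << k):   # find the group containing L
        start += 1 << k
        k += 1
    prefix = '1' * (k - 1) + '0'
    tail = format(run_length - start, f'0{k}b')
    return prefix + tail

for L in (0, 1, 2, 5, 6, 13):
    print(L, fdr_codeword(L))
# 0 -> 00, 1 -> 01, 2 -> 1000, 5 -> 1011, 6 -> 110000, 13 -> 110111
```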
12.
《Journal of Visual Communication and Image Representation》2014,25(2):454-465
Reversible data hiding is a method that not only embeds secret data but also reconstructs the original cover image without distortion after the confidential data are extracted. In this paper, we propose a novel reversible data hiding scheme that can embed a high capacity of secret bits and recover the image after data extraction. The proposed scheme builds on the locally adaptive coding scheme (LAC) used in Chang and Nguyen's scheme and on the SMVQ scheme. Experimental results show that the compression rate of our proposed scheme is 0.33 bpp on average. To embed secret bits, we propose a normal-hiding scheme and an over-hiding scheme with average embedding rates of 2.01 bpi and 3.01 bpi, higher than that of Chang and Nguyen's scheme (1.36 bpi). The normal-hiding and over-hiding schemes also have high embedding efficiencies of 0.28 and 0.36 on average, better than those of Chang and Kieu's scheme (0.12) and Chang and Nguyen's schemes (0.18 and 0.16).
13.
In this paper, we demonstrate that results from the field of adaptive self-organizing data structures can be effectively used to enhance compression schemes. Unlike adaptive lists, which have already been used in compression, adaptive self-organizing trees have, to the best of our knowledge, not been used in this regard. To achieve this, we introduce a new data structure, the partitioning binary search tree (PBST), which, although based on the well-known binary search tree (BST), also appropriately partitions the data elements into mutually exclusive sets. When used in conjunction with Fano encoding, the PBST leads to the so-called Fano binary search tree (FBST), which incorporates the required Fano coding (nearly equal probability) property into the BST. We demonstrate how both the PBST and the FBST can be maintained adaptively and in a self-organizing manner. The updating procedure that converts a PBST into an FBST, and the corresponding new tree-based operators, namely the shift-to-left and shift-to-right operators, are explicitly presented. The encoding and decoding procedures, which also update the FBST, have been implemented and rigorously tested. Our empirical results on the files of the well-known benchmarks, the Calgary and Canterbury Corpora, show that adaptive Fano coding using FBSTs, Huffman coding, and greedy adaptive Fano coding achieve similar compression ratios. However, in terms of encoding/decoding speed, the new scheme is much faster than the latter two in the encoding phase, and all three achieve approximately the same speed in the decoding phase. We believe that the same philosophy, namely using an adaptive self-organizing BST to maintain the frequencies, can also be utilized for other data encoding mechanisms, just as the Fenwick scheme has been used in arithmetic coding.
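The static Fano property that the FBST maintains adaptively (recursively splitting the symbol set into two halves of nearly equal total weight) can be sketched as below; this is textbook Shannon-Fano coding, not the paper's PBST/FBST maintenance.

```python
# Minimal sketch of static Fano coding: symbols sorted by weight are
# recursively split into two nearly equal-weight halves, prepending
# 0/1 on each side. The adaptive PBST/FBST machinery is not shown.

def fano(items):
    """items: list of (symbol, weight) sorted by descending weight."""
    if len(items) == 1:
        return {items[0][0]: ''}
    total = sum(w for _, w in items)
    split, best, left = 1, None, 0
    for i in range(1, len(items)):
        left += items[i - 1][1]
        diff = abs(total - 2 * left)      # imbalance of this split point
        if best is None or diff < best:
            best, split = diff, i
    codes = {s: '0' + c for s, c in fano(items[:split]).items()}
    codes.update({s: '1' + c for s, c in fano(items[split:]).items()})
    return codes

weights = sorted({'a': 15, 'b': 7, 'c': 6, 'd': 6, 'e': 5}.items(),
                 key=lambda kv: -kv[1])
print(fano(weights))  # {'a': '00', 'b': '01', 'c': '10', 'd': '110', 'e': '111'}
```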
14.
S. Alighale 《International Journal of Electronics》2013,100(10):1458-1466
The P4 polyphase code is well known in pulse compression. For a P4 code of length 1000, the peak sidelobe level (PSL) and integrated sidelobe level (ISL) are −36 dB and −16 dB, respectively. Various sidelobe reduction techniques exist to improve this performance. This paper presents a novel sidelobe reduction technique that reduces the PSL and ISL to −127 dB and −104 dB, respectively. Other sidelobe reduction techniques, such as the Woo filter, are also investigated and compared with the proposed technique. Simulations show that the proposed technique produces better PSL and ISL than the other techniques.
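To ground the quoted baseline figures, here is a minimal sketch that generates a P4 code of length N (phase φ_n = πn²/N − πn, zero-indexed) and measures its autocorrelation sidelobes; the PSL/ISL definitions used are the common radar ones, and the paper's reduction filter is not reproduced.

```python
# Minimal sketch: P4 polyphase code generation and matched-filter
# (autocorrelation) sidelobe measurement.

import numpy as np

N = 1000
n = np.arange(N)
phase = np.pi * n**2 / N - np.pi * n          # P4 phase sequence
s = np.exp(1j * phase)

acf = np.correlate(s, s, mode='full')         # matched-filter output
mag2 = np.abs(acf) ** 2
peak = mag2[N - 1]                            # mainlobe at zero lag
side = np.delete(mag2, N - 1)
psl = 10 * np.log10(side.max() / peak)        # peak sidelobe level
isl = 10 * np.log10(side.sum() / peak)        # integrated sidelobe level
print(f"PSL = {psl:.1f} dB, ISL = {isl:.1f} dB")
```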
15.
Today, smart cities represent an effective digital platform for facilitating our lives by shifting all stakeholders toward more sustainable behavior. Consequently, the field of smart cities has become an increasingly important research area. A smart city comprises a huge number of hybrid networks, each containing an enormous number of nodes that transmit massive amounts of data, giving rise to many network problems such as delay and loss of connectivity. Decreasing the amount of transmitted data is a great challenge. This paper presents a data overhead reduction scheme (DORS) for heterogeneous networks in smart city environments that comprises five different methods: median, nonlinear least squares, compression, data merging, and prioritization. Each method is applied according to the current quality-of-service status. To measure the performance of the proposed model, a simulation environment for a smart city is constructed using the NS2 network simulation package. The obtained results indicate that DORS can decrease the size of transmitted data in the simulated smart city environment while attaining notable performance enhancements in terms of data reduction rate, end-to-end delay, packet loss ratio, throughput, and energy consumption ratio.
16.
To increase the capacity of reversible data hiding while maintaining image quality, a new reversible data hiding algorithm based on difference expansion is proposed. By shifting the difference histogram, the method moves part of the differences lying outside a specific interval, determined by a threshold T, into that interval, and marks only the differences belonging to that interval in the location map. This not only increases the total amount of embeddable information but also yields a location map far smaller than those of existing methods, an effect that is especially pronounced when the threshold T is small. Experiments show that, while keeping image distortion low, the algorithm achieves a larger embedding capacity than other algorithms.
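The difference-expansion primitive underlying the scheme can be sketched in a few lines (classic Tian-style expansion of a pixel pair's difference to carry one bit); the histogram shifting and location-map handling described above are omitted.

```python
# Minimal sketch of classic difference expansion on a pixel pair:
# the difference h is expanded to h' = 2h + b to carry one secret bit,
# and the pair is rebuilt around the unchanged average l.

def embed(x: int, y: int, b: int):
    l, h = (x + y) // 2, x - y
    h2 = 2 * h + b                       # expanded difference carries b
    return l + (h2 + 1) // 2, l - h2 // 2

def extract(x2: int, y2: int):
    l, h2 = (x2 + y2) // 2, x2 - y2
    b, h = h2 & 1, h2 >> 1               # recover bit and original difference
    return b, l + (h + 1) // 2, l - h // 2

x2, y2 = embed(206, 201, 1)
print((x2, y2), extract(x2, y2))         # (209, 198) (1, 206, 201)
```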
17.
《Journal of Visual Communication and Image Representation》2014,25(6):1425-1431
In this paper, we present an efficient histogram shifting (HS) based reversible data hiding scheme for copyright protection of multimedia. First, an improved HS-based multi-layer embedding process for rhombus prediction is employed, introducing a control parameter to exploit the correlation of prediction errors. A rate-distortion model for HS embedding is then developed for optimal side information selection, which is especially suitable for low-payload reversible data hiding when only a single embedding layer is required. Finally, a modified location map is constructed to facilitate location map compression and further increase the embedding capacity. Experimental results demonstrate the superior performance of the proposed scheme over similar schemes in terms of embedding capacity and stego-image quality.
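A minimal sketch of rhombus prediction with one-sided histogram-shift embedding is given below: each pixel in the "cross" set is predicted from its four neighbors in the untouched "dot" set, zero prediction errors carry one bit, and positive errors shift up to stay decodable. The control parameter, rate-distortion model, and location map of the paper are omitted.

```python
# Minimal sketch of rhombus prediction + one-sided histogram shifting.
# Only "cross" pixels ((x + y) even) are modified; their four rhombus
# neighbors sit in the untouched "dot" set, so a decoder can recompute
# the same predictions. Border pixels are skipped for simplicity.

def rhombus_embed(img, bits):
    out = [row[:] for row in img]
    it = iter(bits)
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            if (x + y) % 2:
                continue                         # dot set stays untouched
            pred = (img[y-1][x] + img[y+1][x]
                    + img[y][x-1] + img[y][x+1]) // 4
            e = img[y][x] - pred
            if e == 0:
                out[y][x] = pred + next(it, 0)   # bin 0 carries one bit
            elif e >= 1:
                out[y][x] = img[y][x] + 1        # shift keeps bins separable
    return out

img = [[10, 10, 10, 10],
       [10, 10, 11, 10],
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
print(rhombus_embed(img, [1, 0, 1]))
```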
18.
A fast progressive image transmission scheme using block truncation coding by pattern fitting
Bibhas Chandra Dhara, Bhabatosh Chanda 《Journal of Visual Communication and Image Representation》2012,23(2):313-322
In this paper, we propose a novel progressive image transmission scheme that uses the BTC-PF concept for faster decoding. Images are decomposed into a number of blocks based on a smoothness criterion; smooth blocks are encoded by their block means, and the others by the BTC-PF method. To encode a block by the BTC-PF method, the codebook is organized like a full-search progressive transmission tree, which greatly aids efficient progressive transmission. The method provides good image quality at low bit rates and faster decoding than other spatial-domain progressive transmission methods. We also extend the method to color images: each color plane is encoded separately, and the encoded information of the planes is transmitted in an interleaved manner so that color images are obtained right from the early stages.
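The underlying block truncation coding primitive is easy to sketch; the AMBTC variant is shown here as an assumption, since the pattern-fitting codebook and progressive tree of the paper are not reproduced. Each block reduces to two reconstruction levels plus a bit plane.

```python
# Minimal sketch of block truncation coding (AMBTC variant): a block
# is summarized by a low mean a, a high mean b, and a bit plane that
# tells the decoder which level each pixel takes.

def btc_block(block):
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    hi = [p for p in flat if p >= mean]
    lo = [p for p in flat if p < mean]
    a = round(sum(lo) / len(lo)) if lo else round(mean)
    b = round(sum(hi) / len(hi)) if hi else round(mean)
    bitplane = [[int(p >= mean) for p in row] for row in block]
    return a, b, bitplane                  # decoder maps 0 -> a, 1 -> b

block = [[12, 200, 198, 16],
         [10, 205, 190, 12],
         [14, 190, 188, 10],
         [11, 199, 201, 15]]
print(btc_block(block))
```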
19.
In this work we explore a hybrid deep learning architecture for recognizing tampering in videos. The hybrid architecture learns features from authentic videos in order to identify tampered portions of forged videos. The process begins by compressing the input video using a discrete cosine transform (DCT) based double compression approach. A filtering step then improves the quality of each compressed frame using bilateral filtering, after which a modified segmentation approach partitions the frames into regions. Features are extracted from these segmented portions with the Gabor wavelet transform (GWT) and fed into a hybrid DNN-AGSO (deep neural network with Adaptive Galactic Swarm Optimization). Three datasets, VTD, MFC-18, and VIRAT, are used to evaluate the overall performance on the MATLAB platform; the recognition rates achieved are 96%, 95.2%, and 93.47%, respectively.
20.
In this paper, a new high-performance reversible data hiding method for vector quantization (VQ) indices is proposed. The codebook is first sorted using a unidirectional static distance-order technique to improve the correlation among neighboring indices. The two-dimensional structure of the image and the high correlation among neighboring blocks are used to update the self-organized list L in the improved locally adaptive coding scheme (ILAS). A new embedding rule, based on the complexity of the region in which the current block is located and the position of the current block's index in the list L, is then proposed to obtain better embedding capacity. The experimental results demonstrate that the proposed method outperforms related data hiding methods in terms of compression rate, embedding capacity and embedding rate.
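Locally adaptive coding, the list-based primitive that ILAS refines, is essentially move-to-front coding over the index stream; a minimal sketch follows (not the paper's ILAS update rule or embedding logic).

```python
# Minimal sketch of locally adaptive coding (LAC) over VQ indices:
# each index is coded by its current position in a self-organizing
# list and then moved to the front, so recently used (correlated)
# indices receive small position codes.

def lac_encode(indices, codebook_size):
    lst = list(range(codebook_size))
    out = []
    for idx in indices:
        pos = lst.index(idx)
        out.append(pos)                  # small for recently seen indices
        lst.insert(0, lst.pop(pos))      # move-to-front update
    return out

print(lac_encode([5, 5, 5, 7, 5, 7, 7, 2], 16))
# [5, 0, 0, 7, 1, 1, 0, 4]
```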