Similar Articles
20 similar articles found (search time: 15 ms)
1.
To overcome the limitations of automatic test equipment (ATE), test data compression/decompression schemes have become an increasingly important issue in system-on-chip (SoC) testing. To alleviate the limitations of previous works, a new hybrid test data compression/decompression scheme for SoCs is developed. The new scheme is based on analyzing the factors that influence two test parameters: compression ratio and hardware overhead. To improve the compression ratio, the proposed scheme, called Modified Input reduction and CompRessing One block (MICRO), uses modified input reduction, one-block compression, a novel mapping, and reordering algorithms. Unlike previous approaches, the proposed scheme compresses the original test data and decompresses the compressed data without a cyclic scan register architecture, and therefore achieves a high compression ratio with low hardware overhead. Experimental results on the ISCAS '89 and ITC '99 benchmark circuits demonstrate the efficiency of the new method.

2.
The transmitter/receiver system for bandwidth or data-rate compression of television signals described herein is a prototype model of the experimental system of Cherry et al. [1]. The system is suitable for both black-and-white and half-tone pictures under realistic noise conditions. The system parameters may be adjusted so that an optimum run-length encoding can be found; the great advantages of run-length quantizing are shown, especially with regard to practical instrumentation, leading to the use of buffer stores of modest capacity. One particularly cheap form of receiver operates on a quantized-variable-velocity principle and, being much simpler and cheaper than the transmitter, is suitable for use in situations requiring many receivers.
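Run-length quantizing of this kind can be illustrated with a minimal sketch; the 3-bit grey-level quantization and the cap on run lengths are illustrative assumptions, not parameters from the paper.

```python
# Minimal run-length encoder/decoder sketch for a quantized scan line.
# The 3-bit quantization and the 255-run cap are illustrative assumptions.

def quantize(samples, levels=8):
    """Quantize samples in [0, 1) to a small number of grey levels."""
    return [min(int(s * levels), levels - 1) for s in samples]

def rle_encode(line):
    """Encode a scan line as (value, run_length) pairs."""
    runs = []
    for v in line:
        if runs and runs[-1][0] == v and runs[-1][1] < 255:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

line = quantize([0.1, 0.12, 0.11, 0.8, 0.81, 0.8, 0.8, 0.3])
runs = rle_encode(line)
assert rle_decode(runs) == line
print(runs)  # long runs of equal grey level compress well
```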

3.
Image compression with a hybrid wavelet-fractal coder
A hybrid wavelet-fractal coder (WFC) for image compression is proposed. The WFC uses fractal contractive mapping to predict the wavelet coefficients of the higher resolution from those of the lower resolution and then encodes the prediction residue with a bitplane wavelet coder. Fractal prediction is adaptively applied only to regions where the rate saving it offers justifies its overhead. A rate-distortion criterion is derived to evaluate the fractal rate saving and is used to select the optimal fractal parameter set for the WFC. The superior performance of the WFC is demonstrated with extensive experimental results.
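The region-adaptive decision can be sketched as a rate-distortion comparison; the Lagrangian cost proxy, the parameter-bit count, and all names below are illustrative assumptions, not the WFC's actual criterion (distortion is taken as equal under both modes for simplicity).

```python
import numpy as np

# Sketch of the rate-distortion mode decision: fractal prediction is kept
# for a region only when the cost of (residue bits + parameter bits) beats
# coding the coefficients directly. The cost proxies are assumptions.

def rate_proxy(coeffs, q=1.0):
    """Crude bit-cost proxy: log2 magnitude of quantized coefficients."""
    return float(np.sum(np.log2(1.0 + np.abs(coeffs) / q)))

def choose_mode(fine, predicted, param_bits=24.0, lam=0.5):
    residue = fine - predicted
    cost_plain = lam * rate_proxy(fine)            # code coefficients as-is
    cost_frac = lam * (rate_proxy(residue) + param_bits)
    return "fractal" if cost_frac < cost_plain else "plain"

rng = np.random.default_rng(0)
fine = rng.normal(0, 4, (8, 8))
good_pred = fine + rng.normal(0, 0.2, (8, 8))      # prediction that helps
bad_pred = rng.normal(0, 4, (8, 8))                # prediction that doesn't
print(choose_mode(fine, good_pred), choose_mode(fine, bad_pred))
```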

4.
In this work we study a routing scheme combined with an end-to-end rerouting procedure. We focus in particular on a new rerouting strategy called Shared Robust Rerouting (ShRR). This strategy combines three other restoration techniques, namely path diversity, end-to-end rerouting with stub release, and global rerouting, in order to achieve cost-effectiveness. Computational results on the bandwidth overhead required by the proposed scheme are provided, as well as a comparison with some conventional restoration schemes.

5.
A multiple-transmit-antenna system based on hybrid beamforming and space-time coding technologies is examined. The reduction in the required transmitted energy achievable with the hybrid scheme is quantified for any given outage capacity. We show that although a pure space-time coding configuration is superior asymptotically (i.e., for extremely low outage requirements), a hybrid beamforming/space-time coding configuration can be a more effective solution for modest outage requirements. This may provide a useful design guideline for wireless systems, especially for the downlink, where a multiple-transmit-antenna scheme is feasible.

6.
In this paper, a new wavelet-based hybrid electrocardiogram (ECG) data compression technique is proposed. First, in order to fully exploit both the intra-beat and inter-beat correlations of heartbeat signals, the 1-D ECG data are segmented and aligned into a 2-D data array. Second, a 2-D wavelet transform is applied to the constructed 2-D data array. Third, the set partitioning in hierarchical trees (SPIHT) method and the vector quantization (VQ) method are modified according to the individual characteristics of the different coefficient subbands and the similarity between subbands. Finally, a hybrid compression method combining the modified SPIHT and VQ is applied to the wavelet coefficients. Records selected from the MIT/BIH arrhythmia database are tested. The experimental results show that the proposed method is suitable for various morphologies of ECG data and achieves a high compression ratio while preserving the characteristic features well.
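The 1-D to 2-D construction step might look like the following sketch; the synthetic signal and the fixed-threshold peak picking are assumptions standing in for a real QRS detector.

```python
import numpy as np

# Sketch of the 1-D -> 2-D construction: cut the ECG at beat peaks and stack
# the beats, zero-padded to equal length, into a 2-D array so a 2-D wavelet
# transform can exploit both intra-beat and inter-beat correlation.

def detect_r_peaks(sig, thresh=0.8):
    """Toy R-peak detector: local maxima above a threshold."""
    return [i for i in range(1, len(sig) - 1)
            if sig[i] > thresh and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]]

def beats_to_2d(sig, peaks):
    """Stack beat segments (peak to peak) into rows, padded to max length."""
    segs = [sig[a:b] for a, b in zip(peaks[:-1], peaks[1:])]
    width = max(len(s) for s in segs)
    return np.array([np.pad(s, (0, width - len(s))) for s in segs])

t = np.arange(2000)
ecg = np.sin(2 * np.pi * t / 200) ** 63        # crude periodic "heartbeats"
peaks = detect_r_peaks(ecg)
grid = beats_to_2d(ecg, peaks)
print(grid.shape)  # (beats, samples per beat): rows are highly correlated
```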

7.
The prediction by partial matching (PPM) data compression algorithm developed by J. Cleary and I. Witten (1984) is capable of very high compression rates, encoding English text in as little as 2.2 b/character. It is shown that Cleary and Witten's estimates of the resources required to implement the scheme can be revised to allow a tractable and useful implementation. In particular, a variant is described that encodes and decodes at over 4 kB/s on a small workstation and operates within a few hundred kilobytes of data space, yet still obtains compression of about 2.4 b/character for English text.
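A minimal sketch of PPM-style context modeling follows; it produces probability estimates only (the arithmetic coder that would consume them is omitted), and the escape estimate is merely method-A-like, not necessarily what Cleary and Witten used.

```python
from collections import defaultdict

# Minimal sketch of PPM-style context modeling: per-context symbol counts
# with escape to shorter contexts. Order and escape rule are illustrative.

class PPMModel:
    def __init__(self, order=2):
        self.order = order
        self.counts = defaultdict(lambda: defaultdict(int))  # ctx -> sym -> n

    def update(self, text):
        for i, sym in enumerate(text):
            for k in range(self.order + 1):
                if i >= k:
                    self.counts[text[i - k:i]][sym] += 1

    def prob(self, context, sym):
        """Blend estimates from longest matching context down to order 0."""
        p_escape = 1.0
        for k in range(min(self.order, len(context)), -1, -1):
            ctx = context[len(context) - k:]
            table = self.counts.get(ctx)
            if not table:
                continue
            total = sum(table.values())
            if sym in table:
                return p_escape * table[sym] / (total + 1)
            p_escape *= 1.0 / (total + 1)      # escape (method-A-like)
        return p_escape / 256                  # uniform order -1 fallback

model = PPMModel(order=2)
model.update("the theory of the thing")
print(model.prob("th", "e"))   # high: "the" is frequent in the training text
print(model.prob("th", "z"))   # low: falls through escapes to order -1
```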

8.
Li Bo, Lin Chuang, Samuel T. Chanson. Wireless Networks, 1998, 4(4): 279-290
In this paper, we propose and analyze the performance of a new handoff scheme, called the hybrid cutoff priority scheme, for wireless networks carrying multimedia traffic. The unique characteristics of this scheme include support for N classes of traffic, each of which may have different QoS requirements in terms of the number of channels needed, the holding time of the connection, and the cutoff priority. The proposed scheme can handle finite buffering for both new calls and handoffs. Furthermore, we take into consideration the departure of new calls due to caller impatience and the dropping of queued handoff calls due to unavailability of channels during the handoff period. The performance indices adopted in the evaluation using the Stochastic Petri Net (SPN) model include new-call and handoff blocking probabilities, call forced-termination probability, and channel utilization for each type of traffic. The impact of various system parameters, such as queue length, traffic input, and the QoS of different traffic, on these performance measures has also been studied.

9.
An analysis is presented of the selective-repeat type II hybrid ARQ (automatic repeat request) scheme using convolutional coding and exploiting code combining. With code combining, at successive decoding attempts for a data packet, the decoder for error correction operates on a combination of all received sequences for that packet rather than only on the two most recent ones, as in the conventional type II hybrid ARQ scheme. It is shown by means of analysis and computer simulations that with code combining, significant throughput is achievable even at very high channel error rates.
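The code-combining idea can be sketched with soft combining of repeated noisy copies; BPSK over an additive-noise channel and sign detection stand in for the convolutional code and its decoder, and all parameters are illustrative.

```python
import numpy as np

# Sketch of code combining: every received copy of a packet is retained and
# decoding operates on the accumulated soft values of all copies, so
# reliability grows with each retransmission.

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 64)
tx = 1.0 - 2.0 * bits                  # BPSK: 0 -> +1, 1 -> -1

accumulated = np.zeros_like(tx)
for attempt in range(1, 5):
    rx = tx + rng.normal(0, 2.0, tx.shape)      # very noisy channel
    accumulated += rx                           # combine with all past copies
    decoded = (accumulated < 0).astype(int)
    errors = int(np.sum(decoded != bits))
    print(f"attempt {attempt}: bit errors after combining = {errors}")
# Errors tend to fall with each attempt: noise averages out while the
# signal component adds coherently across retransmissions.
```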

10.
A hybrid channel access scheme is proposed for ultra-shortwave radio access networks. Drawing on advanced concepts from civilian access technologies and taking into account the special service requirements of military frequency-hopping radio networks, the scheme adopts a hybrid channel access strategy combining static TDMA, dynamic TDMA, and frequency-division multiple access (FDMA). It provides QoS guarantees for multi-user, multi-service transmission and meets the special transmission requirements of military services.
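A hybrid static/dynamic TDMA frame over FDMA channels might be organized as in the sketch below; the frame sizes and the first-come first-served grant policy are illustrative assumptions, not the scheme in the paper.

```python
# Sketch of a hybrid static/dynamic TDMA frame on top of FDMA channels:
# each frequency channel carries a frame whose first slots are statically
# owned (predictable delay for high-priority traffic) while the rest are
# granted per frame on demand.

STATIC_SLOTS = 4      # fixed assignments, one owner each
DYNAMIC_SLOTS = 6     # granted per frame, first-come first-served

def build_frame(static_owners, requests):
    """Return slot -> user for one frame on one FDMA channel."""
    frame = {s: static_owners[s] for s in range(STATIC_SLOTS)}
    free = list(range(STATIC_SLOTS, STATIC_SLOTS + DYNAMIC_SLOTS))
    for user in requests:                # grant dynamic slots in demand order
        if not free:
            break                        # overload: remaining requests wait
        frame[free.pop(0)] = user
    return frame

channels = {"f1": build_frame(["A", "B", "C", "D"], ["E", "F", "E"]),
            "f2": build_frame(["G", "H", "I", "J"], ["K"])}
for freq, frame in channels.items():
    print(freq, frame)
```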

11.
12.
Two-band analysis-synthesis filters, or wavelet filters, are used pervasively for compressing natural images. Both FIR and IIR filters have been studied in this context, the former being the more popular. In this paper, we examine the compression performance of these two-band filters in a dyadic wavelet decomposition and attempt to isolate the features that contribute most directly to the performance gain. Then, employing the general exact reconstruction condition, hybrid FIR-IIR analysis-synthesis filters are designed to maximize compression performance for natural images. Experimental results are presented that compare performance with the popular biorthogonal filters in terms of peak SNR, subjective quality, and computational complexity.
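The exact reconstruction condition the paper builds on can be verified in the simplest two-band case (orthogonal Haar filters); the paper's hybrid FIR-IIR designs satisfy the same general condition with longer filters.

```python
import numpy as np

# Sketch of the two-band structure: split a signal into downsampled lowpass
# and highpass bands, then synthesize; with Haar filters reconstruction is
# exact, which is the property the designed filter pairs must preserve.

def analyze(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # lowpass band, downsampled
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # highpass band, downsampled
    return a, d

def synthesize(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.random.default_rng(6).normal(size=16)
a, d = analyze(x)
print(np.allclose(synthesize(a, d), x))     # True: exact reconstruction
```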

13.
This paper presents an efficient compression algorithm for animated three-dimensional (3D) meshes. First, a segmentation approach is applied to perform motion estimation. The main idea is to exploit the temporal coherence of the geometry component by using heat diffusion properties. The motion of the resulting regions is accurately described by 3D affine transforms, computed at the first frame to match the subsequent ones. Second, in order to achieve good compression performance, an efficient rate-control mechanism is proposed to quantize the temporal prediction errors. At this stage, a rate-distortion model is used to quantize the residual information. Comparative coding tests on irregular 3D mesh sequences were conducted to evaluate the coding efficiency of the proposed compression scheme. Simulation results show very promising performance.
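Fitting a 3-D affine transform to a region by least squares, then keeping only the transform plus quantizable residuals, can be sketched as follows; the synthetic vertex data is an assumption.

```python
import numpy as np

# Sketch of the motion-description step: for each segmented region, fit a
# 3-D affine transform (A, t) mapping the region's vertices in the first
# frame to a later frame, then keep (A, t) plus the small residuals.

def fit_affine(src, dst):
    """Least-squares A (3x3) and t (3,) with dst ~ src @ A.T + t."""
    ones = np.ones((src.shape[0], 1))
    M, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)
    return M[:3].T, M[3]

rng = np.random.default_rng(2)
src = rng.normal(size=(50, 3))                 # region vertices, frame 0
A_true = np.array([[0.9, -0.1, 0.0],
                   [0.1,  0.9, 0.0],
                   [0.0,  0.0, 1.1]])
dst = src @ A_true.T + np.array([0.5, 0.0, -0.2]) + rng.normal(0, 0.01, (50, 3))

A, t = fit_affine(src, dst)
residual = dst - (src @ A.T + t)               # what the coder quantizes
print(np.abs(residual).max())                  # small: motion is well captured
```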

14.
Presents a video coding approach that requires a very low bit rate and achieves good visual quality. The approach allows easy and cheap hardware implementation. Intra- and interframe correlations are fully exploited through a spatio-temporal interpolation applied to a nonuniform 3D grid.

15.
Xue X., Fan C. Electronics Letters, 1993, 29(10): 839-841
An address-predicted vector quantiser (APVQ) is proposed for image coding, which exploits intervector correlation by means of prediction. Compared with a general vector quantiser, the APVQ can obtain a higher compression ratio while keeping the same picture quality.
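Address prediction can be sketched as follows; the random codebook and the rule "predict the left neighbour's address" are illustrative assumptions about how the prediction might work.

```python
import numpy as np

# Sketch of address prediction in VQ: predict the current block's codebook
# address from the already-coded left neighbour and spend a cheap flag when
# the prediction hits, a full address only when it misses.

rng = np.random.default_rng(3)
codebook = rng.normal(size=(256, 16))          # 256 codewords, 4x4 blocks

def nearest(block):
    return int(np.argmin(np.sum((codebook - block) ** 2, axis=1)))

def encode_row(blocks):
    """Yield ('hit',) or ('miss', address) tokens for one row of blocks."""
    prev = None
    for b in blocks:
        addr = nearest(b)
        yield ("hit",) if addr == prev else ("miss", addr)
        prev = addr

row = [codebook[7] + rng.normal(0, 0.01, 16) for _ in range(6)]  # smooth area
tokens = list(encode_row(row))
print(tokens)  # one miss, then hits: 1 bit instead of 8 per repeated address
```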

16.
We present a new compression method which compresses 8×8 picture blocks into fixed-length codewords. The compression operation is performed on the discrete cosine transform (DCT) of each block. As a result, our method combines the distinct advantage of being fixed-length with the high image quality obtained by DCT-based compression methods. Our method has excellent error-resistance characteristics, since it does not have the synchronization and error-propagation problems inherent in variable-length coding methods.
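A fixed-length DCT block coder can be sketched like this; the kept-coefficient set, step size, and bit budget are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Sketch of fixed-length DCT block coding: transform an 8x8 block, keep a
# fixed set of low-frequency coefficients, and quantize each to a fixed
# number of bits, so every block costs exactly the same number of bits and
# a bit error cannot desynchronize the stream.

N = 8
C = np.array([[np.cos((2 * j + 1) * i * np.pi / (2 * N)) for j in range(N)]
              for i in range(N)])
C[0] /= np.sqrt(2)
C *= np.sqrt(2 / N)                      # orthonormal DCT-II basis

KEEP = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]   # zigzag prefix
BITS, STEP = 6, 16.0                     # 6 bits per kept coefficient

def encode_block(block):
    coeffs = C @ block @ C.T
    return [int(np.clip(np.round(coeffs[i, j] / STEP),
                        -(1 << (BITS - 1)), (1 << (BITS - 1)) - 1))
            for i, j in KEEP]            # always len(KEEP) * BITS bits

def decode_block(q):
    coeffs = np.zeros((N, N))
    for (i, j), v in zip(KEEP, q):
        coeffs[i, j] = v * STEP
    return C.T @ coeffs @ C

block = 50 * np.outer(np.linspace(-1, 1, N), np.ones(N))  # level-shifted pixels
err = np.abs(decode_block(encode_block(block)) - block).max()
print(err)  # modest: error comes from quantization and discarded frequencies
```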

17.
A linear quadtree compression scheme for image encryption
A private-key encryption scheme for two-dimensional image data is proposed in this work. The scheme is designed on the basis of lossless data compression principles and performs data encryption and compression simultaneously. For the lossless compression effect, the quadtree data structure is used to represent the image; for the encryption purpose, various scanning sequences of the image data are provided. The scanning sequences comprise a private key for encryption. Twenty-four possible combinations of scanning sequences are defined for accessing four quadrants, thereby making available 24^n × 4^(n(n−1)/2) possibilities to encode an image of resolution 2^n × 2^n. The security of the proposed encryption scheme therefore relies on the computational infeasibility of an exhaustive search. Three images of 512 × 512 pixels are used to verify the feasibility of the proposed scheme. The testing results and analysis demonstrate the characteristics of the proposed scheme, which can be applied to data storage and transmission in a public network.
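The quoted key-space formula can be checked directly; for the 512 × 512 test images, n = 9.

```python
import math

# Quick check of the key-space size quoted above: 24^n * 4^(n(n-1)/2)
# possible scanning-sequence keys for a 2^n x 2^n image.

def key_space(n):
    return 24 ** n * 4 ** (n * (n - 1) // 2)

n = 9                          # 2**9 = 512
keys = key_space(n)
print(f"{keys:.3e}")           # ~10^34: exhaustive search is infeasible
print(math.log2(keys))         # equivalent key length in bits (~113 bits)
```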

18.
Kallel S. Electronics Letters, 1992, 28(12): 1097-1098
An efficient stop-and-wait ARQ protocol proposed by Sastry (1975) is modified to include a parity-retransmission type II hybrid ARQ scheme. Unlike the Sastry scheme, in which simple repeats of a data packet are transmitted, with the type II hybrid ARQ scheme the data packet to be transmitted is encoded with a rate-1/2 code, and repetitions alternate between the two sequences obtained at the output of the encoder. It is found that the throughput can be substantially increased.

19.
A novel image compression technique using a classified vector quantiser and singular value decomposition is proposed for the efficient representation of still images. The proposed method is called improved hybrid classified vector quantisation. The technique was benchmarked against a standard vector quantiser generated using the k-means algorithm and against JPEG-2000. Simulation results indicate that the proposed approach alleviates edge degradation and can reconstruct images of good visual quality with a higher peak signal-to-noise ratio than the benchmarked techniques.
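The SVD side of such hybrid coders can be illustrated with a truncated SVD of an edge block; the block contents and rank choice are illustrative.

```python
import numpy as np

# Sketch of SVD-based block approximation: a rank-k truncated SVD keeps the
# dominant structure (e.g., a strong edge) of an image block with few
# parameters, which is why it helps against edge degradation.

def truncated_svd(block, k):
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k]    # best rank-k approximation

block = np.zeros((8, 8))
block[:, 4:] = 255.0                 # ideal vertical edge: rank 1
print(np.allclose(truncated_svd(block, 1), block))   # True: rank 1 suffices

noisy = block + np.random.default_rng(4).normal(0, 10, (8, 8))
err = np.abs(truncated_svd(noisy, 1) - block).max()
print(err)   # edge recovered: residual is on the order of the noise
```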

20.
This paper proposes a modified block-adaptive prediction-based neural network scheme for lossless data compression. A variety of neural network models drawn from different network types, including feedforward, recurrent, and radial basis configurations, are implemented within the scheme. The scheme is further expanded with combinations of popular lossless encoding algorithms. Simulation results are presented, taking characteristic features of the models, transmission issues, and practical considerations into account, to determine optimized configurations, suitable training strategies, and implementation schemes. Estimates are used to compare these characteristics with those of existing schemes. It is also shown that the adaptations of the proposed scheme increase the performance of even the classical predictors evaluated. In addition, the results obtained support that the total processing time of the two-stage scheme can, in certain cases, be lower than that of lossless encoders alone. The findings may be beneficial for future work, such as hardware implementations of dedicated neural chips for lossless compression.
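The two-stage idea (predict, then losslessly encode the residuals) can be sketched as follows; a fixed order-2 linear predictor stands in for the neural network stage and zlib for the lossless encoders, both illustrative substitutions.

```python
import zlib
import numpy as np

# Sketch of the two-stage scheme: a predictor decorrelates the data, then a
# standard lossless coder compresses the residuals, which are far more
# compressible than the raw samples. The decoder rebuilds the signal exactly
# from residual + predictor, so the pipeline stays lossless.

rng = np.random.default_rng(5)
t = np.arange(4096)
signal = (100 * np.sin(t / 30) + rng.normal(0, 2, t.size)).astype(np.int16)

# Stage 1: order-2 linear prediction x_hat[n] = 2 x[n-1] - x[n-2].
pred = np.zeros_like(signal)
pred[2:] = 2 * signal[1:-1].astype(np.int32) - signal[:-2]
residual = (signal - pred).astype(np.int16)     # small, peaked around zero

# Stage 2: generic lossless coding of raw data vs. residuals.
raw = len(zlib.compress(signal.tobytes(), 9))
two_stage = len(zlib.compress(residual.tobytes(), 9))
print(raw, two_stage)   # residuals compress markedly better
```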
