Similar Literature
20 similar documents found (search time: 15 ms)
1.
Hardware implementation of data compression algorithms is receiving increasing attention due to exponentially expanding network traffic and digital data storage usage. In this paper, we propose several serial one-dimensional and parallel two-dimensional systolic arrays for Lempel-Ziv data compression. A VLSI chip implementing our optimal linear array has been fabricated and tested. The proposed array architecture is scalable; multiple chips (linear arrays) can also be connected in parallel to implement the parallel array structure and provide a proportional speedup.
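For readers who want the software analogue of the matching step these systolic arrays parallelize, here is a minimal LZ77 sliding-window sketch in Python; the window size, minimum match length, and token format are illustrative choices, not parameters from the paper:

```python
def lz77_compress(data: bytes, window: int = 4096, min_match: int = 3):
    """Greedy LZ77: emit (offset, length) for window matches, else a literal."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        start = max(0, i - window)
        # Scan the sliding window for the longest match -- the comparison
        # step the systolic array performs concurrently across its PEs.
        for j in range(start, i):
            length = 0
            while (i + length < len(data)
                   and j + length < i            # match source stays left of i
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= min_match:
            out.append(("copy", best_off, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out
```

Each pass of the inner window scan corresponds to the comparisons the hardware array evaluates in parallel across window positions.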

2.
Compression systems for real signals (images, video, audio) generate sources of information with different levels of priority, which are then encoded with variable-length codes (VLCs). This paper addresses the issue of robust transmission of such VLC-encoded heterogeneous sources over error-prone channels. VLCs are very sensitive to channel noise: when some bits are altered, synchronization losses can occur at the receiver. This paper describes a new family of codes, called multiplexed codes, that confine the de-synchronization phenomenon to low-priority data while asymptotically reaching the entropy bound for both (low- and high-priority) sources. The idea consists of creating fixed-length codes for high-priority information and of using the inherent redundancy to describe low-priority data, hence the name multiplexed codes. Theoretical and simulation results reveal a very high error resilience at almost no cost in compression efficiency.
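The construction can be illustrated with a toy sketch: partition the 2^c fixed-length codewords among the high-priority symbols in proportion to their probabilities, and let the choice of codeword inside a symbol's partition carry low-priority bits. This is a simplified illustration of the idea only (the paper's actual code design is more refined), and all function names and parameters below are hypothetical:

```python
from math import floor, log2

def build_multiplexed_code(probs, c):
    """Partition the 2**c fixed-length codewords among high-priority symbols,
    proportionally to their probabilities (assumes len(probs) <= 2**c)."""
    total = 2 ** c
    sizes = [max(1, floor(p * total)) for p in probs]
    # crude renormalization so the partition sizes sum to exactly 2**c
    while sum(sizes) > total:
        sizes[sizes.index(max(sizes))] -= 1
    while sum(sizes) < total:
        sizes[sizes.index(max(sizes))] += 1
    codebook, start = [], 0
    for s in sizes:
        codebook.append(range(start, start + s))
        start += s
    return codebook

def encode_symbol(codebook, symbol, lp_bits):
    """Encode one high-priority symbol; embed low-priority bits in the
    choice of codeword within its partition."""
    part = codebook[symbol]
    capacity = floor(log2(len(part)))          # low-priority bits carried
    index = int(lp_bits[:capacity] or "0", 2)  # consume that many bits
    return part[index], lp_bits[capacity:]
```

Decoding is then a table lookup: the received c-bit word identifies both the high-priority symbol (which partition it falls in) and the low-priority bits (its index inside that partition), which is why errors never de-synchronize the high-priority stream.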

3.
In recent years, great interest has been attracted by a new error-correcting code technique known as "turbo coding," which has been proven to offer performance closer to the Shannon limit than traditional concatenated codes. In this paper, several very large scale integration (VLSI) architectures suitable for turbo decoder implementation are proposed and compared in terms of complexity and performance; the impact on VLSI complexity of system parameters such as the number of states, the number of iterations, and the code rate is evaluated for the different solutions. The results of this architectural study have then been exploited for the design of a specific decoder implementing a serial concatenation scheme with 2/3 and 3/4 codes; the designed circuit occupies 35 mm², supports a 2 Mb/s data rate, and, for a bit error probability of 10⁻⁶, yields a coding gain larger than 7 dB with ten iterations.

4.
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated-circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding, which is simultaneously as efficient as arithmetic coding and as fast as Huffman coding. Compression results show that C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data and greater than 14 for gray-pixel image data.
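For intuition on the entropy coder, here is a minimal sketch of combinatorial (enumerative) coding in Cover's classical sense: a fixed-length binary block is represented by its Hamming weight plus its lexicographic rank among all blocks of that weight. This illustrates the principle only; C4's actual combinatorial coder is engineered for speed well beyond this sketch.

```python
from math import comb

def enum_rank(bits):
    """Lexicographic rank of `bits` among all strings of the same
    length and Hamming weight (enumerative coding)."""
    rank, ones = 0, bits.count(1)
    for i, b in enumerate(bits):
        remaining = len(bits) - i - 1
        if b == 1:
            # all strings with a 0 here (and `ones` ones still to place) come first
            rank += comb(remaining, ones)
            ones -= 1
    return rank

def enum_unrank(rank, n, ones):
    """Inverse: rebuild the string from (rank, length, weight)."""
    bits = []
    for i in range(n):
        remaining = n - i - 1
        c = comb(remaining, ones)
        if ones > 0 and rank >= c:
            bits.append(1); rank -= c; ones -= 1
        else:
            bits.append(0)
    return bits
```

A block of n bits with k ones then costs roughly log2 C(n, k) bits for the rank plus the cost of the weight itself, which is what lets the scheme approach arithmetic-coding efficiency for memoryless statistics.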

5.
An algorithm for VLSI chip floorplanning is presented. It uses an initial block placement obtained by the AR (attractive and repulsive force) method, and performs iterative block packing by gradually moving and reshaping blocks while shrinking the chip boundary. Experiments with several types of data show that the method is very effective for handling various types of blocks and is well suited to interactive chip layout design.
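As a rough illustration of the attractive/repulsive initial-placement idea, the following force-directed sketch lets net-connected blocks attract and all block pairs repel; the force laws, constants, and data layout are assumptions made for illustration, not the paper's formulation:

```python
import math

def ar_placement(blocks, nets, iters=200, k_attr=0.01, k_rep=500.0, step=0.1):
    """blocks: {name: [x, y]}.  Connected blocks attract (spring force);
    all pairs repel (inverse-square), spreading the initial placement."""
    for _ in range(iters):
        force = {b: [0.0, 0.0] for b in blocks}
        for a, b in nets:                        # attraction along nets
            dx = blocks[b][0] - blocks[a][0]
            dy = blocks[b][1] - blocks[a][1]
            force[a][0] += k_attr * dx; force[a][1] += k_attr * dy
            force[b][0] -= k_attr * dx; force[b][1] -= k_attr * dy
        names = list(blocks)
        for i, a in enumerate(names):            # pairwise repulsion
            for b in names[i + 1:]:
                dx = blocks[b][0] - blocks[a][0]
                dy = blocks[b][1] - blocks[a][1]
                d2 = dx * dx + dy * dy + 1e-9
                d = math.sqrt(d2)
                f = k_rep / d2
                force[a][0] -= f * dx / d; force[a][1] -= f * dy / d
                force[b][0] += f * dx / d; force[b][1] += f * dy / d
        for b in blocks:                         # move each block a small step
            blocks[b][0] += step * force[b][0]
            blocks[b][1] += step * force[b][1]
    return blocks
```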

6.
In this article, low-density parity-check (LDPC) codes are applied to lossy source coding, and we study how the asymptotic performance of MacKay's LDPC codes (see ibid., vol. 45, p. 399-431, Mar. 1999, and vol. 47, p. 2101, July 2001) depends on the sparsity of the parity-check matrices in the source coding of the binary independent and identically distributed (i.i.d.) source with Pr{x=1}=0.5. It is then shown that an LDPC code with column weight O(log n) for code length n can asymptotically attain the rate-distortion function.
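For reference, the rate-distortion function that the codes are measured against, for the binary i.i.d. source with Pr{x=1}=0.5 under Hamming distortion, is the standard expression:

```latex
% Rate-distortion function of the Bernoulli(1/2) source, Hamming distortion
R(D) = 1 - H_b(D), \qquad 0 \le D \le \tfrac{1}{2},
\quad\text{where } H_b(D) = -D \log_2 D - (1 - D)\log_2(1 - D).
```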

7.
A lossless data compression and decompression (LDCD) algorithm based on the notion of textual substitution has been implemented in silicon using a linear systolic array architecture. This algorithm employs a model in which the encoder and decoder each have a finite amount of memory, referred to as the dictionary. Compression is achieved by finding matches between the dictionary and the input data stream, whereby a substitution is made in the data stream by an index referencing the corresponding dictionary entry. The LDCD system is built using 30 application-specific integrated circuits (ASICs), each containing 126 identical processing elements (PEs) which perform both the encoding and decoding functions at clock rates up to 20 MHz.
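The dictionary match-and-substitute loop that the PEs implement in hardware can be sketched in software; the LZW-style adaptive dictionary below is one concrete instantiation chosen for brevity, not necessarily the dictionary policy of the LDCD chip:

```python
def lzw_compress(data: bytes, max_entries: int = 4096):
    """Dictionary substitution: grow phrases, emit the index of the
    longest dictionary match; the decoder mirrors the dictionary growth."""
    dictionary = {bytes([i]): i for i in range(256)}
    out, phrase = [], b""
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate                 # keep extending the match
        else:
            out.append(dictionary[phrase])     # substitute index for phrase
            if len(dictionary) < max_entries:  # finite dictionary memory
                dictionary[candidate] = len(dictionary)
            phrase = bytes([byte])
    if phrase:
        out.append(dictionary[phrase])
    return out
```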

8.
The authors present three VLSI chips for a large-scale parallel inference machine: a processor (PU) chip, a cache memory (CU) chip, and a network control (NU) chip. The PU chip has been designed to be well adapted to logic programming languages such as PROLOG. The CU chip implements a hardware support called the "trail buffer," which is suited to the execution of PROLOG-like languages. The NU chip makes it possible to connect 256 processing elements in a mesh network. The parallel inference machine (PIM/m) runs a PROLOG-like network-based operating system called PIMOS as well as many applications, and has a peak performance of 128 mega logical inferences per second (MLIPS). The PU chip, containing 384,000 transistors, is fabricated in a 0.8 μm double-metal CMOS technology. The CU chip and the NU chip contain 610,000 and 329,000 transistors, respectively; they are fabricated in a 1.0 μm double-metal CMOS technology. A cell-based design method is used to reduce layout design time.

9.
In contrast to wireline communication, the physical bandwidth of RF wireless communication systems is relatively limited and is unlikely to grow significantly in the future. Hence, it is advantageous to increase the effective bandwidth of communication channels at the expense of complex processing at both the sending and receiving entities. In this paper, we present a real-time, low-area, and low-power VLSI lossless data compressor based on the first Lempel-Ziv algorithm (Ziv and Lempel, 1977) to improve the performance of wireless local area networks. Our architecture can achieve an average compression rate of 50 Mbps, thus providing sufficient performance for all current and most foreseeable future wireless LANs. Since the architecture, including a dictionary, contains fewer than 40K transistors and consumes approximately 70 mW in 1.2 μm CMOS, it enables low-cost, adaptive, and transparent data compression to be employed in wireless LANs. Its small size allows it to be implemented on an ASIC, as part of a new DSP, or in configurable FPGA technology. To estimate the impact of VLSI compression, we use network simulations to analyze the performance and power consumption of compression in the context of a WLAN protocol. In particular, we consider the proposed IEEE WLAN protocol standard 802.11 (IEEE Standard Group, 1994). The compression ratio is modeled as a random variable with a Gaussian distribution, based on empirical studies (Cressman, 1994; Pawlikowski et al., 1995). Our results show that efficient real-time data compression can greatly improve the throughput and delay of a medium-to-heavily loaded network while minimizing the average power vs. throughput ratio.
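The traffic model is easy to reproduce in simulation: draw a per-packet compression ratio from a Gaussian and scale the effective channel rate accordingly. A sketch, with placeholder mean and standard deviation rather than the paper's empirically fitted values:

```python
import random

def effective_throughput(raw_mbps, n_packets=10_000,
                         ratio_mean=2.0, ratio_std=0.5):
    """Monte-Carlo estimate of effective throughput when each packet's
    compression ratio is Gaussian (truncated below at 1 = incompressible)."""
    total = 0.0
    for _ in range(n_packets):
        ratio = max(1.0, random.gauss(ratio_mean, ratio_std))
        total += raw_mbps * ratio   # each transmitted bit carries `ratio` raw bits
    return total / n_packets

# e.g. a 2 Mb/s 802.11 channel with mean ratio 2 behaves like ~4 Mb/s
print(effective_throughput(2.0))
```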

10.
An overview is given of a silicon-gate NMOS fabrication process used to realize a 450,000-transistor, 32-bit single-chip CPU that operates at a worst-case 18 MHz clock frequency. The technology utilizes 1.5-μm lines and 1.0-μm spaces on all critical levels, and provides tungsten dual-layer metallization. The device and interconnect structure for this 8-mask process is outlined as a sequence through the process flow. Linewidth and alignment statistics are given for the optical reduction-projection step-and-repeat lithography used in this technology.

11.
An array processing chip integrating 128 bit-serial processing elements (PEs) on a single die is discussed. Each PE has a 16-function logic unit, a single-bit adder, a 32-b variable-length shift register, and 1 kb of local RAM. Logic in each PE provides the capability to mask PEs individually. A modified grid interconnection scheme allows each PE to communicate with each of its eight nearest neighbors. A 32-b bus is used to transfer data to and from the array in a single cycle. Instruction execution is pipelined, enabling all instructions to be executed in a single cycle. The 1-μm CMOS design contains over 1.1 million transistors on an 11.0-mm × 11.7-mm die.

12.
Kitazawa, H., Ueda, K. Electronics Letters, 1984, 20(3): 137-139.
A chip area estimation method is presented, which consists of intrablock area calculation based on empirically obtained block data and interblock channel area calculation. The method is used in a chip floorplanning program for hierarchical standard-cell VLSI layout design. By applying it to several practical circuits, it is shown that the estimation error is within ±10%.
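The two-term estimate itself is straightforward to express; in the sketch below, the per-block and per-channel models are placeholders standing in for the empirically obtained block data the method relies on:

```python
def estimate_chip_area(blocks, channels):
    """Chip area = sum of intrablock areas (from empirical block data)
    + sum of interblock routing-channel areas.  Field names hypothetical."""
    intrablock = sum(b["cells"] * b["area_per_cell"] for b in blocks)
    interblock = sum(ch["length"] * ch["tracks"] * ch["pitch"] for ch in channels)
    return intrablock + interblock
```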

13.
A multisymbol data compression method using a binary arithmetic coder (BAC) is proposed. After representing symbols in a binary format, each bit is sequentially coded by the BAC using a chain rule. The proposed method provides a slightly better compression ratio and a much faster processing time than Witten's multisymbol arithmetic coder.
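The chain rule in question factors each symbol probability over its bits, P(symbol) = prod_i P(bit_i | bit_0..bit_{i-1}), so a plain binary coder can handle a multisymbol alphabet. A sketch of the modeling side only (the binary arithmetic coder itself is omitted, and the class and method names are hypothetical):

```python
from collections import defaultdict

class ChainRuleModel:
    """Adaptive bit models indexed by the bits already coded for this
    symbol, so P(symbol) = prod_i P(bit_i | bit_0..bit_{i-1})."""
    def __init__(self, bits_per_symbol):
        self.bits = bits_per_symbol
        self.counts = defaultdict(lambda: [1, 1])  # Laplace-smoothed [#0, #1]

    def symbol_to_bits(self, symbol):
        return [(symbol >> (self.bits - 1 - i)) & 1 for i in range(self.bits)]

    def code_symbol(self, symbol):
        """Yield (bit, P(bit=1 | context)) pairs to feed a binary
        arithmetic coder; updates the context models as it goes."""
        context = ()
        for bit in self.symbol_to_bits(symbol):
            c0, c1 = self.counts[context]
            yield bit, c1 / (c0 + c1)
            self.counts[context][bit] += 1
            context += (bit,)
```

Feeding these (bit, probability) pairs to any binary arithmetic coder reproduces a multisymbol coder, one bit-plane decision at a time.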

14.
SAR image compression is very important in reducing the costs of data storage and transmission over relatively slow channels. The authors propose a compression scheme driven by texture analysis, homogeneity mapping, and speckle noise reduction within the wavelet framework. The image's compressibility and interpretability are improved by incorporating speckle reduction into the compression scheme. The authors begin with the classical set partitioning in hierarchical trees (SPIHT) wavelet compression scheme and modify it to control the amount of speckle reduction, applying different encoding schemes to homogeneous and nonhomogeneous areas of the scene. The results compare favorably with the conventional SPIHT wavelet and JPEG compression methods.
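Homogeneity mapping in speckled imagery is commonly based on the local coefficient of variation (standard deviation over mean), which is roughly constant in fully developed speckle; the sketch below uses that criterion with an assumed window size and threshold, as one plausible reading of the homogeneity-mapping step rather than the paper's exact method:

```python
import numpy as np

def homogeneity_map(img, win=7, cv_thresh=0.5):
    """Mark pixels as homogeneous where the local coefficient of
    variation (sigma/mu) falls below a threshold."""
    img = img.astype(np.float64)
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            mu = patch.mean()
            cv = patch.std() / mu if mu > 0 else 0.0
            out[i, j] = cv < cv_thresh
    return out
```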

15.
The design of data dissemination protocols has been a great challenge due to the highly dynamic and unreliable wireless channel in vehicular ad hoc networks (VANETs). In the literature, several interesting solutions have been proposed to perform data dissemination in this environment, but these solutions use architectures requiring centralized coordination, global network knowledge, or large intermediate buffers. In this paper, we propose a decentralized technique that overcomes these requirements and provides reliable and scalable communication in both dense and sparse traffic for VANETs. Random walks are used in the proposed technique to disseminate data from one vehicle to other vehicles in the network. We use raptor codes to provide low decoding complexity and greater scalability for data dissemination. Simulation results demonstrate that the proposed technique has better fault tolerance and lower complexity than a general random-walk-based dissemination process, and is more scalable than the other protocols.
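A toy simulation of the random-walk dissemination layer (the raptor-coding layer is omitted, and the contact graph, walk count, and TTL are illustrative assumptions):

```python
import random

def random_walk_dissemination(adjacency, source, walks=8, ttl=50):
    """Launch several random walks from `source`; every visited vehicle
    receives (a coded piece of) the message."""
    informed = {source}
    for _ in range(walks):
        node = source
        for _ in range(ttl):
            neighbors = adjacency.get(node, [])
            if not neighbors:          # isolated vehicle: this walk dies
                break
            node = random.choice(neighbors)
            informed.add(node)
    return informed

# toy 5-vehicle contact graph
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(sorted(random_walk_dissemination(adj, source=0)))
```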

16.
The design of a VLSI memory management chip, which provides the WE 32001 microprocessor with an extensive set of memory management capabilities, is described. The chip is implemented in 1.75-μm twin-tub CMOS II technology and contains 92,000 transistors. Highlights of the technology are the use of twin tubs for independently optimized n- and p-channel transistors, local oxidation and self-aligned channel stops for parasitic field protection, and the use of an n+ substrate for latchup protection. In addition, a composite layer of TaSi over n+ polysilicon is used to achieve a fivefold reduction in sheet resistance over conventional n+ polysilicon. Electrical channel lengths for n-channel and p-channel transistors are nominally 1.5 μm.

17.
Electronics Letters, 1996, 32(8): 733-735.
The authors present a novel architecture designed to reduce the storage for decision vectors in the traceback block of the Viterbi decoder. By decreasing the rate of decision vector generation, the data storage requirement is reduced by 29.9% in the proposed architecture compared to conventional traceback Viterbi decoders. The overall area is reduced by approximately 25% when implemented in VLSI.
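For context, conventional traceback walks backward through the survivor memory, consuming one stored decision bit per trellis stage; the proposed architecture reduces how many such vectors must be stored. A minimal sketch of the conventional step, assuming a radix-2 shift-register trellis (not the paper's specific design):

```python
def viterbi_traceback(decisions, final_state, n_states):
    """Walk survivor memory backwards: decisions[t][s] is the stored bit
    selecting which predecessor survived into state s at stage t."""
    half = n_states // 2
    state, bits = final_state, []
    for t in range(len(decisions) - 1, -1, -1):
        d = decisions[t][state]
        bits.append(state & 1)             # input bit that led into this state
        state = (state >> 1) | (d * half)  # step back to surviving predecessor
    return bits[::-1]
```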

18.
Hyperspectral data compression using a fast vector quantization algorithm
A fast vector quantization algorithm for data compression of hyperspectral imagery is proposed in this paper. It makes use of the fact that, in the full search of the generalized Lloyd algorithm (GLA), a training vector does not require a search to find the minimum-distance partition if its distance to its current partition has improved in the current iteration compared to the previous one. The proposed method has the advantages of being simple, producing a large saving in computation time, and yielding compression fidelity as good as that of the GLA. Four hyperspectral data cubes covering a wide variety of scene types were tested. The loss of spectral information due to compression was evaluated using the spectral angle mapper and a remote sensing application.
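The shortcut drops into an ordinary Lloyd iteration in a few lines: a vector whose distance to its current partition improved since the last iteration keeps its assignment without a full search, exactly as the abstract states the rule. A sketch, assuming NumPy arrays for the training set and codebook:

```python
import numpy as np

def fast_gla(training, codebook, iters=20):
    """Generalized Lloyd algorithm with the described shortcut: skip the
    full nearest-neighbour search for vectors whose distance to their
    current partition improved since the previous iteration."""
    assign = np.zeros(len(training), dtype=int)
    prev_dist = np.full(len(training), np.inf)
    for _ in range(iters):
        for i, x in enumerate(training):
            d_cur = np.linalg.norm(x - codebook[assign[i]])
            if d_cur < prev_dist[i]:
                prev_dist[i] = d_cur      # improved: keep partition, skip search
                continue
            dists = np.linalg.norm(codebook - x, axis=1)   # full search
            assign[i] = int(np.argmin(dists))
            prev_dist[i] = dists[assign[i]]
        for k in range(len(codebook)):    # centroid update step of the GLA
            members = training[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, assign
```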

19.
The authors describe two chips which form the basis of a high-speed lossless image compression/decompression system. They present the transform and coding algorithms and the main architectural features of the chips, and outline some performance specifications. Lossless compression can be achieved by a transformation process followed by entropy coding. The two application-specific integrated circuits (ASICs) perform S-transform image decomposition and Lempel-Ziv (L-Z) entropy coding. The S-transform, besides decorrelating the image, provides a convenient method of hierarchical image decomposition. The data compressor/decompressor IC is a fast and efficient implementation of the L-Z algorithm. The chips can be used independently or together for image compression.
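The S-transform is commonly defined as the integer average/difference pair applied to pixel pairs; a one-dimensional sketch with its exact integer inverse (the chip's precise variant may differ, and the 2-D hierarchical decomposition applies this along rows and columns):

```python
def s_transform_pair(x, y):
    """Forward S-transform on a pixel pair: integer average + difference.
    Floor division keeps the mapping exactly invertible (lossless)."""
    s = (x + y) // 2      # low-pass (average)
    d = x - y             # high-pass (difference)
    return s, d

def inverse_s_transform_pair(s, d):
    """Exact inverse: recover the original pixel pair losslessly."""
    x = s + ((d + 1) // 2)
    y = x - d
    return x, y
```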

20.
A general reversible subband coding system with 2-D infinite impulse response filters is proposed. The system guarantees perfect image reconstruction (free of phase distortions). The application of wave digital filters is considered. A new technique for high-frequency source encoding is proposed. Experiments with real images demonstrate the high efficiency of the proposed technique.
