20 similar documents found (search time: 15 ms)
1.
We consider the open problem of designing fault-secure (FS) parallel encoders for various systematic linear error-correcting codes (ECCs). The main idea relies on generating not only the check bits for error correction but also, separately and in parallel, the check bits for error detection. The latter are then compared against error-detecting check bits regenerated from the error-correcting check bits. A detailed design is presented for encoders for CRC codes. A complexity evaluation of FPGA implementations of encoders with various degrees of parallelism shows that the fault-secure versions compare favorably against their unprotected counterparts with respect to both complexity and maximum operating frequency. Future research will include the design of FS decoders for CRC codes as well as the generalization of the presented ideas to the design of FS encoders and decoders for other systematic linear ECCs, such as nonbinary BCH codes and Reed-Solomon codes.
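As a much-simplified software illustration of the idea (not the authors' hardware design; the generator polynomials below are hypothetical, chosen so that the detection polynomial divides the correction polynomial and a correctly assembled codeword therefore leaves a zero detection remainder):

```python
def poly_mod(bits, gen):
    """Remainder of the GF(2) polynomial `bits` (MSB first) modulo `gen`."""
    r = len(gen) - 1
    reg = list(bits)
    for i in range(len(bits) - r):
        if reg[i]:
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    return reg[-r:]

def crc_check_bits(data, gen):
    """CRC check bits: remainder of data(x) * x^r modulo gen(x)."""
    return poly_mod(list(data) + [0] * (len(gen) - 1), gen)

G_EC = [1, 0, 0, 1]  # hypothetical g_ec(x) = x^3 + 1 = (x + 1)(x^2 + x + 1)
G_ED = [1, 1]        # g_ed(x) = x + 1, a factor of g_ec(x)

def fault_secure_encode(data):
    check = crc_check_bits(data, G_EC)   # error-correcting check bits
    codeword = list(data) + check        # systematic codeword
    # In hardware, the detection remainder is computed by an independent
    # parallel circuit, so a fault in either path raises the flag.
    fault = any(poly_mod(codeword, G_ED))
    return codeword, fault

codeword, fault = fault_secure_encode([1, 0, 1, 1])
assert not fault
```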
2.
Johannesson, R.; Wan, Z.-X. IEEE Transactions on Information Theory, 1993, 39(4): 1219-1233
The authors review the work of G. D. Forney, Jr., on the algebraic structure of convolutional encoders, upon which some new results regarding minimal convolutional encoders rest. An example is given of a basic convolutional encoding matrix whose number of abstract states is minimal over all equivalent encoding matrices, yet which cannot be realized with a minimal number of memory elements in either controller canonical form or observer canonical form. Thus, this encoding matrix is not minimal according to Forney's definition of a minimal encoder. To resolve this difficulty, three minimality criteria are introduced: minimal-basic encoding matrix, minimal encoding matrix, and minimal encoder. It is shown that all minimal-basic encoding matrices are minimal and that there exist minimal encoding matrices that are not minimal-basic. Several equivalent conditions are given for an encoding matrix to be minimal. It is proven that the constraint lengths of two equivalent minimal-basic encoding matrices are equal up to a permutation. All results are proven using only elementary linear algebra.
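For background (a textbook-style illustration, not the paper's specific example), the constraint lengths of a polynomial encoding matrix are its row degrees:

```latex
G(D) = \begin{pmatrix} 1 + D + D^{2} & 1 + D^{2} \end{pmatrix},
\qquad
\nu_{1} = \max_{j} \deg g_{1j}(D) = 2,
\qquad
\nu = \sum_{i} \nu_{i} = 2,
```

with a controller-canonical-form realization using $\nu$ memory elements (here $2^{\nu} = 4$ physical states). A minimal-basic encoding matrix is one whose overall constraint length $\nu$ is minimal over all equivalent basic encoding matrices.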
3.
Bit-rate control for MPEG encoders
Keesman, G.; Shah, I.; Klein-Gunnewiek, R. Signal Processing: Image Communication, 1995, 6(6): 545-560
Bit-rate control is a central problem in designing image-sequence compression systems. In this paper we describe a new approach to bit-rate control for inter-frame encoders such as MPEG encoders. This approach uses concepts from control theory. Its central feature is a surprisingly simple but effective model of the encoder, consisting of a gain element, a delay element, and additive noise. In our system the bit rate is controlled by a PI controller tuned to achieve two objectives: (1) picture quality should be as uniform as possible, and (2) the available bit budget should be used as fully as possible. It is demonstrated in the paper that these two objectives, when considered separately, lead to contradictory controller settings. This dilemma is resolved by using bit usage profiles that indicate how the bits are to be spread over the pictures. The effectiveness of the approach is demonstrated by designing a bit-rate control for an MPEG encoder that has a nearly constant bit rate per group of pictures (GOP). Such a bit-rate control is of high value for applications like magnetic recording, where a constant bit rate per GOP is required in order to realize playback trick modes, e.g., fast forward.
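A minimal sketch of such a control loop (variable names and gains are assumptions; the linear stand-in for the encoder is ours, not the authors' identified model):

```python
import random

KP, KI = 0.3, 0.05            # hypothetical PI gains

def encode_picture(qstep):
    """Stand-in encoder: produced bits fall as the quantizer step rises
    (a gain element plus additive noise, echoing the abstract's model)."""
    return 80_000 / qstep + random.gauss(0, 500)

def rate_control(profile, qstep=16.0):
    """Track a bit usage profile (target bits per picture) with a PI controller."""
    integral = 0.0
    for target in profile:
        bits = encode_picture(qstep)
        err = (bits - target) / target        # normalized rate error
        integral += err
        qstep = max(1.0, qstep * (1.0 + KP * err + KI * integral))
        yield bits, qstep

for bits, q in rate_control([40_000] * 12):   # one hypothetical 12-picture GOP
    print(f"{bits:9.0f} bits  qstep = {q:5.2f}")
```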
4.
Dimmler, M.; Dayer, C. IEEE/ASME Transactions on Mechatronics, 1996, 1(4): 278-283
Rotor position and speed information is essential for the control of motors. Several encoders for motors with diameters greater than 13 mm already exist on the market, but most of these solutions are too expensive or unsuitable for smaller applications. Considering the low cost of small drives and their growing number in automation systems, there is a need for encoder solutions which do not increase overall system cost but still offer a controllable device. After a brief summary of existing angular encoders, their miniaturization problems are highlighted, followed by suggestions of suitable architectures for small devices.
5.
Chaichanavong, P.; Marcus, B. H. IEEE Transactions on Information Theory, 2003, 49(5): 1231-1250
A constrained system is presented by a finite-state labeled graph. For such systems, we focus on block-type-decodable encoders, comprising the three classes known as block, block-decodable, and deterministic encoders. Franaszek (1968) gives a sufficient condition which guarantees that the optimal rates of block-decodable and deterministic encoders are equal for the same block length. We introduce another sufficient condition, called the straight-line condition, which yields the same result. The run-length-limited RLL(d,k) and maximum-transition-run MTR(j,k) constraints are shown to satisfy both conditions. In general, block-type-decodable encoders are constructed by choosing a subset of states of the graph to be used as encoder states; such a subset is known as a set of principal states. For each type of encoder and each block length, a natural problem is to find a set of principal states which maximizes the code rate. We show how to compute the asymptotically optimal sets of principal states for deterministic encoders and how they are related to the case of large but finite block lengths. We give optimal sets of principal states for MTR(j,k) block-type-decodable encoders for all codeword lengths. Finally, we compare the code rate of nonreturn-to-zero-inverted (NRZI) encoders to that of the corresponding nonreturn-to-zero (NRZ) and signed NRZI encoders.
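For reference, an RLL(d,k)-constrained binary word requires every run of 0s between consecutive 1s to have length at least d, and every run of 0s to have length at most k; a small checker (our sketch, not from the paper):

```python
def satisfies_rll(bits, d, k):
    """Check the RLL(d,k) constraint: runs of 0s between consecutive 1s
    must have length >= d, and no run of 0s may exceed k."""
    run, seen_one = 0, False
    for b in bits:
        if b == 0:
            run += 1
            if run > k:
                return False
        else:
            if seen_one and run < d:
                return False
            run, seen_one = 0, True
    return True

assert satisfies_rll([1, 0, 0, 1, 0, 0, 0, 1], d=2, k=7)
assert not satisfies_rll([1, 0, 1], d=2, k=7)   # a single 0 between 1s violates d = 2
```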
6.
IEEE Transactions on Information Theory, 1980, 26(5): 534-539
A specific criterion is presented to check the noncatastrophic property of encoders for a convolutional code with a certain type of automorphism. In most cases to which the criterion is applicable, only a very small amount of computation is required.
7.
Sinusoidal-encoder-based digital tachometers are often limited by nonidealities in both the encoder construction and the interface electronics. A probabilistically based compensation technique is presented which dispenses with the need for specialized calibration equipment. A code-density array, obtained during a learning phase, is used to derive a compensation function that approximates the average relationship, over the mechanical cycle, between the calculated electrical angle (as determined by an arctangent-based algorithm) and the actual angle. An extended version of this probabilistically compensated sinusoidal-encoder technique is used to compensate for variations in the encoder characteristics as it rotates through a mechanical cycle. An analysis of the learning-time requirements of the system is presented. Practical results, based on performance measures common in the testing of analog-to-digital converters, confirm the utility of the method. An example of the benefits which accrue from including the enhanced sensor in closed-loop systems is also provided.
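A sketch of the code-density idea as we read it from the abstract (the arctangent decode plus histogram-CDF mapping below is an assumption about the details, not the authors' exact algorithm; under near-constant speed the true angle is uniformly distributed, so the cumulative histogram of measured angles approximates the measured-to-actual mapping):

```python
import numpy as np

def raw_angle(sin_ch, cos_ch):
    """Arctangent-based electrical angle in [0, 2*pi) from the two channels."""
    return np.mod(np.arctan2(sin_ch, cos_ch), 2.0 * np.pi)

def learn_compensation(samples, nbins=1024):
    """Learning phase: build the code-density array (histogram) of measured
    angles and turn its CDF into a compensation lookup table."""
    hist, edges = np.histogram(samples, bins=nbins, range=(0.0, 2.0 * np.pi))
    cdf = np.cumsum(hist) / hist.sum()
    return edges[1:], cdf * 2.0 * np.pi   # measured angle -> compensated angle

def compensate(theta, edges, table):
    """Apply the learned compensation function to a measured angle."""
    return np.interp(theta, edges, table)
```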
8.
IEEE Transactions on Information Theory, 1982, 28(6): 828-840
We resolve some open problems concerning the encoding of a pair of correlated sources with respect to a fidelity criterion. Two encoders are used. One encoder observes only the output of the first source, while the other is supplied both with the second source output and with partial information about the first source at some predetermined rate. A general coding theorem is proved which establishes that $\mathcal{S}^*$, a certain region defined in terms of "single-letter" information-theoretic quantities, is an inner bound to the region of all attainable vectors of rates and distortions. For certain cases of interest the converse is proved as well, thereby establishing the rate-distortion region for these cases.
9.
An adaptive cost block matching (ACBM) motion estimation technique is proposed to achieve a good compromise between the visual quality of decoded video sequences and the computational cost. Simulation results demonstrate that the ACBM algorithm obtains a better rate-distortion performance than the full-search algorithm, with reductions of up to 90% in computational cost.
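The abstract gives no algorithmic detail, so the following is only a generic full-search baseline with an early-exit cost threshold, to suggest where such computational savings come from (it is not the ACBM algorithm):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two blocks."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def match_block(cur, ref, bx, by, bsize=16, srange=8, good_enough=512):
    """Search the reference frame around (bx, by); stop early once the cost
    drops below `good_enough` (an adaptive scheme would tune this per block)."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue
            cost = sad(block, ref[y:y + bsize, x:x + bsize])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
                if best_cost < good_enough:
                    return best_mv, best_cost
    return best_mv, best_cost
```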
10.
Optical incremental encoders are extensively used for position measurement in motion systems. The position measurements suffer from quantization errors, and velocity and acceleration estimates obtained by numerical differentiation greatly amplify these errors. In this paper, the time-stamping concept is used to obtain more accurate position, velocity, and acceleration estimates. Time stamping makes use of stored events, consisting of encoder counts and their time instants, captured with a high-resolution clock. Encoder imperfections and the limited resolution of the event-capture rate result in estimation errors. We propose a method to extend the observation interval of the stored encoder events using a skip operation. Experiments on a motion system show that the velocity estimate is improved by 54% and the acceleration estimate by 92%.
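A minimal sketch of time-stamped estimation (the quadratic least-squares fit and the skip factor are our assumptions about the method's shape, not the paper's exact formulation):

```python
import numpy as np

def estimate_from_events(events, skip=4):
    """Fit a quadratic to stored (count, timestamp) events; `skip` keeps every
    skip-th event, extending the observation interval of the stored events."""
    ev = events[::skip]
    counts = np.array([c for c, _ in ev], dtype=float)
    times = np.array([t for _, t in ev], dtype=float)
    times -= times[-1]                        # evaluate at the newest event
    a2, a1, _ = np.polyfit(times, counts, 2)  # x(t) ~= a2*t^2 + a1*t + a0
    return a1, 2.0 * a2                       # velocity and acceleration at t = 0

# Synthetic events: one count per millisecond with timestamp jitter.
events = [(i, 1e-3 * i + 1e-5 * np.random.randn()) for i in range(64)]
velocity, acceleration = estimate_from_events(events)
```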
11.
A new and improved image coding standard, called JPEG2000, has been developed. JPEG2000 is the state-of-the-art image coding standard resulting from the joint efforts of the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU). In this article, we describe the most important parameters of the new standard and present several "tips and tricks" to help resolve the design trade-offs that JPEG2000 application developers are likely to encounter in practice. The new standard outperforms the older JPEG standard by approximately 2 dB of peak signal-to-noise ratio (PSNR) for several images across all compression ratios. Moreover, JPEG2000's superiority over the previous standard extends beyond compression performance: it offers security features, interactive protocols and application program interfaces for network access, support for wireless transmission, the wavelet transform, and embedded block coding with optimal truncation (EBCOT).
12.
Oohama, Y. IEEE Transactions on Information Theory, 1996, 42(3): 837-847
For the coding of correlated sources, we extend the Slepian-Wolf (1973) data compression system (the SW system) to define a new system (the SWL system) in which the two separate encoders of the SW system are mutually linked. Determining the optimal error exponent for all rates inside the admissible rate region remains an open problem for the SW system; we completely solve this problem for the SWL system and show that the optimal exponents can be achieved by universal codes. Furthermore, it is shown that linking the two encoders does not extend the admissible rate region and does not even improve the exponent of correct decoding outside this region. The zero-error data transmission problem for the SWL system is also considered. We determine the zero-error rate region, i.e., the admissible rate region under the condition that the decoding error probability is strictly zero, and show that this region can be attained by universal codes. Furthermore, we make it clear that the linkage of encoders enlarges the zero-error rate region. It is interesting to note that the above results for the SWL system correspond in some sense to previous results for the discrete memoryless channel with feedback.
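For orientation, the admissible rate region of the original SW system is the classical Slepian-Wolf region (a background fact, not a result of this paper):

```latex
R_X \ge H(X \mid Y), \qquad
R_Y \ge H(Y \mid X), \qquad
R_X + R_Y \ge H(X, Y).
```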
13.
Haccoun, D.; Lavoie, P.; Savaria, Y. IEEE Journal on Selected Areas in Communications, 1988, 6(3): 457-457
Several new architectures for high-speed convolutional encoders and threshold decoders are developed. In particular, it is shown that architectures featuring both parallelism and pipelining are promising from a speed point of view. These architectures are practical for a wide range of coding rates and constraint lengths. Two integrated circuits featuring these architectures have been designed and fabricated in a 3-μm CMOS technology. The two circuits have been tested and can be used to build convolutional encoders and definite threshold decoders operating at data rates above 100 Mb/s. It is shown that with these architectures, encoders and threshold decoders could easily be designed to operate at data rates above 1 Gb/s.
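A bit-serial software model of a rate-1/2 convolutional encoder (the standard (7,5) textbook generators, not the paper's circuits; the hardware architectures above effectively unroll this loop so several input bits are processed per clock):

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 convolutional encoder with constraint length K = 3."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # shift-register window
        out.append(bin(state & g1).count("1") & 1)    # parity of taps for g1
        out.append(bin(state & g2).count("1") & 1)    # parity of taps for g2
    return out

print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
```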
14.
Distortion-free compressibility of individual pictures by finite-state encoders is investigated. In a recent paper (see IEEE Trans. Inform. Theory, vol. 32, no. 1, pp. 1-8, 1986), the compressibility of a given picture I was defined and shown to be the asymptotically attainable lower bound on the compression ratio that can be achieved for I by any finite-state encoder. Here, a different and more direct approach is taken to prove similar results, which are summarized in a converse-to-coding theorem and a constructive coding theorem that leads to a universal, asymptotically optimal compression algorithm.
15.
Full optical encoders/decoders for photonic IP routers
Photonic Internet protocol (IP) routers can support higher throughput and lower end-to-end transfer delay than electrical IP routers. The IP addresses are mapped directly onto the optical layer and processed by performing optical correlations in the time domain between the incoming code and the entries of the address bank; a different tapped-delay-line encoder/decoder (E/D) is used for each photonic label. In the present paper, an innovative, compact, low-loss full E/D is presented that generates/processes a set of orthogonal codes (OCs) simultaneously. A tree of Mach-Zehnder interferometers (MZIs) furnishes the whole set of photonic labels at the same time; it is also possible to add/drop labels without modifying the remaining code sequences. The correlation performance of the OCs can be further enhanced by inserting additional phase shifters in the MZI arms.
16.
Bit rates generated by block-based hybrid video coding algorithms are highly variable, whereas in some applications a constant bit rate is more desirable. A rate control algorithm making use of the prediction error of the current frame is proposed and shown to give an almost constant output bit rate. H.263 is used in simulation experiments to test the algorithm.
17.
Ruckenstein, G.; Roth, R. M. IEEE Transactions on Information Theory, 2001, 47(5): 1796-1812
An input-constrained channel S is defined as the set of words generated by a finite labeled directed graph. It is shown that every finite-state encoder with finite anticipation (i.e., with finite decoding delay) for S can be obtained through a sequence of state-splitting rounds applied to some deterministic graph presentation of S, followed by a reduction of equivalent states. Furthermore, each splitting round can be restricted to follow a certain prescribed structure. This result, in turn, provides a necessary and sufficient condition for the existence of finite-state encoders for S with a given rate p:q and a given anticipation a. A second condition on the existence of such encoders is derived; this condition is only necessary, but it applies to every deterministic graph presentation of S. Based on these two conditions, lower bounds are derived on the anticipation of finite-state encoders. These lower bounds improve on previously known bounds and, in particular, are shown to be tight for the common rates used for the (1,7)-runlength-limited (RLL) and (2,7)-RLL constraints.
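These existence questions sit on top of the classical finite-state coding theorem (stated here only as background): a rate p:q finite-state encoder for S exists if and only if

```latex
\frac{p}{q} \;\le\; \operatorname{cap}(S) \;=\; \log_2 \lambda(A_S),
```

where $\lambda(A_S)$ is the spectral radius of the adjacency matrix of a deterministic presentation of S.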
18.
This paper discusses several approaches to the redundancy reduction of clinical electroencephalographic (EEG) data. The encoders presented here are of two basic types. The first type compresses EEG data while losing very little of the visual or spectral information present in the original during the compression/decompression process. The other type compresses EEG data with some loss of visual reconstruction quality (although still acceptable), but with the advantage of achieving high compression ratios and providing a convenient and natural basis for subsequent automated EEG diagnosis. In deriving the redundancy reduction encoders, the efficiency of several digital compression techniques was compared on EEG data. The general approach adopted, however, is not restricted to this class of data and can be applied, with minor modifications, to other data of similar characteristics.
19.
Bit-serial Reed-Solomon encoders
IEEE Transactions on Information Theory, 1982, 28(6): 869-874
20.
Kimura, A.; Uyematsu, T. IEEE Transactions on Information Theory, 2004, 50(1): 183-193
Coding problems for correlated information sources were first investigated by Slepian and Wolf. They considered a data compression system, called the SW system, in which two sequences emitted from correlated sources are separately encoded into codewords and sent to a single decoder, which has to output the original sequence pairs with a small probability of error. In this paper, we investigate the coding problem of a modified SW system that allows the two encoders to communicate at zero rate. First, we consider fixed-length coding and clarify that the admissible rate region for general sources is equal to that of the original SW system. Next, we investigate variable-length coding with asymptotically vanishing probability of error. We clarify the admissible rate region for mixed sources characterized by two ergodic sources and show that this region is strictly wider than that for fixed-length codes. Further, we investigate the universal coding problem for memoryless sources in this system and show that the SW system with linked encoders has much more flexibility than the original SW system.