Similar documents
20 similar documents found (search time: 578 ms)
1.
In this paper, a new Multilevel Spatial Modulation technique is proposed. It combines computationally efficient multilevel coding and spatial modulation based on trellis codes to increase coding gain, diversity gain, and bandwidth efficiency. The trellis complexity of the single-stage system increases exponentially, whereas in the proposed multilevel system the complexity increases linearly. The proposed system is analyzed with optimal Viterbi and suboptimal sequential decoding algorithms. The results show that sequential decoding saves 75% of the computational power with a loss of approximately 2 dB SNR, compared with optimal Viterbi decoding, over both fast- and slow-fading channel conditions. Since the antenna index is used as a source of information in spatial modulation, the number of antennas required increases with the throughput, and packing a large number of antennas makes cross-correlation unavoidable. A low-complexity modified decoding technique is therefore also proposed for the multilevel spatial modulation system, in which the correlated received signals are equally combined and decoded by the multistage decoder using the Viterbi algorithm. This technique exploits the receiver antenna correlation and makes the decoding complexity independent of the number of antennas. The simulation results indicate that the proposed low-complexity algorithm gives approximately 8–10 dB gain compared with an optimal Viterbi decoder of equivalent computational complexity when the eight highly correlated signals are equally combined. This may be a suitable solution for mobile handsets, where size and computational complexity are the major issues.
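The equal-gain combining step described in the abstract can be sketched in a few lines. The following is only an illustrative sketch, not the authors' implementation; the array sizes, noise level, and random seed are hypothetical:

```python
import numpy as np

def equal_gain_combine(received):
    """Equally combine the signals from all receive antennas into one
    stream, which a multistage Viterbi decoder would then process.
    `received` is an (n_antennas, n_samples) array."""
    return received.mean(axis=0)

# Hypothetical demo: 8 highly correlated noisy copies of a BPSK burst.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=64)
received = symbols[np.newaxis, :] + 0.5 * rng.standard_normal((8, 64))
combined = equal_gain_combine(received)

# Averaging preserves the common signal while the independent noise
# contributions partially cancel, so the combined stream is cleaner
# than any single antenna's output.
```

Because the decoder then sees a single combined stream, its complexity no longer depends on how many antennas were combined.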

2.
Nagaraj  S. Bell  M. 《Communications, IET》2008,2(7):895-902
A novel technique for improving coding and diversity gains in orthogonal frequency division multiplexing (OFDM) systems is proposed. Multidimensional symbol design based on complex field codes with interleaving across frequency has been known for some time now. However, such symbols cannot be concatenated with convolutional codes owing to the prohibitive complexity of decoding. A novel way of designing multidimensional symbols is shown that allows them to be concatenated with convolutional codes while maintaining a low decoding complexity. The proposed multidimensional symbols are based on tailbiting convolutional codes, and the design of codes with desirable properties is discussed. The design of bit-interleaved coded modulation-type and trellis-coded modulation-type codes over these multidimensional symbols is also shown. Simulations show that the proposed coding scheme provides significant performance and/or complexity improvements over existing alternatives and also provides more degrees of freedom for channel-based link adaptation.

3.
The effect of zero-forcing (forcing the encoder back to the zero state) on Turbo codes is analyzed theoretically from the perspective of the distance spectrum. The impact of zero-forcing on Turbo code performance is then studied through simulation. The simulation results show that zero-forcing in the Turbo encoder helps improve the performance of Turbo codes.

4.
5.
It is well known that absolute encoders using natural binary codes are prone to reading errors because several bits can change between adjacent scale sectors. Some solutions, such as V-scan, were proposed to solve this problem, but they required too many additional reading heads and decoding circuits to be competitive with the reduced complexity obtained when using the Gray code. The author describes a novel natural-binary absolute encoder using an original scanning technique that solves the problem of the inherent code-reading errors more efficiently. It is shown that, for the same nominal resolution, the complexity of the encoder is similar to, if not better than, that of the Gray-type encoder.
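For contrast with the natural-binary approach above, the single-bit-change property that makes the Gray code attractive for absolute encoders can be illustrated with the standard reflected-binary conversion (a generic sketch, not taken from the paper):

```python
def binary_to_gray(n: int) -> int:
    # Standard reflected-binary conversion: adjacent values differ in one bit.
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    # Invert by folding the shifted prefix XORs back in.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Natural binary can flip many bits between adjacent sectors (7 -> 8
# flips four bits), which is the reading-error hazard discussed above;
# Gray code flips exactly one bit per step.
for k in range(255):
    assert bin(binary_to_gray(k) ^ binary_to_gray(k + 1)).count("1") == 1
    assert gray_to_binary(binary_to_gray(k)) == k
```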

6.
Cui  Z. Wang  Z. 《Communications, IET》2008,2(8):1061-1068
A practical low-complexity decoding of low-density parity-check codes is studied. A fast decoding scheme for weighted bit-flipping (WBF)-based algorithms is first proposed. Then, an optimised 2-bit decoding scheme and its VLSI architecture are presented. It is shown that the new approach has significantly better decoding performance while having hardware complexity comparable to WBF-based algorithms.
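As context for the WBF family this abstract builds on, a minimal textbook weighted bit-flipping iteration can be sketched as follows. This is a generic WBF sketch, not the paper's fast scheme or its 2-bit variant, and the (7,4) Hamming parity-check matrix is chosen only for illustration:

```python
import numpy as np

def wbf_decode(H, y, max_iter=50):
    """Weighted bit-flipping: repeatedly flip the bit most strongly
    accused by the failed parity checks, each check weighted by its
    least reliable input.  H is a 0/1 parity-check matrix, y the
    received soft values for BPSK (0 -> +1, 1 -> -1)."""
    x = (y < 0).astype(int)                                  # hard decisions
    w = np.array([np.abs(y[row == 1]).min() for row in H])   # check reliabilities
    for _ in range(max_iter):
        s = H.dot(x) % 2                                     # syndrome
        if not s.any():
            break                                            # valid codeword found
        E = ((2 * s - 1) * w).dot(H)                         # flipping metric per bit
        x[np.argmax(E)] ^= 1                                 # flip the most suspect bit
    return x

# Toy run: (7,4) Hamming code, all-zero codeword, one bit hit by noise.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]])
y = np.ones(7)
y[2] = -0.9                      # channel error on bit 2
decoded = wbf_decode(H, y)       # recovers the all-zero codeword
```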

7.
List decoding is a method for decoding Reed-Solomon (RS) codes that generates a list of candidate transmitted messages instead of the single message produced by conventional algebraic decoding, making it possible to correct more errors. The Guruswami-Sudan (GS) algorithm is the most efficient list decoding algorithm for RS codes. Until recently, only a few papers in the literature suggested practical methods to implement the key steps (interpolation and factorisation) of the GS algorithm that make list decoding of RS codes feasible. However, the algorithm's high decoding complexity remains unsolved, and a novel complexity-reduced modification to improve its efficiency is presented here. A detailed explanation of the GS algorithm with the complexity-reduced modification is given, with simulation results of RS codes for different list decoding parameters over the AWGN and Rayleigh fading channels. A complexity analysis comparing the GS algorithm with the modified GS algorithm shows that the modification can reduce complexity significantly in low-error-weight situations. Simulation results using the modified GS algorithm show larger coding gains for RS codes with lower code rates, with more significant gains achieved over the Rayleigh fading channels.

8.
In this paper we present, by means of an example, a systematic procedure to synthesize combined modulation/error-correcting trellis codes suitable for Viterbi decoding. This synthesis is based on firstly selecting a suitable linear convolutional code, secondly analysing the state system of this code to determine the important Hamming-distance-building properties, and finally mapping a code with the desired restrictions on its sequences onto this state system. As an example we develop a R = 3/6 dc-free (b, l, C) = (0, 3, 2) code with d_min = 4. This code improves on the best codes in [1]. Codes having b >= 1, which will thus be more suitable for magnetic recording, can also be synthesized following the proposed procedure.

9.
A new distance-enhancing code for partial-response magnetic recording channels eliminates the most frequent errors while keeping the two-step code trellis time invariant. Recently published trellis codes either have lower code rates or result in time-varying trellises with a period of nine, thus requiring higher detector complexity and code synchronization. The new code introduces dependency between code words in order to achieve the same coding constraints as the 8/9 time-varying maximum transition runlength (TMTR) code, with the same code rate, but resulting in a trellis with a period of 2. This code has been applied to the E2PR4 and a 32-state generalized partial response (GPR) ISI target. The resulting two-step trellises have 14 and 28 states, respectively. Coding gain is demonstrated for both targets in additive white Gaussian noise.

10.
The authors deal with the sum-product algorithm (SPA) based on the hyperbolic tangent (tanh) rule when applied to decoding low-density parity-check (LDPC) codes. Motivated by the finding that, because of the large number of multiplications required by the algorithm, an overflow in the decoder may occur, two novel modifications of the tanh function (and its inverse) are proposed. By means of computer simulations, both methods are evaluated using random-based LDPC codes with binary phase shift keying (BPSK) signals transmitted over the additive white Gaussian noise (AWGN) channel. It is shown that the proposed modifications improve the bit error rate (BER) performance by up to 1 dB with respect to the conventional SPA. These results also show that the error floor is removed at BERs lower than 10^-6. Furthermore, two novel approximations are presented to reduce the computational complexity of the tanh function (and its inverse), based on either a piecewise linear function or a quantisation table. It is shown that the proposed approximations can slightly improve the BER performance (up to 0.13 dB) in the former case, whereas small BER performance degradation is observed (<0.25 dB) in the latter case. In both cases, however, the decoding complexity is reduced significantly.
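The overflow issue with the tanh rule can be seen in a generic check-node update: for large LLRs the product of tanh terms rounds to exactly ±1, and the inverse tanh diverges. The clipping below is a common safeguard and only a sketch of the general idea, not the paper's specific modifications or its piecewise-linear/table approximations; the clipping margin is a hypothetical value:

```python
import math

EPS = 1e-12  # hypothetical clipping margin

def safe_atanh(t):
    # atanh diverges as |t| -> 1; clip so the result stays finite.
    t = max(-1.0 + EPS, min(1.0 - EPS, t))
    return math.atanh(t)

def check_node_update(llrs):
    """tanh-rule check-node update of the SPA: the product of tanh(L/2)
    terms can round to exactly +/-1 for large LLRs, which would make
    the inverse tanh overflow without the clip above."""
    prod = 1.0
    for L in llrs:
        prod *= math.tanh(L / 2.0)
    return 2.0 * safe_atanh(prod)

out = check_node_update([50.0, 50.0, -40.0])   # would overflow unclipped
```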

11.
The subject of decoding Reed-Solomon codes is considered. By reformulating the Berlekamp-Welch key equation and introducing new versions of this key equation, two new decoding algorithms for Reed-Solomon codes are presented. The two new decoding algorithms are significant for three reasons. Firstly, the new equations and algorithms represent a novel approach to the extensively researched problem of decoding Reed-Solomon codes. Secondly, the algorithms have algorithmic and implementation complexity comparable to existing decoding algorithms, and as such present a viable solution for decoding Reed-Solomon codes. Thirdly, the new ideas presented suggest a direction for future research. The first algorithm uses the extended Euclidean algorithm and is very efficient for a systolic VLSI implementation. The second decoding algorithm is similar in nature to the original decoding algorithm of Peterson, except that the syndromes do not need to be computed and the remainders are used directly. It has a regular structure and will be efficient to implement only for correcting a small number of errors. A systolic design for computing the Lagrange interpolation of a polynomial, which is needed for the first decoding algorithm, is also presented. This research was supported by a grant from the Canadian Institute for Telecommunications Research under the NCE program of the Government of Canada.

12.
The Hamming distance properties are investigated, and some experimental results are presented for the following R = 1/2 binary dc-free modulation codes: the (b, l, C) = (1, 5, 3) Miller² code and codes with (b, l, C) equal to (1, 4, 3), (0, 3, 3), and (0, 1, 2), respectively. A R = 3/6, (b, l, C) = (0, 3, 2) combined error-correcting/modulation code is also investigated. State systems, power spectral densities, and the bit error rates after computer-simulated decoding of these codes on both the binary symmetric channel and a burst erasure channel are presented.

13.
As is often the case in public-key cryptography, the first practical identification schemes were based on hard problems from number theory (factoring, discrete logarithms). The security of the proposed scheme depends on an NP-complete problem from the theory of error-correcting codes: the syndrome decoding (SD) problem, which relies on the hardness of decoding a binary word of given weight and given syndrome. Starting from Stern's scheme [18], we define a dual version which, unlike the other schemes based on the SD problem, uses a generator matrix of a random linear binary code. This allows, among other things, an improvement of the transmission rate with regard to the other schemes. Finally, by using techniques of computation in a finite field, we show how it is possible to considerably reduce both the complexity of the computations done by the prover (which is usually a portable device with limited computing power) and the size of the data stored by the latter. Received March 10, 1995; revised version December 1, 1995
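The underlying hard problem can be stated concretely: given a parity-check matrix H, a syndrome s, and a weight w, find an error word e of Hamming weight w with He = s. A brute-force search, feasible only at toy sizes, makes the problem explicit; the matrix dimensions and planted word below are hypothetical illustration parameters, not values from the paper:

```python
import itertools

import numpy as np

def brute_force_syndrome_decode(H, s, w):
    """Exhaustively search for a binary word e of Hamming weight w with
    H e = s (mod 2).  The cost grows as C(n, w), which is what the
    identification scheme's security rests on at real parameters."""
    n = H.shape[1]
    for support in itertools.combinations(range(n), w):
        e = np.zeros(n, dtype=int)
        e[list(support)] = 1
        if np.array_equal(H.dot(e) % 2, s):
            return e
    return None

# Toy instance: random 4x8 parity-check matrix with a planted weight-2 word.
rng = np.random.default_rng(1)
H = rng.integers(0, 2, size=(4, 8))
e_true = np.zeros(8, dtype=int)
e_true[[1, 5]] = 1
s = H.dot(e_true) % 2
e_found = brute_force_syndrome_decode(H, s, 2)
```

Any returned word satisfies the syndrome and weight constraints, though it need not equal the planted word.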

14.
A bound on the minimum distance of Tanner codes / the expander codes of Sipser and Spielman is obtained. Furthermore, a generalization of a decoding algorithm of Zémor to Tanner codes is presented. The algorithm can be implemented with the same complexity as Zémor's and has a similar error-correction capability. Explicit families of Tanner codes for which the decoding algorithm is applicable are presented. Received: March 2, 2001; revised version: November 28, 2001. Key words: Tanner codes, expander codes, LDPC codes, decoding, minimum distance, expander graphs, Ramanujan graphs, N-gons, multi-partite graphs.

15.
The current study proposes decoding algorithms for low-density parity-check (LDPC) codes which offer competitive performance-complexity trade-offs relative to some of the most efficient existing decoding techniques. Unlike existing low-complexity algorithms, which are essentially reduced-complexity variations of the classical belief propagation algorithm, the starting point of the developed algorithms is the gradient projections (GP) decoding technique proposed by Kasparis and Evans (2007). The first part of this paper is concerned with the GP algorithm itself, and specifically with determining bounds on the step-size parameter over which convergence is guaranteed. The GP algorithm is then reformulated as a message-passing routine on a Tanner graph, and this new formulation allows the development of new low-complexity decoding routines. Simulation evaluations, performed mainly for geometry-based LDPC constructions, show that the new variations achieve performances and per-iteration complexities similar to the state-of-the-art algorithms. However, the developed algorithms offer the implementation advantages that the memory-storage requirement is significantly reduced, and that performance and convergence speed can be finely traded off by tuning the step-size parameter.

16.
Polar codes have become a new research focus in channel coding owing to their high reliability, practical linear encoding and decoding complexity, and their theoretically proven ability, unique among codes, to achieve the Shannon limit. Research on their encoding and decoding methods has extended to many channel types and application areas, but theoretical proofs and applied studies for underwater acoustic channels remain comparatively scarce and lagging. Targeting underwater acoustic channels, with their pronounced multipath, Doppler spreading, and limited bandwidth, this paper proposes a matching polar code ...

17.
We consider quasi-cyclic codes over the ring $\mathbb{F}_2+u\mathbb{F}_2+v\mathbb{F}_2+uv\mathbb{F}_2$, a finite non-chain ring that has recently been studied in coding theory. The Gray images of these codes are shown to be binary quasi-cyclic codes. Using this method we have obtained seventeen new binary quasi-cyclic codes that are new additions to the database of binary quasi-cyclic codes. Moreover, we also obtain a number of binary quasi-cyclic codes with the same parameters as best-known binary linear codes that otherwise have more complicated constructions.

18.
As is often the case in public-key cryptography, the first practical identification schemes were based on hard problems from number theory (factoring, discrete logarithms). The security of the proposed scheme depends on an NP-complete problem from the theory of error-correcting codes: the syndrome decoding (SD) problem, which relies on the hardness of decoding a binary word of given weight and given syndrome. Starting from Stern's scheme [18], we define a dual version which, unlike the other schemes based on the SD problem, uses a generator matrix of a random linear binary code. This allows, among other things, an improvement of the transmission rate with regard to the other schemes. Finally, by using techniques of computation in a finite field, we show how it is possible to considerably reduce:
- the complexity of the computations done by the prover (which is usually a portable device with limited computing power);
- the size of the data stored by the latter.

19.
It is well known that the performance of relay-based decode-and-forward (DF) cooperative networks outperforms that of amplify-and-forward cooperative networks. However, this performance improvement is accomplished at the expense of adding more signal processing complexity (precoding/decoding) at each relay node. In this study, the authors tackle this signal processing complexity issue by proposing a Jacket-based fast method for reducing the precoding/decoding complexity in terms of computation time. Jacket transforms have been shown to find applications in signal processing and coding theory. They are defined as n x n matrices A = (a_jk) over a field F with the property A A† = nI_n, where A† = (a_kj^-1) is the transpose of the element-wise inverse of A; they generalise Hadamard transforms and centre-weighted Hadamard transforms. In particular, exploiting the Jacket transform properties, the authors propose a new eigenvalue decomposition (EVD) method with application in precoding and decoding of distributive multi-input multi-output channels in relay-based DF cooperative wireless networks in which the transmission is based on single-symbol decodable space-time block codes. The authors show that the proposed Jacket-based EVD method has a significant reduction in computation time compared to the conventional EVD method. Performance in terms of computation-time reduction is evaluated quantitatively through mathematical analysis and numerical results.
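The defining Jacket property is easy to verify numerically. Hadamard matrices are the simplest instance: their entries are ±1, so the element-wise inverse is the matrix itself. This is an illustrative check of the matrix property only, not the authors' EVD construction:

```python
import numpy as np

def is_jacket(A):
    """Check the Jacket property: A times the transpose of its
    element-wise inverse equals n * I (for an n x n matrix A)."""
    n = A.shape[0]
    A_dagger = (1.0 / A).T   # transpose of the element-wise inverse
    return np.allclose(A.dot(A_dagger), n * np.eye(n))

# Sylvester construction of Hadamard matrices via Kronecker products.
H2 = np.array([[1.0, 1.0],
               [1.0, -1.0]])
H4 = np.kron(H2, H2)
```

For H2, the element-wise inverse equals H2 itself, and H2 times its transpose is 2I, so the property holds; a generic matrix fails it.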

20.
Kashani  Z.H. Shiva  M. 《Communications, IET》2007,1(6):1256-1262
The energy consumption of low-density parity-check (LDPC) codes in different implementations is evaluated. The decoder's complexity is reduced by finite-precision representation of messages, that is, a quantised LDPC decoder, and by the replacement of function blocks with look-up tables. It is shown that the decoder's energy consumption increases exponentially with the number of quantisation bits. For the sake of low power consumption, a 3-bit magnitude and 1 sign bit are used to represent messages in the decoder. It is concluded that high-rate Gallager codes are as energy efficient as the Reed-Solomon codes, which until now have been the first choice for wireless sensor networks (WSNs). Finally, it is shown that using LDPC codes in WSNs can be justified even further by applying the idea of trading transmitter power against decoder energy consumption. By exploiting the trade-off inherent in iterative decoding, the network lifetime is increased up to four times with the (3,6)-regular LDPC code. Hence, it is inferred that LDPC codes are more efficient than block and convolutional codes.
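The message quantisation described (1 sign bit plus a 3-bit magnitude) can be sketched as follows; the step size is a hypothetical parameter, not a value from the paper:

```python
def quantise_message(msg, step=0.5):
    """Map a real-valued decoder message to 1 sign bit plus a 3-bit
    magnitude (8 levels), saturating at the top level.  Coarser
    quantisation costs accuracy, but per the abstract the decoder's
    energy grows exponentially with the number of quantisation bits."""
    sign = -1.0 if msg < 0 else 1.0
    level = min(int(abs(msg) / step), 7)   # 3 magnitude bits: levels 0..7
    return sign * level * step

# Small messages quantise to nearby levels; large ones saturate.
print(quantise_message(1.3))    # 1.0
print(quantise_message(-10.0))  # -3.5 (saturated at level 7)
```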
