Similar Documents
20 similar documents found (search time: 62 ms)
1.
Known coset codes are adapted for use on partial response channels or to generate signals with spectral nulls. By using coset precoding and running digital sum feedback, any desired tradeoff can be achieved between the power and spectra of the relevant sequences, up to the optimum tradeoff possible. A fundamental theorem specifying this optimum tradeoff is given. A maximum-likelihood-sequence-estimation (MLSE) decoder for the original code may be used for the adapted code, and such a decoder then attains the minimum squared distance of the original code. These methods sometimes generate codes with greater minimum squared distance than that of the original code; this distance can be attained by augmented decoders, although such decoders inherently require long decoding delays and may be subject to quasi-catastrophic error propagation. The authors conclude that, at least for sequences supporting large numbers of bits per symbol, coset codes can be adapted to achieve effectively the same performance and complexity on partial response channels, or for sequences with spectral nulls, as they do in the ordinary memoryless case.
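The running digital sum (RDS) mentioned above is the standard bookkeeping quantity for a spectral null at DC: a sequence ensemble has a null at zero frequency when its RDS stays bounded. A minimal illustration (not the paper's coset construction; the bipolar sequences below are hypothetical examples):

```python
def running_digital_sum(symbols):
    """Cumulative sum of a bipolar (+1/-1) sequence; a bounded RDS
    corresponds to a spectral null at DC."""
    rds, total = [], 0
    for s in symbols:
        total += s
        rds.append(total)
    return rds

# an alternating sequence keeps the RDS within [0, 1] -> DC null
balanced = [+1, -1] * 8
# an all-ones sequence has a linearly growing RDS -> strong DC content
unbalanced = [+1] * 16
print(max(map(abs, running_digital_sum(balanced))),
      max(map(abs, running_digital_sum(unbalanced))))
```

The feedback scheme in the abstract keeps the RDS bounded while trading off sequence power against spectral shape.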

2.
High rate transmission can be realized using multiple orthogonal codes (MOC), as proposed in the third-generation wide-band code-division multiple-access (W-CDMA) standard. However, the linear sum of MOC channels is no longer of constant amplitude, and a highly linear, power-inefficient amplifier may be required for transmission. Recently, a nonlinear block coding technique called precoding was introduced to maintain a constant-amplitude signal after superposition of MOC channels. This is achieved by adding redundancy. In this paper, we first describe a multidimensional signaling scheme that recovers some of the information rate lost to precoding. Second, we propose a cancellation scheme for the self-interference (SI) resulting from code diversity between the in-phase and quadrature subchannels among MOC channels. In a typical wireless channel with multipath fading, this type of SI can be detrimental, especially when the number of parallel MOC channels is large. Third, we show that the error detection capability of precoding can be combined with code diversity, resulting in a diversity gain. In addition, we show that the diversity gain can be achieved using antenna diversity to assure the degrees of freedom in code diversity, and that even with a large number of MOC channels, the error performance can be maintained reliably while outperforming the variable spreading factor scheme in W-CDMA.

3.
We present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.
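For the binary erasure channel, the capacity (threshold) computation described above reduces to a one-dimensional density-evolution recursion. A sketch for a regular (3,6) ensemble, whose belief-propagation threshold is known to be about 0.4294 (the function names and bisection tolerances here are ours, not from the paper):

```python
def bp_threshold(dv=3, dc=6, max_iter=5000, tol=1e-10):
    """Bisect for the largest BEC erasure rate at which density evolution
    drives the message erasure probability of a regular (dv, dc) LDPC
    ensemble to zero under belief propagation."""
    def erasures_die_out(eps):
        x = eps
        for _ in range(max_iter):
            # density evolution: variable nodes AND (dv-1) inputs,
            # check nodes OR (dc-1) inputs
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                return True
        return False
    lo, hi = 0.0, 1.0
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if erasures_die_out(mid) else (lo, mid)
    return lo

print(bp_threshold())  # known (3,6) threshold is about 0.4294
```

Below the returned threshold the erasure probability converges to zero, matching the paper's statement that rates below capacity give vanishing error probability.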

4.
2n modified prime codes are designed for all-optical code-division multiple-access (CDMA) networks using very simple encoders and decoders. The proposed code is obtained from an original 2n prime code of prime number P by padding P-1 zeros into each 'subsequence' of the codewords of the corresponding 2n prime code. The cross-correlation constraint of the resulting 2n modified prime code is equal to one, as opposed to two for a 2n prime code. For a given bit error rate (BER), the proposed code can thus support a larger number of active users in a fibre-optic CDMA network than a 2n prime code. Moreover, using the former also reduces the code length and weight required to achieve the same BER compared with the latter.
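The padding idea can be illustrated on the basic prime-sequence codes from which the 2n codes are derived (this sketch uses ordinary prime codes, not the paper's 2n variant): appending P-1 zeros to each length-P subsequence drops the maximum periodic cross-correlation from two to one.

```python
def prime_code(P, i, pad=0):
    """Pulse positions of prime-sequence codeword i (i = 1..P-1 here;
    code 0 omitted for brevity), with `pad` zeros appended to each
    length-P subsequence."""
    block = P + pad
    return {j * block + (i * j) % P for j in range(P)}

def max_cross_corr(A, B, length):
    """Maximum periodic cross-correlation of two 0/1 sequences given as
    pulse-position sets, over all cyclic shifts."""
    return max(sum((a - tau) % length in B for a in A) for tau in range(length))

P = 5
plain = [prime_code(P, i) for i in range(1, P)]
padded = [prime_code(P, i, pad=P - 1) for i in range(1, P)]
worst = lambda codes, L: max(max_cross_corr(a, b, L)
                             for x, a in enumerate(codes) for b in codes[x + 1:])
# padding lowers the worst-case cross-correlation from 2 to 1
print(worst(plain, P * P), worst(padded, P * (2 * P - 1)))
```

The lower cross-correlation bound is what lets the modified code admit more simultaneous users at a given BER.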

5.
Passive optical fast frequency-hop CDMA communications system   (cited 21 times: 0 self-citations, 21 by others)
This paper proposes an all-fiber fast frequency-hop code-division multiple-access (FFH-CDMA) system for high-bandwidth communications. The system does not require an optical frequency synthesizer, allowing high communication bit rates. Encoding and decoding are passively achieved by Bragg gratings; multiple Bragg gratings replace a frequency synthesizer, achieving hopping rates in the tens of GHz. A main-lobe sine apodization can be used in writing the gratings to enhance the system capacity and spectral efficiency. All network users can use the same tunable encoder/decoder design. The simultaneous use of the time and frequency domains offers notable flexibility in code selection. Simulations show that the encoder efficiently performs FFH spread-spectrum signal generation and that the receiver easily extracts the desired signal from the received signal under several multiple-access interference scenarios. We measure the system performance in terms of bit error rate, as well as auto- to cross-correlation contrast. A transmission rate of 500 Mb/s per user is supported with up to 30 simultaneous users at a 10^-9 bit error rate. We compare FFH-CDMA to several direct-sequence CDMA systems in terms of bit error rate versus the number of simultaneous users. We show that an optical FFH-CDMA system requires new design criteria for code families, as optical device technology differs significantly from that of radio-frequency communications.

6.
Crossfield, M. IEE Review, 2001, 47(1):31-34
This article is concerned exclusively with data tags which are machine readable and which do not require contact between the tag and the reader. Such devices are of considerable commercial interest, either because they speed up a data collection process, or because they provide a new function, or both. The goal of very low cost, vibration-free tagging was finally realised with SGL's 1995 invention of flying null (FN) technology. An FN data tag is the magnetic equivalent of an optical bar code. In the case of FN, however, the bars in the code are made from a very soft (low coercivity), high permeability, magnetic alloy, and the tag is read using a special magnetic reader. Very little material is required and there are no special packaging requirements, so FN tags can be made for a fraction of the cost of other tags. The reader is the key to the technology. It creates a narrow region of zero field (a null) in space, surrounded by regions where the field strength is sufficient to saturate the magnetic material used in the tag. In a typical implementation, it also applies a low-amplitude alternating magnetic field to the interrogation region, so that a soft magnetic element in the null region is driven into and out of saturation, thereby radiating harmonics of the interrogation frequency. These harmonics can be detected, and their time of occurrence related to the position of the element with respect to the null. In typical systems, a spatial resolution of better than 50 μm can be achieved for a reader-to-tag separation of many millimetres.

7.
Templates are constructed to extend arbitrary additive error correcting or constrained codes, i.e., additional redundant bits are added in selected positions to balance the moment of the codeword. The original codes may have error correcting capabilities or constrained output symbols as predetermined by the usual communication system considerations, which are retained after extending the code. Using some number-theoretic constructions in the literature, insertion/deletion correction can then be achieved. If the template is carefully designed, the number of additional redundant bits for the insertion/deletion correction can be kept small, in some cases of the same order as the number of parity bits in a Hamming code of comparable length.
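The number-theoretic constructions alluded to are Varshamov-Tenengolts (VT) style codes, which constrain the first moment sum(i*x_i) of the codeword; balancing the moment with redundant bits places the word in such a code. A small brute-force check that the moment constraint really yields single-deletion correction (the code length here is chosen for illustration):

```python
from itertools import product

def vt_codebook(n, a=0):
    """All length-n binary words whose first moment sum(i * x_i), positions
    indexed from 1, is congruent to a mod (n+1): the VT code VT_a(n)."""
    return [w for w in product((0, 1), repeat=n)
            if sum(i * b for i, b in enumerate(w, 1)) % (n + 1) == a]

def deletions(w):
    """All distinct words obtainable by deleting one symbol of w."""
    return {w[:i] + w[i + 1:] for i in range(len(w))}

# classical property: the single-deletion "balls" of VT codewords are
# disjoint, so one deleted bit can always be undone
book = vt_codebook(6)
seen = {}
for w in book:
    for d in deletions(w):
        assert seen.setdefault(d, w) == w  # no subword from two codewords
print(len(book))  # |VT_0(6)| = 10
```

The template design in the abstract amounts to choosing the redundant-bit positions so that the moment can always be steered to the required residue.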

8.
We address an open question regarding whether a lattice code with lattice decoding (as opposed to maximum-likelihood (ML) decoding) can achieve the additive white Gaussian noise (AWGN) channel capacity. We first demonstrate how minimum mean-square error (MMSE) scaling along with dithering (lattice randomization) techniques can transform the power-constrained AWGN channel into a modulo-lattice additive noise channel, whose effective noise power is reduced by a factor of SNR/(1+SNR). For the resulting channel, a uniform input maximizes mutual information, which in the limit of large lattice dimension becomes (1/2) log(1+SNR), i.e., the full capacity of the original power-constrained AWGN channel. We then show that capacity may also be achieved using nested lattice codes, with the coarse lattice serving for shaping via the modulo-lattice transformation and the fine lattice for channel coding. We show that such pairs exist for any desired nesting ratio, i.e., for any signal-to-noise ratio (SNR). Furthermore, for the modulo-lattice additive noise channel, lattice decoding is optimal. Finally, we show that the error exponent of the proposed scheme is lower-bounded by the Poltyrev exponent.
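The noise-reduction factor follows from one line of MMSE estimation. In standard notation (ours, not necessarily the paper's), with signal power P, noise power N, and SNR = P/N, the MMSE-scaled residual noise Z_eff = alpha*Z - (1-alpha)*X satisfies:

```latex
\begin{aligned}
\alpha &= \frac{P}{P+N} \quad\text{(MMSE coefficient)},\\[2pt]
\operatorname{Var}(Z_{\mathrm{eff}}) &= \alpha^{2} N + (1-\alpha)^{2} P
  = \frac{PN}{P+N}
  = N \cdot \frac{\mathrm{SNR}}{1+\mathrm{SNR}},\\[2pt]
\tfrac{1}{2}\log\frac{P}{\operatorname{Var}(Z_{\mathrm{eff}})}
  &= \tfrac{1}{2}\log\frac{P+N}{N}
  = \tfrac{1}{2}\log\left(1+\mathrm{SNR}\right).
\end{aligned}
```

The second line is the factor quoted in the abstract; the third shows why the modulo-lattice channel supports the full AWGN capacity.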

9.
Random codes: minimum distances and error exponents   (cited 1 time: 0 self-citations, 1 by others)
Minimum distances, distance distributions, and error exponents on a binary-symmetric channel (BSC) are given for typical codes from Shannon's random code ensemble and for typical codes from a random linear code ensemble. A typical random code of length N and rate R is shown to have minimum distance Nδ_GV(2R), where δ_GV(R) is the Gilbert-Varshamov (GV) relative distance at rate R, whereas a typical linear code (TLC) has minimum distance Nδ_GV(R). Consequently, a TLC has a better error exponent on a BSC at low rates, namely, the expurgated error exponent.
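The GV relative distance used above is the root of the binary entropy equation h2(d) = 1 - R on [0, 1/2], so the random-vs-linear gap can be computed directly (a standard numerical sketch, not code from the paper):

```python
from math import log2

def h2(p):
    """Binary entropy function."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def delta_gv(R):
    """Gilbert-Varshamov relative distance: root of h2(d) = 1 - R on [0, 1/2],
    found by bisection (h2 is increasing on this interval)."""
    lo, hi = 0.0, 0.5
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h2(mid) < 1.0 - R else (lo, mid)
    return 0.5 * (lo + hi)

R = 0.25
# typical linear code: N * delta_gv(R); typical random code: only N * delta_gv(2R)
print(delta_gv(R), delta_gv(2 * R))
```

At R = 1/4 the typical linear code's relative distance δ_GV(0.25) clearly exceeds the random ensemble's δ_GV(0.5), which is the source of the better low-rate error exponent.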

10.
Distributed classification fusion using error-correcting codes (DCFECC) has recently been proposed for wireless sensor networks operating in harsh environments. It has been shown to have a considerably better capability against unexpected sensor faults than optimal likelihood fusion. In this paper, we analyze the performance of a DCFECC code with minimum Hamming distance fusion. No assumption of identically distributed local observations, or of a common marginal distribution for the additive noises of the wireless links, is made. In addition, sensors are allowed to employ their own local classification rules. Upper bounds on the probability of error that are valid for any finite number of sensors are derived using a large-deviations technique. A necessary and sufficient condition under which the minimum Hamming distance fusion error vanishes as the number of sensors tends to infinity is also established. With this condition and the upper error bounds, the relation between the fault-tolerance capability of a DCFECC code and its pairwise Hamming distances is characterized, and can be used together with any code search criterion in finding a code with the desired fault-tolerance capability. Based on these results, we further propose a code search criterion of much lower complexity than the minimum Hamming distance fusion error criterion adopted earlier by the authors. This makes code construction with acceptable fault-tolerance capability practical for networks with over a hundred sensors. Simulation results show that the code determined by the new, lower-complexity criterion performs almost identically to the best code minimizing the minimum Hamming distance fusion error. The performance trends of the codes found with the simpler criterion with respect to network size and number of hypotheses are also simulated and discussed.

11.
Using a mathematical proof, the authors establish that in element-by-element greedy algorithms based on extended set representation of optical orthogonal codes (OOCs), smaller delay elements rejected during a construction step can be accepted in later steps. They design a novel algorithm that exploits this property and call it the rejected delays reuse (RDR) greedy algorithm. They show that employing the RDR method leads to code lengths that are shorter than those achieved for OOCs constructed using the classical greedy algorithm for the same code weight and the same number of simultaneous codes constraints. They then define a quantitative measure (factor) for OOCs efficiency based on its ability to expand subwavelength-switching capacity. They call this factor the expansion efficiency factor. They use this factor to show that reducing the code length, for the same code constraints, enhances the capacity of subwavelength optical code switched networks.

12.
In an incoherent direct-sequence optical code-division multiple-access (DS-OCDMA) system, multiple-access interference (MAI) is one of the principal limitations. To mitigate MAI, the parallel interference cancellation (PIC) technique can be used to remove the nondesired users' contribution. In this paper, we study four DS-OCDMA receivers based on the PIC technique, with hard limiters placed before the nondesired-user receivers, before the desired-user receiver, or both. We develop, for the ideal synchronous case, theoretical upper bounds on the error probability for the four receivers. Significant performance improvement is obtained in comparison with conventional receivers in the case of optical orthogonal codes. The paper highlights that the number of active users with null error probability is doubled compared with conventional receivers. Finally, we show that, thanks to their good performance, the PIC structures permit a considerable reduction in the minimal code length required to support 30 users with a bit-error rate below 10^-9, relaxing the hardware constraints for realistic applications.

13.
Construction of Low-Density Parity-Check Codes Approaching the Shannon Limit   (cited 3 times: 0 self-citations, 3 by others)
The performance of a low-density parity-check (LDPC) code depends to a large extent on its girth (minimum cycle length) and minimum distance. This paper uses a geometric construction to build LDPC codes with girth 8 and combines it with a random search algorithm to improve the codes' weight distribution. At a code length of 4k and a code rate of 0.95, the constructed LDPC codes are only 1.1 dB away from the Shannon limit.

14.
Hierarchical Speaker Recognition Based on Speaker Classification   (cited 3 times: 0 self-citations, 3 by others)
刘文举  孙兵  钟秋海 《电子学报》2005,33(7):1230-1233
Recognition accuracy and noise robustness are certainly central concerns in speaker recognition research, but recognition response speed is also key to making a system practical. This paper proposes a hierarchical speaker identification method based on speaker classification, which greatly improves the system's running speed; as the number of enrolled speakers grows, its advantage over conventional speaker identification methods becomes more pronounced. In speaker verification, the method further improves verification accuracy and effectively reduces the false acceptance and false rejection rates. A confidence scoring method proposed in this paper also improves system performance to some extent. Experiments show that the classification-based identification method speeds up the system by a factor of 3.5 on average, and reduces the equal error rate and minimum error rate of speaker verification by 53.75% on average.

15.
In-band full-duplex (FD) is being considered as a promising technology for next-generation wireless communication systems. In this paper, the performance of an orthogonal frequency division multiplexing (OFDM) system with different symbol durations and of a code spreading system with different spreading sequence lengths is investigated under a time-varying self-interference (SI) channel in FD mode. Typically, the SI channel is estimated during the SI cancellation period and used for SI suppression over the whole data transmission period. First, expressions for the residual SI power during the data transmission period are derived under the classical, the uniform, and the two-way Doppler SI channels. Second, the signal-to-interference-and-noise ratio after SI mitigation is obtained. Third, the symbol error rates for the OFDM and code spreading systems are given. Simulation results show that a longer OFDM symbol should be selected when the symbol duration is significantly shorter than the SI coherence time, while a shorter spreading sequence should be chosen for the code spreading system.

16.
Let a q-ary linear (n,k)-code be used over a memoryless channel. We design a soft-decision decoding algorithm that tries to locate a few most probable error patterns on a shorter length s ∈ [k,n]. First, we take s cyclically consecutive positions starting from any initial point. Then we cut the subinterval of length s into two parts and examine the T most plausible error patterns on either part. To obtain codewords of a punctured (s,k)-code, we try to match the syndromes of both parts. Finally, the designed codewords of the (s,k)-code are re-encoded to find the most probable codeword on the full length n. For any long linear code, the decoding error probability of this algorithm can be made arbitrarily close to the probability of its maximum-likelihood (ML) decoding given sufficiently large T. By optimizing s, we prove that this near-ML decoding can be achieved using only T ≈ q^((n-k)k/(n+k)) error patterns. For most long linear codes, this optimization also gives about T re-encoded codewords. As a result, we obtain the lowest complexity order of q^((n-k)k/(n+k)) known to date for near-ML decoding. For codes of rate 1/2, the new bound grows as the cubic root of the general trellis complexity q^(min{n-k,k}). For short blocks of length 63, the algorithm reduces the complexity of the trellis design by a few decimal orders.

17.
In this paper, we present a performance analysis of a wireless multimedia direct-sequence code-division multiple-access (DS/CDMA) system based on different error control schemes and an optimal power control algorithm over multipath Rayleigh fading channels. The error control schemes consist of forward error correction (FEC), diversity, and automatic repeat request (ARQ). Concatenated codes with a Reed-Solomon outer code and a convolutional inner code are used as FEC. Since a multimedia system is required to support services with different rates and quality-of-service (QoS) requirements, different error control schemes are used to satisfy the requirements of different media. In particular, a power control algorithm which can optimize the capacity performance of the integrated system is presented. Numerical results show that power optimization can increase the capacity and decrease the total transmission power. By incorporating diversity and hybrid ARQ along with appropriate code rates in the optimal power controlled system, a dramatic increase in system capacity can also be achieved.

18.
In this paper, we study the performance of different graph-based error-correcting codes over Gilbert-Elliott channels. We propose a hybrid coding scheme in which each code bit is checked by both a parity check and a Hamming code. A hybrid code can be represented by a code-to-code graph, which can be used to optimize the code. Asymptotic minimum distance properties of the hybrid code are derived, and it is shown that the expected minimum distance of the hybrid code increases linearly with the code length. Simulation results show that for a typical Gilbert-Elliott channel, hybrid codes can outperform regular low-density parity-check codes by more than an order of magnitude in terms of bit-error rate.

19.
In this paper, we study the use of channel coding in a direct-sequence code-division multiple-access (DS-CDMA) system that employs space-time adaptive minimum mean-square error (MMSE) interference suppression over Rayleigh fading channels. It is shown that the employment of adaptive antenna arrays at the receiver can assist in attenuating multiuser interference and at the same time speeds up the convergence rate of the adaptive receiver. In this work, we assess the accuracy of the theoretical results developed for the uncoded and convolutionally coded space-time multiuser detector when applied to the adaptive case. It is found that the use of antenna arrays brings the receiver performance very close to that of its multiuser counterpart. Using performance error bounds, we show that a user-capacity gain of approximately 200% can easily be achieved for the space-time adaptive detector when used with a rate-1/2 convolutional code (CC) and a practical channel interleaver. This capacity gain is only 10% less than the gain achieved for the more complicated multiuser-based receiver. Finally, we perform a comparison between convolutional and turbo coding, where we find that the latter outperforms the former at all practical bit-error rates (BER). Copyright © 2006 John Wiley & Sons, Ltd.

20.
This paper proposes an adaptive EFDR (extended frequency-directed run-length) coding method for test data compression. Built on EFDR coding, the method adds a parameter N that represents the difference between the suffix and prefix code lengths; for each test vector in the test set, the most suitable value of N is selected for encoding according to the vector's run-length distribution, which improves coding efficiency. For decoding, the run lengths of the original test data can be recovered from the encoded codewords by simple arithmetic, and codewords produced under different values of N can all be decoded by the same decoding circuit, so the decoder has a small hardware overhead. Experiments on some of the ISCAS-89 benchmark circuits show that the method achieves an average compression ratio of 69.87%, a 4.07% improvement over the original EFDR coding method.
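The per-vector parameter selection can be illustrated with a simplified Golomb-Rice stand-in for the EFDR code table (the cost model and function names here are ours; the real EFDR prefix/suffix structure differs): each test vector's run-length distribution determines the parameter that minimizes the encoded length.

```python
def run_lengths(bits):
    """Lengths of runs of 0s terminated by a 1 (the run-length view used by
    FDR-style codes); trailing 0s with no closing 1 are ignored in this sketch."""
    runs, count = [], 0
    for b in bits:
        if b == 0:
            count += 1
        else:
            runs.append(count)
            count = 0
    return runs

def rice_bits(run, k):
    # Golomb-Rice cost: unary quotient (run >> k, plus stop bit) + k-bit remainder
    return (run >> k) + 1 + k

def best_param(bits, kmax=6):
    """Per-vector adaptive choice: the parameter k minimizing total encoded length."""
    runs = run_lengths(bits)
    costs = {k: sum(rice_bits(r, k) for r in runs) for k in range(kmax + 1)}
    return min(costs, key=costs.get)

# a vector with long 0-runs prefers a larger parameter than one with short runs
long_runs = [0] * 30 + [1] + [0] * 28 + [1]
short_runs = [0, 1, 1, 0, 0, 1]
print(best_param(long_runs), best_param(short_runs))
```

This mirrors the adaptive step in the abstract: the encoder picks one parameter per test vector, and a single decoder handles every parameter value.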


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  ICP license: 京ICP备09084417号