Similar Documents
1.
This paper proposes a transmit-diversity system using a pair of orthogonal pulses. The system uses a set of orthonormal basis functions containing four shaped sinusoidal pulses of the same frequency. The first two elements of the set are shaped sine and cosine pulses. The second two elements are the same sine and cosine pulses, but shaped with the Hilbert transform of the shaping pulse of the first two. The modulator produces two modulated symbols for each data symbol: it uses the first two elements of the set to modulate the first symbol and the second two elements to modulate the second. The modulated symbols are transmitted through two antennas. The diversity order of the proposed system is twice the number of receive antennas. No space-time coding is used, and the channel gains change every symbol period, which distinguishes the system from a conventional Multiple-Input Multiple-Output (MIMO) system. The receiver consists of two matched filters per receive antenna. No special detectors or interference-cancellation techniques are needed because there is no interference between the outputs of the matched filters.
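The four-pulse set described above can be sketched numerically. The shaping pulse below is a Hann window, a hypothetical choice since the abstract does not specify the paper's shaping pulse; the Gram matrix of the normalized pulses comes out close to the identity, illustrating the (near-)orthonormality the paper relies on:

```python
import numpy as np

def hilbert_im(x):
    """Discrete Hilbert transform via the FFT (imaginary part of the analytic signal)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h).imag

N = 1024
t = np.arange(N)
g = np.hanning(N)          # shaping pulse g(t): illustrative Hann choice
gh = hilbert_im(g)         # Hilbert transform of the shaping pulse
fc = 32.0 / N              # 32 carrier cycles across the pulse

# The four shaped sinusoidal pulses of the set, normalized to unit energy
basis = [g * np.sin(2 * np.pi * fc * t),
         g * np.cos(2 * np.pi * fc * t),
         gh * np.sin(2 * np.pi * fc * t),
         gh * np.cos(2 * np.pi * fc * t)]
basis = [b / np.linalg.norm(b) for b in basis]

# Gram matrix of the pulse set: near-identity means near-orthonormal
gram = np.array([[float(np.dot(a, b)) for b in basis] for a in basis])
```

With many carrier cycles inside the pulse, the cross-correlations are negligible, which is why the matched-filter outputs do not interfere.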

2.
A sequential decoding algorithm for arithmetic codes with forbidden symbols
徐向明  彭坦  崔慧娟  唐昆 《通信技术》2009,42(4):154-155
How to detect errors effectively and how to construct the coding tree are two key problems in the study of the error resilience of arithmetic codes. This paper uses multiple forbidden symbols to achieve fast, efficient error detection, combined with a sequential decoding algorithm that prunes branch nodes of the coding tree, improving the error resilience of arithmetic codes while reducing the complexity of sequential decoding. Simulation results show that, at the same packet error rate, multiple forbidden symbols outperform a single forbidden symbol by 0.5 dB. Moreover, for a fixed packet length, the choice of forbidden-symbol redundancy is closely tied to the stack size of the sequential decoder; jointly optimizing the two yields the best performance.
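The forbidden-symbol mechanism behind this and several later entries can be sketched with a toy floating-point arithmetic coder. For clarity this sketch reserves a single forbidden slice of width `eps` (the paper uses multiple forbidden symbols) and uses doubles rather than a practical fixed-precision integer implementation:

```python
def encode(bits, p1=0.5, eps=0.05):
    """Arithmetic-encode a bit sequence, reserving a slice of width eps of
    every coding interval for a forbidden symbol the encoder never emits."""
    p0 = (1 - eps) * (1 - p1)        # width of the '0' sub-interval
    w1 = (1 - eps) * p1              # width of the '1' sub-interval
    low, high = 0.0, 1.0
    for b in bits:
        r = high - low
        if b == 0:
            high = low + r * p0
        else:
            low, high = low + r * p0, low + r * (p0 + w1)
    return (low + high) / 2          # any number inside the final interval

def decode(x, n, p1=0.5, eps=0.05):
    """Decode n bits; landing in the forbidden slice means the codeword was
    corrupted, so the decoder reports an error."""
    p0 = (1 - eps) * (1 - p1)
    w1 = (1 - eps) * p1
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        u = (x - low) / (high - low)
        if u < p0:
            out.append(0)
            high = low + (high - low) * p0
        elif u < p0 + w1:
            out.append(1)
            low, high = low + (high - low) * p0, low + (high - low) * (p0 + w1)
        else:
            return out, True          # forbidden symbol decoded: error detected
    return out, False
```

Each decoded symbol gives an independent chance `eps` of catching a corrupted codeword, at a rate cost of roughly `-log2(1 - eps)` bits per symbol; that trade-off is what the redundancy/stack-size optimization above tunes.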

3.
JPEG2000 is a recently standardized image compression algorithm. The heart of this algorithm is the coding scheme known as embedded block coding with optimal truncation (EBCOT), which accounts for the majority of the processing time of the compression algorithm. The EBCOT scheme consists of a bit-plane coder coupled to an MQ arithmetic coder. Recent bit-plane coder architectures are capable of producing symbols at a higher rate than existing MQ arithmetic coders can absorb, so there is a need for a high-throughput MQ arithmetic coder. We examine existing MQ arithmetic coder architectures and develop novel techniques capable of absorbing the high symbol rate of high-performance bit-plane coders while providing flexible design choices.

4.
Wavelet difference reduction (WDR) has recently been proposed as a method for efficient embedded image coding. In this paper, the WDR algorithm is analysed and four new techniques are proposed to either reduce its complexity or improve its rate-distortion (RD) performance. The first technique, dubbed modified WDR-A (MWDR-A), improves the efficiency of the arithmetic coding (AC) stage of WDR: based on experiments with the statistics of the output symbol sequence, it is shown that the symbols can either be arithmetic coded under different contexts or output without AC. In the second technique, MWDR-B, the AC stage is dropped from the coder; compared to WDR, this saves up to 20% of coding time without sacrificing RD performance. The third technique improves RD performance using context modelling: a low-complexity context model is proposed to exploit the statistical dependency among the wavelet coefficients. This technique, termed context-modelled WDR (CM-WDR), works without the AC stage and improves RD performance by up to 1.5 dB over WDR on a set of test images at various bit rates. The fourth technique combines CM-WDR with AC and achieves a 0.2 dB improvement over CM-WDR in terms of PSNR. The proposed techniques retain all the features of WDR, including low complexity, region-of-interest capability, and embeddedness.

5.
Targeting efficient, reliable, and secure transmission in resource-constrained networks such as deep-space and mobile communications, this paper proposes a joint source-channel and secure arithmetic coding/decoding algorithm controlled by chaotic keys. At the encoder, chaotic map 1 controls the embedding of multiple forbidden symbols in the arithmetic code, combining channel-coding error detection with keystream scrambling; at the same time, chaotic map 2 controls the arithmetic coding of the source symbols, combining source coding with information security. Together these realize joint source-channel and secure coding/decoding. Experimental results show that, compared with existing algorithms of the same kind, the proposed algorithm improves coding/decoding performance by 0.4 dB at a packet error rate of 10^-3, while enhancing both reliability and security.

6.
We address the problem of constructing an adaptive arithmetic code in the case where the source alphabet is large and many distinct symbols have equal counts of occurrence. For an alphabet of N symbols with r distinct symbol weights, we describe a code for which the number of operations needed for encoding and decoding is c log r + c1 instead of c log N + c2 as in previous arithmetic codes, where c, c1, c2 are constants. When r is small relative to N, which is the case for most practical coding problems on large alphabets, the encoding and decoding speed of the suggested code is substantially greater than with known methods.
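The O(log r) behaviour can be illustrated with a grouped cumulative-frequency model: if symbols with equal counts are kept contiguous, both encoder and decoder lookups bisect over the r weight groups rather than the N symbols. This is a sketch of the general idea, not the paper's exact data structure:

```python
from bisect import bisect_right

class GroupedModel:
    """Frequency model for an alphabet of N symbols with only r distinct counts.
    Symbols are assumed renumbered so equal-count symbols are contiguous;
    `groups` is a list of r pairs (count_per_symbol, number_of_symbols).
    Both lookups cost O(log r) instead of O(log N)."""

    def __init__(self, groups):
        self.counts = [c for c, _ in groups]
        self.sym_start = [0]           # first symbol index of each group
        self.cum_start = [0]           # cumulative frequency at each group start
        for c, size in groups:
            self.sym_start.append(self.sym_start[-1] + size)
            self.cum_start.append(self.cum_start[-1] + c * size)
        self.total = self.cum_start[-1]

    def interval(self, k):
        """Cumulative-frequency interval [lo, hi) of symbol k (encoder side)."""
        g = bisect_right(self.sym_start, k) - 1
        lo = self.cum_start[g] + (k - self.sym_start[g]) * self.counts[g]
        return lo, lo + self.counts[g]

    def symbol(self, f):
        """Symbol whose interval contains cumulative frequency f (decoder side)."""
        g = bisect_right(self.cum_start, f) - 1
        return self.sym_start[g] + (f - self.cum_start[g]) // self.counts[g]
```

Within a group all intervals have the same width, so the position inside the group is a single multiplication or division; only the group search depends on r.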

7.
8.
In this paper, we propose a new approach to block-based lossless image compression by defining a new semiparametric finite-mixture-model-based adaptive arithmetic coding. Conventional adaptive arithmetic encoders start encoding a sequence of symbols with a uniform distribution and update the frequency of each symbol by incrementing its count after it has been encoded. When encoding an image row by row or block by block, conventional adaptive arithmetic encoders give the same compression results. Moreover, images are normally non-stationary signals: different areas of an image have different probability distributions, so conventional adaptive arithmetic encoders, which use a single probability model for the whole image, are not very efficient. In the proposed compression scheme, an image is divided into non-overlapping blocks of pixels, which are separately encoded with an appropriate statistical model. Hence, instead of starting to encode each block with a uniform distribution, we propose to start with a probability distribution modeled by a semiparametric mixture obtained from the distributions of its neighboring blocks. The semiparametric model parameters are estimated through maximum likelihood using the expectation-maximization algorithm in order to maximize the arithmetic coding efficiency. Comparative experiments show significant improvements over conventional adaptive arithmetic encoders and the state-of-the-art lossless image compression standards.
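The block-initialization idea can be sketched as follows. This is illustrative only: the mixture weights over neighboring blocks are fixed here, whereas the paper estimates the semiparametric mixture by maximum likelihood / EM, and the function name is hypothetical:

```python
import numpy as np

def init_block_model(neighbor_hists, weights=None, floor=1.0):
    """Initial probability model for a block's adaptive arithmetic coder:
    a weighted mixture of the neighboring blocks' symbol histograms plus a
    small uniform floor, so no symbol starts with zero probability.
    Sketch only; the paper fits the mixture weights by EM."""
    H = np.asarray(neighbor_hists, dtype=float)   # one histogram per neighbor
    if weights is None:
        weights = np.full(len(H), 1.0 / len(H))   # fixed equal weights (illustrative)
    counts = weights @ H + floor                  # mixed counts with uniform floor
    return counts / counts.sum()                  # normalized initial model
```

Starting from this mixture rather than a uniform distribution means the coder is already close to the block's true statistics from the first symbol, which is where the compression gain over conventional adaptive coders comes from.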

9.
Although the JPEG2000 compression standard has high coding efficiency, its error resistance and security cannot meet the requirements of practical applications. To address this, a fast bidirectionally decodable arithmetic coding method with chaotic redundancy and threshold control is proposed. At the encoder, a chaotic map controls the probabilities of multiple redundant symbols to enhance the security of the arithmetic code. At the decoder, threshold control and bidirectional decoding are combined to realize fast decoding based on maximum a posteriori estimation. Simulation results show that the proposed method improves reconstructed image quality with better error resistance and security.

10.
Optimal designs for space-time linear precoders and decoders
We introduce a new paradigm for the design of transmitter space-time coding that we refer to as linear precoding. It leads to simple closed-form solutions for transmission over frequency-selective multiple-input multiple-output (MIMO) channels, which are scalable with respect to the number of antennas, the size of the coding block, and the transmit average/peak power. The scheme operates as a block transmission system in which vectors of symbols are encoded and modulated through a linear mapping operating jointly in the space and time dimensions. The specific designs target minimization of the symbol mean square error and approximate maximization of the minimum distance between symbol hypotheses, under average and peak power constraints. The solutions are shown to convert the MIMO channel with memory into a set of parallel flat fading subchannels, regardless of the design criterion, while the appropriate power/bit loading on the subchannels is the specific signature of each design. The proposed designs are compared in terms of various performance measures such as information rate, BER, and symbol mean square error.
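The conversion of a MIMO channel into parallel subchannels is easiest to see in the flat-fading, noiseless case, where SVD-based linear precoding and decoding diagonalizes the channel (a standard construction; the paper treats the more general frequency-selective, block-transmission case with power/bit loading on top):

```python
import numpy as np

rng = np.random.default_rng(0)
nt = nr = 4
# A random flat-fading MIMO channel matrix
H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))

# Linear precoder F = V and decoder G = U^H from the SVD H = U diag(s) V^H
U, s, Vh = np.linalg.svd(H)
F = Vh.conj().T
G = U.conj().T

x = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)  # symbol block
y = G @ (H @ (F @ x))   # noiseless receive chain after linear decoding

# y[i] == s[i] * x[i]: the MIMO channel has become parallel scalar subchannels
```

Since G H F = diag(s) exactly, each symbol sees only its own scalar gain; the different designs in the paper then differ in how power and bits are allocated across these gains.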

11.
Reed-Solomon (RS) error-correcting codes are often proposed for communication systems requiring burst and/or erasure correction capabilities. In most cases, the modulation symbol size is fixed a priori, so the effective application of RS coding requires proper matching of the code symbol size to the modulation symbol size. Previous results have assumed that the RS codeword symbol size is an integer multiple of the modulation symbol size, or have relied on simple approximations. In this paper, the exact symbol error probability with K-bit M-ary modulation symbols and q-bit RS codeword symbols is presented.

12.
余轮  许明  周霆  陈东侠 《电讯技术》2007,47(3):48-51
IS-95 has two rate sets. Within the same rate set, frames at different rates have different lengths, so to make the frames entering orthogonal modulation the same length, IS-95 uses a symbol repeater. The repeater simply repeats symbols, so the bandwidth is not fully utilized. The method proposed in this paper uses a signal generator to produce a known signal that is fed into the convolutional encoder; the frame length after convolutional encoding still meets the requirement, while the known signal produced by the generator can serve as a constraint at the receiver for constrained Viterbi decoding. Simulation results on a BSC channel model show that the method performs well at relatively high error rates.

13.
刘军清  李天昊 《通信学报》2007,28(9):112-118
This paper studies adaptive joint source-channel coding and proposes a new joint source-channel coding/decoding system based on error-correcting arithmetic codes. At the encoder, the system embeds a forbidden symbol in the arithmetic code to realize integrated source-channel coding: using a Markov source model, it adaptively adjusts the probability of the forbidden symbol, and hence the coding rate, according to the channel state information. At the decoder, a MAP-based decoding metric is derived, and an improved stack sequence-estimation algorithm based on this metric is proposed. Unlike conventional channel-adaptive coding algorithms, the proposed algorithm adjusts only one parameter, the forbidden symbol, and in theory achieves a continuously variable coding rate. Experimental results show that, compared with the classical Grangetto joint coding system and with separate source-channel coding, the proposed system achieves clearly improved performance gains.

14.
15.
The conventional remedy to the time and/or frequency variability of radio channels is diversity. Redundant coding is a kind of diversity, since each coded symbol can be recovered from other symbols. Only linear binary block codes are considered. Any binary random variable can be represented by its algebraic value, a real number whose sign indicates its most likely value and whose absolute value measures the probability of this value. The algebraic value of a received binary symbol is itself a random variable, whose distribution obeys a particular constraint. The algebraic value associated with the maximum likelihood decision on a binary symbol, given a set of independent received replicas of it, and that associated with the sum modulo 2 of binary random variables are also considered. Symbol-by-symbol decoding is then analysed, first in the case of threshold decoding, then in the general case. An approximate bound on the decoding error probability for additive Gaussian noise and coherent demodulation is used to assess the advantage of coding when unequal-energy symbols are received, according to a deterministic or a Rayleigh distribution. Simulation results are given for the Hamming (15,11) code. Coding affords a significant advantage provided the channel is good enough, while conventional diversity always provides a gain.
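The algebraic value of a modulo-2 sum admits a simple product rule. The sketch below takes the algebraic value to be a = P(b=0) - P(b=1), one common convention (the paper's exact definition may differ); with this convention, the soft value of an XOR of independent bits is exactly the product of the individual soft values:

```python
def soft(p0):
    """Algebraic value of a binary variable, taken here as a = P(b=0) - P(b=1):
    the sign gives the more likely value, the magnitude the confidence.
    (One common convention; the paper's exact definition may differ.)"""
    return 2.0 * p0 - 1.0

def soft_xor(values):
    """Algebraic value of the modulo-2 sum of independent binary variables:
    with the convention above, it is exactly the product of the inputs."""
    out = 1.0
    for a in values:
        out *= a
    return out
```

One weak input (value near 0) drags the product toward 0, matching the intuition that a parity check is only as reliable as its least reliable symbol.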

16.
This paper presents a cooperative spectrum sharing protocol using non-orthogonal multiple access (NOMA) in cognitive radio networks. A two-phase protocol comprising a primary transmitter-receiver pair and a secondary transmitter-receiver pair is considered. In the proposed protocol, three data symbols can be transmitted during the two phases; this contrasts, under a single-relay scenario, with traditional decode-and-forward relaying, where one data symbol can be transmitted, and with conventional superposition-coding-based overlay spectrum sharing and the cooperative relaying system using NOMA, where two data symbols can be transmitted. We investigate the performance of the proposed protocol in terms of ergodic sum capacity and outage probability, with analytical derivations over independent Rayleigh fading channels. We also compare the proposed protocol with these three schemes to demonstrate its efficacy. Simulation and analytical results confirm the efficiency of the proposed spectrum sharing protocol.

17.
Closed-form blind symbol estimation in digital communications
We study the blind symbol estimation problem in digital communications and propose a novel algorithm that exploits a special data structure of an oversampled system output. Unlike most equalization schemes, which involve two stages (channel identification followed by channel equalization/symbol estimation), the proposed approach accomplishes direct symbol estimation without determining the channel characteristics. Based on a deterministic model, the new method provides a closed-form solution to the symbol estimation using a small set of data samples, which makes it particularly suitable for wireless applications with fast-changing environments. Moreover, if the symbols belong to a finite alphabet, e.g., BPSK or QPSK, the approach can be extended to handle symbol estimation for multiple sources. Computer simulations and field RF experiments were conducted to demonstrate the performance of the proposed method. The results are compared to the Cramer-Rao lower bound of the symbol estimates derived in this paper.

18.
A new class of codes for frame synchronization is proposed. Commonly, the beginning of every fixed- or variable-length frame is identified by a given contiguous sequence called a prefix. To avoid the occurrence of the prefix elsewhere in the frame, a prefix synchronizable code (PS-code) is used. PS-codes have the property that the prefix does not occur in any codeword, or in any concatenation of codewords, in any position other than the first. The new codes, termed partial-prefix synchronizable codes (PPS-codes), use a fixed sequence of symbols interspersed with symbols that carry information. The contiguous sequence from the first fixed symbol to the last fixed symbol is called a "partial prefix." Consequently, not one but a set of possible prefixes is used, and none of these prefixes is allowed to occur anywhere other than at the first position of a codeword. The cardinality of PPS-codes is determined, and coding algorithms are proposed whose computational complexity is proportional to the length of the codewords. It is demonstrated that, in comparison with PS-codes, PPS-codes have similar coding and prefix-detection complexity but a larger code size and better error control capabilities.

19.
A two-stage decoding procedure for pragmatic trellis-coded modulation (TCM) is introduced. It applies a transformation from the received I-channel and Q-channel samples onto points in a two-dimensional (2-D) signal space that contains a coset constellation. For pragmatic TCM over M-PSK signal sets with ν coded bits per symbol, ν = 1, 2, the signal points in the coset constellations represent cosets of a B/QPSK signal subset (associated with the coded bits) in the original M-PSK signal constellation. A conventional Viterbi decoder operates on the transformed symbols to estimate the coded bits. After re-encoding these bits, the uncoded bits are estimated in a second stage, on a symbol-by-symbol basis, with decisions based on the location of the received symbols. In addition to requiring no changes in the Viterbi decoder core, the proposed method is shown to yield savings of up to 40% in the memory required to store (or in the size of the logic required to compute) metrics and transformed symbols.

20.
An efficient coding system for long source sequences
The Elias source coding scheme is modified to permit a source sequence of practically unlimited length to be coded as a single codeword using arithmetic of only limited precision. The result is shown to be a nonblock arithmetic code of the first-in, first-out (FIFO) type: source symbols are decoded in the same order as they were encoded. Codeword lengths that are near optimum for the specified statistical properties of the source can be achieved. Explicit encoding and decoding algorithms are provided which effectively implement the coding scheme. Applications to data compression and cryptography are suggested.
