Similar Literature
20 similar records found (search time: 15 ms)
1.
This work considers space-time channel coding for systems with multiple transmit antennas and a single receive antenna, over spatially uncorrelated block-fading (quasi-static) channels. Analysis of the outage probability over such channels reveals the existence of a threshold phenomenon. The outage probability can be made arbitrarily small by increasing the number of transmit antennas only if the E_b/N_0 is above a threshold which depends on the coding rate. Furthermore, it is shown that when the number of transmit antennas is increased, the ε-capacity of a block-fading Rayleigh channel tends to the Shannon capacity of an additive white Gaussian noise channel. This paper also presents space-time codes constructed as a serial concatenation of component convolutional codes separated by an interleaver. These schemes provide full transmit diversity and are suitable for iterative decoding. The rate of these schemes is less than 1 bit/s/Hz, but can be made arbitrarily close to 1 bit/s/Hz by the use of Wyner-Ash codes as outer components. Comparison of these schemes with structures from the literature shows that performance gains can be obtained at the expense of a small decrease in rate. Computer simulation results over block-fading Rayleigh channels show that the frame-error rate of several of these schemes is within 2-3 dB of the theoretical outage probability.
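For orientation, a standard expression for the outage probability with n_t transmit antennas, a single receive antenna, i.i.d. Rayleigh fading, and equal power allocation across antennas (an illustrative formulation; the paper's exact setup may differ in detail) is

    P_{\mathrm{out}}(R) \;=\; \Pr\!\left[\log_2\!\left(1 + \frac{\gamma}{n_t}\sum_{i=1}^{n_t} |h_i|^2\right) < R\right],

where γ is the average SNR and the h_i are the per-antenna fading gains. As n_t grows, the normalized sum of fading gains concentrates at 1, so the effective channel hardens to an AWGN channel of SNR γ, consistent with the abstract's observation that the ε-capacity approaches the AWGN Shannon capacity.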

2.
Variable-rate data transmission schemes in which constellation points are selected according to a nonuniform probability distribution are studied. When the criterion is one of minimizing the average transmitted energy for a given average bit rate, the best possible distribution with which to select constellation points is a Maxwell-Boltzmann distribution. In principle, when constellation points are selected according to a Maxwell-Boltzmann distribution, the ultimate shaping gain (πe/6, or 1.53 dB) can be achieved in any dimension. Nonuniform signaling schemes can be designed by mapping simple variable-length prefix codes onto the constellation. Using the Huffman procedure, prefix codes can be designed that approach the optimal performance. These schemes provide a fixed-rate primary channel and a variable-rate secondary channel, and are easily incorporated into standard lattice-type coded modulation schemes.
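As a minimal numerical sketch of the rate/energy trade-off behind Maxwell-Boltzmann shaping (an 8-PAM constellation is assumed for illustration; this is not the Huffman-based construction of the paper):

    import numpy as np

    # Select points of an 8-PAM constellation with probabilities
    # p_i proportional to exp(-lam * |x_i|^2) and compare the resulting
    # rate (entropy) and average energy with uniform signaling (lam = 0).
    points = np.arange(-7, 8, 2).astype(float)       # 8-PAM: -7, -5, ..., 7

    def mb_stats(lam):
        p = np.exp(-lam * points**2)
        p /= p.sum()
        rate = -np.sum(p * np.log2(p))               # bits per symbol
        energy = np.sum(p * points**2)               # average symbol energy
        return rate, energy

    for lam in (0.0, 0.02, 0.05):
        r, e = mb_stats(lam)
        print(f"lam={lam:.2f}  rate={r:.3f} b/sym  avg energy={e:.2f}")

Larger lam trades rate for energy; comparing constellations of different sizes at equal rate is what yields the shaping gain quoted in the abstract.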

3.
The Universal Mobile Telecommunications System (UMTS) provides users with variable-data-rate services and adopts wideband code-division multiple access (WCDMA) as the radio access technology. In WCDMA, orthogonal variable spreading factor (OVSF) codes are assigned to different users to preserve the orthogonality between users' physical channels. The data rate supported by an OVSF code depends on its spreading factor (SF): an OVSF code with a smaller SF supports higher data rates than one with a larger SF. Randomly assigning an OVSF code with a large SF to a user may preclude a large number of OVSF codes with small SFs, which may cause many high-data-rate call requests to be blocked. Therefore, OVSF code assignment significantly affects the performance of the UMTS network. In this paper, we propose two OVSF code assignment schemes, CADPB1 and CADPB2, for UMTS. Both schemes are simple and incur low system overhead. Simulation experiments are conducted to evaluate the performance of our schemes. Our study indicates that the proposed schemes outperform previously proposed schemes in terms of the weighted blocking probability and the fairness index, improving the call acceptance rate at the cost of only a slight increase in call waiting time.
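A minimal sketch of the OVSF blocking constraint that such assignment schemes work around (this illustrates only the code-tree rule, not the proposed CADPB1/CADPB2 policies):

    # A code can be assigned only if none of its ancestors or descendants in
    # the binary OVSF code tree is already in use. Codes are labelled
    # (sf, index) with 0 <= index < sf, where sf is a power of two.

    def ancestors(code):
        sf, idx = code
        while sf > 1:
            sf, idx = sf // 2, idx // 2
            yield (sf, idx)

    def descendants(code, max_sf):
        level = [code]
        while level and level[0][0] < max_sf:
            level = [(s * 2, i * 2 + d) for (s, i) in level for d in (0, 1)]
            yield from level

    def can_assign(code, assigned, max_sf=256):
        blocked = set(ancestors(code)) | set(descendants(code, max_sf))
        return code not in assigned and not (blocked & assigned)

    assigned = {(8, 3)}                     # one SF-8 code already in use
    print(can_assign((16, 6), assigned))    # False: descendant of (8, 3)
    print(can_assign((16, 8), assigned))    # True: different subtree
    print(can_assign((4, 1), assigned))     # False: ancestor of (8, 3)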

4.
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. Its error performance is analyzed for a binary symmetric channel with a bit-error rate ϵ<1/2. It is shown that, if the inner and outer codes are chosen properly, high reliability can be attained even for a high channel bit-error rate. Specific examples with inner codes ranging from high rates to low rates and Reed-Solomon codes as outer codes are considered, and their error probabilities evaluated. They all provide high reliability even for high bit-error rates, say 10^-1 to 10^-2. Several example schemes are being considered for satellite and spacecraft downlink error control.
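A minimal sketch of the kind of cascaded-code reliability calculation described above, using a bounded-distance decoding approximation and hypothetical code parameters (not the specific codes evaluated in the paper):

    from math import comb

    def binom_tail(n, p, t):
        """P[more than t of n independent error events occur, each with prob p]."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

    def cascaded_block_error(eps, n_in=63, t_in=10, n_out=255, t_out=32):
        # Hypothetical inner block code of length 63 correcting up to 10 bit
        # errors; each inner decoding failure becomes one outer symbol error.
        p_sym = binom_tail(n_in, eps, t_in)
        # Outer Reed-Solomon-like code correcting up to 32 symbol errors.
        return binom_tail(n_out, p_sym, t_out)

    for eps in (1e-1, 5e-2):
        print(f"channel BER {eps:.0e} -> block error probability ~ {cascaded_block_error(eps):.1e}")

Each inner decoding failure is treated as a single outer symbol error, which is what lets a moderate outer Reed-Solomon code absorb a high channel bit-error rate.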

5.
In this paper we propose two new cooperative relaying schemes which enable distributed implementation of orthogonal space-time block coding (OSTBC) with code rate 3/4. The proposed schemes are compared with a cooperative relaying scheme using a virtual OSTBC with code rate 1/2. The considered schemes create a virtual 4 × 1 multiple-input single-output (MISO) channel and include one base station with two antennas, two relay stations each with a single antenna, and one mobile station with a single antenna. The obtained results show that, for the same bit rate, the proposed schemes provide better bit error rate (BER) performance than the virtual OSTBC with code rate 1/2. On the other hand, for the same symbol constellations the proposed schemes have almost identical BER performance to the virtual OSTBC with code rate 1/2, but with the benefit of a 1.5 times higher code rate.
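For reference, one rate-3/4 complex orthogonal design for four transmit antennas (rows index time slots, columns index antennas) is

    \mathbf{G}_{3/4} =
    \begin{pmatrix}
      s_1      & s_2      & s_3      & 0      \\
      -s_2^{*} & s_1^{*}  & 0        & s_3    \\
      -s_3^{*} & 0        & s_1^{*}  & -s_2   \\
      0        & -s_3^{*} & s_2^{*}  & s_1
    \end{pmatrix},

in which three symbols are sent over four slots and the columns are mutually orthogonal, enabling symbol-by-symbol ML detection over the virtual 4 × 1 MISO channel. In a distributed implementation the columns are assigned to the base-station and relay antennas; the exact (equivalent) matrix used in the paper may differ.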

6.
The concept of punctured convolutional codes is extended by puncturing a low-rate 1/N code periodically with period P to obtain a family of codes with rate P/(P+l), where l can be varied between 1 and (N-1)P. A rate-compatibility restriction on the puncturing tables ensures that all code bits of the high-rate codes are also used by the lower-rate codes. This allows transmission of incremental redundancy in ARQ/FEC (automatic repeat request/forward error correction) schemes and continuous rate variation to change from low to high error protection within a data frame. Families of rate-compatible punctured convolutional (RCPC) codes with rates between 8/9 and 1/4 are given for memories M from 3 to 6 (8 to 64 trellis states), together with the relevant distance spectra. These codes are almost as good as the best known general convolutional codes of the respective rates. It is shown that the same Viterbi decoder can be used for all RCPC codes of the same M. The application of RCPC codes to hybrid ARQ/FEC schemes is discussed for Gaussian and Rayleigh fading channels using channel-state information to optimise throughput.
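A minimal sketch of rate-compatible puncturing with hypothetical nested tables (the tables actually tabulated in the paper are not reproduced here):

    import numpy as np

    # Rate-1/4 mother code output punctured with period P = 8; a table entry
    # of 1 means "transmit", 0 means "puncture". Rate compatibility: every
    # bit kept by a higher-rate table is also kept by every lower-rate table,
    # so additional bits can be sent incrementally for hybrid ARQ.
    P, N = 8, 4
    tables = {                                   # hypothetical nested tables
        "8/12": np.array([[1]*8, [1]*4 + [0]*4, [0]*8, [0]*8]),
        "8/16": np.array([[1]*8, [1]*8,          [0]*8, [0]*8]),
        "8/24": np.array([[1]*8, [1]*8,          [1]*8, [0]*8]),
    }

    def puncture(mother_bits, table):
        """Keep mother-code bits where the periodic table is 1
        (exact transmission ordering is glossed over here)."""
        periods = mother_bits.shape[1] // P
        mask = np.tile(table, (1, periods)).astype(bool)
        return mother_bits[mask]

    mother = np.random.randint(0, 2, size=(N, P * 10))   # 10 periods = 80 info bits
    for name, tab in tables.items():
        sent = puncture(mother, tab)
        print(f"rate {name}: {sent.size} coded bits for 80 information bits")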

7.
Decomposition constructions for secret-sharing schemes
The paper describes a very powerful decomposition construction for perfect secret-sharing schemes. The author gives several applications of the construction and improves previous results by showing that for any graph G of maximum degree d, there is a perfect secret-sharing scheme for G with information rate 2/(d+1). As a corollary, the maximum information rate of secret-sharing schemes for paths on more than three vertices and for cycles on more than four vertices is shown to be 2/3.

8.
While static open-loop rate controls may be adequate for handling continuous bit rate (CBR) traffic, relatively smooth data traffic, and relatively low-speed bursty data traffic over broadband integrated networks, high-speed bursty data sources need more dynamic controls. Burst-level resource allocation is one such dynamic control. Potential benefits and other issues of burst-level resource parameter negotiation for bursty data traffic over high-speed wide-area packet networks have been discussed earlier [1-6]. A detailed analysis of an adaptive buffer/window negotiation scheme for long file transfers using these concepts is presented in Reference 1. In this paper we discuss two burst-level buffer/window negotiation schemes for short intermittent file transfers, focusing on the specific needs of such traffic streams. We develop closed-network-of-queues models to reflect the behaviour of the proposed schemes. These models, while simple, capture the essential details of the control schemes. Under fairly general assumptions, the resulting network of queues is of product form and can be analysed using mean value analysis. We use such an analysis to compare the proposed schemes and to determine appropriate sizes of trunk buffers to achieve the desired balance between bandwidth utilization and file transfer delay. The effects of other parameters on the performance of these schemes, as well as on the buffer sizing rules, are also discussed. Burst-level (in-call) parameter negotiation may be carried out by the end system with the network elements or by an interface system (access controller) with the broadband network elements. We discuss the implications of this location as well as the needed protocol features. Finally, the service discrimination capabilities desired at the trunk controllers in switching nodes are briefly discussed.
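Since the proposed models reduce to product-form closed queueing networks, a minimal sketch of exact mean value analysis (MVA) for a single-class closed network is given below (the station parameters are hypothetical, not taken from the paper):

    # Exact single-class MVA: single-server FCFS stations plus an optional
    # infinite-server ("delay") station that can model end-system think time.
    def mva(service_times, visits, n_customers, delay_station=None):
        K = len(service_times)
        q = [0.0] * K                                  # mean queue lengths
        for n in range(1, n_customers + 1):
            # Residence time per visit at each station (arrival theorem).
            r = [service_times[k] * (1.0 if k == delay_station else 1.0 + q[k])
                 for k in range(K)]
            x = n / sum(visits[k] * r[k] for k in range(K))   # throughput
            q = [x * visits[k] * r[k] for k in range(K)]
        return x, q

    # Hypothetical example: one trunk buffer (1.0 ms service) and a think-time
    # station (10 ms), visited equally, with 8 outstanding bursts.
    X, Q = mva([1.0, 10.0], [1.0, 1.0], 8, delay_station=1)
    print(f"throughput = {X:.3f} bursts/ms, mean trunk queue length = {Q[0]:.2f}")

Sweeping station parameters in such a model is the kind of calculation used to balance bandwidth utilization against file transfer delay.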

9.
An access control protocol is proposed for integrated voice/data/video code-division multiple-access (CDMA) systems. The protocol involves predicting the residual capacity available for non-real-time data services on the reverse link (mobile to base station). Two estimation schemes, a static estimation scheme and a dynamic estimation scheme, are proposed for predicting the residual capacity, i.e., the number or data rate of data packets that could be scheduled in the next time slot. The performance of the proposed estimation schemes is evaluated in terms of the outage probability and the mean data-message delay.

10.
This paper presents a source-coding framework for the design of coding schemes that reduce transition activity. These schemes are suited to high-capacitance buses, where the extra power dissipated by the encoder and decoder circuitry is offset by the power saved on the bus. In this framework, a data source (characterized in a probabilistic manner) is first passed through a decorrelating function f1. Next, a variant of an entropy coding function f2 is employed, which reduces the transition activity. The framework is then used to derive novel encoding schemes, for which practical forms of f1 and f2 are proposed. Simulation results with an encoding scheme for data buses indicate an average reduction in transition activity of 36%. This translates into a reduction in total power dissipation for bus capacitances greater than 14 pF/b in 1.2 μm CMOS technology. For a typical bus capacitance of 50 pF/b, there is a 36% reduction in power dissipation and eight times more power savings compared to existing schemes. Simulation results with an encoding scheme for instruction address buses indicate an average reduction in transition activity by a factor of 1.5 over known coding schemes.
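To make the optimization target concrete, the sketch below counts bus transition activity and applies classical bus-invert coding for comparison; this is a simple stand-in scheme, not the f1/f2 framework proposed in the paper:

    import random

    def transitions(words):
        """Number of bit flips between consecutive words on the bus."""
        return sum(bin(a ^ b).count("1") for a, b in zip(words, words[1:]))

    def bus_invert(words, width=8):
        # Classical bus-invert coding: send the inverted word whenever that
        # causes fewer flips, at the cost of one extra invert signal line
        # (whose own transitions are not counted here).
        encoded, prev = [], 0
        for w in words:
            if bin(prev ^ w).count("1") > width // 2:
                w ^= (1 << width) - 1
            encoded.append(w)
            prev = w
        return encoded

    random.seed(0)
    data = [random.getrandbits(8) for _ in range(10000)]
    print("raw bus transitions    :", transitions(data))
    print("bus-invert transitions :", transitions(bus_invert(data)))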

11.
T1 clock recovery equipment requires that transmitted data not contain long sequences of 0 bits. For this reason, equipment that interfaces to T1 networks must meet a ones-density specification that ensures that 1 bits occur frequently enough. Most schemes for meeting this specification require a substantial amount of overhead that consumes a significant portion of the available bandwidth. In this paper, an approach that meets the ones-density requirement with very little wasted bandwidth is described. Two practical coding schemes based on the approach are presented. The first, a block coding scheme, requires an overhead rate on the order of one bit per T1 frame, along with a delay of several frames. In an error-free channel, it introduces some errors, the rate of which is made acceptably low by using sufficient delay and overhead. In an errored channel, error extension is negligible. The second scheme, a sliding code scheme, requires an overhead rate on the order of a fraction of a bit per T1 frame, along with a delay of only several bit times. In an error-free channel, the rate of errors introduced is negligible. In an errored channel, approximately one out of every 2000 channel errors is extended into a burst, the length of which can be made acceptably low by using sufficient overhead.

12.
The problem of increasing the probability of successfully transmitting a data packet during a meteor burst is considered. It is shown how the ordering of redundant information can be chosen to increase the success probability. Two simple rate-1/2 coding schemes amenable to soft-decision decoding are introduced. It is found that these codes provide considerable additional improvement in the probability of successful transmission. One of the codes achieves significantly better performance than a hard-decision (31,15) Reed-Solomon code proposed previously.

13.
In this paper, we consider the problem of lossy coding of correlated vector sources with uncoded side information available at the decoder. In particular, we consider lossy coding of a vector source x ∈ R^N which is correlated with a vector source y ∈ R^N known at the decoder. We propose two compression schemes, namely, a distributed adaptive compression (DAC) scheme and a distributed universal compression (DUC) scheme. The DAC algorithm is inspired by the optimal solution for Gaussian sources and requires computation of the conditional Karhunen-Loève transform (CKLT) of the data at the encoder. The DUC algorithm, however, does not require knowledge of the CKLT at the encoder. The DUC algorithms are based on approximating the correlation between the sources y and x through a linear model y = Hx + n, in which H is a matrix and n is a random vector independent of x. This model can be viewed as a fictitious communication channel with input x and output y. Utilizing channel equalization at the receiver, we convert the original vector source coding problem into a set of manageable scalar source coding problems. Furthermore, inspired by the bit-loading strategies employed in wireless communication systems, we propose for both compression schemes a rate allocation policy which minimizes the decoding error rate under a total rate constraint. Equalization and bit loading are paired with a quantization scheme for each vector source entry (a slightly simplified version of the so-called DISCUS scheme). The merits of our work are as follows: 1) it provides a simple, yet optimized, implementation of Wyner-Ziv quantizers for correlated vector sources, by using the insight gained in the design of communication systems; 2) it provides encoding schemes that, with or without knowledge of the correlation model at the encoder, enjoy distributed compression gains.
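A minimal numerical sketch of the conditional Karhunen-Loève transform that the DAC scheme relies on, for jointly Gaussian sources with an illustrative covariance (not data or parameters from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 4
    A = rng.standard_normal((2 * N, 2 * N))
    C = A @ A.T                                   # random joint covariance of (x, y)
    Cxx, Cxy = C[:N, :N], C[:N, N:]
    Cyx, Cyy = C[N:, :N], C[N:, N:]

    # Error covariance of the MMSE estimate of x from y (Schur complement).
    C_err = Cxx - Cxy @ np.linalg.solve(Cyy, Cyx)
    eigval, U = np.linalg.eigh(C_err)             # columns of U form the CKLT basis

    # In the transformed domain U.T @ x, the residual uncertainty about x
    # given y is decoupled, with per-component variances eigval; these are
    # the quantities that drive per-component rate allocation (bit loading).
    print("per-component conditional variances:", np.round(eigval, 3))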

14.
The service-outage-based allocation problem explores variable-rate transmission schemes and combines the concepts of ergodic capacity and outage capacity for fading channels. A service outage occurs when the transmission rate falls below a given basic rate r_o. The allocation problem is to maximize the expected rate subject to an average power constraint and the constraint that the outage probability is less than ε. A general class of probabilistic power allocation schemes is considered for an M-parallel fading channel model. The optimum power allocation scheme is derived and shown to be deterministic except at channel states of a boundary set. The resulting service-outage achievable rate ranges from a fraction 1-ε of the outage capacity up to the ergodic capacity as the average power increases. Two near-optimum schemes are also derived by exploiting the fact that the outage probability is usually small. The second near-optimum scheme significantly reduces the computational complexity of the optimum solution; moreover, it has a simple structure suited to transmitting mixed real-time and non-real-time services.
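Written out, the allocation problem described above is (using r(γ, P) for the instantaneous rate achieved with power P in fading state γ; the paper's M-parallel formulation adds a sum over subchannels):

    \max_{P(\cdot)} \ \mathbb{E}\big[\, r(\gamma, P(\gamma)) \,\big]
    \quad \text{subject to} \quad
    \mathbb{E}\big[ P(\gamma) \big] \le P_{\mathrm{av}},
    \qquad
    \Pr\big[\, r(\gamma, P(\gamma)) < r_o \,\big] \le \epsilon .

Roughly speaking, ε = 1 makes the outage constraint vacuous and recovers ergodic-capacity-style allocation, while ε = 0 forces the rate to stay above r_o in every state, which is the sense in which the formulation bridges the two notions of capacity.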

15.
In this paper, a random Low-Density Parity-Check (LDPC) code is proposed for Noncontiguous Orthogonal Frequency Division Multiplexing (NC-OFDM) Cognitive Radio (CR) systems. Unlike other encoding schemes, which achieve a system code rate of only 1/4 when half of the subcarriers are active, the proposed scheme achieves a system code rate of 1/2. The LDPC code is used to enhance the data transmission rate. A new channel model, consisting of a binary erasure channel concatenated with an uncorrelated fading channel and an AWGN channel, is adopted for NC-OFDM CR systems. Moreover, the adopted channel model is combined with a density evolution algorithm to obtain good degree-distribution pairs for the LDPC code. Thereafter, a modified shortest-path algorithm is used to construct the parity-check matrix of the LDPC code. Simulation results show that the proposed LDPC code performs well in terms of both error rate and data transmission rate.

16.
In this paper, the performance of some efficient ARQ schemes which make use of repeated transmissions of the same data block and compound detection is analysed. The repeated transmission of a data block involves repeated transmission of each of its symbols. At the receiving end, compound detection, consisting of soft detection combined with classical hard detection, is performed using all the received copies of the same symbol. The throughput efficiencies of the proposed ARQ schemes are derived analytically in closed form and optimized with respect to the number of copies transmitted for each data block. It is shown that the optimized ARQ schemes proposed in this paper achieve better performance than classical ARQ schemes, in particular under high-error-rate conditions.

17.
Using a multidimensional approach, the author discovers a large family of rotationally invariant trellis-coded M-PSK (M-ary phase-shift keying) schemes, M⩾8, with nominal coding gains ranging from 3 to 5 dB and with bandwidth requirements the same as, or even less than, those of uncoded M/2-PSK schemes at the same information bit rate. The rotationally invariant schemes have performance and complexity comparable to the best known non-rotationally-invariant trellis-coded two-dimensional M-PSK schemes. Computer simulation results for these schemes, assuming an additive white Gaussian noise (AWGN) channel, are reported.

18.
A variety of schemes for performing differential detection in environments characterized by frequency offset are discussed. All of the schemes involve encoding the input phase information as a second-order difference and performing an analogous second-order differential detection at the receiver. Because of the back-to-back differential detection operations at the receiver, the performance of most of the schemes is considerably degraded relative to that of first-order differential detection schemes. The latter, however, are quite sensitive to frequency offset and, in many instances, cannot be used at all. It is demonstrated that, via the simple enhancement of using a 2T_s (instead of T_s) delay in the second stage of the encoder and the first stage of the decoder, the performance degradation can be significantly reduced. This result is significant in view of the fact that it comes without any penalty in implementation complexity.
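A minimal sketch of why second-order differential encoding tolerates a frequency offset (noise-free QPSK phases for illustration; the 2T_s enhancement analysed in the paper is not modelled here):

    import numpy as np

    rng = np.random.default_rng(1)
    M = 4
    sym = rng.integers(0, M, 200)                      # QPSK information symbols
    info = sym * 2 * np.pi / M                         # information phases

    # Second-order differential encoding: the information phase is the second
    # difference of the transmitted phase.
    theta = np.zeros(len(info) + 2)
    for k, a in enumerate(info, start=2):
        theta[k] = 2 * theta[k - 1] - theta[k - 2] + a

    offset = 0.3                                       # rad/symbol frequency offset
    rx_phase = theta + offset * np.arange(len(theta))  # noise-free received phase

    # Second-order differential detection: the offset's linear phase ramp
    # cancels in the second difference, leaving the information phase.
    second_diff = np.diff(rx_phase, n=2)
    detected = np.round((second_diff % (2 * np.pi)) / (2 * np.pi / M)).astype(int) % M
    print("symbol errors:", np.count_nonzero(detected != sym))

The offset enters the received phase as a linear ramp, and any linear ramp has a zero second difference, which is exactly what the back-to-back differential detectors exploit.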

19.
The bit error rate (BER) performance of single-code and multicode channelization schemes for high-rate data transmission in direct-sequence code-division multiple-access (DS/CDMA) systems is compared. The multipath interference (MPI) effects, which become significant as the data rate increases, are accurately included in the analysis. It is shown that a notable performance improvement can be achieved by using the multicode scheme in a multipath fading channel.

20.
In this paper, an advanced site-specific image-based ray-tracing model is developed that enables multielement outdoor propagation analysis to be performed in dense urban environments. Sophisticated optimization techniques, such as preprocessing the environment database using object partitioning, visibility determination, diffraction image-tree precalculation, and parallel processing, are used to improve run-time efficiency. Wideband and multiple-input multiple-output (MIMO) site-specific predictions (including derived parameters such as theoretical capacity and eigenstructure) are compared with outdoor site-specific measurements at 1.92 GHz. Results show strong levels of agreement, with a mean path-loss error of 2 dB and a mean normalized-capacity error of 1.5 b/s/Hz. Physical-layer packet-error rate (PER) results are generated and compared for a range of MIMO orthogonal frequency-division multiplexing (OFDM) schemes using measured and predicted multielement channel data. A mean E_b/N_0 error (relative to PER results from measured channel data) of 4 dB and 1 dB is observed for spatial-multiplexing and space-time block-code schemes, respectively. The results indicate that the ray-tracing model successfully predicts key channel parameters (including MIMO channel structure) and thus enables the accurate prediction of PER and service coverage for emerging MIMO-OFDM networks such as 802.11n and 802.16e.
