Found 20 similar documents. Search time: 15 ms.
1.
To counter the effect of range-deception jamming on radar detection performance, waveform design should make the transmitted radar signals as orthogonal as possible, i.e., give the transmitted waveforms low cross-correlation levels and low autocorrelation sidelobes. Starting from chaotic biphase-coded orthogonal frequency-division multiplexing (OFDM) radar signals, which have good autocorrelation properties, an orthogonal chaotic biphase-coded OFDM radar signal based on a chaotic particle swarm optimization (PSO) algorithm is proposed. Taking the joint optimum of autocorrelation and cross-correlation as the fitness function, the chaotic PSO algorithm is used to optimize the design of the orthogonal coded pulse-train signal. Simulation results show that the maximum autocorrelation sidelobe and cross-correlation level of the optimized signal is −22.33 dB, indicating good correlation properties and a measure of effectiveness against range-deception jamming.
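The joint auto/cross-correlation fitness described in the abstract can be sketched as follows. This is a minimal illustration only: it assumes aperiodic correlations of ±1 phase codes and a minimax fitness, and the function names are mine; the paper's exact weighting and the chaotic PSO update rules are not reproduced.

```python
import numpy as np

def acf_sidelobe_db(code):
    """Peak aperiodic autocorrelation sidelobe of a +/-1 code, in dB below the mainlobe."""
    n = len(code)
    r = np.correlate(code, code, mode="full")
    main = r[n - 1]                                 # mainlobe value equals n
    side = np.max(np.abs(np.delete(r, n - 1)))      # largest sidelobe magnitude
    return 20 * np.log10(side / main)

def ccf_peak_db(a, b):
    """Peak aperiodic cross-correlation of two codes, in dB relative to the code length."""
    r = np.correlate(a, b, mode="full")
    return 20 * np.log10(np.max(np.abs(r)) / len(a))

def fitness(codes):
    """Minimax fitness: the worst auto-sidelobe or cross-correlation level (dB).
    A particle swarm optimizer would minimize this value over the code bits."""
    worst = max(acf_sidelobe_db(c) for c in codes)
    for i in range(len(codes)):
        for j in range(i + 1, len(codes)):
            worst = max(worst, ccf_peak_db(codes[i], codes[j]))
    return worst

rng = np.random.default_rng(0)
codes = [rng.choice([-1.0, 1.0], size=64) for _ in range(2)]
print(round(fitness(codes), 2))
```

As a sanity check, the classical Barker-13 code has peak sidelobe 1, so `acf_sidelobe_db` gives 20·log10(1/13) ≈ −22.3 dB for it.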
2.
This study evaluates the performance of an optical receiver for binary phase-shift keyed (BPSK) signals in the presence of noise originating from the photodetectors and the phase fluctuations of the optical sources. Analysis of the homodyne detection process shows that the performance is degraded by two effects: one due to the phase-error fluctuations of the recovered carrier and the other due to the reduction of the energy per bit available for data recovery. The resulting power penalty can be minimized by optimally dividing the received optical signal between the carrier-recovery and data-recovery circuits of the receiver. The minimum penalty thus obtained depends on the 3-dB linewidth and on the transmission rate. For example, a penalty of 0.5 dB, relative to the quantum limit of 9 photons/bit needed to achieve a BER of 10⁻⁹, imposes a minimum transmission rate of about 180 Gbit/s when the optical source has a 3-dB linewidth of 20 MHz.
3.
Novel symbol-by-symbol differential detection algorithms are proposed for minimum-shift keying signals transmitted over additive white Gaussian noise and frequency-flat Rayleigh fading channels. They are derived as approximations to the maximum-likelihood noncoherent detection strategy. Their error performance is assessed by computer simulation and compared with that of other noncoherent detectors. It is shown that, on fading channels, the new algorithms outperform the traditional methods.
4.
Versatile video coding (VVC) is the newest video compression standard. It adopts quadtree with nested multi-type tree (QT-MTT) partitioning to encode square or rectangular coding units (CUs). The QT-MTT coding structure is more flexible for encoding video texture, but it also entails many time-consuming algorithms. This work therefore proposes fast algorithms that decide between horizontal and vertical splits for binary and ternary partitions of a 32 × 32 CU in VVC intra coding, replacing the time-consuming rate-distortion optimization (RDO) process. The proposed method is a two-step algorithm combining a feature-analysis stage and a deep-learning stage. The feature-analysis stage is based on pixel variances, and the deep-learning stage applies convolutional neural networks (CNNs) for classification. Experimental results show that the proposed method reduces encoding time by 28.94% on average while increasing the Bjontegaard delta bit rate (BDBR) by about 0.83%.
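The variance-based feature-analysis stage can be illustrated with a toy heuristic. The rule below is a hypothetical stand-in for illustration only (the paper's actual features and thresholds differ): if the per-row statistics of a CU vary more than the per-column statistics, the texture changes vertically, which favors a horizontal split.

```python
import numpy as np

def split_direction(cu):
    """Toy split-direction heuristic: compare the spread of row means against
    the spread of column means of a coding unit (2-D array of pixels)."""
    row_var = np.var(cu.mean(axis=1))   # row means vary -> texture changes vertically
    col_var = np.var(cu.mean(axis=0))   # column means vary -> texture changes horizontally
    return "horizontal" if row_var >= col_var else "vertical"

# A 32x32 block whose top half is dark and bottom half bright:
cu = np.vstack([np.zeros((16, 32)), 200 * np.ones((16, 32))])
print(split_direction(cu))   # a horizontal split isolates the two bands
```

In the paper such a decision replaces an RDO pass; here it is merely a sketch of how a pixel-variance feature can pick a partition direction without any rate-distortion search.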
5.
Synchronization algorithms for UWB signals (cited 2 times: 0 self-citations, 2 by others)
This paper is concerned with timing recovery for ultra-wideband communication systems operating in a dense multipath environment. Two timing algorithms are proposed that exploit the samples of the received signal to estimate the start of the individual frames with respect to the receiver's clock (frame timing) and the location of the first frame in each symbol (symbol timing). Channel estimation comes out as a by-product and can be used for coherent matched filter detection. The proposed algorithms require sampling rates on the order of the inverse of the pulse duration. Their performance is assessed by simulating the operation of coherent and differential detectors. Their sensitivity to the sampling rate is discussed, and the effects of the multiple access interference are evaluated.
6.
Proceedings of the IEEE, 1963, 51(8): 1127–1134
The application of the electron spin echo phenomenon to the detection of chirped microwave pulses is considered. The attainable time resolution is determined by the width of the paramagnetic resonance line, and the length of the received waveform is limited by the phase memory time of the spin packets in the active material. Pulse compressions of 1000:1 appear to be feasible. A typical device of this kind would need to be operated at liquid-helium temperatures. It could be adapted so that it also provided maser amplification of the compressed pulses.
7.
8.
In the area of radar signature modelling, subspace-based methods have recently become very popular. To model radar signals using subspace-based methods, spatial-smoothing preprocessing (SSP) is essential for estimating the covariance matrix of the received signals. Here, the performances of two typical SSP techniques are compared in the context of radar signature modelling.
9.
Carsten Meyer, José Fernández Gavela, Matthew Harris. IEEE Transactions on Information Technology in Biomedicine, 2006, 10(3): 468–475
QRS-complex and specifically R-peak detection is the crucial first step in every automatic electrocardiogram analysis. Much work has been carried out in this field, using methods ranging from filtering and thresholding, through wavelets, to neural networks and others. Performance is generally good, but each method has situations where it fails. In this paper, we suggest an approach to automatically combine different QRS detection algorithms, here the Pan-Tompkins and wavelet algorithms, to benefit from the strengths of both methods. In particular, we introduce parameters that balance the contributions of the individual algorithms; these parameters are estimated in a data-driven way. Experimental results and analysis are provided on the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) Arrhythmia Database. We show that our combination approach outperforms both individual algorithms.
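The idea of a weighted combination of two peak detectors can be sketched as follows. The fusion rule and the function names here are illustrative assumptions, not the paper's algorithm: a candidate R-peak survives when the weighted "vote" of the detectors reporting it within a small tolerance exceeds 0.5.

```python
def fuse_peaks(peaks_a, peaks_b, w_a=0.5, tol=5):
    """Fuse two lists of peak sample indices. Agreed peaks (within `tol`
    samples) are always kept at the averaged location; a peak seen by only
    one detector is kept only if that detector's weight exceeds 0.5."""
    fused = []
    b = list(peaks_b)
    for p in peaks_a:
        match = [q for q in b if abs(q - p) <= tol]
        if match:                       # both detectors agree: keep, average location
            fused.append((p + match[0]) // 2)
            b.remove(match[0])
        elif w_a > 0.5:                 # only detector A fired
            fused.append(p)
    if 1.0 - w_a > 0.5:                 # peaks only detector B found
        fused.extend(b)
    return sorted(fused)

print(fuse_peaks([100, 300], [102, 500], w_a=0.6))
```

The weight `w_a` plays the role of the paper's balancing parameters and would be estimated from annotated data rather than set by hand.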
10.
11.
Two-layer coding of video signals for VBR networks (cited 5 times: 0 self-citations, 5 by others)
Two-layer conditional-replenishment coding of video signals over a variable-bit-rate (VBR) network is described. A slotted-ring network based on the Orwell protocol is assumed, where transmission of certain packets is guaranteed. The two-layer coder produces two output bit streams: the first contains all the important structural information in the image and is accommodated in the guaranteed capacity of the network, while the second adds the necessary quality finish. The performance of the coder is tested with CIF standard sequences and broadcast-quality pictures. The portion of the VBR channel allocated to the lower layer as guaranteed bandwidth is examined. Using broadcast-quality pictures, statistics were obtained on the performance of this system for different choices of bit rate in the lower layer. The effect of lost packets is shown on CIF standard picture sequences. It is shown that the coder performs well for a guaranteed channel rate as low as 10–20% of the total bit rate.
12.
IEEE Transactions on Information Theory, 1977, 23(3): 360–370
A measure of picture quality for simple element, differentially coded pictures is developed based on certain subjective tests. The measure weights the quantization noise according to its visibility. It is shown that the measure correlates well with the picture quality determined on a standard impairment scale. Optimization of DPCM quantizers is done for this and for the mean-square measure of picture quality. Performance of the following types of quantizers is evaluated in terms of entropy of the quantized output and the picture quality: a) minimum mean-square error quantizers with a fixed number of levels, b) minimum mean-square error quantizers with fixed entropy, c) minimum mean-square subjective distortion quantizers with a fixed number of levels, d) minimum mean-square subjective distortion quantizers with fixed entropy, and e) uniform quantizers. It is concluded that for a fixed number of levels and a fixed word-length coding of the quantizer outputs, the quantizers in c) outperform those in a); and with variable length coding, the quantizers in d) perform better than all of the other quantizers having the same entropy. The sensitivity of the approach to variation of picture content is also investigated.
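The entropy side of the trade-off studied above can be checked empirically. The sketch below assumes a Laplacian model for DPCM prediction errors (a common modelling assumption, not taken from the paper) and measures the empirical entropy of a uniform quantizer's output indices, the quantity against which the paper ranks its quantizer designs; the subjective distortion measure itself is not reproduced.

```python
import numpy as np

def quantizer_entropy(samples, step):
    """Empirical entropy (bits/sample) of a uniform quantizer's output indices."""
    idx = np.round(samples / step).astype(int)
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(5)
x = rng.laplace(scale=1.0, size=100_000)   # modelled DPCM prediction errors
coarse = quantizer_entropy(x, step=1.0)    # coarser quantizer -> lower entropy
fine = quantizer_entropy(x, step=0.5)      # finer quantizer -> higher entropy
print(round(coarse, 2), round(fine, 2))
```

Comparing such entropy values at matched (subjective or mean-square) distortion is exactly the kind of evaluation the abstract describes for quantizer classes a) through e).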
13.
IEEE Transactions on Information Theory, 1965, 11(3): 330–335
The optimum test statistic for the detection of binary sure signals in stationary Gaussian noise takes a particularly simple form, that of a correlation integral, when the solution, denoted by q(t), of a given integral equation is well behaved (L₂). For the case of a rational noise spectrum, a solution of the integral equation can always be obtained if delta functions are admitted. However, it cannot be argued that the test statistic obtained by formally correlating the receiver input with a q(t) which is not L₂ is optimum. In this paper, a rigorous derivation of the optimum test statistic for the case of exponentially correlated Gaussian noise, R(τ) = σ² e^(−α|τ|), is obtained. It is proved that for the correlation integral solution to yield the optimum test statistic when q(t) is not L₂, it is sufficient that the binary signals have continuous third derivatives. Consideration is then given to the case where α, the bandwidth parameter of the exponentially correlated noise, is described statistically. The test statistic which is optimum in the Neyman-Pearson sense is formulated. Except for the fact that the receiver employs α∞ (which in general depends on the observed sample function) in place of α, the operations of the optimum detector are unchanged by the uncertainty in α. It is then shown that almost all sample functions can be used to yield a perfect estimate of α. Using this estimate of α, a test statistic equivalent to the Neyman-Pearson statistic is obtained.
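The noise model R(τ) = σ² e^(−α|τ|) is the Ornstein–Uhlenbeck process, whose samples on a uniform grid follow an exact AR(1) recursion. This also illustrates the abstract's remark that almost all sample functions determine α: a long sample path pins down the lag-1 correlation and hence α. The function names are mine, and this is only a sketch, not the paper's estimator α∞.

```python
import numpy as np

def ou_samples(alpha, sigma, dt, n, rng):
    """Sample a Gaussian process with R(tau) = sigma^2 * exp(-alpha*|tau|)
    on a grid of spacing dt, via its exact AR(1) representation."""
    rho = np.exp(-alpha * dt)                     # lag-1 correlation e^{-alpha dt}
    innov_std = sigma * np.sqrt(1.0 - rho**2)     # innovation keeps variance sigma^2
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    for k in range(1, n):
        x[k] = rho * x[k - 1] + innov_std * rng.standard_normal()
    return x

def estimate_alpha(x, dt):
    """Recover alpha from the lag-1 sample autocorrelation: rho_hat = e^{-alpha dt}."""
    rho_hat = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return -np.log(rho_hat) / dt

rng = np.random.default_rng(1)
x = ou_samples(alpha=2.0, sigma=1.0, dt=0.01, n=200_000, rng=rng)
print(round(estimate_alpha(x, 0.01), 2))
```

With 200,000 samples the estimate lands very close to the true α = 2; as the observation grows the estimate becomes exact, which is the "perfect estimate" phenomenon the abstract exploits.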
14.
IEEE Transactions on Information Theory, 1977, 23(5): 563–575
Adaptive mean-square error (MSE) and maximum-likelihood detection (MLD) algorithms for a dual-channel digital communication system in the presence of interchannel interference and white Gaussian noise are presented. The MSE algorithm forms estimates of the transmitted symbols from a linear combination of received symbols, using weights that minimize the MSE between transmitted and estimated symbols. The nonlinear MLD algorithm minimizes the probability of symbol error by maximizing the probability of the received signal samples on the two channels over all possible transmitted symbol pairs. The probability of error is derived for the two algorithms when quadrature phase-shift keying (QPSK) is used as the modulation technique, and is compared with that of a dual-channel QPSK system having no compensation for the crosstalk.
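The linear MSE idea (weights that minimize the error between transmitted and estimated symbols) can be sketched with a least-squares fit over a training block. This is an illustration under stated assumptions: BPSK instead of the paper's QPSK, an arbitrary crosstalk matrix of my choosing, and block least squares rather than the paper's adaptive update.

```python
import numpy as np

rng = np.random.default_rng(3)
H = np.array([[1.0, 0.3], [0.25, 1.0]])     # assumed interchannel crosstalk matrix
n_train = 5000
s = rng.choice([-1.0, 1.0], size=(2, n_train))          # BPSK symbols on both channels
y = H @ s + 0.1 * rng.standard_normal((2, n_train))     # received with crosstalk + AWGN

# Least-squares weights: minimize ||s - W y||^2 over the training block
W = (s @ y.T) @ np.linalg.inv(y @ y.T)
s_hat = np.sign(W @ y)                       # linear combining, then hard decision
ber = np.mean(s_hat != s)
print(ber)
```

With no compensation (deciding directly on `np.sign(y)`) the 0.3 crosstalk eats into the noise margin; the MSE weights largely invert `H` and restore near-interference-free decisions, which is the comparison the abstract draws.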
15.
IEEE Transactions on Information Theory, 1966, 12(4): 425–430
Various encoding schemes are examined from the point of view of minimizing the mean magnitude error of a signal caused by transmission through a binary symmetric channel. A necessary property is developed for optimal codes for any binary symmetric channel and any set of quantization levels. The class of optimal codes is found for the case where the probability of error is small but realistic. This class of codes includes the natural numbering and some unit distance codes, among which are the Gray codes.
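The claim that both the natural numbering and the Gray codes belong to the optimal class can be spot-checked by averaging the magnitude error over all single-bit flips, the dominant error event when the channel's crossover probability is small. The code below is my sketch of that check, not the paper's derivation.

```python
def gray(n):
    """Reflected binary (Gray) code of n."""
    return n ^ (n >> 1)

def mean_mag_error(encode, bits):
    """Average |decoded level - sent level| over all levels and all single-bit
    flips, the dominant error term on a BSC with small crossover probability."""
    n_levels = 1 << bits
    decode = {encode(v): v for v in range(n_levels)}
    total = 0
    for v in range(n_levels):
        cw = encode(v)
        for b in range(bits):
            total += abs(decode[cw ^ (1 << b)] - v)
    return total / (n_levels * bits)

natural = mean_mag_error(lambda v: v, 4)   # natural binary numbering
grayed = mean_mag_error(gray, 4)           # Gray (unit-distance) code
print(natural, grayed)
```

For 4-bit codes both numberings give the same mean magnitude error (3.75 levels per corrupted bit), consistent with the abstract's statement that the optimal class contains both.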
16.
17.
18.
A differentially coherent detection scheme with improved bit-error-rate (BER) performance is presented for differentially encoded binary and quaternary phase-shift keying (PSK) modulation. The improvement is based on using L symbol detectors with delays of 1, 2, …, L symbol periods and on feeding back detected PSK symbols. Exact formulas for the bit error probability are derived for the case that correct symbols are fed back. The effect of symbol errors in the feedback path on the BER is determined by computer simulations.
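For context, here is a sketch of the baseline one-symbol differential detector that the multiple-delay scheme improves upon (DBPSK only; the L-detector feedback structure of the paper is not reproduced, and the function names are mine):

```python
import numpy as np

def dbpsk_mod(bits):
    """Differentially encode bits onto BPSK phases: a '1' flips the phase."""
    phase = np.cumsum(np.where(bits, np.pi, 0.0))
    return np.exp(1j * phase)

def dbpsk_demod(r):
    """One-symbol differential detector: a negative real part of
    r_k * conj(r_{k-1}) signals a phase flip, i.e., a transmitted '1'."""
    return (np.real(r[1:] * np.conj(r[:-1])) < 0).astype(int)

rng = np.random.default_rng(7)
bits = rng.integers(0, 2, 1000)
tx = np.concatenate([[1.0 + 0j], dbpsk_mod(bits)])   # reference symbol first
noise = 0.1 * (rng.standard_normal(1001) + 1j * rng.standard_normal(1001))
rx_bits = dbpsk_demod(tx + noise)
print(int(np.sum(rx_bits != bits)))                  # bit errors at this SNR
```

The paper's scheme adds detectors with 2, …, L symbol delays and feeds back past decisions; each extra delay contributes an independent look at the phase trajectory, narrowing the classical gap between differential and coherent detection.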
19.
This paper addresses the application of genetic algorithm (GA)-based optimization techniques to problems in image and video coding, demonstrating the success of GAs when used to solve real design problems with both performance and implementation constraints. Issues considered include problem representation, problem complexity, and fitness evaluation methods. For offline problems, such as the design of two-dimensional filters and filter banks, GAs are shown to be capable of producing results superior to conventional approaches. In the case of problems with real-time constraints, such as motion estimation, fractal search, and vector quantization codebook design, GAs can provide solutions superior to those reported using conventional techniques with comparable implementation complexity. The use of GAs to jointly optimize algorithm performance in the context of a selected implementation strategy is emphasized throughout, and several design examples are included.
20.
Joint source-channel turbo coding for binary Markov sources (cited 1 time: 0 self-citations, 1 by others)
We investigate the construction of joint source-channel (JSC) turbo codes for the reliable communication of binary Markov sources over additive white Gaussian noise and Rayleigh fading channels. To exploit the source Markovian redundancy, the first constituent turbo decoder is designed according to a modified version of Berrou's original decoding algorithm that employs the Gaussian assumption for the extrinsic information. Due to interleaving, the second constituent decoder is unable to adopt the same decoding method, so its extrinsic information is appropriately adjusted via a weighted correction term. The turbo encoder is also optimized according to the Markovian source statistics and by allowing different or asymmetric constituent encoders. Simulation results demonstrate substantial gains over the original (unoptimized) turbo codes, significantly reducing the performance gap to the Shannon limit. Finally, we show that our JSC coding system considerably outperforms tandem coding schemes for bit error rates smaller than 10⁻⁴, while enjoying a lower system complexity.