Similar Articles
20 similar articles found (search time: 0 ms)
1.
Fast algorithms for solving arbitrary Toeplitz-plus-Hankel systems of equations are presented. The algorithms are analogs of the split Levinson and Schur algorithms, although the more general Toeplitz-plus-Hankel structure requires that the algorithms be based on a four-term recurrence. Relations with the previous split algorithms are considered. The algorithms require roughly half as many multiplications as previous fast algorithms for Toeplitz-plus-Hankel systems.
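The fast split algorithms themselves are beyond the scope of an abstract, but the Toeplitz-plus-Hankel structure they exploit is easy to show. The sketch below (a hypothetical illustration with invented names, not the paper's method) builds the dense matrix A[i][j] = t[i-j] + h[i+j] and solves it by ordinary Gaussian elimination; the point of the paper is that the structure lets this O(n³) cost be reduced to O(n²) with roughly half the multiplications of earlier fast methods.

```python
def solve_tph(t, h, b):
    """Solve (T + H) x = b, where T is Toeplitz (T[i][j] = t[i-j+n-1])
    and H is Hankel (H[i][j] = h[i+j]).  Dense Gaussian elimination with
    partial pivoting -- illustrates the structure only, not the fast
    four-term-recurrence algorithms of the paper."""
    n = len(b)
    A = [[t[i - j + n - 1] + h[i + j] for j in range(n)] for i in range(n)]
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

The vectors `t` and `h` each have 2n-1 entries, so the whole n×n system is described by O(n) parameters -- this redundancy is what the fast algorithms exploit.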

2.
The split Schur algorithms of P. Delsarte and Y. Genin (1987) represent methods of computing reflection coefficients that are computationally more efficient, in terms of multiplications, than the conventional Schur algorithm by a constant factor. The authors investigate the use of fixed-point binary arithmetic, with quantization due to rounding, in the implementation of the symmetric and antisymmetric split Schur algorithms. It is shown, through a combination of analysis and simulation, that the errors in the reflection coefficient estimates due to quantization are large when the input signal is either a narrowband high-pass signal or a narrowband low-pass signal.
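For reference, the reflection (PARCOR) coefficients that the split Schur algorithms compute can also be obtained from the autocorrelation sequence by the classical Levinson-Durbin recursion, sketched below (a standard textbook route, not the split algorithm of the abstract, which reaches the same coefficients with fewer multiplications):

```python
def reflection_coeffs(r, order):
    """Reflection coefficients k_1..k_order from autocorrelation values
    r[0..order] via Levinson-Durbin.  Uses the convention in which the
    predictor is x_hat[n] = sum_i a[i] * x[n-i]."""
    a = []          # prediction coefficients a[1..m-1], zero-based list
    e = r[0]        # prediction error power
    ks = []
    for m in range(1, order + 1):
        acc = r[m] - sum(a[i] * r[m - 1 - i] for i in range(m - 1))
        k = acc / e
        ks.append(k)
        a = [a[i] - k * a[m - 2 - i] for i in range(m - 1)] + [k]
        e *= (1.0 - k * k)
    return ks
```

For an AR(1)-shaped autocorrelation r[k] = a^k only the first reflection coefficient is nonzero, which makes a convenient sanity check.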

3.
4.
Today's 3G wireless systems require both high linearity and high power amplifier (PA) efficiency. The high peak-to-average ratios of the digital modulation schemes used in 3G wireless systems require that the RF PA maintain high linearity over a large range while maintaining high efficiency; these two requirements are often at odds with each other in many traditional amplifier architectures. In this article, a fast and easy-to-implement adaptive digital predistorter is presented for wideband code-division multiplexed signals using a complex memory polynomial work function. The proposed algorithm has been implemented to test a Motorola LDMOSFET PA. The proposed technique also accounts for the memory effects of the PA, which have been ignored in many techniques proposed in the literature. The results show that the new complex memory polynomial-based adaptive digital predistorter has better linearisation performance than conventional predistortion techniques.
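A memory polynomial models the PA (or its predistorter) as a sum of odd-order nonlinear terms applied to the current and delayed input samples. The sketch below (a hypothetical forward-model evaluation; the article's adaptive coefficient estimation is not reproduced) shows the basis the coefficients multiply:

```python
def memory_polynomial(x, coeffs, K, Q):
    """Evaluate y[n] = sum_{k=0}^{K-1} sum_{q=0}^{Q} coeffs[k][q]
                        * x[n-q] * |x[n-q]|^(2k),
    i.e. odd nonlinearity orders 1, 3, ..., 2K-1 with memory depth Q.
    x is a complex baseband sequence; coeffs[k][q] are complex."""
    y = []
    for n in range(len(x)):
        acc = 0j
        for q in range(Q + 1):
            if n - q < 0:
                continue          # samples before the start are ignored
            s = x[n - q]
            for k in range(K):
                acc += coeffs[k][q] * s * abs(s) ** (2 * k)
        y.append(acc)
    return y
```

With K = 1, Q = 0 and unit coefficient the model reduces to the identity; adding a k = 1 term introduces the cubic distortion a predistorter must invert.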

5.
This letter presents a simple polynomial predictor-based sequence detector for the Rayleigh nonselective fading channel. Unlike the polynomial predictor-based sequence detector proposed by Borah and Hart, the new receiver is not restricted to constant envelope modulation schemes. Analytical and simulated results are presented. In some instances, the proposed receiver performs within 6 dB of the equivalent maximum-likelihood sequence estimation receiver.
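The core idea of a polynomial predictor is one-step extrapolation: fit a degree-d polynomial to the last d+1 samples of the (slowly varying) fading process and evaluate it one sample ahead. A minimal sketch, with an invented function name and using Lagrange extrapolation (the letter's receiver embeds such a predictor inside a sequence detector, which is not shown here):

```python
def poly_predict(samples, degree):
    """One-step-ahead prediction: fit a degree-`degree` polynomial to the
    last (degree + 1) samples (placed at x = 0 .. degree) and evaluate it
    at x = degree + 1 via Lagrange extrapolation.  Exact whenever the
    sequence follows a polynomial of that degree."""
    pts = samples[-(degree + 1):]
    n = len(pts)
    pred = 0.0
    for i in range(n):
        li = 1.0                      # Lagrange basis polynomial at x = n
        for j in range(n):
            if j != i:
                li *= (n - j) / (i - j)
        pred += pts[i] * li
    return pred
```

A degree-1 predictor reduces to the familiar linear rule 2x[n] - x[n-1]; a degree-2 predictor reproduces quadratic trajectories exactly.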

6.
Comparisons are made of a genie-aided sequential algorithm due to D. Haccoun and M.J. Ferguson (1975), the Viterbi algorithm, the M-algorithm, and the Fano algorithm for rate-1/2 and rate-2/3 trellis modulation codes on rectangular signal sets. The effects of signal-to-noise ratio and decoding-delay constraints on the choice of decoding algorithms for framed data are examined by computer simulation. Additionally, the genie-aided algorithm is used as a tool in estimating the asymptotic behavior of the M-algorithm. In general, the results conform closely to experience with convolutional codes due to the similar distance structure of the codes. The Fano algorithm produces good error performance with a low average number of computations when long decoding delay is permissible. The M-algorithm provides a savings in computation compared to the Viterbi algorithm if a small decoding delay is required.

7.
A simple algorithm for the evaluation of discrete Fourier transforms (DFT) and discrete cosine transforms (DCT) is presented. This approach, based on the divide and conquer technique, achieves a substantial decrease in the number of additions when compared to currently used FFT algorithms (30% for a DFT on real data, 15% for a DFT on complex data and 25% for a DCT) and keeps the same number of multiplications as the best known FFT algorithms. The simple structure of the algorithm and the fact that it is best suited for real data (one does not have to take a transform of two real sequences simultaneously anymore) should lead to efficient implementations and to a wide range of applications.
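The divide-and-conquer idea underlying such algorithms is most familiar as the radix-2 decimation-in-time FFT, sketched below (the standard textbook recursion, not the paper's improved split algorithm, which further reduces the addition count):

```python
import cmath

def fft(x):
    """Radix-2 decimation-in-time FFT: split into even/odd-indexed
    subsequences, transform each recursively, then combine with twiddle
    factors.  Length must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out
```

Checking against the direct O(n²) DFT definition verifies the recursion.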

8.
Future-generation wireless communication systems use generalized frequency division multiplexing (GFDM), a viable candidate waveform for multiple-user scheduling. It is a non-orthogonal waveform susceptible to intercarrier interference and intersymbol interference (ISI), yet it offers flexible pulse shaping that enhances the efficiency of user scheduling. To achieve spatial diversity, multiple-input multiple-output (MIMO) transmission is incorporated with GFDM to enhance performance, but this introduces an additional inter-antenna interference problem that limits the model's performance; this interference can be mitigated by adopting pilot-based information transfer. Signal detection at the receiver based on the minimum mean square error criterion improves the estimation quality of the detector but suffers from high computational complexity. Hence, a low-complexity channel estimation technique is proposed in this research using a V-degree polynomial-based method, reducing the cubic-order computation to square order. An adaptive pulse-shaping technique is also proposed, in which the filter coefficients are optimized using the gazelle optimization algorithm (GOA) to provide optimal pulse-shaping filter parameters, with the bit error rate (BER) as the objective function. The performance of the proposed V-degree polynomial-based channel estimation is analyzed with assessment measures such as BER and MSE, achieving minimum values of 5.75E-05 and 2.07E-05, respectively.

9.
On fast address-lookup algorithms   (cited 17 times: 0 self-citations, 17 by others)
The growth of the Internet and its acceptance has sparked keen interest in the research community with respect to the many apparent scaling problems of a large infrastructure based on IP technology. A self-contained problem of considerable practical and theoretical interest is the longest-prefix lookup operation, perceived as one of the decisive bottlenecks. Several novel approaches have been proposed to speed up this operation and promise to scale forwarding technology to gigabit speeds. This paper surveys these new lookup algorithms and classifies them based on the techniques applied, accompanied by a set of practical requirements that are critical to the design of high-speed routing devices. We also propose several new algorithms to provide lookup capability at gigabit speeds. In particular, we show the theoretical limitations on routing table size and show that one of our new algorithms is almost optimal, while requiring only a small number of memory accesses to perform each address lookup.
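The baseline structure in this literature is the binary trie, in which lookup walks the address bits and remembers the last node carrying a next hop. A minimal sketch (class and method names are invented for illustration; the surveyed algorithms compress or multibit-expand this structure):

```python
class PrefixTrie:
    """Binary trie for longest-prefix matching.  Each node is a dict with
    optional children '0'/'1' and an optional 'hop' entry marking that a
    route prefix ends here.  Lookup cost is bounded by the address width."""
    def __init__(self):
        self.root = {}

    def insert(self, prefix_bits, next_hop):
        node = self.root
        for b in prefix_bits:
            node = node.setdefault(b, {})
        node['hop'] = next_hop

    def lookup(self, addr_bits):
        node, best = self.root, None
        for b in addr_bits:
            if 'hop' in node:          # remember longest prefix seen so far
                best = node['hop']
            if b not in node:
                break
            node = node[b]
        else:
            if 'hop' in node:
                best = node['hop']
        return best
```

Inserting the prefixes 10* and 101* and looking up addresses shows the "longest match wins" behavior that distinguishes this from exact-match lookup.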

10.
A new nonlinear expression for the Fermi-level variation with two-dimensional electron gas density in a high electron mobility transistor is proposed. This expression is found to fit the numerical results better, and an analytical expression for n s in terms of the applied gate voltage is developed. Compared with previous approximations, the solutions of our expression agree better with the exact numerical results over the entire range of interest. In addition, the solutions of our expression for n s versus V G are compared with experimental data and shown to be in good agreement over a wide range of bias conditions.

11.
Recoding is the process of transforming between digit sets. It is used to reduce the cost and delay of the implementation of arithmetic algorithms, such as digit-recurrence and parallel algorithms for multiplication, division/square-root, and in compound operations. We present a simple and systematic basis for developing these recodings.
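A concrete example of such a digit-set transformation is recoding a binary integer into non-adjacent form (NAF), a signed-digit representation over {-1, 0, 1} with no two adjacent nonzero digits, which reduces the expected number of add/subtract steps in multiplication-style recurrences. A minimal sketch (one classical recoding, offered as an illustration rather than the paper's general framework):

```python
def to_naf(n):
    """Recode a nonnegative integer into non-adjacent form.
    Returns digits in {-1, 0, 1}, least-significant first, such that
    n == sum(d[i] * 2**i) and no two adjacent digits are both nonzero."""
    digits = []
    while n > 0:
        if n & 1:
            d = 2 - (n % 4)   # +1 if n = 1 mod 4, -1 if n = 3 mod 4
            n -= d            # subtracting d clears the two low bits' run
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits
```

For example, 7 = 111₂ recodes to 8 - 1, i.e. digits [-1, 0, 0, 1], trading three nonzero digits for two.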

12.
This letter discusses the equivalence between the pre- and post-permutation algorithms for the fast Hartley transform (FHT). Some improvements are made to two recently published FHT programs.
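The transform in question uses the cas kernel, cas(t) = cos t + sin t, and is its own inverse up to a factor of n. A direct O(n²) definition is sketched below (the FHT programs the letter improves compute the same result in O(n log n)):

```python
import math

def dht(x):
    """Direct discrete Hartley transform:
    X[k] = sum_m x[m] * cas(2*pi*k*m/n), cas(t) = cos(t) + sin(t).
    Applying it twice returns n times the original sequence."""
    n = len(x)
    return [sum(x[m] * (math.cos(2 * math.pi * k * m / n) +
                        math.sin(2 * math.pi * k * m / n))
                for m in range(n))
            for k in range(n)]
```

The self-inverse property (up to scaling) makes a handy correctness check and is one reason the Hartley transform is attractive for real data.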

13.
Research on reduced-rank adaptive filtering algorithms based on the GSC framework   (cited 1 time: 0 self-citations, 1 by others)
From a study of adaptive filtering algorithms, it can be concluded that the generalized sidelobe canceller (GSC) framework is the unified underlying model for all reduced-rank filtering algorithms. Based on an investigation of the reduced-rank model of the general GSC structure, this paper presents three optimization models that improve on the results of the general structure: the principal component (PC) algorithm, the cross-spectrum (CS) algorithm, and the multistage Wiener filter (MWF). All three methods are based on eigen-subspace truncation. Through eigenvalue decomposition and reconstruction of the rank-reduction matrix T, the degrees of freedom and the computational load are greatly reduced, giving these methods better real-time performance in practical applications. Simulation results show that the proposed GSC-based reduced-rank adaptive beamforming algorithms achieve a good reduction in degrees of freedom together with good beamforming performance, verifying the effectiveness of the algorithms.
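The eigen-subspace truncation behind the PC method keeps only the leading eigenvectors of a covariance matrix. The building block can be sketched with plain power iteration (a toy illustration with an invented function name; real GSC processors work with estimated covariances and several eigenvectors):

```python
def principal_component(R, iters=200):
    """Dominant eigenvector (unit norm) of a symmetric matrix R via power
    iteration: repeatedly multiply a start vector by R and renormalize.
    Converges to the leading eigen-direction when the start vector is not
    orthogonal to it."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting the data onto the span of such leading eigenvectors is what reduces the adaptive degrees of freedom in the PC rank-reduction step.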

14.
The trellis of a finite Abelian group code is locally (i.e., trellis section by trellis section) related to the trellis of the corresponding dual group code, which allows one to express the basic operations of the a posteriori probability (APP) decoding algorithm (defined on a single trellis section of the primal trellis) in terms of the corresponding dual trellis section. Using this local approach, any algorithm employing the same type of operations as the APP algorithm can, thus, be dualized, even if the global dual code does not exist (e.g., nongroup codes represented by a group trellis). Given this, the complexity advantage of the dual approach for high-rate codes can be generalized to a broader class of APP decoding algorithms, including suboptimum algorithms approximating the true APP, which may be more attractive in practical applications due to their reduced complexity. Moreover, the local approach opens the way for mixed approaches where the operations of the APP algorithm are not exclusively performed on the primal or dual trellis. This is inevitable if the code does not possess a trellis consisting solely of group trellis sections as, e.g., for certain terminated group or ring codes. The complexity reduction offered by applying dualization is evaluated. As examples, we give a dual implementation of a suboptimum APP decoding algorithm for tailbiting convolutional codes, as well as dual implementations of APP algorithms of the sliding-window type. Moreover, we evaluate their performance for decoding usual tailbiting codes or convolutional codes, respectively, as well as their performance as component decoders in iteratively decoded parallel concatenated schemes.

15.
The authors study the computational complexity of two methods for solving least squares and maximum likelihood modal analysis problems. In particular, they consider the Steiglitz-McBride and iterative quadratic maximum likelihood (IQML) algorithms. J.H. McClellan and D. Lee (ibid., vol.39, no.2, p.509-12, 1991) have shown the iterations of the two methods to be equivalent. However, they suggest that the Steiglitz-McBride algorithm may be computationally preferable. A method for reducing the dimension of the matrix inversion required at each iteration of IQML is provided. The resulting reduction in the computation makes the computational complexity of IQML commensurate with that of the Steiglitz-McBride algorithm.

16.
In this paper, two efficient approaches that support linear network analysis are discussed: supernode analysis (SNA) and reduced loop analysis (RLA). These methods are demonstrated on selected example networks, showing that calculations can be dramatically simplified and that all network situations can be handled. SNA has obvious advantages in that it combines the MNA with straightforward manual processing of the network; a very efficient solution strategy is obtained without resorting to source shifting and other common, less directed methods. SNA/RLA and symbolic algebra fit extremely well together. Thus an algorithm that supports the symbolic calculation of networks by means of supernodes, which has been conceptualized and implemented in the analog design expert system EASY, is presented in detail. Beyond the educational aspect, it should be noted that the computer can now take a systematic approach to MNA and network analysis in general.
Notes:
1. There exist some extreme situations in which these additional equations are needed to express controlling currents.
2. Generalized cut-sets are not necessarily minimal cut-sets [2]. This means that the removal of a generalized cut-set may split the network graph into more than two components.
3. This notation means that the current is in the frequency domain, commonly known as a phasor.
4. This intuitive explanation will be confirmed in Section 7.
5. These compactions are exactly the same as those applied by the CMNA implemented in ISAAC [7, 8, 11]. In fact, the CMNA is isomorphic to the SNA.
6. Not the subject of this paper.
"It's funny how many of the best ideas are just an old idea back-to-front." (Douglas Adams)

17.
This paper reports on the comparative performance of some polynomial-based lowpass filters in lumped-element, microstrip and defected ground structure (DGS) environments. Microwave designers are normally familiar with Butterworth, Chebyshev and Bessel filters; however, many more polynomial-based LPFs, of both ripple and non-ripple types, have been suggested for low-frequency applications. Some of these LPFs, such as the L-opt, H-type, Transitional Butterworth-Legendre (TBL), Pascal and Legendre groups of LPF, are compared for their applications in analog microwaves, digital transmission, efficiency enhancement of linear power amplifiers, and five-level partial-response modulations. A method is reported to compute the ripple frequency for the Legendre group of LPF and the Pascal LPF. Further, a design is suggested that significantly improves the group-delay performance of high-selectivity filters.

18.
An automatic product-packaging system based on infrared detection is designed. An infrared sensor combined with an NE567-based single-tone decoding circuit forms the infrared detection circuit, improving the detection sensitivity and strengthening the system's immunity to interference. The output signal of the CCO inside the NE567 is used to control the emission frequency of the infrared transmitter circuit, achieving automatic synchronous tracking between the infrared transmit and receive frequencies while simplifying the infrared detection circuit and making it easier to tune. Code conversion between binary and BCD is implemented in software, and programmable soft components perform the product-packaging counting function while also providing clock pulses for the display circuit, greatly simplifying the display electronics. The system offers high sensitivity and strong immunity to interference, performs product packaging accurately, displays the number of tablets per bottle (0-999), and shows the work progress in real time (0-9,999).

19.
In this article, we present LipSynch, a software tool that can be used for the automatic replacement of speech dialogues in motion pictures, video or television series. The system operates in two steps: during analysis, the timing relationships between the speech segments of the dialogues that serve as a timing reference and the corresponding speech segments in the replacement dialogues are measured by means of a split Dynamic Time Warping algorithm. The obtained warping paths are then processed and used to synthesize high-quality natural-sounding speech dialogues that are precisely time-synchronized with the reference dialogues. Subjective audio-visual listening tests performed within the context of a difficult Automatic Dialogue Replacement task demonstrated that LipSynch achieves a significant improvement compared to the industry-standard benchmark VocALign, both in terms of achieved lip-synchronization accuracy as well as in overall speech quality of the synthesized dialogues.
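Dynamic Time Warping aligns two sequences by finding the minimum-cost monotone path through a local-distance grid. The plain textbook recurrence is sketched below (LipSynch uses a split DTW variant on speech features; this toy version works directly on scalar sequences):

```python
def dtw(ref, rep):
    """Dynamic time warping between a reference and a replacement
    sequence.  Returns the total alignment cost and the warping path as
    a list of (ref_index, rep_index) pairs."""
    n, m = len(ref), len(rep)
    INF = float('inf')
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(ref[i - 1] - rep[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # backtrack the optimal path from the bottom-right corner
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    return D[n][m], path[::-1]
```

The returned warping path is exactly the kind of timing map that is then used to time-scale the replacement dialogue onto the reference.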

20.
On pattern classification algorithms: introduction and survey   (cited 1 time: 0 self-citations, 1 by others)
This paper attempts to lay bare the underlying ideas used in various pattern classification algorithms reported in the literature. It is shown that these algorithms can be classified according to the type of input information required and that the techniques of estimation, decision, and optimization theory can be used to effectively derive known as well as new results.
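One of the simplest decision-theoretic members of this family is the minimum-distance-to-class-mean classifier, which is the Bayes rule for equal-covariance Gaussian classes. A minimal sketch with invented names, for illustration only:

```python
def nearest_mean_classify(train, sample):
    """Classify `sample` by the nearest class mean.
    `train` is a list of (label, feature_vector) pairs; distances are
    squared Euclidean."""
    sums = {}
    for label, vec in train:
        s, cnt = sums.get(label, ([0.0] * len(vec), 0))
        sums[label] = ([a + b for a, b in zip(s, vec)], cnt + 1)
    best_label, best_d = None, float('inf')
    for label, (s, cnt) in sums.items():
        mu = [v / cnt for v in s]                     # class mean
        d = sum((a - b) ** 2 for a, b in zip(mu, sample))
        if d < best_d:
            best_label, best_d = label, d
    return best_label
```

Richer input information (class-conditional densities, costs, or unlabeled samples) leads to the estimation- and optimization-based schemes the survey organizes.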

