Similar Documents
20 similar documents found (search time: 15 ms)
1.
The Wiener-Hopf integral equation of linear least-squares estimation of a wide-sense stationary random process and the Krein integral equation of one-dimensional (1-D) inverse scattering are Fredholm equations with symmetric Toeplitz kernels. They are transformed using a wavelet-based Galerkin method into a symmetric "block-slanted Toeplitz" (BST) system of equations. Levinson-like and Schur-like fast algorithms are developed for solving the symmetric BST system of equations. The significance of these algorithms is as follows. If the kernel of the integral equation is not a Calderon-Zygmund operator, the wavelet transform may not sparsify it. The kernel of the Krein and Wiener-Hopf integral equations does not, in general, satisfy the Calderon-Zygmund conditions. As a result, application of the wavelet transform to the integral equation does not yield a sparse system matrix. There is, therefore, a need for fast algorithms that directly exploit the symmetric block-slanted Toeplitz structure of the system matrix and do not rely on sparsity. The first such O(n^2) algorithms, namely a Levinson-like algorithm and a Schur-like algorithm, are presented. These algorithms are also applied to the factorization of the BST system matrix. The Levinson-like algorithm also yields a test for positive definiteness of the BST system matrix. The results obtained are directly applicable to the problem of constrained deconvolution of a nonstationary signal, where the locations of the smooth regions of the signal being deconvolved are known a priori.
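For background, the classical symmetric-Toeplitz case that the BST Levinson-like algorithm generalizes can be sketched in a few lines. The Python sketch below implements the standard O(n^2) Levinson recursion for a symmetric positive-definite Toeplitz system T x = b parameterized by its first row; the function name and normalization are illustrative, and this is not the block-slanted Toeplitz algorithm of the paper itself. As in the paper's Levinson-like algorithm, a breakdown (beta <= 0) signals loss of positive definiteness.

```python
import numpy as np

def levinson_solve(t, b):
    """Classical Levinson recursion: solve T x = b for a symmetric positive-definite
    Toeplitz matrix T whose first row is t (t[0] on the diagonal). O(n^2) flops."""
    t = np.asarray(t, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    r = t[1:] / t[0]                      # normalized off-diagonal entries
    b = b / t[0]
    x = np.zeros(n)                       # running solution of the leading k x k system
    y = np.zeros(n)                       # running Yule-Walker (backward) vector
    x[0] = b[0]
    if n == 1:
        return x
    y[0] = -r[0]
    alpha, beta = -r[0], 1.0
    for k in range(1, n):
        beta *= 1.0 - alpha * alpha       # prediction-error update; beta <= 0 means T is not positive definite
        mu = (b[k] - r[:k] @ x[k-1::-1]) / beta
        x[:k] += mu * y[k-1::-1]
        x[k] = mu
        if k < n - 1:
            alpha = -(r[k] + r[:k] @ y[k-1::-1]) / beta
            y[:k], y[k] = y[:k] + alpha * y[k-1::-1], alpha
    return x

# Quick check against a dense solver
T = np.array([[4.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 4.0]])
print(levinson_solve(T[0], np.array([1.0, 2.0, 3.0])), np.linalg.solve(T, [1.0, 2.0, 3.0]))
```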

2.
This paper presents Levinson-type algorithms for (i) polynomial fitting, (ii) obtaining a Q decomposition of Vandermonde matrices and a Cholesky factorization of Hankel matrices, and (iii) obtaining the inverse of Hankel matrices. The algorithm for the least-squares solution of Hankel systems of equations requires 3n^2+9n+3 multiply and divide operations (MDO). The algorithm for obtaining an orthogonal representation of an (m×n) Vandermonde matrix X and computing the Cholesky factors F of Hankel matrices requires 5mn+n^2+2n-3m MDO, and the algorithm for generating the inverse of Hankel matrices requires 3(n^2+n-2)/2 MDO. The algorithms have been tested by fitting polynomials of various orders, and Fortran versions of all subroutines are provided in the Appendix.
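As a hedged illustration of why Hankel structure arises in polynomial fitting, the sketch below forms the normal equations of a least-squares polynomial fit: the Gram matrix X^T X of the Vandermonde matrix X is a Hankel matrix of power sums. A dense solve stands in for the paper's Levinson-type solver, and the function name is illustrative.

```python
import numpy as np

def polyfit_via_hankel_normal_equations(x, y, degree):
    """Least-squares fit of a0 + a1*x + ... via the normal equations H a = X^T y,
    where H = X^T X is Hankel: H[i, j] = sum_k x_k**(i + j)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = degree + 1
    X = np.vander(x, n, increasing=True)                  # Vandermonde matrix, column j holds x**j
    moments = np.array([np.sum(x**p) for p in range(2 * n - 1)])
    H = np.array([[moments[i + j] for j in range(n)] for i in range(n)])
    return np.linalg.solve(H, X.T @ y)                    # a fast Hankel solver would replace this step

# Recover the coefficients of 1 + 2x - 3x^2 from samples
xs = np.linspace(0.0, 1.0, 50)
print(polyfit_via_hankel_normal_equations(xs, 1 + 2 * xs - 3 * xs**2, 2))
```

Note that the normal equations become ill-conditioned for high degrees; the point here is only the Hankel structure that a fast solver can exploit.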

3.
A general wavelet packet tree is proposed to design predefined wavelet packet (PWP) bases for the efficient representation of electrodynamic integral equations. The wavelet packet decomposition tree is constructed by zooming in along the spectral oscillatory frequency of the free-space Green's function. Numerical results show that for typical two-dimensional (2-D) scatterers the number of above-threshold elements in the PWP-based moment matrix is on the order of O(N^1.3) and tends to grow at a rate of O(N log N) for large-scale problems. Therefore, the complexity of solving the moment equations can be reduced accordingly. Furthermore, it is shown that the elements of the moment matrix based on the PWP bases can be computed directly at approximately the same complexity as the fast wavelet transform approach. Consequently, with on-the-fly thresholding of the matrix elements, the O(N^2) memory bottleneck in the formation of the PWP-based moment matrix can be circumvented.

4.
This paper considers the problem of wavelet sparsification of matrices arising in the numerical solution of electromagnetic integral equations by the method of moments. Scattering of plane waves from two-dimensional (2-D) cylinders is computed numerically using a constant number of test functions per wavelength. Discrete wavelet packet (DWP) similarity transformations and thresholding are applied to the system matrices to obtain sparsity. If thresholds are selected to keep the relative residual error constant, the matrix sparsity is of order O(N^p) with p < 2. This stands in contrast with the O(N^2) sparsities obtained with standard wavelet transformations. Numerical tests also show that the DWP method yields faster matrix-vector multiplication than some fast multipole algorithms.

5.
A new analytical methodology is introduced for fixed-point error analysis of various Toeplitz-solving algorithms. The method is applied to the very useful Schur algorithm and to the recently introduced split Schur algorithm. Both exact and first-order error analyses are provided. The theoretical results obtained are consistent with experimentation. Besides revealing the intrinsic symmetry of the error-propagation recursive formulae, the technique presented here is capable of explaining many practical situations. For signals having a small eigenvalue spread, the Schur algorithm behaves better than the split Schur algorithm in a fixed-point environment. The intermediate coefficients of the split Schur algorithm leading to the PARCORs cannot serve as alternatives to the reflection coefficients in error-sensitive applications. It is demonstrated that the error-weight vectors of the Schur propagation mechanism follow Levinson-like (second-order) recursions, while the same vectors of the split Schur propagation mechanism follow split Levinson-like (third-order) recursions.
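For reference, the (infinite-precision) Schur recursion analyzed in the paper maps autocorrelation lags directly to the reflection (PARCOR) coefficients without forming predictor coefficients. A minimal floating-point sketch is given below; the fixed-point behavior and the split Schur variant studied in the paper are not reproduced, and the variable names and sign convention are illustrative.

```python
def schur_reflection_coefficients(r):
    """Schur recursion: reflection (PARCOR) coefficients k_1..k_p from autocorrelations r[0..p]."""
    p = len(r) - 1
    g = list(r)      # 'upper' generator sequence g_j(m)
    gt = list(r)     # 'lower' generator sequence g~_j(m)
    ks = []
    for j in range(1, p + 1):
        gamma = -g[j] / gt[j - 1]              # j-th reflection coefficient
        ks.append(gamma)
        new_g, new_gt = g[:], gt[:]
        for m in range(j, p + 1):              # update both generator sequences
            new_g[m] = g[m] + gamma * gt[m - 1]
            new_gt[m] = gt[m - 1] + gamma * g[m]
        g, gt = new_g, new_gt
    return ks

# For an AR(1)-consistent autocorrelation, only the first coefficient is (numerically) nonzero
print(schur_reflection_coefficients([1.0, 0.5, 0.25, 0.125]))
```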

6.
This paper presents resource- and latency-constrained scheduling algorithms that minimize power/energy consumption when the resources operate at multiple voltages (5 V, 3.3 V, 2.4 V, and 1.5 V). The proposed algorithms are based on efficient distribution of slack among the nodes in the data-flow graph. The distribution procedure iteratively enforces the minimum-energy relation derived using the Lagrange multiplier method. Two algorithms are proposed: (1) a low-complexity O(n^2) algorithm and (2) a higher-complexity O(n^2 log L) algorithm, where n is the number of nodes and L is the latency. Experiments with HLS benchmark examples show that the proposed algorithms achieve significant power/energy reduction; for instance, when the latency constraint is 1.5 times the critical path delay, the average reduction is 39%.
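To make the slack-distribution idea concrete, here is a hedged greedy sketch that lowers operating voltages one step at a time, always picking the operation with the best energy-saved-per-delay-added ratio while the latency budget allows it. The chain-of-operations model, the delay and energy functions, and the function names are illustrative assumptions; the paper's algorithms work on general data-flow graphs and enforce the Lagrange-multiplier minimum-energy relation rather than this simple ratio rule.

```python
VOLTAGES = [5.0, 3.3, 2.4, 1.5]            # supply voltages considered in the paper

def delay(v):
    return 1.0 / (v - 0.8)                 # assumed normalized delay model (not from the paper)

def energy(v):
    return v * v                           # switching energy per operation, proportional to V^2

def assign_voltages(num_ops, latency):
    """Greedy slack distribution over a chain of dependent operations."""
    level = [0] * num_ops                  # index into VOLTAGES; start everything at the fastest voltage
    total = num_ops * delay(VOLTAGES[0])
    while True:
        best, best_ratio = None, 0.0
        for i, lv in enumerate(level):
            if lv + 1 >= len(VOLTAGES):
                continue                   # already at the lowest voltage
            extra = delay(VOLTAGES[lv + 1]) - delay(VOLTAGES[lv])
            saved = energy(VOLTAGES[lv]) - energy(VOLTAGES[lv + 1])
            if total + extra <= latency and saved / extra > best_ratio:
                best, best_ratio = i, saved / extra
        if best is None:
            break                          # no further slowdown fits the latency constraint
        total += delay(VOLTAGES[level[best] + 1]) - delay(VOLTAGES[level[best]])
        level[best] += 1
    return [VOLTAGES[lv] for lv in level], total

print(assign_voltages(num_ops=6, latency=5.0))
```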

7.
Fast algorithms for solving arbitrary Toeplitz-plus-Hankel systems of equations are presented. The algorithms are analogs of the split Levinson and Schur algorithms, although the more general Toeplitz-plus-Hankel structure requires that the algorithms be based on a four-term recurrence. Relations with the previous split algorithms are considered. The algorithms require roughly half as many multiplications as previous fast algorithms for Toeplitz-plus-Hankel systems.

8.
Medard et al. proposed an elegant recovery scheme (known as the MFBG scheme) using red/blue recovery trees for multicast path protection against single link or node failures. Xue et al. extended the MFBG scheme and introduced the concept of quality of protection (QoP) as a metric for the multifailure recovery capabilities of single-failure recovery schemes. They also presented polynomial-time algorithms to construct recovery trees with good QoP and quality of service (QoS). In this paper, we present faster algorithms for constructing recovery trees with good QoP and QoS performance. For QoP enhancement, our O(n + m) time algorithm has performance comparable to the previously best O(n^2(n + m)) time algorithm, where n and m denote the number of nodes and the number of links in the network, respectively. For cost reduction, our O(n + m) time algorithms have performance comparable to the previously best O(n^2(n + m)) time algorithms. For bottleneck bandwidth maximization, our O(m log n) time algorithms improve on the previously best O(nm) time algorithms. Simulation results show that our algorithms significantly outperform previously known algorithms in terms of running time, with comparable QoP or QoS performance.

9.
Many algorithms for computing the reliability of linear or circular consecutive-k-out-of-n:F systems have appeared in this Transactions. The best complexity estimate previously obtained for this problem is O(k^3 log(n/k)) operations in the case of i.i.d. components. Using fast algorithms for computing a selected term of a linear recurrence with constant coefficients, we provide an algorithm with arithmetic complexity O(k log(k) log(log(k)) log(n) + k^ω), where 2 ≤ ω ≤ 3 denotes the exponent of matrix multiplication.
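For context, the underlying constant-coefficient recurrence is easy to state: the sketch below evaluates the i.i.d. linear-case reliability by plain O(n) iteration, which is the kind of recurrence whose selected term the paper's fast algorithm computes in time polylogarithmic in n. The boundary conventions and function name follow the common statement in the reliability literature and are not taken verbatim from the paper.

```python
def linear_consecutive_k_out_of_n_F(n, k, q):
    """Reliability of a linear consecutive-k-out-of-n:F system with i.i.d. components.
    q = component failure probability; the system fails iff some k consecutive components fail.
    Uses the standard recurrence R(m) = R(m-1) - p * q**k * R(m-k-1)."""
    if n < k:
        return 1.0
    p = 1.0 - q
    R = [1.0] * k              # with fewer than k components the system cannot fail
    R.append(1.0 - q**k)       # R(k): fail only if all k components fail
    for m in range(k + 1, n + 1):
        R.append(R[m - 1] - p * q**k * R[m - k - 1])
    return R[n]

print(linear_consecutive_k_out_of_n_F(n=10, k=2, q=0.1))
```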

10.
Two efficient time slot assignment algorithms, the two-phase algorithm for nonhierarchical and the three-phase algorithm for hierarchical time-division multiplex (TDM) switching systems, are proposed. The simple idea behind these two algorithms is to schedule the traffic on the critical lines/trunks of a traffic matrix first. The time complexities of the two algorithms are O(LN^2) and O(LM^2), where L is the frame length, N is the switch size, and M is the number of input/output users connected to a hierarchical TDM switch. Unlike conventional algorithms, they are fast, iterative, and simple to implement in hardware. Since no backtracking is used, pipelined packet transmission and packet scheduling can be performed, reducing the scheduling complexity of a transmission matrix to O(N^2) and O(M^2), respectively. Extensive simulations reveal that the two proposed algorithms give close-to-optimal performance under various traffic conditions.

11.
In this paper, the effects of simultaneous write access on the fault modeling of multiport RAMs are investigated. New fault models that represent more accurately the actual faults in such memories are then defined. Subsequently, a general algorithm that ensures the detection of all faults belonging to the new fault model is proposed. Unfortunately, the resulting algorithms are of O(n^2) complexity, which is not practical for real applications. In order to reduce the complexity of the former test algorithm, a topological approach has been developed. Finally, a BIST implementation of one of the proposed topological algorithms is presented.

12.
Deng, H.; Ling, H. Electronics Letters, 1997, 33(13): 1127-1128
The adaptive wavelet packet transform is applied to sparsify moment matrices for the fast solution of electromagnetic integral equations. An additive cost function is employed to adaptively select the optimal wavelet expansion/testing functions. It is found that the sparsified matrix has above-threshold elements that grow as O(N^1.4) for typical scatterers.

13.
The adaptive wavelet packet transform is applied to sparsify the moment matrices for the fast solution of electromagnetic integral equations. In the algorithm, a cost function is employed to adaptively select the optimal wavelet packet expansion/testing functions to achieve the maximum sparsity possible in the resulting transformed system. The search for the best wavelet packet basis and the moment matrix transformation are implemented by repeated two-channel filtering of the original moment matrix with a pair of quadrature filters. It is found that the sparsified matrix has above-threshold elements that grow only as O(N^1.4) for typical scatterers. Consequently, the operations required to solve the transformed moment equation using the conjugate gradient method scale as O(N^1.4). The additional computational cost for carrying out the adaptive wavelet packet transform is evaluated and discussed.
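As a hedged illustration of the basic mechanism only, the sketch below applies one level of a two-channel filter bank (orthonormal Haar filters standing in for the adaptively selected quadrature filter pair) to both sides of a toy matrix and then hard-thresholds the result; the paper's method additionally searches a full wavelet packet tree with a cost function, which is not reproduced here.

```python
import numpy as np

def haar_level(v):
    """One level of the orthonormal Haar (two-channel) filter bank along the last axis."""
    low = (v[..., 0::2] + v[..., 1::2]) / np.sqrt(2.0)    # lowpass / averages
    high = (v[..., 0::2] - v[..., 1::2]) / np.sqrt(2.0)   # highpass / details
    return np.concatenate([low, high], axis=-1)

def sparsify(Z, threshold):
    """Two-sided filter-bank transform of a matrix followed by hard thresholding."""
    W = haar_level(haar_level(Z).T).T      # filter the rows, then the columns
    W[np.abs(W) < threshold] = 0.0
    return W

# Toy 'moment-matrix-like' kernel: smooth away from the diagonal, so it compresses well
N = 64
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
Z = 1.0 / (1.0 + np.abs(i - j))
W = sparsify(Z, threshold=1e-3)
print("above-threshold fraction:", np.count_nonzero(W) / N**2)
```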

14.
Fast Directional Continuous Spherical Wavelet Transform Algorithms
We describe the construction of a spherical wavelet analysis through the inverse stereographic projection of the Euclidean planar wavelet framework, introduced originally by Antoine and Vandergheynst and developed further by Wiaux et al. Fast algorithms for performing the directional continuous wavelet analysis on the unit sphere are presented. The fast directional algorithm, based on the fast spherical convolution algorithm developed by Wandelt and Górski, provides a saving of O(√Npix) over a direct quadrature implementation for Npix pixels on the sphere, and allows one to perform a directional spherical wavelet analysis of a 10^6-pixel map on a personal computer.

15.
Maximizing Cooperative Diversity Energy Gain for Wireless Networks
We are concerned with optimally grouping active mobile users in a two-user-based cooperative diversity system to maximize the cooperative diversity energy gain in a radio cell. The optimization problem is formulated as a non-bipartite weighted-matching problem in a static network setting. The weighted-matching problem can be solved using a maximum weighted (MW) matching algorithm in polynomial time O(n^3). To reduce the implementation and computational complexity, we develop a Worst-Link-First (WLF) matching algorithm, which gives the user with the worse channel condition and the higher energy consumption rate a higher priority to choose its partner. The computational complexity of the proposed WLF algorithm is O(n), while the achieved average energy gain is only slightly lower than that of the optimal maximum weighted-matching algorithm and similar to that of the 1/2-approximation Greedy matching algorithm (with computational complexity of O(n^2 log n)) for a static-user network. We further investigate the optimal matching problem in mobile networks. By intelligently applying user mobility information in the matching algorithm, high cooperative diversity energy gain with moderate overhead is possible. In mobile networks, the proposed WLF matching algorithm, being less complex than the MW and Greedy matching algorithms, yields performance close to that of the MW matching algorithm and better than that of the Greedy matching algorithm.
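A hedged sketch of the Worst-Link-First priority rule is shown below: the unmatched user with the worst link chooses its partner first. The link-quality numbers and the pairwise gain function are illustrative assumptions, and this straightforward implementation does not reproduce the paper's O(n) bookkeeping or its mobility-aware extension.

```python
def wlf_match(link_quality, gain):
    """Pair users Worst-Link-First: the worst unmatched user picks the partner with the largest gain."""
    unmatched = sorted(link_quality, key=lambda u: link_quality[u])   # worst channel first
    pairs = []
    while len(unmatched) >= 2:
        u = unmatched.pop(0)                                          # current worst unmatched user
        partner = max(unmatched, key=lambda v: gain(u, v))            # best available partner for u
        unmatched.remove(partner)
        pairs.append((u, partner))
    return pairs

# Toy example with made-up link qualities; the gain here simply favors pairing a weak user with a strong one
quality = {"u1": 0.2, "u2": 0.9, "u3": 0.5, "u4": 0.7}
print(wlf_match(quality, gain=lambda a, b: abs(quality[a] - quality[b])))
```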

16.
A realistic model of a front-illuminated n+-p-p+ silicon solar cell is developed by solving the steady-state current continuity equations for minority carriers in the quasi-neutral regions, assuming that light in the cell is trapped as a result of multiple reflections at the front and the back of the cell. This model is used to study the effects of the front emitter thickness and doping level and of the light trapping on the J-V characteristic, and thereby on the open-circuit voltage, short-circuit current density, curve factor, and efficiency of the cell. A textured cell with an emitter thickness in the range of 0.3-1.0 μm, emitter doping of about 5×10^18 cm^-3, and minority-carrier recombination velocities as large as 200 cm/s at the n+ front surface and 10 cm/s at the back of the p base can exhibit an efficiency in excess of 26% (under AM 1.5 sunlight of 100 mW/cm^2 intensity) at 25°C if the light reflection losses at the front surface can be made small.

17.
This paper presents the multiple sweep method of moments (MSMM) analysis of the radiation and scattering from electrically large, smooth, perfectly conducting bodies. The MSMM is an O(N^2) recursive method for solving the large matrix equations that arise in the method of moments (MM) analysis of electrically large bodies. In the MSMM, the body is split into P sections and the currents on these sections are found in a recursive fashion. The first MSMM recursion, or sweep, includes the dominant scattering mechanisms, and each subsequent sweep includes higher-order mechanisms. Thus, as the electrical size of the body increases, fewer and fewer sweeps are required to accurately compute the current on the body.
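The sweep-by-sweep picture can be illustrated with a generic block iteration over a partitioned matrix equation: each sweep updates the current on one section using the most recent currents on the other sections. The sketch below is a plain block Gauss-Seidel iteration under that picture, offered only as an analogy; it is not the authors' MSMM recursion, and the block layout and names are assumptions.

```python
import numpy as np

def section_sweeps(Z_blocks, v_blocks, num_sweeps):
    """Sweep-by-sweep (block Gauss-Seidel style) solution of a partitioned system Z I = v,
    where Z_blocks[p][q] couples section q to section p and v_blocks[p] is the excitation on section p."""
    P = len(v_blocks)
    I = [np.zeros_like(v) for v in v_blocks]               # section currents, initially zero
    for _ in range(num_sweeps):
        for p in range(P):
            # subtract the fields radiated onto section p by the currents on all other sections
            rhs = v_blocks[p] - sum(Z_blocks[p][q] @ I[q] for q in range(P) if q != p)
            I[p] = np.linalg.solve(Z_blocks[p][p], rhs)    # solve the self-interaction block
    return I

# Toy two-section system with weak coupling between the sections
A = np.array([[2.0, 0.3], [0.1, 2.0]])
Z = [[A, 0.2 * np.eye(2)], [0.2 * np.eye(2), A]]
v = [np.ones(2), np.ones(2)]
print(section_sweeps(Z, v, num_sweeps=5))
```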

18.
A new electrical method to measure the conductivity mobility as a function of the injection level is proposed. The measurement principle is based on detecting the voltage drop that appears across an n+-n-n+ (p+-p-p+) structure when a current step is forced into it at a given injection level in the intermediate region. This is achieved using a three-terminal test pattern consisting of p+ and n+ layers realized on top of an n-n+ (p-p+) epitaxial wafer, where the p+-n-n+ (n+-p-p+) surface diode is forward biased to monitor the conductivity of the epilayer. The use of separate terminals for injection control and mobility measurement allows this technique to overcome some limitations of other electrical methods available in the literature. Mobility values measured up to 2×10^17 cm^-3 are in good agreement with those predicted by Dorkel and Leturcq's model (1981).

19.
A previously developed nonlocal impact ionization model based on the average energy of the hot-electron subpopulation has been further simplified. The system of hydrodynamic transport equations, consisting of three equations for these high-energy electrons, has been reduced to a single equation. The simulation results for n+-n-n+ structures are in good agreement with Monte Carlo (MC) calculations. The model is easily applicable to two-dimensional (2-D) problems by exploiting the current flow line approach.

20.
We demonstrate a new and improved borderless contact (BLC) Ti-salicide process for the fabrication of sub-quarter-micron CMOS devices. A low-temperature chemical vapor deposition (CVD) SiOxNy film, acting as the selective etch-stop layer, and an additional n+ and p+ source-drain double implant structure (DIS) are employed in the studied device. The additional n+ and p+ DIS can reduce the junction leakage current, which is usually enhanced by BLC etching near the edge of the shallow trench isolation (STI), and the process window is thereby enlarged. Furthermore, the employed low-thermal-budget oxynitride and its high deposition rate improve the salicide thermal stability and avoid salicide agglomeration.

