20 similar documents found (search time: 0 ms)
1.
Two sphere-fitting procedures important to scientists and engineers, namely the linear least squares (LLS) and nonlinear least squares (NLLS) methods, are described together with their general random-error analysis. The first-order random errors of the center coordinates and the radius of the fitted sphere are derived in detail for both procedures under the assumption that a variance-covariance matrix exists for the random error vector. Under the additional assumption of a trivariate normal error distribution, the maximum likelihood (ML) estimators and their standard deviations are also derived. The effectiveness of these procedures is studied through computer simulation.
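The LLS procedure rests on a linearization of the sphere equation. As a minimal illustrative sketch (the function name and the NumPy formulation are mine, not the paper's): expanding |p − c|² = r² makes the center and a combined radius term linear in the data, so a single least squares solve recovers both.

```python
import numpy as np

def fit_sphere_lls(points):
    """Algebraic (linear least squares) sphere fit.

    Expanding |p - c|^2 = r^2 gives the linear model
        2*cx*x + 2*cy*y + 2*cz*z + d = x^2 + y^2 + z^2,
    with d = r^2 - |c|^2, solved for [cx, cy, cz, d] in one lstsq call.
    """
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])   # design matrix
    b = (p ** 2).sum(axis=1)                          # squared norms
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = w[:3]
    radius = np.sqrt(w[3] + center @ center)
    return center, radius
```

This algebraic fit minimizes a residual in squared distances rather than in distances themselves, which is why the paper treats the NLLS refinement separately.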
2.
A linear systolic array for recursive least squares
Classical systolic design procedures rely on linear or affine space-time transformations because of the well-understood properties of linear operations. In order to increase the efficiency of the final processor, various ad hoc manipulations have been proposed that appear to be nonlinear transformations at the physical array level. Folding is one of these possible transformations. The authors show that folding can actually be considered an overall linear procedure by artificially increasing the dimensionality of the dependence graph of the algorithm. A 1-D array for recursive least squares is also derived as an application of a systematic linear design procedure including folding.
3.
Tong Zhang, IEEE Transactions on Information Theory, 2006, 52(4):1307-1321
In this paper, we establish upper and lower bounds for some statistical estimation problems through concise information-theoretic arguments. Our upper bound analysis is based on a simple yet general inequality which we call the information exponential inequality. We show that this inequality naturally leads to a general randomized estimation method, for which performance upper bounds can be obtained. The lower bounds, applicable for all statistical estimators, are obtained by original applications of some well known information-theoretic inequalities, and approximately match the obtained upper bounds for various important problems. Moreover, our framework can be regarded as a natural generalization of the standard minimax framework, in that we allow the performance of the estimator to vary for different possible underlying distributions according to a predefined prior.
4.
The estimation of the n⁻ drift layer resistance Rn⁻ of power VVMOS (V-groove vertical-geometry power MOST) devices is treated in the letter. The obtained results, based on the possibility of defining upper and lower bounds on the accurate value, are compared with experimental and theoretical results of other methods.
5.
The Riccati equation plays as important a role in scattering theory as it does in linear least squares estimation theory. However, in the scattering literature, a somewhat different framework of treating the Riccati equation has been developed. This framework is shown to be appropriate for estimation problems and makes possible simple derivations of known results as well as leading to several new results. Examples include the derivation of backward equations to solve forward Riccati equations, an analysis of the asymptotic behavior of the Riccati equation, the derivation of backward Markovian representations of stochastic processes, and new derivations and new insights into the Chandrasekhar and related Levinson and Cholesky equations.
6.
Universal linear prediction by model order weighting
A common problem that arises in adaptive filtering, autoregressive modeling, or linear prediction is the selection of an appropriate order for the underlying linear parametric model. We address this problem for linear prediction, but instead of fixing a specific model order, we develop a sequential prediction algorithm whose sequentially accumulated average squared prediction error for any bounded individual sequence is as good as the performance attainable by the best sequential linear predictor of order less than some M. This predictor is found by transforming linear prediction into a problem analogous to the sequential probability assignment problem from universal coding theory. The resulting universal predictor uses essentially a performance-weighted average of all predictors for model orders less than M. Efficient lattice filters are used to generate the predictions of all the models recursively, resulting in a complexity of the universal algorithm that is no larger than that of the largest model order. Examples of prediction performance are provided for autoregressive and speech data as well as an example of adaptive data equalization.
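The performance-weighted averaging idea can be sketched in a few lines. This is a generic exponential-weighting mixture in the spirit of the abstract, not the paper's lattice-filter implementation; the function name and the learning rate `eta` are illustrative choices.

```python
import numpy as np

def mixture_predict(preds, losses, eta=0.5):
    """Performance-weighted average of competing predictors.

    preds:  current predictions of the candidate models (one per order)
    losses: each model's accumulated squared prediction error so far
    Models with smaller accumulated loss receive exponentially larger
    weight, so the mixture tracks the best model order sequentially.
    """
    w = np.exp(-eta * np.asarray(losses, dtype=float))
    w /= w.sum()                      # normalize to a probability vector
    return float(w @ np.asarray(preds, dtype=float))
```

With equal losses this reduces to a plain average; as one model's loss pulls ahead, the mixture converges to that model's prediction.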
7.
Kaufman L, IEEE Transactions on Medical Imaging, 1993, 12(2):200-214
The EM algorithm is the basic approach used to maximize the log likelihood objective function for the reconstruction problem in positron emission tomography (PET). The EM algorithm is a scaled steepest ascent algorithm that elegantly handles the nonnegativity constraints of the problem. It is shown that the same scaled steepest descent algorithm can be applied to the least squares merit function, and that it can be accelerated using the conjugate gradient approach. The experiments suggest that one can cut the computation by about a factor of 3 by using this technique. The results are applied to various penalized least squares functions which might be used to produce a smoother image.
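The "scaled steepest descent that preserves nonnegativity" idea can be illustrated with the ISRA-style multiplicative update for nonnegative least squares; this is a standard update of that family, shown here for intuition, and not necessarily the paper's exact algorithm.

```python
import numpy as np

def scaled_descent_step(A, b, x):
    """One multiplicative (scaled steepest descent) update for
    min ||Ax - b||^2 subject to x >= 0:

        x  <-  x * (A^T b) / (A^T A x)

    This equals a gradient step with the diagonal scaling
    diag(x / (A^T A x)), so nonnegative iterates stay nonnegative,
    mirroring how the EM update handles the PET constraints.
    """
    num = A.T @ b
    den = A.T @ (A @ x)
    return x * num / np.maximum(den, 1e-12)  # guard against divide-by-zero
```

For nonnegative A and b the least squares objective decreases monotonically under this update, which is exactly the property that makes the EM-style scaling attractive before any conjugate gradient acceleration.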
8.
The constrained least squares (CLS) distribution is a method for obtaining distribution functions that yield low sidelobe patterns with specified constraints on the aperture efficiency, and it is especially useful for the transmit patterns of active array antennas. The widely used Taylor distribution optimizes pattern performance alone, whereas the CLS distribution optimizes pattern performance while taking into account constraints on both the peak element amplitude and the total effective radiated voltage (ERV) of the aperture distribution. The paper compares the pattern characteristics of linear arrays with CLS and Taylor distributions. The results help to establish guidelines on when a CLS distribution would be preferable over a Taylor distribution when a specified aperture efficiency is important.
9.
Human motion prediction is a critical issue in human-robot collaboration (HRC) tasks. To reduce the local error caused by the limited capture range and sampling frequency of the depth sensor, a hybrid human motion prediction algorithm, optimized sliding window polynomial fitting and recursive least squares (OSWPF-RLS), is proposed. The OSWPF-RLS algorithm takes the human body joint data obtained under the HRC task as input and uses recursive least squares (RLS) to predict the human movement trajectories within the time window. Then, the optimized sliding window polynomial fitting (OSWPF) is used to calculate the multi-step prediction value, and the increment of the multi-step prediction value is appropriately constrained. Experimental results show that, compared with existing benchmark algorithms, the OSWPF-RLS algorithm improves the multi-step prediction performance.
10.
Seongwook Song, Jun-Seok Lim, Seong Joon Baek, Koeng-Mo Sung, IEEE Transactions on Vehicular Technology, 2002, 51(3):613-616
In this article, the variable forgetting factor linear least squares algorithm is presented to improve the tracking capability of channel estimation. A linear channel model with respect to time change describes a time-varying channel more accurately than a conventional stationary channel model. To reduce the estimation error due to model mismatch, we incorporate the modified variable forgetting factor into the proposed algorithm. Compared to the existing algorithms (the exponentially windowed recursive least squares algorithm with the optimal forgetting factor and the linear least squares algorithm), the proposed method makes a remarkable improvement in a fast fading environment. The effects of channel parameters such as signal-to-noise ratio and fading rate are investigated by computer simulations.
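For context, the baseline that such schemes build on is recursive least squares with a fixed forgetting factor. The sketch below shows one standard RLS update (variable names are mine; the paper's variable-forgetting-factor rule is not reproduced here): a smaller λ forgets old data faster and therefore tracks a time-varying channel better, at the cost of noisier estimates.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.98):
    """One recursive least squares step with forgetting factor lam.

    theta: current parameter estimate      P: inverse-correlation matrix
    phi:   regressor vector                y: new measurement
    """
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)            # gain vector
    theta = theta + k * (y - phi @ theta)    # correct by a priori error
    P = (P - np.outer(k, Pphi)) / lam        # update inverse correlation
    return theta, P
```

Making λ time-varying, as the abstract describes, amounts to replacing the constant `lam` above with a data-driven sequence.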
11.
Jensen-Cotes upper and lower bounds on the Gaussian Q-function and related functions
We present new families of lower and upper bounds on Q-functions. First, we consider the Craig form of the Gaussian Q-function Q(ξ) and show that its integrand can be partitioned into a pair of complementary convex and concave segments. This property is then exploited in conjunction with the Jensen inequality and the Newton-Cotes quadrature rule to produce a complete family of upper and lower bounds on Q(ξ), which can be made arbitrarily tight by finer segmentation. The basic idea is then utilized to derive families of upper and lower bounds also for the squared Gaussian Q-function Q²(ξ), the 2-D joint Gaussian Q-function Q(x, y, ρ), and the generalized Marcum Q-function Q_M(x, y). The bounds are shown to be tighter than alternatives found in the literature, and in some cases the lower bounds provided find no equivalent in current literature. The generality of the principle is the elegant point of the method, and the resulting Jensen-Cotes bounds are easy to implement and evaluate since only elementary transcendental functions are involved. As an example of application to the analysis of communication systems, we consider the bit error rates (BERs) of decode-and-forward (DF) cooperative relaying schemes with coherent and differential phase-shift keying (PSK) modulations, which have been shown to have an intricate dependence on the Gaussian Q-function, complicated by cross-products, irrational functional arguments, and multiple numerical integrations. In that example the bounds substantially reduce the complexity required to evaluate the expressions, retaining tightness despite multiple numerical integrations with infinite limits.
12.
Linear spectral mixture analysis (LSMA) is a widely used technique in remote sensing to estimate abundance fractions of materials present in an image pixel. In order for an LSMA-based estimator to produce accurate amounts of material abundance, it generally requires two constraints imposed on the linear mixture model used in LSMA, which are the abundance sum-to-one constraint and the abundance nonnegativity constraint. The first constraint requires the sum of the abundance fractions of materials present in an image pixel to be one and the second imposes a constraint that these abundance fractions be nonnegative. While the first constraint is easy to deal with, the second constraint is difficult to implement since it results in a set of inequalities and can only be solved by numerical methods. Consequently, most LSMA-based methods are unconstrained and produce solutions that do not necessarily reflect the true abundance fractions of materials. In this case, they can only be used for the purposes of material detection, discrimination, and classification, but not for material quantification. The authors present a fully constrained least squares (FCLS) linear spectral mixture analysis method for material quantification. Since no closed form can be derived for this method, an efficient algorithm is developed to yield optimal solutions. In order to further apply the designed algorithm to unknown image scenes, an unsupervised least squares error (LSE)-based method is also proposed to extend the FCLS method in an unsupervised manner.
13.
In deeply scaled CMOS technologies, device aging causes transistor performance parameters to degrade over time. While reliable models to accurately assess these degradations are available for devices and circuits, extending these models to estimate the aging of microprocessor cores is not trivial, and there is no well-accepted model in the literature. This work proposes a methodology for deriving an NBTI-induced aging model for embedded cores. Since aging can only be determined on a netlist, we use an empirical approach based on characterizing the model using a set of open synthesizable embedded cores, which allows us to establish a link between aging at the transistor level and aging from the core perspective in terms of maximum frequency degradation. Using this approach, we were able to (1) show that aging is independent of the workloads run by the cores, and (2) calculate upper and lower bounds for the "aging factor" that can be used for a generic embedded processor. Results show that our method yields very good accuracy in predicting the frequency degradation of cores due to the NBTI aging effect, and can be used with confidence when the netlist of the cores is not available.
14.
A technique is described for synthesizing the coefficients of an antenna array which satisfy a constrained minimax criterion. Inhibition regions can be formed in the beam pattern and placed advantageously in directions associated with strong sources of interference. Modified Dolph-Chebyshev beam patterns are obtained by imposing upper and lower constraining bounds on the beam pattern over regions of directions in space. Specific examples are presented for line arrays.
15.
IEEE Transactions on Information Theory, 1976, 22(1):65-74
A method for the evaluation of upper and lower bounds to the error probability of a linear pulse-amplitude modulation (PAM) system with bounded intersymbol interference and additive Gaussian noise is obtained via an isomorphism theorem from the theory of moment spaces. These upper and lower bounds are seen to be equivalent to upper and lower envelopes of some compact convex body generated from a set of kernel functions. Depending on the selection of these kernels and their corresponding moments, different classes of bounds are obtained. In this paper, upper and lower bounds that depend on the absolute moment of the intersymbol interference random variable, the second moment, the fourth moment, and an "exponential moment" are found by analytical, graphical, or iterative approaches. We study in detail the exponential moment case and obtain a family of new upper and a family of new lower bounds. Within each family, expressions for these bounds are given explicitly as a function of an arbitrary real-valued parameter. For two channels of interest, upper and lower bounds are evaluated and compared. Results indicate these bounds to be tight and useful.
16.
17.
Janwa H., IEEE Transactions on Information Theory, 1989, 35(1):110-122
A Griesmer-like upper bound on the covering radius, R, is given. To the author's knowledge this is the only upper bound which explicitly depends on all three parameters n, k, and d. An upper bound on R for cyclic codes is then given which depends on the generator polynomial of the cyclic code and which, in many cases, leads to an improvement of the previous bound. An upper bound on R for cyclic codes with irreducible generator polynomials is also given. New interpretations and applications of the so-called Norse bounds, and necessary and sufficient conditions to attain one of these bounds, are provided. Generalizations of most of the results to codes over GF(q) are outlined.
18.
IEEE Transactions on Signal Processing, 2001, 49(1):74-86
The least squares (LS) estimation criterion on one hand, and the total LS (TLS), constrained TLS (CTLS) and structured TLS (STLS) criteria on the other hand, can be viewed as opposite limiting cases of a more general criterion, which we term "extended LS" (XLS). The XLS criterion distinguishes measurement errors from modeling errors by properly weighting and balancing the two error sources. In the context of certain models (termed "pseudo-linear"), we derive two iterative algorithms for minimizing the XLS criterion: One is a straightforward "alternating coordinates" minimization, and the other is an extension of an existing CTLS algorithm. The algorithms exhibit different tradeoffs between convergence rate, computational load, and accuracy. The XLS criterion can be applied to popular estimation problems, such as identifying an autoregressive (AR) with exogenous noise (ARX) system from noisy input/output measurements or estimating the parameters of an AR process from noisy measurements. We demonstrate the convergence properties and performance of the algorithms with examples of the latter.
19.
IEEE Transactions on Information Theory, 1984, 30(6):877-878
A sequence of upper and lower bounds for the Q function, defined as Q(x) = (1/√(2π)) ∫ₓ^∞ exp(−y²/2) dy, is developed. These bounds are shown to be tighter than those most commonly used.
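To make the kind of bounds at stake concrete, the sketch below checks the classical first members of such sequences, (x/(1+x²))·φ(x) < Q(x) < φ(x)/x for x > 0, against the exact value; these particular bounds are standard textbook ones used here for illustration, not necessarily the tighter sequence derived in the paper.

```python
import math

def q_exact(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_bounds(x):
    """Classical lower and upper bounds on Q(x) for x > 0:

        (x / (1 + x^2)) * phi(x)  <  Q(x)  <  phi(x) / x,

    where phi(x) = exp(-x^2/2) / sqrt(2*pi) is the standard normal density.
    Both bounds become tight as x grows.
    """
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    return (x / (1.0 + x * x)) * phi, phi / x
```

Sandwiching Q(x) this way is what makes such bounds useful in error-rate analysis: each side involves only elementary functions.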
20.
Kruse L., Schmidt E., Jochens G., Stammermann A., Schulz A., Macii E., Nebel W., IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2001, 9(1):3-14
In this paper, we present an approach for the calculation of lower and upper bounds on the power consumption of data path resources like functional units, registers, I/O ports, and busses from scheduled data flow graphs executing a specified input data stream. The low power allocation and binding problem is formulated. First, it is shown that this problem without constraining the number of resources can be relaxed to the bipartite weighted matching problem, which is solvable in O(n³), where n is the number of arithmetic operations, variables, I/O-access, or bus-access operations which have to be bound to data path resources. In a second step we demonstrate that the relaxation can be efficiently extended by including Lagrange multipliers in the problem formulation to handle a resource constraint. The estimated bounds take into account the effects of resource sharing. The technique can be used, for example, to prune the design space in high-level synthesis for low power before the allocation and binding of the resources. The application of the technique on benchmarks with real application input data shows the tightness of the bounds.