Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
More efficient use of multipliers in FIR filters can be achieved, at the expense of a slight increase in delay, by designing sparse filter structures. We have developed a new, relatively simple approach to designing sparse cascaded filters, also described in the literature as interpolated FIR filters. Our method is heuristic in nature, but gives surprisingly good results without requiring iterative design or investigation of a large number of alternative parameterizations. The design uses the efficient and widely available Remez exchange algorithm along with some routines that we have written for Matlab. Although the resulting designs are not optimal in a minimax-error sense, they have reduced RMS error, which may be attractive for some applications. We give design examples and study the effects of coefficient quantization.
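As a rough illustration of the interpolated-FIR idea, the sketch below designs a stretched model filter and a masking filter with scipy.signal.remez and cascades them. The band edges, stretch factor, and tap counts are illustrative assumptions, not the paper's parameterization (whose routines were written for Matlab).

```python
# A minimal interpolated-FIR (IFIR) sketch using the Remez exchange
# algorithm.  All specs below are illustrative assumptions.
import numpy as np
from scipy.signal import remez, freqz

fs = 1.0                 # normalized sampling rate
fp, fstop = 0.04, 0.06   # desired passband / stopband edges
M = 4                    # stretch (interpolation) factor

# Model filter G(z): designed on M-times-stretched specifications,
# so it needs far fewer taps than a direct design.
g = remez(31, [0, M * fp, M * fstop, 0.5], [1, 0], fs=fs)

# Sparse filter G(z^M): insert M-1 zeros between coefficients;
# only the nonzero taps cost multipliers.
g_sparse = np.zeros(M * (len(g) - 1) + 1)
g_sparse[::M] = g

# Masking (interpolator) filter I(z): removes the spectral images of
# G(z^M); its stopband starts at the first image edge 1/M - fstop.
i = remez(23, [0, fp, 1.0 / M - fstop, 0.5], [1, 0], fs=fs)

# Overall cascade H(z) = G(z^M) I(z)
h = np.convolve(g_sparse, i)
w, H = freqz(h, worN=4096, fs=fs)
sb = np.abs(H[w >= fstop])
print(f"{len(g) + len(i)} nonzero taps; stopband peak: "
      f"{20 * np.log10(sb.max()):.1f} dB")  # far fewer taps than a direct design
```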

2.
Statistical modeling methods are becoming indispensable in today's large-scale image analysis. In this paper, we explore a computationally efficient parameter estimation algorithm for two-dimensional (2-D) and three-dimensional (3-D) hidden Markov models (HMMs) and show applications to satellite image segmentation. The proposed parameter estimation algorithm is compared with the first algorithm proposed for 2-D HMMs, which is based on the variable-state Viterbi algorithm. We also propose a 3-D HMM for volume image modeling and apply it to volume image segmentation using a large number of synthetic images with ground truth. Experiments have demonstrated the computational efficiency of the proposed parameter estimation technique for 2-D HMMs and the potential of the 3-D HMM as a stochastic modeling tool for volume images.
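For readers unfamiliar with the Viterbi machinery that the variable-state 2-D algorithm generalizes, here is a minimal 1-D Viterbi decoder in NumPy; the toy model parameters are illustrative only, not those of the paper.

```python
# Minimal 1-D Viterbi decoder -- the building block that variable-state
# Viterbi algorithms for 2-D HMMs generalize.  Toy parameters only.
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """obs: observation indices; log_pi: initial log-probs (S,);
    log_A: transition log-probs (S,S); log_B: emission log-probs (S,V)."""
    T, S = len(obs), len(log_pi)
    delta = np.empty((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A       # (S, S)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.empty(T, dtype=int)                    # backtrack best path
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # illustrative transition matrix
B = np.array([[0.7, 0.3], [0.1, 0.9]])   # illustrative emission matrix
pi = np.array([0.5, 0.5])
print(viterbi([0, 0, 1, 1, 1], np.log(pi), np.log(A), np.log(B)))
```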

3.
Smoothing followed by a derivative operation is often used in the analysis of hyperspectral signatures. The width of the smoothing and/or derivative operator can greatly affect the utility of the method. If one is unsure of the appropriate width or would like to conduct analysis for several widths, scale-space images can be used. This paper shows how the wavelet transform modulus-maxima method can be used to formalize and generalize the smoothing-followed-by-derivative analysis, and how the wavelet transform can be used to greatly decrease the computational cost of the analysis. The Mallat/Zhong wavelet algorithm is compared to the traditional method, convolution with Gaussian derivative filters, for computing scale-space images. Both methods are compared on two points: (1) computational expense and (2) the resulting scalar decompositions. The results show that the wavelet algorithm can greatly reduce the computational expense while practically no differences exist in the subsequent scalar decompositions. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting scale-space images is on the order of 0.02.
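The traditional method being compared against, convolution with Gaussian derivative filters across a range of widths, can be sketched in a few lines; the synthetic signature below is a stand-in for a HYDICE spectrum.

```python
# Scale-space computation by convolving a signature with Gaussian
# derivative filters over a range of widths (the traditional method).
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(0, 1, 512)
# Synthetic two-feature "spectrum" standing in for a HYDICE signature.
signature = (np.exp(-((x - 0.3) / 0.05) ** 2)
             + 0.5 * np.exp(-((x - 0.7) / 0.1) ** 2))

scales = 2.0 ** np.arange(0, 5)          # dyadic widths, as in Mallat/Zhong
scale_space = np.stack([
    gaussian_filter1d(signature, sigma=s, order=1)  # smooth + 1st derivative
    for s in scales
])
print(scale_space.shape)                  # (n_scales, n_bands)
```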

4.
A computationally efficient superresolution image reconstruction algorithm
Superresolution reconstruction produces a high-resolution image from a set of low-resolution images. Previous iterative methods for superresolution had not adequately addressed the computational and numerical issues for this ill-conditioned and typically underdetermined large-scale problem. We propose efficient block circulant preconditioners for solving the Tikhonov-regularized superresolution problem by the conjugate gradient method. We also extend to underdetermined systems the derivation of the generalized cross-validation method for automatic calculation of regularization parameters. The effectiveness of our preconditioners and regularization techniques is demonstrated with superresolution results for a simulated sequence and a forward-looking infrared (FLIR) camera image sequence.
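The flavor of the preconditioning idea can be sketched on a 1-D circulant toy problem: conjugate gradients on the Tikhonov normal equations with a circulant preconditioner applied via the FFT. The kernel, size, and regularization parameter are illustrative assumptions, and the toy operator is exactly circulant, unlike the real superresolution operator.

```python
# CG on Tikhonov normal equations with a circulant (FFT) preconditioner.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 256
rng = np.random.default_rng(0)
kernel = np.zeros(n); kernel[:5] = 1.0 / 5.0          # circular blur
K = np.fft.fft(kernel)
x_true = rng.standard_normal(n)
b = np.real(np.fft.ifft(K * np.fft.fft(x_true))) + 0.01 * rng.standard_normal(n)
lam = 1e-2

# Normal-equations operator (A^H A + lam I).  Here A is exactly
# circulant, so CG converges immediately with this preconditioner; in
# the real problem A is only approximately circulant and the
# preconditioner merely clusters the spectrum.
def normal_op(v):
    Av = np.fft.ifft(K * np.fft.fft(v))
    return np.real(np.fft.ifft(np.conj(K) * np.fft.fft(Av))) + lam * v

def precond(v):   # inverse of the circulant normal operator, via FFT
    return np.real(np.fft.ifft(np.fft.fft(v) / (np.abs(K) ** 2 + lam)))

A_op = LinearOperator((n, n), matvec=normal_op)
M_op = LinearOperator((n, n), matvec=precond)
rhs = np.real(np.fft.ifft(np.conj(K) * np.fft.fft(b)))
x, info = cg(A_op, rhs, M=M_op)
print(info, np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```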

5.
A computationally efficient discrete Backus-Gilbert (BG) method is derived that is appropriate for resolution-matching applications using oversampled data. The method builds upon existing BG methods and approximation techniques to create a modified set of BG coefficients. The method in its current form is restricted to a resolution-only minimization constraint, but could in the future be extended to a simultaneous noise minimization constraint using a generalized singular value decomposition (GSVD) approach. A theoretical one-dimensional intercomparison is performed using a hypothetical sensor configuration. A comparison of the discrete BG method with a nondiscrete BG method shows that the new approach can be 250% more efficient while maintaining similar accuracy. In addition, an SVD approximation improves computational efficiency by a further 43%-106%, depending upon the scene. Several quadrature methods were also tested. The results suggest that accuracy improvements are possible using customized quadrature in regions containing known physical data discontinuities (such as along coastlines in microwave imagery data). The ability to recompute the modified BG coefficients dynamically at lower computational cost makes this work applicable to applications in which noise may vary, or where data observations are not consistently available (e.g., in environments contaminated by radio-frequency interference).

6.
A novel approach called 'VQ-agglomeration', capable of performing fast and autonomous clustering, is presented. The approach involves a vector quantisation (VQ) process followed by an agglomeration algorithm that treats codewords as initial prototypes. Each codeword is associated with a gravisphere that has a well-defined attraction radius. The agglomeration algorithm requires that each codeword be moved directly to the centroid of its neighbouring codewords. The movements of codewords in the feature space are synchronous, and converge quickly to sets of concentric circles whose centroids identify the resulting clusters. Unlike other techniques, such as k-means and fuzzy c-means, the proposed approach is free of the initial-prototype problem and does not require pre-specification of the number of clusters. Properties of the agglomeration algorithm are characterised and its convergence is proved.
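A minimal sketch of the two-stage idea follows, assuming a hand-picked attraction radius rather than the paper's gravisphere derivation.

```python
# VQ stage (codebook via k-means) followed by synchronous agglomeration:
# each codeword moves to the centroid of the codewords inside its
# attraction radius until the positions converge.  The radius is an
# assumed value, not the paper's derivation.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(c, 0.3, size=(200, 2))
                  for c in ((0, 0), (3, 3), (0, 4))])

codewords, _ = kmeans2(data, 30, seed=1)   # VQ stage: 30 initial prototypes
radius = 0.8                               # assumed attraction radius

for _ in range(100):
    dists = np.linalg.norm(codewords[:, None] - codewords[None], axis=2)
    neigh = dists < radius
    new = np.stack([codewords[m].mean(axis=0) for m in neigh])  # synchronous
    if np.allclose(new, codewords):
        break
    codewords = new

# Coincident codewords identify the clusters; count the distinct ones.
clusters = np.unique(np.round(codewords, 2), axis=0)
print(len(clusters), "clusters found")     # expected: 3 for this toy data
```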

7.
Pattern matching for network security and intrusion detection demands exceptionally high performance. This paper describes a novel systolic array-based string matching architecture using a buffered, two-comparator variation of the Knuth-Morris-Pratt (KMP) algorithm. The architecture compares favorably with state-of-the-art hardwired designs while providing on-the-fly reconfiguration, efficient hardware utilization, and high clock rates. KMP is a well-known computationally efficient string-matching technique that uses a single comparator and a precomputed transition table. Through the use of the transition table, the number of redundant comparisons performed is reduced. Through various algorithmic changes, we enable KMP to be used in hardware, providing the computational efficiency of the serial algorithm and the high throughput of a parallel hardware architecture. The efficiency of the system allows for a faster and denser implementation than any other RAM-based exact-match system. We add a second comparator and an input buffer, and then prove that the modified algorithm can function efficiently when implemented as an element of a systolic array. The system can accept at least one character in each cycle while guaranteeing that the stream will never stall. In this paper, we prove the bounds on the buffer size and running time of the systolic array, discuss the architectural considerations involved in the FPGA implementation, and provide performance comparisons against other approaches.
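For reference, the serial KMP algorithm the architecture builds on can be stated compactly; the failure table below plays the role of the precomputed transition table referred to above.

```python
# Reference implementation of serial KMP: a precomputed failure table
# lets the matcher skip redundant comparisons.
def kmp_failure(pattern: str) -> list[int]:
    """fail[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text: str, pattern: str) -> list[int]:
    fail, k, hits = kmp_failure(pattern), 0, []
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]          # fall back via the transition table
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):        # full match ending at position i
            hits.append(i - k + 1)
            k = fail[k - 1]
    return hits

print(kmp_search("ababcabcabababd", "ababd"))  # [10]
```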

8.
Fast algorithms for the computation of the FIR MMSE-DFE in the presence of ISI, CCI, and colored noise are presented. Substantial reductions in computational complexity are achieved by using the powerful analytical tools of Cholesky factorization and displacement structure to fully exploit the structure of the problem. Both symbol-spaced and fractionally spaced feedforward filters are considered. Finally, we give a detailed complexity evaluation of the proposed algorithm for scenarios typical of the US TDMA digital cellular standard IS-54.

9.
In this paper, the performance of a new two-step adaptive detection algorithm is analyzed. The two-step GLRT consists of an initial adaptive matched filter (AMF) test followed by a generalized likelihood ratio test (GLRT). Analytical expressions are provided for the probability of false alarm (PFA) and the probability of detection (PD) in unknown complex Gaussian interference. The analysis shows that the two-step GLRT significantly reduces the computational load relative to the GLRT while maintaining detection and sidelobe rejection performance commensurate with it. The two-step GLRT detection algorithm is also compared with another two-step detection algorithm, the adaptive sidelobe blanker (ASB). Both the two-step GLRT and the ASB are characterized in terms of mainbeam detection performance and the rejection of sidelobe targets. We demonstrate that for a given PFA, the two-step GLRT has a broad range of threshold pairs (one threshold for the AMF test and one for the GLRT) that provide performance identical to the GLRT. This is in contrast with the ASB, where the threshold pairs that maximize the PD are a function of the target's signal-to-interference-plus-noise ratio (SINR). Hence, for a fixed pair of thresholds, the two-step GLRT can provide slightly better mainbeam detection performance than the ASB in the transition region from low to high detection probabilities.
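A hedged sketch of the two-step structure follows, using the standard AMF and Kelly-GLRT statistics: the cheap AMF test screens every cell, and the costlier GLRT is evaluated only where the AMF exceeds its threshold. The thresholds and data are illustrative, not the paper's analytical design.

```python
# Two-step detection: AMF screen, then Kelly GLRT.  Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
N, K = 8, 32                        # array size, training samples
s = np.ones(N) / np.sqrt(N)         # steering vector
train = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
S = train.conj().T @ train          # (unnormalized) sample covariance
Si = np.linalg.inv(S)

def amf(x):
    return np.abs(s.conj() @ Si @ x) ** 2 / np.real(s.conj() @ Si @ s)

def kelly_glrt(x):
    return np.abs(s.conj() @ Si @ x) ** 2 / (
        np.real(s.conj() @ Si @ s) * (1.0 + np.real(x.conj() @ Si @ x)))

x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2) + 4 * s
eta_amf, eta_glrt = 0.5, 0.1        # illustrative threshold pair
if amf(x) > eta_amf:                # cheap first stage
    print("declare target" if kelly_glrt(x) > eta_glrt else "reject")
else:
    print("reject")
```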

10.
11.
Direction-of-arrival (DOA) estimation of multiple emitters with sensor arrays has been a hot topic in signal processing for decades. Among existing DOA estimation methods, subspace-based ones have attracted considerable research interest, mainly due to their precision and their super-resolution of temporally overlapping signals. However, subspace-based DOA estimation methods usually involve covariance matrix decomposition and refined spatial searching, both of which are computationally demanding and significantly degrade efficiency; this heavy computational load has hindered the application of subspace-based methods in practical systems. In this paper, we follow the general framework of subspace-based methods to propose a new DOA estimation algorithm, focusing on reducing the cost of the two procedures of covariance matrix decomposition and spatial searching so as to improve the overall efficiency of the method. To achieve this goal, we first introduce the propagator method for fast estimation of the signal subspace, and then establish a DOA-dependent characteristic polynomial equation (CPE), whose order equals the number of incident signals (generally much smaller than the number of array sensors), from the signal-subspace estimate. The DOA estimates are finally obtained by solving this low-dimensional CPE. The computational loads of both the subspace estimation and DOA calculation procedures are thus largely reduced compared with the corresponding procedures in traditional subspace-based DOA estimation methods such as MUSIC. Theoretical analyses and numerical examples demonstrate the advantages of the proposed method in both DOA estimation precision and computational efficiency over existing ones.
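A hedged sketch of the propagator step on a simulated uniform linear array is shown below: it estimates a noise-subspace projector from the sample covariance by least squares, with no eigendecomposition. The grid search at the end is for illustration only; the paper instead roots a low-order characteristic polynomial.

```python
# Generic propagator-method DOA sketch (not the paper's CPE construction).
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(3)
M, K, T = 8, 2, 400                       # sensors, sources, snapshots
doas = np.deg2rad([-10.0, 20.0])

def steer(theta):                         # half-wavelength ULA steering vector
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

A = np.stack([steer(t) for t in doas], axis=1)
sigs = rng.standard_normal((K, T)) + 1j * rng.standard_normal((K, T))
X = A @ sigs + 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
R = X @ X.conj().T / T

# Propagator: partition R = [G | H] with G the first K columns, solve
# G P = H by least squares; Q = [P^H, -I] then annihilates the steering
# vectors of the true sources -- no eigendecomposition needed.
G, H = R[:, :K], R[:, K:]
P, *_ = np.linalg.lstsq(G, H, rcond=None)
Q = np.hstack([P.conj().T, -np.eye(M - K)])

grid = np.deg2rad(np.linspace(-90, 90, 1801))
spec = np.array([1.0 / np.linalg.norm(Q @ steer(t)) ** 2 for t in grid])
pk, _ = find_peaks(spec)
top = pk[np.argsort(spec[pk])[-K:]]
print(np.rad2deg(np.sort(grid[top])))     # approx. [-10, 20]
```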

12.
A wireless sensor network (WSN) consists of densely distributed nodes that are deployed to observe and react to events within the sensor field. In WSNs, energy management and network lifetime optimization are major issues in the design of cluster-based routing protocols. Clustering is an efficient data-gathering technique that effectively reduces energy consumption by organizing nodes into groups. However, in clustering protocols, cluster heads (CHs) bear an additional load for coordinating various activities within the cluster. Improper selection of CHs causes increased energy consumption and also degrades the performance of the WSN. Therefore, proper CH selection and load balancing using an efficient routing protocol are critical for the long-run operation of a WSN. Clustering a network with proper load balancing is an NP-hard problem, and for such problems with vast search spaces, optimization algorithms are the most suitable approach. Spider monkey optimization (SMO) is a relatively new nature-inspired evolutionary algorithm based on the foraging behaviour of spider monkeys; it has proved its worth on benchmark function optimization and antenna design problems. In this paper, an SMO-based threshold-sensitive energy-efficient clustering protocol is proposed to prolong network lifetime, with the intent of extending the stability period of the network. Dual-hop communication between CHs and the base station (BS) is utilized to achieve load balancing of distant CHs and energy minimization. The results demonstrate that the proposed protocol significantly outperforms existing protocols in terms of energy consumption, system lifetime, and stability period.

13.
In this paper, we analyze the computational challenges in implementing particle filtering, especially for video sequences. Particle filtering is a technique used for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread application in detection, navigation, and tracking problems. Although particle filtering methods generally yield improved results, it is difficult to achieve real-time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms and present a method for implementing the particle filter using the independent Metropolis-Hastings sampler that is highly amenable to pipelined implementations and parallelization. We analyze implementations of the proposed algorithm and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs. The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speed-up of the algorithm using the methodology proposed in the paper.
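A toy sketch of an independent Metropolis-Hastings (IMH) step used in place of conventional resampling is shown below; the scalar state-space model is a stand-in for visual tracking, and all parameters are illustrative assumptions.

```python
# Particle filter with an IMH resampling step: candidates are drawn
# independently from the propagated particle set and accepted with
# probability min(1, w_cand / w_cur) -- the weight-ratio test is what
# makes the sampler pipeline-friendly.  Toy scalar model only.
import numpy as np

rng = np.random.default_rng(4)
N, T = 500, 50
x_true, particles = 0.0, rng.standard_normal(N)

def likelihood(y, x):                    # Gaussian observation noise
    return np.exp(-0.5 * (y - x) ** 2)

for t in range(T):
    x_true = 0.9 * x_true + 0.5 * rng.standard_normal()
    y = x_true + 0.3 * rng.standard_normal()
    particles = 0.9 * particles + 0.5 * rng.standard_normal(N)  # propagate
    w = likelihood(y, particles)

    idx = np.empty(N, dtype=int)         # IMH chain over the particle set
    cur = 0
    for i in range(N):
        cand = rng.integers(N)
        if w[cand] >= w[cur] or rng.random() < w[cand] / w[cur]:
            cur = cand
        idx[i] = cur
    particles = particles[idx]

print("estimate:", particles.mean(), "truth:", x_true)
```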

14.
Adaptive blind equalization has gained widespread use in communication systems that operate without training signals. In particular, the constant modulus algorithm (CMA) has become a favorite of practitioners due to its LMS-like complexity and desirable robustness properties. The desire for further reductions in computational complexity has motivated signed-error versions of CMA, which have been found to lack the robustness properties of CMA. This paper presents a simple modification of signed-error CMA, based on the judicious use of dither, that results in an algorithm with robustness properties closely resembling those of CMA. We establish the fundamental transient and steady-state properties of dithered signed-error CMA and compare them with those of CMA.
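A minimal sketch of the dithered signed-error update for a baud-spaced BPSK equalizer follows; the channel, step size, and dither amplitude are illustrative assumptions.

```python
# Dithered signed-error CMA: the sign of the CMA error term is taken
# after adding zero-mean dither.  All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n_sym, L = 20000, 11
sym = rng.choice([-1.0, 1.0], size=n_sym)            # BPSK, |s|^2 = 1
r = np.convolve(sym, [1.0, 0.4, -0.2])[:n_sym]       # mild ISI channel
r += 0.01 * rng.standard_normal(n_sym)

w = np.zeros(L); w[L // 2] = 1.0                     # center-spike init
R2, mu, d_amp = 1.0, 5e-4, 0.5                       # CM constant, step, dither

for k in range(L, n_sym):
    x = r[k - L:k][::-1]                             # regressor
    y = w @ x
    e = y * (R2 - y * y)                             # CMA error term
    dither = d_amp * (2 * rng.random() - 1)          # zero-mean uniform dither
    w += mu * np.sign(e + dither) * x                # dithered signed error

print("taps around center:", np.round(w[L // 2 - 2:L // 2 + 3], 3))
```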

15.
This letter presents a new bit-loading algorithm for discrete multitone systems that converges faster to the same bit allocation as the optimal discrete bit-filling and bit-removal methods. The algorithm exploits the differences between the subchannel gain-to-noise ratios in order to determine an initial bit allocation and then performs a multiple-bits loading procedure for achieving the requested target rate. Numerical results using asymmetric digital subscriber line test loops demonstrate the computational efficiency of the proposed algorithm.
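For context, here is a sketch of the classic greedy bit-filling baseline that the proposed algorithm converges to: each iteration adds one bit to the subchannel where the incremental power is smallest. The SNR gap and gain-to-noise ratios are illustrative assumptions.

```python
# Greedy (Hughes-Hartogs-style) bit-filling baseline.  Illustrative only.
import numpy as np

rng = np.random.default_rng(6)
n_sub, target_bits = 32, 96
gap = 10 ** (9.8 / 10)                               # ~9.8 dB SNR gap
gnr = 10 ** (rng.uniform(10, 40, n_sub) / 10)        # subchannel gain/noise

def power(b, g):        # power needed to support b bits on a subchannel
    return gap * (2.0 ** b - 1.0) / g

bits = np.zeros(n_sub, dtype=int)
for _ in range(target_bits):
    inc = power(bits + 1, gnr) - power(bits, gnr)    # incremental powers
    bits[int(np.argmin(inc))] += 1                   # cheapest extra bit

print("allocation:", bits)
print("total power:", power(bits, gnr).sum().round(2))
```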

16.
Katiyar, Abhay; Singh, Dinesh; Yadav, Rama Shankar. Wireless Networks, 2020, 26(7): 5307-5336
Vehicular ad hoc network (VANET) assists in improving road safety, traveller comfort, and intelligent transportation systems to a great extent. Dedicated short-range...

17.
SOC test time minimization hinges on the attainment of core test parallelism; yet test power constraints hamper this parallelism, as excessive power dissipation may damage the SOC being tested. We propose a test power reduction methodology for SOC cores through scan chain modification. By inserting logic gates between scan cells, a given set of test vectors and captured responses is transformed into a new set of inserted stimuli and observed responses that yield fewer scan chain transitions. In identifying the best possible scan chain modification, we pursue a decoupled strategy wherein the test data are decomposed into blocks, which are optimized for power in a mutually independent manner. The decoupled handling of test data blocks not only ensures significantly high levels of overall power reduction but also delivers computational efficiency. The proposed methodology is applicable to both fully and partially specified test data; test data analysis in the latter case is performed on the basis of stimuli-directed controllability measures, which we introduce. To explore the tradeoff between the test power reduction attained by the proposed methodology and the computational cost, we carry out an analysis that establishes the relationship between block granularity and the number of scan chain modifications. This analysis enables the methodology to be used in a computationally efficient manner while delivering solutions that comply with the stringent area and layout constraints of SOCs.

18.
A computationally efficient nonuniform digital finite-impulse response (FIR) filter bank is proposed for hearing aid applications. The eight nonuniformly spaced subbands are formed with the help of the frequency-response masking technique. Two half-band FIR filters are employed as prototypes, resulting in significant improvements in computational efficiency. We show, by means of an example, that an eight-band nonuniform FIR filter bank with a stopband attenuation of 80 dB can be implemented with 15 multipliers. The performance of the filter bank is enhanced by optimizing the gain of each subband. Tests on various hearing loss cases suggest that the proposed filter bank achieves reasonably good matching between audiograms and the magnitude responses of the filter bank at very low computational cost.
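The two-channel complementary split that such banks are assembled from can be illustrated as follows; this is only the basic building block under assumed specs, not the authors' eight-band frequency-response-masking design.

```python
# An approximately half-band lowpass and its delay-complementary
# highpass: the two branches sum to a pure delay.  Specs are assumed.
import numpy as np
from scipy.signal import remez, freqz

n_taps = 55                                   # odd length, linear phase
h_lp = remez(n_taps, [0, 0.21, 0.29, 0.5], [1, 0], fs=1.0)

# Complementary highpass: a delta at the center tap minus the lowpass.
h_hp = -h_lp.copy()
h_hp[n_taps // 2] += 1.0

w, H_lp = freqz(h_lp, worN=1024, fs=1.0)
_, H_hp = freqz(h_hp, worN=1024, fs=1.0)
print(np.max(np.abs(np.abs(H_lp + H_hp) - 1.0)))   # ~0: complementary pair
```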

19.
Density estimation is the process of taking a set of multivariate data and finding an estimate of the probability density function (pdf) that produced it. One approach for obtaining an accurate estimate of the true density f(x) is to use the polynomial-moment method with Boltzmann-Shannon entropy. Although mathematically rigorous, the method is difficult to implement in practice because the solution involves a large set of simultaneous nonlinear integral equations, one for each moment or joint-moment constraint. Solutions available in the literature are generally neither easily applicable to multivariate data nor computationally efficient. In this paper, we take the functional form that was developed for this problem and apply pointwise estimates of the pdf as constraints. These pointwise estimates are transformed into basis coefficients for a set of Legendre polynomials. The procedure is mathematically similar to the multidimensional Fourier transform, although with different basis functions. We apply this technique, called the maximum-entropy density estimation (MEDE) technique, to a series of multivariate datasets.
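A hedged one-dimensional sketch of the pointwise-constraint idea: fit a Legendre series to the logarithm of pointwise density estimates and exponentiate, yielding a density of the maximum-entropy form exp(Σ c_k P_k(x)). The degree and data are illustrative; the paper's multivariate transform-based procedure is more elaborate.

```python
# Pointwise density estimates -> Legendre coefficients -> max-entropy
# form exp(sum_k c_k P_k(x)).  Degree and data are illustrative.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(-0.4, 0.15, 2000),
                       rng.normal(0.4, 0.2, 1000)])

# Pointwise pdf estimates on [-1, 1] (Legendre's natural domain).
counts, edges = np.histogram(data, bins=40, range=(-1, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0                            # log needs positive estimates

coeffs = legendre.legfit(centers[mask], np.log(counts[mask]), deg=8)
x = np.linspace(-1, 1, 400)
pdf = np.exp(legendre.legval(x, coeffs))
dx = x[1] - x[0]
pdf /= pdf.sum() * dx                        # renormalize to unit area
print("integral:", round(pdf.sum() * dx, 6))  # 1.0
```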

20.
A fuzzy clustering approach to EP estimation
The problem of extracting a useful signal (a response) buried in relatively high-amplitude noise has been investigated under conditions of low signal-to-noise ratio. In particular, the authors present a method for detecting the "true" response of the brain resulting from repeated auditory stimulation, based on selective averaging of single-trial evoked potentials. Selective averaging is accomplished in two steps. First, an unsupervised fuzzy-clustering algorithm is employed to identify groups of trials with similar characteristics, using a performance index as an optimization criterion. Then, typical responses are obtained by ensemble averaging of all trials in the same group. Similarity among the resulting estimates is quantified through a synchronization measure, which accounts for the percentage of time that the estimates are in phase. The performance of the classifier is evaluated with synthetic signals of known characteristics, and its usefulness is demonstrated with real electrophysiological data obtained from normal volunteers.
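A minimal fuzzy c-means sketch of the first (clustering) step, followed by per-group ensemble averaging, is given below; the synthetic trials, cluster count, and fuzzifier m = 2 are illustrative assumptions, not the paper's performance-index formulation.

```python
# Fuzzy c-means over single trials, then ensemble averaging per group.
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0, 1, 128)
# Synthetic single trials: two waveform families buried in noise.
trials = np.vstack([np.sin(2 * np.pi * (3 + (i % 2)) * t)
                    + 0.8 * rng.standard_normal(t.size) for i in range(60)])

def fuzzy_cmeans(X, c, m=2.0, iters=100):
    U = rng.random((c, len(X))); U /= U.sum(axis=0)        # memberships
    for _ in range(iters):
        W = U ** m
        centers = W @ X / W.sum(axis=1, keepdims=True)     # fuzzy centroids
        d = np.linalg.norm(X[None] - centers[:, None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))                          # standard update
        U /= U.sum(axis=0)
    return U, centers

U, centers = fuzzy_cmeans(trials, c=2)
labels = U.argmax(axis=0)
estimates = [trials[labels == k].mean(axis=0) for k in range(2)]  # per-group avg
print(np.bincount(labels))   # trials per group; expected roughly 30/30
```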
