Similar Literature
 20 similar documents found (search time: 15 ms)
1.
Proportionate adaptive algorithms for network echo cancellation   (Total citations: 2; self-citations: 0; citations by others: 2)
By analyzing the coefficient adaptation process of the steepest-descent algorithm, the condition under which the fastest overall convergence is achieved is obtained, and the calculation of the optimal step-size control factors satisfying that condition is formulated. Motivated by these results and using the stochastic-approximation paradigm, the μ-law PNLMS (MPNLMS) algorithm is proposed which, in contrast to the proportionate normalized least-mean-square (PNLMS) algorithm, maintains fast initial convergence throughout the adaptation process when identifying sparse echo paths. Modifications of the MPNLMS algorithm that lower its computational complexity are also proposed.
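As an illustration of the proportionate idea behind PNLMS and MPNLMS, the per-tap step-size gains can be sketched as follows (a minimal sketch in Python; the function name, parameter defaults, and normalization convention are illustrative, not taken from the paper):

```python
import numpy as np

def proportionate_gains(w, mu=1000.0, rho=0.01, delta=0.01, law="mu"):
    """Per-coefficient adaptation gains: large taps adapt fast.

    law="plain" uses |w_i| (PNLMS); law="mu" uses the mu-law curve
    ln(1 + mu*|w_i|) (MPNLMS), which keeps convergence fast throughout
    adaptation for sparse echo paths. All defaults here are illustrative.
    """
    mag = np.abs(w)
    f = np.log1p(mu * mag) / np.log1p(mu) if law == "mu" else mag
    # rho/delta floor keeps inactive taps adapting a little
    gamma = np.maximum(rho * max(delta, f.max()), f)
    return gamma / gamma.sum() * len(w)  # normalize so gains average to 1

# A sparse "echo path": nearly all energy in two taps.
w = np.zeros(64)
w[5], w[20] = 0.9, -0.4
g = proportionate_gains(w)
```

The big taps receive the largest gains, while zero taps keep a small floor gain so they can still adapt.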

2.
In this paper, successive intracell interference cancellation (IIC) of the wideband code-division multiple-access (W-CDMA) signal at the mobile unit is considered. Three new interference cancellation techniques suitable for the downlink of any CDMA system with orthogonal spreading are proposed. No prior knowledge of the users' spreading codes, or even of their spreading factors, is required for interference cancellation. A new term, the effective spreading code, is introduced, defined as the interfering user's physical code as seen by the desired user within the desired user's symbol duration. The mobile receiver estimates the effective spreading codes of the interfering users, regardless of their spreading factors, using fast Walsh transform (FWT) correlators (instead of regular correlators) and uses this information to suppress the intracell multiuser interference. Three different interference-suppressing techniques are studied: subtraction, combined interfering-signal projection, and separate interfering-signal subspace projection. The complexity of the proposed techniques is low compared to conventional interference cancellation techniques. For a W-CDMA system and the IMT-2000 vehicular channel model, a capacity increase of up to 150% of the original (without IIC) system capacity is shown.
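The fast Walsh transform that replaces the regular correlators can be sketched as the standard unnormalized Walsh–Hadamard butterfly, which computes all N correlations in O(N log N) rather than O(N²) (a generic sketch; the paper's effective-spreading-code estimator builds on this transform but is more involved):

```python
import numpy as np

def fwt(x):
    """Unnormalized fast Walsh-Hadamard transform (natural/Hadamard order).

    Each stage combines pairs of blocks as (a+b, a-b); log2(N) stages give
    the O(N log N) cost that makes FWT correlators cheap in the receiver.
    Input length must be a power of two.
    """
    x = np.asarray(x, float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x
```

Correlating a received chip vector against every Walsh code at once is then a single call: a vector matching one code produces a single large output bin.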

3.
In acoustic echo cancellation (AEC), the sparseness of impulse responses can vary over time and/or context. In such scenarios, the proportionate normalized subband adaptive filter (PNSAF) and μ-law PNSAF (MPNSAF) algorithms suffer from performance deterioration. To this end, we propose sparseness-measured versions of both, which incorporate the estimated sparseness into the PNSAF and MPNSAF algorithms, respectively, and can therefore adapt to variation in the sparseness of the impulse response. In addition, based on the energy-conservation argument, we provide a unified formula to predict the steady-state mean-square performance of any PNSAF algorithm, which is also supported by simulations. Simulation results in AEC show that the proposed algorithms not only exhibit a faster convergence rate than their competitors in sparse, quasi-sparse, and dispersive environments, but are also robust to variation in the sparseness of impulse responses.
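One common way to quantify the sparseness of an impulse response, as such sparseness-measured algorithms require, is the l1/l2-based measure below, which maps a uniform (dispersive) response to 0 and a single-tap response to 1 (a sketch of one standard measure from the adaptive-filtering literature; the paper's exact estimator may differ):

```python
import numpy as np

def sparseness(w, eps=1e-12):
    """l1/l2 sparseness measure in [0, 1].

    Returns ~0 for a dispersive (uniform) response and ~1 for a maximally
    sparse (single-tap) response; eps guards against division by zero.
    """
    n = len(w)
    l1 = np.abs(w).sum()
    l2 = np.sqrt((w ** 2).sum()) + eps
    return (n / (n - np.sqrt(n))) * (1.0 - l1 / (np.sqrt(n) * l2))

dispersive = np.ones(100)               # energy spread over every tap
sparse = np.zeros(100)
sparse[0] = 1.0                         # all energy in one tap
```

An adaptive algorithm can recompute this on its current coefficient estimate and blend its update rule accordingly as the echo path changes.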

4.
High-dynamic-range (HDR) image generation and display technologies are becoming increasingly popular in various applications. A standard and commonly used approach to obtaining an HDR image is the multiple-exposure fusion technique, which combines multiple images of the same scene taken with varying exposure times. However, if the scene is not static during acquisition of the sequence, moving objects manifest themselves as ghosting artefacts in the final HDR image. Detecting and removing ghosting artefacts is therefore an important issue for automatically generating HDR images of dynamic scenes. The aim of this paper is to provide an up-to-date review of recently proposed methods for ghost-free HDR image generation. Moreover, a classification and comparison of the reviewed methods is reported, to serve as a useful guide for future research on this topic.

5.
We propose a method for devising approximate multiplication-free algorithms for compressed-domain linear operations on images, e.g., downsampling, translation, filtering, etc. We demonstrate that the approximate algorithms give output images that are perceptually nearly equivalent to those of the exact processing, while the computational complexity is significantly reduced.

6.
Using a one-dimensional convective-dispersive model of contrast-agent flow in a blood vessel, we optimized and compared algorithms for combining a temporal sequence of X-ray angiography images, each with incomplete arterial filling, into a single output image with fully opacified arteries. The four algorithms were: maximum opacity (MO), taking the maximum over time at each spatial location; matched filtering (MAT); recursive filtering (REC) followed by a maximum-opacity operation; and an approximate matched filter (AMF), consisting of correlation with a kernel that approximates the matched-filter kernel, followed by a maximum-opacity operation. Based on the contrast-to-noise ratio (CNR), MAT is theoretically the best algorithm. However, with spatially varying clinical images, a poorly matched MAT kernel greatly degraded CNR, to the point of even inverting artery contrast. The practical AMF method maintained uniform CNR values over the entire field of view and gave >90% of the theoretical limit set by MAT. REC and MO created fully opacified arteries but provided little CNR enhancement. Holding CNR at a nominal reference value, simulations predicted that AMF could be used with the contrast-agent volume reduced by as much as 66%; alternatively, the X-ray exposure rate could be lowered. Although MO and REC are more easily implemented, the contrast enhancement with AMF makes it attractive for processing diagnostic angiography images acquired with a reduced contrast-agent dose.
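The MO and AMF combinations can be sketched on a stack of frames as follows (illustrative Python; the kernel values and the convention that opacity equals pixel intensity are placeholders, not the paper's calibrated model):

```python
import numpy as np

def maximum_opacity(frames):
    """MO combining: keep, at each pixel, the maximum value over the
    temporal sequence (here opacity = pixel value; in angiography it
    would be contrast above background)."""
    return np.max(np.stack(frames), axis=0)

def approx_matched_filter(frames, kernel):
    """AMF sketch: correlate each pixel's time series with a short kernel
    that roughly matches the contrast-bolus time curve, then take the
    maximum of the correlation over time."""
    x = np.stack(frames).astype(float)          # shape (T, H, W)
    k = np.asarray(kernel, float)[:, None, None]
    T, L = x.shape[0], len(kernel)
    corr = [(x[t:t + L] * k).sum(axis=0) / k.sum() for t in range(T - L + 1)]
    return np.max(np.stack(corr), axis=0)

# Toy sequence: a bolus passing through (uniform frames for simplicity).
frames = [np.full((4, 4), v) for v in (0.0, 1.0, 3.0, 1.0)]
mo = maximum_opacity(frames)
amf = approx_matched_filter(frames, [1, 2, 1])
```

MO simply keeps the peak, while AMF averages over the bolus shape, which is what trades a little peak signal for noise suppression.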

7.
In a high-capacity personal communication system (PCS), for a given bandwidth, co-channel interference (CCI) limits the system capacity. Low-complexity diversity-combining algorithms and circuit architectures for co-channel interference cancellation and frequency-selective fading mitigation, which do not require training sequences, are introduced. Results obtained from computer simulation of the hardware show that two-antenna diversity combining gives wireless communication systems a signal-to-interference ratio improvement of at least 3 dB over conventional two-antenna selection diversity. The technique is also effective in mitigating frequency-selective fading without conventional equalization: an average irreducible word error rate (WER) of 2.4% is obtained in computer simulation of the hardware for radio channels with a normalized delay spread of 0.3. In contrast, for the same WER, selection diversity and a single antenna without diversity can sustain normalized delay spreads only up to about 0.16 and 0.06, respectively.

8.
Scene-statistics-based non-uniformity correction algorithms for infrared images   (Total citations: 1; self-citations: 3; citations by others: 1)
Scene-statistics-based non-uniformity correction algorithms assume a distribution for the scene, obtain the first- and second-order moments of the infrared energy received by each detector element, and from these estimate the detector response parameters so as to correct the non-uniformity. Existing scene-statistics-based correction algorithms are analyzed and compared, and the interacting multiple model (IMM) algorithm is applied to non-uniformity correction. Experimental results show that, with a continuous image sequence as the observation data, the non-uniformity is corrected effectively, extending the applicable range of the Kalman-filter method. Simulations of the various methods show that the IMM algorithm has good convergence properties.

9.
Detecting self-explosion defects in insulators plays a vital role in ensuring the safety of power transmission lines: accurate and fast detection algorithms help maintenance personnel quickly locate self-exploded insulators and replace them in time. Traditional manual inspection can no longer meet detection requirements, and image-based algorithms for insulator self-explosion defect detection still face great challenges in accuracy and speed, so the algorithms must be improved further. This paper first introduces the preprocessing of insulator self-explosion defect images...

10.
Image segmentation is the partition of an image into a set of nonoverlapping regions whose union is the entire image. The image is decomposed into meaningful parts which are uniform with respect to certain characteristics, such as gray level or texture. In this paper, we propose a methodology for evaluating medical image segmentation algorithms in settings where the only information available is boundaries outlined by multiple expert observers. In this case, the results of the segmentation algorithm can be evaluated against the multiple observers' outlines. We have derived statistics that enable us to determine whether the computer-generated boundaries agree with the observers' hand-outlined boundaries as much as the different observers agree with each other. We illustrate the use of this methodology by evaluating image segmentation algorithms on two different applications in ultrasound imaging. In the first application, we attempt to find the epicardial and endocardial boundaries from cardiac ultrasound images, and in the second, our goal is to find the fetal skull and abdomen boundaries from prenatal ultrasound images.

11.
This paper presents novel coding algorithms based on tree-structured segmentation, which achieve the correct asymptotic rate-distortion (R-D) behavior for a simple class of signals, known as piecewise polynomials, by using an R-D based prune-and-join scheme. For the one-dimensional case, our scheme is based on binary-tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit-allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly to achieve the correct exponentially decaying R-D behavior, D(R) ≈ c₀·2^(−c₁R), thus improving over classic wavelet schemes. We also prove that the computational complexity of the scheme is O(N log N). We then show the extension of this scheme to the two-dimensional case using a quadtree. This quadtree coding scheme also achieves an exponentially decaying R-D behavior, for the polygonal image model composed of a white polygon-shaped object against a uniform black background, with a low computational cost of O(N log N). Again, the key is an R-D optimized prune-and-join strategy. Finally, we conclude with numerical results, which show that the proposed quadtree coding scheme outperforms JPEG2000 by about 1 dB for real images, such as Cameraman, at low rates of around 0.15 bpp.

12.
13.
Teleconferencing systems and hands-free mobile terminals use acoustic echo cancellation (AEC) for high-quality full-duplex speech communication. The problem of aliasing in subband AEC is addressed. Filter banks with implicit notch filtering are derived from cascaded power-symmetric infinite impulse response (CPS-IIR) filters. It is shown that adaptive filters used with these filter banks must be coupled via continuity constraints to reduce the aliasing in the residual echo. A continuity-constrained NLMS algorithm is therefore proposed and evaluated.

14.
This paper presents a method that exploits rank statistics to improve fully automatic tracing of neurons from noisy digital confocal microscope images. Previously proposed exploratory tracing (vectorization) algorithms work by recursively following the neuronal topology, guided by the responses of multiple directional correlation kernels. These algorithms were found to fail when the data were of lower quality (noisier, lower contrast, weaker signal, or more discontinuous structures). Such data are commonly encountered in the study of neuronal growth on microfabricated surfaces. We show that partitioning the correlation kernels in the tracing algorithm into multiple subkernels and using the median of their responses as the guiding criterion improves the tracing precision from 41% to 89% for low-quality data, with a 5% improvement in recall. Improved handling was also observed for artifacts such as discontinuities and hollowness of structures. The new algorithms require slightly more computation but are still acceptably fast, typically consuming less than 2 seconds on a personal computer (Pentium III, 500 MHz, 128 MB). They produce labeling for all somas present in the field and a graph-theoretic representation of all dendritic/axonal structures that can be edited. Topological and size measurements such as area, length, and tortuosity are derived readily. The efficiency, accuracy, and fully automated nature of the proposed method make it attractive for large-scale applications such as high-throughput assays in the pharmaceutical industry and the study of neuron growth on nano/micro-fabricated structures. A careful quantitative validation of the proposed algorithms against manually derived tracings is provided, using a performance measure that combines the precision and recall metrics.
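The median-of-subkernels idea can be sketched in a few lines: split a kernel's samples into subkernels and let the median of their responses, rather than the overall mean, guide the tracing, so a local gap or noise burst cannot dominate (a toy 1-D sketch; the actual algorithm uses 2-D directional correlation kernels):

```python
import numpy as np

def median_subkernel_response(profile, n_sub=4):
    """Robust tracing response: split the sampled profile under a kernel
    into n_sub subkernels and return the median of their mean responses.
    An outlier confined to one subkernel cannot shift the median."""
    parts = np.array_split(np.asarray(profile, float), n_sub)
    return float(np.median([p.mean() for p in parts]))

clean_response = median_subkernel_response(np.ones(8))

x = np.ones(8)
x[:2] = -10.0                            # a gap/noise burst in the structure
robust = median_subkernel_response(x)    # corrupted subkernel is outvoted
naive = x.mean()                         # plain correlation is dragged negative
```

This is exactly the rank-statistics effect the abstract describes: the corrupted subkernel changes the plain mean drastically but leaves the median untouched.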

15.
MRI artifact cancellation due to rigid motion in the imaging plane   (Total citations: 7; self-citations: 0; citations by others: 7)
A post-processing technique has been developed to suppress the magnetic resonance imaging (MRI) artifact arising from planar rigid motion of the object. In two-dimensional Fourier transform (2-DFT) MRI, rotational and translational motions of the target during the magnetic resonance (MR) scan respectively impose nonuniform sampling and a phase error on the collected MRI signal. The artifact correction method introduced considers the following three conditions: (1) for planar rigid motion with known parameters, a reconstruction algorithm based on bilinear interpolation and the superposition method is employed to remove the MRI artifact; (2) for planar rigid motion with a known rotation angle and unknown translational motion (including an unknown rotation center), a superposition bilinear interpolation algorithm is first used to eliminate the artifact due to rotation about the center of the imaging plane, after which a phase correction algorithm is applied to reduce the remaining phase error of the MRI signal; and (3) to estimate the unknown parameters of a rigid motion, a minimum-energy method is proposed which exploits the fact that planar rigid motion increases the measured energy of an ideal MR image outside the boundary of the imaged object; using this property, all unknown parameters of a typical rigid motion are accurately estimated in the presence of noise. To confirm the feasibility of employing the proposed method in a clinical setting, the technique was used to reduce unknown rigid-motion artifacts arising from the head movements of two volunteers.

16.
An adaptive smoothing filter algorithm for still images   (Total citations: 25; self-citations: 0; citations by others: 25)
This paper proposes an adaptive smoothing filter algorithm based on gradient information. Exploiting the abrupt changes of pixel gray values in the image, the algorithm adaptively adjusts the filter weights so that image edges are sharpened while regions are smoothed, thereby handling well the inherent conflict in filtering between noise smoothing and edge sharpening. Experimental results show that the algorithm has good filtering performance and lends itself to real-time processing.
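A minimal sketch of such gradient-weighted adaptive smoothing, assuming exponential weights on the 4-neighbourhood gray-level differences (the weight function, parameter k, and iteration count here are illustrative choices, not the paper's):

```python
import numpy as np

def adaptive_smooth(img, k=10.0, iters=5):
    """Edge-preserving smoothing sketch: each pixel becomes a weighted
    average of itself and its 4-neighbours, with neighbour weights
    exp(-(difference/k)^2). Large gradients (edges) get near-zero weight
    and are preserved; small fluctuations get averaged away."""
    x = img.astype(float)
    for _ in range(iters):
        pad = np.pad(x, 1, mode="edge")
        acc = np.zeros_like(x)
        wsum = np.zeros_like(x)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb = pad[1 + dy:1 + dy + x.shape[0], 1 + dx:1 + dx + x.shape[1]]
            w = np.exp(-((nb - x) / k) ** 2)
            acc += w * nb
            wsum += w
        x = (acc + x) / (wsum + 1.0)   # centre pixel included with weight 1
    return x

step = np.zeros((8, 8))
step[:, 4:] = 100.0
edge_kept = adaptive_smooth(step)           # strong edge: weights ~0, preserved

bump = np.zeros((8, 8))
bump[4, 4] = 5.0
smoothed = adaptive_smooth(bump, iters=1)   # small fluctuation averaged down
```

The two toy inputs illustrate the conflict the abstract mentions: the 100-level step edge survives essentially untouched, while the small 5-level bump is smoothed.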

17.
Xiao Hang. 《智能计算机与应用》 (Intelligent Computer and Applications), 2021, 11(3): 215-216, inside back cover.
Deep learning techniques are being applied ever more widely, and their efficiency and intelligence have attracted researchers. Through an analysis of deep-learning-based image classification, this paper further explores the application of deep learning to image recognition and surveys deep-learning-based medical image detection algorithms used mainly for image classification and recognition, providing a useful reference for research on applying deep learning techniques to medical image detection.

18.
This paper evaluates a segmentation technique for magnetic resonance (MR) images of the brain based on fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive network through an unsupervised learning process. Segmentation of MR images is formulated as an unsupervised vector quantization process, where the local values of different relaxation parameters form the feature vectors which are represented by a relatively small set of prototypes. The experiments evaluate a variety of FALVQ algorithms in terms of their ability to identify different tissues and discriminate between normal tissues and abnormalities.

19.
The discussion of the causes of image deterioration in the maximum-likelihood estimator (MLE) method of tomographic image reconstruction, initiated with the publication of a stopping rule for that iterative process (E. Veklerov and J. Llacer, 1987), is continued. The concept of a feasible image is introduced: the result of a reconstruction that, if it were a radiation field, could have generated the initial projection data through the Poisson process that governs radioactive decay. From the premise that the result of a reconstruction should be feasible, the shape and characteristics of the region of feasibility in projection space are examined. With a new rule, reconstructions from real data can be tested for feasibility. Results of the tests and reconstructed images for the Hoffman brain phantom are shown. A comparative examination of current methods of dealing with MLE image deterioration is included.

20.

Protecting multimedia information from different types of attackers has become important for individuals and governments. A high-definition image contains a large amount of data, which makes keeping it secret difficult. Another challenge that security algorithms face with high-definition images in medical and remote-sensing applications is pattern appearance, which results from large regions dominated by a single color, such as backgrounds. New hybrid image security systems based on encryption and hiding are proposed in this paper for keeping high-definition images secret. First, one hiding method and two encryption methods are used to build two hybrid algorithms. The new hiding algorithm proposed here starts by applying reordering and scrambling operations to the six most significant bit planes of the secret image and then hides them in an unknown-scene cover image using addition or subtraction operations. Second, two different ciphering algorithms are used to encrypt the stego-image, yielding two different hybrid image security systems. The first encryption algorithm is based on binary code decomposition, while the second is a modification of the Advanced Encryption Standard. After evaluating each hybrid algorithm alone, the two hybrid systems are compared to determine the better one. Several criteria were used for the performance evaluation, including the visual scene, histogram analysis, entropy, security analysis, and execution time.
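The six-MSB bit-plane decomposition that the hiding stage operates on can be sketched as follows (only the decomposition/reassembly step; the paper's scrambling and cover-embedding stages are omitted, so this illustrates the data being hidden, not the full scheme):

```python
import numpy as np

def msb_planes(img, n=6):
    """Return the n most significant bit planes of an 8-bit image as
    boolean arrays, MSB first (n=6 matches the abstract)."""
    return [((img >> b) & 1).astype(bool) for b in range(7, 7 - n, -1)]

def from_msb_planes(planes):
    """Reassemble the 6-MSB approximation of the image. The two dropped
    LSB planes are lost, so per-pixel error is bounded by 3."""
    out = np.zeros(planes[0].shape, dtype=np.uint8)
    for i, p in enumerate(planes):
        out |= p.astype(np.uint8) << (7 - i)
    return out

img = np.array([[0b10110011, 200]], dtype=np.uint8)
planes = msb_planes(img)         # the payload the hiding stage would scramble
approx = from_msb_planes(planes)
```

Keeping only the six MSB planes preserves almost all visual content (error at most 3 gray levels per pixel) while reducing the payload to be hidden.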

