A total of 20 similar documents were retrieved; the search took 15 ms.
1.
Rapid growth in social networks (SNs) presents a unique scalability challenge for SN operators because of the massive amounts of data distributed among large numbers of concurrent online users. A request from any user may trigger hundreds of server activities to generate a customized page, which has already become a huge burden. Based on a theoretical model and an analytical study of realistic network scenarios, this article proposes a hybrid P2P-based architecture called PAIDD. PAIDD distributes data primarily through P2P connectivity and the social graph among users, with the help of central servers. To increase system efficiency, PAIDD performs optimized content prefetching based on social interactions among users. PAIDD chooses interaction as the criterion because a user's interaction graph is measured to be much smaller than the social graph. Our experiments confirm that PAIDD ensures a satisfactory user experience without incurring extensive overhead on clients' networks. More importantly, PAIDD effectively achieves one order of magnitude of load reduction at the central servers.
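The selection step behind interaction-based prefetching can be sketched as follows. This is an illustrative sketch, not PAIDD's actual code: the function name, the friend ids, and the interaction weights are all hypothetical.

```python
# Sketch (not PAIDD's code): pick prefetch targets from the interaction
# graph rather than the full social graph. All names and weights below
# are hypothetical.

def prefetch_targets(friends, interactions, k=3):
    """Return the k friends with the strongest recent interactions.

    friends      -- iterable of friend ids (the social-graph neighborhood)
    interactions -- dict mapping friend id -> interaction count
    """
    # Only friends the user actually interacts with are candidates,
    # which is why the interaction graph is much smaller.
    candidates = [f for f in friends if interactions.get(f, 0) > 0]
    candidates.sort(key=lambda f: interactions[f], reverse=True)
    return candidates[:k]

friends = ["alice", "bob", "carol", "dave", "erin"]
interactions = {"alice": 12, "carol": 3, "erin": 7}
print(prefetch_targets(friends, interactions, k=2))  # ['alice', 'erin']
```

The point of the filter is the abstract's observation: most social-graph neighbors never interact with the user, so they are not worth prefetching for.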
2.
陈霞 《电脑与微电子技术》2013,(22):8-11
To address the large number of tap coefficients that must be updated in a network echo canceller, a sign-data normalized LMS algorithm based on set-membership filtering and an Mmax partial-update strategy is proposed. Simulations and experiments show that, because each iteration selects as input the signal elements that contribute most to the error performance, the steady-state performance of the set-membership filtering algorithm is improved and the convergence of the sign-data normalized LMS algorithm is accelerated, while the computational load of the algorithm is also reduced.
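A minimal sketch of the two ingredients named above, sign-data updating and Mmax tap selection, combined in one update loop. This is not the paper's algorithm; the filter length, step size, and echo path are illustrative, and the loop identifies a toy echo path from noise-free data.

```python
import random

# Hedged sketch of a sign-data NLMS update with Mmax partial updating:
# each iteration adapts only the m taps whose current input samples have
# the largest magnitude. All parameter values are illustrative.

def sign_data_nlms_mmax(x, d, n_taps=8, m=4, mu=0.1, eps=1e-8):
    w = [0.0] * n_taps
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]          # regressor, newest first
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        norm = sum(ui * ui for ui in u) + eps
        # Mmax selection: indices of the m largest-magnitude inputs.
        sel = sorted(range(n_taps), key=lambda i: abs(u[i]), reverse=True)[:m]
        for i in sel:
            # Sign-data variant: the input enters only through its sign.
            w[i] += mu * e * (1.0 if u[i] >= 0 else -1.0) / norm
    return w

random.seed(0)
h = [0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0]     # toy "echo path"
x = [random.uniform(-1, 1) for _ in range(5000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w = sign_data_nlms_mmax(x, d)
print([round(v, 2) for v in w[:4]])                 # close to h[:4]
```

Per iteration only m of the n_taps coefficients are touched, which is where the complexity saving comes from; the sort itself adds only comparison operations.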
3.
To address the large number of tap coefficients that must be updated in an echo canceller, an NLMS algorithm based on partial coefficient updating and set-membership filtering is proposed. First, using an Mmax partial-update method based on instantaneous gradient estimates of the weights, each iteration selects for updating the coefficients corresponding to the largest-magnitude input elements. Second, to further reduce the computational load, a set-membership filtering algorithm based on sparse coefficient updating is introduced: the filter coefficients are updated only when the parameter estimation error exceeds a given error bound, which effectively reduces the number of coefficient update iterations.
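The set-membership idea on its own can be sketched in a few lines: skip the update entirely whenever the a-priori error is already inside the error bound. This is a generic set-membership NLMS sketch under toy parameters, not the combined algorithm of the abstract.

```python
import random

# Hedged sketch of set-membership NLMS: coefficients are updated only
# when the a-priori error magnitude exceeds a bound gamma, which is what
# cuts the number of update iterations. Parameter values are illustrative.

def sm_nlms(x, d, n_taps=4, gamma=0.01, eps=1e-8):
    w = [0.0] * n_taps
    updates = 0
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        if abs(e) > gamma:                          # membership test
            mu = 1.0 - gamma / abs(e)               # data-dependent step
            norm = sum(ui * ui for ui in u) + eps
            for i in range(n_taps):
                w[i] += mu * e * u[i] / norm
            updates += 1
    return w, updates

random.seed(1)
h = [0.5, 0.25, -0.1, 0.05]
x = [random.uniform(-1, 1) for _ in range(2000)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w, updates = sm_nlms(x, d)
print(updates, [round(v, 2) for v in w])
```

On this noise-free toy problem the filter converges after a handful of updates and then the membership test suppresses nearly all further iterations, which is the source of the claimed complexity reduction.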
4.
5.
6.
Yukawa M. de Lamare R.C. Sampaio-Neto R. 《IEEE transactions on audio, speech, and language processing》2008,16(4):696-710
This paper presents a new approach to efficient acoustic echo cancellation (AEC) based on reduced-rank adaptive filtering equipped with selective decimation and adaptive interpolation. We propose a novel AEC structure that jointly optimizes an interpolation filter, a decimation unit, and a reduced-rank filter. With a practical choice of parameters in AEC, the total computational complexity of the proposed reduced-rank scheme with the normalized least mean square (NLMS) algorithm is approximately half that of the full-rank NLMS algorithm. We discuss the convergence properties of the proposed scheme and present a convergence condition. First, we examine the performance of the proposed scheme in a single-talk situation with an error-minimization criterion adopted in the decimation selection. Second, we investigate the potential of the proposed scheme in a double-talk situation by employing an ideal decimation selection. In addition to mean squared error (MSE) and power-spectrum analysis of the echo estimation error, subjective assessments based on absolute category rating are performed, and the results demonstrate that the proposed structure provides significant improvements compared to the full-rank NLMS algorithm.
7.
8.
Building on a variable step-size algorithm, and starting from the correlation properties of speech signals, a new decorrelation variable step-size LMS algorithm (DCL-NLMS) is proposed. The algorithm has a simple structure, fast convergence, small steady-state misadjustment, and a computational load comparable to that of NLMS. Simulation results show that, when processing strongly correlated signals, the algorithm not only converges noticeably faster than the other algorithms but also has a clear advantage in steady-state misadjustment.
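The decorrelation idea can be illustrated with a classical decorrelated NLMS update, in which the update direction is the current regressor with its projection onto the previous regressor removed. This is a generic textbook variant used to illustrate the principle, not necessarily the DCL-NLMS of the abstract; all parameters are toy values.

```python
import random

# Hedged sketch of an NLMS update with a decorrelation step for strongly
# correlated (speech-like) input: the correlation with the previous
# regressor is estimated and removed before the coefficient update.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def decorrelated_nlms(x, d, n_taps=4, mu=0.5, eps=1e-8):
    w = [0.0] * n_taps
    for n in range(n_taps, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]           # current regressor
        up = x[n - n_taps:n][::-1]                  # previous regressor
        e = d[n] - dot(w, u)
        a = dot(u, up) / (dot(up, up) + eps)        # correlation estimate
        z = [ui - a * pi for ui, pi in zip(u, up)]  # decorrelated direction
        norm = dot(z, z) + eps
        for i in range(n_taps):
            w[i] += mu * e * z[i] / norm
    return w

random.seed(2)
h = [0.4, -0.2, 0.1, 0.05]
v = 0.0
x = []
for _ in range(4000):                               # AR(1) input, rho = 0.9
    v = 0.9 * v + random.uniform(-1, 1)
    x.append(v)
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w = decorrelated_nlms(x, d)
print([round(c, 2) for c in w])
```

With a strongly correlated AR(1) input, plain (N)LMS slows down because of the input's eigenvalue spread; updating along the decorrelated direction z restores fast convergence at essentially NLMS cost.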
9.
As one of the main features of H.264/AVC, the intra prediction coding technique is a basis of encoding performance and efficiency. The official reference software, the Joint Model (JM), employs rate-distortion optimization (RDO) to reach the best encoding performance. A full search (FS) over all candidate modes under the RDO rule achieves the best rate-distortion trade-off in terms of peak signal-to-noise ratio (PSNR), but the computational complexity increases greatly. Many researchers have sought fast algorithms that decrease this complexity and have designed many excellent, intelligent ones. In this paper we introduce a low-complexity, fast approach to H.264/AVC intra prediction. The new approach reduces the number of candidate modes passed on to the RDO calculation, thereby decreasing the computational complexity. It decides the interpolation direction accurately by calculating the directional pixel-value differences (DPD) of the target block, and then uses statistics over the obtained values to choose the most probable modes. Experimental results demonstrate that the proposed algorithm achieves more than 70% time saving over JM while introducing only a tiny degradation in encoding performance.
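The DPD idea of summing pixel differences along candidate directions can be sketched on a 4x4 block. The direction names match the usual H.264 vertical, horizontal, and DC intra modes, but the scoring and selection rule below are illustrative, not the paper's exact algorithm.

```python
# Hedged sketch of the directional pixel-value difference (DPD) idea:
# sum absolute differences along each candidate direction of a 4x4
# block, then keep the directions with the smallest totals as the most
# probable intra modes.

def dpd_modes(block, keep=2):
    n = len(block)
    vert = sum(abs(block[r][c] - block[r + 1][c])       # down each column
               for r in range(n - 1) for c in range(n))
    horiz = sum(abs(block[r][c] - block[r][c + 1])      # along each row
                for r in range(n) for c in range(n - 1))
    mean = sum(sum(row) for row in block) / (n * n)
    flat = sum(abs(p - mean) for row in block for p in row)  # DC proxy
    scores = {"vertical": vert, "horizontal": horiz, "dc": flat}
    return sorted(scores, key=scores.get)[:keep]

# A block with strong vertical edges: pixel values are constant down
# each column, so the vertical difference total is zero.
block = [[10, 80, 10, 80],
         [10, 80, 10, 80],
         [10, 80, 10, 80],
         [10, 80, 10, 80]]
print(dpd_modes(block))  # ['vertical', 'dc']
```

Only the modes surviving this cheap pre-selection go on to the expensive RDO calculation, which is where the time saving comes from.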
10.
Simulation comparison of the NLMS and RLS algorithms and their application to FECG extraction (cited by 5: 0 self-citations, 5 by others)
This paper compares, through computer simulation, the normalized least-mean-square (NLMS) and recursive least-squares (RLS) adaptive filtering algorithms, and applies both to the adaptive-filter design of a fetal electrocardiograph. The method picks up an ideal reference signal through adaptive filtering and subtracts it from the composite abdominal signal to cancel the maternal ECG (MECG), thereby extracting the fetal ECG (FECG). Computer simulation results show that both algorithms can detect the FECG by effectively suppressing the MECG and other interference. By comparison, RLS performs better in this application: besides converging faster than NLMS and being more stable, it has a higher initial convergence rate, smaller weight misadjustment noise, and stronger noise suppression, although its computational complexity is higher than that of NLMS.
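The RLS recursion being compared above can be sketched in a few lines. The sketch identifies a toy system rather than cancelling a real MECG; in the FECG setting, d would be the abdominal mixture and x the maternal (thoracic) reference. The forgetting factor and initialization are illustrative.

```python
import random

# Hedged sketch of the standard RLS recursion with exponential
# forgetting, applied to toy system identification.

def rls(x, d, n_taps=4, lam=0.99, delta=100.0):
    w = [0.0] * n_taps
    # Inverse correlation matrix P, initialized to delta * I.
    P = [[delta if i == j else 0.0 for j in range(n_taps)]
         for i in range(n_taps)]
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]
        Pu = [sum(P[i][j] * u[j] for j in range(n_taps))
              for i in range(n_taps)]
        denom = lam + sum(u[i] * Pu[i] for i in range(n_taps))
        k = [pi / denom for pi in Pu]               # gain vector
        e = d[n] - sum(w[i] * u[i] for i in range(n_taps))
        for i in range(n_taps):
            w[i] += k[i] * e                        # coefficient update
        for i in range(n_taps):                     # P <- (P - k*(Pu)^T)/lam
            for j in range(n_taps):
                P[i][j] = (P[i][j] - k[i] * Pu[j]) / lam
    return w

random.seed(3)
h = [0.7, -0.4, 0.2, -0.1]
x = [random.uniform(-1, 1) for _ in range(600)]
d = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(len(x))]
w = rls(x, d)
print([round(c, 2) for c in w])  # [0.7, -0.4, 0.2, -0.1]
```

The O(N^2) matrix update per sample is visible in the nested loops over P, which is exactly the complexity penalty the abstract weighs against RLS's faster convergence.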
11.
GPCA (Generalized Principal Component Analysis) is a recently proposed method for data clustering and dimensionality reduction that obtains a low-dimensional representation by clustering samples into different subspaces. GPCA has been applied to problems such as image segmentation and image clustering. The original GPCA algorithm has exponential computational complexity, making it hard to apply in practice to high-dimensional data. To address this problem, this paper proposes SGPCA, a subspace-search-based algorithm that decomposes the clustering problem into searching for a single normal vector of a single hyperplane, searching each subspace separately, and thereby achieves polynomial complexity. Experiments show that the new method not only has lower computational complexity but is also more robust to noise.
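A toy 2-D version of the subspace-search idea: points drawn from two lines through the origin are clustered by alternating between assigning each point to the line it best fits and refitting each line's normal vector from its cluster's scatter matrix. This is a k-subspaces sketch for intuition, not the paper's full SGPCA algorithm.

```python
import math
import random

# Hedged sketch: cluster 2-D points into two lines through the origin by
# alternately (1) assigning each point to the line whose normal vector
# it is most orthogonal to and (2) refitting each normal as the smallest
# eigenvector of the cluster's 2x2 scatter matrix.

def smallest_eigvec(a, b, c):
    """Unit eigenvector for the smaller eigenvalue of [[a, b], [b, c]]."""
    lam = (a + c - math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    if abs(b) > 1e-12:
        vx, vy = b, lam - a
    else:
        vx, vy = (1.0, 0.0) if a < c else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

def two_line_clustering(pts, iters=10):
    normals = [(1.0, 0.0), (0.0, 1.0)]              # initial guesses
    labels = [0] * len(pts)
    for _ in range(iters):
        labels = [min((abs(nx * x + ny * y), j)     # distance to each line
                      for j, (nx, ny) in enumerate(normals))[1]
                  for (x, y) in pts]
        for j in range(2):                          # refit each normal
            cl = [p for p, l in zip(pts, labels) if l == j]
            if len(cl) < 2:
                continue
            a = sum(x * x for x, _ in cl)
            b = sum(x * y for x, y in cl)
            c = sum(y * y for _, y in cl)
            normals[j] = smallest_eigvec(a, b, c)
    return labels, normals

random.seed(4)
pts = []
for _ in range(100):
    t = random.uniform(-1, 1)
    pts.append((t, 2 * t))                          # line y = 2x
for _ in range(100):
    t = random.uniform(-1, 1)
    pts.append((t, -t))                             # line y = -x
labels, normals = two_line_clustering(pts)
print(labels[0] != labels[150])                     # the two lines separate
```

Each subspace here is summarized by one normal vector, mirroring the abstract's reduction of the clustering problem to searching a single perpendicular vector per plane.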
12.
Rough set theory, proposed by Z. Pawlak in 1982, effectively analyzes incomplete information that is uncertain, imprecise, or inconsistent; its advantage is that it requires no initial or additional information about the data, such as probability distributions in statistics. This paper introduces the application of basic rough set theory to data reduction. Based on an analysis of rough set theory over information systems, it describes a reduction algorithm based on the core and attribute significance, modifies the attribute reduction algorithm to lower its computational complexity, and computes the complexity of the algorithm before and after the modification. Experimental results show that the modified algorithm reduces the time complexity while producing a reduction to a near-optimal attribute set.
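The core-and-significance machinery the abstract builds on can be sketched with a tiny decision table: compute the indiscernibility partition for an attribute subset, take the positive region of the decision attribute, and call an attribute dispensable if dropping it leaves the positive region unchanged. The toy table below is hypothetical.

```python
# Hedged sketch of basic rough-set reduction machinery:
# indiscernibility partition, positive region, and a dispensability test
# used to find the core.

def partition(rows, attrs):
    """Group row indices by their values on the given attributes."""
    blocks = {}
    for i, row in enumerate(rows):
        key = tuple(row[a] for a in attrs)
        blocks.setdefault(key, set()).add(i)
    return list(blocks.values())

def positive_region(rows, attrs, decision):
    """Rows whose indiscernibility block has a single decision value."""
    pos = set()
    for block in partition(rows, attrs):
        decisions = {rows[i][decision] for i in block}
        if len(decisions) == 1:                     # block is consistent
            pos |= block
    return pos

# Toy decision table: condition attributes 0..2, decision attribute 3.
rows = [
    (0, 0, 1, "no"),
    (0, 1, 1, "yes"),
    (1, 0, 0, "no"),
    (1, 1, 0, "yes"),
]
full = positive_region(rows, [0, 1, 2], 3)
# An attribute is dispensable if dropping it keeps the positive region;
# indispensable attributes form the core.
for a in [0, 1, 2]:
    rest = [b for b in [0, 1, 2] if b != a]
    print(a, positive_region(rows, rest, 3) == full)
```

In this toy table only attribute 1 is indispensable (dropping it makes every block inconsistent), so the core is {1}.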
13.
In research on the multi-objective flexible job-shop scheduling problem, the solution algorithm and the handling of multiple objectives are crucial. An improved genetic algorithm based on the non-dominated sorting genetic algorithm is therefore proposed to solve this problem: a matrix encoding and corresponding crossover operator are designed, the non-dominated front ranking method is improved, and an adaptive mutation operator based on Pareto rank together with an elitist preservation strategy is proposed. Example computations show that the algorithm exploits the global search ability of the traditional genetic algorithm while preventing premature convergence. The improved front-ranking method quickly obtains the Pareto-optimal set and further reduces computational complexity, and the mutation probability can vary with population diversity, which helps maintain diversity and discover promising individuals.
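The non-dominated front ranking that such algorithms rely on can be sketched directly. The sketch uses the naive repeated-filter formulation for clarity (NSGA-II's fast sorting is an optimization of the same ranking); the objective vectors are illustrative, with both objectives minimized.

```python
# Hedged sketch of non-dominated (Pareto) front ranking as used by
# NSGA-style algorithms; objectives are minimized.

def dominates(p, q):
    """p dominates q: no worse on every objective, better on at least one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def nondominated_fronts(points):
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # The current front: points dominated by no other remaining point.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# (makespan, total machine load) for five candidate schedules.
pts = [(3, 9), (5, 5), (9, 2), (6, 6), (9, 9)]
print(nondominated_fronts(pts))  # [[0, 1, 2], [3], [4]]
```

The front index of each individual is the "Pareto rank" that the abstract's adaptive mutation operator is driven by.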
14.
Efficient vector quantization using genetic algorithm (cited by 1: 1 self-citation, 0 by others)
This paper proposes a new codebook generation algorithm for image data compression using a combined scheme of principal component analysis (PCA) and a genetic algorithm (GA). The combined scheme makes full use of the near-global optimal searching ability of the GA and the computational-complexity reduction offered by PCA to compute the codebook. The experimental results show that our algorithm outperforms the popular LBG algorithm in terms of computational efficiency and image compression performance.
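For context, the LBG baseline mentioned above is the generalized Lloyd iteration, which a minimal 1-D sketch makes concrete. This is the baseline method, not the paper's PCA + GA scheme, and the training data and codebook size are toy values.

```python
import random

# Hedged sketch of LBG (generalized Lloyd) codebook training on 1-D
# data: alternate nearest-codeword partitioning and centroid updates.

def lbg(vectors, codebook_size, iters=20):
    codebook = random.sample(vectors, codebook_size)
    for _ in range(iters):
        # Nearest-codeword partition.
        cells = [[] for _ in codebook]
        for v in vectors:
            j = min(range(len(codebook)), key=lambda i: abs(v - codebook[i]))
            cells[j].append(v)
        # Centroid update (keep the old codeword if a cell is empty).
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return sorted(codebook)

random.seed(5)
data = ([random.gauss(0, 0.1) for _ in range(200)]
        + [random.gauss(5, 0.1) for _ in range(200)])
cb = lbg(data, 2)
print([round(c, 1) for c in cb])  # codewords near 0 and 5
```

LBG converges only to a local optimum dependent on its random initialization, which is exactly the weakness the GA's near-global search is meant to address.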
15.
16.
17.
To address the large number of tap coefficients that must be updated in an echo canceller, the performance and computational complexity of several partial-update NLMS algorithms are analyzed and compared. Following the simplification principle of partial coefficient updating, the filter coefficients are divided into several subsets, and only a subset of the weights is updated in each iteration, reducing the computational load of the algorithm. The complexity and filtering performance of different subset-generation strategies are then analyzed. The analysis shows that the Mmax partial-update method based on instantaneous gradient estimates of the weights, and the selective partial-update algorithm based on the minimum-disturbance principle, perform comparably to full-coefficient updating while adding only a small number of comparison operations relative to sequential partial updating.
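The index-selection logic of two of the subset-generation strategies compared above can be sketched side by side; the tap counts and regressor values are illustrative.

```python
# Hedged sketch of two subset-generation strategies for partial-update
# filters: sequential (round-robin) selection, which needs no data, and
# Mmax selection, which picks the taps whose current input samples are
# largest in magnitude.

def sequential_subset(n_taps, m, iteration):
    """Round-robin: the updated block advances by m taps each iteration."""
    start = (iteration * m) % n_taps
    return [(start + i) % n_taps for i in range(m)]

def mmax_subset(regressor, m):
    """Mmax: indices of the m largest-magnitude input samples.

    The extra cost over sequential updating is only this comparison-based
    selection, matching the abstract's observation that Mmax adds just a
    few comparison operations.
    """
    idx = sorted(range(len(regressor)), key=lambda i: abs(regressor[i]),
                 reverse=True)
    return sorted(idx[:m])

u = [0.1, -0.9, 0.05, 0.7, -0.2, 0.3, 0.0, -0.4]
print(sequential_subset(8, 4, iteration=1))  # [4, 5, 6, 7]
print(mmax_subset(u, 4))                     # [1, 3, 5, 7]
```

Sequential selection costs nothing but ignores the data; Mmax spends a few comparisons to update the taps currently contributing most, which is why it tracks full updating so closely.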
18.
The distributed compressed video sensing scheme combines the advantages of compressive sensing and distributed video coding to get better performance while adapting to resource-limited wireless multimedia sensor networks. However, in conventional distributed compressed video sensing schemes, the self-similarity and high sampling rate of the key frame have not been sufficiently utilized, and the overall computational complexity has increased as these schemes have developed. To solve these problems, we propose a novel distributed compressed video sensing scheme. A new key-frame secondary reconstruction scheme is proposed, which further improves the quality of the key frame while decreasing computational complexity; the key frame's initial reconstruction is further exploited to assist the secondary reconstruction. Then, a hypotheses-set acquisition algorithm based on motion estimation is proposed to improve the quality of the hypotheses set by optimizing the search window at low complexity. Experimental results demonstrate that the overall performance of the proposed scheme outperforms that of state-of-the-art methods.
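The hypotheses-set construction step can be sketched as a motion search: candidate blocks inside a search window of the reference frame are ranked by mean absolute difference (MAD) against the current block, and the best few form the set. The window size, block size, and frames below are illustrative, and the paper additionally optimizes the window itself, which this sketch does not.

```python
# Hedged sketch of hypotheses-set acquisition by motion search: rank
# candidate blocks in a search window by MAD and keep the best few.

def mad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def block_at(frame, r, c, bs):
    return [frame[r + i][c + j] for i in range(bs) for j in range(bs)]

def hypotheses_set(ref, cur_block, center, window, bs, keep=3):
    rows, cols = len(ref), len(ref[0])
    cands = []
    for dr in range(-window, window + 1):
        for dc in range(-window, window + 1):
            r, c = center[0] + dr, center[1] + dc
            if 0 <= r <= rows - bs and 0 <= c <= cols - bs:
                cands.append(((r, c),
                              mad(block_at(ref, r, c, bs), cur_block)))
        # Candidates sorted by similarity; the best `keep` form the set.
    cands.sort(key=lambda t: t[1])
    return [pos for pos, _ in cands[:keep]]

# 8x8 reference frame with a bright 2x2 patch at (4, 5).
ref = [[0] * 8 for _ in range(8)]
for i in range(2):
    for j in range(2):
        ref[4 + i][5 + j] = 100
cur = [100, 100, 100, 100]                          # the patch, as observed
print(hypotheses_set(ref, cur, center=(3, 3), window=3, bs=2)[0])  # (4, 5)
```

Shrinking or repositioning the window is what keeps this search cheap; the quality of the resulting set depends directly on whether the true motion falls inside it.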
19.
Vector quantization (VQ) can perform efficient feature extraction from the electrocardiogram (ECG), with the advantages of dimensionality reduction and increased accuracy. However, existing dictionary learning algorithms for vector quantization are sensitive to dirty data, which compromises classification accuracy. To tackle this problem, we propose a novel dictionary learning algorithm that employs k-medoids clustering optimized by k-means++ and builds dictionaries by searching for and using representative samples, which avoids the interference of dirty data and thus boosts the classification performance of ECG systems based on vector quantization features. We apply our algorithm to vector quantization feature extraction for ECG beat classification and compare it with popular features such as the sampling-point feature, the fast Fourier transform feature, the discrete wavelet transform feature, and our previous beat vector quantization feature. The results show that the proposed method yields the highest accuracy and is capable of reducing the computational complexity of the ECG beat classification system. The proposed dictionary learning algorithm provides more efficient encoding for ECG beats and can improve ECG classification systems based on encoded features.
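The k-means++ seeding plus k-medoids refinement can be sketched on 1-D toy data. Because a medoid must be an actual training sample that minimizes total in-cluster distance, a dirty sample cannot drag a codeword the way it drags a mean-based centroid. This is an illustrative sketch, not the paper's ECG pipeline; all sizes and values are toy choices.

```python
import random

# Hedged sketch: k-means++ style seeding followed by k-medoids
# refinement, so every codeword is a representative training sample.

def kmeanspp_seeds(samples, k, rng):
    seeds = [rng.choice(samples)]
    while len(seeds) < k:
        # Next seed chosen with probability proportional to the squared
        # distance to the nearest existing seed (k-means++ rule).
        d2 = [min((s - c) ** 2 for c in seeds) for s in samples]
        seeds.append(rng.choices(samples, weights=d2)[0])
    return seeds

def kmedoids(samples, k, rng, iters=10):
    medoids = kmeanspp_seeds(samples, k, rng)
    for _ in range(iters):
        clusters = [[] for _ in medoids]
        for s in samples:
            j = min(range(k), key=lambda i: abs(s - medoids[i]))
            clusters[j].append(s)
        # New medoid: the member minimizing total distance to its cluster,
        # which is robust to a dirty member in a way a mean is not.
        medoids = [min(c, key=lambda m: sum(abs(m - s) for s in c))
                   if c else medoids[i] for i, c in enumerate(clusters)]
    return sorted(medoids)

rng = random.Random(6)
data = ([rng.gauss(0, 0.2) for _ in range(100)]
        + [rng.gauss(10, 0.2) for _ in range(100)]
        + [12.0])                                   # one dirty sample
meds = kmedoids(data, 2, rng)
print([round(m, 1) for m in meds])                  # medoids near 0 and 10
```

Both returned codewords are genuine samples from the two clean clusters; the dirty sample at 12.0 is absorbed into a cluster without becoming, or visibly shifting, a codeword.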