11.
This paper presents a new optimization method for designing image vector quantization codebooks. Traditional vector quantization methods consider only the attractive influence between codewords and training vectors, which constrains the search space for the optimal solution. This paper proposes a new learning mechanism, fuzzy reinforced learning, which introduces a new repulsion factor on top of the traditional attraction factor, greatly relaxing the constraint that the attraction factor imposes on the search space. Rather than avoiding locally optimal codebooks by injecting random perturbations, the new mechanism uses the combined force of attraction and repulsion to determine each codeword's best direction of movement fairly accurately, so that the codebook as a whole moves toward the global optimum. Experimental results show that the vector quantization algorithm based on the fuzzy reinforced learning mechanism consistently and significantly outperforms the fuzzy K-means algorithm, largely resolving two problems of codebook design in vector quantization: its tendency to fall into local minima, and the sensitivity of the optimization result to the initial codebook.
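The combined attraction/repulsion update described above can be sketched roughly as follows. This is a hypothetical illustration only: the function name, the weights `alpha` and `beta`, and the use of the mean of the other codewords as the repulsion source are all assumptions, not the paper's actual fuzzy reinforced learning rule.

```python
import numpy as np

def update_codebook(codebook, train, alpha=0.5, beta=0.05):
    """One codebook update combining an attraction and a repulsion term."""
    # Assign each training vector to its nearest codeword (attraction set).
    d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d.argmin(axis=1)
    new_cb = codebook.copy()
    for i in range(len(codebook)):
        members = train[assign == i]
        if len(members):
            # Attraction: pull the codeword toward the centroid of its
            # assigned training vectors (as in K-means / LBG).
            new_cb[i] += alpha * (members.mean(axis=0) - codebook[i])
        # Repulsion: push the codeword away from the mean of the other
        # codewords, enlarging the search space beyond pure attraction.
        others = np.delete(codebook, i, axis=0)
        new_cb[i] -= beta * (others.mean(axis=0) - codebook[i])
    return new_cb
```

Iterating this update plays the role of the attraction-only Lloyd step, with the repulsion term nudging codewords out of crowded regions.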
12.
This paper first reviews several common voice activity detection (VAD) methods. It then proposes a new VAD method based on computing, for each frame, the speech presence probability at every frequency bin, and discusses the choice of some of its parameters. Finally, experimental results comparing the new method with traditional methods are presented.
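A per-frequency speech-presence decision of the kind described above can be sketched as follows. This is a minimal illustration under assumed conventions (the function name, the SNR-based probability `xi / (1 + xi)`, the mean pooling over bins, and the threshold are all placeholders, not the paper's parameter choices):

```python
import numpy as np

def vad_frame(frame_power, noise_power, p_thresh=0.5, eps=1e-10):
    """Decide speech/non-speech for one frame from per-bin spectral powers.

    The per-bin speech presence probability is taken as xi / (1 + xi),
    where xi is an a priori SNR estimate obtained by spectral subtraction.
    """
    snr = np.maximum(frame_power / (noise_power + eps) - 1.0, 0.0)
    p_bin = snr / (1.0 + snr)        # per-frequency speech presence prob.
    p_frame = p_bin.mean()           # pool the probabilities over bins
    return p_frame > p_thresh, p_frame
```

In practice the noise power would itself be tracked recursively over non-speech frames rather than assumed known.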
13.
On adaptive HMM state estimation
New online adaptive hidden Markov model (HMM) state estimation schemes are developed, based on extended least squares (ELS) concepts and recursive prediction error (RPE) methods. The best of the new schemes exploit the idempotent nature of Markov chains and work with a least squares prediction error index, using a posteriori estimates, more suited to Markov models than those traditionally used in identification of linear systems. These new schemes learn the set of N Markov chain states, and the a posteriori probability of being in each of the states at each time instant. They are designed to achieve the strengths, in terms of computational effort and convergence rates, of each of the two classes of earlier proposed adaptive HMM schemes without the weaknesses of each in these areas. The computational effort is of order N. Implementation aspects of the proposed algorithms are discussed, and simulation studies are presented to illustrate convergence rates in comparison to earlier proposed online schemes.
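The a posteriori state probabilities that these schemes track can be illustrated with the standard recursive HMM filter step below. This is background machinery only, not the paper's ELS/RPE estimators; the function name and the dense-matrix formulation are assumptions.

```python
import numpy as np

def hmm_filter_step(pi, A, obs_lik):
    """One step of the recursive HMM state filter.

    pi:      current a posteriori state probabilities, length N
    A:       N x N transition matrix, A[i, j] = P(next = j | current = i)
    obs_lik: likelihood of the new observation under each of the N states

    Cost is O(N^2) per step for a dense chain; the schemes above exploit
    structure in the chain to reach order-N effort.
    """
    pred = A.T @ pi          # one-step prediction through the chain
    post = pred * obs_lik    # Bayes update with the new observation
    return post / post.sum() # renormalize to a probability vector
```

An adaptive scheme would additionally update estimates of A and of the state levels from the prediction error at each step.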
14.
The authors present a new method for regularizing the ill-posed problem of computing epicardial potentials from body surface potentials. The method simultaneously regularizes the equations associated with all time points, and relies on a new theorem which states that a solution based on optimal regularization of each integral equation associated with each principal component of the data will be more accurate than a solution based on optimal regularization of each integral equation associated with each time point. The theorem is illustrated with simulations mimicking the complexity of the inverse electrocardiography problem. As must be expected from a method which imposes no additional a priori constraints, the new approach addresses uncorrelated noise only, and in the presence of dominating correlated noise it is only successful in producing a "cleaner" version of a necessarily compromised solution. Nevertheless, in principle, the new method is always preferred to the standard approach, since it (without penalty) eliminates pure noise that would otherwise be present in the solution estimate.
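The per-principal-component regularization idea can be sketched as follows: decompose the multi-time-point data with an SVD, invert each temporal principal component with its own Tikhonov parameter, and recombine. This is a schematic reading of the approach, not the authors' implementation; the function names and the per-component parameters in `lams` (which would in practice be chosen by some optimality criterion) are assumptions.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Zero-order Tikhonov solution of A x = b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def inverse_by_components(A, B, lams):
    """Regularize each principal component of the data separately.

    A:    forward transfer matrix (body surface <- epicardium analogue)
    B:    measurements, one column per time instant
    lams: one regularization parameter per retained component
    """
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = len(lams)
    # Invert each scaled left singular vector with its own parameter.
    X = np.column_stack([tikhonov(A, U[:, i] * s[i], lams[i])
                         for i in range(k)])
    return X @ Vt[:k]   # recombine into time-domain solutions
```

Discarding the trailing components (beyond `k`) is what removes the pure-noise subspace "without penalty".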
15.
Scalable on-demand media streaming with packet loss recovery
Mahanti A., Eager D.L., Vernon M.K., Sundaram-Stukel D.J. IEEE/ACM Transactions on Networking, 2003, 11(2):195-209.
Previous scalable on-demand streaming protocols do not allow clients to recover from packet loss. This paper develops new protocols that: (1) have a tunably short latency for the client to begin playing the media; (2) allow heterogeneous clients to recover lost packets without jitter as long as each client's cumulative loss rate is within a tunable threshold; and (3) assume a tunable upper bound on the transmission rate to each client that can be as small as a fraction (e.g., 25%) greater than the media play rate. Models are developed to compute the minimum required server bandwidth for a given loss rate and playback latency. The results of the models are used to develop the new protocols and assess their performance. The new protocols, Reliable Periodic Broadcast and Reliable Bandwidth Skimming, are simple to implement and achieve nearly the best possible scalability and efficiency for a given set of client characteristics and desirable/feasible media quality. Furthermore, the results show that the new reliable protocols that transmit to each client at only twice the media play rate have similar performance to previous protocols that require clients to receive at many times the play rate.
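To see where the server-bandwidth/startup-latency trade-off comes from, the sketch below works out the classic loss-free periodic-broadcast segment schedule with geometrically doubling segments, where a client listening to all channels from time 0 finishes downloading each segment before its playback starts. This is textbook background, not the paper's Reliable Periodic Broadcast protocol (which adds loss recovery and rate limits); the function name is an assumption.

```python
import math

def periodic_broadcast_channels(duration, startup_delay):
    """Geometric periodic broadcast: segment k has length d * 2**(k-1),
    each cyclically broadcast on its own play-rate channel, so server
    bandwidth equals K play-rate channels for startup delay d."""
    k = math.ceil(math.log2(duration / startup_delay + 1))
    segments = [startup_delay * 2 ** i for i in range(k)]
    # Trim the last segment so the lengths sum exactly to the duration.
    segments[-1] -= sum(segments) - duration
    return k, segments
```

Note that halving the startup delay costs only one extra channel, which is the source of the protocols' scalability.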
16.
Maximum likelihood reconstruction for emission tomography
Previous models for emission tomography (ET) do not distinguish the physics of ET from that of transmission tomography. We give a more accurate general mathematical model for ET in which an unknown emission density λ = λ(x, y, z) generates, and is to be reconstructed from, the number of counts n*(d) in each of D detector units d. Within the model, we give an algorithm for determining an estimate λ̂ of λ which maximizes the probability p(n*|λ) of observing the actual detector count data n* over all possible densities λ. Let independent Poisson variables n(b) with unknown means λ(b), b = 1, ..., B, represent the number of unobserved emissions in each of B boxes (pixels) partitioning an object containing an emitter. Suppose each emission in box b is detected in detector unit d with probability p(b, d), d = 1, ..., D, with p(b, d) a one-step transition matrix, assumed known. We observe the total number n*(d) of emissions in each detector unit d and want to estimate the unknown λ(b), b = 1, ..., B. For each λ, the observed data n* has probability (likelihood) p(n*|λ). The EM algorithm of mathematical statistics starts with an initial estimate λ̂(0) and gives the following simple iterative procedure for obtaining a new estimate λ̂_new from an old estimate λ̂_old, yielding λ̂(k), k = 1, 2, ...:

λ̂_new(b) = λ̂_old(b) Σ_{d=1}^{D} [ n*(d) p(b, d) / Σ_{b'=1}^{B} λ̂_old(b') p(b', d) ],  b = 1, ..., B.
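The iterative update above is the standard MLEM step and translates directly into code. A minimal sketch (the function name and the `eps` guard against empty detectors are my additions; the update itself follows the formula, assuming each row of p sums to 1, i.e., every emission is detected somewhere):

```python
import numpy as np

def mlem_step(lam, p, counts, eps=1e-12):
    """One EM (MLEM) iteration of the update above.

    lam:    current estimate of lambda(b), shape (B,)
    p:      detection probabilities p(b, d), rows summing to 1, shape (B, D)
    counts: observed detector counts n*(d), shape (D,)
    """
    expected = lam @ p                 # sum_b' lam(b') p(b', d) per detector
    ratio = counts / (expected + eps)  # n*(d) / expected counts
    return lam * (p @ ratio)           # multiplicative EM update
```

Each iteration preserves the total count and can only increase the likelihood p(n*|λ), which is what makes the scheme attractive despite its simplicity.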
17.
In this paper, we present a new technique for displaying High Dynamic Range (HDR) images on Low Dynamic Range (LDR) displays in an efficient way on the GPU. The described process has three stages. First, the input image is segmented into luminance zones. Second, the tone mapping operator (TMO) that performs best in each zone is automatically selected. Finally, the resulting tone mapping (TM) outputs for each zone are merged, generating the final LDR output image. To establish the TMO that performs best in each luminance zone we conducted a preliminary psychophysical experiment using a set of HDR images and six different TMOs. We validated our composite technique on several (new) HDR images and conducted a further psychophysical experiment, using an HDR display as the reference, that establishes the advantages of our hybrid three-stage approach over a traditional individual TMO. Finally, we present a GPU version, which is perceptually equal to the standard version but with much improved computational performance.
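The segment-select-merge pipeline can be sketched on the CPU as below. The zone edges and the per-zone operators are placeholders standing in for the psychophysically selected TMOs of the paper; the function name is also an assumption.

```python
import numpy as np

def zonal_tone_map(hdr_lum, zone_edges, tmos):
    """Split pixels into log10-luminance zones, apply one TMO per zone,
    and merge the per-zone outputs into a single LDR result in [0, 1]."""
    zone = np.digitize(np.log10(hdr_lum + 1e-8), zone_edges)
    out = np.empty_like(hdr_lum)
    for z, tmo in enumerate(tmos):
        mask = zone == z
        if mask.any():
            out[mask] = tmo(hdr_lum[mask])   # tone-map this zone only
    return np.clip(out, 0.0, 1.0)
```

A real implementation would also blend across zone boundaries to avoid seams, which the paper's merge stage handles.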
18.
Internet of Vehicles (IoV) is a new style of vehicular ad hoc network that connects the sensors of each vehicle with each other and with other vehicles' sensors through the internet. These sensors generate different tasks that must be analyzed and processed within a given period of time. Sending these tasks to cloud servers increases bandwidth consumption and latency. Fog computing places a small cloud at the network edge that processes jobs in a short period of time instead of forwarding them to cloud computing facilities. In some situations, however, fog computing cannot execute certain tasks due to a lack of resources and must transfer them to the cloud, which again increases latency and bandwidth occupation. Moreover, some fog servers may be fully loaded while others sit idle, implying an unfair distribution of jobs. In this research study, we merge the software-defined network (SDN) with IoV and fog computing and use parked vehicles as assistant fog computing nodes. This improves the capabilities of the fog computing layer and helps decrease the number of tasks migrated to the cloud servers, increasing the ratio of time-sensitive tasks that meet their deadline. In addition, a new load-balancing strategy is proposed: it works proactively to balance the load locally and globally, by the local fog managers and the SDN controller, respectively. The simulation experiments show that the proposed system is more efficient than the VANET-Fog-Cloud and IoV-Fog-Cloud frameworks in terms of average response time, percentage of bandwidth consumption, deadline-meeting ratio, and resource utilization.
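The local-then-global balancing policy described above can be sketched as a simple admission rule: the local fog manager tries the least-utilized local node (parked vehicles included), and only if every local node would exceed a utilization threshold does the SDN controller place the task on the least-utilized remote fog cluster. The function name, the 0.8 threshold, and the dict-based node model are illustrative assumptions, not the paper's exact strategy.

```python
def assign_task(task_load, local_nodes, remote_clusters, threshold=0.8):
    """local_nodes / remote_clusters: dicts mapping name -> (load, capacity).

    Returns ("local", node) or ("global", cluster) and updates the chosen
    entry's load in place.
    """
    def util(entry):
        load, cap = entry
        return load / cap

    # Local fog manager: least-utilized local node first.
    name, (load, cap) = min(local_nodes.items(), key=lambda kv: util(kv[1]))
    if (load + task_load) / cap <= threshold:
        local_nodes[name] = (load + task_load, cap)
        return ("local", name)

    # SDN controller: fall back to the least-utilized remote cluster.
    name, (load, cap) = min(remote_clusters.items(), key=lambda kv: util(kv[1]))
    remote_clusters[name] = (load + task_load, cap)
    return ("global", name)
```

Keeping admissible tasks local is what reduces cloud migrations and bandwidth use in the framework above.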