Similar Literature
20 similar documents retrieved.
1.
A typical problem for parameter estimation in normal mixture models is the unbounded likelihood and the presence of many spurious local maxima. To resolve this problem, we apply the doubly smoothed maximum likelihood estimator (DS-MLE) proposed by Seo and Lindsay (in preparation). We discuss the computational issues of the DS-MLE and propose a simulation-based DS-MLE using Monte Carlo methods as a general computational tool. Simulation results show that the DS-MLE is virtually consistent for any bandwidth choice. Moreover, the parameter estimates of the DS-MLE are quite robust to the choice of bandwidths, as the theory indicates. A new method for bandwidth selection is also proposed.
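The doubly smoothed likelihood lends itself to a simple Monte Carlo approximation. The sketch below is a rough illustration rather than Seo and Lindsay's implementation: the data side is smoothed by resampling with Gaussian noise, and a two-component normal mixture model is smoothed by inflating each component variance by the squared bandwidth; the parameterization, bandwidth value, and Monte Carlo sample size are assumptions.

```python
import numpy as np
from scipy.stats import norm

def ds_loglik(params, x, h, n_mc=2000, rng=None):
    """Monte Carlo doubly smoothed log-likelihood for a 2-component
    normal mixture (sketch; the parameterization is an assumption)."""
    rng = np.random.default_rng(rng)
    w, mu1, mu2, s1, s2 = params          # mixing weight, means, std devs
    # Draw from the kernel-smoothed empirical density: resample data, add N(0, h^2) noise.
    y = rng.choice(x, size=n_mc, replace=True) + rng.normal(0.0, h, size=n_mc)
    # Kernel-smoothed model density: each normal component convolved with the
    # Gaussian kernel simply has its variance inflated by h^2.
    dens = (w * norm.pdf(y, mu1, np.sqrt(s1**2 + h**2))
            + (1 - w) * norm.pdf(y, mu2, np.sqrt(s2**2 + h**2)))
    return np.mean(np.log(dens + 1e-300))

# Example: evaluate the smoothed log-likelihood of a candidate parameter vector.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 0.5, 50)])
print(ds_loglik((0.75, 0.0, 4.0, 1.0, 0.5), x, h=0.3))
```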

2.
An algorithm for data-driven bandwidth selection
The analysis of a feature space that exhibits multiscale patterns often requires kernel estimation techniques with locally adaptive bandwidths, such as the variable-bandwidth mean shift. Proper selection of the kernel bandwidth is, however, a critical step for superior space analysis and partitioning. This paper presents a mean shift-based approach for local bandwidth selection in the multimodal, multivariate case. The method is based on a fundamental property of normal distributions regarding the bias of the normalized density gradient. This paper demonstrates that, within the large sample approximation, the local covariance is estimated by the matrix that maximizes the magnitude of the normalized mean shift vector. Using this property, the paper develops a reliable algorithm which takes into account the stability of local bandwidth estimates across scales. The validity of the theoretical results is proven in various space partitioning experiments involving the variable-bandwidth mean shift.
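For flavor, a minimal sample-point variable-bandwidth mean shift iteration with a Gaussian kernel is sketched below; it is not the paper's local-covariance selection algorithm, and the per-point bandwidths (here taken from k-nearest-neighbour distances) are an assumed choice.

```python
import numpy as np

def variable_bandwidth_mean_shift(x, points, h, n_iter=100, tol=1e-6):
    """Follow one mode-seeking trajectory of the sample-point variable-bandwidth
    mean shift with a Gaussian kernel; h holds one bandwidth per data point."""
    d = points.shape[1]
    for _ in range(n_iter):
        diff = points - x                                        # (n, d)
        w = np.exp(-0.5 * np.sum(diff**2, axis=1) / h**2) / h**(d + 2)
        x_new = w @ points / w.sum()                             # kernel-weighted mean
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Two clusters; per-point bandwidths taken as the distance to the 10th neighbour.
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(5, 1.5, (100, 2))])
dists = np.sort(np.linalg.norm(points[:, None] - points[None, :], axis=-1), axis=1)
h = dists[:, 10]
print(variable_bandwidth_mean_shift(np.array([4.0, 4.0]), points, h))   # converges near (5, 5)
```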

3.
Common simplifications of the bandwidth matrix cannot be applied to existing kernels for density estimation with compositional data. In this paper, kernel density estimation methods are modified on the basis of recent developments in compositional data analysis and bandwidth matrix selection theory. The isometric log-ratio normal kernel is used to define a new estimator in which the smoothing parameter is chosen from the most general class of bandwidth matrices on the basis of a recently proposed plug-in algorithm. Both simulated and real examples are presented to illustrate the behaviour of our approach and to show the advantage of the new estimator over existing methods.
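A minimal sketch of the underlying idea, assuming a Gaussian kernel applied in isometric log-ratio coordinates and a hand-picked unconstrained bandwidth matrix (in practice the matrix would come from a plug-in selector, and this sketch evaluates the density in ilr coordinates without back-transforming it to the simplex):

```python
import numpy as np
from scipy.stats import multivariate_normal

def ilr_basis(D):
    """Orthonormal (D x (D-1)) contrast matrix built from Helmert-type vectors
    (one common choice; any orthonormal basis of the clr subspace works)."""
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        v = np.zeros(D)
        v[:j] = 1.0 / j
        v[j] = -1.0
        V[:, j - 1] = v * np.sqrt(j / (j + 1.0))
    return V

def ilr(x, V):
    return np.log(x) @ V

def kde_ilr(query, data, H):
    """Kernel density estimate in ilr coordinates with an unconstrained
    bandwidth matrix H (sketch; H would come from a plug-in selector)."""
    V = ilr_basis(data.shape[1])
    z, zq = ilr(data, V), ilr(query, V)
    return np.mean([multivariate_normal.pdf(zq, mean=zi, cov=H) for zi in z])

# Example with 3-part compositions (rows sum to 1).
data = np.random.dirichlet([2, 3, 5], size=200)
H = np.array([[0.05, 0.01], [0.01, 0.05]])   # hypothetical bandwidth matrix
print(kde_ilr(np.array([0.2, 0.3, 0.5]), data, H))
```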

4.
Local outlier detection has been one of the hot topics in data mining in recent years. For the problem of denoising traffic data, a local outlier detection algorithm based on locally estimated density is proposed. The algorithm uses kernel density estimation to compute a density estimate for each data object, representing its locally estimated density, and introduces the average distance to the object's k-neighbourhood as neighbourhood information when computing the bandwidth of the kernel function. The locally estimated density is then used to compute a local outlier factor for each data object, and the magnitude of the local outlier factor determines whether the object is an outlier. Experiments show that the algorithm performs well on both UCI benchmark data sets and simulated data sets.
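A small self-contained sketch of this kind of detector, with the per-object bandwidth set from the average distance to its k nearest neighbours; the distance metric, kernel form, and the exact outlier-factor definition below are illustrative assumptions, not the paper's precise algorithm.

```python
import numpy as np

def kde_local_outlier_factor(X, k=5):
    """Local outlier factor based on kernel density estimates, with each
    point's bandwidth set from its average k-nearest-neighbour distance."""
    n, d = X.shape
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    knn = np.sort(dist, axis=1)[:, 1:k + 1]          # exclude self-distance
    h = knn.mean(axis=1) + 1e-12                     # per-point bandwidth
    # Gaussian-kernel density estimate at each point, using per-point bandwidths.
    K = np.exp(-0.5 * (dist / h[None, :])**2) / (h[None, :]**d)
    dens = K.mean(axis=1)
    # Outlier factor: ratio of the neighbours' average density to the point's own density.
    nbr_idx = np.argsort(dist, axis=1)[:, 1:k + 1]
    lof = dens[nbr_idx].mean(axis=1) / dens
    return lof                                       # larger values indicate more outlying points

X = np.vstack([np.random.randn(100, 2), [[8.0, 8.0]]])
print(kde_local_outlier_factor(X).argmax())          # index 100: the injected outlier
```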

5.
For ARMAX systems with unknown noise distributions, an adaptive nonparametric noise density estimation method is proposed, in which the global and local bandwidths of the Gaussian kernel are dynamically adjusted according to the estimation error, realizing adaptive estimation of the unknown noise density. By minimizing the likelihood-based criterion, an iterative parameter identification algorithm based on the noise density estimate is given; the convergence of the algorithm is analyzed and a sufficient condition for convergence is provided. Simulation results show that the proposed algorithm has strong noise immunity and good convergence when the system noise is unknown.
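As a loose illustration of bandwidth adaptation in noise density estimation, the sketch below evaluates a sample-point adaptive Gaussian kernel density of the current residuals, where a global (normal-reference) bandwidth is modulated by Abramson-style local factors; this is a generic adaptive KDE, not the paper's estimation-error-driven adjustment or its identification iteration.

```python
import numpy as np

def adaptive_noise_density(e, x, alpha=0.5):
    """Adaptive Gaussian-kernel estimate of the noise density at point x from
    residuals e: a global bandwidth is modulated by local factors inversely
    proportional to a pilot density at each residual (generic sketch)."""
    n = len(e)
    h0 = 1.06 * np.std(e, ddof=1) * n ** (-1 / 5)               # global bandwidth
    pilot = np.mean(np.exp(-0.5 * ((e[:, None] - e[None, :]) / h0) ** 2),
                    axis=1) / (h0 * np.sqrt(2 * np.pi))          # pilot density at each residual
    lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** (-alpha)   # local bandwidth factors
    h = h0 * lam
    return np.mean(np.exp(-0.5 * ((x - e) / h) ** 2) / (h * np.sqrt(2 * np.pi)))

e = np.random.standard_t(df=3, size=400)       # heavy-tailed residuals
print(adaptive_noise_density(e, x=0.0))
```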

6.
Wireless sensor networks (WSNs) are usually deployed as monitoring systems in which detection and estimation are distributed across the sensors. Sensor selection in WSNs is considered here for target tracking. A distributed estimation scenario is considered based on the extended information filter. A cost function using the geometrical dilution of precision measure is derived for active sensor selection. A consensus-based estimation method is proposed in this paper for heterogeneous WSNs with two types of sensors. The convergence properties of the proposed estimators are analyzed under time-varying inputs. Accordingly, a new adaptive sensor selection (ASS) algorithm is presented in which the number of active sensors is adaptively determined based on the absolute local innovations vector. Simulation results show that the tracking accuracy of the ASS is comparable to that of the other algorithms.
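A toy sketch of the selection rule implied by the last step: a sensor stays active when the magnitude of its local innovations vector is large. The threshold, the minimum number of active sensors, and the fallback rule below are all hypothetical.

```python
import numpy as np

def select_active_sensors(innovations, threshold=0.5, min_active=2):
    """Adaptive sensor selection sketch: keep a sensor active when the magnitude
    of its local innovations vector exceeds a threshold, while always retaining
    a minimum number of sensors."""
    mags = np.array([np.linalg.norm(v) for v in innovations])
    active = np.flatnonzero(mags > threshold)
    if active.size < min_active:                       # fall back to the most informative ones
        active = np.argsort(mags)[::-1][:min_active]
    return active

innov = [np.array([0.1, 0.0]), np.array([0.9, 0.4]),
         np.array([0.05, 0.02]), np.array([0.6, 0.1])]
print(select_active_sensors(innov))                    # -> sensors 1 and 3
```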

7.
8.
This paper presents a multirobot cooperative event-based localization scheme with improved bandwidth usage in a heterogeneous group of mobile robots. The proposed method relies on an agent-based framework that defines the communications between robots and on an event-based Extended Kalman Filter that performs the cooperative sensor fusion from local, global and relative sources. The event is generated when the pose error covariance exceeds a predefined limit. In this way, the robots update the pose using the available relative information only when necessary, using less bandwidth and computational resources than time-based methods, which allows bandwidth to be allocated to other tasks while extending battery life. The method is tested using a simulation platform developed in Java with a group of differential mobile robots, each represented by an agent in a JADE framework. The pose estimation performance, error covariance and number of messages exchanged in the communication are measured and used to compare the traditional time-based approach with the proposed event-based algorithm. Also, the compromise between the accuracy of the localization method and the bandwidth usage is analyzed for different event limits. A final experimental test with two SUMMIT XL robots validates the simulation results.
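A bare-bones sketch of the event trigger described above: a relative-measurement update is requested only when the pose error covariance exceeds a predefined limit. The limit value, the use of the largest eigenvalue as the covariance measure, the process noise, and the post-update reset are all assumptions.

```python
import numpy as np

def should_trigger_update(P, limit=0.25):
    """Event trigger sketch: request a relative-measurement update only when
    the pose error covariance exceeds a predefined limit (here applied to the
    largest eigenvalue; the threshold value is hypothetical)."""
    return np.linalg.eigvalsh(P).max() > limit

# Example: covariance grows during dead reckoning until an event fires.
P = np.diag([0.01, 0.01, 0.005])          # x, y, heading covariance
Q = np.diag([0.03, 0.03, 0.02])           # per-step process noise
for step in range(10):
    P = P + Q                             # prediction-only growth (no absolute fix)
    if should_trigger_update(P):
        print(f"step {step}: event fired, exchange relative measurements")
        P = np.diag([0.01, 0.01, 0.005])  # covariance after the cooperative update (illustrative)
        break
```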

9.
For resource-constrained wireless sensor networks, a target tracking method based on data censoring and quantized innovations is proposed. The target state is estimated using the quantized innovations received at the fusion center together with the information conveyed by the censoring process. Each sensor node performs the censoring process using a cubature Kalman filter, while the fusion center runs an auxiliary particle filter. To save node energy and bandwidth, only the signs of the innovations of the selected observations are sent to the fusion center, which also exploits the information contained in the data-dropping process, improving the target tracking accuracy. Simulation results demonstrate the effectiveness of the method.
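The following toy snippet illustrates the per-node transmission rule this implies, under the simplifying assumption of a scalar measurement: an observation is censored when its normalized innovation is small, and otherwise only the sign of the innovation (one bit) is sent to the fusion center. The threshold value is hypothetical.

```python
import numpy as np

def censor_and_quantize(z, z_pred, S, gamma=1.0):
    """Per-node sketch: transmit only when the normalized innovation exceeds a
    censoring threshold, and then send only its sign (one bit)."""
    innov = z - z_pred
    if innov**2 / S <= gamma:          # censored: nothing is transmitted
        return None
    return int(np.sign(innov))         # quantized innovation: +1 or -1

print(censor_and_quantize(z=2.3, z_pred=1.0, S=0.5))   # -> 1 (transmitted)
print(censor_and_quantize(z=1.1, z_pred=1.0, S=0.5))   # -> None (censored)
```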

10.
Methods for improving the basic kernel density estimator include variable locations, variable bandwidths and variable weights. Typically these methods are implemented separately and via pilot estimation of variation functions derived from asymptotic considerations. The starting point here is a simple maximum likelihood procedure which allows (in its greatest generality) variation of all these quantities at once, bypassing asymptotics and explicit pilot estimation. One special case of this approach is the density estimator associated with nonparametric maximum likelihood estimation (NPMLE) in a normal location mixture model. Another, closely associated with the NPMLE, is a kernel convolution sieve estimator proposed in 1982 but little used in practice to date. Simple algorithms are utilised, a simulation study is reported on, a method for bandwidth selection is investigated and an illustrative example is given. The simulations and other considerations suggest that the kernel convolution sieve provides an especially promising framework for further practical utilisation and development. The method has a further advantage: it automatically reduces, where appropriate, to a few-component mixture model which indicates and initialises parametric mixture modelling of the data.
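A compact sketch of the kernel convolution sieve as a normal location mixture with components fixed on a grid and weights fitted by EM; the grid size, bandwidth, and iteration count below are arbitrary choices, and the bandwidth selection step discussed in the paper is omitted.

```python
import numpy as np
from scipy.stats import norm

def kernel_convolution_sieve(x, h, n_grid=50, n_iter=200):
    """Kernel convolution sieve density estimate (sketch): a normal location
    mixture with components fixed on a grid and bandwidth h, fitted by EM over
    the mixing weights only."""
    grid = np.linspace(x.min(), x.max(), n_grid)
    w = np.full(n_grid, 1.0 / n_grid)
    dens_mat = norm.pdf(x[:, None], loc=grid[None, :], scale=h)   # (n, n_grid)
    for _ in range(n_iter):
        resp = dens_mat * w                                       # E-step: responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)                                     # M-step: update weights
    return grid, w                       # estimate: f(t) = sum_j w_j * N(t; grid_j, h^2)

x = np.random.randn(300)
grid, w = kernel_convolution_sieve(x, h=0.4)
f_at_zero = np.sum(w * norm.pdf(0.0, grid, 0.4))
print(f_at_zero)    # should be near the standard normal density at 0 (~0.40)
```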

11.
To reduce tedious work in cartoon animation, several computer-assisted systems, including automatic inbetweening and cartoon reusing systems, have been proposed. In existing automatic inbetweening systems, accurate correspondence construction, which is a prerequisite for inbetweening, cannot be achieved. For cartoon reusing systems, the lack of an efficient similarity estimation method and reusing mechanism makes them impractical for users. The semi-supervised graph-based cartoon reusing approach proposed in this paper aims at generating smooth cartoons from existing data. In this approach, the similarity between cartoon frames can be accurately evaluated by calculating a distance based on the local shape context, which is expected to be rotation and scaling invariant. With the semi-supervised algorithm, given an initial frame, the most similar cartoon frames in the cartoon library are selected as candidates for the next frame. Smooth cartoons can be generated by carrying out the algorithm repeatedly to select new cartoon frames after the cartoonists specify the motion path in a background image. Experimental results for candidate frame selection on our cartoon dataset suggest the effectiveness of the proposed local shape context for similarity evaluation. The other experiments show the excellent cartoon-generation performance of our approach.

12.
Existing media streaming protocols provide bandwidth adaptation features in order to deliver seamless video streams during an abrupt bandwidth shortage on the networks. For instance, popular HTTP streaming protocols such as HTTP Live Streaming (HLS) and MPEG-DASH are designed to select the most appropriate streaming quality based on client-side bandwidth estimation. Unfortunately, controlling the quality at the client side means the effectiveness of the adaptive streaming is not controlled by service providers, which harms consistency in quality of service. In addition, recent studies show that selecting media quality based on bandwidth estimation may exhibit unstable behavior in certain network conditions. In this paper, we demonstrate that the drawbacks of existing protocols can be overcome with a server-side, buffer-based quality control scheme. Server-side quality control solves the service quality problem by eliminating client assistance. The buffer-based control scheme eliminates the side effects of bandwidth-based stream selection. We achieve this without client assistance by designing a play buffer estimation algorithm. We prototyped the proposed scheme in our streaming service testbed, which supports pre-transcoding and live-transcoding of the source media file. Our evaluation results show that the proposed quality control performs very well in both simulated and real environments.
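A minimal sketch of such a scheme, assuming the server estimates the client's play buffer as "media seconds delivered minus wall-clock playback time" and switches bitrate from that estimate alone; the class name, thresholds, and bitrate ladder are hypothetical, not the paper's algorithm.

```python
import time

class ServerSideQualityController:
    """Server-side play-buffer estimate driving quality selection (sketch):
    track how many seconds of media have been sent versus wall-clock playback
    time, and choose the bitrate from that buffer estimate alone."""

    LADDER = [400, 800, 1500, 3000]          # available bitrates (kbps)

    def __init__(self):
        self.start = None
        self.media_sent = 0.0                # seconds of media delivered so far
        self.level = 0

    def on_segment_sent(self, segment_duration):
        if self.start is None:
            self.start = time.monotonic()
        self.media_sent += segment_duration

    def estimated_buffer(self):
        played = max(0.0, time.monotonic() - self.start) if self.start else 0.0
        return self.media_sent - played      # estimated seconds buffered at the client

    def next_bitrate(self):
        buf = self.estimated_buffer()
        if buf < 5 and self.level > 0:       # draining buffer: step quality down
            self.level -= 1
        elif buf > 15 and self.level < len(self.LADDER) - 1:
            self.level += 1                  # comfortable buffer: step quality up
        return self.LADDER[self.level]

ctrl = ServerSideQualityController()
ctrl.on_segment_sent(4.0)                    # a 4-second segment was delivered
print(ctrl.next_bitrate())                   # small buffer at start-up -> lowest bitrate
```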

13.
曾玉龙  罗志年 《计算机仿真》2012,(8):131-133,416
研究提高通信性能问题,在SC-FDE宽带无线系统中,多径效应存在制约系统传输带宽提高的符号间干扰问题。目前采用的分块传输,是将数据分块和导频分块,前者导频利用率不高,而后者信道估计不够精确。为了提高系统频带利用率和信道估计精确度,在数据分块和导频分块基础上,本文提出了一种改进的数据帧结构,即对数据块和导频可同时划分多个块,并将位置相近的分块导频进行重复利用,可进行多次信道估计。结果表明,与单一的导频分块或者数据分块方法相比,系统性能提高了约1dB。仿真证明,改进方法对SC-FDE中的信道通信性能的提高具有参考价值。  相似文献   

14.
Software cost estimation is one of the most crucial activities in the software development process. In the past decades, many methods have been proposed for cost estimation, and case-based reasoning (CBR) is one of these techniques. Feature selection is an important preprocessing stage of case-based reasoning. Most existing feature selection methods for case-based reasoning are 'wrappers', which can usually yield high fitting accuracy at the cost of high computational complexity and low explainability of the selected features. In our study, mutual information based feature selection (MICBR) is proposed. This approach hybridizes the 'wrapper' and 'filter' mechanisms; filters are another kind of feature selector with much lower complexity than wrappers, and the features selected by filters are likely to generalize to other conditions. The MICBR is then compared with popular feature selectors and with published works. The results show that the MICBR is an effective feature selector for case-based reasoning, overcoming some of the limitations and computational complexities of other feature selection techniques in the field.
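A small illustration of the filter half of such a scheme, ranking features by a histogram-based mutual information estimate with the target; the bin count, feature count, and toy data are assumptions, and the wrapper stage and the CBR estimator itself are omitted.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram-based estimate of the mutual information (in nats) between a
    feature and the target."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def mi_filter(X, y, n_select=5):
    """Filter-style feature selection: rank features by mutual information with
    the target effort value (sketch of the filter part of an MICBR-like scheme)."""
    scores = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:n_select]   # indices of the top features

# Example: the informative feature (column 0) should be ranked first.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)
print(mi_filter(X, y, n_select=3))
```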

15.
We describe a method for automatic determination of the regularization parameters for the class of simultaneous super-resolution (SR) algorithms. This method, proposed in (Zibetti et al., 2008c), is based on the joint maximum a posteriori (JMAP) estimation technique, which is a fast alternative for estimating the parameters. However, the classical JMAP technique can be unstable and may generate multiple local minima. In order to stabilize the JMAP estimation, while achieving a cost function with a unique global solution, we derive an improved solution by modeling the JMAP hyperparameters with a gamma prior distribution. In this work, experimental results are provided to illustrate the effectiveness of the proposed method for automatic determination of the regularization parameters for simultaneous SR. Moreover, we contrast the proposed method with a reference method with known fixed parameters, as well as with other parameter selection methods based on the L-curve. These results validate the proposed method as a very attractive alternative for estimating the regularization parameters.

16.
17.
The plug-in bandwidth selection method in nonparametric kernel hazard estimation is considered, and weakly dependent sample data are assumed. A general result of asymptotic optimality for the plug-in bandwidth is presented, which is valid for the hazard function as well as for the density and distribution functions. In a simulation study, this method is compared with the "leave more than one out" cross-validation criterion under dependence. Simulations show that smaller errors and much less sample variability can be achieved, and that a good selection of the pilot bandwidth can be made by means of "leave one out" cross-validation. Finally, an application to an earthquake data set is presented.
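For reference, the sketch below shows the plug-in idea in its simplest density-estimation form: a normal-reference pilot bandwidth is used to estimate the curvature functional R(f''), which is then plugged into the AMISE-optimal bandwidth formula. This is a generic one-stage plug-in for independent data, not the hazard-specific, dependence-aware selector of the paper.

```python
import numpy as np

def plug_in_bandwidth(x):
    """One-stage plug-in bandwidth for a Gaussian-kernel density estimate:
    estimate R(f'') with a normal-reference pilot bandwidth, then plug it into
    the AMISE-optimal formula h = (R(K) / (n * mu2(K)^2 * R(f'')))^(1/5)."""
    n = len(x)
    sigma = np.std(x, ddof=1)
    g = 1.06 * sigma * n ** (-1 / 5)                        # pilot bandwidth (normal reference)
    u = (x[:, None] - x[None, :]) / g
    # Fourth derivative of the standard normal density: phi''''(u) = (u^4 - 6u^2 + 3) * phi(u).
    phi4 = (u**4 - 6 * u**2 + 3) * np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    psi4 = phi4.sum() / (n**2 * g**5)                       # estimate of R(f'')
    return (1.0 / (2 * np.sqrt(np.pi) * psi4 * n)) ** (1 / 5)

x = np.random.randn(500)
print(plug_in_bandwidth(x))   # roughly comparable to Silverman's rule for normal data
```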

18.

In this paper, we analyze the relationship between sensor bandwidth and heading drift, and improve a pedestrian dead-reckoning (PDR) system by accounting for the heading drift caused by insufficient bandwidth. A PDR system using a foot-mounted inertial measurement unit (IMU) is generally based on an inertial navigation system (INS). In order to reduce the estimated position error, the INS is combined with the zero-velocity update (ZUPT), which assumes that the pedestrian's shoe velocity is zero during the stance phase. Although the error can be reduced through ZUPT, estimation errors due to other causes remain. The angular rate and acceleration signals measured by the inertial sensor have various frequency components depending on the motion of the shoe. In the heel-strike phase, the signals change sharply due to the impact, and higher-frequency components are generated than in the other phases. If only the accuracy of the inertial sensor is considered and a sensor with insufficient bandwidth is used, estimation errors occur in the PDR system. In a standard PDR system, loss of information due to narrow bandwidth causes heading drift, which is an unobservable state in the filter. In order to compensate for the heading drift due to insufficient bandwidth, we analyze the estimation errors according to the bandwidth. Moreover, we propose a PDR system that compensates for the heading drift estimated from the sensor bandwidth. To improve the estimation performance, the proposed system compensates the heading drift according to the sensor bandwidth during the heel strike, which contains the high-frequency components. The experimental results show the improved performance of the proposed system compared with the standard algorithm.
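The ZUPT step that the abstract builds on can be sketched in a few lines: detect the stance phase from the raw IMU signals and apply a zero-velocity pseudo-measurement to the velocity states. The detection thresholds and measurement noise below are hypothetical, and a full system would run this inside an INS error-state filter together with the bandwidth-dependent heading compensation the paper proposes.

```python
import numpy as np

def detect_stance(gyro, accel, g=9.81, w_thresh=0.6, a_thresh=0.8):
    """Simple stance-phase detector for a foot-mounted IMU: declare zero velocity
    when the angular rate is small and the specific force is close to gravity
    (thresholds are hypothetical and sensor-dependent)."""
    return (np.linalg.norm(gyro) < w_thresh and
            abs(np.linalg.norm(accel) - g) < a_thresh)

def zupt_update(v, P_v, r=1e-3):
    """Zero-velocity pseudo-measurement: a Kalman update pulling the velocity
    states toward zero whenever a stance phase is detected (velocity-only sketch)."""
    R = r * np.eye(3)
    K = P_v @ np.linalg.inv(P_v + R)       # Kalman gain for measurement z = 0 - v
    v_new = v + K @ (-v)
    P_new = (np.eye(3) - K) @ P_v
    return v_new, P_new

v, P_v = np.array([0.05, -0.02, 0.01]), 0.1 * np.eye(3)
if detect_stance(np.array([0.1, 0.0, 0.05]), np.array([0.2, 0.1, 9.7])):
    v, P_v = zupt_update(v, P_v)
print(v)     # velocity pulled toward zero during the stance phase
```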


19.
A bottleneck bandwidth measurement method based on piecewise density computation
In packet-pair bottleneck bandwidth measurement, the window width chosen for the kernel-estimation filtering algorithm is important for both the accuracy and the speed of the computation. To address this problem, an effective bottleneck bandwidth measurement method based on piecewise density computation is proposed. The bottleneck bandwidth of the network is measured initially and the resulting samples are filtered; the approximate measurement range of the bottleneck bandwidth and the degree of interference from background traffic during measurement are computed, the density-computation intervals are then partitioned dynamically, and each interval is used as the window width for kernel estimation of the measurement samples. Compared with kernel density filtering using a fixed-proportion window width, this method filters the bottleneck bandwidth samples more reasonably: it does not affect the computational accuracy while improving the computational speed and efficiency, and it is suitable for situations in which the bottleneck bandwidth must be monitored frequently.
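A rough sketch of the piecewise idea under simplifying assumptions: the measured range is partitioned by sample quantiles rather than by the interference-driven rule of the paper, each segment's width serves as the kernel window for the samples it contains, and the mode of the resulting density is taken as the bottleneck bandwidth estimate. The segment count and toy data are arbitrary.

```python
import numpy as np

def piecewise_kde_mode(samples, n_segments=4, grid_pts=400):
    """Piecewise-density sketch: partition the measured bandwidth range, use each
    segment's width as the kernel window for the samples falling in it, and
    return the overall density mode as the bottleneck bandwidth estimate."""
    edges = np.quantile(samples, np.linspace(0.0, 1.0, n_segments + 1))
    grid = np.linspace(samples.min(), samples.max(), grid_pts)
    dens = np.zeros_like(grid)
    for a, b in zip(edges[:-1], edges[1:]):
        seg = samples[(samples >= a) & (samples <= b)]
        h = max(b - a, 1e-9)                          # segment width as window width
        for s in seg:
            dens += np.exp(-0.5 * ((grid - s) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return grid[np.argmax(dens)]

# Packet-pair dispersion samples (Mbps): true bottleneck near 10, cross-traffic noise below.
rng = np.random.default_rng(2)
samples = np.concatenate([rng.normal(10.0, 0.3, 80), rng.uniform(2.0, 9.0, 40)])
print(piecewise_kde_mode(samples))                    # a value close to 10 Mbps
```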

20.
In multi-sensor information fusion systems, the local estimation errors are correlated to some extent owing to the system process noise and correlated measurement noise. For the sensor selection and fusion problem in which this correlation among local estimation errors is taken into account, an optimization criterion based on the fused estimation accuracy is constructed; a cardinality constraint on the sensor subset is introduced, turning the sensor selection and fusion problem into a combinatorial optimization problem; and the cross-entropy optimization method is adopted, obtaining the solution by alternately performing two steps: sampling, and updating the parameters of the sampling distribution.
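A generic cross-entropy sketch of this sampling/update alternation for choosing k of n sensors; the toy cost function below ignores the cross-correlations the paper models, and all tuning constants, names, and the example sensor variances are arbitrary assumptions.

```python
import numpy as np

def ce_sensor_selection(cost, n, k, n_samples=200, n_elite=20, n_iter=30, seed=0):
    """Cross-entropy method for selecting k of n sensors: alternate between
    (i) sampling candidate subsets from independent Bernoulli selection
    probabilities and (ii) refitting those probabilities to the elite
    (lowest-cost) subsets. `cost` maps a boolean selection mask to the
    fusion-accuracy criterion to be minimized."""
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)
    best_mask, best_cost = None, np.inf
    for _ in range(n_iter):
        # Sample subsets while enforcing the cardinality constraint |S| = k.
        masks = np.zeros((n_samples, n), dtype=bool)
        for m in range(n_samples):
            idx = rng.choice(n, size=k, replace=False, p=p / p.sum())
            masks[m, idx] = True
        costs = np.array([cost(mask) for mask in masks])
        elite = masks[np.argsort(costs)[:n_elite]]
        if costs.min() < best_cost:
            best_cost, best_mask = costs.min(), masks[np.argmin(costs)]
        p = 0.7 * elite.mean(axis=0) + 0.3 * p          # smoothed parameter update
    return best_mask, best_cost

# Toy cost: fused variance for independent sensors with variances v
# (correlation between local errors would enter through a full covariance here).
v = np.array([1.0, 0.2, 0.5, 0.8, 0.3, 2.0, 0.4, 1.5])
cost = lambda mask: np.inf if not mask.any() else 1.0 / np.sum(mask / v)
print(ce_sensor_selection(cost, n=len(v), k=3))
```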
