Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Pie and doughnut charts nicely convey the part–whole relationship, and they have become the most recognizable chart types for representing proportions in business and data statistics. Many experiments have been carried out to study human perception of the pie chart, while the corresponding aspects of the doughnut chart have seldom been tested, even though the two chart types share several similarities. In this paper, we report on a series of experiments in which we explored the effect of a few fundamental design parameters of doughnut charts, and of additional visual cues, on the accuracy of proportion estimates. Since mobile devices are becoming the primary devices for casual reading, we performed all our experiments on such devices. Moreover, the screen size of mobile devices is limited, so it is important to know how this size constraint affects proportion accuracy. For this reason, in our first experiment we tested the chart size and found that it has no significant effect on proportion accuracy. In our second experiment, we focused on the effect of the doughnut chart's inner radius and found that proportion accuracy is insensitive to the inner radius, except in the case of the thinnest doughnut chart. In the third experiment, we studied the effect of visual cues and found that marking the centre of the doughnut chart, or adding tick marks at 25% intervals, improves proportion accuracy. Based on the results of the three experiments, we discuss the design of doughnut charts and offer suggestions for improving the accuracy of proportion estimates.

2.
The odometry information used in mobile robot localization can contain a significant number of errors when the robot experiences slippage. To offset these errors, the use of a low-cost gyroscope in conjunction with Kalman filtering methods has been considered by many researchers. However, results from conventional Kalman filtering methods that fuse a gyroscope with odometry can be infeasible because the parameters are estimated without regard to the physical constraints of the robot. In this paper, a novel constrained Kalman filtering method is proposed that estimates the parameters under the physical constraints using a general constrained optimization technique. State observability is improved by additional state variables, and accuracy is further improved through a non-approximated Kalman filter design. Experimental results show that the proposed method effectively offsets the localization error while yielding feasible parameter estimates.
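The constrained-estimation idea in this abstract can be illustrated with a toy sketch (not the authors' formulation): a two-state heading/gyro-bias Kalman filter fusing a gyro rate with an odometry-derived heading measurement, where the bias estimate is projected back into an assumed physical bound after each update. All numbers (noise covariances, the bias bound) are hypothetical.

```python
def constrained_heading_kf(gyro_rates, heading_meas, dt=0.1,
                           q=1e-4, r=0.1, bias_bound=0.05):
    """Toy 2-state KF: x = [heading theta, gyro bias b].

    Prediction integrates the bias-corrected gyro rate; the update uses an
    odometry-derived heading; the bias is clipped to its physical bounds
    (a simple constraint-projection step)."""
    th, b = 0.0, 0.0
    p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0   # covariance P, row-major
    for w, z in zip(gyro_rates, heading_meas):
        # predict: theta += (w - b)*dt, bias is a random walk
        # F = [[1, -dt], [0, 1]],  P <- F P F' + Q
        th += (w - b) * dt
        f00, f01 = p00 - dt * p10, p01 - dt * p11
        p00 = f00 - dt * f01 + q
        p01 = f01
        p10 = p10 - dt * p11
        p11 = p11 + q
        # update with heading measurement z (H = [1, 0])
        s = p00 + r
        k0, k1 = p00 / s, p10 / s
        innov = z - th
        th += k0 * innov
        b += k1 * innov
        n00, n01 = (1 - k0) * p00, (1 - k0) * p01   # (I - K H) P
        n10, n11 = p10 - k1 * p00, p11 - k1 * p01
        p00, p01, p10, p11 = n00, n01, n10, n11
        # constraint step: project the bias onto its feasible interval
        b = max(-bias_bound, min(bias_bound, b))
    return th, b
```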

3.
Due to the low cost and capabilities of sensors, wireless sensor networks (WSNs) are promising for military and civilian surveillance of people and vehicles. One important aspect of surveillance is target localization. A location can be estimated by collecting and analyzing sensing data on signal strength, time of arrival, time difference of arrival, or angle of arrival. However, this data is subject to measurement noise and is sensitive to environmental conditions, so the resulting location estimates can be inaccurate. In this paper, we add a novel process to further improve the localization accuracy after the initial location estimates are obtained from an existing algorithm. Our idea is to exploit the consistency of the spatial–temporal relationships of the targets we track. Spatial relationships are the relative target locations in a group, and temporal relationships are the locations of a target at different times. We first develop algorithms that improve location estimates using spatial and temporal relationships of targets separately, and then together. We prove mathematically that our methods improve the localization accuracy. Furthermore, we relax the condition that targets should strictly keep their relative positions in the group, and we also show that perfect time synchronization is not required. Simulations were conducted to test the algorithms, using initial target location estimates from existing signal-strength and time-of-arrival algorithms together with implementations of our own algorithms. The results confirmed improved localization accuracy, especially for the combined algorithms. Since our algorithms use features of the targets rather than of the underlying WSNs, they can be built on top of any localization algorithm whose results are not satisfactory.
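The temporal-relationship idea, that a target's estimates at different times should be mutually consistent, can be sketched by fitting a constant-velocity model to a window of noisy per-time estimates (a simplification of the paper's algorithms, which also handle spatial relationships and come with proofs):

```python
def fit_constant_velocity(times, coords):
    """Least-squares fit coord(t) = a + v*t; returns the smoothed coords."""
    n = len(times)
    tbar = sum(times) / n
    cbar = sum(coords) / n
    num = sum((t - tbar) * (c - cbar) for t, c in zip(times, coords))
    den = sum((t - tbar) ** 2 for t in times)
    v = num / den
    a = cbar - v * tbar
    return [a + v * t for t in times]

def temporal_refine(times, points):
    """points: per-time (x, y) estimates of one target; refine each axis."""
    xs = fit_constant_velocity(times, [p[0] for p in points])
    ys = fit_constant_velocity(times, [p[1] for p in points])
    return list(zip(xs, ys))
```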

4.
This article examines the factors influencing the identification and observability of kinematic parameters during robot calibration. A generalized calibration experiment has been simulated using two different identification techniques. Details of the identification techniques and considerations for implementing them using standard IMSL routines are presented. The factors considered during the simulations include: initial estimates of parameters, measurement accuracy and noise, encoder resolution and uncertainty, selection of measurement configurations, number of measurements, and range of motion of the joints during observations. Results are tabulated for the various cases and suggestions are made for the design of robot calibration experiments.

5.
In the last decade, spatio-temporal database research has focused on the design of effective and efficient indexing structures in support of location-based queries such as predictive range queries and nearest neighbor queries. While a variety of indexing techniques have been proposed to accelerate the processing of updates and queries, little attention has been paid to the updating protocol, which is another important factor affecting system performance. In this paper, we propose a generic and adaptive updating protocol for moving object databases that requires fewer updates between objects and the database server, thereby reducing the overall workload of the system. In contrast to the approach adopted by most conventional moving object database systems, where the exact locations and velocities last disclosed are used to predict object motions, we propose the concept of a spatio-temporal safe region to approximate possible future locations. Spatio-temporal safe regions give moving objects a larger margin of tolerance, freeing them from location and velocity updates as long as the errors remain predictable in the database. To answer predictive queries accurately, the server is allowed to probe the latest status of objects when their safe regions are inadequate for returning exact query results. Spatio-temporal safe regions are calculated and optimized by the database server with two conflicting objectives: reducing update workload while guaranteeing query accuracy and efficiency. To achieve this, we propose a cost model that estimates the composition of active and passive updates based on historical motion records and the query distribution. Further performance improvements can be obtained by cutting more updates from the clients when users of the system are comfortable with incomplete but accuracy-bounded query results. We have conducted extensive experiments to evaluate our proposal on a variety of popular indexing structures. The results confirm the viability, robustness, accuracy and efficiency of the proposed protocol.
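A minimal sketch of the client-side update rule that safe regions imply, in one dimension, with the safe region taken as an interval of width 2*tol around a dead-reckoned prediction (the paper's regions and cost model are more elaborate):

```python
def simulate_safe_region(path, tol):
    """Report only when the true position leaves the predicted safe
    interval [pred - tol, pred + tol]; returns the number of updates.

    path: true positions sampled once per time step.
    On each report, the object discloses its position and an
    instantaneous velocity estimate for the server's prediction."""
    t0, p0, v = 0, path[0], 0.0
    updates = 1                         # initial report
    for k in range(1, len(path)):
        pred = p0 + v * (k - t0)        # server-side dead reckoning
        if abs(path[k] - pred) > tol:   # left the safe region: report
            p0, v, t0 = path[k], path[k] - path[k - 1], k
            updates += 1
    return updates
```

For a mostly constant-velocity trajectory, only the initial report and the occasional turn trigger updates, instead of one update per time step.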

6.
As controlled experimentation becomes more common in economics, it behoves economists to pay more attention to the peculiarities of economic experimentation. In particular, while most classical experimentation is essentially atemporal, economic experiments occur over a period of time, because individual economic units must adjust their activities to react to the incentive structure of the experiments. In this paper, the choice of control strategies to improve estimation of the parameters in dynamic economic models with time-varying parameters is considered. Since restrictions on coefficients implied by the model structure cause the parameters also to appear in the solution of the estimation control problem, a sequential procedure immediately suggests itself. The combination of sequential estimation and designed control strategies is shown to yield a marked improvement in the behavior of the estimates over the nonsequential formulation. The maximum-accuracy control problem considered in this paper can also be treated as an initial phase of a stochastic control problem, which avoids having to solve a difficult dual control problem. Finally, examples are presented illustrating the improvement in estimation accuracy that can be obtained.

7.
Estimation of physical parameters in dynamical systems driven by linear partial differential equations is an important problem. In this paper, we introduce the least costly experiment design framework for these systems. It enables parameter estimation with an accuracy that is specified by the experimenter prior to the identification experiment, while at the same time minimising the cost of the experiment. We show how to adapt the classical framework for these systems and take into account scaling and stability issues. We also introduce a progressive subdivision algorithm that further generalises the experiment design framework in the sense that it returns the lowest cost by finding the optimal input signal, and optimal sensor and actuator locations. Our methodology is then applied to a relevant problem in heat transfer studies: estimation of conductivity and diffusivity parameters in front-face experiments. We find good correspondence between numerical and theoretical results.

8.
Objective: Structured scenes satisfying the Manhattan assumption, known as Manhattan worlds, exhibit rich structural features. Vanishing points, as latent observations of lines, are a form of global information that explicitly encodes the attitude relationship between the body frame and the world frame. To estimate vanishing points more accurately, this paper proposes a higher-precision vanishing point estimation algorithm for monocular images based on nonlinear optimization, taking both real-time performance and accuracy into account. Method: We analyze the current best-performing vanishing point estimation methods based on RANSAC (random sample consensus). Through a single-parameter line parameterization, orthogonality-constrained candidate hypothesis generation, and targeted improvements to the RANSAC procedure, vanishing point estimates are obtained faster and more accurately, providing initial values for the subsequent optimization. A least-squares optimization model is built from the error metric computed during line classification and solved iteratively with a nonlinear optimization method; a robust kernel function ensures the accuracy of the iterations and the optimality of the result. Results: The proposed algorithm is compared with the current best-performing algorithms through simulation experiments and experiments on a public dataset. In simulation, compared with RCM (R3_CM1) and RCMI (R3_CM1_Iter), the angular deviation of our results in axis-angle form is reduced by 24.6%; with prior-information constraints, the angular deviation drops by an order of magnitude to only 0.06°, a substantial gain in precision. On the YUD (York Urban Database) dataset, our algorithm reduces the angular deviation by 27.2% and 23.8% relative to RCM and RCMI, respectively, and 80% of the vanishing point estimates have an angular deviation below 1.5°, a clear and more stable improvement. In addition, the average optimization time per image frame in simulation is 0.008 s, preserving overall real-time performance. Conclusion: The proposed vanishing point estimation algorithm improves on RANSAC-based methods and yields more accurate, robust, and stable estimates without sacrificing real-time performance.
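The consensus step underlying RANSAC-style vanishing point estimation can be sketched as follows. This is an exhaustive toy version, not the paper's single-parameter line model or its nonlinear refinement; lines are homogeneous triples (a, b, c) with a*x + b*y + c = 0, and a candidate vanishing point is the cross product of two lines.

```python
def cross(u, v):
    """Cross product of homogeneous 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def vanishing_point(lines, thresh=1e-3):
    """Exhaustive consensus over line pairs: intersect each pair,
    score the intersection by how many lines pass within `thresh`
    of it, and return the best (point, inlier_count)."""
    best, best_n = None, -1
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            x, y, w = cross(lines[i], lines[j])
            if abs(w) < 1e-9:           # intersection at infinity: skip
                continue
            px, py = x / w, y / w
            n = 0
            for a, b, c in lines:       # point-to-line distance test
                d = abs(a * px + b * py + c) / (a * a + b * b) ** 0.5
                if d < thresh:
                    n += 1
            if n > best_n:
                best, best_n = (px, py), n
    return best, best_n
```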

9.

In this study, a novel micro-gripper using a piezoelectric actuator was designed and improved by the design of experiments (DOE) approach. Using a bending PZT actuator connected to the micro-gripper by a rigid wedge can be considered a novel approach in this field. Almost all similar grippers in this category were formerly actuated by piezo-stacks, which have limitations and difficulties such as fabrication at MEMS proportions. The basic design was borrowed from compliant mechanisms, which are suitable for MEMS applications and easy to manufacture at the micro-scale because of their intrinsic integration characteristic. Since stress concentration is common in flexure-hinge compliant mechanisms, our focus was to treat strength as an important factor in the design. Finite element analysis tools were used to implement the DOE based on two criteria: minimizing stress concentration and maximizing the output displacement of the micro-gripper structure as much as possible, while keeping the total size of the gripper in consideration. An experiment was performed to validate the simulation results, and the experimental results agreed well with the simulations. Slight geometrical discrepancies in significant portions of the structure, such as the flexure hinges, partially account for the accumulated error between simulation and experiment.


10.
Deep neural network models now need to be deployed in resource-constrained environments, which calls for efficient and compact network architectures. For the design of compact neural networks, this paper proposes a model compression method based on improved attention transfer (KE). A wide residual teacher network (WRN) guides a compact student network (KENet), transferring spatial and channel attention to the student to improve its performance, and the method is applied to real-time object detection. Image classification experiments on CIFAR verify that knowledge distillation with improved attention transfer improves the performance of compact models, and object detection experiments on VOC show that the KEDet model achieves good accuracy (72.7 mAP) and speed (86 fps). The experimental results demonstrate that the object detection model based on improved attention transfer is both accurate and real-time.

11.
《Advanced Robotics》2013,27(3):329-348
Accurate robot dynamic models require the estimation and validation of the dynamic parameters through experiments. To this end, when performing the experiments, the system has to be properly excited so that the unknown parameters can be accurately estimated. The experiment design basically consists of optimizing the trajectory executed by the robot during the experiment. Due to the restricted workspace of parallel robots, this task is more challenging than for serial robots; this paper therefore focuses on experiment design for the dynamic parameter identification of parallel robots. Moreover, a multicriteria algorithm is proposed in order to reduce the deficiencies of single-criterion optimization. The identification results using trajectories based on a single criterion and on the multicriteria approach are compared, showing that the proposed optimization can be considered a suitable procedure for designing exciting trajectories for parameter identification.

12.
Application of probability distance to online handwritten Chinese character recognition
Drawing on probability theory, Chinese character information is treated as a collection of random events, and the concept of a probability distance is proposed. The probability distance weights each event's influence on the recognition distance according to the stability of that event, and feature selection during recognition is implemented on this basis. Experimental analysis shows good results: the recognition rate was improved by 8 percentage points.
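The abstract does not give the exact definition of the probability distance, but the stated principle, weighting each feature's influence by its stability, can be sketched as an inverse-variance-weighted distance (an illustrative stand-in, not the paper's formula):

```python
def stability_weighted_distance(x, samples, eps=1e-6):
    """Distance from feature vector x to a class template.

    Each feature's contribution is weighted by 1/(variance + eps), so
    stable features (low variance across the class samples) dominate
    while unstable features are discounted."""
    n = len(samples)
    dist = 0.0
    wsum = 0.0
    for j in range(len(x)):
        vals = [s[j] for s in samples]
        mean = sum(vals) / n
        var = sum((v - mean) ** 2 for v in vals) / n
        w = 1.0 / (var + eps)           # stability weight
        dist += w * (x[j] - mean) ** 2
        wsum += w
    return dist / wsum
```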

13.
The existing procedures for crop yield estimation involve Crop Cutting Experiments (CCEs) conducted at harvesting time in plots selected according to a pre-designed sampling scheme using available ground data. These ground sampling designs do not consider crop condition, which is directly related to yield during the season, for stratification and subsequent sample selection, leading to a biased distribution of plots. Moreover, these experiments can provide estimates only at larger areal units, such as the total command area. Hence there is a need to improve the sampling design to achieve more accurate estimates. An alternative methodology is proposed in this paper that exploits information on crop area and crop condition, derived from satellite remote sensing data on a near-real-time basis, to improve the ground sampling design. The methodology is demonstrated in the Davangere and Malebennur divisions of the Bhadra project command area to estimate the average yield of paddy during Rabi 1992-93. The results obtained from the conventional methodology and the improved procedure show that the latter increases the accuracy of the estimates. The yield values obtained from CCE plots were regressed against corresponding Normalized Difference Vegetation Index (NDVI) statistics, and the derived paddy yield model is thus capable of providing yield estimates at smaller areal units, such as within a distributary command.
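The regression of CCE plot yields on NDVI described above can be sketched as ordinary least squares; the NDVI and yield values below are made up for illustration:

```python
def fit_yield_model(ndvi, yields):
    """Ordinary least squares fit of yield = a + b * NDVI.

    ndvi, yields: paired observations from CCE plots (hypothetical here).
    Returns the intercept a and slope b; yield at a new areal unit is
    then predicted from its mean NDVI as a + b * ndvi_value."""
    n = len(ndvi)
    xbar = sum(ndvi) / n
    ybar = sum(yields) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(ndvi, yields)) \
        / sum((x - xbar) ** 2 for x in ndvi)
    a = ybar - b * xbar
    return a, b
```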

14.
Indoor localization using signal strength in wireless local area networks (WLANs) is becoming increasingly prevalent in today's pervasive computing applications. In this paper, we propose an indoor tracking algorithm under the Bayesian filtering and machine learning framework. The main idea is to apply a graph-based particle filter to track a person's location on an indoor floor map, and to use machine learning to approximate the likelihood of an observation at various locations based on calibration data. Nadaraya–Watson kernel regression is adopted to interpolate the Received Signal Strength (RSS) distribution for non-survey points. The success of the proposed kernel-based particle filter (KBPF) lies in two facts: KBPF incorporates the environmental and motion constraints into the model and restricts particles to propagate on the graph, which precludes locations the person is unlikely to occupy; and the nonlinear interpolation method is effective in inferring the RSS distribution at non-survey location points, which makes it possible to reduce the total number of survey locations. In addition, the missing-value problem is addressed in this paper, and different methods for it are compared through experiments. We conducted a series of experiments in a typical office environment. Results show that KBPF achieves superior performance to other existing algorithms. It even yields higher accuracy with only a small fraction of the training data than others do with the full training data set. As a consequence, by applying KBPF, sub-meter accuracy can be obtained while the extensive calibration effort is greatly reduced. Although KBPF is more computationally complex, it can still provide real-time estimates.
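Nadaraya–Watson kernel regression for interpolating RSS at non-survey points, as used in KBPF, reduces to a kernel-weighted mean of the surveyed values. The sketch below assumes a Gaussian kernel and hypothetical survey data; the paper's kernel choice and bandwidth tuning may differ.

```python
import math

def nw_rss(loc, survey_pts, survey_rss, h=5.0):
    """Nadaraya-Watson estimate of RSS at location loc = (x, y).

    survey_pts / survey_rss: calibration locations and their measured
    RSS values; h is the kernel bandwidth (metres, hypothetical)."""
    num = den = 0.0
    for (px, py), rss in zip(survey_pts, survey_rss):
        d2 = (loc[0] - px) ** 2 + (loc[1] - py) ** 2
        w = math.exp(-d2 / (2.0 * h * h))   # Gaussian kernel weight
        num += w * rss
        den += w
    return num / den
```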

15.
Numerous studies have been conducted to compare the classification accuracy of coral reef maps produced from satellite and aerial imagery with different sensor characteristics, such as spatial or spectral resolution, or under different environmental conditions. However, in addition to these physical-environment and sensor-design factors, the ecologically determined spatial complexity of the reef itself presents significant challenges for remote sensing objectives. While previous studies have considered the spatial resolution of the sensors, none have directly drawn the link from sensor spatial resolution to the scale and patterns in the heterogeneity of the reef benthos. In this paper, we study how the accuracy of the commonly used maximum likelihood classification (MLC) algorithm is affected by spatial elements, typical of a Caribbean atoll system, present in high spectral and spatial resolution imagery. The results indicate that the degree to which ecologically determined spatial factors influence accuracy depends on both the amount of coral cover on the reef and the spatial resolution of the images being classified, and may be a contributing factor to the differences in the accuracies obtained when mapping reefs in different geographical locations. Differences in accuracy also arise from the method of pixel selection for training the maximum likelihood classification algorithm. With respect to the estimation of live coral cover, a method that randomly selects training samples from all samples in each class provides better estimates for lower-resolution images, while a method biased toward selecting the pixels with the highest substrate purity gives better estimates for higher-resolution images.

16.
In system design, the best system designed under a simple experimental environment may not be suitable for real-world application when the dramatic changes caused by real-world uncertainties are considered. To deal with this problem, designers should seek the most robust solution, which can be obtained with constrained min–max optimization algorithms. In this paper, a scheme for generating escape vectors is proposed to address the premature convergence of differential evolution. Applying the proposed scheme to a constrained min–max optimization algorithm greatly improves the algorithm's performance. To evaluate the performance of constrained min–max optimization algorithms, more complex test problems are also proposed in this paper. Experimental results show that the improved constrained min–max optimization algorithm achieves a quite satisfactory success rate on all considered test problems under limited accuracy.
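The escape-vector scheme itself is not reproduced here, but the baseline it builds on, differential evolution (DE/rand/1/bin) with greedy selection, can be sketched as follows; all control parameters are conventional defaults, not values from the paper:

```python
import random

def de_minimize(f, bounds, pop_size=20, gens=100, F=0.5, CR=0.9, seed=1):
    """Plain DE/rand/1/bin with bound clipping and greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)     # forced-crossover dimension
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jr:
                    lo, hi = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            ft = f(trial)
            if ft <= fit[i]:            # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    k = min(range(pop_size), key=fit.__getitem__)
    return pop[k], fit[k]
```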

17.
Fuzzy multi-objective theory is applied to build a driver lane-change decision model. Using data obtained from real road experiments, the model's lane-change predictions are computed both with and without accounting for differences in driving propensity, and the simulated macroscopic traffic-flow parameter (the lane-change rate) is compared against the road-experiment observations to validate the driving-propensity inference. The experimental results show that the proposed method markedly improves the accuracy of the driving-propensity identification model.

18.
Service-oriented development methodologies are very often considered for distributed system development. The quality of service-oriented computing can best be assessed through software metrics used to design a prediction model. Feature selection is the process of selecting a subset of features that may lead to improved prediction models. Feature selection techniques can be broadly classified into two subclasses: feature ranking and feature subset selection. In this study, eight different feature ranking techniques and four different feature subset selection techniques are considered for improving the performance of a prediction model focusing on the maintainability criterion. The performance of these feature selection techniques is evaluated using support vector machines with different kernels over a case study, namely five different versions of the eBay Web service. Performance is measured using accuracy and the F-measure. The results show that the maintainability of the service-oriented computing paradigm can be predicted using object-oriented metrics. The results also show that it is possible to find a small subset of object-oriented metrics that predicts maintainability with higher accuracy and also reduces the misclassification error.

19.
The Bat Algorithm (BA) is a recent class of metaheuristic. To address its reduced search precision in later iterations and its tendency to fall into local optima, an improved algorithm with an adaptive Doppler strategy and a dynamic neighborhood strategy is proposed. Motivated by the relative motion between an individual bat and its prey during hunting, the adaptive Doppler strategy modifies the frequency parameter to strengthen the algorithm's global exploration ability. The dynamic neighborhood strategy is integrated with BA to diversify the search structure of individual bats and to mitigate the algorithm's tendency to become trapped in local optima. The convergence and computational complexity of the improved algorithm are analyzed theoretically. In the numerical experiments, the improved algorithm is tested for both performance and application: comparative experiments are run on 10 classical benchmark functions at different dimensions, and the algorithm is applied to the design optimization of a helical compression spring, with comparisons against other algorithms. The experimental results show that the improved algorithm with the adaptive Doppler and dynamic neighborhood strategies achieves better convergence speed, convergence accuracy, and stable robustness.
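The adaptive Doppler and dynamic neighborhood strategies are not reproduced here; the sketch below is the baseline bat algorithm they modify, with frequency-tuned global moves, a local random walk around the best bat, and loudness/pulse-rate schedules. All constants are conventional illustrative values, not from the paper.

```python
import math
import random

def bat_algorithm(f, bounds, n=15, iters=60, fmin=0.0, fmax=2.0,
                  alpha=0.9, gamma=0.9, seed=7):
    """Baseline bat algorithm minimizing f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    fit = [f(x) for x in xs]
    k = min(range(n), key=fit.__getitem__)
    best, fbest = xs[k][:], fit[k]
    loud, r0 = 1.0, 0.5                  # shared loudness / pulse rate
    for t in range(1, iters + 1):
        r = r0 * (1.0 - math.exp(-gamma * t))   # pulse rate grows with t
        for i in range(n):
            freq = fmin + (fmax - fmin) * rng.random()
            for j in range(dim):
                vs[i][j] += (xs[i][j] - best[j]) * freq
            cand = clip([xs[i][j] + vs[i][j] for j in range(dim)])
            if rng.random() > r:
                # local random walk around the current best solution
                cand = clip([best[j] + 0.01 * loud * rng.uniform(-1, 1)
                             for j in range(dim)])
            fc = f(cand)
            if fc <= fit[i] and rng.random() < loud:
                xs[i], fit[i] = cand, fc
            if fc < fbest:               # global best only improves
                best, fbest = cand[:], fc
        loud *= alpha                    # loudness decays over time
    return best, fbest
```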

20.
When performing structural optimization of large scale engineering problems, the choice of experiment design is important. However, classical experiment designs are developed to deal with undesired but inevitable scatter and are thus not ideal for sampling of deterministic computational responses. In this paper, a novel screening and design of computer experiments algorithm is presented. It is based on the concept of orthogonal design variable significances and is applicable for problems where design variables do not simultaneously have a significant influence on any of the constraints. The algorithm presented uses significance orthogonality to combine several one-factor-at-a-time experiments in one several-factors-at-a-time experiment. The procedure results in a reduced experiment design matrix. In the reduced experiment design, each variable is varied exactly once but several variables may be varied simultaneously, if their significances with respect to the constraints are orthogonal. Moreover, a measure of influence, as well as an influence significance threshold, is defined. In applications, the value of the threshold is left up to the engineer. To assist in this choice, a relation between model simplification, expressed in terms of the significance threshold, and computational cost is established in a screening. The relation between efficiency and loss of accuracy for the proposed approach is discussed and demonstrated. For two solid mechanics type problems studied herein, the necessary number of simulations could be reduced by 25% and 64%, respectively, with negligible losses in accuracy.
