Similar Articles
20 similar articles found (search time: 140 ms)
1.
This paper proposes an improved LOD-based simplification strategy and partitioning algorithm for large-scale 3D walkthrough scenes. First, a node-simplification strategy based on least-squares roughness simplifies the complex scene mesh; the Lindstrom algorithm is improved by adding simplification criteria to address the loss of error-model accuracy and the variation in roughness values. The simplified model is then partitioned with a multi-pass cutting algorithm, and the partitioned scene data are judged by an evaluation system based on error factors. Finally, experiments comparing against traditional algorithms show that the proposed simplification strategy and partitioning algorithm have clear advantages in reducing the rendering complexity of large-scale 3D scenes and accelerating real-time display.

2.
Accurate Reconstruction of Multi-Focus Fused Images Based on a Contrast Visual Model   Cited: 1 (self: 0, other: 1)
Using an optimal block-search image-fusion algorithm based on a contrast visual model, the clear recovery of two strictly registered multi-focus images of the same scene is studied in depth. Results on more than twenty different test images show that the RMSE (root-mean-square error) of the method is not only far smaller than the values reported in the literature at home and abroad, but in some cases reaches 0, achieving exact reconstruction of the original image from the multi-focus pair. The algorithm is also simple and extensible, and largely recovers the original even for images that are not strictly registered.
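The RMSE criterion used as the evaluation index above is straightforward to compute; a minimal NumPy sketch (the function name `rmse` is my own, not from the paper):

```python
import numpy as np

def rmse(reference, fused):
    """Root-mean-square error between a reference image and a fused result."""
    ref = np.asarray(reference, dtype=np.float64)
    out = np.asarray(fused, dtype=np.float64)
    return np.sqrt(np.mean((ref - out) ** 2))

# A perfect reconstruction yields RMSE == 0, the ideal case reported above.
img = np.arange(16.0).reshape(4, 4)
print(rmse(img, img))        # identical images -> 0.0
print(rmse(img, img + 1.0))  # constant offset of 1 -> 1.0
```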

3.
Research on LOD Model Generation Algorithms   Cited: 5 (self: 1, other: 5)
彭雷  戴光明 《微机发展》2005,15(4):27-29,32
A level-of-detail (LOD) model is a set of models of the same scene or object generated at different levels of detail for use at render time. Building LOD models effectively reduces data volume and complexity, enabling real-time processing of 3D scenes. This paper analyses LOD model generation algorithms and studies the implementation steps and error computation of three methods: multiresolution model generation based on vertex removal, progressive-mesh simplification, and geometric simplification based on quadric error metrics. This provides a theoretical basis for later applications in terrain visualisation, virtual reality, and related fields.
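The quadric-error-metric simplification mentioned above scores a candidate vertex position by a sum of squared distances to a set of planes, accumulated as a 4×4 quadric. A minimal sketch of that error computation (names are my own, following the standard Garland-Heckbert formulation, not the paper's code):

```python
import numpy as np

def plane_quadric(a, b, c, d):
    # Fundamental error quadric K_p = p p^T for the plane ax+by+cz+d = 0
    # (coefficients assumed normalised so that a^2 + b^2 + c^2 = 1).
    p = np.array([a, b, c, d], dtype=np.float64)
    return np.outer(p, p)

def vertex_error(Q, v):
    # Summed squared distance of vertex v = (x, y, z) to all planes folded
    # into Q, evaluated as v_h^T Q v_h with v_h homogeneous.
    vh = np.append(np.asarray(v, dtype=np.float64), 1.0)
    return float(vh @ Q @ vh)

# Quadric accumulating the planes z = 0 and z = 1:
Q = plane_quadric(0, 0, 1, 0) + plane_quadric(0, 0, 1, -1)
print(vertex_error(Q, [0.0, 0.0, 0.0]))  # dist^2 is 0 to z=0 plus 1 to z=1 -> 1.0
print(vertex_error(Q, [0.0, 0.0, 0.5]))  # midway: 0.25 + 0.25 -> 0.5
```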

4.
Auditory Segmentation Based on Onset and Offset Analysis   Cited: 1 (self: 0, other: 1)
A typical auditory scene in a natural environment contains multiple sources. Auditory scene analysis (ASA) is the process in which the auditory system segregates a scene into streams corresponding to different sources. Segmentation is a major stage of ASA by which an auditory scene is decomposed into segments, each containing signal mainly from one source. We propose a system for auditory segmentation by analyzing onsets and offsets of auditory events. The proposed system first detects onsets and offsets, and then generates segments by matching corresponding onset and offset fronts. This is achieved through a multiscale approach. A quantitative measure is suggested for segmentation evaluation. Systematic evaluation shows that most of the target speech, including unvoiced speech, is correctly segmented, and that target speech and interference are well separated into different segments.

5.
Piecewise linear representation of time series based on important points achieves good fitting accuracy while preserving the global features of the series. However, traditional important-point segmentation algorithms require user-specified parameters such as an error threshold; these parameters depend on the raw data and are inconvenient to set, and both efficiency and fitting quality leave room for improvement. To address this, a segmentation algorithm based on important points of the time series, PLR_TSIP, is proposed. The method first considers the overall fitting error together with the series length; it then pre-segments higher-priority segments in search of the optimal split; finally, by considering whether the maximum and minimum points within a segment move in the same or opposite directions, several important points can be selected in a single pass. Experiments on multiple datasets show lower fitting error and better fitting quality than traditional segmentation algorithms, and substantially higher segmentation efficiency than other important-point methods while still improving the fit.
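For contrast, the conventional threshold-driven piecewise linear representation that the abstract criticises can be sketched as a top-down split at the point of maximum deviation from the segment's chord, repeated until the user-chosen error threshold is met (a simplified illustration with names of my own, not the PLR_TSIP algorithm itself):

```python
import numpy as np

def max_deviation(series, lo, hi):
    # Largest vertical distance from the chord joining (lo, series[lo])
    # and (hi, series[hi]); the argmax acts as the "important point".
    x = np.arange(lo, hi + 1)
    chord = np.interp(x, [lo, hi], [series[lo], series[hi]])
    dev = np.abs(series[lo:hi + 1] - chord)
    return lo + int(np.argmax(dev)), float(np.max(dev))

def top_down_plr(series, max_error):
    # Recursively split at the most deviant point until every segment's
    # deviation from its chord falls below max_error.
    series = np.asarray(series, dtype=np.float64)
    breakpoints = {0, len(series) - 1}
    stack = [(0, len(series) - 1)]
    while stack:
        lo, hi = stack.pop()
        if hi - lo < 2:
            continue
        split, err = max_deviation(series, lo, hi)
        if err > max_error:
            breakpoints.add(split)
            stack += [(lo, split), (split, hi)]
    return sorted(breakpoints)

# A triangular series: rises linearly to a peak at index 10, then falls.
ts = np.concatenate([np.linspace(0, 10, 11), np.linspace(10, 0, 11)[1:]])
print(top_down_plr(ts, 0.5))  # one interior breakpoint at the peak: [0, 10, 20]
```

Note how the result depends on `max_error`; removing that data-dependent parameter is precisely the motivation stated in the abstract.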

6.
Objective: Visual odometry (VO) achieves respectable self-localisation with only an ordinary camera and has become a research focus in computer vision and robotics. Most existing work, however, assumes a static scene containing a single motion model, the camera's own motion, and cannot handle multiple motion models. This paper therefore proposes a multi-motion visual odometry method based on split-and-merge motion segmentation, recovering the motion states of multiple moving objects in the scene in addition to the camera. Method: Building on the traditional VO pipeline, multi-model fitting is introduced to segment the multiple motion models in a dynamic scene, and RANSAC (random sample consensus) estimates the motion parameters of each model. The camera motion and the motion of each moving object are then transformed into a common coordinate frame, yielding the camera's visual odometry result together with the pose of every moving object at each time step. Finally, local-window bundle adjustment directly refines the camera poses and the camera poses relative to each moving object, using the inliers of the camera motion model and the per-frame relative motion parameters to optimise the trajectories of all motion models. Results: The proposed frame-to-frame motion segmentation is robust and achieves near-100% segmentation accuracy on consecutive frames, ensuring accurate estimation of each motion model's parameters. The method effectively estimates not only the camera pose but also the poses of salient moving objects in the scene, with mean position error below 6% for both camera self-localisation and moving-object localisation on each path segment. Conclusion: The method simultaneously segments the camera's own motion model and the motion models of independently moving objects in a dynamic scene, estimates the absolute trajectories of the camera and each dynamic object, and thereby realises multi-motion visual odometry.

7.
Current state-of-the-art image-based scene reconstruction techniques are capable of generating high-fidelity 3D models when used under controlled capture conditions. However, they are often inadequate when used in more challenging environments such as sports scenes with moving cameras. Algorithms must be able to cope with relatively large calibration and segmentation errors as well as input images separated by a wide-baseline and possibly captured at different resolutions. In this paper, we propose a technique which, under these challenging conditions, is able to efficiently compute a high-quality scene representation via graph-cut optimisation of an energy function combining multiple image cues with strong priors. Robustness is achieved by jointly optimising scene segmentation and multiple view reconstruction in a view-dependent manner with respect to each input camera. Joint optimisation prevents propagation of errors from segmentation to reconstruction as is often the case with sequential approaches. View-dependent processing increases tolerance to errors in through-the-lens calibration compared to global approaches. We evaluate our technique in the case of challenging outdoor sports scenes captured with manually operated broadcast cameras as well as several indoor scenes with natural background. A comprehensive experimental evaluation including qualitative and quantitative results demonstrates the accuracy of the technique for high quality segmentation and reconstruction and its suitability for free-viewpoint video under these difficult conditions.

8.

In computer vision, scene text component recognition is an important problem in end-to-end scene text reading systems. It involves two major sub-problems: segmentation of such components into scene characters and classification of segmented characters into known character classes. Significant attention and increasingly focused research effort are being put forth, and reasonable progress in this field has already been made, though a diversity of challenges such as background complexity, variety of text appearances, noise, blur, distortion, and various other degradation and deformation issues remain to be addressed. In this paper, we present (i) a detailed survey of the scene text component segmentation and/or recognition methods reported so far in the literature, (ii) related datasets available for quantitative evaluation and benchmarking of segmentation and/or recognition performance, (iii) comparative results and analysis of the reported methods, and (iv) a discussion of open areas to be explored in order to achieve the goal of end-to-end scene text recognition. Moreover, this paper provides a useful reference for researchers in the area of scene text component segmentation and recognition.

9.
In the last 15 years much effort has been put into the segmentation of videos into scenes. We give a comprehensive overview of the published approaches and classify them into seven groups based on three basic classes of low-level features used for the segmentation process: (1) visual-based, (2) audio-based, (3) text-based, (4) audio-visual-based, (5) visual-textual-based, (6) audio-textual-based and (7) hybrid approaches. We make video scene detection approaches more assessable and comparable by categorising the evaluation strategies used, including the size and type of the datasets as well as the evaluation metrics. Furthermore, to let the reader make use of the survey, we list eight possible application scenarios, including a dedicated section on interactive video scene segmentation, and identify the algorithms applicable to each. Finally, current challenges for scene segmentation algorithms are discussed. The appendix summarises the most important characteristics of the algorithms presented in this paper in table form.

10.
Crowd Counting Based on a Multilayer BP Neural Network and Parameter-Free Fine-Tuning   Cited: 1 (self: 0, other: 1)
徐洋  陈燚  黄磊  谢晓尧 《计算机科学》2018,45(10):235-239
To address the performance drop that most existing crowd-counting methods suffer when applied to new scenes, a crowd-counting method with parameter-free fine-tuning is proposed within a multilayer BP neural network framework. First, image patches are cropped from the training images, and the pedestrians of similar scale thus obtained are used as input to the crowd BP neural network model. The BP network then learns to predict density maps, yielding representative crowd patches. Finally, to handle a new scene, the trained BP network is fine-tuned to the target scene by retrieving samples with the same properties, including candidate-patch retrieval and local-patch retrieval. The experimental datasets include PETS2009, UCSD, and UCF_CC_50. Results on these scenes verify the effectiveness of the proposed method: compared with global-regression and density-estimation counting methods, it shows clear advantages in mean absolute error and mean squared error, and it eliminates the influence of scene differences and foreground segmentation.

11.
Objective: As autonomous driving enters daily life, road-scene segmentation has become a crucial research topic in machine vision. Traditional approaches mostly apply machine learning to threshold-based segmentation, while the recent introduction of deep learning has made convolutional neural networks widespread in this field. Method: Traditional threshold segmentation struggles to extract effective thresholds from road images across varied scenes, and training a deep neural network directly on the data leads to severe over-segmentation. This paper therefore proposes a road-scene segmentation method combining KSW (Kapur-Sahoo-Wong) entropy thresholding with a fully convolutional neural network (FCNN). The method combines the KSW entropy method with a genetic algorithm, exploits deep feature extraction across different scenes, and applies the result to road segmentation for driverless vehicles. The KSW entropy method and genetic algorithm are first applied to the road-scene test set to obtain a training set, which is fed into the fully convolutional network to train an effective model; the trained model can then segment an arbitrary road-scene image. Results: On the KITTI dataset, segmentation accuracy reaches 91.3% for sky and 94.3% for trees, and accuracy for roads, vehicles, and pedestrians improves by about 2%. The over-segmentation previously visible around standing water, mud, trees, and similar content in road images is clearly reduced. Conclusion: Compared with traditional machine-learning road-scene segmentation, the method improves segmentation accuracy to a degree; compared with applying deep learning directly to road-scene segmentation, it largely avoids over-segmentation and improves model robustness. The proposed KSW-plus-FCNN road-scene segmentation algorithm therefore has broad research prospects and is expected to extend to medical and remote-sensing image processing.
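The KSW entropy criterion selects the threshold that maximises the summed entropies of the below- and above-threshold histogram halves. A minimal sketch of that criterion alone, without the genetic-algorithm search the paper adds on top (`kapur_threshold` is my own name):

```python
import numpy as np

def kapur_threshold(image, bins=256):
    # KSW maximum-entropy threshold: choose t maximising the sum of the
    # entropies of the two normalised histogram halves.
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0 = p[:t][p[:t] > 0] / p0          # class-conditional probabilities
        q1 = p[t:][p[t:] > 0] / p1
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Bimodal synthetic "road image": dark pixels around 50, bright around 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 10, 5000),
                      rng.normal(200, 10, 5000)]).clip(0, 255)
t = kapur_threshold(img)
print(t)  # a threshold lying between the two modes
```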

12.
An Adaptive Minimum-Error Thresholding Algorithm   Cited: 31 (self: 4, other: 27)
The 2D minimum-error method is extended to 3D and combined with 3D histogram reconstruction and dimensionality reduction to give a robust minimum-error thresholding algorithm. This method is global, however, and suits only uniformly illuminated images. To improve its adaptivity, a water-flow model is used to estimate the background of non-uniformly illuminated images; the difference image between the original and the estimated background then reduces the interference that uneven illumination causes in segmentation. To further improve performance, the difference image is enhanced by gamma correction, after which the robust minimum-error method performs global segmentation to extract the target. Finally, experiments on images under both uniform and non-uniform illumination compare the algorithm, in terms of mis-segmentation rate and running time, with the 1D minimum-error method, the 2D minimum-error method, the Otsu thresholding algorithm with 3D histogram reconstruction and dimensionality reduction, an adaptive thresholding method based on grey-level fluctuation transformation, and an improved FCM method. The results show a clear improvement in segmentation performance over all of these methods.
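The underlying 1D minimum-error (Kittler-Illingworth) criterion that this work extends models the two histogram halves as Gaussians and minimises a classification-error criterion J(t). A sketch of the global 1D version only, without the water-flow background estimation or gamma correction (names are my own):

```python
import numpy as np

def min_error_threshold(image, bins=256):
    # 1D minimum-error threshold: fit a Gaussian to each histogram half and
    # minimise J(t) = 1 + 2(P1 ln s1 + P2 ln s2) - 2(P1 ln P1 + P2 ln P2).
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    levels = np.arange(bins)
    best_t, best_j = 0, np.inf
    for t in range(1, bins):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 < 1e-9 or p2 < 1e-9:
            continue
        m1 = (levels[:t] * p[:t]).sum() / p1
        m2 = (levels[t:] * p[t:]).sum() / p2
        v1 = ((levels[:t] - m1) ** 2 * p[:t]).sum() / p1
        v2 = ((levels[t:] - m2) ** 2 * p[t:]).sum() / p2
        if v1 < 1e-9 or v2 < 1e-9:
            continue                       # degenerate (single-level) class
        j = 1 + 2 * (p1 * np.log(np.sqrt(v1)) + p2 * np.log(np.sqrt(v2))) \
              - 2 * (p1 * np.log(p1) + p2 * np.log(p2))
        if j < best_j:
            best_t, best_j = t, j
    return best_t

# Unequal populations: 80% background around 60, 20% objects around 180.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(60, 8, 8000),
                      rng.normal(180, 20, 2000)]).clip(0, 255)
t = min_error_threshold(img)
print(t)  # falls between the two Gaussian modes
```

Minimum-error thresholding handles unbalanced class sizes like this better than variance-based criteria, which is one reason the paper builds on it.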

13.
The task of texture segmentation is to identify image curves that separate different textures. To segment textured images, one must first be able to discriminate textures. A segmentation algorithm performs texture-discrimination tests at densely spaced image positions, then interprets the results to localize edges. This article focuses on the first stage, texture discrimination. We distinguish between perceptual and physical texture differences: the former differences are those perceived by humans, while the latter, on which we concentrate, are those defined by differences in the processes that create the texture in the scene. Physical texture discrimination requires computing image texture measures that allow the inference of physical differences in texture processes, which in turn requires modeling texture in the scene. We use a simple texture model that describes textures by distributions of shape, position, and color of substructures. From this model, a set of image texture measures is derived that allows reliable texture discrimination. These measures are distributions of overall substructure length, width, and orientation; edge length and orientation; and differences in averaged color. Distributions are estimated without explicitly isolating image substructures. Tests of statistical significance are used to compare texture measures. A forced-choice method for evaluating texture measures is described. The proposed measures provide empirical discrimination accuracy of 84 to 100% on a large set of natural textures. By comparison, Laws' texture measures provide less than 50% accuracy when used with the same texture-edge detector. Finally, the measures can distinguish textures differing in second-order statistics, although those statistics are not explicitly measured.
The author was with the Robotics Laboratory, Computer Science Department, Stanford University, Stanford, California 94305. He is now with the Institut National de Recherche en Informatique et en Automatique (INRIA), Sophia-Antipolis, 2004 Route des Lucioles, 06565 Valbonne Cedex, France.

14.
The development of common and reasonable criteria for evaluating and comparing the performance of segmentation algorithms has always been a concern for researchers in the area. As discussed in the paper, some of the measures proposed are not adequate for general images (i.e. images of any sort of scene, without any assumption about the features of the scene objects or the illumination distribution) because they assume a certain distribution of pixel gray-level or colour values for the interior of the regions. This paper reviews performance measures that do not make such an assumption and proposes a set of new performance measures in the same line, called the percentage of correctly grouped pixels (CG), the percentage of over-segmentation (OS) and the percentage of under-segmentation (US). Apart from accounting for misclassified pixels, the proposed set of new measures is intended to compute the level of fragmentation of reference regions into output regions and vice versa. A comparison involving similar measures is provided at the end of the paper.
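The idea behind the fragmentation-oriented measures (OS/US) can be illustrated with a simplified sketch that counts reference regions split across several output regions; swapping the arguments probes the opposite direction. This is an illustration of the concept only, not the paper's exact definitions, and all names are my own:

```python
import numpy as np

def fragmentation(reference, output, min_overlap=0.1):
    # Percentage of reference regions covered by more than one output region
    # (each covering at least `min_overlap` of the reference region): a
    # simplified over-segmentation indicator. Swap arguments to probe
    # under-segmentation.
    ref = np.asarray(reference).ravel()
    out = np.asarray(output).ravel()
    fragmented = 0
    labels = np.unique(ref)
    for r in labels:
        mask = ref == r
        covers = [o for o in np.unique(out[mask])
                  if (out[mask] == o).mean() >= min_overlap]
        if len(covers) > 1:
            fragmented += 1
    return 100.0 * fragmented / len(labels)

ref = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1]])
out = np.array([[0, 2, 1, 1],   # reference region 0 split into outputs 0 and 2
                [0, 2, 1, 1]])
print(fragmentation(ref, out))  # 50.0: one of two reference regions fragmented
print(fragmentation(ref, ref))  # 0.0: identical labelings are never fragmented
```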

15.
Objective: Density, complexity, and heavy occlusion of objects in indoor point-cloud scenes cause incomplete, noisy data that greatly limits indoor scene reconstruction and its accuracy. To better recover a complete scene from unordered points, an indoor scene reconstruction method based on semantic segmentation is proposed. Method: The raw data are downsampled by voxel filtering; 3D SIFT (3D scale-invariant feature transform) feature points of the scene are computed and fused with the downsampled result to obtain an optimised downsampling of the scene. RANSAC (random sample consensus) then extracts planar features from the fused samples, which are fed into a PointNet network for training so that coplanar points share local features, yielding each point's confidence for every class in the dataset. On this basis, a projection-based region-growing refinement aggregates points of the same object in the semantic-segmentation result, producing a finer segmentation. The segmented scene objects are classified as interior or exterior environment elements and reconstructed by model matching and plane fitting, respectively. Results: In experiments on the S3DIS (Stanford large-scale 3D indoor space) dataset, the fused sampling improves the efficiency and quality of the subsequent steps to varying degrees: plane extraction after sampling runs in only 15% of the pre-sampling time, and the semantic segmentation outperforms PointNet by 2.3% in overall accuracy (OA) and 4.2% in mean intersection over union (mIoU). Conclusion: The method preserves key points while improving computational efficiency, clearly raises segmentation accuracy, and yields high-quality reconstructions.

16.
Image segmentation quality significantly affects subsequent image classification accuracy. It is necessary to develop effective methods for assessing image segmentation quality. In this paper, we present a novel method for assessing the segmentation quality of high-spatial resolution remote-sensing images by measuring both area and position discrepancies between the delineated image region (DIR) and the actual image region (AIR) of a scene object. In comparison with the most frequently used area coincidence-based methods, our method can assess the segmentation quality more objectively in that it takes into consideration all image objects intersecting with the AIR of a scene object. Moreover, the proposed method is more convenient to use than the existing boundary coincidence-based methods in that the calculation of the distance between the boundary of the image object and that of the corresponding AIR of the scene object is not required. Another benefit of this method over the two types of method above is that the assessment procedure of the segmentation quality can be conducted with less human intervention. The obtained optimal segmentation result can ensure maximal delineation of the extent of scene objects and can be beneficial to subsequent classification operations. The experimental results have shown the effectiveness of this new method for both segmentation quality assessment and optimal segmentation parameter selection.

17.
A Time-Registration Method Based on Piecewise Overlap   Cited: 1 (self: 0, other: 1)
By analysing how timing errors affect velocity and position estimation in integrated navigation systems, it is shown that timing errors in a navigation system cannot be ignored, especially under manoeuvres. Existing time-registration algorithms work only under particular conditions and cannot adequately solve time registration in multi-sensor navigation systems. A time-registration algorithm based on piecewise overlap is therefore proposed: piecewise processing unifies the measurements of different sensors to the same processing instants, and overlapping the segment boundaries raises the sampling rate and improves measurement accuracy. Simulations show that the new algorithm effectively suppresses irregular timing errors and preserves the accuracy and stability of the system.
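At its simplest, unifying measurements sampled at different instants onto common processing times is an interpolation problem; a minimal sketch of that core step (the paper's piecewise-overlap scheme adds segmentation and overlapping on top of this; names are my own):

```python
import numpy as np

def register_to_common_times(t_sensor, z_sensor, t_common):
    # Resample one sensor's measurements onto a common processing timeline
    # by linear interpolation, so all sensors refer to the same instants.
    return np.interp(t_common, t_sensor, z_sensor)

# Two sensors sampling the same ramp signal at unaligned instants.
t1 = np.array([0.00, 0.10, 0.20, 0.30])
z1 = 2.0 * t1                       # sensor 1 measures a ramp of slope 2
t2 = np.array([0.05, 0.15, 0.25])
z2 = 2.0 * t2                       # sensor 2, offset sampling instants
t_common = np.array([0.10, 0.20])
print(register_to_common_times(t1, z1, t_common))  # [0.2 0.4]
print(register_to_common_times(t2, z2, t_common))  # [0.2 0.4]: now aligned
```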

18.
There are three main challenging issues associated with processing range data of large-scale outdoor scenes: (a) significant disparity in the size of features, (b) existence of complex and multiple structures; and (c) high uncertainty in data due to construction error or moving objects. Existing range segmentation methods in the computer vision literature have generally been developed for laboratory-sized objects or shapes with simple geometric features and do not address these issues. This paper studies the main problems involved in segmenting the range data of large building exteriors and presents a robust hierarchical segmentation strategy to extract fine as well as large details from such data. The proposed method employs a high-breakdown robust estimator in a coarse-to-fine approach to deal with the existing discrepancies in size and sampling rates of various features of large outdoor objects. The segmentation algorithm is tested on several outdoor range datasets obtained by different laser range scanners. The results show that the proposed method is an accurate and computationally cost-effective tool that facilitates automatic generation of 3D models of large-scale objects in general and building exteriors in particular.

19.
An Optimised Vanishing-Point Estimation Method and Its Error Analysis   Cited: 1 (self: 0, other: 1)
The common intersection, in the image plane, of the images of a set of parallel 3D lines is called a vanishing point. Vanishing points provide a wealth of information about the 3D structure of a scene. This paper proposes a new, optimised vanishing-point estimation method: line segments in the image are clustered with RANSAC (random sample consensus), and the maximum likelihood estimate (MLE) of the vanishing point is obtained by minimising the Sampson error. The method requires neither the camera parameters nor the 3D positions of the lines. To evaluate the algorithm quantitatively, an error-propagation model for the vanishing point based on back-propagation is constructed. Experimental results verify the effectiveness of the proposed algorithm.
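Before any Sampson-error refinement, a vanishing point for a cluster of lines can be initialised as the least-squares point minimising the summed squared distance to the lines. A minimal sketch of that algebraic baseline, not the paper's MLE (names are my own):

```python
import numpy as np

def vanishing_point(lines):
    # Least-squares point for lines given as (a, b, c) with ax + by + c = 0
    # and a^2 + b^2 = 1, so |ax + by + c| is the point-to-line distance.
    L = np.asarray(lines, dtype=np.float64)
    A, c = L[:, :2], L[:, 2]
    x, *_ = np.linalg.lstsq(A, -c, rcond=None)
    return x

def line_through(p, q):
    # Normalised line coefficients for the line through points p and q.
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    n = np.hypot(a, b)
    return a / n, b / n, -(a * x1 + b * y1) / n

# Three image lines meeting at the point (2, 3).
lines = [line_through((2, 3), (0, 0)),
         line_through((2, 3), (5, 1)),
         line_through((2, 3), (1, 7))]
print(vanishing_point(lines))  # ~ [2. 3.]
```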

20.
International Journal of Computer Mathematics, 2012, 89(5): 1009-1022
A new class of weighted, homogeneous, differentiable means is introduced, referred to as θ-means, which may be extended to the whole plane. These means are applied to Evans–Sanugi nonlinear one-step methods for initial value problems of scalar differential equations. Their general local truncation error is obtained, showing the first order of these methods for θ≠1/2 and second order for θ=1/2. Numerical results for scalar DETEST problems using Arithmetic, Harmonic, Contraharmonic, Quadratic, Geometric, Heronian, Centroidal and Logarithmic θ-means are presented. Both the local error of the methods and the global error of the numerical results have the same functional dependence with the parameters of the numerical method. The results show that a comparison of different methods for scalar problems may be based on the numerical evaluation of local truncation error.
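The effect of swapping the mean can be illustrated with a simplified explicit analogue: advance each step by a generalised mean of two slope evaluations. The paper's θ-mean methods are defined more generally (and implicitly); this sketch and all its names are my own:

```python
import numpy as np

def mean_step(f, t, y, h, mean):
    # One explicit step: combine two slope evaluations with a chosen mean.
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)     # Euler predictor for the second slope
    return y + h * mean(k1, k2)

arithmetic = lambda a, b: (a + b) / 2                 # recovers Heun's method
geometric = lambda a, b: np.sign(a) * np.sqrt(abs(a * b))
harmonic = lambda a, b: 2 * a * b / (a + b)

def solve(f, y0, t_end, n, mean):
    # Integrate y' = f(t, y) from 0 to t_end in n steps of size h.
    t, y, h = 0.0, y0, t_end / n
    for _ in range(n):
        y = mean_step(f, t, y, h, mean)
        t += h
    return y

# y' = y, y(0) = 1: exact solution at t = 1 is e ~ 2.71828.
for m in (arithmetic, geometric, harmonic):
    print(solve(lambda t, y: y, 1.0, 1.0, 100, m))
```

For this smooth, positive test problem all three means give second-order behaviour, consistent with the θ=1/2 case discussed in the abstract.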


Copyright©北京勤云科技发展有限公司  京ICP备09084417号