Similar Literature
20 similar documents found.
1.
We propose a novel approach to robot-operated active understanding of unknown indoor scenes, based on online RGBD reconstruction with semantic segmentation. In our method, the exploratory robot scanning is both driven by and targeted at the recognition and segmentation of semantic objects in the scene. Our algorithm is built on top of a volumetric depth fusion framework and performs real-time voxel-based semantic labeling over the online reconstructed volume. The robot is guided by an online estimated discrete viewing score field (VSF) parameterized over the 3D space of 2D location and azimuth rotation. The VSF stores for each grid cell the score of the corresponding view, which measures how much that view reduces the uncertainty (entropy) of both the geometric reconstruction and the semantic labeling. Based on the VSF, we select the next best view (NBV) as the target for each time step. We then jointly optimize the traverse path and camera trajectory between two adjacent NBVs by maximizing the integral viewing score (information gain) along the path and trajectory. Through extensive evaluation, we show that our method achieves efficient and accurate online scene parsing during exploratory scanning.
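A minimal sketch of the entropy-based view scoring behind NBV selection, assuming a voxel grid with per-voxel class probabilities and a precomputed visibility mask; the function names and toy data are illustrative, not the paper's implementation.

```python
import numpy as np

def voxel_entropy(label_probs):
    """Per-voxel Shannon entropy of the semantic label distribution.

    label_probs: (N, C) array of per-voxel class probabilities.
    """
    p = np.clip(label_probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def score_views(label_probs, visibility):
    """Score each candidate view by the total entropy of the voxels it sees.

    visibility: (V, N) boolean array; visibility[v, i] is True if view v sees voxel i.
    Returns a (V,) array of viewing scores (a stand-in for the paper's VSF entries).
    """
    h = voxel_entropy(label_probs)   # (N,)
    return visibility @ h            # (V,)

def next_best_view(label_probs, visibility):
    """Pick the candidate view with the highest score."""
    return int(np.argmax(score_views(label_probs, visibility)))

# Toy example: 4 voxels, 3 classes, 2 candidate views.
probs = np.array([[0.9, 0.05, 0.05],
                  [0.34, 0.33, 0.33],
                  [0.6, 0.2, 0.2],
                  [1.0, 0.0, 0.0]])
vis = np.array([[True, True, False, False],
                [False, False, True, True]])
print(next_best_view(probs, vis))  # view 0: it sees the most uncertain voxels
```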

2.
Objective: The density, complexity, and heavy occlusion of objects in indoor point-cloud scenes lead to incomplete and noisy data, which greatly limits indoor scene reconstruction and makes its accuracy hard to guarantee. To better recover a complete scene from unordered point clouds, an indoor scene reconstruction method based on semantic segmentation is proposed. Method: The raw data are downsampled with a voxel filter, 3D scale-invariant feature transform (3D SIFT) feature points of the scene are computed, and the downsampled result is fused with these feature points to obtain an optimized downsampling of the scene. Planar features are then extracted from the fused, sampled scene with the random sample consensus (RANSAC) algorithm and fed into a PointNet network for training, ensuring that coplanar points share the same local features; this yields, for each point, a confidence value for every category in the dataset. On this basis, a projection-based region-growing refinement is proposed that aggregates points belonging to the same object in the semantic segmentation result, producing a finer segmentation. The segmented scene objects are divided into interior-environment and exterior-environment elements, which are reconstructed by model matching and by plane fitting, respectively. Results: Experiments on the S3DIS (Stanford large-scale 3D indoor space dataset) show that the fused sampling algorithm improves the efficiency and quality of the subsequent steps to varying degrees: after sampling, plane extraction runs in only 15% of the time required before sampling, and the semantic segmentation method improves on PointNet by 2.3% in overall accuracy (OA) and 4.2% in mean intersection over union (mIoU). Conclusion: The proposed method improves computational efficiency while preserving key points, clearly improves segmentation accuracy, and yields high-quality reconstruction results.
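A self-contained sketch of the RANSAC plane-extraction step described above, written in plain numpy rather than the paper's pipeline; the distance threshold, iteration count, and toy data are assumptions.

```python
import numpy as np

def ransac_plane(points, n_iters=500, dist_thresh=0.02, rng=None):
    """Fit a single dominant plane to an (N, 3) point cloud with RANSAC.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iters):
        # Sample 3 distinct points and derive the candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Count points within dist_thresh of the candidate plane.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

# Toy example: a noisy z = 0 plane plus some outliers.
rng = np.random.default_rng(0)
plane_pts = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.005, 200)]
outliers = rng.uniform(-1, 1, (40, 3))
normal, d, mask = ransac_plane(np.vstack([plane_pts, outliers]), rng=0)
print(normal.round(2), mask.sum())   # roughly [0, 0, +/-1] and ~200 inliers
```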

3.
4.
Segmentation of magnetic resonance imaging (MRI) brain image data has a significant impact on computer-guided medical image diagnosis and analysis. However, owing to limitations of image acquisition devices and other related factors, MRI images are severely affected by noise and inhomogeneity artefacts, which lead to blurry edges at the intersections of intra-organ soft-tissue regions and make the segmentation process more difficult and challenging. This paper presents a novel two-stage fuzzy multi-objective framework (2sFMoF) for segmenting 3D MRI brain image data. In the first stage, a 3D spatial fuzzy c-means (3DSpFCM) algorithm is introduced that incorporates the 3D spatial neighbourhood information of the volume data to define a new local membership function alongside the global membership function for each voxel. In particular, the membership functions define the underlying relationship between the voxels of a close cubic neighbourhood and the image data in 3D image space. The cluster prototypes thus obtained are fed into a 3D modified fuzzy c-means (3DMFCM) algorithm, which further incorporates local voxel information to generate the final prototypes. The proposed framework addresses the shortcomings of the traditional FCM algorithm, which is highly sensitive to noise and may get stuck in local minima. The method is validated on a synthetic image volume and several simulated and in-vivo 3D MRI brain image volumes and is found to be effective even on noisy data. The empirical results show the superiority of the proposed method over other FCM-based algorithms and related methods devised in the recent past.
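For reference, a compact sketch of the standard FCM iteration that 2sFMoF builds on (the spatial and local membership terms of 3DSpFCM are omitted); the toy intensities and parameters are illustrative.

```python
import numpy as np

def fuzzy_c_means(x, c=3, m=2.0, n_iters=100, tol=1e-5, rng=0):
    """Standard FCM on a flat array of intensities x (shape (N,)).

    Returns (centers, memberships) where memberships has shape (N, c).
    This is the baseline the spatial 3DSpFCM variant extends.
    """
    rng = np.random.default_rng(rng)
    u = rng.dirichlet(np.ones(c), size=len(x))          # (N, c), rows sum to 1
    for _ in range(n_iters):
        um = u ** m
        centers = (um * x[:, None]).sum(0) / um.sum(0)  # fuzzily weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        new_u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(2)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers, u

# Toy example: three intensity clusters mimicking CSF / grey / white matter.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(mu, 5, 300) for mu in (30, 80, 140)])
centers, u = fuzzy_c_means(x, c=3)
print(np.sort(centers).round(1))   # roughly [30, 80, 140]
```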

5.
A new strategy for automatic object extraction in highly complex scenes is presented in this paper. The proposed method gives a solution for 3D segmentation that avoids most of the restrictions imposed by other techniques. Thus, our technique is applicable to unstructured 3D information (i.e. clouds of points) with a single view of the scene, to scenes consisting of several objects where contact, occlusion and shadows are allowed, and to objects with uniform intensity/texture and without restrictions on shape, pose or location. In order to have a fast segmentation stopping criterion, the number of objects in the scene is taken as input. The method is based on a new distributed segmentation technique that explores the 3D data by establishing a set of suitable observation directions. For each exploration viewpoint, a [3D data]-[2D projected data]-[2D segmentation]-[3D segmented data] strategy is applied, which differs from current 3D segmentation strategies. The method has been successfully tested in our lab on a set of real complex scenes. The results of these experiments, conclusions and future improvements are also presented in the paper.

6.
7.
3D line voxelization and connectivity control
Voxelization algorithms that convert a continuous 3D line representation into a discrete one have a dual role in graphics. First, these algorithms synthesize voxel-based objects in volume graphics. The 3D line itself is a fundamental primitive, also used as a building block for voxelizing more complex objects; for example, sweeping a voxelized 3D line along a voxelized 3D circle generates a voxelized cylinder. The second application of 3D line voxelization algorithms is ray traversal in voxel space. Rendering techniques that cast rays through a volume of voxels are based on algorithms that generate the set of voxels visited (or pierced) by the continuous ray. Discrete ray algorithms have been developed for traversing a 3D space partition or a 3D array of sampled or computed data. These algorithms produce one discrete point per step, in contrast to ray casting algorithms for volume rendering, which track a continuous ray at constant intervals, and to voxelization algorithms that generate non-binary voxel values (for example, partial occupancies). Before considering algorithms for generating discrete lines, we introduce the topology and geometry of discrete lines.
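A minimal sketch of one way to voxelize a 3D line, stepping one voxel along the dominant axis per iteration (a simple DDA, not necessarily one of the algorithms surveyed here); endpoints are assumed to be integer voxel indices.

```python
def voxelize_line(p0, p1):
    """26-connected voxelization of the segment p0-p1 (integer voxel coords).

    Steps one voxel along the dominant axis per iteration and rounds the
    other two coordinates, producing one discrete point per step.
    """
    x0, y0, z0 = p0
    x1, y1, z1 = p1
    steps = max(abs(x1 - x0), abs(y1 - y0), abs(z1 - z0), 1)
    voxels = []
    for i in range(steps + 1):
        t = i / steps
        voxels.append((round(x0 + t * (x1 - x0)),
                       round(y0 + t * (y1 - y0)),
                       round(z0 + t * (z1 - z0))))
    return voxels

print(voxelize_line((0, 0, 0), (5, 3, 1)))
# [(0,0,0), (1,1,0), (2,1,0), (3,2,1), (4,2,1), (5,3,1)]
```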

8.
Displaying of details in subvoxel accuracy
With volume segmentation in voxel space, many details, such as fine and thin objects, are lost. In order to display these details accurately, this paper develops a methodology for volume segmentation in subvoxel space. In the subvoxel space, most of the “bridges” between adjacent layers are broken down. Based on the subvoxel space, an automatic segmentation algorithm that preserves details is discussed. After segmentation, the volume data in subvoxel space are reduced to the original voxel space. Thus, details only one or a few voxels wide are extracted and displayed.

9.
This paper reviews volumetric methods for fusing sets of range images to create 3D models of objects or scenes. It also presents a new reconstruction method, which is a hybrid that combines several desirable aspects of techniques discussed in the literature. The proposed reconstruction method projects each point, or voxel, within a volumetric grid back onto a collection of range images. Each voxel value represents the degree of certainty that the point is inside the sensed object. The certainty value is a function of the distance from the grid point to the range image, as well as the sensor's noise characteristics. The super-Bayesian combination formula is used to fuse the data created from the individual range images into an overall volumetric grid. We obtain the object model by extracting an isosurface from the volumetric data using a version of the marching cubes algorithm. Results are shown from simulations and real range finders.
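A toy sketch of the per-voxel certainty fusion described above: each view contributes a certainty derived from the voxel's signed distance to the sensed surface, and the per-view certainties are pooled with an independent-opinion ("super-Bayesian"-style) combination. The sigmoid mapping, noise model, and grid are assumptions, not the paper's exact formulation.

```python
import numpy as np

def certainty_from_distance(signed_dist, sigma=1.0):
    """Map a voxel's signed distance to a sensed surface into a certainty in
    (0, 1) that the voxel lies inside the object (sigmoid is an assumption,
    standing in for the paper's sensor-noise-dependent function)."""
    return 1.0 / (1.0 + np.exp(signed_dist / sigma))

def bayesian_fuse(certainties):
    """Independent-opinion-pool combination of per-view certainties,
    applied voxel-wise along the first axis."""
    p = np.clip(certainties, 1e-6, 1 - 1e-6)
    num = np.prod(p, axis=0)
    return num / (num + np.prod(1.0 - p, axis=0))

# Toy volume: three noisy views of a sphere of radius 8 in a 32^3 grid.
grid = np.mgrid[0:32, 0:32, 0:32].astype(float)
dist_to_surface = np.linalg.norm(grid - 16.0, axis=0) - 8.0     # signed distance
views = np.stack([dist_to_surface +
                  np.random.default_rng(s).normal(0, 0.5, dist_to_surface.shape)
                  for s in range(3)])
fused = bayesian_fuse(certainty_from_distance(views))
# An isosurface at 0.5 (e.g. via a marching-cubes routine) would recover the
# sphere; here we just check the occupied volume.
print(int((fused > 0.5).sum()))   # roughly (4/3)*pi*8^3, i.e. ~2100 voxels
```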

10.
The Fuzzy C-Means (FCM) algorithm is a widely used and flexible approach to automated image segmentation, especially in the field of brain tissue segmentation from 3D MRI, where it addresses the problem of partial volume effects. In order to improve its robustness to classical image deterioration, namely the noise and bias field artifacts which arise in the MRI acquisition process, we propose integrating into the FCM segmentation methodology concepts inspired by the non-local (NL) framework, initially defined and considered in the context of image restoration. The key algorithmic contributions of this article are the definition of an NL data term and an NL regularisation term to efficiently handle intensity inhomogeneities and noise in the data. The resulting new energy formulation is then built into an NL-FCM brain tissue segmentation algorithm. Experiments performed on both synthetic and real MRI data, leading to the classification of brain tissues into grey matter, white matter and cerebrospinal fluid, indicate a significant improvement in performance at higher noise levels when compared to a range of standard algorithms.

11.
Shape-aware Volume Illustration
We introduce a novel volume illustration technique for regularly sampled volume datasets. The fundamental difference between previous volume illustration algorithms and ours is that our results are shape-aware: they depend not only on the rendering styles but also on the shape styles. We propose a new data structure that is derived from the input volume and consists of a distance volume and a segmentation volume. The distance volume is used to reconstruct a continuous field around the object boundary, facilitating smooth illustrations of boundaries and silhouettes. The segmentation volume allows us to abstract or remove distracting details and noise, and to apply different rendering styles to different objects and components. We also demonstrate how to modify the shape of illustrated objects using a new 2D curve analogy technique, which provides an interactive method for learning shape variations from 2D hand-painted illustrations by drawing several lines. Our experiments on several volume datasets demonstrate that the proposed approach achieves visually appealing and shape-aware illustrations. The feedback from medical illustrators is quite encouraging.

12.
Based on an analysis of the current state of research on real-time volume rendering of 3D data fields, this paper focuses on five approaches to real-time volume rendering: reducing the sampling dimensionality, exploiting spatial coherence, skipping empty voxels, hardware-based methods, and parallel processing. The characteristics of these rendering algorithms are compared, pointing out directions for further research on real-time volume rendering of 3D data fields.

13.
Objective: Existing vessel segmentation methods still lack precision, especially for vessels broken up by noise and other degradations. Exploiting the analyticity of Stein-Weiss functions, a new 3D vessel segmentation algorithm is proposed that can extract finer and clearer vessels. Method: First, preprocessing with image enhancement and window width/level adjustment increases the contrast between vessel points and the background. Then, the Stein-Weiss function is combined with a gradient operator: every voxel of the CT volume is represented as a Stein-Weiss function, with the grey values of the voxel's 6-neighbourhood as the coefficients of the function's components. The gradients of the Stein-Weiss function along the x, y, and z directions are computed, and when the gradient magnitude exceeds a threshold the voxel is regarded as a point on a vessel edge. Finally, the 3D vessels are reconstructed from the 2D CT slices with extracted vessel edges. Results: Experiments on liver vessel segmentation and 3D reconstruction using hepatic-vein angiography data (S70) show that the sensitivity and specificity of the proposed algorithm are higher than those of the region-growing algorithm and an octonion-analytic segmentation algorithm. It has a clear advantage in denoising for vessel segmentation and can therefore quickly and effectively segment clearer and finer vessels. Conclusion: A new vessel segmentation algorithm is proposed that uses the analyticity of Stein-Weiss functions to extract vessel edges. Experimental results show that the algorithm effectively and quickly removes vessel noise and yields finer segmentation results. Because Stein-Weiss analyticity holds in any dimension, it can also be used for edge detection in 2D or higher-dimensional images.
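A minimal numpy sketch of the gradient-magnitude-thresholding step (the Stein-Weiss representation itself is omitted; central differences and the threshold value are assumptions):

```python
import numpy as np

def edge_voxels(volume, threshold):
    """Mark voxels whose 3D gradient magnitude exceeds a threshold.

    volume: 3D array of grey values; central differences stand in for the
    gradients of the per-voxel Stein-Weiss representation in the paper.
    """
    gx, gy, gz = np.gradient(volume.astype(float))
    magnitude = np.sqrt(gx**2 + gy**2 + gz**2)
    return magnitude > threshold

# Toy example: a bright tube along the z-axis in a dark 32^3 volume.
vol = np.zeros((32, 32, 32))
vol[14:18, 14:18, :] = 100.0
edges = edge_voxels(vol, threshold=20.0)
print(edges.any(axis=2).sum())       # edge voxels appear around the tube wall
```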

14.
We address the computational resource requirements of 3D example-based synthesis with an adaptive synthesis technique that uses a tree-based synthesis map. A signed-distance field (SDF) is determined for the 3D exemplars, and new models can then be synthesized as SDFs by neighborhood matching. Unlike voxel synthesis approaches, our input is posed in the real domain to preserve maximum detail. In comparison to straightforward extensions of the existing volume texture synthesis approach, we make several improvements in terms of memory requirements, computation time, and synthesis quality. The inherent parallelism in this method makes it suitable for multicore CPUs. Results show that computation times and memory requirements are greatly reduced, and that large synthesized scenes exhibit fine details which mimic the exemplars.

15.
Building on the traditional Markov random field model, a fuzzy Markov random field model is established. From an analysis of the model, a formula for computing the membership degrees of image pixels with respect to the different classes is derived, and an efficient, unsupervised image segmentation algorithm is proposed, achieving accurate segmentation of brain MR images. Segmentation experiments on simulated and clinical brain MR images show that the new algorithm segments images more accurately than traditional Markov-random-field-based segmentation algorithms and fuzzy c-means segmentation algorithms.

16.
Objective: LiDAR point cloud semantic segmentation is an important part of 3D environment perception, and accurately segmenting point cloud objects matters for applications such as self-driving cars and autonomous mobile robots. Because LiDAR point cloud data are unstructured, the irregular point cloud is usually projected into a structured 2D image to extract semantic information, but this loses geometric information from the point cloud and prevents high-accuracy segmentation. In addition, real datasets suffer from uneven data distribution, which leads to poor segmentation of under-represented objects. To address these problems, this paper proposes a LiDAR point cloud segmentation method based on sparse attention and instance augmentation, which effectively improves LiDAR point cloud semantic segmentation accuracy. Method: To counter the imbalanced data distribution, the point cloud data are augmented by instance injection: instance data are extracted from the dataset and injected into every point cloud frame during training. Because sparse convolutional networks cannot obtain a large receptive field, a Transformer module is introduced to enlarge the network's receptive field. To extract the key information in the feature maps, a spatial attention mechanism based on sparse convolution is used, which significantly improves network performance. In addition, a new TVloss is proposed for the edges between point cloud objects of different categories, strengthening the network's supervision. Results: The proposed model is tested on the SemanticKITTI and nuScenes datasets. On the SemanticKITTI dataset, the proposed method's online single-frame...

17.
18.
We present a discrete contour model for the segmentation of image data with any dimension of image domain and value range. The model consists of a representation using simplex meshes and a mechanical formulation of influences that drive an iterative segmentation. The object's representation as well as the influences are valid for any dimension of the image domain. The image influences introduced here can combine information from independent channels of higher-dimensional value ranges. Additionally, the topology of the model automatically adapts to the objects contained in images. Noncontextual tests have validated the ability of the model to reproducibly delineate synthetic objects; in particular, images with a signal-to-noise ratio of SNR ≤ 0.5 are delineated within two pixels of their ground-truth contour. Contextual validations have shown the applicability of the model for medical image analysis in image domains of two, three, and four dimensions, in single- as well as multi-channel value ranges.

19.
For reconstructing sparse volumes of 3D objects from projection images taken from different viewing directions, several volumetric reconstruction techniques are available. The most popular volume reconstruction methods are algebraic algorithms (e.g. the multiplicative algebraic reconstruction technique, MART). These methods, which belong to the voxel-oriented class, reconstruct the volume by computing each voxel's intensity. A new class of tomographic reconstruction methods, called the “object-oriented” approach, has recently emerged and has been used in the Tomographic Particle Image Velocimetry technique (Tomo-PIV). In this paper, we propose an object-oriented approach, called Iterative Object Detection—Object Volume Reconstruction based on Marked Point Process (IOD-OVRMPP), to reconstruct the volume of 3D objects from projection images of 2D objects. Our approach allows the problem to be solved in a parsimonious way by minimizing an energy function based on a least-squares criterion. Each object belonging to 2D or 3D space is identified by its continuous position and a set of features (marks). In order to optimize the population of objects, we use a simulated annealing algorithm which provides a “maximum a posteriori” estimation. To test our approach, we apply it to the field of Tomo-PIV, where volume reconstruction is one of the most important steps in the analysis of volumetric flow. Finally, using synthetic data, we show that the proposed approach is able to reconstruct densely seeded flows.
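For context, a small numpy sketch of the MART baseline mentioned above, not the proposed IOD-OVRMPP method; the relaxation factor and toy projection system are assumptions.

```python
import numpy as np

def mart(A, b, n_iters=50, mu=1.0, x0=None):
    """Multiplicative ART: iteratively rescale voxel intensities so each
    projection equation A[i] @ x matches its measurement b[i].

    A: (M, N) non-negative projection matrix, b: (M,) measurements.
    """
    x = np.ones(A.shape[1]) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iters):
        for i in range(len(b)):
            proj = A[i] @ x
            if proj <= 0:
                continue
            # Multiplicative update keeps x non-negative by construction.
            x *= (b[i] / proj) ** (mu * A[i])
    return x

# Toy 2-voxel "volume" observed by three projection rays.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
print(mart(A, A @ x_true).round(3))   # approximately [2., 3.]
```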

20.
The visualization of complex 3D images remains a challenge, a fact that is magnified by the difficulty of classifying or segmenting volume data. In this paper, we introduce size-based transfer functions, which map the local scale of features to color and opacity. Features in a data set with similar or identical scalar values can be classified based on their relative size. We achieve this with the use of scale fields, which are 3D fields that represent the relative size of the local feature at each voxel. We present a mechanism for obtaining these scale fields at interactive rates, through a continuous scale-space analysis and a set of detection filters. Through a number of examples, we show that size-based transfer functions can improve classification and enhance volume rendering techniques such as maximum intensity projection. The ability to classify objects based on local size at interactive rates proves to be a powerful method for complex data exploration.
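A toy sketch of the size-based transfer function idea, mapping a per-voxel scale field to colour and opacity; the scale-space analysis that produces the scale field is omitted, and the colour ramp and opacity mapping are assumptions.

```python
import numpy as np

def size_transfer_function(scale_field, small=2.0, large=10.0):
    """Map per-voxel feature scale to RGBA: small features opaque and warm,
    large features transparent and cool.

    scale_field: 3D array of local feature sizes (e.g. in voxels).
    Returns an (..., 4) RGBA array in [0, 1].
    """
    t = np.clip((scale_field - small) / (large - small), 0.0, 1.0)
    rgba = np.empty(scale_field.shape + (4,))
    rgba[..., 0] = 1.0 - t          # red fades with size
    rgba[..., 1] = 0.3
    rgba[..., 2] = t                # blue grows with size
    rgba[..., 3] = 1.0 - 0.9 * t    # opacity: small features stand out
    return rgba

# Toy scale field: a small feature (scale 2) embedded in a large one (scale 12).
scales = np.full((16, 16, 16), 12.0)
scales[6:10, 6:10, 6:10] = 2.0
rgba = size_transfer_function(scales)
print(rgba[8, 8, 8], rgba[0, 0, 0])   # opaque red vs. translucent blue
```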
