20 similar documents retrieved.
1.
Aiming at detail rendering in volume data, a new volume illumination model, called the Composed Scattering Model (CSM), is presented. To enhance different details in volume data, the scattering intensity is decomposed into volume scattering intensity and surface scattering intensity with different weight functions. Based on the Gaussian probability distribution of the gray level and gradient of the data, we propose an accurate method, called composed segmentation, to detect the materials in a voxel. In addition, we discuss the principle of constructing these weight functions from the operators defined in composed segmentation. CSM can generate images containing more detail than most popular volume rendering models. The model has been applied to direct volume rendering of 3D data sets obtained by CT and MRI. The resulting images show not only rich detail but also clear boundary surfaces. CSM is thus demonstrated to be an accurate volume rendering model suited to detail enhancement in volume data sets.
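As a rough illustration of the decomposition described above, the sketch below blends a volume-scattering and a surface-scattering intensity using a Gaussian weight on gradient magnitude. The function names and the parameter values (`mu`, `sigma`) are hypothetical, not taken from the paper:

```python
import math

def surface_weight(grad_mag, mu=0.5, sigma=0.2):
    # Hypothetical Gaussian weight: gradient magnitudes near mu suggest a boundary surface.
    return math.exp(-((grad_mag - mu) ** 2) / (2 * sigma ** 2))

def composed_scattering(i_volume, i_surface, grad_mag):
    # Blend volume and surface scattering intensity (a sketch of the CSM idea).
    w_s = surface_weight(grad_mag)
    w_v = 1.0 - w_s
    return w_v * i_volume + w_s * i_surface
```

At a strong boundary (gradient near `mu`) the surface term dominates; in homogeneous regions the volume term does.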
2.
Probabilistic multiscale image segmentation
Vincken K.L., Koster A.S.E., Viergever M.A. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(2): 109-120
A method is presented to segment multidimensional images using a multiscale (hyperstack) approach with probabilistic linking. A hyperstack is a voxel-based multiscale data structure whose levels are constructed by convolving the original image with a Gaussian kernel of increasing width. Between voxels at adjacent scale levels, child-parent linkages are established according to a model-directed linkage scheme. In the resulting tree-like data structure, roots are formed to indicate the most plausible locations in scale space where segments in the original image are represented by a single voxel. The final segmentation is obtained by tracing back the linkages for all roots. The present paper deals with probabilistic (or multiparent) linking. The multiparent linkage structure is translated into a list of probabilities that indicate which voxels are partial volume voxels and to what extent. Probability maps are generated to visualize the progress of weak linkages in scale space when going from fine to coarse scale. It is demonstrated that probabilistic linking gives a significantly improved segmentation compared with conventional (single-parent) linking.
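The level-construction step can be sketched as follows, using a 1D signal as a stand-in for the voxel-based hyperstack and hypothetical sigma values; the child-parent linking scheme itself is not shown:

```python
import math

def gaussian_kernel(sigma, radius=None):
    radius = radius if radius is not None else max(1, int(3 * sigma))
    ks = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(ks)
    return [k / s for k in ks]  # normalised so smoothing preserves overall level

def smooth(signal, sigma):
    kernel = gaussian_kernel(sigma)
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)  # clamp at the borders
            acc += w * signal[idx]
        out.append(acc)
    return out

def hyperstack(signal, sigmas=(0.5, 1.0, 2.0, 4.0)):
    # Level 0 is the original; higher levels are increasingly blurred copies.
    return [list(signal)] + [smooth(signal, s) for s in sigmas]
```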
3.
Jiuxiang Hu, Razdan A., Nielson G.M., Farin G.E., Baluch D.P., Capco D.G. IEEE Transactions on Visualization and Computer Graphics, 2003, 9(3): 320-328
This paper presents a coarse-grain approach for the segmentation of objects with gray levels appearing in volume data. The input data is on a 3D structured grid of vertices v(i, j, k), each associated with a scalar value. In this paper, we consider a voxel as a κ × κ × κ cube, and each voxel is assigned two values: expectancy and standard deviation (E-SD). We use the Weibull noise index to estimate the noise in a voxel and to obtain more precise E-SD values for each voxel. We plot the frequency of voxels which have the same E-SD; 3D segmentation based on the Weibull E-SD field is then presented. Our test bed includes synthetic data as well as real volume data from a confocal laser scanning microscope (CLSM). Analyses of these data all show distinct and defining regions in their E-SD fields. Guided by the E-SD field, we can efficiently segment the objects embedded in real and simulated 3D data.
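A minimal sketch of computing the E-SD field, assuming non-overlapping κ × κ × κ blocks on a nested-list volume and ignoring the Weibull noise-index correction:

```python
import math

def esd_field(volume, kappa=2):
    # Map each kappa^3 block of a cubic nested-list volume to (expectancy, standard deviation).
    n = len(volume)
    field = {}
    for bi in range(0, n, kappa):
        for bj in range(0, n, kappa):
            for bk in range(0, n, kappa):
                vals = [volume[bi + i][bj + j][bk + k]
                        for i in range(kappa) for j in range(kappa) for k in range(kappa)]
                mean = sum(vals) / len(vals)
                var = sum((v - mean) ** 2 for v in vals) / len(vals)
                field[(bi // kappa, bj // kappa, bk // kappa)] = (mean, math.sqrt(var))
    return field
```

Regions with similar (E, SD) pairs then cluster together in the E-SD plane, which is what the paper's segmentation exploits.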
4.
A level-set approach for the metamorphosis of solid models
Breen D.E., Whitaker R.T. IEEE Transactions on Visualization and Computer Graphics, 2001, 7(2): 173-192
We present a new approach to 3D shape metamorphosis. We express the interpolation of two shapes as a process where one shape deforms to maximize its similarity with another shape. The process incrementally optimizes an objective function while deforming an implicit surface model. We represent the deformable surface as a level set (iso-surface) of a densely sampled scalar function of three dimensions. Such level-set models have been shown to mimic conventional parametric deformable surface models by encoding surface movements as changes in the grayscale values of a volume data set. Thus, a well-founded mathematical structure leads to a set of procedures that describes how voxel values can be manipulated to create deformations that are represented as a sequence of volumes. The result is a 3D morphing method that offers several advantages over previous methods, including minimal need for user input, no model parameterization, flexible topology, and subvoxel accuracy.
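The core idea, a deforming implicit surface represented as a level set, can be illustrated by linearly blending two signed distance fields; this toy blend stands in for the paper's objective-function-driven evolution:

```python
import math

def sdf_sphere(p, r=1.0):
    # Signed distance to a sphere of radius r centred at the origin.
    return math.sqrt(sum(c * c for c in p)) - r

def sdf_box(p, half=1.0):
    # Chebyshev-norm box: a common axis-aligned cube SDF sketch.
    return max(abs(c) for c in p) - half

def morph(p, t):
    # Linear blend of two SDFs; the zero level set deforms as t goes from 0 to 1.
    return (1.0 - t) * sdf_sphere(p) + t * sdf_box(p)
```

Extracting the zero iso-surface of `morph(·, t)` for increasing t yields the sequence of in-between shapes.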
5.
Faten Chaieb, Tarek Ben Said, Sabra Mabrouk, Faouzi Ghorbel. Journal of Real-Time Image Processing, 2017, 13(1): 121-133
Segmentation and volume measurement of liver tumors are important tasks for surgical planning and cancer follow-up. In this work, a segmentation method using four-phase computed tomography images is proposed. It is based on the combination of the Expectation-Maximization algorithm and Hidden Markov Random Fields. The latter considers the spatial information given by the voxel neighbors of two contrast phases. The segmentation algorithm is applied to a volume of interest, which decreases the number of processed voxels. To accelerate the classification steps within the segmentation process, a bootstrap resampling scheme is also adopted. It consists of randomly selecting an optimal representative set of voxels. The experimental results, carried out on three clinical datasets, show the performance of our liver tumor segmentation method. Notably, the computing time of the classification algorithm is reduced without any significant impact on segmentation accuracy.
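The bootstrap resampling step might look like the following; the sampling fraction and the seed are illustrative choices, not the paper's values:

```python
import random

def bootstrap_sample(voxels, fraction=0.3, seed=42):
    # Draw a random sample with replacement to speed up classification (a sketch).
    rng = random.Random(seed)
    n = max(1, int(fraction * len(voxels)))
    return [rng.choice(voxels) for _ in range(n)]
```

Classifier parameters estimated on the subsample are then applied to the full volume of interest.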
6.
7.
In volume data visualization, the classification step is used to determine voxel visibility and is usually carried out through the interactive editing of a transfer function that defines a mapping between voxel value and color/opacity. This approach is limited by the difficulties in working effectively in the transfer function space beyond two dimensions. We present a new approach to the volume classification problem which couples machine learning and a painting metaphor to allow more sophisticated classification in an intuitive manner. The user works in the volume data space by directly painting on sample slices of the volume and the painted voxels are used in an iterative training process. The trained system can then classify the entire volume. Both classification and rendering can be hardware accelerated, providing immediate visual feedback as painting progresses. Such an intelligent system approach enables the user to perform classification in a much higher dimensional space without explicitly specifying the mapping for every dimension used. Furthermore, the trained system for one data set may be reused to classify other data sets with similar characteristics.
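The painting-based training idea can be approximated with a nearest-neighbour classifier over painted voxel features; k-NN here is only a stand-in for whatever learner the actual system uses, and the feature/label format is hypothetical:

```python
def knn_classify(painted, feature, k=3):
    # painted: list of (feature_tuple, label) pairs gathered from user brush strokes.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(painted, key=lambda s: dist(s[0], feature))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)  # majority label among the k nearest strokes
```

Every unpainted voxel is then classified by its feature vector, so the user never edits a transfer function directly.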
8.
9.
Segmentation of Magnetic Resonance Imaging (MRI) brain image data has a significant impact on computer-guided medical image diagnosis and analysis. However, due to limitations of image acquisition devices and other related factors, MRI images are severely affected by noise and inhomogeneity artefacts, which lead to blurry edges at the intersection of intra-organ soft tissue regions, making the segmentation process more difficult and challenging. This paper presents a novel two-stage fuzzy multi-objective framework (2sFMoF) for segmenting 3D MRI brain image data. In the first stage, a 3D spatial fuzzy c-means (3DSpFCM) algorithm is introduced by incorporating the 3D spatial neighbourhood information of the volume data to define a new local membership function along with the global membership function for each voxel. In particular, the membership functions define the underlying relationship between the voxels of a close cubic neighbourhood and the image data in 3D image space. The cluster prototypes thus obtained are fed into a 3D modified fuzzy c-means (3DMFCM) algorithm, which further incorporates local voxel information to generate the final prototypes. The proposed framework addresses the shortcomings of the traditional FCM algorithm, which is highly sensitive to noise and may get stuck in a local minimum. The method is validated on a synthetic image volume and on several simulated and in-vivo 3D MRI brain image volumes, and is found to be effective even on noisy data. The empirical results show the superiority of the proposed method over other FCM-based algorithms and related methods devised in the recent past.
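The two ingredients, standard FCM memberships plus a local spatial mixing term, can be sketched in 1D (the paper works on 3D cubic neighbourhoods, and the mixing weight `alpha` is an illustrative choice):

```python
def fcm_memberships(values, centers, m=2.0):
    # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
    u = []
    for x in values:
        d = [abs(x - c) + 1e-12 for c in centers]
        row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(len(centers)))
               for i in range(len(centers))]
        u.append(row)
    return u

def spatial_smooth(u, alpha=0.5):
    # Mix each voxel's membership with its neighbours' mean, echoing the local term in 3DSpFCM.
    out = []
    for i, row in enumerate(u):
        nbrs = [u[j] for j in (i - 1, i + 1) if 0 <= j < len(u)]
        mean = [sum(n[c] for n in nbrs) / len(nbrs) for c in range(len(row))]
        out.append([(1 - alpha) * row[c] + alpha * mean[c] for c in range(len(row))])
    return out
```

Because the smoothing is a convex combination, memberships still sum to one per voxel, which is why it damps isolated noisy labels without breaking the fuzzy partition.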
10.
Ebert D.S., Morris C.J., Rheingans P., Yoo T.S. IEEE Transactions on Visualization and Computer Graphics, 2002, 8(2): 183-197
Photographic volumes present a unique, interesting challenge for volume rendering. In photographic volumes, the voxel color is predetermined, making color selection through transfer functions unnecessary. However, photographic data does not contain a clear mapping from the multi-valued color values to a scalar density or opacity, making projection and compositing much more difficult than with traditional volumes. Moreover, because of the nonlinear nature of color spaces, there is no meaningful norm for the multi-valued voxels. Thus, the individual color channels of photographic data must be treated as incomparable data tuples rather than as vector values. Traditional differential geometric tools, such as intensity gradients, density and Laplacians, are distorted by the nonlinear, non-orthonormal color spaces that are the domain of the voxel values. We have developed different techniques for managing these issues while directly rendering volumes from photographic data. We present and justify the normalization of color values by mapping RGB values to the CIE L*u*v* color space. We explore and compare different opacity transfer functions that map three-channel color values to opacity. We apply these many-to-one mappings to the original RGB values as well as to the voxels after conversion to L*u*v* space. Direct rendering using transfer functions allows us to explore photographic volumes without having to commit to an a priori segmentation that might mask fine variations of interest. We empirically compare the combined effects of each of the two color spaces with our opacity transfer functions using source data from the Visible Human project.
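The normalisation step, mapping sRGB voxel colors to CIE L*u*v*, can be sketched with the standard conversion formulas under a D65 white point; the exact matrix and white point the authors used are assumptions here:

```python
def srgb_to_luv(r, g, b):
    # Convert an sRGB triple in [0, 1] to CIE L*u*v* (D65 white).
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4 for c in (r, g, b)]
    X = 0.4124 * lin[0] + 0.3576 * lin[1] + 0.1805 * lin[2]
    Y = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]
    Z = 0.0193 * lin[0] + 0.1192 * lin[1] + 0.9505 * lin[2]
    Xn, Yn, Zn = 0.95047, 1.0, 1.08883  # D65 reference white
    yr = Y / Yn
    L = 116.0 * yr ** (1.0 / 3.0) - 16.0 if yr > (6 / 29) ** 3 else (29 / 3) ** 3 * yr
    def uv(X, Y, Z):
        d = X + 15.0 * Y + 3.0 * Z
        return (4.0 * X / d, 9.0 * Y / d) if d else (0.0, 0.0)
    up, vp = uv(X, Y, Z)
    un, vn = uv(Xn, Yn, Zn)
    return L, 13.0 * L * (up - un), 13.0 * L * (vp - vn)
```

An opacity transfer function can then operate on L*, where perceptual distances are more uniform than in raw RGB.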
11.
Finding a correct a priori back-to-front (BTF) visibility ordering for the perspective projection of the voxels of a rectangular volume poses interesting problems. The BTF ordering presented by Frieder et al. [6] and the permuted BTF presented by Westover [14] are correct for parallel projection but not for perspective projection [12]. Swan presented a constructive proof for the correctness of the perspective BTF (PBTF) ordering [12]. This was a significant improvement on the existing orderings. However, his proof assumes that voxel projections are not larger than a pixel, i.e. that voxel projections do not overlap in screen space. Very often the voxel projections do overlap, e.g. with splatting algorithms. In these cases, the PBTF ordering results in highly visible and characteristic rendering artefacts.
In this paper we analyse the PBTF and show why it yields these rendering artefacts. We then present an improved visibility ordering that remedies the artefacts. Our new ordering is as good as the PBTF, but it is also valid for cases where voxel projections are larger than a single pixel, i.e. when voxel projections overlap in screen space. We demonstrate why and how our ordering works at both the fundamental and implementation levels.
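For contrast with the perspective case the paper addresses, the simple parallel-projection BTF idea, visiting voxels farthest-first along the view direction, can be sketched as follows (this is the baseline ordering, not the paper's improved one):

```python
def btf_order(dims, view_dir):
    # Back-to-front voxel ordering for PARALLEL projection:
    # sort voxel indices by depth along the view direction, farthest first.
    nx, ny, nz = dims
    voxels = [(i, j, k) for i in range(nx) for j in range(ny) for k in range(nz)]
    depth = lambda v: sum(c * d for c, d in zip(v, view_dir))
    return sorted(voxels, key=depth, reverse=True)
```

Compositing splats in this order guarantees that nearer voxels are drawn over farther ones; the paper's contribution is an ordering that keeps this guarantee under perspective even when splats overlap.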
12.
Computer Vision and Image Understanding, 2002, 85(1): 54-69
In this paper, we present a fuzzy Markovian method for brain tissue segmentation from magnetic resonance images. Generally, there are three main brain tissues in a brain dataset: gray matter, white matter, and cerebrospinal fluid. However, due to the limited resolution of the acquisition system, many voxels may be composed of multiple tissue types (partial volume effects). The proposed method aims at calculating a fuzzy membership in each voxel to indicate the partial volume degree, which is statistically modeled. Since our method is unsupervised, it first estimates the parameters of the fuzzy Markovian random field model using a stochastic gradient algorithm. The fuzzy Markovian segmentation is then performed automatically. The accuracy of the proposed method is quantitatively assessed on a digital phantom using an absolute average error and qualitatively tested on real MRI brain data. A comparison with the widely used fuzzy C-means algorithm is carried out to show numerous advantages of our method.
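The partial-volume notion can be illustrated with a two-tissue linear mixing model; the paper estimates such fractions within a fuzzy Markovian framework rather than with this closed form:

```python
def partial_volume_fraction(x, mean_a, mean_b):
    # Fraction of tissue A in a voxel whose intensity x is a linear mix of the
    # two pure-tissue mean intensities (clipped to [0, 1]). A sketch only.
    t = (x - mean_b) / (mean_a - mean_b)
    return min(1.0, max(0.0, t))
```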
13.
In recent years, there has been much interest in incorporating semantics into simultaneous localization and mapping (SLAM) systems. This paper presents an approach to generating an outdoor, large-scale 3D dense semantic map based on binocular stereo vision. The inputs to the system are stereo color images from a moving vehicle. First, the dense 3D space around the vehicle is constructed, and the motion of the camera is estimated by visual odometry. Meanwhile, semantic segmentation is performed online through deep learning, and the semantic labels are also used to verify the feature matching in visual odometry. These three processes calculate the motion, depth and semantic label of every pixel in the input views. Then, a voxel conditional random field (CRF) inference is introduced to fuse semantic labels into voxels. After that, we present a method to remove moving objects by incorporating the semantic labels, which improves the motion segmentation accuracy. Finally, a dense 3D semantic map of an urban environment is generated from an arbitrarily long image sequence. We evaluate our approach on the KITTI vision benchmark, and the results show that the proposed method is effective.
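The per-voxel label-fusion step can be approximated by a majority vote over the labels observed from different frames; the real system uses voxel CRF inference, so this is only a simplified stand-in:

```python
from collections import Counter

def fuse_voxel_labels(observations):
    # observations: dict mapping voxel index -> list of semantic labels seen across frames.
    # Majority vote per voxel (a simplification of the paper's voxel CRF inference).
    return {v: Counter(labels).most_common(1)[0][0] for v, labels in observations.items()}
```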
14.
15.
Yan Wang, Chen Zu, Guangliang Hu, Yong Luo, Zongqing Ma, Kun He, Xi Wu, Jiliu Zhou. Neural Processing Letters, 2018, 48(3): 1323-1334
Accurate tumor delineation in medical images is of great importance in guiding radiotherapy. In nasopharyngeal carcinoma (NPC), due to its high variability, low contrast and discontinuous boundaries in magnetic resonance images (MRI), the margin of the tumor is especially difficult to identify, making radiotherapy planning more challenging. The objective of this paper is to develop an automatic segmentation method for NPC in MRI for radiosurgery applications. To this end, we propose to segment NPC using a deep convolutional neural network. Specifically, to obtain spatial consistency as well as accurate feature details for segmentation, multiple convolution kernel sizes are employed. The network contains a large number of trainable parameters which capture the relationship between MRI intensity images and the corresponding label maps. When trained on subjects with pre-labeled MRI, the network can estimate the label class of each voxel for a testing subject given only the intensity image. To demonstrate the segmentation performance, we apply our method to the T1-weighted images of 15 NPC patients and compare the segmentation results against the radiologist's reference outline. Experimental results show that the proposed method outperforms traditional segmentation methods based on hand-crafted features. The method presented in this paper could be useful for NPC diagnosis and helpful for guiding radiotherapy.
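The multiple-kernel-size idea can be shown in miniature with 1D averaging filters of two widths; a real implementation would of course use trained convolution kernels in a deep network:

```python
def conv1d(signal, kernel):
    # Same-length 1D convolution with clamped (edge-replicated) borders.
    r = len(kernel) // 2
    return [sum(kernel[j] * signal[min(max(i + j - r, 0), len(signal) - 1)]
                for j in range(len(kernel)))
            for i in range(len(signal))]

def multi_kernel_features(signal):
    # Concatenate responses of two kernel sizes: a small one for fine detail,
    # a large one for broader spatial context (a toy analogue of the paper's design).
    small = conv1d(signal, [1 / 3] * 3)
    large = conv1d(signal, [1 / 5] * 5)
    return list(zip(small, large))
```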
16.
Objective: Hippocampal subfields are extremely small and structurally complex, and existing multi-atlas segmentation methods struggle to achieve ideal results; we therefore propose a hippocampal subfield segmentation method based on dictionary learning and sparse representation. Method: The method builds a sparse representation and dictionary learning model for each voxel in the target image to obtain its label. The dictionary learning model is constructed from image patches in the atlas intensity images. We propose using the local binary pattern (LBP) features of the atlas label images to enhance the discriminative power of the training dictionary; the sparse representation of each target image patch over the trained dictionary then determines the voxel's label; finally, mislabeled voxels in the segmentation result are corrected using prior knowledge from the atlases. Results: Qualitative and quantitative comparisons show that the method outperforms typical existing multi-atlas segmentation methods, reaching an average segmentation accuracy of 0.890 on the larger hippocampal subfields. Conclusion: The proposed method is suitable for accurately segmenting hippocampal subfields in brain magnetic resonance images, is robust, and can provide a reliable basis for the diagnosis of neurodegenerative diseases.
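The LBP feature used to make the training dictionary more discriminative can be sketched for a single 3×3 patch; the clockwise neighbour ordering below is one common convention, not necessarily the paper's:

```python
def lbp_code(patch):
    # 8-bit local binary pattern of a 3x3 patch: each neighbour >= centre sets one bit.
    c = patch[1][1]
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]  # clockwise from top-left
    return sum((1 << i) for i, v in enumerate(ring) if v >= c)
```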
17.
In many three-dimensional imaging applications, the three-dimensional space is represented by an array of cubical volume elements (voxels) and a subset of the voxels is specified by some property. Objects in the scene are then recognised by being components of the specified set and individual boundaries are recognised as sets of voxel faces separating objects from components in the complement of the specified set. This paper deals with the problem of algorithmic tracking of such a boundary specified by one of the voxel faces lying in it. The paper is expository in that all ideas are carefully motivated and introduced. Its original contribution is the investigation of the question of whether the use of a queue (of loose ends in the tracking process which are to be picked up again to complete the tracking) is necessary for an algorithmic tracker of boundaries in three-dimensional space. Such a queue is not needed for two-dimensional boundary tracking, but published three-dimensional boundary trackers all make use of such a thing. We prove that this is not accidental: under some mild assumptions, a boundary tracker without a queue will fail its task on some three-dimensional boundaries.
18.
19.
Lintao Zheng, Chenyang Zhu, Jiazhao Zhang, Hang Zhao, Hui Huang, Matthias Niessner, Kai Xu. Computer Graphics Forum, 2019, 38(7): 103-114
We propose a novel approach to robot-operated active understanding of unknown indoor scenes, based on online RGBD reconstruction with semantic segmentation. In our method, the exploratory robot scanning is both driven by and targeted at the recognition and segmentation of semantic objects from the scene. Our algorithm is built on top of a volumetric depth fusion framework and performs real-time voxel-based semantic labeling over the online reconstructed volume. The robot is guided by an online estimated discrete viewing score field (VSF) parameterized over the 3D space of 2D location and azimuth rotation. VSF stores for each grid cell the score of the corresponding view, which measures how much it reduces the uncertainty (entropy) of both geometric reconstruction and semantic labeling. Based on the VSF, we select the next best view (NBV) as the target for each time step. We then jointly optimize the traverse path and camera trajectory between two adjacent NBVs by maximizing the integral viewing score (information gain) along the path and trajectory. Through extensive evaluation, we show that our method achieves efficient and accurate online scene parsing during exploratory scanning.
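The uncertainty part of the viewing score can be sketched as the summed label entropy over the voxels a candidate view covers; the actual VSF also accounts for geometric reconstruction uncertainty:

```python
import math

def label_entropy(probs):
    # Shannon entropy of a per-voxel semantic label distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def viewing_score(voxel_label_dists):
    # Score a candidate view by the total label uncertainty of the voxels it sees;
    # a view covering more uncertain voxels can reduce more entropy (a VSF sketch).
    return sum(label_entropy(d) for d in voxel_label_dists)
```

Selecting the next best view then amounts to taking the argmax of this score over the discretised (location, azimuth) grid.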
20.
To address the fact that mesh-model segmentation results in the mechanical manufacturing domain currently lack engineering meaning, a primitive-feature segmentation method for triangle mesh models is proposed. First, building on a segmentation of the triangle mesh model, the surface type of each sub-mesh obtained from the segmentation is identified; then, based on salient-feature representations of basic primitives and typical structures, the identified surface sets are matched against those basic primitives and typical structures, classifying the segmentation results into free-form surfaces, basic primitives, and complex primitives, and thereby achieving a primitive-feature segmentation with engineering meaning. The method can reduce the difficulty of model reconstruction and speed it up.