1.
Consider a binary image containing one or more objects. A signed distance transform assigns to each pixel (voxel, etc.), both inside and outside of any objects, the minimum distance from that pixel to the nearest pixel on the border of an object. By convention, the sign of the assigned distance value indicates whether the point is within some object (positive) or outside of all objects (negative). Over the years, many different algorithms have been proposed to calculate the distance transform of an image. These algorithms often trade accuracy for efficiency, exhibit varying degrees of conceptual complexity, and some require parallel processors. One algorithm in particular, the Chamfer distance [J. ACM 15 (1968) 600, Comput. Vis. Graph. Image Process. 34 (1986) 344], has been analyzed for accuracy, is relatively efficient, requires no special computing hardware, and is conceptually straightforward. It is, therefore, understandably quite popular and widely used. We present a straightforward modification to the Chamfer distance transform algorithm that allows it to produce more accurate results without increasing the window size. We call this new algorithm Dead Reckoning, as it is loosely based on the concept of continual measurement and course correction employed by ocean-going vessel navigation in the past. We compare Dead Reckoning with a wide variety of other distance transform algorithms based on the Chamfer distance algorithm for both accuracy and speed, and demonstrate that Dead Reckoning produces more accurate results with comparable efficiency.
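As background for the kind of algorithm being modified, the following is a minimal Python sketch of the classical two-pass Chamfer distance transform with the common 3-4 integer weights, not of Dead Reckoning itself; it computes an unsigned approximation of the distance from each background pixel to the nearest object pixel.

```python
import numpy as np

def chamfer_distance(binary, a=3, b=4):
    """Two-pass 3-4 chamfer distance transform on a 2D binary image.

    Returns an approximate distance (in pixel units) from every pixel to the
    nearest object (nonzero) pixel. Plain Python loops are used for clarity,
    not speed.
    """
    INF = 10**9
    d = np.where(binary, 0, INF).astype(np.int64)
    rows, cols = d.shape

    # Forward pass: propagate distances from the top-left neighbors.
    for i in range(rows):
        for j in range(cols):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + a)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j - 1] + b)
                if j < cols - 1:
                    d[i, j] = min(d[i, j], d[i - 1, j + 1] + b)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + a)

    # Backward pass: propagate distances from the bottom-right neighbors.
    for i in range(rows - 1, -1, -1):
        for j in range(cols - 1, -1, -1):
            if i < rows - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + a)
                if j < cols - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j + 1] + b)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i + 1, j - 1] + b)
            if j < cols - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + a)

    return d / a  # divide by the unit weight to get approximate pixel distances
```

A signed variant can be obtained by running such a pass for both the object and the background and combining the results with opposite signs, as described above.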
2.
An objective comparison of 3-D image interpolation methods
3.
Shape-based interpolation of multidimensional grey-level images
Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance-transformed data set is then interpolated using linear or higher-order interpolation and then thresholded at a distance value of zero to produce the interpolated binary data set. Here, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n+1)-dimensional [(n+1)-D] space. The binary shape-based method is then applied to this image to create an (n+1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation.
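A minimal sketch of the binary shape-based step described above, using scipy's Euclidean distance transform. The lifting and collapsing of grey-level data are not shown, and the pixel-based signed-distance convention used here is only one common choice, not necessarily the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(binary):
    """Signed distance map: positive inside the object, negative outside."""
    binary = binary.astype(bool)
    return edt(binary) - edt(np.logical_not(binary))

def shape_based_interpolate(slice_a, slice_b, t):
    """Interpolate a binary slice at fraction t (0 <= t <= 1) between two slices.

    Linearly interpolate the signed distance maps of the two slices, then
    threshold at zero to recover the interpolated binary slice.
    """
    d = (1.0 - t) * signed_distance(slice_a) + t * signed_distance(slice_b)
    return d >= 0
```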
4.
In 2003, Maurer et al. (IEEE Trans. Pattern Anal. Mach. Intell. 25:265–270, 2003) published a paper describing an algorithm that computes the exact distance transform in linear time (with respect to image size) for rectangular binary images in the k-dimensional space ℝ^k, with distances measured with respect to the L_p metric for 1 ≤ p ≤ ∞, which includes the Euclidean distance L_2. In this paper we discuss this algorithm from theoretical and practical points of view. On the practical side, we concentrate on its Euclidean distance version, discuss the possible ways of implementing it as a signed distance transform, and experimentally compare the implemented algorithms. We also describe the parallelization of these algorithms and discuss the computational time savings associated with them. All these implementations will be made available as a part of the CAVASS software system developed and maintained in our group (Grevera et al. in J. Digit. Imaging 20:101–118, 2007). On the theoretical side, we prove that our version of the signed distance transform algorithm, GBDT, returns the exact value of the distance from the geometrically defined object boundary. We provide a complete proof (which was not given in Maurer et al. (IEEE Trans. Pattern Anal. Mach. Intell. 25:265–270, 2003)) that all these algorithms work correctly for the L_p metric with 1 < p < ∞. We also point out that the precise form of the algorithm from Maurer et al. (IEEE Trans. Pattern Anal. Mach. Intell. 25:265–270, 2003) is not well defined for the L_1 and L_∞ metrics. In addition, we show that the algorithm can be used to find, in linear time, the exact value of the diameter of an object, that is, the largest possible distance between any two of its elements.
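As a small illustration of the diameter computation mentioned at the end, the following brute-force Python check (quadratic in the number of object pixels, unlike the linear-time approach described above) can serve as a reference when validating a faster implementation; it is a hedged sketch, not the algorithm from the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def brute_force_diameter(binary):
    """Largest Euclidean distance between any two object pixels.

    O(n^2) in the number of object pixels, so only practical for small objects;
    a linear-time method such as the one described above should agree with it.
    """
    pts = np.argwhere(binary)
    if len(pts) < 2:
        return 0.0
    return float(cdist(pts, pts).max())
```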
5.
Image interpolation is an important operation that is widely used in medical imaging, image processing, and computer graphics. A variety of interpolation methods are available in the literature; however, their systematic evaluation is lacking. In a previous paper, we presented a framework for the task-independent comparison of interpolation methods based on certain image-derived figures of merit, using a variety of medical image data pertaining to different parts of the human body taken from different modalities. In this work, we present an objective, task-specific framework for evaluating interpolation techniques. The task considered is how the interpolation methods influence the accuracy of quantification of the total volume of lesions in the brain of multiple sclerosis (MS) patients. Sixty lesion-detection experiments, derived from ten patient studies, two subsampling techniques plus the original data, and three interpolation methods, are carried out, along with a statistical analysis of the results.
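A hypothetical sketch of the kind of task-specific experiment described above: subsample a volume along the slice direction, interpolate it back to the original grid, and measure the change in estimated lesion volume. The thresholding stands in for a real lesion-segmentation step, and scipy's `zoom` is used in place of the specific interpolation methods compared in the paper.

```python
import numpy as np
from scipy.ndimage import zoom

def lesion_volume_error(volume, lesion_threshold, slice_step=2, order=1):
    """Relative lesion-volume error caused by subsampling plus interpolation.

    `lesion_threshold` is an assumed stand-in for an actual lesion segmentation;
    `order=1` requests tri-linear and `order=3` cubic-spline interpolation.
    """
    subsampled = volume[::slice_step]                          # drop slices
    restored = zoom(subsampled, (float(slice_step), 1.0, 1.0), order=order)
    restored = restored[:volume.shape[0]]                      # match original extent
    v_true = np.count_nonzero(volume >= lesion_threshold)
    v_est = np.count_nonzero(restored >= lesion_threshold)
    return (v_est - v_true) / max(v_true, 1)
```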
6.
The purpose of this work is to compare the speed of isosurface rendering in software with that using dedicated hardware. Input data consist of 10 different objects from various parts of the body and various modalities (CT, MR, and MRA) with a variety of surface sizes (up to 1 million voxels/2 million triangles) and shapes. The software rendering technique consists of a particular method of voxel-based surface rendering, called shell rendering. The hardware method is OpenGL-based and uses the surfaces constructed from our implementation of the Marching Cubes algorithm. The hardware environment consists of a variety of platforms, including a Sun Ultra I with a Creator3D graphics card and a Silicon Graphics Reality Engine II, both with polygon rendering hardware, and a 300 MHz Pentium PC. The results indicate that the software method (shell rendering) was 18 to 31 times faster than any of the hardware rendering methods. This work demonstrates that a software implementation of a particular rendering algorithm (shell rendering) can outperform dedicated hardware. We conclude that, for medical surface visualization, expensive dedicated hardware engines are not required. More importantly, available software algorithms (shell rendering) on a 300 MHz Pentium PC outperform the speed of rendering via hardware engines by a factor of 18 to 31.
7.
To aid in the display, manipulation, and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through the process of interpolation. Traditional techniques consist of direct interpolation of the grey values. When user interaction is called for in image segmentation, as a consequence of these interpolation methods, the user needs to segment a much greater (typically 4-10x) amount of data. To mitigate this problem, a method called shape-based interpolation of binary data was developed [2]. Besides significantly reducing user time, this method has been shown to provide more accurate results than grey-level interpolation. We proposed an approach for the interpolation of grey data of arbitrary dimensionality that generalized the shape-based method from binary to grey data. This method has characteristics similar to those of the binary shape-based method. In particular, we showed preliminary evidence that it produced more accurate results than conventional grey-level interpolation methods. In this paper, concentrating on the three-dimensional (3-D) interpolation problem, we compare statistically the accuracy of eight different methods: nearest-neighbor, linear grey-level, grey-level cubic spline, grey-level modified cubic spline, Goshtasby et al., and three methods from the grey-level shape-based class. A population of patient magnetic resonance and computed tomography images, corresponding to different parts of the human anatomy and coming from different three-dimensional imaging applications, is utilized for comparison. Each slice in these data sets is estimated by each interpolation method and compared to the original slice at the same location using three measures: mean-squared difference, number of sites of disagreement, and largest difference. The methods are statistically compared pairwise based on these measures. The shape-based methods statistically significantly outperformed all other methods in all measures in all applications considered here, with a statistical relevance ranging from 10% to 32% (mean = 15%) for mean-squared difference.
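A brief sketch of the three figures of merit named above, computed between an interpolated slice and the original slice at the same location. The tolerance used to count a "site of disagreement" is an assumption; the paper's exact definition may differ.

```python
import numpy as np

def slice_comparison_measures(original, estimated, disagreement_tol=0.0):
    """Mean-squared difference, number of sites of disagreement, largest difference."""
    diff = estimated.astype(np.float64) - original.astype(np.float64)
    msd = float(np.mean(diff ** 2))                              # mean-squared difference
    sites = int(np.count_nonzero(np.abs(diff) > disagreement_tol))  # sites of disagreement
    largest = float(np.max(np.abs(diff)))                        # largest difference
    return msd, sites, largest
```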