Similar Documents
A total of 20 similar documents were found.
1.
A solution to the “next best view” (NBV) problem for automated surface acquisition is presented. The NBV problem is to determine which areas of a scanner's viewing volume need to be scanned to sample all of the visible surfaces of an a priori unknown object and where to position/control the scanner to sample them. A method for determining the unscanned areas of the viewing volume is presented. In addition, a novel representation, positional space, is presented which facilitates a solution to the NBV problem by representing what must be and what can be scanned in a single data structure. The number of costly computations needed to determine whether an area of the viewing volume would be occluded from some scanning position is decoupled from the number of positions considered for the NBV, thus reducing the computational cost of choosing one. An automated surface acquisition system designed to scan all visible surfaces of an a priori unknown object is demonstrated on real objects.
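The abstract does not spell out the positional-space algorithm, so the following is only a minimal sketch of the generic NBV idea it builds on: score candidate scanner poses by how many still-unscanned voxels fall inside a simple view cone, then pick the best-scoring pose. The names, the conical field-of-view test, and the omission of occlusion handling are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Minimal next-best-view sketch (illustrative; not the paper's positional-space method).
# Candidate poses are (origin, unit direction) pairs; occlusion is ignored for brevity.

def count_visible_unseen(voxel_centers, unseen_mask, pose, fov_cos, max_range):
    """Count unscanned voxels inside a simple conical view frustum of the pose."""
    origin, direction = pose
    offsets = voxel_centers - origin
    dists = np.linalg.norm(offsets, axis=1)
    in_range = (dists > 1e-6) & (dists < max_range)
    cos_angle = offsets @ direction / np.maximum(dists, 1e-6)
    in_cone = cos_angle > fov_cos
    return int(np.sum(unseen_mask & in_range & in_cone))

def next_best_view(voxel_centers, unseen_mask, candidate_poses, fov_cos=0.8, max_range=1.5):
    """Return the candidate pose that would expose the most unscanned voxels."""
    scores = [count_visible_unseen(voxel_centers, unseen_mask, p, fov_cos, max_range)
              for p in candidate_poses]
    best = int(np.argmax(scores))
    return candidate_poses[best], scores[best]
```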

2.
A novel strategy is presented for determining the next-best view for a robot arm equipped with a depth camera in an eye-in-hand configuration, aimed at the autonomous exploration of unknown objects. Instead of maximizing the total size of the expected unknown volume that becomes visible, the next-best view is chosen to observe the border of incomplete objects. Salient regions of space that belong to the objects are detected, without any prior knowledge, by applying a point cloud segmentation algorithm. The system uses a Kinect V2 sensor, which had not been considered in previous work on next-best-view planning, and it exploits KinectFusion to maintain a volumetric representation of the environment. A low-level procedure to reduce the number of invalid Kinect V2 points is also presented. The viability of the approach has been demonstrated in a real setup where the robot is fully autonomous. Experiments indicate that the proposed method enables the robot to actively explore the objects faster than a standard next-best-view algorithm.

3.
Visibility constraints can aid the segmentation of foreground objects in a scene observed with multiple range imagers. Points may be labeled as foreground if they can be determined to occlude some space in the scene that we expect to be empty. Visibility constraints from a second range view provide evidence of such occlusions. We present an efficient algorithm to estimate foreground points in each range view using explicit epipolar search. In cases where the background pattern is stationary, we show how visibility constraints from other views can generate virtual background values at points with no valid depth in the primary view. We demonstrate the performance of both algorithms for detecting people in indoor office environments with dynamic illumination variation.
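As a hedged illustration of the visibility constraint described above (the paper's epipolar-search machinery is omitted), the sketch below marks a 3D point from the primary view as foreground when a second range camera measures a surface noticeably farther away along the same line of sight, i.e., the point occludes space that the second view shows to be empty. The pinhole model, the margin parameter, and all names are assumptions made for illustration.

```python
import numpy as np

# Simplified visibility-based foreground test for one reconstructed point.

def is_foreground(point_world, cam_b_pose, cam_b_K, depth_b, margin=0.05):
    """point_world: 3-vector; cam_b_pose: (R, t) world->camera; depth_b: HxW depth map."""
    R, t = cam_b_pose
    p_cam = R @ point_world + t                  # point in camera-B coordinates
    if p_cam[2] <= 0:
        return False                             # behind camera B: no evidence
    uv = cam_b_K @ (p_cam / p_cam[2])            # pinhole projection
    u, v = int(round(uv[0])), int(round(uv[1]))
    h, w = depth_b.shape
    if not (0 <= u < w and 0 <= v < h):
        return False                             # projects outside image: no evidence
    measured = depth_b[v, u]
    if not np.isfinite(measured) or measured <= 0:
        return False                             # invalid depth: no evidence
    # Camera B sees a surface noticeably farther than the point, so the point
    # occludes space that B observes to be empty: label it foreground.
    return p_cam[2] < measured - margin
```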

4.
A method for displaying implicit interfaces in volume rendering and its implementation
彭延军, 石教英. 《软件学报》 (Journal of Software), 2002, 13(9): 1887-1892
Under a conventional volume illumination model, using direct volume rendering to display the implicit interfaces inside an object (the interfaces between different internal media) requires changing the transfer function to determine the color and opacity of each voxel. Although the internal structure of the object can then be seen, under such a model it is impossible to look through the object's surface and see the implicit interfaces inside it clearly. This is partly because the particles in a conventional volume illumination model have no selective transmissivity, i.e., they cannot transmit visible light within one range of wavelengths while absorbing visible light of other wavelengths, but can only absorb light of all wavelengths equally; it is also because a conventional volume illumination model lacks a surface-information component. The proposed algorithm uses a volume illumination model with selective transmissivity, to which a surface-scattering term independent of the viewing direction and the light-source position is added, and it further employs non-photorealistic rendering techniques to enhance the display of the implicit interfaces. Under this illumination model, the detailed structure of the implicit interfaces can be displayed clearly.

5.
Many 3D scenes (e.g. generated from CAD data) are composed of a multitude of objects that are nested in each other. A showroom, for instance, may contain multiple cars, and every car has a gearbox with many gearwheels located inside. Because the objects occlude each other, only a few are visible from the outside. We present a new technique, Spherical Visibility Sampling (SVS), for real‐time 3D rendering of such – possibly highly complex – scenes. SVS exploits the occlusion and annotates hierarchically structured objects with directional visibility information in a preprocessing step. For different directions, the directional visibility encodes which objects of a scene's region are visible from outside the region's enclosing bounding sphere. Since there is no need to store a separate view-space subdivision as in most techniques based on preprocessed visibility, a small memory footprint is achieved. Using the directional visibility information for an interactive walkthrough, the potentially visible objects can be retrieved very efficiently without the need for further visibility tests. Our evaluation shows that SVS allows complex 3D scenes to be preprocessed quickly and visualized in real time (e.g. a Power Plant model and five animated Boeing 777 models with billions of triangles). Because SVS does not require hardware support for occlusion culling during rendering, it is even applicable to rendering large scenes on mobile devices.
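A minimal sketch of the runtime side of directional visibility, under the assumption that a preprocessing pass has already filled, for each region, a table from quantized view directions to the object ids visible from outside its bounding sphere; the binning scheme and all names below are illustrative and do not reflect the SVS data layout.

```python
# Illustrative directional-visibility lookup (preprocessing omitted).

def direction_bin(view_dir, bins_per_axis=8):
    """Quantize a unit view direction into a coarse bin index (toy binning only)."""
    return tuple(int((c + 1.0) * 0.5 * (bins_per_axis - 1) + 0.5) for c in view_dir)

def potentially_visible(region_table, view_dir):
    """region_table: dict mapping direction bins -> list of object ids visible from outside."""
    return region_table.get(direction_bin(view_dir), [])
```

At walkthrough time the potentially visible set thus becomes a table lookup per region instead of a per-frame visibility test, which is the property the abstract attributes to SVS.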

6.
Identifying a three-dimensional (3D) object in an image is traditionally dealt with by referring to a 3D model of the object. In the last few years there has been growing interest in using not a 3D shape but multiple views of the object as the reference. This paper takes a further step in that direction, using not multiple views but a single clean view as the reference model. The key issue is how to establish correspondences from the model view, where the boundary of the object is explicitly available, to the scene view, where the object can be surrounded by various distracting entities and its boundary disturbed by noise. We propose a solution to the problem, based upon a mechanism for predicting correspondences from just four particular initial point correspondences. The object is required to be polyhedral or near-polyhedral. The correspondence mechanism has a computational complexity that is linear in the total number of visible corners of the object in the model view. The limitations of the mechanism are also analyzed thoroughly in this paper. Experimental results on real images are presented to illustrate the performance of the proposed solution.
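The paper's prediction mechanism is not spelled out in the abstract; one classical way to predict further 2D correspondences from exactly four point matches is a plane projective transform (homography), sketched below with the standard DLT. Treat it as an analogy under that assumption rather than as the authors' algorithm.

```python
import numpy as np

def homography_from_4(points_src, points_dst):
    """Estimate the 3x3 plane projective transform mapping four source points to four targets (DLT)."""
    A = []
    for (x, y), (u, v) in zip(points_src, points_dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def predict_correspondence(H, point):
    """Map a model-view point into the scene view with the estimated transform."""
    p = H @ np.array([point[0], point[1], 1.0])
    return p[:2] / p[2]
```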

7.
In this paper we propose new methods for calculating the projection of an opaque 3-dimensional 6-connected volumetric object into a two-dimensional view using 2- and 2.5-dimensional seed-filling in the view space. When a 3-dimensional volumetric data set is sampled by parallel rays at a resolution exceeding the Nyquist rate, the 6-connectivity of objects is maintained from the volume lattice to the view lattice. Given a seed point for each 6-connected object in the volumetric data set, a seed-filling algorithm may access all sample points in the view lattice while simultaneously composing the rendered view. The algorithms presented in this paper minimize the number of voxels that need to be processed. We implemented these methods on a general-purpose computer architecture and tested them with several artificial and real-life medical volumetric data sets. It is shown that the algorithms may be used to speed up the parallel ray casting of opaque medical objects. The actual frame rate achieved by the combined method allows interactive (10 frames/sec) rotation of the object on a common single-processor personal computer without specialized hardware.
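Below is a minimal sketch of the seed-filling idea for an opaque 6-connected object, assuming a binary-thresholded volume and a parallel projection along the z axis; the paper's view-space variants and voxel-skipping optimizations are not reproduced, and the names are illustrative.

```python
from collections import deque
import numpy as np

# Seed-fill an object and splat its voxels into a parallel projection along +z,
# keeping the nearest hit per pixel (only the visible surface survives).

def project_object(volume, seed, threshold=0.5):
    """volume: 3D array; seed: (x, y, z) inside the object. Returns a depth image."""
    nx, ny, nz = volume.shape
    depth = np.full((nx, ny), np.inf)
    visited = np.zeros_like(volume, dtype=bool)
    queue = deque([seed])
    visited[seed] = True
    while queue:
        x, y, z = queue.popleft()
        depth[x, y] = min(depth[x, y], z)        # nearest object voxel along this ray
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            u, v, w = x + dx, y + dy, z + dz
            if 0 <= u < nx and 0 <= v < ny and 0 <= w < nz \
               and not visited[u, v, w] and volume[u, v, w] > threshold:
                visited[u, v, w] = True          # 6-connected neighbour belongs to the object
                queue.append((u, v, w))
    return depth
```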

8.
This paper proposes a new preprocessing method for interactive rendering of complex polygonal virtual environments. The approach divides the space that the observer can reach into many rectangular viewpoint regions. For each region, an outer rectangular volume (ORV) is established to surround it. By adaptively partitioning the boundary of the ORV together with the viewpoint region, all the rays that originate from the viewpoint region are divided into beams whose number of potentially visible polygons is less than a preset threshold. If a resultant beam has reached the minimum size but still intersects many potentially visible polygons, the beam is simplified to a fixed number of rays and the averaged color of the hit polygons is recorded. For other beams, their potentially visible sets (PVS) of polygons are stored. During an interactive walkthrough, the visual information related to the current viewpoint is retrieved from storage. View-volume clipping, visibility culling, and detail simplification are efficiently supported by the stored data. The rendering time is independent of the scene complexity.

9.
This paper presents an efficient image-based approach to navigating a scene based on only three wide-baseline uncalibrated images, without the explicit use of a 3D model. After automatically recovering corresponding points between each pair of images, an accurate trifocal plane is extracted from the trifocal tensor of the three images. Next, based on a small number of feature marks specified using a friendly GUI, correct dense disparity maps are obtained using our trinocular-stereo algorithm. Employing a barycentric warping scheme with the computed disparity, we can generate an arbitrary novel view within the triangle spanned by the three camera centers. Furthermore, after self-calibration of the cameras, 3D objects can be correctly augmented into the virtual environment synthesized by the tri-view morphing algorithm. Three applications of the tri-view morphing algorithm are demonstrated. The first is 4D video synthesis, which can be used to fill in the gap between a few sparsely located video cameras and synthetically generate a video from a virtual moving camera. This synthetic camera can be used to view the dynamic scene from a novel viewpoint instead of from the original static camera views. The second application is multiple-view morphing, where we can seamlessly fly through the scene over a 2D space constructed by more than three cameras. The last is dynamic scene synthesis using three still images, where several rigid objects may move in any orientation or direction. After segmenting the three reference frames into several layers, novel views of the dynamic scene can be generated by applying our algorithm. Finally, experiments are presented to illustrate that a series of photo-realistic virtual views can be generated to fly through a virtual environment covered by several static cameras.
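Once dense correspondences are available, the barycentric warping step reduces to weighting a point's three reference positions and colors by the barycentric coordinates of the virtual camera inside the camera triangle. The tiny sketch below shows only that blending step, with illustrative names; rendering order, hole filling, and the trifocal geometry are omitted.

```python
import numpy as np

def morph_point(positions, colors, weights):
    """positions: 3x2 corresponding pixel positions in the three views;
    colors: 3xC colors; weights: barycentric coordinates of the virtual camera."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                     # normalize the barycentric weights
    new_pos = w @ np.asarray(positions, dtype=float)    # interpolated image position
    new_col = w @ np.asarray(colors, dtype=float)       # blended color
    return new_pos, new_col
```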

10.
11.
Spectral volume rendering
Volume renderers for interactive analysis must be sufficiently versatile to render a broad range of volume images: unsegmented “raw” images as recorded by a 3D scanner, labeled segmented images, multimodality images, or any combination of these. The usual strategy is to assign to each voxel a three-component RGB color and an opacity value α. This so-called RGBα approach offers the possibility of distinguishing volume objects by color. However, these colors are attached to the objects themselves, bypassing the fact that in reality the color of an object is also determined by the light source and the light detectors, i.e., the human eyes. The physically realistic approach presented here models light interacting with the materials inside a voxel, causing spectral changes in the light. The radiated spectrum falls upon a set of RGB detectors. The spectral approach is investigated to see whether it can enhance the visualization of volume data and interactive tools. For that purpose, a material is split into an absorbing part (the medium) and a scattering part (small particles). The medium is considered to be either achromatic or chromatic, while the particles are considered to scatter the light achromatically, elastically, or inelastically. Inelastically scattering particles combined with an achromatic absorbing medium offer additional visual features: objects are made visible through the surface structure of a surrounding volume object, and volume and surface structures can be made visible at the same time. With one or two materials the method is faster than the RGBα approach; with three materials the performance is equal. The spectral approach can be considered an extension of the RGBα approach with greater visual flexibility and a better balance between quality and speed.
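As a rough, hedged illustration of how this differs from plain RGBα compositing: march a ray, attenuate a light spectrum per wavelength band with a chromatic absorbing medium, accumulate achromatically scattered light, and only at the end project the spectrum onto RGB detector responses. The band discretization, parameters, and names are assumptions; the paper's medium/particle model is richer than this sketch.

```python
import numpy as np

def spectral_ray(densities, absorption, scatter, light_spectrum, rgb_response, step=1.0):
    """densities: material density per ray sample; absorption: per-band coefficients;
    light_spectrum: per-band source intensity; rgb_response: 3 x bands detector sensitivities."""
    absorption = np.asarray(absorption, dtype=float)
    light = np.asarray(light_spectrum, dtype=float)
    transmittance = np.ones_like(light)              # per-band transparency accumulated so far
    radiated = np.zeros_like(light)                  # per-band light reaching the detectors
    for density in densities:
        radiated += transmittance * scatter * density * light   # achromatic scattering term
        transmittance *= np.exp(-absorption * density * step)   # chromatic Beer-Lambert medium
    return np.asarray(rgb_response, dtype=float) @ radiated     # spectrum -> RGB only at the end
```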

12.
13.
A new algorithm for ray tracing generalized cylinders whose axis is an arbitrary three-dimensional space curve and whose cross-sectional contour can be varied according to a general sweeping rule is presented. The main restriction placed on the class of generalized cylinders that can be ray-traced is that the sweeping rule of the generalized cylinder must be invertible. This algorithm handles a broader class of generalized cylinders than any other reported ray tracer. It has been integrated into a general geometric modeling system that can render objects utilizing visible light as well as simulated X rays. Generalized cylinders are often used in modeling systems because they compactly represent objects. Many commonly occurring objects including snakes, horses, airplanes, flower vases, and organs of the human abdomen such as the stomach and liver can be described naturally and conveniently in terms of one or more generalized cylinder primitives. By extending the class of generalized cylinders that can be conveniently modeled, the presented algorithm enhances the utility of modeling systems based on generalized cylinders. X-ray images of the internal bone structure of a knee joint and a visible light image of a fan blade assembly are presented.

14.
This article deals with the construction of complete 2.5D exact view models of polyhedral objects for visual identification systems. In particular, a new method and an algorithm for view generation using the view sphere with perspective concept are described. A set of views generated by this method forms a complete view representation of the object. The presented algorithm ensures completeness of the view representation by controlling the covering of the view space by single-view areas. The perspective projection used for calculating the views, the total, tight covering of the view sphere by the single-view areas, and the 2.5-dimensionality of the views ensure, in our opinion, unambiguous and proper identification of polyhedral objects. The method consists of calculating a single (arbitrary) view, determining the corresponding single-view area (the so-called seedling single-view area), and then spirally propagating neighbouring single-view areas until the whole view sphere is covered by them (i.e., until the border register containing the border between the covered and uncovered parts of the view sphere becomes empty). Having a complete set of single-view areas, we also have a complete set of views. A method for determining single-view areas for convex polyhedra is also presented.

15.
Boundary cell-based acceleration for volume ray casting
Several effective acceleration techniques for volume rendering offer efficient means to skip over empty space, providing significant speedup without affecting image quality. The effectiveness of such an approach depends on its ability to accurately estimate the object boundary inside a volume with minimal computational overhead. We propose a novel boundary cell-based acceleration technique for ray casting which skips over empty space by accurately calculating the intersection distance for each ray. Very short distance estimation time is achieved by exploiting a projection template to calculate the parallel-projection values of each boundary cell and the coherency of adjacent cells. Since no hardware acceleration is used, the projection procedure can also be efficiently parallelized. Experimental results are provided to demonstrate the performance of our new algorithm.

16.
The construction of a surface model from range data may be undertaken at any point in a continuum of scales that reflects the level of detail of the resulting model. This continuum relates the construction parameters to the scale of the model. We propose methods to dynamically reprocess range data at different scales. The construction result from a single scale is automatically evaluated, causing reconstruction at a different scale when user-defined criteria are not met. We demonstrate our methods in constructing a planar b-rep space envelope (a scene representation) for over 400 range images. The experiments demonstrate the ability to construct 100 percent valid models, with the scale of detail within specified requirements.
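The control loop described above is simple enough to sketch directly; `build_model` and `is_valid` below stand in for the b-rep construction step and the user-defined evaluation criteria, and are assumptions rather than the authors' interfaces.

```python
def reconstruct_adaptive(range_data, scales, build_model, is_valid):
    """Try reconstruction at successive scales until the user-defined criteria are met.
    build_model(range_data, scale) -> model; is_valid(model) -> bool."""
    for scale in scales:                     # e.g. fine-to-coarse or coarse-to-fine ordering
        model = build_model(range_data, scale)
        if is_valid(model):                  # automatic evaluation of the single-scale result
            return model, scale
    return None, None                        # no scale satisfied the criteria
```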

17.
A novel method for representing 3D objects that unifies viewer-centered and model-centered object representations is presented. A unified 3D frequency-domain representation, called the volumetric frequency representation (VFR), encapsulates both the spatial structure of the object and a continuum of its views in the same data structure. The frequency-domain image of the object viewed from any direction can be extracted directly by employing an extension of the projection slice theorem, where each Fourier-transformed view is a planar slice of the volumetric frequency representation. The VFR is employed for pose-invariant recognition of complex objects, such as faces. Recognition and pose estimation are based on an efficient matching algorithm in a four-dimensional Fourier space. Experimental examples of pose estimation and recognition of faces in various poses are also presented.
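The projection-slice relationship behind the VFR can be checked in a few lines for the axis-aligned case: the 2D FFT of a parallel projection along z equals the central z = 0 slice of the volume's 3D FFT. Arbitrary view directions would instead require interpolating an oblique central plane; the function names are illustrative.

```python
import numpy as np

def fourier_view_along_z(volume):
    """Fourier-domain view of `volume` seen along +z, taken as a slice of its 3D FFT."""
    spectrum = np.fft.fftn(volume)
    return spectrum[:, :, 0]                  # central slice orthogonal to the z axis

def check_projection_slice(volume):
    """Verify the projection-slice theorem numerically for the z direction."""
    projection = volume.sum(axis=2)           # parallel projection along z
    direct = np.fft.fft2(projection)          # 2D FFT of the projection
    return np.allclose(direct, fourier_view_along_z(volume))
```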

18.
19.
Spacetime ray tracing for animation
Techniques for the efficient ray tracing of animated scenes are presented. They are based on two central concepts: spacetime ray tracing, and a hybrid adaptive space subdivision/bounding volume technique for generating efficient, nonoverlapping hierarchies of bounding volumes. In spacetime ray tracing, moving objects are rendered as static objects in 4-D space-time using 4-D analogs of 3-D techniques. The bounding volume hierarchy combines elements of adaptive space subdivision and bounding volume techniques. The quality of the hierarchy and its nonoverlapping character make it an improvement over previous algorithms, because both attributes reduce the number of ray/object intersections that must be computed. These savings are amplified in animation because of the much higher cost of computing ray/object intersections for motion-blurred animation. It is shown that it is possible to ray trace large animations more quickly with spacetime ray tracing using this hierarchy than with straightforward frame-by-frame rendering.
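A hedged sketch of the kind of test such a hierarchy enables: each node stores a spatial bounding box together with the time interval over which it is valid, and a motion-blur ray that carries its own sample time can be rejected either in time or by the ordinary slab test. The node layout and names are assumptions, not the paper's data structure.

```python
import numpy as np

def hits_spacetime_node(ray_origin, ray_dir, ray_time, node):
    """node: dict with 'tmin'/'tmax' (time interval) and 'lo'/'hi' (3D AABB corners)."""
    if not (node['tmin'] <= ray_time <= node['tmax']):
        return False                                          # node inactive at this ray's sample time
    origin = np.asarray(ray_origin, dtype=float)
    direction = np.asarray(ray_dir, dtype=float)
    lo, hi = np.asarray(node['lo'], dtype=float), np.asarray(node['hi'], dtype=float)
    inv = 1.0 / np.where(direction == 0.0, 1e-12, direction)  # avoid division by zero
    t0 = (lo - origin) * inv
    t1 = (hi - origin) * inv
    t_near = np.max(np.minimum(t0, t1))
    t_far = np.min(np.maximum(t0, t1))
    return t_far >= max(t_near, 0.0)                          # standard 3D slab test along the ray
```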

20.
The suggestion that new courses can be constructed from existing learning objects appears technically self-evident but remains unproven. Despite increasing evidence that learning objects can provide a suitable structure for constructing courses, most evidence is based on creating learning objects, either through restructuring existing materials or as a set of newly created learning objects. There is little evidence for building a new course from existing learning objects where any significant number of these have been developed for a different course elsewhere. Authors who adopt a constructivist model of knowledge might view this lack of evidence as confirmation of the inherent difficulty of integrating resources created within different communities. Those following a scientific model of knowledge would view this as a temporary problem that will be resolved when a sufficient volume of materials exists in repositories. This paper argues that the reconfiguration of learning objects to create new courses is significantly more complex than is currently recognized, even within a scientific framework of knowledge. By implication, far more research is needed to understand reconfiguration and re-integration than initial 'creation'.

