Similar Documents
20 similar documents found (search time: 0 ms)
1.
This paper presents a novel optimization framework for estimating static or dynamic surfaces with fine detail. The proposed method uses dense depths from a structured‐light system or sparse ones from motion capture as the initial positions, and exploits non‐Lambertian reflectance models to approximate surface reflectance. Multi‐stage shape‐from‐shading (SFS) is then applied to optimize both shape geometry and reflectance properties. Because this method uses non‐Lambertian properties, it can compensate for triangulation reconstruction errors caused by view‐dependent reflections. This approach can also estimate detailed undulations in textureless regions, and employs spatial‐temporal constraints to reliably track time‐varying surfaces. Experimental results demonstrate that accurate and detailed 3D surfaces can be reconstructed from images acquired by off‐the‐shelf devices. Copyright © 2010 John Wiley & Sons, Ltd.

2.
Several rare‐earth‐doped fluoride crystals that are excited to emit visible light by sequential two‐photon absorption have been investigated as display‐medium candidates for static volumetric three‐dimensional displays. Dispersion of powders of these materials in a refractive‐index‐matched polymer is reported because such a medium may result in a scalable display. The scattering problem in such a medium is greatly reduced by index‐matching the polymer to the crystalline particles. An index‐matching condition that optimizes the performance is identified.

3.
3D Reconstruction Based on Geometric Shape Matching of Contours in Adjacent Slices
When contour lines are complex, commonly used 3D surface reconstruction methods become ineffective and sometimes fail. To address this, a new 3D reconstruction algorithm based on geometric shape matching of contours in adjacent slices is proposed: first, the key turning points of each contour are extracted; the key points are then matched according to the geometric shapes of the adjacent slices; next, the matched key points on the upper and lower contours are connected, dividing the contours into several independent segments; finally, each segment is tiled separately, completing the contour tiling for the whole reconstruction. Experiments show that the method performs well on complex closed contours with frequent concave and convex variations.
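The first step described above, key turning-point extraction, can be sketched with a simple turning-angle threshold on a closed polyline. The threshold value and this particular criterion are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def key_points(poly, angle_thresh_deg=30.0):
    """Indices of vertices where a closed polyline turns more than the threshold."""
    n = len(poly)
    keys = []
    for i in range(n):
        a, b, c = poly[i - 1], poly[i], poly[(i + 1) % n]
        u, v = b - a, c - b                     # incoming and outgoing edges
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if ang > angle_thresh_deg:
            keys.append(i)
    return keys

# Every corner of a square turns 90 degrees, so all four are key points.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(key_points(square))
```

Matching would then pair these indices between adjacent slices by comparing local geometry around each key point.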

4.
Techniques for 3‐D display have evolved from stereoscopic 3‐D systems to multiview 3‐D systems, which provide images corresponding to different viewpoints. Currently, new technology is required for application in multiview display systems that use input‐source formats such as 2‐D images to generate virtual‐view images of multiple viewpoints. Due to the changes in viewpoints, occlusion regions of the original image become disoccluded, resulting in problems related to the restoration of output image information that is not contained in the input image. In this paper, a method for generating multiview images through a two‐step process is proposed: (1) depth‐map refinement and (2) disoccluded‐area estimation and restoration. The first step, depth‐map processing, removes depth‐map noise, compensates for mismatches between RGB and depth, and preserves the boundaries and object shapes. The second step, disoccluded‐area estimation and restoration, predicts the disoccluded area by using disparity and restores information about the area by using information about neighboring frames that are most similar to the occlusion area. Finally, multiview rendering generates virtual‐view images by using a directional rendering algorithm with boundary blending.

5.
The jerkiness of moving three‐dimensional (3‐D) images produced by a high‐density directional display was studied. Under static viewing conditions in which subjects' heads did not move, jerkiness was more noticeable when moving 3‐D images were displayed in front of the display screen and was less noticeable when moving 3‐D images were displayed behind the screen. We found that the perception of jerkiness depended on the visual angular velocities of moving 3‐D images. Under dynamic viewing conditions in which subjects' heads were forced to move, when moving 3‐D images were displayed in front of the screen, jerkiness was less noticeable when the subjects' heads and 3‐D images moved in opposite directions and was more noticeable when they moved in the same direction. When moving 3‐D images were displayed behind the screen, jerkiness was less noticeable when subjects' heads and 3‐D images moved in the same direction and was more noticeable when they moved in opposite directions.

6.
In this paper, we introduce an approach to high‐level parameterisation of captured mesh sequences of actor performance for real‐time interactive animation control. High‐level parametric control is achieved by non‐linear blending between multiple mesh sequences exhibiting variation in a particular movement. For example, walking speed is parameterised by blending fast and slow walk sequences. A hybrid non‐linear mesh sequence blending approach is introduced to approximate the natural deformation of non‐linear interpolation techniques whilst maintaining the real‐time performance of linear mesh blending. Quantitative results show that the hybrid approach gives an accurate real‐time approximation of offline non‐linear deformation. An evaluation of the approach shows good performance not only for entire meshes but also for specific mesh areas. Results are presented for single and multi‐dimensional parametric control of walking (speed/direction), jumping (height/distance) and reaching (height) from captured mesh sequences. This approach allows continuous real‐time control of high‐level parameters such as speed and direction whilst maintaining the natural surface dynamics of captured movement. Copyright © 2012 John Wiley & Sons, Ltd.
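The linear mesh-sequence blending that the hybrid approach above builds on can be sketched as follows; the sequences here are synthetic placeholders, not captured performance data:

```python
import numpy as np

# Per-frame linear blend of two time-aligned mesh sequences: a weight in [0, 1]
# interpolates vertex positions between a slow walk and a fast walk, which is
# how a "speed" parameter can drive the animation in real time.
rng = np.random.default_rng(2)
frames, verts = 30, 100
slow_walk = rng.normal(size=(frames, verts, 3))   # placeholder vertex positions
fast_walk = rng.normal(size=(frames, verts, 3))

def blend(w):
    """Linear blend: w = 0 -> slow sequence, w = 1 -> fast sequence."""
    return (1.0 - w) * slow_walk + w * fast_walk

mid = blend(0.5)
print(mid.shape)
```

The paper's hybrid scheme corrects the artifacts of this purely linear blend (e.g., limb shortening) by approximating non-linear interpolation while keeping this per-frame cost.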

7.
An approach to achieving a self‐calibrating three‐dimensional (3D) light field display is investigated in this paper. The proposed 3D light field display is built from spliced multi‐LCDs, lens and diaphragm arrays, and a directional diffuser. The light field imaging principle, hardware configuration, diffuser characteristics, and image reconstruction simulation are described and analyzed, respectively. Beyond light field imaging, a self‐calibration method is proposed to improve imaging performance. An image sensor captures calibration patterns projected onto, and reflected by, a polymer‐dispersed liquid crystal film that is attached to and shapes the diffuser. These calibration components are assembled with the display unit and can be switched between display mode and calibration mode. In calibration mode, the imperfect imaging relations of the optical components are captured and calibrated automatically. We demonstrate our design by implementing a prototype of the proposed 3D light field display using modified off‐the‐shelf products. The proposed approach meets the requirements of practical applications for scalable configuration, fast calibration, a large viewing angular range, and smooth motion parallax.

8.
The estimation of the geometric structure of objects located underwater underpins a plethora of applications such as mapping shipwrecks for archaeology, monitoring the health of coral reefs, detecting faults in offshore oil rigs and pipelines, detection and identification of potential threats on the seabed, etc. Acoustic imaging is the most popular choice for underwater sensing. Underwater exploratory vehicles typically employ wide‐aperture Sound Navigation and Ranging (SONAR) imaging sensors. Although their wide aperture enables scouring large volumes of water ahead of them for obstacles, the resulting images produced are blurry due to integration over the aperture. Performing three‐dimensional (3D) reconstruction from this blurry data is notoriously difficult. This challenging inverse problem is further exacerbated by the presence of speckle noise and reverberations. The state‐of‐the‐art methods in 3D reconstruction from sonar either require bulky and expensive matrix‐arrays of sonar sensors or additional narrow‐aperture sensors. Due to its low footprint, the latter induces gaps between reconstructed scans. Avoiding such gaps requires slow and cumbersome scanning by the vehicles that carry the scanners. In this paper, we present two reconstruction methods enabling on‐site 3D reconstruction from imaging sonars of any aperture. The first of these presents an elegant linear formulation of the problem, as a blind deconvolution with a spatially varying kernel. The second method is a simple algorithmic approach for approximate reconstruction, using a nonlinear formulation. We demonstrate that our simple approximation algorithms perform 3D reconstruction directly from the data recorded by wide‐aperture systems, thus eliminating the need for multiple sensors to be mounted on underwater vehicles for this purpose. Additionally, we observe that the wide aperture may be exploited to improve the coverage of the reconstructed samples (on the scanned object's surface). 
We demonstrate the efficacy of our algorithms on simulated as well as real data acquired using two sensors, and we compare our work to the state of the art in sonar reconstruction. Finally, we show the employability of our reconstruction methods on field data gathered by an autonomous underwater vehicle.
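The linear (deconvolution) view of wide-aperture sonar imaging described above can be illustrated with a toy 1-D example. Unlike the paper's blind, spatially varying formulation, the kernel here is assumed known and shift-invariant for simplicity:

```python
import numpy as np

# The blurry observation y is the scene x convolved with an aperture kernel,
# y = K @ x; with K known, x is recovered by Tikhonov-regularized least squares.
n = 64
x_true = np.zeros(n)
x_true[[20, 35, 36, 50]] = [1.0, 0.8, 0.6, 1.2]   # sparse reflectors

kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])       # assumed aperture blur
K = np.zeros((n, n))
for i in range(n):                                  # banded convolution matrix
    for j, w in enumerate(kernel):
        col = i + j - len(kernel) // 2
        if 0 <= col < n:
            K[i, col] = w

y = K @ x_true                                      # blurry observation

lam = 1e-6                                          # regularization strength
x_hat = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)
print(np.max(np.abs(x_hat - x_true)) < 1e-3)
```

In the paper's setting the kernel is unknown and varies spatially, turning this into the much harder blind-deconvolution problem, and speckle noise makes regularization essential rather than optional.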

9.
Although there are numerous types of floating‐image display systems that can project three‐dimensional (3‐D) images into real space through a convex lens or a concave mirror, most of them provide the observer with only one image plane in space and therefore lack a sense of depth. In order to enhance the perceived depth of floating images, a multi‐plane floating display is required. In this paper, a novel two‐plane electro‐floating display system using 3‐D integral images is proposed. One plane, for the object image, is provided by an electro‐floating display system, and the other plane, for the background image, is provided by the 3‐D integral imaging system. Consequently, the proposed two‐plane electro‐floating display system, having a 3‐D background, can provide floated images in front of background integral images, resulting in a different perspective for the observer. To show the usefulness of the proposed system, experiments were carried out and their results are presented. In addition, the prototype was practically implemented and successfully tested.

10.
A new approach to resolution enhancement of an integral‐imaging (II) three‐dimensional display using multi‐directional elemental images is proposed. The proposed method uses a special lens made up of nine pieces of a single Fresnel lens, collected from different parts of the same lens. This composite lens is placed in front of the lens array such that it generates nine sets of directional elemental images for the lens array. These elemental images are overlapped on the lens array and produce nine point light sources for each elemental lens at different positions in the focal plane of the lens array. Nine sets of elemental images are projected by a high‐speed digital micromirror device and are tilted by a two‐dimensional scanning mirror system, maintaining the time‐multiplexing sequence for the nine pieces of the composite lens. In this method, the concentration of point light sources in the focal plane of the lens array is nine times higher, i.e., the distance between two adjacent point light sources is three times smaller than in a conventional II display; hence, the resolution of the three‐dimensional image is enhanced.

11.
Collision detection is highly important in computer graphics and virtual reality. Most collision detection methods are object‐based, relying on testing the geometrical interference of objects, and their performance therefore depends on the geometrical complexity of the objects. Recently, image‐based methods have gained increasing acceptance for their simplicity in implementation, robustness with respect to the object geometry, and the potential to distribute the computational burden onto graphics hardware. However, all existing image‐based methods require direct calls to OpenGL, but so far there is no direct way to access OpenGL through the Java 3D API. Although Java 3D provides its own built‐in collision detection classes, they are either incorrect or inefficient. In this paper, we present a hybrid image‐based collision detection method in Java 3D, which incorporates the Java 3D built‐in collision detection and the image‐based collision detection in our specially devised scene graph. In addition, we take advantage of the fact that the 3D position of successive offscreen views (i.e. virtual views perceived by the probing object) does not change significantly and thereby reduce the occurrences of offscreen rendering, so that the collision detection becomes even faster (up to 50% in our case). Experimental results prove the correctness and efficiency of our method. Copyright © 2006 John Wiley & Sons, Ltd.
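The core idea behind image-based collision tests, comparing per-pixel depth intervals taken from an offscreen view, can be sketched as below. Analytic spheres stand in for rendered depth buffers; this is a generic illustration, not Java 3D's API or the paper's scene-graph scheme:

```python
import numpy as np

# For each pixel of an offscreen view, record the near/far depth interval each
# object occupies; a collision is reported wherever the two intervals intersect.
H = W = 64
ys, xs = np.mgrid[0:H, 0:W]

def depth_interval(cx, cy, cz, r):
    """Near/far depth of a sphere per pixel (NaN where the pixel is not covered)."""
    d2 = (xs - cx) ** 2 + (ys - cy) ** 2
    h = np.sqrt(np.maximum(r ** 2 - d2, 0.0))
    covered = d2 <= r ** 2
    near = np.where(covered, cz - h, np.nan)
    far = np.where(covered, cz + h, np.nan)
    return near, far

near_a, far_a = depth_interval(30, 32, 10.0, 10)
near_b, far_b = depth_interval(38, 32, 15.0, 10)   # overlaps sphere A in depth

# Intervals intersect where both objects cover the pixel (NaN compares False).
hit = (near_a <= far_b) & (near_b <= far_a)
print(bool(np.any(hit)))
```

A GPU implementation renders both objects' front and back faces into depth buffers and performs the same per-pixel interval test, which is why performance is largely independent of geometric complexity.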

12.
Crosstalk is a critical defect affecting image quality in multiview lenticular 3D displays. Existing optimization methods require tedious computations and device‐specific optical measurements, and results are often suboptimal. We propose a new method, on the basis of light field acquisition and optimization, for crosstalk reduction in super multiview displays. Theory and algorithms were developed, and experimental validation results showed superior performance.

13.
This paper presents a 3D face reconstruction method using multiple 2D face images. Structure from motion (SfM) methods, which have been widely used to reconstruct 3D faces, are vulnerable to point correspondence errors caused by self-occlusion. In order to solve this problem, we propose a shape conversion matrix (SCM) which estimates the ground-truth 2D facial feature points (FFPs) from the observed 2D FFPs corrupted by self-occlusion errors. To build the SCM, training pairs of observed 2D FFPs and ground-truth 2D FFPs are collected using 3D face scans. An observed shape model and a ground-truth shape model are then built to represent the observed 2D FFPs and the ground-truth 2D FFPs, respectively. Finally, the observed shape model parameter is converted to the ground-truth shape model parameter via the SCM. By using the SCM, the true locations of the self-occluded FFPs are estimated exactly with simple matrix multiplications. As a result, SfM-based 3D face reconstruction methods combined with the proposed SCM become more robust against point correspondence errors caused by self-occlusion, and the computational cost is significantly reduced. In experiments, the reconstructed 3D facial shape is quantitatively compared with the 3D facial shape obtained from a 3D scanner, and the results show that SfM-based 3D face reconstruction methods with the proposed SCM achieve higher accuracy and faster processing than those without the SCM.
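A minimal sketch of the shape-conversion idea above: learn a linear map from observed to ground-truth shape parameters by least squares, then correct new observations with a single matrix multiplication. The dimensions and the synthetic linear-corruption model are assumptions for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, p = 200, 8                          # training shapes, parameter dim

P_true = rng.normal(size=(n_train, p))       # ground-truth shape parameters
A_occl = np.eye(p) + 0.1 * rng.normal(size=(p, p))  # unknown corruption (toy)
P_obs = P_true @ A_occl                      # observed (corrupted) parameters

# SCM as the least-squares solution of P_obs @ SCM ≈ P_true.
SCM, *_ = np.linalg.lstsq(P_obs, P_true, rcond=None)

# Correcting a new observation is one matrix-vector multiplication.
p_new_true = rng.normal(size=p)
p_new_obs = p_new_true @ A_occl
p_corrected = p_new_obs @ SCM
print(np.allclose(p_corrected, p_new_true, atol=1e-6))
```

Because the correction is a fixed linear map applied at run time, it adds almost no cost to the SfM pipeline, which matches the abstract's claim of reduced computation.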

14.
A circular camera system employing an image‐based rendering technique is proposed for displaying a 3‐D image of a real object that can be observed from multiple surrounding viewpoints on a 3‐D display. The system captures the light‐ray data needed to reconstruct three‐dimensional (3‐D) images by reconstructing parallax rays from multiple images taken from multiple viewpoints around the object. An interpolation algorithm that is effective in reducing the number of component cameras in the system is also proposed. The interpolation and experimental results obtained on our previously proposed 3‐D display system based on the reconstruction of parallax rays are described. When the radius of the proposed circular camera array was 1100 mm, the central angle of the camera array was 40°, and the radius of the real 3‐D object was between 60 and 100 mm, the proposed camera system, consisting of 14 cameras, could obtain sufficient 3‐D light‐ray data to reconstruct 3‐D images on the 3‐D display.

15.
This paper addresses an image‐based method for modeling 3D objects with curved surfaces based on the non‐uniform rational B‐splines (NURBS) representation. The user fits the feature curves on a few calibrated images with 2D NURBS curves using the interactive user interface. Then, 3D NURBS curves are constructed by stereo reconstruction of the corresponding feature curves. Using these as building blocks, NURBS surfaces are reconstructed by the known surface building methods including bilinear surfaces, ruled surfaces, generalized cylinders, and surfaces of revolution. In addition to them, we also employ various advanced techniques, including skinned surfaces, swept surfaces, and boundary patches. Based on these surface modeling techniques, it is possible to build various types of 3D shape models with textured curved surfaces without much effort. Copyright © 2007 John Wiley & Sons, Ltd.

16.
B-spline surfaces, extracted from scanned sensor data, are usually required to represent objects in inspection, surveying technology, metrology and reverse engineering tasks. In order to express a large object with satisfactory accuracy, multiple scans, which generally lead to overlapping patches, are usually needed due to, inter alia, practical limitations on measurement accuracy, uncertainties in measurement devices, calibration problems, and the skill of the experimenter. In this paper, we propose an action sequence consisting of division and merging. While the former divides a B-spline surface into many patches with corresponding scanned data, the latter merges the scanned data and its overlapping B-spline surface patch. First, all possible overlapping cases of two B-spline surfaces are enumerated and analyzed in terms of the locations of the projection points of the four corners of one surface in the interior of its overlapping surface. Next, general division and merging methods are developed to deal with all overlapping cases, and a simulated example is used to illustrate the detailed procedures. Subsequently, two scans obtained from a three-dimensional laser scanner are simulated to express a large house with B-spline surfaces. The simulation results show the efficiency and efficacy of the proposed method. Throughout this process, the storage space for data points does not grow with newly obtained overlapping scans, and none of the overlapping points are discarded, which increases the representation accuracy. We believe the proposed method has a number of potential applications in the representation and expression of large objects with three-dimensional laser scanner data.

17.
This paper puts forward a hierarchical method for modeling fluid surfaces in natural landscapes. The proposed method produces a visually plausible surface geometry with texture from a single video image recorded by a standard video device. In contrast with conventional physically based fluid simulation, our method computes preliminary results using an empirical method and adopts the Stokes wave model to obtain the reconstruction result. We illustrate the working of the system on a wide range of possible scenes, and a qualitative evaluation is provided to verify the quality of the surface geometry. Experiments show that the method meets real‐time performance requirements while producing realistic fluid appearance. Copyright © 2013 John Wiley & Sons, Ltd.

18.
陈忠泽  黄国玉 《计算机应用》2008,28(5):1251-1254
A method is proposed for estimating the 3D pose of a target in real time from its stereo images using an artificial neural network. The network's input vector consists of the coordinates of the target's feature points in synchronized stereo image frames, while the output vector represents the 3D pose of several key positions of the target (from which a 3D model of the target can be built). The output samples needed to fit the neural network were acquired with the REACTOR motion-capture system. Experiments show that the 3D pose-estimation error of the algorithm is below 5%, making it suitable for applications such as real-time computer synthesis of 3D virtual targets.
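The input/output structure of the network described above can be sketched as follows. The layer sizes, random weights, and 6-element pose output are placeholders for illustration; in the paper, the weights are fitted to motion-capture ground truth from the REACTOR system:

```python
import numpy as np

rng = np.random.default_rng(1)
n_points = 10                                    # tracked feature points per view

# Input vector: concatenated (x, y) pixel coordinates of the feature points in
# a synchronized stereo pair (left frame, then right frame).
left = rng.uniform(0, 640, size=(n_points, 2))
right = rng.uniform(0, 640, size=(n_points, 2))
x = np.concatenate([left.ravel(), right.ravel()])   # length 4 * n_points

# Two-layer MLP forward pass with placeholder (untrained) weights.
W1 = rng.normal(scale=0.05, size=(64, x.size))
b1 = np.zeros(64)
W2 = rng.normal(scale=0.05, size=(6, 64))           # example: 6-DOF pose output
b2 = np.zeros(6)

h = np.tanh(W1 @ x + b1)                            # hidden activations
pose = W2 @ h + b2                                  # estimated pose vector
print(pose.shape)
```

Training would minimize the error between this output and the motion-capture pose over many synchronized frames; at run time, only the forward pass above is needed, which is what makes the approach real-time.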

19.
Prototypes of a special conformal load‐bearing antenna array (CLAA) with a nondevelopable surface are designed, fabricated, and tested, and the effect of the substrate curvature radius on its EM performance is also investigated in this work. A novel three‐dimensional (3‐D) printing technology and fabrication equipment based on micro‐droplet spraying and metal laser sintering are proposed to create the patch array and divider network on a non‐developable curved rigid substrate. For comparison with conventional technology (such as chemical etching), a planar CLAA prototype with two patches, operating at 5 GHz, was designed and fabricated by the two different technologies, and the surface roughness, fabrication tolerance, and EM performance were tested and compared. Finally, a spherical CLAA prototype with eight patches, operating at 13 GHz, was designed and fabricated by the novel 3D printing; the measured EM performance demonstrates the applicability of additive manufacturing for this special CLAA.

20.
Because 3D surface textures represent an object's texture better than 2D textures and vary with scene illumination and viewpoint, they are widely used in virtual reality and computer games. Photometric stereo has attracted wide attention as an effective technique for capturing 3D surface texture information. Uniform illumination is a key condition for successfully capturing and reconstructing 3D surface textures with photometric stereo. In practical applications, non-uniform illumination causes distortion and artifacts during capture and reconstruction. This paper studies such distortion and proposes a method to address it. Experimental results show that the method is simple, feasible, and effective.
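The classical Lambertian photometric-stereo computation that this abstract builds on can be sketched for a single pixel. The light directions, normal, and albedo below are made up, and this is the standard uniform-illumination formulation rather than the paper's non-uniform-illumination correction:

```python
import numpy as np

# Intensities I observed under known light directions L satisfy I = L @ (rho*n)
# for a Lambertian surface; least squares recovers the albedo-scaled normal.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])            # three known light directions

n_true = np.array([0.1, 0.2, 0.9747])        # surface normal (then normalized)
n_true /= np.linalg.norm(n_true)
rho = 0.8                                    # albedo

I = L @ (rho * n_true)                       # observed per-light intensities

g, *_ = np.linalg.lstsq(L, I, rcond=None)    # g = rho * n
rho_est = np.linalg.norm(g)
n_est = g / rho_est
print(np.allclose(n_est, n_true, atol=1e-6))
```

Non-uniform illumination breaks the assumption that each row of `L` has the same intensity across the image, which is exactly the distortion source the paper targets.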
