Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Recovering shape by purposive viewpoint adjustment   Cited by: 1 (self-citations: 1, others: 0)
We present an approach for recovering surface shape from the occluding contour using an active (i.e., moving) observer. It is based on a relation between the geometries of a surface in a scene and its occluding contour: if the viewing direction of the observer is along a principal direction for a surface point whose projection is on the contour, surface shape (i.e., curvature) at the surface point can be recovered from the contour. Unlike previous approaches for recovering shape from the occluding contour, we use an observer that purposefully changes viewpoint in order to achieve a well-defined geometric relationship with respect to a 3-D shape prior to its recognition. We show that there is a simple and efficient viewing strategy that allows the observer to align the viewing direction with one of the two principal directions for a point on the surface. This strategy depends only on curvature measurements on the occluding contour and therefore demonstrates that recovering quantitative shape information from the contour does not require knowledge of the velocities or accelerations of the observer. Experimental results demonstrate that our method can be easily implemented and can provide reliable shape information from the occluding contour.
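As a worked illustration of the curvature measurements that such a strategy relies on, the sketch below estimates the signed curvature of a discretely sampled occluding contour with finite differences. It is a minimal numpy example under our own sampling assumptions, not the paper's viewpoint-adjustment procedure; the `contour_curvature` helper is hypothetical.

```python
import numpy as np

def contour_curvature(points):
    """Signed curvature of a closed 2D contour given as an (N, 2) array.

    Finite-difference estimate of kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2).
    """
    x, y = points[:, 0], points[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-12)

# Example: a circle of radius 5 should give curvature close to 1/5 = 0.2.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.stack([5 * np.cos(t), 5 * np.sin(t)], axis=1)
print(np.round(contour_curvature(circle)[90:95], 3))
```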

2.
3.
Image-based relighting (IBL) is a technique to change the illumination of an image-based object/scene. In this paper, we define a representation called the reflected irradiance field which records the light reflected from a scene as viewed at a fixed viewpoint as a result of moving a point light source on a plane. It synthesizes a novel image under a different illumination by interpolating and superimposing appropriate recorded samples. Furthermore, we study the minimum sampling problem of the reflected irradiance field, i.e., how many light source positions are needed. We find that there exists a geometry-independent bound for the sampling interval whenever the second-order derivatives of the surface BRDF and the minimum depth of the scene are bounded. This bound ensures that when the novel light source is on the plane, the error in the reconstructed image is controlled by a given tolerance, regardless of the geometry. We also analyze the bound of depth error so that the extra reconstruction error can also be governed when the novel light source is off-plane. Experiments on both synthetic and real surfaces are conducted to verify our analysis.
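A rough illustration of the interpolate-and-superimpose step: the sketch below bilinearly blends the four recorded images nearest to a novel on-plane light-source position. The `relight` helper and the grid layout of `images` are assumptions for illustration; the sampling-bound analysis itself is not reproduced.

```python
import numpy as np

def relight(images, grid_x, grid_y, sx, sy):
    """Bilinearly interpolate recorded samples for a novel on-plane light position.

    images: dict mapping (ix, iy) grid indices -> HxWx3 float image arrays.
    grid_x, grid_y: 1D sorted arrays of sampled light-source plane coordinates.
    (sx, sy): novel light position, assumed to lie inside the sampled grid.
    """
    ix = np.clip(np.searchsorted(grid_x, sx) - 1, 0, len(grid_x) - 2)
    iy = np.clip(np.searchsorted(grid_y, sy) - 1, 0, len(grid_y) - 2)
    ax = (sx - grid_x[ix]) / (grid_x[ix + 1] - grid_x[ix])
    ay = (sy - grid_y[iy]) / (grid_y[iy + 1] - grid_y[iy])
    return ((1 - ax) * (1 - ay) * images[(ix, iy)]
            + ax * (1 - ay) * images[(ix + 1, iy)]
            + (1 - ax) * ay * images[(ix, iy + 1)]
            + ax * ay * images[(ix + 1, iy + 1)])
```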

4.
《Real》1999,5(3):203-213
In this paper we describe a new approach to contour extraction and tracking which is based on the principles of active contour models and overcomes their shortcomings. We formally introduce active rays, describe contour extraction as an energy minimization problem, and discuss what active contours and active rays have in common. The main difference is that for active rays a unique ordering of the contour elements in the 2D image plane is given, which cannot be found for active contours. This is advantageous for predicting the contour elements' positions and prevents crossings in the contour. A further advantage is that instead of an energy minimization in the 2D image plane, the minimization is reduced to a 1D search problem. The approach also shows any-time behavior, which is important with respect to real-time applications. Finally, the method allows for the management of multiple hypotheses about the object's boundary, an important aspect if concave contours are to be tracked. Results on real image sequences (tracking a toy train in a laboratory scene, tracking pedestrians in an outdoor scene) show the suitability of this approach for real-time object tracking in a closed loop between image acquisition and camera movement. The contour tracking can be done within the image frame rate (25 fps) on standard Unix workstations (HP 735) without any specialized hardware.
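A minimal sketch of the 1D search idea behind active rays, assuming the external energy is simply the negative gradient magnitude sampled along each radial ray from a reference point; internal energy, temporal prediction, and multiple hypotheses are omitted, and the function name and parameters are illustrative.

```python
import numpy as np

def active_rays_contour(gradient_mag, center, n_rays=180, max_radius=200):
    """For each ray from `center`, return the radius of the strongest edge."""
    h, w = gradient_mag.shape
    radii = np.zeros(n_rays)
    for k in range(n_rays):
        theta = 2 * np.pi * k / n_rays
        r = np.arange(1, max_radius)
        xs = np.clip((center[0] + r * np.cos(theta)).astype(int), 0, w - 1)
        ys = np.clip((center[1] + r * np.sin(theta)).astype(int), 0, h - 1)
        energy = -gradient_mag[ys, xs]      # 1D external energy along the ray
        radii[k] = r[np.argmin(energy)]     # contour element at the 1D minimum
    return radii
```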

5.
An image generation approach in computer graphics that yields images of considerable optical quality is ray tracing, derived from geometric optics. Crucial for the efficiency of ray tracing is quickly finding the intersection point closest to a ray's origin. This requires restricting the candidate patches of a given scene as well as computing the intersections of a ray with a patch. We survey approaches to solving these two problems, and present a new method based on space sweep and patch subdivision. Its advantages are subdivision adapted to the distribution of rays and consideration of ray coherence, which means that useless subdivisions are avoided.
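For the second subproblem (intersecting a ray with a primitive), the sketch below shows the standard Möller–Trumbore ray-triangle test; it is a generic illustration, not the paper's space-sweep subdivision method.

```python
import numpy as np

def ray_triangle_intersect(orig, direc, v0, v1, v2, eps=1e-9):
    """Return the ray parameter t of the hit, or None if there is no hit."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direc, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                    # ray is parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = np.dot(direc, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None         # keep only hits in front of the origin
```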

6.
We present an approach that significantly enhances the capabilities of traditional image mosaicking. The key observation is that as a camera moves, it senses each scene point multiple times. We rigidly attach to the camera an optical filter with spatially varying properties, so that multiple measurements are obtained for each scene point under different optical settings. Fusing the data captured in the multiple images yields an image mosaic that includes additional information about the scene. We refer to this approach as generalized mosaicing. In this paper we show that this approach can significantly extend the optical dynamic range of any given imaging system by exploiting vignetting effects. We derive the optimal vignetting configuration and implement it using an external filter with spatially varying transmittance. We also derive efficient scene sampling conditions as well as ways to self calibrate the vignetting effects. Maximum likelihood is used for image registration and fusion. In an experiment we mounted such a filter on a standard 8-bit video camera, to obtain an image panorama with dynamic range comparable to imaging with a 16-bit camera.
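To make the fusion step concrete, the sketch below gives a maximum-likelihood radiance estimate from several readings of the same scene point taken through known attenuations, assuming additive Gaussian noise of equal variance and discarding saturated readings. The noise model and the `fuse_measurements` helper are assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_measurements(values, attenuations, saturation=255):
    """ML radiance estimate from readings m_i ~ t_i * L with known t_i.

    Under equal-variance Gaussian noise the estimate from unsaturated
    readings is L = sum(t_i * m_i) / sum(t_i ** 2).
    """
    values = np.asarray(values, float)
    t = np.asarray(attenuations, float)
    ok = values < saturation                    # ignore saturated readings
    denom = np.sum(ok * t**2)
    return np.sum(ok * t * values) / max(denom, 1e-12)

print(fuse_measurements([200.0, 100.0, 50.0], [1.0, 0.5, 0.25]))  # ~200
```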

7.
This paper describes digital methods for image warping. On this basis, it details and implements a patch-based quadratic mesh warping technique and a control-point-based warping technique using radial basis functions, and presents experimental results. Finally, contour-based image warping is briefly discussed.

8.
We study the problem of recovering the 3D shape of an unknown smooth specular surface from a single image. The surface reflects a calibrated pattern onto the image plane of a calibrated camera. The pattern is such that points are available in the image where position, orientation, and local scale may be measured (e.g. a checkerboard). We first explore the differential relationship between the local geometry of the surface around the point of reflection and the local geometry in the image. We then study the inverse problem and give necessary and sufficient conditions for recovering surface position and shape. We prove that surface position and shape up to third order can be derived as a function of local position, orientation, and local scale measurements in the image when two orientations are available at the same point (e.g. a corner). Information equivalent to scale and orientation measurements can also be extracted from the reflection of a planar scene patch of arbitrary geometry, provided that the reflections of (at least) 3 distinctive points can be identified. We validate our theoretical results with both numerical simulations and experiments with real surfaces.

9.
International Journal of Computer Mathematics, 2012, 89(13): 2857-2870
Three novel object contour detection schemes based on image fusion are proposed in this paper. In these schemes an active contour model is applied to detect the object's contour. Since an object's contour in an infrared (IR) image is usually clearer than that in a visible image, the converged active contour in a visible image is refined using the contour in the IR image. The first contour detection scheme is realized by revising the shape-preserving active contour model. The second scheme minimizes the squared B-spline L2 norm of the difference between the B-spline control point vectors in the two modal images. Contour tracking and extraction experiments indicate that the first scheme outperforms the second. Moreover, a third scheme based on the active contour and pixel-level image fusion is proposed for images with incomplete but complementary scene information. An example of extracting the contour of a partially hidden tank demonstrates its efficacy.

10.
Addressing the characteristics of scene images, a feature point extraction method based on chain-code techniques is proposed. First, a new chain code is introduced to describe the contours of a scene image; then the reconstruction invariance of the chain code is used to reconstruct the main contours of the image, removing noise and fine detail, and the result is re-encoded to obtain the chain-code description of the main contours; finally, several types of feature points are extracted from the resulting chain code, including contour endpoints, centroids, intersections, and salient corner points. Experimental results show that the method not only extracts feature points well, but also offers strong information compression and strong robustness to interference.
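For readers unfamiliar with chain codes, the sketch below computes a plain Freeman 8-direction chain code for an ordered, 8-connected contour; the paper's new chain code and its reconstruction-invariant re-encoding are not reproduced here.

```python
# Freeman 8-direction chain code of an ordered contour (illustrative only).
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """points: list of (x, y) pixel coordinates of an 8-connected contour."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIRS[(x1 - x0, y1 - y0)])
    return code

print(chain_code([(0, 0), (1, 0), (2, 1), (2, 2), (1, 2)]))  # [0, 1, 2, 4]
```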

11.
A method for obtaining mapping correspondence points is proposed that expresses texture deformation requirements through vector operations. A surface deformation control mesh is partitioned according to the texture flow of the scene surface, the mesh boundaries serve as deformation control vectors, and mapping points are obtained by composing these control vectors; meanwhile, image fusion based on the brightness of corresponding points blends the texture in with a realistic appearance. A mesh adjustment method is given for fine-scale changes in the surface texture flow; cubic parametric spline curves are further used to fit the mapping region, and the boundary is polygonized based on the radius of curvature to delimit the mapping region precisely. The method can produce highly realistic virtual scene effects.

12.
This paper presents an approach to image understanding on the aspect of unsupervised scene segmentation. With the goal of image understanding in mind, we consider ‘unsupervised scene segmentation’ the task of dividing a given image into semantically meaningful regions without using annotation or other human-labeled information. We seek to investigate how well an algorithm can partition an image with limited human-involved learning procedures. Specifically, we are interested in developing an unsupervised segmentation algorithm that relies only on the contextual prior learned from a set of images. Our algorithm incorporates a small set of images that are similar to the input image in their scene structures. We use the sparse coding technique to analyze the appearance of this set of images; the effectiveness of sparse coding allows us to derive a priori the context of the scene from the set of images. Gaussian mixture models can then be constructed for different parts of the input image based on the sparse-coding contextual prior, and can be combined into a Markov-random-field-based segmentation process. The experimental results show that our unsupervised segmentation algorithm is able to partition an image into semantic regions, such as buildings, roads, trees, and skies, without using human-annotated information. The semantic regions generated by our algorithm can be useful, as pre-processed inputs for subsequent classification-based labeling algorithms, in achieving automatic scene annotation and scene parsing.
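A heavily simplified stand-in for the Gaussian-mixture stage: the sketch below fits a single sklearn GaussianMixture to colour-plus-position pixel features and reads off region labels. The sparse-coding contextual prior and the MRF combination are omitted, and `gmm_segment` and its feature choice are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(image, n_regions=4):
    """Cluster pixels of an HxWx3 image into regions with a GMM on (r,g,b,x,y)."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.concatenate(
        [image.reshape(-1, 3) / 255.0,
         np.stack([xs.ravel() / w, ys.ravel() / h], axis=1)], axis=1)
    labels = GaussianMixture(n_components=n_regions, covariance_type="full",
                             random_state=0).fit_predict(feats)
    return labels.reshape(h, w)
```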

13.
The efficient evaluation of visibility in a three-dimensional scene is a longstanding problem in computer graphics. Visibility evaluations come in many different forms: figuring out what object is visible in a pixel, determining whether a point is visible to a light source, or evaluating the mutual visibility between two surface points. This paper provides a new, experimental view on visibility, based on a probabilistic evaluation of the visibility function. Instead of checking visibility against all possible intervening geometry, the visibility between two points is evaluated by testing only a random subset of objects. The result is not a Boolean value that is either 0 or 1, but a numerical value that can even be negative. Because we use the visibility evaluation as part of the integrand in illumination computations, the probabilistic evaluation of visibility becomes part of the Monte Carlo procedure of estimating the illumination integral, and results in an unbiased computation of illumination values in the scene. Moreover, the number of intersection tests for any given ray is decreased, since only a random selection of geometric primitives is tested. Although probabilistic visibility is an experimental and new idea, we present a practical algorithm for direct illumination that uses the probabilistic nature of visibility evaluations.
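One plausible instance of subset-based visibility estimation, given purely as an illustration and not as the paper's estimator: count blockers in a random subset of size m out of N primitives and return 1 - (N/m)·k. Its expectation is 1 minus the total number of blockers, so it matches the true visibility when at most one primitive blocks the ray, and it can indeed become negative. The `intersects` callback and the signature are hypothetical.

```python
import numpy as np

def probabilistic_visibility(ray, primitives, m, intersects, rng):
    """Estimate visibility along `ray` by testing only m of the N primitives.

    intersects(ray, primitive) -> bool is a user-supplied intersection test.
    """
    n = len(primitives)
    subset = rng.choice(n, size=m, replace=False)   # random subset of primitives
    k = sum(1 for i in subset if intersects(ray, primitives[i]))
    return 1.0 - (n / m) * k                        # may be negative; unbiased for 1 - #blockers
```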

14.
This paper introduces a novel camera attachment for measuring the illumination color spatially in the scene. The illumination color is then used to transform the color appearance in the image into that under white light. The main idea is that the scene inter-reflection through a camera-attached reference surface, the "nose", can, under some conditions, represent the illumination color directly. The illumination measurement principle relies on the satisfaction of the gray world assumption in a local scene area or on the appearance of highlights from dielectric surfaces. Scene inter-reflections are strongly blurred due to optical dispersion on the nose surface and defocusing of the nose surface image. Blurring smoothes the intense highlights, and it thus becomes possible to measure the nose inter-reflection under conditions in which the intensity variation in the main image would exceed the sensor dynamic range. We designed a nose surface that reflects a blurred version of the scene into a small image section, which is interpreted as a spatial illumination image. The nose image is then mapped to the main image to adjust every pixel color. Experimental results showed that the nose inter-reflection color is a good measure of illumination color when the model assumptions are satisfied. The performance of the nose method on real images is presented and compared with the Retinex and scene-inserted white-patch methods.
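A simplified sketch of the correction step: estimate the illumination colour as the mean RGB of a reference region (standing in for the blurred nose reflection, under the gray-world assumption restricted to that region) and divide it out of the image. The `white_balance_from_region` helper is an assumption, not the paper's mapping from the nose image to the main image.

```python
import numpy as np

def white_balance_from_region(image, region_mask):
    """Divide out the illumination colour estimated from a reference region.

    image: HxWx3 uint8 array; region_mask: HxW boolean mask of the reference area.
    """
    img = image.astype(float)
    illum = img[region_mask].mean(axis=0)      # estimated illumination colour (R, G, B)
    illum /= illum.mean()                      # preserve overall brightness
    return np.clip(img / illum, 0, 255).astype(np.uint8)
```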

15.
Image mosaic construction is about stitching together a number of images of the same scene to construct a single image with a larger field of view. The majority of previous work was rooted in the use of a single image-to-image mapping, termed a planar homography, for representing the imaged scene. However, the mapping is applicable only when the imaged scene is a single planar surface, or very distant from the cameras, or imaged under a pure rotation of the camera, and that greatly limits the range of applications of such mosaicking methods. This paper presents a novel mosaicking solution for scenes that are polyhedral (thus consisting of multiple surfaces) and that may be pictured at close range to the camera. The solution has two major advantages. First, it requires only a few correspondences over the entire scene, not correspondences over every surface patch in it. Second, it conquers a seemingly impossible task: warping image data of surfaces that are visible in only one of the input images, which we refer to as singly visible surfaces, to another viewpoint to constitute the mosaic there. We also provide a detailed analysis of what determines whether a singly visible surface can be mosaicked or not. Experimental results on real image data are presented to illustrate the performance of the method.
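For contrast with the paper's multi-surface solution, the sketch below shows the classical single-homography baseline it generalizes, using OpenCV feature matching; it is valid only under the planar / distant / pure-rotation conditions listed in the abstract, and the `stitch_pair` helper assumes two same-sized, same-type images with enough ORB matches.

```python
import cv2
import numpy as np

def stitch_pair(img_ref, img_src):
    """Warp img_src into img_ref's frame with a single RANSAC homography."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_ref, None)
    k2, d2 = orb.detectAndCompute(img_src, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]
    pts_ref = np.float32([k1[m.queryIdx].pt for m in matches])
    pts_src = np.float32([k2[m.trainIdx].pt for m in matches])
    H, _ = cv2.findHomography(pts_src, pts_ref, cv2.RANSAC, 3.0)
    h, w = img_ref.shape[:2]
    canvas = cv2.warpPerspective(img_src, H, (w * 2, h))   # source warped into reference frame
    canvas[:, :w] = img_ref                                # keep reference pixels on the left
    return canvas
```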

16.
王程  张骏  高隽 《中国图象图形学报》2020,25(12):2630-2646
Objective: A light field camera captures both the spatial and angular information of scene rays in a single exposure, which provides the basis for depth estimation. However, specular highlights in light field scenes make depth estimation difficult. To improve robustness to highlights, this paper proposes a highlight-resistant depth estimation method based on the multi-view contextual information of light field images. Method: Exploiting the multi-view nature of light field sub-aperture images, multi-view input branches are created to extract features from images at different viewpoints; dilated convolutions enlarge the network's receptive field to capture wider image context, so that depth information from non-highlighted regions on the same depth plane can be used to recover the depth of highlighted regions. In addition, a new multi-scale feature fusion method is designed that concatenates dilated-convolution features with multiple dilation rates and ordinary convolution features with multiple kernel sizes, further improving the accuracy and smoothness of the estimates. Results: Experiments compare the method with four recent methods on three datasets. The results show good overall depth estimation performance: on the 4D light field benchmark synthetic dataset, compared with the second-best model, the mean square error (MSE) is reduced by 20.24%, the bad pixel rate (BP) is reduced by 2.62%, and the peak signal-to-noise ratio (PSNR) is improved by 4.96%. Qualitative analysis on the CVIA (computer vision and image analysis) Konstanz specular dataset and on real scenes captured with a Lytro Illum camera further verifies the effectiveness and reliability of the algorithm. Ablation results show that the multi-scale feature fusion improves depth estimation in highlighted regions. Conclusion: The proposed model estimates image depth effectively; in particular, depth in highlighted regions is recovered accurately, object boundaries are smooth, and image details are well preserved.

17.
Recovering the 3D shape of objects from stereo image sequences based on a deformable model   Cited by: 1 (self-citations: 0, others: 1)
A new method for recovering the 3D shape of objects is proposed that combines stereo vision and deformable models. Stereo vision is used to derive the 3D coordinates of the object surface; an optical flow model estimates the 3D motion of the object, and the deformable model is moved according to this motion so that it aligns with the object's surface patches; the deformable model then fuses the discrete 3D points obtained from the individual images into a single surface shape. Experimental results show that the method can recover objects with complex shapes.

18.
Single image dehazing based on the dark channel prior and an incident light assumption   Cited by: 1 (self-citations: 1, others: 0)
Objective: Haze is a common weather condition that reduces scene contrast and degrades surface colors in images; to address this, a single image dehazing method based on an incident light assumption is proposed. Method: First, the global dark channel is used for initial dehazing so that the image transmission lies in the range [0, 1]; then, exploiting the uniformity of illumination in hazy weather together with the illumination estimation principle of Retinex, the transmission map is estimated; finally, the restored image is obtained from the transmission map and the initially dehazed image. Results: Comparisons with He's and Fattal's algorithms show that the restored images have clear detail and natural color. Compared with He's dehazing algorithm refined with guided filtering, the proposed algorithm is 93% faster. Conclusion: Extensive comparative experiments show that the algorithm significantly restores haze-degraded images, is effective for both thin and dense haze, is widely applicable, and is simple in principle. It also applies to grayscale images.
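For context, the sketch below implements the baseline dark-channel-prior steps that such methods start from: dark channel, atmospheric light, transmission, and recovery. The paper replaces the transmission step with a Retinex-style illumination estimate, which is not reproduced here; parameter values are conventional defaults, not the paper's.

```python
import cv2
import numpy as np

def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
    """Baseline dark-channel-prior dehazing of an HxWx3 uint8 image."""
    I = img.astype(np.float64) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(I.min(axis=2), kernel)                 # dark channel (local min over RGB)
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = I[idx].mean(axis=0)
    t = 1.0 - omega * cv2.erode((I / A).min(axis=2), kernel)  # transmission estimate
    J = (I - A) / np.maximum(t, t0)[..., None] + A             # scene radiance recovery
    return (np.clip(J, 0, 1) * 255).astype(np.uint8)
```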

19.
We present a new framework for point cloud denoising by patch-collaborative spectral analysis. A collaborative generalization of each surface patch is defined, combining similar patches from the denoised surface. The Laplace–Beltrami operator of the collaborative patch is then used to selectively smooth the surface in a robust manner that can gracefully handle high levels of noise, yet preserves sharp surface features. The resulting denoising algorithm competes favourably with state-of-the-art approaches, and extends patch-based algorithms from the image processing domain to point clouds of arbitrary sampling. We demonstrate the accuracy and noise-robustness of the proposed algorithm on standard benchmark models as well as range scans, and compare it to existing methods for point cloud denoising.
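As a much simpler point of reference, the sketch below performs plain k-NN Laplacian smoothing of a point cloud; unlike the paper's collaborative spectral filter it does not preserve sharp features, and the `laplacian_smooth` helper and its parameters are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def laplacian_smooth(points, k=12, lam=0.5, iters=10):
    """Simple k-NN Laplacian smoothing of an (N, 3) point cloud."""
    pts = np.asarray(points, float).copy()
    for _ in range(iters):
        _, idx = cKDTree(pts).query(pts, k=k + 1)   # first neighbour is the point itself
        centroids = pts[idx[:, 1:]].mean(axis=1)    # centroid of the k true neighbours
        pts += lam * (centroids - pts)              # move each point toward its centroid
    return pts
```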

20.
In light field depth estimation, interference from occlusions lowers the accuracy of depth estimates at occluded regions. To address this, a light field depth estimation algorithm robust to multiple occluders is proposed. Multi-occluder analysis is performed on the angular patch image of each scene point to study the spatial distribution of the occluders. Based on the idea of classification, an improved AP (Affinity Propagation) clustering algorithm classifies the pixels of a scene point's angular patch image, separating occluders from the scene point. For the angular patch image with occluders removed, an objective function combining pixel intensity entropy and central variance is proposed; minimizing this function yields an initial depth estimate for each scene point. A smoothness-constrained energy function based on the MAP-MRF (maximum a posteriori Markov random field) framework is then applied to the initial depth estimates and solved with a graph cut algorithm to obtain the final scene depth. Experimental results show that, compared with existing depth estimation algorithms, the proposed algorithm improves estimation accuracy at occluded regions.
