Similar Literature
A total of 20 similar articles were retrieved (search time: 78 ms).
1.
Image compositing is widely used to combine visual elements from separate source images into a single image. Although recent image compositing techniques are capable of achieving smooth blending of the visual elements from different sources, most of them implicitly assume the source images are taken in the same viewpoint. In this paper, we present an approach to compositing novel image objects from multiple source images which have different viewpoints. Our key idea is to construct 3D proxies for meaningful components of the source image objects, and use these 3D component proxies to warp and seamlessly merge components together in the same viewpoint. To realize this idea, we introduce a coordinate-frame based single-view camera calibration algorithm to handle general types of image objects, a structure-aware cuboid optimization algorithm to get the cuboid proxies for image object components with correct structural relationships, and finally a 3D-proxy transformation guided image warping algorithm to stitch object components. We further describe a novel application based on this compositing approach to automatically synthesize a large number of image objects from a set of exemplars. Experimental results show that our compositing approach can be applied to a variety of image objects, such as chairs, cups, lamps, and robots, and the synthesis application can create novel image objects with significant shape and style variations from a small set of exemplars.

2.
We describe a method for doing image compositing using either 2D geometric shapes or raster images as input primitives. The resolution of the final image is virtually unlimited but, as no frame buffer is used, performance is much less dependent on resolution than with standard painting programs, allowing very large images to be rendered in reasonable time. Many standard features found in compositing programs have been implemented, such as hierarchical data structures for input primitives, lighting control for each layer, and filter operations (for antialiasing or defocus).

3.
Parallel volume rendering using binary-swap compositing
We describe a parallel volume-rendering algorithm, which consists of two parts: parallel ray tracing and parallel compositing. In the most recent implementation on Connection Machine's CM-5 and networked workstations, the parallel volume renderer evenly distributes data to the computing resources available. Without the need to communicate with other processing units, each subvolume is ray traced locally and generates a partial image. The parallel compositing process then merges all resulting partial images in depth order to produce the complete image. The compositing algorithm is particularly effective for massively parallel processing, as it always uses all processing units by repeatedly subdividing the partial images and distributing them to the appropriate processing units. Test results on both the CM-5 and the workstations are promising. They do, however, expose different performance issues for each platform.
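The subdivide-and-exchange step can be illustrated with a short sketch. Below is a minimal single-process simulation of binary-swap compositing, assuming premultiplied RGBA partial images already sorted in depth order and a power-of-two number of processors; the helper names are illustrative, and the real algorithm exchanges image halves over the network rather than compositing whole images.

```python
# Minimal single-process simulation of binary-swap compositing (illustrative).
import numpy as np

def over(front, back):
    """Porter-Duff 'over' for premultiplied RGBA arrays of shape (H, W, 4)."""
    return front + (1.0 - front[..., 3:4]) * back

def binary_swap(partials):
    """Composite partial images given in depth order; len(partials) must be 2**k."""
    images = list(partials)              # index doubles as processor rank
    n, span = len(images), 1
    while span < n:
        snapshot = list(images)
        for rank in range(n):
            partner = rank ^ span        # exchange partner in this round
            lo, hi = min(rank, partner), max(rank, partner)
            # The real algorithm swaps and composites only half of the current
            # image region per round; for clarity we composite whole images.
            images[rank] = over(snapshot[lo], snapshot[hi])
        span *= 2
    return images[0]                     # every rank now holds the full composite

rng = np.random.default_rng(0)
parts = []
for _ in range(4):                       # four synthetic premultiplied partial images
    rgba = rng.random((64, 64, 4))
    rgba[..., :3] *= rgba[..., 3:4]      # premultiply color by alpha
    parts.append(rgba)
print(binary_swap(parts).shape)          # (64, 64, 4)
```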

4.
A panoramic image has a 360° horizontal field of view, and it can provide the viewer the impression of being immersed in the scene. A panorama is created by first taking a sequence of images while rotating the camera about a vertical axis. These images are then projected onto a cylindrical surface before being seamlessly composited. The cross-sectional circumference of the cylindrical panorama is called the compositing length. This work characterizes the error in compositing panoramic images due to errors in some of the intrinsic parameters. The intrinsic camera parameters that are considered are the camera focal length and the radial distortion coefficient. We show that the error in the compositing length is more sensitive to the error in the camera focal length. Especially important is the discovery that the relative error in compositing length is always smaller than the relative error in the focal length. This means that the error in focal length can be corrected by iteratively using the composited length to compute a new and more correct focal length. This compositing approach to camera calibration has the advantage of requiring neither feature detection nor separate prior calibration.
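As a rough illustration of the iterative correction suggested above, the sketch below assumes a full 360° panorama whose compositing length equals the cylinder circumference 2πf in pixels; `measure_compositing_length` is a hypothetical placeholder for the projection-and-compositing step, and the update rule is inferred from that relation rather than taken from the paper.

```python
# Hedged sketch of focal-length refinement from the measured compositing length,
# assuming L = 2*pi*f for a full cylindrical panorama (illustrative only).
import math

def refine_focal_length(f_init, measure_compositing_length, iterations=5):
    """Iteratively invert L = 2*pi*f using the measured compositing length."""
    f = f_init
    for _ in range(iterations):
        L = measure_compositing_length(f)   # stitch with the current f, measure L
        f = L / (2.0 * math.pi)             # a better focal-length estimate
    return f

# Toy stand-in for the stitching step: the measured length is pulled toward an
# unknown true focal length of 700 px (purely illustrative behaviour).
fake_measure = lambda f: 2.0 * math.pi * (0.7 * 700.0 + 0.3 * f)
print(round(refine_focal_length(500.0, fake_measure), 1))   # approaches 700.0
```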

5.
This study strives to establish an objective basis for image compositing in satellite oceanography. Image compositing is a powerful technique for cloud filtering that often emphasizes cloud clearing at the expense of obtaining synoptic coverage. Although incomplete cloud removal in image compositing is readily apparent, the loss of synopticity often is not. Consequently, the primary goal of image compositing should be to obtain the greatest amount of cloud-free coverage or clarity in a period short enough that synopticity, to a significant degree, is preserved. To illustrate the process of image compositing and the problems associated with it, we selected a region off the coast of California and constructed two 16-day image composites, one during the spring and the second during the summer of 2006, using Advanced Very High Resolution Radiometer (AVHRR) InfraRed (IR) satellite imagery. Based on the results of cloud clearing for these two 16-day sequences, rapid cloud clearing occurred up to day 4 or 5, followed by much slower cloud clearing out to day 16, suggesting an explicit basis for the growth in cloud clearing. By day 16, the cloud clearing had, in most cases, exceeded 95%. Based on these results, a shorter compositing period could have been employed without a significant loss in clarity. A method for establishing an objective basis for selecting the period for image compositing is illustrated using observed data. The loss in synopticity, which in principle could be estimated from pattern correlations between the images in the composite, was estimated from a separate time series of SST, since the loss of synopticity in our approach is only a function of time. The autocorrelation function of the detrended residuals provided the decorrelation time scale and the basis for the decay process, which together define the loss of synopticity. The results show that (1) the loss of synopticity and the gain in clarity are inversely related, (2) an objective basis for selecting a compositing period corresponds to the day number where the decay and growth curves for synopticity and clarity intersect, and (3) in this case, the point of intersection occurred 3.2 days into the compositing period. By applying simple mathematics, it was shown that the intersection time for the loss in synopticity and the growth in clarity is directly proportional to the initial conditions required to specify the clarity at the beginning of the compositing period, and inversely proportional to the sum of the rates of growth for clarity and the loss in synopticity. Finally, we consider these results to be preliminary in nature and hope that future work will bring forth significant improvements in the approach outlined in this study.
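The intersection criterion can be reproduced numerically under assumed functional forms. The sketch below uses an exponential decay for synopticity and an exponential growth toward full clarity; the parameter values are illustrative, not those estimated in the study.

```python
# Minimal sketch of the objective compositing-period selection: intersect an
# assumed exponential decay of synopticity with an assumed exponential growth
# of clarity.  Forms and parameter values are illustrative assumptions.
import numpy as np

tau = 6.0        # assumed synopticity decorrelation time scale (days)
c0 = 0.35        # assumed cloud-free fraction at the start of the period
k = 0.8          # assumed clarity growth rate (1/day)

t = np.linspace(0.0, 16.0, 1601)
synopticity = np.exp(-t / tau)                # decays from 1 toward 0
clarity = 1.0 - (1.0 - c0) * np.exp(-k * t)   # grows from c0 toward 1

cross = np.argmax(clarity >= synopticity)     # first time the curves cross
print(f"suggested compositing period: {t[cross]:.1f} days")
```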

6.
Because of intensive inter‐node communications, image compositing has always been a bottleneck in parallel visualization systems. In a heterogeneous networking environment, the variation of link bandwidth and latency adds more uncertainty to the system performance. In this paper, we present a pipelining image compositing algorithm for heterogeneous networking environments, which is able to rearrange the direction of data flow of a compositing pipeline under a strict ordering constraint. We introduce a novel directional image compositing operator that specifies not only the color and α channels of the output but also the direction of data flow when performing compositing. Based on this new operator, we thoroughly study the properties of image compositing pipelines in heterogeneous environments. We develop an optimization algorithm that finds the optimal pipeline from an exponentially large search space in polynomial time. We conducted a comprehensive evaluation on the ns‐3 network simulator. Experimental results demonstrate the efficiency of our method.

7.
The aim of this paper is to achieve seamless image stitching without producing visual artifacts caused by severe intensity discrepancy and structure misalignment, given that the input images are roughly aligned or globally registered. Our new approach is based on structure deformation and propagation for achieving overall consistency in image structure and intensity. The new stitching algorithm, which has found applications in image compositing, image blending, and intensity correction, consists of the following main processes. Depending on the compatibility and distinctiveness of the 2-D features detected in the image plane, single or double optimal partitions are computed subject to the constraints of intensity coherence and structure continuity. Afterwards, specific 1-D features are detected along the computed optimal partitions, from which a set of sparse deformation vectors is derived to encode 1-D feature matching between the partitions. These sparse deformation cues are robustly propagated into the input images by solving the associated minimization problem in the gradient domain, thus providing a uniform framework for the simultaneous alignment of image structure and intensity. We present results in general image compositing and blending, in order to show the effectiveness of our method in producing seamless stitching results from complex input images.
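The gradient-domain propagation step can be illustrated in one dimension: sparse correction values are pinned at a few positions while the scanline's gradients are preserved in a least-squares sense. This is a simplified stand-in for the paper's 2-D formulation, with illustrative weights and helper names.

```python
# 1D illustration of gradient-domain propagation of sparse correction cues.
import numpy as np

def propagate_1d(signal, anchors, weight=10.0):
    """Preserve the gradients of `signal` while softly pinning `anchors` ({index: value})."""
    n = len(signal)
    grad = np.diff(signal)                          # gradients to preserve
    a = np.zeros((n - 1 + len(anchors), n))
    b = np.zeros(n - 1 + len(anchors))
    for i in range(n - 1):                          # rows asking x[i+1] - x[i] = grad[i]
        a[i, i], a[i, i + 1] = -1.0, 1.0
        b[i] = grad[i]
    for j, (idx, val) in enumerate(sorted(anchors.items())):
        r = n - 1 + j                               # rows asking x[idx] = val (weighted)
        a[r, idx] = weight
        b[r] = weight * val
    x, *_ = np.linalg.lstsq(a, b, rcond=None)
    return x

scanline = np.linspace(0.2, 0.8, 50)                # one row crossing the seam
corrected = propagate_1d(scanline, {0: 0.2, 49: 0.6})
print(round(corrected[0], 3), round(corrected[-1], 3))
```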

8.
吴昊  徐丹 《中国图象图形学报》2012,17(11):1333-1346
Digital image compositing has long been an active research topic in image processing, with wide applications in photo editing, graphic design, and film special effects. The basic goal of image compositing is to accurately extract a target object from a source image and seamlessly composite it onto a new background. Classified by the key techniques employed, existing digital image compositing methods fall into three categories: alpha-based compositing, gradient-field-based compositing, and multi-resolution-model-based compositing. This survey first details representative algorithms in each of the three categories and compares them in terms of compositing quality, robustness, and computational efficiency; it then introduces emerging application modes of image compositing, and finally summarizes the common limitations of existing methods and discusses the challenges and future directions of image compositing.
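For reference, the first category rests on the per-pixel compositing equation I = αF + (1 - α)B; a minimal sketch with synthetic arrays:

```python
# Minimal sketch of alpha-based compositing: I = alpha * F + (1 - alpha) * B.
import numpy as np

def alpha_composite(foreground, background, alpha):
    """foreground, background: (H, W, 3) floats in [0, 1]; alpha: (H, W)."""
    a = alpha[..., np.newaxis]               # broadcast over color channels
    return a * foreground + (1.0 - a) * background

fg = np.ones((4, 4, 3)) * [1.0, 0.0, 0.0]    # red object
bg = np.ones((4, 4, 3)) * [0.0, 0.0, 1.0]    # blue background
matte = np.full((4, 4), 0.75)                # soft matte from a matting step
print(alpha_composite(fg, bg, matte)[0, 0])  # -> [0.75 0.   0.25]
```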

9.
A new color compositing method for volume rendering in 3D data field visualization
In scientific visualization research, current methods for directly rendering 3D data fields all composite the color of each pixel linearly and recursively using a compositing operator. This paper proposes a new method that replaces this linear recursive, depth-ordered compositing with a simple distributed compositing over a 2D surface (a virtual plane). The new method eliminates the multiplications required by the linear recursive approach when compositing colors. More significantly, when this method is combined with projection-based imaging for volume rendering of 3D data fields, the already constructed virtual plane can be used to conveniently generate visualizations of different sizes without repeating the projection step. The paper also proves that, within a given precision, the enlarged image conveys the visual information that interpolation would provide at the corresponding precision.

10.
A general method for compositing 3D-rendered images
Compositing 3D-rendered images is an interesting topic. To make complex composite images more realistic, the lighting effects of the scene must also be composited; existing methods handle lighting effects poorly, especially shadows. This paper proposes a new method for compositing 3D-rendered images that combines Duff's RGBAZ method with the shadow buffer technique.
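A simplified sketch of depth-based (RGBAZ-style) compositing is shown below: for each pixel, the element with the smaller z is placed in front and composited "over" the other. Duff's original operator and the shadow-buffer extension described above also handle lighting and sub-pixel coverage, which are omitted here.

```python
# Simplified per-pixel depth-based compositing in the spirit of RGBAZ.
import numpy as np

def rgbaz_composite(rgba_a, z_a, rgba_b, z_b):
    """rgba_*: (H, W, 4) premultiplied; z_*: (H, W) depth, smaller = nearer."""
    a_in_front = (z_a <= z_b)[..., np.newaxis]
    front = np.where(a_in_front, rgba_a, rgba_b)
    back = np.where(a_in_front, rgba_b, rgba_a)
    out_rgba = front + (1.0 - front[..., 3:4]) * back   # 'over' with nearer in front
    out_z = np.minimum(z_a, z_b)
    return out_rgba, out_z

# Two synthetic 2x2 layers: element A is nearer in the left column only.
rgba_a = np.zeros((2, 2, 4)); rgba_a[..., 0] = 1.0; rgba_a[..., 3] = 1.0   # opaque red
rgba_b = np.zeros((2, 2, 4)); rgba_b[..., 2] = 1.0; rgba_b[..., 3] = 1.0   # opaque blue
z_a = np.array([[1.0, 5.0], [1.0, 5.0]])
z_b = np.array([[3.0, 3.0], [3.0, 3.0]])
merged, z = rgbaz_composite(rgba_a, z_a, rgba_b, z_b)
print(merged[0, 0, :3], merged[0, 1, :3])   # red wins on the left, blue on the right
```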

11.
To obtain compositing results with fewer artifacts, this paper proposes a matting algorithm that is fused with image compositing, unifying matting and compositing into a single process. By introducing information about the target background into the matting stage, the new algorithm can perform matting in a more targeted way. Theoretical analysis and experimental results show that when the target background is similar to the source image, the new algorithm effectively reduces the impact of matte estimation errors; when the colors of the target background differ substantially from the source image, it still produces good compositing results.

12.
Images synthesized by light field rendering exhibit aliasing artifacts when the light field is undersampled; adding new light field samples improves the image quality and reduces aliasing but new samples are expensive to acquire. Light field rays are traditionally gathered directly from the source images, but new rays can also be inferred through geometry estimation. This paper describes a light field rendering approach based on this principle that estimates geometry from the set of source images using multi‐baseline stereo reconstruction to supplement the existing light field rays to meet the minimum sampling requirement. The rendering and reconstruction steps are computed over a set of planes in the scene volume, and output images are synthesized by compositing results from these planes together. The planes are each processed independently and the number of planes can be adjusted to scale the amount of computation to achieve the desired frame rate. The reconstruction fidelity (and by extension image quality) is improved by a library of matching templates to support matches along discontinuities in the image or geometry (e.g. object profiles and concavities). Given a set of silhouette images, the visual hull can be constructed and applied to further improve reconstruction by removing outlier matches. The algorithm is efficiently implemented by a set of image filter operations on commodity graphics hardware and achieves image synthesis at interactive rates.

13.
In the sort-last-sparse parallel volume rendering system on distributed memory multicomputers, one can achieve a very good performance improvement in the rendering phase by increasing the number of processors. This is because each processor can render images locally without communicating with other processors. However, in the compositing phase, a processor has to exchange local images with other processors. When the number of processors exceeds a threshold, the image compositing time becomes a bottleneck. In this paper, we propose three compositing methods to efficiently reduce the compositing time in parallel volume rendering. They are the binary-swap with bounding rectangle (BSBR) method, the binary-swap with run-length encoding and static load-balancing (BSLC) method, and the binary-swap with bounding rectangle and run-length encoding (BSBRC) method. The proposed methods were implemented on an SP2 parallel machine along with the binary-swap compositing method. The experimental results show that the BSBRC method has the best performance among these four methods.
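The bounding-rectangle idea behind BSBR and BSBRC can be sketched as follows: before each swap, a processor sends only the axis-aligned box that actually contains non-transparent pixels rather than its whole half-image. The helper names below are illustrative.

```python
# Minimal sketch of bounding-rectangle packing before an image exchange.
import numpy as np

def bounding_rectangle(rgba):
    """Return (row0, row1, col0, col1) of the non-transparent region, or None."""
    occupied = rgba[..., 3] > 0.0
    if not occupied.any():
        return None                       # fully transparent: nothing to send
    rows = np.flatnonzero(occupied.any(axis=1))
    cols = np.flatnonzero(occupied.any(axis=0))
    return int(rows[0]), int(rows[-1]) + 1, int(cols[0]), int(cols[-1]) + 1

def pack_for_exchange(rgba):
    box = bounding_rectangle(rgba)
    if box is None:
        return None, np.empty((0, 0, 4))
    r0, r1, c0, c1 = box
    return box, rgba[r0:r1, c0:c1]        # only this block crosses the network

img = np.zeros((512, 512, 4))
img[100:140, 200:260, 3] = 1.0            # small opaque blob
box, block = pack_for_exchange(img)
print(box, block.shape)                   # (100, 140, 200, 260) (40, 60, 4)
```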

14.
Multi‐temporal compositing of SPOT‐4 VEGETATION imagery over tropical regions was tested to produce spatially coherent monthly composite images with reduced cloud contamination, for the year 2000. Monthly composite images generated from daily images (S1 product, 1‐km) encompassing different land cover types of the state of Mato Grosso, Brazil, were evaluated in terms of cloud contamination and spatial consistency. A new multi‐temporal compositing algorithm was tested which uses different criteria for vegetated and non‐vegetated or sparsely vegetated land cover types. Furthermore, a principal components transformation that rescales the noise in the image—Maximum Noise Fraction (MNF)—was applied to a multi‐temporal dataset of monthly composite images and tested as a method of additional signal‐to‐noise ratio improvement. The back‐transformed dataset using the first 12 MNF eigenimages yielded an accurate reconstruction of monthly composite images from the dry season (May to September) and enhanced spatial coherence from wet season images (October to April), as evaluated by the Moran's I index of spatial autocorrelation. This approach is useful for land cover change studies in the tropics, where it is difficult to obtain cloud‐free optical remote sensing imagery. In Mato Grosso, wet season composite images are important for monitoring agricultural crop cycles.
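For contrast with the multi-criterion algorithm tested here, the baseline per-pixel maximum-value compositing can be sketched in a few lines (synthetic data; the band layout is an illustrative assumption):

```python
# Minimal sketch of per-pixel maximum-NDVI compositing over one month.
import numpy as np

def max_ndvi_composite(ndvi_stack, reflectance_stack):
    """ndvi_stack: (days, H, W); reflectance_stack: (days, H, W, bands)."""
    best_day = np.argmax(ndvi_stack, axis=0)                 # (H, W) chosen day index
    h_idx, w_idx = np.indices(best_day.shape)
    composite = reflectance_stack[best_day, h_idx, w_idx]    # (H, W, bands)
    return composite, best_day

days, h, w, bands = 30, 64, 64, 4
rng = np.random.default_rng(1)
ndvi = rng.uniform(-0.1, 0.9, size=(days, h, w))             # clouds depress NDVI
refl = rng.random((days, h, w, bands))
comp, picked = max_ndvi_composite(ndvi, refl)
print(comp.shape, picked.min(), picked.max())
```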

15.
16.
We present a new, high‐quality compositing pipeline and navigation approach for variable resolution imagery. The motivation of this work is to explore the use of variable resolution images as a quick and accessible alternative to traditional gigapixel mosaics. Instead of the common tedious acquisition of many images using specialized hardware, variable resolution images can achieve similarly deep zooms as large mosaics, but with only a handful of images. For this approach to be a viable alternative, the state‐of‐the‐art in variable resolution compositing needs to be improved to match the high‐quality approaches commonly used in mosaic compositing. To this end, we provide a novel, variable resolution mosaic seam calculation and gradient domain color correction. This approach includes a new priority order graph cuts computation along with a practical data structure to keep memory overhead low. In addition, navigating variable resolution images is challenging, especially at the zoom factors targeted in this work. To address this challenge, we introduce a new image interaction for variable resolution imagery: a pan that automatically, and smoothly, hugs available resolution. Finally, we provide several real‐world examples of our approach producing high‐quality variable resolution mosaics with deep zooms typically associated with gigapixel photography.

17.
We propose a novel approach for denoising Monte Carlo path traced images, which uses data from individual samples rather than relying on pixel aggregates. Samples are partitioned into layers, which are filtered separately, giving the network more freedom to handle outliers and complex visibility. Finally the layers are composited front-to-back using alpha blending. The system is trained end-to-end, with learned layer partitioning, filter kernels, and compositing. We obtain similar image quality as recent state-of-the-art sample based denoisers at a fraction of the computational cost and memory requirements.
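The final compositing step described above, blending the filtered layers front to back, can be sketched as follows (layer contents are synthetic placeholders):

```python
# Minimal sketch of front-to-back alpha compositing of filtered layers.
import numpy as np

def composite_front_to_back(layers):
    """layers: list of (H, W, 4) premultiplied RGBA, ordered front to back."""
    out = np.zeros_like(layers[0])
    transmittance = np.ones(layers[0].shape[:2] + (1,))
    for layer in layers:
        out += transmittance * layer                 # add what still shines through
        transmittance *= (1.0 - layer[..., 3:4])     # attenuate by this layer's alpha
    return out

rng = np.random.default_rng(2)
denoised_layers = []
for _ in range(3):
    rgba = rng.random((8, 8, 4)) * 0.5
    rgba[..., :3] *= rgba[..., 3:4]                  # premultiply color by alpha
    denoised_layers.append(rgba)
print(composite_front_to_back(denoised_layers)[0, 0])
```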

18.
Video compositing, the editing and integrating of many video sequences into a single presentation, is an integral part of advanced multimedia services. Single-user compositing systems have been suggested in the past, but when they are extended to accommodate many users, the amount of memory required quickly grows out of hand. We propose two new architectures for digital video compositing in a multiuser environment that are memory-efficient and can operate in real time. Both architectures decouple the task of memory management from compositing processing. We show that under hard throughput and bandwidth constraints, a memoryless solution for transferring data from many video sources to many users does not exist. We overcome this using (i) a dynamic memory buffering architecture and (ii) a constant memory bandwidth solution that transforms the sources-to-users transfer schedule into two schedules, then pipelines the computation. The architectures support opaque overlapping of images, arbitrarily shaped images, and images whose shapes dynamically change from frame to frame.

19.
Exposing Digital Forgeries From JPEG Ghosts
When creating a digital forgery, it is often necessary to combine several images, for example, when compositing one person's head onto another person's body. If these images were originally of different JPEG compression quality, then the digital composite may contain a trace of the original compression qualities. To this end, we describe a technique to detect whether part of an image was initially compressed at a lower quality than the rest of the image. This approach is applicable to images of high and low quality as well as resolution.
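A minimal sketch of the JPEG-ghost idea: re-save the image at a range of qualities and, per block, find the quality at which the difference to the re-saved copy is smallest; blocks whose minimum sits at a lower quality than the rest of the image are candidate splices. This uses Pillow, averages differences over fixed blocks, and omits per-quality normalization, so it is a simplification rather than the authors' exact implementation; the input filename is hypothetical.

```python
# Simplified JPEG-ghost map: per block, the recompression quality that best
# matches the image.  Not the authors' exact implementation.
import io
import numpy as np
from PIL import Image

def ghost_map(img, qualities=range(30, 96, 5), block=16):
    ref = np.asarray(img, dtype=np.float64)
    h, w = (ref.shape[0] // block) * block, (ref.shape[1] // block) * block
    ref = ref[:h, :w]
    diffs = []
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)        # recompress at quality q
        buf.seek(0)
        resaved = np.asarray(Image.open(buf), dtype=np.float64)[:h, :w]
        d = ((ref - resaved) ** 2).mean(axis=-1)       # per-pixel squared error
        d = d.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        diffs.append(d)
    diffs = np.stack(diffs)                            # (num_qualities, H/b, W/b)
    best_q = np.array(list(qualities))[np.argmin(diffs, axis=0)]
    return best_q                                      # unusually low values flag splices

img = Image.open("composite.jpg").convert("RGB")       # hypothetical input file
print(ghost_map(img)[:4, :4])
```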

20.

The main objective of this study was to compare the adequacy of various multitemporal image compositing algorithms to produce composite images suitable for burned area analysis. Satellite imagery from the NOAA Advanced Very High Resolution Radiometer (AVHRR) from three different regions (Portugal, central Africa, and South America) was used to compare six algorithms, two of which involve the sequential application of two criteria. Performance of the algorithms was assessed with the Jeffries-Matusita distance, to quantify spectral separability of the burned and unburned classes in the composite images. The ability of the algorithms to avoid the retention of cloud shadows was assessed visually with red-green-blue colour composites, and the level of radiometric speckle in the composite images was quantified with the Moran's I spatial autocorrelation statistic. The commonly used NDVI maximum value compositing procedure was found, from all standpoints, to be the least appropriate for producing composites for burned area mapping. The best spectral separability is provided by the minimum channel 2 (m2) compositing approach, which has, however, the drawback of retaining cloud shadows. A two-criterion approach which complements m2 with maximization of brightness temperature in a subset of the data (m2M4) is considered the better method.
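The two criteria singled out above can be sketched per pixel as follows: plain m2 keeps the day with minimum channel-2 reflectance, while m2M4 restricts the choice to the few lowest-channel-2 days and then picks the warmest channel-4 brightness temperature among them to reject cloud shadows. The subset size and array layout are illustrative assumptions.

```python
# Minimal sketch of the m2 and m2M4 compositing criteria over a month of data.
import numpy as np

def composite_m2(ch2):
    """ch2: (days, H, W) channel-2 reflectance -> chosen day index per pixel."""
    return np.argmin(ch2, axis=0)

def composite_m2M4(ch2, bt4, subset=4):
    """Among the `subset` lowest-ch2 days per pixel, keep the warmest in ch4."""
    order = np.argsort(ch2, axis=0)[:subset]             # (subset, H, W) day indices
    h_idx, w_idx = np.indices(ch2.shape[1:])
    bt_subset = bt4[order, h_idx, w_idx]                  # brightness temps of the subset
    pick = np.argmax(bt_subset, axis=0)                   # position within the subset
    return np.take_along_axis(order, pick[None], axis=0)[0]

days, h, w = 30, 32, 32
rng = np.random.default_rng(3)
ch2 = rng.random((days, h, w))                            # channel-2 reflectance
bt4 = rng.uniform(270.0, 310.0, size=(days, h, w))        # channel-4 temperature (K)
print(composite_m2(ch2).shape, composite_m2M4(ch2, bt4).shape)
```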
