Similar Documents
20 similar documents found (search time: 31 ms)
1.
The aim of this paper is to achieve seamless image stitching without producing visual artifacts caused by severe intensity discrepancy and structure misalignment, given that the input images are roughly aligned or globally registered. Our new approach is based on structure deformation and propagation for achieving overall consistency in image structure and intensity. The new stitching algorithm, which has found applications in image compositing, image blending, and intensity correction, consists of the following main processes. Depending on the compatibility and distinctiveness of the 2-D features detected in the image plane, single or double optimal partitions are computed subject to the constraints of intensity coherence and structure continuity. Afterwards, specific 1-D features are detected along the computed optimal partitions, from which a set of sparse deformation vectors is derived to encode 1-D feature matching between the partitions. These sparse deformation cues are robustly propagated into the input images by solving the associated minimization problem in gradient domain, thus providing a uniform framework for the simultaneous alignment of image structure and intensity. We present results in general image compositing and blending, in order to show the effectiveness of our method in producing seamless stitching results from complex input images.
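For readers unfamiliar with gradient-domain propagation, the sketch below shows the basic idea of spreading sparse corrections measured along a seam into the rest of the image by solving a Laplace-type problem. It is a toy stand-in, not the paper's algorithm; the function name `propagate_corrections`, the Jacobi iteration, and the wrap-around borders are illustrative assumptions.

```python
import numpy as np

def propagate_corrections(image, seed_mask, seed_values, n_iters=500):
    """Diffuse sparse correction values (e.g. intensity offsets measured along a
    partition) into the whole image with Jacobi iterations on a Laplace equation.
    Toy stand-in for the gradient-domain minimization described in the paper."""
    corr = np.zeros_like(image, dtype=np.float64)
    corr[seed_mask] = seed_values[seed_mask]          # seed_values: full-size array
    for _ in range(n_iters):
        # average of the four neighbours (np.roll wraps borders, fine for a demo)
        avg = 0.25 * (np.roll(corr, 1, 0) + np.roll(corr, -1, 0) +
                      np.roll(corr, 1, 1) + np.roll(corr, -1, 1))
        corr = np.where(seed_mask, corr, avg)         # keep seam constraints fixed
    return image + corr

# usage: blended = propagate_corrections(img, seam_mask, seam_offsets)
```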

2.
The sort-last parallel rendering approach is widely used for parallel visualization of large-scale scientific data, and the parallel image compositing method directly determines the overall rendering performance of sort-last pipelines. To address shortcomings of existing Direct Send image compositing, this paper proposes a Direct Send parallel compositing method with minimal communication overhead. The method first uses GPU multithreading to compute prefix sums of the valid pixels in each rendering node's image, and then applies dynamic programming to find the optimal split positions of the prefix-sum list so that the communication cost of parallel compositing is minimized. This departs from the static, uniform assignment of image tiles used in conventional Direct Send. Experiments show that the proposed Direct Send method clearly outperforms existing methods in parallel compositing performance, laying a foundation for building more efficient large-scale parallel image compositing schemes on top of Direct Send.
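A minimal sketch of the partitioning step described above, assuming the valid-pixel counts have already been reduced to a 1-D list (e.g. one count per scanline): a classic linear-partition dynamic program that chooses split positions so that the heaviest slab, and hence the worst communication load, is as small as possible. Function and variable names are illustrative, and the GPU prefix-sum step is replaced by `np.cumsum`.

```python
import numpy as np

def balanced_slabs(valid_counts, n_compositors):
    """Split a 1-D list of per-row valid-pixel counts into contiguous slabs so
    that the largest slab is as small as possible. A simplified stand-in for the
    optimal split computed in the paper (classic linear-partition DP)."""
    prefix = np.concatenate(([0], np.cumsum(valid_counts)))   # prefix sums
    n = len(valid_counts)
    INF = float('inf')
    # cost[k][i] = best achievable max-load when splitting rows [0, i) into k slabs
    cost = [[INF] * (n + 1) for _ in range(n_compositors + 1)]
    cut = [[0] * (n + 1) for _ in range(n_compositors + 1)]
    cost[0][0] = 0
    for k in range(1, n_compositors + 1):
        for i in range(1, n + 1):
            for j in range(k - 1, i):
                load = max(cost[k - 1][j], prefix[i] - prefix[j])
                if load < cost[k][i]:
                    cost[k][i], cut[k][i] = load, j
    # recover the split positions
    cuts, i = [], n
    for k in range(n_compositors, 0, -1):
        cuts.append(cut[k][i])
        i = cut[k][i]
    return sorted(cuts)[1:]   # row indices where slabs 2..K start

# usage: splits = balanced_slabs(per_row_valid_pixels, n_compositors=8)
```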

3.
We present a new, high‐quality compositing pipeline and navigation approach for variable resolution imagery. The motivation of this work is to explore the use of variable resolution images as a quick and accessible alternative to traditional gigapixel mosaics. Instead of the common tedious acquisition of many images using specialized hardware, variable resolution images can achieve similarly deep zooms as large mosaics, but with only a handful of images. For this approach to be a viable alternative, the state‐of‐the‐art in variable resolution compositing needs to be improved to match the high‐quality approaches commonly used in mosaic compositing. To this end, we provide a novel, variable resolution mosaic seam calculation and gradient domain color correction. This approach includes a new priority order graph cuts computation along with a practical data structure to keep memory overhead low. In addition, navigating variable resolution images is challenging, especially at the zoom factors targeted in this work. To address this challenge, we introduce a new image interaction for variable resolution imagery: a pan that automatically, and smoothly, hugs available resolution. Finally, we provide several real‐world examples of our approach producing high‐quality variable resolution mosaics with deep zooms typically associated with gigapixel photography.

4.
A New Color Compositing Method for Volume Rendering of 3D Data Fields
In scientific visualization research, current methods for directly rendering 3D data fields all composite the color of each pixel by applying a compositing operator in a linear recursive fashion. This paper proposes a new method that replaces this linear recursive, depth-wise compositing with a simple distributed compositing on a 2D surface (a virtual plane). The new method eliminates the multiplications required by the linear recursive approach when compositing colors. More significantly, when the method is combined with projection-based volume rendering of 3D data fields, the virtual plane that has already been formed can be used to conveniently generate visualizations at different sizes without repeating the projection. The paper also proves that, within a given precision, the enlarged image conveys visual information equivalent to interpolation at the corresponding precision.
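For context, the conventional linear recursive compositing that this paper sets out to avoid is the front-to-back "over" recurrence sketched below; note the per-step multiplications that the proposed virtual-plane scheme eliminates. This is the textbook operator, not the paper's new method.

```python
def composite_front_to_back(samples):
    """Conventional linear recursive compositing along a ray (front-to-back
    'over' operator). Each sample is a (color, alpha) pair ordered front to back."""
    color_acc, alpha_acc = 0.0, 0.0
    for color, alpha in samples:
        color_acc += (1.0 - alpha_acc) * alpha * color   # multiplications the paper's method avoids
        alpha_acc += (1.0 - alpha_acc) * alpha
        if alpha_acc >= 0.999:                           # early ray termination
            break
    return color_acc, alpha_acc

# usage: c, a = composite_front_to_back([(0.8, 0.2), (0.4, 0.5), (0.1, 0.9)])
```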

5.
Direct volume rendering based on projective methods works by projecting, in visibility order, the polyhedral cells of a mesh onto the image plane, and incrementally compositing the cell's color and opacity into the final image. Crucial to this method is the computation of a visibility ordering of the cells. If the mesh is "well-behaved" (acyclic and convex), then the MPVO method of Williams provides a very fast sorting algorithm; however, this method only computes an approximate ordering in general datasets, resulting in visual artifacts when rendered. A recent method of Silva et al. removed the assumption that the mesh is convex, by means of a sweep algorithm used in conjunction with the MPVO method; their algorithm is substantially faster than previous exact methods for general meshes.
In this paper we propose a new technique, called BSP-XMPVO, which is based on a fast and simple way of using binary space partitions on the boundary elements of the mesh to augment the ordering produced by MPVO. Our results are shown to be orders of magnitude better than previous exact methods of sorting cells.
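As a rough illustration of why binary space partitions help here, the sketch below emits cells in back-to-front (painter's) order by always descending into the half-space farther from the eye first. It is a toy traversal under an assumed `BSPNode` structure, not the published BSP-XMPVO algorithm.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class BSPNode:
    # splitting plane ax + by + cz = d; a leaf has plane=None and carries cells
    plane: Optional[Tuple[float, float, float, float]] = None
    cells: List[object] = field(default_factory=list)
    front: Optional['BSPNode'] = None
    back: Optional['BSPNode'] = None

def back_to_front(node, eye):
    """Classic painter's-order traversal of a BSP tree: visit the half-space
    away from the eye first, so cells come out back to front."""
    if node is None:
        return
    if node.plane is None:                      # leaf: emit its cells
        yield from node.cells
        return
    a, b, c, d = node.plane
    x, y, z = eye
    eye_in_front = a * x + b * y + c * z - d >= 0
    far, near = (node.back, node.front) if eye_in_front else (node.front, node.back)
    yield from back_to_front(far, eye)          # farther half-space first
    yield from node.cells                       # cells lying on the splitting plane
    yield from back_to_front(near, eye)         # nearer half-space last
```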

6.
7.
Morphological image compositing
Image mosaicking can be defined as the registration of two or more images that are then combined into a single image. Once the images have been registered to a common coordinate system, the problem amounts to the definition of a selection rule to output a unique value for all those pixels that are present in more than one image. This process is known as image compositing. In this paper, we propose a compositing procedure based on mathematical morphology and its marker-controlled segmentation paradigm. Its scope is to position seams along salient image structures so as to diminish their visibility in the output mosaic even in the absence of radiometric corrections or blending procedures. We also show that it is suited to the seamless minimization of undesirable transient objects occurring in the regions where two or more images overlap. The proposed methodology and algorithms are illustrated for the composition of satellite images minimizing cloud cover.
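A hedged sketch of the marker-controlled idea, using scikit-image's watershed: the exclusive region of each input acts as a marker, and the flooding relief is a simple Sobel gradient so the resulting seam tends to settle on salient structures. Inputs are assumed to be registered grayscale floats with boolean coverage masks; this is an illustration in the spirit of the paper, not its exact algorithm.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def seam_by_watershed(img_a, img_b, mask_a, mask_b):
    """Marker-controlled seam placement: flood the mosaic footprint from each
    image's exclusive region; the boundary between the two flooded regions is
    the seam, and it follows ridges of the gradient relief."""
    footprint = mask_a | mask_b
    relief = sobel(np.where(mask_a & mask_b,
                            0.5 * (img_a + img_b),
                            np.where(mask_a, img_a, img_b)))
    markers = np.zeros(img_a.shape, dtype=np.int32)
    markers[mask_a & ~mask_b] = 1          # pixels covered only by image A
    markers[mask_b & ~mask_a] = 2          # pixels covered only by image B
    labels = watershed(relief, markers, mask=footprint)
    # cut-and-paste along the seam (pixels outside the footprint are irrelevant)
    return np.where(labels == 1, img_a, img_b)
```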

8.
Exposing Digital Forgeries From JPEG Ghosts
When creating a digital forgery, it is often necessary to combine several images, for example, when compositing one person's head onto another person's body. If these images were originally of different JPEG compression quality, then the digital composite may contain a trace of the original compression qualities. To this end, we describe a technique to detect whether part of an image was initially compressed at a lower quality than the rest of the image. This approach is applicable to images of both high and low quality as well as resolution.
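The core of the JPEG-ghost idea can be sketched in a few lines: re-save the image at a range of qualities and inspect block-averaged squared differences, looking for regions whose error dips at a quality below that of the rest of the image. The block size, quality range, and function name below are illustrative choices, not values from the paper.

```python
import io
import numpy as np
from PIL import Image

def jpeg_ghost_maps(img, qualities=range(30, 95, 5), block=16):
    """Re-save an image at several JPEG qualities and return block-averaged
    squared-error maps; a region previously compressed at quality q shows a
    spatially localized dip (a 'ghost') near q."""
    img = img.convert('RGB')
    ref = np.asarray(img, dtype=np.float64)
    maps = {}
    for q in qualities:
        buf = io.BytesIO()
        img.save(buf, format='JPEG', quality=q)
        buf.seek(0)
        rec = np.asarray(Image.open(buf).convert('RGB'), dtype=np.float64)
        diff = ((ref - rec) ** 2).mean(axis=2)            # per-pixel squared error
        h, w = diff.shape
        hb, wb = h - h % block, w - w % block
        blocks = diff[:hb, :wb].reshape(hb // block, block, wb // block, block)
        maps[q] = blocks.mean(axis=(1, 3))                # block-averaged error map
    return maps   # inspect maps[q] for localized minima

# usage: ghosts = jpeg_ghost_maps(Image.open('composite.jpg'))
```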

9.
Image compositing is widely used to combine visual elements from separate source images into a single image. Although recent image compositing techniques are capable of achieving smooth blending of the visual elements from different sources, most of them implicitly assume the source images are taken in the same viewpoint. In this paper, we present an approach to compositing novel image objects from multiple source images which have different viewpoints. Our key idea is to construct 3D proxies for meaningful components of the source image objects, and use these 3D component proxies to warp and seamlessly merge components together in the same viewpoint. To realize this idea, we introduce a coordinate-frame based single-view camera calibration algorithm to handle general types of image objects, a structure-aware cuboid optimization algorithm to get the cuboid proxies for image object components with correct structure relationship, and finally a 3D-proxy transformation guided image warping algorithm to stitch object components. We further describe a novel application based on this compositing approach to automatically synthesize a large number of image objects from a set of exemplars. Experimental results show that our compositing approach can be applied to a variety of image objects, such as chairs, cups, lamps, and robots, and the synthesis application can create novel image objects with significant shape and style variations from a small set of exemplars.  相似文献   

10.
Collage is a compositing art form that assembles materials of different kinds. This paper introduces a new collage creation method that represents an input image with foreground cutouts selected from theme-related web images, producing whimsical collages in the style of Arcimboldo. Given the massive scale of web image data, suitable foreground cutouts can always be found to match each segmented part of the input image. The selected cutouts are then assembled according to a set of rules so that, as a whole, they reproduce the shape and color of the input image while each individual cutout remains recognizable. Experimental results and user studies show that the method can effectively generate entertaining Arcimboldo-style collages.

11.
A new technology was developed at the Canada Centre for Remote Sensing (CCRS) for generating Canada-wide and North America continental scale clear-sky composites at 250 m spatial resolution for all seven MODIS land spectral bands (B1–B7). The MODIS Level 1B (MOD02) swath level data are used as input to circumvent the problems with image distortion in the mid latitude and polar regions inherent to the global sinusoidal (SIN) projection utilized for the standard MODIS data products. The MODIS 500 m land bands B3 to B7 are first downscaled to 250 m resolution using an adaptive regression and normalization scheme for compatibility with the 250 m bands B1 and B2. A new method has been developed to produce the mask of clear-sky, cloud and cloud shadow at 250 m resolution. It shows substantial advantages in comparison with the MODIS 250 m standard cloud masks. The testing of new cloud mask showed that it is in reasonable agreement with the MODIS 1-km standard product once it is aggregated to 1-km scale, while the cloud shadow detection looks more reliable with the new methodology. Nevertheless, more quantitative analyses of the presented scene identification technique are required to understand its performance over the range of input scenes in various seasons. The new clear-sky compositing scheme employs a scene-dependent decision matrix. It is demonstrated that this new scheme provides better results than any others based on a single compositing criterion, such as maximum NDVI or minimum visible reflectance. To account for surface bi-directional properties, two clear-sky composites for the same time period are produced by separating backward scattering and forward scattering geometries, which separate pixels with the sun-satellite relative azimuth angles within 90°–270° and outside of this range. Comparison with Landsat imagery and with MODIS standard composite products demonstrated the advantage of the new technique for screening cloud and cloud shadow, and generating high spatial resolution MODIS clear-sky composites. The new data products are mapped in the Lambert Conformal Conic (LCC) projection for Canada and the Lambert Azimuthal Equal-Area (LAEA) projection for North America. Presently this activity is limited to MODIS/TERRA due to known problems with band-to-band registration and noisy SWIR channels on MODIS/AQUA.  相似文献   
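For comparison, the single-criterion baseline mentioned above (maximum-NDVI compositing) is easy to state; the sketch below picks, per pixel, the observation with the highest NDVI from a stack of dates. This is only the baseline the paper improves upon, with illustrative array shapes.

```python
import numpy as np

def max_ndvi_composite(red_stack, nir_stack):
    """Single-criterion clear-sky compositing: for each pixel keep the
    observation (along the time axis) with the maximum NDVI.
    Inputs are (T, H, W) stacks of red and near-infrared reflectances."""
    ndvi = (nir_stack - red_stack) / (nir_stack + red_stack + 1e-9)
    best = np.argmax(ndvi, axis=0)                 # winning date per pixel
    rows, cols = np.indices(best.shape)
    return red_stack[best, rows, cols], nir_stack[best, rows, cols]

# usage: red_comp, nir_comp = max_ndvi_composite(red_dates, nir_dates)
```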

12.
Douglas Comer 《Software》1982,12(11):1069-1079
Ffg is a simple database system that consists of a set of many independent programs, called primitives, each of which performs a single operation. The user composes primitives by connecting the output of one to the input of the next to achieve a computation. Because they are independent, primitives are easily modified or replaced, and one can add programs to the set of primitives. Unlike the strongly typed interconnections provided by many languages, interconnections of ffg primitives are not type-checked. The implementation of ffg is also novel because ffg primitives are constructed out of lower level primitives instead of conventional programs. The selection of primitives, the type checking mechanism and the implementation are discussed.
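The composition style described here resembles a Unix pipeline; a minimal Python sketch is shown below, where the output bytes of one primitive feed the input of the next with no type checking. The primitive names in the usage comment are made up, and this is an analogy to ffg's design rather than its actual implementation.

```python
import subprocess

def compose(primitives, input_bytes=b''):
    """Run a chain of independent programs, piping the output of each into the
    next as an untyped byte stream (each element of `primitives` is an argv list)."""
    data = input_bytes
    for cmd in primitives:
        proc = subprocess.run(cmd, input=data, stdout=subprocess.PIPE, check=True)
        data = proc.stdout
    return data

# e.g. compose([['select', 'age>30'], ['project', 'name'], ['sort']], table_bytes)
```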

13.
A Survey of Digital Image Super-Resolution Reconstruction Techniques
The goal of image super-resolution reconstruction is to estimate a visually pleasing high-resolution image from one or more degraded low-resolution images. It evolved from traditional image restoration and reconstruction techniques and exploits complementary information across images to recover more detail than any single image provides. Super-resolution techniques fall into two main categories: reconstruction-based and learning-based. Reconstruction-based methods estimate the high-resolution image from the input images according to a specific degradation model. Learning-based methods acquire prior knowledge from training samples to supplement the information in the input image, and can achieve better results than reconstruction-based algorithms. This paper systematically reviews super-resolution algorithms and points out that image registration, construction of the degradation model, blind estimation, the design of learning models, and learning algorithms remain the main open problems in image super-resolution, as well as directions for further research.
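The degradation model underlying reconstruction-based super-resolution is usually written y = DBx + n (blur, decimate, add noise); a minimal sketch, with illustrative blur and noise parameters, is given below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr, scale=4, blur_sigma=1.5, noise_sigma=0.01, rng=None):
    """Standard observation model for reconstruction-based super-resolution:
    a low-resolution frame is a blurred, downsampled, noisy version of the
    unknown high-resolution image (y = DBx + n). Parameter values are illustrative."""
    rng = rng or np.random.default_rng()
    blurred = gaussian_filter(hr.astype(np.float64), blur_sigma)   # B: optical blur
    low = blurred[::scale, ::scale]                                # D: decimation
    return low + rng.normal(0.0, noise_sigma, low.shape)           # n: sensor noise

# usage: lr = degrade(hr_image, scale=2)
```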

14.
Conventional color-guided depth image super-resolution (SR) reconstruction methods require the reference image to be a high-resolution color image, so the color image's resolution caps how far the depth image can be upscaled. Moreover, in practice only a low-resolution color image may be available, in which case such methods no longer apply. This paper therefore investigates depth SR reconstruction guided by a color image of arbitrary resolution. First, a variety of image SR algorithms are used to upsample the input color image, and the resulting high-resolution color image serves as the guidance image. A second-order total generalized variation (TGV) framework then takes the image reconstructed from the low-resolution color image as a regularization term, incorporates image edge information, and builds an objective function that casts depth SR reconstruction as an optimization problem, which is solved with a primal-dual method to obtain the high-resolution depth image. The approach covers a case overlooked by previous methods, as it works with color guidance images of any resolution. Experiments also reveal a surprising phenomenon: using an upscaled low-resolution color image as guidance yields results comparable to, or even better than, using a true high-resolution color guide, which offers useful insights for related research and applications.

15.
Digital storage and transmission promise noise-free images, but it is important to keep in mind that even digital is not perfect. Digital images have their own sources of noise: round-off error and quantization error. Whenever you do any sort of image arithmetic, such as contrast enhancement or compositing, you get roundoff error. In fact, since the arithmetic is often done in only X-bit accuracy, sometimes the round-off error can be substantial. You get quantization error, on the other hand, whenever you go from an analog signal to a digital signal or whenever you go from a high color-resolution signal (for example, 24 bits per pixel) to a low resolution signal (for example, 8 bits per pixel). The author considers the quantization error from analog to digital
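A small illustration of the quantization error discussed here: quantizing a signal in [0, 1) to N levels bounds the per-sample error by 1/(2N). The function below is a generic uniform quantizer, not code from the article.

```python
import numpy as np

def quantization_error(signal, levels=256):
    """Quantize a [0, 1) signal to `levels` uniform levels (reconstructing at bin
    centres) and report the maximum and RMS error; with a uniform quantizer the
    error per sample is bounded by half a step, i.e. 1 / (2 * levels)."""
    step = 1.0 / levels
    quantized = np.floor(signal / step) * step + step / 2
    err = signal - quantized
    return quantized, np.abs(err).max(), np.sqrt((err ** 2).mean())

# usage: _, max_err, rms_err = quantization_error(np.random.rand(100000))  # max_err <= 1/512
```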

16.
Current digital image/video storage, transmission and display technologies use uniformly sampled images. On the other hand, the human retina has a nonuniform sampling density that decreases dramatically as the solid angle from the visual fixation axis increases. Therefore, there is sampling mismatch. This paper introduces retinally reconstructed images (RRI), a representation of digital images that enables a resolution match with the retina. To create an RRI, the size of the input image, the viewing distance and the fixation point should be known. In the coding phase, we compute the “retinal codes”, which consist of the retinal sampling locations onto which the image projects, together with the retinal outputs at these locations. In the decoding phase, we use the backprojection of the retinal codes onto the input image grid as B-spline control coefficients, in order to construct a 3D B-spline surface with nonuniform resolution properties. An RRI is then created by mapping the B-spline surface onto a uniform grid, using triangulation. Transmitting or storing the “retinal codes” instead of the full resolution images enables up to two orders of magnitude data compression, depending on the resolution of the input image, the size of the input image and the viewing distance. The data reduction capability of retinal codes and RRI is promising for digital video storage and transmission applications. However, the computational burden can be substantial in the decoding phase  相似文献   

17.
This paper introduces a novel method of visual learning based on genetic programming, which evolves a population of individuals (image analysis programs) that process attributed visual primitives derived from raw raster images. The goal is to evolve an image analysis program that correctly recognizes the training concept (shape). The approach uses a generative evaluation scheme: individuals are rewarded for reproducing the shape of the object being recognized using graphical primitives and elementary background knowledge encoded in predefined operators. The evolutionary run is driven by a multiobjective fitness function to prevent premature convergence and enable effective exploration of the space of solutions. We present the method in detail and verify it experimentally on the task of learning two visual concepts from examples.

18.
In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high‐resolution panchromatic image and that of the low‐resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1 : 10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1 : 10 to 1 : 30). However, even with a spatial resolution ratio as small as 1 : 30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1 : 30), to obtain better spectral integrity of the fused image, one may downsample the input high‐resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.

19.
Parallel volume rendering using binary-swap compositing
We describe a parallel volume-rendering algorithm, which consists of two parts: parallel ray tracing and parallel compositing. In the most recent implementation on Connection Machine's CM-5 and networked workstations, the parallel volume renderer evenly distributes data to the computing resources available. Without the need to communicate with other processing units, each subvolume is ray traced locally and generates a partial image. The parallel compositing process then merges all resulting partial images in depth order to produce the complete image. The compositing algorithm is particularly effective for massively parallel processing, as it always uses all processing units by repeatedly subdividing the partial images and distributing them to the appropriate processing units. Test results on both the CM-5 and the workstations are promising. They do, however, expose different performance issues for each platform.
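To make the exchange pattern concrete, the sketch below computes a binary-swap schedule: in each of the log2(P) rounds every processor pairs with the partner whose rank differs in one bit, keeps one half of its current subimage, and composites the half received from the partner. Only the pairing and which half is kept are shown; the sends, receives, and the "over" blending are omitted, and the convention for which member keeps the front half is an assumption.

```python
def binary_swap_schedule(n_procs):
    """Return, for every round of binary-swap compositing, a list of
    (rank, partner, keep_front_half) tuples. n_procs must be a power of two;
    after log2(n_procs) rounds each rank owns 1/n_procs of the final image."""
    assert n_procs > 0 and n_procs & (n_procs - 1) == 0, "power of two required"
    rounds, stride = [], 1
    while stride < n_procs:
        step = []
        for rank in range(n_procs):
            partner = rank ^ stride                # pair up within groups of 2*stride
            keep_front = (rank & stride) == 0      # lower member keeps the front half
            step.append((rank, partner, keep_front))
        rounds.append(step)
        stride *= 2
    return rounds

# usage: for r, step in enumerate(binary_swap_schedule(8)): print(r, step)
```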

20.