Similar Documents
Found 20 similar documents.
1.
High‐quality texture minification techniques, including trilinear and anisotropic filtering, require texture data to be arranged into a collection of pre‐filtered texture maps called mipmaps. In this paper, we present a compression scheme for mipmapped textures which achieves much higher quality than current native schemes by exploiting image coherence across mipmap levels. The basic idea is to use a high‐quality native compressed format for the upper levels of the mipmap pyramid (to retain efficient minification filtering) together with a novel compact representation of the detail provided by the highest‐resolution mipmap. Key elements of our approach include delta‐encoding of the luminance signal, efficient encoding of coherent regions through texel runs following a Hilbert scan, a scheme for run encoding supporting fast random‐access, and a predictive approach for encoding indices of variable‐length blocks. We show that our scheme clearly outperforms native 6:1 compressed texture formats in terms of image quality while still providing real‐time rendering of trilinearly filtered textures.
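A minimal sketch of two of the ingredients described above, under simplifying assumptions (a square power-of-two luminance tile, no entropy coding; all function names are illustrative, not the paper's): delta-encoding of the luminance signal along a Hilbert scan, with coherent texels grouped into runs of equal deltas.

```python
import numpy as np

def hilbert_d2xy(order, d):
    """Map distance d along a Hilbert curve covering a 2^order x 2^order tile to (x, y)."""
    x = y = 0
    s, t = 1, d
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def encode_luma_runs(luma, order):
    """Delta-encode luminance along the Hilbert scan and collect runs of equal deltas."""
    scan = [hilbert_d2xy(order, d) for d in range((1 << order) ** 2)]
    values = np.array([int(luma[y, x]) for x, y in scan])
    deltas = np.diff(values, prepend=values[0])   # first delta is 0 by construction
    runs = []                                     # (start, length, delta) triples
    start = 0
    for i in range(1, len(deltas) + 1):
        if i == len(deltas) or deltas[i] != deltas[start]:
            runs.append((start, i - start, int(deltas[start])))
            start = i
    return int(values[0]), runs

# Example: a coherent 8x8 luminance tile collapses into a handful of runs.
tile = np.zeros((8, 8), dtype=np.uint8)
tile[:, 4:] = 10
base, runs = encode_luma_runs(tile, order=3)
print(base, len(runs), "runs")
```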

2.
The rapid growth of digital libraries, e-governance, and internet-based applications has caused an exponential escalation in the volume of 'Big data', particularly the texts, images, audio and video that are archived and transmitted on a daily basis. To make their storage and transfer efficient, various data compression techniques are used in the literature. The ultimate motive behind data compression is to transform large data into a smaller representation, which implies less space for archiving and less time for transfer. However, in order to operate on or analyze compressed data, it is usually necessary to decompress it first, bringing the data back to its original form, which unfortunately incurs an additional computing cost. Against this backdrop, if compressed data could be operated upon directly, without the decompression stage, the advantages gained from compression would be amplified. Moreover, because compression changes the data structure and storage layout, the original visible structure of the data is lost, and tracing the original information within the compressed representation becomes a genuine challenge. This challenge motivates the idea, explored in the literature, of processing compressed data directly. This survey paper focuses specifically on compressed document images and makes two original contributions. First, it presents a critical study of different image analysis and image compression techniques, and highlights the motivations for pursuing document image analysis in the compressed domain. Second, it summarizes the compressed-domain techniques reported in the literature so far, organized by the type of compression and the operations they perform. Overall, the paper aims to provide a perspective for further research on document image analysis and pattern recognition carried out directly on compressed data.

3.
The non-symmetry and anti-packing pattern representation model based on multiple sub-patterns can serve as a lossless image representation method. Taking the typical sub-patterns of points, straight lines, rectangles and triangles as the object of study, this paper proposes a NAM image representation method based on multiple sub-patterns (MNAM). Among these, the typical triangle sub-pattern covers non-isosceles right triangles in four orientations, so non-isosceles handling of triangles is required during sub-pattern extraction. The paper presents the representation idea of MNAM and its in-computer storage stru…

4.
In this paper we present a streaming compression scheme for gigantic point sets including per-point normals. This scheme extends our previous Duodecim approach [21] in two ways. First, we show how to use this approach for the compression and rendering of high-resolution iso-surfaces in volumetric data sets. Second, we use deferred shading of point primitives to considerably improve rendering quality. Iso-surface reconstruction is performed in a hexagonal close packing (HCP) grid, into which the initial data set is resampled. Normals are resampled from the initial domain using volumetric gradients. By incremental encoding, only slightly more than 3 bits per surface point and 5 bits per surface normal are required at high fidelity. The compressed data stream can be decoded on the graphics processing unit (GPU). Decoded point positions are stored in graphics memory, and they are then used on the GPU again to render point primitives. In this way, gigantic data sets can be rendered at high quality directly from their compressed representation in local GPU memory at interactive frame rates (see Fig. 1).
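A minimal sketch of the incremental-encoding idea under strong simplifications (a regular cubic grid instead of HCP, plain delta coding instead of the paper's full scheme; function names are illustrative): points are snapped to grid cells, sorted into scan order, and only the small deltas between consecutive cells are stored.

```python
import numpy as np

def quantize_to_grid(points, cell_size):
    """Snap points to integer grid cells and sort them in scan-line order."""
    cells = np.floor(points / cell_size).astype(np.int64)
    order = np.lexsort((cells[:, 0], cells[:, 1], cells[:, 2]))  # sort by z, then y, then x
    return cells[order]

def incremental_encode(cells):
    """Delta-encode the sorted cell coordinates; deltas are small, so they compress well."""
    deltas = np.diff(cells, axis=0)
    return cells[0], deltas

def incremental_decode(first, deltas):
    return np.vstack([first, first + np.cumsum(deltas, axis=0)])

# Example: 10k random points; the round trip reproduces the sorted cells exactly.
rng = np.random.default_rng(0)
pts = rng.normal(size=(10000, 3)).astype(np.float32)
cells = quantize_to_grid(pts, cell_size=0.01)
first, deltas = incremental_encode(cells)
assert np.array_equal(incremental_decode(first, deltas), cells)
print("max |delta| per axis:", np.abs(deltas).max(axis=0))
```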

5.
The purpose of mid-level visual element discovery is to find clusters of image patches that are representative of, and which discriminate between, the contents of the relevant images. Here we propose a pattern-mining approach to the problem of identifying mid-level elements within images, motivated by the observation that such techniques have been very effective, and efficient, in achieving similar goals when applied to other data types. We show that Convolutional Neural Network (CNN) activations extracted from image patches typically possess two appealing properties that enable seamless integration with pattern mining techniques. The marriage between CNN activations and a pattern mining technique leads to fast and effective discovery of representative and discriminative patterns from a huge number of image patches, from which mid-level elements are retrieved. Given the patterns and retrieved mid-level visual elements, we propose two methods to generate image feature representations. The first encoding method uses the patterns as codewords in a dictionary in a manner similar to the Bag-of-Visual-Words model. We thus label this a Bag-of-Patterns representation. The second relies on mid-level visual elements to construct a Bag-of-Elements representation. We evaluate the two encoding methods on object and scene classification tasks, and demonstrate that our approach outperforms or matches the performance of the state of the art on these tasks.
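A minimal sketch of the pattern-mining side, assuming patch descriptors are already available (random vectors stand in for CNN activations here), and with pattern mining reduced to counting frequent index pairs, which is far simpler than the association-rule mining the paper uses; all names are illustrative.

```python
from collections import Counter
from itertools import combinations
import numpy as np

def to_transaction(activation, k=5):
    """A patch becomes the set of indices of its k strongest activations."""
    return frozenset(np.argsort(activation)[-k:])

def mine_patterns(transactions, min_support=20, size=2):
    """Keep index combinations (patterns) that occur in at least min_support transactions."""
    counts = Counter()
    for t in transactions:
        counts.update(combinations(sorted(t), size))
    return [p for p, c in counts.items() if c >= min_support]

def bag_of_patterns(image_transactions, patterns):
    """Histogram of pattern occurrences over one image's patches (a Bag-of-Patterns vector)."""
    hist = np.zeros(len(patterns))
    index = {p: i for i, p in enumerate(patterns)}
    for t in image_transactions:
        for p in combinations(sorted(t), 2):
            if p in index:
                hist[index[p]] += 1
    return hist / max(hist.sum(), 1)

# Example with simulated activations: 500 patches of dimension 256.
rng = np.random.default_rng(1)
acts = rng.random((500, 256))
trans = [to_transaction(a) for a in acts]
patterns = mine_patterns(trans, min_support=2)
print(len(patterns), "patterns;", bag_of_patterns(trans[:100], patterns).shape)
```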

6.
7.
In this paper we develop a 4-dimensional representation for patterns based on image decompositions via orientation- and size-specific filters. By retaining image positional information, this encoding scheme reduces pattern rotations, translations, and scale changes to shifts in the filter outputs. The appropriate correlation processes for matching are discussed and the recognition system is illustrated by a number of examples.
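A minimal sketch of the idea that a rotation becomes a shift along the orientation axis of such a representation, assuming the filter responses are already computed; a circular shift plus normalized correlation stands in for the correlation processes discussed in the paper.

```python
import numpy as np

def match_by_orientation_shift(reference, candidate):
    """reference, candidate: responses indexed [orientation, scale, y, x].
    A rotated pattern shows up as a circular shift along the orientation axis,
    so we search for the shift that maximizes normalized correlation."""
    def normalize(r):
        r = r - r.mean()
        return r / (np.linalg.norm(r) + 1e-12)
    ref = normalize(reference)
    best_shift, best_score = 0, -np.inf
    for shift in range(candidate.shape[0]):
        score = float(np.sum(ref * normalize(np.roll(candidate, shift, axis=0))))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift, best_score

# Example: a candidate that is the reference rotated by 3 orientation steps.
rng = np.random.default_rng(2)
ref = rng.random((8, 4, 16, 16))
cand = np.roll(ref, -3, axis=0)
print(match_by_orientation_shift(ref, cand))   # recovers shift 3 with score ~1.0
```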

8.
Among the most commonly used compression algorithms for document images are those defined by the Consultative Committee for International Telephone and Telegraph (CCITT). CCITT Group III compression is used in all facsimile transmission by modem over analog telephone lines. CCITT Group IV is used in digital transmission and storage of document images. Sufficient readily interpretable spatial information exists in these compressed document images to enable their characterization. In particular, it is possible to locate the positions of the bottoms of both black and white structures. Using the bottoms of black structures we can determine the peak strength of their alignment in order to determine the dominant skew angle of the image. This method can be expanded, by finding minor peaks, to identify multiple skew angles in single images. The angular distributions of the peak alignments of both white and black structures are assembled to form an alignment signature. Logotypes can be designed which generate distinct alignment signatures that are detectable in the compressed representation.
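A minimal sketch of skew estimation from the bottoms of black structures, assuming those bottom points have already been located (the paper extracts them directly from the CCITT run data; here they are simply given as a point set). For each candidate angle, the points are projected onto the direction perpendicular to that angle and the sharpest histogram peak indicates the dominant alignment.

```python
import numpy as np

def dominant_skew(bottom_points, angles_deg=np.arange(-15, 15.1, 0.25), bin_size=1.0):
    """bottom_points: (N, 2) array of (x, y) bottoms of black structures."""
    x, y = bottom_points[:, 0], bottom_points[:, 1]
    best_angle, best_peak = 0.0, -1
    for a in angles_deg:
        t = np.deg2rad(a)
        proj = y * np.cos(t) - x * np.sin(t)          # distance to a line at angle a
        hist, _ = np.histogram(proj, bins=np.arange(proj.min(), proj.max() + bin_size, bin_size))
        peak = int(hist.max())                         # peak strength of the alignment
        if peak > best_peak:
            best_angle, best_peak = float(a), peak
    return best_angle, best_peak

# Example: text-line bottoms lying on lines of slope tan(2 deg), so the true skew is 2 degrees.
rng = np.random.default_rng(3)
xs = rng.uniform(0, 2000, size=3000)
ys = xs * np.tan(np.deg2rad(2.0)) + 50 * rng.integers(0, 30, size=3000)
print(dominant_skew(np.column_stack([xs, ys])))
```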

9.
A New Template-Based Compression Method for Archival Document Images
杨有 (Yang You). 《计算机科学》 (Computer Science), 2008, 35(6): 265-267
In large archival image databases, data redundancy exists not only within a single archival page image but also, to a large extent, as set redundancy across pages. This paper proposes a new template-based compression method: by defining a template for a set of similar images and making full use of prior knowledge of the image data, the content of archival images is analyzed and understood, and the image data is compressed as two-dimensional patterns both within and between images. Experiments show that the method substantially improves compression performance.
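A minimal sketch of the set-redundancy idea (an illustration, not the paper's actual algorithm): the template of a set of similar binary page images is taken as the pixelwise majority image, each page is stored as its XOR difference from the template, and those differences compress far better than the pages themselves.

```python
import zlib
import numpy as np

def build_template(pages):
    """Pixelwise majority vote over a set of similar binary page images."""
    return (np.mean(pages, axis=0) >= 0.5).astype(np.uint8)

def compress_with_template(pages):
    template = build_template(pages)
    residuals = [np.bitwise_xor(p, template) for p in pages]
    packed = [zlib.compress(np.packbits(r).tobytes()) for r in residuals]
    return template, packed

# Example: 20 "pages" that share a fixed form and differ only in small filled-in regions.
rng = np.random.default_rng(4)
form = (rng.random((256, 256)) > 0.9).astype(np.uint8)
pages = []
for _ in range(20):
    p = form.copy()
    r, c = rng.integers(0, 240, size=2)
    p[r:r + 16, c:c + 16] ^= 1                      # a small per-page difference
    pages.append(p)

direct = sum(len(zlib.compress(np.packbits(p).tobytes())) for p in pages)
template, packed = compress_with_template(pages)
print("direct:", direct, "bytes   with template:", sum(len(b) for b in packed), "bytes")
```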

10.
Raster‐based topographic maps are commonly used in geoinformation systems to overlay geographic entities on top of digital terrain models. Using compressed texture formats for encoding topographic maps allows reducing latency times while visualizing large geographic datasets. Topographic maps encompass high‐frequency content with large uniform regions, making current compressed texture formats inappropriate for encoding them. In this paper we present a method for locally‐adaptive compression of topographic maps. Key elements include a Hilbert scan to maximize spatial coherence, efficient encoding of homogeneous image regions through arbitrarily‐sized texel runs, a cumulative run‐length encoding supporting fast random‐access, and a compression algorithm supporting lossless and lossy compression. Our scheme can be easily implemented on current programmable graphics hardware allowing real‐time GPU decompression and rendering of bilinear‐filtered topographic maps.
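A minimal sketch of cumulative run-length encoding with fast random access (names illustrative): instead of storing run lengths, the encoder stores the cumulative start position of each run, so the run covering any texel index can be found with a binary search.

```python
from bisect import bisect_right

def encode_runs(values):
    """Run-length encode a scan-ordered sequence, keeping cumulative run starts."""
    starts, run_values = [], []
    for i, v in enumerate(values):
        if not run_values or v != run_values[-1]:
            starts.append(i)
            run_values.append(v)
    return starts, run_values

def texel_at(starts, run_values, index):
    """Random access: binary search for the run containing the texel index."""
    return run_values[bisect_right(starts, index) - 1]

# Example: a scanline with large uniform regions.
scan = [3] * 100 + [7] * 5 + [3] * 50
starts, run_values = encode_runs(scan)
assert all(texel_at(starts, run_values, i) == scan[i] for i in range(len(scan)))
print(len(starts), "runs for", len(scan), "texels")
```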

11.
We develop a volumetric video system which supports interactive browsing of compressed time-varying volumetric features (significant isosurfaces and interval volumes). Since the size of even one volumetric frame in a time-varying 3D data set is very large, transmission and on-line reconstruction are the main bottlenecks for interactive remote visualization of time-varying volume and surface data. We describe a compression scheme for encoding time-varying volumetric features in a unified way, which allows for on-line reconstruction and rendering. To increase the run-time decompression speed and compression ratio, we decompose the volume into small blocks and encode only the significant blocks that contribute to the isosurfaces and interval volumes. The results show that our compression scheme achieves high compression ratio with fast reconstruction, which is effective for interactive client-side rendering of time-varying volumetric features.
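A minimal sketch of the block-selection step, assuming a dense numpy volume (names illustrative): the volume is split into small blocks and only blocks whose value range straddles the isovalue, i.e. those that can contribute to the isosurface, are kept for encoding.

```python
import numpy as np

def significant_blocks(volume, isovalue, block=8):
    """Return the grid coordinates of blocks whose min/max range contains the isovalue."""
    kept = []
    nz, ny, nx = volume.shape
    for z in range(0, nz, block):
        for y in range(0, ny, block):
            for x in range(0, nx, block):
                b = volume[z:z + block, y:y + block, x:x + block]
                if b.min() <= isovalue <= b.max():   # block intersects the isosurface
                    kept.append((z, y, x))
    return kept

# Example: a spherical distance field; only blocks near the radius-20 shell survive.
g = np.mgrid[-32:32, -32:32, -32:32]
dist = np.sqrt((g ** 2).sum(axis=0)).astype(np.float32)
blocks = significant_blocks(dist, isovalue=20.0)
total = (64 // 8) ** 3
print(f"{len(blocks)} of {total} blocks are significant")
```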

12.
A new image fusion approach for infrared and visible images is explored, combining fusion with data compression based on sparse representation and compressed sensing. The proposed approach first compresses the sensing data by random projection and then obtains sparse coefficients on compressed samples by sparse representation. Finally, the fusion coefficients are combined with the fusion impact factor and the fused image is reconstructed from the combined sparse coefficients. Experimental results validate its rationality and effectiveness, which can achieve comparable fusion quality on the less-compressed sensing data.

13.
In this paper, a non-symmetry and anti-packing image representation model (NAM) is proposed. NAM is a hierarchical image representation method that aims to provide faster operations and lower storage requirements. Taking the rectangle sub-pattern as an example, we describe the idea of NAM and its encoding algorithm. In addition, an approach to adaptive area histogram equalization for image contrast enhancement based on a NAM image is presented. The contrast enhancement approach is designed to fit the NAM image representation and can be carried out with faster operations. The complexity analysis and the experimental results show that the NAM-based algorithm for image contrast enhancement is faster and more effective than the corresponding algorithm based on the matrix (pixel-array) image representation.
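A minimal sketch of histogram equalization driven directly by rectangle sub-patterns (an illustration of the area-histogram idea, not the paper's exact algorithm; the rectangle list and function names are assumptions): each homogeneous rectangle contributes its area to the gray-level histogram, so the equalization mapping can be computed and applied without touching individual pixels.

```python
import numpy as np

def equalize_nam_rectangles(rects, levels=256):
    """rects: list of (x, y, w, h, gray) homogeneous rectangles of a NAM-style image.
    Returns the same rectangles with equalized gray levels."""
    hist = np.zeros(levels)
    for _, _, w, h, g in rects:
        hist[g] += w * h                       # a rectangle counts with its full area
    cdf = np.cumsum(hist)
    cdf_min, total = cdf[cdf > 0][0], cdf[-1]
    mapping = np.round((cdf - cdf_min) / (total - cdf_min + 1e-12) * (levels - 1))
    return [(x, y, w, h, int(mapping[g])) for x, y, w, h, g in rects]

# Example: three rectangles with gray levels crowded into a narrow range get spread out.
rects = [(0, 0, 64, 64, 100), (64, 0, 64, 64, 105), (0, 64, 128, 64, 110)]
print(equalize_nam_rectangles(rects))
```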

14.
An Optimized Mode Selection Algorithm for Object-Based Coding
张剑 (Zhang Jian), 郑心炜 (Zheng Xinwei). 《计算机科学》 (Computer Science), 2005, 32(7): 125-127
Video compression coding standards are formulated according to the requirements placed on audio-visual data in different fields, and they keep evolving as those requirements change. At present, research on video compression coding follows two main directions: one is the traditional DCT-based hybrid coding scheme; the other is the object-based coding scheme derived from second-generation image coding techniques. Object-based coding not only meets the demand for higher image data compression ratios but also enables interactive functionality, so we consider it the future direction of video compression coding. This paper studies mode selection algorithms for object-based coding, borrows the concept of video objects from MPEG-4, and proposes a video segmentation method.

15.
Testing large-scale, high-density integrated circuits suffers from large test data volumes and high test power. This paper proposes a built-in self-test scheme that first optimizes the test set through encoding and then reseeds a linear feedback shift register (LFSR). In this scheme, an automatic test pattern generation tool produces the deterministic test set of the circuit under test, which is then compressed into a seed set stored in on-chip ROM. During test-set compression, some of the test cubes are first encoded with a small number of specified bits, with the goal of reducing test power by strengthening the agreement between adjacent bits of the decoded test patterns; then, with the goals of raising the compression ratio while lowering the number of LFSR stages, the test cubes are encoded as segmented compatible codes (CBC) containing fewer specified bits; finally, the CBC-encoded test cube set is compressed into an LFSR seed set. Experiments show that the proposed scheme greatly reduces test power without affecting fault coverage and achieves a higher test data compression ratio.
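A minimal sketch of the LFSR-reseeding step that such schemes build on (not this paper's scheme; the tap positions, cube and all names are illustrative): every output bit of an LFSR is a GF(2)-linear function of the seed, so the specified bits of a test cube give a small linear system whose solution is a seed that expands to a pattern compatible with the cube.

```python
import numpy as np

def lfsr_expand(seed_bits, taps, length):
    """Expand an LFSR seed into `length` output bits (Fibonacci LFSR, XOR feedback on `taps`)."""
    state, out = list(seed_bits), []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

def solve_seed(test_cube, taps, n):
    """Find a seed whose expansion matches the specified ('0'/'1') bits of test_cube."""
    # state[i] is a length-n GF(2) vector: which seed bits XOR into stage i right now
    state = [np.eye(n, dtype=np.uint8)[i] for i in range(n)]
    rows, rhs = [], []
    for c in test_cube:
        if c != 'x':                         # specified bit -> one linear equation on the seed
            rows.append(state[-1].copy())
            rhs.append(int(c))
        fb = np.zeros(n, dtype=np.uint8)
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    A = np.hstack([np.array(rows), np.array(rhs, dtype=np.uint8)[:, None]])
    row = 0                                  # Gaussian elimination over GF(2)
    for col in range(n):
        piv = next((r for r in range(row, len(A)) if A[r, col]), None)
        if piv is None:
            continue
        A[[row, piv]] = A[[piv, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
        row += 1
    seed = np.zeros(n, dtype=np.uint8)       # free variables default to 0
    for r in range(row):
        lead = np.argmax(A[r, :n] == 1)
        seed[lead] = A[r, n]
    return seed

# Example: a 16-stage LFSR and a cube with 6 specified bits (plenty of free variables).
taps, n = [0, 2, 3, 5], 16
cube = "1xx0xxx1xxxx0x1x0xxx"
seed = solve_seed(cube, taps, n)
expanded = lfsr_expand(seed.tolist(), taps, len(cube))
assert all(c == 'x' or int(c) == expanded[i] for i, c in enumerate(cube))
print("seed:", seed.tolist())
```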

16.
This paper presents a novel data hiding scheme for VQ-compressed images. The scheme first uses SMVQ prediction to classify encoding blocks into different types, and then uses different codebooks and encoding strategies to perform encoding and data hiding simultaneously. Because SMVQ prediction is used, no extra data is required to identify the combination of encoding strategy and codebook, which helps improve compression performance. Furthermore, the proposed scheme adaptively combines VQ and SMVQ encoding characteristics to provide higher image quality for the stego-images while the size of the hidden payload remains the same. Experimental results show that the proposed scheme indeed outperforms previously proposed schemes in both stego-image quality and compression performance.
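A minimal sketch of one common way to embed a bit during VQ encoding (a simplification; the scheme above additionally uses SMVQ prediction to choose codebooks and strategies, which is not shown here): the codebook is split into even- and odd-indexed halves and the hidden bit decides which half the nearest-codeword search is restricted to, so the transmitted index itself carries the bit.

```python
import numpy as np

def vq_encode_with_hiding(block, codebook, bit):
    """Restrict the nearest-codeword search to even or odd indices according to `bit`."""
    candidates = np.arange(bit, len(codebook), 2)           # even indices hide 0, odd hide 1
    dists = np.sum((codebook[candidates] - block) ** 2, axis=1)
    return int(candidates[np.argmin(dists)])

def extract_bit(index):
    return index % 2

# Example: a random 256-word codebook for 4x4 blocks (16-dimensional vectors).
rng = np.random.default_rng(5)
codebook = rng.random((256, 16)).astype(np.float32)
block = rng.random(16).astype(np.float32)
for bit in (0, 1):
    idx = vq_encode_with_hiding(block, codebook, bit)
    assert extract_bit(idx) == bit
print("ok")
```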

17.
This correspondence presents a speech sentence compression scheme. A compressed word sequence is first extracted; speech segments in the spoken document corresponding to the extracted words are then selected for concatenation. Evaluation of the proposed approach shows that the compressed speech sentence retains important and meaningful information as well as naturalness.

18.
We present an approach to authenticating photo-ID documents that relies on pattern recognition and public-key cryptography and has security advantages over physical mechanisms that currently safeguard cards, such as optical laminates and holograms. The pattern-recognition component of this approach is based on a photo signature that is a concise representation of the photo image on the document. This photo signature is stored in a database for remote authentication or in encrypted form on the card for stand-alone authentication. Verification of the ID document entails scanning both the photo image and a machine-readable form of the text information, determining the photo signature, and comparing this information against the reference. In this paper, we describe the method and present results of testing a large database of images for photo-signature match in the presence of noise.
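A minimal sketch of the public-key side of such a scheme, using the Python `cryptography` package and a stand-in photo-signature function (a coarse, quantized thumbnail; the paper's photo signature is more elaborate, and all names here are illustrative). The issuer signs the photo signature together with the machine-readable text; the verifier recomputes both and checks the signature.

```python
import numpy as np
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def photo_signature(image):
    """Stand-in for the paper's photo signature: an 8x8-block mean thumbnail, coarsely quantized."""
    h, w = image.shape
    thumb = image[: h - h % 8, : w - w % 8].reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))
    return (thumb // 32).astype(np.uint8).tobytes()

def issue(private_key, image, text):
    message = photo_signature(image) + text.encode()
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

def verify(public_key, image, text, signature):
    message = photo_signature(image) + text.encode()
    try:
        public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Example: issue a card, then verify it; a tampered name fails verification.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
photo = np.random.default_rng(6).integers(0, 256, size=(128, 96)).astype(np.float32)
sig = issue(key, photo, "DOE, JANE / 1990-01-01")
print(verify(key.public_key(), photo, "DOE, JANE / 1990-01-01", sig))   # True
print(verify(key.public_key(), photo, "DOE, JOHN / 1990-01-01", sig))   # False
```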

19.
Increased amounts of visual data in several applications necessitate content-based image retrieval. Since most visual data is stored in compressed form, it is crucial to develop indexing techniques for searching images by content directly in the compressed domain. It is therefore desirable to explore image compression techniques capable of describing image content in compressed form. Vector Quantization (VQ) is a compression scheme that exploits intra-block correlation, and image correlation reflects image content, so VQ is a suitable compression technique for compressed-domain image retrieval. This paper introduces a novel indexing scheme for compressed-domain image databases based on indices generated from IC-VQ. The proposed scheme extracts image features from the relationships between indices of IC-VQ compressed images. These relationships detect contiguous regions of the compressed image based on inter- and intra-block correlation. Experimental results show the superior effectiveness of the new scheme compared to VQ- and color-based schemes.
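A minimal sketch of extracting a retrieval feature directly from a VQ index map (a simplification of the scheme above, not its actual feature; names are illustrative): a joint histogram of horizontally and vertically adjacent index pairs captures inter-block correlation without decompressing the image.

```python
import numpy as np

def index_cooccurrence_feature(index_map, codebook_size):
    """index_map: 2-D array of VQ codeword indices for an image's blocks."""
    hist = np.zeros((codebook_size, codebook_size))
    right = zip(index_map[:, :-1].ravel(), index_map[:, 1:].ravel())
    down = zip(index_map[:-1, :].ravel(), index_map[1:, :].ravel())
    for a, b in list(right) + list(down):
        hist[a, b] += 1
    return (hist / hist.sum()).ravel()

def similarity(f1, f2):
    """Histogram intersection between two co-occurrence features."""
    return float(np.minimum(f1, f2).sum())

# Example: two index maps from the same "scene" score higher than an unrelated one.
rng = np.random.default_rng(7)
base = rng.integers(0, 32, size=(64, 64))
same = base.copy()
same[:4, :4] = rng.integers(0, 32, size=(4, 4))
other = rng.integers(0, 32, size=(64, 64))
f = lambda m: index_cooccurrence_feature(m, 32)
print(similarity(f(base), f(same)), ">", similarity(f(base), f(other)))
```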

20.
Efficient memory representation of XML document trees
Implementations that load XML documents and give access to them via, e.g., the DOM suffer from huge memory demands: the space needed to load an XML document is usually many times larger than the size of the document itself. A considerable amount of memory is needed just to store the tree structure of the XML document. In this paper, a technique is presented that represents the tree structure of an XML document in an efficient way. The representation exploits the high regularity in XML documents by compressing their tree structure, that is, by detecting and removing repetitions of tree patterns. Formally, context-free tree grammars that generate only a single tree are used for tree compression. The functionality of basic tree operations, such as traversal along edges, is preserved under this compressed representation. This allows queries (and in particular, bulk operations) to be executed directly, without prior decompression. The complexity of certain computational problems, such as validation against XML types or testing equality, is investigated for compressed input trees.
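A minimal sketch of the repetition-removal idea under a strong simplification: instead of a full context-free tree grammar, identical subtrees are shared via hash-consing (all class and method names are illustrative). Even this reduced form already lets simple queries, such as tag counts, run on the compressed structure without decompression.

```python
class SharedTreeBuilder:
    """Hash-consing of XML-like trees: identical subtrees are stored once and reused."""
    def __init__(self):
        self.nodes = {}           # (tag, child ids) -> node id
        self.by_id = []           # node id -> (tag, child ids)

    def make(self, tag, children=()):
        key = (tag, tuple(children))
        if key not in self.nodes:
            self.nodes[key] = len(self.by_id)
            self.by_id.append(key)
        return self.nodes[key]

    def count_tag(self, root, tag):
        """Count occurrences of `tag`, visiting each shared subtree only once."""
        memo = {}
        def rec(node):
            if node not in memo:
                t, kids = self.by_id[node]
                memo[node] = (1 if t == tag else 0) + sum(rec(k) for k in kids)
            return memo[node]
        return rec(root)

# Example: 1000 identical <item><name/><price/></item> records share one stored subtree.
b = SharedTreeBuilder()
item = b.make("item", [b.make("name"), b.make("price")])
root = b.make("catalog", [item] * 1000)
print(f"{len(b.by_id)} stored nodes for a {b.count_tag(root, 'item')}-item document")
```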
