Similar Documents
20 similar documents found (search time: 15 ms)
1.
In this paper we propose a technique for compressing bitmap indexes for application in data warehouses. This technique, called run-length Huffman (RLH), is based on run-length encoding and Huffman encoding. Additionally, we present a variant of RLH, called RLH-N, in which a bitmap is divided into N-bit words that are compressed by RLH. RLH and RLH-N were implemented and experimentally compared to the well-known word-aligned hybrid (WAH) bitmap compression technique, which has been reported to provide the shortest query execution time. The experiments discussed in this paper show that: (1) RLH-compressed bitmaps are smaller than the corresponding WAH-compressed bitmaps, regardless of the cardinality of an indexed attribute; (2) RLH-N-compressed bitmaps are smaller than the corresponding WAH-compressed bitmaps for a certain range of cardinalities of an indexed attribute; (3) RLH- and RLH-N-compressed bitmaps offer shorter query response times than WAH-compressed bitmaps for a certain range of cardinalities of an indexed attribute; and (4) RLH-N assures shorter update times of compressed bitmaps than RLH.
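The RLH idea above, run-length encoding followed by Huffman coding of the run lengths, can be illustrated with a minimal sketch (the helper names and the toy bitmap are illustrative, not the authors' implementation):

```python
import heapq
from collections import Counter

def run_lengths(bits):
    """Turn a bitmap (string of '0'/'1') into its sequence of run lengths."""
    runs, current, count = [], bits[0], 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return runs

def huffman_code_lengths(symbols):
    """Return {symbol: Huffman code length} for the observed symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tiebreak, {symbol: depth-so-far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

bitmap = "0001111000011110000"
runs = run_lengths(bitmap)                    # [3, 4, 4, 4, 4]
lengths = huffman_code_lengths(runs)          # frequent run lengths get short codes
compressed_bits = sum(lengths[r] for r in runs)
```

With only two distinct run lengths here, both receive 1-bit codes, so the 19-bit bitmap compresses to 5 code bits (plus the code table).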

2.
Multimedia Tools and Applications - Digital data compression aims to reduce the size of digital files in line with technological development. However, most data is distinguished by its large size,...

3.
A new test data compression method based on coding of runs and non-run blocks is proposed. To improve codeword utilization, the scheme uses fixed-length binary codewords to represent the lengths of run blocks; sequences whose run lengths are too short are grouped into non-run blocks according to a fixed strategy and left uncoded, which effectively avoids replacing short runs with long codewords. Because the coding rules avoid the complexity of prefix/suffix-style codes, both encoding and decoding are simple, as is the communication protocol. Compression results on the Mintest set for the ISCAS-89 benchmark circuits show that the proposed scheme achieves better compression efficiency than both FDR codes and Golomb codes.

4.
Analysis of Lossless Combined Compression Coding and Its Decoding Implementation
Starting from the basic principles of dictionary coding and Huffman coding, this paper analyzes how both are applied in commonly used lossless compression techniques, focuses on the representative combination of the LZSS algorithm with Huffman coding, and outlines the corresponding decoding process.

5.
To store the large volume of data produced during ultrasonic testing, features such as waveform edge-crossing points are extracted from the ultrasonic data and then Delta-encoded. Combining feature extraction with Delta encoding, a decimal-to-binary coding table is built, raising the compression ratio of the encoded data by 20%-40%. Feature extraction and Delta encoding complement each other in improving the compression ratio, which can also be varied dynamically according to the required reconstruction accuracy. Finally, examples illustrate the compression and reconstruction quality of the method.
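Delta encoding, as used above, stores the first sample and then only the differences between consecutive samples; on slowly varying ultrasonic waveforms these differences are small and compress well. A minimal illustrative sketch:

```python
def delta_encode(samples):
    """Keep the first sample, then store successive differences."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(deltas):
    """Reconstruct the original samples by cumulative summation."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

wave = [100, 102, 105, 104, 101]        # toy ultrasonic samples
deltas = delta_encode(wave)             # [100, 2, 3, -1, -3]
assert delta_decode(deltas) == wave     # lossless round trip
```

The small deltas can then be mapped through a short-codeword table, which is where the additional compression comes from.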

6.
This paper presents an effective compression method suitable for transmitting still images over public switched telephone networks (PSTN). Since a compression algorithm reduces the number of pixels or the gray levels of a source picture, it reduces the amount of memory needed to store the source information, or the time needed to transmit it over a channel with limited bandwidth. We first review some current standards, and the lossy DCT-based JPEG compression method is chosen; according to our studies, it is one of the most suitable methods. However, it is not directly applicable to image transmission over ordinary telephone lines, so it must be modified considerably for our purposes. From Shannon's information theory, we know that for a given information source, such as an image, there is a coding technique that permits the source to be coded with an average code length as close to the entropy of the source as desired. We have therefore modified the Huffman coding technique and obtained a new optimized version that is fast and easily implemented. We then applied the DCT and the FDCT for data compression. We have analyzed and written programs in C++ for image compression/decompression that give a very high compression ratio (50:1 or more) with an excellent SNR. In this paper, we present the necessary modifications to Huffman coding algorithms and the results of simulations on typical images.
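The Shannon bound invoked above, that the average code length can approach the source entropy, can be checked numerically on a toy source (a sketch, not the paper's optimized coder):

```python
import math

def entropy(freqs):
    """Shannon entropy (bits/symbol) of a source given symbol frequencies."""
    total = sum(freqs.values())
    return -sum(f / total * math.log2(f / total) for f in freqs.values())

# Toy source and a valid Huffman code for it: a->0, b->10, c->110, d->111.
freqs = {"a": 4, "b": 2, "c": 1, "d": 1}
code_len = {"a": 1, "b": 2, "c": 3, "d": 3}

total = sum(freqs.values())
avg_len = sum(freqs[s] * code_len[s] for s in freqs) / total
H = entropy(freqs)
assert H <= avg_len < H + 1   # Huffman stays within one bit of the entropy
```

For this dyadic source the Huffman code meets the entropy exactly (1.75 bits/symbol); in general the average length lies within one bit of it.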

7.
An improved transition-region extraction algorithm is presented: minimum cross-entropy is applied to transition-region extraction in infrared images, yielding a cross-entropy-based transition-region algorithm. This transition-region segmentation is used to automatically extract regions of interest in infrared images, and a JPEG2000-based region-of-interest compression scheme for infrared images is then proposed for classified compression. Image experiments fully verify the effectiveness and real-time performance of the method, which has significant application value.

8.
Recently, medical image compression has become essential to handle large amounts of medical data effectively for storage and communication purposes. Vector quantization (VQ) is a popular image compression technique, and the commonly used VQ model is Linde-Buzo-Gray (LBG), which constructs a locally optimal codebook to compress images. Codebook construction is considered an optimization problem, and a bio-inspired algorithm is employed to solve it. This article proposes a VQ codebook construction approach, called the L2-LBG method, utilizing the Lion Optimization Algorithm (LOA) and the Lempel-Ziv-Markov chain Algorithm (LZMA). Once LOA constructs the codebook, LZMA is applied to compress the index table and further increase the compression performance. A set of experiments has been carried out on benchmark medical images, and a comparative analysis was conducted with Cuckoo Search based LBG (CS-LBG), Firefly based LBG (FF-LBG), and JPEG2000. The compression efficiency of the presented model was validated in terms of compression ratio (CR), compression factor (CF), bit rate, and peak signal-to-noise ratio (PSNR). The proposed L2-LBG method obtained a higher CR of 0.3425375 and a PSNR value of 52.62459 compared to the CS-LBG, FF-LBG, and JPEG2000 methods. The experimental values reveal that L2-LBG yields effective compression performance with a better-quality reconstructed image.
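The LBG codebook construction mentioned above alternates nearest-codeword assignment with centroid updates. One refinement step can be sketched as follows (a toy example, not the proposed LOA-based variant):

```python
def lbg_step(vectors, codebook):
    """One LBG refinement: assign each vector to its nearest codeword,
    then move every codeword to the centroid of its cluster."""
    clusters = {i: [] for i in range(len(codebook))}
    for v in vectors:
        nearest = min(range(len(codebook)),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(v, codebook[j])))
        clusters[nearest].append(v)
    new_codebook = []
    for i, cw in enumerate(codebook):
        if clusters[i]:
            n, dim = len(clusters[i]), len(cw)
            new_codebook.append(tuple(sum(v[d] for v in clusters[i]) / n
                                      for d in range(dim)))
        else:
            new_codebook.append(cw)   # leave empty cells unchanged
    return new_codebook

vectors = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
codebook = [(1.0, 1.0), (9.0, 9.0)]
refined = lbg_step(vectors, codebook)   # [(0.0, 0.5), (10.0, 10.5)]
```

Iterating this step until the distortion stops improving yields the locally optimal codebook that LBG is known for; metaheuristics such as LOA aim to escape those local optima.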

9.
Reversible image data hiding means that the cover image can be completely recovered after the embedded secret data are extracted. In this paper, we propose a reversible data hiding scheme based on vector quantization (VQ) compressed images. The secret bits are embedded into the VQ index table by modifying index values according to the differences between neighboring indices. The data hiding capacity and the size of the final codestream (the embedded result) form a trade-off that can be decided by users; in other words, the proposed scheme offers flexible hiding capacity. To evaluate performance, the proposed scheme was compared with the scheme of Wang and Lu (2009); the comparison shows that our scheme is superior in both data hiding capacity and bit rate.

10.
An improvement to a method of extracting line and region information from technical drawings is presented. A scheme for compressing this information as it is being found is also given. The encoding scheme is based on a combination of run-length codes, direction chain codes, and Huffman codes. It is efficient, requires minimal storage, and achieves favorable compression ratios. Experimental results from four typical line drawings are presented.

11.
Leakage Power Analysis and Reduction for Nanoscale Circuits
Leakage current in the nanometer regime has become a significant portion of power dissipation in CMOS circuits as threshold voltage, channel length, and gate oxide thickness scale downward. Various techniques are available to reduce leakage power in high-performance systems.

12.
A hybrid aggregation and compression technique for road network databases
Vector data, and in particular road networks, are queried, hosted, and processed in many application domains, such as mobile computing. Many client systems, such as PDAs, prefer to receive query results in unrasterized format without incurring an overhead in overall system performance or result size. While several general vector data compression schemes have been studied by different communities, we propose a novel approach to vector data compression that is easily integrated within a geospatial query processing system. It uses line aggregation to reduce the number of relevant tuples and Huffman compression to achieve a multi-resolution compressed representation of a road network database. Our experiments on an end-to-end prototype verify that our approach exhibits fast query processing on both the client and server sides as well as a high compression ratio.
Cyrus Shahabi

13.
A new scheme for test data compression/decompression, namely coding of even-bit marking and selective output inversion, is presented. It first uses a special kind of codeword whose odd bits represent the length of runs and whose even bits indicate whether the codeword has finished. This characteristic of the codewords makes the structure of the decompressor simple. It then introduces a selective output inversion structure to increase the probability of 0s. The scheme obtains a better compression ratio than several known schemes while requiring only very low hardware overhead. Its performance is experimentally confirmed on the larger ISCAS89 benchmark circuits.

14.
Multiwavelets are a new addition to the body of wavelet theory. There are many types of symmetric multiwavelets, such as GHM and CL. However, the matrix filters generating the GHM multiwavelet system do not satisfy the symmetry property, so GHM cannot handle the edge problem accurately. For this reason, this paper presents formulas for constructing symmetric orthogonal matrix filters, which lead to symmetric orthogonal multiwavelets (SOM). Moreover, we analyze the frequency property via vanishing moments and prefilter technology to obtain a good combined frequency property. To demonstrate the merit of SOM in image compression, we compared its compression performance with previously published work. Extensive experimental results demonstrate that the new symmetric orthogonal matrix filters, combined with prefilter technology and coefficient reorganization, perform equal to, and in several cases superior to, the GHM and CL symmetric multiwavelets.

15.
Multimedia Tools and Applications - The advances in digital image processing and communications have created a great demand for real-time secure image transmission over the networks. However,...

16.
《微型机与应用》2017,(3):93-95
Downhole television has long been limited by transmission rates; digital downhole television can help analyze and resolve complex downhole conditions. H.264 coding delivers higher image quality at the same bandwidth, and single-pair high-speed digital subscriber line (SHDSL) technology resolves the conflict between downhole transmission distance and rate. The transmission technique adapts to armored coaxial cable, lowering the requirements on the transmission medium and saving cost. Together, these two technologies address the short transmission distance and low resolution of downhole imaging and raise the downhole data transmission rate.

17.
Iris recognition has been demonstrated to be an efficient technology for personal identification. In this work, methods for iris encoding using bi-orthogonal wavelets and directional bi-orthogonal filters are proposed and compared. All iris images are enhanced using a wavelet-domain in-band de-noising method, which is shown to improve iris segmentation results. A framework to assess iris image quality based on occlusion, contrast, focus, and angular deformation is introduced and used as part of a novel adaptive matching technique driven by the assessed image quality. Adaptive matching shows improved performance compared with the Hamming distance method. Four databases are used to analyze system performance. The first two are the popular CASIA database and the high-resolution University of Bath database; results obtained for these are comparable with results from the literature in terms of both speed and accuracy. The other two databases contain challenging off-angle (WVU database) and uncontrolled (Clarkson database) iris images and are used to assess the limits of system performance. The best results are achieved with the directional bi-orthogonal filter based encoding technique combined with adaptive matching, with EER values of 0.07%, 0.15%, 0.81%, and 1.29% for the four databases, reflecting highly competent performance and high correlation with iris image quality.

18.
Generation of electricity from solar energy has gained worldwide acceptance due to its abundance and eco-friendly nature. Although solar power generation is attractive, its availability varies with factors such as irradiation, temperature, and shadowing. Hence, extraction of maximum power from solar PV using Maximum Power Point Tracking (MPPT) methods has been the subject of much recent study. Among the many methods proposed, Hill Climbing and Incremental Conductance are popular for reaching maximum power under constant irradiation; however, they show large steady-state oscillations around the MPP and poor dynamic performance under changing environmental conditions. Bio-inspired algorithms, on the other hand, show excellent characteristics when dealing with non-linear, non-differentiable, stochastic optimization problems without excessive mathematical computation. Hence, this paper modifies the Particle Swarm Optimization technique, with emphasis on initial value selection, for maximum power point tracking. Key features of the method include accurate tracking of the global power peak under changing environmental conditions with almost zero steady-state oscillation, faster dynamic response, and easy implementation. Systematic evaluation has been carried out for different partial shading conditions, and the results are compared with existing methods. In addition, the simulation results are validated on a hardware prototype.
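The core PSO update that such MPPT schemes adapt, a velocity pulled toward each particle's personal best and the swarm's global best, can be sketched on a toy one-dimensional "power curve" (hypothetical parameter values, not the paper's modified algorithm):

```python
import random

def pso_minimize(f, lo, hi, n_particles=10, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal 1-D particle swarm optimization over [lo, hi]."""
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                  # each particle's best-known position
    gbest = min(xs, key=f)         # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])    # pull toward personal best
                     + c2 * r2 * (gbest - xs[i]))      # pull toward global best
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest

random.seed(1)
# Toy P-V curve with its maximum power point at x = 3: minimize the negative.
mpp = pso_minimize(lambda x: -(9.0 - (x - 3.0) ** 2), 0.0, 10.0)
```

For MPPT the objective is the measured panel power at a candidate operating voltage, which is why the swarm can track a global peak under partial shading where hill climbing stalls on a local one.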

19.
We compare techniques that dynamically scale the voltage of individual network links to reduce power consumption with an approach in which all links in the network are set to the same voltage and adaptive routing is used to distribute load across the network. Our results show that adaptive routing with static network link voltages outperforms dimension-order routing with dynamic link voltages in all cases, because the adaptive routing scheme can respond more quickly to changes in network demand. Adaptive routing with static link voltages also outperforms adaptive routing with dynamic link voltages in many cases, although dynamic link voltage scaling gives better behavior as the demand on the network grows.

20.
Taking minimum network loss and maximum reliability of the distribution network as the objective function, with continuous power supply as the operating constraint, network reconfiguration is performed using an improved genetic algorithm combined with simulated annealing. The initial population is selected from the original network, and simulated annealing is embedded in an adaptive genetic algorithm, overcoming the large number of infeasible solutions produced when existing genetic algorithms are applied to distribution network reconfiguration. Verification on the IEEE RBTS Bus 4 benchmark system demonstrates the effectiveness of the proposed algorithm.
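The simulated-annealing acceptance rule embedded in such hybrid schemes, accept improvements always and worse solutions with probability exp(-Δ/T), can be sketched as follows (illustrative only, not the paper's reconfiguration model):

```python
import math
import random

def sa_accept(delta, temperature):
    """Metropolis criterion: always accept improvements (delta <= 0);
    accept a worsening move with probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)

random.seed(0)
# At high temperature most worsening moves pass; at low temperature almost none.
hot = sum(sa_accept(1.0, 10.0) for _ in range(1000))
cold = sum(sa_accept(1.0, 0.1) for _ in range(1000))
```

In such hybrids this rule typically decides whether a candidate replaces the incumbent, with the temperature lowered over the generations so the search becomes increasingly greedy.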


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号