Similar Documents
 20 similar documents found (search time: 27 ms)
1.
One of the major difficulties arising in vector quantization (VQ) is its high encoding time complexity. Based on the well-known partial distance search (PDS) method and a special ordering of codewords in the VQ codebook, two simple and efficient methods are introduced for fast full-search vector quantization to reduce encoding time complexity. Exploiting the “move-to-front” method, which tends to find a small distortion as early as possible, in combination with the PDS algorithm is shown to improve the encoding efficiency of the PDS method. Because of the energy-compaction property of the DCT domain, search in a DCT-domain codebook can be sped up further. The experimental results show that our fast algorithms significantly reduce the search time of VQ encoding. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 204–210, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10030
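The partial distance search at the heart of this approach can be sketched as follows (a minimal illustration of the general PDS idea, not the authors' implementation; `pds_nearest` is a hypothetical name):

```python
def pds_nearest(codebook, x):
    """Full-search VQ encoding with partial distance search (PDS):
    while accumulating the squared distance to a codeword, reject it
    as soon as the partial sum reaches the best distance found so far."""
    best_idx, best_dist = -1, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for cj, xj in zip(c, x):
            d += (cj - xj) ** 2
            if d >= best_dist:      # early rejection: cannot beat current best
                break
        else:                       # full distance computed and it is smaller
            best_idx, best_dist = i, d
    return best_idx, best_dist
```

Ordering the codebook so that likely winners come first (as a "move-to-front" heuristic does) tightens `best_dist` early, so the rejection test fires sooner.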

2.
Abstract

The nearest neighbor (NN) search problem has wide applications. In vector quantization (VQ), both the codebook-generation phase and the encoding phase (using the codebook just generated) often rely on NN search. An improperly designed search algorithm makes the complexity grow quickly as the vector dimensionality k or the codebook size N increases. In this paper, a fast NN search method is proposed, which can then accelerate the LBG codebook-generation process for VQ design. The method modifies and improves the LAESA method: unlike LAESA, k/2 “fixed” points (allocated far from the data) together with the origin are used as the k/2+1 reference points to reduce the search area. The memory overhead is only linearly proportional to N and k, and the time complexity, including the overhead, is of order O(kN). According to our experiments, the proposed algorithm reduces the time burden while the distortion remains identical to that of the full search.
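The reference-point pruning that LAESA-style methods rely on can be sketched with the triangle inequality (an illustrative sketch under our own naming, not the paper's exact algorithm):

```python
import math

def euclid(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def build_tables(codebook, refs):
    # precompute, once, the distance from every codeword to every reference point
    return [[euclid(c, r) for r in refs] for c in codebook]

def nn_search(codebook, tables, refs, q):
    dq = [euclid(q, r) for r in refs]        # query-to-reference distances
    best_i, best_d = -1, float("inf")
    for i, c in enumerate(codebook):
        # triangle inequality: |d(q,r) - d(c,r)| <= d(q,c) for every reference r
        lb = max(abs(dq[j] - tables[i][j]) for j in range(len(refs)))
        if lb >= best_d:
            continue                         # codeword cannot beat the current best
        d = euclid(q, c)
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d
```

Only codewords whose lower bound beats the current best incur a full distance computation; the tables cost O(Nk)-style memory, matching the linear overhead claimed above.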

3.
The Imaging Science Journal, 2013, 61(6): 348–362
Abstract

SOM-based image quantisation requires a considerable amount of processing time even during the pixel mapping stage. Basically, a full search algorithm is employed to find a codeword, within a codebook, whose distance to the queried pixel is minimum. In this paper, we present a novel approach to accelerate the pixel mapping stage by utilisation of the spatial redundancy of pixels in the image and the inherent topological preservation nature of the resulting codebook. The experimental results confirm that the proposed approach outperforms ordinary solutions and is comparable to state-of-the-art solutions in terms of execution time. In addition, as the proposed approach does not require codebook sorting and a complex data structure with variable sizes, this simplifies its implementation and makes it feasible for hardware realisation.

4.
A new secret image transmission scheme suitable for narrow communication channels is proposed in this article. A set of secret images can be delivered to the receiver simultaneously and efficiently via a small, meaningless data stream. To reduce the volume of the secret images, a codebook is first generated and the secret images are encoded into binary indexes based on the vector quantization (VQ) technique. The compressed message is then embedded into the VQ codebook used in the encoding procedure by an adaptive least-significant-bits (LSB) modification technique. For security, the slightly modified codebook is further encrypted into a meaningless data stream by the AES cryptosystem. Simulation results show that the proposed scheme provides an impressive improvement both in the visual quality of the extracted secret images at the receiver and in the hiding capacity of the cover medium. Experimental data also confirm the feasibility of the proposed scheme for limited-bandwidth environments. © 2007 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 17, 1–9, 2007
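Embedding message bits into codebook components with LSB substitution works roughly as below (a generic 1-bit LSB sketch with hypothetical names; the paper's adaptive variant decides how many bits each component carries):

```python
def embed_bits(codeword, bits):
    """Replace the least-significant bit of successive integer components
    with message bits; leave the remaining components untouched."""
    out = list(codeword)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_bits(codeword, n):
    """Read the message back from the LSBs of the first n components."""
    return [v & 1 for v in codeword[:n]]
```

Because only the lowest bit of each component changes, the modified codebook stays numerically close to the original, which is what keeps the decoded images visually intact.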

5.
The Imaging Science Journal, 2013, 61(2): 195–203
Abstract

In this paper, we propose two reversible information hiding schemes based on side-match vector quantisation (SMVQ). In 2006, Chang et al. proposed a data-hiding scheme based on SMVQ coding. To increase both the image quality and the embedding capacity, we improve their method by embedding multiple secret bits into a block and selecting the second codeword from the full codebook. In addition, we propose another reversible information hiding scheme whose output is a pure VQ index table. A weighted bipartite graph is used to model the scheme, and a matching approach is used to find the best solution. Compared with Chang et al.'s scheme, ours achieves higher visual quality in the experimental results.

6.
Abstract

In this paper, a new artifact reduction algorithm for compressed color images using MMRCVQ is proposed. The algorithm extends and modifies vector quantization (VQ) to discover the relationships between uncompressed color images and their deblocked compressed versions, classifying the deblocked compressed blocks into several categories using information from their neighboring blocks. The discovered relationships are stored in two codebooks and are used to recover the missing information of compressed color images. To increase the availability of codewords and reduce the memory needed to store them, mean-removed vectors are used to generate the codebooks. The experimental results show that the proposed approach effectively removes the artifacts caused by high compression and significantly improves perceptual quality. Compared with existing methods, the proposed approach usually needs much less computing time to recover a compressed color image and yields much better image quality.

7.
Abstract

The protection of digital multimedia property has received significant attention in recent years. Robust watermarking of digital images and video for copyright protection is an important and challenging topic. In this paper, a watermarking method based on vector quantization (VQ) is presented. In the proposed method, the watermark is related to the codebook, which is permuted by an owner-specific random sequence. Moreover, extracting the watermark does not require the original image. The proposed method exploits the relation of VQ indices to provide invisibility and robustness against various attacks. The experimental results demonstrate the effectiveness of the proposed method.

8.
Finite state vector quantization (FSVQ) has been proven to be a high-quality, low-bit-rate coding scheme. An FSVQ achieves the efficiency of a small-codebook (state codebook) VQ while maintaining the quality of a large-codebook (master codebook) VQ. However, the large master codebook becomes a primary limitation of FSVQ once implementation is carefully taken into account: a large amount of memory is required to store the master codebook, and much effort is spent maintaining the state codebooks if the master codebook grows too large. This problem can be partially solved by the mean/residual technique (MRVQ), in which the block means and the residual vectors are coded separately. However, MRVQ has its own drawbacks: additional bits are required to code the means, and selecting the state codebooks in the residual domain is difficult. A new hybrid coding scheme, finite state residual vector quantization (FSRVQ), is proposed in this article to combine the advantages of FSVQ and MRVQ. The codewords in FSRVQ are designed with the block means removed, reducing the codebook size. The block means are predicted from the neighboring blocks to reduce the bit rate. In addition, the predicted means are added to the residual vectors so that the state codebooks can be generated entirely. The experimental results indicate that FSRVQ performs uniformly better than both ordinary FSVQ and MRVQ. ©1994 John Wiley & Sons Inc
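The mean/residual step shared by MRVQ and FSRVQ can be sketched as follows (an illustrative sketch with hypothetical names; FSRVQ additionally predicts the mean from neighboring blocks rather than transmitting it):

```python
def encode_mr(block, residual_codebook):
    """Mean/residual VQ: code the block mean separately and vector-quantize
    the mean-removed residual against a residual codebook."""
    m = sum(block) / len(block)
    r = [v - m for v in block]
    idx = min(range(len(residual_codebook)),
              key=lambda i: sum((a - b) ** 2
                                for a, b in zip(residual_codebook[i], r)))
    return m, idx

def decode_mr(m, idx, residual_codebook):
    # add the (possibly predicted) mean back onto the residual codeword
    return [m + v for v in residual_codebook[idx]]
```

Removing the mean collapses blocks that differ only in brightness onto the same residual, which is why a much smaller residual codebook suffices.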

9.
Wavelet transform coding (WTC) with vector quantization (VQ) has been shown to be efficient in the application of image compression. An adaptive vector quantization coding scheme with the Gold-Washing dynamic codebook-refining mechanism in the wavelet domain, called symmetric wavelet transform-based adaptive vector quantization (SWT-GW-AVQ), is proposed for still-image coding in this article. The experimental results show that the GW codebook-refining mechanism working in the wavelet domain rather than the spatial domain is very efficient, and the SWT-GW-AVQ coding scheme may improve the peak signal-to-noise ratio (PSNR) of the reconstructed images with a lower encoding time. © 2002 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 166–174, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10024

10.
The Imaging Science Journal, 2013, 61(2): 219–231
Abstract

In this paper, a new fractal image compression algorithm is proposed in which the time of the encoding process is considerably reduced. The algorithm exploits a domain-pool reduction approach, along with innovative predefined values for the contrast scaling factor S instead of searching for it. Only the domain blocks with entropy greater than a threshold are considered to belong to the domain pool. The algorithm has been tested on several well-known images and the results have been compared with state-of-the-art algorithms. The experiments show that our proposed algorithm has considerably lower encoding time than the other algorithms while giving approximately the same quality for the encoded images.
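The entropy-threshold filtering of the domain pool can be sketched like this (a minimal sketch under our own naming, treating a block as a flat list of pixel values):

```python
import math
from collections import Counter

def block_entropy(block):
    """Shannon entropy (in bits) of the pixel-value histogram of a block."""
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in Counter(block).values())

def reduce_domain_pool(blocks, threshold):
    """Keep only the domain blocks whose entropy exceeds the threshold;
    flat, low-information blocks are excluded from the encoding search."""
    return [b for b in blocks if block_entropy(b) > threshold]
```

Since flat blocks rarely serve as good matches for detailed range blocks, dropping them shrinks the search space at little cost in quality.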

11.
The basic goal of image compression through vector quantization (VQ) is to reduce the bit rate for transmission or data storage while maintaining an acceptable fidelity or image quality. The advantage of VQ image compression is its fast decompression by table lookup. However, a codebook supplied in advance may not handle changing image statistics very well, so the need for online codebook generation became apparent. Competitive learning neural networks have been used for vector quantization, but their training time can be very long, and the number of output nodes is somewhat arbitrarily decided before training starts. Our modified approach presents a fast codebook-generation procedure that searches for an optimal number of output nodes adaptively during training. The results on two medical images show that this new approach reduces the training time considerably while still maintaining good quality for the recovered images. © 1997 John Wiley & Sons, Inc. Int J Imaging Syst Technol, 8, 413–418, 1997
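The basic one-winner competitive-learning update such networks use can be sketched as follows (a generic sketch, not the paper's node-growing procedure):

```python
def train_codebook(data, codebook, lr=0.1, epochs=5):
    """One-winner competitive learning: for each training vector, move the
    nearest codeword toward it by a fraction lr of the difference."""
    cb = [list(c) for c in codebook]
    for _ in range(epochs):
        for x in data:
            # winner: codeword with the smallest squared distance to x
            w = min(range(len(cb)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(cb[i], x)))
            cb[w] = [c + lr * (xi - c) for c, xi in zip(cb[w], x)]
    return cb
```

Each codeword drifts toward the centroid of the training vectors it wins, so the trained codebook tracks the image statistics online.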

12.
Abstract

In this paper, we propose and evaluate a method for fractal image coding in the subband domain. The subband decomposition scheme acts as a classifier, which can efficiently reduce encoding time. The proposed fractal image coding scheme is adaptive, with the adaptability based on the variance of each subband: at each subband, the scheme adaptively sets the map block size that should be encoded. In addition, the domain blocks are adaptively restricted to the neighborhood of their respective range block. Simulation results show that good picture quality of the coded image is obtained at 0.370 bpp. They also indicate that such an adaptive scheme makes a better trade-off between the required bit rate and picture quality than a fixed-size one. Moreover, the adaptive scheme saves a large amount of time.

13.
Abstract

In this study we present an efficient global optimization method, the DIviding RECTangles (DIRECT) algorithm, for parametric analysis of dynamic systems. In a bound-constrained problem the DIRECT algorithm explores multiple potentially optimal subspaces in one search. The algorithm also eliminates the derivative calculations required by some efficient gradient-based methods. The first optimization example is to find the dynamic parameters of a tennis racket. The second is a biomechanical parametric study of a heel-toe running model governed by six factors. The effectiveness of the DIRECT algorithm is compared with a genetic algorithm in an analysis of heel-toe running; DIRECT obtains an improved result in 83% less execution time. It is demonstrated that the straightforward DIRECT algorithm provides a general procedure for solving global optimization problems efficiently and reliably.

14.
This article presents a compression method that encodes a 2D-gel image using hybrid lossless and lossy techniques. Areas containing protein spots are encoded with the lossless method while the background is encoded with the lossy method. A 2D-gel image usually contains a large portion of background whose colors are close to white. The VQ codebook-generating approach assigns more codewords to describe the background; consequently, the proposed method can depict the background of the 2D-gel image nearly precisely and record the protein spots exactly, without any loss. It can therefore provide a high compression ratio, and images compressed by this method can be reconstructed nearly losslessly. The experimental results show that the compression ratio is significantly improved, with acceptable image quality, compared to the JPEG-LS method. © 2006 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 16, 1–8, 2006

15.
In this article, an efficient image coding scheme that takes advantage of feature vectors in the wavelet domain is proposed. First, a multi-stage discrete wavelet transform is applied to the image. Then, wavelet feature vectors are extracted from the wavelet-decomposed subimages by collecting the corresponding wavelet coefficients. Finally, the image is coded into a bit-stream by applying vector quantization (VQ) to the extracted wavelet feature vectors. In the encoder, the wavelet feature vectors are encoded with a codebook whose codeword dimension is less than that of the wavelet feature vectors; in this way, the coding system greatly improves its efficiency. To fully reconstruct the image, however, the received indexes in the decoder are decoded with a codebook whose codeword dimension equals that of the wavelet feature vectors, so the quality of the reconstructed images is well preserved. The proposed scheme achieves good compression efficiency by three means: (1) using the correlation among wavelet coefficients; (2) placing different emphasis on wavelet coefficients at different decomposition levels; (3) preserving the most important information of the image by coding the lowest-pass subimage individually. In our experiments, simulation results show that the proposed scheme outperforms recent VQ-based image coding schemes and wavelet-based image coding techniques. Moreover, the proposed scheme is also suitable for very low bit rate image coding. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 123–130, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20045

16.
ABSTRACT

In this paper, we first propose a new embedded multilevel block truncation coding (BTC) technique. Unlike differential pulse code modulation (DPCM), vector quantization (VQ), and general multilevel BTC algorithms, which fix the image quality in a single pass, the embedded multilevel BTC improves the image quality substantially and progressively until an image of excellent quality is obtained. To reduce the bit rate efficiently, we propose a perception model and use it to develop a pruning algorithm. The pruning algorithm removes the useless information generated by the embedded multilevel BTC, to which the human eye is not sensitive. The simulation results indicate that the bit rate of the proposed method is much lower than that of DPCM and general multilevel BTC under the same objective criterion (PSNR) or subjective criterion. This paper also shows that the computational complexity of the proposed method is much lower than that of VQ for the same high-quality reconstructed image.
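A single level of moment-preserving BTC, which embedded multilevel schemes build on, can be sketched as follows (standard two-level BTC, not the authors' embedded variant):

```python
import math

def btc_encode(block):
    """Two-level BTC: keep the block mean and standard deviation plus a
    bitmap marking the pixels at or above the mean."""
    n = len(block)
    mean = sum(block) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in block) / n)
    bitmap = [1 if v >= mean else 0 for v in block]
    return mean, std, bitmap

def btc_decode(mean, std, bitmap):
    """Reconstruct with two levels chosen to preserve the first two moments."""
    n, q = len(bitmap), sum(bitmap)
    if q in (0, n):
        return [mean] * n
    lo = mean - std * math.sqrt(q / (n - q))
    hi = mean + std * math.sqrt((n - q) / q)
    return [hi if b else lo for b in bitmap]
```

An embedded scheme can then re-apply the same idea to the residual of each reconstructed level, refining quality progressively.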

17.
No-reference image quality assessment has developed rapidly in recent years, but no-reference algorithms for assessing the quality of foggy images have rarely been reported. This paper proposes a codebook-based no-reference quality assessment algorithm for foggy images, with the goal of making its quality judgments consistent with human subjective perception. Features that reflect foggy-image quality are identified and used to construct a codebook; the codebook then encodes the training images into feature vectors, and these vectors are regressed against the subjective scores of the training images to obtain a quality-assessment model for foggy images. The method was tested on a database of simulated foggy images; both the Pearson linear correlation coefficient and the Spearman rank correlation coefficient exceed 0.99. Compared with the classic no-reference algorithms NIQE and CONIA, the proposed method performs better and predicts human subjective perception of foggy images well.

18.
Abstract

In this paper, a novel genetic algorithm is proposed that incorporates domain-specific knowledge into the crossover operator and the local search mechanism for solving weapon-target assignment (WTA) problems. The WTA problem is a full assignment of weapons to hostile targets with the objective of minimizing the expected damage to own-force assets; it is NP-complete. In our study, a greedy reformation and a new crossover operator are proposed to improve search efficiency. The proposed algorithm outperforms its competitors on all test problems.

19.
The paper addresses minimizing makespan with a genetic algorithm (GA) for scheduling jobs with non-identical sizes on a single batch-processing machine. A batch-processing machine can process up to B jobs simultaneously, and the processing time of a batch equals the longest processing time among all jobs in the batch. Two different GAs are proposed based on different encoding schemes. The first is a sequence-based GA (SGA) that generates random sequences of jobs using GA operators and applies the batch first-fit heuristic to group the jobs. The second is a batch-based hybrid GA (BHGA) that generates random batches of jobs using GA operators and ensures feasibility by using problem knowledge in a heuristic procedure. A greedy local search heuristic based on the problem characteristics is hybridized with the BHGA, which can efficiently steer the search toward optimal or near-optimal schedules. The performance of the proposed GAs is compared with a simulated annealing (SA) approach proposed by Melouk et al. (Melouk, S., Damodaran, P. and Chang, P.Y., Minimizing makespan for single machine batch processing with non-identical job sizes using simulated annealing. Int. J. Prod. Econ., 2004, 87, 141–147) and also against a modified lower bound proposed for the problem. Computational results show that BHGA performs considerably well against the modified lower bound and significantly outperforms the SGA and SA in terms of both solution quality and required runtime.
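The batch first-fit grouping that SGA applies to a job sequence can be sketched as follows (an illustrative sketch with hypothetical names; jobs are (size, processing_time) pairs):

```python
def batch_first_fit(jobs, capacity):
    """Place each job into the first open batch with enough residual
    capacity, opening a new batch when none fits; on a single batch
    machine the makespan is the sum of each batch's longest job."""
    batches = []                      # each entry: [remaining_capacity, jobs]
    for size, p in jobs:
        for batch in batches:
            if batch[0] >= size:
                batch[0] -= size
                batch[1].append((size, p))
                break
        else:                         # no open batch fits: start a new one
            batches.append([capacity - size, [(size, p)]])
    makespan = sum(max(p for _, p in b[1]) for b in batches)
    return [b[1] for b in batches], makespan
```

The GA then searches over job sequences, since different input orders lead first-fit to different batchings and hence different makespans.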

20.
Abstract

To achieve high coding efficiency, modern speech coders adopt hybrid coding approaches, which use different coding mechanisms for the various classified speech segments. Assuming known voiced/unvoiced detection, this paper presents a classified LPC quantization (CLPQ) scheme to encode line spectral frequencies (LSF) effectively. The proposed CLPQ scheme improves the performance of the classified LSF vector quantizer, which adopts two LSF codebooks derived separately from voiced and unvoiced speech frames. Under an objective spectral distortion measure, the CLPQ scheme reduces the bit rate by about 1 bit/frame. Many classified LSF quantizers with different codebook structures and bit rates were evaluated, which should help in designing a classified LSF quantizer that strikes a compromise among distortion, bit rate, and computational complexity.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号