Similar Literature
20 similar documents found.
1.
Multiple images with different exposures are used to produce a high dynamic range (HDR) image. A high-sensitivity setting is sometimes needed for capturing images in low-light conditions, as in an indoor room. However, current digital cameras do not produce a high-quality HDR image when noise occurs in low-light conditions or at high-sensitivity settings. In this paper, we propose a noise reduction method for generating HDR images from a set of low dynamic range (LDR) images with different exposures, in which ghost artifacts are effectively removed by image registration and local motion information. In the high-sensitivity setting, motion information is used in generating the HDR image. We analyze the characteristics of the proposed method and compare the performance of the proposed and existing HDR image generation methods, in which Reinhard et al.’s global tone mapping method is used for displaying the final HDR images. Experiments with several sets of test LDR images with different exposures show that the proposed method gives better performance than existing methods in terms of visual quality and computation time.
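A minimal sketch of multi-exposure HDR merging, not the authors' exact pipeline: a hat-shaped weight suppresses badly exposed pixels, and a simple radiance-consistency check against the mid-exposure frame stands in (as an assumption) for the paper's registration and local-motion ghost handling.

```python
import numpy as np

def merge_hdr(ldr_images, exposure_times, motion_thresh=0.05):
    """Merge registered LDR frames (float arrays in [0, 1]) into an HDR radiance map."""
    ldr_images = [img.astype(np.float64) for img in ldr_images]
    ref_idx = len(ldr_images) // 2
    ref_rad = ldr_images[ref_idx] / exposure_times[ref_idx]   # reference radiance estimate
    num = np.zeros_like(ref_rad)
    den = np.zeros_like(ref_rad)
    for img, t in zip(ldr_images, exposure_times):
        rad = img / t                                          # per-frame radiance estimate
        w = 1.0 - np.abs(2.0 * img - 1.0)                      # hat weight, peaks at mid-gray
        w = w * (np.abs(rad - ref_rad) < motion_thresh)        # crude ghost suppression (assumption)
        num += w * rad
        den += w
    return num / np.maximum(den, 1e-8)
```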

2.
Lossless compression of VQ index with search-order coding
In memoryless vector quantization (VQ) for images, each block is quantized independently and its corresponding index is sent to the decoder. This paper presents a new lossless algorithm that exploits the interblock correlation in the index domain. We compare the current index with previous indices along a predefined search path, and then send the corresponding search order to the decoder. The new algorithm achieves a significant reduction in bit rate without introducing extra coding distortion compared to memoryless VQ. It is very simple and computationally efficient.
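A minimal sketch of the search-order idea, assuming a simple four-neighbor causal search path; the exact path and bit-level packing used in the paper are not reproduced.

```python
def search_order_encode(index_map, path_len=4):
    """Encode a 2-D VQ index map: for each index, look back along a causal search
    path (left, up, upper-left, upper-right); if a previous block holds the same
    index, emit ('S', position-in-path), else emit ('I', index)."""
    h, w = len(index_map), len(index_map[0])
    path = [(0, -1), (-1, 0), (-1, -1), (-1, 1)][:path_len]   # assumed search path
    out = []
    for r in range(h):
        for c in range(w):
            cur = index_map[r][c]
            code = None
            for k, (dr, dc) in enumerate(path):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and index_map[rr][cc] == cur:
                    code = ('S', k)            # search-order code: cheaper than the raw index
                    break
            out.append(code if code else ('I', cur))
    return out
```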

3.
Pan, J.S., Chu, S.C. Electronics Letters, 1996, 32(17): 1545-1546
The tabu search approach is applied to codevector index assignment for noisy channels, with the aim of minimising the distortion due to bit errors without introducing any redundancy. Experimental results demonstrate the robustness of this approach compared with the standard parallel genetic algorithm and the binary switching algorithm.

4.
Conditional entropy-constrained residual VQ with application to image coding
This paper introduces an extension of entropy-constrained residual vector quantization (VQ) in which intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than entropy-constrained residual vector quantization, with less computational complexity and lower memory requirements; moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder, as well as to the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.

5.
In memoryless vector quantization (VQ) of images, each image block is quantized independently, and its corresponding index is sent to the decoder. Since the image blocks are highly correlated, many blocks may correspond to the same index. This paper presents a noiseless coding algorithm that groups the blocks with the same index and then encodes the group path efficiently. Simulation results indicate that the new algorithm improves coding efficiency without introducing any extra loss of information.

6.
Conditional entropy coding of VQ indexes for image compression
Block sizes of practical vector quantization (VQ) image coders are not large enough to exploit all high-order statistical dependencies among pixels. Therefore, adaptive entropy coding of VQ indexes via statistical context modeling can significantly reduce the bit rate of VQ coders for a given distortion. Address VQ was a pioneering work in this direction. In this paper we develop a framework of conditional entropy coding of VQ indexes (CECOVI) based on a simple Bayesian-type method of estimating probabilities conditioned on causal contexts. CECOVI is conceptually cleaner and algorithmically more efficient than address VQ, with the address-VQ technique as a special case. It reduces the bit rate of address VQ by more than 20% for the same distortion, and does so at only a tiny fraction of address VQ's computational cost.

7.
In this paper, we propose a noise reduction algorithm for digital color images using a nonlinear image decomposition approach. Most existing noise reduction methods do not adequately consider the spatial correlation of color noise in digital color images. Color noise components in color images captured by digital cameras appear as irregular grains of various sizes and shapes that are spatially randomly distributed. We use a modified multiscale bilateral decomposition to effectively separate signal and mixed-type noise components, in which a noisy input image is decomposed into a base layer and several detail layers. The base layer contains strong edges, and most of the noise components are contained in the detail layers. Noise components in the detail layers are reduced by an adaptive thresholding function. We obtain a denoised image by combining the base layer and the noise-reduced detail layers. Experimental results show the effectiveness of the proposed algorithm in terms of both peak signal-to-noise ratio and visual quality.
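A minimal sketch of the base/detail idea, with Gaussian smoothing assumed as a stand-in for the paper's modified multiscale bilateral decomposition and a fixed soft threshold replacing the adaptive thresholding function.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_denoise(img, sigmas=(1.0, 2.0, 4.0), thresholds=(0.02, 0.01, 0.005)):
    """Decompose an image into a base layer plus detail layers by repeated smoothing,
    soft-threshold each detail layer, and recombine."""
    base = img.astype(np.float64)
    details = []
    for s in sigmas:
        smoothed = gaussian_filter(base, sigma=s)
        details.append(base - smoothed)     # detail layer at this scale
        base = smoothed                      # coarser base for the next scale
    out = base                               # base layer keeps strong edges
    for d, t in zip(details, thresholds):
        d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)   # soft threshold on details
        out = out + d
    return out
```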

8.
Adaptive data hiding based on VQ compressed images
Data hiding involves embedding secret data into various forms of digital media, such as text, image, audio, and video. With the rapid growth of network communication, data-hiding techniques are widely used for protecting copyright, embedding captions, and communicating secretly. The authors propose an adaptive algorithm for embedding data into VQ compressed images. This method adaptively varies the embedding process according to the amount of hidden data. The proposed method provides more effective hiding and higher-quality images than conventional methods. The results of experimental comparisons are also presented.

9.
Edge-preserving denoising is of great interest in medical image processing. This paper presents a wavelet-based multiscale products thresholding scheme for noise suppression in magnetic resonance images. A Canny edge detector-like dyadic wavelet transform is employed. As a result, significant image features evolve with high magnitude across wavelet scales, while noise decays rapidly. To exploit the wavelet interscale dependencies, we multiply the adjacent wavelet subbands to enhance edge structures while weakening noise. In the multiscale products, edges can be effectively distinguished from noise. Thereafter, an adaptive threshold is calculated and imposed on the products, instead of on the wavelet coefficients, to identify important features. Experiments show that the proposed scheme suppresses noise and preserves edges better than other wavelet-thresholding denoising methods.
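A minimal sketch of interscale-product thresholding: band-pass layers built from successive Gaussian smoothings are assumed as a stand-in for the Canny-like dyadic wavelet subbands, and a simple global threshold on the product replaces the paper's adaptive threshold.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_products_denoise(img, sigmas=(1.0, 2.0, 4.0), k=1.0):
    """Keep a band-pass coefficient only where its product with the adjacent
    coarser scale exceeds a threshold; edges reinforce across scales, noise does not."""
    img = img.astype(np.float64)
    smoothed = [img] + [gaussian_filter(img, s) for s in sigmas]
    bands = [smoothed[i] - smoothed[i + 1] for i in range(len(sigmas))]
    for j in range(len(bands) - 1):
        prod = bands[j] * bands[j + 1]        # interscale product
        thresh = k * np.std(prod)             # simple global threshold (assumption)
        bands[j] = np.where(prod > thresh, bands[j], 0.0)
    out = smoothed[-1]                        # coarsest approximation, left untouched
    for b in bands:
        out = out + b
    return out
```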

10.
Sonar images usually suffer from speckle noise, which results in poor visual quality. To improve sonar imaging quality, removing or reducing this speckle noise is an important and difficult task. In this paper, the imaging principle and noise characteristics of side-scan sonar (SSS) are analyzed, and five typical probability distribution functions are used to fit the seabed reverberation. Through experimental comparison, the Gamma distribution is selected to model the noise in SSS images caused by reverberation. A Fields of Experts denoising algorithm based on the Gamma distribution (Gamma FoE) is then proposed for SSS image denoising. To better perceive and measure the denoising effect, the Fast Noise Variance Estimation (FNVE) index, an image noise estimation method, and the Blind Referenceless Image Spatial Quality Evaluator (BRISQUE), an image quality evaluation method, are selected for image quality assessment. Denoising experiments on SSS images show that the Gamma FoE algorithm performs better on SSS images than other denoising algorithms.
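A minimal sketch of the distribution-selection step, assuming SciPy's standard fitting routines and a small, arbitrary set of candidate distributions (the paper compares five; the exact set is not reproduced here).

```python
import numpy as np
from scipy import stats

def fit_reverberation(samples):
    """Fit candidate distributions to seabed-reverberation samples and rank them
    by log-likelihood; in the paper the Gamma distribution fits SSS background
    noise best."""
    candidates = {
        "gamma":    stats.gamma,
        "rayleigh": stats.rayleigh,
        "lognorm":  stats.lognorm,
        "weibull":  stats.weibull_min,
    }
    scores = {}
    for name, dist in candidates.items():
        params = dist.fit(samples, floc=0)                  # fix location at zero
        scores[name] = np.sum(dist.logpdf(samples, *params))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```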

11.
Steganography is one of the protective methods for secret communication over public networks such as the Internet. This paper proposes a novel reversible information hiding method for vector quantization (VQ) compressed images based on a locally adaptive coding method. The proposed steganographic method embeds a secret message into VQ indices in an index table during the process of compressing the index table in a block-by-block manner. The experimental results show that, on average, the proposed method achieves the best visual quality of reconstructed images and the best embedding rate compared to two related works. In terms of compression rate and encoding execution time, on average, Yang et al.’s method is the best, followed by our proposed method, and then Lin and Chang’s method.

12.
A technique that enables the variation of bias currents in a filter without causing disturbances at the output is presented. Thus, the bias current can be kept at the minimum value necessary for the total input signal being processed, reducing noise and power consumption. To demonstrate this approach, a dynamically biased log-domain filter has been designed in a 0.25-μm BiCMOS technology. The chip occupies 0.52 mm². In its quiescent condition, the filter consumes 575 μW and has an output noise of 4.4 nA rms. A signal-to-noise ratio greater than 50 dB over 3 decades of input and total harmonic distortion less than 1% for inputs below 2.5 mA peak are achieved. The bias can be varied to minimize noise and power consumption without disturbing the output.

13.
According to the circle-packing theorem, the packing efficiency of a hexagonal lattice is higher than that of an equivalent square tessellation. Consequently, in several contexts, hexagonally sampled images preserve information content better than their Cartesian counterparts. In this paper, novel mapping techniques alongside a wavelet compression scheme are presented for hexagonal images. Specifically, we introduce two tree-based coding schemes, referred to as SBHex (spirally-mapped branch-coding for hexagonal images) and BBHex (breadth-first block-coding for hexagonal images). Both of these coding schemes respect the geometry of the hexagonal lattice and yield better compression results. Our empirical results show that the proposed algorithms for hexagonal images produce better reconstruction quality at low bits-per-pixel representations than the tree-based coding counterparts for the Cartesian grid.

14.
Tree coding of bilevel images
Presently, sequential tree coders are the best general-purpose bilevel image coders and the best coders of halftoned images. The current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional probabilities to an arithmetic coder. The conditional probabilities are estimated from co-occurrence statistics of past pixels; the statistics are stored in a tree. By organizing the code-length calculations properly, a vast number of possible models (trees) reflecting different pixel orderings can be investigated within reasonable time prior to generating the code. A number of general-purpose coders are constructed according to this principle. Rissanen's (1989) one-pass algorithm, Context, is presented in two modified versions. The baseline is proven to be a universal coder. The faster version, which is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free-tree coding scheme produces superior compression results for all test images. A multipass free-template coding scheme produces significantly better results than JBIG for difficult images such as halftones. By utilizing randomized subsampling in the template selection, the speed becomes acceptable for practical image coding.
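A minimal sketch of context-based conditional probability estimation for a bilevel image, assuming a fixed 4-pixel causal template and add-1/2 (KT) counts; it accumulates the ideal code length instead of driving an actual arithmetic coder, and it does not attempt the paper's tree/template search.

```python
import math
from collections import defaultdict

def ideal_code_length(bitmap):
    """Estimate the ideal code length (bits) of a bilevel image under a fixed
    causal template, using count-based conditional probabilities."""
    h, w = len(bitmap), len(bitmap[0])
    template = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]      # causal neighbours (assumption)
    counts = defaultdict(lambda: [0, 0])                   # context -> [#zeros, #ones]
    total_bits = 0.0
    for r in range(h):
        for c in range(w):
            ctx = tuple(
                bitmap[r + dr][c + dc] if 0 <= r + dr < h and 0 <= c + dc < w else 0
                for dr, dc in template
            )
            n0, n1 = counts[ctx]
            x = bitmap[r][c]
            p = (counts[ctx][x] + 0.5) / (n0 + n1 + 1.0)   # KT probability estimate
            total_bits += -math.log2(p)                     # ideal code length of this pixel
            counts[ctx][x] += 1
    return total_bits
```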

15.
This paper proposes a novel scheme of scalable coding for encrypted images. In the encryption phase, the original pixel values are masked by a modulo-256 addition with pseudorandom numbers that are derived from a secret key. After decomposing the encrypted data into a downsampled subimage and several data sets with a multiple-resolution construction, an encoder quantizes the subimage and the Hadamard coefficients of each data set to reduce the data amount. Then, the data of the quantized subimage and coefficients are regarded as a set of bitstreams. At the receiver side, while the subimage is decrypted to provide rough information about the original content, the quantized coefficients can be used to reconstruct the detailed content with an iteratively updating procedure. Because of the hierarchical coding mechanism, the principal original content can be reconstructed at higher resolution when more bitstreams are received.
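A minimal sketch of the masking step only, assuming a seeded NumPy generator as a stand-in for the paper's key-derived pseudorandom stream; the multi-resolution decomposition and Hadamard quantization are not reproduced.

```python
import numpy as np

def mask_image(pixels, key):
    """Encrypt an 8-bit image by modulo-256 addition of a pseudorandom stream
    derived from a key; subtracting the same stream modulo 256 decrypts."""
    rng = np.random.default_rng(key)                    # keyed PRNG stand-in (assumption)
    stream = rng.integers(0, 256, size=pixels.shape, dtype=np.uint16)
    cipher = ((pixels.astype(np.uint16) + stream) % 256).astype(np.uint8)
    return cipher, stream

def unmask_image(cipher, stream):
    """Invert the modulo-256 masking given the same pseudorandom stream."""
    return ((cipher.astype(np.int16) - stream.astype(np.int16)) % 256).astype(np.uint8)
```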

16.
The requirement for improved picture quality in videophone and videoconference systems operating at low bit rates has stimulated interest in model-based image coding. Two model-based coding techniques are described which are capable of producing either improved picture quality at bit rates around 64 kbit/s or acceptable picture quality at bit rates far lower than 64 kbit/s. The first technique produces facial expressions by using feature codebooks; the second produces facial expressions by distorting an underlying three-dimensional model. The problems of image analysis and synthesis, which go hand in hand in model-based coding, are discussed.

17.
Data hiding is designed to solve the problem of secure information exchange through public networks such as the Internet. In this paper, we present an improved reversible data hiding scheme that can recover the original VQ indices after data extraction. As with Chang et al.’s scheme, our proposed scheme also depends on the locally adaptive coding scheme. However, experimental results confirm that the hiding capacity of our proposed scheme is around 1.36 bpi for most digital images, which is typically higher than that of Chang et al.’s scheme [17]. Moreover, the average compression rate achieved with our proposed scheme is 0.49 bpp, which outperforms Lin and Chang’s scheme (0.50 bpp), Tsai’s scheme (0.50 bpp), Chang et al.’s scheme (0.53 bpp), and Yang and Lin’s scheme (0.53 bpp).

18.
Achieving a high embedding capacity and a low compression rate with a reversible data hiding method in the vector quantization (VQ) compressed domain is a technically challenging problem. This paper proposes a novel reversible steganographic scheme for VQ compressed images based on a locally adaptive data compression method. The proposed method embeds n secret bits into one VQ index of an index table in Hilbert-curve scan order. The experimental results show that the proposed method achieves average embedding rates of 0.99, 1.68, 2.28, and 3.04 bits per index (bpi) and average compression rates of 0.45, 0.46, 0.50, and 0.56 bits per pixel (bpp) for n = 1, 2, 3, and 4, respectively. These results indicate that the proposed scheme is superior to Chang et al.’s scheme 1 [19], Yang and Lin’s scheme [21], and Chang et al.’s scheme 2 [24].
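A minimal sketch of the Hilbert-curve scan order over a power-of-two index table, using the classic iterative d2xy conversion; the paper's locally adaptive compression and the per-index embedding of the n secret bits are not reproduced.

```python
def hilbert_d2xy(n, d):
    """Convert a distance d along the Hilbert curve into (x, y) on an n x n grid,
    where n is a power of two (standard iterative d2xy algorithm)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                          # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(index_table):
    """Visit a 2^k x 2^k VQ index table in Hilbert-curve order; embedding would
    process one index per step in this order."""
    n = len(index_table)
    order = [hilbert_d2xy(n, d) for d in range(n * n)]
    return [index_table[y][x] for (x, y) in order]
```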

19.
A hierarchical image coding algorithm based on sub-band coding and adaptive block-size multistage vector quantization (VQ) is proposed, and its coding performance is examined for super high definition (SHD) images. First, the concept of SHD images is briefly described. Next, the signal power spectrum is evaluated, and the sub-band analysis pattern is determined from its characteristics. Several quadrature mirror filters are examined from the viewpoints of reconstruction accuracy, coding gain, and low-pass signal quality, and an optimum filter is selected for the sub-band analysis. A two-stage VQ using adaptive bit allocation is also introduced to control quantization accuracy and to achieve high-quality image reproduction. Coding performance and hierarchical image reconstruction are demonstrated using SNR and sample photographs.

20.
陶长武, 蔡自兴. 《信息技术》, 2007, 31(12): 53-56
This paper describes the basic principles of image compression coding, systematically introduces several modern image coding methods with promising application prospects and their characteristics, and concludes with a summary and outlook on image coding, noting that studying image coding from the perspective of image models will become the research direction of the next generation of image coding.
