Similar Articles
20 similar articles retrieved.
1.
In this paper, we propose a new perceptually significant video quality metric for the H.264/Moving Picture Experts Group (MPEG)-4 Advanced Video Coding (AVC) and MPEG-2 standards. Our method operates in the spatial domain using the Sobel filter. The proposed approach has low computational complexity and is suitable for real-time evaluation. We evaluate its performance on three Common Intermediate Format sequences at different compression rates. The obtained results are compared with several video quality models on the "LIVE", "IVP" and "IRCCyN/IVC 1080i" databases. The performance metrics, i.e. the Pearson and Spearman correlation coefficients, indicate that the proposed method performs well on H.264 and MPEG-2 codec distortions across the three databases compared with the other models.
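The abstract does not give the metric's exact formulation; as a rough illustration of a Sobel-gradient comparison followed by the Pearson/Spearman evaluation step, a minimal Python sketch (the function, variable names, and sample scores are illustrative, not from the paper) could look like this:

```python
# Minimal sketch (not the authors' exact metric): compare Sobel gradient maps of
# reference and distorted frames, then correlate predictions with subjective scores.
import numpy as np
from scipy import ndimage, stats

def sobel_quality(ref, dist):
    """Similarity of edge structure between a reference and a distorted frame."""
    g_ref = np.hypot(ndimage.sobel(ref, axis=0), ndimage.sobel(ref, axis=1))
    g_dst = np.hypot(ndimage.sobel(dist, axis=0), ndimage.sobel(dist, axis=1))
    eps = 1e-6
    return np.mean((2 * g_ref * g_dst + eps) / (g_ref**2 + g_dst**2 + eps))

# Performance evaluation as described: correlate predicted scores with subjective scores.
predicted = np.array([0.91, 0.74, 0.55, 0.38])    # illustrative metric outputs
subjective = np.array([4.5, 3.8, 2.9, 1.7])       # illustrative MOS values
print(stats.pearsonr(predicted, subjective)[0])
print(stats.spearmanr(predicted, subjective)[0])
```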

2.
A very low-profile and ultra-thin "H-Shaped" antenna for IEEE 802.11a and HIPERLAN 2 wireless applications in laptop computers is developed. The antenna is made from a single copper strip of size 17.5 (L) × 4 (W) mm² with a thickness of only 0.035 mm. The novelty of the proposed antenna is that it is designed with only one rectangular radiating strip, without any additional reactive components, vias, or three-dimensional structure. Furthermore, the proposed antenna does not require an additional ground plane for installation in laptops. It consists of one radiating strip, one rectangular stub, and two resonating slots, "X" and "Y", of length 7.5 mm and 7 mm, respectively. The proposed structure resonates at around 5.5 GHz and covers the (5.15-5.35/5.725-5.825) GHz IEEE 802.11a and (5.15-5.35/5.470-5.725/5.725-5.925) GHz HIPERLAN 2 bands. The fabricated prototype has a measured impedance bandwidth (VSWR < 2) of 15% (5.10-5.92 GHz) across the operating bands. The measured radiation patterns are nearly omnidirectional, with a stable gain of 5 dBi. Moreover, the proposed antenna exhibits excellent radiation efficiency of around 90% across the operating bands. The simulated and measured results are in good agreement. The very low-profile and ultra-thin structure makes it an excellent candidate for wireless operation in ultra-thin laptop computers.
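As a quick arithmetic check of the reported figure, the fractional bandwidth implied by the measured 5.10-5.92 GHz band can be computed with the standard definition (not a formula taken from the paper):

```python
# Fractional bandwidth = (f_high - f_low) / f_center, the standard definition.
f_low, f_high = 5.10e9, 5.92e9            # measured VSWR < 2 band edges (Hz)
f_center = (f_low + f_high) / 2
fractional_bw = (f_high - f_low) / f_center
print(f"{fractional_bw:.1%}")             # ~14.9%, consistent with the reported 15%
```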

3.
This paper re-examines the concept of "meme" in the context of digital culture. Defined as cultural units that spread from person to person, memes were debated long before the digital era. Yet the Internet turned the spread of memes into a highly visible practice, and the term has become an integral part of the netizen vernacular. After evaluating the promises and pitfalls of memes for understanding digital culture, I address the problem of defining memes by charting a communication-oriented typology of three memetic dimensions: content, form, and stance. To illustrate the utility of the typology, I apply it to analyze the video meme "Leave Britney Alone." Finally, I chart possible paths for further meme-oriented analysis of digital content.

4.
Electronic magnification of an image results in a decrease in its perceived contrast. The decrease in perceived contrast could be due to perceived blur or to limited sampling of the range of contrasts in the original image. We measured the effect of magnification on perceived contrast in two contexts: either a small video was enlarged to fill a larger area, or a portion of a larger video was enlarged to fill the same area as the original. Subjects attenuated the source video contrast to match the perceived contrast of the magnified videos, with the effect increasing with magnification and decreasing with viewing distance. These effects are consistent with expectations based on both the contrast statistics of natural images and the contrast sensitivity of the human visual system. We demonstrate that local regions within videos usually have lower physical contrast than the whole and that this difference accounts for only a minor part of the perceived differences. Instead, the visibility of "missing content" (blur) in a video is misinterpreted as a decrease in contrast. We detail how the effects of magnification on perceived contrast can be measured while avoiding confounding factors.
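To illustrate the claim that local regions usually have lower physical contrast than the whole image, one can compare the RMS contrast of random crops against the full frame; the sketch below assumes RMS contrast as the contrast measure, which may differ from the statistic the authors actually used:

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: standard deviation of luminance divided by its mean."""
    img = np.asarray(img, dtype=float)
    return img.std() / (img.mean() + 1e-9)

def local_vs_global_contrast(frame, crop=64, n=100, rng=None):
    """Mean RMS contrast of n random local crops versus that of the whole frame."""
    rng = rng or np.random.default_rng(0)
    h, w = frame.shape
    locals_ = [rms_contrast(frame[y:y + crop, x:x + crop])
               for y, x in zip(rng.integers(0, h - crop, n), rng.integers(0, w - crop, n))]
    return np.mean(locals_), rms_contrast(frame)
```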

5.
This paper presents an infrared single-pixel video imaging scheme for sea-surface surveillance. Based on the temporal redundancy of the surveillance video, a two-step scheme comprising low-scale detection and high-scale detection is proposed. For each frame, low-scale detection performs low-resolution single-pixel imaging to obtain a "preview" image of the scene, in which moving targets can be located. These targets are then refined in the high-scale detection step, where high-resolution single-pixel imaging is focused on the target regions. The frame is reconstructed by merging these two-level images. Simulated experiments show that, for a video of 128 × 128 pixels and 150 frames, the sampling rate of our scheme is about 17.8% and the reconstructed video has good visual quality.
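The quoted sampling rate is simply the per-frame measurement budget divided by the pixel count; the bookkeeping below reproduces that ratio with illustrative numbers (the preview resolution and per-target budget are assumptions, not values from the paper):

```python
# Per-frame measurement bookkeeping for a two-scale single-pixel scheme (illustrative).
pixels = 128 * 128                 # frame size from the abstract
preview = 32 * 32                  # assumed low-resolution preview measurements
targets, per_target = 2, 950       # assumed number of moving targets and high-res budget each
measurements = preview + targets * per_target
print(f"sampling rate ≈ {measurements / pixels:.1%}")   # ≈ 17.8% under these assumptions
```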

6.
Spatiotemporal Visual Considerations for Video Coding
Human visual sensitivity varies not only with the spatial frequency of image patterns but also with their velocity. Moreover, the loss of visual sensitivity due to object motion may be compensated by eye movements. Removing the psychovisual redundancies in both the spatial and temporal frequency domains facilitates an efficient coder without perceptual degradation. Motivated by this, a visual measure is proposed for video compression. The novelty of this analysis lies in combining three visual factors: a motion attention model, a spatiovelocity visual sensitivity model incorporating unconstrained eye movements, and a visual masking model. For each motion-unattended macroblock, the retinal velocity is evaluated so that discrete cosine transform coefficients to which the human visual system has low sensitivity are picked out with the aid of the eye-movement-incorporated spatiovelocity visual model. Based on the masking thresholds of those low-sensitivity coefficients, a spatiotemporal distortion masking measure is determined, and macroblock-level quantization parameters for video coding are adjusted on the basis of this measure. Experiments conducted with H.264 demonstrate the effectiveness of the proposed scheme in improving coding performance without degrading picture quality.
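The retinal-velocity step can be sketched with the widely used smooth-pursuit eye-movement model attributed to Daly (pursuit gain plus drift and saccade limits); the constants below are the commonly quoted values for that model and are assumed here, not taken from this paper:

```python
# Sketch of retinal-velocity estimation with a Daly-style eye-movement model.
# Constants are the values usually quoted for that model (assumed here).
def retinal_velocity(image_velocity, g_sp=0.82, v_min=0.15, v_max=80.0):
    """Retinal velocity (deg/s) after smooth-pursuit compensation of image motion."""
    eye_velocity = min(g_sp * image_velocity + v_min, v_max)
    return abs(image_velocity - eye_velocity)

# High retinal velocity -> lower sensitivity to fine detail -> coarser quantization
# can be tolerated for that (motion-unattended) macroblock.
for v in (0.5, 5.0, 40.0, 200.0):
    print(v, round(retinal_velocity(v), 2))
```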

7.
A novel video encryption algorithm for the H.264 standard is proposed, which addresses the compression-ratio fluctuation of selective encryption and the lack of codec generality of entropy-coding-based encryption. The algorithm XORs the key-determined detail and background of a frame into the corresponding positions of the original frame, producing a visually overlapped image to achieve encryption. Experimental results show that the algorithm effectively controls the variation of the compression ratio and can be embedded into H.264 codecs as a general-purpose plug-in, meeting the needs of real-time video communication.
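As a generic illustration of key-driven XOR scrambling of frame content (not the paper's exact detail/background construction), a minimal sketch might be:

```python
import numpy as np

def xor_scramble(frame, key):
    """XOR a frame with a key-derived pseudo-random mask; applying it twice restores the frame."""
    rng = np.random.default_rng(key)
    mask = rng.integers(0, 256, size=frame.shape, dtype=np.uint8)
    return frame ^ mask               # the same call decrypts, since (x ^ m) ^ m == x

frame = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
assert np.array_equal(xor_scramble(xor_scramble(frame, key=42), key=42), frame)
```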

8.
Film is recorded at 24 Hz, which is sufficient to achieve the effect of motion but is well within the flicker sensitivity of the human visual system (HVS) and would therefore result in severe flicker. To avoid this, film projectors project at twice this rate, using a 48-Hz screen refresh rate. While this greatly mitigates flicker, projected film images still exhibit considerable flicker in bright scenes. DLP Cinema projection technology allows images to be displayed at any frame rate, and in practice we have been able to match the 48-Hz refresh rate of film projectors. This paper describes a technique that takes advantage of the fact that the HVS temporal sensitivity curve shows high sensitivity for bright content but much less sensitivity as content dims. This is done using the control versatility of the Digital Micromirror Device (DMD), which allows independent control of every bit. The result is an overall image signal that lies beyond the HVS temporal sensitivity curve, completely removing any visible flicker. This technique gives DLP Cinema projection its characteristic "solid" and "stable" appearance that standard film projection does not provide.

9.
李侃  陈耀武 《计算机工程》2011,37(23):261-263
An embedded high-definition endoscope video processing system based on a field-programmable gate array (FPGA) and dual digital signal processors (DSPs) is designed. The FPGA pre-processes the video data, two DaVinci-HD-based DSPs perform H.264 encoding and decoding in parallel, and a PowerPC processor handles system management, video storage, and network transmission. Test results show that the system processes video at 1080i60 in real time and reaches the high-quality level of H.264 in terms of image quality.

10.
With the aid of the spectrum technique, a new concept called " ??(0, α)‐stabilizability" (0<α≤1) is introduced, for which a necessary and sufficient condition is also proposed via a linear matrix inequality (LMI)-based approach. In particular, ??(0, α)-stabilizability coincides with asymptotic mean square stabilizability when α=1. A more general regional stability called " ??R‐stability" is discussed extensively and some concrete examples are given. As applications, the relationship among ??(0, α;β)-stability, the decay rate of the system state response, and the second-order moment Lyapunov exponent is revealed.

11.
To address the problem that traditional video compression techniques are poorly suited to harsh network environments, a design for an H.264-based remote video surveillance system is proposed. The system combines embedded, networking, and video coding technologies: the video signal captured by the camera in the embedded system is compressed with the H.264 standard and delivered in real time to the remote monitoring client through a built-in Web server, enabling communication between the remote browser and the embedded Web server. Test results show that the system achieves a high video compression ratio, clear images, and good real-time performance.

12.
Traditional video compression methods treat the statistical redundancy among pixels as the only adversary of compression, with perceptual redundancy totally neglected. However, it is well known that no criterion is as telling as the visual quality of an image. To reach higher compression ratios without perceptually degrading the reconstructed signal, the properties of the human visual system (HVS) need to be better exploited. Recent research indicates that the HVS has different sensitivities to different image content, and based on this a novel perceptual video coding method is explored in this paper to achieve better perceptual coding quality while spending fewer bits. A new texture segmentation method exploiting the just-noticeable-distortion (JND) profile is first devised to detect and classify texture regions in video scenes. To effectively remove temporal redundancies while preserving high visual quality, an auto-regressive (AR) model is then applied to synthesize the texture regions, which are combined with the remaining regions encoded by the traditional hybrid coding scheme. To demonstrate its performance, the proposed scheme is integrated into the H.264/AVC video coding system. Experimental results show that, on various sequences with different types of texture regions, the bit rate can be reduced by 15% to 58% while maintaining good perceptual quality.
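The abstract does not specify the JND profile; a common building block of spatial JND models is the Chou-Li luminance-adaptation threshold, sketched below as an assumed example of how a perceptual budget for candidate texture blocks could be computed (the piecewise formula and constants come from that classic model, not from this paper):

```python
import numpy as np

def luminance_adaptation_jnd(bg):
    """Chou-Li style luminance-adaptation visibility threshold for background luminance bg (0-255)."""
    bg = np.asarray(bg, dtype=float)
    low = 17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0
    high = 3.0 / 128.0 * (bg - 127.0) + 3.0
    return np.where(bg <= 127, low, high)

# A block could be labelled "texture" (and synthesized, e.g., with an AR model) when its
# synthesis residual stays within a multiple of its JND budget, so the change is not visible.
print(luminance_adaptation_jnd([0, 64, 127, 200, 255]))
```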

13.
To address the low compression ratio or excessive quality loss in color video compression, an adaptive video compression pre-processing algorithm based on a visual perception model and color difference is proposed. First, since the chromatic contrast sensitivity model indicates that human vision is less sensitive to color differences in regions of high chroma, different compression weights are assigned to regions of different chromaticity. Next, a spatial motion model for motion-predicted frames is built from inter-frame deviation maps, reducing the residuals in high-motion regions. Finally, a dynamic tone-mapping function controls the compression level of the video to preserve its visual quality. Experiments on multiple video sets show that the algorithm improves the compression ratio of standard compression software by 35%, although its performance depends on the specific video content.

14.
With the rapid growth of networked multimedia applications in recent years, users have much higher requirements for multimedia quality of experience (QoE) than before. One representative requirement is image quality. Therefore, image quality assessment, from two-dimensional (2D) to three-dimensional (3D) images, has been receiving much attention. In this paper, an efficient objective image quality assessment metric for block-based discrete cosine transform (DCT) coding is proposed. The metric incorporates properties of the human visual system (HVS) to improve its validity and reliability in evaluating the quality of stereoscopic images. This is achieved by calculating local pixel-based distortions in the frequency domain and combining them with simplified models of local visibility, namely a region-of-interest (ROI) mechanism (visual sensitivity), the contrast sensitivity function (CSF), and the contrast masking effect. The performance of the proposed metric is compared with other state-of-the-art objective image quality assessment metrics. The experimental results demonstrate that the proposed metric is highly consistent with subjective test scores. Moreover, its performance is also confirmed on the popular IRCCyN/IVC database. Therefore, the proposed metric is promising in terms of practical efficiency and reliability for real-life multimedia applications.
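A compact sketch of the block-DCT pooling idea follows; the CSF weight matrix here is a placeholder, and the paper's ROI and contrast-masking models are not reproduced:

```python
import numpy as np
from scipy.fft import dctn

def block_dct_distortion(ref, dist, block=8):
    """Pool CSF-weighted squared DCT-coefficient differences over 8x8 blocks (illustrative weights)."""
    ref, dist = np.asarray(ref, dtype=float), np.asarray(dist, dtype=float)
    u, v = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
    csf_w = 1.0 / (1.0 + 0.25 * (u + v))        # placeholder: low frequencies weigh more
    h, w = (ref.shape[0] // block) * block, (ref.shape[1] // block) * block
    total = 0.0
    for y in range(0, h, block):
        for x in range(0, w, block):
            d = dctn(ref[y:y + block, x:x + block] - dist[y:y + block, x:x + block], norm="ortho")
            total += np.sum(csf_w * d**2)
    return total / (h * w)
```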

15.
杨佳义  陈勇 《计算机应用》2020,40(8):2372-2377
To address the low contrast and poor recognizability of video images captured in low-light environments, an adaptive contrast compensation enhancement algorithm is proposed. First, the average gray level of the low-light video frame is extracted, a mathematical model of human visual contrast-resolution compensation is built from the gray-level differences of the original image, and proportional-integral compensation is applied separately to the three true-color primaries. Then, when the compensation falls below the just-noticeable difference of photopic vision, a compensation threshold is set and photopic vision is linearly compensated up to full bandwidth. Finally, an automatic optimization model for the compensation coefficients is built by combining subjective image quality evaluation with image feature parameters, and the model is embedded into a DirectShow video processing system for adaptive video enhancement. Experimental results show that the compensation system runs in real time, effectively recovers dark-region information, and can be applied to a wide range of scenes.
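A rough sketch of per-channel proportional-integral compensation around the frame mean is given below; the gains, offsets, and overall structure are illustrative assumptions rather than the paper's actual model:

```python
import numpy as np

def pi_contrast_compensation(frame_rgb, state, kp=1.4, ki=0.05, target_mean=128.0):
    """Boost each RGB channel's deviation from its mean with a PI-style gain (illustrative)."""
    out = np.empty_like(frame_rgb, dtype=float)
    for c in range(3):
        ch = frame_rgb[..., c].astype(float)
        error = target_mean - ch.mean()          # brightness shortfall of this channel
        state[c] += ki * error                    # integral term accumulated across frames
        gain = kp + state[c] / 255.0              # proportional + integral contribution
        out[..., c] = ch.mean() + gain * (ch - ch.mean()) + 0.5 * error
    return np.clip(out, 0, 255).astype(np.uint8), state

state = [0.0, 0.0, 0.0]   # per-channel integral state carried from frame to frame
```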

16.

The ever-growing video streaming services require accurate quality assessment, often with no reference to the original media. One primary challenge in developing no-reference (NR) video quality metrics is achieving real-time operation while retaining accuracy. A real-time no-reference video quality assessment (VQA) method is proposed for videos encoded with the H.264/AVC codec. Temporal and spatial features are extracted from the encoded bit-stream and pixel values to train and validate a fully connected neural network. The hand-crafted features and network dynamics are designed to ensure a high correlation with human judgment of quality while minimizing computational complexity. Proof-of-concept experiments are conducted via comparison with: 1) video sequences rated by a full-reference quality metric, and 2) H.264-encoded sequences from the LIVE video dataset that were subjectively evaluated through differential mean opinion scores (DMOS). The performance of the proposed method is verified by correlation measurements with the aforementioned objective and subjective scores. The framework achieves real-time execution while outperforming state-of-the-art full-reference and no-reference video quality assessment methods.
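A minimal sketch of the overall pipeline shape follows, using P.910-style spatial/temporal information as stand-in features and a small fully connected regressor; the paper's hand-crafted bit-stream features and network design are not reproduced here, and the training data below is a placeholder:

```python
import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPRegressor

def si_ti_features(frames):
    """P.910-style spatial information (Sobel std) and temporal information (frame-difference std)."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    si = max(np.hypot(ndimage.sobel(f, 0), ndimage.sobel(f, 1)).std() for f in frames)
    ti = max((frames[i] - frames[i - 1]).std() for i in range(1, len(frames)))
    return [si, ti]

# Train a small fully connected network on (features, DMOS) pairs -- placeholder data.
X = np.random.default_rng(0).random((50, 2))        # illustrative feature vectors
y = np.random.default_rng(1).random(50) * 100.0     # illustrative DMOS labels
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X, y)
print(model.predict(X[:3]))
```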

17.
Adverse weather conditions such as snow, fog or heavy rain greatly reduce the visual quality of outdoor surveillance videos. Video quality enhancement can provide clearer images with more detail, better meeting human perception needs and also improving video analytics performance. Existing work in this area mainly focuses on quality enhancement for high-resolution videos or still images, but few algorithms have been developed for enhancing surveillance videos, which normally have low resolution, high noise and compression artifacts. In addition, under snow or rain, the near-field view is degraded by visible snowflakes or raindrops, while the far-field view is degraded by fog-like obscuration from snow or rain. Very few video quality enhancement algorithms have been developed to handle both problems. In this paper, we propose a novel video quality enhancement algorithm for seeing through snow, fog or heavy rain. Our algorithm not only improves human visual perception for video surveillance, but also reveals more video content for better content analysis. The proposed algorithm handles both near-field and far-field snow/rain effects with a two-step approach: (1) the near-field enhancement algorithm identifies pixels obscured by snow or rain in the near-field view and removes them as snowflakes or raindrops; unlike state-of-the-art methods, this step can detect snowflakes on both foreground objects and background, and applies different methods to fill in the removed regions. (2) The far-field enhancement algorithm restores the image's contrast information, not only revealing more detail in the far-field view but also enhancing overall image quality; in this step, the algorithm adaptively enhances global and local contrast, inspired by the human visual system, and accounts for perceptual sensitivity to noise, compression artifacts, and image texture. Our extensive testing shows that the proposed approach significantly improves the visual quality of surveillance videos by removing snow/fog/rain effects.

18.
To address the difficulty of automatic texture segmentation in existing texture-analysis-and-synthesis video coding methods, a video coding method based on texture synthesis and visual saliency is proposed. A removable texture macroblock is defined as one whose image information is contained in the surrounding macroblocks, so that it can be reconstructed from its neighbors after removal; to preserve the subjective quality of the reconstructed image, visually salient regions are forced not to be marked as removable texture blocks. At the decoder, the removed texture macroblocks are reconstructed by combining global motion compensation with texture optimization, ensuring spatiotemporal texture consistency across consecutive frames. Compared with H.264/AVC, the method reduces the bit rate by 5% to 20% at the same subjective quality; compared with the texture-analysis-and-synthesis coding method of Ndjiki-Nya et al., it not only achieves automatic texture extraction but also more than doubles coding efficiency at low bit rates.

19.
A stereoscopic image quality assessment method based on human depth perception
Stereoscopic image quality assessment is the basis of perception and display, and the foundation of stereoscopic video system design. Since images are ultimately viewed by humans, the key to evaluating image quality is whether the evaluation conforms to the characteristics of the human visual system. By analyzing human visual characteristics such as visual nonlinearity, contrast sensitivity, multi-channel structure, and masking effects, a stereoscopic image quality assessment method consistent with human subjective perception is proposed. The method first applies a 5-level wavelet decomposition to the image, dividing its spatial frequencies into six bands according to the masking characteristics of the visual system and filtering each band to modify the spatial frequencies of the original image; a similarity measure is then computed for each band. The per-band quality scores are weighted and averaged according to contrast sensitivity to obtain the final quality measure. Experimental results show that the method outperforms traditional objective quality metrics and agrees better with human subjective perception, reflecting both image quality and the sense of depth.
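A compact sketch of the band-wise similarity idea over a 5-level wavelet decomposition (six bands) is shown below; the wavelet, similarity measure, and band weights are placeholders, whereas the paper derives its weights from contrast sensitivity:

```python
import numpy as np
import pywt

def bandwise_similarity(ref, dist, wavelet="db2", levels=5, weights=None):
    """Weighted average of per-band similarity over a 5-level wavelet decomposition (6 bands)."""
    cr = pywt.wavedec2(ref, wavelet, level=levels)
    cd = pywt.wavedec2(dist, wavelet, level=levels)
    weights = weights or [0.05, 0.1, 0.15, 0.2, 0.25, 0.25]   # placeholder CSF-like weights
    sims = []
    for br, bd in zip(cr, cd):
        a = np.concatenate([np.ravel(x) for x in (br if isinstance(br, tuple) else (br,))])
        b = np.concatenate([np.ravel(x) for x in (bd if isinstance(bd, tuple) else (bd,))])
        sims.append((2 * np.dot(a, b) + 1e-6) / (np.dot(a, a) + np.dot(b, b) + 1e-6))
    return float(np.dot(weights, sims))
```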

20.
A metric of the 3D image quality of autostereoscopic displays based on optical measurements is proposed. This metric uses each view's luminance contrast, defined as the ratio of the maximum luminance at each viewing position to the total luminance at that position. Conventional metrics for autostereoscopic displays are based on crosstalk, which is defined in terms of "wanted" and "unwanted" light. However, in the case of multi-view autostereoscopic displays, it is difficult to distinguish exactly which light is wanted and which is unwanted. This paper assumes that the wanted light has its maximum luminance at the good stereoscopic viewing position, and the unwanted light has its maximum luminance at the worst pseudo-stereoscopic viewing position. By using the maximum luminance indexed by view number, the proposed method characterizes stereoscopic viewing conditions without relying on the wanted/unwanted light distinction. A 3D image quality metric called "stereo luminance contrast," the average of both eyes' contrast, is proposed. The effectiveness of the proposed metric is confirmed by optical measurement analyses of different types of autostereoscopic displays, such as the two-view, scan-backlight, multi-view, and integral types.
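The proposed quantity reduces to a simple ratio per viewing position; a sketch of that computation from per-view luminance measurements (the array layout and sample values are illustrative) could be:

```python
import numpy as np

def view_luminance_contrast(luminance_by_view):
    """Ratio of the maximum single-view luminance to the total luminance at one viewing position."""
    luminance_by_view = np.asarray(luminance_by_view, dtype=float)
    return luminance_by_view.max() / luminance_by_view.sum()

def stereo_luminance_contrast(left_eye_views, right_eye_views):
    """Average of both eyes' luminance contrast, as the metric proposes."""
    return 0.5 * (view_luminance_contrast(left_eye_views) +
                  view_luminance_contrast(right_eye_views))

# Example: measured luminance of 8 views at the left- and right-eye positions (cd/m^2, illustrative).
print(stereo_luminance_contrast([120, 14, 8, 5, 4, 3, 3, 2], [11, 118, 9, 6, 4, 3, 3, 2]))
```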
