Similar Articles
A total of 19 similar articles were retrieved.
1.
In recent years, a wave of "virtual reality" has swept through the technology sector, bringing audiovisual experiences previously unavailable to users. Panoramic video, one of the main ways of constructing virtual reality content, offers a good user experience and rich human-computer interaction, and is widely used in medicine, entertainment, education, industrial design, and many other fields, changing established production and design practices to some extent. At the same time, users' expectations for the visual experience of panoramic video keep rising, and how to provide a good panoramic-video viewing experience has become a research focus in related fields in recent years. Based on the HTC Vive platform, and taking into account virtual reality performance metrics and users' requirements for panoramic-video viewing, this paper builds a panoramic video display system and uses subjective experiments on this platform to analyze the relationship between the number of faces in the sphere mesh used to render panoramic video and the subjective quality of the video. The experimental results show that, for videos at 720p, 1080p, and 4K resolution, subjective quality is positively correlated with the number of mesh faces when the face count is below 64; once the face count exceeds 64, subjective quality is determined by the video's own parameters. The panoramic video display system built in this paper and the conclusions drawn from it can provide useful guidance for subsequent research and application development on panoramic video.
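A way to see what "number of sphere mesh faces" means in practice: the sketch below builds a UV sphere with a configurable tessellation and equirectangular texture coordinates, the standard way a panoramic frame is wrapped onto a rendering sphere. It only illustrates the quantity varied in the experiment, not the paper's display system; the `uv_sphere` function and its `rings`/`segments` parameters are assumptions made for the example.

```python
# Minimal sketch: build a UV sphere whose triangle count can be varied,
# with equirectangular texture coordinates for mapping a panoramic frame.
# Names and parameters are illustrative, not from the paper.
import math

def uv_sphere(rings: int, segments: int, radius: float = 1.0):
    """Return (vertices, uvs, faces) for a sphere with 2*rings*segments triangles."""
    vertices, uvs, faces = [], [], []
    for r in range(rings + 1):
        theta = math.pi * r / rings             # polar angle, 0..pi
        for s in range(segments + 1):
            phi = 2.0 * math.pi * s / segments  # azimuth, 0..2*pi
            x = radius * math.sin(theta) * math.cos(phi)
            y = radius * math.cos(theta)
            z = radius * math.sin(theta) * math.sin(phi)
            vertices.append((x, y, z))
            # Equirectangular mapping: u follows azimuth, v follows the polar angle.
            uvs.append((s / segments, r / rings))
    stride = segments + 1
    for r in range(rings):
        for s in range(segments):
            a, b = r * stride + s, r * stride + s + 1
            c, d = (r + 1) * stride + s, (r + 1) * stride + s + 1
            faces.append((a, b, c))
            faces.append((b, d, c))
    return vertices, uvs, faces

# Varying the tessellation changes the face count that the experiment relates
# to subjective quality (e.g. a 4x8 rings/segments grid gives 64 triangles).
for rings, segments in [(2, 4), (4, 8), (8, 16), (32, 64)]:
    _, _, faces = uv_sphere(rings, segments)
    print(f"{rings}x{segments} tessellation -> {len(faces)} triangle faces")
```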

2.
China's scientific and technological capabilities have reached an unprecedented level, driving the rapid development of modern information technology; many advanced information transmission and image processing technologies have made qualitative leaps. Against the backdrop of the big-data era, virtual reality technology has been widely adopted and has achieved major breakthroughs in many fields; the introduction of virtual reality video offers viewers an unprecedented visual feast and an interactive viewing experience. Virtual reality video in China currently includes panoramic 3D video, non-panoramic 3D video, partially panoramic 3D video, VR panoramic video, and panoramic 3D interactive video; in terms of the interactive experience, it can be further divided into strongly interactive and weakly interactive video. Built on virtual reality technology and produced for different scenarios, virtual reality video is widely used in film and television, live streaming, variety shows, and other areas. This paper focuses on the production of virtual reality video and its application scenarios.

3.
To meet the demand of airborne displays for integrated video and graphics processing, this paper presents an implementation of an airborne video/graphics fusion display and video recording system based on an SoC embedded processing platform. The hardware platform is built around the SoC: the ARM processor and the video/graphics co-processing unit integrated in the SoC run the graphics generation algorithms and capture the external video, and, together with the SoC's on-chip high-speed memory and display interfaces, a double-buffering and multi-threaded concurrency scheme is used to achieve fused video/graphics display and real-time recording of the external video. The method supports capturing video sources in multiple formats and resolutions while simultaneously generating high-resolution graphics. Experimental results show that with this system the airborne display can capture 1024×768 external video while generating 1920×1080 graphics, achieving a fused frame rate of up to 45 fps, which meets the real-time display requirements of airborne displays.
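The double-buffering and multi-threaded concurrency scheme mentioned above can be illustrated with a small sketch: one thread stands in for external video capture, one for graphics generation, and a compositor writes fused frames into a back buffer and swaps it to the front. This is a Python stand-in for the pattern only, with made-up frame rates and names, not the SoC implementation.

```python
# Minimal sketch of the double-buffer / multi-thread pattern: capture thread,
# graphics thread, and a compositor that blends into the back buffer and swaps.
import threading, time

class DoubleBuffer:
    def __init__(self):
        self.front, self.back = {}, {}
        self.lock = threading.Lock()

    def swap(self):
        with self.lock:
            self.front, self.back = self.back, self.front

buffers = DoubleBuffer()
latest = {"video": None, "graphics": None}
stop = threading.Event()

def capture_video():                      # stand-in for external video capture
    n = 0
    while not stop.is_set():
        latest["video"] = f"video-frame-{n}"
        n += 1
        time.sleep(1 / 60)

def generate_graphics():                  # stand-in for symbology generation
    n = 0
    while not stop.is_set():
        latest["graphics"] = f"graphics-frame-{n}"
        n += 1
        time.sleep(1 / 45)

def compositor():
    while not stop.is_set():
        buffers.back["fused"] = (latest["video"], latest["graphics"])
        buffers.swap()                    # the display scans out buffers.front
        time.sleep(1 / 45)

threads = [threading.Thread(target=f) for f in (capture_video, generate_graphics, compositor)]
for t in threads:
    t.start()
time.sleep(0.2)
stop.set()
for t in threads:
    t.join()
print("last displayed frame:", buffers.front.get("fused"))
```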

4.
To address the problem that current web video comes in a single, undiversified presentation form, this paper combines 3D panoramic display technology and proposes a 3D panoramic display system scheme for web video. It focuses on the panorama stitching technology in the scheme and on 3D panoramic display based on Flash video technology, and gives a concrete example of a 3D panoramic presentation of web video that realistically displays a 3D scene on the Internet. Through Flash script programming, the method organically combines images, video, and audio, providing web video users with an interactive 3D panorama and a better web video experience.

5.
罗传飞  孔德辉  刘翔凯  徐科  杨浩 《电信科学》2017,33(10):185-193
As a new video format, panoramic video has dramatically changed the way users watch video and can improve the smart-home user experience in the video domain. IPTV's strong decoding capability and excellent network support are favorable for promoting panoramic video applications. Considering the characteristics of VR panoramic video, this paper analyzes the technical challenges of ultra-high-resolution panoramic video on the IPTV platform and, based on an FOV viewport-adaptive framework combined with MCTS encoding, multi-resolution and multi-tile video distribution, and viewport-dependent partial tile decoding at the terminal, presents an implementation scheme for an IPTV ultra-high-resolution panoramic video service.
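The viewport-dependent part of such a scheme can be sketched as a simple tile-selection step: given the current viewing direction and field of view, high-resolution tiles are requested for the region the user sees and low-resolution tiles elsewhere. The 6x4 equirectangular tile grid, the margin, and the function below are illustrative assumptions, far simpler than the MCTS/tile handling of an actual IPTV deployment.

```python
# Minimal sketch of viewport-adaptive tile selection over an equirectangular
# tile grid. Grid size and margin are illustrative, not the paper's scheme.
def select_tiles(yaw_deg, pitch_deg, fov_h=100.0, fov_v=80.0,
                 cols=6, rows=4, margin_deg=10.0):
    high, low = [], []
    for row in range(rows):
        for col in range(cols):
            # Tile center in degrees on the equirectangular layout.
            tile_yaw = -180.0 + (col + 0.5) * 360.0 / cols
            tile_pitch = 90.0 - (row + 0.5) * 180.0 / rows
            dyaw = (tile_yaw - yaw_deg + 180.0) % 360.0 - 180.0  # wrap-around
            dpitch = tile_pitch - pitch_deg
            if abs(dyaw) <= fov_h / 2 + margin_deg and abs(dpitch) <= fov_v / 2 + margin_deg:
                high.append((row, col))    # inside (or near) the viewport
            else:
                low.append((row, col))     # outside: fetch at base quality
    return high, low

high, low = select_tiles(yaw_deg=30.0, pitch_deg=0.0)
print("high-res tiles:", high)
print("low-res tiles :", len(low), "tiles fetched at base quality")
```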

6.
This paper briefly introduces the reference model of digital interactive systems, the types, characteristics, and implementation methods of interactive TV, and the service types of interactive TV, in order to explain the basic features of interactive TV. It then explains the main role of each part of the interactive TV network architecture. Finally, it introduces several key technologies of interactive TV, including video server technology, subscriber access network technology, and user interface technology, covering the digital compression, video streaming, and database technologies used in video servers.

7.
As technology advances, the video playback capabilities of mobile phones keep improving; the iPhone's 3.5-inch, 480×320 multi-touch screen gives users a near-ideal visual experience and interaction capability, and mobile video is being used by more and more users. According to an ABI Research report, the number of mobile video users in China was expected to exceed 32 million in 2008.

8.
To address the low interaction matching degree of traditional virtual assembly systems, a 3D interactive virtual assembly system based on AR technology is designed. A digital image fusion unit and a ring projection screen serve as the display-layer hardware; a central control system, an optical tracker, and a motion-sensing controller serve as the interaction-layer hardware; and a graphics workstation, a video switcher, and disks serve as the data-layer hardware, together forming the virtual assembly operating platform. On this platform, SolidWorks is used to build virtual assembly component models to the scale of the physical objects, a "virtual hand" model is constructed and interaction routines are set up, and assembly collision detection is performed on the 3D models, realizing the AR-based virtual assembly system design. Experimental results show that, compared with a traditional virtual assembly system, the designed system improves the interaction matching degree by 13.18%, indicating that the proposed 3D interactive virtual assembly system offers superior assembly operation capability.
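As a rough illustration of the collision-detection step mentioned above, the sketch below performs the kind of axis-aligned bounding-box overlap test commonly used as a first pass in assembly collision checking. The part names and dimensions are hypothetical, and the abstract does not describe the system's actual detection algorithm at this level of detail.

```python
# Minimal sketch of an AABB overlap test, a common first pass for assembly
# collision detection. Parts and dimensions are hypothetical.
from dataclasses import dataclass

@dataclass
class AABB:
    min_corner: tuple  # (x, y, z)
    max_corner: tuple  # (x, y, z)

def boxes_overlap(a: AABB, b: AABB) -> bool:
    # Two boxes collide only if their intervals overlap on every axis.
    return all(a.min_corner[i] <= b.max_corner[i] and b.min_corner[i] <= a.max_corner[i]
               for i in range(3))

# Hypothetical parts being moved by the "virtual hand" during assembly.
shaft = AABB((0.0, 0.0, 0.0), (0.10, 0.02, 0.02))
housing = AABB((0.08, -0.01, -0.01), (0.20, 0.05, 0.05))
print("collision detected:", boxes_overlap(shaft, housing))   # True: boxes overlap on x, y, z
```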

9.
杜丽娜  卓力  李嘉锋 《信号处理》2022,38(9):1831-1842
With continuing advances in 5G mobile communications, high-performance computing, and sensing technologies, panoramic video has attracted increasing attention. Viewed through a head-mounted display, panoramic video can give users a realistic, stereoscopic visual perception far beyond that of conventional flat video, and it has good prospects for development. Providing a good Quality of Experience (QoE) is key for video service providers to attract and retain users and to succeed in a fiercely competitive market. Compared with flat video, panoramic video multiplies the data volume, placing higher demands on video acquisition, encoding, transmission, and storage. How to guarantee user QoE under limited network bandwidth and storage resources has therefore become a research focus shared by industry and academia. This paper surveys QoE assessment for panoramic video. It first analyzes the distortions that may arise at each stage of the panoramic video pipeline, including acquisition, stitching, projection, encoding, transmission, decoding, back-projection, and rendering, and summarizes the factors that influence user QoE, such as human factors, system factors, situational context, and video content characteristics. On this basis, it reviews the progress of panoramic video QoE models from the perspectives of influencing factors and modeling methods, as well as their applications in bitrate adaptation, optimized resource allocation, and rate control. Finally, it introduces representative panoramic video QoE datasets and common performance criteria for QoE models, and discusses the open problems of QoE modeling and directions for future research.

10.
To achieve immersive display of panoramic video, an interactive CAVE-based panoramic video display system was developed, which successfully plays pgr panoramic video files in a CAVE. First, each panoramic frame of the video is mapped onto a virtual sphere. Next, five virtual cameras capture the four sides and the top of the sphere, with camera parameters adjusted so that the resulting images cover the entire sphere and can be stitched seamlessly. Finally, the five images are projected onto the corresponding projection screens of the CAVE system. In addition, input devices such as a mouse and gamepad can be used to adjust the image on each CAVE screen.
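The five-camera arrangement described above can be sketched as five look-at/projection matrices, each with a 90-degree field of view, covering the four sides and the top of the virtual sphere. The matrix construction below is a generic formulation for illustration, not the system's actual rendering code.

```python
# Minimal sketch of the five 90-degree virtual cameras (four sides plus top)
# used to capture the sphere for the CAVE walls. Generic look-at/perspective
# construction, not the system's implementation.
import numpy as np

def look_at(forward, up):
    f = forward / np.linalg.norm(forward)
    r = np.cross(f, up); r /= np.linalg.norm(r)
    u = np.cross(r, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f   # camera sits at the origin
    return view

def perspective(fov_deg=90.0, aspect=1.0, near=0.1, far=100.0):
    t = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0], m[1, 1] = t / aspect, t
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

walls = {
    "front": ([0, 0, -1], [0, 1, 0]),
    "back":  ([0, 0,  1], [0, 1, 0]),
    "left":  ([-1, 0, 0], [0, 1, 0]),
    "right": ([1, 0,  0], [0, 1, 0]),
    "top":   ([0, 1,  0], [0, 0, 1]),   # up vector chosen so it is not parallel to forward
}
proj = perspective()
for name, (forward, up) in walls.items():
    vp = proj @ look_at(np.array(forward, float), np.array(up, float))
    print(f"{name:5s} view-projection matrix shape: {vp.shape}")
```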

11.
This paper presents a spatiotemporal super-resolution method to enhance both the spatial resolution and the frame rate in a hybrid stereo video system. In this system, a scene is captured by two cameras to form two videos: a low-spatial-resolution, high-frame-rate video and a high-spatial-resolution, low-frame-rate video. For the low-spatial-resolution video, the low-resolution frames are spatially super-resolved from the high-resolution video via stereo matching, bilateral overlapped block motion estimation, and adaptive overlapped block motion compensation algorithms, while for the low-frame-rate video, the missing frames are interpolated using the high-resolution frames obtained by fusing disparity compensation and motion-compensated frame rate up-conversion. Experimental results demonstrate that the proposed mixed spatiotemporal super-resolution method contributes more to both subjective and objective quality than pure spatial super-resolution or frame rate up-conversion alone.

12.
Video frame interpolation is a technology that generates high-frame-rate videos from low-frame-rate videos by exploiting the correlation between consecutive frames. Convolutional neural networks (CNNs) currently exhibit outstanding performance in image processing and computer vision, and many CNN-based methods have been proposed for video frame interpolation that estimate either dense motion flows or kernels for moving objects. However, most methods focus on estimating accurate motion. In this study, we exhaustively analyze the advantages of both motion estimation schemes and propose a cascaded system to maximize the advantages of both. The proposed cascaded network consists of three autoencoder networks that perform the initial frame interpolation and its refinement. Quantitative and qualitative evaluations demonstrate that the proposed cascaded structure achieves promising performance compared to existing state-of-the-art methods.
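As a rough structural illustration of the cascaded idea, the sketch below chains small autoencoders: the first predicts an intermediate frame from the two inputs, and later stages refine the estimate. This toy PyTorch network only mirrors the cascade structure; it is not the architecture proposed in the paper.

```python
# Toy cascade of autoencoders for frame interpolation: stage 1 produces an
# initial estimate, later stages predict a refinement residual. Illustrative
# only; not the paper's network.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, in_ch, out_ch=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class CascadedInterpolator(nn.Module):
    def __init__(self, stages=3):
        super().__init__()
        # First stage sees the two input frames; later stages also see the
        # current estimate of the intermediate frame and refine it.
        self.stages = nn.ModuleList(
            [TinyAutoencoder(in_ch=6)] + [TinyAutoencoder(in_ch=9) for _ in range(stages - 1)]
        )

    def forward(self, frame0, frame1):
        estimate = self.stages[0](torch.cat([frame0, frame1], dim=1))
        for stage in self.stages[1:]:
            residual = stage(torch.cat([frame0, frame1, estimate], dim=1))
            estimate = estimate + residual   # coarse-to-fine refinement
        return estimate

model = CascadedInterpolator()
f0, f1 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(model(f0, f1).shape)   # torch.Size([1, 3, 64, 64])
```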

13.
Performance model of interactive video-on-demand systems
An interactive video-on-demand (VoD) system allows users to access video services, such as movies, electronic encyclopedias, interactive games, and educational videos, from video servers on a broadband network. This paper develops a performance evaluation tool for the design of such systems. In particular, a user activity model is developed to describe the usage of system resources, i.e., network bandwidth and video server capacity, by a user interacting with the service. In addition, we allow batching of user requests, and the effect of such batching is captured in a batching model. Our proposed queueing model integrates both the user activity model and the batching model. This model can be used to determine the requirements on network bandwidth and video servers and, hence, the trade-off in communication and storage costs for different system resource configurations.

14.
We introduce a highly scalable video compression system for very low bit-rate videoconferencing and telephony applications at around 10-30 kbit/s. The video codec first performs a motion-compensated three-dimensional (3-D) wavelet (packet) decomposition of a group of video frames, and then encodes the important wavelet coefficients using a new data structure called tri-zerotrees (TRI-ZTR). Together, the proposed video coding framework forms an extension of the original zerotree idea of Shapiro (1992) for still image compression. In addition, we incorporate a high degree of video scalability into the codec by combining the layered/progressive coding strategy with the concept of embedded resolution block coding. With scalable algorithms, only one original compressed video bit stream is generated. Different subsets of the bit stream can then be selected at the decoder to support a multitude of display specifications such as bit rate, quality level, spatial resolution, frame rate, decoding hardware complexity, and end-to-end coding delay. The proposed video codec also allows precise bit rate control at both the encoder and decoder, and this can be achieved independently of the other video scaling parameters. Such a scheme is very useful for both constant and variable bit rate transmission over mobile communication channels, as well as for video distribution over heterogeneous multicast networks. Finally, our simulations demonstrate comparable objective and subjective performance to the ITU-T H.263 video coding standard, while providing both multirate and multiresolution video scalability.

15.
Interactive Broadcasting System for VBR Encoded Videos
Video broadcasting has proved to be an efficient technique for increasing the scalability of a video-on-demand (VoD) system. In this paper, we address the problems of providing interactive functions for VBR-encoded videos in a broadcast VoD system. A traffic smoothing scheme is proposed to support VCR functions when delivering VBR videos over CBR channels with the staggered broadcasting protocol. By introducing a small buffering delay, customers are able to rejoin the broadcasting groups after using the interactive functions. A system model is then developed to determine the optimal parameters such that the system meets the delay requirement and provides the expected quality of service to the customers. The results show that the proposed system framework is very efficient in terms of bandwidth requirement and buffering delay for providing interactive VoD services.
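The role of the small buffering delay can be illustrated with a simplified calculation: over a CBR channel, each frame of the VBR trace must be fully delivered by its playback deadline, so the start-up delay has to cover the worst-case backlog. The sketch below computes that minimum delay for a toy frame-size trace; it illustrates the principle only and is not the paper's traffic smoothing scheme.

```python
# Minimal sketch of the start-up buffering delay idea: over a CBR channel of
# rate R, frame k (played at time delay + (k-1)/fps) must be fully delivered
# by its deadline, so the delay must cover the worst-case backlog of the VBR
# trace. Simplified illustration, not the paper's smoothing scheme.
def min_startup_delay(frame_bits, channel_bps, fps):
    delay = 0.0
    cumulative = 0
    for k, bits in enumerate(frame_bits, start=1):
        cumulative += bits
        needed = cumulative / channel_bps - (k - 1) / fps
        delay = max(delay, needed)
    return delay

# Illustrative VBR trace: a large I frame followed by smaller P/B frames.
trace = [120_000, 30_000, 28_000, 90_000, 25_000, 27_000] * 50   # bits per frame
delay = min_startup_delay(trace, channel_bps=1_500_000, fps=25)
print(f"required start-up buffering delay: {delay:.3f} s")
```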

16.
To optimize scarce network resources and present the highest-quality video, streaming video systems need to adapt to the video content as well as the network conditions. This paper presents ARMOR, a video streaming system that dynamically adjusts repair and media scaling to meet current video and network conditions. In order to adapt effectively, ARMOR, like any dynamic video adaptation system, needs to predict the video quality as perceived by end users over the range of scaling and repair choices. Thus, this paper first proposes a novel video quality metric called distorted playable frame rate that estimates user-perceived quality while accounting for temporal and quality degradations. Comprehensive user studies show that distorted playable frame rate is more accurate than other video quality metrics. Analytic experiments with distorted playable frame rate and the ARMOR optimization algorithm illustrate the predictive power of the metric in a dynamic streaming video system. Lastly, implementation and experiments with a complete, fully functioning ARMOR system show the practicality of the proposed approach.

17.
In this paper, we present an automatic foreground object detection method for videos captured by freely moving cameras. While we focus on extracting a single foreground object of interest throughout a video sequence, our approach does not require any training data or user interaction. Based on SIFT correspondences across video frames, we construct robust SIFT trajectories in terms of the calculated foreground feature point probability. Our foreground feature point probability is able to determine candidate foreground feature points in each frame, without the need for user interaction such as parameter or threshold tuning. Furthermore, we propose a probabilistic consensus foreground object template (CFOT), which is directly applied to the input video for moving object detection via template matching. Our CFOT can be used to detect the foreground object in videos captured by a fast-moving camera, even if the contrast between the foreground and background regions is low. Moreover, our proposed method generalizes to foreground object detection in dynamic backgrounds and is robust to viewpoint changes across video frames. The contribution of this paper is threefold: (1) we provide a robust decision process to detect the foreground object of interest in videos with contrast and viewpoint variations; (2) our proposed method builds longer SIFT trajectories, which is shown to be robust and effective for object detection tasks; and (3) the construction of our CFOT is not sensitive to the initial estimation of the foreground region of interest, while its use can achieve excellent foreground object detection results on real-world video data.
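The SIFT correspondences that the trajectories are built from can be sketched with standard OpenCV calls: detect keypoints in consecutive frames and keep matches that pass Lowe's ratio test. This covers only the matching building block, not the trajectory construction or the CFOT; it assumes OpenCV >= 4.4 (where `cv2.SIFT_create` is available) and a hypothetical input file name.

```python
# Minimal sketch of SIFT correspondence between consecutive frames with
# Lowe's ratio test. Building block only; not the paper's trajectory/CFOT code.
import cv2

def sift_matches(frame_a, frame_b, ratio=0.75):
    """Return (point_in_a, point_in_b) pairs that pass the ratio test."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(frame_a, None)
    kp_b, des_b = sift.detectAndCompute(frame_b, None)
    if des_a is None or des_b is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:
            good.append((kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt))
    return good

cap = cv2.VideoCapture("input.mp4")       # hypothetical input file
ok_prev, prev = cap.read()
ok_curr, curr = cap.read()
if ok_prev and ok_curr:
    gray_prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
    print(f"{len(sift_matches(gray_prev, gray_curr))} correspondences between consecutive frames")
cap.release()
```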

18.
Video traffic over the Internet is increasingly popular and is expected to comprise the largest proportion of the traffic carried by wired and wireless networks. Videos are usually compressed by exploiting spatial and temporal redundancy in order to increase the number of video streams that can be carried simultaneously over a link. Unfortunately, receiving high-quality streaming video over the Internet remains a challenge due to the packet loss encountered on congested wired and wireless links. The problem is more apparent on wireless links, not only because of their limited capacity but also because of drawbacks such as bandwidth limitations and link asymmetry, the situation in which the forward and reverse paths of a transmission have different channel capacities. The wireless hops may therefore become congested, causing many video frames to be dropped. In addition, video compression introduces dependencies among frames and within each frame, so the overall video quality tends to degrade dramatically. The main challenge is to support the growth of video traffic while keeping the perceived quality of the delivered videos high. In this paper, we extend our previous work on improving video traffic over wireless networks by carefully studying the dependencies between video frames and their implications for overall network performance. We propose efficient network and buffer models together with novel algorithms that aim to minimize the cost of the aforementioned losses by selectively discarding frames based on their contribution to picture quality, namely partial and selective partial frame discarding policies that take the dependencies between video frames into account. The performance metrics used to evaluate the proposed algorithms include the rate of non-decodable frames, peak signal-to-noise ratio, frameput, average buffer occupancy, average packet delay, and jitter. Our results are promising and show significant improvements in perceived video quality over the current literature. We also extensively study the effect of the different bit-stream rates produced by the FFMPEG codecs on the aforementioned performance metrics.
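A dependency-aware discarding policy of the general kind described above can be sketched as follows: within a GOP, drop B frames first, then the P frames latest in the prediction chain, and never the I frame, so the surviving frames remain decodable. The GOP pattern, drop budget, and the simplified decodability check below are illustrative; they are not the paper's exact partial or selective-partial policies.

```python
# Minimal sketch of dependency-aware frame discarding over a simple IBBP GOP.
# Simplified illustration, not the paper's exact discarding policies.
def plan_discards(gop, drop_count):
    """gop: list of frame types in display order, e.g. ['I','B','B','P',...]."""
    b_frames = [i for i, t in enumerate(gop) if t == "B"]
    p_frames = [i for i, t in enumerate(gop) if t == "P"]
    # B frames are not referenced, so they are the cheapest to drop; P frames
    # are dropped from the tail of the GOP so earlier P frames stay decodable.
    candidates = b_frames + list(reversed(p_frames))
    return sorted(candidates[:drop_count])

def decodable_frames(gop, dropped):
    """Simplified check: a frame is decodable if its preceding I/P reference chain survived."""
    decodable, chain_alive = [], False
    for i, t in enumerate(gop):
        if i in dropped:
            if t in ("I", "P"):
                chain_alive = False       # later P/B frames lose their reference
            continue
        if t == "I":
            chain_alive = True
            decodable.append(i)
        elif t in ("P", "B") and chain_alive:
            decodable.append(i)
    return decodable

gop = list("IBBPBBPBBPBB")
dropped = set(plan_discards(gop, drop_count=4))
print("dropped frames   :", sorted(dropped))
print("decodable frames :", decodable_frames(gop, dropped))
```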

19.
