Similar documents
 20 similar documents found (search time: 203 ms)
1.
Multimedia content adaptation strategies are becoming increasingly important for effective video streaming over today's heterogeneous networks. Evaluation frameworks for adaptive video therefore play an important role in the design and deployment of adaptive multimedia streaming systems. This paper describes a novel simulation framework for rate-adaptive video transmission using the Scalable Video Coding standard (H.264/SVC). Our approach uses feedback about the available bandwidth to let the video source select the most suitable combination of SVC layers for the transmission of a video sequence. The proposed solution has been integrated into the network simulator NS-2 in order to support realistic network simulations. To demonstrate its usefulness, we perform a simulation study in which a video sequence is transmitted over three network scenarios. The experimental results show that the adaptive SVC scheme implemented in our framework provides an efficient alternative that helps to avoid increased network congestion in resource-constrained networks. Improvements in video quality, in terms of PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index), are also obtained.
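A minimal sketch of the bandwidth-driven layer selection idea described above, assuming the source knows the aggregate bitrate of each candidate layer combination; the function, layer names, and bitrates are illustrative, not the framework's actual code.

# Illustrative sketch: pick the highest-quality SVC layer combination whose
# aggregate bitrate fits the bandwidth reported by the feedback channel.
def select_svc_layers(available_bw_kbps, layer_combinations):
    """layer_combinations: list of (description, bitrate_kbps), ordered from
    lowest to highest quality. Returns the best fitting combination, or the
    lowest one as a fallback."""
    best = layer_combinations[0]
    for combo in layer_combinations:
        _, bitrate = combo
        if bitrate <= available_bw_kbps:
            best = combo
    return best

# Hypothetical ladder: base layer, plus temporal/spatial/quality enhancements.
combos = [("BL", 400), ("BL+T1", 700), ("BL+T1+S1", 1200), ("BL+T1+S1+Q1", 2000)]
print(select_svc_layers(950, combos))   # -> ('BL+T1', 700)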

2.
In recent years, BitTorrent (BT) has been one of the most efficient mechanisms for Peer-to-Peer (P2P) non-real-time content distribution. Many approaches for streaming video over BT have been proposed that modify the piece selection policy to cope with real-time requirements. Most of them preserve the continuity of video playback at the cost of piece diversity on the P2P streaming network. In this paper, we propose a novel urgency-aware BT streaming mechanism, called UR-Aware, which enhances video streaming efficiency over BT by balancing playback continuity against the distribution of rare pieces. Two sets are used for piece downloading, called Uset and Rset: the Uset contains the pieces most urgent for playback, while the Rset contains the rarest pieces. A peer's download bandwidth is dynamically allocated between the two sets based on the number of consecutive pieces buffered beyond the playback position in the Uset. Furthermore, to implicitly preserve the continuity of pieces for future playback, every k consecutive pieces are grouped into a section, which may be selected for insertion into the Rset based on its rareness. Experimental results show that UR-Aware has the best performance among BT streaming approaches in terms of video download time, piece late rate, and network throughput. In addition, the accumulated standard deviation of UR-Aware is the closest to that of native BT among the compared approaches, which further confirms the superiority of UR-Aware for video streaming over BT.
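A rough sketch of the Uset/Rset split described above; the window size, slot budget, and the rule tying the Rset share to the number of consecutive buffered pieces are assumptions, not the authors' exact mechanism.

# Illustrative sketch of urgency-aware piece planning: urgent pieces near the
# playhead (Uset) versus rarest pieces anywhere in the file (Rset).
def plan_downloads(playback_pos, have, rarity, window=16, slots=8):
    """have: set of piece indices already downloaded.
    rarity: dict piece_index -> number of peers holding it (lower = rarer)."""
    # Uset: missing pieces closest to the playback position.
    uset = [p for p in range(playback_pos, playback_pos + window) if p not in have]
    # The longer the run of consecutive pieces already buffered ahead of the
    # playhead, the more download slots can be spent on rare pieces (assumed rule).
    consecutive = 0
    while (playback_pos + consecutive) in have:
        consecutive += 1
    rset_slots = min(slots, consecutive)          # safe to chase rare pieces
    uset_slots = slots - rset_slots               # otherwise protect continuity
    # Rset: rarest missing pieces overall.
    missing = [p for p in rarity if p not in have]
    rset = sorted(missing, key=lambda p: rarity[p])
    return uset[:uset_slots], rset[:rset_slots]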

3.
The popular Internet service YouTube has adopted HyperText Markup Language version 5 (HTML5) by default. With this adoption, YouTube has moved to Dynamic Adaptive Streaming over HTTP (DASH) as its Adaptive BitRate (ABR) video streaming technology. Because rate adaptation in DASH is solely receiver-driven, this work performs a deep analysis of YouTube's particular DASH implementation. First, this article reviews the state of the art on DASH and adaptive streaming technology, as well as related work on YouTube traffic characterization. Second, it describes a new methodology and test-bed for characterizing the traffic and measuring the performance of YouTube's DASH implementation. This methodology and test-bed do not rely on proxies and are able to cope with YouTube traffic redirections. Finally, a set of experimental results is provided over a dataset of 310 YouTube videos. The results present a characterization of YouTube's traffic pattern and a discussion of the allowed download bandwidth, YouTube's consumed bitrate, and video quality. Moreover, the results are cross-validated against the HTTP requests performed by YouTube's video player. The outcomes of this article are applicable in the field of Quality of Service (QoS) and Quality of Experience (QoE) management. This is valuable information for Internet Service Providers (ISPs), because QoS management based on assured download bandwidth can be used to provide a target end-user QoE when the YouTube service is being consumed.

4.
It is now common to watch 3D video on a 3D display. However, it is still difficult to enjoy 3D multimedia content on a mobile device, even though mobile devices with 3D displays have been introduced to the market. The main technological challenges for watching 3D content on mobile devices are generating and streaming the 3D content: generating 3D content requires extra computational resources, and streaming it demands additional network bandwidth for receiving and transmitting the 3D data. To overcome these challenges, we propose ReMA, a novel 3D video streaming system. We devise a novel architecture for the transmitter, receiver, and distribution system to efficiently generate and disseminate 3D video to mobile devices. We implemented ReMA in a real test-bed and conducted a thorough empirical evaluation to assess the feasibility of streaming 3D content to mobile devices. Based on our empirical study, the resulting system shows great promise for streaming 3D video to mobile devices in real time.

5.
The problem of kNN (k Nearest Neighbor) queries has received considerable attention in the database and information retrieval communities. Given a dataset D and a kNN query q, the k nearest neighbor algorithm finds the k data points closest to q. The applications of kNN queries are broad, not only in spatio-temporal databases but also in many other areas; for example, they can be used in multimedia databases, data mining, scientific databases, and video retrieval. Past studies of kNN query processing did not consider the case in which the server receives multiple kNN queries at one time: their algorithms process queries independently, so the server repeatedly re-accesses the database to obtain data it has already acquired. This wastes I/O and degrades the performance of the whole system. In this paper, we focus on this problem and propose an algorithm named COrrelated kNN query Evaluation (COKE). The main idea of COKE is an "information sharing" strategy whereby the server reuses the query results of previously executed queries to efficiently process subsequent queries. We conduct a comprehensive set of experiments to analyze the performance of COKE and compare it with the Best-First Search (BFS) algorithm. Empirical studies indicate that COKE outperforms BFS, achieving lower I/O costs and shorter running time.
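A toy illustration of the I/O-sharing motivation: serving a whole batch of kNN queries with a single pass over the data instead of one scan per query. This shows the problem COKE targets, not COKE itself.

# Illustrative sketch: one shared scan of the dataset answers every query in
# the batch, instead of re-reading the data once per query.
import heapq, math

def batch_knn(database, queries, k):
    """database: iterable of points (read once); queries: list of query points."""
    heaps = [[] for _ in queries]                 # per-query max-heaps of (-dist, point)
    for point in database:                        # single shared scan (shared I/O)
        for heap, q in zip(heaps, queries):
            d = math.dist(point, q)
            if len(heap) < k:
                heapq.heappush(heap, (-d, point))
            elif d < -heap[0][0]:
                heapq.heapreplace(heap, (-d, point))
    return [[p for _, p in sorted(h, key=lambda e: -e[0])] for h in heaps]

data = [(0, 0), (1, 1), (2, 2), (5, 5), (6, 6)]
print(batch_knn(data, queries=[(0, 0), (6, 5)], k=2))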

6.
We provide a constant-time schedulability test for an on-line multiprocessor server handling aperiodic tasks. Dhall's effect is avoided by dividing the tasks into two priority classes based on task utilization: heavy and light. We prove that if the load on the multiprocessor server stays below U_threshold = 3 − √7 ≈ 35.425%, the server can accept an incoming aperiodic task and guarantee that the deadlines of all accepted tasks will be met. The same number, 35.425%, is also the threshold for a task to be characterized as heavy. The bound U_threshold = 3 − √7 ≈ 35.425% is easy to use, but not sharp if the number of processors in the multiprocessor system is known. Assuming the server is equipped with m processors, we calculate a formula for the sharp bound U_threshold(m), which converges to U_threshold from above as m → ∞. The results are based on a utilization function u(x) = 2(1 − x)/(2 + √(2 + 2x)). By using this function, the performance of the multiprocessor server can in some cases be improved beyond U_threshold(m) by paying the extra overhead of monitoring the individual utilization of the current tasks.
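A sketch of how the quoted constant-time bound could be applied on admission; the bookkeeping around it (when the load counter is updated and how the heavy/light flag is used) is an assumption, not the paper's exact procedure.

# Illustrative constant-time admission test built on U_threshold = 3 - sqrt(7).
import math

U_THRESHOLD = 3 - math.sqrt(7)        # ≈ 0.35425

class AperiodicServer:
    def __init__(self):
        self.load = 0.0               # total utilization of accepted tasks

    def admit(self, task_utilization):
        """Accept the task only if the total load stays below 3 - sqrt(7);
        also report whether the task is classified as heavy."""
        heavy = task_utilization > U_THRESHOLD
        if self.load + task_utilization <= U_THRESHOLD:
            self.load += task_utilization
            return True, heavy
        return False, heavy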

7.
Existing media streaming protocols provide bandwidth adaptation features in order to deliver seamless video streams during abrupt bandwidth shortages on the network. For instance, popular HTTP streaming protocols such as HTTP Live Streaming (HLS) and MPEG-DASH are designed to select the most appropriate streaming quality based on client-side bandwidth estimation. Unfortunately, controlling the quality at the client side means that the effectiveness of adaptive streaming is not controlled by service providers, which harms consistency in quality of service. In addition, recent studies show that selecting media quality based on bandwidth estimation may exhibit unstable behavior under certain network conditions. In this paper, we demonstrate that the drawbacks of existing protocols can be overcome with a server-side, buffer-based quality control scheme. Server-side quality control solves the service quality problem by eliminating the need for client assistance, and the buffer-based control scheme eliminates the side effects of bandwidth-based stream selection. We achieve this without client assistance by designing a play-buffer estimation algorithm. We prototyped the proposed scheme in our streaming service testbed, which supports pre-transcoding and live transcoding of the source media file. Our evaluation results show that the proposed quality control performs very well in both simulated and real environments.
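A hedged sketch of server-side, buffer-based quality selection: the server estimates the client's play buffer from the media time it has sent versus elapsed wall-clock time, then steps the bitrate up or down. The thresholds, bitrate ladder, and estimation rule are assumptions rather than the paper's algorithm.

# Illustrative server-side controller; assumes playback starts immediately and
# runs continuously, so elapsed time approximates media consumed by the client.
import time

class ServerSideController:
    def __init__(self, ladder_kbps=(400, 800, 1600, 3200)):
        self.ladder = ladder_kbps
        self.sent_media_seconds = 0.0         # seconds of video pushed so far
        self.start = time.monotonic()
        self.level = 0

    def on_segment_sent(self, segment_duration_s):
        self.sent_media_seconds += segment_duration_s

    def estimated_buffer(self):
        elapsed = time.monotonic() - self.start        # ≈ seconds played back
        return max(0.0, self.sent_media_seconds - elapsed)

    def next_quality(self, low=5.0, high=15.0):
        buf = self.estimated_buffer()
        if buf < low and self.level > 0:
            self.level -= 1                            # buffer draining: step down
        elif buf > high and self.level < len(self.ladder) - 1:
            self.level += 1                            # comfortable buffer: step up
        return self.ladder[self.level]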

8.
In this paper, an innovative framework labeled cooperative cognitive maritime big data systems (CCMBDSs) is developed to provide opportunistic channel access and secure communication at sea. A two-phase frame structure lets secondary users (SUs) fully utilize the transmission opportunities for a portion of time as a reward for cooperating with primary users (PUs). Amplify-and-forward (AF) relaying is exploited at the SU nodes, and a Stackelberg game solved by backward induction is employed to optimally determine the selected SU, its power consumption, and the time portion of cooperation, for both the non-secure and the secure communication scenarios. Specifically, a jammer-based secure communication scheme is developed to maximize the secure utility of the PU against an eavesdropper that can overhear the signals from SU i and the jammer. Closed-form solutions for the best access time portion as well as the powers of SU i and the jammer are derived to realize the Nash equilibrium. Simulation results validate the effectiveness of the proposed strategy.

9.
We advocate the Loop-of-stencil-reduce pattern as a means of simplifying the implementation of data-parallel programs on heterogeneous multi-core platforms. Loop-of-stencil-reduce is general enough to subsume map, reduce, map-reduce, stencil, stencil-reduce and, crucially, their usage in a loop in both data-parallel and streaming applications, or a combination of both. The pattern makes it possible to deploy a single stencil computation kernel on different GPUs. We discuss the implementation of Loop-of-stencil-reduce in FastFlow, a framework for implementing applications based on parallel patterns. Experiments illustrate the use of Loop-of-stencil-reduce in developing data-parallel kernels running on heterogeneous systems.
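A schematic rendering of the Loop-of-stencil-reduce control structure in Python (FastFlow itself is a C++ framework, so this only illustrates the pattern, not its API), using Jacobi iteration as the stencil and a max-change reduction as the loop condition.

def loop_of_stencil_reduce(grid, stencil, reduce_op, halt, max_iters=1000):
    """Repeatedly apply a stencil to every interior cell, reduce the per-cell
    changes, and stop when the reduction satisfies the halting predicate."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(max_iters):
        new = [row[:] for row in grid]
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new[i][j] = stencil(grid, i, j)                  # stencil (map) phase
        delta = reduce_op(abs(new[i][j] - grid[i][j])            # reduce phase
                          for i in range(1, rows - 1)
                          for j in range(1, cols - 1))
        grid = new
        if halt(delta):                                          # loop condition
            break
    return grid

# Example: Jacobi relaxation, halting when the largest per-cell change is small.
jacobi = lambda g, i, j: 0.25 * (g[i-1][j] + g[i+1][j] + g[i][j-1] + g[i][j+1])
grid = [[0.0] * 8 for _ in range(8)]
grid[0] = [100.0] * 8                                            # hot top boundary
result = loop_of_stencil_reduce(grid, jacobi, max, lambda d: d < 1e-3)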

10.
A video streaming proxy server needs to handle hundreds of simultaneous connections between media servers and clients. Internally, every video arriving at the server and delivered from it follows a specific arrival and delivery schedule. Arrival schedules compete for incoming network bandwidth, while delivery schedules compete for outgoing network bandwidth. As a result, a proxy server has to provide sufficient buffer and disk cache for storage, together with memory space, disk space, and disk bandwidth, and must govern the usage of these resources to optimize throughput. In this paper, we first analyze the properties of a traditional smoothing algorithm and a video staging algorithm. Based on the smoothing algorithm, we then develop a video staging algorithm for video streaming proxy servers that allows an arrival schedule to be devised from the delivery schedule. With this arrival and delivery schedule pair, a better resource utilization rate can be achieved, degrading gracefully across different parameter sets. It is also interesting to note that the usage of resources such as network bandwidth, disk bandwidth, and memory space becomes interchangeable, which provides a basis for inter-resource scheduling to further improve the throughput of a video streaming proxy server system.

11.
In the era of data explosion, a high volume of varied data is generated rapidly at every moment, and if it is not processed, the value of its latent information is lost. This is the main current challenge of most enterprises and Internet mega-companies (also known as the big data problem). Big data has three dimensions: Volume, Variety, and Velocity. Velocity refers to high speed both in data arrival rate (e.g., streaming data) and in data processing (i.e., real-time processing). This paper is concerned with the velocity dimension of big data, addressing the real-time processing of streaming big data in detail. For any real-time system, being fast is a necessary condition, although it is not sufficient; other concerns, such as real-time scheduling, must be addressed as well. Fast processing is achieved through parallelism via the proposed deadline-aware dispatching method. For the other prerequisite of real-time processing, i.e., real-time scheduling of tasks, a hybrid clustering multiprocessor real-time scheduling algorithm is proposed in which both partitioned and global real-time scheduling approaches are employed to obtain better schedulability and resource utilization with tolerable overhead. The other components required for real-time processing of streaming big data are also designed and proposed as the real-time streaming big data (RT-SBD) processing engine. A prototype is implemented, experimentally evaluated, and compared with Storm, a well-known real-time streaming big data processing engine. Experimental results show that the proposed RT-SBD significantly outperforms the Storm engine in terms of proportional deadline miss ratio, tuple latency, and system throughput.
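A speculative sketch of deadline-aware dispatching: each tuple goes to a worker whose current backlog still allows completion before the tuple's deadline, falling back to the least-loaded worker. This is one interpretation of the idea, not the RT-SBD dispatcher itself.

def dispatch(tuple_cost, deadline, workers, now):
    """workers: dict name -> queued processing time (seconds of backlog)."""
    feasible = {w: backlog for w, backlog in workers.items()
                if now + backlog + tuple_cost <= deadline}
    pool = feasible if feasible else workers      # fall back if no worker can meet it
    chosen = min(pool, key=pool.get)              # smallest backlog wins
    workers[chosen] += tuple_cost                 # account for the new work
    return chosen

workers = {"w1": 0.40, "w2": 0.10, "w3": 0.75}
print(dispatch(tuple_cost=0.05, deadline=0.30, workers=workers, now=0.0))  # -> "w2"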

12.
A survey on peer-to-peer video streaming systems
Video-over-IP applications have recently attracted a large number of users on the Internet. Traditional client-server video streaming solutions incur an expensive bandwidth provision cost on the server. Peer-to-Peer (P2P) networking is a new paradigm for building distributed network applications. Recently, several P2P streaming systems have been deployed to provide live and on-demand video streaming services on the Internet at low server cost. In this paper, we survey the existing P2P solutions for live and on-demand video streaming. Representative P2P streaming systems, including tree, multi-tree, and mesh based systems, are introduced. We describe the challenges and solutions of providing live and on-demand video streaming in a P2P environment. Open research issues on P2P video streaming are also discussed.

13.
In the field of micromanipulation, in-situ three-axial rotation of a microscale object remains difficult to realize, and rotational resolution and repeatability remain low. In this paper, we describe the fundamental principle, properties, and experimental results of multi-axial, non-contact, in-situ micromanipulation of an egg cell driven by the steady streaming generated around an oscillating cylinder. A continuously oscillating cylinder generates steady streaming that draws an egg cell toward the cylinder. If the cell is trapped by an eddy near the tip of the cylinder, it continuously rotates around the vertical axis at a fixed point; if it is trapped by a swirl flow generated around the side of the cylinder, it rotates around the horizontal axis. We define the Reynolds number Re as a·r_c·ω/ν, where a is half of the oscillation amplitude, r_c is the cylinder radius, ω is the oscillation's angular frequency, and ν is the kinematic viscosity. We demonstrate that the conditions for vertical and horizontal rotation are determined by two dimensionless numbers: Re and a/r_c. In our experiments, we obtained rotational resolutions of 0.05° and 0.11° and maximal angular velocities of 34.8°/s and 188°/s for the vertical and horizontal rotations, respectively. We also developed unique micromanipulation methods using two oscillating pipettes attached to holonomic miniature robots, and successfully manipulated five degrees of freedom (DoF) of the cell (three posture angles and two translational displacements along the X and Y axes) with the steady streaming. The proposed method enables multi-axial, non-contact, in-situ, and compact micromanipulation independent of the electrical, optical, magnetic, shape, and stiffness properties of the objects; moreover, it can be applied in microfluidics, biomedical, and heterogeneous microassembly applications.
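A worked example of the two dimensionless numbers quoted above, Re = a·r_c·ω/ν and a/r_c; the numeric inputs below are made-up illustrative values, not the experimental conditions of the paper.

def steady_streaming_numbers(a, r_c, omega, nu):
    """a: half oscillation amplitude [m], r_c: cylinder radius [m],
    omega: angular frequency [rad/s], nu: kinematic viscosity [m^2/s]."""
    Re = a * r_c * omega / nu
    ratio = a / r_c
    return Re, ratio

# Hypothetical case: 20 µm peak-to-peak amplitude (a = 10 µm), 25 µm radius,
# 1 kHz oscillation in water (nu ≈ 1e-6 m^2/s).
Re, ratio = steady_streaming_numbers(a=10e-6, r_c=25e-6, omega=2 * 3.14159 * 1000, nu=1e-6)
print(f"Re = {Re:.2f}, a/r_c = {ratio:.2f}")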

14.
This paper presents a video segmentation algorithm that uses either a background subtraction (BS) model with a low learning rate (LLR) or a BS model with a high learning rate (HLR), depending on the dynamics of the video scene. These BS models are based on a neural network architecture, the self-organizing map (SOM), and the algorithm is termed the temporal modular self-adaptive SOM (TMSA_SOM). Depending on the type of scenario, TMSA_SOM automatically classifies and processes each video with one of four specialized modules based on an initial sequence analysis. This approach is convenient because, unlike state-of-the-art (SoA) models, the proposed model handles the different situations that may occur in the video scene (severely dynamic background, initial frames with dynamic objects, static background, stationary objects, etc.) with a specialized module. Furthermore, TMSA_SOM automatically identifies whether the scene has drastically changed (e.g., stationary objects of interest become dynamic or drastic illumination changes occur), automatically detects when the scene has become stable again, and uses this information to update the background model quickly. The proposed model was validated on three video databases: Change Detection, BMC, and Wallflower. Findings show very competitive performance on metrics commonly used in the literature to compare SoA models. TMSA_SOM also achieved the best results on two perceptual metrics, Ssim and D-Score, and obtained the best performance on the global quality measure FSD (based on F-Measure, Ssim, and D-Score), demonstrating its robustness in different and complicated uncontrolled scenarios. TMSA_SOM was also compared against SoA neural network approaches, obtaining the best average performance on Re, Pr, and F-Measure.

15.
To meet the emerging demands of high-fidelity video services, a new video coding standard, High Efficiency Video Coding (HEVC), was developed to improve the compression performance of high definition (HD) video, saving half of the bitrate for the same perceptual video quality compared with H.264/Advanced Video Coding (AVC). Rate control still plays a significant role in transmitting HD video data over communication channels. However, the R-lambda model based HEVC rate control algorithm does not take into account the relationship between encoding complexity and the Human Visual System (HVS); moreover, the convergence speed of the Least Mean Square (LMS) algorithm is slow. In this paper, an adaptive gradient information and Broyden-Fletcher-Goldfarb-Shanno (BFGS) based R-lambda model (GBRL) is proposed for inter-frame rate control, where the gradient based on the Sobel operator effectively measures frame-content complexity and the BFGS algorithm converges faster than the LMS algorithm. Experimental results show that the proposed GBRL method achieves lower bitrate error and higher peak signal-to-noise ratio (PSNR), especially for sequences with large motion, compared to state-of-the-art rate control methods. In addition, if an optimal initial quantization parameter (QP) prediction model based on linear regression is incorporated into the proposed GBRL method, the rate control performance can be further improved.
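An illustrative frame-complexity measure based on the Sobel gradient magnitude, the quantity the GBRL model is described as using; how it feeds into the R-lambda parameter update is not shown, and this implementation is an assumption.

# Mean Sobel gradient magnitude as a proxy for frame-content complexity.
import numpy as np

def sobel_complexity(frame):
    """frame: 2-D numpy array of luma samples. Returns mean gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    f = frame.astype(float)
    pad = np.pad(f, 1, mode="edge")
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    # Accumulate the 3x3 weighted sums (correlation form of the Sobel filters).
    for i in range(3):
        for j in range(3):
            window = pad[i:i + f.shape[0], j:j + f.shape[1]]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    return float(np.mean(np.hypot(gx, gy)))

frame = np.random.randint(0, 256, (64, 64))
print(sobel_complexity(frame))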

16.
17.
A community within a graph can be broadly defined as a set of vertices that exhibit high cohesiveness (a relatively high number of edges within the set) and low conductance (a relatively low number of edges leaving the set). Community detection is a fundamental graph processing analytic that can be applied in several application domains, including social networks. In this context, communities are often overlapping, as a person can be involved in more than one community (e.g., friends and family), and evolving, since the structure of the network changes. We address the problem of streaming overlapping community detection, where the goal is to maintain communities in the presence of streaming updates so that they can be updated more efficiently. To this end, we introduce SONIC, a find-and-merge type of community detection algorithm that can efficiently handle streaming updates. SONIC first detects when graph updates yield significant community changes; upon detection, it updates the communities via an incremental merge procedure. The SONIC algorithm incorporates two additional techniques to speed up the incremental merge: min-hashing and inverted indexes. Results show that SONIC can provide high-quality overlapping communities while handling streaming updates several orders of magnitude faster than alternatives that perform from-scratch computation.
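A small sketch of the min-hashing speed-up mentioned above: estimating the Jaccard similarity of two candidate communities from short signatures rather than full set intersections, so a merge decision can be made cheaply. The hash scheme and threshold here are assumptions, not SONIC's.

import random

def minhash_signature(vertices, num_hashes=64, seed=7):
    """Signature of a vertex set under num_hashes random affine hash functions
    (Python's built-in hash is used on vertex ids purely for illustration)."""
    rng = random.Random(seed)
    params = [(rng.randrange(1, 2**31), rng.randrange(2**31)) for _ in range(num_hashes)]
    prime = 2**31 - 1
    return [min((a * hash(v) + b) % prime for v in vertices) for a, b in params]

def estimated_jaccard(sig1, sig2):
    return sum(x == y for x, y in zip(sig1, sig2)) / len(sig1)

c1, c2 = {1, 2, 3, 4, 5, 6}, {4, 5, 6, 7, 8}
s1, s2 = minhash_signature(c1), minhash_signature(c2)
print(estimated_jaccard(s1, s2))   # ≈ true Jaccard 3/8; merge c1 | c2 if above a threshold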

18.
19.
This work proposes the E-Top system for the efficient processing of top-k queries in mobile ad hoc peer-to-peer (M-P2P) networks using economic incentive schemes. In E-Top, brokers facilitate top-k query processing in exchange for a commission. E-Top issues economic rewards to the mobile peers that send relevant data items (i.e., those that contribute to the top-k query result) and penalizes peers otherwise, thereby optimizing the communication traffic. Peers use the payoffs (rewards/penalties) as feedback to re-evaluate the scores of their items for re-ranking purposes. The main contributions of E-Top are three-fold. First, it proposes two economic incentive schemes, ETK and ETK+, in which peers act individually towards top-k query processing. Second, it extends ETK and ETK+ into a peer-group-based economic incentive scheme, ETG. Third, our performance evaluation shows that these schemes are indeed effective in improving the performance of top-k queries in terms of query response time and accuracy, at reasonable communication traffic cost.
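A minimal sketch of payoff-driven re-ranking: a peer adjusts the local score of each item by its reward or penalty and re-sorts. The linear update and learning rate are assumptions, not the ETK/ETK+ formulas.

def apply_payoffs(scores, payoffs, alpha=0.1):
    """scores: dict item -> current local score;
    payoffs: dict item -> reward (positive) or penalty (negative) from the broker."""
    for item, payoff in payoffs.items():
        scores[item] = scores.get(item, 0.0) + alpha * payoff
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

scores = {"a": 0.8, "b": 0.6, "c": 0.5}
print(apply_payoffs(scores, {"b": +3.0, "c": -1.0}))  # "b" moves above "a"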

20.
In secure delegatable computation, computationally weak devices (or clients) wish to outsource their computation and data to an untrusted server in the cloud. While most earlier work considers the general question of how to securely outsource any computation to the cloud server, we focus on concrete and important functionalities and give the first protocol for the pattern matching problem in the cloud. Loosely speaking, this problem considers a text T that is outsourced to the cloud server S by a sender SEN. In a query phase, receivers REC_1, ..., REC_l run an efficient protocol with the server S and the sender SEN in order to learn the positions at which a pattern of length m matches the text (and nothing beyond that). This is called the outsourced pattern matching problem, which is highly motivated in the context of delegatable computing since it offers storage alternatives for massive databases that contain confidential data (e.g., health-related data about patient history). Our constructions are simulation-based secure in the presence of semi-honest and malicious adversaries (in the random oracle model) and limit the communication in the query phase to O(m) bits plus the number of occurrences, which is optimal. In contrast to generic solutions for delegatable computation, our schemes do not rely on fully homomorphic encryption but instead use novel ideas for solving pattern matching, based on a reduction to the subset sum problem. Interestingly, we do not rely on the hardness of the problem, but rather exploit instances that are solvable in polynomial time. A follow-up result demonstrates that the random oracle is essential in order to meet our communication bound.

