Similar Articles
20 similar articles found (search time: 46 ms)
1.
Video processing in software is often characterized by highly fluctuating, content-dependent processing times and a limited tolerance for deadline misses. We present an approach that allows close-to-average-case resource allocation to a single video processing task, based on asynchronous, scalable processing and QoS adaptation. The QoS adaptation balances different QoS parameters that can be tuned based on user-perception experiments: picture quality, deadline misses, and quality changes. We model the balancing problem as a discrete stochastic decision problem and propose two solution strategies, based on a Markov decision process and reinforcement learning, respectively. We enhance both strategies with a compensation for structural (non-stochastic) load fluctuations. Finally, we validate our approach by means of simulation experiments and conclude that both enhanced strategies perform close to the theoretical optimum.

Clemens Wüst received the M.Sc. degree in mathematics with honors from the University of Groningen, The Netherlands. Since then, he has been with Philips Research Laboratories in Eindhoven, The Netherlands, where he has worked mainly on QoS for resource-constrained real-time systems using stochastic optimization techniques. He is currently pursuing a Ph.D. degree at the Technische Universiteit Eindhoven.

Liesbeth Steffens received her M.Sc. from Utrecht University (NL) in 1972. She spent most of her professional life at Philips Research in Eindhoven. She contributed to the design of a real-time distributed operating system, a video-on-demand server, a DVD player, a set-top box, and a QoS-based resource-management framework for streaming video. Her current focus is on the characterization of resource requirements, resource reservation, and system-on-chip infrastructure.

Wim F. J. Verhaegh received the mathematical engineering degree with honors in 1990 from the Technische Universiteit Eindhoven, The Netherlands. Since then, he has been with Philips Research Laboratories in Eindhoven, The Netherlands. From 1990 until 1998, he was a member of the Digital VLSI department, where he worked on high-level synthesis of DSP systems for video applications, with an emphasis on scheduling problems and techniques. Based on this work, he received a Ph.D. degree in 1995 from the Technische Universiteit Eindhoven. Since 1998, he has worked on various optimization aspects of multimedia systems, networks, and applications. On the one hand, this concerns application-level resource management and scheduling for optimizing the quality of service of multimedia systems. On the other hand, it concerns adaptive algorithms and machine learning algorithms for user-interaction issues such as content filtering and automatic playlist generation.

Reinder J. Bril received a B.Sc. and an M.Sc. (both with honors) from the Department of Electrical Engineering of the University of Twente, and a Ph.D. from the Technische Universiteit Eindhoven (TU/e), The Netherlands. He started his professional career at the Delft University of Technology in the Department of Electrical Engineering. From May 1985 until August 2004, he was with Philips, working in both Philips Research and Philips Business Units on various topics, including fault tolerance, formal specifications, and software architecture analysis, in different application domains. For the last five years of that period, he worked at Philips Research Laboratories Eindhoven (PRLE), The Netherlands, in the area of Quality of Service (QoS) for consumer devices, with a focus on dynamic resource management in receivers in broadcast environments (such as digital TV sets and set-top boxes). In September 2004, he moved to the Technische Universiteit Eindhoven (TU/e), Department of Mathematics and Computer Science, System Architecture and Networking (SAN) group, that is, back to the academic world after 19 years in industry.

Christian Hentschel received his Dr.-Ing. (Ph.D.) in 1989 and Dr.-Ing. habil. in 1996 from the University of Technology in Braunschweig, Germany. He worked on digital video signal processing with a focus on quality improvement. In 1995, he joined Philips Research in Briarcliff Manor, USA, where he headed a research project on moiré analysis and suppression for CRT-based displays. In 1997, he moved to Philips Research in Eindhoven, The Netherlands, leading a cluster for programmable video architectures. Later he held a position of Principal Scientist and coordinated a project on scalable media processing with dynamic resource control across different research laboratories. In 2003, he became a full professor at the Brandenburg University of Technology in Cottbus, Germany, where he currently chairs the Department of Media Technology. He is a member of the Technical Committee of the International Conference on Consumer Electronics (IEEE) and a member of the FKTG in Germany.
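The reinforcement-learning strategy this abstract describes can be illustrated with a minimal Q-learning loop for choosing a quality level per frame. This is a hedged sketch under invented assumptions (discretized budget-slack states, three quality levels, a reward that penalizes deadline misses and quality changes), not the authors' implementation:

```python
import random

# Hypothetical sketch: Q-learning for picking a video quality level per frame.
# States: discretized processing-budget slack; actions: quality levels 0..2.
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
N_STATES, N_ACTIONS = 5, 3
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def reward(missed_deadline, quality, prev_quality):
    # Penalize deadline misses and quality changes; reward higher quality,
    # mirroring the three balanced QoS parameters in the abstract.
    return quality - 10.0 * missed_deadline - 1.0 * abs(quality - prev_quality)

def choose_action(state):
    if random.random() < EPS:                  # explore occasionally
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])  # exploit

def update(state, action, r, next_state):
    # Standard one-step Q-learning update.
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (r + GAMMA * best_next - Q[state][action])
```

In a real system the state would also encode the enhancement for structural load fluctuations that the abstract mentions.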

2.
This paper addresses the energy minimization issue when executing real-time applications that have stringent reliability and deadline requirements. To guarantee satisfaction of an application's reliability and deadline requirements, checkpointing, Dynamic Voltage and Frequency Scaling (DVFS), and backward fault recovery techniques are used. We formally prove that, under backward fault recovery, executing an application with a uniform frequency (or with neighboring frequencies if the desired frequency is not available) not only consumes minimal energy but also yields the highest system reliability. Based on this theoretical conclusion, we develop a strategy that utilizes DVFS and checkpointing to execute real-time applications so that the applications' reliability and deadline requirements are guaranteed while the energy consumed in executing them is minimized. The strategy requires at most one frequency change during the execution of an application; hence the overhead caused by frequency switching is small, which makes the strategy particularly useful for processors with a large frequency-switching overhead. We empirically compare the developed execution strategy with recently published work. The experimental results show that, without sacrificing reliability and deadline-satisfaction guarantees, the proposed approach can save up to 12% more energy than other approaches.
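The neighboring-frequencies idea above can be sketched as follows (assumed details; the paper's actual scheme also involves checkpoint placement and reliability analysis): when the ideal uniform frequency f* is not among the discrete levels a processor offers, the work is split between the two adjacent available levels so the total execution time matches running everything at f*.

```python
def split_between_neighbors(f_star, levels, cycles):
    """Split `cycles` of work between the two available frequency levels
    that bracket the desired uniform frequency `f_star`, so that the
    total execution time equals running all cycles at f_star.
    A sketch of the neighboring-frequency idea, not the paper's algorithm."""
    levels = sorted(levels)
    if f_star <= levels[0]:
        return (levels[0], cycles), (levels[0], 0.0)
    if f_star >= levels[-1]:
        return (levels[-1], cycles), (levels[-1], 0.0)
    lo = max(l for l in levels if l <= f_star)
    hi = min(l for l in levels if l >= f_star)
    if lo == hi:
        return (lo, cycles), (hi, 0.0)
    # Solve c_lo + c_hi = cycles and c_lo/lo + c_hi/hi = cycles/f_star.
    t_total = cycles / f_star
    c_hi = (t_total - cycles / lo) / (1.0 / hi - 1.0 / lo)
    c_lo = cycles - c_hi
    return (lo, c_lo), (hi, c_hi)
```

With levels 1.0 and 2.0 GHz and a desired 1.5 GHz, for example, the cycle split preserves the total execution time exactly.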

3.
Grid computing has emerged as a new field, distinguished from conventional distributed computing. It focuses on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation. The Grid serves organizations as a comprehensive and complete system through which maximum utilization of resources is achieved. Load balancing is a process that involves resource management and effective load distribution among the resources, and it is therefore considered very important in Grid systems. For a Grid, a dynamic, distributed load-balancing scheme provides deadline control for tasks. Because of deadline failures, developing, deploying, and executing long-running applications over the Grid remains a challenge, so deadline-failure recovery is an essential factor for Grid computing. In this paper, we propose a dynamic distributed load-balancing technique called "Enhanced GridSim with Load balancing based on Deadline Failure Recovery" (EGDFR) for computational Grids with heterogeneous resources. The proposed algorithm EGDFR is an improved version of the existing EGDC, in which we perform load balancing via a scheduling system that includes a mechanism for recovery from deadline failures of Gridlets. Extensive simulation experiments are conducted to quantify the performance of the proposed load-balancing strategy on the GridSim platform. Experiments show that the proposed system can considerably improve Grid performance in terms of total execution time, percentage gain in execution time, average response time, resubmitted time, and throughput. The proposed load-balancing technique performs 7% better than EGDC for a constant number of resources and 11% better than EGDC for a constant number of Gridlets.
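A loose sketch of the deadline-failure recovery step (details invented for illustration; EGDFR's actual scheduling system is richer): tasks predicted to miss their deadline on their current resource are resubmitted to the least-loaded resource.

```python
def recover_deadline_failures(gridlets, resource_load, now):
    """For each gridlet, predict its finish time on its current resource;
    if the deadline would be missed, resubmit it to the least-loaded
    resource. `gridlets` is a list of dicts with 'id', 'remaining'
    (time units), 'deadline', and 'resource'; `resource_load` maps
    resource name to a load fraction. A hypothetical sketch, not EGDFR."""
    least_loaded = min(resource_load, key=lambda r: resource_load[r])
    resubmitted = []
    for g in gridlets:
        predicted_finish = now + g["remaining"]
        if predicted_finish > g["deadline"]:
            g["resource"] = least_loaded   # resubmit elsewhere
            resubmitted.append(g["id"])
    return resubmitted
```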

4.
Two important components of a global scheduling algorithm are its transfer policy and its location policy. While the transfer policy determines whether a task should be transferred, the location policy determines where it should be transferred. Many global scheduling algorithms have been proposed to schedule tasks with deadline constraints. These algorithms try to transfer tasks only when tasks' deadlines cannot be met locally or the local load is high (i.e., they take only corrective measures). However, a scheduling algorithm that takes preventive measures in addition to corrective measures can reduce potential deadline misses substantially. In this paper we present: (a) a load index that characterizes the system state and is more conducive to preventive and corrective measures; and (b) a new transfer policy that takes preventive measures, by making anticipatory task transfers, in addition to corrective measures. The proposed transfer policy adapts better to the workload by exploiting the more accurate system state made available by the proposed load index. An algorithm using the new transfer policy and load index is shown to reduce the number of deadline misses significantly compared to algorithms taking only corrective measures.
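The corrective-versus-preventive distinction can be sketched as a small decision function (thresholds and the load-index definition are invented for the example; the paper's load index and policy are more elaborate):

```python
def transfer_policy(local_load, slack, high_load=0.8, warn_load=0.6, low_slack=0.1):
    """Decide whether to transfer a task.
    Corrective: the deadline cannot be met locally (no slack) or the
    local load is already high. Preventive: the deadline is still met
    locally, but the load index suggests future misses, so transfer
    in anticipation. Returns 'corrective', 'preventive', or None."""
    if slack <= 0 or local_load >= high_load:
        return "corrective"
    if local_load >= warn_load and slack < low_slack:
        return "preventive"
    return None
```

The point of the preventive branch is to act before any deadline is actually in jeopardy, which is what the anticipatory transfers in the abstract achieve.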

5.
Tasks in a real-time control application are usually periodic, and they have deadline constraints by which each instance of a task is expected to complete its computation, even in the adverse circumstances caused by component failures. Techniques to recover from processor failures often involve a reconfiguration in which all tasks are assigned to fault-free processors. This reconfiguration may result in processor overload, where it is no longer possible to meet the deadlines of all tasks. In this paper, we discuss an overload management technique that discards selected task instances in such a way that the performance of the control loops in the system remains satisfactory even after a failure. The technique is based on the rationale that real-time control applications can tolerate occasional misses of control-law updates, especially if the control law is modified to account for these missed updates. The paper devises a scheduling policy that deterministically guarantees when and where the misses will occur. The paper also proposes a methodology for modifying the control law to minimize the deterioration of the control system's behavior as a result of these missed control-law updates.
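One simple way to make skipped instances deterministic, used here purely as an illustration (a skip-factor model borrowed from skip-over scheduling, not necessarily the authors' policy), is to drop exactly one instance out of every s consecutive instances at a fixed position:

```python
def skip_pattern(s, n):
    """For a task that tolerates skipping 1 out of every s consecutive
    instances, return which of the first n instances execute (True)
    versus being deterministically discarded (False). Because the
    pattern is fixed, the controller knows in advance exactly which
    control-law updates will be missed and can compensate for them."""
    return [i % s != s - 1 for i in range(n)]
```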

6.
This paper presents a family of real-time executives, designed by Telettra, for telecommunication, telecontrol, process control, and supervisory applications. These applications are subject to two different types of real-time requirements: deadlines and throughput. Rather than designing a single executive capable of meeting both requirements, a two-level, hierarchical approach has been taken. Two executives coexist on a single processor: a low-level, periodic executive for tasks with strict deadline constraints, and a higher-level, multitask executive for tasks with throughput constraints, or tasks with weaker deadline constraints that can be specified and dealt with at the application level with little support from the executive. Thanks to the hierarchical approach, simple mechanisms are sufficient to support communication between the two levels. Facilities are also provided to support load control in the deadline-oriented environment, according to policies defined by the multitask-level application. The presence of a multitask environment on all computational nodes is a key characteristic, since it allows a highly modular style of programming and facilitates the construction of distributed systems. The paper shows how these ideas are applied in the design of the peripheral processor of a telephone switching system.

7.
Recently, a growing number of scientific applications have been migrated into the cloud. To deal with the problems this brings, more and more researchers are considering multiple optimization goals in workflow scheduling. However, previous works ignore some details that are challenging but essential. Most existing multi-objective workflow scheduling algorithms overlook weight selection, which may result in quality degradation of the solutions. Besides, we find that the well-known partial critical path (PCP) strategy, widely used to meet deadline constraints, cannot accurately reflect the situation at each time step. Workflow scheduling is an NP-hard problem, so self-optimizing algorithms are well suited to solving it. In this paper, the aim is to solve a workflow scheduling problem with a deadline constraint. We design a deadline-constrained scientific workflow scheduling algorithm based on multi-objective reinforcement learning (RL), called DCMORL. DCMORL uses the Chebyshev scalarization function to scalarize its Q-values; this method is well suited to choosing weights for objectives. We also propose an improved version of the PCP strategy called MPCP, whose sub-deadlines are updated regularly during the scheduling phase so that they accurately reflect the situation at each time step. The optimization objectives in this paper are minimizing execution cost and energy consumption within a given deadline. Finally, we use four scientific workflows to compare DCMORL with several representative scheduling algorithms. The results indicate that DCMORL outperforms them. To the best of our knowledge, this is the first time RL has been applied to a deadline-constrained workflow scheduling problem.
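The Chebyshev scalarization mentioned above turns a vector of per-objective Q-values into a single score: the largest weighted deviation from a utopian reference point. A minimal sketch (for minimization objectives; the choice of utopia point and weights here is illustrative, not DCMORL's exact procedure):

```python
def chebyshev_scalarize(q_values, weights, utopia):
    """Chebyshev scalarization: the scalar score of a Q-value vector is
    the largest weighted deviation from a utopian reference point z.
    For minimization objectives, smaller scores are better."""
    return max(w * abs(q - z) for q, w, z in zip(q_values, weights, utopia))

def pick_action(q_table, weights, utopia):
    # Choose the action whose Q-vector has the smallest Chebyshev score.
    return min(q_table, key=lambda a: chebyshev_scalarize(q_table[a], weights, utopia))
```

Unlike a weighted sum, the max operator lets the scalarization reach solutions on non-convex parts of the Pareto front, which is why it is attractive for weight selection in multi-objective RL.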

8.
Cost optimization for workflow applications described by a Directed Acyclic Graph (DAG) with deadline constraints is a fundamental and intractable problem on Grids. In this paper, an effective and efficient heuristic called DET (Deadline Early Tree) is proposed. An early feasible schedule for a workflow application is defined as an Early Tree. According to the Early Tree, all tasks are grouped and the critical path is given. For critical activities, the optimal-cost solution under the deadline constraint can be obtained by a dynamic programming strategy, and the whole deadline is segmented into time windows according to the slack-time float. For non-critical activities, an iterative procedure is proposed to maximize time windows while maintaining the precedence constraints among activities. Given the time-window allocations, a local optimization method is developed to minimize execution costs. The two local cost-optimization methods lead to a global near-optimal solution. Experimental results show that DET outperforms two other recent leveling algorithms. Moreover, the deadline-division strategy adopted by DET can be applied to all feasible deadlines.
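The dynamic-programming step for critical activities can be sketched as a classic time-budgeted cost minimization (a generic textbook formulation with invented task options, not DET's exact recurrence): each critical task offers several (duration, cost) service options, and we pick one option per task to minimize total cost without exceeding the deadline.

```python
def min_cost_schedule(tasks, deadline):
    """Minimize total cost of a chain of critical tasks under a deadline.
    `tasks` is a list of option lists, each option a (duration, cost)
    pair with small integer durations. dp maps elapsed time -> minimal
    cost of the tasks processed so far. Returns the minimal total cost,
    or None if no selection fits within the deadline."""
    INF = float("inf")
    dp = {0: 0.0}
    for options in tasks:
        nxt = {}
        for t, c in dp.items():
            for dur, cost in options:
                nt = t + dur
                if nt <= deadline and c + cost < nxt.get(nt, INF):
                    nxt[nt] = c + cost
        dp = nxt
    return min(dp.values()) if dp else None
```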

9.
Research on a Clustering Search Method in Collaborative Filtering Recommender Systems
Nearest-neighbor computation is a key step in collaborative filtering that directly affects both the runtime efficiency and the recommendation accuracy of a recommender system. Once the numbers of users and items reach a certain scale, the scalability of the recommender system drops noticeably. Clustering can compensate for this deficiency to some extent, but it in turn degrades recommendation accuracy. This paper proposes a user-clustering search algorithm that combines the inverted index from the information retrieval field with a "membership strategy", shortening the nearest-neighbor computation time. Experimental results show that, while preserving recommendation accuracy, the method effectively improves the scalability of collaborative filtering recommender systems.
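The inverted-index idea can be sketched as follows (a generic illustration of candidate pruning, not the paper's exact "membership strategy"): map each item to the users who rated it, so that nearest-neighbor search only needs to score users who share at least one item with the target user.

```python
from collections import defaultdict

def build_inverted_index(user_ratings):
    """Map each item to the set of users who rated it, as in an
    information-retrieval inverted index."""
    index = defaultdict(set)
    for user, items in user_ratings.items():
        for item in items:
            index[item].add(user)
    return index

def candidate_neighbors(user, user_ratings, index):
    """Only users sharing at least one rated item with `user` can have
    nonzero similarity, so the index prunes the search space before any
    similarity computation."""
    cands = set()
    for item in user_ratings[user]:
        cands |= index[item]
    cands.discard(user)
    return cands
```

For sparse rating data this typically shrinks the neighbor search from all users to a small candidate set, which is where the scalability gain comes from.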

10.
Layered Multicast of Multiple Layer-Encoded Video Sources over Multiple Shared Channels in Heterogeneous Environments
Video multicast is an important component of many current and future network services, such as video conferencing, distance learning, remote presentation, and video on demand. As network transport infrastructure improves and end-system processing power increases, multicast video applications are becoming increasingly feasible. The main problem in multicast video delivery is the heterogeneity and dynamics of network transport resources, which make it extremely difficult for all receivers of a video stream to obtain acceptable traffic characteristics. An effective current solution is adaptive layered video transmission, in which each source produces a layered media stream transmitted over multiple network channels. In many-to-many multicast applications such as video conferencing, the channels are typically shared by all potential senders, and any sender may transmit any of its video layers on any shared channel. Under this many-to-many, shared-channel, layered video multicast model, a key question is how to dynamically determine the mapping of each source's layers onto the shared multicast channels; the mapping policy directly affects the session's overall received video quality and network bandwidth utilization. The typical approach is sequential mapping, which treats all senders equally, but as the number of sources grows it suffers bandwidth scalability problems on the shared channels and cannot adapt to dynamic changes in network resources and session state. This paper therefore designs an adaptive layer-mapping algorithm based on receiver feedback: receivers periodically report the senders they are currently interested in and their receiving rates to a control node, which uses the feedback to dynamically adjust the mapping. The algorithm is shown to consistently achieve higher overall received video quality than sequential layer mapping, with high bandwidth utilization and very low complexity.

11.
Apan Qasem, Josh Magee. Software, 2013, 43(6): 705-729
Translation Lookaside Buffers (TLBs) can play a critical role in improving the performance of emerging parallel workloads. Most current chip multiprocessor systems include multilevel TLBs and provide support for superpages at both the hardware and software level. Judicious use of superpages can significantly cut down the number of TLB misses and improve overall system performance. However, indiscriminate superpage allocation results in page fragmentation and increased application footprint, which often outweigh the benefits of reduced TLB misses. Previous research has explored policies for smart allocation of superpages from an operating system perspective. This paper presents a compiler-based strategy for automatic and profitable memory allocation via superpages. A significant advantage of a compiler-based approach is the availability of data-reuse information within an application. Our strategy employs data-locality analysis to estimate the TLB demands of both single-threaded and multi-threaded programs and uses this metric to apply selective superpage allocation. Apart from its obvious utility in improving TLB performance, this strategy can be used to improve the effectiveness of certain data-layout transformations and can be a useful tool in benchmarking and automatic performance tuning. We demonstrate the effectiveness of this strategy with experiments on three multicore platforms on a workload that contains both sequential and parallel applications. Copyright © 2012 John Wiley & Sons, Ltd.

12.
The admission control problem can be modelled as a Markov decision process (MDP) under the average cost criterion and formulated as a linear programming (LP) problem. The LP formulation is attractive in present and future communication networks, which support an increasing number of classes of service, since it can be used to explicitly control class-level requirements, such as class blocking probabilities. On the other hand, the LP formulation suffers from scalability problems as the number C of classes increases. This article proposes a new LP formulation which, even though it introduces no approximation, is much more scalable: the problem-size reduction with respect to the standard LP formulation is O((C+1)^2 / 2^C). Theoretical and numerical simulation results prove the effectiveness of the proposed approach.
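The standard LP formulation of an average-cost MDP can be sketched by constructing the occupancy-measure program (a generic textbook construction over a tiny state space, not the article's reduced formulation): minimize the expected cost over state-action frequencies x(s, a), subject to flow-balance and normalization constraints.

```python
def build_lp(n_states, n_actions, P, c):
    """Construct the average-cost MDP linear program in occupancy-measure
    form: minimize sum_{s,a} c[s][a] * x[s][a] subject to
      sum_a x[s',a] - sum_{s,a} P[s][a][s'] * x[s][a] = 0   for each s'
      sum_{s,a} x[s][a] = 1,  x >= 0.
    Returns (objective, A_eq, b_eq) as plain Python lists, ready to hand
    to any LP solver. A sketch of the standard formulation only."""
    var = lambda s, a: s * n_actions + a   # flatten (s, a) -> column index
    obj = [c[s][a] for s in range(n_states) for a in range(n_actions)]
    A_eq, b_eq = [], []
    for sp in range(n_states):             # flow balance for each state s'
        row = [0.0] * (n_states * n_actions)
        for a in range(n_actions):
            row[var(sp, a)] += 1.0
        for s in range(n_states):
            for a in range(n_actions):
                row[var(s, a)] -= P[s][a][sp]
        A_eq.append(row)
        b_eq.append(0.0)
    A_eq.append([1.0] * (n_states * n_actions))   # normalization
    b_eq.append(1.0)
    return obj, A_eq, b_eq
```

The scalability problem is visible here: the number of flow-balance rows grows with the state space, which is exponential in the number of classes; the article's contribution is a formulation that shrinks this program.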

13.
Real-time media streams with (m,k)-firm deadlines are increasingly common and attractive, as the (m,k)-firm model offers an alternative form of quality of service (QoS). To meet these ever-increasing demands, the problem of scalability should be considered under limited resources. Existing solutions ignore this problem because they maintain a separate queue and per-stream state information for each stream. In this paper, we extend the work of [Ji Ming Chen, Zhi Wang, Ye Qiong Song, and You Xian Sun. A scalable approach for streams with (m,k)-firm deadline. In Proceedings of the IEEE Workshop on Quality of Service for Application Servers (October 2004)] and propose loss-rate-balance (static) and stream-number-balance (dynamic) Class Selection Algorithms (CSA) for real-time media streams with (m,k)-firm deadlines, which are scalable while offering dynamic performance close to that of existing solutions. The effectiveness of the static CSA (S-CSA), which captures the trade-off between scalability and QoS granularity, is evaluated through extensive simulation studies. The S-CSA is also shown to be as effective as the dynamic one at guaranteeing QoS in terms of dynamic failure rate.
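For readers unfamiliar with the model, the basic (m,k)-firm condition can be checked in a few lines (this sketches only the guarantee itself, not the class-selection algorithms the paper proposes):

```python
def mk_firm_ok(history, m, k):
    """An (m, k)-firm stream requires at least m met deadlines in any
    window of k consecutive instances; a window with fewer than m hits
    is a dynamic failure. `history` lists recent outcomes, True for a
    met deadline, most recent last. Windows shorter than k are treated
    as not yet failing."""
    window = history[-k:]
    if len(window) < k:
        return True
    return sum(window) >= m
```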

14.
The demand for real-time e-commerce data services has been increasing recently. In many e-commerce applications, it is essential to process user requests within their deadlines, i.e., before the market status changes, using fresh data reflecting the current market status. However, current data services are poor at processing user requests in a timely manner using fresh data. To address this problem, we present a differentiated real-time data service framework for e-commerce applications. User requests are classified into several service classes according to their importance, and they receive differentiated real-time performance guarantees in terms of deadline miss ratio. At the same time, a certain data freshness is guaranteed for all transactions that commit within their deadlines. A feedback-based approach is applied to differentiate the deadline miss ratio among service classes. Admission control and adaptable update schemes are applied to manage potential overload. A simulation study, which reflects the e-commerce data semantics, shows that our approach can achieve a significant performance improvement compared to baseline approaches. Our approach can support the specified per-class deadline miss ratios while maintaining the required data freshness even in the presence of unpredictable workloads and data access patterns, whereas baseline approaches fail.

15.
Recently, using Scalable Video Coding (SVC) in Dynamic Adaptive Streaming over HTTP (DASH) has attracted more and more attention. In this paper, we present a Quality-of-Experience (QoE) driven Cross-Layer Scheme (QCLS) for DASH-based scalable video transmission over LTE. Specifically, assuming priority-based extraction is exploited for bitstream adaptation, we first propose a new continuous Rate-Distortion (RD) model for scalable video streams. Then, to guarantee continuous playback, a two-level rate adaptation algorithm is presented: at the first level, a novel throughput-based algorithm dynamically selects the segment bitrate on the DASH client side; at the second level, rate adaptation is applied through a suitable packet scheduling strategy at the base station. Packets of a segment with lower priority that remain in the packet queues after their playback deadline has been missed are considered beyond the actual transmission capacity and are discarded by the second-level rate adaptation. Furthermore, to make reasonable use of the wireless resources in an LTE (Long-Term Evolution) system, we formulate a cross-layer optimization problem that maximizes the total weighted received quality of the currently transmitted segments over all clients. In view of the high complexity of obtaining the optimal solution, we develop a suboptimal approach that determines a locally optimal transmission strategy for resource allocation along with the corresponding Modulation and Coding Scheme; the transmission rate of each client can then be obtained. Simulation results show that our proposed cross-layer scheme provides better performance than existing ones for DASH-based scalable video transmission over LTE.
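The first-level, client-side step can be illustrated with a generic throughput-based bitrate selection heuristic (the safety margin and the rule itself are assumptions for illustration; the paper's algorithm has more machinery):

```python
def select_bitrate(throughput_kbps, available_kbps, safety=0.9):
    """Pick the highest available segment bitrate no greater than a
    safety margin of the measured throughput, falling back to the
    lowest bitrate when even that does not fit. A generic DASH
    rate-adaptation sketch, not the paper's QCLS algorithm."""
    budget = safety * throughput_kbps
    feasible = [b for b in sorted(available_kbps) if b <= budget]
    return feasible[-1] if feasible else min(available_kbps)
```

The safety factor below 1.0 keeps the playback buffer from draining when throughput estimates are optimistic.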

16.
This survey addresses a number of challenges and research areas identified in real-time image processing for state-of-the-art hand-held device implementations in networked electronic media. The challenges arise when processing algorithms must be developed and mapped not only onto fading, noisy, multi-path, band-limited transmission channels, but more specifically onto the limited resources available for decoding and scalable rendering on battery-limited hand-held mobile devices. Networked electronic media requires scalable video coding, which in turn introduces additional degradation. These problems raise complex issues discussed in the paper. The need to extend, modify, and even create new algorithms and tools, targeting architectures, technology platforms, and design techniques, as well as scalability, computational load, and energy efficiency considerations, has established itself as a key research area. A multidisciplinary approach is advocated.

17.
Kubernetes is currently the mainstream container-cloud orchestration and management system. Its built-in scaling policy works by monitoring measurement metrics and comparing them against thresholds to trigger scaling. This policy suffers mainly from two problems: a single measurement metric and response delay. A single metric has obvious shortcomings when measuring complex applications that consume multiple kinds of resources, and the response delay means that the application's quality of service cannot be guaranteed for a period of time. To address these problems, this paper proposes an improved Kuber...

18.
Real-time applications, when mapped to distributed-memory multiprocessors, produce periodic messages with associated deadlines and priorities. Real-time messages may have hard or soft deadlines. Real-time extensions to wormhole routing (WR) with multiple virtual channels (VCs), priority-based physical link arbitration, and VC allocation have been proposed in the literature. With a fixed number of VCs per link, a message can face unbounded priority inversion, rendering the global priority ineffective. In this paper, we propose a new flow-control mechanism, called Preemptive Pipelined Circuit Switching for Real-Time messages (PPCS-RT), to reduce the priority-inversion problem. For the proposed model, with some architectural support, we present an off-line approach to compute delivery guarantees for hard-deadline real-time messages. We also compare real-time WR and PPCS-RT in terms of performance with soft-deadline traffic. At high traffic loads, the overall miss-ratio percentage is over 30 percent higher for WR than for PPCS-RT with one VC per link. Finally, we compare the architectural complexity of a PPCS-RT router with that of other real-time routers.

19.
Chip Multiprocessors (CMPs) have different technological parameters and physical constraints than earlier multi-processor systems, which should be taken into consideration when designing cache coherence protocols. Also, contemporary cache coherence protocols use invalidate schemes that are known to generate a high number of coherence misses. This is especially true under producer-consumer sharing patterns, which can become a performance bottleneck as the number of cores increases. This paper presents two mechanisms for designing efficient and scalable cache coherence protocols for CMPs. First, we propose an adaptive hybrid protocol to reduce the coherence misses observed in write-invalidate-based protocols. The proposed protocol is based on a write-invalidate scheme; however, it can adaptively push updates to potential consumers based on observed producer-consumer sharing patterns. Second, we extend this adaptive protocol with an interconnection-resource-aware mechanism. Experimental evaluations, conducted on a tiled CMP via full-system simulation, were used to assess the performance of our proposed dynamic hybrid protocols. Performance analysis is presented on a set of scientific applications from the SPLASH-2 and NAS parallel benchmark suites. Results showed that the proposed mechanisms reduce cache-to-cache sharing misses by up to 48% and speed up application performance by up to 34%. In addition, the proposed interconnection-resource-aware mechanism is shown to perform well under varying interconnection utilizations.

20.
In a distributed system, broadcasting is an efficient way to disseminate data in certain highly dynamic environments. While there are several well-known online broadcast scheduling strategies that minimize wait time, there has been little research on on-demand broadcasting with timing constraints. One application that could benefit from a strategy for on-demand broadcast with timing constraints is a real-time database system. Scheduling strategies are needed in real-time databases to identify which data item to broadcast next in order to minimize missed deadlines. The scheduling decisions required in a real-time broadcast system allow the system to be modeled as a Markov Decision Process (MDP). In this paper, we analyze the MDP model and determine that finding an optimal solution is a hard problem in PSPACE. We propose a scheduling approach called Aggregated Critical Requests (ACR), based on the MDP formulation, and present two algorithms based on this approach. ACR is designed for timely delivery of data to clients, maximizing reward by minimizing missed deadlines. Results from trace-driven experiments indicate that ACR provides a flexible strategy that can outperform existing strategies under a variety of factors.
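A deadline-aware on-demand broadcast step can be sketched as follows (a rough illustration of the aggregation idea, not the exact ACR algorithms): among the pending requests, broadcast next the item that would satisfy the most requests still able to meet their deadlines.

```python
def next_broadcast(pending, now, broadcast_time):
    """Choose the next item to broadcast. `pending` maps item -> list of
    request deadlines; broadcasting takes `broadcast_time` units, so a
    request is satisfied only if its deadline is no earlier than the
    broadcast completion time. Pick the item satisfying the most
    requests, since one broadcast serves all of its waiting clients."""
    done = now + broadcast_time
    def satisfiable(deadlines):
        return sum(1 for d in deadlines if d >= done)
    return max(pending, key=lambda item: satisfiable(pending[item]))
```

This greedy rule captures why broadcast scheduling differs from unicast: aggregating requests for the same item changes which choice minimizes missed deadlines, which is what the MDP formulation optimizes over time.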

