1.
With the increased penetration of real-time systems into our surroundings, selecting an efficient schedulability test for fixed-priority systems from the plethora of existing results has become a matter of primary interest to real-time system designers. The need for faster schedulability tests is even more pronounced in online systems, where processor time is a scarce resource and it is of central importance to devote the processor to executing tasks rather than to determining system schedulability. For fixed-priority nonpreemptive real-time systems, current exact schedulability tests fall into two classes: response-time-based tests and scheduling-points tests. To the best of our knowledge, no comparative study of these tests has been presented to date. The aim of this work is to assist system designers in selecting a suitable technique from the existing literature by laying out the pros and cons associated with these tests. We examine the mechanisms behind the feasibility tests both theoretically and experimentally. Our experimental results show that response-time-based tests are faster than scheduling-points tests, which makes them an excellent choice for online systems.
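As a concrete illustration of the response-time-based family mentioned above, the sketch below iterates on the start time of a task's first job under fixed-priority non-preemptive scheduling. It is a simplified, hypothetical sketch (first job only, constrained deadlines, tasks pre-sorted by priority); the exact tests surveyed in the paper also examine every job in the level-i busy period.

```python
import math

def response_time_np(tasks, i):
    """Simplified response-time iteration for task i under fixed-priority
    non-preemptive scheduling. 'tasks' is sorted by decreasing priority;
    each task is a dict with 'C' (WCET), 'T' (period), 'D' (deadline).
    Returns the response time of the first job, or None if it misses its
    deadline. A full exact test would also check the remaining jobs in
    the level-i busy period."""
    C, D = tasks[i]['C'], tasks[i]['D']
    # Blocking: at most one already-started lower-priority job.
    B = max((t['C'] for t in tasks[i + 1:]), default=0)
    w = B  # fixed-point iteration on the start time of the first job
    while True:
        w_next = B + sum((math.floor(w / t['T']) + 1) * t['C']
                         for t in tasks[:i])
        if w_next + C > D:   # the job cannot meet its deadline
            return None
        if w_next == w:      # fixed point reached
            return w + C     # response time = start time + own WCET
        w = w_next
```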
2.
Efficient resource allocation is a fundamental requirement in high performance computing (HPC) systems. Many large-scale distributed computing projects have designed and developed resource allocation mechanisms with a variety of architectures and services. In this study we report a comprehensive survey of resource allocation in various HPC systems. The aim of the work is to aggregate the existing HPC solutions under a joint framework and to provide a thorough analysis of the characteristics of their resource management and allocation strategies. Resource allocation mechanisms and strategies play a vital role in the performance of every class of HPC system; a comprehensive discussion of the widely used strategies deployed in HPC environments is therefore needed, which is one of the motivations of this survey. Moreover, we classify HPC systems into three broad categories, namely (a) cluster, (b) grid, and (c) cloud systems, and define the characteristics of each class by extracting sets of common attributes. All of the aforementioned systems are further cataloged into pure software and hybrid/hardware solutions. This classification is used to identify the approaches followed by the implementations of existing resource allocation strategies that are widely presented in the literature.
3.
Wang Chong, Min-Allah Nasro, Guan Bei, Lin Yu-Qi, Wu Jing-Zheng, Wang Yong-Ji. Journal of Computer Science and Technology, 2019, 34(6): 1351-1365
Covert channels have been an effective means for leaking confidential information across security domains, and numerous studies are available on typical...
4.

Interest in real-time systems has grown considerably over recent years, primarily due to the significant increase in the use of smart technologies and latency-sensitive applications such as cloud gaming, audio/video streaming, and smart homes. Significant work has been done on resource mapping in the cloud environment, and a number of promising results have been established, with the focus mainly on resource provisioning. However, the applicability of cloud computing services to the real-time workloads generated by smart systems is still in its infancy and remains relatively unexplored. To address this gap, we propose a model in which smart systems periodically offload computational workload to the cloud environment, where virtual machines are allocated according to a rate-monotonic scheduling policy to ensure that requests are processed within their associated deadlines. Task deadlines are relaxed to improve server utilization while maintaining a level of confidence in the timing constraints. Experimental results are discussed to highlight the applicability of static priority assignment for this workload in the context of virtual machine allocation.
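To make the rate-monotonic allocation idea above concrete, the hypothetical sketch below packs periodic tasks onto virtual machines with a first-fit rule, admitting a task to a VM only while the VM's utilization stays under the Liu and Layland rate-monotonic bound. The first-fit policy, the (C, T) task model, and the capacity parameter are illustrative assumptions, not the paper's exact allocation or deadline-relaxation scheme.

```python
def rm_bound(n):
    """Liu & Layland utilization bound for n tasks under rate-monotonic scheduling."""
    return n * (2 ** (1.0 / n) - 1)

def first_fit_rm(tasks, vm_capacity=1.0):
    """Pack periodic tasks given as (C, T) pairs onto VMs so that each VM
    stays under the RM utilization bound (and an optional capacity cap)."""
    vms = []  # each VM is a list of (C, T) tasks
    for C, T in tasks:
        u = C / T
        placed = False
        for vm in vms:
            n = len(vm) + 1
            if sum(c / t for c, t in vm) + u <= min(rm_bound(n), vm_capacity):
                vm.append((C, T))
                placed = True
                break
        if not placed:
            vms.append([(C, T)])  # open a new VM for this task
    return vms

# Example: three tasks packed onto however many VMs the bound requires.
print(first_fit_rm([(1, 4), (2, 6), (3, 10)]))
```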
5.
Grid is a distributed high-performance computing paradigm that offers various types of resources (computing, storage, communication) to resource-intensive user tasks. These tasks are scheduled so as to allocate the available Grid resources efficiently, achieving high system throughput and satisfying user requirements. The task scheduling problem has become more complex with the ever-increasing size of Grid systems. Even though selecting an efficient resource allocation strategy for a particular task helps in obtaining a desired level of service, researchers still face difficulties in choosing a suitable technique from the plethora of existing methods in the literature. In this paper, we explore and discuss existing resource allocation mechanisms for resource allocation problems in Grid systems. The work comprehensively surveys Grid resource allocation mechanisms for different architectures (centralized, distributed, static, or dynamic). The paper also compares these mechanisms on common features such as time complexity, searching mechanism, allocation strategy, optimality, operational environment, and the objective function they adopt for solving computing- and data-intensive applications. The comprehensive analysis of cutting-edge research in the Grid domain presented in this work gives readers an understanding of the essential concepts of resource allocation in Grid systems, helps them identify important outstanding issues for further investigation, and helps them choose the most appropriate mechanism for a given system or application.
6.
Current real-time systems offer more computational power to cope with CPU-intensive applications. However, this facility comes at the price of higher energy consumption and, eventually, higher heat dissipation. As a remedy, these issues are addressed by adjusting the system speed on the fly so that application deadlines are respected while the overall system energy consumption is reduced. In addition, the current state of the art in multi-core technology opens further research opportunities for energy reduction through power-efficient scheduling. The multi-core front, however, remains relatively unexplored from the perspective of task scheduling: to the best of our knowledge, little has been done so far to integrate a power-efficiency component into real-time scheduling theory tailored for multi-core platforms. In this paper, we first propose a technique to find the lowest core speed at which individual tasks can be scheduled. The proposed technique is evaluated experimentally, and the results show that our test outperforms its existing counterparts. Following that, a lightest-task-shifting policy is adopted for balancing core utilization, which is then used to determine a uniform system speed for a given task set. This guarantees that (i) all tasks meet their deadlines and (ii) the overall system energy consumption is reduced.
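The core idea of choosing the lowest speed that preserves schedulability can be sketched as a binary search over a normalized speed, as below. The speed range, the EDF utilization test, and the (C, T) task model are illustrative assumptions; the paper's per-task speed assignment and task-shifting policy are more elaborate.

```python
def edf_utilization_test(tasks):
    """EDF schedulability on one core for implicit-deadline periodic tasks."""
    return sum(C / T for C, T in tasks) <= 1.0

def minimal_uniform_speed(tasks, sched_test=edf_utilization_test,
                          lo=0.0, hi=1.0, eps=1e-3):
    """Binary-search the lowest normalized speed s in (0, 1] at which the
    task set stays schedulable. 'tasks' is a list of (C, T) pairs with WCETs
    measured at full speed; at speed s each WCET becomes C / s."""
    if not sched_test(list(tasks)):
        return None               # infeasible even at full speed
    while hi - lo > eps:
        mid = (lo + hi) / 2
        scaled = [(C / mid, T) for C, T in tasks]
        if sched_test(scaled):
            hi = mid              # schedulable: try a lower speed
        else:
            lo = mid              # not schedulable: speed must increase
    return hi
```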
7.
We study the multi-objective problem of mapping independent tasks onto a set of data center machines so as to simultaneously minimize energy consumption and response time (makespan), subject to deadline and architectural constraints. We propose an algorithm based on goal programming that converges effectively to a compromise Pareto-optimal solution. Compared with traditional multi-objective optimization techniques that require identification of the Pareto frontier, goal programming converges directly to the compromise solution, which makes it a very efficient multi-objective optimization technique. Moreover, simulation results show that the proposed technique achieves superior performance compared to greedy and linear-relaxation heuristics, and competitive performance relative to the optimal solution obtained with the Linear Interactive and Discrete Optimizer (LINDO) for small-scale problems.
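For readers unfamiliar with goal programming, a generic weighted formulation of the two objectives above might look as follows; the goal levels E*, M*, the weights w1, w2, and the feasible set X are illustrative placeholders, not the paper's actual model.

```latex
\begin{align}
\min_{x,\,d}\quad & w_1 d_E^{+} + w_2 d_M^{+} \\
\text{s.t.}\quad  & E(x) - d_E^{+} + d_E^{-} = E^{*} \\
                  & M(x) - d_M^{+} + d_M^{-} = M^{*} \\
                  & M(x) \le D_{\max}, \qquad x \in \mathcal{X} \\
                  & d_E^{+},\, d_E^{-},\, d_M^{+},\, d_M^{-} \ge 0
\end{align}
```

Here E(x) is the total energy and M(x) the makespan of assignment x, the d variables measure over- and under-achievement of the goals, and minimizing the weighted positive deviations drives the solver toward the compromise point without enumerating the Pareto frontier.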
8.
9.
As we delve deeper into the 'Digital Age', we witness explosive growth in the volume, velocity, and variety of the data available on the Internet. For example, in 2012 about 2.5 quintillion bytes of data were created daily, originating from a myriad of sources and applications including mobile devices, sensors, personal archives, social networks, the Internet of Things, enterprises, cameras, and software logs. This 'data explosion' has led to one of the most challenging research issues of the current Information and Communication Technology era: how to optimally manage (e.g., store, replicate, and filter) such large amounts of data, and how to identify new ways of analyzing them to unlock information. It is clear that such large data streams cannot be managed by setting up on-premises enterprise database systems, as this incurs a large up-front cost in buying and administering the hardware and software. Next-generation data management systems must therefore be deployed in the cloud. The cloud computing paradigm provides scalable and elastic resources, such as data and services accessible over the Internet. Every cloud service provider must ensure that data is processed and distributed efficiently, in a way that does not compromise end users' Quality of Service (QoS) in terms of data availability, data search delay, data analysis delay, and the like. In this perspective, data replication is used in the cloud to improve the performance (e.g., read and write delay) of applications that access data; through replication, a data-intensive application or system can achieve high availability, better fault tolerance, and data recovery. In this paper, we survey data management and replication approaches (from 2007 to 2011) developed by both industry and the research community. The focus of the survey is to discuss and characterize existing data replication and management approaches that tackle resource usage and QoS provisioning with different levels of efficiency. Moreover, we consider how these two influential concerns, data replication and data management, contribute to different QoS attributes, and we analyze the performance advantages and disadvantages of the approaches in cloud computing environments. Open issues and future challenges related to data consistency, scalability, load balancing, processing, and placement are also reported.
10.
Designing real-time systems is a challenging task, and many conflicting issues arise in the process. Among them, the most fundamental is the adjustment of appropriate values for task parameters such as periods, deadlines, and computation times, which directly influence system feasibility. Task periods and deadlines are generally known at design time and remain fixed throughout; task computation times, however, fluctuate significantly. Better quality of service or higher system utilization calls for larger task computation times, but this flexibility comes at the price of system infeasibility. To the best of our knowledge, no optimal solution exists for extracting, within a given range, the task computation times that keep the overall system feasible under a specific scheduling algorithm. In this paper, we present a generalized bound on task schedulability, expressed as a nonlinear inequality h_i ≤ 0 in the space of the execution times c_i. Based on this bound, the adjustment problem for task execution times, which determines the optimum c_i for better system performance while still meeting all temporal requirements, is addressed by solving a standard nonlinear constrained optimization problem. Simulations on synthetic task sets compare the performance of our approach with the most celebrated result, the LL-bound of Liu and Layland (J. ACM 20(1):40–61, 1973).
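For reference, the Liu and Layland utilization bound cited above, together with a generic statement of the execution-time adjustment problem the abstract describes, can be written as follows; the objective f and the per-task ranges are illustrative, and the concrete schedulability functions h_i are those derived in the paper itself.

```latex
% Liu and Layland (1973) bound: n implicit-deadline periodic tasks are
% rate-monotonic schedulable if
\sum_{i=1}^{n} \frac{c_i}{T_i} \;\le\; n\left(2^{1/n} - 1\right)

% Generic execution-time adjustment problem: maximize a performance
% objective over the c_i subject to the schedulability constraints.
\max_{c_1,\dots,c_n} \; f(c_1,\dots,c_n)
\quad \text{s.t.} \quad h_i(c_1,\dots,c_n) \le 0,\quad
c_i^{\min} \le c_i \le c_i^{\max},\quad i = 1,\dots,n
```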