1.
On real-time databases: concurrency control and scheduling
In addition to maintaining database consistency as conventional databases do, real-time database systems must also handle transactions with timing constraints. While transaction response time and throughput are usually used to measure a conventional database system, the percentage of transactions meeting their deadlines, or a time-critical value function, is often used to evaluate a real-time database system. Scheduling real-time transactions is far more complex than traditional real-time scheduling in the sense that (1) worst-case execution times are typically hard to estimate, since not only CPU but also I/O requirements are involved; and (2) certain aspects of concurrency control may not integrate well with real-time scheduling. In this paper, we first develop a taxonomy of the underlying design space of concurrency control, covering the various techniques for achieving serializability and improving performance. This taxonomy provides a foundation for addressing the real-time issues. We then consider the integration of concurrency control with real-time requirements. The implications of using run policies to better exploit real-time scheduling in a database environment are examined. Finally, since timing constraints may be more important than data consistency in certain hard real-time database applications, we also discuss several approaches that exploit the nonserializable semantics of real-time transactions to meet hard deadlines.
2.
In service-oriented computing (SOC) environments, service clients interact with service providers for services or transactions. From the point of view of service clients, the trust status of a service provider is a critical issue to consider, particularly when the service provider is unknown to them. Typically, trust evaluation is based on the feedback on service quality provided by service clients. In this paper, we first present a trust management framework that is event-driven and rule-based. In this framework, trust computation is based on formulae, but rules determine which formula and which arguments to use, according to the events that occur during the transaction or service. In addition, we propose trust evaluation metrics and a formula for trust computation. The formula is designed to be adaptable to different application domains by setting suitable arguments. In particular, the proposed model addresses the incremental characteristics of the trust establishment process. Furthermore, we propose a fuzzy-logic-based approach for determining reputation ranks that differentiates new service providers from old (long-existing) ones. This provides a further incentive to new service providers and penalizes poor-quality services. Finally, a set of empirical studies has been conducted to study the properties of the proposed approaches and the methods to control trust changes in both the trust increment and decrement cases. The proposed framework is adaptable to different domains and complex trust evaluation systems.
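The abstract does not give the trust formula itself; as an illustration only, a minimal event-driven, rule-based update in the spirit described above might look like the following Python sketch, where the rule table, the gain/loss arguments and the update expressions are all assumptions rather than the paper's formulae.

```python
# Hypothetical sketch: event-driven, rule-based trust update.
# The rule table, argument values and update expressions are assumptions,
# not the formulae defined in the paper.

def update_trust(trust, feedback, event):
    """Return the new trust value in [0, 1] after one transaction.

    trust    -- current trust value in [0, 1]
    feedback -- service-quality score reported by the client, in [0, 1]
    event    -- label of what happened during the transaction
    """
    # Rule table: each event selects a formula's arguments.
    rules = {
        "success": {"gain": 0.10},   # slow increment on good outcomes
        "timeout": {"loss": 0.30},   # faster decrement on poor outcomes
        "fraud":   {"loss": 0.80},   # severe penalty
    }
    args = rules.get(event, {"gain": 0.05})

    if "gain" in args:
        # Move trust incrementally toward the reported feedback.
        if feedback > trust:
            return trust + args["gain"] * (feedback - trust)
        return trust
    # Penalty: trust decreases proportionally to the configured loss.
    return trust * (1.0 - args["loss"])


t = 0.5
for ev, fb in [("success", 0.9), ("success", 0.9), ("timeout", 0.2)]:
    t = update_trust(t, fb, ev)
    print(ev, round(t, 3))
```

The sketch only illustrates the incremental behavior the abstract mentions: trust grows slowly on good feedback and drops faster on bad events.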
3.
Many advances have been introduced recently for service-oriented computing and applications (SOCA). The Internet of Things (IoT) has become pervasive in various application domains. Fog/edge computing models have shown how computational and analytics capabilities can move from the centralized data centers where most enterprise business services have been located to the edge, where most customers' Things and their data and actions reside. Network functions between the edge and the cloud can be dynamically provisioned and managed through service APIs. Microservice architectures are increasingly used to simplify the engineering, deployment and management of distributed services, not only on powerful cloud-based machines but also on lightweight devices. Therefore, a key question for SOCA research is how to leverage existing techniques, and develop new ones, for coping with and supporting the changes in data and computation resources as well as customer interactions arising in the era of IoT and fog/edge computing. In this editorial paper, we attempt to address this question by focusing on the concept of ensembles for IoT, network functions and clouds.
4.
EDZL (Earliest Deadline first until Zero Laxity) is an efficient and practical scheduling algorithm for multiprocessor systems. It incurs a number of context switches comparable to EDF (Earliest Deadline First), and its schedulable utilization appears to be higher than that of EDF. Previously, it was conjectured that the utilization bound of EDZL is 3m/4 = 0.75m for m processors. In this paper, we disprove this conjecture and show that the utilization bound of EDZL is no greater than m(1−1/e) ≈ 0.6321m, where e ≈ 2.718 is Euler's number.
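As a rough illustration of the zero-laxity rule that gives EDZL its name (a job's laxity is its deadline minus the current time minus its remaining execution time), the hypothetical Python sketch below dispatches at most m jobs, favoring zero-laxity jobs and ordering the rest by EDF; the job representation is an assumption, and the last lines simply evaluate the bound m(1−1/e) stated above.

```python
import math

def laxity(job, now):
    """Laxity = absolute deadline - current time - remaining execution time."""
    return job["deadline"] - now - job["remaining"]

def edzl_pick(ready_jobs, now, m):
    """Illustrative EDZL dispatch on m processors: jobs with zero (or negative)
    laxity get the highest priority; the rest are ordered by earliest deadline."""
    urgent = [j for j in ready_jobs if laxity(j, now) <= 0]
    rest = sorted((j for j in ready_jobs if laxity(j, now) > 0),
                  key=lambda j: j["deadline"])
    return (urgent + rest)[:m]

jobs = [{"id": "A", "deadline": 10, "remaining": 9},
        {"id": "B", "deadline": 12, "remaining": 3},
        {"id": "C", "deadline": 8,  "remaining": 2}]
print([j["id"] for j in edzl_pick(jobs, now=1, m=2)])   # A has zero laxity, so it runs first

# The utilization bound shown in the paper: m * (1 - 1/e)
for m in (2, 4, 8):
    print(m, round(m * (1 - 1 / math.e), 4))
```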
5.
6.
Real-time systems have stringent deadline requirements for their tasks. To meet these requirements, a real-time system must use scheduling algorithms that ensure a predictable response even in the face of mutually exclusive accesses to critical sections. We present a concurrency control protocol for systems using the earliest deadline first scheduling algorithm. The protocol specifies a dynamic priority ceiling for each critical section, which is the earliest deadline among the jobs that are currently in, or will enter, the critical section. A job trying to enter a critical section is blocked unless its priority is higher than the priority ceiling of every critical section that is in use. We show that the protocol prevents both deadlock and chained blocking. The schedulability condition and implementation issues of the protocol are also discussed.
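A minimal sketch of the blocking test described above, assuming a simple dictionary representation of jobs and critical sections (the paper's full protocol and its schedulability analysis are not reproduced here):

```python
def ceiling(section):
    """Dynamic priority ceiling: the earliest deadline among jobs currently
    inside, or about to enter, this critical section."""
    return min(j["deadline"] for j in section["users"])

def may_enter(job, in_use_sections):
    """A job may enter only if its deadline is earlier (i.e. its priority is
    higher) than the ceiling of every critical section currently in use;
    otherwise it is blocked."""
    return all(job["deadline"] < ceiling(s) for s in in_use_sections)

# Hypothetical example: one critical section is in use by a job with deadline 20.
in_use = [{"name": "cs1", "users": [{"deadline": 20}]}]
print(may_enter({"deadline": 15}, in_use))   # True: earlier deadline, may enter
print(may_enter({"deadline": 25}, in_use))   # False: blocked
```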
7.
Rate monotonic schedulability tests using period-dependent conditions
Feasibility and schedulability problems have received considerable attention from the real-time systems research community in recent decades. Since the publication of the Liu and Layland (LL) bound, many researchers have tried to improve the schedulability bound of RM scheduling. The LL bound makes no assumption about the relationship between task periods. In this paper we consider the relative period ratios in a system. By reducing the difference between the smallest and the second-largest virtual period values in a system, we show that the RM schedulability bound can be improved significantly. This research also proposes a system design methodology to improve the schedulability of real-time systems with a fixed system load.
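For reference, the Liu and Layland bound mentioned above is U(n) = n(2^(1/n) − 1); a minimal sufficient (not necessary) RM test against it, with tasks assumed to be (execution time, period) pairs, could look like the sketch below. The period-ratio-based improvement proposed in the paper is not reproduced.

```python
def ll_bound(n):
    """Liu-Layland utilization bound for n tasks under rate-monotonic scheduling."""
    return n * (2 ** (1.0 / n) - 1)

def rm_schedulable_by_ll(tasks):
    """Sufficient (not necessary) RM test: total utilization <= LL bound.
    Each task is an (execution_time, period) pair -- an assumed representation."""
    u = sum(c / t for c, t in tasks)
    return u <= ll_bound(len(tasks))

print(round(ll_bound(3), 4))                            # about 0.7798
print(rm_schedulable_by_ll([(1, 4), (1, 5), (1, 10)]))  # U = 0.55, so True
```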
8.
Distributed trust management addresses the challenges of eliciting, evaluating and propagating trust for service providers on a distributed network. By delegating trust management to brokers, individual users can share their feedback on services without the overhead of maintaining their own ratings. This research proposes a two-tier trust hierarchy, in which a user relies on her broker to provide reputation ratings about any service provider, while brokers leverage their connected partners in aggregating the reputation of unfamiliar service providers. Each broker collects feedback from its users on past transactions. To accommodate individual differences, personalized trust is modeled with a Bayesian network. Training strategies such as the expectation-maximization (EM) algorithm can be deployed to estimate both server reputation and user bias. This paper presents the design and implementation of a distributed trust simulator, which supports experiments under different configurations. In addition, we have conducted experiments to show the following: 1) personal rating error converges to below 5% consistently within 10,000 transactions, regardless of the training strategy or bias distribution; 2) the choice of trust model has a significant impact on the performance of reputation prediction; 3) the two-tier trust framework scales well to distributed environments. In summary, parameter learning of trust models in the broker-based framework enables both the aggregation of feedback and personalized reputation prediction.
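As an illustration of the two-tier idea only, the sketch below has a broker answer from its own users' feedback when it knows the provider and fall back to its partner brokers otherwise; it deliberately substitutes a plain average for the paper's Bayesian-network model and EM training, and all names are hypothetical.

```python
from statistics import mean

class Broker:
    """Illustrative two-tier broker: local feedback first, partner brokers as
    fallback. (The paper's Bayesian-network model and EM training are not
    reproduced here.)"""

    def __init__(self, partners=None):
        self.feedback = {}           # provider -> list of ratings in [0, 1]
        self.partners = partners or []

    def record(self, provider, rating):
        self.feedback.setdefault(provider, []).append(rating)

    def local_reputation(self, provider):
        ratings = self.feedback.get(provider)
        return mean(ratings) if ratings else None

    def reputation(self, provider):
        local = self.local_reputation(provider)
        if local is not None:
            return local
        # Unknown provider: aggregate the partner brokers' local views.
        views = [p.local_reputation(provider) for p in self.partners]
        views = [v for v in views if v is not None]
        return mean(views) if views else None


partner = Broker()
partner.record("svc-X", 0.8)
partner.record("svc-X", 0.6)
me = Broker(partners=[partner])
me.record("svc-Y", 0.9)
print(me.reputation("svc-Y"))   # from local feedback
print(me.reputation("svc-X"))   # ~0.7, aggregated from the partner broker
```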
9.
Web services are new forms of Internet software that can be universally deployed and invoked using standard protocols. Services from different providers can be integrated into a composite service, regardless of their locations, platforms, and/or execution speeds, to implement complex business processes and transactions. In this paper, we study the end-to-end QoS issues of composite services by utilizing a QoS broker that is responsible for selecting and coordinating the individual service components. We design the service selection algorithms used by QoS brokers to construct the optimal composite service. The objective of the algorithms is to maximize the user-defined utility function value while meeting the end-to-end delay constraint. We propose two solution approaches to the service selection problem: the combinatorial approach, which models the problem as the Multiple Choice Knapsack Problem (MCKP), and the graph approach, which models it as the constrained shortest path problem from graph theory. We study efficient solutions for each approach. This research was supported in part by NSF CCR-9901697.
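A hedged sketch of the combinatorial (MCKP-style) formulation only: choose exactly one candidate per service class so as to maximize total utility under an end-to-end delay budget, via a small dynamic program over integer delays. The candidate utilities and delays in the example are made up, and the paper's actual algorithms may differ.

```python
def select_services(classes, delay_budget):
    """Pick exactly one candidate per service class to maximize total utility
    under an end-to-end delay budget (MCKP-style DP, integer delays assumed).

    classes -- list of service classes; each class is a list of
               (utility, delay) candidate tuples.
    Returns (best_utility, chosen_indices), or (None, None) if infeasible.
    """
    # dp[total_delay] = (best utility with that exact total delay, chosen indices)
    dp = {0: (0.0, [])}
    for candidates in classes:
        nxt = {}
        for used, (util, picks) in dp.items():
            for idx, (u, d) in enumerate(candidates):
                total = used + d
                if total > delay_budget:
                    continue
                cand = (util + u, picks + [idx])
                if total not in nxt or cand[0] > nxt[total][0]:
                    nxt[total] = cand
        dp = nxt
        if not dp:
            return None, None   # no feasible composition
    return max(dp.values(), key=lambda v: v[0])

# Two classes, each with two candidate services (utility, delay); budget = 10.
print(select_services([[(5, 6), (3, 2)], [(4, 5), (2, 3)]], 10))
```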
10.
Imprecise computations
The imprecise computation technique has been proposed as a way to handle transient overload and to enhance the fault tolerance of real-time systems. In a system based on this technique, each time-critical task is designed so that it can produce a usable, approximate result in time whenever a failure or overload prevents it from producing the desired, precise result. This paper describes ways to implement imprecise computations, models to characterize them, and algorithms for scheduling them. An imprecise mechanism for the generation and use of approximate results can be integrated in a natural way with a traditional fault-tolerance mechanism. An architectural framework for this integration is described.
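As an illustration of the general idea (not the paper's specific models or algorithms), the sketch below splits a task into a cheap mandatory part that always yields a usable approximation and an interruptible optional part that refines it until the time budget expires; the pi-series computation is just a stand-in workload.

```python
import time

def imprecise_task(budget_s, refine_steps=1000):
    """Illustrative imprecise computation: a quick mandatory part yields an
    approximate result; an interruptible optional part refines it until the
    time budget runs out. (Approximating pi via a series, as a stand-in.)"""
    deadline = time.monotonic() + budget_s

    # Mandatory part: cheap, always produces a usable approximation.
    result = 3.0

    # Optional part: each iteration improves precision; stop at the deadline.
    sign, k = 1.0, 1
    for _ in range(refine_steps):
        if time.monotonic() >= deadline:
            break          # overload / deadline: return the approximate result
        result += sign * 4.0 / ((2 * k) * (2 * k + 1) * (2 * k + 2))
        sign, k = -sign, k + 1
    return result

print(imprecise_task(0.001))   # approximate value of pi; precision depends on the budget
```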