Similar Documents
20 similar documents found (search time: 31 ms)
1.
We study the precise conditions under which all optimization strategies for a given family of finite functions yield the same expected maximization performance, when averaged over a uniform distribution of the functions. In the case of bounded-length searches in a family of Boolean functions, we provide tight connections between such “No Free Lunch” conditions and the structure of t-designs and t-wise balanced designs for arbitrary values of t. As a corollary, we obtain a nontrivial family of n-variate Boolean functions that satisfies the “No Free Lunch” condition with respect to searches of length Ω(n^{1/2}/log^{1/2} n). Modifying the construction, we also obtain nontrivial “No Free Lunch” families of functions with large ranges.
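The averaging argument behind the “No Free Lunch” condition can be illustrated with a toy enumeration (a hypothetical sketch, not the paper's construction): over the uniform distribution on all Boolean functions on a 3-point domain, two different fixed probing orders achieve exactly the same expected best-so-far value.

```python
from itertools import product

X = [0, 1, 2]                      # 3-point search space
m = 2                              # bounded search length

def run(strategy, f):
    """Best value seen after m probes; `strategy` maps history to next point."""
    seen, best = [], 0
    for _ in range(m):
        x = strategy(seen)
        v = f[x]
        seen.append((x, v))
        best = max(best, v)
    return best

# two non-adaptive strategies probing different fixed orders
s1 = lambda hist: [0, 1, 2][len(hist)]
s2 = lambda hist: [2, 0, 1][len(hist)]

# average best-so-far over the uniform distribution on all 2^3 Boolean functions
funcs = list(product([0, 1], repeat=len(X)))
avg1 = sum(run(s1, f) for f in funcs) / len(funcs)
avg2 = sum(run(s2, f) for f in funcs) / len(funcs)
```

Both averages equal 0.75 here; the interesting question the paper answers is exactly which subfamilies of functions preserve this equality for bounded-length searches.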

2.
Due to the dynamic properties of autonomous resource providers, the coupling of independent services into a Grid transaction may abort with inconsistency. In many situations, compensation actions are used to regain consistency, which raises the issue of compensation cost. To handle this issue, we set up, for the first time, a costing model for the all-or-nothing transaction of Grid services, and introduce the ECC metric to evaluate the related service scheduling. The analysis of ECC estimation is based on the so-called CC-PreC commit pattern, an abstraction of a category of common commit-handling use cases. Our analysis theoretically illustrates the high computational complexity of scheduling optimization with respect to the cost labeling, timing, and order of requests. Under certain typical conditions we prove that the infinite space of possible schedules can be reduced to a finite set of candidates. Based on the ECC metric, cautious scheduling is thoroughly investigated; as a basic policy it can be employed in certain common scenarios, and under it the intuitive product-first and cost-first schemes are justified in several typical situations.
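The compensation-cost idea can be illustrated with a simplified sequential model (an assumption for illustration, not the paper's ECC definition): services are invoked in order, and if one fails, everything already committed must be compensated.

```python
def expected_comp_cost(schedule):
    """Expected compensation cost of invoking services sequentially.
    Each service is (success_probability, compensation_cost); if service i
    fails, every earlier (already committed) service must be compensated."""
    total, p_prefix, c_prefix = 0.0, 1.0, 0.0
    for p, c in schedule:
        total += p_prefix * (1 - p) * c_prefix   # fail here, undo earlier work
        p_prefix *= p                            # probability of reaching next step
        c_prefix += c                            # compensation owed so far
    return total

a = (0.9, 10.0)   # reliable but costly to compensate
b = (0.5, 1.0)    # unreliable but cheap to compensate
e_ba = expected_comp_cost([b, a])   # risky service first
e_ab = expected_comp_cost([a, b])   # risky service last
```

Scheduling the failure-prone, cheap-to-compensate service first gives a far lower expected compensation cost, which is the kind of ordering effect the ECC metric quantifies.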

3.
Many companies have adopted Process-aware Information Systems (PAIS) to support their business processes in some form. On the one hand these systems typically log events (e.g., in transaction logs or audit trails) related to the actual business process executions. On the other hand explicit process models describing how the business process should (or is expected to) be executed are frequently available. Together with the data recorded in the log, this situation raises the interesting question “Do the model and the log conform to each other?”. Conformance checking, also referred to as conformance analysis, aims at the detection of inconsistencies between a process model and its corresponding execution log, and their quantification by the formation of metrics. This paper proposes an incremental approach to check the conformance of a process model and an event log. First of all, the fitness between the log and the model is measured (i.e., “Does the observed process comply with the control flow specified by the process model?”). Second, the appropriateness of the model can be analyzed with respect to the log (i.e., “Does the model describe the observed process in a suitable way?”). Appropriateness can be evaluated from both a structural and a behavioral perspective. To operationalize the ideas presented in this paper a Conformance Checker has been implemented within the ProM framework, and it has been evaluated using artificial and real-life event logs.
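A minimal sketch of the fitness side of conformance checking (a toy stand-in, not the token-replay metric implemented in ProM): the model is abstracted as a set of allowed direct-succession pairs with fixed start and end activities, and fitness is the fraction of log traces the model can replay.

```python
# model: allowed direct-succession pairs plus start/end activities (a toy
# stand-in for a Petri net); fitness = fraction of log traces the model replays
MODEL = {"start": "a", "end": "d",
         "edges": {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}}

def replays(trace, model):
    """True if the trace starts/ends correctly and only uses allowed steps."""
    if not trace or trace[0] != model["start"] or trace[-1] != model["end"]:
        return False
    return all(pair in model["edges"] for pair in zip(trace, trace[1:]))

def fitness(log, model):
    return sum(replays(t, model) for t in log) / len(log)

log = [("a", "b", "d"), ("a", "c", "d"), ("a", "d"), ("a", "b", "d")]
score = fitness(log, MODEL)   # ("a", "d") is not replayable -> 3 of 4 traces fit
```

A real conformance checker replays at event granularity and also quantifies structural and behavioral appropriateness, but the trace-level fraction already conveys the fitness question.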

4.
Commit Protocols for Distributed Real-Time Transactions   (total citations: 2; self-citations: 1; external citations: 2)
In a distributed real-time database system, the only way to guarantee transaction atomicity is to design and implement a real-time atomic commit protocol. This paper first analyzes in detail the various dependency relations among transactions that arise from conflicting data accesses, and on that basis proposes a real-time optimistic atomic commit protocol, 2SC. The protocol reduces transaction waiting time, increases transaction concurrency, integrates seamlessly with existing concurrency control protocols, and guarantees transaction serializability and atomicity. Simulation experiments show that the protocol reduces the number of transactions that miss their deadlines.

5.
It is fairly well known that there are fundamental differences between adaptive control of continuous-time and discrete-time nonlinear systems. In fact, even for the seemingly simple single-input single-output control system y_{t+1} = θ_1 f(y_t) + u_t + w_{t+1}, with a scalar unknown parameter θ_1, noise disturbance {w_t}, and a known function f(·) whose possibly nonlinear growth rate is characterized by |f(x)| = Θ(|x|^b) with b ≥ 1, the necessary and sufficient condition for the system to be globally stabilizable by adaptive feedback is b < 4. This was first found and proved by Guo (1997) for the Gaussian white noise case, and then proved by Li and Xie (2006) for the bounded noise case. Subsequently, a number of other “critical values” and “impossibility theorems” on the maximum capability of adaptive feedback were found, mainly for systems with a known control parameter as in the above model. In this paper, we study the above basic model again but with an additional unknown control parameter θ_2, i.e., u_t is replaced by θ_2 u_t. Interestingly, it turns out that the system is globally stabilizable if and only if b < 3. This is a new critical theorem for adaptive nonlinear stabilization, with meaningful implications for the control of more general uncertain nonlinear systems.
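A numerical sketch of the stabilizable regime (illustrative only; the parameter values, the least-squares certainty-equivalence law, and the known-control-gain case θ_2 = 1 with growth exponent b = 2 are all assumptions for this toy run, not the paper's proof):

```python
import math

# toy simulation of y_{t+1} = theta * f(y_t) + u_t + w_{t+1} with f(y) = y|y|
# (growth exponent b = 2) and a least-squares certainty-equivalence controller
theta, y, est, r = 1.5, 0.5, 0.0, 1.0
traj = []
for t in range(200):
    fy = y * abs(y)
    u = -est * fy                     # certainty-equivalence control
    w = 0.1 * math.sin(t)             # bounded disturbance
    y_next = theta * fy + u + w
    r += fy * fy                      # recursive least-squares update
    est += fy * (y_next - u - est * fy) / r
    y = y_next
    traj.append(abs(y))
```

With these mild settings the closed-loop trajectory stays bounded; the paper's point is that no adaptive feedback whatsoever can achieve this globally once b ≥ 3 when θ_2 is also unknown.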

6.
Ramamritham gives three common types of constraints on the execution history of concurrent transactions. This paper extends the constraints and gives a fourth type. The weak commit dependency and abort dependency between transactions, caused by data access conflicts, are then analyzed. Based on this analysis, an optimistic commit protocol, 2LC (two-Level Commit), is proposed, which is specially designed for the distributed real-time domain. It allows transactions to optimistically access locked data in a controlled manner, which reduces the data inaccessibility and priority inversion inherent in, and undesirable for, distributed real-time database systems. Furthermore, if a prepared transaction is aborted, the transactions in its weak commit dependency set execute as normal under 2LC. Extensive simulation experiments have been performed to compare the performance of 2LC with that of the base protocol, permits reading of modified prepared-data for timeliness (PROMPT), and deadline-driven conflict resolution (DDCR). The simulation results show that 2LC is effective in reducing the number of missed transaction deadlines, and that it is easy to incorporate with existing concurrency control protocols.
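The two dependency kinds can be sketched as simple bookkeeping (hypothetical data structures, not the 2LC implementation; the rule that reading an uncommitted write creates an abort dependency while overwriting read data creates only a weak commit dependency is an assumption drawn from the abstract's description):

```python
# toy bookkeeping for the dependencies tracked when an executing transaction
# optimistically accesses data locked by a prepared (committing) transaction
class Txn:
    def __init__(self, name):
        self.name = name
        self.abort_dep = set()        # aborted if the lender aborts
        self.weak_commit_dep = set()  # merely waits for the lender's outcome

def borrow(executing, prepared, mode):
    """Record the dependency created by the executing txn's optimistic access."""
    if mode == "read-uncommitted-write":
        executing.abort_dep.add(prepared.name)
    elif mode == "write-after-read":
        executing.weak_commit_dep.add(prepared.name)

def on_lender_abort(lender, txns):
    """Only abort-dependents cascade; weak-commit dependents proceed as normal."""
    return [t.name for t in txns if lender.name in t.abort_dep]

t1, t2, t3 = Txn("T1"), Txn("T2"), Txn("T3")
borrow(t2, t1, "read-uncommitted-write")
borrow(t3, t1, "write-after-read")
victims = on_lender_abort(t1, [t2, t3])
```

This mirrors the abstract's key claim: when the prepared transaction aborts, transactions that are only weakly commit-dependent on it keep executing normally.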

7.
Commit Processing for Distributed Real-Time Transactions   (total citations: 1; self-citations: 0; external citations: 1)
覃飙  刘云生 《软件学报》2002,13(8):1395-1401
Because of the complexity of commit processing, distributed real-time transactions have difficulty meeting their deadlines. This paper proposes a new commit protocol, A2SC (active double-space commit), suited to the needs of distributed real-time transaction commit processing. The various dependency relations arising from conflicting data accesses are analyzed. When data conflicts occur between prepared transactions and executing transactions, A2SC allows executing transactions to optimistically access locked data in a controlled manner. When a prepared transaction aborts, only the transactions in its abort-dependency set are aborted. The notion of a "fruitless run" is further proposed: when a transaction finds that its run is fruitless, it aborts proactively. Extensive simulation experiments compare the performance of A2SC with that of other protocols, such as the baseline protocol, PROMPT, and DDCR. The results show that A2SC succeeds in minimizing the number of transactions that miss their deadlines, and A2SC is therefore well suited to high-performance distributed real-time transaction processing.

8.
This paper presents HybridTCache, an efficient hardware/software cooperative transactional memory system. In the common case, transactions execute entirely in hardware; when a transaction exceeds the hardware's capacity, the operating system cooperates with the hardware to execute it. HybridTCache introduces a dedicated transactional cache, called TCache, which buffers a transaction's temporary data during execution, with TCache overflow managed by the operating system in cooperation with the hardware. A prototype of HybridTCache was built on the GEMS simulator. Evaluation shows that HybridTCache improves on conventional systems in performance, scalability, and design complexity.

9.
A novel (k, n) scalable secret image sharing (SSIS) scheme is proposed to encrypt a secret image into n shadow images. One can gradually reconstruct the secret image by stacking k or more shadows, while fewer than k shadows reveal no information about it. The advantage of a (k, n)-SSIS scheme is that it provides the threshold property (i.e., k is the threshold number of shadows needed to begin revealing the secret) as well as scalability (i.e., the information amount of a reconstructed secret is proportional to the number of shadows used in decryption). No previous (k, n)-SSIS scheme has smooth scalability, in which the information amount is “smoothly” proportional to the number of shadows. In this paper, we consider smooth scalability in (k, n)-SSIS schemes.
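The threshold property itself can be demonstrated with classical Shamir sharing over a prime field (a standard textbook scheme used purely for illustration; the paper's scalable image construction is different and additionally makes the recovered information grow with the number of shadows):

```python
import random

P = 257  # prime field; a real image scheme would share each pixel/byte

def make_shares(secret, k, n, rng=random.Random(7)):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123, k=3, n=5)
```

Any 3 of the 5 shares recover the value 123, while 2 shares are consistent with every possible secret, which is exactly the threshold behavior the abstract describes.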

10.
We present a fast algorithm for slope detection in gray-scale images, based on the 2D Fourier transform and standard filters; it may be used for line or edge detection. Our approach is based on calculating the “energy” per direction of the image, thus obtaining the “energy spectrum on slope” as a function of the direction θ. This spectrum exhibits local maxima where θ equals the slope of linear or quasi-linear segments within the image, yet it is not affected by their position within it. The process thus outlined has certain advantages in efficiency of linear-structure detection compared with the Radon and Hough transforms. It was motivated by the study of astrophysical images (solar dynamic radio spectra), which necessitated a method for fast extraction of “drifting structures”, since these appear as linear or quasi-linear segments in the spectra.
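A rough energy-per-direction sketch using NumPy (the image, binning, and angle convention are assumptions, not the authors' implementation): a horizontal line in the image concentrates its spectral energy along the vertical frequency axis, so the direction histogram peaks at 90° regardless of where the line sits.

```python
import numpy as np

# a horizontal line in a 64x64 image
img = np.zeros((64, 64))
img[32, :] = 1.0

F = np.fft.fftshift(np.fft.fft2(img))
mag2 = np.abs(F) ** 2
mag2[32, 32] = 0.0                     # drop the DC term

v, u = np.indices(img.shape) - 32      # centred frequency coordinates
theta = np.degrees(np.arctan2(v, u)) % 180.0

# "energy spectrum on slope": total |F|^2 per 10-degree direction bin
bins = np.arange(0, 181, 10)
energy, _ = np.histogram(theta.ravel(), bins=bins, weights=mag2.ravel())
peak = bins[np.argmax(energy)]
```

Shifting the line to another row changes only the phase of F, not |F|, which is the position-invariance the abstract highlights.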

11.
We incorporate a prewrite operation before a write operation in a mobile transaction to improve data availability. A prewrite operation does not update the state of a data object but only makes visible the future value that the data object will have after the final commit of the transaction. Once a transaction reads all its values and declares all its prewrites, it can pre-commit at the mobile host (MH), a computer connected to an unreliable mobile communication network. The remainder of the transaction's execution (the writes on the database) is shifted to the mobile service station (MSS), a computer connected to the reliable fixed network. Writes on the database consume time and resources and are therefore shifted to the MSS and delayed, which reduces wireless network traffic congestion. Since responsibility for the expensive part of the transaction's execution is shifted to the MSS, computing expenses at the mobile host are also reduced. A pre-committed transaction's prewrite values are made visible both at mobile and at fixed database servers before the final commit of the transaction, which increases data availability during the frequent disconnections common in mobile computing. Since a pre-committed transaction does not abort, no undo recovery needs to be performed in our model. A mobile host needs to cache only the prewrite values of the data objects, which takes less memory, transmission time, and energy, and can be transmitted over low bandwidth. We have analyzed the possible schedules of transactions running concurrently at both mobile and fixed database servers. We have discussed the concurrency control algorithm for our transaction model and proved that concurrent execution in our transaction processing model produces only serializable schedules. Our performance study shows that our model increases throughput and decreases the transaction-abort ratio in comparison with other lock-based schemes. We also briefly discuss recovery issues and the implementation of our model.
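The visibility rule for prewrites can be sketched as follows (hypothetical classes for illustration; the actual model also covers concurrency control and recovery): after pre-commit at the MH, readers observe the announced future value, while the committed database state changes only at final commit on the MSS.

```python
# minimal sketch of prewrite visibility: after PRE-COMMIT the future value is
# readable even though the database write (final COMMIT) happens later at the MSS
class DataObject:
    def __init__(self, value):
        self.value = value       # committed database state
        self.prewrite = None     # announced future value, if any

    def read(self):
        return self.prewrite if self.prewrite is not None else self.value

class MobileTxn:
    def __init__(self, obj, new_value):
        self.obj, self.new_value, self.state = obj, new_value, "active"

    def pre_commit(self):        # at the mobile host: declare the prewrite
        self.obj.prewrite = self.new_value
        self.state = "pre-committed"

    def final_commit(self):      # at the MSS: perform the actual write
        self.obj.value, self.obj.prewrite = self.new_value, None
        self.state = "committed"

x = DataObject(10)
t = MobileTxn(x, 42)
before = x.read()                      # 10: nothing announced yet
t.pre_commit()
visible, durable = x.read(), x.value   # 42 visible, 10 still durable
t.final_commit()
```

Because a pre-committed transaction never aborts in this model, exposing the prewrite value early is safe and keeps data available during disconnections.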

12.
The emergence of Hardware Transactional Memory (HTM) has greatly boosted transaction processing performance in in-memory databases. However, the group commit protocol, which aims to reduce the impact of slow storage devices, leads to high transaction commit latency. Non-Volatile Memory (NVM) opens opportunities for reducing transaction commit latency, but HTM cannot cooperate with NVM: flushing data to NVM always causes an HTM transaction to abort. In this paper, we propose a technique called parity version to decouple HTM execution from NVM writes, so that transactions can correctly and efficiently use NVM to reduce their commit latency under HTM. We have integrated this technique into DBX, a state-of-the-art HTM-based database, producing DBXN: a low-latency and high-throughput in-memory transaction processing system. Evaluations on typical OLTP workloads, including TPC-C, show that it has 99% lower latency and 2.1 times higher throughput than DBX.

13.
In this paper we study and characterize continuous α-migrative t-norms T with respect to a continuous t-norm T0. Depending on whether α is an idempotent element of T0 or not, the (α, T0)-migrative property restricts the ordinal sum structure of T chiefly “locally”, i.e., at α or around it. Outside this well-defined neighbourhood of α, the t-norm T can be arbitrary, subject only to remaining continuous. The investigation exploits the ordinal sum structure of continuous t-norms and our earlier results on the migrative property.
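The migrative identity T(αx, y) = T(x, αy) is easy to check numerically (an illustrative check, not the paper's characterization): the product t-norm satisfies it for every α, whereas the minimum t-norm already fails at a single point.

```python
# numerical check of the alpha-migrative identity T(a*x, y) = T(x, a*y)
T = lambda x, y: x * y          # product t-norm: migrative for every alpha
alpha = 0.3
pts = [i / 10 for i in range(11)]
ok = all(abs(T(alpha * x, y) - T(x, alpha * y)) < 1e-12
         for x in pts for y in pts)

# the minimum t-norm violates the identity, e.g. at (x, y) = (0.5, 1):
# min(0.15, 1) = 0.15 but min(0.5, 0.3) = 0.3
not_migrative = min(alpha * 0.5, 1.0) != min(0.5, alpha * 1.0)
```

The paper's question is the converse direction: which continuous t-norms, given their ordinal sum structure, can satisfy this identity with respect to a fixed T0.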

14.
Diomidis Spinellis. Software, 2009, 39(14): 1215-1233
User-level operating system transactions allow system administrators and ordinary users to perform a sequence of file operations and then commit them as a group, or abort them without leaving any trace behind. Such a facility can aid many system administration and software development tasks. The snapshot isolation concurrency control mechanism allows transactions to be implemented without locking individual system calls; conflicts are detected when the transaction is ready to commit. Along these lines we have implemented a user-space transaction monitor that is based on ZFS snapshots and a file system event monitor. Transactions are committed through a robust and efficient algorithm that merges the operations performed on a file system's clone back to its parent. Both the performance impact and the implementation cost of the transaction monitor we describe are fairly small.
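First-committer-wins conflict detection under snapshot isolation can be sketched in a few lines (a toy in-memory monitor; the actual system operates on ZFS clones and file-system events): a transaction aborts at commit if some transaction that committed after it began wrote an overlapping set of paths.

```python
# first-committer-wins sketch: a transaction commits only if no concurrent
# transaction already committed a write to a path it also wrote
class SnapshotTxn:
    def __init__(self, monitor):
        self.monitor = monitor
        self.start = monitor.clock   # snapshot timestamp
        self.writes = set()

    def write(self, path):
        self.writes.add(path)

    def commit(self):
        return self.monitor.try_commit(self)

class Monitor:
    def __init__(self):
        self.clock = 0
        self.history = []            # (commit_time, written_paths)

    def begin(self):
        return SnapshotTxn(self)

    def try_commit(self, txn):
        conflict = any(t > txn.start and paths & txn.writes
                       for t, paths in self.history)
        if conflict:
            return False             # abort: concurrent overlapping write
        self.clock += 1
        self.history.append((self.clock, frozenset(txn.writes)))
        return True

m = Monitor()
t1, t2 = m.begin(), m.begin()
t1.write("/etc/app.conf"); t2.write("/etc/app.conf")
ok1, ok2 = t1.commit(), t2.commit()
```

No per-syscall lock is taken; the check happens entirely at commit time, which is what makes the snapshot-isolation design cheap during the transaction's lifetime.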

15.
The paper proposes analysis and design techniques for switching linear systems (whose commutations occur in an arbitrary manner from the internal dynamics point of view, being determined by exogenous agents). We define and characterize (by “if and only if” conditions) two properties, namely (i) diagonally invariant exponential stability and (ii) diagonally invariant exponential stabilizability. Both properties rely on the existence of contractive invariant sets described by Hölder p-norms, 1 ≤ p ≤ ∞, and imply the standard concepts of “exponential stability” and “exponential stabilizability”, respectively (whereas the converses are, in general, not true). We prove that properties (i) and (ii) are equivalent to a set of inequalities written for the matrix measure (associated with the p-norm) applied to the matrices of the open-loop system (property (i)) and to the matrices of the closed-loop system (property (ii)), respectively. We also develop computational instruments for testing properties (i) and (ii) in the cases of the usual p-norms with p ∈ {1, 2, ∞}. These instruments represent computable necessary and sufficient conditions for the existence of properties (i) and (ii), and whenever property (ii) holds, a suitable state-feedback matrix is provided. Two numerical examples illustrate the exploration of properties (i) and (ii), as well as the use of software resources available in a powerful environment (such as MATLAB).
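The matrix-measure tests for p ∈ {1, ∞} are directly computable (the formulas below are the standard ones, μ∞(A) = max_i (a_ii + Σ_{j≠i} |a_ij|) and μ1(A) = max_j (a_jj + Σ_{i≠j} |a_ij|); the example matrices are assumptions, not taken from the paper):

```python
# matrix measures associated with the infinity- and 1-norms; negativity for
# every mode certifies contractivity in the corresponding norm under
# arbitrary switching
def mu_inf(A):
    return max(A[i][i] + sum(abs(A[i][j]) for j in range(len(A)) if j != i)
               for i in range(len(A)))

def mu_1(A):
    return max(A[j][j] + sum(abs(A[i][j]) for i in range(len(A)) if i != j)
               for j in range(len(A)))

# two modes of a switching system, both row/column diagonally dominant
A1 = [[-2.0, 1.0], [0.5, -3.0]]
A2 = [[-1.5, 0.2], [0.3, -2.0]]
stable = all(mu_inf(A) < 0 for A in (A1, A2))
```

Because the same measure bounds every mode, the test needs no common Lyapunov matrix search, which is what makes these conditions attractive computationally (the p = 2 case instead involves the largest eigenvalue of (A + Aᵀ)/2).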

16.
Hardware transactional memory (HTM) can greatly improve the throughput of multi-core in-memory transaction processing. However, to avoid the impact of slow persistence devices on transaction throughput, existing systems commit transactions in batches, which gives transaction commit very high latency. The advent of low-latency non-volatile memory (NVM) opens an opportunity to reduce the commit latency of HTM-based in-…

17.
Although several factors contribute to the difficulty of meeting distributed real-time transaction deadlines, data conflicts among transactions, especially in the commitment phase, are the prime cause of system performance degradation. The design of an efficient commit protocol is therefore of great significance for distributed real-time database systems (DRTDBS). Most existing commit protocols try to improve system performance by allowing a committing cohort to lend its data to an executing cohort, thus reducing data inaccessibility. These protocols block the borrower when it tries to send its WORKDONE/PREPARED message [1, 6, 8, 9], thus increasing transaction commit time. This paper first analyzes all kinds of dependencies that may arise from data access conflicts between executing and committing transactions when a committing cohort is allowed to lend its data to an executing cohort. It then proposes SWIFT, a static two-phase locking and high-priority based, write-update type commit protocol designed for fast and timely commit. In SWIFT, the execution phase of a cohort is divided into two parts, a locking phase and a processing phase, and a WORKSTARTED message is sent just before the start of the processing phase, in place of the WORKDONE message. Further, a borrower is allowed to send the WORKSTARTED message if it is only commit-dependent on other cohorts, instead of being blocked as in [1, 6, 8, 9]. This reduces the time needed for commit processing and is free from cascaded aborts. To ensure that the ACID properties are not violated, completion of processing is checked and the cohort's dependencies are removed before the YES-VOTE message is sent. Simulation results show that SWIFT improves system performance in comparison to the earlier protocols.
The performance of SWIFT is also analyzed under a partial read-only optimization, which minimizes intersite message traffic, execute-commit conflicts, and log writes, consequently yielding better response time. The impact on SWIFT of permitting the cohorts of the same transaction to communicate with each other [5] has also been analyzed.

18.
The two-dimensional knapsack problem asks for packing a maximum-profit subset of “small” rectangular items into a single “large” rectangular sheet. Packing must be orthogonal without rotation, i.e., all rectangle heights must be parallel in the packing, and parallel to the height of the sheet. In addition, we require that each item can be unloaded from the sheet in stages, i.e., by unloading simultaneously all items packed at the same y or x coordinate. This corresponds to using guillotine cuts in the associated cutting problem.
In this paper we present a recursive exact procedure that, given a set of items and a single sheet, constructs the set of associated guillotine packings. This procedure is then embedded into two exact algorithms for solving the guillotine two-dimensional knapsack problem. The algorithms are computationally evaluated on well-known benchmark instances from the literature. The C++ source code of the recursive procedure is available upon request from the authors.
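The flavor of recursive guillotine evaluation can be sketched with the classical dynamic program for the unbounded guillotine variant (a simplification: the paper's problem has a bounded item set and an exact enumeration of packings, whereas this sketch allows unlimited copies of each item):

```python
from functools import lru_cache

# DP for the (unbounded) guillotine cutting variant: every cut splits the
# current rectangle fully, either vertically or horizontally
items = [(2, 3, 4), (3, 2, 4), (1, 4, 3)]   # (width, height, profit)

@lru_cache(maxsize=None)
def best(w, h):
    # keep the rectangle whole for the most profitable item fitting in it...
    value = max([p for iw, ih, p in items if iw <= w and ih <= h], default=0)
    # ...or apply a guillotine cut and solve both halves recursively
    for cut in range(1, w // 2 + 1):         # vertical cuts
        value = max(value, best(cut, h) + best(w - cut, h))
    for cut in range(1, h // 2 + 1):         # horizontal cuts
        value = max(value, best(w, cut) + best(w, h - cut))
    return value

total = best(5, 5)
```

Every solution this recursion produces is realizable by edge-to-edge cuts, which is exactly the unloading-in-stages restriction the abstract imposes.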

19.
Among all classes of faults, Byzantine faults form the most general model of value faults. Traditionally, in the Byzantine fault model, faults are statically attributed to a set of up to t processes. This implies, however, that a process at which a value fault occurs is forever “stigmatized” as being Byzantine, an assumption that might not be acceptable for long-lived systems, where processes need to be reintegrated after a fault. We thus consider a model in which Byzantine processes can recover into a predefined recovery state, and show that consensus can be solved in such a model. Our model admits executions in which, over time, every process is faulty, as long as there are always enough correct processes.

20.