Similar literature
20 similar articles found (search time: 31 ms)
1.
A fragmentation method that reduces the storage overhead of replicated objects is proposed. A data-management protocol for these fragmented objects is presented, and it is shown that this protocol generalizes quorum algorithms for replicated data in which objects are not fragmented. Although the protocol reduces storage requirements, by itself it does not achieve a high level of resiliency for both read and write operations. By integrating a propagation mechanism with the protocol, the same level of resiliency is achieved for both read and write operations as in other quorum protocols, while the storage cost remains reduced.
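The protocol above generalizes classical weighted-voting quorums; as a point of reference, a minimal sketch of the plain (unfragmented) read/write quorum rule is given below. The function names, vote counts, and quorum sizes are illustrative assumptions, not parameters of the fragmentation protocol.

```python
# Minimal sketch of plain (unfragmented) quorum voting for replicated data.
# Vote counts and quorum sizes here are illustrative assumptions, not values
# taken from the fragmentation protocol described above.

def quorums_are_valid(votes, read_quorum, write_quorum):
    """Check the classical intersection constraints: any read quorum must
    overlap any write quorum, and any two write quorums must overlap."""
    total = sum(votes.values())
    return read_quorum + write_quorum > total and 2 * write_quorum > total

def can_operate(up_nodes, votes, quorum):
    """An operation may proceed if the reachable replicas hold enough votes."""
    return sum(votes[n] for n in up_nodes) >= quorum

votes = {"A": 1, "B": 1, "C": 1}          # one vote per replica (assumption)
r, w = 2, 2                                # read and write quorum sizes

assert quorums_are_valid(votes, r, w)
print(can_operate({"A", "B"}, votes, r))   # True: two replicas form a read quorum
print(can_operate({"C"}, votes, w))        # False: a single replica cannot write
```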

2.
Many quorum consensus protocols have been proposed for the management of replicated data in a distributed environment. The advantages of a replicated database system over a non-replicated one include high availability and low response time. We note further that the multiple sites can act as multiple agents, so that at any time multiple requests can be handled in parallel. This feature leads to the desirable consequence of high workload capacity. In this paper, we define a new metric of read-capacity for this feature. We propose a new protocol called diamond quorum consensus which has two major properties that are superior to the previous protocols of majority, tree, grid, and hierarchical quorum consensus: (1) it has the highest read-capacity, (2) it has the smallest optimal read quorum size of 2. We show that these two features are achievable without jeopardizing availability. The small quorum size is a significant feature because it relates to the messaging cost. Little previous work on quorum consensus has discussed the handling of partition failure, which in many cases depends on the quorum consensus protocol; we show how the generalized virtual partition protocol can be used to handle partition failure in the case of diamond quorum consensus.

3.
A nonblocking quorum protocol for replica control which guarantees one-copy serializability is developed. The effects of a nonblocking protocol are analyzed, and it is shown that the gains can be substantial under certain conditions. It is demonstrated that in order for the protocol to be useful, it must be integrated with a propagation mechanism. It is also shown that the access latency can be reduced significantly in a replicated environment. An interesting aspect of the quorum protocol is that it essentially uses a read-quorum/write-quorum approach for concurrency control but uses a read-one/write-all approach for replica control. It is shown that the nonblocking quorum protocol provides the same level of availability and fault tolerance as the quorum protocol proposed by D.K. Gifford (1979).

4.
敖丽  刘璟  姚绍文  武楠 《计算机应用》2018,38(5):1372-1376
The Logical Key Hierarchy (LKH) protocol has been shown to have a communication-overhead lower bound of O(log n) when resisting full collusion attacks, but in some resource-constrained or commercial application scenarios users still require a communication overhead below O(log n). Although the Stateful Exclusive Complete Subtree (SECS) protocol has constant communication overhead, it can only resist single-user attacks. Considering scenarios in which users are willing to sacrifice some security to reduce communication overhead, a hybrid group key update protocol (H-SECS) was designed and implemented, combining the full collusion resistance of LKH with the constant communication overhead of SECS. H-SECS configures the number of subgroups according to the security level of the application scenario, making an optimal trade-off between communication overhead and collusion resistance. Theoretical analysis and simulation experiments show that, compared with LKH and SECS, the communication overhead of H-SECS can be tuned within the range between O(1) and O(log n).
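As a rough illustration of the trade-off described above, the sketch below models rekeying cost only: an LKH-style binary key tree costs roughly log2(n) messages per membership change, a SECS-style flat scheme a small constant, and a hybrid that splits n users into g subgroups falls in between. The cost formulas and names are assumptions made for illustration, not the actual message counts of LKH, SECS, or H-SECS.

```python
import math

# Illustrative cost model only: these formulas are assumptions used to show how
# a hybrid scheme can be tuned between O(1) and O(log n); they are not the
# exact rekey message counts of LKH, SECS, or H-SECS.

def lkh_cost(n):
    return math.ceil(math.log2(n))      # binary key tree: ~log2(n) rekey messages

def secs_cost(n, c=2):
    return c                            # flat scheme: constant rekey messages

def hybrid_cost(n, groups, c=2):
    # one tree over the subgroups, flat rekeying inside the affected subgroup
    return math.ceil(math.log2(groups)) + c

n = 1024
for g in (1, 4, 32, n):
    print(g, hybrid_cost(n, g))         # grows from ~O(1) toward ~O(log n)
print(lkh_cost(n), secs_cost(n))
```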

5.
6.
We previously proved that almost all words of length n over a finite alphabet A with m letters contain as factors all words of length k(n) over A as n→∞, provided lim sup_{n→∞} k(n)/log n < 1/log m.

In this note it is shown that if this condition holds, then the number of occurrences of any word of length k(n) as a factor in almost all words of length n is at least s(n), where lim_{n→∞} log s(n)/log n = 0. In particular, this number of occurrences is bounded below by C log n as n→∞, for any absolute constant C > 0.


7.
We define the 0-1 Integer Programming Problem in a finite field or finite ring with identity as follows: given an m × n matrix A and an n × 1 vector b with entries in the ring R, find a 0-1 vector x such that Ax = b, or determine that none exists. We give an easily implemented enumerative algorithm for solving this problem, along with conditions under which spurious solutions occur with probability as small as desired. Finally, we show that the problem is NP-complete if R is the ring of integers modulo r for r ≥ 3. This result suggests that it will be difficult to improve on our algorithm.
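A direct, exponential-time enumeration over all 0-1 vectors makes the problem statement concrete; this naive baseline is not the paper's enumerative algorithm, and the matrix, right-hand side, and modulus below are made-up examples.

```python
from itertools import product

def solve_01_mod(A, b, r):
    """Naive search for a 0-1 vector x with A @ x == b over Z_r.
    Exponential in n; shown only to make the problem statement concrete,
    not the enumerative algorithm of the paper."""
    m, n = len(A), len(A[0])
    for x in product((0, 1), repeat=n):
        if all(sum(A[i][j] * x[j] for j in range(n)) % r == b[i] % r
               for i in range(m)):
            return list(x)
    return None                      # no 0-1 solution exists

# Made-up instance over Z_5.
A = [[1, 2, 3],
     [4, 1, 0]]
b = [4, 4]
print(solve_01_mod(A, b, 5))         # [1, 0, 1]
```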

8.
Steven L., Pattern Recognition, 1995, 28(12): 1965-1972
Two fast algorithms for median filtering of images using parallel computers having 2-D mesh interconnections are given. Both algorithms assume that an n × n image is loaded onto the mesh with one processing element per pixel. One algorithm performs median filtering over d × d neighborhoods in O(d^2) time and works with pixel values in an arbitrarily large range. This algorithm, while theoretically suboptimal, achieves a lower constant than a previously published asymptotically optimal algorithm and is simpler to program. The second algorithm assumes that the range of pixel values is limited and relatively small, and it accomplishes median filtering in O(d) time.
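As a functional point of reference, a plain sequential d × d median filter is sketched below; it computes the same kind of output as the mesh algorithms but shows none of their parallel structure or O(d)/O(d^2) mesh timing. The edge-replication policy is an assumption.

```python
from statistics import median

def median_filter(image, d):
    """Sequential d x d median filter (d odd) with edge replication.
    Functional reference only; it mirrors what the mesh algorithms compute,
    not how they distribute the work across processing elements."""
    n, m = len(image), len(image[0])
    r = d // 2
    out = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            window = [image[min(max(i + di, 0), n - 1)]
                           [min(max(j + dj, 0), m - 1)]
                      for di in range(-r, r + 1)
                      for dj in range(-r, r + 1)]
            out[i][j] = median(window)
    return out

img = [[9, 1, 2],
       [3, 8, 4],
       [5, 6, 7]]
print(median_filter(img, 3))
```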

9.
纪业  魏恒峰  黄宇  吕建 《软件学报》2020,31(5):1332-1352
Conflict-free replicated data types (CRDTs) are distributed replicated data types that encapsulate conflict-resolution strategies. They guarantee strong eventual consistency among replicas in a distributed system: replicas that have applied the same update operations are in the same state. CRDT protocols are intricately designed, and their correctness is not easy to ensure. This work aims to verify the correctness of a family of CRDT protocols using model checking. Specifically, a reusable framework for specifying and verifying CRDT protocols is built, consisting of a network communication layer, a protocol interface layer, a concrete protocol layer, and a specification layer. The network communication layer describes the communication model between replicas and implements several kinds of communication networks. The protocol interface layer provides unified interfaces for known CRDT protocols (both operation-based and state-based). At the concrete protocol layer, users can choose an appropriate underlying network according to the protocol's requirements. The specification layer describes the properties that every CRDT protocol must satisfy: strong eventual consistency and eventual visibility (every update operation is eventually received and processed by every replica). The framework is implemented in the TLA+ formal specification language; the Add-Wins Set replicated data type is then used as an example to show how to specify a concrete protocol in the framework and verify its correctness with the TLC model checker.
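To make the running example concrete, here is a minimal sketch of the add-wins (observed-remove) set semantics that the Add-Wins Set data type is based on; it is a plain in-memory model with an assumed message format, not the TLA+ framework described above.

```python
import uuid

class AddWinsSet:
    """Minimal add-wins (observed-remove) set replica.
    A plain in-memory model of the conflict-resolution rule; not the TLA+
    framework described above. The operation/message format is an assumption."""

    def __init__(self):
        self.tags = {}                      # element -> set of unique add-tags

    def add(self, elem):
        tag = uuid.uuid4()                  # each add gets a globally unique tag
        self.tags.setdefault(elem, set()).add(tag)
        return ("add", elem, tag)           # broadcast to the other replicas

    def remove(self, elem):
        observed = self.tags.pop(elem, set())
        return ("remove", elem, observed)   # removes only the tags seen locally

    def apply(self, op):
        kind, elem, payload = op
        if kind == "add":
            self.tags.setdefault(elem, set()).add(payload)
        else:
            remaining = self.tags.get(elem, set()) - payload
            if remaining:
                self.tags[elem] = remaining  # a concurrent add survives: add wins
            else:
                self.tags.pop(elem, None)

    def contains(self, elem):
        return elem in self.tags

# Concurrent add and remove of "x": the add wins on both replicas.
a, b = AddWinsSet(), AddWinsSet()
op_add0 = a.add("x"); b.apply(op_add0)      # both replicas see the first add
op_rm = a.remove("x")                       # a removes the tags it has observed
op_add1 = b.add("x")                        # concurrently, b adds "x" again
a.apply(op_add1); b.apply(op_rm)
print(a.contains("x"), b.contains("x"))     # True True: replicas converge
```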

10.
In statistical data mining and spatial statistics, many problems (such as detection and clustering) can be formulated as optimization problems whose objective functions are functions of consecutive subsequences. Some examples are (1) searching for a high activity region in a Bernoulli sequence, (2) estimating an underlying boxcar function in a time series, and (3) locating a high concentration area in a point process. A comprehensive search algorithm always ends up with a high order of computational complexity. For example, if a length-n sequence is considered, the total number of all possible consecutive subsequences is n(n+1)/2, so a comprehensive search algorithm requires at least O(n^2) numerical operations.

We present a multiscale-approximation-based approach. It is shown that most of the time, this method finds the exact same solution as a comprehensive search algorithm does. The derived multiscale approximation methods (MAMEs) have low complexity: for a length-n sequence, the computational complexity of an MAME can be as low as O(n). Numerical simulations verify these improvements.

The MAME approach is particularly suitable for problems with large data sets. One known drawback is that this method does not guarantee the exact optimal solution in every single run. However, simulations show that as long as the underlying effects are statistically significant, a MAME finds the optimal solution with probability almost equal to one.
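The comprehensive-search baseline referred to above can be written down directly; the sketch below scans every consecutive subsequence of a 0/1 sequence and scores it by its excess number of ones over the expectation under a background rate p. The scoring function is an assumption chosen for illustration rather than the paper's objective, and the MAME itself is not shown.

```python
def best_segment(seq, p=0.5):
    """Exhaustive scan over all n(n+1)/2 consecutive subsequences of a 0/1
    sequence; O(n^2) score evaluations using prefix sums. The score (ones in
    excess of the background rate p) is an illustrative choice, not the
    paper's objective; this is the baseline a MAME approximates."""
    n = len(seq)
    prefix = [0]
    for x in seq:
        prefix.append(prefix[-1] + x)
    best = (float("-inf"), 0, 0)
    for i in range(n):
        for j in range(i + 1, n + 1):
            ones = prefix[j] - prefix[i]
            score = ones - p * (j - i)
            best = max(best, (score, i, j))
    return best                       # (score, start, end) of the best segment

seq = [0, 0, 1, 1, 1, 0, 1, 0, 0, 0]
print(best_segment(seq, p=0.3))       # picks the high-activity region around 1s
```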


11.
An important problem in facilities design is to find an assignment of n facilities to n locations so that total materials handling cost is minimized. For problems of moderate size, suboptimal solutions must be accepted since optimal algorithms are computationally infeasible. If the mean and standard deviation of the layout cost distribution are known, then statistical methods may be used to measure and compare the efficiencies of various suboptimal solutions as well as to monitor the efficiency of the same assignment under changing production environments. In this paper a new, simple algorithm to calculate the exact value of the standard deviation of the layout cost distribution is presented (the mean is easy). This algorithm has a computational cost of O(n^2) arithmetic operations for a problem of size n × n, an improvement over previous methods, which are either inexact or require O(n^4) operations. Results of tests verifying the accuracy and claimed efficiency of this algorithm, as implemented on a microcomputer, are also presented (about 0.85 seconds for a 30 × 30 problem).
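The quantity being computed can be illustrated with a brute-force Monte Carlo estimate: sample random assignments, evaluate the materials-handling cost of each, and take the sample mean and standard deviation. This is only a sampling stand-in for the exact O(n^2) algorithm of the paper, and the flow and distance matrices are made up.

```python
import random
from statistics import mean, stdev

def layout_cost(perm, flow, dist):
    """Materials-handling cost of assigning facility i to location perm[i]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def mc_cost_stats(flow, dist, samples=5000, seed=1):
    """Monte Carlo estimate of the mean and standard deviation of the layout
    cost over random assignments. A sampling stand-in for the exact O(n^2)
    computation described in the abstract, not that algorithm itself."""
    rng = random.Random(seed)
    n = len(flow)
    costs = []
    for _ in range(samples):
        perm = list(range(n))
        rng.shuffle(perm)
        costs.append(layout_cost(perm, flow, dist))
    return mean(costs), stdev(costs)

# Made-up 4x4 instance.
flow = [[0, 3, 0, 2], [3, 0, 1, 0], [0, 1, 0, 4], [2, 0, 4, 0]]
dist = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
print(mc_cost_stats(flow, dist))
```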

12.
The work presents a new protocol, VELOS, for tolerating network partitions in distributed systems with replicated data. Our primary design goals are efficiency and high availability. The proposed protocol achieves optimal availability, according to a well-known metric, while ensuring one-copy serializability. In addition, VELOS is designed to reduce the cost involved in achieving high availability. We have developed mechanisms through which transactions, in the absence of failures, can access replicated data objects and observe shorter delays than under related protocols, and impose smaller loads on the network and the servers. Furthermore, VELOS offers high availability without relying on system transactions that must execute to restore availability when failures and recoveries occur. Such system transactions typically access all (replicas of all) data objects and thus introduce significant delays to user transactions and consume large quantities of resources such as network bandwidth and CPU cycles. Thus, we offer our protocol as a proof that high availability can be achieved inexpensively.

13.
The paper presents an optimal systolic array architecture for rapid solution of dense systems of linear equations. The array solves a system of size n × n in 4n + 1 time units including I/O time. Data communications are strictly local and the processing elements (PEs) are simple. The complete three-phase solution algorithm is executed on a single array, employing about 3n^2/2 PEs without any need for costly inter-phase I/O. Due to a novel data steering mechanism, the three algorithmic phases are maximally overlapped. Design optimality is established using systolic precedence diagrams. It is also shown that merging the functions of two adjacent PEs into a single PE is possible, resulting in maximal PE utilization. An interesting result regarding cascading phase-optimal arrays is obtained.

14.
In the weighted voting protocol which is used to maintain the consistency of replicated data, the availability of the data to read and write operations depends not only on the availability of the nodes storing the data but also on the vote and quorum assignments used. The authors consider the problem of determining the vote and quorum assignments that yield the best performance in a distributed system where node availabilities can differ and the mix of read and write operations is arbitrary. The optimal vote and quorum assignments depend not only on the system parameters, such as node availability and operation mix, but also on the performance measure. The authors present an enumeration algorithm that can be used to find the vote and quorum assignments that need to be considered for achieving optimal performance. When the performance measure is data availability, an analytical method is derived to evaluate it for any vote and quorum assignment. This method and the enumeration algorithm are used to find the optimal vote and quorum assignment for several systems. The enumeration algorithm can also be used to obtain the optimal performance when other measures are considered.
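The data-availability measure can be made concrete by brute force: enumerate every up/down combination of the nodes and sum the probabilities of the combinations in which the live nodes hold enough votes for the read (or write) quorum. The sketch below does exactly that; it is exponential in the number of nodes, it is not the analytical method of the paper, and the availabilities, votes, and quorums are invented.

```python
from itertools import product

def availability(avail, votes, quorum):
    """Probability that the live nodes hold at least `quorum` votes.
    Brute-force enumeration over all up/down states (exponential in the number
    of nodes); illustrates the measure, not the paper's analytical method."""
    nodes = list(avail)
    total = 0.0
    for states in product((True, False), repeat=len(nodes)):
        prob = 1.0
        live_votes = 0
        for node, up in zip(nodes, states):
            prob *= avail[node] if up else 1.0 - avail[node]
            if up:
                live_votes += votes[node]
        if live_votes >= quorum:
            total += prob
    return total

# Invented example: heterogeneous node availabilities and vote weights.
avail = {"A": 0.99, "B": 0.95, "C": 0.90}
votes = {"A": 2, "B": 1, "C": 1}   # 4 votes in total
read_q, write_q = 2, 3             # read + write > 4 and 2 * write > 4
print(availability(avail, votes, read_q), availability(avail, votes, write_q))
```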

15.
User-perceived dependability and performance metrics are very different from conventional ones in that the dependability and performance properties must be assessed from the perspective of users accessing the system. In this paper, we develop techniques based on stochastic Petri nets (SPN) to analyze user-perceived dependability and performance properties of quorum-based algorithms for managing replicated data. A feature of the techniques developed in the paper is that no assumption is made regarding the interconnection topology, the number of replicas, or the quorum definition used by the replicated system, thus making it applicable to a wide class of quorum-based algorithms. We illustrate this technique by comparing conventional and user-perceived metrics in majority voting algorithms. Our analysis shows that when the user's perspective is taken into consideration, the effect of increasing the network connectivity and number of replicas on the availability and dependability properties perceived by users is very different from that under conventional metrics. Thus, unlike conventional metrics, user-perceived metrics allow a tradeoff to be exploited between the hardware invested, i.e., higher network connectivity and number of replicas, and the performance and dependability properties perceived by users.

16.
The approach of ordinal mind change complexity, introduced by Freivalds and Smith, uses (notations for) constructive ordinals to bound the number of mind changes made by a learning machine. This approach provides a measure of the extent to which a learning machine has to keep revising its estimate of the number of mind changes it will make before converging to a correct hypothesis for languages in the class being learned. Recently, this notion, which also yields a measure for the difficulty of learning a class of languages, has been used to analyze the learnability of rich concept classes.

The present paper further investigates the utility of ordinal mind change complexity. It is shown that for identification from both positive and negative data and n ≥ 1, the ordinal mind change complexity of the class of languages formed by unions of up to n + 1 pattern languages is only ω ×_o notn(n) (where notn(n) is a notation for n, ω is a notation for the least limit ordinal, and ×_o denotes ordinal multiplication). This result nicely extends an observation of Lange and Zeugmann that pattern languages can be identified from both positive and negative data with 0 mind changes.

Existence of an ordinal mind change bound for a class of learnable languages can be seen as an indication of its learning "tractability". Conditions are investigated under which a class has an ordinal mind change bound for identification from positive data. It is shown that an indexed family of languages has an ordinal mind change bound if it has finite elasticity and can be identified by a conservative machine. It is also shown that the requirement of conservative identification can be sacrificed for the purely topological requirement of M-finite thickness. Interaction between identification by monotonic strategies and the existence of ordinal mind change bounds is also investigated.


17.
For an arbitrary n × n matrix A and an n × 1 column vector b, we present a systolic algorithm to solve the dense linear equations Ax = b. An important consideration is that the pivot row can be changed during the execution of our systolic algorithm. The computational model consists of n linear systolic arrays. For 1 ≤ i ≤ n, the ith linear array is responsible for eliminating the ith unknown variable x_i of x. This algorithm requires 4n time steps to solve the linear system. The elapsed time unit within a time step is independent of the problem size n. Since the structure of a PE is simple and PEs of the same type execute identical instructions, the design is very suitable for VLSI implementation. The design process and correctness proof are considered in detail. Moreover, this algorithm can detect whether A is singular or not.
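The computation the array carries out is, functionally, Gaussian elimination with row pivoting; a plain sequential sketch is given below as a reference for what is computed (including detection of a singular A), with none of the systolic data movement.

```python
def solve(A, b, eps=1e-12):
    """Gaussian elimination with partial pivoting, then back substitution.
    Sequential functional reference for the computation the systolic array
    performs; none of the array's data movement is modelled. Returns None if A
    is (numerically) singular."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]    # augmented matrix
    for k in range(n):
        pivot = max(range(k, n), key=lambda r: abs(M[r][k]))
        if abs(M[pivot][k]) < eps:
            return None                              # singular (or nearly so)
        M[k], M[pivot] = M[pivot], M[k]              # the pivot row may change
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

A = [[0.0, 2.0, 1.0],
     [1.0, 1.0, 0.0],
     [2.0, 0.0, 1.0]]
b = [3.0, 2.0, 3.0]
print(solve(A, b))    # [1.0, 1.0, 1.0]; first step swaps in a nonzero pivot row
```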

18.
Newton's and Laguerre's methods can be used to concurrently refine all separated zeros of a polynomial P(z). This paper analyses the rate of convergence of both procedures and its implications for the attainable number n of correct figures. In two special cases the number m of iterations required to reach an accuracy η = 10^(-n) is shown to grow as log_λ n, where λ = 3 for Newton's method and λ = 4 for Laguerre's. In the general case m is shown to grow linearly with n for both procedures. An assessment of the efficiency of the two methods is also given, by evaluating the computational complexity of operations in circular arithmetic and the efficiency indices of the two iterative schemes.
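A minimal sketch of the Newton half of the comparison is shown below, refining separated zeros one at a time from nearby starting points; the stopping tolerance and example polynomial are illustrative choices, and Laguerre's method (which also uses P'') is not shown.

```python
def newton_refine(coeffs, z0, tol=1e-12, max_iter=50):
    """Refine one zero of P(z) = coeffs[0]*z^n + ... + coeffs[n] by Newton's
    method from the starting guess z0. Works for complex z. Illustrative only;
    Laguerre's method (which also needs P'') is not shown."""
    def horner(cs, z):                 # evaluate P and P' together
        p, dp = 0j, 0j
        for c in cs:
            dp = dp * z + p
            p = p * z + c
        return p, dp

    z = complex(z0)
    for _ in range(max_iter):
        p, dp = horner(coeffs, z)
        if dp == 0:
            break
        step = p / dp
        z -= step
        if abs(step) < tol:
            break
    return z

# P(z) = z^3 - 6z^2 + 11z - 6 has separated zeros 1, 2, 3.
coeffs = [1, -6, 11, -6]
print([newton_refine(coeffs, g) for g in (0.8, 2.2, 3.3)])
```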

19.
This paper presents a new practical bit-vector algorithm for solving the well-known Longest Common Subsequence (LCS) problem. Given two strings of lengths m and n, n ≥ m, we present an algorithm which determines the length p of an LCS in O(nm/w) time and O(m/w) space, where w is the number of bits in a machine word. This algorithm can be thought of as a column-wise "parallelization" of the classical dynamic programming approach. Our algorithm is very efficient in practice, where computing the length of an LCS of two strings can be done in linear time and constant (additional/working) space by assuming that m ≤ w.
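The flavour of the bit-vector approach can be seen in the short sketch below, which uses an Allison-Dix/Hyyrö-style column recurrence to compute the LCS length with a constant number of word-wide operations per character of the second string; it belongs to the same family as, but is not necessarily identical to, the algorithm of the paper.

```python
def lcs_length(a, b):
    """Bit-parallel LCS length (Allison-Dix / Hyyrö style recurrence).
    One bit per position of `a`; each character of `b` is processed with a
    constant number of word-wide operations. In the same family as, but not
    necessarily identical to, the paper's algorithm."""
    m = len(a)
    mask = (1 << m) - 1
    match = {}                                  # per-character match bit-vectors
    for i, ch in enumerate(a):
        match[ch] = match.get(ch, 0) | (1 << i)
    v = mask                                    # all ones
    for ch in b:
        u = v & match.get(ch, 0)
        v = ((v + u) | (v - u)) & mask
    return bin(v ^ mask).count("1")             # LCS length = number of zero bits

print(lcs_length("ABCBDAB", "BDCABA"))          # 4, e.g. "BCBA"
```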

20.
This paper presents a distributed algorithm for finding the articulation points in an n-node communication network represented by a connected undirected graph. For a given graph, if the deletion of a node splits the graph into two or more components, then that node is called an articulation point. The output of the algorithm is available in a distributed manner, i.e., when the algorithm terminates, each node knows whether or not it is an articulation point. It is shown that the algorithm requires O(n) messages and O(n) units of time and is optimal in communication complexity to within a constant factor.
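For comparison, the standard centralized computation of articulation points by depth-first search and low-link values is sketched below; unlike the distributed algorithm of the paper, it assumes the whole graph is available at a single site.

```python
def articulation_points(adj):
    """Centralized DFS low-link computation of articulation points, for
    comparison with the distributed algorithm described above (which keeps the
    graph distributed and uses O(n) messages). `adj` maps node -> iterable of
    neighbours of a connected undirected graph."""
    disc, low, points = {}, {}, set()
    counter = [0]

    def dfs(u, parent):
        disc[u] = low[u] = counter[0]
        counter[0] += 1
        children = 0
        for v in adj[u]:
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    points.add(u)           # u separates v's subtree from the rest
            elif v != parent:
                low[u] = min(low[u], disc[v])
        if parent is None and children > 1:
            points.add(u)                   # a root with 2+ DFS children is a cut node

    start = next(iter(adj))
    dfs(start, None)
    return points

# Path 0-1-2 plus a triangle 2-3-4: nodes 1 and 2 are articulation points.
adj = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2, 4], 4: [2, 3]}
print(sorted(articulation_points(adj)))     # [1, 2]
```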

