Found 20 similar documents; search took 10 ms
1.
Jajodia S. Mutchler D. 《IEEE transactions on pattern analysis and machine intelligence》1989,15(1):39-46
A consistency control algorithm is described for managing replicated files in the face of network partitioning due to node or communication link failures. It adopts a pessimistic approach in that mutual consistency among copies of a file is maintained by permitting a file to be accessed in only a single partition at any given time. The algorithm simplifies the Davcev-Burkhard dynamic voting algorithm (1985) and also improves its availability by adding the notion of linearly ordered copies. A proof that any pessimistic algorithm with fresh reads is one-copy serializable is given.
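The pessimistic single-partition rule can be illustrated with a minimal sketch. The field names (`vn` for a copy's version number, `update_sites` for the number of sites that stored the latest update) are illustrative assumptions, and the tie-breaking via linearly ordered copies described in the paper is omitted here:

```python
def may_access(partition_copies):
    """Decide whether a network partition may access the replicated file.

    Sketch of a pessimistic dynamic-voting rule: access is granted only
    if this partition holds a strict majority of the copies that took
    part in the most recent update.  Field names are assumptions.
    """
    if not partition_copies:
        return False
    latest = max(c["vn"] for c in partition_copies)
    # Copies in this partition that are up to date.
    current = [c for c in partition_copies if c["vn"] == latest]
    # Majority test against the number of sites in the last update.
    return len(current) > current[0]["update_sites"] / 2
```

Because at most one partition can hold such a majority, mutual consistency is preserved even while the network is split.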
2.
《Journal of Systems Architecture》2015,61(9):472-485
Triple-modular-redundant applications are widely used for fault-tolerant safety-critical computation and have strict timing requirements for correct operation. We present an architecture that provides composability and mixed-criticality to support integration and to ease certification of such safety-critical applications. In this architecture, an additional layer is required for the sharing/partitioning of resources, which potentially jeopardizes the synchronization necessary for the triple-modular-redundant applications. We investigate the effects of different (unsynchronized) scheduling methods for the resource-sharing layer in this architecture and conclude that an out-of-the-box solution, which guarantees the technical separation between applications with fast reaction-time requirements, is only feasible when executing at most one instance of a triple-modular-redundant application per CPU core, for both single- and multi-core CPUs. Only when accepting changes in the applications or in their synchronization mechanisms do more flexible solutions with good performance and resource utilization become available.
3.
Scalability and availability in a large-scale distributed database are determined by the consistency strategies used by the transactions. Most big data applications demand consistency and availability at the same time; however, a suitable transaction model that handles the trade-off between availability and consistency is presently lacking. In this article, we propose a hierarchical transaction model that supports multiple consistency levels for data items in a large-scale replicated database. The data items are classified into different categories based on their consistency requirements, computed using a data mining algorithm, and are then mapped to the appropriate consistency level in the hierarchy. This allows parallel execution of several transactions belonging to each level. The topmost level, called the Serializable (SR) level, follows strong consistency and applies to data items that are both frequently read and updated. The next level, Snapshot Isolation (SI), maps to data items that are mostly read and demand unblocked reads. Data items that are mostly updated do not require a strictly consistent snapshot and are mapped to the next lower level, called Non-monotonic Snapshot Isolation (NMSI). The lowest level in the hierarchy corresponds to data items for which the ordering of operations does not matter; this level is called the Asynchronous (ASYNC) level. We tested the proposed transaction model with two different workloads on a test-bed designed following the TPC-C benchmark schema. The performance of the proposed model was evaluated against other transaction models that support a single consistency policy. The proposed model shows promising results in terms of transaction throughput, commit rate and average latency.
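The level hierarchy (SR / SI / NMSI / ASYNC) can be sketched as a mapping from a data item's access mix to a consistency level. The thresholds below are illustrative guesses; the paper derives the classification with a data-mining algorithm, not fixed cut-offs:

```python
def consistency_level(reads, writes):
    """Map a data item to a consistency level from its access mix.

    Thresholds are illustrative assumptions, not the paper's classifier.
    """
    total = reads + writes
    if total == 0:
        return "ASYNC"
    read_ratio = reads / total
    if 0.4 <= read_ratio <= 0.6:
        return "SR"    # frequently read and updated: strong consistency
    if read_ratio > 0.6:
        return "SI"    # mostly read: unblocked snapshot reads
    if read_ratio > 0.1:
        return "NMSI"  # mostly updated: relaxed snapshots
    return "ASYNC"     # ordering of operations does not matter
```

Transactions that touch only items at a weaker level can then run in parallel without coordinating with the stronger levels above them.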
4.
Based on a study of replicated-data consistency in mobile computing environments, a new data replication control protocol is proposed. The protocol exploits temporary information such as the read and write operations on replicated database objects, so that the two criteria of one-copy serializability and eventual consistency can be guaranteed simultaneously. The correctness of the protocol is also analyzed.
5.
6.
Peer-to-Peer (P2P) systems have been widely used by networked interactive applications to reduce the reliance on well-provisioned servers. A core challenge is to provide consistency maintenance for a massive number of users in a P2P manner. This requires propagating updates on time using only the uplink bandwidth of individual users instead of relying on dedicated servers. In this paper, we present a P2P system called PPAct that provides consistency maintenance for large-scale fast-interactive applications. We use massively multi-player online games as example applications to illustrate PPAct; the design can be directly applied to other interactive applications. We adopt the Area-of-Interest (AOI) filtering method, proposed in prior works [1], [2], to reduce the bandwidth consumption of update delivery. We solve AOI's critical problem of bandwidth shortage in hot regions by dynamically balancing the workload of each region in a distributed way. We separate the role of view discovery from consistency maintenance by assigning players as "region hosts" and "object holders." A region host is responsible for tracking objects and players within a region, and an object holder is responsible for sending updates about an object to interested players. Lookup queries for view discovery are processed by region hosts, while consistency maintenance of objects is handled by object holders. Separating the roles not only alleviates workload overflow in hot regions, but also speeds up view discovery and update delivery. Another key idea is that peers contribute spare bandwidth in a fully distributed way to forward updates about objects of interest; thus popular, high-demand objects have more peers forwarding updates.
We also present how to select capable and reliable players as region hosts and object holders. A P2P network simulator is developed to evaluate PPAct on two major types of online games: role-playing games (RPGs) and first-person shooter (FPS) games. The results demonstrate that PPAct successfully supports 10,000 players in RPGs and 1500 players in FPS games. PPAct outperforms SimMud [2] in RPGs and Donnybrook [3] in FPS games with 40% and 30% higher successful update rates, respectively.
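The AOI filtering step that an object holder performs can be sketched in a few lines. The data layout (a dict of player name to position) and the circular interest area are assumptions for illustration; the cited works define AOI more generally:

```python
def interested_players(obj_pos, players, radius):
    """Area-of-Interest filter sketch: an object holder sends an update
    only to players whose avatars lie within `radius` of the object.

    `players` maps a player name to an (x, y) position -- an assumed
    layout, not PPAct's actual data structure.
    """
    ox, oy = obj_pos
    return [name for name, (x, y) in players.items()
            if (x - ox) ** 2 + (y - oy) ** 2 <= radius ** 2]
```

Filtering by interest area is what keeps per-peer uplink usage bounded: each holder addresses only the (usually small) set of nearby players rather than the whole world.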
7.
In a replicated-architecture collaborative 2D CAD graphics editing environment, an Undo issued from the user interface is semantically targeted at a compound operation, whereas traditional consistency-maintenance algorithms are based on atomic operations and support Undo only at atomic-operation granularity; this leads to semantic inconsistency for compound Undo operations. This paper analyzes the types of compound operations in CAD graphics editing and the preconditions for executing Undo, discusses the definition of conflicts in the presence of Undo operations, proposes a conflict-resolution strategy based on a version-splitting mechanism, and describes the execution flow and main algorithms of compound Undo at local and remote sites. A case study demonstrates the effectiveness of the approach.
8.
Michael Nebeling Michael Grossniklaus Stefania Leone Moira C. Norrie 《World Wide Web》2012,15(4):447-481
There is a vast body of research dealing with the development of context-aware web applications that can adapt to different user, platform and device contexts. However, the range and growing diversity of new devices pose two significant problems for existing approaches. First, many techniques require a number of additional design processes and modelling steps before applications can be adapted. Second, the new generation of platforms and technologies underlying these devices, as well as the upcoming web standards HTML5 and CSS3, have partly changed the way in which web applications are implemented nowadays and often limit the way in which they can be adapted. In this paper, we present XCML as one example of a domain-specific language that tightly integrates context-aware concepts and adaptivity mechanisms to support developers in the specification and implementation of multi-channel web applications. In contrast to most existing approaches, the objective is a more lightweight approach to adaptation that can dynamically evolve and support new requirements as they emerge. Our solution builds on versioning principles in combination with a context matching process based on a declaration of context-dependent variants of content, navigation and presentation in terms of context expressions at different levels of granularity that are specific to the application. To support this, a formally defined context algebra is used to parse and resolve the context expressions at compile-time and to determine the best-matching variants with respect to the client context at run-time. We present the language concepts and a possible execution environment together with context-aware developer tools for the authoring and testing of adaptive features and behaviour. We also report on two case studies: the first shows how our general approach allows for integration with existing technologies to leverage advanced context-aware mechanisms in applications developed using other platforms and languages, and the second shows how existing web interfaces can be systematically extended to support new adaptation scenarios.
9.
陈小碾 《计算机工程与设计》2012,33(8):3069-3073,3116
To address the heavy server overhead incurred by existing consistency-maintenance methods for collaborative Web applications, a consistency-maintenance model based on document partitioning is proposed. The model introduces the idea of document partitioning on top of the operational transformation algorithm SLOT (symmetric linear operational transformation). To reduce server communication and memory costs, a dynamic document-partitioning strategy and its implementation algorithm are given, taking into account changes in the number of users and in operation frequency. Simulation results show that the model effectively reduces the communication and memory overhead of servers in large-scale collaborative applications.
10.
Aquiles M. F. Burlamaqui Samuel O. Azevedo Rummenigge Rudson Dantas Claudio A. Schneider Josivan S. Xavier Julio C. P. Melo Luiz M. G. Gonçalves Guido L. S. Filho Jauvane C. de Oliveira 《Multimedia Tools and Applications》2009,45(1-3):215-245
We propose a framework with a flexible architecture that has been designed and implemented for collaborative interaction of users, to be applied in massive applications through the Web. We introduce the concept of interperception and use technologies such as massive virtual environments and teleoperation for the creation of environments (mixing virtual and real ones) in order to promote accessibility and transparency in the interaction between people, and between people and animate devices (such as robots), through the Web. Experiments with massive games, with interactive applications in digital television, and with users and robots interacting in virtual and real versions of museums and cultural centers are presented to validate our proposal.
11.
Zef Hemel Danny M. Groenewegen Lennart C.L. Kats Eelco Visser 《Journal of Symbolic Computation》2011,46(2):150-182
Modern web application development frameworks provide web application developers with high-level abstractions to improve their productivity. However, their support for static verification of applications is limited. Inconsistencies in an application are often not detected statically, but appear as errors at run-time. The reports about these errors are often obscure and hard to trace back to the source of the inconsistency. A major part of this inadequate consistency checking can be traced back to the lack of linguistic integration of these frameworks. Parts of an application are defined with separate domain-specific languages, which are not checked for consistency with the rest of the application. Examples include regular expressions, query languages and XML-based languages for the definition of user interfaces. We give an overview and analysis of typical problems arising in development with web application frameworks, with Ruby on Rails, Lift and Seam as representatives. To remedy these problems, we argue in this paper that domain-specific languages should be designed from the ground up with static verification and cross-aspect consistency checking in mind, providing linguistic integration of domain-specific sub-languages. We show how this approach is applied in the design of WebDSL, a domain-specific language for web applications, by examining how its compiler detects inconsistencies not caught by web frameworks, providing accurate and clear error messages. Furthermore, we show how this consistency analysis can be expressed with a declarative rule-based approach using the Stratego transformation language.
12.
The modeling of uncertainty in continuous and categorical regionalized variables is a common issue in the geosciences. We present a hybrid continuous/categorical model, in which the continuous variable is represented by the transform of a Gaussian random field, while the categorical variable is obtained by truncating one or more Gaussian random fields. The dependencies between the continuous and categorical variables are reproduced by assuming that all the Gaussian random fields are spatially cross-correlated. Algorithms and computer programs are proposed to infer the model parameters and to co-simulate the variables, and illustrated through a case study on a mining data set.
13.
Cheng Y.-C. Lu S.-Y. 《IEEE transactions on pattern analysis and machine intelligence》1989,11(4):439-447
For a given set of n tuples, the binary consistency checking scheme generates a subset wherein no two elements intersect. The application of this scheme is illustrated by two problems in seismic horizon detection: seismic skeletonization and loop tying. After a brief introduction to seismic interpretation, these two examples are used to demonstrate how to cast an application problem into the formalism of the scheme. A comparison of this scheme to the dynamic programming approach to string matching due to S.Y. Lu (1982) is included.
14.
15.
Surface climatic conditions are key determinants of arthropod vector distribution and abundance and consequently affect transmission rates of any diseases they may carry. Remotely sensed observations by satellite sensors are the only feasible means of obtaining regional and continental scale measurements of climate at regular intervals for real-time epidemiological applications such as disease early warning systems. The potential of Pathfinder AVHRR Land (PAL) data to provide surrogate variables for near-surface air temperature and vapour pressure deficit (VPD) over Africa and Europe was assessed in this context. For the years 1988-1990 and 1992, correlations were examined between meteorological ground measurements (monthly mean air temperature and VPD(grd)) and variables derived from Advanced Very High Resolution Radiometer (AVHRR) data (LST and VPD(sat)). The AVHRR indices were derived from both daily and composite PAL data so that their relative performance could be determined. Furthermore, the ground observations were divided into African and European subsets, so that the relative performance of the satellite data at tropical/sub-tropical and temperate latitudes could be assessed. Significant correlations were shown between air temperature and LST in all months. Temporal variability existed in the strength of correlations throughout any twelve-month period, with the pattern of variability consistent between years. The adjusted r(2) values increased when elevation and the Normalised Difference Vegetation Index (NDVI) were included, in addition to LST, as predictor variables of air temperature. Attempts to derive monthly estimates of atmospheric moisture availability resulted in an over-estimation of VPD(sat) compared to ground observations, VPD(grd). The use of daily PAL data to derive monthly mean climatic indices was shown to be more accurate than the use of monthly maximum values from 10-day composite data.
A subset of the 1992 data was then used to build linear regression models for the direct retrieval of monthly mean air temperature from PAL data. The accuracy of retrieved estimates was greatest when NDVI was included with LST as predictor variables, with root mean square errors varying from 1.83 °C to 3.18 °C and a mean of 2.38 °C over the twelve months.
16.
Gergely Sipos 《Future Generation Computer Systems》2012,28(3):500-512
Collaborative development environments allow a group of users to view and edit a shared item from geographically dispersed sites, and collaborative development of workflow applications promises better outcomes in less time. Consistency maintenance in the face of concurrent accesses to shared entities is one of the core issues in the design of such systems. The paper introduces a lock-based solution and three different algorithms that enable controlled, concurrent access to workflows for multiple application developers. The described method ensures that collaborators cannot break the consistency criteria of workflows by adding cycles or invalid edges to the graphs. A formal analysis of the three graph-locking algorithms is also provided, focusing on the number of users who are allowed to edit a single workflow simultaneously. Based on the findings, a more powerful fourth graph-locking algorithm is defined.
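The core consistency criterion, that no collaborator may introduce a cycle into a workflow graph, reduces to a reachability test before an edge is admitted. A minimal sketch (a plain DFS; the paper's locking machinery is omitted, and the edge-list representation is an assumption):

```python
from collections import defaultdict

def edge_creates_cycle(edges, src, dst):
    """Return True if adding the edge src -> dst to the workflow graph
    would create a cycle, i.e. if dst can already reach src.

    `edges` is an iterable of (u, v) pairs -- an assumed representation.
    """
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return True          # dst reaches src: the new edge closes a cycle
        if node not in seen:
            seen.add(node)
            stack.extend(adj[node])
    return False
```

A locking scheme then only needs to serialize this check with respect to concurrent edge insertions on the same workflow.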
17.
An epidemic model gives an efficient approach for transaction processing in replicated systems in weakly connected environments. The approach has the advantages of high adaptability, support for low-bandwidth networks, and committing updates in an entirely decentralized fashion. But previous implementing protocols, like the ROWA protocol, quorum protocol, and voting protocol, share a common shortcoming: they are pessimistic in conflict reconciliation, and therefore incur a high transaction abort rate and reduce system performance dramatically when the workload scales up. In this paper, an optimistic voting protocol, which introduces the condition vote and the order vote into the voting process of transactions, is proposed. The condition vote and order vote postpone the final decision on conflicting transactions and avoid transaction aborts incurred by read-write and write-write conflicts. Experimental results indicate that the optimistic voting protocol decreases the abort rate and improves the average response time of transactions markedly compared to other protocols.
18.
《Information Systems》2002,27(4):277-297
Data replication can help database systems meet the stringent temporal constraints of current real-time applications, especially Web-based directory and electronic commerce services. A prerequisite for realizing the benefits of replication, however, is the development of high-performance concurrency control mechanisms. In this paper, we present managing isolation in replicated real-time object repositories (MIRROR), a concurrency control protocol specifically designed for firm-deadline applications operating on replicated real-time databases. MIRROR augments the classical O2PL concurrency control protocol with a novel state-based real-time conflict resolution mechanism. In this scheme, the choice of conflict resolution method is a dynamic function of the states of the distributed transactions involved in the conflict. A feature of the design is that acquiring the state knowledge does not require inter-site communication or synchronization, nor does it require modifications to the two-phase commit protocol. Using a detailed simulation model, we compare MIRROR's performance against real-time versions of a representative set of classical replica concurrency control protocols for a range of transaction workloads and system configurations. Our performance studies show that (a) the relative performance characteristics of these protocols in the real-time environment can differ significantly from their performance in a traditional (non-real-time) database system, (b) MIRROR provides the best performance in both fully and partially replicated environments for real-time applications with low to moderate update frequencies, and (c) MIRROR's simple-to-implement conflict resolution mechanism works almost as well as more sophisticated strategies.
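A state-based conflict resolution rule of this flavour can be sketched as follows. The state names, the priority encoding, and the exact policy are illustrative assumptions, not MIRROR's actual rules; the abstract only tells us that the decision is a dynamic function of the conflicting transactions' states:

```python
def resolve_conflict(requester_priority, holder_priority, holder_state):
    """Sketch of a state-based rule for a lock conflict between a
    requesting transaction and the current lock holder.

    Higher numbers mean higher priority; `holder_state` is one of
    "executing" or "in_commit".  All names are assumptions.
    """
    if requester_priority <= holder_priority:
        return "block"            # lower-priority requester waits
    if holder_state == "in_commit":
        return "block"            # never disturb a committing holder
    return "restart_holder"       # higher-priority requester wins
```

The point of making the rule state-dependent is visible in the second branch: restarting a transaction that has already entered commit processing wastes all its work, so a brief wait is usually cheaper.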
19.
A fault-tolerant algorithm for replicated data management
Rangarajan S. Setia S. Tripathi S.K. 《IEEE Transactions on Parallel and Distributed Systems》1995,6(12):1271-1282
We examine the tradeoff between message overhead and data availability that arises in the design of fault-tolerant algorithms for replicated data management in distributed systems. We propose a property called asymptotically high resiliency, which is useful for evaluating the fault tolerance of replica control algorithms and distributed mutual exclusion algorithms. We present a new algorithm for replica control that can be tailored (through a design parameter) to achieve the desired balance between low message overhead and high data availability. Further, we show that for a message overhead of O(√(N log N)), our algorithm achieves asymptotically high resiliency.
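As a point of reference for the availability side of this tradeoff, the availability of plain majority voting over N replicas is easy to compute exactly (this baseline is standard probability, not the paper's algorithm):

```python
from math import comb

def majority_availability(n, p):
    """Probability that a strict majority of n replicas is up, when each
    replica is up independently with probability p.

    This is the availability of simple majority voting, a common
    baseline against which replica-control schemes are compared.
    """
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))
```

For example, with n = 3 and p = 0.9 the majority quorum is available 97.2% of the time; tunable schemes like the one above trade some of this availability for lower message overhead.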
20.
João Felipe S. Ouriques Emanuela G. Cartaxo Patrícia D. L. Machado 《Software Quality Journal》2018,26(4):1451-1482
Recently, several test case prioritization (TCP) techniques have been proposed to order test cases for achieving a goal during test execution, particularly, revealing faults sooner. In the model-based testing (MBT) context, such techniques are usually based on heuristics related to structural elements of the model and derived test cases. In this sense, techniques’ performance may vary due to a number of factors. While empirical studies comparing the performance of TCP techniques have already been presented in literature, there is still little knowledge, particularly in the MBT context, about which factors may influence the outcomes suggested by a TCP technique. In a previous family of empirical studies focusing on labeled transition systems, we identified that the model layout, i.e., amount of branches, joins, and loops in the model, alone may have little influence on the effectiveness of TCP techniques investigated, whereas characteristics of test cases that actually fail definitely influences this aspect. However, we considered only synthetic artifacts in the study, which reduced the ability of representing properly the reality. In this paper, we present a replication of one of these studies, now with a larger and more representative selection of techniques and considering test suites from industrial systems as experimental objects. Our objective is to find out whether the results remain while increasing the validity in comparison to the original study. Results reinforce that there is no best performer among the investigated techniques and characteristics of test cases that fail represent an important factor, although adaptive random-based techniques are less affected by it. 相似文献