Similar Documents
1.
2.
In this paper, a qualitative model checking algorithm for the verification of distributed probabilistic real-time systems (DPRS) is presented. The model of DPRS, called the real-time probabilistic process model (RPPM), is defined over a continuous time domain. The properties of DPRS are described using deterministic timed automata (DTA). The key part of the algorithm is mapping continuous time to finite time intervals with flag variables. Compared with existing algorithms, this algorithm uses more general delay-time equivalence classes instead of unit delay-time equivalence classes restricted by event sequences, and avoids generating equivalence classes of states due only to the passage of time. The results show that this algorithm is cheaper.

3.
The Journal of Supercomputing - Task graphs provide a simple way to describe scientific workflows (sets of tasks with dependencies) that can be executed on both HPC clusters and in the cloud. An...

4.
Johan Moe, David A. Carr. Software, 2002, 32(9): 889–906
One of the most challenging problems facing today's software engineer is to understand and modify distributed systems. One reason is that in actual use systems frequently behave differently than the developer intended. In order to cope with this challenge, we have developed a three-step method to study the run-time behavior of a distributed system. First, remote procedure calls are traced using CORBA interceptors. Next, the trace data is parsed to construct RPC call-return sequences, and summary statistics are generated. Finally, a visualization tool is used to study the statistics and look for anomalous behavior. We have been using this method on a large distributed system (more than 500,000 lines of code) with data collected during both system testing and operation at a customer's site. Despite the fact that the distributed system had been in operation for over three years, the method has uncovered system configuration and efficiency problems. Using these discoveries, the system support group has been able to improve product performance and their own product maintenance procedures. Copyright © 2002 John Wiley & Sons, Ltd.
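The second step of the method above, pairing call and return events and deriving summary statistics, can be sketched as follows. The flat event format (timestamp, process, method, call/return) is an illustrative assumption, not the authors' actual CORBA interceptor output:

```python
from collections import defaultdict
from statistics import mean

def summarize_rpc_trace(events):
    """Pair call/return events per (process, method) and compute
    per-method duration statistics from a flat event trace.
    Each event is a (timestamp, process, method, kind) tuple,
    where kind is 'call' or 'return'."""
    pending = defaultdict(list)    # (process, method) -> stack of call timestamps
    durations = defaultdict(list)  # method -> observed call durations
    for ts, proc, method, kind in events:
        key = (proc, method)
        if kind == "call":
            pending[key].append(ts)
        elif kind == "return" and pending[key]:
            durations[method].append(ts - pending[key].pop())
    return {m: {"count": len(d), "mean": mean(d), "max": max(d)}
            for m, d in durations.items()}
```

Anomalous behavior then shows up as outliers in the per-method `mean` and `max` durations.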

5.
6.
This paper describes our work to improve the performance of distributed applications. We aim at certain application characteristics, such as balancing load, allowing separately written applications to work better together, and allowing a distributed application to adapt its behavior in more flexible ways. Our approach is to write application-specific schedulers, which can access the global state of the application when making scheduling decisions. To achieve this goal, we extended our earlier work on CATAPULTS (Creating And Testing APplication-specific User-Level Thread Schedulers), a domain-specific language for creating and testing application-specific user-level thread schedulers, to distributed applications by adding 'master schedulers' for dealing with the distributed parts of applications. This paper presents our design, experimentation with, and implementation of distributed CATAPULTS, together with several realistic examples to measure the feasibility of this approach: a website application, an embedded application, and load balancing. Each example has a scheduling goal for which we developed a customized scheduler, and we measured the performance with and without it. The customized scheduler for each example was fairly straightforward to develop, and each achieved its scheduling goal. Copyright © 2011 John Wiley & Sons, Ltd.

7.
Markov nets: probabilistic models for distributed and concurrent systems
For distributed systems, i.e., large complex networked systems, there is a drastic difference between a local view and knowledge of the system, and its global view. Distributed systems have local state and time, but do not possess global state and time in the usual sense. In this paper, motivated by the monitoring of distributed systems and in particular of telecommunications networks, we develop a generalization of Markov chains and hidden Markov models for distributed and concurrent systems. By a concurrent system, we mean a system in which components may evolve independently, with sparse synchronizations. We follow a so-called true concurrency approach, in which neither global state nor global time are available. Instead, we use only local states in combination with a partial order model of time. Our basic mathematical tool is that of Petri net unfoldings.

8.
A family of simple models for database systems is defined, where a system is composed of a scheduler, a data manager and several user transactions. The basic correctness criterion for such systems is taken to be consistency preservation. The central notion of the paper is that of a general purpose scheduler, a database system scheduler that is blind to the semantics of transactions and integrity assertions. Consistency preservation of a database system is shown to be precisely equivalent to a restriction on the output of a general purpose scheduler GPS, called weak serializability. That is, any database system using GPS will preserve consistency iff the output of GPS is always weakly serializable. This establishes a tight connection between database system correctness and scheduler behavior. Also, aspects of restart facilities and predeclared data accesses are discussed. Finally, several examples of schedulers correct with respect to weak serializability are presented.
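As an illustration of checking a scheduler's output, here is a sketch of the standard conflict-serializability test via a precedence graph. Note that this is a stricter, syntactic criterion than the semantic weak serializability defined in the paper, whose details are not reproduced here:

```python
def is_conflict_serializable(schedule):
    """Test a schedule of (txn, op, item) steps, op in {'r', 'w'},
    for conflict serializability: build the precedence graph from
    conflicting operation pairs and check it has no cycle."""
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            # two ops conflict if they are from different transactions,
            # touch the same item, and at least one is a write
            if t1 != t2 and x1 == x2 and 'w' in (op1, op2):
                edges.add((t1, t2))

    def reachable(src, dst, seen):
        """Depth-first search along precedence edges."""
        if src == dst:
            return True
        seen.add(src)
        return any(reachable(b, dst, seen)
                   for a, b in edges if a == src and b not in seen)

    # a cycle exists iff some edge's target reaches back to its source
    return not any(reachable(b, a, set()) for a, b in edges)
```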

9.
If the system under test has multiple interfaces/ports and these are physically distributed then in testing we place a tester at each port. If these testers cannot directly communicate with one another and there is no global clock then we are testing in the distributed test architecture. If the distributed test architecture is used then there may be input sequences that cannot be applied in testing without introducing controllability problems. Additionally, observability problems can allow fault masking. In this paper we consider the situation in which the testers can apply a status message: an input that causes the system under test to identify its current state. We show how such a status message can be used in order to overcome controllability and observability problems.

10.
The problem of entity resolution over probabilistic data (ERPD) arises in many applications that have to deal with probabilistic data. In many of these applications, probabilistic data is distributed among a number of nodes. The simple, centralized approach to the ERPD problem does not scale well as large amounts of data need to be sent to a central node. In this paper, we present FD (Fully Distributed), a decentralized algorithm for dealing with the ERPD problem over distributed data, with the goal of minimizing bandwidth usage and reducing processing time. FD is completely distributed and does not depend on the existence of certain nodes. We validated FD through implementation over a 75-node cluster and simulation using the PeerSim simulator. We used both synthetic and real-world data in our experiments. Our performance evaluation shows that FD can achieve major performance gains in terms of bandwidth usage and response time.

11.
This paper reports on the design of a novel two-stage mechanism, based on strictly proper scoring rules, that allows a centre to acquire a costly forecast of a future event (such as a meteorological phenomenon) or a probabilistic estimate of a specific parameter (such as the quality of an expected service), with a specified minimum precision, from one or more agents. In the first stage, the centre elicits the agents' true costs and identifies the agent that can provide an estimate of the specified precision at the lowest cost. Then, in the second stage, the centre uses an appropriately scaled strictly proper scoring rule to incentivise this agent to generate the estimate with the required precision, and to truthfully report it. In particular, this is the first mechanism that can be applied to settings in which the centre has no knowledge of the actual costs involved in generating an agent's estimates and also has no external means of evaluating the quality and accuracy of the estimates it receives. En route to this mechanism, we first consider a setting in which any single agent can provide an estimate of the required precision, and the centre can evaluate this estimate by comparing it with the outcome observed at a later stage. This mechanism is then extended so that it can be applied in a setting where the agents' different capabilities are reflected in the maximum precision of the estimates that they can provide, potentially requiring the centre to select multiple agents and combine their individual results in order to obtain an estimate of the required precision. For all three mechanisms (the original and the two extensions), we prove their economic properties (i.e. incentive compatibility and individual rationality) and then perform a number of numerical simulations. For the single-agent mechanism we compare the quadratic, spherical and logarithmic scoring rules with a parametric family of scoring rules.
We show that although the logarithmic scoring rule minimises both the mean and variance of the centre's total payments, using this rule means that an agent may face an unbounded penalty if it provides an estimate of extremely poor quality. We show that this is not the case for the parametric family, and thus we suggest that the parametric scoring rule is the best candidate in our setting. Furthermore, we show that the ‘multiple agent’ extension describes a family of possible approaches to selecting agents in the first stage of our mechanism, and we show empirically and prove analytically that there is one approach that dominates all others. Finally, we compare our mechanism to the peer prediction mechanism introduced by Miller et al. (2007) [29] and show that the centre's total expected payment is the same in both mechanisms (and is equal to the total expected payment in the case that the estimates can be compared to the actual outcome), while the variance in these payments is significantly reduced within our mechanism.
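The three classical scoring rules compared above can be written down directly for a binary forecast. This is a generic sketch of the rules themselves, not the paper's scaled two-stage mechanism; note how the logarithmic rule's penalty is unbounded as the probability assigned to the realised outcome approaches zero:

```python
import math

def scores(q, outcome):
    """Strictly proper scoring rules for a binary forecast.
    q is the reported probability of the event; outcome is 0 or 1."""
    p = q if outcome == 1 else 1.0 - q   # probability assigned to what happened
    sq = q * q + (1 - q) * (1 - q)
    return {
        "quadratic": 2 * p - sq,                 # Brier-style score
        "spherical": p / math.sqrt(sq),
        "logarithmic": math.log(p),              # unbounded penalty as p -> 0
    }
```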

12.
Using the probabilistic methods outlined in this paper, a robot can learn to recognize its own motor-controlled body parts, or their mirror reflections, without prior knowledge of their appearance. For each item in its visual field, the robot calculates the likelihoods of each of three dynamic Bayesian models, corresponding to the categories of “self”, “animate other”, or “inanimate”. Each model fully incorporates the object’s entire motion history and the robot’s whole motor history in constant update time, via the forward algorithm. The parameters for each model are learned in an unsupervised fashion as the robot experiments with its arm over a period of four minutes. The robot demonstrated robust recognition of its mirror image, while classifying the nearby experimenter as “animate other”, across 20 experiments. Adversarial experiments, in which a subject mirrored the robot’s motion, showed that as long as the robot had seen the subject move for as little as 5 seconds before mirroring, the evidence was “remembered” across a full minute of mimicry.
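The constant-time-per-observation update via the forward algorithm can be sketched for a generic discrete-state model. The matrices below, and the convention of applying the transition before the emission, are illustrative assumptions, not the paper's exact dynamic Bayesian models:

```python
def forward_step(alpha, trans, emit, obs):
    """One constant-time forward-algorithm update: fold a new
    observation into the running state distribution alpha.
    trans[i][j] = P(next=j | cur=i); emit[j][obs] = P(obs | state=j)."""
    n = len(alpha)
    new = [emit[j][obs] * sum(alpha[i] * trans[i][j] for i in range(n))
           for j in range(n)]
    z = sum(new)                  # likelihood of obs given the history so far
    return [a / z for a in new], z

def sequence_likelihood(prior, trans, emit, observations):
    """Likelihood of a whole observation sequence, updated step by step,
    so the full history is incorporated in constant time per step."""
    alpha, total = list(prior), 1.0
    for o in observations:
        alpha, z = forward_step(alpha, trans, emit, o)
        total *= z
    return total
```

Comparing this likelihood across the three models ("self", "animate other", "inanimate") then yields the classification.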

13.
Some systems interact with their environment at physically distributed interfaces called ports, and we separately observe sequences of inputs and outputs at each port. As a result we cannot reconstruct the global sequence that occurred, and this reduces our ability to distinguish different systems in testing or in use. In this paper we explore notions of conformance for an input output transition system that has multiple ports, adapting the widely used ioco implementation relation to this situation. We consider two different scenarios. In the first scenario the agents at the different ports are entirely independent. Alternatively, it may be feasible for some external agent to receive information from more than one of the agents at the ports of the system, these local behaviours potentially being brought together; here we require a stronger implementation relation. We define implementation relations for these scenarios and prove that in the case of a single-port system the new implementation relations are equivalent to ioco. In addition, we define what it means for a test case to be controllable and give an algorithm that decides whether this condition holds. We give a test generation algorithm to produce sound and complete test suites. Finally, we study two implementation relations to deal with partially specified systems.

14.
This paper deals with the isolation of failed components in a system. Each component can be affected in a random way by failures, and the state of a component or a subsystem is detected using tests. The goal of this paper is to exploit built-in test techniques and available knowledge to generate the sequence of tests required to quickly locate all the components responsible for system failure. We consider an operational system with a series structure for which we know each test's cost and the conditional probability that a component is responsible for the failure. Various diagnosis strategies are analyzed. The treated algorithms rely on probabilistic analysis of the system.

15.
We prove that probabilistic bisimilarity is decidable over probabilistic extensions of BPA and BPP processes. For normed subclasses of probabilistic BPA and BPP processes we obtain polynomial-time algorithms. Further, we show that probabilistic bisimilarity between probabilistic pushdown automata and finite-state systems is decidable in exponential time. If the number of control states in the PDA is bounded by a fixed constant, then the algorithm needs only polynomial time. The work has been supported by the research centre Institute for Theoretical Computer Science (ITI), project No. 1M0545.

16.
The multiprocessor Fixed-Job Priority (FJP) scheduling of real-time systems is studied. An important property for schedulability analysis, predictability (regardless of execution times), is studied for heterogeneous multiprocessor platforms. Our main contribution is to show that any FJP scheduler is predictable on unrelated platforms. A convenient consequence is that any FJP scheduler is predictable on uniform multiprocessors.

17.
Skyline computation is an important operation in multi-criteria decision making, data mining, and database visualization. As objects move, the uncertainty of their position information makes the local dominance relationships between data points unstable, which in turn affects the global probabilistic skyline set. This paper studies continuous probabilistic skyline query updates over uncertain moving objects in a distributed environment and proposes CDPS-UMO, an efficient continuous probabilistic skyline query algorithm that reduces communication overhead. The algorithm tracks changes to the local probabilistic skyline points at each local node; an effective ordering method and a feedback mechanism are proposed that greatly reduce communication overhead and computation cost. A baseline algorithm, naive, is also proposed and compared experimentally with CDPS-UMO; the experimental results demonstrate the effectiveness of the algorithm.
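A centralized sketch of the underlying probabilistic-skyline computation may clarify the setting: each uncertain object is a point with an existence probability, and its skyline probability is the chance that it exists while no dominating object does. The distributed tracking and feedback machinery of CDPS-UMO is not reproduced here:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every dimension and strictly
    better in at least one (smaller values preferred)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def probabilistic_skyline(objects, threshold):
    """objects: list of (point, existence probability) pairs with
    distinct points. Returns the objects whose skyline probability
    meets the threshold, with that probability."""
    result = []
    for pt, prob in objects:
        sky = prob
        for qt, qprob in objects:
            if qt != pt and dominates(qt, pt):
                sky *= 1.0 - qprob   # survive only if the dominator is absent
        if sky >= threshold:
            result.append((pt, sky))
    return result
```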

18.
Tracking frequent items (also called heavy hitters) is one of the most fundamental queries over real-time data, due to its wide applications such as logistics monitoring and association-rule-based analysis. Recently, with the growing popularity of the Internet of Things (IoT) and pervasive computing, a large amount of real-time data is usually collected from multiple sources in a distributed environment. Unfortunately, the data collected from each source is often uncertain due to various factors: imprecise readings, data integration from multiple sources (or versions), transmission errors, etc. In addition, due to network delay and the limited economic budget associated with large-scale data communication over a distributed network, an essential problem is to track the global frequent items from all distributed uncertain data sites with minimum communication cost. In this paper, we focus on the problem of tracking distributed probabilistic frequent items (TDPF). Specifically, given k distributed sites S = {S_1, …, S_k}, each of which is associated with an uncertain database D_i of size n_i, a centralized server (or coordinator) H, a minimum support ratio r, and a probabilistic threshold t, we are required to find, with minimum communication cost, the set of items X satisfying Pr(sup(X) ≥ r × N) > t, where sup(X) is a random variable describing the support of X and N = n_1 + … + n_k. In order to reduce the communication cost, we propose a local-threshold-based deterministic algorithm and a sketch-based sampling approximate algorithm, respectively. The effectiveness and efficiency of the proposed algorithms are verified with extensive experiments on both real and synthetic uncertain datasets.
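The per-item quantity Pr(sup(X) ≥ r × N) can be computed exactly when each candidate occurrence of X exists independently with a known probability, via a Poisson-binomial dynamic program. This local computation is a sketch of the problem statement only and omits the paper's communication-reducing protocols:

```python
import math

def prob_support_at_least(probs, k):
    """Probability that at least k of the independent occurrences
    exist, given their existence probabilities (Poisson-binomial
    tail via dynamic programming)."""
    # dp[j] = probability that exactly j occurrences exist so far
    dp = [1.0] + [0.0] * len(probs)
    seen = 0
    for p in probs:
        seen += 1
        for j in range(seen, 0, -1):   # update in place, high j first
            dp[j] = dp[j] * (1 - p) + dp[j - 1] * p
        dp[0] *= 1 - p
    return sum(dp[k:])

def is_probabilistic_frequent(probs, r, n_total, t):
    """Check Pr(sup(X) >= r * N) > t, as in the problem statement."""
    k = math.ceil(r * n_total)
    return prob_support_at_least(probs, k) > t
```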

19.
The generation of test data for state-based specifications is a computationally expensive process. This problem is magnified if we consider that time constraints have to be taken into account to govern the transitions of the studied system. The main goal of this paper is to introduce a complete methodology, supported by tools, that addresses this issue by representing the test data generation problem as an optimization problem. We use heuristics to generate test cases. In order to assess the suitability of our approach we consider two different case studies: a communication protocol and the scientific application BIPS3D. We give details concerning how the test case generation problem can be presented as a search problem and automated. Genetic algorithms (GAs) and random search are used to generate test data and evaluate the approach. GAs outperform random search and seem to scale well as the problem size increases. It is worth mentioning that we use a very simple fitness function that can be easily adapted for use with other evolutionary search techniques.
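A minimal genetic algorithm of the kind used in such search-based test generation can be sketched as follows. The bit-string encoding and the simple sum-of-bits fitness in the test are illustrative stand-ins for the paper's representation and fitness function:

```python
import random

def genetic_search(fitness, length, pop_size=20, generations=50, seed=0):
    """Minimal GA: bit-string individuals, tournament selection,
    one-point crossover, and bit-flip mutation. Returns the fittest
    individual of the final population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                        # tournament of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)
            child = p1[:cut] + p2[cut:]    # one-point crossover
            i = rng.randrange(length)
            child[i] ^= 1                  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

Swapping in random search for comparison amounts to evaluating the same fitness on uniformly sampled bit strings, which is the baseline the paper's GAs outperform.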

20.
