Similar Documents
20 similar documents found (search time: 718 ms)
1.
Planning Proofs of Equations in CCS
Most efforts to automate the formal verification of communicating systems have centred around finite-state systems (FSSs). However, FSSs are incapable of modelling many practical communicating systems, including a novel class of problems which we call VIPSs: value-passing, infinite-state, parameterised systems. Existing approaches using model checking over FSSs are insufficient for VIPSs, owing to their inability both to reason with and about domain-specific theories, and to cope with systems having an unbounded or arbitrary state space. We use the Calculus of Communicating Systems (CCS) (Communication and Concurrency. London: Prentice Hall, 1989) to express and specify VIPSs. We take program verification to be proving the program and its intended specification equivalent, and we use the laws of CCS to conduct the verification task. This approach allows us to study communicating systems and the data such systems communicate. Automating theorem proving in this context is an extremely difficult task. We provide automated methods for CCS analysis that are applicable to both FSSs and VIPSs. Adding these methods to the CLAM proof planner (Lecture Notes in Artificial Intelligence, Vol. 449, Springer, 1990, pp. 647, 648), we have implemented an automated verification planner capable of dealing with problems that previously required human interaction. This paper describes these methods, explains why they work, and provides a short summary of experimental results.

2.
The formal verification of a Spiking Neural P System (SN P system, for short) designed to solve a given problem is usually a hard task. Basically, the verification process consists of searching for invariant formulae which, once their validity is proved, show the right answer to the problem. Even though there is no general methodology for verifying SN P systems, in (Păun et al., Int J Found Comput Sci 17(4):975–1002, 2006) a tool based on the transition diagram of the P system was developed to help the researcher in the search for invariant formulae. In this paper we present a software tool that generates the transition diagram of an SN P system automatically, so it can be considered an assistant for the formal verification of such computational devices.

3.
There are two major problems in the application of the conventional block BiCGSTAB method to the O(a)-improved Wilson-Dirac equation with multiple right-hand sides. One is the deviation between the true and the recursive residuals. The other is the convergence failure observed at smaller quark masses as the number of right-hand sides is enlarged. The block BiCGGR algorithm, recently proposed by the authors, solves the former problem. In this article we show that a preconditioning technique allows us to improve the convergence behaviour for an increasing number of right-hand sides.
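For orientation only, the sketch below shows how a preconditioner is supplied to a standard (non-block) BiCGSTAB solver in SciPy, using an incomplete-LU factorization on a random sparse test matrix; it is a generic illustration of preconditioned BiCGSTAB, not the block BiCGGR algorithm or the Wilson-Dirac operator discussed in the paper, and the matrix and right-hand side are invented.

```python
# Illustrative only: preconditioned BiCGSTAB with SciPy on a made-up test system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Random sparse system made strongly diagonally dominant so ILU succeeds.
A = (sp.rand(n, n, density=0.01, format="csr", random_state=0) + n * sp.identity(n)).tocsc()
b = np.ones(n)

# Incomplete-LU factorization used as a preconditioner M approximating A^{-1}.
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.bicgstab(A, b, M=M)   # info == 0 signals convergence
print("converged" if info == 0 else f"info = {info}")
```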

4.
It is often the case that after a scheduling problem has been solved, some small changes occur that make the solution of the original problem no longer valid. Solving the new problem from scratch can result in a schedule that is very different from the original one. In applications such as university course timetabling or flight scheduling, one is interested in a solution that requires minimal changes for the users. The present paper considers the minimal perturbation problem. It is motivated by scenarios in which a Constraint Satisfaction Problem (CSP) is subject to changes, in particular the case where some of the constraints are changed after a solution has been obtained. The goal is to find a solution to the changed problem that is as similar as possible to the previous solution, i.e. one that includes minimal perturbations. Previous studies proposed a formal model for this problem (Barták et al. 2004), a best-first search algorithm (Ross et al. 2000), complexity bounds (Hebrard et al. 2005), and branch-and-bound based algorithms (Barták et al. 2004; Hebrard et al. 2005). The present paper proposes a new approach for solving the minimal perturbation problem, which interleaves constraint optimization and constraint satisfaction techniques. Our experimental results demonstrate the advantage of the proposed algorithm over former algorithms; experiments were performed both on random CSPs and on random instances of the Meeting Scheduling Problem.
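As a minimal, purely illustrative sketch of the objective being optimized (not the interleaved optimization/satisfaction algorithm proposed in the paper), the following brute-force search enumerates assignments of the changed CSP and returns one with the fewest variables changed relative to the previous solution; the function names and the toy instance are made up.

```python
# Brute-force sketch of the minimal perturbation objective (illustrative only).
from itertools import product

def min_perturbation_solution(variables, domains, constraints, old_solution):
    """variables: list of names; domains: dict name -> iterable of values;
    constraints: list of predicates over a full assignment dict;
    old_solution: dict name -> value from the original problem."""
    best, best_dist = None, None
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            # Number of perturbations = variables whose value differs from before.
            dist = sum(assignment[v] != old_solution.get(v) for v in variables)
            if best is None or dist < best_dist:
                best, best_dist = assignment, dist
    return best, best_dist

# Toy example: a constraint changed so that x and y may no longer be equal.
variables = ["x", "y"]
domains = {"x": [1, 2, 3], "y": [1, 2, 3]}
constraints = [lambda a: a["x"] != a["y"]]
print(min_perturbation_solution(variables, domains, constraints, {"x": 2, "y": 2}))
```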

5.
This paper describes the development steps and core ideas used by the USP Farmers herding team, which participated in the 2010 edition of the Multi-Agent Programming Contest (MAPC 2010). This is the third year in which the competitors must design a team of herding agents whose global goal is to lead the maximum number of cows to their own corral. As this is a very complex task that requires coordination of the team, we developed the individual agents using the Jason (Bordini et al. 2007) interpreter for AgentSpeak(L) (Rao 1996). The coordination strategy was defined using the MOISE+ (Hübner et al. 2002, 2007) organizational model. We also used the idea of artifacts (Ricci et al. 2007) to develop global services available to all the agents. Finally, since some purely procedural processing for this contest is best developed at a lower abstraction level (Hübner et al. 2008), some calculations and pre-defined global decisions were implemented as Java classes.

6.
We survey the current state of knowledge on the circuit complexity of regular languages and we prove that regular languages that are in AC^0 and ACC^0 are all computable by almost linear size circuits, extending the result of Chandra et al. (J. Comput. Syst. Sci. 30:222–234, 1985). As a consequence we obtain that in order to separate ACC^0 from NC^1 it suffices to prove, for some ε>0, an Ω(n^{1+ε}) lower bound on the size of ACC^0 circuits computing certain NC^1-complete functions. Partially supported by grant GA ČR 201/07/P276, project No. 1M0021620808 of MŠMT ČR and Institutional Research Plan No. AV0Z10190503.

7.
Semi-supervised graph clustering: a kernel approach
Semi-supervised clustering algorithms aim to improve clustering results using limited supervision. The supervision is generally given as pairwise constraints; such constraints are natural for graphs, yet most semi-supervised clustering algorithms are designed for data represented as vectors. In this paper, we unify vector-based and graph-based approaches. We first show that a recently proposed objective function for semi-supervised clustering based on Hidden Markov Random Fields, with squared Euclidean distance and a certain class of constraint penalty functions, can be expressed as a special case of the weighted kernel k-means objective (Dhillon et al., in Proceedings of the 10th International Conference on Knowledge Discovery and Data Mining, 2004a). A recent theoretical connection between weighted kernel k-means and several graph clustering objectives enables us to perform semi-supervised clustering of data given either as vectors or as a graph. For graph data, this result leads to algorithms for optimizing several new semi-supervised graph clustering objectives. For vector data, the kernel approach also enables us to find clusters with non-linear boundaries in the input data space. Furthermore, we show that recent work on spectral learning (Kamvar et al., in Proceedings of the 17th International Joint Conference on Artificial Intelligence, 2003) may be viewed as a special case of our formulation. We empirically show that our algorithm is able to outperform current state-of-the-art semi-supervised algorithms on both vector-based and graph-based data sets.
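A minimal sketch of the underlying construction, under simplifying assumptions: an ordinary (unweighted) kernel k-means in which must-link and cannot-link constraints are folded into the kernel matrix as additive rewards and penalties. The paper's algorithm uses a weighted formulation with specific constraint penalty functions, neither of which is reproduced here; the function name and default values are invented.

```python
# Sketch: kernel k-means with pairwise constraints folded into the kernel matrix.
import numpy as np

def constrained_kernel_kmeans(K, k, must_link, cannot_link, penalty=1.0,
                              n_iter=50, seed=0):
    """K: (n, n) kernel matrix; must_link / cannot_link: lists of index pairs."""
    K = K.copy()
    for i, j in must_link:            # reward grouping i and j together
        K[i, j] += penalty
        K[j, i] += penalty
    for i, j in cannot_link:          # penalize grouping i and j together
        K[i, j] -= penalty
        K[j, i] -= penalty

    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.where(labels == c)[0]
            if len(idx) == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x_i) - mean_c||^2 in feature space, up to the constant K_ii.
            dist[:, c] = -2.0 * K[:, idx].mean(axis=1) + K[np.ix_(idx, idx)].mean()
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```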

8.
Weakly dicomplemented lattices are bounded lattices equipped with two unary operations that encode a negation on concepts. They were introduced to capture the equational theory of concept algebras (Wille 2000; Kwuida 2004), and they generalize Boolean algebras. Concept algebras are concept lattices, thus complete lattices, with a weak negation and a weak opposition. A special case of the representation problem for weakly dicomplemented lattices, posed in Kwuida (2004), is whether complete weakly dicomplemented lattices are isomorphic to concept algebras. In this contribution we give a negative answer to this question (Theorem 4). We also provide a new proof of a well-known result due to M.H. Stone (Trans Am Math Soc 40:37–111, 1936), saying that each Boolean algebra is a field of sets (Corollary 4). Before these results, we prove that the boundedness condition in the initial definition of weakly dicomplemented lattices (Definition 1) is superfluous (Theorem 1; see also Kwuida (2009)).

9.
In this paper we elaborate on the handling of the ramification problem in the setting of temporal databases. Starting with the observation that solutions from the literature on reasoning about action are inadequate for addressing the ramification problem, in our prior work (Papadakis and Plexousakis in Int. J. Artif. Intel., 12(3):315, 2003) we presented a solution based on an extension of the situation calculus and the work of McCain and Turner. We have also dealt with the ramification problem in spatial databases (Papadakis and Christodoulou in Expert Syst. Appl. 37:1374, 2010). In this paper, we present a tool that connects the theoretical results to practical considerations by producing the appropriate SQL commands to address the ramification problem. (A preliminary version of this work appears in Papadakis et al., 17th Inter. Symposium on Methodologies for Intelligent Systems, pp. 381–388, 2008.)

10.
In previous works (Nakao et al., Reliab. Comput., 9(5):359–372, 2003; Watanabe et al., J. Math. Fluid Mech., 6(1):1–20, 2004), the authors considered a numerical verification method for solutions of two-dimensional heat convection problems known as the Rayleigh-Bénard problem. In the present paper, to make the arguments self-contained, we first summarize these results, including the basic formulation of the problem, with numerical examples. Next, we give a method to verify the bifurcation point itself, which is important information for clarifying the global bifurcation structure, and show a numerical example. Finally, an extension to the three-dimensional case is described.

11.
We study the on-line minimum weighted bipartite matching problem in arbitrary metric spaces. Here, n not necessarily disjoint points of a metric space M are given, and are to be matched on-line with n points of M revealed one by one. The cost of a matching is the sum of the distances of the matched points, and the goal is to find or approximate its minimum. The competitive ratio of the deterministic problem is known to be Θ(n), see (Kalyanasundaram, B., Pruhs, K. in J. Algorithms 14(3):478–488, 1993) and (Khuller, S., et al. in Theor. Comput. Sci. 127(2):255–267, 1994). It was conjectured in (Kalyanasundaram, B., Pruhs, K. in Lecture Notes in Computer Science, vol. 1442, pp. 268–280, 1998) that a randomized algorithm may perform better against an oblivious adversary, namely with an expected competitive ratio Θ(log n). We prove a slightly weaker result by showing an o(log^3 n) upper bound on the expected competitive ratio. As an application, the same upper bound holds for the notoriously hard fire station problem, where M is the real line, see (Fuchs, B., et al. in Electronic Notes in Discrete Mathematics, vol. 13, 2003) and (Koutsoupias, E., Nanavati, A. in Lecture Notes in Computer Science, vol. 2909, pp. 179–191, 2004). The authors were partially supported by OTKA grants T034475 and T049398.

12.
We study web caching with request reordering. The goal is to maintain a cache of web documents so that a sequence of requests can be served at low cost. To improve cache hit rates, a limited reordering of requests is allowed. Feder et al. (Proceedings of the 13th ACM–SIAM Symposium on Discrete Algorithms, pp. 104–105, 2002), who recently introduced this problem, considered caches of size 1, i.e. a cache can store one document. They presented an offline algorithm based on dynamic programming as well as online algorithms that achieve constant factor competitive ratios. For arbitrary cache sizes, Feder et al. (Theor. Comput. Sci. 324:201–218, 2004) gave online strategies that have nearly optimal competitive ratios in several cost models.

13.
This article proposes several two-timescale simulation-based actor-critic algorithms for the solution of infinite-horizon Markov Decision Processes with finite state space under the average cost criterion. Two of the algorithms are for the compact (non-discrete) action setting, while the rest are for finite action spaces. On the slower timescale, all the algorithms perform a gradient search over the corresponding policy spaces using two different Simultaneous Perturbation Stochastic Approximation (SPSA) gradient estimates. On the faster timescale, the differential cost function corresponding to a given stationary policy is updated and an additional averaging is performed for enhanced performance. A proof of convergence to a locally optimal policy is presented. Next, we discuss a memory-efficient implementation that uses a feature-based representation of the state space and performs TD(0) learning along the faster timescale. The TD(0) algorithm does not follow an on-line sampling of states but is observed to do well in our setting. Numerical experiments on a problem of rate-based flow control are presented using the proposed algorithms. We consider here the model of a single bottleneck node in the continuous-time queueing framework. We show performance comparisons of our algorithms with the two-timescale actor-critic algorithms of Konda and Borkar (1999) and Bhatnagar and Kumar (2004). Our algorithms exhibit more than an order of magnitude better performance than those of Konda and Borkar (1999).
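As a minimal sketch of the kind of gradient estimate involved, the snippet below implements the generic two-sided SPSA estimator on a toy quadratic; it is not the paper's two-timescale actor-critic update, and the step sizes and cost function are invented for illustration.

```python
# Generic two-sided SPSA gradient estimate (illustrative; not the paper's
# two-timescale actor-critic scheme).
import numpy as np

def spsa_gradient(cost, theta, c=0.1, seed=None):
    """Estimate grad cost(theta) from two perturbed cost evaluations."""
    rng = np.random.default_rng(seed)
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1 perturbation
    plus = cost(theta + c * delta)
    minus = cost(theta - c * delta)
    return (plus - minus) / (2.0 * c * delta)            # componentwise estimate

# Toy usage: gradient descent on a quadratic using the SPSA estimate.
theta = np.array([2.0, -3.0])
for _ in range(200):
    g = spsa_gradient(lambda x: float(np.sum(x ** 2)), theta, c=0.1)
    theta -= 0.05 * g
print(theta)   # approaches the minimizer [0, 0]
```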

14.
Embedded systems are now widely used in ubiquitous computing environments, including digital set-top boxes, mobile phones, and USN (Ubiquitous Sensor Networks). The significance of security has been growing, since it must necessarily be built into all of these systems. Up until now, many researchers have made efforts to verify the integrity of application binaries downloaded to embedded systems. Existing approaches are organized into hardware methods and software-based methods; this research takes the software-based approach. From the software perspective, unlike the existing papers (Seshadri et al., Proc. the IEEE symposium on security and privacy, 2004; Seshadri et al., Proc. the symposium on operating systems principals, 2005) based on the standardized model (TTAS.KO-11.0054, 2006) published in Korea, no extra verifier is required and the verification function is conducted within the target system itself. Contrary to the previous schemes (Jung et al., 2008; Lee et al., LNCS, vol. 4808, pp. 346–355, 2007), verification results are stored in a single validation check bit, instead of storing a signature value for each application binary file in the i-node structure, for the purpose of reducing run-time execution overhead. Consequently, the proposed scheme is more efficient because it dramatically reduces storage-space overhead, and in terms of computation it performs one hash computation on the initial execution and thereafter compares only the single validation check bit, instead of running signature and hash algorithms for every application binary. Furthermore, in cases where there are frequent changes in the i-node structure or file data, depending on how the scheme is applied, it can provide far more effective verification performance compared to the previous schemes.
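A minimal user-space sketch of the verification idea follows: one hash on first execution, then only a cached validation bit is checked, and any modification clears the bit. The dictionaries, the SHA-256 choice, and the function names are illustrative stand-ins; the actual scheme keeps the bit in the kernel's i-node structure and follows the cited standard.

```python
# Illustrative sketch of "hash once, then check a single validation bit".
import hashlib

TRUSTED_HASHES = {}     # path -> expected SHA-256 digest (assumed provisioned at install)
VALIDATION_BIT = {}     # path -> 1 after a successful first-execution check

def verify_before_exec(path):
    if VALIDATION_BIT.get(path) == 1:        # fast path: only the check bit is read
        return True
    with open(path, "rb") as f:              # slow path: one hash on first execution
        digest = hashlib.sha256(f.read()).hexdigest()
    ok = digest == TRUSTED_HASHES.get(path)
    VALIDATION_BIT[path] = 1 if ok else 0
    return ok

def on_file_modified(path):
    VALIDATION_BIT[path] = 0                 # any write invalidates the cached result
```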

15.
Computational Fluid Dynamics (CFD) is widely and successfully used in standard design processes for microfluidic μTAS devices. But for an increasing number of advanced applications involving the dynamics of small groups of beads, blood cells or biopolymers in microcapillaries or sorting devices, novel simulation techniques are called for. Representing moving rigid or flexible extended dispersed objects poses serious difficulties for traditional CFD schemes. Meshless, particle-based simulation approaches, such as Dissipative Particle Dynamics (DPD), are suited for addressing these complicated flow problems with sufficient numerical efficiency. Objects can conveniently be represented as compound objects embedded seamlessly within an explicit model for the solvent. However, the application of DPD and related methods to realistic problems, in particular the design of microfluidics systems, is not well developed in general. With this work, we demonstrate how the method appears when used in practice, in the process of designing and simulating a specific microfluidic device, a microfluidic chamber representing a prototypical bead-based immunoassay developed in our laboratory (Glatzel et al. 2006a, b; Riegger et al. 2006). Thomas Steiner: born Glatzel.

16.
The steel mill slab design problem from the CSPLIB is a combinatorial optimization problem motivated by an application from the steel industry. It has been widely studied in the constraint programming community, and several methods have been proposed to solve it. A steel mill slab library was created which contains 380 instances. A closely related bin packing problem called the multiple knapsack problem with color constraints, which originated from the same industrial problem, was discussed in the integer programming community; in particular, a simple integer program for this problem was given by Forrest et al. (INFORMS J Comput 18:129–134, 2006). The aim of this paper is to bring these different studies together. Moreover, we adapt the model of Forrest et al. (INFORMS J Comput 18:129–134, 2006) to the steel mill slab design problem. Using this model and a state-of-the-art integer program solver, all instances of the steel mill slab library can be solved efficiently to optimality. We thereby improved the solution values of 76 instances compared to previous results (Schaus et al., Constraints 16:125–147, 2010). Finally, we consider a recently introduced variant of the steel mill slab design problem in which, among all solutions that minimize the leftover, one is interested in a solution that requires a minimum number of slabs. For that variant we introduce two approaches and solve all instances of the steel mill slab library with this slightly changed objective function to optimality.

17.
In this paper, we study two variants of the bin packing and covering problems called Maximum Resource Bin Packing (MRBP) and Lazy Bin Covering (LBC), and present new approximation algorithms for them. For the offline MRBP problem, the previous best known approximation ratio is 6/5 (=1.2), achieved by the classical First-Fit-Increasing (FFI) algorithm (Boyar et al. in Theor. Comput. Sci. 362(1–3):127–139, 2006). In this paper, we give a new FFI-type algorithm with an approximation ratio of 80/71 (≈1.12676). For the offline LBC problem, it has been shown in Lin et al. (COCOON, pp. 340–349, 2006) that the classical First-Fit-Decreasing (FFD) algorithm achieves an approximation ratio of 71/60 (≈1.18333). In this paper, we present a new FFD-type algorithm with an approximation ratio of 17/15 (≈1.13333). Our algorithms are based on a pattern-based technique and a number of other observations. They run in near linear time (i.e., O(n log n)), and therefore are practical.
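For reference, the following is a sketch of the classical First-Fit-Decreasing heuristic for ordinary bin packing, which the FFI/FFD-type algorithms above build on; the paper's pattern-based refinements for MRBP and LBC are not reproduced, and the item sizes are made up.

```python
# Classical First-Fit-Decreasing for standard bin packing (illustrative baseline).
def first_fit_decreasing(items, capacity=1.0):
    bins = []                                  # each bin is a list of item sizes
    for size in sorted(items, reverse=True):   # place largest items first
        for b in bins:
            if sum(b) + size <= capacity:      # first bin with enough room
                b.append(size)
                break
        else:
            bins.append([size])                # no bin fits: open a new one
    return bins

print(first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5]))
```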

18.
We consider the multivariate interlace polynomial introduced by Courcelle (Electron. J. Comb. 15(1), 2008), which generalizes several interlace polynomials defined by Arratia, Bollobás, and Sorkin (J. Comb. Theory Ser. B 92(2):199–233, 2004) and by Aigner and van der Holst (Linear Algebra Appl., 2004). We present an algorithm to evaluate the multivariate interlace polynomial of a graph with n vertices given a tree decomposition of the graph of width k. The best previously known result (Courcelle, Electron. J. Comb. 15(1), 2008) employs a general logical framework and leads to an algorithm with running time f(k)·n, where f(k) is doubly exponential in k. Analyzing the GF(2)-rank of adjacency matrices in the context of tree decompositions, we give a faster and more direct algorithm. Our algorithm uses 2^{3k^2+O(k)}·n arithmetic operations and can be efficiently implemented in parallel.
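As a small illustration of the quantity driving the analysis, the sketch below computes the rank of a 0/1 adjacency matrix over GF(2) by Gaussian elimination; the dynamic programming over the tree decomposition itself is not shown, and the example graph is arbitrary.

```python
# Rank of a 0/1 adjacency matrix over GF(2) via Gaussian elimination.
import numpy as np

def gf2_rank(A):
    M = (np.array(A, dtype=np.uint8) & 1).copy()
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row into place
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # eliminate over GF(2) (XOR)
        rank += 1
    return rank

# Adjacency matrix of a 4-cycle: its GF(2)-rank is 2.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(gf2_rank(C4))
```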

19.
Consider the dynamic program h(n) = min_{1≤j≤n} a(n,j), where a(n,j) is some formula that may (online) or may not (offline) depend on the previously computed h(i), for i<n. The goal is to compute all h(n), for 1≤n≤N. It is well known that, if a(n,j) satisfy the Monge property, then the SMAWK algorithm (Aggarwal et al., Algorithmica 2(1):195–208, 1987) can solve the offline problem in O(N) time; a Θ(N) speedup over the naive algorithm. In this paper we extend this speedup to the online case, that is, to compute h(n) in the order n=1,2,…,N when (i) we do not know the values of a(n′,j) for n′>n before h(n) has been computed and (ii) we do not know the problem size N in advance. We show that if a(n,j) satisfy a stronger, but sometimes still natural, property than the Monge one, then each h(n) can be computed in online fashion in O(1) amortized time. This maintains the speedup online, in the sense that the total time to compute all h(n) is O(N). We also show how to compute each h(n) in the worst case O(log N) time, while maintaining the amortized time bound. For a(n,j) satisfying our stronger property, our algorithm is also simpler than the standard SMAWK algorithm for solving the offline case. We illustrate our technique on two examples from the literature; the first is the D-median problem on a line, and the second comes from mobile wireless paging. The research of the first author was partially supported by the NSF program award CNS-0626606; the research of the second and third authors was partially supported by Hong Kong RGC CERG grant HKUST6312/04E.
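To fix notation, the sketch below evaluates the recurrence naively in O(N^2) time on a toy segmentation-style cost; the facility opening cost F and the point coordinates are invented for illustration, and the paper's actual contribution, computing each h(n) in O(1) amortized time online when a(n,j) satisfies a property stronger than Monge, is not attempted here.

```python
# Naive O(N^2) evaluation of h(n) = min_{1<=j<=n} a(n, j), online flavour:
# a(n, j, h) may consult previously computed values h[1..n-1].
def solve_dp(N, a):
    h = {}
    for n in range(1, N + 1):
        h[n] = min(a(n, j, h) for j in range(1, n + 1))
    return h

# Toy cost: serve points j..n from a facility placed at p_j (opening cost F),
# after optimally handling points 1..j-1.
p = [0, 1, 3, 6, 10]                   # p[j-1] is the j-th point on the line
F = 2
a = lambda n, j, h: (h.get(j - 1, 0) + F
                     + sum(abs(p[i - 1] - p[j - 1]) for i in range(j, n + 1)))
print(solve_dp(len(p), a))
```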

20.
This paper considers a family of spatially discrete approximations, including boundary treatment, to initial boundary value problems in evolving bounded domains. The presented method is based on the Cartesian grid embedded Finite-Difference method, which was initially introduced by Abarbanel and Ditkowski (ICASE Report No. 96-8, 1996; and J. Comput. Phys. 133(2), 1997) and Ditkowski (Ph.D. thesis, Tel Aviv University, 1997) for initial boundary value problems on constant irregular domains. We perform a comprehensive theoretical analysis of the numerical issues that arise when dealing with domains whose boundaries evolve smoothly in the spatial domain as a function of time. In this class of problems the moving boundaries are impenetrable, with either Dirichlet or Neumann boundary conditions, and should not be confused with the class of moving interface problems such as multiple phase flow, solidification, and the Stefan problem. Unlike other similar works on this class of problems, the resulting method is not restricted to domains of up to 3-D, can achieve higher than 2nd-order accuracy both in time and space, and is strictly stable in semi-discrete settings. The strict stability property of the method also implies that the numerical solution remains consistent and valid for long integration times. A complete convergence analysis is carried out in semi-discrete settings, including a detailed analysis of the implementation for the diffusion equation. Numerical solutions of the diffusion equation using the method, at 2nd- and 4th-order accuracy, are carried out in one and two dimensions respectively, demonstrating the efficacy of the method. This research was supported by the Israel Science Foundation (grant No. 1362/04).
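For orientation only, the sketch below solves the 1-D diffusion equation with standard 2nd-order central differences in space and forward-Euler time stepping on a fixed interval with homogeneous Dirichlet boundaries, and compares against the exact decaying-sine solution; the embedded Cartesian grid treatment of smoothly moving boundaries that the paper develops is not reproduced, and all grid parameters are invented.

```python
# Baseline sketch: explicit finite differences for u_t = u_xx on a FIXED interval
# (2nd order in space, forward Euler in time), not the paper's moving-boundary method.
import numpy as np

nx, L, T = 101, 1.0, 0.1
dx = L / (nx - 1)
dt = 0.4 * dx * dx                       # respects the explicit stability limit dt <= dx^2 / 2
x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)                    # initial condition, zero at both boundaries

t = 0.0
while t < T:
    u[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    u[0] = u[-1] = 0.0                   # homogeneous Dirichlet boundary values
    t += dt

# Exact solution for this initial condition: exp(-pi^2 t) * sin(pi x).
print(np.max(np.abs(u - np.exp(-np.pi**2 * t) * np.sin(np.pi * x))))
```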
