Similar Documents
Found 20 similar documents (search time: 46 ms)
1.
We refine the complexity analysis of approximation problems by relating it to a new parameter called gap location. Many of the results obtained so far for approximations yield satisfactory analysis with respect to this refined parameter, but some known results (e.g., max k-colorability, max 3-dimensional matching and max not-all-equal 3sat) fall short of doing so. As a second contribution, our work fills the gap in these cases by presenting new reductions. Next, we present definitions and hardness results of new approximation versions of some NP-complete optimization problems. The problems we treat are vertex cover (for which we define a different optimization problem from the one treated in Papadimitriou & Yannakakis 1991), k-edge coloring, and set splitting.
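For readers meeting the parameter for the first time, a gap version of a maximization problem can be stated as a promise problem roughly as follows (our paraphrase, not the paper's exact definitions):

```latex
% Gap version of a maximization problem \Pi with gap [a, b], a < b:
% given an instance x, under the promise that one of the two cases holds,
\mathrm{Gap}_{[a,b]}\text{-}\Pi:\quad
\text{accept if } \mathrm{opt}(x) \ge b,
\qquad
\text{reject if } \mathrm{opt}(x) \le a .
```

The gap location is where the interval $[a,b]$ sits relative to the extreme value: for max k-colorability, for instance, a gap located at 1 corresponds to distinguishing instances where all constraints are satisfiable from those where at most a fraction $a < 1$ are.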

2.
The Muscadet theorem prover is a knowledge-based system able to prove theorems in some non-trivial mathematical domains. The knowledge bases contain some general deduction strategies based on natural deduction, mathematical knowledge and metaknowledge. Metarules build new rules, easily usable by the inference engine, from formal definitions. Mathematical knowledge may be general or specific to some particular field. Muscadet proved many theorems in set theory, mappings, relations, topology, geometry, and topological linear spaces. Some of the theorems were rather difficult. Muscadet is now intended to become an assistant for mathematicians in discrete geometry for cellular automata. In order to evaluate the difficulty of such a work, researchers were observed while proving some lemmas, and Muscadet was tested on easy ones. New methods have to be added to the knowledge base, such as reasoning by induction, but also new heuristics for splitting and reasoning by cases. It is also necessary to find good representations for some mathematical objects.
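As a minimal sketch of the rule-based deduction style such knowledge-based provers build on (the engine, rules and facts below are invented for illustration, not Muscadet's actual knowledge base), here is a tiny forward-chaining loop deriving the transitivity of subset inclusion:

```python
# Hypothetical sketch: forward chaining over ground rules of the form
# (premises, conclusion). Facts are tuples like ("subset", "A", "B").
def forward_chain(facts, rules):
    """Apply rules until no new fact can be derived; return the closure."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Transitivity of subset inclusion, instantiated as ground rules.
rules = [
    ((("subset", "A", "B"), ("subset", "B", "C")), ("subset", "A", "C")),
    ((("subset", "A", "C"), ("subset", "C", "D")), ("subset", "A", "D")),
]
facts = {("subset", "A", "B"), ("subset", "B", "C"), ("subset", "C", "D")}
closure = forward_chain(facts, rules)
assert ("subset", "A", "D") in closure
```

A real prover works with rule schemas and metarules rather than ground instances, but the saturation loop is the same shape.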

3.
We present in this paper some new language features and constructs that allow the joint synchronous/asynchronous programming of reactive applications, as well as their formal verification. We show that reactive applications may be dealt with from two points of view. First, from the chronological point of view, i.e., when reactions are instantaneous, generated by event occurrences in discrete time. Second, from the chronometrical point of view, when reactions have durations in dense time. This duality must be expressible in languages that allow a consistent programming of both synchronous and asynchronous features. The objective of mixing these dual approaches leads to modelling reactive systems as hybrid systems, dealing simultaneously with both discrete and continuous phenomena. Furthermore, this must be followed by some verification of the application's properties, with respect to its behavioural and quantitative features. We analyze several existing frameworks that meet these requirements, and propose our own approach based on the language Electre. Received December 1997 / Accepted in revised form September 1999

4.
Bounded model checking of software using SMT solvers instead of SAT solvers   (cited by 1; self-citations: 0; citations by others: 1)
C bounded model checking (cbmc) has proved to be a successful approach to automatic software analysis. The key idea is to (i) build a propositional formula whose models correspond to program traces (of bounded length) that violate some given property and (ii) use state-of-the-art SAT solvers to check the resulting formulae for satisfiability. In this paper, we propose a generalisation of the cbmc approach on the basis of an encoding into richer (but still decidable) theories than propositional logic. We show that our approach may lead to considerably more compact formulae than those obtained with cbmc. We have built a prototype implementation of our technique that uses a satisfiability modulo theories (SMT) solver to solve the resulting formulae. Computer experiments indicate that our approach compares favourably with—and on some significant problems outperforms—cbmc.
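To make the "bounded traces violating a property" idea concrete, here is a toy bounded model check done by explicit enumeration (cbmc and its SMT-based generalisation work symbolically on formulae; this sketch, with an invented toy program, only illustrates the bounded-search principle):

```python
# Illustrative sketch only: search all traces of length <= k of a transition
# system for one that reaches a "bad" state, returning it as a counterexample.
def bmc(init, step, bad, k):
    """Return a trace of length <= k ending in a bad state, or None."""
    frontier = [(s, [s]) for s in init]
    for _ in range(k):
        nxt = []
        for state, trace in frontier:
            for s2 in step(state):
                t2 = trace + [s2]
                if bad(s2):
                    return t2           # counterexample found
                nxt.append((s2, t2))
        frontier = nxt
    return None                          # property holds up to bound k

# Toy program: x starts at 0 and may add 1 or 3; claimed property: x != 7.
trace = bmc(init=[0],
            step=lambda x: [x + 1, x + 3],
            bad=lambda x: x == 7,
            k=10)
assert trace is not None and trace[-1] == 7
```

The symbolic approach replaces this enumeration by one formula per bound, whose satisfying assignments are exactly such counterexample traces.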

5.
Relation algebra is well suited for dealing with many problems on ordered sets. Since lattices can be introduced via order relations, this suggests applying relation algebra, and tools for its mechanization, to lattice-theoretical problems as well. We combine relation algebra and the BDD-based special-purpose computer algebra system RelView to solve some algorithmic problems on orders and lattices and to visualize their solutions.
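As a small concrete instance of the kind of order-theoretic computation meant here (RelView represents relations as Boolean matrices stored in BDDs; this invented example uses plain Python on the divisibility order of the divisors of 12):

```python
# Hedged sketch: compute least upper bounds (joins) in a finite poset.
divisors = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0          # a <= b  iff  a divides b

def join(a, b):
    """Least upper bound of a and b in the poset, if it exists."""
    ubs = [u for u in divisors if leq(a, u) and leq(b, u)]
    least = [u for u in ubs if all(leq(u, v) for v in ubs)]
    return least[0] if least else None

assert join(4, 6) == 12
assert join(2, 3) == 6
```

In relation-algebraic terms, the set of upper bounds is an intersection of up-sets, and the join is its least element; those operations translate directly into Boolean-matrix expressions.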

6.
Astrée was the first static analyzer able to prove automatically the total absence of runtime errors in actual industrial programs of hundreds of thousands of lines. What makes Astrée such an innovative tool is its scalability, while retaining the required precision, when it is used to analyze a specific class of programs: reactive control-command software. In this paper, we discuss the important choices of algorithms and data structures we made to achieve this goal. However, what really made this task possible was the ability to also take semantic decisions, without compromising soundness, thanks to the abstract interpretation framework. We discuss the way the precision of the semantics was tuned in Astrée in order to scale up, the differences from some more academic approaches, and some of the dead ends we explored. In particular, we show a development process which was not specific to the particular usage Astrée was built for, hoping that it might prove helpful in building other scalable static analyzers.
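For readers unfamiliar with abstract interpretation, here is the classic interval domain with join and widening, the kind of building block such analyzers compose (this toy code is our illustration, not Astrée's implementation):

```python
# Illustrative sketch: the interval abstract domain. Widening jumps unstable
# bounds to infinity so that loop analysis terminates.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def join(self, other):          # least upper bound of two intervals
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))
    def widen(self, other):         # keep stable bounds, widen unstable ones
        lo = self.lo if other.lo >= self.lo else float("-inf")
        hi = self.hi if other.hi <= self.hi else float("inf")
        return Interval(lo, hi)

# Analysing "x = 0; while ...: x = x + 1": the iterates [0,0], [0,1], ...
# would never stabilise; one widening step jumps straight to [0, +inf).
x = Interval(0, 0)
x = x.widen(x.join(Interval(x.lo + 1, x.hi + 1)))
assert (x.lo, x.hi) == (0, float("inf"))
```

Astrée's precision tuning is largely about choosing richer domains than plain intervals and delaying widening where the control-command programs need it.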

7.
In Minimum Error Rate Training (MERT), Bleu is often used as the error function, despite the fact that it has been shown to have a lower correlation with human judgment than other metrics such as Meteor and Ter. In this paper, we present empirical results in which parameters tuned on Bleu may lead to sub-optimal Bleu scores under certain data conditions. Such scores can be improved significantly by tuning on an entirely different metric altogether, e.g. Meteor, by 0.0082 Bleu or 3.38% relative improvement on the WMT08 English–French data. We analyze the influence of the number of references and choice of metrics on the result of MERT and experiment on different data sets. We show the problems of tuning on a metric that is not designed for the single reference scenario and point out some possible solutions.
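To make the metric under discussion concrete, here is a simplified sentence-level Bleu (single reference, n-grams up to 2, no smoothing); real evaluations use corpus-level Bleu with up to 4-grams, so treat this only as a sketch of the formula (modified n-gram precision times a brevity penalty):

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=2):
    """Simplified single-reference sentence Bleu: BP * geometric mean of
    modified n-gram precisions (n = 1..max_n)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i+n]) for i in range(len(candidate)-n+1))
        ref = Counter(tuple(reference[i:i+n]) for i in range(len(reference)-n+1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())  # clipped counts
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))  # brevity penalty
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat".split()
assert bleu(ref, ref) == 1.0
assert bleu("the cat".split(), ref) < 1.0   # short output is penalised
```

The brevity penalty is what makes very short candidates score low even when every n-gram they contain matches the reference, one reason tuning directly on Bleu can behave oddly with a single reference.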

8.
Conclusion: Simpson's diversity and entropy are two good cases in point. Both indices were conceived outside the field of linguistics, but they both relate satisfactorily, on a conceptual basis, to lr, which can then be defined as an absolute entity. Moreover, the use of whole frequency distributions will be regarded by some as a guarantee of quality. Yet these indispensable virtues are, in both cases, blemished by shortcomings which rule them out for a linguist. Instead of borrowing directly from other sciences, it might be useful to collaborate with mathematicians in order to elaborate an index of lr which would combine linguistic consistency and mathematical reliability. This would have the added advantage of making inevitable a theoretical reflection on the very concept of lr, especially as regards its relations to the structure of frequency distributions. In this case it might be necessary to question some of our thinking concerning the vision of lr in relational terms. From an epistemological point of view, an interesting situation might arise where, in a given field (in this instance, linguistics), progress in theoretical thinking is made possible not by the automatic incorporation of elements from other fields, but by the critical (and hopefully positive) study of such tools.
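The two indices discussed are easy to state from a token frequency distribution; as a reference point (standard formulas, not the paper's proposed index): Simpson's index is the sum of squared relative frequencies, and Shannon entropy here is computed in bits.

```python
import math
from collections import Counter

def simpson(tokens):
    """Simpson's index: sum of p_i^2 over the frequency distribution
    (probability that two tokens drawn with replacement coincide)."""
    n = len(tokens)
    return sum((c / n) ** 2 for c in Counter(tokens).values())

def entropy(tokens):
    """Shannon entropy of the token distribution, in bits."""
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in Counter(tokens).values())

text = "the cat sat on the mat".split()
assert abs(simpson(text) - ((2/6)**2 + 4 * (1/6)**2)) < 1e-12
assert entropy(["a", "a", "b", "b"]) == 1.0   # two equiprobable types = 1 bit
```

Both indices depend on the whole frequency distribution, which is the virtue (and, as the conclusion argues, the limitation) at issue.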

9.
Malicious software and other attacks are a major concern in the computing ecosystem and there is a need to go beyond the answers based on untrusted software. Trusted and secure computing can add a new hardware dimension to software protection. Several secure computing hardware architectures using memory encryption and memory integrity checkers have been proposed during the past few years to provide applications with a tamper resistant environment. Some solutions, such as HIDE, have also been proposed to solve the problem of information leakage on the address bus. We propose the CryptoPage architecture which implements memory encryption, memory integrity protection checking and information leakage protection together with a low performance penalty (3% slowdown on average) by combining the counter mode of operation, local authentication values and Merkle trees. It has also several other security features such as attestation, secure storage for applications and program identification. We present some applications of the CryptoPage architecture in the computer virology field as a proof of concept of improving security in the presence of viruses compared to software-only solutions.
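The Merkle-tree component of such integrity schemes is easy to sketch: hash the memory blocks into a binary tree whose root changes if any block is tampered with (CryptoPage's actual hardware scheme combines this with counter-mode encryption and local authentication values; this is only the tree idea):

```python
import hashlib

def merkle_root(blocks):
    """Root hash of a binary Merkle tree over a list of byte blocks."""
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(blocks)
tampered = [b"block0", b"blockX", b"block2", b"block3"]
assert merkle_root(tampered) != root        # any modification changes the root
assert merkle_root(blocks) == root
```

The point for a hardware checker is that verifying one block only requires re-hashing one root-to-leaf path, not the whole memory.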

10.
We have developed and implemented the Relational Grid Monitoring Architecture (R-GMA) as part of the DataGrid project, to provide a flexible information and monitoring service for use by other middleware components and applications. R-GMA presents users with a virtual database and mediates queries posed at this database: users pose queries against a global schema and R-GMA takes responsibility for locating relevant sources and returning an answer. R-GMA's architecture and mechanisms are general and can be used wherever there is a need for publishing and querying information in a distributed environment. We discuss the requirements, design and implementation of R-GMA as deployed on the DataGrid testbed. We also describe some of the ways in which R-GMA is being used. (L. Field: now at CERN, Switzerland. J. Leake: under contract from Objective Engineering Ltd.)
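The virtual-database mediation idea can be sketched in a few lines: producers publish rows for tables of a global schema, and the mediator answers a query by locating the relevant producers and merging their rows (all class, table and site names below are invented for illustration, not R-GMA's API):

```python
# Toy sketch of query mediation over distributed producers.
class Mediator:
    def __init__(self):
        self.producers = {}                 # table name -> list of row lists
    def publish(self, table, rows):
        """A producer registers rows for a table of the global schema."""
        self.producers.setdefault(table, []).append(rows)
    def query(self, table, predicate=lambda row: True):
        """Answer a query by merging rows from every relevant producer."""
        return [row for rows in self.producers.get(table, [])
                    for row in rows if predicate(row)]

m = Mediator()
m.publish("cpu_load", [("site_a", 0.4)])    # one producer per site
m.publish("cpu_load", [("site_b", 0.9)])
assert m.query("cpu_load", lambda r: r[1] > 0.5) == [("site_b", 0.9)]
```

R-GMA adds the parts that matter in practice: a registry for locating producers, streaming as well as one-off queries, and SQL over the global schema.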

11.
A WSDL-based type system for asynchronous WS-BPEL processes   (cited by 1; self-citations: 0; citations by others: 1)
We tackle the problem of providing rigorous formal foundations to current software engineering technologies for web services, and especially to WSDL and WS-BPEL, two of the most used XML-based standard languages for web services. We focus on a simplified fragment of WS-BPEL sufficiently expressive to model asynchronous interactions among web services in a network context. We present this language as a process calculus-like formalism, that we call ws-calculus, for which we define an operational semantics and a type system. The semantics provides a precise operational model of programs, while the type system forces a clean programming discipline for integrating collaborating services. We prove that the operational semantics of ws-calculus and the type system are ‘sound’ and apply our approach to some illustrative examples. We expect that our formal development can be used to make the relationship between WS-BPEL programs and the associated WSDL documents precise and to support verification of their conformance.

12.
In this paper we survey some well-known approaches proposed as general models for calculi dealing with names (like for example process calculi with name-passing). We focus on (pre)sheaf categories, nominal sets, permutation algebras and named sets, studying the relationships among these models, thus allowing techniques and constructions to be transferred from one model to the other. Research partially supported by the EU IST-2004-16004 SENSORIA.

13.
This paper introduces DILIGENT, a digital library infrastructure built by integrating digital library and Grid technologies and resources. This infrastructure allows different communities to dynamically build specialised digital libraries capable of supporting the entire e-Science knowledge production and consumption life-cycle by using shared computing, storage, content, and application resources. The paper presents some of the main software services that implement the DILIGENT system. Moreover, it exemplifies the provided features by showing how the DILIGENT infrastructure is being exploited to support the activity of user communities working in the Earth Science environmental sector. This work is partially funded by the European Commission in the context of the DILIGENT project, under the 2nd call of FP6 IST priority.

14.
In this paper, we present Rambo, an algorithm for emulating a read/write distributed shared memory in a dynamic, rapidly changing environment. Rambo provides a highly reliable, highly available service, even as participants join, leave, and fail. In fact, the entire set of participants may change during an execution, as the initial devices depart and are replaced by a new set of devices. Even so, Rambo ensures that data stored in the distributed shared memory remains available and consistent. There are two basic techniques used by Rambo to tolerate dynamic changes. Over short intervals of time, replication suffices to provide fault-tolerance. While some devices may fail and leave, the data remains available at other replicas. Over longer intervals of time, Rambo copes with changing participants via reconfiguration, which incorporates newly joined devices while excluding devices that have departed or failed. The main novelty of Rambo lies in the combination of an efficient reconfiguration mechanism with a quorum-based replication strategy for read/write shared memory. The Rambo algorithm can tolerate a wide variety of aberrant behavior, including lost and delayed messages, participants with unsynchronized clocks, and, more generally, arbitrary asynchrony. Despite such behavior, Rambo guarantees that its data is stored consistently. We analyze the performance of Rambo during periods when the system is relatively well-behaved: messages are delivered in a timely fashion, reconfiguration is not too frequent, etc. We show that in these circumstances, read and write operations are efficient, completing in at most eight message delays.
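The quorum intersection at the heart of such replication strategies is easy to illustrate: any two majorities of the replica set share at least one replica, so a read that contacts a majority sees the latest tagged write (this sequential toy, with invented names and tags, elides Rambo's reconfiguration and asynchrony entirely):

```python
# Minimal sketch of majority-quorum read/write replication.
class Replica:
    def __init__(self):
        self.tag, self.value = 0, None      # tag orders the writes

def write(quorum, tag, value):
    """Install (tag, value) at every replica of a (majority) quorum."""
    for r in quorum:
        if tag > r.tag:
            r.tag, r.value = tag, value

def read(quorum):
    """Return the value carrying the highest tag seen in the quorum."""
    return max(quorum, key=lambda r: r.tag).value

replicas = [Replica() for _ in range(5)]
write(replicas[:3], tag=1, value="v1")      # majority {0, 1, 2}
assert read(replicas[2:]) == "v1"           # majority {2, 3, 4} overlaps at 2
```

Rambo's contribution is keeping this guarantee while the replica set itself is reconfigured during the execution.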

15.
In developing a theory of MT, it is desirable to have a methodology just powerful enough to achieve the intended results without introducing unnecessary complexity. A formalism designed to embody the methodology, and its implementation in some computer language, should also reflect this characteristic. This notion of appropriate complexity underlies the philosophy behind the cat2 MT system, a powerful yet simple instantiation of the Eurotra MT methodology. This report describes the cat2 formalism, and compares it to the Eurotra Engineering Framework, as well as to other formalisms for linguistic analysis. It is stressed that with a minimal set of formal devices the cat2 formalism achieves a level of adequacy equivalent, if not superior, to the official Eurotra system.

16.
Zusammenfassung (translated from German): In this paper we show how function values of the inverses of some elementary transcendental functions can be computed in a simple way with the help of the Richardson algorithm. For this purpose, some comparison computations were also carried out on a digital computer.
Summary: In this paper we present a method for the computation of some inverse elementary transcendental functions using Richardson's “Deferred Approach to the Limit”. A comparison of this method with other methods is also given.
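As an illustration of the deferred approach to the limit applied to an inverse transcendental function (our own example, not necessarily the one in the paper): the sequence n(x^{1/n} − 1) converges to ln x, slowly, and Richardson extrapolation in h = 1/n accelerates it dramatically.

```python
import math

def log_via_richardson(x, levels=8):
    """Approximate ln(x) by Richardson extrapolation of a(h) = (x**h - 1)/h
    at h = 1, 1/2, 1/4, ...; row j of the table eliminates the h**j term
    of the error expansion."""
    table = [[(2 ** k) * (x ** (1.0 / 2 ** k) - 1) for k in range(levels)]]
    for j in range(1, levels):
        prev = table[-1]
        table.append([(2 ** j * prev[i + 1] - prev[i]) / (2 ** j - 1)
                      for i in range(len(prev) - 1)])
    return table[-1][0]

assert abs(log_via_richardson(2.0) - math.log(2.0)) < 1e-8
assert abs(log_via_richardson(10.0) - math.log(10.0)) < 1e-8
```

The raw term at h = 1/128 is only accurate to a few digits; the extrapolated value is accurate to roughly machine precision with the same function evaluations, which is the whole appeal of the method on an early digital computer.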

17.
18.
This paper describes discourse processing in King Kong, a portable natural language interface. King Kong enables users to pose questions and issue commands to a back-end system. The notion of a discourse is central to King Kong, and underlies much of the intelligent assistance that Kong provides to its users. Kong's approach to modeling discourse is based on the work of Grosz and Sidner (1986). We extend Grosz and Sidner's framework in several ways, principally to allow multiple independent discourse contexts to remain active at the same time. This paper also describes King Kong's method of intention recognition, which is similar to that described in Kautz and Allen (1986) and Carberry (1988). We demonstrate that a relatively simple intention recognition component can be exploited by many other discourse-related mechanisms, for example to disambiguate input and resolve anaphora. In particular, this paper describes in detail the mechanism in King Kong that uses information from the discourse model to form a range of cooperative extended responses to queries in an effort to aid the user in accomplishing her goals.
Judith Schaffer Sider received her Bachelor of Arts degree in Computer Science and Linguistics and Cognitive Science from Brandeis University. Since 1987 she has been a member of the technical staff at the MITRE Corporation, where she works on King Kong, the natural language interface under development there. The joint research with John D. Burger described in this volume reflects some of her work in the areas of cooperative responding and plan recognition.
John D. Burger is a Project Leader at the MITRE Corporation and an instructor at Boston University. He received a Bachelor of Science degree in Mathematics and Computer Science from Carnegie Mellon University. His research interests lie in the fields of natural language processing and intelligent multimedia interfaces. The joint work with Judith Schaffer Sider described in this volume reflects his interest in making use of discourse models in practical intelligent interfaces.

19.
Dr. W. Brauer, Computing, 1968, 3(4): 351–353
Summary: The purpose of this note is to give some corrections and supplements to the paper [1] of G. Feichtinger; in particular, we show that Theorem 1 is wrong, prove the correct version of this theorem, and make some comments on the notion of normal automorphism.


See the paper of G. Feichtinger: Some Results on the Relation Between Automata and Their Automorphism Groups. Computing 1, 4, 327 (1966).
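For concreteness, the notion at issue can be computed by brute force on a small machine: an automorphism of an automaton is a state permutation that commutes with every transition (the 3-state cyclic automaton below is invented for illustration):

```python
from itertools import permutations

def automorphisms(states, delta):
    """All state bijections p with p(delta(q, a)) == delta(p(q), a)."""
    result = []
    for perm in permutations(states):
        p = dict(zip(states, perm))
        if all(p[delta[(q, a)]] == delta[(p[q], a)] for (q, a) in delta):
            result.append(p)
    return result

# delta cycles the states 0 -> 1 -> 2 -> 0 on the single input 'a'.
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'a'): 0}
autos = automorphisms([0, 1, 2], delta)
assert len(autos) == 3      # the three rotations: a cyclic group of order 3
```

The permutations commuting with a 3-cycle are exactly its powers, so the automorphism group here is cyclic of order 3.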

20.
Locating potential execution errors in software is gaining more attention due to the economical and social impact of software crashes. For this reason, many software engineers are now in need of automatic debugging tools in their development environments. Fortunately, the work on formal method technologies during the past 25 years has produced a number of techniques and tools that can make the debugging task almost automatic, using standard computer equipment and with a reasonable response time. In particular, verification techniques like model-checking that were traditionally employed for formal specifications of the software can now be directly employed for real source code. Due to the maturity of model-checking technology, its application to real software is now a promising and realistic approach to increase software quality. There are already some successful examples of tools for this purpose that mainly work with self-contained programs (programs with no system calls). However, verifying software that uses external functionality provided by the operating system via APIs is currently a challenging trend. In this paper, we propose a method for using the tool spin to verify C software systems that use services provided by the operating system through a given API. Our approach consists in building a model of the underlying operating system to be joined with the original C code in order to obtain the input for the model checker spin. The whole modeling process is transparent for the C programmer, because it is performed automatically and without special syntactic constraints in the input C code. Regarding verification, we consider optimization techniques suitable for this application domain, and we guarantee that the system only reports potential (non-spurious) errors. We present the applicability of our approach focusing on the verification of distributed software systems that use the API Socket and the network protocol stack TCP/IP for communications. In order to ensure correctness, we define and use a formal semantics of the API to conduct the construction of correct models.
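The exploration strategy underlying such model checkers can be shown in miniature: breadth-first search over all interleavings of a concurrent protocol, checking a safety property in every reachable state (this is our toy explicit-state checker applied to Peterson's mutual-exclusion algorithm, not spin itself or the paper's OS model):

```python
from collections import deque

# State = (program counters, flags, turn) for two processes running Peterson.
def successors(state):
    pc, flag, turn = list(state[0]), list(state[1]), state[2]
    for i in (0, 1):
        j = 1 - i
        npc, nflag, nturn = pc[:], flag[:], turn
        if pc[i] == 0:                       # request entry
            nflag[i], nturn, npc[i] = True, j, 1
        elif pc[i] == 1:                     # entry test
            if not flag[j] or turn == i:
                npc[i] = 2                   # enter critical section
            else:
                continue                     # blocked: no move for process i
        else:                                # leave critical section
            nflag[i], npc[i] = False, 0
        yield (tuple(npc), tuple(nflag), nturn)

init = ((0, 0), (False, False), 0)
seen, queue = {init}, deque([init])
while queue:                                 # BFS over all interleavings
    s = queue.popleft()
    assert s[0] != (2, 2), "mutual exclusion violated"
    for t in successors(s):
        if t not in seen:
            seen.add(t)
            queue.append(t)
print(f"explored {len(seen)} states; mutual exclusion holds")
```

Spin adds what makes this scale: Promela as the modeling language, partial-order reduction, and counterexample traces; the paper's contribution is generating the OS-model part of the state space automatically from the API semantics.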


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)