Similar Literature
20 similar documents found.
1.
In this paper, we develop a framework for the automated verification of Web sites which can be used to specify integrity conditions for a given Web site and then automatically check whether these conditions are fulfilled. First, we provide a rewriting-based, formal specification language which allows us to define syntactic as well as semantic properties of the Web site. Then, we formalize a verification technique which identifies the requirements not fulfilled by the Web site and helps to repair the errors by locating incomplete information and/or missing pages. Our methodology is based on a novel rewriting-based technique, called partial rewriting, in which the traditional pattern-matching mechanism is replaced by tree simulation, a technique suitable for recognizing patterns inside semistructured documents. The framework has been implemented in the prototype Web verification system Verdi, which is publicly available.
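The partial-rewriting approach above hinges on tree simulation as the matching step. Below is a minimal Python sketch of that idea under simplifying assumptions (the Node type and function names are hypothetical, not Verdi's API): a pattern node is simulated by a document node if their labels agree and every pattern child is simulated by some child of the document node, and a helper searches for the pattern anywhere in the document tree.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        label: str
        children: list = field(default_factory=list)

    def simulates(pattern: Node, tree: Node) -> bool:
        # Labels must agree, and every pattern child must be simulated by
        # some child of the document node (extra document children are fine).
        if pattern.label != tree.label:
            return False
        return all(any(simulates(p, c) for c in tree.children)
                   for p in pattern.children)

    def occurs(pattern: Node, tree: Node) -> bool:
        # Look for the pattern at the root or anywhere inside the document.
        return simulates(pattern, tree) or any(occurs(pattern, c)
                                               for c in tree.children)

    # Integrity condition: every "member" entry must contain a "name" field.
    page = Node("members", [Node("member", [Node("status")])])
    print(occurs(Node("member", [Node("name")]), page))  # False: page incomplete

Because simulation only requires the pattern's structure to be embedded in the document's, the same pattern matches pages whose markup differs in irrelevant details, which is what makes it suitable for semistructured documents.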

2.
In this paper, we describe a system, written in Haskell, for the automated verification of Web sites which can be used to specify (partial) correctness and completeness properties of a given Web site, and then automatically check whether these properties are actually fulfilled. It provides a rule-based, formal specification language which allows us to define syntactic/semantic conditions for the Web site by means of a user-friendly graphical interface as well as a verification facility for recognizing forbidden/incorrect patterns and incomplete/missing Web pages.

3.
Improving pattern quality in web usage mining by using semantic information
Frequent Web navigation patterns generated by Web usage mining techniques provide valuable information for several applications, such as Web site restructuring and recommendation. In conventional Web usage mining, the semantic information of the Web page content does not take part in the pattern generation process. In this work, we investigate the effect of semantic information on the patterns generated for Web usage mining in the form of frequent sequences. To this end, we developed a technique and a framework for integrating semantic information into the Web navigation pattern generation process, in which frequent navigational patterns are composed of ontology instances instead of Web page addresses. The quality of the generated patterns is measured through an evaluation mechanism involving Web page recommendation. Experimental results show that more accurate recommendations can be obtained by including semantic information in navigation pattern generation, indicating an increase in pattern quality.
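A minimal sketch of the core idea under stated assumptions (the URL-to-ontology mapping and function names are hypothetical, not the authors' framework): sessions of page addresses are first lifted to sequences of ontology instances, and frequent sequences are then counted over the lifted sessions, so patterns generalize across different pages about the same concept.

    from collections import Counter

    # Hypothetical mapping from page URLs to ontology instances.
    URL_TO_INSTANCE = {
        "/products/laptop-x1": "Product",
        "/products/mouse-m2": "Product",
        "/checkout": "Purchase",
    }

    def semantic_sessions(sessions):
        # Replace each URL with its ontology instance; drop unmapped pages.
        return [[URL_TO_INSTANCE[u] for u in s if u in URL_TO_INSTANCE]
                for s in sessions]

    def frequent_sequences(sessions, length=2, min_support=2):
        # Count contiguous navigation subsequences; keep the frequent ones.
        counts = Counter(tuple(s[i:i + length])
                         for s in sessions for i in range(len(s) - length + 1))
        return {seq: n for seq, n in counts.items() if n >= min_support}

    sessions = semantic_sessions([["/products/laptop-x1", "/checkout"],
                                  ["/products/mouse-m2", "/checkout"]])
    print(frequent_sequences(sessions))  # {('Product', 'Purchase'): 2}

Note that neither raw URL sequence is frequent on its own; the pattern emerges only at the ontology level, which is the quality gain the paper measures.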

4.
During the past decade, sequential pattern mining has been the core of numerous research efforts. It is now possible to efficiently extract knowledge of users' behavior from a huge set of sequences collected over time, with applications in various domains such as supermarket purchases, Web site visits, etc. However, sequence mining algorithms do little to control the risk of extracting false discoveries or overlooking true knowledge. In this paper, the theoretical conditions for achieving a relevant sequence mining process are examined. The article then offers a statistical view of sequence mining with the following advantages: First, it uses a compact and generalized representation of the original sequences in the form of a probabilistic automaton. Second, it integrates statistical constraints to guarantee the extraction of significant patterns. Finally, it provides an interesting solution in a privacy-preserving context, protecting individuals' information. An application in car flow modeling is presented, showing the ability of our algorithm (acsm) to discover frequent routes without any private information. Comparisons with a classical sequence mining algorithm (spam) are made, showing the effectiveness of our approach.
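Not the acsm algorithm itself, but a minimal illustration of its central representation (function names are hypothetical): the observed sequences are compacted into a probabilistic automaton, here degenerated to a first-order Markov chain, and candidate patterns are then judged by the probability the automaton assigns to them rather than by raw counts alone.

    from collections import defaultdict

    def build_automaton(sequences):
        # Transition counts between consecutive items...
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                counts[a][b] += 1
        # ...normalized into transition probabilities.
        return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
                for a, nxt in counts.items()}

    def route_probability(automaton, route):
        # Probability assigned to a candidate route; routes falling below a
        # statistical threshold are likely false discoveries.
        p = 1.0
        for a, b in zip(route, route[1:]):
            p *= automaton.get(a, {}).get(b, 0.0)
        return p

    auto = build_automaton([["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]])
    print(route_probability(auto, ["A", "B", "C"]))  # 1.0 * 2/3

The automaton also serves the privacy goal: only aggregate transition statistics are retained, never the individual sequences.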

5.
Graph regularization methods for Web spam detection
We present an algorithm, witch, that learns to detect spam hosts or pages on the Web. Unlike most other approaches, it simultaneously exploits the structure of the Web graph as well as page contents and features. The method is efficient, scalable, and provides state-of-the-art accuracy on a standard Web spam benchmark.
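A minimal sketch of graph regularization in this spirit (a toy gradient-descent implementation with hypothetical names, ignoring witch's content features and large-scale optimization): spam scores are fitted to the labeled hosts while score differences across hyperlinks are penalized, so unlabeled hosts inherit evidence from their neighborhood in the Web graph.

    import numpy as np

    def spam_scores(n_hosts, labels, edges, lam=0.5, lr=0.05, steps=1000):
        # Minimize sum over labeled i of (f_i - y_i)^2
        #        + lam * sum over edges (i, j) of (f_i - f_j)^2.
        f = np.zeros(n_hosts)
        for _ in range(steps):
            grad = np.zeros(n_hosts)
            for i, y in labels.items():        # fit the labeled hosts
                grad[i] += 2.0 * (f[i] - y)
            for i, j in edges:                 # stay smooth over the Web graph
                d = 2.0 * lam * (f[i] - f[j])
                grad[i] += d
                grad[j] -= d
            f -= lr * grad
        return f

    # Host 0 labeled spam (1.0), host 2 labeled ham (0.0); host 1 links to both.
    print(spam_scores(3, {0: 1.0, 2: 0.0}, [(0, 1), (1, 2)]))

The unlabeled middle host ends up with an intermediate score, which is exactly the behavior the graph term is meant to induce.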

6.
7.
We present a framework, called air, for verifying safety properties of assembly language programs via software model checking. air extends the applicability of predicate abstraction and counterexample-guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, air allows verification of programs for which source code is unavailable, such as legacy and COTS software, as well as programs that use features (pointers, structures, object orientation) that are problematic for source-level software verification tools. In addition, air makes no assumptions about the underlying compiler technology. We have implemented a prototype of air and present encouraging results on several non-trivial examples.
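A runnable toy illustrating the predicate-abstraction half of the approach (all names hypothetical; air computes its abstraction symbolically rather than by enumerating states): concrete states are mapped to tuples of predicate truth values, and safety is checked by reachability over the abstract graph.

    def alpha(state, preds):
        # Abstract a concrete state to its tuple of predicate truth values.
        return tuple(p(state) for p in preds)

    def abstract_reachable(states, init, step, preds):
        # Existential abstraction: an abstract edge exists whenever *some*
        # concrete state in the source region steps into the target region.
        succ = {}
        for s in states:
            succ.setdefault(alpha(s, preds), set()).add(alpha(step(s), preds))
        seen, todo = {alpha(init, preds)}, [alpha(init, preds)]
        while todo:
            for b in succ.get(todo.pop(), ()):
                if b not in seen:
                    seen.add(b)
                    todo.append(b)
        return seen

    # Toy "program": a counter stepping by 2 mod 8; safety: x == 5 unreachable.
    preds = [lambda x: x == 5, lambda x: x % 2 == 0]
    reach = abstract_reachable(range(8), 0, lambda x: (x + 2) % 8, preds)
    print(all(not is_five for is_five, _ in reach))  # True: proves safety

If the abstraction were too coarse to prove the property, CEGAR would analyze the spurious counterexample, add new predicates, and repeat.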

8.
The software model checker Blast
Blast is an automatic verification tool for checking temporal safety properties of C programs. Given a C program and a temporal safety property, Blast either statically proves that the program satisfies the safety property, or provides an execution path that exhibits a violation of the property (or, since the problem is undecidable, does not terminate). Blast constructs, explores, and refines abstractions of the program state space based on lazy predicate abstraction and interpolation-based predicate discovery. This paper gives an introduction to Blast and demonstrates, through two case studies, how it can be applied to program verification and test-case generation. In the first case study, we use Blast to statically prove memory safety for C programs. We use CCured, a type-based memory-safety analyzer, to annotate a program with run-time assertions that check for safe memory operations. Then, we use Blast to remove as many of the run-time checks as possible (by proving that these checks never fail), and to generate execution scenarios that violate the assertions for the remaining run-time checks. In our second case study, we use Blast to automatically generate test suites that guarantee full coverage with respect to a given predicate. Given a C program and a target predicate p, Blast determines the program locations q for which there exists a program execution that reaches q with p true, and automatically generates a set of test vectors that cause such executions. Our experiments show that Blast can provide automated, precise, and scalable analysis for C programs.

9.
DPLL-based SAT solvers progress by implicitly applying binary resolution. The resolution proofs that they generate are used, after the SAT solver's run has terminated, for various purposes. The most notable uses in formal verification are: extracting an unsatisfiable core, extracting an interpolant, and detecting clauses that can be reused in an incremental satisfiability setting (the latter uses the proof only implicitly, during the run of the SAT solver). Making the resolution proof smaller can benefit all of these goals: it can lead to smaller cores, smaller interpolants, and smaller clauses that are propagated to the next SAT instance in an incremental setting. We suggest two methods for doing so that are linear in the size of the proof. Our first technique, called Recycle-Units, uses each learned constant (unit clause) (x) to simplify resolution steps in which x was the pivot, prior to when it was learned. Our second technique, called Recycle-Pivots, simplifies proofs in which several nodes in the resolution graph, one of which dominates the others, correspond to the same pivot. Our experiments with industrial instances show that these simplifications reduce the core by ≈5% and the proof by ≈13%. This reduces the core less than competing methods such as run-till-fix, but whereas our algorithms are linear in the size of the proof, the latter and other competing techniques are all exponential, as they are based on SAT runs. If the size of the proof (the resolution graph) is considered polynomial in the number of variables (which is not necessarily the case in general), this gives our method an exponential time reduction compared to existing tools for small core extraction. Our experiments show that this result is evident in practice, more so for the second method: it rarely takes more than a few seconds, even when competing tools time out, and hence it can be used as a cheap proof post-processing procedure.
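The flavor of Recycle-Units on a single resolution step (toy integer encoding of literals; the paper's algorithm rewrites the whole resolution DAG in one linear pass): once the unit clause (-x) has been learned, an earlier step that resolved on x can resolve against the unit instead, yielding a smaller, subsuming resolvent.

    def resolve(c1, c2, pivot):
        # Binary resolution on `pivot`: literals are nonzero ints,
        # negation is arithmetic minus.
        assert pivot in c1 and -pivot in c2
        return (set(c1) - {pivot}) | (set(c2) - {-pivot})

    # Original step: resolve (x or a) with (-x or b) on x, giving (a or b).
    original = resolve({1, 2}, {-1, 3}, 1)   # {2, 3}
    # After the unit clause (-x) is learned, redo the step against the unit:
    recycled = resolve({1, 2}, {-1}, 1)      # {2}, which subsumes {2, 3}
    print(original, recycled)

Propagating such strengthened clauses through the proof is what shrinks the core and the interpolants downstream.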

10.
Component-based software development is a promising approach for controlling the complexity and quality of software systems. Nevertheless, recent advances in quality control techniques do not seem to keep up with the growing complexity of embedded software; embedded systems often consist of dozens to hundreds of software/hardware components that exhibit complex interaction behavior. Unanticipated quality defects in a component can be a major source of system failure. To address this issue, this paper suggests a design verification approach integrated into the model-driven, component-based development methodology Marmot. The notion of abstract components—the basic building blocks of Marmot—helps to lift the level of abstraction, facilitates high-level reuse, and reduces verification complexity by localizing verification problems between abstract components before refinement and after refinement. This enables the identification of unanticipated design errors in the early stages of development. This work introduces the Marmot methodology, presents a design verification approach in Marmot, and demonstrates its application on the development of a μ-controller-based abstraction of a car mirror control system. An application on TinyOS shows that the approach helps to reuse models as well as their verification results in the development process.

11.
12.
A transformation-based Web site can keep the contents of a Web site consistent by furnishing a single database and a set of transformation programs, each generating a Web page from the database. However, when someone notices an error or stale content on a Web page in this style of Web site construction, the Web site maintainer must access a possibly huge database to update the corresponding content. In this paper, we propose a new approach to Web site construction based on bidirectional transformation, and report our design and implementation of a practical updating system called Vu-X. We bring the idea of bidirectional transformation to Web site construction, describing not only a forward transformation for generating Web pages from the database but also a backward transformation for reflecting modifications of the Web pages back to the database. By using the bidirectional transformation language Bi-X, we can obtain both transformations by specifying only the forward one. Our Vu-X system is implemented as a Web server built upon the Bi-X transformation engine, which keeps the content of Web sites consistent while Web pages are updated in WYSIWYG style in the browser.
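A minimal lens-style sketch of the bidirectional idea (hypothetical names and a deliberately tiny "database"; Bi-X is far richer): get renders a database record as a page fragment, and put pushes an edited fragment back into the record so that rendering again reproduces the edit.

    def get(record):
        # Forward transformation: database record -> page fragment.
        return f"<h1>{record['title']}</h1>"

    def put(record, fragment):
        # Backward transformation: edited fragment -> updated record.
        title = fragment.removeprefix("<h1>").removesuffix("</h1>")
        return {**record, "title": title}

    db = {"title": "Old title"}
    db2 = put(db, "<h1>New title</h1>")
    assert get(db2) == "<h1>New title</h1>"  # round-trip (PutGet) law holds
    print(db2)  # {'title': 'New title'}

The point of a bidirectional language is that only the forward transformation must be written; the system derives a backward one that satisfies such round-trip laws automatically.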

13.
Modern multicore processors, such as the Cell Broadband Engine, achieve high performance by equipping accelerator cores with small "scratch-pad" memories. The price for increased performance is higher programming complexity: the programmer must manually orchestrate data movement using direct memory access (DMA) operations. Programming using asynchronous DMA operations is error-prone, and DMA races can lead to nondeterministic bugs which are hard to reproduce and fix. We present a method for DMA race analysis in C programs. Our method works by automatically instrumenting a program with assertions modeling the semantics of a memory flow controller. The instrumented program can then be analyzed using state-of-the-art software model checkers. We show that bounded model checking is effective for detecting DMA races in buggy programs. To enable automatic verification of the correctness of instrumented programs, we present a new formulation of k-induction geared towards software, as a proof rule operating on loops. Our techniques are implemented as a tool, Scratch, which we apply to a large set of programs supplied with the IBM Cell SDK, in which we discover a previously unknown bug. Our experimental results indicate that our k-induction method performs extremely well on this problem class. To our knowledge, this marks both the first application of k-induction to software verification, and the first example of software model checking in the context of heterogeneous multicore processors.
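The instrumentation idea in miniature (hypothetical Python stand-ins; Scratch inserts equivalent C assertions for a bounded model checker to discharge): every newly issued DMA transfer must not overlap the memory range of a transfer that is still in flight.

    pending = []  # (tag, lo, hi) address ranges of in-flight transfers

    def dma_get(tag, lo, hi):
        # Assert the new transfer is disjoint from every pending one.
        for _, plo, phi in pending:
            assert hi <= plo or phi <= lo, "DMA race: overlapping transfer"
        pending.append((tag, lo, hi))

    def dma_wait(tag):
        # Completion retires the transfer, freeing its range for reuse.
        pending[:] = [t for t in pending if t[0] != tag]

    dma_get(0, 0, 128)
    dma_wait(0)          # removing this wait would trip the assertion below
    dma_get(1, 64, 192)

Bounded model checking finds executions violating such assertions; the paper's k-induction rule is then what proves their absence in loops of unbounded length.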

14.
15.
We present functional dependencies, a convenient, formal, yet high-level specification format for a piece of procedural software (a function). Functional dependencies specify the set of memory locations that may be modified by the function and, for each modified location, the set of memory locations that influence its final value. Verifying that a function respects pre-defined functional dependencies can be tricky: the embedded world uses C and Ada, which have arrays and pointers. The existing systems we know of that manipulate functional dependencies, Caveat and SPARK, are restricted to pointer-free subsets of these languages. This article deals with functional dependencies in a programming language with full aliasing. We show how to use a weakest-precondition calculus to generate a verification condition for pre-existing functional dependency requirements. This verification condition can then be checked using automated theorem provers or proof assistants. With our approach, it is possible to verify the specification as it was written beforehand. We assume little about the implementation of the verification condition generator itself. Our study takes place inside the C analysis framework Frama-C, where the technique described here has been experimentally implemented on top of the WP plug-in in the development version of the tool.
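A tiny weakest-precondition calculus over assignments (hypothetical encoding with naive textual substitution; Frama-C's WP handles real C, where aliasing makes the substitution step the hard part this article addresses):

    def wp(stmts, post):
        # Work backwards through the program: wp(x := e, Q) = Q[e/x].
        for kind, var, expr in reversed(stmts):
            assert kind == "assign"
            # Naive string substitution -- fine for this toy, unsound for C.
            post = post.replace(var, f"({expr})")
        return post

    # Verification condition for the dependency "y's final value is
    # determined by x" in the one-line program  y := x + 1:
    print(wp([("assign", "y", "x + 1")], "y == x + 1"))  # (x + 1) == x + 1

The resulting condition is a pure logical formula with no program constructs left in it, which is what allows handing it to an automated prover or proof assistant.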

16.
Verification of clocked and hybrid systems
This paper presents a new computational model for real-time systems, called the clocked transition system (CTS) model. The CTS model is a development of our previous timed transition model, with some of the changes inspired by the model of timed automata. The new model leads to a simpler style of temporal specification and verification, requiring no extension of the temporal language. We present verification rules for proving safety and liveness properties of clocked transition systems. All rules are associated with verification diagrams. The verification of response properties requires adjustments of the proof rules developed for untimed systems, reflecting the fact that progress in real-time systems is ensured by the progress of time and not by fairness. The style of the verification rules is very close to the verification style of untimed systems, which allows the (re)use of verification methods and tools developed for untimed reactive systems for proving all interesting properties of real-time systems. We conclude with the presentation of a branching-time-based approach for verifying that an arbitrary given CTS is non-zeno. Finally, we present an extension of the model and the invariance proof rule for hybrid systems.
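A toy clocked transition system under stated assumptions (hypothetical executable encoding; the paper's CTSs are defined logically, not as Python objects): time progress is just another transition that advances the clocks, so the same proof machinery covers discrete steps and the passage of time.

    class Lamp:
        def __init__(self):
            self.location, self.clock = "idle", 0.0

        def tick(self, delta):
            assert delta > 0
            self.clock += delta                       # time-progress transition

        def press(self):
            assert self.location == "idle"
            self.location, self.clock = "lit", 0.0    # discrete step resets clock

        def timeout(self):
            assert self.location == "lit" and self.clock >= 3.0  # clock guard
            self.location = "idle"

    s = Lamp()
    s.press(); s.tick(3.5); s.timeout()
    print(s.location)  # "idle"

A safety property such as "the lamp is never lit for more than 10 time units" then becomes an ordinary state invariant over (location, clock), in the style of the untimed rules.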

17.
We study data-driven Web applications provided by Web sites interacting with users or applications. The Web site can access an underlying database, as well as state information updated as the interaction progresses, and receives user input. The structure and contents of Web pages, as well as the actions to be taken, are determined dynamically by querying the underlying database as well as the state and inputs. The properties to be verified concern the sequences of events (inputs, states, and actions) resulting from the interaction, and are expressed in linear or branching-time temporal logics. The results establish under what conditions automatic verification of such properties is possible and provide the complexity of verification. This brings into play a mix of techniques from logic and model checking.

18.
Personalizable edge services for Web accessibility
The Web Content Accessibility Guidelines by W3C (W3C Recommendation, May 1999) provide several suggestions for Web designers on how to author Web pages in order to make them accessible to everyone. In this context, this paper proposes the use of edge services as an efficient and general solution to promote accessibility and break down the digital barriers that prevent users with disabilities from actively participating in any aspect of society. The idea behind edge services centers on the advantages of personalized navigation, in which contents are tailored according to different factors, such as clients' device capabilities, communication systems and network conditions, and, finally, the preferences and/or abilities of the growing number of users that access the Web. To meet these requirements, Web designers have to efficiently provide content adaptation and personalization mechanisms in order to guarantee universal access to Internet content. The so-far dominant paradigm of communication on the WWW, with its simple request/response model, cannot efficiently address such requirements; it must therefore be augmented with new components that enhance the scalability, performance, and ubiquity of the Web. Edge servers, acting on the HTTP data flow exchanged between client and server, allow on-the-fly content adaptation as well as other complex functionalities beyond traditional caching and content replication. These value-added services are called edge services and include personalization and customization, aggregation from multiple sources, geographical personalization of navigation (with insertion/emphasis of content related to the user's geographical location), translation services, group navigation and awareness for social navigation, advanced services for bandwidth optimization such as adaptive compression and format transcoding, mobility, and ubiquitous access to Internet content. This paper presents Personalizable Accessible Navigation (Pan), a set of edge services designed to improve Web page accessibility, developed and deployed on top of a programmable intermediary framework. The characteristics and location of the services, i.e., provided by intermediaries, as well as the personalization and the possibility of selecting multiple profiles, make Pan a platform that is especially suitable for accessing the Web seamlessly, including from mobile terminals.
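A toy edge-service adaptation (hypothetical profile keys and transformations; Pan's services run on a programmable intermediary between origin server and client): the HTML stream is rewritten on the fly according to the selected user profile.

    import re

    def adapt(html, profile):
        if profile.get("low_vision"):
            # Enlarge the default text size for low-vision profiles.
            html = html.replace("<body>", '<body style="font-size:150%">')
        if profile.get("screen_reader"):
            # Replace images with their alt text so the page linearizes cleanly.
            html = re.sub(r'<img[^>]*alt="([^"]*)"[^>]*>', r"[image: \1]", html)
        return html

    page = '<body><img src="logo.png" alt="ACME logo"></body>'
    print(adapt(page, {"screen_reader": True}))

Because the transformation lives on the intermediary, neither the origin site nor the client browser needs to change, which is what makes the approach general.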

19.
This paper provides an overview of the Active XML project developed at INRIA over the past five years. Active XML (AXML, for short) is a declarative framework that harnesses Web services for distributed data management, and is put to work in a peer-to-peer architecture. The model is based on AXML documents, which are XML documents that may contain embedded calls to Web services, and on AXML services, which are Web services capable of exchanging AXML documents. An AXML peer is a repository of AXML documents that acts both as a client, by invoking the embedded service calls, and as a server, by providing AXML services, which are generally defined as queries or updates over the persistent AXML documents. The approach gracefully combines stored information with data defined in an intensional manner as well as dynamic information. This simple, rather classical idea leads to a number of technically challenging problems, both theoretical and practical. In this paper, we describe and motivate the AXML model and language, overview the research results obtained in the course of the project, and show how all the pieces come together in our implementation.
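A minimal sketch of AXML-style materialization (hypothetical dict-based document and service registry; real AXML documents are XML and the embedded calls target actual Web services): evaluating a document means replacing each embedded service-call node with the data the service returns.

    # Stand-ins for remote Web services the peer can invoke.
    SERVICES = {"getTemp": lambda: "23C"}

    def materialize(doc):
        # Replace embedded service-call nodes with the data they return,
        # recursing through the rest of the document unchanged.
        if isinstance(doc, dict) and "call" in doc:
            return SERVICES[doc["call"]]()
        if isinstance(doc, dict):
            return {k: materialize(v) for k, v in doc.items()}
        if isinstance(doc, list):
            return [materialize(x) for x in doc]
        return doc

    doc = {"room": "Office", "temperature": {"call": "getTemp"}}
    print(materialize(doc))  # {'room': 'Office', 'temperature': '23C'}

The interesting design questions the project studies begin exactly here: when to invoke a call (lazily or eagerly), and whether to ship the data or the call itself to another peer.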

20.
Function in Device Representation
We explore the meanings of the terms ‘structure’, ‘behaviour’, and, especially, ‘function’ in engineering practice. Computers provide great assistance in calculation tasks in engineering practice, but they also have great potential for helping with reasoning tasks. However, realising this vision requires precision in representing engineering knowledge, in which the terms mentioned above play a central role. We start with a simple ontology for representing objects and causal interactions between objects. Using this ontology, we investigate a range of meanings for the terms of interest. Specifically, we distinguish between function as effect on the environment, and a device-centred view of device function. In the former view, function is seen as an intended or desired role that an artifact plays in its environment. We identify an important concept called mode of deployment that is often left implicit, but whose explicit representation is necessary for correct and complete reasoning. We discuss the task of design and design verification in this framework. We end with a discussion that relates needs in the world to functions of artifacts created to satisfy the needs.
