Similar Articles
1.
Research in robust data structures can be done both by theoretical analysis of properties of abstract implementations and by empirical study of real implementations. Empirical study requires a support environment for the actual implementation. In particular, if the response of the implementation to errors is being studied, a mechanism must exist for artificially injecting appropriate kinds of errors. This paper discusses techniques used in empirical investigations of data structure robustness, with particular reference to tools developed for this purpose at the University of Waterloo.
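The error-injection idea described above can be sketched in a few lines. The Python sketch below (hypothetical names, not the Waterloo tools) corrupts one pointer of a doubly linked list and shows how the redundant back links let a structural audit detect the damage:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class DoublyLinkedList:
    """The redundant back links give the structure enough redundancy
    for a consistency audit to detect a corrupted forward pointer."""
    def __init__(self, values):
        self.head = None
        prev = None
        for v in values:
            n = Node(v)
            if prev is None:
                self.head = n
            else:
                prev.next = n
                n.prev = prev
            prev = n

    def audit(self):
        """Return True iff every forward link is mirrored by a back link."""
        n = self.head
        while n is not None and n.next is not None:
            if n.next.prev is not n:
                return False
            n = n.next
        return True

def inject_pointer_error(lst):
    """Artificially inject a structural error, as an error-injection tool
    would: point the second node's forward link back at the head."""
    lst.head.next.next = lst.head

lst = DoublyLinkedList([1, 2, 3, 4, 5])
assert lst.audit()          # intact structure passes the audit
inject_pointer_error(lst)
assert not lst.audit()      # the audit detects the injected error
```

Real tools of this kind inject errors at the storage level (bit flips, overwritten words) rather than through a typed API, but the test loop — build, corrupt, audit — is the same.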

2.
Experimenting with the Envelope
Although the computer industry has not yet established a formal means of tallying the successes of specific software application approaches, systems managers can evaluate the likely impact these technologies will have using the information architecture framework described here. The information architecture framework enables the systems manager to predict and assess the effect these alternatives will have on information processing systems before the technology is implemented. This column explores the information architecture structure and its effect on the way information processing systems are selected.

3.
A pattern-matching feature for the Prolog language is described. Through the use of patterns, introduced as Prolog predicates, the feature favors the specification of string handling algorithms in a declarative style. A number of convenient pre-defined patterns, adapted from SNOBOL4, are included. The use of two-level grammars as a paradigm for developing Prolog programs incorporating the pattern-matching feature is also discussed.
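To illustrate the flavour of SNOBOL4-style declarative patterns, here is a minimal pattern-combinator sketch in Python (the paper's feature is implemented in Prolog; this only mirrors the idea of composing primitive patterns such as SNOBOL's ARB):

```python
def lit(s):
    """Pattern matching a literal string; yields the position after it."""
    def p(subject, pos):
        if subject.startswith(s, pos):
            yield pos + len(s)
    return p

def arb():
    """SNOBOL's ARB: match any run of characters, shortest first."""
    def p(subject, pos):
        for end in range(pos, len(subject) + 1):
            yield end
    return p

def seq(*pats):
    """Sequential composition with backtracking over alternatives."""
    def p(subject, pos):
        def go(i, at):
            if i == len(pats):
                yield at
            else:
                for nxt in pats[i](subject, at):
                    yield from go(i + 1, nxt)
        yield from go(0, pos)
    return p

def match(pat, subject):
    """True iff the pattern matches a prefix of the subject."""
    return any(True for _ in pat(subject, 0))

pat = seq(lit("ab"), arb(), lit("cd"))
assert match(pat, "abxxcd")
assert not match(pat, "abxx")
```

In the Prolog setting each pattern is a predicate and backtracking comes for free from the resolution engine; the generators above simulate that behaviour.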

4.
An approach to achieving dynamic reconfiguration within the framework of Ada is described. A technique for introducing a kernel facility for dynamic reconfiguration in Ada is illustrated, and its implementation using the Verdix VADS 5.5 Ada compiling system on a Sun3–120 running the 4.3 BSD Unix operating system is discussed. This experimental kernel allows an Ada program to change its own configuration dynamically, linking new pieces of code at run-time. It is shown how this dynamic facility can be integrated consistently at the Ada language level, without introducing severe inconsistencies with respect to the Standard semantics.

5.
The theorem prover Isabelle has been used to axiomatise ZF set theory with natural deduction and to prove a number of theorems concerning functions. In particular, the well-founded recursion theorem has been derived, allowing the definition of functions over recursive types (such as the length and the append functions for lists). The theory of functions has been developed sufficiently within ZF to include PPλ, the theory of continuous functions forming the basis of LCF. Most of the theorems have been derived using backward proofs, with a small amount of automation. The work has been carried out at the Computer Laboratory of the University of Cambridge.

6.
The 3-consistency algorithm for temporal constraint propagation over interval-based networks, proposed by James Allen, is used in many practical temporal reasoning systems. Apart from the polynomial behavior of this algorithm with respect to the number of nodes in the network, little is known about its time complexity with respect to other properties of the initially given temporal constraints. In this article we report results analyzing the complexity with respect to structural parameters of the input constraint network. We identify regions, defined by these structural parameters, where the algorithm takes much more time than it does elsewhere. Similar features have been observed in recent studies of NP-hard problems. The average-case complexity of Allen's algorithm is also studied empirically, over a hundred thousand randomly generated networks, and the growth rate is observed to be roughly quadratic in the problem size (at least up to 40 nodes, and expected to be lower beyond that). We analyze our data statistically to develop a model with which one can calculate the expected time consumed by the algorithm on a given input network.
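Allen's full algebra has 13 basic interval relations and a large composition table. The sketch below runs the same 3-consistency (path-consistency) fixpoint loop over the much smaller point algebra {<, =, >} so the structure of the algorithm is visible; it is an illustrative stand-in, not Allen's table:

```python
from itertools import product

# Composition table for the point algebra {'<', '=', '>'} (a stand-in for
# Allen's 13 interval relations; the propagation loop is identical).
COMP = {
    ('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
    ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
    ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'},
}

def compose(r1, r2):
    out = set()
    for a, b in product(r1, r2):
        out |= COMP[(a, b)]
    return out

def path_consistency(n, constraints):
    """Tighten C[i][j] with C[i][j] &= compose(C[i][k], C[k][j]) until a
    fixpoint. `constraints` maps (i, j) to a set of basic relations.
    Returns the tightened matrix, or None if some relation becomes empty
    (the network is inconsistent)."""
    full = {'<', '=', '>'}
    inv = {'<': '>', '>': '<', '=': '='}
    C = [[set(full) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        C[i][i] = {'='}
    for (i, j), rel in constraints.items():
        C[i][j] = set(rel)
        C[j][i] = {inv[r] for r in rel}
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            tightened = C[i][j] & compose(C[i][k], C[k][j])
            if tightened != C[i][j]:
                C[i][j] = tightened
                changed = True
            if not C[i][j]:
                return None
    return C

C = path_consistency(3, {(0, 1): {'<'}, (1, 2): {'<'}})
assert C[0][2] == {'<'}     # transitivity inferred by propagation
assert path_consistency(3, {(0, 1): {'<'}, (1, 2): {'<'},
                            (2, 0): {'<'}}) is None   # cyclic, inconsistent
```

The O(n³) inner loop per pass is the source of the polynomial node-count behavior the abstract mentions; the number of passes to the fixpoint is what varies with the structural parameters studied.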

7.
This paper formally describes and studies an algorithm for compiling functions defined through pattern-matching. This algorithm improves on previous proposals by accepting an additional parameter: the domain over which the compiled function will be applied. This additional parameter allows the generation of better code, but it also simplifies the definition of the algorithm. The practical interest of this algorithm for the implementation of functional languages is demonstrated by several applications and/or extensions: conditional rewriting, equations between constructors, ….

8.
Asim Ali Shah   《Knowledge》2006,19(8):681-686
Planning allows one to sequence a series of actions to achieve a certain goal. In this paper, we present a short overview of disjunctive logic programming under the answer set semantics. We then apply the DLVK planning system to the well-known blocks-world planning domain, with both fully and partially specified initial-state facts. System performance is measured in run-time CPU seconds as the plan length (the number of steps, a measure of plan quality) increases, together with a security check for each plan to determine whether the generated plan is secure. Blocks-world planning has been widely investigated by planning researchers, primarily because of its simplicity and because it captures several of the difficulties involved in a typical planning domain. The work presented in this paper contributes mainly a technique compatible with extensions and improvements of the existing system, rather than a concrete planning system.
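A blocks-world instance like those used in such experiments can be solved exactly for small sizes with a plain breadth-first search over states. The sketch below (Python, not the DLVK system) computes optimal plan lengths, which is useful as a baseline when checking plan quality:

```python
from collections import deque

def canon(stacks):
    """Canonical state: sorted tuple of stacks (each bottom-to-top)."""
    return tuple(sorted(stacks))

def moves(state):
    """Yield successor states: move any clear (top) block onto another
    stack or onto the table (which starts a new stack)."""
    stacks = list(state)
    for i, src in enumerate(stacks):
        block = src[-1]
        rest = [s for k, s in enumerate(stacks) if k != i]
        reduced = src[:-1]
        if reduced:                       # put block on the table
            yield canon(rest + [reduced, (block,)])
        for j, dst in enumerate(rest):    # put block on another stack
            others = [s for k, s in enumerate(rest) if k != j]
            base = [reduced] if reduced else []
            yield canon(others + base + [dst + (block,)])

def plan_length(start, goal):
    """Breadth-first search: optimal number of moves, or None."""
    start, goal = canon(start), canon(goal)
    seen = {start: 0}
    q = deque([start])
    while q:
        s = q.popleft()
        if s == goal:
            return seen[s]
        for t in moves(s):
            if t not in seen:
                seen[t] = seen[s] + 1
                q.append(t)
    return None

# B on A on table, C on table  ->  put C on top of B: one move.
assert plan_length([("A", "B"), ("C",)], [("A", "B", "C")]) == 1
# B on A on C  ->  A, B, C restacked as C on B on A: four moves.
assert plan_length([("C", "A", "B")], [("A", "B", "C")]) == 4
```

Exhaustive search is exponential in the number of blocks, which is exactly why declarative planners such as DLVK, with plan-length bounds and security checks, are the subject of the paper.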

9.
This paper presents the results of an experiment in security evaluation. The system is modeled as a privilege graph that exhibits its security vulnerabilities. Quantitative measures that estimate the effort an attacker might expend to exploit these vulnerabilities to defeat the system security objectives are proposed. A set of tools has been developed to compute such measures and has been used in an experiment to monitor a large real system for nearly two years. The experimental results are presented and the validity of the measures is discussed. Finally, the practical usefulness of such tools for operational security monitoring is shown and a comparison with other existing approaches is given.
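One simple instance of such a quantitative measure is the minimum cumulative effort along any path through the privilege graph, computable with Dijkstra's algorithm. The sketch below uses hypothetical node names and effort weights; the paper's measures are richer (e.g. mean effort over attack scenarios), but the graph model is the same:

```python
import heapq

def min_attack_effort(graph, start, target):
    """Dijkstra over a privilege graph: nodes are privilege levels, edge
    weights estimate the attacker effort needed to exploit a vulnerability
    that upgrades one privilege into another."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Hypothetical graph: a cheap two-step escalation beats a costly direct one.
g = {
    "guest": [("user", 2.0), ("root", 10.0)],
    "user": [("root", 3.0)],
}
assert min_attack_effort(g, "guest", "root") == 5.0
```

Monitoring then amounts to recomputing such measures as vulnerabilities appear and disappear, and alerting when the effort to reach a sensitive privilege drops.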

10.
11.
The evolution of a new technology depends upon a good theoretical basis for developing the technology, as well as upon its experimental validation. In order to provide for this experimentation, we have investigated the creation of a software testbed and the feasibility of using the same testbed for experimenting with a broad set of technologies. The testbed is a set of programs, data, and supporting documentation that allows researchers to test their new technology on a standard software platform. An important component of this testbed is the Unified Model of Dependability (UMD), which was used to elicit dependability requirements for the testbed software. With a collection of seeded faults and known issues of the target system, we are able to determine if a new technology is adept at uncovering defects or providing other aids proposed by its developers. In this paper, we present the Tactical Separation Assisted Flight Environment (TSAFE) testbed environment for which we modeled and evaluated dependability requirements and defined faults to be seeded for experimentation. We describe two completed experiments that we conducted on the testbed. The first experiment studies a technology that identifies architectural violations and evaluates its ability to detect the violations. The second experiment studies model checking as part of design for verification. We conclude by describing ongoing experimental work studying testing, using the same testbed. Our conclusion is that even though these three experiments are very different in terms of the studied technology, using and re-using the same testbed is beneficial and cost effective.
Daniel Hirschbach

12.
A growing number of promising applications require recognizing human posture and motion. Conventional techniques require us to attach foreign objects to the body, which in some applications is disturbing or even impossible. New, nonintrusive motion capture approaches are called for. The well-known shape-from-silhouette technique for understanding 3D shapes could also be effective for human bodies. We present a novel technique for model-based motion capture that uses silhouettes extracted from multiple views. A 3D reconstruction of the performer can be computed from the silhouettes with a technique known as volume intersection. We can recover the posture by fitting a model of the human body to the reconstructed volume. The purpose of this work is to test the effectiveness of this approach in a virtual environment by investigating the precision of the posture and motion obtained with various numbers and arrangements of stationary cameras. An average 1% position error has been obtained with five cameras.
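The volume-intersection step can be sketched with orthographic views along the three axes: a voxel survives only if it projects inside every silhouette. Real systems (including this one) use calibrated perspective cameras, but the survival rule is the same:

```python
def volume_intersection(sil_x, sil_y, sil_z):
    """Toy shape-from-silhouette on an n x n x n voxel grid, using three
    orthographic silhouettes (boolean n x n grids seen along x, y, z).
    A voxel (x, y, z) survives only if all three silhouettes contain its
    projection."""
    n = len(sil_x)
    return [[[sil_x[y][z] and sil_y[x][z] and sil_z[x][y]
              for z in range(n)] for y in range(n)] for x in range(n)]

# Carve a 4x4x4 grid with the same 2x2 square silhouette in each view:
n = 4
sq = [[1 <= a < 3 and 1 <= b < 3 for b in range(n)] for a in range(n)]
vol = volume_intersection(sq, sq, sq)
carved = sum(v for plane in vol for row in plane for v in row)
assert carved == 8    # a 2x2x2 cube survives the three square silhouettes
```

The carved volume is the visual hull, an overestimate of the true shape; fitting an articulated body model to it, as the paper does, is what turns the hull into a posture estimate.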

13.
14.
This paper describes the design and implementation of a shared virtual memory (SVM) system for the nCUBE 2 machine. The SVM system provides the user a single coherent address space across all nodes. It is implemented at the user level in a C programming environment using high level constructs to support data sharing. Shared variables are treated as objects rather than pages. We have improved upon an existing algorithm for maintaining coherency in the SVM system, thus achieving a reduction in the number of internode messages required in coherency maintenance. Detailed timing analysis is conducted to analyze the feasibility of this shared environment. Experimental results indicate that parallel programs running under an SVM system show linear speedup, suggesting that SVM systems could provide an effective programming environment for the next generation of distributed memory parallel computers. The bottleneck of this implementation is associated with the expensive interrupt handling capability of the nCUBE 2.

15.
樊爱京  杨照峰 《计算机应用》2011,31(11):2961-2964
Building a new generation of network intrusion detection systems (NIDS) requires an advanced pattern-matching engine. This paper proposes a new pattern-matching scheme that uses a hardware-based programmable state machine technology (B-FSM) to implement a deterministic matching process. The technique can match a large number of patterns simultaneously in a single input stream and maps them efficiently into transition rules. Experiments with a rule set widely used in network intrusion detection (Snort) show that the method is storage-efficient, fast, and dynamically updatable, and can meet the needs of an NIDS.
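In software, the classic way to match many patterns deterministically in a single pass over a stream is the Aho–Corasick automaton; the sketch below illustrates that idea (the paper's B-FSM is a hardware design with its own rule-mapping scheme, not this algorithm):

```python
from collections import deque

def build_fsm(patterns):
    """Build an Aho-Corasick automaton: a trie of goto transitions plus
    failure links, so every pattern is found in one pass over the input."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({})
                fail.append(0)
                out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    q = deque(goto[0].values())          # depth-1 states keep fail = 0
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]       # inherit matches ending here
    return goto, fail, out

def scan(fsm, stream):
    """Return (end_index, pattern) for every match in the stream."""
    goto, fail, out = fsm
    s, hits = 0, []
    for i, ch in enumerate(stream):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits.extend((i, p) for p in out[s])
    return hits

fsm = build_fsm(["he", "she", "his"])
found = {p for _, p in scan(fsm, "ushers")}
assert found == {"she", "he"}
```

Each input character causes a bounded amount of work regardless of how many patterns are loaded, which is the property a hardware state-machine engine exploits at line rate.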

16.
In the past few years, the increase in Internet usage has been substantial. High network bandwidth and the large number of threats pose challenges to current network intrusion detection systems, which manage high volumes of network traffic and perform complicated packet processing. Pattern matching is a computationally intensive process included in network intrusion detection systems. In this paper, we present an efficient graphics processing unit (GPU)-based network packet pattern-matching algorithm, leveraging the computational power of GPUs to accelerate pattern-matching operations and subsequently increase the overall processing throughput. According to the experimental results, the proposed algorithm achieved a maximal traffic processing throughput of over 2 Gbit/s. The results demonstrate that the proposed GPU-based algorithm can effectively enhance the performance of network intrusion detection systems.

17.
Processing data streams imposes demands that do not exist in static environments. In online learning, the probability distribution of the data can change over time (concept drift). The prequential assessment methodology is commonly used to evaluate the performance of classifiers in data streams with stationary and non-stationary distributions. It is based on the premise that the purpose of statistical inference is to make sequential probability forecasts for future observations, rather than to express information about the past accuracy achieved. This article empirically evaluates the prequential methodology considering its three common strategies used to update the prediction model, namely Basic Window, Sliding Window, and Fading Factors. Specifically, it aims to identify which of these variations is the most accurate for the experimental evaluation of past results in scenarios where concept drift occurs, with particular interest in the accuracy observed over the total data flow. The prequential accuracy of the three variations and the real accuracy obtained in the learning process of each dataset are the basis for this evaluation. The results of the experiments carried out suggest that the prequential method with the Sliding Window variation is the best alternative.
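The three prequential variants differ only in how the per-example hits (test-then-train outcomes) are aggregated into a running accuracy. A compact sketch of all three, with a toy prediction stream:

```python
from collections import deque

def prequential_accuracies(preds, labels, window=3, alpha=0.9):
    """Prequential accuracy three ways: Basic Window (running mean over
    everything so far), Sliding Window (mean over the last `window`
    examples), and Fading Factors (exponentially discounted mean with
    factor `alpha`). Each example is scored before it would be used
    for training (test-then-train)."""
    basic_hits = 0
    win = deque(maxlen=window)
    fade_num = fade_den = 0.0
    basic, sliding, fading = [], [], []
    for t, (p, y) in enumerate(zip(preds, labels), start=1):
        hit = int(p == y)
        basic_hits += hit
        win.append(hit)
        fade_num = alpha * fade_num + hit
        fade_den = alpha * fade_den + 1.0
        basic.append(basic_hits / t)
        sliding.append(sum(win) / len(win))
        fading.append(fade_num / fade_den)
    return basic, sliding, fading

preds  = [1, 1, 0, 0, 1, 1]
labels = [1, 0, 0, 1, 1, 1]            # hit sequence: 1 0 1 0 1 1
basic, sliding, fading = prequential_accuracies(preds, labels)
assert abs(basic[-1] - 4 / 6) < 1e-9   # all six examples
assert sliding[-1] == 2 / 3            # last three hits: 0 1 1
```

Under concept drift the Basic Window keeps averaging over stale history, while the Sliding Window and Fading Factors forget it, which is why the latter two track the "real" current accuracy more closely.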

18.
A growing number of location-based applications rely on indoor positioning, and much of the research effort in this field has focused on the pattern-matching approach. This approach relies on comparing a pre-trained database (or radio map) with the received signal strength (RSS) of a mobile device. However, such methods are highly sensitive to environmental dynamics. A number of solutions based on added anchor points have been proposed to overcome this problem. This paper proposes an approach that uses existing beacons to measure the RSS from other beacons as a reference, which we call inter-beacon measurement, for the calibration of radio maps on the fly. This approach is feasible because most current beacons (such as Wi-Fi and ZigBee stations) have both transmitting and receiving capabilities, and it removes the need for additional anchor points to deal with environmental dynamics. Simulation and experimental results are presented to verify our claims.
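A minimal fingerprinting sketch shows where inter-beacon corrections would plug in. The `offsets` parameter is a hypothetical interface, not the paper's exact method: it stands for per-beacon corrections that would be estimated from beacon-to-beacon RSS drift and applied to the radio map before matching:

```python
import math

def locate(radio_map, rss, offsets=None):
    """Nearest-neighbour fingerprinting: return the radio-map location
    whose stored RSS vector (shifted by the per-beacon `offsets`) is
    closest, in Euclidean distance, to the observed RSS vector."""
    offsets = offsets or {b: 0.0 for b in rss}
    best, best_d = None, math.inf
    for loc, finger in radio_map.items():
        d = math.sqrt(sum((rss[b] - (finger[b] + offsets[b])) ** 2
                          for b in rss))
        if d < best_d:
            best, best_d = loc, d
    return best

# Hypothetical two-beacon radio map (RSS in dBm):
radio_map = {
    "roomA": {"b1": -40.0, "b2": -70.0},
    "roomB": {"b1": -70.0, "b2": -40.0},
}
# Environmental change: every beacon reads ~5 dB weaker than at training
# time. Inter-beacon measurements would reveal this drift and supply the
# offsets that re-centre the map:
obs = {"b1": -45.0, "b2": -75.0}
assert locate(radio_map, obs) == "roomA"
assert locate(radio_map, obs, offsets={"b1": -5.0, "b2": -5.0}) == "roomA"
```

The point of the inter-beacon idea is that the offsets come from infrastructure already in place, so no extra anchor hardware is deployed to keep the map calibrated.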

19.
《Information Sciences》, 1986, 40(1): 53–66
We propose two new classes of two-dimensional array languages, called existential matching languages (EXMLs) and universal matching languages (UNMLs). These languages are closely related to two-dimensional pattern matching and thus are suited to studying it formally. In this paper, basic properties of these languages and the decidability (or undecidability) of several problems about them are investigated. We show that, because of the two-dimensionality, several decision problems, such as the emptiness problem for UNMLs, the universe problem for EXMLs, and the equivalence problems for both languages, are undecidable. Thus we cannot decide, for example, whether two two-dimensional pattern-matching tasks are equivalent.

20.
One of the major problems in natural language understanding by computer is the frequent use of patterned or idiomatic phrases in colloquial English dialogue. Traditional parsing methods typically cannot cope with a significant number of idioms. A more general problem is the tendency of a speaker to leave the meaning of an utterance ambiguous or partially implicit, to be filled in by the hearer from a shared mental context which includes linguistic, social, and physical knowledge. The appropriate representation for this knowledge is a formidable and unsolved problem. We present here an approach to natural language understanding which addresses itself to these problems. Our program uses a series of processing stages which progressively transform an English input into a form usable by our computer simulation of paranoia. Most of the processing stages involve matching the input to an appropriate stored pattern and performing the associated transformation. However, a few key stages perform aspects of traditional parsing which greatly facilitate the overall language recognition process.
