Similar Literature
Found 20 similar documents (search time: 62 ms)
1.
The sequential Prolog machine PEK, currently under development, is described. PEK is an experimental machine designed for high-speed execution of Prolog programs. The PEK machine is controlled by horizontal-type microinstructions. The machine includes bit-slice microprocessor elements comprising a microprogram sequencer and ALU, and possesses hardware circuits for unification and backtracking. The PEK machine consists of a host processor (MC68000) and a backend processor (the PEK engine). A Prolog interpreter has been developed on the machine and its performance evaluated. A single inference can be executed in 89 microinstructions, and execution speed is approximately 60–70 KLIPS.

2.
CARMEL-2 is a high-performance VLSI uniprocessor, tuned for Flat Concurrent Prolog (FCP). CARMEL-2 shows almost 5-fold speedup over its predecessor, CARMEL-1, and it achieves 2,400 KLIPS executing append. This high execution rate was gained as a result of an optimized design, based on an extensive architecture-oriented execution analysis of FCP and the lessons learned with CARMEL-1. CARMEL-2 is a RISC processor in both character and performance. The instruction set includes only 29 carefully selected instructions. The 10 special instructions, the prudent implementation and pipeline scheme, as well as sophisticated mechanisms such as intelligent dereference, distinguish CARMEL-2 as a RISC processor for FCP.

3.
This paper presents an alternative approach to the implementation of DCGs. In contrast to the standard implementation, we use the well-known bottom-up SLR(1) parsing technique. An experimental system, AID, based on this technique is presented and discussed. The aim of this work is twofold. First, we describe an alternative principle for implementing DCGs, which makes it possible to cope with left-recursive DCGs and to avoid unnecessary backtracking when parsing. Second, we demonstrate that Prolog can be used to implement table-driven parsers that generalize deterministic LR(k) parsing techniques to the case of ambiguous grammars. The parsers generated by our system are deterministic whenever the submitted grammar is SLR(1). Otherwise, the shift/reduce or reduce/reduce conflicts are stored in the generalized table and are used as Prolog backtrack points.
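As a toy illustration of why the bottom-up approach handles left recursion (this is a minimal sketch, not the AID system; the grammar is invented for the example): a top-down DCG translation of the left-recursive rule E → E '+' 'n' would loop forever, while a shift-reduce parser consumes input left to right and reduces handles as they appear.

```python
# Shift-reduce parser for the left-recursive toy grammar:
#   E -> E '+' 'n'  |  'n'
# A top-down (standard DCG) parser would recurse infinitely on E -> E '+' 'n';
# bottom-up parsing has no such problem.
def parse(tokens):
    stack = []
    for tok in tokens + ['$']:       # '$' = end-of-input marker
        if tok != '$':
            stack.append(tok)        # shift
        while True:                  # reduce while a handle is on top
            if stack[-1:] == ['n']:
                stack[-1] = 'E'                # reduce by E -> n
            elif stack[-3:] == ['E', '+', 'E']:
                stack[-3:] = ['E']             # reduce by E -> E + n
            else:
                break
    return stack == ['E']            # accepted iff everything reduced to E
```

In an SLR(1) parser the reduce decisions come from a precomputed table rather than the hard-coded pattern matches above; the paper's point is that conflicts in that table can become Prolog backtrack points.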

4.
This paper begins by describing BSL, a new logic programming language fundamentally different from Prolog. BSL is a nondeterministic Algol-class language whose programs have a natural translation to first-order logic; executing a BSL program without free variables amounts to proving the corresponding first-order sentence. A new approach is proposed for parallel execution of logic programs coded in BSL, which relies on advanced compilation techniques for extracting fine-grain parallelism from sequential code. We describe a new “Very Long Instruction Word” (VLIW) architecture for parallel execution of BSL programs. The architecture, now being designed at the IBM Thomas J. Watson Research Center, avoids the synchronization and communication delays normally associated with parallel execution of logic programs on multiprocessors by determining data dependences between operations at compile time and by coupling the processing elements very tightly, via a single central shared register file. A simulator for the architecture has been implemented, and encouraging simulation results are reported in the paper.

5.
6.
Current anti-malware tools have proved insufficient in combating ever-evolving malware attacks and vulnerability exploits, owing to the vulnerabilities inevitably present in the complex software used today. In addition, the performance penalty incurred by anti-malware tools is magnified when security approaches designed for desktops are migrated to modern mobile devices, such as tablets and laptops, due to their relatively limited processing capabilities and battery capacities. In this paper, we propose a fine-grained anomaly detection defense framework that offers a cost-efficient way to detect malicious behavior and prevent vulnerability exploits on resource-constrained computing platforms. In this framework, a trusted third party (e.g., the publisher) first tests a new application by running it in a heavily monitored testing environment that emulates the target system and extracts a behavioral model from its execution paths. Extensive security policies are enforced during this process; in case of a violation, the program is denied release to the user. If the application passes the tests, the user can download the behavioral model along with the tested application binary. At run time, the application is monitored against the behavioral model. In the unlikely event that a new execution path is encountered, conservative but lightweight security policies are applied. To reduce overhead at the user end, the behavioral model may be further reduced by the publisher through static analysis. We have implemented the defense framework on a netbook with an Intel Atom processor and evaluated it with a suite of 51 real-world Linux viruses and malware samples. Experiments demonstrate that our tool achieves very high coverage (98%) of the considered malware and security threats. The four antivirus tools we compared against were found to have poor virus coverage, especially of obfuscated viruses. By removing safe standard library blocks from the behavioral model, we reduce the model size by 8.4× and the user’s run-time overhead by 23%.
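The publisher/user split described above can be sketched in a few lines (a hedged illustration only; the function names and the choice of call-pair transitions as the model are assumptions, not the paper's actual representation):

```python
# Sketch of a behavioral-model defense: the publisher records every adjacent
# pair of events observed while testing (the "behavioral model"); at run time
# the user's monitor flags any transition never seen during testing.
def build_model(traces):
    """Publisher side: collect all observed event transitions."""
    model = set()
    for trace in traces:
        model.update(zip(trace, trace[1:]))
    return model

def monitor(trace, model):
    """User side: return the transitions not covered by the model
    (each would trigger the conservative run-time policy)."""
    return [pair for pair in zip(trace, trace[1:]) if pair not in model]
```

Pruning "safe standard library blocks", as the abstract describes, would correspond here to dropping transitions known to be benign from `model` before shipping it.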

7.
8.
9.
MapReduce has been demonstrated to be a promising alternative for simplifying parallel programming with high performance on a single multicore machine. Compared to the cluster version, MapReduce on a single multicore machine has no disk or network I/O bottlenecks, and it is more sensitive to the characteristics of workloads. A single execution flow may be inefficient for many classes of workloads. For example, the fixed execution flow of the MapReduce program structure can impose significant overheads for workloads that inherently have only one emitted value per key, mainly caused by the unnecessary reduce phase. In this paper, we refine the workload characterization from Phoenix++ according to the attributes of key-value pairs, and demonstrate that the refined workload characterization model covers all classes of MapReduce workloads. Based on the model, we propose a new MapReduce system with a workload-customizable execution flow. The system, named Peacock, is implemented on top of Phoenix++. Experiments with four different classes of benchmarks on a 16-core Intel-based server show that Peacock achieves better performance than Phoenix++ for workloads that inherently have only one emitted value per key (up to a 3.6× speedup) while performing identically for other classes of workloads.
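The "skip the reduce phase for one-value-per-key workloads" idea can be shown with a tiny in-process sketch (illustrative only; this is not Peacock's or Phoenix++'s API, and the `one_value_per_key` flag is an assumed stand-in for the workload-class declaration):

```python
# Minimal MapReduce where the caller can declare the one-value-per-key
# workload class: in that case no grouping or reduce phase is needed, so
# the mapper output becomes the result directly.
from collections import defaultdict

def map_reduce(data, mapper, reducer=None, one_value_per_key=False):
    if one_value_per_key:
        # Keys are unique by assumption: reducing would add pure overhead.
        return dict(kv for item in data for kv in mapper(item))
    groups = defaultdict(list)       # standard flow: group values by key
    for item in data:
        for k, v in mapper(item):
            groups[k].append(v)
    return {k: reducer(k, vs) for k, vs in groups.items()}
```

Word count needs the full flow (many values per key), whereas something like a unique-ID index fits the shortened flow.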

10.
Piranha is an execution model for Linda, developed at Yale, that reclaims idle cycles from networked workstations for use in executing parallel programs. Piranha has proven to be an effective system for harnessing large amounts of computing power. Most Piranha research to this point has concentrated on efficiently executing a single application at a time. In this paper we evaluate strategies for scheduling multiple Piranha applications. We examine methods for predicting idle periods and the effectiveness of scheduling strategies that make use of these predictions. We present a prototype scheduler for the Piranha system, implemented using the process trellis software architecture for networks of workstations. This work was supported by AASERT Grant F49620-92-J-0240, AFOSR-91-0098, and NASA Training Grant NGT-50719.
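One of the simplest idle-period predictors such a scheduler could use is a moving average of recent idle periods (a hedged sketch under that assumption; the paper's actual predictors and scheduling policies are not reproduced here):

```python
# Toy idle-period prediction for cycle-stealing scheduling: estimate the
# next idle period as the mean of the last k observed ones, and admit a
# task only if its expected runtime fits the prediction.
def predict_idle(history, k=3):
    recent = history[-k:]
    return sum(recent) / len(recent) if recent else 0.0

def should_schedule(history, task_runtime):
    """True if the task is expected to finish before the owner returns."""
    return task_runtime <= predict_idle(history)
```

A real scheduler would also weigh the cost of evicting (retreating) a task when the prediction turns out wrong, which is exactly the trade-off the strategies in the paper evaluate.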

11.
With the rapid growth of video surveillance applications, the storage energy consumption of video surveillance has become more noticeable, but existing energy-saving methods for massive storage systems mostly target data centers with predominantly random accesses. Video surveillance storage has an inherent access pattern and requires a special energy-saving approach to save more energy. Semi-RAID, an energy-efficient data layout for video surveillance, is proposed. It adopts a partial-parallelism strategy, which partitions disk data into different groups and implements parallel accesses within each group. Grouping allows only part of the disks to be working while the rest stay idle, and inner-group parallelism provides the performance guarantee. In addition, a greedy strategy for address allocation is adopted to effectively prolong the idle periods of the disks, and dedicated cache strategies are used to filter out the small amount of random accesses. The energy-saving efficiency of Semi-RAID is verified on a simulated video surveillance system consisting of 32 cameras at D1 resolution. The experiments show that Semi-RAID saves 45% more energy than Hibernator, 80% more than PARAID, 33% more than MAID, and 79% more than eRAID-5, while providing single-disk fault tolerance and meeting performance requirements such as throughput.
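The greedy, group-at-a-time allocation can be sketched as follows (a toy model only; the group count, capacity, and layout scheme are illustrative assumptions, not Semi-RAID's actual on-disk format):

```python
# Partial-parallelism layout sketch: sequential surveillance writes fill one
# disk group completely before waking the next, so the remaining groups can
# stay spun down, while striping inside the active group keeps throughput up.
def allocate(num_blocks, groups, group_capacity):
    """Return block -> (group, offset within group) under greedy filling."""
    layout, active = {}, 0
    for block in range(num_blocks):
        if block and block % group_capacity == 0:
            active += 1              # current group full: activate the next
        layout[block] = (active % groups, block % group_capacity)
    return layout
```

The point of the greedy strategy is visible in the result: consecutive blocks land in the same group, maximizing each idle group's uninterrupted spin-down time.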

12.
This paper presents an agent-based simulator for environmental land change that includes efficient and parallel auto-tuning. This simulator extends the Multi-Agent System for Environmental simulation (MASE) by introducing rationality to agents using a mentalistic approach, the Belief-Desire-Intention (BDI) model, and is thus named MASE-BDI. Because the manual tuning of simulation parameters is an error-prone, labour- and computing-intensive task, an auto-tuning approach with efficient multi-objective optimization algorithms is also introduced. Further, parallelization techniques are employed to speed up the auto-tuning process by deploying it on parallel systems. MASE-BDI is compared to MASE using the Brazilian Cerrado biome case. MASE-BDI reduces the simulation execution times by at least 82× and slightly improves the simulation quality. The auto-tuning algorithms, by evaluating less than 0.00115% of a search space with 6 million parameter combinations, are able to quickly tune the simulation model, regardless of the objective used. Moreover, the experimental results show that executing the tuning in parallel leads to speedups of approximately 11× compared to sequential execution in a hardware setting with 16 CPU cores.

13.
The Andorra model is a parallel execution model for logic programs which exploits the dependent and-parallelism and or-parallelism inherent in logic programming. We present a flat subset of a language based on the Andorra model, henceforth called Andorra Prolog, that is intended to subsume both Prolog and the committed-choice languages. Flat Andorra, in addition to don’t-know and don’t-care nondeterminism, supports control of or-parallel split, synchronisation on variables, and selection of clauses. We show the operational semantics of the language and its applicability in the domain of committed-choice languages. As an example of the expressiveness of the language, we describe a method for communication between objects by time-stamped messages, which is suitable for expressing distributed discrete event simulation applications. This method depends critically on the ability to express don’t-know nondeterminism and thus cannot easily be expressed in a committed-choice language.

14.
15.
We present a method for preprocessing Prolog programs so that their operational semantics is given by the first-order predicate calculus. Most Prolog implementations do not use a full unification algorithm, for efficiency reasons. The result is that it is possible to create terms containing loops, whose semantics is not adequately described by first-order logic. Our method finds places where such loops may be created and adds tests to detect them. This should not appreciably slow down the execution of most Prolog programs.
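The "full unification algorithm" the abstract alludes to is unification with the occurs check, which most Prologs omit; a minimal sketch in Python (term encoding is invented for the example: variables are strings beginning with `?`, compound terms are `(functor, args...)` tuples):

```python
# Unification with occurs check: the occurs() test is exactly what most
# Prolog implementations skip for speed; with it, a binding like X = f(X)
# fails instead of creating a cyclic ("looped") term.
def walk(t, s):
    """Follow variable bindings in substitution s."""
    while isinstance(t, str) and t.startswith('?') and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Does variable v occur inside term t under substitution s?"""
    t = walk(t, s)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, s) for a in t[1:])
    return False

def unify(a, b, s):
    """Return an extended substitution, or None on failure."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    for v, t in ((a, b), (b, a)):
        if isinstance(v, str) and v.startswith('?'):
            if occurs(v, t, s):
                return None          # would build a cyclic term: fail
            return {**s, v: t}
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None
```

The paper's contribution is doing better than running `occurs` on every unification: it statically finds the program points where a loop could actually be created and inserts the test only there.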

16.
Clark’s query evaluation procedure for computing negative information in deductive databases using a “negation as failure” inference rule requires a safe computation rule, which may only select negative literals if they are ground. This is a very restrictive condition, which weakens the usefulness of negation as failure in a query evaluation procedure. This paper studies the definition and properties of the “not” predicate defined in most Prolog systems, which do not enforce the above-mentioned condition of a safe computation rule. We show that the negation in clauses and the “not” predicate of Prolog are not the same; in fact, a Prolog program may not be in clause form. An extended query evaluation procedure with an extended safe computation rule is proposed to evaluate queries that involve the “not” predicate. The soundness and completeness of this extended query evaluation procedure with respect to a class of logic programs are proved. The extended procedure can be implemented in a Prolog system by a preprocessor for executing range-restricted programs, and requires no modification to the interpreter/compiler of an existing Prolog system. We compare this proposed extended query evaluation procedure with the extended program proposed by Lloyd and Topor, and with the negation constructs in NU-Prolog. The use of the “not” predicate for integrity constraint checking in deductive databases is also presented.
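The safeness condition can be made concrete with a tiny evaluator (an illustrative sketch, not the paper's extended procedure; the fact base and the uppercase-letters-as-variables convention are invented for the example):

```python
# Negation as failure over a ground fact base. Negating a non-ground goal
# is unsound ("not p(X)" would wrongly fail whenever ANY instance of p(X)
# succeeds), so a safe evaluator must refuse, or delay, such goals.
FACTS = {('p', 'a'), ('q', 'b')}

def is_ground(goal):
    """Prolog-style convention: uppercase arguments are variables."""
    return not any(isinstance(arg, str) and arg.isupper() for arg in goal[1:])

def naf(goal):
    """Evaluate 'not goal' by failure; defined only for ground goals."""
    if not is_ground(goal):
        raise ValueError("floundering: negated goal %r is not ground" % (goal,))
    return goal not in FACTS
```

A plain Prolog `not/1` would happily evaluate the non-ground case and give the wrong answer; the paper's extended computation rule is about recovering soundness without simply rejecting such queries.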

17.
Several attempts have been made to design a production system using Prolog. To construct a forward reasoning system, the rule interpreter is often written in Prolog, but its execution is slow. To develop an efficient production system, we propose a rule translation method in which production rules are translated into a Prolog program and forward reasoning is done by the translated program. To translate the rules, we adopted the technique developed in BUP, the bottom-up parsing system in Prolog. Man-machine dialogue functions were added to the production system, demonstrating the potential of our method for application to expert systems.
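The forward reasoning being compiled can be sketched as a fixpoint loop (a generic illustration of forward chaining, not the BUP-based Prolog translation itself; the rule encoding is invented):

```python
# Forward chaining to a fixpoint: a rule (premises, conclusion) fires when
# all its premises are among the known facts; repeat until nothing new
# can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts
```

An interpreter like this, re-scanning all rules on every pass, is what makes naive Prolog-hosted production systems slow; translating each rule into its own Prolog clause lets the engine's indexing do the matching instead.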

18.
The secure OS has been the focus of several studies. However, CPU resources, which are essential for executing a program, are not an object of access control in secure OSes. To prevent the abuse of CPU resources, we had earlier proposed a new type of execution resource that controls the maximum CPU usage (Tabata et al. in Int. J. Smart Home 1(2):109–128, 2007). The previously proposed mechanism can control only one process at a time. Because most services involve multiple processes, the mechanism should control all the processes in each service. In this paper, we propose an improved mechanism that bounds the execution performance of a process group in order to limit unnecessary processor usage. We report the results of an evaluation of our proposed mechanism.

19.
To scale up to high-end configurations, shared-memory multiprocessors are evolving towards Non-Uniform Memory Access (NUMA) architectures. In this paper, we address the central problem of load balancing during parallel query execution on NUMA multiprocessors. We first show that an execution model for NUMA should not use data partitioning (as shared-nothing systems do) but should strive to exploit efficient shared-memory strategies like Synchronous Pipelining (SP). However, SP has problems on NUMA, especially with skewed data. Thus, we propose a new execution strategy which solves these problems. The basic idea is to allow partial materialization of intermediate results and to make them progressively public, i.e., able to be processed by any processor, as needed to avoid processor idle times. Hence, we call this strategy Progressive Sharing (PS). We conducted a performance comparison using an implementation of SP and PS on a 72-processor KSR1 computer, with many queries and large relations. With no skew, SP and PS both have linear speed-up; however, the impact of skew is very severe on SP performance while it is insignificant on PS. Finally, we show that, on NUMA, PS can also be beneficial in executing several pipeline chains concurrently.

20.
Logic programming is expected to make knowledge information processing feasible. However, conventional Prolog systems lack both the processing power and the flexibility needed to solve large problems. To overcome these limitations, an approach is developed in which the natural execution features of logic programs can be represented using Proof Diagrams. AND/OR parallel processing based on a goal-rewriting model is examined. Then the abstract architecture of a highly parallel inference engine (PIE) is described. PIE makes it possible to achieve logic/control separation in machine architecture. The architecture proposed here is discussed from the viewpoint of its high degree of parallelism and its flexibility in problem solving, in comparison with other approaches.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号