Similar Documents
20 similar documents found (search time: 31 ms)
1.
The Induced Graph Matching problem asks to find \(k\) disjoint induced subgraphs isomorphic to a given graph \(H\) in a given graph \(G\) such that there are no edges between vertices of different subgraphs. This problem generalizes the classical Independent Set and Induced Matching problems, among several others. We show that Induced Graph Matching is fixed-parameter tractable in \(k\) on claw-free graphs when \(H\) is a fixed connected graph, and even admits a polynomial kernel when \(H\) is a complete graph. Both results rely on a new, strong, and generic algorithmic structure theorem for claw-free graphs. Complementing these positive results, we prove \(\mathsf{W}[1]\)-hardness of Induced Graph Matching on graphs excluding \(K_{1,4}\) as an induced subgraph, for any fixed complete graph \(H\). In particular, we show that Independent Set is \(\mathsf{W}[1]\)-hard on \(K_{1,4}\)-free graphs. Finally, we consider the complexity of Induced Graph Matching on a large subclass of claw-free graphs, namely proper circular-arc graphs. We show that the problem is either polynomial-time solvable or \(\mathsf{NP}\)-complete, depending on the connectivity of \(H\) and the structure of \(G\).
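For the smallest non-trivial case \(H = K_2\), Induced Graph Matching coincides with Induced Matching: find \(k\) edges that neither share an endpoint nor are joined by an edge. A brute-force checker can make the problem definition concrete (an illustration only; the paper's fixed-parameter algorithms are far more involved):

```python
from itertools import combinations

def has_induced_matching(adj, k):
    """Brute force: is there a set of k edges that pairwise neither share
    a vertex nor are connected by an edge?  This is Induced Graph Matching
    with H = K_2.  adj maps each vertex to its set of neighbours."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    for cand in combinations(edges, k):
        verts = [x for e in cand for x in e]
        if len(set(verts)) < 2 * k:
            continue  # two chosen edges share a vertex
        # no edge may run between vertices of different chosen edges
        if all(w not in adj[u]
               for e, f in combinations(cand, 2)
               for u in e for w in f):
            return True
    return False
```

On the path 0–1–2–3–4 the edges (0,1) and (3,4) form an induced matching of size 2, while the path on four vertices has none, since any two of its edges touch or are adjacent.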

2.
Structured parallel programming is recognised as a viable and effective means of tackling parallel programming problems. Recently, a set of simple and powerful parallel building blocks (\(\mathsf{RISC\text{-}pb^2l}\)) has been proposed to support the modelling and implementation of parallel frameworks. In this work we demonstrate how that same parallel building block set may be used to model both general-purpose parallel programming abstractions, not usually listed in classical skeleton sets, and more specialized domain-specific parallel patterns. We show how an implementation of \(\mathsf{RISC\text{-}pb^2l}\) can be realised via the FastFlow framework and present experimental evidence of the feasibility and efficiency of the approach.

3.
The debuggers of Ref. 11) and most of their derivatives are of the meta-interpreter type. The computation of the debugger tracks the computation of the program to be diagnosed at the level of procedure calls. This is adequate if the programmer's intuitive understanding is in terms of procedure calls, as is indeed the case in Prolog. In LDL, however, while the semantics of the language are described in a bottom-up, fixpoint model of computation,8) the actual execution of a program is a complex sequence of low-level procedure calls determined (and optimized) by the compiler. Consequently, a trace of these procedure calls is of little use to the programmer. Further, one cannot “execute” an LDL program as if it were a Prolog program; the program may simply not terminate in its Prolog reading, and several LDL constructs have no obvious Prolog counterparts. We identify the origin of a fault in the LDL program by a top-down, query/subquery approach. The basic debugger, implemented in Prolog, is a shell program between the programmer and the LDL program: it poses queries and uses the results to drive the interaction with the user. It closely resembles the one presented in Ref. 11). The core of a more sophisticated debugger is presented as well. Several concepts are introduced in order to quantify debugging abilities. One is that of a generated interpretation, in which the structureless intended interpretation of Ref. 11) is augmented with causality. Another is the (idealized) concept of a reliable oracle. We show that given an incorrect program and a reliable oracle which uses a generated interpretation, a cause for the fault will be found in finitely many steps. This result carries over to the more sophisticated debugger.
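The top-down query/subquery diagnosis rests on a classic declarative-debugging step: descend the computation tree and report a node whose result the oracle rejects while all of its sub-results are accepted. A minimal sketch (the node layout and oracle interface are illustrative, not the paper's LDL machinery):

```python
def find_fault(node, oracle):
    """Top-down fault search over a computation tree.  A node is a triple
    (goal, result, children).  The culprit is a node whose result the
    oracle rejects while all of its children's results are accepted;
    with a reliable oracle the search ends in finitely many steps."""
    goal, result, children = node
    if oracle(goal, result):
        return None                # this subtree is correct
    for child in children:
        culprit = find_fault(child, oracle)
        if culprit is not None:
            return culprit         # the fault lies deeper
    return goal                    # wrong result, all children correct
```

For instance, if `double(2)` wrongly computes 5 via a faulty `add(2,2) = 5`, the oracle rejects both nodes and the search descends to blame `add(2,2)`.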

4.
Multicore processors can provide sufficient computing power and flexibility for complex streaming applications, such as high-definition video processing. To reduce hardware complexity and power consumption, a distributed scratchpad memory architecture is considered instead of a cache memory architecture. However, the distributed design poses new challenges to programming: it is difficult to exploit all available capabilities and achieve maximal throughput, due to the combined complexity of inter-processor communication, synchronization, and workload balancing. In this study, we developed an efficient design flow for parallelizing multimedia applications on a distributed scratchpad memory multicore architecture. An application is first partitioned into streaming components and then mapped onto multicore processors. Various hardware-dependent factors and application-specific characteristics are involved in generating efficient task partitions and allocating resources appropriately. To test and verify the proposed design flow, three popular multimedia applications were implemented: a full-HD motion JPEG decoder, an object detector, and a full-HD H.264/AVC decoder. For demonstration purposes, the Sony PlayStation\(^{\circledR}\)3 was selected as the target platform. Simulation results show that, on the PS3, the full-HD motion JPEG decoder with the proposed design flow can decode about 108.9 frames per second (fps) in the 1080p format. The object detection application can perform real-time object detection at 2.84 fps at \(1280 \times 960\) resolution, 11.75 fps at \(640 \times 480\) resolution, and 62.52 fps at \(320 \times 240\) resolution. The full-HD H.264/AVC decoder can achieve nearly 50 fps.
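Mapping streaming components onto cores is, at its simplest, a load-balancing problem. A common greedy heuristic, longest-processing-time-first list scheduling, conveys the idea (this is an illustrative heuristic with invented stage names, not the paper's actual design flow):

```python
import heapq

def map_stages(workloads, n_cores):
    """Greedy LPT mapping of streaming stages onto cores: assign the
    heaviest remaining stage to the currently least-loaded core.
    workloads maps a stage name to its estimated cost."""
    heap = [(0.0, core) for core in range(n_cores)]
    heapq.heapify(heap)                 # (load, core) pairs, lightest first
    assignment = {}
    for stage, w in sorted(workloads.items(), key=lambda kv: -kv[1]):
        load, core = heapq.heappop(heap)
        assignment[stage] = core
        heapq.heappush(heap, (load + w, core))
    return assignment
```

A real flow must additionally respect inter-stage communication costs, scratchpad capacity, and synchronization, which is exactly why the paper's design flow factors in hardware-dependent parameters.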

5.
This paper introduces three refinement patterns for algebraic state-transition diagrams (ASTDs): state refinement, transition refinement, and loop-transition refinement. These refinement patterns are derived from practice in using ASTDs to specify information systems and security policies in two industrial research projects. The two refinement relations used in these patterns are formally defined. For each pattern, proof obligations are proposed to ensure preservation of behaviour through refinement. The proposed refinement relations essentially consist in preserving scenarios by replacing abstract events with concrete events, or by introducing new events. Deadlocks cannot be introduced; divergence over new events is allowed in one of the refinement relations. We prove congruence-like properties for these three patterns, in order to show that they can be applied to a subpart of a specification while preserving global properties. The three refinement patterns are illustrated with a simple case study of a complaint management system.

6.
The Andorra model is a parallel execution model of logic programs which exploits the dependent and-parallelism and or-parallelism inherent in logic programming. We present a flat subset of a language based on the Andorra model, henceforth called Andorra Prolog, that is intended to subsume both Prolog and the committed-choice languages. Flat Andorra, in addition to don't-know and don't-care nondeterminism, supports control of or-parallel split, synchronisation on variables, and selection of clauses. We show the operational semantics of the language, and its applicability in the domain of committed-choice languages. As an example of the expressiveness of the language, we describe a method for communication between objects by time-stamped messages, which is suitable for expressing distributed discrete event simulation applications. This method depends critically on the ability to express don't-know nondeterminism and thus cannot easily be expressed in a committed-choice language.

7.
This paper presents some benchmark timings from an optimising Prolog compiler using global analysis for a RISC workstation, the MIPS R2030. These results are extremely promising: for example, the infamous naive reverse benchmark runs at 2 mega LIPS. We compare these timings with those for other Prolog implementations running on the same workstation and with published timings for the KCM, a recent piece of special-purpose Prolog hardware. The comparison suggests that global analysis is a fruitful source of information for an optimising Prolog compiler and that the performance of special-purpose Prolog hardware can be at least matched by the code from a compiler using such information. We include some analysis of the sources of the improvement that global analysis yields. An overview of the compiler is given and some implementation issues are discussed. This paper is an extended version of Ref. 15).
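The LIPS figure can be made concrete: naive reverse of a 30-element list is the classic yardstick, performing 496 logical inferences, so 2 mega LIPS corresponds to one nrev(30) call roughly every 248 µs. A quick back-of-the-envelope check using the standard closed-form inference count:

```python
def nrev_inferences(n):
    # naive reverse of an n-element list: nrev itself plus the nested
    # append calls amount to (n^2 + 3n + 2) / 2 logical inferences
    return (n * n + 3 * n + 2) // 2

def seconds_per_call(n, lips):
    """Time for one nrev(n) call at a given LIPS rate."""
    return nrev_inferences(n) / lips

nrev30 = nrev_inferences(30)          # the classic count: 496
t = seconds_per_call(30, 2_000_000)   # ~2.48e-4 s at 2 mega LIPS
```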

8.
This is the first report of surface-enhanced Raman scattering (SERS) substrate fabrication using a combination of imprinted hydrogen silsesquioxane (HSQ: HSiO\(_{3/2}\)) patterns and self-assembly of gold nanoparticles (AuNPs). To assemble the AuNPs inside the imprinted HSQ pattern, it is important to understand the AuNP–AuNP interactions as well as those between the AuNPs and the HSQ. The authors investigated the effects of HSQ surface charges on the self-assembly of AuNPs. It was found that the negatively charged AuNPs were successfully assembled according to the geometry of the negatively charged HSQ pattern. In addition, it was shown that the SERS substrate fabricated from HSQ, an inorganic polymer, was suitable for organic chemical analysis, by comparison with a substrate fabricated using an organic polymer.

9.
This paper describes a reliable method for fabricating stable gold patterns embedded in polydimethylsiloxane (PDMS) using a direct peel-off process. Two different surface modifications with self-assembled monolayers were carried out for easy and reliable transfer of the Au micro-patterns to the PDMS: (1) perfluorodecyltrichlorosilane on the Si substrate, for easy release of the Au patterns from the substrate, and (2) (3-mercaptopropyl)trimethoxysilane on the Au patterns, to promote adhesion between the Au patterns and the PDMS. Au features as small as 2 μm, in line and dot shapes, were successfully transferred from the Si substrate to the PDMS over a 3-inch wafer. Transferring the Au patterns to the PDMS by the dry peel-off process did not cause any of the PDMS contamination typically seen with wet chemical methods. Finally, the stability of the Au patterns embedded in the PDMS was confirmed by the Scotch-tape adhesion test.

10.
In this paper, a compressed membership problem for finite automata, both deterministic (DFAs) and non-deterministic (NFAs), with compressed transition labels is studied. The compression is represented by straight-line programs (SLPs), i.e. context-free grammars generating exactly one string. A novel technique of dealing with SLPs is employed: the SLPs are recompressed, so that substrings of the input word are encoded in the SLPs labelling the transitions of the NFA (DFA) in the same way as in the SLP representing the input text. To this end, the SLPs are locally decompressed and then recompressed in a uniform way. Furthermore, in order to reflect the recompression in the NFA, we need to modify it only a little; in particular, its size stays polynomial in the input size. Using this technique it is shown that compressed membership for NFAs with compressed labels is in NP, thus confirming the conjecture of Plandowski and Rytter (Jewels Are Forever, pp. 262–272, Springer, Berlin, 1999) and extending the partial result of Lohrey and Mathissen (in CSR, LNCS, vol. 6651, pp. 275–288, Springer, Berlin, 2011); as this problem is known to be NP-hard (Plandowski and Rytter, Jewels Are Forever, pp. 262–272, Springer, Berlin, 1999), we settle its exact computational complexity. Moreover, the same technique applied to compressed membership for DFAs with compressed labels yields that this problem is in P; this problem is known to be P-hard (Markey and Schnoebelen, Inf. Process. Lett. 90(1):3–6, 2004; Beaudry et al., SIAM J. Comput. 26(1):138–152, 1997).
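The SLP notion itself is easy to sketch: each nonterminal is either a terminal string or the concatenation of two earlier nonterminals, so the grammar derives exactly one word, whose length can be exponential in the size of the SLP (the dictionary encoding below is illustrative):

```python
def slp_expand(rules, start):
    """Expand a straight-line program.  rules maps a nonterminal either
    to a terminal string or to a pair of other nonterminals."""
    memo = {}
    def expand(x):
        if x not in memo:
            r = rules[x]
            memo[x] = r if isinstance(r, str) else expand(r[0]) + expand(r[1])
        return memo[x]
    return expand(start)

def slp_length(rules, start):
    """Length of the derived word, computable in time linear in the SLP
    even when the word itself is exponentially long."""
    memo = {}
    def ln(x):
        if x not in memo:
            r = rules[x]
            memo[x] = len(r) if isinstance(r, str) else ln(r[0]) + ln(r[1])
        return memo[x]
    return ln(start)
```

The length function is the reason recompression matters: algorithms must work on the SLP without ever writing out the derived word.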

11.
Aurora is a prototype or-parallel implementation of the full Prolog language for shared-memory multiprocessors, developed as part of an informal research collaboration known as the “Gigalips Project”. It currently runs on Sequent and Encore machines. It has been constructed by adapting SICStus Prolog, a fast, portable, sequential Prolog system. The techniques for constructing a portable multiprocessor version follow those pioneered in a predecessor system, ANL-WAM. The SRI model was adopted as the means to extend the SICStus Prolog engine for or-parallel operation. We describe the design and main implementation features of the current Aurora system, and present some experimental results. For a range of benchmarks, Aurora on a 20-processor Sequent Symmetry is 4 to 7 times faster than Quintus Prolog on a Sun 3/75. Good performance is also reported on some large-scale Prolog applications.

12.
In this paper we investigate the relation between architectural support for Prolog and performance. We show that partial tag support performs as well as full support, yet reduces execution time by only approximately 10%. With respect to special addressing modes, auto address modification (post/pre increment and decrement on loads and stores) yields a cycle reduction of only approximately 6%, and the introduction of a single shadow register set yields around 8%. Combining these optimizations, a performance gain of 20 to 25% can be achieved, depending on the memory system. Using VLIW techniques, which exploit instruction-level parallelism, the performance can be doubled using three processing elements. Two processing elements already provide a significant speedup, but the use of four processing elements is not justified if we compare the gain in performance with the cost of the extra hardware. In general we observe only a small performance improvement (around 20%) when moving from RISC to special-purpose RISC architectures, an improvement which can also be achieved by applying advanced compiler technology, such as compiler optimization, optimizations for the WAM, and optimal scheduling techniques for VLIW architectures. Unfortunately these hardware and software effects do not add up, as a better compiler reduces the effect of hardware support. Finally, the cycle time is essential for comparing the performance of different (micro-)architectures, but it is not always clear what the effects of the different tradeoffs are on the maximum achievable cycle time.

13.
Soft UV-NIL was used as a replication technique to replicate sub-100 nm structures. The aim of this work is stamp production and the replication of structures with dimensions smaller than 100 nm in a simple manner. Composite stamps composed of two layers, a thin hard PDMS layer supported by a thick soft PDMS (s-PDMS) layer, are compared to common s-PDMS stamps with regard to resolution, using a Siemens star (starburst pattern) as the test structure. The master is fabricated by electron beam lithography in a 140 nm thick PMMA resist layer. The stamp is molded directly from the structured resist, without any additional anti-sticking treatment; the resist thickness therefore determines the aspect ratio, which is 1.5 at the resolution limit. The replication is done in a UV-curing cycloaliphatic epoxy material. The employed test structure provides good comparability, shows the resolution limit at a glance, and integrates a smooth transition from micro- to nanostructures. It is therefore a well-suited structure for characterizing UV-NIL.

14.
We study the hardness of approximation of clause-minimum and literal-minimum representations of pure Horn functions in \(n\) Boolean variables. We show that unless P=NP, it is not possible to approximate in polynomial time the minimum number of clauses and the minimum number of literals of pure Horn CNF representations to within a factor of \(2^{\log^{1-o(1)} n}\). This is the case even when the inputs are restricted to pure Horn 3-CNFs with \(O(n^{1+\varepsilon})\) clauses, for some small positive constant \(\varepsilon\). Furthermore, we show that even allowing sub-exponential time computation, it is still not possible to obtain constant-factor approximations for such problems unless the Exponential Time Hypothesis turns out to be false.

15.
By utilizing the high gas permeability of polydimethylsiloxane (PDMS), a simple syringe-assisted pumping method was introduced. A dead-end microfluidic channel was partially surrounded by an embedded microchamber, with a thin PDMS wall isolating the dead-end channel from the microchamber. A syringe was connected to the microchamber port by a short tube, and the syringe plunger was manually pulled out to generate low pressure inside the microchamber. When sample liquid was loaded in the inlet port, air trapped in the dead-end channel would diffuse into the surrounding microchamber through the PDMS wall, creating an instantaneous pumping of the liquid inside the dead-end channel. By only pulling the syringe manually, a constant low flow at a rate ranging from 0.089 to 4 nl/s was realized as a function of two key parameters: the PDMS wall thickness and the overlap area between the dead-end channel and the surrounding microchamber. This method enabled point-of-care pumping without pre-evacuating the PDMS devices in a bulky vacuum chamber.

16.
CARMEL-2 is a high-performance VLSI uniprocessor, tuned for Flat Concurrent Prolog (FCP). CARMEL-2 shows an almost 5-fold speedup over its predecessor, CARMEL-1, and achieves 2,400 KLIPS executing append. This high execution rate was gained as a result of an optimized design, based on an extensive architecture-oriented execution analysis of FCP and the lessons learned with CARMEL-1. CARMEL-2 is a RISC processor in its character and performance. The instruction set includes only 29 carefully selected instructions. The 10 special instructions, the prudent implementation and pipeline scheme, as well as sophisticated mechanisms such as intelligent dereference, distinguish CARMEL-2 as a RISC processor for FCP.

17.
In this paper, we study two new spline methods for the Birkhoff interpolation problem proposed in Costabile and Longo (Calcolo 47:49–63, 2010). Our methods are based on cubic splines and quintic splines, respectively. The new methods are easy to implement. They are effective not only in approximating the original function but also in approximating its derivatives. The respective remainders and error bounds are given. Numerical tests are performed; the results confirm the theoretical analysis and show that our methods are very practical.
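The paper's Birkhoff-based constructions are not reproduced here, but the general point, that a piecewise-cubic spline approximates both a function and its derivative, can be sketched with a standard cubic Hermite interpolant (the \(\sin\) node data are an illustrative example, not from the paper):

```python
import math
from bisect import bisect_right

def hermite_spline(xs, ys, dys):
    """Piecewise cubic Hermite interpolant through (xs[i], ys[i]) with
    slopes dys[i]; returns the spline and its first derivative."""
    def segment(x):
        i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
        h = xs[i + 1] - xs[i]
        return i, h, (x - xs[i]) / h

    def s(x):
        i, h, t = segment(x)
        return ((2*t**3 - 3*t**2 + 1) * ys[i]
                + (t**3 - 2*t**2 + t) * h * dys[i]
                + (-2*t**3 + 3*t**2) * ys[i + 1]
                + (t**3 - t**2) * h * dys[i + 1])

    def ds(x):
        i, h, t = segment(x)
        return ((6*t**2 - 6*t) / h * ys[i]
                + (3*t**2 - 4*t + 1) * dys[i]
                + (-6*t**2 + 6*t) / h * ys[i + 1]
                + (3*t**2 - 2*t) * dys[i + 1])

    return s, ds

# interpolate sin on [0, pi] from 9 nodes with exact slopes
xs = [i * math.pi / 8 for i in range(9)]
s, ds = hermite_spline(xs, [math.sin(x) for x in xs], [math.cos(x) for x in xs])
```

With only nine nodes, both the function value and the derivative are recovered to within a few parts in a thousand between the nodes.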

18.
This paper proposes a distributed address configuration scheme for a Mobile Ad hoc Network (MANET). The architecture for a MANET, the algorithm for constructing a MANET, and the algorithm for calculating a unique address space for assignment are proposed. In this architecture, a common node has a unique address space for assignment, and an isolated node can acquire a unique address from a neighboring common node without performing duplicate address detection. In this way, the address configuration task is distributed among the common nodes. In this scheme, the control packets used for address configuration are exchanged within a one-hop scope, so scalability is enhanced. This paper also presents an address recovery algorithm that can effectively retrieve the address resources released by failed nodes, and a MANET merging/partitioning algorithm that can ensure a node's address uniqueness. This paper compares the performance of the proposed scheme with existing schemes, including strong duplicate address detection and the prime dynamic host configuration protocol; the comparative results show that the address configuration cost of the proposed scheme is lower and the delay is shorter.
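The core idea, that each common node owns a disjoint block of the address space and can carve off part of it for a joining neighbor so that no duplicate address detection is ever needed, can be sketched with a simplified binary-split model (this is an illustration of the principle, not the paper's exact algorithm):

```python
class Node:
    """A node owning a disjoint block [lo, hi) of the address space.
    Splitting blocks on assignment guarantees global uniqueness with
    no duplicate address detection."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.addr = lo                # the node uses the first address itself
        self.free = (lo + 1, hi)      # remaining block available to hand out

    def assign_neighbor(self):
        lo, hi = self.free
        if hi - lo < 1:
            raise RuntimeError("address block exhausted")
        mid = (lo + hi) // 2
        self.free = (lo, mid)         # keep the lower half
        return Node(mid, hi)          # the neighbor receives the upper half
```

Because every hand-off splits a block the assigner exclusively owns, any two addresses in the network are distinct by construction; address recovery then amounts to merging a failed node's block back into a live neighbor's.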

19.
20.
Inductive logic programming   (total citations: 3; self-citations: 0; citations by others: 3)
A new research area, Inductive Logic Programming, is presently emerging. While inheriting various positive characteristics of the parent subjects of Logic Programming and Machine Learning, it is hoped that the new area will overcome many of the limitations of its forebears. The background to present developments within this area is discussed and various goals and aspirations for the increasing body of researchers are identified. Inductive Logic Programming needs to be based on sound principles from both Logic and Statistics. On the side of statistical justification of hypotheses, we discuss the possible relationship between Algorithmic Complexity theory and Probably-Approximately-Correct (PAC) Learning. In terms of logic, we provide a unifying framework for Muggleton and Buntine's Inverse Resolution (IR) and Plotkin's Relative Least General Generalisation (RLGG) by rederiving RLGG in terms of IR. This leads to a discussion of the feasibility of extending the RLGG framework to allow for the invention of new predicates, previously discussed only within the context of IR.
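Plotkin's least general generalisation is anti-unification: two terms are compared top-down, and each distinct mismatched pair of subterms is replaced by a fresh variable, with the same pair always mapping to the same variable. A compact sketch over tuple-encoded first-order terms (the encoding and variable names are illustrative):

```python
def lgg(t1, t2, subst=None):
    """Least general generalisation of two terms.  A term is an atom
    (string) or a tuple (functor, arg1, ...).  Mismatched pairs become
    variables X0, X1, ...; a repeated pair reuses its variable."""
    if subst is None:
        subst = {}
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        # same functor and arity: generalise argument-wise
        return (t1[0],) + tuple(lgg(a, b, subst) for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in subst:
        subst[(t1, t2)] = f"X{len(subst)}"   # fresh variable for a new pair
    return subst[(t1, t2)]
```

For example, generalising f(a, a) and f(b, b) yields f(X0, X0), not f(X0, X1): reusing the variable for the repeated pair (a, b) is exactly what makes the generalisation *least* general.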


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号