Similar Documents
20 similar documents found (search time: 31 ms)
1.
Solving the subset-sum problem with a light-based device
We propose an optical computational device that uses light rays to solve the subset-sum problem. The device has a graph-like representation, and the light traverses it by following the routes given by the connections between nodes. The nodes are connected by arcs in a special way that lets us generate all possible subsets of the given set. To each arc we assign either a number from the given set or a predefined constant. When light passes through an arc, it is delayed by the amount of time indicated by the number assigned to that arc. At the destination node we check whether there is a ray whose total delay equals the target value of the subset-sum problem (plus some constants). The proposed optical solution solves an NP-complete problem in time proportional to the target sum, but requires an exponential amount of energy.
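To make the delay-encoding idea concrete, here is a minimal Python sketch that simulates it in software rather than optics: every subset corresponds to one "ray", each chosen element contributes its value as a delay, each skipped arc contributes a small constant delay, and a solution exists exactly when some ray arrives at the target plus the accumulated constants. The function name and the constant `epsilon` are illustrative, not from the paper.

```python
from itertools import product

def subset_sum_by_delays(values, target, epsilon=1):
    """Simulate the delay encoding: each arc contributes either the set
    value (element chosen) or a small constant delay epsilon (element
    skipped), so every ray's total delay identifies one subset."""
    n = len(values)
    for choices in product([0, 1], repeat=n):      # one "ray" per subset
        delay = sum(v if c else epsilon for v, c in zip(values, choices))
        # a ray solves the instance if its delay equals the target plus
        # the constant delays accumulated on the skipped arcs
        skipped = choices.count(0)
        if delay == target + skipped * epsilon:
            return [v for v, c in zip(values, choices) if c]
    return None

print(subset_sum_by_delays([3, 5, 7, 11], 15))  # -> [3, 5, 7]
```

The software simulation enumerates the exponentially many rays one by one; the point of the optical device is that all rays propagate simultaneously, trading time for energy.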

2.
The computational problems that scientists face are rapidly escalating in size and scope. Moreover, the computer systems used to solve these problems are becoming significantly more complex than the familiar, well-understood sequential model on their desktops. While it is possible to re-train scientists to use emerging high-performance computing (HPC) models, it is much more effective to provide them with a higher-level programming environment that has been specialized to their particular domain. By fostering interaction between HPC specialists and domain scientists, problem-solving environments (PSEs) provide a collaborative environment: scientists can focus on expressing their computational problem while the PSE and associated tools support mapping that domain-specific problem to a high-performance computing system. This article describes Arches, an object-oriented framework for building domain-specific PSEs. The framework was designed to support a wide range of problem domains and to be extensible to very different high-performance computing targets. To demonstrate this flexibility, two PSEs have been developed from the Arches framework to solve problems in two different domains on very different computing platforms. The Coven PSE supports parallel applications that require the large-scale parallelism found in cost-effective Beowulf clusters. In contrast, RCADE targets FPGA-based reconfigurable computing and was originally designed to aid NASA Earth scientists studying satellite instrument data.

3.
Watson-Crick L systems are language-generating devices making use of Watson-Crick complementarity, a fundamental concept of DNA computing. These devices are Lindenmayer systems enriched with a trigger for complementarity transition: if a "bad" string is obtained, then the derivation continues with its complement, which is always a "good" string. Membrane systems, or P systems, are distributed parallel computing models abstracted from the structure and functioning of living cells. In this paper, we first interpret the known results on the computational completeness of Watson-Crick E0L systems in terms of membrane systems; then we introduce a related way of controlling the evolution in P systems, using the triggers not in the operational manner (i.e., turning to the complement in a "bad" configuration), but in a "Darwinian" sense: if a "bad" configuration is reached, then the system "dies", that is, no result is obtained. The triggers (actually, the checkers) are given as finite-state multiset automata. We investigate the computational power of these P systems. Their computational completeness is proved, even for systems with non-cooperative rules, working in the non-synchronized way, and controlled by only two finite-state checkers; if the systems work in the synchronized mode, then one checker per system suffices to obtain computational completeness.
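As a toy illustration of the complementarity trigger only (not of the paper's P-system construction), the following Python sketch runs a Watson-Crick D0L-style derivation: parallel rewriting, then a switch to the letterwise complement whenever the resulting string is "bad". The alphabet, rules, and badness predicate are invented for the example.

```python
# Letters a, b with complements A, B; a string is "bad" when barred
# (uppercase) letters outnumber unbarred ones, and the derivation then
# continues with the letterwise complement. Rules are hypothetical.
COMPLEMENT = {"a": "A", "A": "a", "b": "B", "B": "b"}
RULES = {"a": "ab", "b": "A", "A": "aa", "B": "b"}

def is_bad(word):
    barred = sum(1 for ch in word if ch.isupper())
    return barred > len(word) - barred

def step(word):
    word = "".join(RULES[ch] for ch in word)   # parallel 0L rewriting
    if is_bad(word):                           # complementarity trigger
        word = "".join(COMPLEMENT[ch] for ch in word)
    return word

w = "ab"
for _ in range(5):
    w = step(w)
    print(w)
```

In the "Darwinian" variant studied in the paper, the `is_bad` branch would instead abort the computation with no result, rather than repair the string.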

4.
In this paper we propose a genetic algorithm (GA) for solving the DNA fragment assembly problem on a computational grid. The algorithm, named GrEA, is a steady-state GA that uses a panmictic population and computes parallel function evaluations asynchronously. We have implemented GrEA on top of the Condor system and used it to solve the DNA assembly problem, an NP-hard combinatorial optimization problem that is growing in importance and complexity as more research centers become involved in sequencing new genomes. While previous works on this problem have usually faced instances of about 30 K base pairs (bps), we tackle here a 77 K bps instance to show how a grid system can move research forward. After analyzing the basic grid algorithm, we study an improvement method to further enhance its scalability. Then, using a grid of up to 150 computers, we achieve time reductions from tens of days down to a few hours and obtain near-optimal solutions for the 77 K bps instance (773 fragments). We conclude that our proposal is a promising approach for exploiting a grid system to solve large DNA fragment assembly instances, and for learning more about grid metaheuristics as a new class of algorithms for truly challenging problems.
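A sequential, single-machine sketch of the steady-state GA loop is shown below, with an overlap-based fitness on fragment orderings. The grid-specific parts of GrEA (asynchronous parallel evaluations on Condor) are not reproduced, and the crossover and mutation operators are generic placeholders, not necessarily the paper's.

```python
import random

def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def fitness(order, frags):
    # total overlap along the layout; higher is better
    return sum(overlap(frags[i], frags[j]) for i, j in zip(order, order[1:]))

def steady_state_ga(frags, pop_size=30, iters=2000):
    n = len(frags)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]  # panmictic population
    for _ in range(iters):
        a, b = random.sample(pop, 2)                 # parents
        cut = random.randrange(1, n)                 # order-preserving crossover
        child = a[:cut] + [g for g in b if g not in a[:cut]]
        i, j = random.sample(range(n), 2)            # swap mutation
        child[i], child[j] = child[j], child[i]
        worst = min(range(pop_size), key=lambda k: fitness(pop[k], frags))
        if fitness(child, frags) > fitness(pop[worst], frags):
            pop[worst] = child                       # steady-state replacement
    return max(pop, key=lambda o: fitness(o, frags))

frags = ["ACGTAC", "TACGGA", "GGATTC", "TTCAAG"]
best = steady_state_ga(frags)
print(best, fitness(best, frags))
```

In the grid version, the fitness evaluations inside the loop are farmed out to remote workers and harvested asynchronously, which is what makes the panmictic steady-state design attractive: no generation barrier has to wait for the slowest machine.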

5.
Summary. In this paper, we prove a lower bound on the number of rounds required by a deterministic distributed protocol for broadcasting a message in radio networks whose processors do not know the identities of their neighbors. This assumption captures the main characteristic of mobile and wireless environments [3], namely the instability of the network topology. For any distributed broadcast protocol Π, for any n, and for any D ≤ n/2, we exhibit a network G with n nodes and diameter D such that the number of rounds needed by Π to broadcast a message in G is Ω(D log n). The result still holds even if the processors in the network use different programs and know n and D. We also consider the version of the broadcast problem in which an arbitrary number of processors simultaneously issue an identical message that has to be delivered to the other processors. In this case we prove that, even assuming the processors know the network topology, Ω(n) rounds are required to solve the problem on a complete network (D=1) with n processors. Received: August 1994 / Accepted: August 1996

6.
The formal verification of a Spiking Neural P System (SN P System, for short) designed to solve a given problem is usually a hard task. Basically, the verification process consists of the search for invariant formulae which, once their validity is proved, show the right answer to the problem. Even though no general methodology exists for verifying SN P Systems, in (Păun et al., Int J Found Comput Sci 17(4):975–1002, 2006) a tool based on the transition diagram of the P system was developed to help the researcher search for invariant formulae. In this paper we present a software tool which generates the transition diagram of an SN P System automatically, so it can be considered an assistant for the formal verification of such computational devices.
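The core of such a tool can be pictured as a breadth-first exploration of the reachable configurations. The sketch below is generic Python: a deliberately toy successor function (a single neuron holding n spikes, with made-up firing rules) stands in for real SN P System semantics.

```python
from collections import deque

def successors(n):
    """Toy stand-in for SN P System semantics: a single neuron with n
    spikes, one rule consuming two spikes and one producing a spike
    (bounded so the diagram stays finite)."""
    out = set()
    if n >= 2:
        out.add(n - 2)
    if n <= 4:
        out.add(n + 1)
    return out

def transition_diagram(start, succ):
    """Breadth-first construction of the reachable transition diagram."""
    edges, seen, queue = [], {start}, deque([start])
    while queue:
        c = queue.popleft()
        for nxt in succ(c):
            edges.append((c, nxt))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return edges

for src, dst in transition_diagram(3, successors):
    print(src, "->", dst)
```

Once the diagram is available, candidate invariants can be checked mechanically against every edge, which is exactly the assistance the verification process needs.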

7.
This paper describes a new approach to the detection of metamorphic computer viruses through the algebraic specification of an assembly language. Metamorphic computer viruses apply a variety of syntax-mutating, behaviour-preserving metamorphoses to their code in order to defend themselves against detection methods based on static analysis. An overview of these metamorphoses is given. Then, in order to identify behaviourally equivalent instruction sequences, the syntax and semantics of a subset of the IA-32 assembly language instruction set are specified formally using OBJ, an algebraic specification formalism and theorem prover based on order-sorted equational logic. The concepts of equivalence and semi-equivalence are defined formally, and a means of proving equivalence from semi-equivalence is given. The OBJ specification is shown to be useful for proving the equivalence or semi-equivalence of IA-32 instruction sequences by applying reductions, i.e. sequences of equational rewrites in OBJ. These proof methods are then applied to fragments of two different metamorphic computer viruses, Win95/Bistro and Win9x.Zmorph.A, in order to prove their (semi-)equivalence. Finally, the application of these methods to the detection of metamorphic computer viruses in general is discussed.
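The flavour of (semi-)equivalence checking can be conveyed with a toy interpreter rather than OBJ: interpret each instruction sequence as a function on a small register state and compare the results, either on the whole state (equivalence) or on a chosen subset of registers (semi-equivalence). Note that testing on concrete states only suggests equivalence; the paper's equational rewriting yields actual proofs.

```python
def run(seq, state):
    """Interpret a tiny three-opcode fragment of an assembly language
    (a toy model, not the paper's IA-32 specification)."""
    s = dict(state)
    for op, dst, src in seq:
        val = s[src] if isinstance(src, str) else src
        if op == "mov":
            s[dst] = val
        elif op == "add":
            s[dst] += val
        elif op == "xor":
            s[dst] ^= val
    return s

# Two syntactically different ways of zeroing eax -- a classic
# metamorphic mutation: "mov eax, 0" versus "xor eax, eax".
p1 = [("mov", "eax", 0)]
p2 = [("xor", "eax", "eax")]

init = {"eax": 7, "ebx": 3}
print(run(p1, init) == run(p2, init))                 # equivalence on the full state
print(run(p1, init)["eax"] == run(p2, init)["eax"])   # semi-equivalence on eax only
```

A detector built on this idea normalizes mutated instruction sequences to a common behavioural form, so that syntax-mutating metamorphoses no longer hide the underlying virus body.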

8.
The task of balancing assembly lines is of considerable industrial importance. It consists of assigning operations to workstations in a production line in such a way that (1) no assembly precedence constraint is violated, (2) no workstation in the line takes longer than a predefined cycle time to perform all tasks assigned to it, and (3) as few workstations as possible are needed to perform all the tasks in the set. This paper presents a new multiple-objective simulated annealing (SA) algorithm for simple (line) and U-type assembly line balancing problems, with the aim of maximizing the "smoothness index" and the "line performance" (or minimizing the number of workstations). The proposed algorithm makes use of task assignment rules in constructing feasible solutions. The algorithm is tested and compared on test problems from the literature, and it found the optimal solutions for each problem in short computational times. A detailed performance analysis of the selected task assignment rules is also given in the paper.
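A compact Python sketch of the SA scheme follows (illustrative, not the paper's exact algorithm): a solution is a precedence-feasible task sequence, decoded into stations by greedy filling up to the cycle time, with the number of workstations as the (single) objective and a feasibility-preserving swap as the neighbourhood move. The task times, precedences, and cycle time are made up.

```python
import math, random

times = {1: 4, 2: 3, 3: 5, 4: 2, 5: 4}     # hypothetical task times
prec  = {3: [1], 4: [2], 5: [3, 4]}        # task -> its predecessors
CYCLE = 8                                  # cycle time constraint (2)

def decode(seq):
    """Fill stations greedily up to the cycle time."""
    stations, load = [[]], 0
    for t in seq:
        if load + times[t] > CYCLE:
            stations.append([]); load = 0
        stations[-1].append(t); load += times[t]
    return stations

def feasible(seq):
    """Check precedence constraint (1)."""
    pos = {t: i for i, t in enumerate(seq)}
    return all(pos[p] < pos[t] for t, ps in prec.items() for p in ps)

def anneal(seq, temp=10.0, cooling=0.995, iters=5000):
    best = cur = seq
    for _ in range(iters):
        i, j = sorted(random.sample(range(len(cur)), 2))
        cand = cur[:i] + [cur[j]] + cur[i+1:j] + [cur[i]] + cur[j+1:]
        if not feasible(cand):
            continue
        delta = len(decode(cand)) - len(decode(cur))
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            cur = cand                      # accept (possibly worse) move
        if len(decode(cur)) < len(decode(best)):
            best = cur
        temp *= cooling
    return best

print(decode(anneal([1, 2, 3, 4, 5])))
```

The paper's algorithm additionally weighs the smoothness index in the acceptance decision and uses task assignment rules, rather than blind swaps, to construct feasible neighbours.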

9.
This paper generalizes the widely used Nelder and Mead (Comput J 7:308–313, 1965) simplex algorithm to parallel processors. Unlike most previous parallelization methods, which parallelize the tasks required to compute a specific objective function given a vector of parameters, our parallel simplex algorithm applies parallelization at the parameter level: it assigns to each processor a separate vector of parameters corresponding to a point on a simplex. The processors then conduct the simplex search steps for an improved point, communicate the results, and a new simplex is formed. The advantage of this method is that the algorithm is generic and can be applied, without rewriting computer code, to any optimization problem to which the non-parallel Nelder–Mead method is applicable. The method is also easily scalable to any degree of parallelization up to the number of parameters. In a series of Monte Carlo experiments, we show that this parallel simplex method yields computational savings of up to three times the number of processors in some experiments.
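The parameter-level idea can be sketched as follows: each worker holds one vertex, reflects it through the centroid of the remaining vertices, and reports the better of the two points; the results are then gathered and the simplex is rebuilt. This is a simplification (reflection only, no expansion, contraction, or shrink) on an assumed test function, not the authors' full algorithm.

```python
import numpy as np
from multiprocessing import Pool

def rosenbrock(x):
    return float(sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2))

def try_improve(args):
    """Worker step: reflect one vertex through the centroid of the
    others and return the better of the original and reflected points."""
    simplex, i = args
    centroid = np.delete(simplex, i, axis=0).mean(axis=0)
    reflected = centroid + (centroid - simplex[i])
    return i, min((simplex[i], reflected), key=rosenbrock)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    simplex = rng.normal(size=(4, 3))            # 4 vertices in R^3
    with Pool(4) as pool:                        # one worker per vertex
        for _ in range(200):
            for i, cand in pool.map(try_improve, [(simplex, i) for i in range(4)]):
                simplex[i] = cand                # communicate, rebuild simplex
    best = min(simplex, key=rosenbrock)
    print(best, rosenbrock(best))
```

Because the objective function is called unchanged inside each worker, the same driver applies to any problem the sequential Nelder–Mead handles, which is the genericity the abstract emphasizes.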

10.
Laboratory investigations have shown that a formal theory of fault tolerance will be essential to harness nanoscale self-assembly as a medium of computation. Several researchers have voiced the intuition that self-assembly phenomena are related to the field of distributed computing. This paper formalizes some of that intuition. We construct tile assembly systems that are able to simulate the solution of the wait-free consensus problem in some distributed systems. (As potential future work, this may allow binding errors in tile assembly to be analyzed, and managed, using positive results from distributed computing, as a "blockage" in our tile assembly model is analogous to a crash failure in a distributed computing model.) We also define a strengthening of the "traditional" consensus problem, to make explicit an expectation about consensus algorithms that is often implicit in the distributed computing literature. We show that a solution to this strengthened consensus problem can be simulated by a two-dimensional tile assembly model only for two processes, whereas a three-dimensional tile assembly model can simulate its solution in a distributed system with any number of processes.

11.
We formulate the problem of constructing a tree which is, on average, the nearest to a given set of trees. The notion of "nearest" is based on a conception of events such that counting their number makes it possible to distinguish each of the given trees from the desired one. These events are called divergence, duplication, loss, and transfer; other lists of events can also be considered. We propose an algorithm that solves this problem in time cubic in the input size, prove the correctness of the algorithm, and establish the cubic bound on its complexity.

12.
We investigate cellular automata whose internal inter-cell communication is bounded. The communication is quantitatively measured by the number of uses of the links between cells. We consider bounds on the sum of all communications of a computation as well as bounds on the maximal number of communications that may appear between each two cells. It is shown that even the weakest non-trivial device in question, that is, one-way cellular automata in which any two neighboring cells may communicate only a constant number of times, accepts rather complicated languages. We investigate the computational capacity of the devices in question and prove an infinite strict hierarchy depending on the bound on the total number of communications during a computation. Despite the sparse communication, even for the weakest devices the undecidability of several problems is derived by reduction of Hilbert's tenth problem. Finally, the question of whether a given real-time one-way cellular automaton belongs to the weakest class is shown to be undecidable. This result can be used to answer an open question.

13.
Part Batching and Scheduling in a Flexible Cell to Minimize Setup Costs
In this paper we consider the problem of batching parts and scheduling their operations in flexible manufacturing cells. We consider the case in which there is only one processor and no more than k parts may be present in the system at the same time. The objective is to minimize the total number of setups, given that each part requires a sequence of operations and each operation requires a given tool. We prove that the problem is NP-hard even for k=3, and we develop a branch-and-price scheme for its solution. Moreover, we report on extensive computational experience. Finally, we analyze some special cases and related problems.

14.
International Journal of Computer Mathematics, 2012, 89(12): 2371–2386

This paper introduces a parallel multigrid method for solving the Steklov eigenvalue problem, based on the multilevel correction method. Instead of the usual costly approach of solving the Steklov eigenvalue problem directly on some fine space, the new method solves boundary value problems on a series of multilevel finite element spaces together with Steklov eigenvalue problems on a very low-dimensional space. The linear boundary value problems are solved by multigrid iteration steps. We prove that the computational work of this new scheme is truly optimal, the same as that of solving the corresponding linear boundary value problem. Moreover, this multigrid scheme scales well when parallel computing techniques are used. Numerical experiments are presented to validate our theoretical analysis.

15.
16.
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, so their analysis and control design require increasingly demanding computational methods as the network size and the complexity of node systems and interactions grow. Finding scalable computational methods for the distributed control design of large-scale networks is therefore a challenging problem. In this paper, we investigate the robust distributed stabilisation problem for large-scale nonlinear multi-agent systems (MASs, for short) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some existing LMI-based results for MASs, both by overcoming their computational limits and by extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirement in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the network nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
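For a sense of the LMI machinery involved, here is a minimal sketch (using cvxpy in Python rather than the MATLAB toolbox mentioned above) of the simplest such condition, a Lyapunov LMI for a single node's linear dynamics: find P ≻ 0 with AᵀP + PA ≺ 0. The matrix A and the tolerance are placeholders; the paper's distributed conditions couple many such blocks and are considerably more involved.

```python
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # hypothetical stable node dynamics

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [
    P >> eps * np.eye(2),                      # P positive definite
    A.T @ P + P @ A << -eps * np.eye(2),       # Lyapunov inequality
]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()

# P certifies that V(x) = x^T P x decreases along trajectories of dx/dt = Ax
print(prob.status)
print(P.value)
```

Solving one small LMI per node, instead of one monolithic LMI for the whole network, is precisely the workload-sharing argument the abstract makes for the distributed architecture.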

17.
We consider the problem of controlling the state of a two-level quantum system (quantum bit) via an externally applied electromagnetic field. The model is a bilinear right-invariant system whose state varies on the Lie group of 2×2 special unitary matrices. We study the topological structure of the reachable sets. If two or more independent controls are used, then every state can be achieved in arbitrary time. However, this is no longer true if only one control is available, and in this case we give an exact characterization of the states reachable in arbitrary time. We prove small-time local controllability for any state and the existence of a critical time, namely the smallest time after which every transfer of state is possible, and we provide upper and lower bounds for this time. The mathematical development is motivated by the problem of manipulating the state of a quantum bit: every transfer of state may be interpreted as a quantum logic operation, and not every logic operation can be obtained in arbitrary time. The analysis we present provides information about the feasibility of a given operation as well as estimates for the speed of a quantum computer.
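A small numerical sketch of such a bilinear model (an assumed form dX/dt = −i(a σz + u(t) σx)X on SU(2), not necessarily the paper's exact parametrisation) shows how piecewise-constant controls are applied by composing matrix exponentials; since the Hamiltonian is traceless, the evolution stays in SU(2).

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(controls, dt=0.05, drift=1.0):
    """Integrate dX/dt = -i(drift*sz + u(t)*sx) X with piecewise-constant
    control values, starting from the identity."""
    X = np.eye(2, dtype=complex)
    for u in controls:
        H = drift * sz + u * sx                   # drift + single control
        X = expm(-1j * H * dt) @ X                # exact step for constant u
    return X

rng = np.random.default_rng(1)
X = evolve(rng.uniform(-2, 2, size=200))
print(np.round(X, 3))
print("unitary:", np.allclose(X.conj().T @ X, np.eye(2)))
```

With a single control u(t) the drift term cannot be switched off, which is the geometric source of the critical-time phenomenon: some target unitaries simply cannot be reached before the drift has rotated the state far enough.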

18.
The process of gene assembly in ciliates, an ancient group of organisms, is one of the most complex instances of DNA manipulation known in any organism. This process is fascinating from the computational point of view, with ciliates even using the linked-list data structure. Three molecular operations (ld, hi, and dlad) have been postulated for the gene assembly process. We initiate here the study of parallelism in this process, raising several natural questions, such as when a number of operations can be applied in parallel to a gene pattern, or how many steps are needed to assemble (in parallel) a micronuclear gene. In particular, this gives rise to a new measure of complexity for the process of gene assembly in ciliates. "One of the oldest forms of life on Earth has been revealed as a natural born computer programmer."

19.
Heterogeneous computing (HC) systems, composed of interconnected machines with varied computational capabilities, often operate in environments where the estimates of task execution times may be inaccurate. Makespan (defined as the completion time for an entire set of tasks) is often the performance feature to be optimized in such systems. Resource allocation is typically performed based on estimates of the computation time of each task on each class of machines; hence, it is important that the makespan be robust against errors in these estimates. In this research, we study the problem of finding a static mapping of tasks that maximizes the robustness of the makespan against errors in task execution time estimates, given an overall makespan constraint. Two variations of this basic problem are considered: (1) a given, fixed set of machines, and (2) an HC system to be constructed from a set of machines within a dollar cost constraint. Six heuristic techniques for each of these variations are presented and evaluated.
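One common formalisation of this robustness notion (not necessarily the paper's exact metric) is sketched below: after a greedy minimum-completion-time mapping, each machine's robustness radius measures how far its tasks' execution-time estimates can collectively err (in Euclidean norm) before that machine's finishing time exceeds the makespan constraint τ, and the mapping's robustness is the smallest such radius. The ETC values and τ are made up.

```python
import math

etc = [[4, 6], [3, 5], [8, 4], [2, 7], [5, 5]]  # estimated time: task i on machine j
TAU = 14.0                                       # makespan constraint

def greedy_map(etc, n_machines=2):
    """Minimum-completion-time mapping heuristic."""
    loads = [0.0] * n_machines
    assign = [[] for _ in range(n_machines)]
    for i, row in enumerate(etc):
        j = min(range(n_machines), key=lambda m: loads[m] + row[m])
        loads[j] += row[j]
        assign[j].append(i)
    return loads, assign

def robustness(loads, assign, tau):
    """Smallest robustness radius over machines: slack before the
    constraint tau is violated, per unit Euclidean estimation error."""
    radii = [(tau - loads[j]) / math.sqrt(len(assign[j]))
             for j in range(len(loads)) if assign[j]]
    return min(radii)

loads, assign = greedy_map(etc)
print(loads, assign, robustness(loads, assign, TAU))
```

A robustness-maximizing heuristic then prefers, among mappings meeting the makespan constraint, the one whose tightest machine retains the most slack, rather than simply the mapping with the smallest estimated makespan.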

20.
The Permutation Flowshop Scheduling Problem with makespan objective (PFSP-M) is known to be NP-hard for more than two machines, and literally hundreds of works over the last decades have proposed exact and approximate algorithms to solve it. These works, of computational/experimental nature, show that the PFSP-M is also empirically hard, in the sense that optimal or quasi-optimal sequences statistically represent a very small fraction of the space of feasible solutions and that there are big differences among the corresponding makespan values. The vast majority of these works assume that (a) processing times are not job- and/or machine-correlated, and (b) all machines are initially available. However, some works have found that the problem becomes almost trivial (i.e. almost every sequence yields an optimal or quasi-optimal solution) if one of these assumptions is dropped. To the best of our knowledge, no theoretical or experimental explanation has been proposed for this rather peculiar fact. Our hypothesis is that, under certain conditions of machine availability or correlated processing times, the performance of a given sequence in a flowshop is largely determined by only one stage, thus effectively transforming the flowshop layout into a single machine. Since the single-machine scheduling problem with makespan objective is trivial (all feasible sequences are optimal), it would follow that, under these conditions, the equivalent PFSP-M is almost trivial. To address this working hypothesis from a general perspective, we investigate conditions that allow reducing a permutation flowshop scheduling problem to a single-machine scheduling problem, focusing on the two most common objectives in the literature, namely makespan and flowtime. Our work combines theoretical and computational analysis: several properties are derived to prove the conditions for an exact (theoretical) equivalence, together with an extensive computational evaluation to establish an empirical equivalence.
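The makespan computation underlying the PFSP-M is a simple dynamic programming recursion over the sequence: the i-th job of the sequence finishes on machine j at C[i][j] = max(C[i−1][j], C[i][j−1]) + p[job][j]. A minimal Python sketch with made-up processing times:

```python
from itertools import permutations

def makespan(seq, p):
    """Completion time of the last job on the last machine for a
    permutation seq, with p[job][machine] processing times."""
    n_machines = len(p[0])
    C = [[0.0] * n_machines for _ in seq]
    for i, job in enumerate(seq):
        for j in range(n_machines):
            ready = max(C[i-1][j] if i else 0.0,   # machine j free
                        C[i][j-1] if j else 0.0)   # job done on machine j-1
            C[i][j] = ready + p[job][j]
    return C[-1][-1]

p = [[3, 2, 4], [1, 4, 2], [2, 3, 3]]    # 3 jobs x 3 machines
print(makespan([0, 1, 2], p))            # makespan of one permutation
print(min(makespan(s, p) for s in permutations(range(3))))  # optimum by enumeration
```

The reduction the paper investigates appears directly in this recursion: if one machine j* dominates, the max is always attained on that machine, so the makespan collapses to a constant plus the total load of machine j*, which is the same for every permutation, exactly the single-machine triviality the hypothesis predicts.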

