Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper presents algorithms for multiterminal net channel routing where multiple interconnect layers are available. Major improvements are possible if wires are able to overlap, and our generalized main algorithm allows overlap, but only on every Kth (K ≥ 2) layer. Our algorithm will, for a problem with density d on L layers, L ≥ K + 3, provably use at most three tracks more than optimal: ⌈(d + 1)/⌊L/K⌋⌉ + 2 tracks, compared with the lower bound of ⌈d/⌊L/K⌋⌉. Our algorithm is simple, has few vias, tends to minimize wire length, and could be used if different layers have different grid sizes. Finally, we extend our algorithm in order to obtain improved results for adjacent (K = 1) overlap: ⌈(d + 2)/⌊2L/3⌋⌉ + 5 tracks for L ≥ 7. This work was supported by the Semiconductor Research Corporation under Contract 83-01-035, by a grant from the General Electric Corporation, and by a grant at the University of the Saarland.
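A quick numeric sanity check of these bounds, reading the formulas with the ceiling/floor placement shown above (that placement is an assumption):

```python
from math import ceil

def tracks_used(d: int, L: int, K: int) -> int:
    """Tracks used by the every-Kth-layer overlap algorithm: ceil((d+1)/floor(L/K)) + 2."""
    return ceil((d + 1) / (L // K)) + 2

def lower_bound(d: int, L: int, K: int) -> int:
    """Density lower bound: ceil(d/floor(L/K))."""
    return ceil(d / (L // K))

for d, L, K in [(20, 8, 2), (35, 10, 2), (50, 12, 3)]:
    assert L >= K + 3                      # the regime in which the bound is claimed
    used, lb = tracks_used(d, L, K), lower_bound(d, L, K)
    print(f"d={d}, L={L}, K={K}: {used} tracks vs. lower bound {lb} (gap {used - lb})")
```

In each case the gap stays at most 3, matching the "at most three tracks more than optimal" claim.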

2.
It is shown that the translation of an open default into a modal formula ∀x(Lα(x) ∧ LMβ₁(x) ∧ … ∧ LMβₘ(x) → w(x)) gives rise to an embedding of open default systems into non-monotonic logics.
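In LaTeX, the translation as reconstructed above reads as follows (the names α for the prerequisite and βᵢ for the justifications are assumptions, since the original symbols did not survive extraction):

```latex
% An open default with prerequisite \alpha, justifications \beta_1,\dots,\beta_m,
% and consequent w:
%   \alpha(x) : \beta_1(x), \dots, \beta_m(x) \;/\; w(x)
% is translated into the modal formula
\[
  \forall x \,\bigl( L\alpha(x) \land LM\beta_1(x) \land \dots \land LM\beta_m(x)
  \rightarrow w(x) \bigr)
\]
% where L is the modal necessity operator and M its dual possibility operator.
```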

3.
Companies that provide crane-lorry services face the daily need to perform vehicle and driver allocation and scheduling. Many companies still do this manually due to the lack of suitable technologies. This manual approach is both time-consuming and inaccurate, and most probably will not lead to an optimized plan that reduces operational costs. In this paper, we describe the design of a system called the Crane Lorry Scheduling System (CLSS) that we have developed for the largest crane lorry company in Hong Kong. A crane lorry company provides lorries with different types of mounted crane equipment, together with drivers, to service different types of moving and lifting jobs. CLSS is a Web-based application that streamlines communication with customers, subcontractors and employees/lorry drivers. We modeled the lorry-assignment problem as a constraint-satisfaction problem (CSP), solved by an engine we call the Crane Lorry Optimizing Engine (CLOE). CLOE was designed to be easily customizable to match the needs and requirements of different crane lorry companies. We experimented with two versions of CLOE: regular CLOE, which finds best solutions, and X-CLOE, which finds optimal solutions. Results from our tests show that CLOE is faster and generates better-quality plans than the manual approach.
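The paper does not publish CLOE itself; the following is a minimal backtracking-CSP sketch of the lorry-assignment idea, where the job/lorry names, crane types, and the no-overlap constraint are all illustrative assumptions:

```python
from itertools import combinations

# Illustrative data: each job needs a crane type and a time window (start, end).
jobs = {
    "J1": {"crane": "mobile", "window": (8, 12)},
    "J2": {"crane": "tower",  "window": (9, 11)},
    "J3": {"crane": "mobile", "window": (13, 17)},
}
# Each lorry carries one mounted crane type.
lorries = {"L1": "mobile", "L2": "tower", "L3": "mobile"}

def consistent(assignment):
    """Crane types must match; one lorry cannot serve overlapping jobs."""
    for job, lorry in assignment.items():
        if lorries[lorry] != jobs[job]["crane"]:
            return False
    for (j1, l1), (j2, l2) in combinations(assignment.items(), 2):
        if l1 == l2:
            (s1, e1), (s2, e2) = jobs[j1]["window"], jobs[j2]["window"]
            if s1 < e2 and s2 < e1:  # time windows overlap
                return False
    return True

def backtrack(assignment, remaining):
    if not remaining:
        return assignment
    job, rest = remaining[0], remaining[1:]
    for lorry in lorries:
        assignment[job] = lorry
        if consistent(assignment):
            result = backtrack(assignment, rest)
            if result is not None:
                return result
        del assignment[job]
    return None

print(backtrack({}, list(jobs)))  # {'J1': 'L1', 'J2': 'L2', 'J3': 'L1'}
```

A production engine would add cost optimization and heuristics on top of this feasibility search; the sketch only shows the constraint-satisfaction core.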

4.
Many real-time embedded systems process event streams that are composed of a finite number of different event types. Each event type on the stream typically imposes a different workload on the system, and thus knowledge of possible correlations and dependencies between the different event types can be exploited to obtain tighter analytic performance bounds for the complete system. We propose an abstract stream model to characterize such an event stream. The model captures the needed information of all possible traces of a class of event streams; hence it can be used to obtain hard bounded worst-case and best-case analysis results for a system. We show how the proposed abstract stream model can be obtained from a concrete stream specification, and how it can be used for performance analysis. The applicability of our approach and its advantages over traditional worst-case performance analysis are shown in a case study of a multimedia application.

Ernesto Wandeler is a Ph.D. student at the Computer Engineering and Networks Laboratory of the Swiss Federal Institute of Technology, Zurich. His research interests include models, methods and tools for system-level performance analysis of real-time embedded systems. He holds a Dipl. El.-Ing. degree from ETH Zurich. In 2003, he received the Willi Studer Prize and the ETH Medal, both from the Swiss Federal Institute of Technology, Zurich. He is a student member of the IEEE and the ACM. Alexander Maxiaguine is a Ph.D. student at the Computer Engineering and Networks Laboratory of the Swiss Federal Institute of Technology, Zurich. His research interests include models and methods for system-level performance analysis and scheduling of embedded multiprocessor architectures, especially for real-time multimedia applications. Maxiaguine has an M.S. in electrical engineering from the Moscow Technical University of Communications and Informatics. He is a member of the IEEE and the ACM. Lothar Thiele is a full professor of computer engineering at the Swiss Federal Institute of Technology, Zurich. His research interests include models, methods and software tools for the design of embedded systems, embedded software and bioinspired optimization techniques. In 1986 he received the Dissertation Award of the Technical University of Munich; in 1987, the Outstanding Young Author Award of the IEEE Circuits and Systems Society; in 1988, the Browder J. Thompson Memorial Award of the IEEE; and in 2000-2001, the IBM Faculty Partnership Award. In 2004, he joined the German Academy of Natural Scientists Leopoldina.
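Hard bounds of this kind typically rest on counting worst-case event arrivals per type in any time window. As a loose illustration only (the trace, the window sweep, and the per-type counting are assumptions, not the paper's formalism), one can extract such a bound from a concrete trace:

```python
from collections import defaultdict

# Illustrative timestamped trace of typed events: (time, type).
trace = [(0, "a"), (2, "b"), (3, "a"), (7, "a"), (8, "b"), (9, "a"), (15, "a")]

def upper_arrival_curve(trace, delta):
    """Max number of events of each type in any half-open window [t, t + delta)."""
    bound = defaultdict(int)
    times = sorted(set(t for t, _ in trace))
    for t0 in times:  # it suffices to start candidate windows at event times
        counts = defaultdict(int)
        for t, typ in trace:
            if t0 <= t < t0 + delta:
                counts[typ] += 1
        for typ, c in counts.items():
            bound[typ] = max(bound[typ], c)
    return dict(bound)

print(upper_arrival_curve(trace, 5))  # {'a': 2, 'b': 1}
```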

5.
Workflow Management Systems (WFMSs) are often used in the context of business-to-business (B2B) integration as a base technology to implement integration processes across enterprises. In this context, the notion of distributed inter-organizational workflows is introduced to indicate the collaboration of enterprises on a process level. This notion requires the thorough examination presented in this article, since WFMSs were not designed with inter-enterprise distribution as one of their design goals. At a closer look, the proposed use of WFMSs in the context of B2B integration is often naïve and inappropriate, and consequently does not address the real requirements found in enterprises. Enterprises do not share common workflow definitions, let alone common workflow instance execution state, and have no intent to do so due to competitive knowledge protection. Furthermore, trading-partner-specific business rules within enterprises are not accounted for, leading to an unwanted explosion of workflow definitions. This article clarifies the notion of distributed inter-organizational workflows as well as private and public processes. Based on this definition, the appropriate use of WFMSs is shown in the context of an overall B2B integration solution that allows enterprises to protect their competitive knowledge while participating in B2B integration.

6.
In this paper, a novel neural network approach to real-time collision-free path planning of robot manipulators in a nonstationary environment is proposed, based on a biologically inspired neural network model for dynamic trajectory generation of a point mobile robot. The state space of the proposed neural network is the joint space of the robot manipulators, where the dynamics of each neuron is characterized by a shunting equation or an additive equation. The real-time robot path is planned through the varying neural activity landscape that represents the dynamic environment. The proposed model for robot path planning with safety consideration is capable of planning a real-time comfortable path that suffers from neither the too-close nor the too-far problem. The model algorithm is computationally efficient: the computational complexity is linear in the size of the neural network. The effectiveness and efficiency are demonstrated through simulation studies.
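A minimal sketch of the idea on a 2-D grid stand-in for the state space (the grid size, the shunting parameters A, B, D, and the obstacle/target layout are illustrative assumptions, not the paper's setup):

```python
import numpy as np

A, B, D = 10.0, 1.0, 1.0       # decay rate, upper activity bound, lower bound
dt, steps, n = 0.01, 500, 20
x = np.zeros((n, n))           # neural activity over the discretized state space
I = np.zeros((n, n))           # external input: target positive, obstacles negative
I[15, 15] = 100.0              # target
I[8:12, 8:12] = -100.0         # obstacle block

def neighbors_max(x):
    """Max positive activity among the 8 neighbors (lateral excitation)."""
    padded = np.pad(np.clip(x, 0, None), 1)
    shifts = [padded[i:i+n, j:j+n] for i in range(3) for j in range(3)
              if (i, j) != (1, 1)]
    return np.max(shifts, axis=0)

for _ in range(steps):
    exc = np.clip(I, 0, None) + neighbors_max(x)   # excitatory input
    inh = np.clip(-I, 0, None)                     # inhibitory input (obstacles)
    # Shunting equation: dx/dt = -A x + (B - x) exc - (D + x) inh
    x += dt * (-A * x + (B - x) * exc - (D + x) * inh)

# The path is read off by ascending the activity landscape from the start cell.
pos = (2, 2)
for _ in range(40):
    i, j = pos
    window = [(a, b) for a in range(max(0, i-1), min(n, i+2))
                     for b in range(max(0, j-1), min(n, j+2))]
    pos = max(window, key=lambda ab: x[ab])
    if pos == (15, 15):
        break
print("reached:", pos)
```

Because activity decays monotonically with (obstacle-avoiding) distance from the target, hill-climbing on the landscape yields a collision-free route.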

7.
This paper considers the problem of quantifying literary style and looks at several variables which may be used as stylistic fingerprints of a writer. A review of work done on the statistical analysis of change over time in literary style is then presented, followed by a look at a specific application area, the authorship of Biblical texts. David Holmes is a Principal Lecturer in Statistics at the University of the West of England, Bristol, with specific responsibility for co-ordinating the research programmes in the Department of Mathematical Sciences. He has taught literary style analysis to humanities students since 1983 and has published articles on the statistical analysis of literary style in the Journal of the Royal Statistical Society, History and Computing, and Literary and Linguistic Computing. He presented papers at the ACH/ALLC conferences in 1991 and 1993.

8.
Given a finite set E ⊂ ℝⁿ, the problem is to find clusters (subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation of the problem is given. The proposed algorithm operates on groups of points, called samplings (samplings may be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.
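The abstract does not give the exact update rule; the following loose sketch captures the idea of representing each cluster by a "sampling" (a small core of most-typical points) rather than a single center (data, core size, and the centrality criterion are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative data: three Gaussian blobs in R^2.
E = np.vstack([rng.normal(c, 0.4, size=(50, 2)) for c in [(0, 0), (4, 0), (2, 3)]])

k, core_size, iters = 3, 5, 10
samplings = [E[rng.choice(len(E), core_size, replace=False)] for _ in range(k)]

for _ in range(iters):
    # Assign each point to the sampling containing its nearest core point.
    dists = np.stack([np.min(np.linalg.norm(E[:, None] - S[None], axis=2), axis=1)
                      for S in samplings])            # shape (k, |E|)
    labels = np.argmin(dists, axis=0)
    # Update each sampling to the core_size most central points of its cluster.
    for j in range(k):
        cluster = E[labels == j]
        if len(cluster) >= core_size:
            center = cluster.mean(axis=0)
            order = np.argsort(np.linalg.norm(cluster - center, axis=1))
            samplings[j] = cluster[order[:core_size]]

for j, S in enumerate(samplings):
    print(f"cluster {j}: {np.sum(labels == j)} points, core mean {S.mean(axis=0)}")
```

The cores double as the "most typical elements" the problem statement asks for.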

9.
Indecomposable local maps of one-dimensional tessellation automata are studied. The main results of this paper are the following. (1) For any alphabet Σ containing two or more symbols and for any n ≥ 1, there exist indecomposable scope-n local maps over Σ. (2) If Σ is a finite field of prime order, then a linear scope-n local map over Σ is indecomposable if and only if its associated polynomial is an irreducible polynomial of degree n − 1 over Σ, except for a trivial case. (3) Result (2) is no longer true if Σ is a finite field whose order is not prime.
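Result (2) reduces indecomposability to polynomial irreducibility over a prime field. A minimal brute-force irreducibility test over GF(p) (the low-to-high coefficient-list representation is an illustrative choice):

```python
from itertools import product

def poly_mod(a, b, p):
    """Remainder of polynomial a divided by b over GF(p); coeffs low-to-high."""
    a = a[:]
    while len(a) >= len(b) and any(a):
        while a and a[-1] % p == 0:
            a.pop()
        if len(a) < len(b):
            break
        factor = a[-1] * pow(b[-1], p - 2, p) % p   # p prime: Fermat inverse
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[shift + i] = (a[shift + i] - factor * c) % p
        while a and a[-1] == 0:
            a.pop()
    return a

def is_irreducible(f, p):
    """Brute force: f has no monic divisor of degree 1..deg(f)-1."""
    n = len(f) - 1
    for d in range(1, n):
        for coeffs in product(range(p), repeat=d):
            g = list(coeffs) + [1]          # monic polynomial of degree d
            if not poly_mod(f, g, p):       # empty remainder => g divides f
                return False
    return True

# Over GF(2): x^2 + x + 1 is irreducible, x^2 + 1 = (x + 1)^2 is not.
print(is_irreducible([1, 1, 1], 2))  # True
print(is_irreducible([1, 0, 1], 2))  # False
```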

10.
I discuss the attitude of Jewish law sources from the 2nd-5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as "impossible reduction". This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy-logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core or whose membership functions were only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures". I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty boundary set Ā − A̲ (upper approximation minus lower approximation) where the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably capped by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^{h(A)} log[1 + μ(αA)] dα, where h(A) is the largest membership value obtained by any element of A and μ(αA) is the measure of the α-cut of A defined by the Lebesgue integral of its characteristic function.
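A minimal numeric sketch of the U-uncertainty formula above for a triangular fuzzy number (the triangular shape, the log base 2, and the parameter values are illustrative assumptions):

```python
import numpy as np

# Triangular fuzzy number on [a, c] with peak at b; its alpha-cuts are intervals.
a, b, c = 0.0, 1.0, 3.0

def alpha_cut_measure(alpha):
    """Length of the alpha-cut [a + alpha*(b - a), c - alpha*(c - b)]."""
    lo = a + alpha * (b - a)
    hi = c - alpha * (c - b)
    return max(hi - lo, 0.0)

h = 1.0  # height of the fuzzy set: the largest membership value attained
alphas = np.linspace(0.0, h, 10001)
integrand = np.log2(1.0 + np.array([alpha_cut_measure(al) for al in alphas]))
U = float(integrand.mean())   # uniform grid: mean approximates (1/h) * integral
print(f"U-uncertainty: {U:.4f} bits")  # about 1.22 for this example
```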

11.
A representation-less model for genetic programming is presented. The model is intended to examine the mechanisms that lead to bloat in genetic programming (GP). We discuss two hypotheses ("fitness causes bloat" and "neutral code is protective") and perform simulations to examine the predictions deduced from these hypotheses. Our observation is that predictions from both hypotheses are realized in the simulated model.

12.
Summary. We examine long unavoidable patterns, unavoidable in the sense of Bean, Ehrenfeucht and McNulty. Zimin, and independently Schmidt, have shown that there is only one unavoidable pattern of length 2^n − 1 on an alphabet with n letters; this pattern is a quasi-power in the sense of Schützenberger. We characterize the unavoidable words of length 2^n − 2 and 2^n − 3. Finally we show that every sufficiently long unavoidable word has a certain quasi-power as a subword. This work was done while the author stayed at LITP, Université Paris 6, France.
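The unique unavoidable pattern of length 2^n − 1 is commonly known as the Zimin word; a short sketch that generates it by its recursive definition (naming the letters 1..n is an illustrative choice):

```python
def zimin(n: int) -> str:
    """Zimin word Z_n over letters 1..n: Z_1 = 1, Z_k = Z_{k-1} k Z_{k-1}."""
    word = ""
    for k in range(1, n + 1):
        word = word + str(k) + word
    return word

for n in range(1, 5):
    w = zimin(n)
    assert len(w) == 2**n - 1   # the pattern length from the abstract
    print(n, w)                 # 1, 121, 1213121, 121312141213121
```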

13.
This paper discusses terms which are of mutual importance to the fields of information science and computer science. Specifically, we discuss the notions of information and knowledge (their interrelationships as well as their differences) and the concept of value-adding. Concrete examples are used in the discussion. Rainer Kuhlen is professor of Information Science at the University of Konstanz.

14.
Exact upper bounds are obtained for the probability F(v) − F(u), 0 < u < v < ∞, on the set of distribution functions F(x) of nonnegative random variables with unimodal density with an arbitrary mode m ≥ 0 and one or two fixed first moments. Translated from Kibernetika i Sistemnyi Analiz, No. 5, pp. 72-83, September-October 2004.

15.
Contemporary design processes require the development of a new computational intelligence or soft computing methodology that involves intelligence integration and hybrid intelligent systems for design, analysis and evaluation, and optimization. This paper first presents a discussion of the need to incorporate intelligence into an automated design process and the various constraints that designers face when embarking on industrial design projects. Then, it presents the design problem as optimizing the design output against constraints and the use of soft computing and hybrid intelligent systems techniques. In this paper, a soft-computing-integrated intelligent design framework is developed. A hybrid dual cross-mapping neural network (HDCMNN) model is proposed using the hybrid soft computing technique based on cross-mapping between a back-propagation network (BPNN) and a recurrent Hopfield network (HNN) for supporting modeling, analysis and evaluation, and optimization tasks in the design process. The two networks perform different but complementary tasks: the BPNN decides whether the design problem is a type 0 (rational) or type 1 (non-rational) problem, and its output-layer weights are then used as the energy function for the HNN. The BPNN is used for representing design patterns, training classification boundaries, and outputting network weight values to the HNN; the HNN then uses the calculated weight values to evaluate and modify or re-design the design patterns. The developed system provides a unified soft-computing-integrated intelligent design framework with both symbolic and computational intelligence, and it has self-modifying and self-learning functions. Within the system, only one network training run is needed to accomplish the evaluation, rectification/modification, and optimization tasks in the design process. Finally, two case studies are provided to illustrate and validate the developed model and system.

16.
Summary. Many reductions among combinatorial problems are known in the context of NP-completeness. These reductions preserve the optimality of solutions, but they may change the relative error of approximate solutions dramatically. In this paper, we apply a new type of reduction, called continuous reductions. When one problem is continuously reduced to another, any approximation algorithm for the latter problem can be transformed into an approximation algorithm for the former, and the performance ratio is preserved up to a constant factor. We relate the problem Minimum Number of Inverters in CMOS Circuits, which arises in the context of logic synthesis, to several classical combinatorial problems such as Maximum Independent Set and Deletion of a Minimum Number of Vertices (Edges) in Order to Obtain a Bipartite (Partial) Subgraph.

17.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms, and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed, together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for "good starts" or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.

18.
For the equation ẋ(t) = λx(t)(1 − (1/τ)∫_{t−τ−σ}^{t−σ} x(u) du), with λ > 0, τ > 0, σ > 0, conditions for the stability of a nonzero stationary solution under small perturbations are determined.
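Reading the equation as reconstructed above (the symbols λ, τ, σ stand for the three positive parameters; the parameter values and the constant initial history are illustrative assumptions), a minimal Euler-scheme simulation of this delayed logistic-type equation looks like:

```python
import numpy as np

# x'(t) = lam * x(t) * (1 - (1/tau) * integral over [t - tau - sigma, t - sigma] of x(u) du)
lam, tau, sigma = 0.5, 2.0, 0.5   # illustrative parameter values
dt, T = 0.001, 60.0
n_hist = int(round((tau + sigma) / dt))   # history length needed for the integral
steps = int(T / dt)

x = np.empty(steps + n_hist)
x[:n_hist] = 1.2                  # constant history near the stationary solution x = 1

for i in range(n_hist, n_hist + steps):
    lo = i - n_hist                       # index of t - tau - sigma
    hi = i - int(round(sigma / dt))       # index of t - sigma
    integral = np.sum(x[lo:hi]) * dt      # window of length tau
    x[i] = x[i-1] + dt * lam * x[i-1] * (1.0 - integral / tau)

print("x(T) =", x[-1])   # stays near 1.0 when the stationary solution is stable
```

The integral averages x over a window of length τ ending σ time units in the past, so x ≡ 1 is the nonzero stationary solution whose stability the paper analyzes.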

19.
Sharon Daniel, AI & Society, 2000, 14(2): 196-213
This paper discusses interactive on-line artworks, modelled on cellular automata, that employ various types of agents, both algorithmic and human, to assist in the evolution of their databases. These works constitute what will here be referred to as Collaborative Systems: systems that evolve through the practice of inter-authorship.

20.
In recent years, constraint satisfaction techniques have been successfully applied to disjunctive scheduling problems, i.e., scheduling problems where each resource can execute at most one activity at a time. Less significant and less generally applicable results have been obtained in the area of cumulative scheduling. Multiple constraint propagation algorithms have been developed for cumulative resources, but they tend to be less uniformly effective than their disjunctive counterparts. Different problems in the cumulative scheduling class seem to have different characteristics that make them either easy or hard to solve with a given technique. The aim of this paper is to investigate one particular dimension along which problems differ. Within the cumulative scheduling class, we distinguish between highly disjunctive and highly cumulative problems: a problem is highly disjunctive when many pairs of activities cannot execute in parallel, e.g., because many activities require more than half of the capacity of a resource; on the contrary, a problem is highly cumulative if many activities can effectively execute in parallel. New constraint propagation and problem decomposition techniques are introduced with this distinction in mind. This includes an O(n²) edge-finding algorithm for cumulative resources (where n is the number of activities requiring the same resource) and a problem decomposition scheme which applies well to highly disjunctive project scheduling problems. Experimental results confirm that the impact of these techniques varies from highly disjunctive to highly cumulative problems. In the end, we also propose a refined version of the edge-finding algorithm for cumulative resources which, despite its worst-case complexity of O(n³), performs very well on highly cumulative instances.
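A rough sketch of the highly-disjunctive versus highly-cumulative distinction: the fraction of activity pairs whose combined demand exceeds the resource capacity, so they cannot run in parallel (this threshold-based measure is an illustrative assumption, not the paper's exact definition):

```python
from itertools import combinations

def disjunction_ratio(demands, capacity):
    """Fraction of activity pairs that conflict on a cumulative resource."""
    pairs = list(combinations(demands, 2))
    conflicting = sum(1 for a, b in pairs if a + b > capacity)
    return conflicting / len(pairs)

capacity = 10
highly_disjunctive = [6, 7, 8, 6, 9]    # most activities need over half the capacity
highly_cumulative  = [2, 3, 1, 2, 3]    # many activities can run in parallel

print(disjunction_ratio(highly_disjunctive, capacity))  # 1.0
print(disjunction_ratio(highly_cumulative, capacity))   # 0.0
```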
