Similar Documents
20 similar documents found (search time: 15 ms)
1.
The Π theorem of Dimensional Analysis, usually applied to the inference of physical laws, is here applied for the first time to the derivation of interpolation curves for numerical data, leading to a simplified dependence on a reduced number of arguments Πᵢ, dimensionless combinations of the variables. In particular, Monte Carlo modelling of electron beam lithography is considered and the backscattering coefficient η addressed, in the case of a general substrate layer, in the elastic regime and in the energy range 5 to 100 keV. The many variables involved (electron energy, substrate physical constants and thickness) are shown to ultimately determine η through a single dimensionless parameter Π₀. Thus a scaling law is obtained, an important guide in microsystem design, indicating, if any part of the configuration is modified, how the other parameters should change (or scale) without affecting the result. Finally, a simple law η = 0.83 Π₀ is shown to account for all variations of the parameters over all substrates of the periodic table.
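For orientation, the Buckingham Π theorem invoked above can be stated as follows (a standard textbook formulation; the symbols n, k and Πᵢ are generic, not the paper's):

```latex
% Buckingham Pi theorem (standard statement, for orientation only):
% a physically meaningful relation among n dimensional quantities,
% involving k independent physical dimensions, can be rewritten as a
% relation among n - k dimensionless products.
\[
  f(q_1,\dots,q_n) = 0
  \quad\Longrightarrow\quad
  F(\Pi_1,\dots,\Pi_{n-k}) = 0,
  \qquad
  \Pi_i = \prod_{j=1}^{n} q_j^{a_{ij}} \ \ (\text{dimensionless}).
\]
```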

2.
Semantics connected to some information-based metaphor are well known in the logic literature: a paradigmatic example is Kripke semantics for Intuitionistic Logic. In this paper we start from the concrete problem of providing suitable logic-algebraic models for the calculus of attribute dependencies in Formal Contexts with information gaps, and we obtain an intuitive model based on the notion of passage of information, showing that Kleene algebras, semi-simple Nelson algebras, three-valued Łukasiewicz algebras and Post algebras of order three are, in a sense, naturally and directly connected to partially defined information systems. In this way we can provide for these logic-algebraic structures a raison d'être different from the original motivations concerning, for instance, computability theory.

3.
Companies that provide crane-lorry services are faced with the daily need to perform vehicle and driver allocation and scheduling. Many companies still do this manually for lack of suitable technologies. This manual approach is both time-consuming and inaccurate, and most probably will not lead to an optimized plan that can reduce operational costs. In this paper, we describe the design of a system called the Crane Lorry Scheduling System (CLSS) that we have developed for the largest crane lorry company in Hong Kong. A crane lorry company provides lorries with different types of mounted crane equipment, and drivers, to service different types of moving and lifting jobs. CLSS is a Web-based application that streamlines communication with customers, subcontractors and employees/lorry drivers. We modeled the lorry-assignment problem as a constraint-satisfaction problem (CSP), solved by an algorithm we call the Crane Lorry Optimizing Engine (CLOE). CLOE was designed to be easily customizable to match the needs and requirements of different crane lorry companies. We experimented with two versions of CLOE: regular CLOE, which finds best solutions, and X-CLOE, which finds optimal solutions. Results from our tests show that CLOE is faster and generates better quality plans than the manual approach.
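The paper does not publish CLOE's internals; as a rough illustration of how a lorry-assignment CSP can be posed, here is a minimal backtracking sketch in Python (the job and lorry attributes and the two constraints shown are invented for the example, not CLOE's actual model):

```python
# Minimal CSP sketch: assign each job to a lorry such that the lorry's
# crane capacity suffices and no lorry is double-booked in a time slot.

jobs = [  # (job_id, required_capacity_tons, time_slot) -- illustrative
    ("J1", 10, 0), ("J2", 25, 0), ("J3", 10, 1), ("J4", 40, 1),
]
lorries = [("L1", 30), ("L2", 50)]  # (lorry_id, crane_capacity_tons)

def consistent(assignment, job, lorry):
    job_id, need, slot = job
    lorry_id, cap = lorry
    if need > cap:                      # capacity constraint
        return False
    for other_job, other_lorry in assignment.items():
        if other_lorry == lorry_id and other_job[2] == slot:
            return False                # no double-booking in a slot
    return True

def backtrack(assignment, remaining):
    if not remaining:
        return assignment
    job = remaining[0]
    for lorry in lorries:
        if consistent(assignment, job, lorry):
            assignment[job] = lorry[0]
            result = backtrack(assignment, remaining[1:])
            if result is not None:
                return result
            del assignment[job]         # undo and try the next lorry
    return None                         # dead end: trigger backtracking

plan = backtrack({}, jobs)
print(plan)  # e.g. {('J1', 10, 0): 'L1', ('J2', 25, 0): 'L2', ...}
```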

4.
Summary. Many reductions among combinatorial problems are known in the context of NP-completeness. These reductions preserve the optimality of solutions; however, they may change the relative error of approximate solutions dramatically. In this paper, we apply a new type of reduction, called continuous reduction. When one problem is continuously reduced to another, any approximation algorithm for the latter problem can be transformed into an approximation algorithm for the former; moreover, the performance ratio is preserved up to a constant factor. We relate the problem Minimum Number of Inverters in CMOS-Circuits, which arises in the context of logic synthesis, to several classical combinatorial problems such as Maximum Independent Set and Deletion of a Minimum Number of Vertices (Edges) in Order to Obtain a Bipartite (Partial) Subgraph.
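The abstract does not reproduce the formal definition; one common way to make "the performance ratio is preserved up to a constant factor" precise (our paraphrase, not necessarily the paper's exact definition) is:

```latex
% A reduction from problem A to problem B given by a pair (f, g):
%   f maps instances x of A to instances f(x) of B,
%   g maps feasible solutions y of f(x) back to solutions g(y) of x.
% It preserves approximability up to a constant c >= 1 if, for the
% performance ratios R_A and R_B,
\[
  R_A\bigl(x,\; g(y)\bigr) \;\le\; c \cdot R_B\bigl(f(x),\; y\bigr)
  \qquad \text{for every instance } x \text{ and solution } y .
\]
% Then a polynomial-time r-approximation for B yields a
% polynomial-time (c \cdot r)-approximation for A.
```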

5.
A new space subdivision method for ray tracing CSG modelled scenes
A new algorithm for space tracing of CSG modelled scenes is presented. Space is divided in an irregular fashion to fit the objects as closely as possible; to this end, minimal bounding boxes of the primitives are computed. Space subdivision is achieved in two steps: partitioning in the projection plane and depth partitioning. A set of 3D regions called cells is then created. The Boolean CSG tree is distributed into the cell structure so that each cell holds the minimal Boolean CSG tree over the relevant primitives. The search for the next cell along a ray path is performed using a local data structure associated with each cell, whose purpose is to link the cells together. An improvement applicable to all space tracing algorithms, called the mailbox, is detailed. Results are presented for two scenes to compare this new algorithm with Roth's algorithm.
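The mailbox idea is to avoid re-intersecting a primitive that spans several cells: each primitive remembers the id of the last ray tested against it, so the intersection test is paid at most once per ray. A minimal sketch (data layout and names are our own, not the paper's):

```python
# Mailbox optimization sketch: a primitive crossing many cells is
# intersected at most once per ray. Each primitive caches the id of
# the last ray tested and the cached intersection result.

class Primitive:
    def __init__(self, name):
        self.name = name
        self.last_ray_id = None     # the "mailbox"
        self.cached_hit = None

    def _intersect(self, ray):
        # Placeholder for the real ray/primitive intersection test.
        print(f"intersecting {self.name}")   # shows how often we pay
        return None

    def intersect(self, ray_id, ray):
        if self.last_ray_id == ray_id:       # already tested this ray
            return self.cached_hit
        self.last_ray_id = ray_id
        self.cached_hit = self._intersect(ray)
        return self.cached_hit

# A primitive shared by three cells along one ray is intersected once:
sphere = Primitive("sphere")
for cell in range(3):
    sphere.intersect(ray_id=42, ray=None)    # prints only once
```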

6.
The plane with parallel coordinates
By means of Parallel Coordinates, planar graphs of multivariate relations are obtained. Certain properties of the relationship correspond to geometrical properties of its graph. A point–line duality with several interesting properties is induced on the plane. A new duality between bounded and unbounded convex sets and hstars (a generalization of hyperbolas), and between convex unions and intersections, is found. This motivates some efficient convexity algorithms and other results in Computational Geometry. There is also a surprising cusp–inflection point duality. The narrative ends with a preview of the corresponding results in R^N.
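Concretely, with the two parallel axes placed at x = 0 and x = d, a data point (a, b) becomes the segment joining (0, a) to (d, b), and all segments representing points of a line y = mx + k (m ≠ 1) pass through one dual point. A small sketch checking this numerically (the axis spacing d and the sample line are arbitrary choices):

```python
# Point-line duality in parallel coordinates: points of the line
# y = m*x + k (m != 1) map to segments that all meet in one point,
#   ( d/(1-m), k/(1-m) ),
# where d is the spacing between the two parallel axes.

d, m, k = 1.0, -0.5, 2.0   # axis spacing and an arbitrary sample line

def segment_point(a, t):
    """Point at abscissa t on the segment representing (a, m*a + k)."""
    b = m * a + k
    return (t, a + (t / d) * (b - a))

t_star = d / (1 - m)                    # predicted dual abscissa
y_star = k / (1 - m)                    # predicted dual ordinate

for a in (-2.0, 0.0, 1.0, 3.0):         # several points on the line
    x, y = segment_point(a, t_star)
    assert abs(y - y_star) < 1e-12      # every segment passes through it
print("dual point:", (t_star, y_star))  # -> (0.666..., 1.333...)
```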

7.
I discuss the attitude of Jewish law sources from the 2nd–5th centuries to the imprecision of measurement. I review a problem that the Talmud refers to, somewhat obscurely, as impossible reduction. This problem arises when a legal rule specifies an object by referring to a maximized (or minimized) measurement function, e.g., when a rule applies to the largest part of a divided whole, or to the first incidence that occurs, etc. A problem that is often mentioned is whether there might be hypothetical situations involving more than one maximal (or minimal) value of the relevant measurement and, given such situations, what the pertinent legal rule is. Presumptions of simultaneous occurrences or equally measured values are also a source of embarrassment to modern legal systems, in situations exemplified in the paper, where law determines a preference based on measured values. I contend that the Talmudic sources discussing the problem of impossible reduction were guided by primitive insights compatible with a fuzzy-logic presentation of the inevitable uncertainty involved in measurement. I maintain that fuzzy models of data are compatible with a positivistic epistemology, which refuses to assume any precision in the extra-conscious world that may not be captured by observation and measurement. I therefore propose this view as the preferred interpretation of the Talmudic notion of impossible reduction. Attributing a fuzzy world view to the Talmudic authorities is meant not only to increase our understanding of the Talmud but, in so doing, also to demonstrate that fuzzy notions are entrenched in our practical reasoning. If Talmudic sages did indeed conceive the results of measurements in terms of fuzzy numbers, then equality between the results of measurements had to be more complicated than crisp equations. The problem of impossible reduction could lie in fuzzy sets with an empty core, or whose membership functions were only partly congruent. "Reduction is impossible" may thus be reconstructed as "there is no core to the intersection of two measures". I describe Dirichlet maps for fuzzy measurements of distance as a rough partition of the universe, where for any region A there may be a non-empty set Ā − A̲ (upper approximation minus lower approximation) on which the problem of impossible reduction applies. This model may easily be combined with a probabilistic extension. The possibility of adopting practical decision standards based on α-cuts (and therefore applying interval analysis to fuzzy equations) is discussed in this context. I propose to characterize the uncertainty that was presumably captured by the old sages as U-uncertainty, defined, for a non-empty fuzzy set A on the set of real numbers whose α-cuts are intervals of real numbers, as U(A) = (1/h(A)) ∫₀^h(A) log[1 + μ(αA)] dα, where h(A) is the largest membership value obtained by any element of A and μ(αA) is the measure of the α-cut of A defined by the Lebesgue integral of its characteristic function.
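As a numerical illustration of the U-uncertainty formula above (our own sketch: the triangular membership function, base-2 logarithm and integration grid are illustrative choices, not the paper's):

```python
import math

# U-uncertainty of a fuzzy set A with interval alpha-cuts:
#   U(A) = (1/h(A)) * integral_0^h(A) log2(1 + mu(alpha-cut)) d(alpha)
# Here A is a triangular fuzzy number on [a, b], so the alpha-cut has
# Lebesgue measure (1 - alpha) * (b - a), and h(A) = 1 (normal set).

a, b = 0.0, 4.0          # support of the fuzzy number (illustrative)
h = 1.0                  # height of a normal fuzzy number

def alpha_cut_measure(alpha):
    return (1.0 - alpha) * (b - a)   # length of the alpha-cut interval

def u_uncertainty(n=100000):
    # simple midpoint-rule integration over alpha in (0, h)
    step = h / n
    total = sum(
        math.log2(1.0 + alpha_cut_measure((i + 0.5) * step))
        for i in range(n)
    )
    return total * step / h

print(f"U(A) = {u_uncertainty():.4f}")  # wider supports give larger U(A)
```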

8.
Experiment 1 explored the impact of physically touching a virtual object on how realistic the virtual environment (VE) seemed to the user. Subjects in a "no touch" group picked up a 3D virtual image of a kitchen plate in a VE, using a traditional 3D wand. "See and touch" subjects physically picked up a virtual plate possessing solidity and weight, using a technique called tactile augmentation. Afterwards, subjects made predictions about the properties of other virtual objects they saw but did not interact with in the VE. See and touch subjects predicted these objects would be more solid, heavier, and more likely to obey gravity than the no touch group. In Experiment 2 (a pilot study), subjects physically bit a chocolate bar in one condition, and imagined biting a chocolate bar in another condition. Subjects rated the event more fun and more realistic when allowed to physically bite the chocolate bar. Results of the two experiments converge with a growing literature showing the value of adding physical qualities to virtual objects. This study is the first to empirically demonstrate the effectiveness of tactile augmentation as a simple, safe, inexpensive technique with a large freedom of motion for adding physical texture, force feedback cues, smell and taste to virtual objects. Examples of practical applications are discussed. Based in part on "Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments" by Hunter Hoffman, which appeared in the Proceedings of the IEEE Virtual Reality Annual International Symposium '98, Atlanta GA, pp 59–63. © 1998 IEEE.

9.
The temporal property to-always has been proposed for specifying progress properties of concurrent programs. Although the to-always properties are a subset of the leads-to properties for a given program, to-always has more convenient proof rules and in some cases more accurately describes the desired system behavior. In this paper, we give a predicate transformer wta, derive some of its properties, and use it to define to-always. Proof rules for to-always are derived from the properties of wta. We conclude by briefly describing two application areas, nondeterministic data flow networks and self-stabilizing systems, where to-always properties are useful.

10.
In many applications one has a set of discrete points at which some variable such as pressure or velocity is measured. In order to graphically represent and display such data (say, as contours of constant pressure), the discrete data must be represented by a smooth function; this continuous surface can then be evaluated at any point for graphical display. Sometimes data are arbitrarily located except that they occur along non-intersecting lines, an example occurring in wind tunnel tests where data are recorded at plug taps on an aircraft body. An algorithm is developed for this type of structured data problem and illustrated by means of color computer graphics.
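One plausible two-stage scheme for such line-structured data, interpolating along each line and then blending across lines, can be sketched as follows (this is our own illustration, not the paper's algorithm; all data are invented and the lines are simplified to horizontals):

```python
import numpy as np

# Stage 1: along each data line, interpolate the measured values in 1D.
# Stage 2: across lines, blend the per-line estimates with inverse-
# distance weights, yielding a smooth field that can be contoured.

lines = [  # each line: y = const, with samples (x, value) -- invented
    {"y": 0.0, "x": np.array([0., 1., 2., 3.]), "v": np.array([1., 2., 2., 1.])},
    {"y": 1.0, "x": np.array([0., 1.5, 3.]),    "v": np.array([2., 3., 2.])},
    {"y": 2.5, "x": np.array([0., 2., 3.]),     "v": np.array([0., 1., 0.])},
]

def surface(xq, yq, power=2.0, eps=1e-9):
    """Estimate the field at (xq, yq) from line-structured data."""
    estimates = np.array([np.interp(xq, L["x"], L["v"]) for L in lines])
    dists = np.array([abs(yq - L["y"]) for L in lines])
    weights = 1.0 / (dists + eps) ** power      # inverse-distance weights
    return float(np.sum(weights * estimates) / np.sum(weights))

print(surface(1.0, 0.5))   # a point between the first two data lines
```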

11.
It is shown that the translation of an open default into a modal formula ∀x(Lα(x) ∧ LMβ₁(x) ∧ … ∧ LMβₘ(x) → w(x)) gives rise to an embedding of open default systems into non-monotonic logics.
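For orientation, here is the translation pattern instantiated with the classic "birds fly" default (the instantiation is ours, not the paper's):

```latex
% Open default: "if alpha(x) holds and beta(x) is consistent, conclude w(x)"
%   \frac{\alpha(x) : \beta(x)}{w(x)}
% Modal translation (L = "is known", M = "is consistent"), instantiated:
\[
  \forall x\,\bigl(L\,\mathit{bird}(x)\ \wedge\ LM\,\mathit{flies}(x)
      \ \rightarrow\ \mathit{flies}(x)\bigr)
\]
% read: if x is known to be a bird and it is consistent to believe
% that x flies, then x flies.
```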

12.
The notion of obvious inference in predicate logic is discussed from the viewpoint of proof-checker applications in logic and mathematics education. A class of inferences in predicate logic is defined and it is proposed to identify it with the class of obvious logical inferences. The definition is compared with other approaches. The algorithm for implementing the obviousness decision procedure follows directly from the definition.

13.
This paper presents enhancements for robust two- and three-quarter-dimensional meshing, including: (1) automated interval assignment by integer programming for submapped surfaces and volumes, (2) surface submapping, and (3) volume submapping. An introduction to the simplex method, an optimization technique of integer programming, is presented. Simplification of complex geometry is required for the formulation of the integer programming problem; a method of i-j unfolding is defined, which explains how irregular geometry can be realigned into a simplified form suitable for submap interval assignment solutions. Also presented is the process by which submapping eliminates the decomposition of surface geometry, through a pseudo-decomposition process, producing suitable mapped meshes. The process of submapping involves the creation of interpolated virtual edges, user-defined vertex types and i-j-k space traversals. The creation of interpolated virtual edges is the method by which submapping automatically subdivides surface geometry; the interpolated virtual edge is formulated according to an interpolation scheme using the node discretization of curves on the surface. User-defined vertex types allow direct user control of surface decomposition and interval assignment by modifying i-j-k space traversals. Volume submapping takes the geometry decomposition to a higher level by using mapped virtual surfaces to eliminate decomposition of complex volumes.

14.
In this paper, an objective conception of contexts based loosely upon situation theory is developed and formalized. Unlike subjective conceptions, which take contexts to be something like sets of beliefs, contexts on the objective conception are taken to be complex, structured pieces of the world that (in general) contain individuals, other contexts, and propositions about them. An extended first-order language for this account is developed. The language contains complex terms for propositions, and the standard predicate ist that expresses the relation that holds between a context and a proposition just in case the latter is true in the former. The logic for the objective conception features a global classical predicate calculus, a local logic for reasoning within contexts, and axioms for propositions. The specter of paradox is banished from the logic by allowing ist to be nonbivalent in problematic cases: it is not in general the case, for any context c and proposition p, that either ist(c, p) or ist(c, ¬p). An important representational capability of the logic is illustrated by proving an appropriately modified version of an illustrative theorem from McCarthy's classic Blocks World example.

15.
ART: A Hybrid Classification Model
This paper presents a new family of decision list induction algorithms based on ideas from the association rule mining context. ART, which stands for Association Rule Tree, builds decision lists that can be viewed as degenerate, polythetic decision trees. Our method is a generalized Separate and Conquer algorithm suitable for Data Mining applications because it makes use of efficient and scalable association rule mining techniques.

16.
This paper presents algorithms for multiterminal net channel routing where multiple interconnect layers are available. Major improvements are possible if wires are able to overlap, and our generalized main algorithm allows overlap, but only on every Kth (K ≥ 2) layer. Our algorithm will, for a problem with density d on L layers, L ≥ K + 3, provably use at most three tracks more than optimal: ⌈(d + 1)/⌊L/K⌋⌉ + 2 tracks, compared with the lower bound of ⌈d/⌊L/K⌋⌉. Our algorithm is simple, has few vias, tends to minimize wire length, and could be used if different layers have different grid sizes. Finally, we extend our algorithm in order to obtain improved results for adjacent (K = 1) overlap: ⌈(d + 2)/⌊2L/3⌋⌉ + 5 for L ≥ 7. This work was supported by the Semiconductor Research Corporation under Contract 83-01-035, by a grant from the General Electric Corporation, and by a grant at the University of the Saarland.

17.
The calculation of slope (downhill gradient) for a point in a digital elevation model (DEM) is a common procedure in the hydrological, environmental and remote sensing sciences. The most commonly used slope calculation algorithms employed on DEM topography data make use of a three-by-three search window or kernel centered on the grid point (grid cell) in question in order to calculate the slope at that point. A comparison of eight such slope calculation algorithms has been carried out using an artificial DEM, consisting of a smooth synthetic test surface with various amounts of added Gaussian noise. Morrison's Surface III, a trigonometrically defined surface, was used as the synthetic test surface. Residual slope grids were calculated by subtracting the slope grids produced by the algorithms on test from true/reference slope grids derived by analytic partial differentiation of the synthetic surface. The resulting residual slope grids were used to calculate root-mean-square (RMS) residual error estimates that were used to rank the slope algorithms from best (lowest RMS residual error) to worst (largest RMS residual error). Fleming and Hoffer's method gave the best results for slope estimation when values of added Gaussian noise were very small compared to the mean smooth elevation difference (MSED) measured within three-by-three elevation point windows on the synthetic surface. Horn's method (used in ArcInfo GRID) performed better than Fleming and Hoffer's as a slope estimator when the noise amplitude was very much larger than the MSED. For the large noise amplitude situation the best overall performing method was that of Sharpnack and Akin. The popular Maximum Downward Gradient (MDG) method performed poorly, coming close to last in the rankings for both situations, as did the Simple method. A nomogram was produced in terms of the standard deviation of the Gaussian noise and MSED values that gave the locus of the trade-off point between Fleming and Hoffer's and Horn's methods.
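For reference, Horn's method mentioned above estimates the slope of the centre cell of a three-by-three window from finite differences with 1-2-1 weights; a minimal sketch (grid values and cell size are invented):

```python
import math

# Horn's slope estimator on a 3-by-3 DEM window
#   a b c
#   d e f      slope at the centre cell e
#   g h i
def horn_slope_degrees(win, cellsize):
    (a, b, c), (d, e, f), (g, h, i) = win
    dzdx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8.0 * cellsize)
    dzdy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8.0 * cellsize)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

# A tilted plane sampled on a 10 m grid: elevation rises 1 m per
# column, so the true slope is atan(0.1) = 5.71 degrees.
window = [
    [0.0, 1.0, 2.0],
    [0.0, 1.0, 2.0],
    [0.0, 1.0, 2.0],
]
print(horn_slope_degrees(window, cellsize=10.0))  # ~5.71
```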

18.
Many real-time embedded systems process event streams that are composed of a finite number of different event types. Each event type on the stream typically imposes a different workload on the system, so knowledge of possible correlations and dependencies between the different event types can be exploited to obtain tighter analytic performance bounds for the complete system. We propose an abstract stream model to characterize such an event stream. The model captures the needed information of all possible traces of a class of event streams; hence, it can be used to obtain hard bounded worst-case and best-case analysis results for a system. We show how the proposed abstract stream model can be obtained from a concrete stream specification, and how it can be used for performance analysis. The applicability of our approach and its advantages over traditional worst-case performance analysis are shown in a case study of a multimedia application.

Ernesto Wandeler is a Ph.D. student at the Computer Engineering and Networks Laboratory of the Swiss Federal Institute of Technology, Zurich. His research interests include models, methods and tools for system-level performance analysis of real-time embedded systems. He holds a Dipl. El.-Ing. degree from ETH Zurich. In 2003, he received the Willi Studer Prize and the ETH Medal, both from the Swiss Federal Institute of Technology, Zurich. He is a student member of the IEEE and the ACM.

Alexander Maxiaguine is a Ph.D. student at the Computer Engineering and Networks Laboratory of the Swiss Federal Institute of Technology, Zurich. His research interests include models and methods for system-level performance analysis and scheduling of embedded multiprocessor architectures, especially for real-time multimedia applications. Maxiaguine has an M.S. in electrical engineering from the Moscow Technical University of Communications and Informatics. He is a member of the IEEE and the ACM.

Lothar Thiele is a full professor of computer engineering at the Swiss Federal Institute of Technology, Zurich. His research interests include models, methods and software tools for the design of embedded systems, embedded software and bioinspired optimization techniques. In 1986 he received the Dissertation Award of the Technical University of Munich; in 1987, the Outstanding Young Author Award of the IEEE Circuits and Systems Society; in 1988, the Browder J. Thompson Memorial Award of the IEEE; and in 2000–2001, the IBM Faculty Partnership Award. In 2004, he joined the German Academy of Natural Scientists Leopoldina.
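Returning to the abstract stream model: as a toy illustration of the underlying idea, the sketch below extracts upper and lower event-count bounds over sliding windows from a single concrete trace (the trace data and function names are ours, and a finite trace only approximates such a model):

```python
# Sketch: deriving upper/lower event-count bounds over time windows
# from one concrete event trace. Trace and window grid are invented.

timestamps = [0.0, 1.0, 1.5, 4.0, 5.0, 5.2, 9.0]  # event arrival times

def events_in(lo, hi):
    """Number of events with arrival time in the half-open window [lo, hi)."""
    return sum(1 for t in timestamps if lo <= t < hi)

def arrival_bounds(delta, step=0.1):
    """Max/min event count over all windows of length delta in the trace."""
    horizon = max(timestamps)
    counts = [
        events_in(lo, lo + delta)
        for lo in [i * step for i in range(int(horizon / step) + 1)]
    ]
    return max(counts), min(counts)  # min is a finite-trace approximation

for delta in (1.0, 2.0, 5.0):
    upper, lower = arrival_bounds(delta)
    print(f"delta={delta}: upper={upper}, lower={lower}")
```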

19.
This paper is a study of the existence of polynomial time Boolean connective functions for languages. A language L has an AND function if there is a polynomial time f such that f(x,y) ∈ L ⟺ x ∈ L and y ∈ L. L has an OR function if there is a polynomial time g such that g(x,y) ∈ L ⟺ x ∈ L or y ∈ L. While all NP complete sets have these functions, Graph Isomorphism, which is probably not complete, is also shown to have both AND and OR functions. The results in this paper characterize the complete sets for the classes D^p and P^{SAT[O(log n)]} in terms of AND and OR, and relate these functions to the structure of the Boolean hierarchy and the query hierarchies. Also, this paper shows that the complete sets for the levels of the Boolean hierarchy above the second level cannot have AND or OR unless the polynomial hierarchy collapses. Finally, most of the structural properties of the Boolean hierarchy and query hierarchies are shown to depend only on the existence of AND and OR functions for the NP complete sets. The first author was supported in part by NSF Research Grants DCR-8520597 and CCR-88-23053, and by an IBM Graduate Fellowship.
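For intuition, SAT has a trivially polynomial-time AND function: conjoin two CNF formulas over disjoint variables. A small sketch (the DIMACS-style CNF encoding as lists of integer literals is our choice):

```python
# AND function for SAT: f(x, y) is satisfiable iff both x and y are.
# A CNF formula is a list of clauses; a clause is a list of nonzero
# ints (literal v or its negation -v).

def num_vars(cnf):
    return max((abs(lit) for clause in cnf for lit in clause), default=0)

def sat_and(cnf_x, cnf_y):
    """Rename y's variables above x's, then conjoin the clause lists."""
    shift = num_vars(cnf_x)
    shifted_y = [[lit + shift if lit > 0 else lit - shift for lit in c]
                 for c in cnf_y]
    return cnf_x + shifted_y      # satisfiable iff both inputs are

x = [[1, 2], [-1]]                # (v1 or v2) and (not v1)
y = [[1], [-1, 2]]                # v1 and (not v1 or v2)
print(sat_and(x, y))              # [[1, 2], [-1], [3], [-3, 4]]
```

An OR function for SAT is similar: add a fresh selector variable s to every clause of one formula and ¬s to every clause of the other, so the result is satisfiable exactly when at least one input is.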

20.
Learning to Play Chess Using Temporal Differences
Baxter, Jonathan; Tridgell, Andrew; Weaver, Lex. Machine Learning, 2000, 40(3): 243–263
In this paper we present TDLEAF(λ), a variation on the TD(λ) algorithm that enables it to be used in conjunction with game-tree search. We present some experiments in which our chess program KnightCap used TDLEAF(λ) to learn its evaluation function while playing on Internet chess servers. The main success we report is that KnightCap improved from a 1650 rating to a 2150 rating in just 308 games and 3 days of play. As a reference, a rating of 1650 corresponds to about level B human play (on a scale from E (1000) to A (1800)), while 2150 is human master level. We discuss some of the reasons for this success, principal among them being the use of on-line play, rather than self-play. We also investigate whether TDLEAF(λ) can yield better results in the domain of backgammon, where TD(λ) has previously yielded striking success.
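The TDLEAF(λ) update itself is compact; below is a sketch for a linear evaluation function (the update follows the published rule, while the feature vectors are invented stand-ins for the principal-variation leaves a real game-tree search would return):

```python
import numpy as np

# TDLEAF(lambda) weight update for a linear evaluation eval(x) = w . phi(x):
#   w <- w + alpha * sum_t grad(eval(x_t)) * sum_{j>=t} lam^(j-t) * d_j
# where d_j = eval(x_{j+1}) - eval(x_j) are the temporal differences
# between evaluations of successive principal-variation (PV) leaves.

def tdleaf_update(w, leaf_features, alpha=0.01, lam=0.7):
    """leaf_features: phi of the PV leaf found from each game position."""
    evals = [float(w @ phi) for phi in leaf_features]
    d = [evals[t + 1] - evals[t] for t in range(len(evals) - 1)]
    w_new = w.copy()
    for t in range(len(d)):
        # for a linear evaluation, grad_w eval(x_t) is just phi(x_t)
        credit = sum(lam ** (j - t) * d[j] for j in range(t, len(d)))
        w_new += alpha * leaf_features[t] * credit
    return w_new

w = np.array([0.5, -0.2, 0.1])           # initial weights (illustrative)
leaves = [np.array(v, float) for v in ([1, 0, 0], [0, 1, 0], [1, 1, 1])]
print(tdleaf_update(w, leaves))           # weights nudged toward later evals
```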

