Similar Documents
 20 similar documents found (search time: 31 ms)
1.
This paper investigates the problem of inserting new rush orders into the current schedule of a real-world job shop floor. Effective rescheduling methods must achieve reasonable levels of performance, measured according to a certain cost function, while preserving the stability of the shop floor, i.e. introducing as few changes as possible to the current schedule. This paper proposes new and effective match-up strategies which modify only a part of the schedule in order to accommodate the arriving jobs. The proposed strategies are compared with other rescheduling methods such as “right shift” and “insertion in the end”, which are optimal with respect to stability but poor with respect to performance, and with “total rescheduling”, which is optimal with respect to performance but poor with respect to stability. Our results and statistical analysis reveal that the match-up strategies are comparable to “right shift” and “insertion in the end” with respect to stability and as good as “total rescheduling” with respect to performance.
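As a hedged illustration (not the authors' algorithm), the stability/performance trade-off can be seen in a toy single-machine version of the “right shift” strategy: the rush order is inserted at its release time and every later operation is pushed right, so stability can be measured as the number of operations whose start times change. The function name and the schedule data below are hypothetical.

```python
def right_shift(schedule, rush, release):
    """Insert a rush order on a single machine by pushing later
    operations to the right.

    schedule: list of (job, start, duration), non-overlapping,
              sorted by start time.
    rush:     (job, duration) of the arriving rush order.
    release:  earliest time the rush order may start.
    Returns (new_schedule, n_changed), where n_changed counts the
    operations whose start time moved (a simple stability measure).
    """
    out, cursor, changed = [], 0, 0
    pending = list(schedule)
    # operations finishing before the release time stay untouched
    while pending and pending[0][1] + pending[0][2] <= release:
        job, s, d = pending.pop(0)
        out.append((job, s, d))
        cursor = s + d
    start = max(release, cursor)
    out.append((rush[0], start, rush[1]))
    cursor = start + rush[1]
    # every remaining operation is shifted right past the rush order
    for job, s, d in pending:
        ns = max(s, cursor)
        changed += ns != s
        out.append((job, ns, d))
        cursor = ns + d
    return out, changed
```

For example, inserting a rush order ("R", 2) released at t=4 into [("A", 0, 3), ("B", 3, 2), ("C", 6, 4)] moves two operations; “total rescheduling” would instead re-optimize all start times, gaining performance at the cost of stability.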

2.
Some results on the Collatz problem   (Cited by 1: 1 self-citation, 0 others)
The paper concerns Collatz's conjecture. In the first part, we present some equivalent forms of this conjecture and a slight generalization of a former result from [1]. Then we introduce the notion of “chain subtrees” in the Collatz tree, followed by a characterization theorem and some subclasses of numbers which are labels of chain subtrees. Next, we define the notion of “fixed points” and, using it, state another conjecture similar to Collatz's. Some new infinite sets of numbers for which Collatz's conjecture holds are given. Finally, we present some interesting results on the number of “even” and “odd” branches in the Collatz tree. Received: 15 September 1999 / 2 June 2000
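For reference, the Collatz map and its total stopping time (the number of iterations needed to reach 1) can be sketched as follows; this is only the standard definition, not the chain-subtree construction of the paper.

```python
def collatz(n):
    """One step of the Collatz map: n/2 for even n, 3n+1 for odd n."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def total_stopping_time(n):
    """Number of Collatz iterations needed to reach 1 (assumes the
    conjecture holds for n, i.e. that the loop terminates)."""
    steps = 0
    while n != 1:
        n = collatz(n)
        steps += 1
    return steps
```

For example, 6 → 3 → 10 → 5 → 16 → 8 → 4 → 2 → 1 takes 8 steps, while the famously long trajectory of 27 takes 111 steps.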

3.
Consideration was given to a controllable system of ordinary linear differential equations in which the matrix at the derivative of the desired vector function is identically degenerate in the domain of definition. For one-input systems, the questions of stabilizability and of solvability of the control problem by the Lyapunov indices were studied in the stationary and nonstationary cases. The analysis was based on assumptions providing existence of the so-called “equivalent form”, in which the “algebraic” and “differential” parts are separated. An arbitrarily high index of insolvability and a variable rank of the matrix at the derivative were admitted.

4.
Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of “crypt-equivalence” is introduced and studied w.r.t. two “loose” approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate logic formula with the same properties for the other specification. We speak of “first-order crypt-equivalence” if this holds for all first-order models, and of “inductive crypt-equivalence” if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions and that in the case of “positive definability” two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than “abstract semantic equivalence” and that inductive crypt-equivalence is strictly finer than “inductive simulation equivalence” and “implementation equivalence”.

5.
In this paper, two soft computing approaches, artificial neural networks and Gene Expression Programming (GEP), are used for strength prediction of basalts collected from the Gaziantep region in Turkey. The collected basalt samples were tested in the geotechnical engineering laboratory of the University of Gaziantep. The parameters “ultrasound pulse velocity”, “water absorption”, “dry density”, “saturated density”, and “bulk density”, determined experimentally following the procedures given in ISRM (Rock characterisation testing and monitoring. Pergamon Press, Oxford, 1981), are used to predict the “uniaxial compressive strength” and “tensile strength” of Gaziantep basalts. It is found that neural networks are quite effective in comparison to GEP and classical regression analyses in predicting the strength of the basalts. The results obtained are also useful in characterizing the Gaziantep basalts for practical applications.

6.
We compare two physical systems: the polarization degrees of freedom of a macroscopic light beam and the Josephson junction (JJ) in the “charge qubit regime”. The first system obviously cannot carry genuine quantum information, and we show that the maximal entanglement which could be encoded into the polarization of two light beams scales like 1/(photon number). Two theories of the JJ are discussed, one leading to the picture of the “JJ qubit” and the other based on the mean-field approach. The latter, which seems to be more appropriate, implies that the JJ system is essentially mathematically equivalent to the polarization of a light beam, with the number of photons replaced by the number of Cooper pairs. The existing experiments consistent with the “JJ-qubit” picture and the theoretical arguments supporting, on the contrary, the classical model are briefly discussed. A Franck-Hertz-type experiment is suggested as an ultimate test of the JJ's nature.

7.
We introduce two definitions of an averaged system for a time-varying ordinary differential equation with exogenous disturbances (“strong average” and “weak average”). The class of systems for which the strong average exists is shown to be strictly smaller than the class of systems for which the weak average exists. It is shown that input-to-state stability (ISS) of the strong average of a system implies uniform semi-global practical ISS of the actual system. This result generalizes the result of [TPA], which states that global asymptotic stability of the averaged system implies uniform semi-global practical stability of the actual system. On the other hand, we illustrate by an example that ISS of the weak average of a system does not necessarily imply uniform semi-global practical ISS of the actual system. However, ISS of the weak average of a system does imply a weaker semi-global practical “ISS-like” property for the actual system when the disturbances w are absolutely continuous and . ISS of the weak average of a system is shown to be useful in the stability analysis of time-varying cascaded systems. Date received: April 6, 1999. Date revised: April 11, 2000.

8.
About the Collatz conjecture   (Cited by 1: 0 self-citations, 1 other)
This paper concerns the Collatz conjecture. The origin and formalization of the Collatz problem are presented in the first section, “Introduction”. The second section, “Properties of the Collatz function”, treats mainly the bijectivity of the Collatz function. Using the obtained results, we construct a (set of) binary tree(s) which “simulate(s)” – in a way that will be specified – the computation of the values of the Collatz function. In the third section, we give an “efficient” algorithm for computing the number of iterations (recursive calls) of the Collatz function. A comparison between our algorithm and the standard one is also presented, the former being at least 2.25 times “faster” (3.00 on average). Finally, we describe a class of natural numbers for which the conjecture is true. Received: 28 April 1997 / 10 June 1997
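The paper's own speed-up technique is not reproduced in the abstract; as a hedged illustration of how iteration counts can be computed faster than with the standard one-number-at-a-time loop, the sketch below caches the count for every number encountered along a trajectory, so later queries reuse earlier work. (An illustrative assumption, not the algorithm from the paper.)

```python
def iteration_count(n, _cache={1: 0}):
    """Number of Collatz iterations to reach 1, with memoization:
    every intermediate value on the trajectory is cached, so
    computing counts over a whole range reuses earlier trajectories.
    The mutable default argument is used deliberately as a
    persistent cache shared across calls."""
    path = []
    # walk forward until we hit a value whose count is already known
    while n not in _cache:
        path.append(n)
        n = 3 * n + 1 if n % 2 else n // 2
    steps = _cache[n]
    # walk back, filling in the count for every value on the path
    for m in reversed(path):
        steps += 1
        _cache[m] = steps
    return steps
```

For instance, after computing the count for 27 (which passes through 41, 124, 62, 31, ...), all of those intermediate values are answered in constant time.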

9.
To deal with circulated bills in various conditions of size, shape, and rigidity, we have developed a new bill-stacking cassette that can accept and re-dispense bills of various sizes. This cassette stacks the bills upright using deforming guides and sheet rollers under our newly developed “phase control”. Stacking tests verified experimentally that the cassette can handle bills in various conditions; thus we conclude that the new cassette is practical for stacking circulated bills and re-dispensing them securely. Received: 14 August 2001 / Accepted: 11 December 2001

10.
Deep proton irradiation of poly(methyl methacrylate) (PMMA) is a fabrication method for monolithic integrated micro-optics which offers high stability and interesting self-alignment features. The process consists of three basic steps: irradiation of a PMMA substrate, followed by either development of the irradiated regions, or swelling of the irradiated regions by organic vapor, or both applied to different regions. With this technique a variety of elementary refractive micro-optical components and monolithically integrated combinations can be fabricated: microlenses, microprisms, beam splitters, and fiber connectors with self-aligned microlenses on top of each fiber. This work was funded by IUAP24 “Optoelectronic Information Technology” and IUAP47 “Nonlinear Optics”, NFWO, Concerted Research Action “Photonic in Computing”.

11.
For nonlinear systems described by algebraic differential equations (in terms of “state” or “latent” variables) we examine the converse of realization, elimination, which consists of deriving an externally equivalent representation not containing the state variables. The elimination in general yields not only differential equations but also differential inequations. We show that the application of differential algebraic elimination theory (which goes back to J.F. Ritt and A. Seidenberg) leads to an effective method for deriving the equivalent representation. Examples calculated by a computer algebra program are shown. This paper was written while the author was with the Systems Division of the Laboratoire des Signaux et Systèmes in Gif-sur-Yvette and was supported by the University of Orléans, France.

12.
Most existing studies of 2D problems in structural topology optimization are based on a given (limit on the) volume fraction or some equivalent formulation. The present note looks at simultaneous optimization with respect to both topology and volume fraction, termed here “extended optimality”. It is shown that the optimal volume fraction in such problems may, in extreme cases, be unity or may tend to zero. The proposed concept is used to explain certain “quasi-2D” solutions, and an extension to 3D systems is also suggested. Finally, the relevance of Voigt's bound to extended optimality is discussed.

13.
In this article, several boosting methods are discussed, which are notable implementations of ensemble learning. Starting from the first such method, “boosting by filter”, an embodiment of the proverb “Two heads are better than one”, the more advanced boosting methods “AdaBoost” and “U-Boost” are introduced. The geometrical structure and some statistical properties, such as consistency and robustness, of boosting algorithms are discussed, and simulation studies are then presented to confirm the discussed behaviors of the algorithms.
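As a hedged sketch of the AdaBoost idea mentioned above, using binary labels in {-1, +1} and one-dimensional decision stumps as weak learners (the function names and data are illustrative, not from the article):

```python
import math

def stump(x, thresh, sign):
    """Weak learner: predict `sign` if x > thresh, else -sign."""
    return sign if x > thresh else -sign

def train_adaboost(X, y, rounds):
    """AdaBoost with threshold stumps on 1-D data; y[i] in {-1, +1}.
    Returns a list of (alpha, thresh, sign) weighted weak learners."""
    n = len(X)
    w = [1.0 / n] * n                      # uniform example weights
    ensemble = []
    thresholds = [min(X) - 1] + sorted(set(X))
    for _ in range(rounds):
        # pick the stump with the smallest weighted training error
        best = None
        for t in thresholds:
            for sign in (1, -1):
                err = sum(wi for wi, xi, yi in zip(w, X, y)
                          if stump(xi, t, sign) != yi)
                if best is None or err < best[0]:
                    best = (err, t, sign)
        err, t, sign = best
        err = max(err, 1e-12)              # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, sign))
        # up-weight the misclassified examples, then renormalize
        w = [wi * math.exp(-alpha * yi * stump(xi, t, sign))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted majority vote of the weak learners."""
    score = sum(a * stump(x, t, s) for a, t, s in ensemble)
    return 1 if score >= 0 else -1
```

On the label pattern + + - - + + over x = 1..6, which no single stump can fit, three rounds already reach zero training error: the “two heads are better than one” effect.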

14.
In the process of extending the UML metamodel for a specific domain, the metamodel specifier frequently introduces metaassociations at MOF level M2 with the aim that they induce specific associations at MOF level M1. For instance, if a metamodel for software process modelling states that a “Role” is responsible for an “Artifact”, we can interpret that its specifier intended to model two aspects: (1) the implications of this metaassociation at level M1 (e.g., the specific instance of Role “TestEngineer” is responsible for the specific instance of Artifact “TestPlans”); and (2) the implications of this metaassociation at level M0 (e.g., “John Doe” is the test engineer responsible for elaborating the test plans for the package “Foo”). Unfortunately, the second aspect is often not enforced by the metamodel and, as a result, the models which are defined as its instances may not incorporate it. This problem, a consequence of the so-called “shallow instantiation” of Atkinson and Kühne (Procs. UML’01, LNCS 2185, Springer, 2001), prevents these models from being accurate enough, in the sense that they do not express all the information intended by the metamodel specifier and consequently do not distinguish metaassociations that induce associations at M1 from those that do not. In this article we introduce the concept of induced association that may come up when an extension of the UML metamodel is developed. The implications that this concept has both for the extended metamodel and for its instances are discussed. We also present a methodology to ensure that M1 models incorporate the associations induced by the metamodel of which they are instances. Next, as an example of application, we present a quality metamodel for software artifacts which makes intensive use of induced associations. Finally, we introduce a software tool to assist the development of quality models as correct instantiations of the metamodel, ensuring the proper application of the induced associations as required by the metamodel.

15.
Using a few very general axioms which should be satisfied by any reasonable theory consistent with the Second Law of Thermodynamics, we argue that: (a) the “no-cloning theorem” is meaningful for a very general theoretical scheme including both quantum and classical models; (b) in order to describe self-replication, Wigner's “cloning” process should be replaced by the more general “broadcasting”; (c) “separation of species” is possible only in a non-homogeneous environment; (d) “parent” and “offspring” must be strongly correlated. Motivated by the existing results on broadcasting, which show that only classical information can self-replicate perfectly, we briefly discuss a classical toy model with “quantum features”: overlapping pure states and “entangled states” for composite systems.

16.
The capability-based distributed layout approach was first proposed by Baykasoğlu (Int J Prod Res 41, 2597–2618, 2003) for job shops operating under highly volatile manufacturing environments, in order to avoid high reconfiguration costs. It was argued that the capability-based distributed layout can also be a valid (or better) option for “classical functional layouts”, which generally operate under “high variety” and “low-stable demand”. In this paper the capability-based distributed layout approach and related issues are first reviewed and discussed; afterwards, the performance of “Capability-Based Distributed Layout” (CB-DL) is tested via extensive simulation experiments. The experiments show that the capability-based distributed layout has great potential and can be considered as an alternative to classical process-type layouts.

17.
With the recent trend toward model-driven engineering, a common understanding of basic notions such as “model” and “metamodel” becomes a pivotal issue. Even though these notions have been in widespread use for quite a while, there is still little consensus about when exactly it is appropriate to use them. The aim of this article is to start establishing a consensus about generally acceptable terminology. Its main contributions are the distinction between two fundamentally different kinds of model roles, i.e. “token model” versus “type model” (the terms “type” and “token” were introduced by C.S. Peirce, 1839–1914), a formal notion of “metaness”, and the consideration of “generalization” as yet another basic relationship between models. In particular, recognition of the fundamental difference between the two kinds of model roles mentioned above is crucial to enable communication among the model-driven engineering community that is free of both unnoticed misunderstandings and unnecessary disagreement.

18.
FGSPEC is a wide-spectrum specification language intended to facilitate software specification and the expression of the transformation process from the functional specification, which describes “what to do”, to the corresponding design (operational) specification, which describes “how to do it”. The design emphasizes the coherence of multi-level specification mechanisms, and a tree-structure model is provided which unifies the wide-spectrum specification styles from “what” to “how”.

19.
In recent years, on-demand transport systems (such as demand-bus systems) have attracted attention as a new transport service in Japan. An on-demand vehicle visits pick-up and delivery points door-to-door according to the occurrence of requests. This service can be regarded as a cooperative (or competitive) profit problem among transport vehicles. Thus, decision-making for the problem is an important factor for the profits of vehicles (i.e., drivers). However, it is difficult to find an optimal solution to the problem, because there are uncertain risks, e.g., the occurrence probability of requests and the selfishness of rival vehicles. Therefore, this paper proposes a transport policy for on-demand vehicles to control these uncertain risks. First, we classify the profit of vehicles into “assured profit” and “potential profit”. Second, we propose a “profit policy” and a “selection policy” based on this classification of profits. The selection policy can further be classified into “greedy”, “mixed”, “competitive”, and “cooperative”. These selection policies are represented by selection probabilities over the next visit points, so as to cooperate or compete with other vehicles. Finally, we report simulation results and analyze the effectiveness of the proposed policies.
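A hedged toy version of the assured/potential profit split and of the greedy versus mixed selection policies might look as follows; the scoring rule, function names, and data are assumptions for illustration, not the paper's model.

```python
import math

def score(assured, potential, p_request):
    """Expected profit of a visit point: the assured part plus the
    potential part weighted by the request-occurrence probability."""
    return assured + p_request * potential

def greedy_policy(points):
    """Greedy selection: always go to the highest-scoring point.
    points: dict name -> (assured, potential, p_request)."""
    return max(points, key=lambda k: score(*points[k]))

def mixed_policy(points, temperature=1.0):
    """Mixed selection: a probability distribution over next visit
    points (softmax of scores), trading some expected profit for
    unpredictability against selfish rival vehicles."""
    exps = {k: math.exp(score(*v) / temperature)
            for k, v in points.items()}
    z = sum(exps.values())
    return {k: e / z for k, e in exps.items()}
```

With candidate points A (assured 3, potential 0), B (assured 1, potential 6, request probability 0.5), and C (assured 2, potential 2, request probability 0.1), the greedy policy picks B (score 4), while the mixed policy still visits A and C occasionally.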

20.
We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences or identifying salient patterns in images. The term “irregular” depends on the context in which the “regular” or “valid” are defined. Yet it is not realistic to expect an explicit definition of all possible valid configurations for a given context. We pose the problem of determining the validity of visual data as a process of constructing a puzzle: we try to compose a new observed image region or a new video segment (“the query”) using chunks of data (“pieces of puzzle”) extracted from previous visual examples (“the database”). Regions in the observed data which can be composed using large contiguous chunks of data from the database are considered very likely, whereas regions which cannot be composed from the database (or can be composed, but only using small fragmented pieces) are regarded as unlikely/suspicious. The problem is posed as an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in images and video, to detecting suspicious behaviors, and to automatic visual inspection for quality assurance. Patent Pending.
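The compose-from-database idea can be mimicked on strings, where substring lookup stands in for patch matching (a deliberately crude analogy, not the paper's graphical-model inference): cover the query greedily with the longest chunks found in the database, and call regions coverable only by tiny fragments suspicious.

```python
def compose(query, database):
    """Greedily cover `query` with the longest substrings that occur
    in `database`; returns the list of chunk lengths (0 marks a
    symbol the database cannot explain at all)."""
    i, chunks = 0, []
    while i < len(query):
        best = 0
        for j in range(len(query), i, -1):   # try longest match first
            if query[i:j] in database:
                best = j - i
                break
        chunks.append(best)
        i += max(best, 1)
    return chunks

def irregularity(query, database):
    """Fragments needed per query symbol: near 0 when the query is
    explained by a few large contiguous chunks, 1.0 when every
    symbol needs its own fragment (or none exists)."""
    return len(compose(query, database)) / len(query)
```

Against the database "the quick brown fox jumps", the query "brown fox" is explained by one large chunk (low irregularity), whereas "xqzw" needs a separate fragment per symbol (maximal irregularity).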


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号