Similar Literature
20 matching records found (search time: 31 ms)
1.
In batch manufacturing, a fraction of the batch, or “lot,” may require reworking because its members fail to conform to standards. A rework station “undoes” the previous operation, so that the nonconforming members can go through the same operation additional times. This paper explores how policies dealing with these nonconforming members affect the cycle time of a facility.

Two different operating policies can be followed. In one, the “mother” lot is held back while the “child” sub-lots are reworked, after which all members are reunited for the next operation. In the other, the mother lot is allowed to proceed to the next operation while the child is held back; reworked members are then introduced by one of three methods: In the first, the reworked members of each mother lot are introduced by themselves. In the second, a minimum order quantity of reworked members is designated and a new lot introduced when this level is reached. In the third, the reworked members are added to the next mother lot that visits the operation.
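As a rough illustration of the three reintroduction rules under the "send ahead" policy, here is a minimal sketch; the names, data layout, and thresholds are my own, and the paper's queuing models are not reproduced:

```python
def reintroduce(rework_queue, policy, moq=10):
    """Form new lots from reworked members under the 'send ahead' policy.
    rework_queue: list of (mother_lot_id, n_reworked) awaiting reintroduction.
    Returns (lots_released, members_still_waiting)."""
    if policy == "own_lot":        # method 1: each mother lot's rework re-enters by itself
        return [(lot, n) for lot, n in rework_queue], []
    if policy == "moq":            # method 2: release a new lot once a minimum order quantity accrues
        total = sum(n for _, n in rework_queue)
        if total >= moq:
            return [("combined", total)], []
        return [], rework_queue    # keep waiting until the MOQ level is reached
    if policy == "piggyback":      # method 3: merge into the next mother lot visiting the operation
        return [("next_mother_lot", sum(n for _, n in rework_queue))], []
    raise ValueError(policy)

# e.g. two mother lots produced 3 and 4 reworked members, MOQ policy with moq=10:
print(reintroduce([("A", 3), ("B", 4)], "moq"))   # -> ([], [('A', 3), ('B', 4)])
```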

In this paper, queuing models are developed for these policies, and the policies are simulated with regard to cycle time. A simulation of a wafer fabrication model is used to determine the effectiveness of these policies and their impact on cycle time.


2.
We discuss a “binary” algorithm for solving systems of linear equations with integer coefficients. So-called “binary” algorithms differ from ordinary ones in that there is no roundoff error, but only overflow, and the underlying analysis is p-adic analysis rather than conventional real analysis. The advantages of this algorithm are especially apparent when extremely large numbers are involved and no roundoff error can be tolerated.

VLSI implementation of this and other "binary" algorithms is very appealing because of the extreme regularity of the circuits involved.
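A minimal sketch of p-adic (Hensel) lifting in the spirit of such "binary" algorithms — not the paper's exact algorithm, and using a generic odd prime rather than p = 2. Every operation is exact integer arithmetic, so there is no roundoff:

```python
def inv_mod(A, p):
    """Invert the integer matrix A modulo a prime p by Gauss-Jordan elimination
    (requires Python >= 3.8 for pow(x, -1, p); assumes det(A) != 0 mod p)."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [int(i == j) for j in range(n)]
         for i in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] % p)
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], -1, p)
        M[c] = [v * inv % p for v in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[c])]
    return [row[n:] for row in M]

def padic_solve(A, b, p=65537, k=12):
    """Lift a solution of A x = b one p-adic digit at a time:
    after k steps, A x == b (mod p^k), with all arithmetic exact."""
    n = len(A)
    Ainv = inv_mod(A, p)
    x, r, pk = [0] * n, list(b), 1
    for _ in range(k):
        d = [sum(Ainv[i][j] * r[j] for j in range(n)) % p for i in range(n)]
        x = [xi + pk * di for xi, di in zip(x, d)]
        r = [(r[i] - sum(A[i][j] * d[j] for j in range(n))) // p
             for i in range(n)]          # exact integer division by construction
        pk *= p
    return x, pk   # a rational-reconstruction step (omitted) recovers exact fractions

A = [[2, 1], [1, 3]]; b = [3, 4]         # exact solution x = (1, 1)
x, pk = padic_solve(A, b)
print([xi % pk for xi in x])             # -> [1, 1]
```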


3.
Constrained multibody system dynamics: an automated approach
The governing equations for constrained multibody systems are formulated in a manner suitable for their automated, numerical development and solution. Specifically, the “closed loop” problem of multibody chain systems is addressed.

The governing equations are developed by modifying dynamical equations obtained from Lagrange's form of d'Alembert's principle. This modification, which is based upon a solution of the constraint equations obtained through a “zero eigenvalues theorem,” is, in effect, a contraction of the dynamical equations.

It is observed that, for a system with n generalized coordinates and m constraint equations, the coefficients in the constraint equations may be viewed as "constraint vectors" in n-dimensional space. In this setting, the system itself is free to move in the n − m directions which are "orthogonal" to the constraint vectors.
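A minimal numerical sketch of such a contraction (my reconstruction, ignoring the time-derivative terms of the constraints): a basis for the null space of the constraint matrix supplies the n − m admissible directions, and the dynamical equations are projected onto them.

```python
import numpy as np

def contracted_accelerations(M, F, B):
    """Given mass matrix M (n x n), generalized forces F (n,), and constraint
    matrix B (m x n) whose rows are the 'constraint vectors', solve the
    contracted dynamical equations in the n - m admissible directions."""
    _, s, Vt = np.linalg.svd(B)
    m = int((s > 1e-12 * s[0]).sum())   # numerical rank = independent constraints
    N = Vt[m:].T                        # n x (n - m) basis 'orthogonal' to the constraint vectors
    z = np.linalg.solve(N.T @ M @ N, N.T @ F)   # contracted equations
    return N @ z                        # generalized accelerations with B @ qdd = 0

# e.g. two unit masses constrained to share a coordinate (q1 - q2 = 0):
M = np.eye(2); F = np.array([1.0, 0.0]); B = np.array([[1.0, -1.0]])
print(contracted_accelerations(M, F, B))   # -> [0.5, 0.5]
```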


4.
Knowledge-base V&V primarily addresses the question: “Does my knowledge-base contain the right answer and can I arrive at it?” One of the main goals of our work is to properly encapsulate the knowledge representation and allow the expert to work with manageable-sized chunks of the knowledge-base. This work develops a new methodology for the verification and validation of Bayesian knowledge-bases that assists in constructing and testing such knowledge-bases. Assistance takes the form of ensuring that the knowledge is syntactically correct, correcting “imperfect” knowledge, and also identifying when the current knowledge-base is insufficient as well as suggesting ways to resolve this insufficiency. The basis of our approach is the use of probabilistic network models of knowledge. This provides a framework for formally defining and working on the problems of uncertainty in the knowledge-base.
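As a toy illustration of one such "syntactically correct" check — my own sketch, not the paper's system — every conditional probability table row must be a proper distribution:

```python
def check_cpt(cpt, tol=1e-9):
    """Flag CPT rows that are not proper distributions: a minimal
    syntactic V&V check in the spirit of the abstract."""
    problems = []
    for parent_state, dist in cpt.items():
        if any(p < 0 for p in dist.values()):
            problems.append((parent_state, "negative probability"))
        if abs(sum(dist.values()) - 1.0) > tol:
            problems.append((parent_state, "probabilities do not sum to 1"))
    return problems

# hypothetical P(Alarm | Burglary) with one malformed row:
cpt = {"burglary=yes": {"alarm=yes": 0.9, "alarm=no": 0.1},
       "burglary=no":  {"alarm=yes": 0.2, "alarm=no": 0.7}}   # sums to 0.9
print(check_cpt(cpt))   # -> [('burglary=no', 'probabilities do not sum to 1')]
```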

In this paper, we examine the project, which is concerned with assisting a human expert to build knowledge-based systems under uncertainty. We focus on how verification and validation are currently achieved in it.


5.
It is hard to implement the ADI method efficiently on distributed-memory parallel computers. We propose the "P-scheme," which parallelizes the solution of the tridiagonal linear systems that arise in the ADI method; however, its effectiveness is limited to cases where the problem size is large enough, mainly because of the communication cost of the scheme's propagation phase.

To overcome this difficulty, we propose an improved version of the P-scheme with "message vectorization," which aggregates several communication messages into one and thereby reduces the communication cost. We also evaluate the effectiveness of message vectorization for the ADI method and show that the improved P-scheme works well even for smaller problems, achieving linear and super-linear speedups for 8194 × 8194 and 16,386 × 16,386 problems, respectively.
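The gain from message vectorization can be seen in a standard latency-bandwidth cost model (my sketch, not taken from the paper): with per-message latency α, per-word transfer cost β, and m messages of w words each,

```latex
T_{\text{separate}} = m(\alpha + \beta w), \qquad
T_{\text{vectorized}} = \alpha + \beta m w, \qquad
T_{\text{separate}} - T_{\text{vectorized}} = (m - 1)\,\alpha ,
```

so aggregation saves m − 1 message latencies, a saving that dominates precisely when the problem, and hence w, is small.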


6.
Donnell-type stability equations for the buckling of stringer-stiffened cylindrical panels under combined axial compression and hydrostatic pressure are solved by the displacement approach of [6]. The solution is employed for a parametric study over a wide range of panel and stringer geometries to evaluate the combined influence of panel configuration and boundary conditions along the straight edges on the buckling behavior of the panel relative to a complete "counter" cylinder (i.e. a cylinder with identical skin and stiffener parameters).

The parametric studies reveal a "sensitivity" to the "weak in shear" (N_x = N_{xφ} = 0) SS1-type boundary conditions along the straight edges, for which the panel buckling loads are always smaller than those predicted for a complete "counter" cylinder. In the case of the "classical" SS3 B.C.s, there always exist values of the panel width, 2φ_0, for which ρ = 1, i.e. the panel buckling load equals that of the complete "counter" cylinder. For the SS2 and SS4 B.C. types, the manner in which the panel critical load approaches that of the complete cylinder appears to depend on the panel configuration.

Utilization of panels for the experimental determination of a complete cylinder's buckling load is found to be satisfactory for very lightly and very heavily stiffened panels, as well as for short panels, (L/R) = 0.2 and 0.5. Panels of moderate length and stiffening have to be excluded, since they lead to nonconservative buckling load predictions.


7.
The complexity of performing matrix computations, such as solving a linear system, inverting a nonsingular matrix or computing its rank, has received a lot of attention from both the theory and the scientific computing communities. In this paper we address some "nonclassical" matrix problems that find extensive applications, notably in control theory. More precisely, we study the matrix equations AX + XA^T = C and AX − XB = C, the "inverse" of the eigenvalue problem (called pole assignment), and the problem of testing whether the matrix [B, AB, …, A^(n−1)B] has full row rank. For these problems we show two kinds of PRAM algorithms: on one side very fast, polylog-time algorithms, and on the other side almost-linear-time, processor-efficient algorithms. In the latter case, the algorithms rely on basic matrix computations that can also be performed efficiently on realistic machine models.
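The last of these tests reduces to a rank computation; a minimal dense sketch (illustrative only — the paper's PRAM algorithms are not reproduced):

```python
import numpy as np

def controllability_full_row_rank(A, B):
    """Build [B, AB, ..., A^(n-1)B] and test whether it has full row rank n."""
    n = A.shape[0]
    blocks, P = [B], B
    for _ in range(n - 1):
        P = A @ P
        blocks.append(P)
    C = np.hstack(blocks)
    return np.linalg.matrix_rank(C) == n

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])             # classic controllable pair
print(controllability_full_row_rank(A, B))   # -> True
```

For the matrix equations themselves, a dense sequential baseline exists in standard libraries: scipy.linalg.solve_sylvester(A, B, C) solves AX + XB = C, so AX − XB = C is handled by negating B.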

8.
We present particle simulations of natural convection of a symmetrical, nonlinear, three-dimensional cavity flow problem. Qualitative studies are made in an enclosure with localized heating. The assumption is that particles interact locally by means of a compensating Lennard-Jones type force F, whose magnitude is given by −G/r^p + H/r^q.

In this formula, the parameters G, H, p, q depend upon the nature of the interacting particles and r is the distance between two particles. We also consider the system to be under the influence of gravity. Assuming that there are n particles, the equations relating position, velocity and acceleration at time t_k = kΔt, k = 0, 1, 2, …, are solved simultaneously using the "leap-frog" formulas. The basic formulas relating force and acceleration are Newton's dynamical equations F_{i,k} = m_i a_{i,k}, i = 1, 2, 3, …, n, where m_i is the mass of the ith particle.
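A minimal kick-drift version of a leap-frog update with this force law (my sketch; the paper's staggered half-step bookkeeping and parameter values are not reproduced):

```python
import numpy as np

def step(pos, vel, masses, dt, G, H, p, q, g=9.81):
    """One leap-frog-style update: pairwise force of magnitude -G/r**p + H/r**q
    along the line of centers, plus gravity; F = m a gives the accelerations."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            r = np.linalg.norm(d)
            f = (-G / r**p + H / r**q) * d / r   # force on particle i due to j
            acc[i] += f / masses[i]
            acc[j] -= f / masses[j]              # Newton's third law
    acc[:, -1] -= g                              # gravity on the vertical coordinate
    vel = vel + dt * acc                         # 'kick'
    pos = pos + dt * vel                         # 'drift'
    return pos, vel

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = step(pos, vel, masses=np.array([1.0, 1.0]),
                dt=1e-3, G=1.0, H=1.0, p=7, q=13)
```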

Extensive and varied computations on a CRAY X-MP/24 are described and discussed, and comparisons are made with the results of others.


9.
Existing search engines, with Google at the top, have many remarkable capabilities; but what is not among them is deduction capability: the capability to synthesize an answer to a query from bodies of information which reside in various parts of the knowledge base.

In recent years, impressive progress has been made in enhancing performance of search engines through the use of methods based on bivalent logic and bivalent-logic-based probability theory. But can such methods be used to add nontrivial deduction capability to search engines, that is, to upgrade search engines to question-answering systems? A view which is articulated in this note is that the answer is “No.” The problem is rooted in the nature of world knowledge, the kind of knowledge that humans acquire through experience and education.

It is widely recognized that world knowledge plays an essential role in assessment of relevance, summarization, search and deduction. But a basic issue which is not addressed is that much of world knowledge is perception-based, e.g., “it is hard to find parking in Paris,” “most professors are not rich,” and “it is unlikely to rain in midsummer in San Francisco.” The problem is that (a) perception-based information is intrinsically fuzzy; and (b) bivalent logic is intrinsically unsuited to deal with fuzziness and partial truth.

To come to grips with the fuzziness of world knowledge, new tools are needed. The principal new tool, briefly described in this note, is Precisiated Natural Language (PNL). PNL is based on fuzzy logic and has the capability to deal with partiality of certainty, partiality of possibility and partiality of truth. These are the capabilities that are needed to draw on world knowledge for assessment of relevance, and for summarization, search and deduction.


10.
The first half is a tutorial on orderings, lattices, Boolean algebras, operators on Boolean algebras, Tarski's fixed point theorem, and relation algebras.

In the second half, elements of a complete relation algebra are used as "meanings" for program statements. The use of relation algebras for this purpose was pioneered by de Bakker and de Roever in [10–12]. For a class of programming languages with program schemes, single μ-recursion, while-statements, if-then-else, sequential composition, and nondeterministic choice, a definition of "correct interpretation" is given which properly reflects the intuitive (or operational) meanings of the program constructs. A correct interpretation includes for each program statement an element serving as "input/output relation" and a domain element specifying that statement's "domain of nontermination". The derivative of Hitchcock and Park [17] is defined and a relation-algebraic version of the extension by de Bakker [8, 9] of the Hitchcock-Park theorem is proved. The predicate transformers wps(-) and wlps(-) are defined and shown to obey all the standard laws in [15]. The "law of the excluded miracle" is shown to hold for an entire language if it holds for that language's basic statements (assignment statements and so on). Determinism is defined and characterized for all the program constructs. A relation-algebraic version of the invariance theorem for while-statements is given. An alternative definition of interpretation, called "demonic", is obtained by using "demonic union" in place of ordinary union, and "demonic composition" in place of ordinary relational composition. Such interpretations are shown to arise naturally from a special class of correct interpretations, and to obey the laws of wps(-).
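For concreteness, here is a small sketch of ordinary versus "demonic" composition on finite relations represented as sets of pairs; this is my illustration of the standard definitions, not the paper's formalism:

```python
def compose(R, S):
    """Ordinary relational composition: (x, z) whenever some y has xRy and ySz."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def demonic_compose(R, S):
    """Demonic composition: keep (x, z) only if *every* R-successor of x lies
    in the domain of S, so no choice can steer x into nontermination."""
    dom_S = {y for (y, _) in S}
    safe = {x for (x, _) in R if all(y in dom_S for (x2, y) in R if x2 == x)}
    return {(x, z) for (x, z) in compose(R, S) if x in safe}

R = {(1, "a"), (1, "b"), (2, "a")}
S = {("a", 10)}                       # "b" has no S-successor
print(compose(R, S))                  # {(1, 10), (2, 10)}
print(demonic_compose(R, S))          # {(2, 10)}: from 1, the choice "b" may diverge
```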


11.
Numerical software development tends to struggle with increasing complexity. This is due, on the one hand, to the integration of numerical models and, on the other hand, to changes in hardware. Parallel computers seem to fulfill the need for ever more computing resources, but they are more complex to program.

The article shows how abstraction is used to combat complexity. It argues that separating a specification ("what"), its realisation ("how"), and its implementation ("when, where") is of vital importance in software development. The main point is that development steps and levels of abstraction are identified such that the resulting software has a clear and natural structure.

Development steps can be cast into a formal, i.e., mathematical, framework, which leads to rigorous software development. This way of development leads to accurate and unambiguous recording of development steps, which simplifies the maintenance, extension and porting of software. Portability is especially important in the field of parallel computing, where no universal parallel computer model exists.


12.
PFL is a functional database language in which functions are defined equationally and bulk data is stored using a special class of functions called selectors. It is a lazy language, supports higher-order functions, has a strong polymorphic type inference system, and allows new user-defined data types and values to be declared. All functions, types and values persist in a database. Functions can be written which update all aspects of the database: by adding data to selectors, by defining new equations, and by introducing new data types and values. PFL is “semi-referentially transparent”, in the sense that whilst updates are referentially opaque and are executed destructively, all evaluation is referentially transparent. Similarly, type checking is “semi-static” in the sense that whilst updates are dynamically type checked at run time, expressions are type checked before they are evaluated and no type errors can occur during their evaluation.

In this paper we examine the expressiveness of PFL with respect to updates, and illustrate the language by developing a number of general purpose update functions, including functions for restructuring selectors, for memoisation, and for generating unique system identifiers. We also provide a translation mechanism between Datalog programs and equations, and show how different Datalog evaluation strategies can be supported.


13.
Eleven Spectralon (a sintered polytetrafluoroethylene-based material) and 16 BaSO4 reference reflectance panels were calibrated using a field calibration technique. The Spectralon panels differed both in their directional/hemispherical and directional/directional reflectance. However, the differences were sufficiently small that "general" calibration equations were developed. For panels constructed of the same material and with the same methods as those used in these experiments, the directional/directional reflectance may be within ± 0.020 at 10°, ± 0.015 at 45°, and ± 0.041 at 80° of that predicted by the "general" equations. For field measurements, these values are considerably better than those that would be obtained using a value of the directional/hemispherical reflectance. The directional/directional reflectance of the 16 BaSO4 panels varied considerably among panels, so much so that it was not feasible to develop "general" calibration equations. Apparently, the nonlambertian properties of BaSO4 panels are dependent upon the method of applying the barium sulfate coating.

14.
15.
A new topomer-based method for 3D searching of conventional structural databases is described, according to which 3D molecular structures are compared as sets of fragments or topomers, in single rule-generated conformations oriented by superposition of their fragmentation bonds. A topomer is characterized by its CoMFA-like steric shape and now also by its pharmacophoric features, in some novel ways that are detailed and discussed.

To illustrate the behavior of topomer similarity searching, a new dbtop program was used to generate a topomer distance matrix for a diverse set of 26 PDE4 inhibitors and 15 serotonin receptor modulators. With the best of three parameter settings tried, within the 210 shortest topomer distances (of 1460), 94.7% involved pairs of compounds having the same biological activity, and the nearest neighbor to every compound also shared its activity. The standard similarity metric, Tanimoto coefficients of “2D fingerprints”, could achieve a similar selectivity performance only for the 108 shortest distances, and three Tanimoto nearest neighbors had a different biological activity. Topomer similarity also allowed “lead-hopping” among 22 of the 26 PDE4 inhibitors, notably between rolipram and cipamfylline, while “2D fingerprints” Tanimotos recognized similarity only within generally recognized structural classes.
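For reference, the Tanimoto coefficient used as the baseline above is simply set overlap on fingerprint bits; a generic sketch with toy data, not the paper's fingerprints:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient T = c / (a + b - c) on fingerprints
    represented as sets of on-bit indices."""
    c = len(fp_a & fp_b)
    return c / (len(fp_a) + len(fp_b) - c)

print(tanimoto({1, 4, 9, 16}, {1, 4, 8, 16, 23}))   # 3 / (4 + 5 - 3) = 0.5
```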

In 370 searches of authentic high-throughput screening (HTS) data sets, the typical topomer similarity search rate was about 200 structures per second.


16.
There are two distinct types of MIMD (Multiple Instruction, Multiple Data) computers: the shared memory machine, e.g. Butterfly, and the distributed memory machine, e.g. Hypercubes, Transputer arrays. Typically these utilize different programming models: the shared memory machine has monitors, semaphores and fetch-and-add; whereas the distributed memory machine uses message passing. Moreover there are two popular types of operating systems: a multi-tasking, asynchronous operating system and a crystalline, loosely synchronous operating system.

In this paper I first describe the Butterfly, Hypercube and Transputer-array MIMD computers, and review monitors, semaphores, fetch-and-add and message passing; then I explain the two types of operating systems and give examples of how they are implemented on these MIMD computers. Next I discuss the advantages and disadvantages of shared memory machines with monitors, semaphores and fetch-and-add, compared to distributed memory machines using message passing, answering questions such as "is one model 'easier' to program than the other?" and "which is 'more efficient'?". One may think that a shared memory machine with monitors, semaphores and fetch-and-add is simpler to program and runs faster than a distributed memory machine using message passing, but we shall see that this is not necessarily the case. Finally I briefly discuss which type of operating system to use and on which type of computer; this of course depends on the algorithm one wishes to compute.


17.
Micro-level incentive systems tie operator compensation to speed and quality in a piece-rate environment. Since quality and speed are difficult to measure in hospitals, micro-level incentive schemes are difficult to develop. However, owing to the high turnover and low productivity of coders in the Medical Records Department at Georgetown University Hospital, it became apparent that an incentive system would be the only feasible solution to the problem.

In the Incentive System, the quality of each chart is tested using 25 indicators, each of which has been assigned a weight. The weights were developed using a software package called "Expert Choice". A double sampling plan is used to determine the size of the sample to be inspected for quality.

The speed requirements for the System were assessed via a survey conducted in the Coding section of the Medical Records Department, collecting relevant information on each chart, including the financial class, length of stay (LOS) and time to code. These data were analyzed using a SAS package to identify any significant relations between the experience of the coder at GUH, the LOS of the patient/chart, and the time to code a chart for the two third-party-payor financial classes: federal (Medicare & Medicaid) and non-federal (all others). It was determined that the time to code a chart varies with the LOS of the chart in each financial class and with the experience of the coder at GUH.

An initial analysis identified a "pure" incentive system, one in which pay varied in direct proportion to operator speed. However, it was decided to develop a hybrid plan. This plan would pay a coder 100% of base pay if the coder's performance was within acceptable levels of the prescribed standards; 100% of base pay plus incentive pay if performance was above the prescribed standards; and 100% of base pay with disciplinary action for performance below acceptable levels of the standards.
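A minimal sketch of the hybrid rule; the thresholds and the proportional incentive formula are illustrative assumptions, not the hospital's actual parameters:

```python
def hybrid_pay(base_pay, performance, standard, acceptable_fraction=0.9):
    """Hybrid incentive plan: base pay within acceptable levels, base pay plus
    incentive above standard, base pay plus a disciplinary flag below acceptable."""
    ratio = performance / standard
    if ratio > 1.0:
        return base_pay * ratio, None          # incentive assumed proportional to performance
    if ratio >= acceptable_fraction:
        return base_pay, None                  # meets prescribed standards
    return base_pay, "disciplinary action"     # below acceptable levels

print(hybrid_pay(1000, performance=120, standard=100))   # (1200.0, None)
print(hybrid_pay(1000, performance=80, standard=100))    # (1000, 'disciplinary action')
```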

It is hoped that the proposed Incentive System will not only attract and retain qualified coders at GUH, but also enhance the overall productivity of the Coding Section. A microcomputer-based software package has been developed to administer the Incentive System accurately.


18.
Interactive graphic methods have the potential to significantly reduce the cost associated with pre- and post-processing of finite element analyses. One area of particular importance is the creation and modification of part geometry.

This paper describes a powerful method for modification of geometry for finite element analysis pre-processors. The method, called “Variational Geometry”, uses a single representation to describe the entire family of geometries that share a generic shape.

A solid geometric model of a component is defined with respect to a set of scalar parameters. Dimensions, such as those which appear on a mechanical drawing, are treated as constraints on the permissible values of these parameters. Constraints on the geometry are expressed as a set of non-linear algebraic equations. The values of the parameters and hence the geometry may be determined by solving the set of non-linear constraint equations.
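A generic sketch of that solve — my illustration, with a finite-difference Jacobian; the paper's O(n) procedure described below is not reproduced:

```python
import numpy as np

def solve_constraints(g, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton's method on the non-linear dimensional constraints g(x) = 0
    (square system assumed: as many constraints as free parameters)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        gx = g(x)
        if np.linalg.norm(gx) < tol:
            break
        J = np.column_stack([(g(x + h * e) - gx) / h for e in np.eye(len(x))])
        x = x - np.linalg.solve(J, gx)
    return x

# e.g. a rectangle with parameters (w, h), dimensioned by width 4 and diagonal 5:
g = lambda x: np.array([x[0] - 4.0, x[0]**2 + x[1]**2 - 25.0])
print(solve_constraints(g, [1.0, 1.0]))   # -> [4. 3.]
```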

A procedure for minimizing the computational requirements is presented. For a part with n degrees of freedom, the solution time is shown to be O(n).


19.
In this paper we analyze a fundamental issue which directly impacts the scalability of current theoretical neural network models to applicative embodiments, in both software and hardware. This pertains to the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the consequent chaotic manifestations in the absence of proper conditioning. The latter concern is particularly significant since the computational inertia of neural networks in general, and of our dynamical learning formalisms in particular, manifests itself substantially only in massively parallel hardware (optical, VLSI or opto-electronic). We introduce a mathematical framework for systematically reconditioning additive-type models and derive a neuro-operator, based on the chaotic relaxation paradigm, whose resulting dynamics are neither "concurrently" synchronous nor "sequentially" asynchronous. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are also computed to characterize the network dynamics and to ensure that the throughput-limiting "emergent computational chaos" behavior is eliminated in models reconditioned with concurrently asynchronous algorithms.
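A toy sketch of the chaotic-relaxation idea (illustrative only; the paper's neuro-operator and Lyapunov-exponent analysis are not reproduced): when the update map is a contraction, convergence survives random, "concurrently asynchronous" component updates.

```python
import numpy as np

def chaotic_relaxation(W, b, x0, sweeps=200, p_fire=0.5, seed=0):
    """Asynchronously iterate x <- W x + b: in each sweep a random subset of
    components ('processors') updates from whatever values are currently held.
    If ||W||_inf < 1 the map contracts and the update schedule does not matter."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(sweeps):
        fire = rng.random(len(x)) < p_fire
        x[fire] = (W @ x + b)[fire]
    return x

W = np.array([[0.2, 0.1], [0.3, 0.4]])    # ||W||_inf = 0.7 < 1: a contraction
b = np.array([1.0, 2.0])
print(chaotic_relaxation(W, b, [0.0, 0.0]))
print(np.linalg.solve(np.eye(2) - W, b))  # the same fixed point, x = Wx + b
```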

20.
In this paper a theory of delegation is presented. There are at least three reasons for developing such a theory. First, one of the most relevant notions of "agent" is based on the notions of "task" and "on behalf of"; to ground this notion, a theory of delegation among agents is needed. Second, the notion of autonomy should be based on different kinds and levels of delegation. Third, the entire theory of cooperation and collaboration requires the definition of the two complementary attitudes of goal delegation and adoption linking collaborating agents.

After motivating the necessity for a principled theory of delegation (and adoption), the paper presents a plan-based approach to this theory. We analyze several dimensions of delegation/adoption (based on the interaction between the agents, the specification of the task, the possibility of subdelegation, the delegation of control, and the levels of help). The agent's autonomy and levels of agency are then derived. We describe the modelling of the client from the contractor's point of view and vice versa, with their differences, and the notion of trust that derives directly from this modelling.

Finally, a series of possible conflicts between client and contractor is considered: in particular collaborative conflicts, which stem from the contractor's intention to help the client beyond its request or delegation and to exploit its own knowledge and intelligence (reasoning, problem solving, planning, and decision skills) for the client itself.


