Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
This paper investigates the field of manufacturing system control. The subject is a fascinating one, given the importance it has acquired over recent decades at both the research and industrial levels. At the same time, it seems to the author that much of the complexity intrinsic to the subject lies in the different meanings, or levels of abstraction, that the terms "manufacturing system" and "control" may carry. The research presented here approaches the topic in a concrete fashion, i.e., by developing a control software system for a specific, though easily generalized, robotized manufacturing cell. Two development methodologies for a cell control system, from conceptual design to actual implementation, are presented and compared: the former, based on ladder logic diagrams, for a PLC-controlled manufacturing cell; the latter, based on object-oriented modeling and programming techniques, for a PC-controlled manufacturing cell. The analysis considers the internal and external requirements of the manufacturing system, driven largely by the contemporary industrial need for reconfigurable control systems, widely acknowledged as critical to success in the new era of mass customization.

2.
Many artificial intelligence tasks, such as automated question answering, reasoning, or heterogeneous database integration, involve verification of a semantic category (e.g., "coffee" is a drink, "red" is a color, while "steak" is not a drink and "big" is not a color). In this research, we explore fully automated, on-the-fly verification of membership in any arbitrary category that has not been anticipated a priori. Our approach does not rely on any manually codified knowledge (such as WordNet or Wikipedia) but instead capitalizes on the diversity of topics and word usage on the World Wide Web, and can thus be considered "knowledge-light" and complementary to "knowledge-intensive" approaches. We have created a quantitative verification model and established (1) which specific variables are important and (2) what ranges and upper limits of accuracy are attainable. While our semantic verification algorithm is entirely self-contained (it involves no previously reported components beyond the scope of this paper), we have tested it empirically within our fact-seeking engine on the well-known TREC conference test questions. With semantic verification in place, answer accuracy improved by up to 16%, depending on the specific models and metrics used.

3.
"Fuzzy functions," determined by the least squares estimation (LSE) technique, are proposed for the development of fuzzy system models. These "fuzzy functions with LSE" are offered as an alternative representation and reasoning schema to fuzzy rule base approaches. They can be obtained and implemented more easily by those without in-depth knowledge of fuzzy theory: working knowledge of a fuzzy clustering algorithm such as FCM, or one of its variants, is sufficient to obtain membership values for the input vectors. The membership values, together with the scalar input variables, are then used by the LSE technique to determine a fuzzy function for each cluster identified by FCM. These functions differ from both "fuzzy rule base" and "fuzzy regression" approaches. Various transformations of the membership values are included as new variables alongside the originally selected scalar input variables; at times, a logistic transformation of non-scalar original input variables may also be included as a new variable. A comparison of the "fuzzy functions-LSE" approach with ordinary least squares estimation (OLSE) shows that the former yields results that are better by roughly 10% or more, with respect to the RMSE measure, on both training and test sets.
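The pipeline described above (cluster memberships feeding a per-cluster least-squares fit) can be sketched as follows. This is a minimal illustration, assuming the membership matrix `U` has already been produced by FCM; the squared-membership transformation and the membership-weighted fit are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def fit_fuzzy_functions(X, y, U):
    """Fit one least-squares 'fuzzy function' per cluster.

    X : (n, d) scalar input vectors
    y : (n,) targets
    U : (n, c) FCM membership values (rows sum to 1)
    Returns a list of coefficient vectors, one per cluster.
    """
    n, c = U.shape
    coeffs = []
    for j in range(c):
        # Augment the inputs with the membership value and a simple
        # transformation of it (here: its square), echoing the idea of
        # adding transformed memberships as new variables.
        u = U[:, j:j + 1]
        A = np.hstack([np.ones((n, 1)), X, u, u ** 2])
        w = np.sqrt(U[:, j])                   # weight samples by membership
        beta, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
        coeffs.append(beta)
    return coeffs

def predict(X, U, coeffs):
    """Membership-weighted mixture of the per-cluster functions."""
    n = X.shape[0]
    y_hat = np.zeros(n)
    for j, beta in enumerate(coeffs):
        u = U[:, j:j + 1]
        A = np.hstack([np.ones((n, 1)), X, u, u ** 2])
        y_hat += U[:, j] * (A @ beta)
    return y_hat
```

Because each cluster's function sees the membership value as just another regressor, no fuzzy rule base or defuzzification step is needed; prediction is a plain weighted sum.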

4.
Stability and performance analysis of mixed product run-to-run control
Run-to-run control has been widely used in batch manufacturing processes to reduce variation. In batch processes, however, many different products are fabricated on the same set of process tools with different recipes. Two intuitive ways to define a control scheme for such a mixed production mode are: (i) every run of every product is used to estimate a common tool disturbance parameter, a "tool-based" approach; or (ii) a single disturbance parameter describing the combined effect of tool and product is estimated from the runs of a particular product on a specific tool, a "product-based" approach. In this study, a model two-product plant was developed to investigate the "tool-based" and "product-based" approaches. The closed-loop responses are derived analytically and the control performances evaluated. We found that the "tool-based" approach is unstable when the plant is non-stationary and the plant-model mismatches differ between products. "Product-based" control is stable, but its performance is inferior to single-product control when the drift is significant. While the controllers for frequently run products can be tuned much as in single-product control, a more aggressive controller should be used for the infrequent products, which experience a larger drift between runs. The results were substantiated on a larger system with multiple products, multiple plants, and a random production schedule.
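The "product-based" scheme can be illustrated with a toy EWMA run-to-run loop. The unit-gain linear plant, deterministic per-run drift, and the EWMA disturbance filter below are simplifying assumptions for illustration, not the paper's model. Under a constant drift δ per run and EWMA weight λ, the output settles at an offset of roughly δ/λ from target, which is why infrequently run products (a larger effective δ between their runs) call for more aggressive tuning.

```python
def run_to_run(n_runs, drift=0.1, lam=0.3, target=0.0, gain=1.0):
    """Product-based EWMA run-to-run control on a drifting tool.

    The controller keeps a single combined tool+product disturbance
    estimate a_hat and sets the recipe u so the predicted output hits
    the target.  Returns the sequence of process outputs.
    """
    a_true, a_hat = 0.0, 0.0
    outputs = []
    for _ in range(n_runs):
        u = (target - a_hat) / gain                        # recipe for this run
        y = gain * u + a_true                              # plant response
        a_hat = lam * (y - gain * u) + (1 - lam) * a_hat   # EWMA update
        a_true += drift                                    # tool drifts between runs
        outputs.append(y)
    return outputs
```

With drift=0.1 and lam=0.3 the output error converges to about 0.1/0.3 ≈ 0.33, the classic EWMA lag under a ramp disturbance.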

5.
Clinical decision support systems (CDSS) and their logic syntaxes (e.g., Arden Syntax) include the coding of notifications. This paper describes the rationale for segregating policies, user preferences, and clinical monitoring rules into "advanced notification" and "clinical" components, which together form a novel and complex CDSS. Notification rules and hospital policies are abstracted from care-provider roles and alerting mechanisms, respectively. User-defined preferences determine which devices are used to receive notifications. Our design differs from previous notification systems in that it integrates a versatile notification platform, supporting a wide range of mobile devices, with an XML/HL7-compliant communication protocol.

6.
Extraction of geometric characteristics for manufacturability assessment
One of the advantages of feature-based design is that it provides data, defined as parameters of features, in forms readily available to tasks from design through manufacturing; it can thus facilitate the integration of CAD and CAM. However, not all design features are features required by downstream applications, and not all parameters or data can be predefined in the features. A significant example is a property formed by feature interactions: the interaction of a positive feature and a negative feature, for instance, induces a wall-thickness change that might cause defects in a part. Identifying the wall-thickness change by detecting the feature interaction is therefore required in moldability assessment.

The work presented in this paper deals with the extraction of geometric characteristics in feature-based design for manufacturability assessment. We focus on the manufacturability assessment of discrete parts, with emphasis on a net shape process: injection molding. The definition, derivation, and representation of the spatial relationships between features are described. The geometric characteristics formed by feature interactions are generalized as significant items, such as "depth", "thickness", and "height", based on the generalization of feature shapes. Reasoning about feature interactions and extracting geometric characteristics is treated as a refinement procedure. High-level spatial relationships ("is_in", "adjacent_to", and "coplanar") as well as their geometric details are first derived; the significant items formed by feature interactions are then computed from the detailed spatial relationships. This work was implemented in a computer-aided concurrent product and process development environment to support molding product design assessment.
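A toy version of the idea conveys the flavor: classify the high-level spatial relationship between two features and derive a wall-thickness item from their interaction. The axis-aligned-box representation, the relation names, and the per-axis thickness rule below are illustrative simplifications, not the paper's geometric reasoning.

```python
def box_relation(a, b):
    """Classify box `a` relative to box `b`, each given as a pair
    ((xmin, ymin, zmin), (xmax, ymax, zmax)).  Returns "is_in" if a is
    contained in b, "adjacent_to" if they touch or overlap, else
    "disjoint".  A toy stand-in for the paper's relationship derivation."""
    (alo, ahi), (blo, bhi) = a, b
    if all(l >= m for l, m in zip(alo, blo)) and \
       all(h <= m for h, m in zip(ahi, bhi)):
        return "is_in"
    if all(al <= bh and bl <= ah
           for al, ah, bl, bh in zip(alo, ahi, blo, bhi)):
        return "adjacent_to"
    return "disjoint"

def min_wall_thickness(part, pocket):
    """Thinnest wall left between a negative feature (pocket) and the
    boundary of the positive feature (part) that contains it."""
    (plo, phi), (qlo, qhi) = part, pocket
    return min(min(q - p for p, q in zip(plo, qlo)),
               min(p - q for p, q in zip(phi, qhi)))
```

A zero result from `min_wall_thickness` corresponds to a pocket that opens onto a face of the part rather than leaving a wall.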

7.
To remove the cell-size limitation and make cellular manufacturing systems more flexible, a manufacturing cell has been equipped with a mobile robot that moves from machine to machine, much as its human counterpart would. To further recover a portion of the attributes (the intelligence and adaptability of the human worker) lost in robotizing the traditional manned manufacturing cell, a "limited" knowledge-based cell control algorithm is developed. The algorithm retains standard pull control logic as its underlying basis while heuristically exploiting knowledge of the ongoing processes and their current status. In this paper, an analysis of this new predictive pull-based controller is presented. Simulation results from the predictive pull-based controller are evaluated and compared with standard pull control logic, first-come-first-served, and shortest-distance scheduling. A test bed consisting of an unmanned manufacturing cell with a self-propelled mobile robot was used for experimental verification. Simulation and experimental results show that the degree of improvement in cell performance over the standard pull control method depends on the robot's velocity and on how much part inspection and part rejection is performed within the cell. In conclusion, the predictive pull-based algorithm is a relatively simple variation of pull control that can bring pull cell control performance to "near optimal" levels.

8.
A crucial step in modeling a system is determining the parameter values to use in the model. In this paper we assume that we have a set of measurements collected from an operational system, and that an appropriate model of the system (e.g., one based on queueing theory) has been developed. Not infrequently, proper values for certain parameters of this model are difficult to estimate from the available data (because the corresponding parameters have no clear physical meaning, because they cannot be obtained directly from the available measurements, etc.). Hence, we need a technique to determine the missing parameter values, i.e., to calibrate the model.

As an alternative to unscalable "brute force" techniques, we propose to view model calibration as a non-linear optimization problem with constraints. The resulting method is conceptually simple and easy to implement. Our contribution is twofold. First, we propose improved definitions of the "objective function" quantifying the "distance" between the performance indices produced by the model and the values obtained from measurements. Second, we develop a customized derivative-free optimization (DFO) technique whose original feature is its ability to allow temporary constraint violations. This technique lets us solve the optimization problem accurately, thereby providing the "right" parameter values. We illustrate the method on two simple real-life case studies.
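As a rough sketch of the idea (a plain compass search, not the authors' constraint-relaxing DFO method), calibration can be posed as minimizing a relative-error objective between model outputs and measurements; the names `model`, `measured`, and `x0` are hypothetical.

```python
def calibrate(model, measured, x0, step=1.0, tol=1e-6):
    """Toy derivative-free (compass/pattern) search for model calibration.

    model(x) -> list of predicted performance indices; we minimize the
    sum of squared relative errors against `measured`.
    """
    def objective(x):
        pred = model(x)
        return sum(((p - m) / m) ** 2 for p, m in zip(pred, measured))

    x = list(x0)
    best = objective(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]            # probe one coordinate at a time
                trial[i] += d
                val = objective(trial)
                if val < best:
                    x, best, improved = trial, val, True
        if not improved:
            step /= 2.0                 # shrink the pattern and retry
    return x
```

Each evaluation of `objective` would, in practice, run the (possibly expensive) performance model, which is exactly why a derivative-free method that economizes on evaluations matters.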

9.
The first part of this paper describes the complete structure of an AGV control system. The control system is hierarchical and consists of five levels, and the structure of each level does not depend on the structures of the others. This means the control system depends on the design of the AGV only at the lowest level: the actuator servo-control level and its coordination in realizing the AGV's primitive functions.

The second part of the paper describes rules applicable to AGV steering. The structure of these rules depends on two groups of factors. The first group depends on the information fed to the AGV processor by the position sensor. The second group represents the aims and conditions of AGV steering, such as positioning accuracy, positioning time, the room allowed for maneuvering, the shape of the given trajectory, etc. The AGV steering rules contain sequences of primitive functions of types such as "turn left", "straighten" (correct), and "go straight on". The trajectory, as one of the basic factors, is defined at the level of controlling an elementary movement. "Controlling an elementary movement" means selecting a transport road through the transport network and coding it using elementary movements such as "go straight" (for a road section) or "turn left" (for turning at a crossroad).

The third part of the paper presents the results of an AGV steering simulation. An exact kinematic AGV model used for simulating the control models is also presented.

10.
This research contributes to the theoretical basis for the appropriate design of computer-based, integrated planning information systems. It provides a framework for integrating relevant knowledge, theory, methods, and technology, and clarifies criteria for appropriate system design. The requirements for a conceptual system design are developed based on "diffusion of innovation" theory, lessons learned in the adoption and use of existing planning information systems, current information-processing technology (including expert system technology), and methodology for evaluating mitigation strategies for disaster events. The research findings focus on the assessment of new information systems technology. Chief among them are the utility of case-based reasoning for discovering and formalizing the meta-rules needed by expert systems, the role of "diffusion of innovation" theory in establishing design criteria, and the definition of the client interests served by integrated planning information systems. The work concludes with the selection of a prototyping exercise; the prototype is developed in a forthcoming technical paper (Masri & Moore, 1994).

11.
In this paper, we address a fundamental problem related to the induction of Boolean logic: given a set of data, represented as a set of binary "true n-vectors" (or "positive examples") and a set of "false n-vectors" (or "negative examples"), we establish a Boolean function (or an extension) f, so that f is true (resp., false) on every given true (resp., false) vector. We further require that such an extension belong to a certain specified class of functions, e.g., the class of positive functions, the class of Horn functions, and so on. The class of functions represents our a priori knowledge or hypothesis about the extension f, which may be obtained from experience or from the analysis of mechanisms that may or may not cause the phenomena under consideration. Real-world data may contain errors; e.g., measurement and classification errors may creep in when the data are obtained, or there may be other influential factors not represented as variables in the vectors. In such situations, we have to give up the goal of establishing an extension that is perfectly consistent with the given data, and we are satisfied instead with an extension f having the minimum number of misclassifications. Both problems, i.e., finding an extension within a specified class of Boolean functions and finding a minimum-error extension in that class, are studied extensively in this paper. For certain classes we provide polynomial algorithms; for other cases we prove NP-hardness.
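For the class of positive (monotone) functions, the existence question has a simple characterization worth sketching: an extension exists iff no negative example componentwise dominates a positive example, since monotonicity would otherwise force that negative example to be classified true. The illustration below assumes binary tuples and is not the paper's algorithm.

```python
def dominates(x, y):
    """Componentwise x >= y for binary tuples."""
    return all(a >= b for a, b in zip(x, y))

def has_positive_extension(true_vecs, false_vecs):
    """A positive (monotone) extension exists iff no false vector
    dominates a true vector."""
    return not any(dominates(f, t) for f in false_vecs for t in true_vecs)

def positive_extension(true_vecs):
    """The minimal positive extension: true exactly on vectors that
    dominate some given true vector."""
    return lambda x: any(dominates(x, t) for t in true_vecs)
```

When the check fails, no perfectly consistent positive extension exists, which is precisely the situation where the paper's minimum-error variant of the problem takes over.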

12.
Conflict situations arise not only from misunderstandings, erroneous perceptions, partial knowledge, false beliefs, etc., but also from differences in "opinions" and in the agents' value systems. It is not always possible, and maybe not even desirable, to "solve" this kind of conflict, since its sources are subjective. The communicating agents can, however, use knowledge of the opponent's preferences to try to convince the partner of a point of view they wish to promote. Dealing with these situations requires an argumentative capacity able to handle not only "demonstrative" arguments but also "dialectic" ones, which may not necessarily be based on rationality and valid premises. This paper presents a formalization of a theory of informal argumentation, focused on techniques for changing the interlocutor's attitudes, in the domain of health promotion.

13.
Some computationally hard problems, e.g., deduction in logical knowledge bases, are such that part of an instance is known well before the rest of it, and remains the same for several subsequent instances of the problem. In these cases, it is useful to preprocess this known part off-line so as to simplify the remaining on-line problem. In this paper we investigate such a technique in the context of intractable, i.e., NP-hard, problems. Recent results in the literature show that not all NP-hard problems behave in the same way: for some of them, preprocessing yields polynomial-time on-line simplified problems (we call them compilable), while for others, compilability implies consequences that are considered unlikely. Our primary goal is to provide a sound methodology that can be used either to prove or to disprove that a problem is compilable. To this end, we define new models of computation, complexity classes, and reductions. We find complete problems for such classes, "completeness" meaning they are "the least likely to be compilable." We also investigate preprocessing that does not yield polynomial-time on-line algorithms but generically "decreases" complexity. This leads us to define "hierarchies of compilability" that are the analog of the polynomial hierarchy. A detailed comparison of our framework with the idea of "parameterized tractability" shows the differences between the two approaches.

14.
The bisimulation "up-to" technique provides an effective way to reduce the amount of work in proving bisimilarity of two processes. This paper develops a fresh and direct approach that generalizes this set-theoretic "up-to" principle to the setting of coalgebra theory. The notion of a consistent function is introduced as a generalization of Sangiorgi's sound function. Then, to prove that a relation contains only bisimilar pairs, it suffices to find a morphism from it to the "lifting" of its image under some consistent function. An example shows that every self-bisimulation in normed BPA is just such a relation. Moreover, we investigate the connection between span-bisimulation and ref-bisimulation; as a result, λ-bisimulation turns out to be covered by our new principle.

15.
Tolerancing is an important issue in product and manufacturing process design. The allocation of design tolerances between the components of a mechanical assembly, and of manufacturing tolerances in the intermediate machining steps of component fabrication, can significantly affect the quality, robustness, and life-cycle of a product. Stimulated by the growing demand for improving the reliability and performance of manufacturing process designs, tolerance design optimization has been receiving significant attention from researchers in the field, and in recent years a broad class of meta-heuristic algorithms has been developed for tolerance optimization. Recently, a new stochastic optimization algorithm called the self-organizing migrating algorithm (SOMA) was proposed in the literature. SOMA works on a population of potential solutions called specimens and is based on the self-organizing behavior of groups of individuals in a "social environment". This paper introduces a modified SOMA approach based on a Gaussian operator (GSOMA) to solve the machining tolerance allocation of an overrunning clutch assembly. The objective is to obtain optimum tolerances for the individual components at the minimum manufacturing cost. Simulation results obtained with the SOMA and GSOMA approaches are compared with results reported in the recent literature using geometric programming, a genetic algorithm, and particle swarm optimization.
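The basic SOMA migration loop can be sketched as follows: each individual moves toward the current leader in perturbed steps and keeps the best position found along its path. This is a minimal all-to-one variant with a conventional PRT perturbation, shown here on a generic objective; it includes neither the Gaussian operator of GSOMA nor the paper's tolerance-cost model, and the parameter defaults are illustrative.

```python
import random

def soma(f, bounds, pop=10, migrations=30, path_len=3.0, step=0.11, prt=0.3):
    """Minimal SOMA (all-to-one) sketch minimizing f over box bounds."""
    dim = len(bounds)
    xs = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(migrations):
        leader = min(xs, key=f)
        new_xs = []
        for x in xs:
            if x is leader:
                new_xs.append(x)      # the leader stays put this migration
                continue
            best = x
            t = step
            while t <= path_len:      # walk toward (and past) the leader
                # PRT vector: each dimension moves only with probability prt
                prt_vec = [1 if random.random() < prt else 0
                           for _ in range(dim)]
                cand = [xi + (li - xi) * t * p
                        for xi, li, p in zip(x, leader, prt_vec)]
                cand = [min(max(c, lo), hi)
                        for c, (lo, hi) in zip(cand, bounds)]
                if f(cand) < f(best):
                    best = cand       # keep the best point on the path
                t += step
            new_xs.append(best)
        xs = new_xs
    return min(xs, key=f)
```

Because every individual retains the best point along its own path, the population minimum never worsens between migrations.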

16.
On the complemented disk algebra
The importance of relational methods in temporal and spatial reasoning has been widely recognized over the last two decades. A sizable part of contemporary spatial reasoning research concerns the relation algebras generated by the "part of" and "connection" relations in various domains. This paper is devoted to the study of one particular relation algebra that has appeared in the literature, viz. the complemented disk algebra. This algebra was first described by Düntsch [I. Düntsch, A tutorial on relation algebras and their application in spatial reasoning, Given at COSIT, August 1999, Available from: <http://www.cosc.brocku.ca/~duentsch/papers/relspat.html>]; later, Li et al. [Y. Li, S. Li, M. Ying, Relational reasoning in the Region Connection Calculus, Preprint, 2003, Available from: http://arxiv.org/abs/cs/0505041] showed that closed disks and their complements provide a representation. That set of regions is rather restrictive and thus of limited practical value. This paper provides a general method for generating representations of this algebra in the framework of the Region Connection Calculus. In particular, connected regions bounded by Jordan curves, together with their complements, also form such a representation.

17.
Discovery of unapparent association rules based on extracted probability
Association rule mining is an important task in data mining. However, not all generated rules are interesting, and some unapparent rules may be ignored. In this article we introduce an "extracted probability" measure and, using it, present three models that modify the confidence of rules. An efficient method based on the support-confidence framework is then developed to generate rules of interest. The adult dataset from the UCI machine learning repository and a database of occupational accidents are analyzed. The analysis reveals that the proposed methods can effectively generate interesting rules from a variety of association rules.
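The support-confidence quantities the article builds on can be computed directly; the sketch below shows only these baseline measures, not the "extracted probability" adjustment, whose definition is specific to the paper.

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Confidence of the rule antecedent -> consequent:
    support(antecedent ∪ consequent) / support(antecedent)."""
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))
```

A rule is conventionally kept when both its support and its confidence clear user-chosen thresholds; the paper's models then adjust the confidence so that unapparent but interesting rules are not discarded.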

18.
This paper shows how an innovative "communicative" technique for teaching foreign languages, Conversation Rebuilding (CR), readily lends itself to implementation in an Intelligent Tutoring System (ITS). Classroom language teachers using CR get students to formulate acceptable utterances in a foreign idiom by starting from rough approximations (using words the students know) and gradually zeroing in on the utterance a native speaker might produce in a similar setting. The ITS presented here helps students do the "zeroing in" optimally. It lets them express themselves temporarily in an "interlingua" (i.e., in their own kind of French or English or whatever language they are studying), as long as they make something of their communicative intent clear, that is, as long as the system can find a semantic starting point on which to build. The ITS then prods the students to express themselves more intelligibly, starting from the "key" elements (determined by a heuristic based on how expert classroom teachers proceed) and taking into consideration the students' past successful and unsuccessful attempts at communication. To simplify system design and programming, however, conversations are "constrained": students play-act characters in set dialogs and aim at coming up with what the characters actually say (not what they could possibly say). While most Intelligent Computer Assisted Language Learning (ICALL) focuses the attention of students on norms to acquire, the ICALL implementation of CR presented in this paper focuses their attention on saying something, indeed almost anything, to keep the conversation going and get some kind of meaning across to the other party. It sees successful language acquisition primarily as the association of forms with intent, not simply as the conditioning of appropriate reflexes or the elaboration and recall of conceptualized rules (which are by-products of successful communication).
In espousing this hard-line communicative approach, the paper thus makes a first, non-trivial point: ICALL researchers might usefully begin by investigating what the more able teachers are doing in the classroom, rather than by building elaborate computer simulations of outdated practices, as happens all too often. The paper then describes the architecture of a prototype ITS based on CR, one the authors have actually implemented and tested, for the acquisition of English as a foreign language. A sample learning session is transcribed to illustrate the man-machine interaction. Concluding remarks show how the present-day limits of ICALL (and artificial intelligence in general) can be partially circumvented by the strategy implemented in the program, i.e., by making students feel they are creatively piloting an interaction rather than being tested by an unimaginative machine.

19.
This paper addresses a novel actuator for manufacturing applications: the "electrostatic artificial muscle." Artificial muscle is composed of a dense array of small linear actuators. Its promise lies in the prospect of high performance (e.g., a higher force-to-weight ratio and peak acceleration than a comparable magnetic motor), clean, quiet operation, and design versatility (especially the elimination of transmissions in many applications). These characteristics are particularly appealing for applications in robotics and high-speed automation.

A model of a linear electrostatic induction motor is presented to illustrate both the potential for high performance and the difficulty of "gap maintenance": the demanding task of preserving a uniform, narrow gap between "slider" and stator in the presence of destabilizing electrostatic forces. A novel approach to gap maintenance, the use of dielectric fluid bearings, is presented. Analysis of a simple 2-D motor model shows that gap maintenance and motor efficiency can be characterized by two nondimensional parameters: a levitation number and a gap aspect ratio. It is shown that achieving both low-speed levitation and high efficiency requires long, narrow gaps (high aspect ratio). The results of this analysis are extended to a more complex model featuring an unconstrained rigid slider. An experimental study of fluid bearings is also presented.

20.
Digitization for sharing knowledge on the shop floor in the machinery industry has received much attention recently. To help engineers use digitization practically and efficiently, this paper proposes a method based on manufacturing case data that relates directly to manufacturing operations. The data are represented in an XML schema, as this can be applied easily to Web-based systems on the shop floor. Definitions were made for eight manufacturing methods, including machining and welding. The derived definitions consist of four divisions: metadata, work-piece, process, and evaluation. The three divisions other than "process" are common to all the manufacturing methods, and the average number of elements per manufacturing method is about 200. The resulting schema is also used to convey knowledge such as operation standards and manufacturing troubleshooting on the shop floor. Using the definitions, a data management system was developed: a Web-based Q&A system in which engineers specify the manufacturing case data mainly by selecting from candidates, after which the system fills in the blank portions and/or shows messages to help complete the case data. The proposed method is evaluated through practical scenarios of arc welding and machining.
