Similar Documents
20 similar documents found (search time: 46 ms)
1.
The purpose of my research was to develop a novel voice control system for use in robotized manufacturing cells, together with tools that allow its simple integration into manufacturing. A comprehensive study of existing problems and their possible solutions has been performed. Unlike some other works, it focused on the specific requirements that industrially oriented voice control systems must fulfil. Existing solutions in natural language processing and in various voice control applications were analyzed in order to establish the optimal method of voice command analysis for industrially oriented systems. Finally, a voice control system for manufacturing cells was developed, implemented and practically verified in the laboratory. Unlike many other solutions, it takes into consideration almost all aspects of voice command processing (speech recognition, syntactic and semantic analysis, and spontaneous speech effects) and, most importantly, their mutual influence. To allow simple system customization (integration into any particular manufacturing cell), a special format for defining the syntax of a quasi-natural sublanguage has been developed. A novel algorithm for semantic analysis, exploiting specific features of the voice commands used to control industrial devices and machines, has been incorporated into the system. Successful implementation in an educational robotized machining cell shows that industrial applications should be possible in the near future.

2.
Gian Piero Zarri 《Knowledge》2011,24(7):989-1003
This paper describes experimental work carried out in the framework of an important European project to create and make use of a wide-ranging knowledge base in the gas/oil domain. In the context of this work, “knowledge base” means a collection of formal statements rendering, with a negligible loss of information, the inner content (the ‘meaning’) of the “complex events” included in two different “storyboards”. These events, originally presented in the form of unstructured natural language information, concern general activities in the management of gas/oil facilities, such as recognizing and monitoring gas leakage alarms in a gas processing plant or triggering the steps needed to activate a gas turbine. To express this sort of information and to set up the knowledge base, the NKRL (Narrative Knowledge Representation Language) formalism has been used. NKRL is a conceptual meta-model and computer science environment expressly created to deal, in an ‘intelligent’ and complete way, with complex and content-rich ‘narrative’ data sources. The final knowledge base was first tested in depth using the standard NKRL querying and information retrieval tools. High-level inference procedures were then applied, both “transformation rules” (unsuccessful queries are ‘transformed’ to produce results that are ‘semantically similar’ to those searched for initially) and “hypothesis rules” (information in the knowledge base is automatically aggregated to supply a sort of ‘causal’ explanation of some retrieved events).
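The “transformation rule” idea above can be illustrated with a toy sketch: a failed query is rewritten into a semantically related one and retried against the knowledge base. The predicates, facts and rewrite table below are invented for illustration and are not NKRL syntax.

```python
# Toy fact base of (predicate, subject, object) triples.
FACTS = {("receive", "operator", "leak_alarm")}

# Transformation table: a failed (predicate, subject) pair is mapped onto a
# semantically similar one, e.g. "X was warned about Y" can be answered by
# "X received an alarm about Y".
TRANSFORMATIONS = {
    ("warn", "operator"): ("receive", "operator"),
}

def query(pred, subj, obj):
    """Answer a query directly, or via one transformation step."""
    if (pred, subj, obj) in FACTS:
        return (pred, subj, obj)
    alt = TRANSFORMATIONS.get((pred, subj))
    if alt and (alt[0], alt[1], obj) in FACTS:
        return (alt[0], alt[1], obj)   # semantically similar answer
    return None
```

The transformed answer is not the one literally asked for, which is exactly the intended behaviour: the user receives a semantically close event instead of an empty result.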

3.
Human–computer dialogue systems interact with human users using natural language. We used the ALICE/AIML chatbot architecture as a platform to develop a range of chatbots covering different languages, genres, text-types, and user-groups, to illustrate qualitative aspects of natural language dialogue system evaluation. We present some of the different evaluation techniques used for natural language dialogue systems, including black-box and glass-box, comparative, quantitative, and qualitative evaluation. Four aspects of NLP dialogue system evaluation are often overlooked: “usefulness” in terms of a user’s qualitative needs, “localizability” to new genres and languages, “humanness” or “naturalness” compared to human–human dialogues, and “language benefit” compared to alternative interfaces. We illustrate these aspects with respect to our work on machine-learnt chatbot dialogue systems; we believe they are important for winning over potential new users and customers.

4.
Universal Access in the Information Society - The natural language processing (NLP) of sign language aims to make human sign language “understandable” to computers. In achieving this...

5.
We present Stratosphere, an open-source software stack for parallel data analysis. Stratosphere brings together a unique set of features that allow the expressive, easy, and efficient programming of analytical applications at very large scale. Stratosphere’s features include “in situ” data processing, a declarative query language, treatment of user-defined functions as first-class citizens, automatic program parallelization and optimization, support for iterative programs, and a scalable and efficient execution engine. Stratosphere covers a variety of “Big Data” use cases, such as data warehousing, information extraction and integration, data cleansing, graph analysis, and statistical analysis applications. In this paper, we present the overall system architecture and design decisions, introduce Stratosphere through example queries, and then dive into the internal workings of the system’s components that relate to extensibility, programming model, optimization, and query execution. We experimentally compare Stratosphere against popular open-source alternatives, and we conclude with a research outlook for the next years.

6.
Chlor-Alkali production is one of the largest industrial-scale electro-syntheses in the world. Plants with more than 1000 individual reactors are common, in which chlorine and hydrogen are separated only by 0.2 mm thin membranes. Wrong operating conditions can cause explosions and highly toxic gas releases, as well as irreversible damage to very expensive cell components, with dramatic maintenance costs and production loss. In this paper, a multi-expert system based on first-order logic rules and decision forests is proposed to detect abnormal operating conditions of membrane cell electrolyzers and to advise the operator accordingly. Robustness to missing data, an important issue in industrial applications in general, is achieved by means of a dynamic selection strategy. Experiments performed with real-world electrolyzer data indicate that the proposed system can reliably detect the different operating modes, even in the presence of high levels of missing data, or “wrong” data resulting from maloperation, which is essential for precise fault detection and advice generation.
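The dynamic selection strategy mentioned above can be sketched as follows: each expert declares which inputs it relies on, and at prediction time only the experts whose inputs are actually present cast a vote. The rules, feature names and thresholds below are invented for illustration and bear no relation to real electrolyzer limits.

```python
# Each hypothetical "expert" lists the features it needs and a rule over them.
EXPERTS = [
    {"features": ("voltage", "current"),
     "rule": lambda s: s["voltage"] > 3.2 or s["current"] > 15.0},
    {"features": ("temperature",),
     "rule": lambda s: s["temperature"] > 90.0},
    {"features": ("voltage",),
     "rule": lambda s: s["voltage"] > 3.5},
]

def detect(sample):
    """Dynamic selection: only experts whose features are present vote.

    Returns True (abnormal), False (normal), or None if no expert applies.
    """
    votes = [e["rule"](sample) for e in EXPERTS
             if all(f in sample and sample[f] is not None
                    for f in e["features"])]
    if not votes:
        return None
    return sum(votes) > len(votes) / 2   # majority among applicable experts
```

The point of the design is that a missing sensor silently shrinks the committee instead of crashing a monolithic classifier or forcing value imputation.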

7.
《Artificial Intelligence》1987,32(2):259-267
Predicate calculus has long served as the basis of mathematical logic and has more recently achieved widespread use in artificial intelligence. This system of logic expresses propositions in terms of quantifications, restricting itself to the universal and existential quantifiers “all” and “some,” which appear to be adequate for formalizing mathematics. Systems that aspire to deal with natural language or everyday reasoning, however, must attempt to deal with the full range of quantifiers that occur in such language and reasoning, including, in particular, plurality quantifiers, such as “most,” “many,” and “few.” The logic of such quantifiers forces an extension of the predicate-calculus framework to a system of representation that involves more than one predicate in each quantification. In this paper, we prove this result for the specific case of “most.” Unlike some other arguments that attempt to establish the inadequacy of standard predicate calculus on the basis of intuitive plausibility judgements as to the likely character of human reasoning [11, 19], our result is a theorem of logic itself.
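The key point, that “most” is inherently a two-predicate (binary) quantifier, can be illustrated with the usual set-theoretic semantics: most(A, B) holds iff |A ∩ B| > |A − B|. A minimal executable sketch over a finite domain:

```python
def most(domain, a, b):
    """Generalized quantifier 'most': "most a's are b's" iff
    |A ∩ B| > |A - B|, where A and B are predicates over the domain.
    Note that BOTH predicates appear inside the quantifier -- unlike
    "all" and "some", it cannot be reduced to a single unary predicate."""
    A = {x for x in domain if a(x)}
    B = {x for x in domain if b(x)}
    return len(A & B) > len(A - B)
```

For example, over the domain 0..9, “most even numbers are less than 6” is true (three of the five evens qualify), while “most even numbers are greater than 8” is false.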

8.
9.
Word sense disambiguation is an important research topic in natural language processing. The consistency of sense annotation directly affects the quality of corpus construction, and thus directly or indirectly affects the related application areas. Owing to the complexity and continual evolution of language itself, as well as difficulties and defects in algorithm design, current sense-annotation algorithms and models cannot yet tag word senses with complete accuracy; that is, they cannot guarantee the correctness and consistency of word sense disambiguation. Manual verification, meanwhile, is prohibitively expensive in both time and labor. Based on a study of the People's Daily corpus, sentence-similarity algorithms, and the semantic resource HowNet, this paper proposes a method for checking the consistency of sense annotations in the People's Daily corpus. Experimental results show that the method is effective.
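A consistency check of this kind can be sketched as a bag-of-words comparison: pool the contexts observed for each sense tag, then flag any annotated occurrence whose context is more similar to the pool of a different sense. This is only an illustration of the idea; the paper's actual method relies on sentence-similarity algorithms and HowNet, and the data below are invented.

```python
from collections import Counter
import math

def cosine(c1, c2):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    den = (math.sqrt(sum(v * v for v in c1.values()))
           * math.sqrt(sum(v * v for v in c2.values())))
    return num / den if den else 0.0

def flag_inconsistent(tagged):
    """tagged: list of (context_words, sense_tag) pairs.
    Returns indices whose context best matches a *different* sense's pool."""
    pools = {}
    for words, sense in tagged:
        pools.setdefault(sense, Counter()).update(words)
    flagged = []
    for i, (words, sense) in enumerate(tagged):
        vec = Counter(words)
        best = max(pools, key=lambda s: cosine(vec, pools[s]))
        if best != sense:
            flagged.append(i)
    return flagged
```

Flagged occurrences would then be handed to a human annotator, so the expensive manual pass is focused on the suspicious cases only.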

10.
The “tennis problem” asks how to connect lexical concepts linked by situational association, such as racquet, ball and net, and how to discover the semantic and inferential relations among them. This is a worldwide challenge for natural language processing and for the construction of the related language knowledge resources. Taking the solution of the tennis problem as its goal, this paper reviews the currently mainstream lexical and conceptual knowledge-base systems (including WordNet, VerbNet, FrameNet and ConceptNet), shows that each of them is limited in its ability to solve the tennis problem, and analyzes in detail why they fail. It then argues that a knowledge description system for the qualia structure of nouns, based on Generative Lexicon theory, can solve the tennis problem, and proposes using nouns' qualia-structure knowledge together with the related syntactic combination knowledge to build a lexical-conceptual network centered on nouns (entities), remedying the shortcomings of the knowledge-base systems above and providing a reference architecture of a lexical-conceptual knowledge base for natural language processing.
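The qualia-structure idea can be sketched in miniature: if nouns are annotated with the events they typically participate in (e.g. their telic role), then racquet, ball and net become connected through shared event predicates, directly or via an intermediate noun. The entries, roles and predicates below are invented toy data, not an actual Generative Lexicon resource.

```python
# Hypothetical event frames: which nouns typically fill roles of each
# (telic) event predicate.
EVENT_ARGS = {
    "hit":   {"instrument": "racquet", "patient": "ball"},
    "block": {"instrument": "net", "patient": "ball"},
}

def related(a, b):
    """Directly related: some event takes both nouns among its arguments."""
    for args in EVENT_ARGS.values():
        vals = set(args.values())
        if a in vals and b in vals:
            return True
    return False

def related2(a, b):
    """Related directly or through one shared intermediate noun."""
    if related(a, b):
        return True
    nouns = {n for args in EVENT_ARGS.values() for n in args.values()}
    return any(related(a, m) and related(m, b)
               for m in nouns if m not in (a, b))
```

Here racquet and ball are directly related through "hit", while racquet and net, which share no event, are still recovered via the intermediate noun ball, which is exactly the inference WordNet-style taxonomies fail to provide.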

11.
Recent advances in computing devices push researchers to envision new interaction modalities that go beyond traditional mouse and keyboard input. Typical examples are large displays for which researchers hope to create more “natural” means of interaction by using human gestures and body movements as input. In this article, we reflect about this goal of designing gestures that people can easily understand and use and how designers of gestural interaction can capitalize on the experience of 30 years of research on visual languages to achieve it. Concretely, we argue that gestures can be regarded as “visual expressions to convey meaning” and thus are a visual language. Based on what we have learned from visual language research in the past, we then explain why the design of a generic gesture set or language that spans many applications and devices is likely to fail. We also discuss why we recommend using gestural manipulations that enable users to directly manipulate on-screen objects instead of issuing commands with symbolic gestures whose meaning varies among different users, contexts, and cultures.

12.
《Computers in Industry》1986,7(3):257-262
The aim of this paper is to present a first-order formalization of the cognitive mechanisms that allow a listener to form a “mental image” of a scene described incompletely and ambiguously (for instance, by means of natural language sentences). In particular, we face the problem of imagining all the objects (probably) present in an inferred environment, up to a three-dimensional representation of the space enveloped by the current (inferred) environment. For lack of space, the formal system is not entirely described in this paper (a technical report with the complete formalization is forthcoming); instead, we focus on those mechanisms of reasoning “by default” which allow humans to reach conclusions even when their knowledge is largely incomplete.

13.
A common claim in the literature on Information Systems implementation in less developed economies, or so-called “developing countries”, is that “Western” technology is at odds with the local cultural context; in particular, it is believed to mismatch local rationality, in the sense of the accepted ways of doing things. In this paper we investigate IS implementation in a company based in a “non-Western” context, compared with IS adoption in another company in a “Western” country context. Viewing adoption and implementation as a particular form of decision-making, we analyse these processes by drawing on the literature on decision-making and on rationality in “Western” and “non-Western” contexts. Presenting evidence from the two contexts, we argue that multiple forms of rationality exist in any context and that national culture is only one aspect of actors', as well as researchers', sense-making of activities in any given context. Linking the cases back to the literature, we reflect on the implications of our findings for cross-cultural research on IT implementation.

14.
With the recent developments in robotic process automation (RPA) and artificial intelligence (AI), academics and industrial practitioners are pursuing robust and adaptive decision making (DM) in real-life engineering applications and in automated business workflows and processes, to accommodate context awareness, adaptation to the environment and customisation. The emerging research in RPA, AI and soft computing offers sophisticated decision analysis methods, data-driven DM and scenario analysis over the available decision choices, and provides benefits in numerous engineering applications. Emerging intelligent automation (IA), the combination of RPA, AI and soft computing, can further transcend traditional DM to achieve unprecedented levels of operational efficiency, decision quality and system reliability. RPA allows an intelligent agent to eliminate operational errors and mimic manual routine decisions, including rule-based, well-structured and repetitive decisions involving enormous amounts of data, in a digital system, while AI has the cognitive capabilities to emulate human behaviour and to process unstructured data via machine learning, natural language processing and image processing. Insights from IA drive new opportunities in automated DM processes, fault diagnosis, knowledge elicitation and solutions for complex decision environments in the presence of context-aware data, uncertainty and customer preferences. This review presents relevant research directions and applications from the selected literature and addresses its key contributions, IA's benefits, implementation considerations, challenges and potential IA applications, so as to foster research development in the domain.

15.
Anomaly detection is a crucial aspect of both safety and efficiency in modern process industries. This paper proposes a two-step methodology for anomaly detection in industrial processes, adopting machine learning classification algorithms. Starting from a real-time collection of process data, the first step identifies the ongoing process phase, and the second step classifies the input data as “Expected”, “Warning”, or “Critical”. The proposed methodology is especially relevant where machines carry out several operations without explicit evidence of the production phases. In this context, the difficulty of attributing real-time measurements to a specific production phase affects the success of condition monitoring. The paper compares the anomaly detection step with and without the process phase identification step, demonstrating that the latter is strictly necessary. The methodology applies the decision forests algorithm, a well-known anomaly detector for industrial data, and the decision jungle algorithm, never before tested in industrial applications. A real case study in the pharmaceutical industry validates the proposed anomaly detection methodology, using a 10-month database of 16 process parameters from a granulation process.
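The two-step structure (phase identification, then per-phase grading) can be sketched with a nearest-centroid phase identifier followed by distance-based grading. The phase names, centroids and limits below are invented placeholders, not the paper's trained decision forest/jungle models.

```python
import numpy as np

# Hypothetical per-phase statistics that would be learned from history.
PHASE_CENTROIDS = {"mixing": np.array([50.0, 1.2]),
                   "drying": np.array([80.0, 0.3])}
PHASE_LIMITS = {"mixing": (2.0, 4.0),   # (warning, critical) distance limits
                "drying": (2.0, 4.0)}

def classify(sample):
    """Step 1: identify the ongoing phase; Step 2: grade the sample."""
    sample = np.asarray(sample, dtype=float)
    # Step 1: nearest-centroid phase identification
    phase = min(PHASE_CENTROIDS,
                key=lambda p: np.linalg.norm(sample - PHASE_CENTROIDS[p]))
    # Step 2: grade the sample against the limits of the identified phase
    d = np.linalg.norm(sample - PHASE_CENTROIDS[phase])
    warn, crit = PHASE_LIMITS[phase]
    label = "Expected" if d < warn else ("Warning" if d < crit else "Critical")
    return phase, label
```

The sketch makes the paper's point concrete: the same measurement can be “Expected” in one phase and “Critical” in another, so grading without step 1 is meaningless.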

16.
The Present Situation and Prospects: On Chinese Information Processing and Research on Modern Chinese   (Cited by: 14; self-citations: 0; citations by others: 14)
This paper reviews the state of the art in Chinese information processing and the main difficulties it faces, arguing that the key obstacle is the lag in research on modern Chinese. To date, Chinese information processing has relied mainly on statistics over large-scale corpora, defining the relations between words in terms of probability. Years of slow progress in Chinese information processing show that this approach can no longer break through the “bottleneck”: for computers to process modern Chinese automatically, that is, to become truly “intelligent”, human linguistic knowledge must be “taught” to the computer. This requires strengthening research on modern Chinese, especially on semantics, in line with computational requirements. The paper introduces three schools currently working in this direction that have made considerable progress, and points out the shortcomings of each; drawing on the author's experience directing the national “Ninth Five-Year Plan” key project “A Study of the Modern Chinese Lexicon for Information Processing”, it proposes pooling resources and tackling the problem jointly.

17.
18.
19.
Transportability has perpetually been the nemesis of natural language processing systems, in both the research and commercial sectors. During the last 20 years, the technology has not moved much closer to providing robust coverage of everyday language, and has failed to produce commercial successes beyond a few specialized interfaces and application programs. The redesign required for each application has limited the impact of natural language systems. Trump (TRansportable Understanding Mechanism Package) is a natural language analyzer that functions in a variety of domains, in both interfaces and text processing. While other similar efforts have treated transportability as a problem in knowledge engineering, Trump instead relies mainly on a “core” of knowledge about language and a set of techniques for applying that knowledge within a domain. The information about words, word meanings, and linguistic relations in this generic knowledge base guides the conceptual framework of language interpretation in each domain. Trump uses this core knowledge to piece together a conceptual representation of a natural language input by combining generic and specialized information. The result has been a language processing system capable of performing fairly extensive analysis with a minimum of customization for each application.

20.
Fault detection in industrial processes is a field of application that has been gaining considerable attention in the past few years, resulting in a large variety of techniques and methodologies designed to solve the problem. However, many of the approaches presented in the literature require substantial prior knowledge about the process, such as mathematical models, data distributions and pre-defined parameters. In this paper, we propose the application of TEDA (Typicality and Eccentricity Data Analytics), a fully autonomous algorithm, to the problem of fault detection in industrial processes. To perform fault detection, TEDA analyzes the density of each incoming data sample, calculated from the distance between that sample and all the samples read so far. TEDA is an online algorithm that learns autonomously and requires neither previous knowledge about the process nor any user-defined parameters. Moreover, it requires minimal computational effort, enabling its use in real-time applications. The efficiency of the proposed approach is demonstrated on two different real-world industrial plant data streams that provide “normal” and “faulty” data. The results shown in this paper are very encouraging when compared with traditional fault detection approaches.
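A minimal sketch of an eccentricity-based online detector in the spirit of TEDA, assuming the standard recursive mean/variance formulation and a Chebyshev-style threshold; the class name, parameter `m`, and threshold choice are our assumptions, not the paper's exact formulation.

```python
import numpy as np

class TEDA:
    """Sketch of a recursive typicality/eccentricity anomaly detector."""

    def __init__(self, m=3.0):
        self.k = 0          # number of samples seen so far
        self.mu = None      # recursively updated mean
        self.var = 0.0      # recursively updated scalar variance
        self.m = m          # m-sigma-like sensitivity parameter

    def update(self, x):
        """Feed one sample; return True if it is flagged as anomalous."""
        x = np.asarray(x, dtype=float)
        self.k += 1
        if self.k == 1:
            self.mu = x.copy()
            return False                      # one sample cannot be judged
        # recursive mean and variance updates
        self.mu = (self.k - 1) / self.k * self.mu + x / self.k
        d2 = float(np.dot(x - self.mu, x - self.mu))
        self.var = (self.k - 1) / self.k * self.var + d2 / (self.k - 1)
        if self.var == 0.0:
            return False
        # eccentricity of the current sample; flag if its normalized form
        # exceeds the Chebyshev-style bound (m^2 + 1) / (2k)
        ecc = 1.0 / self.k + d2 / (self.k * self.var)
        return ecc / 2.0 > (self.m ** 2 + 1) / (2.0 * self.k)
```

Note the properties the abstract emphasizes: the update is O(1) per sample regardless of stream length, and nothing about the process is assumed up front.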
