Similar Literature
20 similar records found (search time: 15 ms)
1.
Recent developments in wire arc additive manufacturing (WAAM) provide a promising alternative for fabricating high value-added, medium-to-large metal components for industries such as aerospace and maritime. However, challenges stemming from the demand for increasingly complex, high-quality products hinder the widespread adoption of the conventional WAAM method in manufacturing. The development of artificial intelligence (AI) techniques may provide new opportunities to upgrade WAAM to the next generation. Hence, this paper provides a comprehensive review of state-of-the-art research on AI techniques in WAAM. Firstly, we propose a novel concept of intelligent wire arc additive manufacturing (IWAAM) and identify the challenges of developing it. Secondly, we give an overview of research progress in applying AI techniques to several aspects of the WAAM process chain, including fabrication process pre-design, online deposition control and offline parameter optimization. Thirdly, the relevant machine learning algorithms and corresponding AI techniques are reviewed in detail. Through reviewing the current research articles, issues in applying AI techniques to the WAAM process are presented and analysed. Finally, future research perspectives in terms of novel AI applications and AI technique enhancement are discussed. Through this systematic review, it is expected that WAAM may gradually develop into a smart/intelligent manufacturing technology in the context of Industry 4.0 through the adoption of AI techniques.

2.
The application of pattern recognition techniques, expert systems, artificial neural networks, fuzzy systems and, more recently, hybrid artificial intelligence (AI) techniques in manufacturing can be regarded as consecutive stages of a process that started two decades ago. The paper outlines the most important steps of this process and introduces some new results, with special emphasis on hybrid AI and multistrategy machine learning approaches. Agent-based (holonic) systems are highlighted as promising tools for managing complexity, changes and disturbances in production systems. Further integration of these approaches is predicted.

3.
Artificial Intelligence in the Network Era
Over the past fifty-plus years, artificial intelligence has made major achievements in pattern recognition, knowledge engineering, robotics and other fields, but it still falls far short of genuine human intelligence. This paper argues that in today's network era, AI, as the vanguard of information technology, has research directions that deserve close attention, and that its development and innovation must be realized through interdisciplinary research. Attention should be paid to the cross-fertilization among cognitive science, brain science, biological intelligence, physics, complex networks, computer science and artificial intelligence, with particular emphasis on cognitive physics. Natural language is the carrier of human thought and an unavoidable, direct object of knowledge-representation research in AI; uncertainty conversion models that can quantitatively represent the concepts in language should be established to develop uncertainty AI. The small-world model and scale-free property of real-world complex networks should be exploited, taking network topology as a new method of knowledge representation; the evolution of network topology, network dynamics, and networked intelligence should be studied, so as to meet the general demand for data mining in the information age and usher in a new era of AI science and applications.
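The quantitative uncertainty-conversion model for linguistic concepts alluded to above is commonly realized as the cloud model; below is a minimal, illustrative sketch of a forward normal cloud generator, with all parameter values invented for the example:

```python
import math
import random

def normal_cloud_drops(ex, en, he, n, seed=0):
    """Forward normal cloud generator (sketch): Ex is the concept's
    expectation, En its entropy (fuzziness), He its hyper-entropy
    (uncertainty of the entropy itself). Returns n (x, membership)
    'cloud drops'."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_i = rng.gauss(en, he) or 1e-12      # per-drop entropy sample
        x = rng.gauss(ex, abs(en_i))           # drop position around Ex
        mu = math.exp(-((x - ex) ** 2) / (2 * en_i ** 2))  # certainty degree
        drops.append((x, mu))
    return drops

# e.g. the linguistic concept "around 25 degrees"
drops = normal_cloud_drops(ex=25.0, en=3.0, he=0.3, n=1000)
```

The drops scatter around Ex with a spread governed by En, while He controls how "thick" the cloud is, giving a quantitative handle on an inherently fuzzy concept.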

4.
AI has long been an exporter of ideas to computing in general (neural networks and agents, for example; robotics is more complex). But AI is now embracing ideas from elsewhere that were initially scorned because they were thought to have nothing to do with modeling intelligence, especially human intelligence. These are the statistical and probabilistic approaches to information capture and use that have become particularly prominent in machine learning and have spread all over AI in the last two decades. Pattern recognition was accepted in particular areas, such as machine vision, as a kind of technological fix. But statistical and probabilistic approaches are now mainstream.

5.
A survey of modern knowledge modeling techniques
A major characteristic of developments in the broad field of artificial intelligence (AI) during the 1990s has been its increasing integration with other disciplines. A number of other computer science fields and technologies have been used in developing intelligent systems, from traditional information systems and databases to modern distributed systems and the Internet. This paper surveys the knowledge modeling techniques that have received the most attention in recent years among developers of intelligent systems, AI practitioners and researchers. The techniques are described from two perspectives, theoretical and practical. Hence the first part of the paper presents major theoretical and architectural concepts, design approaches, and research issues. The second part deals with several practical systems, applications, and ongoing projects that use and implement the techniques described in the first part.

6.
The availability of huge structured and unstructured datasets, advanced high-density memory and high-performance computing machines has provided a strong push for development in the artificial intelligence (AI) and machine learning (ML) domains. AI and machine learning have rekindled the hope of efficiently solving complex problems that were not tractable in the recent past. The generation and availability of big data is a strong driving force for the development of AI/ML applications; however, several challenges need to be addressed, such as processing speed, memory requirements, high bandwidth, low-latency memory access, and highly conductive and flexible connections between processing units and memory blocks. Conventional computing platforms are unable to address these issues for machine learning and AI. Deep neural networks (DNNs) are widely employed for machine learning and AI applications such as speech recognition, computer vision and robotics, efficiently and accurately. However, this accuracy comes at the cost of high computational complexity, sacrificing performance measures such as energy efficiency and throughput and incurring high latency. To address the problems of latency, energy efficiency, complexity and power consumption, many state-of-the-art DNN accelerators have been designed and implemented as application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs). This work surveys these recently developed DNN accelerators. Various DNN architectures, their computing units, and the emerging technologies used to improve accelerator performance are discussed. Finally, we explore the scope for further improvement in these accelerator designs, and the opportunities and challenges for future research.

7.
The Internet-based cyber-physical world has profoundly changed the information environment for the development of artificial intelligence (AI), bringing a new wave of AI research and promoting it into the new era of AI 2.0. As one of the most prominent characteristics of research in the AI 2.0 era, crowd intelligence has attracted much attention from both industry and the research community. Specifically, crowd intelligence provides a novel problem-solving paradigm that gathers the intelligence of crowds to address challenges. In particular, owing to the rapid development of the sharing economy, crowd intelligence has not only become a new approach to solving scientific challenges but has also been integrated into all kinds of everyday application scenarios, e.g., online-to-offline (O2O) applications, real-time traffic monitoring, and logistics management. In this paper, we survey existing studies of crowd intelligence. First, we describe the concept of crowd intelligence and explain its relationship to existing related concepts such as crowdsourcing and human computation. Then, we introduce four categories of representative crowd intelligence platforms. We summarize three core research problems and the state-of-the-art techniques of crowd intelligence. Finally, we discuss promising future research directions.

8.
The long-term goal of artificial intelligence (AI) is to make machines learn and think like human beings. Because of the high levels of uncertainty and vulnerability in human life and the open-ended nature of the problems humans face, no matter how intelligent machines become, they cannot completely replace humans. Therefore, it is necessary to introduce human cognitive capabilities or human-like cognitive models into AI systems to develop a new form of AI: hybrid-augmented intelligence. This form of AI or machine intelligence is a feasible and important development model. Hybrid-augmented intelligence can be divided into two basic models: one is human-in-the-loop augmented intelligence with human-computer collaboration, and the other is cognitive-computing-based augmented intelligence, in which a cognitive model is embedded in the machine learning system. This survey describes a basic framework for human-computer collaborative hybrid-augmented intelligence and the basic elements of hybrid-augmented intelligence based on cognitive computing. These elements include intuitive reasoning, causal models, and the evolution of memory and knowledge, especially the role and basic principles of intuitive reasoning in complex problem solving, and a cognitive learning framework for visual scene understanding based on memory and reasoning. Several typical applications of hybrid-augmented intelligence in related fields are given.

9.
Content-Based Image Retrieval (CBIR) systems are powerful search tools for image databases that have so far been little applied to hyperspectral images. Relevance feedback (RF) is an iterative process that uses machine learning techniques and user feedback to improve CBIR system performance. We sought to expand previous research on hyperspectral CBIR systems built on dissimilarity functions defined either on spectral and spatial features extracted by spectral unmixing techniques, or on dictionaries extracted by dictionary-based compressors. These dissimilarity functions are not suitable for direct use with common machine learning techniques. We propose a general RF approach based on dissimilarity spaces, which is more appropriate for applying machine learning algorithms to hyperspectral RF-CBIR. We validate the proposed RF method for hyperspectral CBIR systems on a real hyperspectral dataset.
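The dissimilarity-space idea can be sketched in a few lines: each image is re-represented by its dissimilarities to a fixed set of prototypes, after which any vector-based learner applies. The Euclidean distance and toy two-band "spectra" below are illustrative stand-ins for the paper's unmixing- and dictionary-based dissimilarity functions:

```python
def to_dissimilarity_space(x, prototypes, d):
    """Embed object x as the vector of its dissimilarities to a fixed
    prototype set, so ordinary vector-based learners can be applied."""
    return [d(x, r) for r in prototypes]

def d(a, b):
    """Toy dissimilarity: Euclidean distance between two 'spectra'."""
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

prototypes = [[0.0, 0.0], [1.0, 1.0]]   # two illustrative prototype spectra
vec = to_dissimilarity_space([1.0, 0.0], prototypes, d)
```

Once every image is a plain vector like `vec`, standard RF machinery (e.g. an SVM trained on user-labeled relevant/irrelevant examples) operates on the dissimilarity space directly.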

10.
In the big-data era, artificial intelligence has flourished; technologies represented by machine learning and deep learning in particular have made breakthrough progress. As AI is widely applied in real-world scenarios, its security and privacy problems have gradually been exposed, attracting broad attention from academia and industry. Taking machine learning as a representative, many researchers have studied model security in depth from the perspectives of attack and defense and have proposed a series of methods. However, current research on machine learning security lacks a complete theoretical and systematic framework. This paper summarizes and analyzes the field from the angles of training-data reconstruction, model-structure inversion and model-defect analysis, and establishes an abstract definition of reverse intelligence together with its taxonomy. On this basis, machine learning security is briefly summarized as an application of reverse intelligence. Finally, the current challenges and future research directions of reverse intelligence are discussed. Establishing a theoretical system of reverse intelligence is of great theoretical significance for promoting the healthy development of AI.

11.
Much of the research into artificial intelligence (AI) has focused on exploring potential applications of intelligent systems, with successful results in most cases. In our attempts to model human intelligence by mimicking the brain's structure and function, we overlook an important aspect of human learning and decision making: the emotional factor. While "machines with emotions" currently sound impossible, it is quite conceivable to artificially simulate some emotions in machine learning. This paper presents a modified backpropagation (BP) learning algorithm, the emotional backpropagation (EmBP) learning algorithm. The new algorithm has additional emotional weights that are updated using two additional emotional parameters: anxiety and confidence. The proposed "emotional" neural network is applied to a facial recognition problem, and the results are compared to a similar application using a conventional neural network. Experimental results show that the addition of the two novel emotional parameters improves the performance of the neural network, yielding higher recognition rates and shorter recognition times.
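The paper's exact update equations are not reproduced in this abstract, but the flavor of an EmBP-style step can be sketched: anxiety tracks the current training error, confidence accumulates as the error falls, and their combination adds an emotional term to the conventional gradient update. All rules and constants below are illustrative assumptions, not the paper's equations:

```python
def emotional_params(err, prev_err, confidence):
    """Update the two emotional parameters from the training error:
    anxiety tracks the current error level, and confidence grows
    whenever the error is falling (illustrative rules)."""
    anxiety = err
    confidence = confidence + max(0.0, prev_err - err)
    return anxiety, confidence

def embp_update(w, grad, anxiety, confidence, y_avg, mu=0.1):
    """One EmBP-style weight update: the usual gradient step plus an
    'emotional' contribution scaled by anxiety + confidence and the
    average input activation y_avg (all assumed forms)."""
    return w - mu * grad + mu * (anxiety + confidence) * y_avg

anxiety, confidence = emotional_params(err=0.2, prev_err=0.5, confidence=0.0)
w_new = embp_update(w=1.0, grad=0.5, anxiety=anxiety,
                    confidence=confidence, y_avg=0.4)
```

The intent mirrored here is that early in training (high anxiety) the emotional term perturbs weights more, while growing confidence stabilizes later updates.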

12.
Operational research (OR) and artificial intelligence (AI) models are primary contributors to the area of intelligent decision support systems (IDSS). Constraint logic programming (CLP) has been used successfully to substantiate the integration of OR and AI. We present a meta-level modular representation for integrating OR and AI models using CLP in an IDSS framework. The use of this representation is illustrated with a CLP-like meta-language, and the potential usefulness of this language is demonstrated with an example from the dairy industry.

13.
《Knowledge》2000,13(2-3):81-92
In recent years, the application of artificial intelligence (AI) based techniques to a wide range of industrial processes has become increasingly common. One reason for this development is the maturity of both AI theory and its implementation in application tools for commercial use. Another very important reason is the persistent drive of many industries to increase efficiency and the realisation that this requires more effective processing of acquired knowledge and information. In the oil and gas industry, owing to the high saturation levels of many production fields and the complex nature of the processes, the need for increased efficiency and highly effective processing of large amounts of information is particularly evident. Some organisations have recognised the opportunities offered by AI-based techniques and started exploiting them to improve knowledge and information handling and process efficiency. This paper discusses the application of two AI-based techniques, fuzzy logic and artificial neural networks (ANNs), to specific problems in the operation of oil and gas transport facilities. The background to the work, carried out in co-operation between a university and a leading engineering service provider, is described first. This is followed by a brief summary of the fundamentals of the AI techniques considered, with respect to their use for industrial purposes. Then, two case studies are presented. The first demonstrates the application of fuzzy logic to the control of a pump station in a pipeline system, whilst the second shows the use of an ANN for the determination of important pipeline characteristics. Problem backgrounds, design procedures and outlines for the implementation of the AI techniques used are given. Finally, the benefits of the adopted approaches are highlighted and the wider impact on both industry and the research community is discussed.
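The first case study's idea, fuzzy control of a pump station, can be illustrated with a toy controller. The membership ranges, rule base and output singletons below are invented for the example and are not the paper's design:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pump_speed(pressure):
    """Toy fuzzy controller for a pipeline pump:
       IF pressure LOW  THEN speed HIGH
       IF pressure HIGH THEN speed LOW
    (all ranges in bar / percent are illustrative assumptions)."""
    low = tri(pressure, -1.0, 0.0, 5.0)    # degree 'pressure is low'
    high = tri(pressure, 3.0, 8.0, 9.0)    # degree 'pressure is high'
    # Weighted-average (Sugeno-style) defuzzification over speed singletons.
    num = low * 100.0 + high * 20.0        # speeds in % of maximum
    den = low + high
    return num / den if den else 50.0      # default mid speed

speeds = [pump_speed(p) for p in (0.0, 4.0, 8.0)]
```

At low pressure the controller drives the pump hard (100%), at high pressure it throttles back (20%), and in between the two fired rules blend smoothly.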

14.
Machine translation is the process of using a computer to convert text in one language into text with the same meaning in another language. It is an important research topic in artificial intelligence. In recent years, with the rapid development of deep learning research and applications, neural machine translation has become an important direction in the field. This paper first briefly introduces the impact of neural machine translation in academia and industry over the past year, then presents a categorized survey of current research progress in neural machine translation, and finally looks ahead to future development trends.

15.
Multiagent Systems: A Survey from a Machine Learning Perspective
Distributed Artificial Intelligence (DAI) has existed as a subfield of AI for less than two decades. DAI is concerned with systems that consist of multiple independent entities interacting in a domain. Traditionally, DAI has been divided into two sub-disciplines: Distributed Problem Solving (DPS), which focuses on the information management aspects of systems with several components working together towards a common goal, and Multiagent Systems (MAS), which deals with behavior management in collections of several independent entities, or agents. This survey of MAS is intended to serve as an introduction to the field and as an organizational framework. A series of general multiagent scenarios is presented. For each scenario, the issues that arise are described along with a sampling of the techniques that exist to deal with them. The presented techniques are not exhaustive, but they highlight how multiagent systems can be, and have been, used to build complex systems. When options exist, the techniques presented are biased towards machine learning approaches. Additional opportunities for applying machine learning to MAS are highlighted, and robotic soccer is presented as an appropriate test bed for MAS. This survey does not focus exclusively on robotic systems; however, we believe that much of the prior research on non-robotic MAS is relevant to robotic MAS, and we explicitly discuss several robotic MAS, including all of those presented in this issue.

16.

With the advancement of telecommunications, sensor networks, crowdsourcing, and remote sensing technology, there has been tremendous growth in the volume of data with both spatial and temporal references. This huge volume of available spatio-temporal (ST) data, together with recent developments in machine learning and computational intelligence techniques, has spurred current research interest in developing data-driven models for extracting useful and interesting patterns, relationships, and knowledge embedded in such large ST datasets. In this survey, we provide a structured and systematic overview of research on data-driven approaches for spatio-temporal data analysis. The focus is on outlining various state-of-the-art spatio-temporal data mining techniques and their applications in various domains. We start with a brief overview of spatio-temporal data and the challenges in analyzing it, and conclude by listing the current trends and future scope of research in this multi-disciplinary area. Compared with other relevant surveys, this paper provides comprehensive coverage of the techniques from both computational/methodological and application perspectives. We anticipate that the present survey will help in better understanding the directions in which research has been conducted on data-driven modeling for spatio-temporal data analysis.


17.
The authors review and categorize research on applications of artificial intelligence (AI) and expert systems (ES) in new product development (NPD) activities. A brief overview of the NPD process and AI is presented. This is followed by a literature survey of AI and ES applications in NPD, which revealed twenty-four articles (twenty-two applications) in the 1990–1997 period. The applications are categorized into five areas: expert decision support systems for NPD project evaluation, knowledge-based systems (KBS) for product and process design, KBS for QFD, AI support for conceptual design, and AI support for group decision making in concurrent engineering. A brief review of each application is provided. The articles are also grouped by NPD stage and by seven NPD core elements (competencies and abilities). Further research areas are pointed out.

18.
The deployment of wireless sensor networks and mobile ad-hoc networks in applications such as emergency services, warfare and health monitoring exposes these networks, because of their openness, to various cyber hazards, intrusions and attacks. Among the most significant research difficulties in securing such networks is intrusion detection, whose goal is to distinguish between misuse and abnormal behavior so as to ensure secure, reliable network operations and services. Intrusion detection is best delivered by multi-agent system technologies and advanced computing techniques. To date, diverse soft computing and machine learning techniques have been used to create Intrusion Detection and Prevention Systems (IDPS), yet the literature does not report any state-of-the-art reviews investigating the performance and consequences of such techniques in solving wireless-environment intrusion recognition as these networks move into cloud computing. The principal contribution of this paper is a review and categorization of existing IDPS schemes in terms of traditional artificial and computational intelligence with multi-agent support. The significance of the techniques and methodologies and their performance and limitations are additionally analyzed, and the limitations are addressed as challenges to derive a set of requirements for a collaborative wireless IDPS (Co-WIDPS) architectural design. The design amalgamates fuzzy reinforcement learning for knowledge management, creating a technological platform that is far more accurate in detecting attacks. In conclusion, we elaborate on several key future research topics with the potential to accelerate the progress and deployment of computational-intelligence-based Co-WIDPSs.

19.
李韵  黄辰林  王中锋  袁露  王晓川 《软件学报》2020,31(7):2040-2061
The growing complexity of software poses great challenges to software security. As software scale increases and vulnerability forms diversify, traditional vulnerability discovery methods suffer from high false-positive and false-negative rates and can no longer meet the security-analysis needs of complex software. In recent years, with the rise of the artificial intelligence industry, many machine learning methods have been explored for software vulnerability discovery. This paper first reviews existing work on machine-learning-based software vulnerability discovery and summarizes its technical characteristics and workflow. Then, starting from the core step of feature extraction from raw data, it classifies and systematically compares existing work according to the form of code representation used. Finally, based on this survey, it discusses the challenges facing machine-learning-based vulnerability discovery and looks ahead to the field's development trends.
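As a minimal illustration of the code-representation step the survey centers on, source code can be turned into a feature vector before any classifier sees it. The bag-of-tokens scheme and hand-picked vocabulary below are deliberately simplistic stand-ins for the richer representations the surveyed work compares:

```python
import re
from collections import Counter

def token_vector(code, vocab):
    """Bag-of-tokens representation of a source snippet: count how
    often each vocabulary token (identifier or punctuation) appears.
    The vocabulary here is a toy assumption for the example."""
    tokens = re.findall(r"[A-Za-z_]\w*|\S", code)
    counts = Counter(tokens)
    return [counts[t] for t in vocab]

# Tokens plausibly correlated with memory-safety bugs, for illustration.
vocab = ["strcpy", "malloc", "free", "(", ")"]
vec = token_vector("strcpy(dst, src); free(p);", vocab)
```

A vulnerability classifier would then be trained on such vectors labeled vulnerable/benign; real systems replace this with AST-, graph- or embedding-based representations.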

20.

Cardiovascular diseases (CVDs) are the major cause of mortality in India and globally, as reported by the World Health Organization (WHO). Irregularities in the pace of the heartbeat, called cardiac arrhythmias, are among the most commonly diagnosed CVDs, caused by ischemic heart disease, hypertension, alcohol intake, and a stressful lifestyle. Beyond these causes, abnormality in cardiac rhythm induced by long-term mental stress (mediated by the Autonomic Nervous System (ANS)) is a challenging issue for researchers. Early detection of cardiac arrhythmias through automatic electronic techniques has been an important research field since the invention of the electrocardiogram (ECG or EKG), and more recently with the advent of advanced machine learning algorithms. The ECG records the variations in electrical activity associated with the cardiac cycle and is used by cardiologists and researchers as the gold standard for studying heart function. The present work aims to provide an extensive survey of research in automated ECG analysis and the classification of regular and irregular heartbeat classes by conventional and modern artificial intelligence (AI) methods. AI-based methods have become popular during the last decade for the automatic and early diagnosis of clinical symptoms of arrhythmias. In this work, the literature of the last two decades is reviewed to assess the performance of AI and other computer-based techniques in analyzing ECG signals for the prediction of cardiac rhythm disorders. Existing ECG feature extraction techniques and machine learning (ML) methods used for ECG signal analysis and classification are compared using performance metrics such as specificity, sensitivity, accuracy, and positive predictive value.
Popular AI methods, including artificial neural networks (ANN), fuzzy logic systems, and other machine learning algorithms (support vector machines (SVM), k-nearest neighbors (KNN), etc.), are considered in this review for cardiac arrhythmia classification. The publicly available ECG databases used to evaluate classification accuracy are also described. The aim is to provide the reader with the prerequisites, the methods used in the last two decades, and a systematic approach, all in one place, to further pursue research in detecting cardiovascular abnormalities from the ECG signal. As a contribution of the current work, future challenges for real-time remote ECG acquisition and analysis using emerging technologies such as wireless body sensor networks (WBSN) and the Internet of Things (IoT) are identified.
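The performance metrics used to compare the surveyed classifiers follow directly from a beat-level confusion matrix; the counts in the example below are invented for illustration:

```python
def binary_metrics(tp, fn, fp, tn):
    """Standard figures of merit for a binary arrhythmia classifier,
    computed from confusion-matrix counts (tp = arrhythmic beats
    correctly flagged, fn = missed, fp = false alarms, tn = normal
    beats correctly passed)."""
    sensitivity = tp / (tp + fn)              # recall on arrhythmic beats
    specificity = tn / (tn + fp)              # recall on normal beats
    ppv = tp / (tp + fp)                      # positive predictive value
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, ppv, accuracy

# e.g. 90 arrhythmic beats caught, 10 missed, 5 false alarms, 95 normal kept
se, sp, ppv, acc = binary_metrics(tp=90, fn=10, fp=5, tn=95)
```

Reporting all four together matters because ECG datasets are heavily imbalanced toward normal beats, so accuracy alone can hide a poor sensitivity.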


