Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.

In the past few years, multiagent systems (MAS) have emerged as an active subfield of artificial intelligence (AI). Because of the inherent complexity of MAS, there is much interest in using machine learning (ML) techniques to help build multiagent systems. Robotic soccer is a particularly good domain for studying MAS and multiagent learning. Our approach to using ML as a tool for building Soccer Server clients involves layering increasingly complex learned behaviors. In this article, we describe two levels of learned behaviors. First, the clients learn a low-level individual skill that allows them to control the ball effectively. Then, using this learned skill, they learn a higher level skill that involves multiple players. For both skills, we describe the learning method in detail and report on our extensive empirical testing. We also verify empirically that the learned skills are applicable to game situations.

2.

Lateral movement (LM) is a principal, increasingly common, tactic in the arsenal of advanced persistent threat (APT) groups and other less or more powerful threat actors. It concerns techniques that enable a cyberattacker, after establishing a foothold, to maintain ongoing access and penetrate further into a network in quest of prized booty. This is done by moving through the infiltrated network and gaining elevated privileges using an assortment of tools. Concentrating on the MS Windows platform, this work provides, to our knowledge, the first holistic methodology, supported by an abundance of experimental results, for the detection of LM via supervised machine learning (ML) techniques. We specifically detail feature selection, data preprocessing, and feature importance processes, and elaborate on the configuration of the ML models used. A plethora of ML techniques are assessed, including 10 base estimators, one ensemble meta-estimator, and five deep learning models. Vis-à-vis the relevant literature, and by considering a highly unbalanced dataset and a multiclass classification problem, we report superior scores in terms of the F1 and AUC metrics, 99.41% and 99.84%, respectively. Last but not least, as a side contribution, we offer a publicly available, open-source tool, which can convert Windows system monitor logs to turnkey datasets, ready to be fed into ML models.
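As a rough illustration of the kind of supervised, multiclass, imbalanced-data pipeline described above, the following sketch trains a classifier and reports macro F1 and AUC. The feature matrix, label scheme, and model choice are placeholders, not the authors' actual configuration or dataset.

```python
# Hypothetical sketch: multiclass detection on an imbalanced, log-derived dataset.
# Features and labels below are synthetic stand-ins for Sysmon-style features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                              # placeholder feature vectors
y = rng.choice([0, 1, 2], size=5000, p=[0.90, 0.07, 0.03])   # benign vs. two LM classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)
print("macro F1 :", f1_score(y_te, pred, average="macro"))
print("macro AUC:", roc_auc_score(y_te, proba, multi_class="ovr", average="macro"))
```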


3.

The design of gas turbines is a challenging area of cyber-physical systems where complex model-based simulations across multiple disciplines (e.g., performance, aerothermal) drive the design process. As a result, a continuously increasing amount of data is derived during system design. Finding new insights in such data by exploiting various machine learning (ML) techniques is a promising industrial trend since better predictions based on real data result in substantial product quality improvements and cost reduction. This paper presents a method that generates data from multi-paradigm simulation tools, develops and trains ML models for prediction, and deploys such prediction models into an active control system operating at runtime with limited computational power. We explore the replacement of existing traditional prediction modules with ML counterparts with different architectures. We validate the effectiveness of various ML models in the context of three (real) gas turbine bearings using over 150,000 data points for training, validation, and testing. We introduce code generation techniques for automated deployment of neural network models to industrial off-the-shelf programmable logic controllers.
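A minimal sketch of the train-then-export step that such a deployment pipeline needs: fit a small network on (here synthetic) simulation data and dump its weights so that a separate code generator could emit controller-ready source. Network size, feature count, and file names are assumptions, not the paper's setup.

```python
# Hypothetical sketch: train a small surrogate model and export its weights for
# downstream code generation (e.g., into IEC 61131-3 structured text).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(10000, 6))                       # stand-in for simulation inputs
y = np.sin(X @ rng.normal(size=6)) + 0.01 * rng.normal(size=10000)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=1)
model.fit(X, y)

# Dump each layer's weights and biases; a code generator can turn these arrays
# plus a fixed matrix-multiply loop into PLC source code.
for i, (W, b) in enumerate(zip(model.coefs_, model.intercepts_)):
    np.savetxt(f"layer{i}_W.csv", W, delimiter=",")
    np.savetxt(f"layer{i}_b.csv", b, delimiter=",")
```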


4.
The objective of this work is to identify a control algorithm that is capable of handling the nonlinear (operating-point-dependent) behaviour witnessed in most industrial processes. To this end, the proposed solution is a supervisory multiple model control scheme, SMMC. This work demonstrates that the multiple model methodology can be recast into a supervisory approach, whereby the supervisor is employed as a selector. This selector (supervisor) identifies the appropriate local controller from a fixed family set. Unlike other supervisory techniques, a multiple model observer (MMO) is proposed for the selection mechanism. Switching between local controllers is accomplished bumplessly through a multiple model bumpless transfer scheme, producing a continuous control signal as the process traverses different operating regimes. The key issue in this application is the unique interaction between the local controllers and the supervisor. This interaction is necessary to ensure global stability is maintained at all times, especially during switching. In short, the SMMC scheme enables the implementation of linear control theory, which is well accepted in industry, on standard nonlinear processes. The SMMC approach allows the control design to extend beyond the normal operating conditions at which standard linear control techniques break down. The above notion is applied to a pilot-scale binary distillation column. In this example the column's distinct operating points describe the nonlinear behaviour. The results illustrate that as the distillation column shifted between different operating points, the SMMC self-regulated accordingly. This self-regulation ensures that global stability and performance are maintained at an optimum. The entire SMMC design was implemented within a PC Windows-NT environment interfaced to an industrial DCS.
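A toy sketch of the supervisor-as-selector idea: each local model tracks a filtered prediction error against the measured output, and the controller paired with the best-matching model is activated. The first-order models, forgetting factor, and parameter values are illustrative assumptions, not the SMMC design itself.

```python
# Hypothetical sketch: multiple-model supervisor that selects the local controller
# whose model currently explains the plant best (smallest filtered prediction error).
import numpy as np

class LocalModel:
    def __init__(self, gain, tau):
        self.gain, self.tau = gain, tau
        self.y_hat, self.err = 0.0, 0.0          # prediction and filtered squared error

    def step(self, u, y_meas, dt=1.0, forget=0.95):
        self.y_hat += dt / self.tau * (self.gain * u - self.y_hat)
        self.err = forget * self.err + (1 - forget) * (y_meas - self.y_hat) ** 2
        return self.err

def supervisor_select(models, u, y_meas):
    """Return the index of the local model/controller with the smallest error."""
    return int(np.argmin([m.step(u, y_meas) for m in models]))

# Example: three local models tuned around different operating points.
bank = [LocalModel(1.0, 5.0), LocalModel(2.5, 8.0), LocalModel(4.0, 12.0)]
print("active local controller:", supervisor_select(bank, u=0.4, y_meas=0.9))
```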

5.
Digital twin (DT) and artificial intelligence (AI) technologies are powerful enablers for Industry 4.0 toward sustainable, resilient manufacturing. Digital twins of machine tools and machining processes combine advanced digital techniques and production domain knowledge, facilitate the enhancement of agility, traceability, and resilience of production systems, and help machine tool builders achieve a paradigm shift from one-time product provision to ongoing service delivery. However, the adaptability and accuracy of digital twins at the shopfloor level are restricted by heterogeneous data sources and modeling precision, as well as by uncertainties from dynamic industrial environments. This article proposes a novel modeling framework that addresses these inadequacies by deeply integrating AI techniques and machine tool expertise, using data aggregated along the product development process. A data processing procedure is constructed to contextualize metadata sources from the design, planning, manufacturing, and quality stages and link them into a digital thread. On this consistent data basis, a modeling pipeline is presented that incorporates production and machine tool prior knowledge into the AI development pipeline, while considering the multi-fidelity nature of data sources in dynamic industrial circumstances. In terms of implementation, we first introduce our existing work on building digital twins of machine tools and manufacturing processes. Within this infrastructure, we developed a hybrid learning-based digital twin for the manufacturing process following the proposed modeling framework and tested it, as an example, in an external industrial project for real-time workpiece quality monitoring. The result indicates that the proposed hybrid learning-based digital twin enables learning the uncertainties of the interaction of machine tools and machining processes in real industrial environments, and thus allows estimating and enhancing the modeling reliability, depending on data quality and accessibility. Prospectively, it also contributes to the reparametrization of model parameters and to adaptive process control.

6.

One of the current challenges is to reduce collisions between vehicles and animals on roads; such accidents cause environmental imbalance and large expenditures from public coffers. This paper presents the components of a simple animal detection system and a methodology for detecting animals in images provided by cameras installed along roads. The methodology extracts features from regions of the image and uses Machine Learning (ML) techniques to classify the regions into two classes: animal and non-animal. Two ML techniques were compared using synthetic images, traversing the pixels of the image with five distinct approaches. Results show that the KNN learning model is more reliable than Random Forest for accurately identifying animals on roads.
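The comparison reported above can be reproduced in spirit with a few lines of scikit-learn; the synthetic features below stand in for the paper's image-region descriptors and are not its actual data.

```python
# Hypothetical sketch: compare KNN and Random Forest on animal / non-animal
# region features (synthetic placeholders for the paper's image descriptors).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 12))                      # features of image regions
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)        # 1 = animal, 0 = non-animal

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("Random Forest", RandomForestClassifier(n_estimators=100, random_state=2))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```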

7.
Participating members in a manufacturing supply chain (MSC) usually make use of individual knowledge for making independent decisions. Recent research, however, indicates that there is a need to handle such distributed knowledge in an integrated manner, especially under uncertain and fast-changing environments. A multiagent system (MAS), a branch of distributed artificial intelligence, is a contemporary modelling technique for a distributed system like MSCs in the manufacturing domain. However, recent research indicates that MAS approaches have not adequately addressed the role of sharing tacit knowledge (TK) in MSC performance. This paper, therefore, aims to propose a framework that utilizes MAS techniques with a corresponding TK sharing mechanism dedicated to MSCs. We performed experiments to simulate the proposed approach. The results showed significant improvements when comparing the proposed approach with a conventional MAS model. The results establish a starting point for researchers interested in enhancing MSC performance using a TK management approach, and for managers of MSCs who want to focus on the essentials of sharing TK.

8.
The ability to reason about temporal data, representing past, current, and expected application states, is an important function to be accomplished by Real-Time Knowledge-Based Systems (RTKBS). The application of knowledge-based systems to real-time problems has to deal with dynamic, time-constrained environments and to meet two of the most important requirements in real-time systems: the ability to react rapidly to changes in the environment and the guarantee of a bound on the response time. This paper presents a temporal framework for reasoning about the future behaviour of a dynamic, time-constrained problem. The proposed mechanism is integrated into a multiagent blackboard architecture and provides a perspective on the temporal functionalities offered by the REAKT tool.

9.
Rate-monotonic analysis for real-time industrial computing
Issues of real-time resource management are pervasive throughout industrial computing. The underlying physical processes of many industrial computing applications impose explicit timing requirements on the tasks processed by the computer system. These timing requirements are an integral part of the correctness and safety of a real-time system. It is tempting to think that speed (for example, faster processors or higher communication bandwidths) is the sole ingredient in meeting system timing requirements, but speed alone is not enough. Proper resource-management techniques must also be used to prevent, for example, situations in which long, low-priority tasks block higher priority tasks with short deadlines. One guiding principle in real-time system resource management is predictability: the ability to determine, for a given set of tasks, whether the system will be able to meet all of the timing requirements of those tasks. Predictability calls for the development of scheduling models and analytic techniques to determine whether or not a real-time system can meet its timing requirements. The author illustrates an analysis methodology, rate-monotonic analysis, for managing real-time requirements in a distributed industrial computing setting. The illustration is based on a comprehensive robotics example drawn from a typical industrial application.
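For reference, the simplest rate-monotonic schedulability check is the Liu and Layland utilization bound; the sketch below applies it to an illustrative task set (the task parameters are made up, not the robotics example from the article).

```python
# Sufficient (not necessary) rate-monotonic schedulability test: U <= n*(2^(1/n) - 1).
def rm_schedulable(tasks):
    """tasks: list of (execution_time, period) pairs in the same time unit."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Example: three periodic tasks (C, T) in milliseconds.
tasks = [(1, 8), (2, 16), (4, 32)]
print(rm_schedulable(tasks))   # True: U = 0.375 <= bound ~ 0.780 for n = 3
```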

10.
A sliding-window k-NN query (k-NN/w query) continuously monitors incoming data stream objects within a sliding window to identify the k objects closest to a query. It enables effective filtering of data objects streaming in at high rates from potentially distributed sources, and offers means to control the rate of object insertions into result streams. Therefore, k-NN/w processing systems may be regarded as one of the prospective solutions for the information overload problem in applications that require processing of structured data in real time, such as the Sensor Web. Existing k-NN/w processing systems are mainly centralized and cannot cope with multiple data streams whose sources are scattered over the Internet. In this paper, we propose a solution for distributed continuous k-NN/w processing of structured data from distributed streams. We define a k-NN/w processing model for such a setting, and design a distributed k-NN/w processing system on top of the Content-Addressable Network (CAN) overlay. An extensive evaluation using both real and synthetic data sets demonstrates the feasibility of the proposed solution: it balances the load among the peers, while the messaging overhead within the P2P network remains reasonable. Moreover, our results clearly show the solution is scalable for an increasing number of queries and peers.
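A minimal, centralized sketch of a k-NN/w monitor clarifies the query semantics: keep the last W stream objects and return the k nearest to the query on each arrival. The distributed design in the paper partitions this work over a CAN overlay; the one-dimensional stream and parameters below are assumptions for illustration.

```python
# Hypothetical sketch: sliding-window k-NN monitoring over a data stream.
import heapq
import random
from collections import deque

def knn_window_stream(stream, query, k=3, window=100):
    win = deque(maxlen=window)                 # sliding window of recent objects
    for obj in stream:
        win.append(obj)
        # k objects in the current window closest to the query (1-D distance here)
        yield heapq.nsmallest(k, win, key=lambda o: abs(o - query))

random.seed(0)
stream = (random.uniform(0, 10) for _ in range(500))
for i, result in enumerate(knn_window_stream(stream, query=5.0)):
    if i % 100 == 0:
        print(i, [round(v, 2) for v in result])
```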

11.
A definition for the reliability of inferential sensor predictions is provided. A data-driven Bayesian framework for real-time performance assessment of inferential sensors is proposed. The main focus is on characterizing the effect of operating space on the reliability of inferential sensor predictions. A holistic, quantitative measure of the reliability of the inferential sensor predictions is introduced. A methodology is provided to define objective prior probabilities over plausible classes of reliability based on the total misclassification cost. The real-time performance assessment of multi-model inferential sensors is also discussed. The application of the method does not depend on the identification techniques employed for model development. Furthermore, on-line implementation of the method is computationally efficient. The effectiveness of the method is demonstrated through simulation and industrial case studies.

12.
Murari A., Gelfusa M., Lungaroni M., Gaudio P., Peluso E. Artificial Intelligence Review, 2022, 55(1): 255-289

Classification, which means discrimination between examples belonging to different classes, is a fundamental aspect of most scientific and engineering activities. Machine Learning (ML) tools have proved to perform very well in this task, in the sense that they can achieve very high success rates. However, both the “realism” and the interpretability of their models are low, leading to modest gains in knowledge and limited applicability, particularly in applications related to nonlinear and complex systems. In this paper, a methodology is described which, by applying ML tools directly to the data, allows formulating new scientific models that describe the actual “physics” determining the boundary between the classes. The proposed technique consists of a stack of different ML tools, each one applied to a specific subtask of the scientific analysis; together they form a system that combines all the major strands of machine learning, from rule-based classifiers and Bayesian statistics to genetic programming and symbolic manipulation. To take into account the error bars of the measurements generating the data, an essential aspect of scientific inference, the novel concept of the Geodesic Distance on Gaussian manifolds is adopted. The properties of the methodology have been investigated with a series of systematic numerical tests for different types of classification problems. The potential of the approach to handle real data has been tested with various experimental databases, built using measurements collected in investigations of complex systems. The obtained results indicate that the proposed method makes it possible to find physically meaningful mathematical equations, which reflect the actual phenomena under study. The developed techniques therefore constitute a very useful information processing system to bridge the gap between data, machine learning models, and scientific theories.


13.
Anomaly detection is crucial for both the safety and the efficiency of modern process industries. This paper proposes a two-step methodology for anomaly detection in industrial processes, adopting machine learning classification algorithms. Starting from a real-time collection of process data, the first step identifies the ongoing process phase, and the second step classifies the input data as “Expected”, “Warning”, or “Critical”. The proposed methodology is particularly relevant where machines carry out several operations without explicit evidence of the production phase. In this context, the difficulty of attributing the real-time measurements to a specific production phase affects the success of condition monitoring. The paper compares the anomaly detection step with and without the process-phase identification step, validating the necessity of the latter. The methodology applies the decision forest algorithm, a well-known anomaly detector for industrial data, and the decision jungle algorithm, which had not previously been tested in industrial applications. A real case study in the pharmaceutical industry validates the proposed anomaly detection methodology, using a 10-month database of 16 process parameters from a granulation process.
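The two-step structure can be sketched as a phase classifier followed by per-phase condition classifiers. The synthetic data, the three phase labels, and the use of random forests for both steps are assumptions for illustration, not the paper's dataset or exact algorithms.

```python
# Hypothetical sketch: step 1 predicts the ongoing phase, step 2 classifies the
# sample as Expected / Warning / Critical with a model trained for that phase.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(6000, 16))                                  # 16 process parameters
phase = rng.integers(0, 3, size=6000)                            # placeholder phase labels
state = rng.choice([0, 1, 2], size=6000, p=[0.90, 0.07, 0.03])   # condition labels

phase_clf = RandomForestClassifier(n_estimators=100, random_state=3).fit(X, phase)
state_clfs = {p: RandomForestClassifier(n_estimators=100, random_state=3)
                 .fit(X[phase == p], state[phase == p]) for p in range(3)}

def classify(sample):
    p = int(phase_clf.predict(sample.reshape(1, -1))[0])         # step 1: phase
    s = int(state_clfs[p].predict(sample.reshape(1, -1))[0])     # step 2: condition
    return p, ["Expected", "Warning", "Critical"][s]

print(classify(X[0]))
```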

14.
A case-based reasoning soft-sensor method and its application to the grinding process
To address the difficulty of measuring certain key process parameters online with instruments in complex industrial processes, a soft-sensor method based on case-based reasoning (CBR) is proposed. A case is represented by its creation time, an operating-condition description, a solution, and a similarity value. Case retrieval adopts a nearest-neighbor strategy with multiple similarity-threshold calculations. Case reuse employs two algorithms, one based on a static similarity threshold and one based on a dynamic similarity threshold, and new case revision and retention strategies are given. A grinding particle-size soft-sensor model built with this method has been successfully applied to the grinding process of a mineral processing plant; the application results show that the proposed method is highly effective and has good prospects for wider application.
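A minimal sketch of the retrieve-and-reuse loop of such a CBR soft sensor: cases above a similarity threshold are retrieved and their solutions are combined into an estimate. The similarity measure, threshold, and toy case base are assumptions, not the paper's formulation.

```python
# Hypothetical sketch: case retrieval with a similarity threshold and
# similarity-weighted reuse of stored solutions (e.g., particle size).
import numpy as np

def similarity(x, case_x):
    return 1.0 / (1.0 + np.linalg.norm(x - case_x))      # simple distance-based similarity

def cbr_estimate(x, case_base, threshold=0.5):
    sims = np.array([similarity(x, cx) for cx, _ in case_base])
    keep = sims >= threshold
    if not keep.any():                                    # fall back to the nearest case
        keep = sims == sims.max()
    sols = np.array([sol for _, sol in case_base])
    return float(np.average(sols[keep], weights=sims[keep]))

# Cases: (operating-condition vector, previously measured grinding particle size).
case_base = [(np.array([1.0, 0.2]), 72.0),
             (np.array([1.1, 0.3]), 70.5),
             (np.array([3.0, 1.5]), 55.0)]
print(cbr_estimate(np.array([1.05, 0.25]), case_base))
```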

15.
16.
An auction logistics center (ALC) is a facility dedicated to logistics and physical distribution that also provides auction functions for goods trading. Adaptive planning and control has been a hot research topic, discussed extensively in the field of manufacturing. Adaptive auction logistics planning and control (ALPC) is urgently required at the ALC to support large trading volumes and shorten processing time. To solve real-life industrial challenges, this paper presents a generic system architecture and its implementation along the following dimensions. Firstly, a cloud-enabled platform for the auction logistics center (CALC) is presented. It is proposed to implement efficient and effective ALPC and to increase flexibility in the execution of logistics operations and auction processes. Secondly, through the integration of IoT (Internet of Things) and cloud computing technologies, the proposed CALC creates a ubiquitous environment at the ALC and establishes auction logistics services for the key stakeholders. Adaptive ALPC can thus be achieved with real-time visibility and traceability. Finally, this study presents a prototype of CALC to verify the proposed methodology. The case study in this paper also shows the potential of CALC to streamline operating processes in an auction logistics environment.

17.
Compared with single-core operating systems, multi-core real-time operating systems offer more functionality and are more complex to use. To address the inconvenience of configuring, tailoring, and porting a multi-core operating system, an application configuration tool for multi-core real-time operating systems is proposed; the tool can improve the efficiency of application development on a multi-core real-time operating system and greatly reduce the error rate. First, for the multi-core control operating system (CMOS) independently developed by Chongqing University of Posts and Telecommunications, a hierarchical, modular design of the configuration tool is carried out, and a visual configuration tool is designed according to CMOS requirements, completing the interface generation engine and automatic code generation. Second, to guarantee the logical correctness of a configuration, a configuration association check is proposed. Experiments show that the configuration tool generates code quickly with a low error rate and is suitable for the CMOS operating system, which verifies the feasibility of the tool. Compared with developers searching for errors on their own, the association check speeds up error detection, quickly locates erroneous code, and guarantees the correctness of the generated configuration files. The configuration tool can therefore effectively promote the application of the CMOS multi-core operating system.
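A minimal sketch of what a configuration association check can look like: each rule relates options that depend on or conflict with each other, and violations are reported before any files are generated. The option names and rules are invented for illustration and are not CMOS's actual configuration items.

```python
# Hypothetical sketch: dependency/consistency checking over a configuration dict.
CONFIG = {"CORE_COUNT": 4, "ENABLE_SMP": True, "ENABLE_FPU": False, "TASK_FPU_CONTEXT": True}

RULES = [
    # (rule name, predicate that must hold, error message)
    ("smp_needs_cores", lambda c: not c["ENABLE_SMP"] or c["CORE_COUNT"] >= 2,
     "ENABLE_SMP requires CORE_COUNT >= 2"),
    ("fpu_context_needs_fpu", lambda c: not c["TASK_FPU_CONTEXT"] or c["ENABLE_FPU"],
     "TASK_FPU_CONTEXT requires ENABLE_FPU"),
]

def check_associations(config):
    return [msg for _, ok, msg in RULES if not ok(config)]

errors = check_associations(CONFIG)
print(errors if errors else "configuration is consistent")
```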

18.
To address the problem of centralized monitoring of dye-vat clusters in a dyeing workshop, a three-in-one system architecture integrating dye-vat process control, data communication, and centralized monitoring was established using field industrial bus and Ethernet technologies, and a multiagent method for monitoring dye-vat clusters is proposed. The method realizes real-time monitoring of the cluster's production plans, job tasks, production process, dyeing-process temperatures, and dye-vat energy consumption. Application of the dye-vat cluster monitoring system in a large printing and dyeing workshop shows that the system achieves satisfactory practical results.

19.
Due to the well-known curse of dimensionality, search in a high-dimensional space is considered a "hard" problem. In this paper, a novel composite distance transformation method, called CDT, is proposed to support fast k-nearest-neighbor (k-NN) search in high-dimensional spaces. In CDT, all n data points are first grouped into clusters by a k-means clustering algorithm. Then a composite distance key is computed for each data point. Finally, the index keys of the n data points are inserted into a partition-based B+-tree. Thus, given a query point, its k-NN search in a high-dimensional space is transformed into a search in a single-dimensional space with the aid of the CDT index. Extensive performance studies are conducted to evaluate the effectiveness and efficiency of the proposed scheme. Our results show that this method outperforms state-of-the-art high-dimensional search techniques, such as the X-Tree, VA-file, iDistance, and NB-Tree.
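The idea of the composite key can be sketched as follows: cluster the points, map each point to a one-dimensional key composed of a cluster offset plus its distance to the cluster centroid, and answer a query by scanning a key range around the query's own key. The key layout, constants, and range heuristic below are simplifications for illustration, not the CDT algorithm itself.

```python
# Hypothetical sketch: composite distance keys turn high-dimensional k-NN search
# into a range scan over a single sorted key (standing in for the B+-tree).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
data = rng.normal(size=(10000, 32))

km = KMeans(n_clusters=16, n_init=10, random_state=4).fit(data)
dist = np.linalg.norm(data - km.cluster_centers_[km.labels_], axis=1)
keys = km.labels_ * 1000.0 + dist                 # composite 1-D key per point
order = np.argsort(keys)                          # sorted index over the keys

def knn_query(q, k=5, radius=2.0):
    c = int(km.predict(q.reshape(1, -1))[0])
    qd = float(np.linalg.norm(q - km.cluster_centers_[c]))
    lo, hi = np.searchsorted(keys[order], [c * 1000.0 + qd - radius,
                                           c * 1000.0 + qd + radius])
    cand = order[lo:hi]                           # candidates from the key range
    return cand[np.argsort(np.linalg.norm(data[cand] - q, axis=1))[:k]]

print(knn_query(rng.normal(size=32)))
```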

20.
Uncertainty is an inherent characteristic of most industrial processes, and a variety of approaches including sensitivity analysis, robust optimization, and stochastic programming have been proposed to deal with it. Uncertainty in a steady-state nonlinear real-time optimization (RTO) system, and in particular making robust decisions under uncertainty in real time, has received little attention. This paper discusses various sources of uncertainty within such closed-loop RTO systems, and a method, based on stochastic programming, that explicitly incorporates uncertainty into the RTO problem is presented. The proposed method is limited to situations where uncertain parameters enter the constraints nonlinearly and uncertain economics enter the objective function linearly. Our approach is shown to significantly improve the probability of a feasible solution in comparison with more conventional RTO techniques. A gasoline blending example is used to demonstrate the proposed robust RTO approach.
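A toy scenario-based sketch conveys the underlying stochastic-programming idea of enforcing feasibility across sampled realizations of uncertain parameters; it uses a linear blending constraint purely for brevity, whereas the paper's method targets uncertain parameters entering the constraints nonlinearly. The component properties, prices, and specification limit are invented.

```python
# Hypothetical sketch: choose blend fractions that remain on-spec for every
# sampled realization of an uncertain component quality (a conservative,
# scenario-based stand-in for the robust RTO formulation).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
octane = rng.normal(loc=[92.0, 97.0], scale=[1.5, 0.5], size=(200, 2))   # sampled qualities
price = np.array([1.0, 1.4])                                             # component costs

# Decision: fractions x1, x2 >= 0 with x1 + x2 = 1; minimize cost subject to the
# blended octane meeting 95 in every scenario: -(o1*x1 + o2*x2) <= -95.
res = linprog(c=price,
              A_ub=-octane, b_ub=-95.0 * np.ones(len(octane)),
              A_eq=[[1.0, 1.0]], b_eq=[1.0], bounds=[(0, 1), (0, 1)])
print("feasible:", res.success, "blend fractions:", res.x)
```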
