Similar documents
20 similar documents found (search time: 78 ms)
1.
A statistical study of a class of cellular evolutionary algorithms.
Parallel evolutionary algorithms have proven empirically worthwhile over the past few years, but there seems to be a lack of understanding of how they work. In this paper we concentrate on cellular (fine-grained) models, with two objectives: (1) to introduce a suite of statistical measures, at both the genotypic and phenotypic levels, that are useful for analyzing the workings of cellular evolutionary algorithms; and (2) to demonstrate the application and utility of these measures on a specific example: the cellular programming evolutionary algorithm. The latter is used to evolve solutions to three distinct (hard) problems in the cellular-automata domain: density, synchronization, and random number generation. Applying our statistical measures, we are able to identify a number of trends common to all three problems (which may represent intrinsic properties of the algorithm itself), as well as a host of problem-specific features. We find that the evolutionary algorithm tends to undergo a number of phases which we are able to delimit quantitatively. The results obtained lead us to believe that the measures presented herein may prove useful in the general case of analyzing fine-grained evolutionary algorithms.
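One family of genotypic statistics of the kind the abstract describes can be sketched as a diversity measure over the cellular grid. The measure below (mean pairwise Hamming distance between bitstring genomes) is an illustrative example, not the paper's own suite:

```python
import random

def genotypic_diversity(grid):
    """Mean pairwise Hamming distance between genomes on the grid
    (one simple genotypic statistic; the paper defines its own suite)."""
    genomes = [g for row in grid for g in row]
    n, total, pairs = len(genomes), 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += sum(a != b for a, b in zip(genomes[i], genomes[j]))
            pairs += 1
    return total / pairs

# A toy 4x4 cellular population of 8-bit genomes.
random.seed(0)
grid = [[[random.randint(0, 1) for _ in range(8)] for _ in range(4)]
        for _ in range(4)]
diversity = genotypic_diversity(grid)
```

Tracking such a statistic over generations is what allows the phases mentioned in the abstract to be delimited quantitatively.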

2.
Explainable Artificial Intelligence (XAI) is an emerging research topic in machine learning aimed at unboxing how the black-box choices of AI systems are made. The field inspects the measures and models involved in decision-making and seeks solutions for explaining them explicitly. Many machine learning algorithms cannot show how or why a decision was reached; this is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability of these black-box models. XAI is becoming increasingly crucial for deep learning powered applications, especially in medical and healthcare studies, even though deep neural networks can in general deliver impressive performance. The insufficient explainability and transparency of most existing AI systems may be one of the major reasons that successful implementation and integration of AI tools into routine clinical practice remain uncommon. In this study, we first survey the current progress of XAI, and in particular its advances in healthcare applications. We then introduce our XAI solutions leveraging multi-modal and multi-centre data fusion, and validate them in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses demonstrate the efficacy of the proposed XAI solutions, from which we envisage successful applications to a broader range of clinical questions.

3.
张健  丁世飞  张楠  杜鹏  杜威  于文家 《软件学报》2019,30(7):2073-2090
Probabilistic graphical models are a current focus of machine learning research, and generative models built on them have been widely applied in areas such as image and speech processing. Restricted Boltzmann machines (RBMs) are undirected probabilistic graphical models of significant value for modeling data distributions: combined with convolution operators they yield deep discriminative models and provide statistical-mechanics grounding for deep networks, while combined with directed graphs they yield generative models that supply multi-modal prior distributions. This paper surveys research on RBM-based probabilistic graphical models. It first introduces the basic concepts and training algorithms of RBM-based machine learning models, discusses the connections among the maximum-likelihood-based training algorithms, and compares their log-likelihood losses. It then reviews the latest advances in RBM models, including the introduction of adversarial losses and the Wasserstein distance into the objective function, the construction of variational autoencoders (VAEs) with RBM priors and of adversarial-loss-based RBM models, and discusses the connections and differences among the real-valued RBM variants. Finally, it surveys applications of RBM-based models in deep learning, and discusses open problems and future research directions for neural networks and RBM models.
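The maximum-likelihood training algorithms the survey compares are mostly variants of contrastive divergence. A minimal CD-1 update for a binary RBM, written out in plain Python as a sketch (dimensions and learning rate are illustrative):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1):
    """One CD-1 update for a binary RBM. W[i][j] couples visible unit i
    to hidden unit j; b, c are visible/hidden biases."""
    nv, nh = len(b), len(c)
    # Positive phase: hidden probabilities given the data vector v0.
    ph0 = [sigmoid(c[j] + sum(v0[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    h0 = [1 if random.random() < p else 0 for p in ph0]
    # Negative phase: one Gibbs step back to visible, then to hidden.
    pv1 = [sigmoid(b[i] + sum(h0[j] * W[i][j] for j in range(nh))) for i in range(nv)]
    v1 = [1 if random.random() < p else 0 for p in pv1]
    ph1 = [sigmoid(c[j] + sum(v1[i] * W[i][j] for i in range(nv))) for j in range(nh)]
    # Approximate likelihood gradient: <v h>_data - <v h>_model.
    for i in range(nv):
        for j in range(nh):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
    for i in range(nv):
        b[i] += lr * (v0[i] - v1[i])
    for j in range(nh):
        c[j] += lr * (ph0[j] - ph1[j])
    return W, b, c

random.seed(1)
W = [[0.0] * 2 for _ in range(3)]
b, c = [0.0] * 3, [0.0] * 2
W, b, c = cd1_step([1, 0, 1], W, b, c)
```

The adversarial-loss and Wasserstein-distance variants reviewed in the paper replace or augment this log-likelihood gradient with other objectives.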

4.
High-performance computation is critical for brain-machine interface (BMI) applications. Current BMI decoding algorithms are typically implemented on personal computers (PCs), which limits the performance of complex mapping models. In this paper, an FPGA implementation of the Kalman filter (KF) algorithm is proposed as a new computational method. Neural ensemble activities are recorded from the motor cortex of rats performing a lever-pressing task for water reward. The Kalman filter, which maps neural activities to kinematic variables, is implemented both on a PC (MATLAB-based) and on an FPGA. In the FPGA architecture, a row/column-based method is adopted for the matrix operations instead of the traditional element-based method, and parallel, pipelined structures are used for efficient computation. The results show that the FPGA-based implementation runs 24.45 times faster than the PC-based counterpart while achieving the same accuracy. Such a hardware-based computational method provides a tool for high-performance computation, with profound implications for portable BMI applications.
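The predict/update recursion that the FPGA parallelizes can be sketched in its scalar form; the paper's decoder is the matrix version of exactly these two steps (all parameter values below are illustrative, not from the paper):

```python
def kalman_1d(zs, a=1.0, h=1.0, q=0.01, r=0.1, x0=0.0, p0=1.0):
    """Scalar Kalman filter sketch: state x (a kinematic variable),
    observation z (e.g. a neural firing rate). a: state transition,
    h: observation model, q/r: process/observation noise variances."""
    x, p, out = x0, p0, []
    for z in zs:
        # Predict step.
        x, p = a * x, a * p * a + q
        # Update step with Kalman gain k.
        k = p * h / (h * p * h + r)
        x = x + k * (z - h * x)
        p = (1 - k * h) * p
        out.append(x)
    return out

est = kalman_1d([1.0, 1.2, 0.9, 1.1])
```

In the FPGA design, the matrix multiplications inside these two steps are what the row/column-based, pipelined datapath accelerates.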

5.
Toward the border between neural and Markovian paradigms
A new tendency in the design of modern signal processing methods is the creation of hybrid algorithms. This paper gives an overview of different signal processing algorithms situated halfway between Markovian and neural paradigms. A new systematic way to classify these algorithms is proposed. Four specific classes of models are described. The first one is made up of algorithms based upon either one of the two paradigms, but including some parts of the other one. The second class includes algorithms proposing a parallel or sequential cooperation of two independent Markovian and neural parts. The third class tends to show Markov models (MMs) as a special case of neural networks (NNs), or conversely NNs as a special case of MMs; these algorithms concentrate mainly on bringing together the respective learning methods. The fourth class comprises hybrids, neither purely Markovian nor neural, which can be seen as belonging to a more general class of models presenting features from both paradigms. The first two classes essentially include models with structural modifications, while the latter two classes propose algorithmic modifications. For the sake of clarity, only the main mathematical formulas are given. Specific applications are intentionally avoided to give a wider view of the subject. The references provide more details for interested readers.

6.
Three parallel physical optimization algorithms for allocating irregular data to multicomputer nodes are presented. They are based on simulated annealing, neural networks and genetic algorithms. All three algorithms deviate from the sequential versions in order to achieve acceptable speedups. The parallel simulated annealing (PSA) and neural network (PNN) algorithms include communication schemes that are adapted to the properties of the allocation problem and of the algorithms themselves for maintaining both good solutions and reasonable execution times. The parallel genetic algorithm (PGA) is based on a natural model of evolution. The performances of these algorithms are evaluated and compared. The three parallel algorithms maintain the good solution qualities of their sequential counterparts. Their comparison shows their suitability for different applications. For example, PGA yields the best solutions, but it is the slowest of the three. PNN is the fastest, but it yields lower quality solutions. PSA's performance lies in the middle.
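The core of the simulated-annealing variant can be sketched for the allocation problem. The cost function below only penalizes load imbalance across nodes; the paper's cost models also account for communication, and all parameters here are illustrative:

```python
import math, random

def anneal_allocation(weights, n_nodes, t0=1.0, cooling=0.995,
                      steps=1500, seed=42):
    """Simulated-annealing sketch for allocating weighted data items
    to multicomputer nodes, minimizing load imbalance."""
    random.seed(seed)
    assign = [random.randrange(n_nodes) for _ in weights]

    def cost(a):
        loads = [0.0] * n_nodes
        for w, node in zip(weights, a):
            loads[node] += w
        return max(loads) - min(loads)

    cur, t = cost(assign), t0
    for _ in range(steps):
        i = random.randrange(len(weights))
        old = assign[i]
        assign[i] = random.randrange(n_nodes)   # propose a random move
        new = cost(assign)
        # Accept improvements always, worsenings with Boltzmann probability.
        if new <= cur or random.random() < math.exp((cur - new) / t):
            cur = new
        else:
            assign[i] = old                     # reject: undo the move
        t *= cooling                            # cool the temperature
    return assign, cur

weights = [5, 3, 8, 2, 7, 4, 1, 6]
assign, imbalance = anneal_allocation(weights, 3)
```

The parallel version (PSA) distributes such moves across processors and adds the communication scheme described in the abstract.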

7.
As the world's most popular smartphone system, Android exposes its users to numerous threats from malicious applications, making effective detection of Android malware a pressing problem. This paper proposes an Android malware detection method based on statistical features. The method collects statistical features from 5,560 malicious and 3,000 benign applications as the training dataset, and preprocesses the malicious set with a clustering algorithm to reduce the influence of individual variation on the experimental results. It then combines these features with several machine learning algorithms (e.g., linear regression and neural networks) to build detection models. Experimental results show that the two resulting models clearly outperform the baseline models in both time efficiency and detection accuracy.
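The statistical-feature pipeline can be sketched with a deliberately minimal classifier. The feature names and values below are hypothetical, and nearest-centroid stands in for the richer models (linear regression, neural networks) the paper actually trains:

```python
def nearest_centroid(train, labels, x):
    """Classify feature vector x by the nearest class centroid.
    Each app is a vector of statistical features (here hypothetically:
    permission count, suspicious-API-call count)."""
    centroids = {}
    for c in set(labels):
        rows = [v for v, l in zip(train, labels) if l == c]
        centroids[c] = [sum(col) / len(rows) for col in zip(*rows)]

    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    return min(centroids, key=lambda c: sqdist(centroids[c], x))

# Toy feature vectors: [n_permissions, n_suspicious_api_calls]
train = [[2, 0], [3, 1], [15, 9], [12, 7]]
labels = ["benign", "benign", "malware", "malware"]
verdict = nearest_centroid(train, labels, [14, 8])  # → "malware"
```

The clustering preprocessing mentioned in the abstract would run before this step, grouping the malicious set so that one atypical sample does not skew the model.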

8.
Objective: Fine-grained image classification, a challenging topic in computer vision, aims to divide a broad category into more detailed subcategories, and is in wide demand in both industry and academia. To mitigate interference from irrelevant background and the difficulty of extracting inter-class discriminative features, this paper proposes a fine-grained classification algorithm that combines the object detector YOLOv3 (you only look once) with a bilinear fusion network, improving fine-grained classification performance. Method: A retrained YOLOv3 roughly localizes the object in the image; a background-suppression step removes interference from information outside the object; and the classical bilinear convolutional neural network (B-CNN) is improved by fusing convolutional features across different channels and layers. Fusing the feature vectors of different convolutional layers in the bilinear network yields richer complementary information and thus higher fine-grained classification accuracy. Results: On the CUB-200-2011 (Caltech-UCSD Birds-200-2011), Cars196, and Aircrafts100 datasets, the proposed algorithm achieves classification accuracies of 86.3%, 92.8%, and 89.0%, improvements of 2.2%, 1.5%, and 4.9% over the classical B-CNN, validating its effectiveness; it also compares favorably with other existing fine-grained classification algorithms. Conclusion: The improved algorithm uses YOLOv3 to filter out most of the irrelevant background and enriches the feature information of the bilinear classification network through feature fusion, yielding more accurate classification.
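The bilinear operation at the heart of B-CNN can be sketched independently of any deep learning framework: at each spatial location, take the outer product of two feature vectors, then sum-pool over locations. The tiny feature maps below are made-up inputs for illustration:

```python
def bilinear_pool(fa, fb):
    """Bilinear pooling as used in B-CNN: outer product of two feature
    vectors at each location, sum-pooled over all locations.
    fa, fb: lists of per-location feature vectors (possibly from
    different convolutional layers, as in the paper's fusion)."""
    ca, cb = len(fa[0]), len(fb[0])
    pooled = [[0.0] * cb for _ in range(ca)]
    for va, vb in zip(fa, fb):
        for i in range(ca):
            for j in range(cb):
                pooled[i][j] += va[i] * vb[j]
    return pooled

# Two spatial locations; 2-dim and 3-dim feature vectors.
fa = [[1.0, 2.0], [0.0, 1.0]]
fb = [[1.0, 0.0, 1.0], [2.0, 1.0, 0.0]]
pooled = bilinear_pool(fa, fb)  # 2x3 matrix of pairwise interactions
```

The paper's improvement fuses feature vectors from different convolutional layers before this pooling, so the pooled matrix captures complementary information across depths.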

9.
Implementing intelligent and bio-inspired algorithms in industrial and real applications is arduous, time-consuming and costly; in addition, many aspects of the system, from the high-level behavior of the algorithm to the energy consumption of the target platform, must be considered simultaneously in the design process. Advances in hardware platforms such as DSPs, FPGAs and ASICs in recent years have made it increasingly possible to implement computationally complex intelligent systems; on the other hand, the design and testing costs of these systems are high. Reusability and extendibility of the developed models can decrease the total cost and time-to-market of an intelligent system. In this work, a model-driven development approach is utilized to implement emotional learning, a bio-inspired algorithm, for embedded purposes. Recent studies show that emotion is a mechanism for fast decision making in humans and other animals, and can be regarded as an expert system. Mathematical models describing emotion in mammals have been developed from cognitive studies. Here, the brain emotional learning based intelligent controller (BELBIC), which is based on the mammalian midbrain, is designed and implemented on an FPGA, and the resulting embedded emotional controller (E-BELBIC) is used to control a real laboratory overhead traveling crane in a model-free, embedded manner. Short time-to-market, easy testing and error handling, separation of concerns, and improved reusability and extendibility of the obtained models in similar applications are some benefits of the model-driven development methodology.

10.
With the rapid growth of deep learning and neural network algorithms, fields such as communication, industrial automation, computer vision and medical applications have seen drastic improvements in recent years. However, deep learning and neural network models keep growing, as do the parameter counts used to represent them. Although existing models run on efficient GPUs, their implementation on dedicated embedded devices needs further optimization, which remains a real challenge for researchers. This paper therefore investigates deep learning frameworks, and in particular reviews the adders implemented within them. A new pipelined hybrid merged adder (PHMAC), optimized for FPGA architectures and more efficient in terms of area and power, is presented. The proposed adder integrates the carry-select and carry-lookahead principles, re-using LUTs across different inputs to consume less power and utilize area effectively. The proposed adder was investigated on different FPGA architectures, where its power and area were analyzed. Compared with carry-select adders (CSA), carry-lookahead adders (CLA), carry-skip adders and Kogge-Stone adders, the results show a reduction of up to 50% in area and power (45% in some of the comparisons with the traditional adders mentioned above).
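The carry-lookahead principle the PHMAC builds on can be sketched in software: per-bit generate and propagate signals give each carry as c[i+1] = g[i] | (p[i] & c[i]). The recurrence is evaluated sequentially here for clarity; hardware unrolls it to compute carries in parallel:

```python
def cla_add(a, b, width=8):
    """Adder sketch using generate/propagate signals, the principle the
    PHMAC combines with carry-select. Returns (sum mod 2**width, carry-out)."""
    abits = [(a >> i) & 1 for i in range(width)]
    bbits = [(b >> i) & 1 for i in range(width)]
    g = [x & y for x, y in zip(abits, bbits)]   # generate: both bits set
    p = [x | y for x, y in zip(abits, bbits)]   # propagate: at least one set
    c, s = 0, 0
    for i in range(width):
        s |= (abits[i] ^ bbits[i] ^ c) << i     # sum bit
        c = g[i] | (p[i] & c)                   # lookahead carry recurrence
    return s, c

result = cla_add(100, 55)  # (155, 0): no carry-out
```

A carry-select design would instead compute both carry-in cases per block and multiplex; the paper's contribution merges the two schemes while re-using LUTs.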

11.
Ensemble learning has gained considerable attention in tasks including regression, classification and clustering. Adaboost and Bagging are two popular approaches for training these models. The former provides accurate estimations in regression settings but is computationally expensive because of its inherently sequential structure, while the latter is less accurate but highly efficient. One drawback of ensemble algorithms is the high computational cost of the training stage. To address this issue, we propose a parallel implementation of the Resampling Local Negative Correlation (RLNC) algorithm for training a neural network ensemble, in order to achieve accuracy competitive with Adaboost and efficiency comparable to Bagging. We test our approach on both synthetic and real datasets from the UCI and StatLib repositories for the regression task. In particular, our fine-grained parallel approach allows us to achieve a satisfactory balance between accuracy and parallel efficiency.

12.
The parallel substitution algorithm, which is a spatial model for representing fine-grained parallel computations, is used for constructing self-replicating structures in a cellular space. The use of this model allows one to create more compact (in terms of the number of cell states and transition rules) and structured self-reproduction programs compared to the classical cellular automaton model. Two parallel substitution algorithms for modeling the self-reproduction of a cellular structure having the shape of a rectangular loop are presented. One of them models the self-reproduction of the original structures from left to right, and the other, from left to right and from bottom to top.

13.
窦慧  张凌茗  韩峰  申富饶  赵健 《软件学报》2024,35(1):159-184
Neural network models are growing ever more capable and are widely used across computing tasks, performing remarkably well, yet their inner workings are still not fully understood. This paper surveys and organizes research on neural network interpretability, discussing in detail the definition, necessity, taxonomy, and evaluation of model interpretability. Starting from the focus of each explanation algorithm, it proposes a new taxonomy of neural network interpretability methods, offering a fresh perspective for understanding neural networks. Current interpretation methods for convolutional neural networks are organized under this taxonomy, and the characteristics of the different classes of explanation algorithms are analyzed and compared. Evaluation principles and methods for common explanation algorithms are introduced, and research directions and applications of interpretable neural networks are outlined. Finally, the challenges facing interpretable neural networks are set out, together with possible directions for addressing them.

14.
Neurofuzzy modelling is ideally suited to many nonlinear system identification and data modelling applications. By combining the attractive attributes of fuzzy systems and neural networks, transparent models of ill-defined systems can be identified. Available expert a priori knowledge is used to construct an initial model. Data modelling techniques from the neural network, statistical and conventional system identification communities are then used to adapt these models. As a result, accurate, parsimonious models which are transparent and easy to validate are identified. Recent advances in data-driven identification algorithms have made neurofuzzy modelling appropriate for high-dimensional problems in which the expert knowledge and the data may be of poor quality. In this paper neurofuzzy modelling techniques are presented. This powerful approach to system identification is demonstrated by its application to the identification of an Autonomous Underwater Vehicle (AUV).

15.
In this paper we analyze a fundamental issue which directly impacts the scalability of current theoretical neural network models to applicative embodiments, in both software and hardware. This pertains to the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles, and the consequent chaotic manifestations in the absence of proper conditioning. The latter concern is particularly significant since the computational inertia of neural networks in general, and of our dynamical learning formalisms in particular, manifests itself substantially only in massively parallel hardware: optical, VLSI or opto-electronic. We introduce a mathematical framework for systematically reconditioning additive-type models and derive a neuro-operator, based on the chaotic relaxation paradigm, whose resulting dynamics are neither "concurrently" synchronous nor "sequentially" asynchronous. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are also computed to characterize the network dynamics and to ensure that the throughput-limiting "emergent computational chaos" behavior was eliminated in models reconditioned with concurrently asynchronous algorithms.
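The role of the contraction condition can be illustrated with a toy chaotic (asynchronous) relaxation: components of a fixed-point system x = f(x) are updated one at a time in random order. The linear map below is a made-up example; because it is a contraction, the iteration converges regardless of the update schedule, echoing the sufficient conditions in the abstract:

```python
import random

def async_relax(f, x, sweeps=200, seed=0):
    """Chaotic relaxation sketch: update one randomly chosen component
    of x = f(x) at a time. Convergence for contracting f does not
    depend on the (asynchronous) update order."""
    rng = random.Random(seed)
    x = list(x)
    for _ in range(sweeps):
        i = rng.randrange(len(x))
        x[i] = f(x)[i]          # update a single component
    return x

# An illustrative linear contraction f(x) = A x + b (spectral radius < 1).
def f(x):
    return [0.2 * x[1] + 1.0, 0.3 * x[0] + 2.0]

x = async_relax(f, [0.0, 0.0])
# Fixed point solves x0 = 0.2*x1 + 1 and x1 = 0.3*x0 + 2.
```

Without the contraction property, the same random schedule can oscillate or diverge, which is the "emergent computational chaos" the paper's reconditioning eliminates.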

16.
17.
Abstract: Artificial neural networks are bio-inspired mathematical models that have been widely used to solve complex problems. Training a neural network is an important issue, since traditional gradient-based algorithms easily become trapped in locally optimal solutions, increasing the time taken by the experimental step. The problem is worse in recurrent neural networks, where gradient propagation across the recurrence makes training difficult for long-term dependences. On the other hand, evolutionary algorithms are search and optimization techniques which have proven effective on many problems. For recurrent neural networks, training with evolutionary algorithms has produced promising results. In this work, we propose two hybrid evolutionary algorithms as an alternative for improving the training of dynamic recurrent neural networks. The experimental section makes a comparative study of the proposed algorithms for training Elman recurrent neural networks on time-series prediction problems.

18.
Adriel Lau 《Information Sciences》2009,179(10):1469-1482
This paper presents analytical models of Cryptosporidium parvum inactivation that have been evolved using immune programming. The objective of these models is to predict the reduction of infectivity associated with disinfection by ozone and chlorine dioxide. To solve this problem, we introduce a modified immune programming approach together with a corresponding implementation of the immune algorithm. The modeling results indicate that the models obtained with immune programming outperform the traditional temperature-corrected Chick-Watson models, as well as previously developed artificial neural network models. A detailed analysis of modeling errors, predictive power, and the behavior of the models is included. The obtained models reveal that some input attributes have no effect on prediction performance, which corresponds to results previously obtained by saliency analysis of the neural models. The results of this study suggest that immune programming is becoming a mature technology, ready for wide implementation in applications.
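The baseline the evolved models are compared against, the temperature-corrected Chick-Watson law, predicts log survival as proportional to disinfectant concentration (raised to a dilution coefficient) times contact time, scaled by a temperature factor. The sketch below uses illustrative parameter values, not fitted ones:

```python
def chick_watson(C, t, k, n=1.0, theta=1.07, T=20.0):
    """Temperature-corrected Chick-Watson sketch:
    log10(N/N0) = -k * C**n * t * theta**(T - 20).
    C: disinfectant concentration (mg/L), t: contact time (min),
    k: inactivation rate constant, n: dilution coefficient,
    theta: temperature correction factor, T: temperature (deg C).
    All parameter values here are illustrative, not fitted."""
    return -k * (C ** n) * t * (theta ** (T - 20.0))

# Log survival after ozone disinfection at C=0.5 mg/L for t=4 min.
log_survival = chick_watson(C=0.5, t=4.0, k=1.2)
```

The immune-programming models in the paper replace this fixed functional form with evolved expressions, which is how they achieve lower prediction error.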

19.
The realization of neural network computers is an important topic in neural network research. Although neural network research has by now produced fairly systematic theoretical models and algorithms, research on neural network computers has yet to achieve a major breakthrough, mainly because network scale is too large and synaptic connection density too high. To address this problem, this paper proposes, based on fractal theory, a fractal realization scheme for a neural network computer, gives a formula for computing the fractal dimension, and physically realizes a fractal substructure that is self-similar to the overall structure, a useful exploration toward realizing neural network computers.
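The abstract's fractal-dimension formula is not reproduced here, but the standard way such a dimension is estimated numerically, box counting over a self-similar structure, can be sketched as follows (the point set is an illustrative example):

```python
import math

def box_count_dimension(points, scales=(1, 2, 4, 8, 16)):
    """Box-counting sketch of fractal dimension: count occupied boxes
    N(s) at box size 1/s and fit the slope of log N(s) vs log s.
    (The paper derives its own dimension formula analytically.)"""
    logs, logn = [], []
    for s in scales:
        boxes = {(int(x * s), int(y * s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    # Least-squares slope of log N(s) against log s.
    m = len(scales)
    sx, sy = sum(logs), sum(logn)
    sxx = sum(v * v for v in logs)
    sxy = sum(u * v for u, v in zip(logs, logn))
    return (m * sxy - sx * sy) / (m * sxx - sx * sx)

# A filled unit square should give dimension close to 2.
pts = [(i / 64, j / 64) for i in range(64) for j in range(64)]
dim = box_count_dimension(pts)
```

For a genuinely fractal substructure, the slope falls strictly between the topological dimensions of its parts, which is what makes the self-similar layout compact.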

20.
Nonlinear model predictive control (NMPC) algorithms are based on various nonlinear models. A number of on-line optimization approaches for output-feedback NMPC based on various black-box models can be found in the literature. However, NMPC involving on-line optimization is computationally very demanding. On the other hand, an explicit solution to the NMPC problem would allow efficient on-line computations as well as verifiability of the implementation. This paper applies an approximate multi-parametric nonlinear programming approach to explicitly solve output-feedback NMPC problems for constrained nonlinear systems described by black-box models. In particular, neural network models are used and the optimal regulation problem is considered. A dual-mode control strategy is employed in order to achieve an offset-free closed-loop response in the presence of bounded disturbances and/or model errors. The approach is applied to design an explicit NMPC for regulation of a pH maintaining system. The verification of the NMPC controller performance is based on simulation experiments.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号