1.
    
Dam displacements effectively reflect a dam's operational status, so establishing a reliable displacement prediction model is important for dam health monitoring. The majority of existing data-driven models, however, focus on static regression relationships and can neither capture long-term temporal dependencies nor adaptively select the most relevant influencing factors when making predictions. Moreover, emerging modeling tools such as machine learning (ML) and deep learning (DL) are mostly black-box models, which makes their physical interpretation challenging and greatly limits their practical engineering applications. To address these issues, this paper proposes an interpretable mixed attention mechanism long short-term memory (MAM-LSTM) model based on an encoder-decoder architecture, formulated in two stages. In the encoder stage, a factor attention mechanism adaptively selects the most influential factors at each time step by referring to the previous hidden state. In the decoder stage, a temporal attention mechanism extracts the key time segments by identifying the relevant hidden states across all time steps. For interpretation purposes, the emphasis is placed on quantifying and visualizing the factor and temporal attention weights. Finally, the effectiveness of the proposed model is verified using monitoring data collected from a real-world dam, and its accuracy is compared with a classical statistical model, conventional ML models, and homogeneous DL models. The comparison demonstrates that the MAM-LSTM model outperforms the other models in most cases. Furthermore, the interpretation of the global attention weights confirms the physical rationality of the attention-based model. This work addresses the research gap in interpretable artificial intelligence for dam displacement prediction and delivers a model with both high accuracy and interpretability.
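To make the two attention stages concrete, the following is a minimal NumPy sketch, not the authors' implementation: the factor series, hidden states, and attention parameters are all hypothetical placeholders, and the LSTM update is stubbed out. It only illustrates how factor attention re-weights the influencing factors at each time step and how temporal attention re-weights encoder hidden states into a context vector.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
T, n_factors, hidden = 12, 6, 16           # time steps, influencing factors, LSTM size (assumed)

X = rng.normal(size=(T, n_factors))         # hypothetical factor series (e.g. temperature, water level)
h_prev = np.zeros(hidden)                   # previous encoder hidden state
W_f = rng.normal(size=(hidden + 1, 1)) * 0.1  # toy factor-attention parameters

# --- factor attention (encoder stage): score each factor given the previous hidden state ---
for t in range(T):
    scores = np.array([
        np.concatenate([h_prev, X[t, k:k + 1]]) @ W_f for k in range(n_factors)
    ]).ravel()
    alpha = softmax(scores)                 # factor weights at this time step, sum to 1
    weighted_input = alpha * X[t]           # re-weighted input fed to the LSTM cell
    h_prev = np.tanh(h_prev + 0.01 * rng.normal(size=hidden))  # placeholder for the LSTM update

# --- temporal attention (decoder stage): score every encoder hidden state ---
H = rng.normal(size=(T, hidden))            # stand-in encoder hidden states
d = rng.normal(size=hidden)                 # current decoder state
beta = softmax(H @ d)                       # temporal weights across all time steps
context = beta @ H                          # context vector used for the displacement prediction
print(alpha.round(3), beta.round(3))
```

Visualizing `alpha` per time step and `beta` per segment is, in spirit, the interpretation step described in the abstract.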
2.
Deep neural networks are currently the most popular approach to image feature learning: without human intervention, they automatically learn and extract effective features for tasks such as classification and recognition. However, deep-neural-network feature extraction still faces several challenges; its effectiveness relies heavily on large-scale data, and such models are usually treated as black boxes with poor interpretability. To address these challenges, this paper builds on the TSK fuzzy system (TSK-FS), which performs inference with fuzzy rules, and proposes a feature learning method that is easy to understand and applicable to datasets of different scales: a multi-granularity fusion fuzzy-rule-system algorithm for image feature learning. Because image features are extracted with the rule-based TSK-FS, the feature learning process can be explained through its rules, and multi-granularity scanning further improves the feature learning capability. Extensive experiments on image datasets of different scales show that the method is effective on image data.
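For readers unfamiliar with TSK fuzzy systems, here is a minimal sketch of zero-order TSK rule inference on a toy feature vector. The Gaussian membership centers, widths, and rule consequents are made-up values, not parameters from the paper; the point is only that the normalized rule activations are directly inspectable, which is what makes the feature learning process explainable.

```python
import numpy as np

def tsk_infer(x, centers, widths, consequents):
    """Zero-order TSK inference: Gaussian memberships -> rule firing strengths -> weighted output."""
    # firing strength of each rule = product of per-dimension Gaussian memberships
    memberships = np.exp(-((x[None, :] - centers) ** 2) / (2 * widths[None, :] ** 2))
    firing = memberships.prod(axis=1)
    weights = firing / firing.sum()            # normalized rule activations (interpretable)
    return weights @ consequents, weights

# toy example: 3 rules over a 4-dimensional image feature vector (all values assumed)
rng = np.random.default_rng(1)
centers = rng.uniform(0, 1, size=(3, 4))       # rule antecedent centers
widths = np.full(4, 0.3)                       # shared membership widths
consequents = np.array([0.1, 0.5, 0.9])        # constant rule outputs (zero-order TSK)

y, w = tsk_infer(rng.uniform(0, 1, size=4), centers, widths, consequents)
print("output:", round(float(y), 3), "rule activations:", w.round(3))
```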
3.
    
Machine-learning-based solutions for solving problems or reducing their computational cost are becoming increasingly widespread in many domains, and Deep Learning plays a large part in this growth. However, it has drawbacks, such as a lack of explainability and black-box behavior. During the last few years, Visual Analytics has provided several proposals to cope with these drawbacks, supporting the emerging eXplainable Deep Learning field. This survey aims to (i) systematically report the contributions of Visual Analytics to eXplainable Deep Learning; (ii) spot gaps and challenges; (iii) serve as an anthology of visual analytics solutions ready to be exploited and put into operation by the Deep Learning community (architects, trainers, and end users); and (iv) assess the degree of maturity, ease of integration, and results for specific domains. The survey concludes by identifying future research challenges and bridging activities that can strengthen the role of Visual Analytics as effective support for eXplainable Deep Learning and foster the adoption of Visual Analytics solutions in the eXplainable Deep Learning community. An interactive explorable version of this survey is available online at https://aware-diag-sapienza.github.io/VA4XDL.
4.
With the increasing deployment of deep-learning-based systems in various scenarios, it is becoming important to test and evaluate deep learning models thoroughly in order to improve their interpretability and robustness. Recent studies have proposed different criteria and strategies for deep neural network (DNN) testing. However, they rarely test the robustness of DNN models effectively and lack interpretability. This paper proposes a new test prioritization criterion, called DeepLogic, to analyze the robustness of DNN models from the perspective of model interpretability. We first define the neural units in a DNN with the highest average activation probability as "interpretable logic units". We analyze the changes in these units under adversarial attacks to evaluate the model's robustness. The interpretable logic units of the inputs are then taken as context attributes, and the probability distribution of the model's softmax layer is taken as internal attributes, to establish a comprehensive test prioritization framework. The context and internal factors are fused by weighting, and the test cases are sorted according to this priority. Experimental results on four popular DNN models using eight testing metrics show that DeepLogic significantly outperforms existing state-of-the-art methods.
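The following sketch illustrates the fusion-and-sort idea only; it is not the DeepLogic implementation. The choice of entropy for the internal score, mean absolute deviation for the context score, the reference activations, and the fusion weight are all assumptions introduced here for illustration.

```python
import numpy as np

def prioritize(softmax_probs, logic_unit_activations, reference_units, w_context=0.5):
    """Rank test cases by a fused context/internal score (illustrative only).

    softmax_probs: (n_cases, n_classes) model output distributions ("internal attributes").
    logic_unit_activations: (n_cases, n_units) activations of the assumed interpretable logic units.
    reference_units: (n_units,) typical activations of those units on clean training data (assumed).
    """
    # internal score: prediction uncertainty measured by softmax entropy
    internal = -(softmax_probs * np.log(softmax_probs + 1e-12)).sum(axis=1)
    # context score: deviation of the logic units from their reference behaviour
    context = np.abs(logic_unit_activations - reference_units).mean(axis=1)

    def norm(v):  # scale both scores to [0, 1] before the weighted fusion
        return (v - v.min()) / (v.max() - v.min() + 1e-12)

    score = w_context * norm(context) + (1 - w_context) * norm(internal)
    return np.argsort(-score), score            # test-case indices, highest priority first

rng = np.random.default_rng(2)
probs = rng.dirichlet(np.ones(10), size=5)      # 5 fake test cases, 10 classes
units = rng.normal(size=(5, 8))                 # fake logic-unit activations
order, s = prioritize(probs, units, reference_units=np.zeros(8))
print(order, s.round(3))
```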
5.
    

In this paper, we propose the use of subspace clustering to detect the states of dynamical systems from sequences of observations. In particular, we generate sparse and interpretable models that relate the states of aquatic drones involved in autonomous water monitoring to the properties (e.g., statistical distribution) of the data collected by the drone sensors. The subspace clustering algorithm used is called SubCMedians. A quantitative experimental analysis investigates the connections between (i) learning parameters and performance and (ii) noise in the data and performance. The clusterings obtained with this analysis outperform those generated by previous approaches.
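As a rough illustration of the overall pipeline (sensor streams summarized by window statistics, then clustered into discrete states), here is a minimal sketch. SubCMedians is not available in standard libraries, so scikit-learn's KMeans stands in purely for illustration; the telemetry, window length, and number of states are all invented.

```python
import numpy as np
from sklearn.cluster import KMeans

def window_features(signal, win=50):
    """Summarize a 1-D sensor stream by simple statistics over non-overlapping windows."""
    n = len(signal) // win
    windows = signal[: n * win].reshape(n, win)
    return np.column_stack([windows.mean(1), windows.std(1),
                            windows.min(1), windows.max(1)])

# hypothetical drone telemetry: a speed sensor alternating between two regimes
rng = np.random.default_rng(3)
speed = np.concatenate([rng.normal(0.2, 0.05, 500),    # station keeping
                        rng.normal(1.5, 0.20, 500)])   # transit
feats = window_features(speed)

# KMeans is a stand-in for SubCMedians here, only to show how window statistics
# of the observations are mapped to discrete drone "states".
states = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
print(states)
```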
6.
Thanks to its uniqueness, stability, contactless operation, and accuracy, iris recognition is widely used in many real-world scenarios. However, many existing iris recognition systems remain vulnerable to various attacks during authentication, posing potential security risks. Among the different attack types, presentation attacks (PAs) occur at the early iris image acquisition stage and take highly varied forms, so iris presentation attack detection (IPAD) has become one of the first security problems that iris recognition must address and has received wide attention from both academia and industry. This survey is, to our knowledge, the first Chinese-language survey of iris presentation attack detection; it aims to help researchers gain a quick and comprehensive view of the field and its development. Overall, the paper summarizes the difficulties, terminology and attack types, mainstream methods, public datasets, competitions, and interpretability of iris presentation attack detection. Specifically, it first introduces the background of IPAD, the security vulnerabilities of existing iris recognition systems, and the goals of presentation attacks. It then divides detection methods into hardware-based and software-based approaches, depending on whether additional hardware is required, and further categorizes and analyzes the software-based methods by their feature extraction strategies. In addition, it compiles open-source methods and publicly available datasets and summarizes past competitions. Finally, it discusses possible future research directions for iris presentation attack detection.
7.

Compared with traditional approaches such as the three-ratio method, machine-learning-based transformer fault diagnosis offers advantages in efficiency and accuracy, but its black-box nature makes the decision process and the diagnostic results hard to interpret. To address this problem, this paper proposes an interpretable transformer fault diagnosis method based on dissolved gas analysis, in which tree SHapley Additive exPlanations (TreeSHAP) is used to analyze the interpretability of a fault diagnosis model built on extreme gradient boosting optimized with the tree-structured Parzen estimator (TPE-XGBoost). First, a 24-dimensional fault feature set covering dissolved gas contents, ratios, and codes is constructed, from which 10 effective features are selected. Second, a TPE-XGBoost diagnosis method is proposed: the tree-structured Parzen estimator tunes multiple hyperparameters of the XGBoost model simultaneously to classify fault types accurately. Finally, TreeSHAP is introduced to analyze the interpretability of the diagnosis model, visualizing the decision process and its influencing factors and identifying the key features for each fault type. Experiments show that the proposed method achieves an average accuracy of 90.23% while revealing how strongly each feature influences the model's decisions. The method offers good accuracy, robustness, and interpretability and can provide targeted guidance for transformer operation and maintenance.
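A minimal sketch of this TPE-optimization-plus-TreeSHAP workflow, assuming the hyperopt, xgboost, and shap libraries, is shown below. The search space, feature matrix, and labels are placeholders, not the paper's 10 selected DGA features or its tuned parameter ranges.

```python
import numpy as np
import xgboost as xgb
import shap
from hyperopt import fmin, tpe, hp
from sklearn.model_selection import cross_val_score

# X: DGA-derived features (gas contents, ratios, codes), y: fault class labels.
# Random placeholders stand in for the paper's selected features here.
rng = np.random.default_rng(4)
X, y = rng.normal(size=(300, 10)), rng.integers(0, 4, size=300)

space = {  # assumed search space, not the paper's
    "max_depth": hp.choice("max_depth", [3, 4, 5, 6]),
    "learning_rate": hp.uniform("learning_rate", 0.01, 0.3),
    "n_estimators": hp.choice("n_estimators", [100, 200, 300]),
}

def objective(params):
    model = xgb.XGBClassifier(**params, eval_metric="mlogloss")
    return -cross_val_score(model, X, y, cv=3, scoring="accuracy").mean()

best = fmin(objective, space, algo=tpe.suggest, max_evals=20)  # TPE hyperparameter search

model = xgb.XGBClassifier(
    max_depth=[3, 4, 5, 6][best["max_depth"]],
    learning_rate=best["learning_rate"],
    n_estimators=[100, 200, 300][best["n_estimators"]],
).fit(X, y)

# TreeSHAP: per-feature contributions to each diagnosis, usable for visualization
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
```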
8.
Existing research on spatial load forecasting pays insufficient attention to complex spatio-temporal relationships. This paper therefore proposes an ultra-short-term spatio-temporal forecasting model for regional loads based on multi-dimensional, multi-source features. First, the region is divided into cells according to the available regional loads, and a graph topology that accounts for cell correlations is constructed. Second, a graph attention network, a one-dimensional convolutional neural network, and a gated recurrent unit extract effective features along the spatial, feature, and temporal dimensions, respectively, and a fully connected layer produces the output. Finally, the model is validated on real power load data from the New England region of the United States, and the learned attention weights are extracted to analyze the spatial dependencies between cells. The results show that, across different forecasting horizons, the proposed model achieves higher accuracy and stability than conventional models and effectively captures the spatial dependencies of regional loads.
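As a small illustration of the first step, the sketch below builds a cell-correlation graph from historical load series. The correlation threshold and the synthetic load curves are assumptions; the paper does not specify how its graph topology is thresholded, so this is only one plausible construction.

```python
import numpy as np

def build_cell_graph(load_history, threshold=0.8):
    """Build an adjacency matrix for load cells from pairwise correlation of their histories.

    load_history: (n_cells, n_steps) historical load of each cell.
    threshold: correlation above which two cells are connected (assumed value).
    """
    corr = np.corrcoef(load_history)           # (n_cells, n_cells) Pearson correlations
    adj = (np.abs(corr) >= threshold).astype(float)
    np.fill_diagonal(adj, 1.0)                 # keep self-loops for graph attention layers
    return adj, corr

# toy example: 4 cells, 96 quarter-hourly load points (synthetic)
rng = np.random.default_rng(5)
base = np.sin(np.linspace(0, 4 * np.pi, 96))
loads = np.vstack([base + rng.normal(0, s, 96) for s in (0.1, 0.1, 0.5, 0.8)])
adj, corr = build_cell_graph(loads)
print(adj)
```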
9.
Building on drilling data collected across multiple lithologies and indicators, and balancing interpretation accuracy against prediction performance, this paper uses machine learning tools to propose an automatic interpretation technique for tunnel rock mass integrity based on digital drilling and multi-scale model fusion. First, the raw drilling data are preprocessed with targeted denoising and equidistant segmentation (0.5, 1, and 2 m) to form multi-scale, high-quality machine learning datasets. Then, automatic hyperparameter search, training, evaluation, and interpretability analysis are carried out to verify the accuracy and reliability of the models. Finally, the interpretation results of the multi-scale models are fused by weighted averaging to improve the technique's practical engineering performance. To ease field application, a lightweight intelligent interpretation platform for digital drilling was developed around this technique. Applications in several limestone and sandstone tunnels show that, compared with ground-penetrating radar and conventional drilling interpretation, multi-scale model fusion performs better overall in interpretation efficiency and prediction quality and can provide reliable rock mass integrity information for tunnel excavation and support.
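The weighted-average fusion step can be sketched as follows. The common depth grid, the synthetic integrity scores at the three segmentation scales, and the per-scale weights are all assumptions introduced for illustration; the paper does not state how its fusion weights are chosen.

```python
import numpy as np

def fuse_multiscale(depths, predictions, weights):
    """Fuse rock-integrity scores predicted at several segment lengths by weighted averaging.

    depths:      list of 1-D arrays, segment-center depths for each scale (0.5, 1, 2 m here).
    predictions: list of 1-D arrays, integrity scores from the model trained at that scale.
    weights:     per-scale weights (assumed, e.g. from each model's validation accuracy).
    """
    grid = np.arange(0.25, 10.0, 0.5)                        # common depth grid along the borehole
    stacked = np.vstack([np.interp(grid, d, p) for d, p in zip(depths, predictions)])
    w = np.asarray(weights, dtype=float)
    return grid, (w[:, None] * stacked).sum(0) / w.sum()     # weighted average per depth

# synthetic scores at the three segmentation scales
rng = np.random.default_rng(6)
depths = [np.arange(0.25, 10, 0.5), np.arange(0.5, 10, 1.0), np.arange(1.0, 10, 2.0)]
preds = [np.clip(rng.normal(0.7, 0.1, len(d)), 0, 1) for d in depths]
grid, fused = fuse_multiscale(depths, preds, weights=[0.5, 0.3, 0.2])
print(fused.round(2))
```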
10.

Data-driven transient rotor-angle stability assessment of power systems can provide fairly accurate results, but those results lack interpretability, which makes the approach difficult to apply in engineering practice. To address this problem, a transient rotor-angle stability assessment and interpretability analysis method based on gradient boosting enhanced with step-wise feature augmentation (AugBoost) is proposed. First, an AugBoost assessment model is trained to map the power system's input features to a transient rotor-angle stability index. Second, real-time measurements from phasor measurement units are fed into the trained AugBoost model to provide real-time assessment results, and the relationship between the assessment results and the input features is explained with the SHapley Additive exPlanations (SHAP) model to improve the credibility of the results. Finally, a model update procedure is designed to improve the robustness of the assessment model to changes in power system operating conditions. Simulation results on the 23-bus and 1648-bus systems provided with the power system simulation software PSS/E verify the effectiveness of the proposed method.
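To give a feel for the "step-wise feature augmentation" idea, the following is a heavily simplified toy, not the authors' AugBoost implementation: each boosting stage fits a small tree to the current residuals, and between stages a few PCA components of the running feature matrix are appended as new features. The stage count, learning rate, augmentation method, and synthetic stability index are all assumptions; a SHAP explanation, as sketched in the TPE-XGBoost example above, would then be applied on top of the trained model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

def augboost_like(X, y, n_stages=5, lr=0.3, n_aug=2):
    """Toy gradient boosting with step-wise feature augmentation (illustration only)."""
    features = X.copy()
    pred = np.full(len(y), y.mean())            # initial prediction of the stability index
    trees = []
    for _ in range(n_stages):
        residual = y - pred
        tree = DecisionTreeRegressor(max_depth=3).fit(features, residual)
        pred += lr * tree.predict(features)
        trees.append(tree)
        # step-wise augmentation: append PCA components of the current feature matrix
        aug = PCA(n_components=n_aug).fit_transform(features)
        features = np.hstack([features, aug])
    return trees, pred

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 12))                                 # stand-in for PMU-derived features
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.1, 200)          # stand-in stability index
trees, fitted = augboost_like(X, y)
print(np.round(np.corrcoef(fitted, y)[0, 1], 3))               # fit quality on the toy data
```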
