1.
Feature extraction for visible–infrared person re-identification (VI-ReID) is challenging because of the cross-modality discrepancy between images taken by cameras operating in different spectra. Most existing VI-ReID methods ignore the potential relationships between features. In this paper, we transform low-order person features into high-order graph features so as to make full use of the hidden information among person features. To this end, we propose a multi-hop attention graph convolution network (MAGC) that extracts robust joint person features with a residual attention mechanism while reducing the impact of environmental noise. The transfer of higher-order graph features within MAGC enables the network to learn the hidden relationships between features. We also introduce a self-attention semantic perception layer (SSPL) that adaptively selects more discriminative features to further promote the transmission of useful information. Experiments on VI-ReID datasets demonstrate the effectiveness of the approach.
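To make the multi-hop idea concrete, here is a minimal PyTorch sketch of one way a multi-hop attention graph convolution could combine several powers of a normalized adjacency matrix with learned hop-attention weights and a residual connection. The class name, dimensions, and hop count are illustrative assumptions, not the authors' MAGC implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHopAttnGraphConv(nn.Module):
    """Aggregates node features over 1..K hops of a normalized adjacency matrix
    and fuses the hops with learned attention weights plus a residual branch
    (a sketch of the idea, not the paper's MAGC layer)."""
    def __init__(self, in_dim, out_dim, num_hops=3):
        super().__init__()
        self.num_hops = num_hops
        self.proj = nn.Linear(in_dim, out_dim)
        self.hop_attn = nn.Parameter(torch.zeros(num_hops))  # attention logits, one per hop
        self.residual = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) person-feature nodes, adj: (N, N) row-normalized adjacency
        h = self.proj(x)
        hops, msg = [], h
        for _ in range(self.num_hops):
            msg = adj @ msg                              # propagate one more hop
            hops.append(msg)
        w = F.softmax(self.hop_attn, dim=0)              # attention over the K hops
        mixed = sum(w[k] * hops[k] for k in range(self.num_hops))
        return F.relu(mixed + self.residual(x))          # residual connection

# toy usage: 5 person-feature nodes with a random symmetric graph
x = torch.randn(5, 64)
a = torch.rand(5, 5)
adj = F.normalize((a + a.t()) / 2, p=1, dim=1)
layer = MultiHopAttnGraphConv(64, 128)
print(layer(x, adj).shape)                               # torch.Size([5, 128])
```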
2.
With the rapid growth in the number and diversity of software systems, mining textual features of software requirements and clustering them has become a major challenge in software engineering. Clustering requirement texts provides a reliable safeguard for the development process and reduces potential risks and negative effects during requirements analysis. However, requirement texts are highly scattered, noisy, and sparse; existing clustering work is limited to a single type of text and rarely considers the functional semantics of requirements. Given these characteristics and the limitations of traditional clustering methods, this paper proposes a software requirement clustering algorithm that fuses a self-attention mechanism with multi-path pyramid convolution (SA-MPCN&SOM). The method captures global features through self-attention, then mines requirement text features in depth along paths with different window sizes via multi-path pyramid convolution, so that the perceived text fragments grow multiplicatively; finally, the multi-path text features are fused and clustered with a self-organizing map (SOM). Experiments on software requirement data show that the proposed method mines and clusters requirement features well, outperforming other feature extraction approaches and clustering algorithms.
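As a rough illustration of the self-attention plus multi-path convolution stage, the PyTorch sketch below applies self-attention over word embeddings and then runs parallel convolution paths with doubling window sizes, fusing the pooled outputs by concatenation. The class name, hyper-parameters, and the omitted SOM step are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SAMultiPathPyramidEncoder(nn.Module):
    """Self-attention over word embeddings followed by parallel convolution paths
    with doubling window sizes; pooled path outputs are concatenated into one
    requirement feature vector (a sketch, not SA-MPCN&SOM itself)."""
    def __init__(self, emb_dim=128, channels=64, windows=(2, 4, 8)):
        super().__init__()
        self.attn = nn.MultiheadAttention(emb_dim, num_heads=4, batch_first=True)
        self.paths = nn.ModuleList(
            [nn.Conv1d(emb_dim, channels, kernel_size=k, padding=k // 2) for k in windows]
        )

    def forward(self, x):                      # x: (batch, seq_len, emb_dim) word embeddings
        g, _ = self.attn(x, x, x)              # global features via self-attention
        g = g.transpose(1, 2)                  # (batch, emb_dim, seq_len) for Conv1d
        feats = [path(g).max(dim=2).values for path in self.paths]  # max-pool each path
        return torch.cat(feats, dim=1)         # fused multi-path requirement feature

enc = SAMultiPathPyramidEncoder()
req = torch.randn(3, 50, 128)                  # 3 requirement texts, 50 tokens each
print(enc(req).shape)                          # torch.Size([3, 192]) -> would feed the SOM
```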
3.
In recent years, skeleton-based human action recognition has attracted wide attention because of the robustness and generalization ability of skeleton data. Among existing approaches, graph convolutional networks (GCNs) that model the human skeleton as a spatio-temporal graph have achieved remarkable performance. However, graph convolution learns long-term interactions mainly through a series of 3D convolutions; such interactions are inherently local and limited by the kernel size, so long-range dependencies cannot be captured effectively. This paper proposes a cooperative convolutional Transformer network (Co-ConvT), which introduces the self-attention mechanism of the Transformer to build long-range dependencies and combines it with GCNs for action recognition, so that the model extracts local information through graph convolution while capturing rich long-range dependencies through the Transformer. In addition, because Transformer self-attention is computed at the pixel level and therefore incurs a huge computational cost, the network is divided into two stages: the first stage uses pure convolution to extract shallow spatial features, and the second stage uses the proposed ConvT blocks to capture high-level semantic information, reducing the computational complexity. Furthermore, the linear embedding of the original Transformer is replaced with a convolutional embedding, which enhances local spatial information and allows the positional encoding of the original model to be removed, making the model lighter. Experiments on two large-scale benchmark datasets, NTU-RGB+D and Kinetics-Skeleton, show that the model achieves Top-1 accuracies of 88.1% and 36.6%, respectively, a considerable performance improvement.
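The sketch below shows the general flavor of a block in which a convolutional embedding replaces the linear embedding, so that local positional information is injected by the convolution and no explicit positional encoding is needed before self-attention. It is a generic PyTorch sketch under assumed dimensions, not the authors' ConvT block.

```python
import torch
import torch.nn as nn

class ConvTBlock(nn.Module):
    """Convolutional embedding followed by multi-head self-attention and an MLP.
    The convolution already injects local position information, so no explicit
    positional encoding is added (a generic sketch, not the paper's Co-ConvT)."""
    def __init__(self, in_ch, dim=128, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=3, stride=2, padding=1)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, x):                        # x: (B, C, H, W) features from the conv stage
        z = self.embed(x)                        # conv embedding replaces linear embedding
        z = z.flatten(2).transpose(1, 2)         # (B, H*W, dim) token sequence
        z = z + self.attn(self.norm1(z), self.norm1(z), self.norm1(z))[0]  # self-attention
        z = z + self.mlp(self.norm2(z))          # feed-forward with residual
        return z

block = ConvTBlock(in_ch=64)
feat = torch.randn(2, 64, 16, 16)                # shallow features from stage 1
print(block(feat).shape)                         # torch.Size([2, 64, 128])
```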
4.
Underwater image processing has always been challenging because of the complex underwater environment. Images captured under water are affected not only by the water itself but also by diverse suspended particles that increase absorption and scattering. Moreover, these particles are often imaged themselves, so spot noise interferes with the target objects. To address this issue, we propose a novel deep neural network for removing spot noise from underwater images. Its main idea is to train a generative adversarial network (GAN) to transform a noisy image into a clean one. On top of a deep encoder–decoder framework, skip connections are introduced to combine low-level and high-level features and help recover the original image. Meanwhile, a self-attention mechanism is employed in the generative network to capture global dependencies in the feature maps, so the generator can produce fine details at every location. Furthermore, we apply spectral normalization to both the generative and discriminative networks to stabilize training. Experiments on synthetic and real-world images show that the proposed method outperforms many recent state-of-the-art methods in quantitative and visual quality. The results also demonstrate that the proposed method removes spot noise from underwater images while preserving sharp edges and fine details.
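The two ingredients named in the abstract, feature-map self-attention and spectral normalization, are illustrated below with a minimal SAGAN-style attention layer whose 1x1 convolutions are wrapped in PyTorch's spectral_norm. It is a sketch of those two ideas only, not the paper's full generator or discriminator.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over a feature map, with spectral normalization
    on the 1x1 convolutions (a minimal sketch, not the paper's network)."""
    def __init__(self, ch):
        super().__init__()
        self.q = spectral_norm(nn.Conv2d(ch, ch // 8, 1))
        self.k = spectral_norm(nn.Conv2d(ch, ch // 8, 1))
        self.v = spectral_norm(nn.Conv2d(ch, ch, 1))
        self.gamma = nn.Parameter(torch.zeros(1))        # learned residual scale

    def forward(self, x):                                # x: (B, C, H, W)
        B, C, H, W = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)         # (B, HW, C//8)
        k = self.k(x).flatten(2)                         # (B, C//8, HW)
        v = self.v(x).flatten(2)                         # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)              # (B, HW, HW) global dependencies
        out = (v @ attn.transpose(1, 2)).view(B, C, H, W)
        return self.gamma * out + x                      # residual: starts as identity

layer = SelfAttention2d(64)
fmap = torch.randn(1, 64, 32, 32)
print(layer(fmap).shape)                                 # torch.Size([1, 64, 32, 32])
```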
5.
When only a small number of diatom training samples are available, detection accuracy is low. To address this, a few-shot diatom detection model (MMSOFDD) that fuses multi-scale multi-head self-attention (MMS) and online hard example mining (OHEM) is proposed on the basis of the few-shot object detection model TFA (Two-stage Fine-tuning Approach). First, ResNet-101 is combined with a multi-head self-attention mechanism to construct a Transformer-based feature extraction network, BoTNet-101, which makes full use of the local and global information in diatom images. Then, the multi-head self-attention is improved into MMS, removing the limitation that the original multi-head self-attention handles targets at only a single scale. Finally, OHEM is introduced into the model predictor, and the diatoms are recognized and localized. Ablation and comparison experiments against other few-shot object detection models were conducted on a self-built diatom dataset. The results show that MMSOFDD achieves a mean average precision (mAP) of 69.60%, compared with 63.71% for TFA, an improvement of 5.89 percentage points; compared with the few-shot detection models Meta R-CNN and FSIW, whose mAPs are 61.60% and 60.90% respectively, the proposed model improves mAP by 8.00 and 8.70 percentage points. MMSOFDD therefore effectively improves diatom detection accuracy when diatom training samples are scarce.
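The OHEM idea in the predictor can be summarized in a few lines: compute a per-candidate loss without reduction and backpropagate only through the hardest fraction. The sketch below is a generic illustration of OHEM in PyTorch, with an assumed keep ratio and class count, not the MMSOFDD predictor itself.

```python
import torch
import torch.nn.functional as F

def ohem_classification_loss(logits, labels, keep_ratio=0.25):
    """Online hard example mining: keep only the highest-loss candidates
    (a generic illustration of OHEM, not the MMSOFDD predictor)."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")  # loss per candidate
    num_keep = max(1, int(keep_ratio * per_sample.numel()))
    hard_losses, _ = per_sample.topk(num_keep)        # the hardest examples
    return hard_losses.mean()                         # only these drive the gradient

# toy usage: 16 candidate regions, 5 diatom classes plus background
logits = torch.randn(16, 6, requires_grad=True)
labels = torch.randint(0, 6, (16,))
loss = ohem_classification_loss(logits, labels)
loss.backward()
print(float(loss))
```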
6.
Long texts contain many words that are irrelevant to the topic, so such texts carry a large amount of information while their distinguishing features are not prominent. Increasing the influence of the key words in these texts is a problem that must be solved to improve text classifier performance. This paper proposes RCNN_A, a recurrent convolutional neural network text classification model combined with a self-attention mechanism. The attention mechanism computes, for each word vector, its contribution to the correct class, yielding an attention matrix; this attention matrix is combined with the word vector matrix as the input to the subsequent layers. Experimental results show that RCNN_A achieves a classification accuracy of 97.35% on a 10-class Sogou news dataset, outperforming Bi-LSTM (94.75%), Bi-GRU (94.25%), TextCNN (93.31%), and RCNN (95.75%). Introducing an attention mechanism into a deep neural network model can therefore effectively improve text classifier performance.
7.
Attention mechanisms have been widely applied to many natural language tasks in recent years, but corresponding research on sentence-level sentiment classification is still lacking. Exploiting the advantage of self-attention in learning important local features of a sentence, this paper combines it with a long short-term memory network (LSTM) and proposes an attention-based neural network model (Attentional LSTM, AttLSTM) for sentence-level sentiment classification. AttLSTM first uses the LSTM to learn the contextual information of the words in a sentence; it then learns positional information of the words with a self-attention function and builds the corresponding position-weight matrix; the final semantic representation of the sentence is obtained by weighted averaging; finally, a multilayer perceptron performs classification and produces the output. Experimental results show that AttLSTM achieves the highest accuracy on the public binary sentiment classification corpora Movie Reviews (MR), Stanford Sentiment Treebank (SSTb2), and Internet Movie Database (IMDB), with 82.8%, 88.3%, and 91.3%, respectively, and reaches 50.6% accuracy on the multi-class corpus SSTb5.
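The general recipe, LSTM states pooled by per-position attention weights and classified by an MLP, is sketched below in PyTorch. Dimensions, the scoring function, and class names are assumptions for illustration, not the AttLSTM implementation; the same pattern also underlies the Bi-LSTM malware detector in item 8.

```python
import torch
import torch.nn as nn

class AttnLSTMClassifier(nn.Module):
    """LSTM encoder whose hidden states are pooled by a learned attention weight
    per position, then classified by an MLP (a sketch of the general recipe)."""
    def __init__(self, vocab_size, emb_dim=100, hid_dim=128, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.score = nn.Linear(hid_dim, 1)             # one attention score per position
        self.mlp = nn.Sequential(nn.Linear(hid_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, tokens):                         # tokens: (batch, seq_len) word ids
        h, _ = self.lstm(self.emb(tokens))             # (batch, seq_len, hid_dim)
        w = torch.softmax(self.score(h).squeeze(-1), dim=1)   # position weights
        sent = (w.unsqueeze(-1) * h).sum(dim=1)        # weighted-average sentence vector
        return self.mlp(sent)                          # class logits

model = AttnLSTMClassifier(vocab_size=10000)
batch = torch.randint(0, 10000, (4, 20))               # 4 sentences of 20 tokens
print(model(batch).shape)                              # torch.Size([4, 2])
```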
8.
To address the problems that current malware detection methods rely heavily on manually extracted features and cannot extract deep features of malicious code, a malware detection method based on a bidirectional long short-term memory (Bi-LSTM) model and self-attention is proposed. The Bi-LSTM automatically learns the byte-stream sequences of malware samples and outputs the hidden state at each time step; the self-attention mechanism computes a linear weighted sum of these hidden states as the deep feature of the sequence; a fully connected layer and a Softmax layer then output the predicted probability from this deep feature. Experimental results show that the method is practical: compared with the second-best result, accuracy is improved by 12.32% and the false positive rate is reduced by 66.42%.
9.
The rapid development of deep learning has driven progress in video action detection, yet the accuracy of current video action detection algorithms can still be improved. Previous work has improved feature extraction by optimizing the network structure and has optimized candidate-region features by changing how the regions are represented. Although these methods achieve promising results, they fail to consider the correlation among different candidate regions and generate uninformative (even redundant) candidate regions, which usually degrades detection performance in practice. To address this problem, we propose a self-attention mechanism for candidate regions that helps identify the most informative regions. We obtain the region correlation by jointly determining the spatial and temporal correlations among different candidate regions, and we focus on how to use this correlation to refine the original candidate-region features and improve video action detection accuracy. Experimental results show that our method achieves a promising improvement over state-of-the-art solutions.
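Treating the candidate regions of a clip as a set of feature vectors, self-attention across that set lets correlated regions reinforce each other while uninformative ones are down-weighted. The PyTorch sketch below shows this pattern with assumed dimensions; it is a generic illustration, not the paper's exact module.

```python
import torch
import torch.nn as nn

class RegionSelfAttention(nn.Module):
    """Refines candidate-region features by attending over all other regions of
    the same clip, so mutually correlated proposals reinforce each other
    (a generic sketch, not the paper's module)."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, regions):                        # regions: (batch, num_regions, dim)
        refined, weights = self.attn(regions, regions, regions)
        return self.norm(regions + refined), weights   # residual keeps the original cues

attn = RegionSelfAttention()
rois = torch.randn(2, 32, 256)                         # 32 spatio-temporal proposals per clip
refined, w = attn(rois)
print(refined.shape, w.shape)                          # (2, 32, 256) and (2, 32, 32)
```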
10.
Remaining useful life (RUL) prediction plays a significant role in the prognostics and health management (PHM) of rotating machinery. A good health indicator (HI) ensures the accuracy and reliability of RUL prediction. However, many existing deep learning-based HI construction approaches rely heavily on prior knowledge and struggle to capture, from raw signals, the key information of the machinery degradation process, which degrades RUL prediction performance. To tackle this problem, a new supervised multi-head self-attention autoencoder (SMSAE) is proposed for extracting an HI that effectively reflects the degraded state of rotating machinery. By embedding a multi-head self-attention (MS) module into the autoencoder and imposing power function-type labels as a constraint on the hidden variable, SMSAE extracts HIs directly from raw vibration signals. Because current HI evaluation indexes do not consider the global monotonicity and variation law of an HI, two improved monotonicity and robustness indexes are designed for better HI evaluation. With the proposed HI, a two-stage, similarity-based residual life prediction framework is developed. Extensive experiments on an actual wind turbine gearbox bearing dataset and the well-known open Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset verify that the constructed SMSAE HI has better comprehensive performance than typical HIs and that the proposed prediction method is competitive with state-of-the-art methods.
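One way to read the supervision scheme is that the 1-D hidden variable of the autoencoder is trained to follow labels that decay as a power function of operating time while the decoder still reconstructs the input. The PyTorch sketch below illustrates that coupling with an assumed label form, segment-wise self-attention, and toy dimensions; it is not the authors' SMSAE architecture.

```python
import torch
import torch.nn as nn

def power_function_labels(num_steps, p=2.0):
    """HI labels that decay from 1 to 0 as a power function of normalized operating
    time (an assumed form of the 'power function-type labels')."""
    t = torch.linspace(0.0, 1.0, num_steps)
    return 1.0 - t ** p

class SupervisedAttnAutoencoder(nn.Module):
    """Autoencoder whose encoder applies multi-head self-attention over segments of
    a vibration window and whose 1-D latent variable is supervised with HI labels
    while the decoder reconstructs the window (a sketch of the SMSAE idea)."""
    def __init__(self, win_len=256, seg_len=32, dim=64):
        super().__init__()
        self.seg_len = seg_len
        self.proj = nn.Linear(seg_len, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.to_hi = nn.Linear(dim, 1)                 # hidden variable = health indicator
        self.decode = nn.Linear(1, win_len)

    def forward(self, x):                              # x: (batch, win_len) raw vibration window
        segs = x.view(x.size(0), -1, self.seg_len)     # split the window into segments
        z = self.proj(segs)                            # (batch, num_seg, dim)
        z, _ = self.attn(z, z, z)                      # self-attention across segments
        hi = torch.sigmoid(self.to_hi(z.mean(dim=1)))  # (batch, 1) HI in [0, 1]
        return hi, self.decode(hi)                     # HI and reconstructed window

model = SupervisedAttnAutoencoder()
x = torch.randn(8, 256)                                # 8 windows from one run-to-failure record
hi_target = power_function_labels(8).unsqueeze(1)      # supervision on the latent HI
hi, recon = model(x)
loss = nn.functional.mse_loss(recon, x) + nn.functional.mse_loss(hi, hi_target)
print(float(loss))
```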