171.
Online reviews significantly influence decision-making in many areas of society, and the integrity of online evaluations matters to both consumers and vendors. This concern motivates the development of effective fake review detection techniques. The goal of this study is to identify fraudulent text reviews. Shill reviews are compared with genuine reviews on sentiment and readability features using semi-supervised language processing methods on a labeled and balanced Deceptive Opinion dataset. We analyze textual features available in online reviews by combining sentiment mining with readability analysis. Overall, the research improves fake review screening by applying several transformer models: Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT (RoBERTa), XLNet (based on Transformer-XL), and XLM-RoBERTa (Cross-lingual Language Model RoBERTa). The proposed approach extracts and classifies features from product reviews to increase the effectiveness of review filtering. The investigation shows that transformer models improve the performance of spam review filtering compared with existing machine learning and deep learning models.
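A minimal sketch of the kind of transformer-based review classifier this abstract describes, using the Hugging Face transformers library; the checkpoint name, the two-class label convention, and the example review are assumptions for illustration, not the paper's actual configuration.

```python
# Hedged sketch: scoring a review with a BERT sequence classifier for
# fake-vs-genuine detection. Model name and label meanings are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assumed labels: 0 = genuine, 1 = deceptive
)

review = "This hotel was absolutely perfect in every possible way!"
inputs = tokenizer(review, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
probabilities = torch.softmax(logits, dim=-1)
print("P(deceptive) =", probabilities[0, 1].item())
```

In practice the classification head would first be fine-tuned on the labeled review corpus; the snippet only shows the inference path.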
172.
To study text-based automatic knowledge extraction for constructing knowledge graphs in the water-resources domain, this paper takes the extraction of hydrological-model knowledge as an example, covering model names, simulated elements, application basins, computation periods, accuracy, inheritance-development relations, developers, and developing institutions. Using 883 Chinese journal papers on hydrological models as the data source, a multi-strategy named entity recognition method for hydrological models is built that combines the BERT-Base-Chinese model, the LAC (Lexical Analysis of Chinese) tool, and pattern recognition. The journal papers are manually annotated with the five-position sequence labeling scheme (BMOES) to build the input dataset for knowledge extraction, which is used for BERT model training and for evaluating the multi-strategy recognition method. The results show that the multi-strategy method achieves a harmonic mean of precision and recall (F1 score) above 90% for all eight categories of hydrological-model named entities, and that applying different recognition methods to different entity categories effectively improves recognition performance compared with a single BERT model. The proposed method can serve as a reference for knowledge extraction in other water-resources scenarios and as a support for domain knowledge graph construction.
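As a rough illustration of the five-position BMOES tagging scheme mentioned above, the snippet below labels a Chinese character sequence for one hydrological-model entity; the example sentence, entity span, and tag name are assumptions, not drawn from the paper's corpus.

```python
# Hedged sketch of BMOES (Begin / Middle / Other / End / Single) character-level
# tagging for named entity recognition; the sentence and entity type are illustrative.
def bmoes_tag(characters, entity_span, entity_type):
    """Assign BMOES tags to a character list given one entity span [start, end)."""
    tags = ["O"] * len(characters)
    start, end = entity_span
    if end - start == 1:
        tags[start] = f"S-{entity_type}"
    else:
        tags[start] = f"B-{entity_type}"
        for i in range(start + 1, end - 1):
            tags[i] = f"M-{entity_type}"
        tags[end - 1] = f"E-{entity_type}"
    return tags

# "The Xin'anjiang model is applied to the Huai River basin" (illustrative sentence)
sentence = list("新安江模型应用于淮河流域")
print(list(zip(sentence, bmoes_tag(sentence, (0, 5), "MODEL"))))
```

Tag sequences produced this way form the training targets for the BERT token-classification step described in the abstract.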
173.
In aspect-level sentiment texts, some review sentences contain no explicit opinion words; the study of their sentiment is known as aspect-level implicit sentiment analysis. Existing models may lose context information related to the aspect term during pre-training and cannot accurately extract deep features from the context. This paper first constructs an aspect-aware BERT pre-trained model that introduces the aspect term into the input embedding structure of the base BERT, producing word vectors that carry aspect-related information. It then constructs a context-aware attention mechanism that injects the semantic and syntactic information of the deep hidden vectors produced by the encoding layer into the attention-weight computation, so that attention is assigned more accurately to the context related to the aspect term. Comparative experiments show that the proposed model outperforms the baseline models.
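One common way to make BERT "aspect-aware", roughly in the spirit of the model described above, is to feed the review sentence and the aspect term as a sentence pair so the aspect participates in the input embedding; the code below is a sketch under that assumption, and the checkpoint name and example are illustrative rather than the paper's exact construction.

```python
# Hedged sketch: encoding a review sentence together with its aspect term as a
# BERT sentence pair ([CLS] sentence [SEP] aspect [SEP]).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModel.from_pretrained("bert-base-chinese")

sentence = "这家餐厅等了四十分钟才上菜。"  # implicit negative sentiment, no explicit opinion word
aspect = "服务"                            # aspect term: "service"

inputs = tokenizer(sentence, aspect, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (1, seq_len, 768)

# The [CLS] vector (or the aspect-token vectors) would feed the attention and
# classification layers that the abstract describes.
cls_vector = hidden_states[:, 0, :]
print(cls_vector.shape)
```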
174.
The goal of triple extraction is to obtain entities and the relations between them from unstructured text for use in downstream tasks. The embedding mechanism strongly affects the performance of triple extraction models: the embedding vectors should contain rich semantic information closely related to the relation extraction task. In Chinese datasets, characters and words carry very different information. To mitigate the loss of semantic information caused by word segmentation errors, a joint triple extraction method fusing hybrid embeddings with relation-label embeddings (HEPA) is designed. A hybrid embedding method that combines character embeddings with word embeddings is proposed to reduce errors caused by word segmentation; a relation-label embedding mechanism is added in the entity extraction layer to fuse the text with relation labels, and an attention mechanism is used to distinguish the relevance of entities in a sentence to different relation labels, thereby improving matching accuracy; pointer tagging is used to match entities, which improves the extraction of triples with overlapping relations. Comparative experiments on the public DuIE dataset show that HEPA improves the F1 score by 2.8% over the best-performing baseline model (CasRel).
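The pointer-tagging idea mentioned in the abstract, in the style of CasRel-type extractors, marks entity start and end positions with two probability sequences instead of a single tag sequence; the snippet below is an assumed, simplified illustration of decoding such start/end pointers, not the HEPA implementation.

```python
# Hedged sketch: decoding entity spans from start/end pointer probabilities, as used
# in pointer-based joint triple extraction. Threshold and toy values are illustrative.
import numpy as np

def decode_spans(start_probs, end_probs, threshold=0.5):
    """Pair each predicted start position with the nearest predicted end at or after it."""
    starts = np.where(start_probs > threshold)[0]
    ends = np.where(end_probs > threshold)[0]
    spans = []
    for s in starts:
        following = ends[ends >= s]
        if len(following) > 0:
            spans.append((int(s), int(following[0])))
    return spans

# Toy pointer outputs for a 6-token sentence (values are made up).
start_probs = np.array([0.9, 0.1, 0.05, 0.7, 0.1, 0.1])
end_probs   = np.array([0.1, 0.8, 0.05, 0.1, 0.9, 0.1])
print(decode_spans(start_probs, end_probs))  # [(0, 1), (3, 4)]
```

Because each relation label gets its own pointer sequences in this family of models, the same token can belong to entities of several triples, which is why pointer tagging handles overlapping relations well.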
175.
Selecting industry partners for Research and Development (R&D) is a challenging task for many organizations. Existing partner-selection methods based on patents, publications, or company databases often fail for highly specialized SMEs. Our approach calculates technological similarity for partner discovery by applying Natural Language Processing (NLP) methods to companies' website texts. We show that the deep-learning language model BERT outperforms other methods at this task, achieving an F1-score of up to 0.90 against expert-validated ground truth. Our results imply that website texts are useful for estimating the similarity between companies, and we see great potential in the scalability of the semantic analysis of company website texts.
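A minimal sketch of how technological similarity between two companies could be estimated from website texts with a BERT-style encoder, along the lines of the abstract; the sentence-transformers checkpoint and the sample texts are assumptions for illustration, not the study's setup.

```python
# Hedged sketch: cosine similarity between embeddings of company website texts.
# Model name and texts are illustrative, not taken from the study.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

company_a = "We develop laser-based sensors for industrial process monitoring."
company_b = "Our photonic measurement systems monitor manufacturing lines in real time."

embeddings = model.encode([company_a, company_b], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1])
print("Technological similarity:", float(similarity))
```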
176.
BERT is a representative pre-trained language model that has drawn extensive attention for its significant improvements on downstream Natural Language Processing (NLP) tasks. Its complex architecture and massive parameter count give BERT competitive performance but also make model inference slow. To speed up BERT inference, FastBERT realizes adaptive inference with an acceptable drop in accuracy, based on knowledge distillation and the early-exit technique. However, several factors may limit the performance of FastBERT, such as a teacher classifier that is not knowledgeable enough, batch-size shrinkage, and redundant computation in the student classifiers. To overcome these limitations, we propose a new BERT inference method with GPU-Efficient Exit Prediction (GEEP). GEEP leverages a shared exit loss to simplify FastBERT's two-step training into a single step and makes the teacher classifier more knowledgeable by feeding it diverse Transformer outputs. In addition, an exit-layer prediction technique is proposed that uses a GPU hash table to handle the token-level exit-layer distribution and to sort test samples by their predicted exit layers. In this way, GEEP avoids batch-size shrinkage and redundant computation in the student classifiers. Experimental results on twelve public English and Chinese NLP datasets demonstrate the effectiveness of the proposed approach. The source code of GEEP will be released to the public upon paper acceptance.
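As an illustration of the early-exit idea that FastBERT and GEEP build on, the sketch below checks the confidence (entropy) of an intermediate classifier after each Transformer layer and stops as soon as the prediction is confident enough; the layer and classifier interfaces are assumed for the example and do not mirror the released GEEP code.

```python
# Hedged sketch of early-exit inference: run Transformer layers one at a time and
# exit as soon as an intermediate classifier is confident (low prediction entropy).
import torch

def early_exit_inference(layers, exit_classifiers, hidden, entropy_threshold=0.3):
    """layers: list of Transformer blocks; exit_classifiers: one classifier per layer."""
    probs = None
    for layer, classifier in zip(layers, exit_classifiers):
        hidden = layer(hidden)                        # (batch, seq_len, dim)
        logits = classifier(hidden[:, 0, :])          # classify from the [CLS] position
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * torch.log(probs + 1e-9)).sum(dim=-1)
        if entropy.max() < entropy_threshold:         # every sample in the batch is confident
            return probs, "early exit"
    return probs, "full depth"
```

Because different samples exit at different layers, a naive batched implementation shrinks the batch as confident samples leave; GEEP's exit-layer prediction and sorting are aimed precisely at avoiding that shrinkage.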