91.
A KNN Text Classification Algorithm Based on Vector Projection   (Total citations: 2; self-citations: 0; cited by others: 2)
To address the long classification time of the KNN algorithm, this paper analyzes ways to improve classification efficiency. Building on KNN and combining vector projection theory with the iDistance index structure, an improved algorithm, PKNN, is proposed. By comparing the one-dimensional projection distances between the sample to be classified and the training samples, PKNN identifies the most likely neighboring points and reduces the number of training samples that enter the distance computation, thereby lowering the cost of each classification. Experimental results show that PKNN clearly improves the efficiency of KNN, and its underlying principle makes it well suited to large-scale, high-dimensional text classification.
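The abstract gives no pseudocode. A minimal sketch of the projection-pruning idea (compare cheap one-dimensional projection keys first, then compute exact distances only for surviving candidates) might look like the following; the choice of the origin as the single reference point and the `margin` parameter are illustrative assumptions, not the paper's iDistance configuration:

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pknn(query, train, k=1, margin=0.5):
    """Prune training samples by 1-D projection distance, then run exact KNN.

    Each sample is keyed by its distance from the origin (a crude
    one-dimensional projection); only samples whose key lies within
    `margin` of the query's key enter the exact distance computation.
    """
    q_key = norm(query)
    # Candidate set: training samples close to the query in projected space.
    candidates = [(vec, label) for vec, label in train
                  if abs(norm(vec) - q_key) <= margin]
    if not candidates:  # fall back to the full set if pruning was too aggressive
        candidates = train
    candidates.sort(key=lambda item: dist(item[0], query))
    top = [label for _, label in candidates[:k]]
    return max(set(top), key=top.count)  # majority vote among k neighbors
```

Because the projection key is a lower bound surrogate, distant samples are discarded without any full distance computation, which is where the claimed speedup comes from.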
92.
This paper introduces the research background and key techniques of text classification, summarizes classical text classification methods, discusses newly emerging text classification models and the problems they face, and offers an outlook on future trends in text classification.
93.
Named Entity Recognition for Short Texts   (Total citations: 1; self-citations: 0; cited by others: 1)
王丹, 樊兴华. 《计算机应用》 2009, 29(1): 143-145.
Addressing the underexplored task of named entity recognition (NER) in short texts, this paper proposes a fast and effective NER method for short texts. The method has three steps. First, to counter the interference caused by the irregular style of short texts, normalization operations are applied, such as removing noisy characters and converting traditional characters to simplified ones. Second, to cope with the semantic incompleteness of short texts, an HMM (hidden Markov model) with part-of-speech tags as observations performs preliminary entity recognition. Third, based on the preliminary results, a pinyin coreference library is built to recognize latent entities. Experiments on a test set of 8,464 short texts show that the method performs short-text NER well.
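As a rough illustration of the second step, the following is a standard Viterbi decoder over an HMM whose observations are part-of-speech tags. All states, tags, and probabilities below are made-up toy values for a two-state (entity vs. ordinary word) model, not the paper's trained parameters:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for an observation sequence."""
    # V[t][s] = (best probability of ending in state s at step t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return list(reversed(path))

# Toy model: hidden states mark entity ("ENT") vs. ordinary ("O") words;
# observations are coarse POS tags, as in the paper's second step.
states = ["ENT", "O"]
start_p = {"ENT": 0.3, "O": 0.7}
trans_p = {"ENT": {"ENT": 0.6, "O": 0.4}, "O": {"ENT": 0.2, "O": 0.8}}
emit_p = {"ENT": {"noun": 0.8, "verb": 0.1, "part": 0.1},
          "O":   {"noun": 0.1, "verb": 0.6, "part": 0.3}}
```

With these toy parameters, a noun-verb-noun tag sequence decodes to entity, ordinary, entity, showing how POS observations alone can drive a first-pass labeling.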
94.
Traditionally, direct marketing companies have relied on pre-testing to select the best offers to send to their audience. Companies systematically dispatch the offers under consideration to a limited sample of potential buyers, rank them with respect to their performance and, based on this ranking, decide which offers to send to the wider population. Though this pre-testing process is simple and widely used, recently the industry has been under increased pressure to further optimize learning, in particular when facing severe time and learning space constraints. The main contribution of the present work is to demonstrate that direct marketing firms can exploit the information on visual content to optimize the learning phase. This paper proposes a two-phase learning strategy based on a cascade of regression methods that takes advantage of the visual and text features to improve and accelerate the learning process. Experiments in the domain of a commercial Multimedia Messaging Service (MMS) show the effectiveness of the proposed methods and a significant improvement over traditional learning techniques. The proposed approach can be used in any multimedia direct marketing domain in which offers comprise both a visual and text component.
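The cascade idea can be sketched with two univariate least-squares regressors: a cheap first stage scores all offers from a text feature, and only the top fraction is re-scored by a second stage on the visual feature. The feature layout, the 0.5 keep fraction, and the use of plain OLS are all simplifying assumptions; the paper's actual regression methods and features are richer:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (single feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def cascade_rank(offers, keep=0.5):
    """Rank offers with a two-stage cascade of univariate regressors.

    Each offer is (text_feature, visual_feature, past_response).  Stage 1
    scores every offer from the cheap text feature; only the top `keep`
    fraction is re-scored in stage 2 using the costlier visual feature.
    """
    text = [o[0] for o in offers]
    resp = [o[2] for o in offers]
    a1, b1 = fit_line(text, resp)
    scored = sorted(offers, key=lambda o: a1 * o[0] + b1, reverse=True)
    survivors = scored[:max(1, int(len(offers) * keep))]
    vis = [o[1] for o in survivors]
    resp2 = [o[2] for o in survivors]
    if len(set(vis)) > 1:  # refit only if the visual feature varies
        a2, b2 = fit_line(vis, resp2)
        survivors.sort(key=lambda o: a2 * o[1] + b2, reverse=True)
    return survivors
```

The design point the paper argues for is visible even in this toy: the second, more informative stage never touches offers the first stage has already ruled out, which shortens the learning phase.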
Sebastiano Battiato was born in Catania, Italy, in 1972. He received his degree in Computer Science (summa cum laude) in 1995 and his Ph.D. in Computer Science and Applied Mathematics in 1999. From 1999 to 2003 he led the “Imaging” team at STMicroelectronics in Catania. Since 2004 he has worked as a Researcher at the Department of Mathematics and Computer Science of the University of Catania. His research interests include image enhancement and processing, image coding, and camera imaging technology. He has published more than 90 papers in international journals, conference proceedings, and book chapters, and is co-inventor of about 15 international patents. He is a reviewer for several international journals and has regularly been a member of numerous international conference committees. He has participated in many international and national research projects. He is an Associate Editor of the SPIE Journal of Electronic Imaging (specialty: digital photography and image compression), director of ICVSS (International Computer Vision Summer School), and a Senior Member of the IEEE.
Giovanni Maria Farinella is currently a contract researcher at the Dipartimento di Matematica e Informatica, University of Catania, Italy (IPLAB research group). He has also been an associate member of the Computer Vision and Robotics Research Group at the University of Cambridge since 2006. His research interests lie in the fields of computer vision, pattern recognition, and machine learning. In 2004 he received his degree in Computer Science (egregia cum laude) from the University of Catania, and he was awarded a Ph.D. (Computer Vision) from the University of Catania in 2008. He has co-authored several papers in international journals and conference proceedings, and serves as a reviewer for numerous international journals and conferences. He is currently the co-director of the International Computer Vision Summer School (ICVSS).
Giovanni Giuffrida is an assistant professor at the University of Catania, Italy. He received a degree in Computer Science from the University of Pisa, Italy, in 1988 (summa cum laude), a Master of Science in Computer Science from the University of Houston, Texas, in 1992, and a Ph.D. in Computer Science from the University of California, Los Angeles (UCLA) in 2001. He has extensive experience in both the industrial and academic worlds, having served as CTO and CEO in industry and as a consultant for various organizations. His research interest is optimizing content delivery on new media such as the Internet, mobile phones, and digital TV. He has published several papers on data mining and its applications. He is a member of the ACM and IEEE.
Catarina Sismeiro is a senior lecturer at Imperial College Business School, Imperial College London. She received her Ph.D. in Marketing from the University of California, Los Angeles, and her Licenciatura in Management from the University of Porto, Portugal. Before joining Imperial College, Catarina had been an assistant professor at the Marshall School of Business, University of Southern California. Her primary research interests include studying pharmaceutical markets, modeling consumer behavior in interactive environments, and modeling spatial dependencies. Other areas of interest are decision theory, econometric methods, and the use of image and text features to predict the effectiveness of marketing communications tools. Catarina's work has appeared in numerous marketing and management science conferences, and her research has been published in the Journal of Marketing Research, Management Science, Marketing Letters, Journal of Interactive Marketing, and International Journal of Research in Marketing. She received the 2003 Paul Green Award and was a finalist for the 2007 and 2008 O'Dell Awards. Catarina was also a 2007 Marketing Science Institute Young Scholar, and she received the D. Antonia Adelaide Ferreira award and the ADMES/MARKTEST award for scientific excellence. She is currently on the editorial boards of Marketing Science and the International Journal of Research in Marketing.
Giuseppe Tribulato was born in Messina, Italy, in 1979. He received his degree in Computer Science (summa cum laude) in 2004 and his Ph.D. in Computer Science in 2008. Since 2005 he has led the research team at Neodata Group. His research interests include data mining techniques, recommendation systems, and customer targeting.
95.
Building on an introduction to the problem of text classification, its key techniques, and typical system architecture, this paper elaborates an automatic Chinese text classification method that uses a BP neural network with a momentum term as the classifier. The method computes feature weights with a normalized TF-IDF algorithm and prunes the feature set using the expected cross-entropy statistic. In addition, we tested, on the TanCorp12 dataset, how the number of features and the number of training iterations affect the classifier's macro-averaged and micro-averaged performance.
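The two preprocessing steps named above can be sketched as follows. The TF-IDF variant (tf times log(N/df + 0.01), cosine-normalized) and the expected cross-entropy formula are common textbook forms, assumed here rather than taken from the paper:

```python
import math
from collections import Counter

def tfidf_normalized(docs):
    """Cosine-normalized TF-IDF weights per document (one common variant)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency
    weights = []
    for d in docs:
        tf = Counter(d)
        w = {t: tf[t] * math.log(n / df[t] + 0.01) for t in tf}
        norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
        weights.append({t: v / norm for t, v in w.items()})
    return weights

def expected_cross_entropy(docs, labels, term):
    """Expected cross entropy of one term: P(t) * sum_c P(c|t) log(P(c|t)/P(c)).

    Higher scores mean the term's presence shifts the class distribution
    more, so low-scoring terms can be pruned from the feature set.
    """
    n = len(docs)
    with_t = [c for d, c in zip(docs, labels) if term in d]
    if not with_t:
        return 0.0
    p_t = len(with_t) / n
    score = 0.0
    for c in set(labels):
        p_c = labels.count(c) / n
        p_c_t = with_t.count(c) / len(with_t)
        if p_c_t > 0:
            score += p_c_t * math.log(p_c_t / p_c)
    return p_t * score
```

A term that appears in more documents of a single class scores higher than a rarer term with the same class skew, which is exactly the behavior used to shrink the feature vector before training the network.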
96.
This paper discusses the basic design of the encoding scheme described by the Text Encoding Initiative's Guidelines for Electronic Text Encoding and Interchange (TEI document number TEI P3, hereafter simply P3 or the Guidelines). It first reviews the basic design goals of the TEI project and their development during the course of the project. Next, it outlines some basic notions relevant for the design of any markup language and uses those notions to describe the basic structure of the TEI encoding scheme. It also describes briefly the core tag set defined in chapter 6 of P3, and the default text structure defined in chapter 7 of that work. The final section of the paper attempts an evaluation of P3 in the light of its original design goals, and outlines areas in which further work is still needed. C. M. Sperberg-McQueen is a Senior Research Programmer at the academic computer center of the University of Illinois at Chicago; his interests include medieval Germanic languages and literatures and the theory of electronic text markup. Since 1988 he has been editor in chief of the ACH/ACL/ALLC Text Encoding Initiative. Lou Burnard is Director of the Oxford Text Archive at Oxford University Computing Services, with interests in electronic text and database technology. He is European Editor of the Text Encoding Initiative's Guidelines.
97.
In this paper, we concentrate on justifying the decisions we made in developing the TEI recommendations for feature structure markup. The first four sections of this paper present the justification for the recommended treatment of feature structures, of features and their values, and of combinations of features or values and of alternations and negations of features and their values. Section 5 departs briefly from the linguistic focus to argue that the markup scheme developed for feature structures is in fact a general-purpose mechanism that can be used for a wide range of applications. Section 6 describes an auxiliary document called a feature system declaration that is used to document and validate a system of feature-structure markup. The seventh and final section illustrates the use of the recommended markup scheme with two examples, lexical tagging and interlinear text analysis.Terry Langendoen is Professor and Head of the Department of Linguistics at The University of Arizona. He was Chair of the TEI Committee on Analysis and Interpretation. He received his PhD in Linguistics from the Massachusetts Institute of Technology in 1964, and held teaching positions at The Ohio State University and the City University of New York (Brooklyn College and the Graduate Center) before moving to Arizona in 1988. He is author, co-author, or co-editor of six books in linguistics, and of numerous articles.Gary Simons is Director of the Academic Computing Department of the Summer Institute of Linguistics, Dallas, TX. He served on the TEI Committee on Analysis and Interpretation. He received his PhD in Linguistics (with minor emphasis in Computer Science) from Cornell University in 1979. Before taking up his current position in 1984, he spent five years in the Solomon Islands doing field work with SIL. 
He is author, co-author, or co-editor of eight books in the fields of linguistics and linguistic computing. The initial feature-structure recommendations were formulated by the Analysis and Interpretation Committee at a meeting in Tucson, Arizona in March 1990, following suggestions by Mitch Marcus and Beatrice Santorini. The authors received valuable help in the further revision and refinement of the recommendations from Steven Zepp.
98.
Automatic text-speech alignment is widely used in speech recognition and synthesis, content production, and related fields. Its main purpose is to align speech with the corresponding reference text at the sentence, word, and phoneme levels and to obtain timing information between the two. Most state-of-the-art alignment methods are based on speech recognition. On one hand, their accuracy is bounded by recognition quality: when the character error rate is high, alignment precision drops markedly, so the character error rate strongly affects alignment accuracy. On the other hand, such methods cannot effectively align long recordings with partially mismatched text. This paper proposes a text-speech alignment method based on anchors and prosodic information. Segment labeling weighted by boundary anchors divides the corpus into aligned and unaligned segments; for the unaligned segments, a dual-threshold endpoint detection method extracts prosodic information and detects sentence boundaries, reducing the dependence of alignment on recognition quality. Experimental results show that, compared with state-of-the-art recognition-based alignment methods, the proposed method improves alignment accuracy by more than 45% even at a character error rate of 0.52, and by 3% when the audio-text mismatch rate is 0.5.
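The dual-threshold endpoint detection step can be illustrated on a frame-energy sequence. The thresholds, and the use of energy alone, are simplifying assumptions; practical detectors also use zero-crossing rate or other prosodic cues:

```python
def dual_threshold_endpoints(energy, high=0.5, low=0.2):
    """Detect speech segments in a per-frame energy sequence.

    A segment is triggered only where energy exceeds the high threshold;
    its boundaries are then extended outward while energy stays above the
    low threshold.  This is the classic dual-threshold scheme: the high
    threshold rejects noise, the low one recovers quiet segment edges.
    """
    segments = []
    i, n = 0, len(energy)
    while i < n:
        if energy[i] > high:
            start = i
            while start > 0 and energy[start - 1] > low:
                start -= 1          # extend left edge over the low threshold
            end = i
            while end + 1 < n and energy[end + 1] > low:
                end += 1            # extend right edge over the low threshold
            segments.append((start, end))
            i = end + 1
        else:
            i += 1
    return segments
```

In the paper's pipeline, the segment boundaries found this way supply sentence-boundary candidates inside the unaligned stretches, without invoking the speech recognizer.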
99.
Current trademark card segmentation pipelines first detect text, then classify regions, and finally split and recombine the regions to form trademark cards. This step-by-step processing is slow, and the accumulation of errors reduces the accuracy of the final result. To address this, this paper proposes a multi-task network model, TextCls, which uses multi-task learning to improve the inference speed and precision of the detection and classification modules for trademark cards. The model consists of a feature extraction network and two task branches: text detection and region classification. The text detection branch uses a segmentation network to learn a pixel classification map and then applies pixel aggregation to obtain text boxes; the pixel classification map mainly captures information about text versus background pixels. The region classification branch categorizes region features into Chinese, English, and graphics, focusing on the characteristics of the different region types. Because the two branches share the feature extraction network, the pixel information and region features reinforce each other, improving the precision of both tasks. To make up for the lack of text detection datasets for trademark images and to validate TextCls, we also collected and annotated a text detection dataset of 2,000 trademark images, trademark_text (https://github.com/kongbailongtian/trademark_text). The results show that, compared with the best text detection algorithm, our text detection branch raises precision from 94.44% to 95.16%, with an F1 score of 92.12%; the region classification branch's F1 score also improves from 97.09% to 98.18%.
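As an illustration of the pixel-aggregation step in the detection branch, the following groups the text pixels of a binary classification map into bounding boxes via 4-connected flood fill. A real implementation aggregates learned pixel embeddings rather than raw binary masks; this sketch shows only the geometric idea:

```python
def pixel_aggregate(mask):
    """Group text pixels (1s) of a binary map into bounding boxes.

    Returns one (top, left, bottom, right) box per 4-connected component,
    in scan order.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill this component, tracking its extent.
                stack = [(r, c)]
                seen[r][c] = True
                top, left, bottom, right = r, c, r, c
                while stack:
                    y, x = stack.pop()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes
```

Each resulting box would then be handed to the region classification branch to be labeled as Chinese, English, or graphics.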
100.
In recent years, machine learning has increasingly been applied to depression detection based on social media text data, where it has shown significant value. To survey the state of the art and future directions, this paper organizes and categorizes the social media text datasets, data preprocessing steps, and machine learning methods used for depression detection. For feature representation, it compares basic feature representations, static word embeddings, and contextual word embeddings. It comprehensively analyzes the performance and characteristics of traditional machine learning with different basic features and algorithm types, as well as of deep learning, for depression detection. Finally, it recommends further exploration of Chinese dataset creation, model interpretability, metaphor-based detection, and lightweight pre-trained models.