Fee-based full text: 2561 articles
Free: 362 articles
Domestic free: 300 articles
Electrical engineering: 130 articles
Technical theory: 1 article
General: 349 articles
Chemical industry: 64 articles
Metalworking: 22 articles
Machinery and instruments: 99 articles
Building science: 190 articles
Mining engineering: 35 articles
Energy and power: 17 articles
Light industry: 66 articles
Hydraulic engineering: 46 articles
Oil and natural gas: 32 articles
Weapons industry: 13 articles
Radio and electronics: 243 articles
General industrial technology: 134 articles
Metallurgy: 38 articles
Nuclear technology: 20 articles
Automation technology: 1724 articles
2024: 12 articles
2023: 35 articles
2022: 52 articles
2021: 84 articles
2020: 80 articles
2019: 61 articles
2018: 66 articles
2017: 77 articles
2016: 84 articles
2015: 92 articles
2014: 180 articles
2013: 162 articles
2012: 211 articles
2011: 239 articles
2010: 198 articles
2009: 203 articles
2008: 191 articles
2007: 222 articles
2006: 164 articles
2005: 136 articles
2004: 149 articles
2003: 114 articles
2002: 94 articles
2001: 60 articles
2000: 55 articles
1999: 50 articles
1998: 30 articles
1997: 22 articles
1996: 20 articles
1995: 20 articles
1994: 15 articles
1993: 3 articles
1992: 2 articles
1991: 9 articles
1990: 3 articles
1989: 5 articles
1988: 5 articles
1987: 2 articles
1986: 4 articles
1985: 1 article
1984: 1 article
1983: 3 articles
1981: 1 article
1980: 1 article
1972: 1 article
1965: 1 article
1964: 1 article
1960: 1 article
1956: 1 article
Sort by:   3223 results in total (search time: 31 ms)
81.
This paper studies the safe-rewriting decision problem for AXML documents: deciding whether every document in the set generated from a given AXML document by triggering its embedded service calls can be rewritten into an instance conforming to a target schema. Based on tree automata theory, a tree automaton for abstracting AXML documents, the ATA (AXML tree automaton), is defined; an ATA is equivalent to the set of documents that a given AXML document can generate by triggering its embedded service calls. Building on the ATA, a decision algorithm for the safe rewriting of AXML documents is proposed, and its correctness and effectiveness are demonstrated.
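To make the automaton abstraction concrete, below is a minimal sketch of bottom-up tree-automaton acceptance in Python. The transition table, the doc/sec/text schema and the example tree are hypothetical illustrations of how schema conformance can be decided by running an automaton over a document tree; they are not the paper's ATA construction.

```python
# A bottom-up tree automaton: trees are (label, children) pairs, and a transition
# maps (label, tuple of child states) to a state. Acceptance = reaching a final state.

def run(automaton, tree):
    """Return the state reached on `tree`, or None if no transition applies."""
    label, children = tree
    child_states = tuple(run(automaton, c) for c in children)
    if any(s is None for s in child_states):
        return None
    return automaton["delta"].get((label, child_states))

def accepts(automaton, tree):
    return run(automaton, tree) in automaton["final"]

# Hypothetical target schema: a 'doc' node containing one or two 'sec' nodes of text.
ata = {
    "delta": {
        ("text", ()): "q_text",
        ("sec", ("q_text",)): "q_sec",
        ("doc", ("q_sec",)): "q_doc",
        ("doc", ("q_sec", "q_sec")): "q_doc",
    },
    "final": {"q_doc"},
}

tree = ("doc", [("sec", [("text", [])]), ("sec", [("text", [])])])
print(accepts(ata, tree))  # True: this instance conforms to the target schema
```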
82.
王自强  钱旭 《计算机应用》2009,29(2):416-418
To solve the Web document classification problem efficiently, a document classification algorithm based on kernel discriminant analysis (KDA) and SVM is proposed. The algorithm first applies KDA to reduce the dimensionality of the high-dimensional Web document space of the training set, and then performs classification in the reduced low-dimensional feature space with an SVM optimized by multiplicative update rules. Experiments on two well-known document classification datasets, Reuters-21578 and 20-Newsgroups, show that the algorithm not only achieves higher classification accuracy but also requires less running time.
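As a rough sketch of the pipeline shape (project documents into a low-dimensional kernel-induced space, then classify with an SVM), the following uses scikit-learn. Note the substitutions: scikit-learn ships no KDA estimator, so KernelPCA stands in for the kernel discriminant projection, and SVC's standard solver stands in for the multiplicative-update optimization described in the abstract.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import KernelPCA
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0)

clf = make_pipeline(
    TfidfVectorizer(max_features=5000),          # high-dimensional document space
    KernelPCA(n_components=50, kernel="rbf"),    # stand-in for the KDA reduction
    SVC(kernel="rbf"),                           # classification in the reduced space
)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))
```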
83.
This paper deals with multimedia information access. We propose two new approaches for hybrid text-image information processing that can be straightforwardly generalized to the more general multimodal scenario. Both approaches fall into the trans-media pseudo-relevance feedback category. Our first method uses a mixture model of the aggregate components, considering them as a single relevance concept. In our second approach, we define trans-media similarities as an aggregation of monomodal similarities between the elements of the aggregate and the new multimodal object. We also introduce the monomodal similarity measures for text and images that serve as basic components for both proposed trans-media similarities. We show how one can frame a large variety of problems in order to address them with the proposed techniques: image annotation or captioning, text illustration, and multimedia retrieval and clustering. Finally, we present how these methods can be integrated into two applications: a travel blog assistant system and a tool for browsing Wikipedia that takes into account the multimedia nature of its content.
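A minimal sketch of the second approach: the trans-media similarity between a new multimodal object and an aggregate is computed as an aggregation (here, a weighted mean) of monomodal similarities between the object and each element of the aggregate. The cosine measure and the equal weights are illustrative placeholders, not the paper's actual text and image similarities.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def trans_media_similarity(query, aggregate, w_text=0.5, w_image=0.5):
    """query and each aggregate element are dicts of 'text'/'image' feature vectors."""
    scores = [
        w_text * cosine(query["text"], elem["text"])
        + w_image * cosine(query["image"], elem["image"])
        for elem in aggregate
    ]
    return float(np.mean(scores))  # aggregate the monomodal similarities

rng = np.random.default_rng(0)
doc = lambda: {"text": rng.random(64), "image": rng.random(128)}
print(trans_media_similarity(doc(), [doc(), doc(), doc()]))
```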
Dr. Julien Ah-Pine joined XRCE Grenoble as a Research Engineer in 2007. He is part of the Textual and Visual Pattern Analysis group, and his current research activities are related to multi-modal information retrieval and machine learning. He received his PhD degree in mathematics from Pierre and Marie Curie University (University of Paris 6). From 2003 to 2007, he was with Thales Communications, working on relational analysis, data and text mining methods and social choice theory.

Dr. Marco Bressan is Area Manager of the Textual and Visual Pattern Analysis area at Xerox Research Centre Europe. His main research interests are statistical learning and classification; image and video semantic scene understanding; image enhancement and aesthetics; and object detection and recognition, particularly in uncontrolled environments. Prior to Xerox, several of his contributions in these fields were applied to a variety of scenarios including biometric solutions, data mining, CBIR and industrial vision. Dr. Bressan holds a BA in Applied Mathematics from the University of Buenos Aires, an M.Sc. in Computer Vision from the Computer Vision Centre in Spain and a Ph.D. in Computer Science and Artificial Intelligence from the Autonomous University of Barcelona. He is an active member of the network of Argentinean researchers abroad and one of the founders of the network of computer vision and cognitive science researchers.

Stephane Clinchant is a Ph.D. student at University Joseph Fourier (Grenoble, France) and at the Xerox Research Centre Europe, which he joined in 2005. Before joining XRCE, Stephane obtained a Master's degree in Computer Science in 2005 from the Ecole Nationale Superieure d'Electrotechnique, d'Informatique, d'Hydraulique et des Telecommunications (France). His current research interests mainly focus on machine learning for natural language processing and multimedia information access.

Dr. Gabriela Csurka is a research scientist in the Textual and Visual Pattern Analysis team at Xerox Research Centre Europe (XRCE). She obtained her Ph.D. degree (1996) in Computer Science from the University of Nice Sophia-Antipolis. Before joining XRCE in 2002, she worked in fields such as stereo vision and projective reconstruction at INRIA (Sophia Antipolis, Rhone Alpes and IRISA) and image and video watermarking at the University of Geneva and Institut Eurécom, Sophia Antipolis. Author of several publications in major journals and international conferences, she is also an active reviewer for both journals and conferences. Her current research interests concern the exploration of new technologies for image content and aesthetic analysis, cross-modal image categorization and semantic-based image segmentation.

Yves Hoppenot is in charge of the development and integration of new technologies in the European research Technology Showroom. He is a software expert for the production, office and services sectors. Yves joined the Xerox Research Centre Europe in 2001. He graduated from the Ecole Nationale Superieure des Telecommunications, Brest, in France, and received a Master of Science degree from the Tampere University of Technology in Finland.

Dr. Jean-Michel Renders joined XRCE Grenoble as a Research Engineer in 2001. His current research interests mainly focus on machine learning techniques applied to statistical natural language processing and text mining. Before joining XRCE, Jean-Michel obtained a PhD in Applied Sciences from the University of Brussels in 1993. He started his research activities in 1988, in the field of robotics dynamics and control. He then joined the Joint Research Centre of the European Communities to work on biological metaphors (genetic algorithms, neural networks and immune networks) applied to process control. After spending one year as a visiting scientist at York University (England), he spent four years applying artificial intelligence and machine learning techniques in industry (Tractebel - Suez). He then worked as a data mining senior consultant and led projects in most major Belgian banks and utilities.
84.
Focused crawlers have as their main goal to crawl Web pages that are relevant to a specific topic or user interest, playing an important role for a great variety of applications. In general, they work by trying to find and crawl all kinds of pages deemed related to an implicitly declared topic. However, users are often not interested in just any document about a topic; they may want only documents of a given type or genre on that topic. In this article, we describe an approach to focused crawling that exploits not only content-related information but also genre information present in Web pages to guide the crawling process. This approach has been designed to address situations in which the specific topic of interest can be expressed by specifying two sets of terms, the first describing genre aspects of the desired pages and the second related to the subject or content of these pages, thus requiring no training or any kind of preprocessing. The effectiveness, efficiency and scalability of the proposed approach are demonstrated by a set of experiments involving the crawling of pages related to syllabi of computer science courses, job offers in the computer science field and sale offers of computer equipment. These experiments show that focused crawlers constructed according to our genre-aware approach achieve F1 levels above 88%, requiring the analysis of no more than 65% of the visited pages in order to find 90% of the relevant pages. In addition, we experimentally analyze the impact of term selection on our approach and evaluate a proposed strategy for the semi-automatic generation of such terms. This analysis shows that a small set of terms selected by an expert, or a set of terms specified by a typical user familiar with the topic, is usually enough to produce good results, and that the semi-automatic strategy is very effective in supporting the task of selecting the sets of terms required to guide a crawling process.
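The two-term-set idea can be sketched as a page-scoring rule in which genre terms and content terms are matched separately, so a page scores high only when both aspects are present. The term sets and threshold below are illustrative, not the paper's tuned values.

```python
import re

GENRE_TERMS = {"syllabus", "course", "schedule", "instructor"}      # genre aspects
CONTENT_TERMS = {"algorithms", "data", "structures", "complexity"}  # subject aspects

def page_score(text, genre_terms=GENRE_TERMS, content_terms=CONTENT_TERMS):
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    genre = len(tokens & genre_terms) / len(genre_terms)
    content = len(tokens & content_terms) / len(content_terms)
    return genre * content   # both aspects must be present to score high

page = "CS 201 course syllabus: algorithms and data structures, instructor office hours"
if page_score(page) > 0.1:
    print("relevant: follow this page's outlinks")
```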
85.
Semplore: A scalable IR approach to search the Web of Data
The Web of Data keeps growing rapidly. However, the full exploitation of this large amount of structured data faces numerous challenges, such as usability, scalability, imprecise information needs and data change. We present Semplore, an IR-based system that aims at addressing these issues. Semplore supports intuitive faceted search and complex queries on both text and structured data. It combines imprecise keyword search and precise structured query in a unified ranking scheme. Scalable query processing is supported by leveraging the inverted indexes traditionally used in IR systems. This is combined with a novel block-based index structure to support efficient index updates when data changes. The experimental results show that Semplore is an efficient and effective system for searching the Web of Data and can be used as a basic infrastructure for Web-scale Semantic Web search engines.
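A toy illustration of the block-based indexing idea: sealed, immutable posting blocks plus a small in-memory delta block that absorbs updates, so data changes never force a full index rebuild. This is a generic sketch of the pattern, not Semplore's actual index layout.

```python
from collections import defaultdict

class BlockIndex:
    def __init__(self, merge_threshold=1000):
        self.blocks = []                      # sealed, immutable posting maps
        self.delta = defaultdict(set)         # term -> doc ids; absorbs updates
        self.merge_threshold = merge_threshold

    def add(self, doc_id, terms):
        for t in terms:
            self.delta[t].add(doc_id)
        if sum(len(p) for p in self.delta.values()) >= self.merge_threshold:
            self.blocks.append(dict(self.delta))   # seal the delta as a new block
            self.delta = defaultdict(set)

    def search(self, term):
        hits = set(self.delta.get(term, ()))
        for block in self.blocks:             # merge postings across all blocks
            hits |= block.get(term, set())
        return hits

idx = BlockIndex()
idx.add("d1", ["semantic", "web"])
idx.add("d2", ["web", "data"])
print(idx.search("web"))  # {'d1', 'd2'}
```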
86.
There is more to legal knowledge representation than knowledge bases. It is valuable to look at legal knowledge representation and its implementation across the entire domain of computerisation of law, rather than focussing on sub-domains such as legal expert systems. The DataLex WorkStation software, and applications developed using it, are used to provide examples. Effective integration of inferencing, hypertext and text retrieval can overcome some of the limitations of these current paradigms of legal computerisation which are apparent when they are used on a stand-alone basis. Effective integration of inferencing systems is facilitated by the use of a (quasi) natural language knowledge representation, and the benefits of isomorphism are enhanced. These advantages of integration apply to all forms of inferencing, including document generation and case-based inferencing. Some principles for the development of integrated legal decision support systems are proposed.
87.
Cloud storage is essential for managing user data stored in and retrieved from distributed data centres. The storage service is offered on a pay-per-use basis, priced by the capacity consumed. Because the massive amount of data held in a data centre contains similar information and file structures kept in multiple copies, duplication inflates storage space. Existing deduplication systems do not achieve efficient data reduction because of inaccuracy in identifying similar data, which complicates the system and increases storage consumption and cost. To resolve this problem, this paper proposes an efficient storage reduction scheme called Hash-Indexing Block-based Deduplication (HIBD), based on Segmented Bind Linkage (SBL) methods, for reducing storage in a cloud environment. Initially, preprocessing is done using a sparse augmentation technique. The preprocessed files are then segmented into blocks to build a hash index. The content of each block is compared with that of other files through Semantic Content Source Deduplication (SCSD), which identifies similar content shared between files. Based on the content presence count, Distance Vector Weightage Correlation (DVWC) estimates the document similarity weight, and related files are grouped into a cluster. Finally, segmented bind linkage compares the documents to find duplicate content in the cluster, using the similarity weight based on coefficient match cases. This implementation identifies data redundancy efficiently and reduces the service cost of distributed cloud storage.
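The core of block-level deduplication with a hash index can be sketched in a few lines: segment each file into blocks, hash every block, and store a block only when its hash is unseen. The fixed block size and plain SHA-256 index below are generic choices; the SCSD, DVWC and SBL stages described above are not modelled.

```python
import hashlib

BLOCK_SIZE = 4096
store = {}           # hash -> block bytes (unique blocks only)

def dedup_write(data: bytes):
    recipe = []      # ordered block hashes from which the file can be rebuilt
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)   # store the block only if its hash is new
        recipe.append(h)
    return recipe

a = dedup_write(b"hello world" * 2000)
b = dedup_write(b"hello world" * 2000)   # a duplicate file adds no new blocks
print(len(store), a == b)                # storage holds one copy of each block
```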
88.
With the increasing popularity of mobile devices and the wide adoption of mobile apps, privacy concerns are growing. The privacy policy has been identified as the proper medium for stating legal terms, such as those of the General Data Protection Regulation (GDPR), and for binding a legal agreement between service providers and users. However, privacy policies are usually long and vague, making them hard for end users to read and understand. It is thus important to be able to automatically analyze the document structure of privacy policies to assist user understanding. In this work we create a manually labelled corpus containing 231 privacy policies (more than 566,000 words and 7,748 annotated paragraphs). We benchmark our corpus with 3 document classification models and achieve an F1-score of more than 82%.
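The shape of such a benchmark, paragraph classification evaluated with F1, can be sketched as below. The four-paragraph toy corpus and its label set are hypothetical stand-ins for the 231-policy corpus described above, and the model is a generic TF-IDF baseline rather than any of the paper's three classifiers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

paragraphs = [
    "We collect your email address and usage data.",
    "You may request deletion of your personal data.",
    "We share aggregated data with advertising partners.",
    "Contact our data protection officer for GDPR requests.",
] * 10
labels = ["collection", "user_rights", "sharing", "user_rights"] * 10

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(paragraphs, labels)
pred = clf.predict(paragraphs)           # in-sample evaluation, for brevity only
print(f1_score(labels, pred, average="macro"))
```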
89.
Spreading malware through malicious documents is very common on the modern Internet and is among the highest risks that many organizations face. PDF is the most widely used document format in the world, and attacks exploiting it are therefore countless. Detecting malicious documents with machine learning is a popular and effective approach, but when facing samples carefully crafted by attackers, the robustness of machine learning classifiers can become problematic. In computer vision, adversarial learning has been proven in many scenarios to be an effective way of improving classifier robustness. For malicious document detection, however, we still lack a comprehensive method for generating adversarial examples across various attack scenarios. In this paper, we introduce the fundamentals of the PDF file format, together with effective malicious PDF detectors and adversarial example generation techniques. We propose an adversarial learning model for the malicious document detection domain to generate adversarial examples, and use the generated examples to study detection performance (and evasion effectiveness) under a multi-detector hypothesis scenario. The key operations of the model are correlated feature extraction and feature modification: correlated feature extraction finds associations between different feature spaces, while feature modification maintains the stability of the samples. Finally, the attack algorithm uses a momentum-based iterative gradient approach to improve the success rate and efficiency of adversarial example generation. Using convincing datasets, we rigorously set up the experimental environment and metrics, and then carried out adversarial example attacks and robustness improvement tests. The experimental results show that the model maintains a high adversarial example generation rate and attack success rate. Moreover, the model can be applied to other malware detectors and helps optimize detector robustness.
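A minimal numpy sketch of the momentum-based iterative gradient idea (in the spirit of MI-FGSM): accumulate a decayed gradient and step the sample's feature vector toward the benign side of a differentiable surrogate detector. The linear surrogate and all constants are illustrative; the paper's correlated feature extraction and feature modification stages are not modelled.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=20), 0.1          # surrogate linear detector: score > 0 => malicious

def grad_malicious_score(x):
    return w                              # gradient of w @ x + b w.r.t. x

x = np.abs(rng.normal(size=20))           # toy feature vector of a malicious PDF
g, mu, eps = np.zeros_like(x), 0.9, 0.05  # momentum buffer, decay, step size

for _ in range(10):
    grad = grad_malicious_score(x)
    g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum accumulation
    x = x - eps * np.sign(g)              # push features toward the benign class
    x = np.clip(x, 0, None)               # keep the feature vector feasible

print("surrogate detector score:", float(w @ x + b))
```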
90.
To address the problems of traditional graph-based manifold ranking salient object detection algorithms, namely reliance on a single prior and incomplete detection of salient objects, a new salient object detection algorithm based on background and center priors is proposed. First, image boundary nodes are used as background seeds for manifold ranking to obtain a coarse foreground region, which is ranked again to produce a preliminary saliency map. Then, Harris corner detection and clustering are used for center-prior saliency detection to capture central saliency information. Finally, the center saliency is fused with the preliminary saliency map to obtain the final saliency map. Experiments evaluating the overall metrics, precision-recall curves, F-measure, and mean absolute error (MAE) on the public MSRA-10K and ECSSD datasets show that, compared with 10 mainstream algorithms, the proposed algorithm performs well across the evaluation metrics, accurately highlights salient objects, and improves background suppression.
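The manifold ranking step behind the background prior has a standard closed form: with S the symmetrically normalized affinity matrix of the graph and y the indicator of the seed nodes, the ranking scores are f = (I - αS)⁻¹ y. The 4-node graph and α below are illustrative, not the paper's superpixel graph.

```python
import numpy as np

W = np.array([[0, 1, 0, 0],              # toy affinity matrix of a 4-node graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))
S = np.linalg.inv(np.sqrt(D)) @ W @ np.linalg.inv(np.sqrt(D))  # normalized affinity

alpha = 0.99
y = np.array([1.0, 0.0, 0.0, 1.0])       # nodes 0 and 3 are boundary (background) seeds
f = np.linalg.solve(np.eye(4) - alpha * S, y)  # manifold ranking scores
saliency = 1 - f / f.max()                # low background rank => high saliency
print(saliency)
```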