Sorted results: 1,318 records found (search time: 15 ms)
81.
The arrival of the big data era has given network users access to far richer information resources, but filtering that information has become a major obstacle to finding what is relevant. A full-text search engine addresses this by building a local index of the collected content and segmenting the query into keywords before performing fuzzy matching, serving users well in terms of both recall and precision. This paper walks through the pipeline around which full-text search is built, analyzing each stage: crawling web data, analyzing the data, building the index, and searching.
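The crawl, analyze, index, and search stages described above can be sketched with a minimal inverted index. This is a toy illustration under our own naming, not any particular engine; a real engine would also need proper word segmentation, especially for Chinese text:

```python
import re
from collections import defaultdict

def tokenize(text):
    # Crude analysis step: lowercase and split on non-alphanumerics.
    return [t for t in re.split(r"[^0-9a-z]+", text.lower()) if t]

def build_index(docs):
    # Inverted index: term -> set of document ids containing it.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in tokenize(text):
            index[term].add(doc_id)
    return index

def search(index, query):
    # AND semantics: return documents containing every query term.
    term_sets = [index.get(t, set()) for t in tokenize(query)]
    if not term_sets:
        return set()
    return set.intersection(*term_sets)

# Stand-in for the crawl stage: three already-fetched documents.
docs = {
    1: "Full-text search engines build a local index",
    2: "The index maps terms to documents",
    3: "Crawling collects raw web data",
}
index = build_index(docs)
print(sorted(search(index, "index")))  # documents mentioning "index"
```

Recall here is perfect by construction (every indexed occurrence is found); ranking and fuzzy matching would sit on top of this skeleton.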
82.
Hepatocellular carcinoma (HCC) is the third leading cause of cancer-related mortality worldwide. New insights into the pathogenesis of this lethal disease are urgently needed. Chromosomal copy number alterations (CNAs) can lead to activation of oncogenes and inactivation of tumor suppressors in human cancers. Thus, identification of cancer-specific CNAs will not only provide new insight into the molecular basis of tumorigenesis but also facilitate the identification of CNA-based HCC biomarkers.
83.
Leonardo Rocha, Fernando Mourão, Hilton Mota, Thiago Salles, Marcos André Gonçalves, Wagner Meira Jr. 《Information Systems》2013
The management of the huge and growing amount of information available nowadays makes Automatic Document Classification (ADC) a crucial but very challenging task. Furthermore, the dynamics inherent to classification problems, mainly on the Web, make this task even more challenging. Despite this, the actual impact of such temporal evolution on ADC is still poorly understood in the literature. In this context, this work aims to evaluate, characterize, and exploit temporal evolution to improve ADC techniques. Our first contribution is a pragmatic methodology for evaluating temporal evolution in ADC domains. Through this methodology, we can identify measurable factors associated with the degradation of ADC models over time. Going a step further, based on these analyses, we propose effective and efficient strategies to make current techniques more robust to natural shifts over time. We present a strategy, named temporal context selection, for selecting portions of the training set that minimize those factors. Our second contribution is a general algorithm, called Chronos, for determining such contexts. By instantiating Chronos, we are able to reduce uncertainty and improve overall classification accuracy. Empirical evaluations of heuristic instantiations of the algorithm, named WindowsChronos and FilterChronos, on two real document collections demonstrate the usefulness of our proposal. Comparing them against state-of-the-art ADC algorithms shows that selecting temporal contexts yields improvements in classification accuracy of up to 10%. Finally, we highlight the applicability and generality of our proposal in practice, pointing to this study as a promising research direction.
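The core idea of temporal context selection, choosing the portion of the training set that best matches recent data, can be illustrated with a toy sketch. This is our own simplification, not the paper's Chronos heuristics; the vocabulary-coverage scorer below merely stands in for a real validation step:

```python
def select_temporal_context(train_by_year, validate, candidate_windows):
    # train_by_year: {year: list of training samples}
    # validate(samples) -> score on recent validation data
    # Try each trailing window of years; keep the best-scoring one.
    years = sorted(train_by_year)
    best_window, best_score = None, -1.0
    for w in candidate_windows:
        window_years = years[-w:]
        samples = [s for y in window_years for s in train_by_year[y]]
        score = validate(samples)
        if score > best_score:
            best_window, best_score = window_years, score
    return best_window, best_score

# Toy data: older years drift away from the recent topic vocabulary.
train_by_year = {
    2009: ["spam offer", "old topic"],
    2010: ["new gadget review"],
    2011: ["gadget news", "review roundup"],
}
recent = ["gadget review", "gadget news"]

def vocab_coverage(samples):
    # Fraction of recent words covered by the training vocabulary.
    vocab = {w for s in samples for w in s.split()}
    words = [w for s in recent for w in s.split()]
    return sum(w in vocab for w in words) / len(words)

window, score = select_temporal_context(train_by_year, vocab_coverage, [1, 2, 3])
```

Here the most recent year alone already covers the validation vocabulary, so the smallest context wins; with noisier data, larger windows can trade recency for volume.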
84.
Over the last decade, enhanced suffix arrays (ESAs) have replaced suffix trees in many applications. Algorithms based on ESAs require less space while offering the same time efficiency as those based on suffix trees. However, this is only true when the suffix structure is used as a static index. Suffix trees can be updated faster than suffix arrays, which is a clear advantage in applications that require dynamic indexing. We show that for some dynamic applications a suffix array and the derived LCP-interval tree can be used in such a way that actual index updates are not necessary. We demonstrate this in the case of grammar text compression with longest-first substitution and provide the source code. The proposed algorithm has O(N²) worst-case time complexity but runs in O(N) time in practice.
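Longest-first substitution repeatedly needs the longest repeated substring, which a suffix array plus Kasai's LCP construction finds directly. A minimal sketch of that primitive (naive O(n² log n) suffix sorting for brevity; this is not the paper's update-free scheme):

```python
def suffix_array(s):
    # Sort suffix start positions lexicographically; fine for a demo.
    return sorted(range(len(s)), key=lambda i: s[i:])

def lcp_array(s, sa):
    # Kasai's algorithm: LCP of each suffix with its predecessor in
    # sorted order, computed in O(n) total time.
    n = len(s)
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp = [0] * n
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1
    return lcp

def longest_repeated_substring(s):
    # The maximum LCP value marks the longest substring occurring twice.
    sa = suffix_array(s)
    lcp = lcp_array(s, sa)
    k = max(range(len(s)), key=lambda r: lcp[r])
    return s[sa[k]:sa[k] + lcp[k]]

print(longest_repeated_substring("banana"))  # -> "ana"
```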
85.
《Advanced Engineering Informatics》2015,29(4):1072-1082
Text in images and video contains important information for visual content understanding, indexing, and recognition. Extracting this information involves preprocessing, localization, and extraction of the text from a given image. In this paper, we propose a novel expiration code detection and recognition algorithm using Gabor features and collaborative representation based classification. The proposed system consists of four steps: expiration code localization, character isolation, Gabor feature extraction, and character recognition. For expiration code detection, the Gabor energy (GE) and the maximum energy difference (MED) are extracted. The performance of the recognition algorithm is tested over three Gabor features: GE, magnitude response (MR), and imaginary response (IR). The Gabor features are classified with a collaborative representation based classifier (GCRC). To encompass all frequencies and orientations, downsampling and principal component analysis (PCA) are applied to reduce the dimensionality of the feature space. The effectiveness of the proposed localization algorithm is highlighted and compared with existing methods. Extensive testing shows that the suggested detection scheme outperforms existing methods in terms of detection rate on a large image database. GCRC also shows very competitive results compared with Gabor feature sparse representation based classification (GSRC), and the proposed system outperforms both the nearest neighbor (NN) classifier and plain collaborative representation based classification (CRC).
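As a rough illustration of the kind of Gabor features involved, the sketch below builds the real and imaginary parts of a Gabor kernel and combines the two filter responses into an energy value, taking GE = sqrt(real² + imag²). The parameter choices and the tiny striped test patch are our assumptions, not the paper's settings:

```python
import math

def gabor_kernel(size, theta, lam, sigma, gamma=1.0):
    # Real (cosine) and imaginary (sine) parts of a Gabor filter:
    # a Gaussian envelope modulating a sinusoid along orientation theta.
    half = size // 2
    real, imag = [], []
    for y in range(-half, half + 1):
        rr, ri = [], []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                           / (2 * sigma * sigma))
            rr.append(env * math.cos(2 * math.pi * xr / lam))
            ri.append(env * math.sin(2 * math.pi * xr / lam))
        real.append(rr)
        imag.append(ri)
    return real, imag

def gabor_energy(patch, real, imag):
    # Correlate the patch with both parts at its centre and combine:
    # GE = sqrt(real_response^2 + imag_response^2).
    re = sum(patch[y][x] * real[y][x]
             for y in range(len(real)) for x in range(len(real[0])))
    im = sum(patch[y][x] * imag[y][x]
             for y in range(len(imag)) for x in range(len(imag[0])))
    return math.sqrt(re * re + im * im)

# A 7x7 patch of vertical stripes (intensity varies along x, period 4).
patch = [[1.0 if x % 4 < 2 else 0.0 for x in range(7)] for _ in range(7)]
e_horiz = gabor_energy(patch, *gabor_kernel(7, 0.0, 4.0, 2.0))
e_vert = gabor_energy(patch, *gabor_kernel(7, math.pi / 2, 4.0, 2.0))
```

The filter tuned to the stripes' orientation and wavelength responds far more strongly, which is exactly what makes a bank of such filters useful for locating printed codes.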
86.
Jonas Poelmans, Sergei O. Kuznetsov, Dmitry I. Ignatov, Guido Dedene 《Expert systems with applications》2013,40(16):6601-6623
This is the first part of a large survey paper in which we analyze recent literature on Formal Concept Analysis (FCA) and some closely related disciplines using FCA. We collected 1072 papers published between 2003 and 2011 mentioning terms related to Formal Concept Analysis in the title, abstract and keywords. We developed a knowledge browsing environment to support our literature analysis process. We use the visualization capabilities of FCA to explore the literature, to discover and conceptually represent the main research topics in the FCA community. In this first part, we zoom in on and give an extensive overview of the papers published between 2003 and 2011 on developing FCA-based methods for knowledge processing. We also give an overview of the literature on FCA extensions such as pattern structures, logical concept analysis, relational concept analysis, power context families, fuzzy FCA, rough FCA, temporal and triadic concept analysis and discuss scalability issues.
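For readers new to FCA: a formal concept is a pair (extent, intent) in which the set of objects and the set of attributes mutually determine each other. A brute-force sketch over a toy context (suitable for tiny contexts only; real FCA tools use algorithms such as NextClosure, and the example context is ours):

```python
from itertools import combinations

def formal_concepts(context):
    # context: {object: set of attributes}. Enumerate object subsets,
    # close each one, and collect the distinct (extent, intent) pairs.
    objects = list(context)
    attributes = set().union(*context.values())
    concepts = set()
    for r in range(len(objects) + 1):
        for subset in combinations(objects, r):
            # intent: attributes shared by every object in the subset
            intent = attributes.copy()
            for o in subset:
                intent &= context[o]
            # extent: all objects possessing every attribute of the intent
            extent = frozenset(o for o in objects if intent <= context[o])
            concepts.add((extent, frozenset(intent)))
    return concepts

context = {
    "frog":   {"aquatic", "vertebrate"},
    "salmon": {"aquatic", "vertebrate", "has_fins"},
    "reed":   {"aquatic"},
}
concepts = formal_concepts(context)
```

The three concepts found here form the concept lattice that FCA visualizations such as those in the survey are built from.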
87.
This paper presents a novel over-sampling method based on document content to handle the class imbalance problem in text classification. The new technique, COS-HMM (Content-based Over-Sampling HMM), includes an HMM that is trained on a corpus in order to create new samples resembling the current documents. The HMM is treated as a document generator that can produce synthetic instances based on what it was trained on. To demonstrate its effectiveness, COS-HMM is tested with a Support Vector Machine (SVM) on two medical document corpora (OHSUMED and TREC Genomics) and compared with the Random Over-Sampling (ROS) and SMOTE techniques. Results suggest that applying over-sampling strategies improves the overall performance of the SVM in classifying documents. Based on the empirical and statistical studies, the new method clearly outperforms the baseline (ROS) and offers greater performance than SMOTE in the majority of the tested cases.
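The generator idea can be illustrated with a deliberately simplified stand-in: a first-order Markov chain over words instead of the paper's HMM. Synthetic minority-class documents are sampled from transitions learned on the minority corpus (all names and the toy corpus are ours):

```python
import random
from collections import defaultdict

START, END = "<s>", "</s>"

def train_generator(docs):
    # First-order Markov chain: transitions[w] -> successors observed after w.
    transitions = defaultdict(list)
    for doc in docs:
        words = [START] + doc.split() + [END]
        for a, b in zip(words, words[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, rng, max_len=20):
    # Walk the chain from START, sampling successors seen in training.
    words, current = [], START
    while len(words) < max_len:
        current = rng.choice(transitions[current])
        if current == END:
            break
        words.append(current)
    return " ".join(words)

def oversample(minority_docs, n_new, seed=0):
    # Produce n_new synthetic documents for the minority class.
    rng = random.Random(seed)
    transitions = train_generator(minority_docs)
    return [generate(transitions, rng) for _ in range(n_new)]

minority = ["rare disease case report", "rare disease treatment report"]
synthetic = oversample(minority, 3)
```

Every generated document is recombined from minority-class content, which is the property that distinguishes content-based over-sampling from simple duplication (ROS).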
88.
An adaptive local binarization method for document images based on a novel thresholding method and dynamic windows
Bilal Bataineh, Siti Norul Huda Sheikh Abdullah, Khairuddin Omar 《Pattern recognition letters》2011,32(14):1805-1813
Binary image representation is an essential format for document analysis. In general, different binarization techniques are applied to different types of binarization problems. Most binarization techniques are complex, compounded from filters and existing operations, while the few simple thresholding methods available cannot be applied to many binarization problems. In this paper, we propose a local binarization method based on a simple, novel thresholding method with dynamic and flexible windows. The proposed method is tested on selected samples from the DIBCO 2009 benchmark dataset using evaluation techniques specialized for binarization processes. To evaluate the performance of our proposed method, we compared it with the Niblack, Sauvola, and NICK methods. The experimental results show that the proposed method adapts well to all types of binarization challenges, can deal with a larger number of binarization problems, and boosts overall binarization performance.
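Local thresholding of the kind compared here can be sketched with Niblack's classic rule, T(x, y) = mean + k·stddev over a sliding window. This shows the baseline family of methods, not the paper's novel threshold; k = -0.2 is a commonly used value:

```python
import math

def local_threshold(image, window=3, k=-0.2):
    # Niblack-style binarization on a grayscale grid (lists of ints):
    # each pixel is compared against a threshold computed from the
    # mean and standard deviation of its local window.
    h, w = len(image), len(image[0])
    half = window // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [image[cy][cx]
                    for cy in range(max(0, y - half), min(h, y + half + 1))
                    for cx in range(max(0, x - half), min(w, x + half + 1))]
            mean = sum(vals) / len(vals)
            std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
            t = mean + k * std
            row.append(0 if image[y][x] <= t else 1)  # 0 = ink, 1 = background
        out.append(row)
    return out

# One dark "ink" pixel on a bright background.
image = [[200, 200, 200], [200, 40, 200], [200, 200, 200]]
binary = local_threshold(image)
```

The dynamic-window idea in the paper addresses a known weakness of this fixed-window rule: in flat background regions the deviation term collapses and Niblack misclassifies noise as ink.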
89.
Roberto Grossi 《Theoretical computer science》2011,412(27):2964-2973
Suffix arrays are a key data structure for solving a range of problems on texts and sequences, from data compression and information retrieval to biological sequence analysis and pattern discovery. In their simplest version, they can be seen as a permutation of the elements in {1,2,…,n}, encoding the sorted sequence of suffixes of a given text of length n under the lexicographic order. Yet they are on a par with the ubiquitous and sophisticated suffix trees. Over the years, many interesting combinatorial properties have been devised for this special class of permutations: for instance, they can implicitly encode extra information, and they form a well-characterized subset of the n! permutations. This paper gives a short tutorial on suffix arrays and their compressed version, exploring and reviewing some of their algorithmic features and discussing the space issues related to their usage in text indexing, combinatorial pattern matching, and data compression.
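A standard text-indexing use of a suffix array: all occurrences of a pattern correspond to one contiguous block of the array, locatable with two binary searches. A sketch with naive construction (a compressed suffix array, as discussed in the tutorial, would store the same information in far less space):

```python
def suffix_array(text):
    # Naive O(n^2 log n) construction by sorting suffix start positions.
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, pattern):
    # Suffixes sharing the prefix `pattern` are contiguous in sa;
    # find that block's lower and upper bounds by binary search.
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:                       # lower bound
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    hi = len(sa)
    while lo < hi:                       # upper bound
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

text = "abracadabra"
sa = suffix_array(text)
print(find_occurrences(text, sa, "abra"))  # -> [0, 7]
```

Each probe costs O(m) for the prefix comparison, giving O(m log n) per query without any auxiliary structure beyond the array itself.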
90.