Full-text access type
Paid full text | 1393 articles |
Free | 57 articles |
Free (domestic) | 98 articles |
Subject classification
Electrical engineering | 15 articles |
General | 50 articles |
Chemical industry | 6 articles |
Metalworking | 3 articles |
Machinery & instrumentation | 11 articles |
Architecture & building science | 28 articles |
Mining engineering | 5 articles |
Energy & power engineering | 3 articles |
Light industry | 4 articles |
Water conservancy engineering | 1 article |
Weapons industry | 2 articles |
Radio & electronics | 121 articles |
General industrial technology | 39 articles |
Metallurgy | 4 articles |
Automation technology | 1256 articles |
Publication year
2024 | 1 article |
2023 | 9 articles |
2022 | 19 articles |
2021 | 43 articles |
2020 | 44 articles |
2019 | 23 articles |
2018 | 32 articles |
2017 | 36 articles |
2016 | 55 articles |
2015 | 48 articles |
2014 | 88 articles |
2013 | 68 articles |
2012 | 95 articles |
2011 | 108 articles |
2010 | 62 articles |
2009 | 94 articles |
2008 | 112 articles |
2007 | 103 articles |
2006 | 95 articles |
2005 | 61 articles |
2004 | 70 articles |
2003 | 65 articles |
2002 | 36 articles |
2001 | 29 articles |
2000 | 29 articles |
1999 | 24 articles |
1998 | 12 articles |
1997 | 9 articles |
1996 | 8 articles |
1995 | 6 articles |
1994 | 6 articles |
1993 | 5 articles |
1992 | 4 articles |
1991 | 2 articles |
1989 | 3 articles |
1988 | 1 article |
1987 | 4 articles |
1986 | 3 articles |
1985 | 2 articles |
1984 | 3 articles |
1983 | 1 article |
1982 | 7 articles |
1981 | 6 articles |
1980 | 8 articles |
1979 | 1 article |
1978 | 2 articles |
1977 | 3 articles |
1974 | 1 article |
1973 | 1 article |
1972 | 1 article |
Sort order: 1548 results found, search time: 8 ms
41.
42.
Finite mixture models have been applied to a variety of computer vision, image processing and pattern recognition tasks. The majority of work on finite mixture models has focused on mixtures for continuous data. However, many applications involve and generate discrete data, for which discrete mixtures are better suited. In this paper, we investigate the problem of discrete data modeling using finite mixture models. We propose a novel, well-motivated mixture that we call the multinomial generalized Dirichlet mixture. The novel model is compared with other discrete mixtures. We designed experiments involving spatial color image database modeling and summarization, and text classification, to show the robustness, flexibility and merits of our approach.
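As a rough illustration of the discrete-mixture modeling this abstract describes (a plain multinomial mixture fitted by EM, i.e. the kind of baseline discrete mixture the proposed model is compared against, not the generalized Dirichlet model itself), a minimal sketch might look like:

```python
import numpy as np

def multinomial_mixture_em(X, k, n_iter=50, seed=0):
    """EM for a mixture of k multinomials over count data X (n_docs x vocab)."""
    rng = np.random.default_rng(seed)
    n, v = X.shape
    pi = np.full(k, 1.0 / k)                   # mixing weights
    theta = rng.dirichlet(np.ones(v), size=k)  # per-component word probabilities
    for _ in range(n_iter):
        # E-step: responsibilities from unnormalized multinomial log-likelihoods
        log_r = np.log(pi) + X @ np.log(theta).T
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and smoothed word probabilities
        pi = r.mean(axis=0)
        theta = (r.T @ X) + 1e-9
        theta /= theta.sum(axis=1, keepdims=True)
    return pi, theta, r
```

On clearly separated count data the responsibilities `r` recover the cluster structure.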
43.
In this paper we formulate a least squares version of the recently proposed twin support vector machine (TSVM) for binary classification. This formulation leads to an extremely simple and fast algorithm for generating binary classifiers based on two non-parallel hyperplanes. Here we attempt to solve two modified primal problems of TSVM, instead of the two dual problems usually solved. We show that the solution of the two modified primal problems reduces to solving just two systems of linear equations, as opposed to solving two quadratic programming problems along with two systems of linear equations in TSVM. Classification using a nonlinear kernel also leads to systems of linear equations. Our experiments on publicly available datasets indicate that the proposed least squares TSVM has classification accuracy comparable to that of TSVM but with considerably less computation time. Since linear least squares TSVM can easily handle large datasets, we further investigated its efficiency for text categorization applications. Computational results demonstrate the effectiveness of the proposed method over linear proximal SVM on all the text corpora considered.
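The key point above, that each of the two non-parallel hyperplanes comes from a single linear system rather than a quadratic program, can be sketched as follows. This is a simplified least-squares twin-plane formulation under assumed objective weights, not the paper's exact primal problems:

```python
import numpy as np

def lstsvm_planes(A, B, c=1.0):
    """Least-squares twin-SVM-style classifier: fit one hyperplane close to
    class A and pushed away from class B (and vice versa), each by solving
    one linear system. A, B are (n_samples x n_features) class matrices."""
    def plane(P, Q):
        # Minimize ||[P 1] z||^2 + c * ||[Q 1] z + 1||^2 over z = (w, b);
        # setting the gradient to zero gives one symmetric linear system.
        F = np.hstack([P, np.ones((len(P), 1))])
        E = np.hstack([Q, np.ones((len(Q), 1))])
        M = F.T @ F + c * (E.T @ E) + 1e-8 * np.eye(F.shape[1])
        z = np.linalg.solve(M, -c * E.T @ np.ones(len(Q)))
        return z[:-1], z[-1]
    (w1, b1), (w2, b2) = plane(A, B), plane(B, A)
    def predict(X):
        # Assign each point to the class of its nearer hyperplane
        d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
        d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
        return np.where(d1 <= d2, 1, -1)
    return predict
```

The contrast with standard TSVM is that no quadratic programming solver is needed, only `np.linalg.solve`.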
44.
Mining large amounts of unstructured data to extract meaningful, accurate, and actionable information is at the core of a variety of research disciplines, including computer science, mathematical and statistical modelling, and knowledge engineering. In particular, the ability to model complex scenarios based on unstructured datasets is an important step towards an integrated and accurate knowledge extraction approach. This would provide significant insight into any decision-making process driven by Big Data analysis activities. However, there are multiple challenges that need to be fully addressed in order to achieve this, especially when large and unstructured data sets are considered. In this article we propose and analyse a novel method to extract and build fragments of Bayesian networks (BNs) from unstructured large data sources. The results of our analysis show the potential of our approach, and highlight its accuracy and efficiency. More specifically, when compared with existing approaches, our method addresses specific challenges posed by the automated extraction of BNs with extensive applications to unstructured and highly dynamic data sources. The aim of this work is to advance the current state-of-the-art approaches to the automated extraction of BNs from unstructured datasets, which provide a versatile and powerful modelling framework to facilitate knowledge discovery in complex decision scenarios.
45.
In recent years, broadband access technology has been developing at an astonishing pace. At the same time, mobile Internet technology has matured, and instant messaging has become a hot application of the mobile Internet era. XMPP and SIMPLE are widely used on the Internet, but neither is well suited to the mobile Internet. The MQTT protocol, which adopts a publish/subscribe model, is a lightweight message transport protocol with the advantages of low power consumption, low bandwidth usage, and strong extensibility. This paper first analyses the shortcomings of the XMPP and SIMPLE protocols, then studies the MQTT message format and the ways the protocol is used, and goes on to design and implement an instant messaging service. Functional and performance tests and analyses are also presented.
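The publish/subscribe model that distinguishes MQTT from request-based protocols can be illustrated with a toy in-memory broker. This is a sketch of the model only, not of the MQTT wire protocol: there is no network transport, no QoS, and only the single-level `+` wildcard is implemented:

```python
from collections import defaultdict

class TinyBroker:
    """Toy in-memory broker illustrating the publish/subscribe model:
    publishers and subscribers are decoupled through topic filters."""
    def __init__(self):
        self.subs = defaultdict(list)  # topic filter -> list of callbacks

    def subscribe(self, topic_filter, callback):
        self.subs[topic_filter].append(callback)

    def publish(self, topic, payload):
        # Deliver to every subscriber whose filter matches the topic
        for filt, callbacks in self.subs.items():
            if self._matches(filt, topic):
                for cb in callbacks:
                    cb(topic, payload)

    @staticmethod
    def _matches(filt, topic):
        # '+' matches exactly one topic level, as in MQTT topic filters
        f, t = filt.split("/"), topic.split("/")
        return len(f) == len(t) and all(a in ("+", b) for a, b in zip(f, t))
```

An instant-messaging client would subscribe to its own inbox topic (e.g. `im/alice/inbox`) and publish to peers' inbox topics, so senders never need a direct connection to receivers.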
46.
Zhixing Li. Pattern Recognition Letters, 2011, 32(3): 441-448
Text representation is a necessary procedure for text categorization tasks. Currently, bag of words (BOW) is the most widely used text representation method, but it suffers from two drawbacks. First, the quantity of words is huge; second, it is not feasible to calculate the relationships between words. Semantic analysis (SA) techniques help BOW overcome these two drawbacks by interpreting words and documents in a space of concepts. However, existing SA techniques are not designed for text categorization and often incur huge computing costs. This paper proposes a concise semantic analysis (CSA) technique for text categorization tasks. CSA extracts a few concepts from category labels and then performs concise interpretation of words and documents. These concepts are small in quantity, great in generality, and tightly related to the category labels. Therefore, CSA preserves the information classifiers need at very low computing cost. To evaluate CSA, experiments on three data sets (Reuters-21578, 20-NewsGroup and Tancorp) were conducted, and the results show that CSA reaches micro- and macro-F1 performance comparable with BOW, if not better. Experiments also show that CSA helps dimension-sensitive learning algorithms such as k-nearest neighbor (kNN) to escape the "Curse of Dimensionality" and, as a result, reach performance comparable with support vector machine (SVM) in text categorization applications. In addition, CSA is language independent and performs equally well in both Chinese and English.
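One plausible reading of the "concepts from category labels" idea can be sketched as follows: build one concept vector per category (here, simply the centroid of that category's term-frequency vectors) and then represent any document by its similarity to each concept. This is an illustrative simplification, not the paper's exact CSA construction:

```python
import numpy as np
from collections import Counter

def build_concepts(docs, labels):
    """One concept per category label: the L2-normalized centroid of the
    term-frequency vectors of that category's training documents.
    Returns the category list and an interpretation function that maps a
    document into the (low-dimensional) concept space."""
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}

    def tf(d):
        v = np.zeros(len(vocab))
        for w, c in Counter(d.split()).items():
            if w in idx:           # ignore out-of-vocabulary words
                v[idx[w]] += c
        return v

    cats = sorted(set(labels))
    concepts = np.array([
        np.mean([tf(d) for d, l in zip(docs, labels) if l == c], axis=0)
        for c in cats])
    concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)

    def interpret(doc):
        # Cosine similarity of the document to each concept
        v = tf(doc)
        n = np.linalg.norm(v)
        return concepts @ (v / n) if n else np.zeros(len(cats))

    return cats, interpret
```

Note the dimensionality of the representation equals the number of categories, which is why a kNN-style classifier on top of it sidesteps the high-dimensional BOW space.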
47.
An adaptive local binarization method for document images based on a novel thresholding method and dynamic windows (total citations: 1; self-citations: 0; citations by others: 1)
Bilal Bataineh, Siti Norul Huda Sheikh Abdullah, Khairuddin Omar. Pattern Recognition Letters, 2011, 32(14): 1805-1813
Binary image representation is an essential format for document analysis. In general, different available binarization techniques are implemented for different types of binarization problems. The majority of binarization techniques are complex and are composed of filters and existing operations. However, the few simple thresholding methods available cannot be applied to many binarization problems. In this paper, we propose a local binarization method based on a simple, novel thresholding method with dynamic and flexible windows. The proposed method is tested on selected samples from the DIBCO 2009 benchmark dataset using specialized evaluation techniques for binarization processes. To evaluate the performance of our proposed method, we compared it with the Niblack, Sauvola and NICK methods. The results of the experiments show that the proposed method adapts well to all types of binarization challenges, can deal with a greater number of binarization problems, and boosts overall binarization performance.
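For context, the Sauvola method used as a baseline above computes a per-pixel threshold T = m * (1 + k * (s/R - 1)) from the local mean m and standard deviation s of a sliding window. A brute-force sketch (real implementations use integral images for speed, and k, R, window size are conventional defaults, not the paper's values):

```python
import numpy as np

def sauvola_binarize(img, window=15, k=0.2, R=128.0):
    """Local Sauvola thresholding of a grayscale image (float array).
    Each pixel is compared against T = m * (1 + k * (s/R - 1)), where m and
    s are the mean and std of the (window x window) neighbourhood."""
    h, w = img.shape
    r = window // 2
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            m, s = patch.mean(), patch.std()
            T = m * (1 + k * (s / R - 1))
            out[y, x] = 255 if img[y, x] > T else 0  # background=255, ink=0
    return out
```

Because the threshold adapts to local statistics, dark ink on an unevenly lit page is separated even where a single global threshold would fail.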
48.
Identifying time periods with a burst of activities related to a topic has been an important problem in analyzing time-stamped documents. In this paper, we propose an approach to extract a hot spot of a given topic in a time-stamped document set. Topics can be basic, containing a simple list of keywords, or complex. Logical relationships such as and, or, and not are used to build complex topics from basic topics. A concept of presence measure of a topic based on fuzzy set theory is introduced to compute the amount of information related to the topic in the document set. Each interval in the time period of the document set is associated with a numeric value which we call the discrepancy score. A high discrepancy score indicates that the documents in the time interval are more focused on the topic than those outside of the time interval. A hot spot of a given topic is defined as a time interval with the highest discrepancy score. We first describe a naive implementation for extracting hot spots. We then construct an algorithm called EHE (Efficient Hot Spot Extraction) using several efficient strategies to improve performance. We also introduce the notion of a topic DAG to facilitate an efficient computation of presence measures of complex topics. The proposed approach is illustrated by several experiments on a subset of the TDT-Pilot Corpus and DBLP conference data set. The experiments show that the proposed EHE algorithm significantly outperforms the naive one, and the extracted hot spots of given topics are meaningful.
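The naive implementation mentioned above can be sketched in a few lines: given a precomputed presence measure per time unit, score every interval and keep the best. The discrepancy score used here (mean presence inside the interval minus mean presence outside) is one plausible choice for illustration, not the paper's exact definition:

```python
def hot_spot(presence):
    """Naive hot-spot extraction. presence[t] measures how strongly the
    documents at time t relate to the topic; the hot spot is the interval
    (i, j) whose mean presence most exceeds the mean presence outside it.
    O(n^2) over all intervals; EHE-style pruning would speed this up."""
    n = len(presence)
    total = sum(presence)
    best, best_score = (0, 0), float("-inf")
    for i in range(n):
        run = 0.0
        for j in range(i, n):
            run += presence[j]          # sum of presence inside [i, j]
            size = j - i + 1
            inside = run / size
            outside = (total - run) / (n - size) if size < n else 0.0
            score = inside - outside    # illustrative discrepancy score
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score
```

On a presence series with one clear burst, the maximizing interval lines up with the burst.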
49.
Roberto Grossi. Theoretical Computer Science, 2011, 412(27): 2964-2973
Suffix arrays are a key data structure for solving a range of problems on texts and sequences, from data compression and information retrieval to biological sequence analysis and pattern discovery. In their simplest version, they can be seen as just a permutation of the elements in {1,2,…,n}, encoding the sorted sequence of suffixes of a given text of length n under the lexicographic order. Yet, they are on a par with the ubiquitous and sophisticated suffix trees. Over the years, many interesting combinatorial properties have been devised for this special class of permutations: for instance, they can implicitly encode extra information, and they are a well-characterized subset of the n! permutations. This paper gives a short tutorial on suffix arrays and their compressed version to explore and review some of their algorithmic features, discussing the space issues related to their usage in text indexing, combinatorial pattern matching, and data compression.
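The "permutation encoding the sorted suffixes" view above is concrete enough to code directly. A minimal sketch (the naive O(n² log n) construction; the tutorial's point is precisely that far more space- and time-efficient constructions exist):

```python
def suffix_array(text):
    """The suffix array of `text`: starting positions of all suffixes,
    listed in lexicographic order of the suffixes."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def search(text, sa, pattern):
    """Find all occurrences of `pattern` via binary search on the suffix
    array: matches form a contiguous block of sorted suffixes."""
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:                       # lower bound of the block
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(sa)
    while lo < hi:                       # upper bound of the block
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])
```

For "banana" the suffix array is [5, 3, 1, 0, 4, 2] (suffixes "a", "ana", "anana", "banana", "na", "nana"), and searching "ana" returns both occurrence positions.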
50.