Sort order: 10,000 query results in total; search took 15 ms.
991.
Fudong Nian, Teng Li, Xinyu Wu, Qingwei Gao, Feifeng Li 《Multimedia Tools and Applications》2016,75(5):2435-2452
Efficient near-duplicate image detection is important for applications where feature extraction and matching must be performed online. Most image representations designed for conventional image retrieval are either computationally expensive to extract and match or limited in robustness. To address this problem, we propose an effective and efficient local-feature-based method that encodes an image as a binary vector, called Local-based Binary Representation (LBR). Local regions are densely extracted from the image, and each region is converted to a simple yet effective feature describing its texture. A statistical histogram is computed over all local features and then encoded into a binary vector that serves as the holistic image representation. The proposed representation jointly exploits local region texture and the global visual distribution of the image, so that a similarity measure over it detects near-duplicate images effectively. The binary encoding scheme not only greatly speeds up online computation but also reduces memory cost in real applications. In experiments, the precision, recall, and computational time of the proposed method are compared with other state-of-the-art image representations, and LBR shows clear advantages on online near-duplicate image detection and video keyframe detection tasks.
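The pipeline the abstract describes (dense local texture features, a global histogram, binarisation, Hamming-style matching) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual descriptor: the patch-variance texture statistic, the 64-bin quantisation, and the mean-threshold binarisation are all assumptions chosen for brevity.

```python
import numpy as np

def local_texture_features(img, patch=8):
    """Densely sample non-overlapping patches and describe each by a
    simple texture statistic (quantised patch variance) -- a stand-in
    for the paper's local descriptor."""
    h, w = img.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = img[y:y + patch, x:x + patch]
            feats.append(int(p.var()) % 64)  # map to one of 64 texture bins
    return feats

def lbr(img, bins=64):
    """Histogram the local features, then binarise each bin against the
    mean bin count to obtain the holistic binary vector."""
    hist = np.bincount(local_texture_features(img), minlength=bins)
    return (hist > hist.mean()).astype(np.uint8)

def hamming_similarity(a, b):
    """Fraction of matching bits between two binary vectors."""
    return 1.0 - np.count_nonzero(a != b) / len(a)
```

Comparing two images then reduces to XOR-style bit operations, which is where the claimed speed and memory savings come from.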
992.
In this paper, a hierarchical dependency context model (HDCM) is first proposed to exploit the statistical correlations of DCT (Discrete Cosine Transform) coefficients in the H.264/AVC video coding standard, in which the number of non-zero coefficients in a DCT block and the scan position are used to capture the magnitude-varying tendency of DCT coefficients. A new binary arithmetic coder using this hierarchical dependency context model (HDCMBAC) is then proposed. HDCMBAC combines HDCM with binary arithmetic coding to code the syntax elements of a DCT block: the number of non-zero coefficients, the significance flags, and the level information. Experimental results demonstrate that HDCMBAC achieves coding performance similar to CABAC at both low and high QPs (quantization parameters). Meanwhile, context modeling and arithmetic decoding in HDCMBAC can be carried out in parallel, since context dependency exists only among different parts of the basic syntax elements in HDCM.
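The core idea of context modeling in such a coder is to pick an adaptive probability estimate for each bin from features of the block already decoded. A toy sketch of that mechanism, using the two features HDCM relies on (non-zero count and scan position): the bin boundaries here are hypothetical, not the paper's actual context table.

```python
def hdcm_context(num_nonzero, scan_pos, block_size=16):
    """Map (number of non-zero coefficients, scan position) to a small
    context index by coarse binning -- illustrative binning only."""
    nz_bin = 0 if num_nonzero <= 2 else 1 if num_nonzero <= 6 else 2
    pos_bin = min(scan_pos * 4 // block_size, 3)
    return nz_bin * 4 + pos_bin

class BinaryModel:
    """One adaptive probability estimate per context, updated as bins
    are coded (a simple counting model; real coders use faster
    state-machine approximations)."""
    def __init__(self, n_contexts):
        self.ones = [1] * n_contexts
        self.total = [2] * n_contexts

    def p_one(self, ctx):
        return self.ones[ctx] / self.total[ctx]

    def update(self, ctx, bit):
        self.ones[ctx] += bit
        self.total[ctx] += 1
```

The parallelism claim follows from the context index depending only on already-known block features, not on the bin currently being decoded.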
993.
994.
995.
Lingyu Yan, Fuhao Zou, Rui Guo, Lianli Gao, Ke Zhou, Chunzhi Wang 《World Wide Web》2016,19(2):217-229
Current research on content-based image copy detection focuses mainly on robust feature extraction. However, given the exponential growth of online images, searching among large-scale collections is time-consuming and hard to scale, so the efficiency of detection deserves equal attention. In this paper, we propose a fast feature-aggregation method for image copy detection that uses machine-learning-based hashing to achieve fast feature aggregation. Because the learned hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes make building the image representation low-complexity, keeping it efficient and scalable to large databases. Experimental results show the good performance of our approach.
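The hash-then-aggregate structure can be sketched as below. As a stand-in for the paper's learned hash, this uses random hyperplane projections (classic LSH); a learning-based method would instead fit the projections to preserve neighborhood structure, but the surrounding pipeline is the same.

```python
import numpy as np

def train_hash(features, n_bits=32, seed=0):
    """Stand-in for a learned hash function: random hyperplanes in the
    descriptor space (sign-of-projection LSH)."""
    rng = np.random.default_rng(seed)
    dim = features.shape[1]
    return rng.standard_normal((dim, n_bits))

def hash_codes(features, planes):
    """Binary code per local descriptor: sign of each projection."""
    return (features @ planes > 0).astype(np.uint8)

def aggregate(codes):
    """Aggregate per-descriptor codes into one image-level binary
    signature by per-bit majority vote."""
    return (codes.mean(axis=0) >= 0.5).astype(np.uint8)
```

An image is then represented by a single short binary signature, so copy candidates can be compared by Hamming distance rather than exhaustive descriptor matching.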
996.
Lina Yao, Quan Z. Sheng, Anne H. H. Ngu, Byron J. Gao, Xue Li, Sen Wang 《World Wide Web》2016,19(6):1125-1149
Automatic annotation is an essential technique for effectively handling and organizing Web objects (e.g., Web pages), which have grown at an unprecedented rate over the last few years. Automatic annotation is usually formulated as a multi-label classification problem. Unfortunately, labeled data are often time-consuming and expensive to obtain, and Web data occupy a much richer feature space. This calls for new semi-supervised approaches that are less demanding of labeled data yet still effective in classification. In this paper, we propose a graph-based semi-supervised learning approach that leverages random walks and ℓ1 sparse reconstruction on a mixed object-label graph, with both attribute and structure information, for effective multi-label classification. The mixed graph contains an object-affinity subgraph, a label-correlation subgraph, and object-label edges with adaptive weights indicating assignment relationships. The object-affinity subgraph is constructed by ℓ1 sparse graph reconstruction from extracted structural meta-text, while the label-correlation subgraph captures pairwise correlations among labels via a linear combination of their co-occurrence similarity and kernel-based similarity. A random walk with adaptive weight assignment is then performed on the constructed mixed graph to infer probabilistic assignment relationships between labels and objects. Extensive experiments on real Yahoo! Web datasets demonstrate the effectiveness of our approach.
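The inference step, a random walk with restart on the mixed graph, can be sketched as a short power iteration. This is a generic illustration, assuming a dense weight matrix `W` over all nodes (objects and labels together) and a restart distribution concentrated on the object being annotated; the paper's adaptive weight assignment is abstracted into `W`.

```python
import numpy as np

def random_walk_with_restart(W, restart, alpha=0.85, iters=100):
    """Power iteration of a random walk with restart on a graph given
    by non-negative weight matrix W.  Rows of W are normalised into
    transition probabilities; `restart` is the teleport distribution."""
    P = W / W.sum(axis=1, keepdims=True)          # row-stochastic transitions
    p = restart.copy()
    for _ in range(iters):
        p = alpha * (P.T @ p) + (1 - alpha) * restart
    return p                                       # stationary visit probabilities
```

After convergence, the probability mass sitting on label nodes gives the probabilistic label assignments for the restarting object.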
997.
Complex queries are widely used in current Web applications. They express highly specific information needs, but simply aggregating the meanings of primitive visual concepts does not perform well. To facilitate image search with complex queries, we propose a new image reranking scheme based on concept relevance estimation, which consists of Concept-Query and Concept-Image probabilistic models. Each model comprises visual, Web, and text relevance estimation. Our method takes a weighted sum of the underlying relevance scores to obtain a new ranking list. To account for Web semantic context, we model concepts by leveraging lexical and corpus-dependent knowledge, such as WordNet and Wikipedia, together with co-occurrence statistics of tags in our Flickr corpus. Experimental results show that our scheme significantly outperforms existing state-of-the-art approaches.
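The fusion step itself, a weighted sum of per-source relevance scores followed by a re-sort, is simple enough to show directly. The score sources and weights below are placeholders; how each relevance score is estimated is the substance of the paper.

```python
def rerank(images, score_sources, weights):
    """Fuse several relevance scores (e.g. visual, web, text) per image
    by weighted sum, then sort descending by the fused score."""
    fused = {img: sum(w * s[img] for w, s in zip(weights, score_sources))
             for img in images}
    return sorted(images, key=lambda img: -fused[img])
```

For example, an image that scores poorly on visual relevance can still be promoted when its text-based relevance, weighted more heavily, dominates the sum.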
998.
Recently, uncertain-graph data management and mining techniques have attracted significant interest and research effort, driven by applications such as protein-interaction networks and social networks. As a fundamental problem, subgraph similarity all-matching is widely applied in exploratory data analysis; its purpose is to find all similarity occurrences of a query graph in a large data graph. Numerous algorithms and pruning methods have been developed for subgraph matching over certain graphs, but insufficient effort has been devoted to subgraph similarity all-matching over an uncertain data graph, which is quite challenging due to its high computation cost. In this paper, we define the problem of subgraph similarity maximal all-matching over a large uncertain data graph and propose a framework to solve it. To further improve efficiency, we propose several speed-up techniques: partial graph evaluation, vertex pruning, calculation-model transformation, incremental evaluation, and probability upper-bound filtering. Finally, comprehensive experiments on real graph data test the performance of our framework and optimization methods. The results verify that our solutions outperform the basic approach by orders of magnitude in efficiency.
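Of the listed speed-ups, probability upper-bound filtering is the easiest to illustrate. Under the common assumption of independent edge existence probabilities, the product of a candidate embedding's edge probabilities upper-bounds its existence probability, so any candidate whose running product drops below the threshold can be discarded without exact (expensive) evaluation. The data layout here (candidates as tuples of edge ids) is a simplification for illustration.

```python
def upper_bound_filter(candidates, edge_probs, threshold):
    """Prune candidate embeddings whose existence probability cannot
    reach `threshold`.  The running product of edge probabilities is an
    upper bound, so we can break out early as soon as it falls below
    the threshold."""
    survivors = []
    for cand in candidates:
        bound = 1.0
        for edge in cand:
            bound *= edge_probs[edge]
            if bound < threshold:
                break          # bound can only shrink further
        if bound >= threshold:
            survivors.append(cand)
    return survivors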
999.
Knowledge collaboration (KC) is an important strategic measure for improving knowledge management, concerned not only with the efficiency of knowledge cooperation but also with the added value of intellectual and social capital. In virtual teams, many factors, such as a team's network characteristics, collaborative culture, and individual collaborative intention, affect KC performance. By examining the nature of KC, this paper argues that its performance can be measured along two dimensions: effectiveness of collaboration and efficiency of cooperation. Effectiveness of collaboration is measured through value added, while efficiency of cooperation is measured through accuracy and timeliness. The paper then discusses the factors affecting KC performance in terms of network characteristics, individual attributes, and team attributes. The results show that network characteristics, individual attributes, and team attributes in virtual teams all have significant impacts on KC performance.
1000.
Nan Guo, Tianhan Gao, Hwagyoo Park 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2016,20(5):1781-1791
Attribute proofs in anonymous credential systems are an effective way to balance security and privacy in user authentication; however, the linear complexity of attribute proofs keeps existing anonymous credential systems far from practical, especially on resource-limited smart devices. For efficiency, we present a novel pairing-based anonymous credential system that overcomes the linear complexity of attribute proofs by means of an aggregate signature scheme. We propose two extended signature schemes, BLS+ and BGLS+, as cryptographic building blocks for constructing anonymous credentials in the random-oracle model. Identity-like information of the message holder is encoded in each signature so that the holder can prove possession of the input message along with the validity of the signature. We present an issuance protocol for anonymous credentials embedding weak attributes, i.e., attributes that cannot identify a user within a population. Users can prove any combination of attributes at once by aggregating the corresponding individual credentials into one. Attribute-proof protocols for AND and OR relations over multiple attributes are also given. Performance analysis shows that the aggregation-based anonymous credential system outperforms both the conventional Camenisch-Lysyanskaya pairing-based system and the accumulator-based system when proving AND and OR relations over multiple attributes, and its credentials and public parameters are shorter as well.