Sort order: 10,000 results in total (search time: 15 ms)
931.
Copyright protection and information security have become serious problems due to the ever-growing amount of digital data on the Internet. Reversible data hiding is a special type of data hiding technique that guarantees that not only the secret data but also the cover media can be reconstructed without any distortion. Traditional schemes operate in the spatial, discrete cosine transform (DCT), and discrete wavelet transform (DWT) domains. Recently, several vector quantization (VQ) based reversible data hiding schemes have been proposed. This paper proposes an improved reversible data hiding scheme based on VQ-index residual value coding. Experimental results show that our scheme outperforms two recently proposed schemes, namely side-match vector quantization (SMVQ)-based data hiding and modified fast correlation vector quantization (MFCVQ)-based data hiding.
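The abstract does not give the scheme's details, but the reversibility idea behind such VQ-index schemes can be sketched with classic histogram shifting applied to the residuals between consecutive indices. This is a minimal illustration, not the paper's algorithm: the peak and empty bins p and z are assumed to travel as side information, capacity checking and index-overflow handling are omitted, and all sample values are invented.

```python
import numpy as np

def embed(indices, bits):
    """Hide bits in a VQ index sequence via histogram shifting of residuals.
    Assumes len(bits) does not exceed the number of peak-bin residuals."""
    x = np.asarray(indices, dtype=np.int64)
    r = np.diff(x, prepend=x[:1])                # r[0] = 0, r[i] = x[i] - x[i-1]
    vals, counts = np.unique(r[1:], return_counts=True)
    p = int(vals[np.argmax(counts)])             # peak bin: most frequent residual
    taken = set(vals.tolist())
    z = p + 1
    while z in taken:                            # nearest empty bin above the peak
        z += 1
    k = 0
    for i in range(1, len(r)):
        if p < r[i] < z:
            r[i] += 1                            # shift (p, z) up to free bin p+1
        elif r[i] == p and k < len(bits):
            r[i] += bits[k]                      # embed one bit: p -> p or p+1
            k += 1
    return x[0] + np.cumsum(r), p, z             # stego indices plus side info

def extract(stego, p, z, n_bits):
    """Recover the hidden bits and restore the original index sequence."""
    y = np.asarray(stego, dtype=np.int64)
    r = np.diff(y, prepend=y[:1])
    bits = []
    for i in range(1, len(r)):
        if r[i] == p and len(bits) < n_bits:
            bits.append(0)                       # untouched peak bin carried a 0
        elif r[i] == p + 1 and len(bits) < n_bits:
            bits.append(1)                       # expanded peak bin carried a 1
            r[i] = p
        elif p + 1 < r[i] <= z:
            r[i] -= 1                            # undo the shift
    return y[0] + np.cumsum(r), bits

y, p, z = embed([3, 5, 5, 6, 8, 8, 9], [1, 0])
x, bits = extract(y, p, z, 2)    # x == [3, 5, 5, 6, 8, 8, 9], bits == [1, 0]
```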
932.
In this paper, we report our experience with the use of phrases as basic features in the email classification problem. We performed an extensive empirical evaluation using our large email collections and tested three text classification algorithms: a naive Bayes classifier and two k-NN classifiers, using TF-IDF weighting and resemblance respectively. The investigation includes studies on the effect of phrase size, the size of local and global sampling, the neighbourhood size, and various methods to improve classification accuracy. We determined suitable settings for the various parameters of the classifiers and compared the classifiers at their best settings. Our results show that no classifier dominates the others in terms of classification accuracy. We also made a number of observations on the special characteristics of emails; in particular, we observed that public emails are easier to classify than private ones.
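As a rough companion illustration (not the paper's implementation, corpus, or phrase-extraction method), a phrase-based naive Bayes email classifier can be assembled with scikit-learn by indexing word n-grams as TF-IDF features; the example emails and labels below are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# toy corpus; a real evaluation would use a large labelled email collection
emails = [
    "please find the meeting agenda attached",
    "limited offer buy cheap meds now",
    "weekly status report for the project",
    "you have won a free prize click here",
]
labels = ["work", "spam", "work", "spam"]

# ngram_range=(1, 2) indexes single words plus two-word phrases,
# weighted by TF-IDF before the naive Bayes step
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(emails, labels)
print(clf.predict(["free meds offer"]))   # expected: ['spam']
```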
933.
Versioning is an important aspect of web service development that has not been adequately addressed so far. In this article, we propose extensions to WSDL and UDDI to support versioning of web service interfaces at development time and run time. We address service-level and operation-level versioning, service endpoint mapping, and version sequencing. We also propose annotation extensions for developing versioned web services in Java. We have tested the proposed solution in two real-world environments and identified considerable improvements in service development and maintenance efficiency, improved service reuse, and simplified governance.
934.
Software product line development has emerged as a leading approach to software reuse. This paper describes an approach to managing natural-language requirements specifications in a software product line context. Variability in such product line specifications is modeled and managed using a feature model. The proposed approach has been introduced in the Swedish defense industry. We present a multiple-case study covering two different product lines with a total of eight product instances; these were compared with experiences from previous projects in the organization that employed clone-and-own reuse. We conclude that the proposed product line approach performs better than clone-and-own reuse of requirements specifications in this particular industrial context.
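As a toy illustration of the mechanism (all feature names, requirement texts, and product names are invented), a feature model can be reduced to a feature set per product, with each shared requirement tagged by the features it needs; instantiating a product specification is then a filter over the shared specification.

```python
# invented feature configurations for two product instances
products = {
    "radar_a": {"base", "tracking"},
    "radar_b": {"base", "tracking", "export_ctrl"},
}

# shared natural-language requirements, each tagged with the features it needs
requirements = [
    ("REQ-1", "The system shall start within 30 s.", {"base"}),
    ("REQ-2", "The system shall track up to 100 targets.", {"tracking"}),
    ("REQ-3", "The system shall log all export events.", {"export_ctrl"}),
]

def instantiate(product):
    """Select the requirements whose feature tags the product covers."""
    selected = products[product]
    return [rid for rid, _text, feats in requirements if feats <= selected]

print(instantiate("radar_a"))   # ['REQ-1', 'REQ-2']
print(instantiate("radar_b"))   # ['REQ-1', 'REQ-2', 'REQ-3']
```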
935.
When overloaded, a Web server initially shows a severe degradation of goodput, which eventually settles as the load increases further. Traditional performance models have failed to capture this behavior. In this paper, we propose an analytical model, a two-stage layered queuing model of the Web server, that is able to reproduce this behavior. We do so by explicitly modelling the overhead processing, the user abandonment and retry behavior, and the contention for resources, for both the FIFO and LIFO queuing disciplines. We show that LIFO provides better goodput in most overload situations. We compare our model predictions with experimental results from a test bed and find that they match the measurements well.
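The paper's model is analytical, but the FIFO-versus-LIFO claim can be probed with a toy discrete-event simulation (not the paper's model; all parameters are illustrative). A single server sees Poisson arrivals and exponential service times; the server cannot tell that a user has abandoned, so time spent on a request whose user waited past their patience is wasted, and only timely completions count toward goodput.

```python
import random

def simulate(lam=2.0, mu=1.0, patience=4.0, lifo=False, n_jobs=200_000, seed=7):
    """Goodput (timely completions per unit time) of one overloaded server."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(lam)                # Poisson arrival process
        arrivals.append(t)
    queue, t_free, good, i = [], 0.0, 0, 0
    while i < len(arrivals) or queue:
        while i < len(arrivals) and arrivals[i] <= t_free:
            queue.append(arrivals[i])            # arrived while server was busy
            i += 1
        if not queue:                            # idle: jump to the next arrival
            queue.append(arrivals[i])
            t_free = arrivals[i]
            i += 1
        a = queue.pop() if lifo else queue.pop(0)
        wait = t_free - a
        t_free += rng.expovariate(mu)            # service time is spent either way
        if wait <= patience:                     # useful only if the user stayed
            good += 1
    return good / t_free

print("FIFO goodput:", round(simulate(lifo=False), 3))
print("LIFO goodput:", round(simulate(lifo=True), 3))
```

With lam = 2 and mu = 1 the system is overloaded; FIFO keeps serving stale requests whose users have left, while LIFO serves fresh ones, so LIFO's goodput comes out markedly higher, in line with the paper's qualitative claim.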
936.
Aspect-oriented modeling (AOM) allows software designers to describe features that address pervasive concerns separately, as aspects, and to systematically incorporate these features into a design model using model composition techniques. The goal of this paper is to analyze the performance effects of different security features that may be represented as aspect models. This is part of a larger research effort to integrate methodologies and tools for the analysis of security and performance properties early in the software development process. In this paper, we describe an extension to the AOM approach that provides support for performance analysis. We use the performance analysis techniques developed previously in the PUMA project, which take as input UML models annotated with the standard UML Profile for Schedulability, Performance and Time (SPT), and transform them first into a Core Scenario Model (CSM) and then into different performance models. The composition of the aspects with the primary (base) model is performed at the CSM level. A new formal definition of CSM properties and operations is described as a foundation for scenario-based weaving. The proposed approach is illustrated with an example that uses two standards, TPC-W and SSL.
937.
The nitrogen dioxide (NO2) sensing capability of polypyrrole (PPy) was enhanced dramatically after functionalization with iron(III) phthalocyanine-4,4′,4″,4‴-tetrasulfonic acid monosodium salt (FePcTSA). The incorporation of the phthalocyanine was confirmed by characterization techniques such as UV–vis spectroscopy, FTIR, GFAAS, and EDAX. The resistance of the functionalized PPy decreased spontaneously during exposure to NO2 gas at room temperature. This material exhibited excellent stability, reversibility, and reproducibility. The lowest response time (t50) obtained is 47 s, with a highest response factor (ΔR/R0 × 100) of 50.25.
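As a small arithmetic aside, the response factor quoted above is simply the relative resistance change expressed in percent; the resistance values below are hypothetical, chosen only to reproduce the reported figure of 50.25.

```python
# hypothetical resistances (ohms): baseline R0 and value after NO2 exposure
R0, R = 1000.0, 497.5
response_factor = abs(R - R0) / R0 * 100   # ΔR/R0 × 100
print(f"{response_factor:.2f}")            # 50.25
```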
938.
Most methods for foreground region detection in videos are challenged by the presence of quasi-stationary backgrounds: flickering monitors, waving tree branches, moving water surfaces, or rain. Additional difficulties are caused by camera shake or by the presence of moving objects in every image. The contribution of this paper is a scene-independent and non-parametric modeling technique that covers most of the above scenarios. First, an adaptive statistical method, called adaptive kernel density estimation (AKDE), is proposed as a baseline system that addresses the scene-dependence issue. After investigating its performance, we introduce a novel general statistical technique, called recursive modeling (RM), which overcomes the weaknesses of the AKDE in modeling slow changes in the background. The performance of the RM is evaluated asymptotically and compared with the baseline system (AKDE). A wide range of quantitative and qualitative experiments is performed to compare the proposed RM with the baseline system and existing algorithms. Finally, a comparison of various background modeling systems is presented, along with a discussion of the suitability of each technique for different scenarios.
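Neither AKDE nor RM fits in a snippet, but the recursive flavour of background modeling can be illustrated with a simplified per-pixel running-average model in NumPy (a stand-in, not the paper's actual update rule); the alpha and threshold values are invented.

```python
import numpy as np

def update(background, frame, alpha=0.02, threshold=30.0):
    """One recursive step of a per-pixel background model (simplified)."""
    frame = frame.astype(np.float64)
    foreground = np.abs(frame - background) > threshold   # boolean mask
    background = (1.0 - alpha) * background + alpha * frame
    return background, foreground

# usage: seed with the first frame, then fold in each new frame
# bg = first_frame.astype(np.float64)
# for frame in video_frames:
#     bg, fg = update(bg, frame)
```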
939.
940.
More than half the literature on software effort estimation (SEE) focuses on comparisons of new estimation methods. Surprisingly, there are no studies comparing the latest state-of-the-art methods with decades-old approaches. Accordingly, this paper takes five steps to check whether new SEE methods generate better estimates than older methods. First, we collect effort estimation methods ranging from "classical" COCOMO (parametric estimation over a pre-determined set of attributes) to "modern" ones (reasoning by analogy using spectral-based clustering plus instance and feature selection, and a recent "baseline method" proposed in ACM Transactions on Software Engineering). Second, we catalog the list of objections that led to the development of post-COCOMO estimation methods. Third, we characterize each of those objections as a comparison between newer and older estimation methods. Fourth, we run those comparison experiments using four COCOMO-style data sets (from 1991, 2000, 2005, and 2010). Fifth, we compare the performance of the different estimators using a Scott-Knott procedure with (i) the A12 effect size to rule out "small" differences and (ii) a 99% confidence bootstrap procedure to check for statistically distinct groupings of treatments. The major negative result of this paper is that, for the COCOMO data sets, nothing we studied did any better than Boehm's original procedure. Hence, we conclude that when COCOMO-style attributes are available, we strongly recommend (i) using that data and (ii) using COCOMO to generate predictions. We say this because the experiments of this paper show that, at least for effort estimation, how data is collected is more important than which learner is applied to that data.
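The decades-old baseline the paper ends up recommending is compact enough to state directly. Below is a sketch of the basic COCOMO organic-mode effort equation, with an optional effort-adjustment factor in the style of intermediate COCOMO; the 32 KLOC input is invented.

```python
def cocomo_effort(kloc, a=2.4, b=1.05, eaf=1.0):
    """Effort in person-months: a * KLOC^b * EAF.
    a=2.4, b=1.05 are the published basic-COCOMO organic-mode coefficients;
    intermediate COCOMO uses different coefficients plus cost-driver multipliers."""
    return a * (kloc ** b) * eaf

print(round(cocomo_effort(32), 1))   # 91.3 person-months for a 32 KLOC project
```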