171.
Recently, a chaos-based image encryption scheme called RCES (also known as RSES) was proposed. This paper analyses the security of RCES and shows that it is insecure against known/chosen-plaintext attacks: only one or two known/chosen plain-images are needed for a successful attack. In addition, the security of RCES against brute-force attack has been overestimated. Both theoretical and experimental analyses are given to demonstrate the performance of the proposed known/chosen-plaintext attacks. The insecurity of RCES stems from its particular design, which makes it a typical example of an insecure image encryption scheme. A number of lessons are drawn from the reported cryptanalysis of RCES, suggesting some common principles for ensuring a high level of security in an image encryption scheme.
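The underlying weakness generalises beyond RCES. As a minimal illustrative sketch (not the paper's actual attack, whose details depend on RCES's internals; the toy XOR cipher and array shapes below are assumptions), consider any scheme that masks pixels with a keystream that does not depend on the plain-image: a single known plaintext/ciphertext pair hands the attacker an equivalent key.

```python
import numpy as np

def encrypt(plain: np.ndarray, keystream: np.ndarray) -> np.ndarray:
    """Toy cipher: XOR each pixel with a fixed, image-independent keystream.
    (Illustrative only -- RCES itself is more involved than a plain XOR.)"""
    return plain ^ keystream

rng = np.random.default_rng(0)
keystream = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # the secret
known_plain = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
known_cipher = encrypt(known_plain, keystream)

# One known (plain, cipher) pair reveals an equivalent key, with no need to
# recover the chaotic seed that generated the keystream.
recovered = known_plain ^ known_cipher
assert np.array_equal(recovered, keystream)

# Every other image encrypted under the same key is now broken.
other_plain = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
other_cipher = encrypt(other_plain, keystream)
assert np.array_equal(other_cipher ^ recovered, other_plain)
```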
172.
Web proxy caches are used to reduce the strain that contemporary web traffic places on web servers and network bandwidth providers. In this research, a novel approach to web proxy cache replacement that uses neural networks for replacement decisions is developed and analyzed. Neural networks are trained to classify cacheable objects from real-world data sets using information known to be important in web proxy caching, such as frequency and recency. Correct classification ratios between 0.85 and 0.88 are obtained both for data used for training and for data not used for training. Our approach is compared with Least Recently Used (LRU), Least Frequently Used (LFU) and the optimal case, which always rates an object by its number of future requests. Performance is evaluated in simulation for various neural network structures and cache conditions. The final neural networks achieve hit rates that are 86.60% of the optimal in the worst case and 100% of the optimal in the best case. Byte-hit rates are 93.36% of the optimal in the worst case and 99.92% of the optimal in the best case. We examine the input-to-output mappings of individual neural networks and analyze the resulting caching strategy with respect to specific cache conditions.
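A minimal sketch of the idea, assuming scikit-learn and synthetic data; the feature set (recency, frequency, size), the re-reference label, and the network topology below are illustrative assumptions, not the study's actual traces or configuration:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical features per cacheable object: recency (time since last
# request), frequency (request count) and object size.
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.exponential(60.0, 5000),    # recency in seconds
    rng.poisson(3.0, 5000),         # frequency
    rng.lognormal(8.0, 2.0, 5000),  # size in bytes
])
# Stand-in label: 1 if the object is requested again within some horizon.
y = ((X[:, 1] > 2) & (X[:, 0] < 90)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=1)
net.fit(X_tr, y_tr)

# On eviction, drop the cached object with the lowest predicted
# probability of being re-referenced.
scores = net.predict_proba(X_te)[:, 1]
victim = int(np.argmin(scores))
print(f"test accuracy: {net.score(X_te, y_te):.2f}, evict object {victim}")
```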
175.
A wireless sensor network (WSN) is composed of tens or hundreds of spatially distributed autonomous nodes, called sensors, which are devices used to collect data from the environment related to the detection or measurement of physical phenomena. A WSN consists of groups of sensors, where each group is responsible for providing information about one or more physical phenomena (e.g., a group collecting temperature data). Sensors are limited in power, computational capacity, and memory. Therefore, a query engine and query operators for processing queries in WSNs should be able to handle resource limitations such as memory and battery life. Adaptability has been explored as an approach to dealing with these conditions: adaptive query operators (algorithms) can adjust their behavior in response to specific events that take place during data processing. In this paper, we propose an adaptive in-network aggregation operator for query processing in sensor nodes of a WSN, called ADAGA (ADaptive AGgregation Algorithm for sensor networks). ADAGA adapts its behavior according to memory and energy usage by dynamically adjusting its data-collection and data-sending time intervals. ADAGA can correctly aggregate data in WSNs with packet replication. Moreover, ADAGA can predict the values of non-performed detections by analyzing the values already collected, and is thus able to produce results as close as possible to the true results (those obtained when no resource constraint is faced). The results obtained through experiments demonstrate the efficiency of ADAGA.
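A hedged sketch of the adaptation mechanism follows; the class name, thresholds, scaling factors, and AVG aggregate are illustrative assumptions, not ADAGA's published policy:

```python
class AdaptiveAggregator:
    """Sketch of an in-network aggregation operator in the spirit of ADAGA:
    the collection and sending windows stretch or shrink with resource usage."""

    def __init__(self, collect_s=1.0, send_s=10.0):
        self.collect_s = collect_s  # seconds between sensor readings
        self.send_s = send_s        # seconds between transmissions
        self.buffer = []

    def collect(self, value):
        self.buffer.append(value)

    def send(self):
        if not self.buffer:
            return None
        aggregate = sum(self.buffer) / len(self.buffer)  # e.g. an AVG query
        self.buffer.clear()
        return aggregate

    def adapt(self, memory_used, battery_level):
        # Memory pressure: sample less often and flush the buffer sooner.
        if memory_used > 0.8:
            self.collect_s *= 1.5
            self.send_s *= 0.5
        # Energy pressure: the radio dominates cost, so transmit less often.
        if battery_level < 0.2:
            self.send_s *= 2.0

op = AdaptiveAggregator()
for reading in (21.5, 21.7, 22.0):
    op.collect(reading)
op.adapt(memory_used=0.9, battery_level=0.15)
print(op.send(), op.collect_s, op.send_s)
```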
176.
Adaptive random testing (ART) has recently been proposed to enhance the failure-detection capability of random testing. In ART, test cases are not only randomly generated, but also evenly spread over the input domain. Various ART algorithms have been developed to evenly spread test cases in different ways. Previous studies have shown that some ART algorithms prefer to select test cases from the edge of the input domain rather than from the centre; that is, inputs do not have an equal chance of being selected as test cases. Since we do not know where the failure-causing inputs are prior to testing, it is undesirable for inputs to have different chances of being selected as test cases. Therefore, in this paper, we investigate how to enhance some ART algorithms by offsetting the edge preference, and propose a new family of ART algorithms. A series of simulations has been conducted, and the results show that these new algorithms not only select test cases more evenly, but also have better failure-detection capabilities.
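For context, the classic fixed-size-candidate-set (FSCS) ART algorithm that exhibits this edge preference can be sketched as follows; this is the well-known baseline scheme, not one of the paper's new algorithms:

```python
import random

def fscs_art(n_tests, k=10, lo=0.0, hi=1.0, seed=0):
    """Fixed-size-candidate-set ART on a 1-D input domain."""
    rnd = random.Random(seed)
    executed = [rnd.uniform(lo, hi)]  # the first test case is purely random
    while len(executed) < n_tests:
        candidates = [rnd.uniform(lo, hi) for _ in range(k)]
        # Pick the candidate farthest from its nearest executed test case,
        # which spreads test cases evenly over the domain.
        best = max(candidates,
                   key=lambda c: min(abs(c - e) for e in executed))
        executed.append(best)
    return executed

print(fscs_art(5))
```

The max-min selection spreads test cases out, but near a boundary a candidate has executed test cases on only one side, so its nearest-neighbour distance tends to be larger; this is the edge bias the paper sets out to offset.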
177.
Many empirical studies have found that software metrics can predict class error proneness and that the prediction can be used to accurately group error-prone classes. Recent empirical studies have used open-source systems; these studies, however, focused on the relationship between software metrics and class error proneness during the development phase of software projects. Whether software metrics can still predict class error proneness during a system's post-release evolution remains an open question. This study examined three releases of the Eclipse project and found that although some metrics can still predict class error proneness in three error-severity categories, the accuracy of the prediction decreased from release to release. Furthermore, we found that the prediction cannot be used to build a metrics model that identifies error-prone classes with acceptable accuracy. These findings suggest that, as a system evolves, using commonly used metrics to identify which classes are more prone to errors becomes increasingly difficult, and alternative methods (beyond metric-prediction models) should be sought to locate error-prone classes if high accuracy is required.
178.
The transition from Java 1.4 to Java 1.5 gave the programmer more flexibility through the inclusion of several new language constructs, such as parameterized types. This transition is expected to increase the number of class clusters exhibiting different combinations of class characteristics. In this paper we investigate how the number and distribution of clusters are expected to change during this transition. We present the results of an empirical study where we analyzed applications written in both Java 1.4 and 1.5. In addition, we show how the variability of the combinations of class characteristics may affect the testing of class members.
179.
Predicting defect-prone software modules using support vector machines
Effective prediction of defect-prone software modules can enable software developers to focus quality assurance activities and to allocate effort and resources more efficiently. Support vector machines (SVM) have been successfully applied to both classification and regression problems in many applications. This paper evaluates the capability of SVM to predict defect-prone software modules and compares its prediction performance against eight statistical and machine-learning models in the context of four NASA datasets. The results indicate that the prediction performance of SVM is generally better than, or at least competitive with, that of the compared models.
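A minimal sketch of this kind of experiment, assuming scikit-learn; the data below is a synthetic stand-in for the NASA datasets, and the metric names and SVM hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Rows are modules, columns are static code metrics (e.g. LOC, cyclomatic
# complexity, Halstead volume); label is 1 for defect-prone modules.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3)) * [200, 10, 1000] + [300, 12, 1500]
y = (X[:, 1] + 0.01 * X[:, 0] + rng.normal(0, 3, 500) > 17).astype(int)

# Feature scaling matters for RBF-kernel SVMs; a pipeline keeps the scaler
# fitted inside each cross-validation fold, avoiding leakage.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(f"mean CV accuracy: {cross_val_score(model, X, y, cv=5).mean():.2f}")
```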
180.
Parametric software cost estimation models are based on mathematical relations, obtained from studying databases of historical software projects, that are intended to estimate the effort and time required to develop a software product. Those databases often integrate data coming from projects of a heterogeneous nature, which makes it difficult to obtain a single, reasonably reliable parametric model across the range of diverging project sizes and characteristics. A solution proposed elsewhere was the use of segmented models, in which several models are combined into a single one and contribute to the estimate depending on the concrete characteristics of the input. However, a second problem arises with segmented models, since the membership of concrete projects in segments or clusters is subject to a degree of fuzziness, i.e. a given project can be considered to belong to several segments to different degrees. This paper reports the first exploration of a possible solution to both problems together, using a segmented model based on fuzzy clusters of the project space. Fuzzy clustering makes it possible to obtain a different mathematical model for each cluster and allows the items of a project database to contribute to more than one cluster, while preserving constant-time execution of the estimation process. The results of evaluating a concrete model on the ISBSG 8 project database are reported, yielding better goodness-of-fit figures than its crisp counterpart.
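A rough sketch of the approach, with Gaussian-mixture responsibilities standing in for the paper's fuzzy-clustering membership degrees and synthetic size/effort data in place of ISBSG 8; the two-segment setup and per-cluster linear models are assumptions for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression

# Synthetic projects: size in function points -> development effort.
rng = np.random.default_rng(3)
size = np.concatenate([rng.uniform(50, 300, 100), rng.uniform(300, 2000, 100)])
effort = 8 * size**0.9 + rng.normal(0, 200, 200)

X = size.reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=3).fit(X)
members = gmm.predict_proba(X)  # soft degrees of cluster membership

# One local parametric model per cluster, fitted with membership weights,
# so each project contributes to every cluster it partially belongs to.
models = [LinearRegression().fit(X, effort, sample_weight=members[:, c])
          for c in range(2)]

def estimate(project_size):
    x = np.array([[project_size]])
    w = gmm.predict_proba(x)[0]
    # Each segment contributes in proportion to the project's membership.
    return sum(w[c] * models[c].predict(x)[0] for c in range(2))

print(round(estimate(500.0)))
```

Since estimation only consults the fixed cluster memberships and the per-cluster models, the prediction step stays constant-time regardless of the size of the project database, matching the property noted in the abstract.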