61.
This paper presents a statistical approach to estimating the performance of a superscalar processor. Traditional trace-driven simulators can take a large amount of time to conduct a performance evaluation of a machine, especially as the number of instructions increases. The result of this type of simulation is also typically tied to the particular trace that was run: elements such as dependencies, delays, and stalls are all a direct result of that trace and can differ from trace to trace. This paper describes a model designed to separate simulation results from any specific trace. Rather than running a trace-driven simulation, a statistical model, specifically a Poisson distribution, is employed to predict how these types of delay affect performance. Through the use of this statistical model, a performance evaluation can be conducted using a general code model with specified stall rates rather than a particular code trace. This model allows simulations to quickly run tens of millions of instructions and evaluate the performance of a particular micro-architecture, while at the same time allowing the flexibility to change the structure of the architecture.
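The abstract does not spell out the model's mathematics, so the following is only a rough sketch of the general idea, with hypothetical parameters (block size, mean stall rate, stall penalty): instead of replaying a trace, stall events per block of instructions are drawn from a Poisson distribution and accumulated into an estimated CPI.

```java
import java.util.Random;

/** Minimal sketch (not the paper's model): estimate CPI of a superscalar core
 *  by drawing stall events from a Poisson distribution rather than a trace. */
public class PoissonStallModel {
    private static final Random RNG = new Random(42);

    /** Knuth's method: sample a Poisson-distributed count with mean lambda. */
    static int samplePoisson(double lambda) {
        double l = Math.exp(-lambda), p = 1.0;
        int k = 0;
        do { k++; p *= RNG.nextDouble(); } while (p > l);
        return k - 1;
    }

    public static void main(String[] args) {
        long instructions   = 10_000_000L;   // hypothetical run length
        int  issueWidth     = 4;             // instructions issued per cycle in the ideal case
        int  blockSize      = 100;           // instructions per modelled block
        double stallsPerBlock = 1.5;         // assumed mean stall events per block
        double cyclesPerStall = 3.0;         // assumed average stall penalty

        double cycles = 0;
        for (long done = 0; done < instructions; done += blockSize) {
            cycles += (double) blockSize / issueWidth;                 // ideal issue time
            cycles += samplePoisson(stallsPerBlock) * cyclesPerStall;  // statistical stalls
        }
        System.out.printf("Estimated CPI = %.3f%n", cycles / instructions);
    }
}
```

Varying the assumed stall rate then plays the role that running different traces would play in a conventional simulator.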
62.
Recently, a chaos-based image encryption scheme called RCES (also called RSES) was proposed. This paper analyses the security of RCES and points out that it is insecure against known/chosen-plaintext attacks: only one or two known/chosen plain-images are required to mount a successful attack. In addition, the security of RCES against brute-force attack was overestimated. Both theoretical and experimental analyses are given to show the performance of the suggested known/chosen-plaintext attacks. The insecurity of RCES is due to its special design, which makes it a typical example of an insecure image encryption scheme. A number of lessons are drawn from the reported cryptanalysis of RCES, and some common principles are suggested for ensuring a high level of security in an image encryption scheme.
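The attack steps are specific to RCES and are not reproduced here. As a generic illustration of why one known plain-image can suffice against a scheme whose masking sequence does not depend on the plaintext, the hypothetical sketch below recovers the mask by XOR-ing a known plaintext/ciphertext pair and reuses it on another ciphertext.

```java
/** Minimal sketch of a generic known-plaintext attack on an image cipher that
 *  masks pixels with a plaintext-independent keystream (not RCES-specific). */
public class KnownPlaintextAttack {
    /** Recover the keystream from one known (plain, cipher) image pair. */
    static int[] recoverKeystream(int[] knownPlain, int[] knownCipher) {
        int[] keystream = new int[knownPlain.length];
        for (int i = 0; i < knownPlain.length; i++) {
            keystream[i] = knownPlain[i] ^ knownCipher[i];   // mask = P xor C
        }
        return keystream;
    }

    /** XOR masking is its own inverse, so this both encrypts and decrypts. */
    static int[] xorMask(int[] pixels, int[] keystream) {
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = pixels[i] ^ keystream[i];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] keystream    = {0x3A, 0x7F, 0x12, 0xC4};          // secret mask
        int[] knownPlain   = {0x10, 0x20, 0x30, 0x40};
        int[] knownCipher  = xorMask(knownPlain, keystream);    // observed by the attacker
        int[] targetPlain  = {0x55, 0x66, 0x77, 0x88};
        int[] targetCipher = xorMask(targetPlain, keystream);   // another intercepted image

        int[] recovered = recoverKeystream(knownPlain, knownCipher);
        int[] broken    = xorMask(targetCipher, recovered);     // equals targetPlain
        System.out.println(java.util.Arrays.toString(broken));
    }
}
```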
63.
64.
A wireless sensor network (WSN) is composed of tens or hundreds of spatially distributed autonomous nodes, called sensors. Sensors are devices used to collect data from the environment related to the detection or measurement of physical phenomena. A WSN typically consists of groups of sensors, where each group is responsible for providing information about one or more physical phenomena (e.g., a group collecting temperature data). Sensors are limited in power, computational capacity, and memory. Therefore, a query engine and query operators for processing queries in WSNs should be able to handle resource limitations such as memory and battery life. Adaptability has been explored as an approach for dealing with these conditions: adaptive query operators (algorithms) can adjust their behavior in response to specific events that take place during data processing. In this paper, we propose an adaptive in-network aggregation operator for query processing in the sensor nodes of a WSN, called ADAGA (ADaptive AGgregation Algorithm for sensor networks). ADAGA adapts its behavior to memory and energy usage by dynamically adjusting the data-collection and data-sending time intervals. ADAGA can correctly aggregate data in WSNs with packet replication, and it can estimate readings that were not performed by analyzing the values that were collected, producing results as close as possible to the real results (those obtained when no resource constraint is faced). The results obtained through experiments demonstrate the efficiency of ADAGA.
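The abstract does not give ADAGA's concrete adaptation rules; the sketch below is a hypothetical illustration of the general mechanism, shrinking the data-sending interval when the buffer is nearly full and stretching both intervals when the battery runs low. All thresholds and names are assumptions.

```java
/** Minimal sketch of adaptive in-network aggregation in the spirit of ADAGA;
 *  the thresholds and adaptation rules are hypothetical, not the paper's. */
public class AdaptiveAggregator {
    private long collectIntervalMs = 1_000;   // how often a reading is taken
    private long sendIntervalMs    = 10_000;  // how often an aggregate is transmitted

    private double sum = 0;
    private int count = 0;

    /** Buffer one reading; the aggregate kept here is a running average. */
    public void collect(double reading) {
        sum += reading;
        count++;
    }

    /** Flush the aggregate (e.g. to the parent node) and reset the buffer. */
    public double flush() {
        double avg = count == 0 ? Double.NaN : sum / count;
        sum = 0;
        count = 0;
        return avg;
    }

    /** Adapt the two intervals to the node's current memory and energy state. */
    public void adapt(double memoryUsageRatio, double batteryLevelRatio) {
        if (memoryUsageRatio > 0.8) {
            sendIntervalMs = Math.max(1_000, sendIntervalMs / 2);   // drain the buffer sooner
        }
        if (batteryLevelRatio < 0.3) {
            collectIntervalMs *= 2;                                 // sample less often
            sendIntervalMs    *= 2;                                 // transmit less often
        }
    }

    public long getCollectIntervalMs() { return collectIntervalMs; }
    public long getSendIntervalMs()    { return sendIntervalMs; }
}
```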
65.
Adaptive random testing (ART) has recently been proposed to enhance the failure-detection capability of random testing. In ART, test cases are not only randomly generated, but also evenly spread over the input domain. Various ART algorithms have been developed to spread test cases evenly in different ways. Previous studies have shown that some ART algorithms prefer to select test cases from the edge of the input domain rather than from the centre; that is, inputs do not have an equal chance of being selected as test cases. Since we do not know where the failure-causing inputs are prior to testing, it is undesirable for inputs to have different chances of being selected. Therefore, in this paper, we investigate how to enhance some ART algorithms by offsetting this edge preference, and propose a new family of ART algorithms. A series of simulations has been conducted, showing that the new algorithms not only select test cases more evenly but also have better failure-detection capability.
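The edge-offsetting algorithms proposed in the paper are not reproduced here. For reference, the sketch below implements the classic fixed-size-candidate-set flavour of ART on a one-dimensional numeric domain, keeping from each batch of random candidates the one farthest from all previously generated test cases; the candidate-set size and domain bounds are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Minimal sketch of fixed-size-candidate-set ART on a 1-D numeric domain
 *  (classic ART shown for reference, not the paper's new algorithms). */
public class FscsArt {
    private static final Random RNG = new Random();
    private static final int CANDIDATES = 10;   // illustrative candidate-set size

    /** Generate nTests test inputs in [lo, hi], spread away from each other. */
    static List<Double> generate(int nTests, double lo, double hi) {
        List<Double> executed = new ArrayList<>();
        executed.add(lo + RNG.nextDouble() * (hi - lo));    // first test: pure random
        while (executed.size() < nTests) {
            double best = 0, bestDist = -1;
            for (int c = 0; c < CANDIDATES; c++) {
                double cand = lo + RNG.nextDouble() * (hi - lo);
                double dist = Double.MAX_VALUE;
                for (double t : executed) {                 // distance to nearest executed test
                    dist = Math.min(dist, Math.abs(cand - t));
                }
                if (dist > bestDist) { bestDist = dist; best = cand; }
            }
            executed.add(best);                             // keep the most isolated candidate
        }
        return executed;
    }

    public static void main(String[] args) {
        System.out.println(generate(10, 0.0, 1.0));
    }
}
```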
66.
Many empirical studies have found that software metrics can predict class error proneness and that the prediction can be used to accurately group error-prone classes. Recent empirical studies have used open source systems; these studies, however, focused on the relationship between software metrics and class error proneness during the development phase of software projects. Whether software metrics can still predict class error proneness in a system's post-release evolution remains an open question. This study examined three releases of the Eclipse project and found that although some metrics can still predict class error proneness in three error-severity categories, the accuracy of the prediction decreased from release to release. Furthermore, we found that the prediction cannot be used to build a metrics model that identifies error-prone classes with acceptable accuracy. These findings suggest that as a system evolves, using commonly adopted metrics to identify which classes are more prone to errors becomes increasingly difficult, and that alternative methods to metric-prediction models should be sought if high accuracy in locating error-prone classes is required.
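A "metrics model" of the kind evaluated here is typically a classifier fitted to historical fault data. The sketch below shows only the scoring side, with entirely hypothetical coefficients over a few common object-oriented metrics; a real model would learn these weights from earlier releases.

```java
/** Minimal sketch of scoring a class with a fitted metrics model; the metric
 *  names are common OO metrics and the coefficients are purely hypothetical. */
public class ErrorPronenessScorer {
    // Hypothetical coefficients that a real model would learn from past releases.
    private static final double INTERCEPT = -3.0;
    private static final double W_WMC = 0.08;    // weighted methods per class
    private static final double W_CBO = 0.15;    // coupling between objects
    private static final double W_LOC = 0.002;   // lines of code

    /** Logistic score in (0, 1): estimated probability of the class being error-prone. */
    static double score(double wmc, double cbo, double loc) {
        double z = INTERCEPT + W_WMC * wmc + W_CBO * cbo + W_LOC * loc;
        return 1.0 / (1.0 + Math.exp(-z));
    }

    public static void main(String[] args) {
        double p = score(25, 12, 800);
        System.out.printf("P(error-prone) = %.2f -> %s%n", p, p > 0.5 ? "flag" : "ok");
    }
}
```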
67.
The transition from Java 1.4 to Java 1.5 has provided the programmer with more flexibility due to the inclusion of several new language constructs, such as parameterized types. This transition is expected to increase the number of class clusters exhibiting different combinations of class characteristics. In this paper we investigate how the number and distribution of clusters are expected to change during this transition. We present the results of an empirical study in which we analyzed applications written in both Java 1.4 and Java 1.5. In addition, we show how the variability of the combinations of class characteristics may affect the testing of class members.
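Parameterized types are one of the Java 1.5 constructs referred to above. The snippet below is illustrative only: the same collection handled through a raw type in Java 1.4 style and through a parameterized type in Java 1.5 style, where the cast and its run-time risk disappear.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative only: the same code written against raw types (Java 1.4 style)
 *  and against the parameterized types introduced in Java 1.5. */
public class GenericsExample {
    public static void main(String[] args) {
        // Java 1.4 style: raw type, cast required, checked only at run time.
        List rawNames = new ArrayList();
        rawNames.add("Ada");
        String first14 = (String) rawNames.get(0);

        // Java 1.5 style: parameterized type, no cast, checked at compile time.
        List<String> names = new ArrayList<String>();
        names.add("Ada");
        String first15 = names.get(0);

        System.out.println(first14 + " / " + first15);
    }
}
```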
68.
Parametric software cost estimation models are based on mathematical relations, obtained from the study of historical software project databases, that are intended to estimate the effort and time required to develop a software product. Those databases often integrate data coming from projects of a heterogeneous nature, which makes it difficult to obtain a single, reasonably reliable parametric model for the full range of diverging project sizes and characteristics. A solution proposed elsewhere for that problem was the use of segmented models, in which several models combined into a single one contribute to the estimate depending on the concrete characteristics of the inputs. However, a second problem arises with the use of segmented models, since the membership of concrete projects in segments or clusters is subject to a degree of fuzziness, i.e. a given project can be considered to belong to several segments to different degrees. This paper reports a first exploration of a possible solution to both problems together, using a segmented model based on fuzzy clusters of the project space. The use of fuzzy clustering allows different mathematical models to be obtained for each cluster and also allows the items of a project database to contribute to more than one cluster, while preserving constant-time execution of the estimation process. The results of an evaluation of a concrete model using the ISBSG 8 project database are reported, yielding better fit than its crisp counterpart.
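The abstract does not state the exact combination rule. A common way of using fuzzy clusters for estimation, sketched below with hypothetical cluster centres and per-cluster linear models, is to weight each cluster's estimate by the project's membership degree in that cluster, so that a project near a cluster boundary draws on several models at once.

```java
/** Minimal sketch of a fuzzy-segmented effort estimate: each cluster has its own
 *  simple model, and the project's fuzzy memberships weight the cluster estimates.
 *  Centres, fuzzifier, and per-cluster coefficients are hypothetical. */
public class FuzzySegmentedEstimator {
    // Hypothetical cluster centres (project size in function points) and models.
    private static final double[] CENTRES = {100, 500, 2000};
    private static final double[] SLOPE   = {8.0, 6.5, 5.0};    // effort hours per FP
    private static final double[] OFFSET  = {50, 400, 2500};    // fixed effort per cluster
    private static final double M = 2.0;                        // fuzzifier exponent

    /** Fuzzy c-means style membership of a project of the given size in each cluster. */
    static double[] memberships(double size) {
        double[] u = new double[CENTRES.length];
        for (int i = 0; i < CENTRES.length; i++) {
            double di = Math.abs(size - CENTRES[i]) + 1e-9;   // epsilon avoids division by zero
            double sum = 0;
            for (int j = 0; j < CENTRES.length; j++) {
                double dj = Math.abs(size - CENTRES[j]) + 1e-9;
                sum += Math.pow(di / dj, 2.0 / (M - 1));
            }
            u[i] = 1.0 / sum;
        }
        return u;
    }

    /** Membership-weighted combination of the per-cluster estimates. */
    static double estimateEffort(double size) {
        double[] u = memberships(size);
        double effort = 0;
        for (int i = 0; i < CENTRES.length; i++) {
            effort += u[i] * (OFFSET[i] + SLOPE[i] * size);
        }
        return effort;
    }

    public static void main(String[] args) {
        System.out.printf("Estimated effort for 350 FP: %.0f hours%n", estimateEffort(350));
    }
}
```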
69.
The use of Source Code Author Profiles (SCAP) represents a new, highly accurate approach to source code authorship identification that is, unlike previous methods, language independent. While accuracy is clearly a crucial requirement of any author identification method, in cases of litigation regarding authorship, plagiarism, and so on, there is also a need to know why it is claimed that a piece of code was written by a particular author. What is it about that piece of code that suggests a particular author? Which features in the code make one author more likely than another? In this study, we describe a means of identifying the high-level features that contribute to source code authorship identification, using the SCAP method as a tool. A variety of features are considered for Java and Common Lisp, and the importance of each feature in determining authorship is measured through a sequence of experiments in which we remove one feature at a time. The results show that, for these programs, comments, layout features and package-related naming influence classification accuracy, whereas user-defined naming, an obviously programmer-related feature, does not appear to influence accuracy. A comparison is also made between the relative feature contributions in programs written in the two languages.
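SCAP itself is built on byte-level n-gram profiles of source code. The sketch below is a rough, simplified illustration of that kind of comparison, counting how many of the most frequent n-grams two code samples share; the n-gram length and profile size are illustrative, not the settings used in the study.

```java
import java.util.*;
import java.util.stream.Collectors;

/** Rough sketch of an SCAP-style comparison: build character n-gram profiles of
 *  two code samples and count how many of the most frequent n-grams they share.
 *  The n-gram length and profile size are illustrative assumptions. */
public class ScapSketch {
    /** Top-L most frequent character n-grams of the source text. */
    static Set<String> profile(String source, int n, int topL) {
        Map<String, Integer> freq = new HashMap<>();
        for (int i = 0; i + n <= source.length(); i++) {
            freq.merge(source.substring(i, i + n), 1, Integer::sum);
        }
        return freq.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(topL)
                .map(Map.Entry::getKey)
                .collect(Collectors.toSet());
    }

    /** Simplified profile intersection: number of shared top n-grams. */
    static int similarity(String a, String b, int n, int topL) {
        Set<String> shared = new HashSet<>(profile(a, n, topL));
        shared.retainAll(profile(b, n, topL));
        return shared.size();
    }

    public static void main(String[] args) {
        String sampleA = "for (int i = 0; i < n; i++) { sum += a[i]; } // loop";
        String sampleB = "for (int j = 0; j < m; j++) { total += b[j]; }";
        System.out.println("Shared top n-grams: " + similarity(sampleA, sampleB, 3, 20));
    }
}
```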
70.
Standardisation initiatives (ISO and IEC) try to address the problem of managing heterogeneous information scattered within organisations by formalising the knowledge related to products' technical data. While the product is the central object on which, along its lifecycle, all enterprise systems, whether inside a single enterprise or across cooperating networked enterprises, hold a specific view, it may be considered active insofar as it participates in decision making by providing knowledge about itself. This paper proposes a novel approach postulating that the product, represented by its technical data, may be considered interoperable per se with the many applications involved in manufacturing enterprises, insofar as it embeds knowledge about itself by storing all of its technical data, provided that these data conform to a common model. The aim of this approach is to formalise all the technical data and concepts contributing to the definition of a Product Ontology, embedded in the product itself and making it interoperable with applications while minimising the loss of semantics.