141.
Designing and evaluating an energy efficient Cloud   (total citations: 1, self-citations: 1, citations by others: 0)
Cloud infrastructures have recently become a center of attention. They can support dynamic operational infrastructures adapted to the requirements of distributed applications. As large-scale distributed systems reach enormous sizes in terms of equipment, energy consumption becomes one of the main challenges for large-scale integration. Like any other large-scale distributed system, Clouds face an increasing demand for energy. In this paper, we explore the energy issue by analyzing how much energy virtualized environments cost. We provide an energy-efficient framework dedicated to Cloud architectures and validate it through several experiments on a modern multicore platform. We show on a realistic example that our infrastructure could save 25% of the Cloud nodes' electrical consumption.
142.
Erosive runoff is a recurring problem and a source of sometimes deadly muddy floods in the Pays de Caux (France). The risk results from a conjunction of natural factors and human activity. Effective actions against runoff in agricultural watersheds are well known, yet they remain difficult to implement because they require co-operation between stakeholders. Local actors thus need tools that help them understand the collective consequences of their individual decisions and help initiate a process of negotiation between them. We decided to use a participatory approach called companion modelling (ComMod) and, in close collaboration with a first group of local stakeholders, to create a role-playing game (RPG) to facilitate negotiations on the future management of erosive runoff. This paper describes and discusses the development of the RPG and its use with other groups of local stakeholders within the framework of two game sessions organized by two different watershed management committees. During the joint construction step, stakeholders shared their viewpoints about the environment, agents, rules, and how to model runoff in preparation for the creation of the RPG. During the RPG sessions, two groups of eight players, including farmers, mayors and watershed advisors, were confronted with disastrous runoff in a fictitious agricultural watershed. Results showed that they managed to reduce runoff by 20-50% by engaging in a dialogue about grass strips, storage ponds and management of the intercrop period. However, further progress is still needed to better control runoff through better agricultural practices because, during the RPG sessions, the watershed advisors did not encourage farmers to adopt them. Given the complexity of the management problems, the results of jointly constructing the game and of the RPG sessions showed that modelling and simulation can be a very useful way of accompanying the collective learning process. This new way of working was welcomed by the participants, who expressed their interest in organizing further RPG sessions.
143.
This paper proposes a model of a three-phase electrical inverter with an LC output filter in delta connection, used in a renewable energy supply system. The concept of the inverse bond graph via bicausality is used for the control law design. The robustness of the control law is tested by connecting passive and active (induction machine) loads.
144.
Structural code coverage criteria have been studied since the early seventies; they are now well supported by commercial and open-source tools and are commonly embedded in several advanced industrial processes. Most industrial applications still refer to simple criteria, like statement and branch coverage, and consider complex criteria, like modified condition/decision coverage, only rarely, often driven by the requirements of certification agencies. The industrial value of structural criteria is limited by the difficulty of achieving high coverage, due to both the complexity of deriving test cases that execute specific uncovered elements and the presence of many infeasible elements in the code. In this paper, we propose a technique that both generates test cases that execute yet-uncovered branches and identifies infeasible branches that can be eliminated from the computation of branch coverage. In this way, we can increase branch coverage to closely approximate full coverage, thus improving its industrial value. The algorithm combines symbolic analysis, abstraction refinement, and a novel technique named coarsening to execute unexplored branches, identify infeasible ones, and mitigate the state space explosion problem. In the paper, we present the technique and illustrate its effectiveness through a set of experimental results obtained with a prototype implementation.
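The following is a minimal sketch, not the paper's algorithm, of the coverage adjustment the abstract describes: once some branches have been proven infeasible, they are excluded from the denominator of the branch coverage ratio. All sets and numbers below are hypothetical.

```python
# Minimal sketch (illustrative only): report branch coverage over the
# feasible branches, i.e. after removing branches proven infeasible.

def branch_coverage(covered, total, infeasible):
    """Coverage ratio computed over feasible branches only.

    covered    -- set of branch ids executed by the test suite
    total      -- set of all branch ids in the program
    infeasible -- set of branch ids proven unreachable
    """
    feasible = total - infeasible
    if not feasible:
        return 1.0
    return len(covered & feasible) / len(feasible)


# Hypothetical numbers for illustration only.
total = set(range(100))
infeasible = {97, 98, 99}
covered = set(range(90))
print(f"raw coverage:      {len(covered) / len(total):.2%}")                     # 90.00%
print(f"adjusted coverage: {branch_coverage(covered, total, infeasible):.2%}")   # ~92.78%
```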
145.
We explore one aspect of the structure of a codified legal system at the national level, using a new type of representation to understand the strong or weak dependencies between the various fields of law. In Part I of this study, we analyze the graph associated with the network in which each French legal code is a vertex and an edge connects two vertices when one code cites the other at least once. We show that this network is distinguished from many other real networks by its high density, giving it a particular structure that we call a concentrated world and that differentiates a national legal system (considered at the resolution of the code level) from the small-world graphs identified in many social networks. Our analysis then shows that a few communities (groups of highly wired vertices) of codes covering large domains of regulation structure the whole system. Indeed, we mainly find a central group of influential codes, a group of codes related to social issues, and a group of codes dealing with territories and natural resources. The study of this codified legal system is also of interest for the analysis of real networks. In particular, we examine the impact of the high density on the structural characteristics of the graph and on the ways communities are searched for. Finally, we provide an original visualization of this graph on a hemicycle-like plot, a representation based on a statistical reduction of dissimilarity measures between vertices. In Part II (a following paper) we show how considering the weights attributed to each edge in the network, in proportion to the number of citations between two vertices (codes), allows deepening the analysis of the French legal system.
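As an illustration of the kind of analysis the abstract mentions, the sketch below builds a small code-citation graph and computes its density and communities with networkx. It is not the authors' pipeline, and the edge list is hypothetical.

```python
# Illustrative sketch only: each legal code is a vertex, and an edge links
# two codes that cite one another at least once; we then look at density
# and community structure. The edge list is made up for the example.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

citations = [
    ("Code civil", "Code de commerce"),
    ("Code civil", "Code du travail"),
    ("Code du travail", "Code de la securite sociale"),
    ("Code de l'environnement", "Code rural"),
]

G = nx.Graph()
G.add_edges_from(citations)

print("density:", nx.density(G))
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```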
146.
This paper reexamines the construction of indicators of standards of living, focusing on the challenges raised by the subjectivity and the multidimensionality of living conditions. For that purpose, we apply Choquet integral-based multiattribute value theory to the elicitation, from rankings of multiattribute hypothetical societies, of individual preferences on different dimensions of living conditions. A simple application of the proposed approach highlights that preferences on multiattribute societies cannot, in general, be represented by an additive value model, as there exist complementarities and redundancies between different dimensions of standards of living. Our elicitation exercise also reveals a strong heterogeneity of individual preferences on hypothetical societies. Finally, we explore how elicited preferences can be used to cast a new light on the ranking of actual societies.
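To make the non-additive aggregation concrete, here is a minimal sketch of the discrete Choquet integral. The two dimensions ("health", "income") and the capacity values are hypothetical and chosen only to illustrate complementarity; this is not the paper's elicited model.

```python
# Minimal sketch of the discrete Choquet integral used in Choquet
# integral-based multiattribute value theory. The capacity is hypothetical:
# the joint weight of the two dimensions exceeds the sum of their individual
# weights, i.e. they are complementary.

def choquet(scores, capacity):
    """scores: dict criterion -> value in [0, 1].
    capacity: dict frozenset of criteria -> weight, with the empty set
    mapped to 0 and the full set mapped to 1."""
    items = sorted(scores, key=scores.get)      # criteria by increasing score
    value, previous = 0.0, 0.0
    for i, criterion in enumerate(items):
        upper = frozenset(items[i:])            # criteria scoring at least as high
        value += (scores[criterion] - previous) * capacity[upper]
        previous = scores[criterion]
    return value

capacity = {
    frozenset(): 0.0,
    frozenset({"health"}): 0.3,
    frozenset({"income"}): 0.3,
    frozenset({"health", "income"}): 1.0,       # complementary dimensions
}
print(choquet({"health": 0.9, "income": 0.2}, capacity))  # 0.2*1.0 + 0.7*0.3 = 0.41
```

An additive model with weights 0.3 and 0.3 would give 0.33 here; the capacity-based value of 0.41 reflects the extra value of having both dimensions jointly taken into account.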
147.
In this paper, we propose a new feature selection method called kernel F-score feature selection (KFFS), used as a pre-processing step in the classification of medical datasets. KFFS consists of two phases. In the first phase, the input features of the medical datasets are transformed to a kernel space by means of Linear (Lin) or Radial Basis Function (RBF) kernel functions; in this way, the data are mapped to a high-dimensional feature space. In the second phase, the F-score values of the kernel-space features are calculated using the F-score formula, and the mean of these F-scores is computed. If the F-score of a feature exceeds this mean, that feature is selected; otherwise, it is removed from the feature space. Thanks to the KFFS method, irrelevant or redundant features are removed from the high-dimensional input feature space. The purpose of using kernel functions is to transform non-linearly separable medical datasets into a linearly separable feature space. In this study, we used the heart disease dataset, the SPECT (Single Photon Emission Computed Tomography) images dataset, and the Escherichia coli Promoter Gene Sequence dataset from the UCI (University of California, Irvine) machine learning repository to test the performance of the KFFS method. As classification algorithms, Least Squares Support Vector Machine (LS-SVM) and Levenberg–Marquardt Artificial Neural Network were used. As shown by the obtained results, the proposed KFFS method produces very promising results compared to F-score feature selection.
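The sketch below is an illustrative reconstruction of the KFFS idea as described in the abstract, not the authors' code: map the samples to a kernel space (here an RBF kernel matrix), compute an F-score per kernel dimension, and keep the dimensions above the mean F-score. The dataset, the gamma value, and the helper names are placeholders.

```python
# Illustrative KFFS-style sketch: phase 1 maps the data to a kernel space,
# phase 2 keeps the kernel dimensions whose F-score exceeds the mean F-score.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def f_scores(X, y):
    """F-score of each column for a binary label vector y (1 = positive)."""
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / (den + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))              # placeholder for a medical dataset
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

K = rbf_kernel(X, gamma=0.1)                # phase 1: kernel-space representation
scores = f_scores(K, y)                     # phase 2: F-score of each kernel dimension
selected = scores > scores.mean()           # keep dimensions above the mean F-score
K_reduced = K[:, selected]
print(f"kept {selected.sum()} of {K.shape[1]} kernel dimensions")
```

The reduced matrix `K_reduced` would then be fed to a classifier such as LS-SVM in the workflow the abstract describes.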
148.
In this paper, we present a medical application of a new artificial immune system, the information gain based artificial immune recognition system (IG-AIRS), which minimizes the negative effect of taking all attributes into account when calculating the Euclidean distance in the shape-space representation used in many artificial immune systems. As medical data, the thyroid disease dataset was used in the performance analysis of our proposed system. The proposed system reached 95.90% classification accuracy with 10-fold cross-validation. This result indicates that IG-AIRS can be helpful in diagnosing thyroid function based on laboratory tests and would open the way to supporting the diagnosis of various illnesses using recent clinical examination data; work in this direction is in progress.
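As a rough illustration of the core idea, the sketch below weights each attribute in the Euclidean distance by an information measure so that uninformative attributes contribute less. It is not the IG-AIRS implementation: mutual information is used here as a stand-in for information gain, and the data are placeholders.

```python
# Illustrative sketch: an information-weighted Euclidean distance, in the
# spirit of damping uninformative attributes in shape-space affinity
# computations. Mutual information stands in for information gain here.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def weighted_distance(a, b, weights):
    """Euclidean distance where each attribute is scaled by its weight."""
    return np.sqrt(np.sum(weights * (a - b) ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                    # placeholder data, not the thyroid set
y = (X[:, 0] - X[:, 1] > 0).astype(int)

gains = mutual_info_classif(X, y, random_state=1)
weights = gains / gains.sum()                    # normalise the attribute weights

d_plain = np.linalg.norm(X[0] - X[1])
d_weighted = weighted_distance(X[0], X[1], weights)
print(f"plain distance: {d_plain:.3f}, weighted distance: {d_weighted:.3f}")
```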
149.
Most algorithms in probabilistic sampling-based path planning compute collision-free paths made of straight line segments lying in the configuration space. Due to the randomness of sampling, the paths make detours that need to be optimized. The contribution of this paper is a basic gradient-based algorithm that transforms a polygonal collision-free path into a shorter one. The method requires only collision checking, with no time-consuming obstacle distance computation or geometry simplification, and it constrains only the configuration variables that may cause a collision rather than entire configurations. Thus, parasitic motions that are not useful for solving the problem are reduced without any additional assumption. Experimental results include navigation and manipulation tasks, e.g., a manipulator arm filling boxes and a PR2 robot working in a kitchen environment. Comparisons with a random shortcut optimizer and a partial shortcut optimizer have also been studied.
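For context, here is a minimal sketch of the random shortcut optimizer of the kind the paper compares against, not the proposed gradient-based method: pick two random configurations on the polygonal path and cut out the portion between them whenever the straight segment joining them is collision-free. The collision checker and the 2-D example are placeholders.

```python
# Minimal sketch of a random shortcut path optimizer (a baseline, not the
# paper's gradient-based method). Only collision checking is required.
import random

def segment_is_free(a, b, collision_free, steps=20):
    """Check a straight segment by sampling intermediate configurations."""
    return all(
        collision_free([ai + (bi - ai) * t / steps for ai, bi in zip(a, b)])
        for t in range(steps + 1)
    )

def random_shortcut(path, collision_free, iterations=200):
    path = list(path)
    for _ in range(iterations):
        if len(path) < 3:
            break
        i, j = sorted(random.sample(range(len(path)), 2))
        if j - i > 1 and segment_is_free(path[i], path[j], collision_free):
            path = path[: i + 1] + path[j:]   # cut out the detour
    return path

# Hypothetical 2-D example: free space is everything outside a unit disc.
free = lambda q: q[0] ** 2 + q[1] ** 2 > 1.0
detour = [(-2.0, 0.0), (-1.5, 2.0), (0.0, 2.5), (1.5, 2.0), (2.0, 0.0)]
print(random_shortcut(detour, free))
```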
150.
The Linux kernel does not export a stable, well-defined kernel interface, complicating the development of kernel-level services, such as device drivers and file systems. While there does exist a set of functions that are exported to external modules, this set of functions frequently changes, and the functions have implicit, ill-documented preconditions. No specific debugging support is provided. We present Diagnosys, an approach to automatically constructing a debugging interface for the Linux kernel. First, a designated kernel maintainer uses Diagnosys to identify constraints on the use of the exported functions. Based on this information, developers of kernel services can then use Diagnosys to generate a debugging interface specialized to their code. When a service including this interface is tested, it records information about potential problems. This information is preserved following a kernel crash or hang. Our experiments show that the generated debugging interface provides useful log information and incurs a low performance penalty.