51.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources. As a result, prior research in this area has focused on proposing automated approaches to accurately identify duplicate reports. However, no studies have attempted to quantify the actual effort that is spent on identifying duplicate issue reports. In this paper, we empirically examine the effort needed to manually identify duplicate reports in four open source projects, namely Firefox, SeaMonkey, Bugzilla and Eclipse-Platform. Our results show that: (i) more than 50 % of the duplicate reports are identified within half a day, and most are identified without any discussion and with the involvement of very few people; (ii) a classification model built using a set of factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture developer awareness of the duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since the identification of a considerable proportion of duplicate reports (over 50 %) appears to be a relatively trivial task for developers. To better assist developers, research on duplicate issue reports should put greater emphasis on identifying the effort-consuming duplicates.
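As a rough illustration only (the field names, records and half-day threshold below are hypothetical, not taken from the study), one way to derive an "identification effort" label from issue-tracker data, before feeding factors into a classifier like the one the abstract describes, might look like this:

```python
# Sketch: labelling duplicate reports as quick or effort-consuming to identify.
# Field names, thresholds and the example records are illustrative stand-ins.
from datetime import datetime, timedelta

duplicates = [  # (reported_at, marked_duplicate_at, n_comments, n_people)
    (datetime(2015, 3, 1, 9, 0),  datetime(2015, 3, 1, 10, 30), 0, 1),
    (datetime(2015, 3, 2, 14, 0), datetime(2015, 3, 5, 11, 0),  7, 4),
    (datetime(2015, 3, 3, 8, 0),  datetime(2015, 3, 3, 9, 15),  1, 2),
]

HALF_DAY = timedelta(hours=12)

def effort_label(reported_at, marked_at, n_comments, n_people):
    """'fast' if identified within half a day with little discussion, else 'slow'."""
    quick = (marked_at - reported_at) <= HALF_DAY
    light = n_comments <= 1 and n_people <= 2
    return "fast" if quick and light else "slow"

for rec in duplicates:
    print(effort_label(*rec), "identified after", rec[1] - rec[0])
```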
52.
Reuse of software components, whether closed or open source, is considered one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, because reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Since this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially if more than one component needs to be integrated. This paper presents an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how these activities are performed, and then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.
53.
In this study, we aimed to discriminate between two groups of people. The database used in this study contains 20 patients with Parkinson's disease (PD) and 20 healthy people. Three types of sustained vowels (/a/, /o/ and /u/) were recorded from each participant, and the analyses were then performed on these voice samples. First, an initial feature vector was extracted from the time, frequency and cepstral domains. We then applied linear and nonlinear feature extraction techniques, namely principal component analysis (PCA) and nonlinear PCA, to reduce the number of parameters and select the most effective acoustic features for classification. A support vector machine with different kernels was used for classification. We obtained an accuracy of up to 87.50 % in discriminating between PD patients and healthy people.
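A minimal sketch of this kind of pipeline, assuming scikit-learn, a placeholder feature matrix and random labels; the paper's actual acoustic features, nonlinear PCA variant and kernel settings are not reproduced here:

```python
# Sketch: PCA-reduced acoustic features classified with SVMs using different kernels.
# X stands in for a matrix of time-, frequency- and cepstral-domain measures per
# voice sample; y marks 20 healthy and 20 PD speakers. Both are placeholders.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 20))          # placeholder for extracted acoustic features
y = np.repeat([0, 1], 20)              # 0 = healthy, 1 = Parkinson's disease

for reducer in (PCA(n_components=5), KernelPCA(n_components=5, kernel="rbf")):
    for kernel in ("linear", "rbf", "poly"):
        clf = make_pipeline(StandardScaler(), reducer, SVC(kernel=kernel))
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{reducer.__class__.__name__:10s} + SVM({kernel:6s}): accuracy = {acc:.3f}")
```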
54.
Standard genetic algorithms (SGAs) are investigated for optimising discrete-time proportional-integral-derivative (PID) controller parameters, via three tuning approaches, for a multivariable glass furnace process with loop interaction. Initially, SGAs are used to identify control-oriented models of the plant, which are subsequently used for controller optimisation. An individual tuning approach without loop interaction is considered first, to categorise the genetic operators and cost functions and to improve the search boundaries so as to attain the desired performance criteria. The second tuning approach optimises the controller parameters with loop interaction and individual cost functions. The third tuning approach utilises a modified cost function that includes the total effect of both controlled variables, glass temperature and excess oxygen. This modified cost function is shown to exhibit improved control robustness and disturbance rejection under loop interaction.
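For illustration, a toy sketch of GA-based PID tuning against an integral-of-squared-error (ISE) cost; the first-order plant, GA operators and parameter bounds below are stand-ins, not the multivariable glass furnace model or the exact operators used in the paper:

```python
# Toy sketch: genetic-algorithm tuning of discrete-time PID gains on a stand-in plant.
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 0.1, 200

def ise_cost(gains):
    """ISE for a unit step on the stand-in plant y[k+1] = 0.9*y[k] + 0.1*u[k]."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y = 0.9 * y + 0.1 * u
        cost += err * err * dt
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e9                     # penalise gain sets that destabilise the loop
    return cost

pop = rng.uniform(0.0, 5.0, size=(30, 3))  # 30 candidate (Kp, Ki, Kd) triples
for gen in range(50):
    fitness = np.array([ise_cost(p) for p in pop])
    parents = pop[np.argsort(fitness)[:10]]      # elitist truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        mix = rng.uniform(size=3)
        child = mix * a + (1 - mix) * b          # arithmetic crossover
        child += rng.normal(0.0, 0.1, size=3)    # Gaussian mutation
        children.append(np.clip(child, 0.0, 5.0))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmin([ise_cost(p) for p in pop])]
print("best (Kp, Ki, Kd):", np.round(best, 3), "ISE:", round(ise_cost(best), 4))
```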
55.
The Internet of Things (IoT) connects billions of devices in an Internet-like structure. Each device is encapsulated as a real-world service that provides functionality and exchanges information with other devices. This large-scale information exchange results in new interactions between things and people. Unlike traditional web services, the internet of services is highly dynamic and continuously changing, because devices constantly degrade, vanish and possibly reappear; this opens a new challenge in the process of resource discovery and selection. As the number of services in the discovery and selection process grows, so do the number of service consumers and the diversity of the quality of service (QoS) on offer. This growth on both sides leads to diversity in the demand and supply of services, which results in only partial matches between requirements and offers. This paper proposes an IoT service ranking and selection algorithm that considers multiple QoS requirements and allows partially matched services to be counted as candidates in the selection process. One application of IoT sensory data that attracts many researchers is transportation, especially emergency and accident services, which is used as a case study in this paper. Experimental results on real-world services show that the proposed method achieves a significant improvement in accuracy and performance in the selection process.
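A simplified sketch of multi-attribute QoS ranking that keeps partially matched services as candidates; the services, QoS attributes, thresholds and weights below are made up for illustration and do not reproduce the paper's algorithm:

```python
# Sketch: weighted QoS ranking where partial matches still score, rather than being discarded.

# Requested QoS: (threshold, higher_is_better) per attribute. Names are illustrative.
request = {
    "availability": (0.95, True),
    "response_ms":  (200,  False),
    "accuracy":     (0.90, True),
}
weights = {"availability": 0.4, "response_ms": 0.3, "accuracy": 0.3}

candidates = {
    "traffic_cam_17":  {"availability": 0.99, "response_ms": 120, "accuracy": 0.88},
    "roadside_unit_3": {"availability": 0.92, "response_ms": 300, "accuracy": 0.95},
    "crowd_reports":   {"availability": 0.97, "response_ms": 150, "accuracy": 0.93},
}

def match_degree(offered, required, higher_is_better):
    """1.0 for a full match, a value in (0, 1) for a partial match."""
    ratio = offered / required if higher_is_better else required / offered
    return min(ratio, 1.0)

def score(service_qos):
    return sum(weights[attr] * match_degree(service_qos[attr], req, better)
               for attr, (req, better) in request.items())

ranked = sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True)
for name, qos in ranked:
    print(f"{name:16s} score = {score(qos):.3f}")
```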
56.
This article draws on three case studies of drip irrigation adoption in Morocco to consider the water–energy–food (WEF) nexus concept from a bottom-up perspective. Findings indicate that small farmers' adoption of drip irrigation is conditional, that water and energy efficiency does not necessarily reduce overall consumption, and that adoption of drip irrigation (and policies supporting it) can create winners and losers. The article concludes that, although the WEF nexus concept may offer useful insights, its use in policy formulation should be tempered with caution. Technical options that appear beneficial at the conceptual level can have unintended consequences in practice, and policies focused on issues of scarcity and efficiency may exacerbate other dimensions of poverty and inequality.
57.
58.
59.
Ahmed A, Rehan M, Iqbal N. ISA Transactions, 2011, 50(2): 249–255
This paper proposes the design of an anti-windup compensator gain for the stability of actuator-input-constrained state-delay systems, using a constrained pole position of the closed loop. Based on delay-dependent Lyapunov-Krasovskii functionals and local sector conditions, a new LMI characterization is derived that ensures closed-loop asymptotic stability of constrained state-delay systems while accounting for an upper bound on the fixed state delay and the largest lower bound on the system's pole position in the formulation of the anti-windup gain. Moreover, at saturation, the method significantly nullifies the inherent slow dynamics present in the system. Comparative numerical examples show that the LMI formulation achieves stability with improved time-domain performance.
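For context, the generic building blocks such designs typically rest on can be written as follows; these are textbook forms, not the paper's exact functional or LMI conditions:

```latex
% Generic delay-dependent Lyapunov--Krasovskii functional (illustrative form only)
V(x_t) = x^{T}(t)Px(t)
       + \int_{t-h}^{t} x^{T}(s)Qx(s)\,ds
       + \int_{-h}^{0}\!\int_{t+\theta}^{t} \dot{x}^{T}(s)R\,\dot{x}(s)\,ds\,d\theta,
\qquad P, Q, R \succ 0.

% Generalized local sector condition on the dead-zone \psi(v) = v - \mathrm{sat}(v),
% valid for all states in the region where |v_i - w_i| \le \bar{u}_i:
\psi^{T}(v)\,T\,[\psi(v) - w] \le 0, \qquad T \succ 0 \ \text{diagonal}.
```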
60.
Bug fixing accounts for a large share of software maintenance resources. Generally, bugs are reported, fixed, verified and closed. In some cases, however, bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects, namely Eclipse, Apache and OpenOffice. We structure our study along four dimensions: (1) the work-habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug-report dimension (e.g., the component in which the bug was found), (3) the bug-fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). Using the aforementioned factors, we build decision trees that aim to predict re-opened bugs. We perform top-node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important factors related to whether or not a bug will be re-opened. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision of 52.1–78.6 % and a recall of 70.5–94.1 % when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary by project. The comment text is the most important factor for the Eclipse and OpenOffice projects, while the last status is the most important one for Apache. These factors should be closely examined in order to reduce the maintenance cost of re-opened bugs.
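A minimal sketch of the kind of decision-tree prediction described, assuming scikit-learn; the factor names and synthetic data below are illustrative stand-ins for the study's four dimensions, not its actual dataset:

```python
# Sketch: decision-tree prediction of re-opened bugs from factors spanning four dimensions.
# Features and data are synthetic; the target is wired to two factors just so the sketch runs.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(7)
n = 500
X = pd.DataFrame({
    "close_weekday":    rng.integers(0, 7, n),       # work-habits dimension
    "component_id":     rng.integers(0, 20, n),      # bug-report dimension
    "fix_time_days":    rng.exponential(10.0, n),    # bug-fix dimension
    "fixer_experience": rng.integers(1, 200, n),     # team dimension
    "comment_length":   rng.integers(0, 2000, n),    # crude proxy for comment-text features
})
y = ((X["comment_length"] > 1200) | (X["fix_time_days"] > 25)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7, stratify=y)
tree = DecisionTreeClassifier(max_depth=4, random_state=7).fit(X_tr, y_tr)
pred = tree.predict(X_te)

print("precision:", round(precision_score(y_te, pred), 3))
print("recall:   ", round(recall_score(y_te, pred), 3))
# Rough analogue of top-node analysis: which factor the tree splits on first / relies on most.
print("root split:", X.columns[tree.tree_.feature[0]])
print(dict(zip(X.columns, tree.feature_importances_.round(3))))
```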